
imu data hangs on Raspberry pi 4B #11089

Open
Yoki-pjx opened this issue Nov 11, 2022 · 39 comments

Comments

@Yoki-pjx

Yoki-pjx commented Nov 11, 2022

Required Info
Camera Model: D455
Firmware Version: 05.13.00.50
Operating System & Version: Linux raspberrypi 5.10.103-v8+ aarch64 GNU/Linux
Kernel Version (Linux Only): 5.10.103
Platform: Raspberry Pi
SDK Version: 2.50.0
Language: python
Segment:

Issue Description

Hi Marty @MartyG-RealSense,

I have the same problem as https://github.com/IntelRealSense/librealsense/issues/7391 and https://github.com/IntelRealSense/librealsense/issues/7979: the IMU data hangs after a couple of minutes, even though I have only one pipeline gathering IMU data.

Neither of those issues actually resolved the problem, so I would like to ask whether there are any solutions for this.

Also, I have another problem with setting the accel and gyro frequencies.
config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 63)
config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 200)
pipeline.start(config)
I get RuntimeError: Couldn't resolve requests from pipeline.start(config), BUT it works when I delete the frequencies:
config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f)
config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f)
pipeline.start(config)
Could you please provide some solutions?
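For what it's worth, the accel/gyro rates a connected camera actually supports can be queried from the device before configuring the pipeline, which makes this kind of Couldn't resolve requests error easier to diagnose. A minimal sketch using the pyrealsense2 stream-profile API; the import is guarded so the snippet simply returns an empty dict where the library or a camera is unavailable:

```python
# Sketch: query the accel/gyro frame rates the connected camera reports,
# so the values passed to enable_stream() can be validated up front.
# Guarded import: returns {} when pyrealsense2 or a camera is absent.
try:
    import pyrealsense2 as rs
except ImportError:
    rs = None

def supported_motion_rates():
    if rs is None:
        return {}
    rates = {}
    ctx = rs.context()
    for dev in ctx.query_devices():
        for sensor in dev.query_sensors():
            for profile in sensor.get_stream_profiles():
                if profile.stream_type() in (rs.stream.accel, rs.stream.gyro):
                    rates.setdefault(str(profile.stream_type()), set()).add(profile.fps())
    return rates

print(supported_motion_rates())
```

Passing only rates that appear in this listing to enable_stream() should avoid the resolution error.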

@MartyG-RealSense
Collaborator

Hi @Yoki-pjx Your accel and gyro frequency problem is unusual, as your frequency values for each are correct. Couldn't resolve requests means that the requested configuration could not be provided, either because of a typing mistake in the instruction or because the requested configuration was not supported by the camera at the time that it was requested. Neither of those possibilities seems to apply here.

63 accel and 200 gyro should be the default values applied by an IMU-equipped 400 Series camera when custom values are not defined, so leaving out the frequencies should be fine in this particular case.


In regard to the hanging of the program after a couple of minutes, may I ask which method you used to install the RealSense SDK on your Pi 4, please? Kernel 5.10 is not officially supported by the SDK, and whilst unsupported kernels can work, there may be unpredictable consequences in regard to stability. I would therefore recommend installing the SDK using the RSUSB backend method if you have not done so already. This method bypasses the Linux kernel and so is not dependent on Linux versions or kernel versions and does not require patching.

You could also try increasing the size of the swapfile on your Pi so that it has more 'virtual memory' to use when its real memory is used up.

https://pimylifeup.com/raspberry-pi-swap-file/

@Yoki-pjx
Author

Yoki-pjx commented Nov 11, 2022

Hi @MartyG-RealSense,

I have tried reinstalling the SDK by the RSUSB backend method as in https://github.com/IntelRealSense/librealsense/issues/6940#issuecomment-665713929. However, it doesn't work; both issues still exist. I get 'RuntimeError: Frame didn't arrive within 5000' after a couple of minutes and 'RuntimeError: Couldn't resolve requests' with pipeline.start(config) in Python.

Just to double-check with you, here is how I installed by the RSUSB backend method:

  1. Go to the librealsense root directory
    mkdir build && cd build
  2. Run the cmake configuration
    cmake ../ -DFORCE_RSUSB_BACKEND=true -DCMAKE_BUILD_TYPE=release -DBUILD_EXAMPLES=true -DBUILD_GRAPHICAL_EXAMPLES=true
  3. sudo make uninstall && make clean && make && sudo make install
  4. sudo ./scripts/setup_udev_rules.sh
  5. Install pyrealsense2
    cd ~/librealsense/build
    cmake .. -DBUILD_PYTHON_BINDINGS=bool:true -DPYTHON_EXECUTABLE=$(which python3) -DFORCE_RSUSB_BACKEND=true
  6. make -j1
  7. sudo make install
  8. Add the Python path: open ~/.zshrc with nano and append
    export PYTHONPATH=$PYTHONPATH:/usr/local/lib
  9. source ~/.zshrc
  10. sudo reboot

I have tried increasing the size of the swapfile to 2048 MB as well.

@Yoki-pjx
Author

Yoki-pjx commented Nov 11, 2022

I think I have found the cause of the low-frequency setting problem for the IMU. It seems the accel now only supports 100/200 rather than 63, according to the selection shown in realsense-viewer.
[screenshot: accel frequency options in realsense-viewer]

However, the hanging problem after a couple of minutes still exists.
It even happens in realsense-viewer without any other stream enabled.
[screenshot: realsense-viewer with the IMU stream hanging]

@MartyG-RealSense
Collaborator

Thanks very much for the updates. Your RSUSB installation looks fine to me.

It is unusual that your Viewer is showing the supported IMU frequencies as 100 / 200, as those modes are usually provided on the L515 camera model which also has an IMU. The camera is clearly detected by the Viewer as a D455 model though.

From my own cameras:

D455

[screenshot: D455 IMU frequency options]

L515

[screenshot: L515 IMU frequency options]


In regard to the error 'RuntimeError: Frame didn't arrive within 5000', this indicates that new frames have not been received from the camera for a period of 5 seconds, so the program has 'timed out', generating the error.

Does it still occur in the Viewer if you disable the Global Time Enabled option? Global Time is enabled by default on 400 Series cameras, so would likely be active on your Python script too.

@Yoki-pjx
Author

Yoki-pjx commented Nov 11, 2022

  1. Do you have any idea why I have the 100/200 frequencies for my accel? I have also double-checked on my Windows 10 desktop; it also shows 100/200 in realsense-viewer.
    BTW, I am using the v2.51.1 SDK now.

  2. I have tried the viewer without the Global Time Enabled option. The problem still seems to occur, but it appears to happen later. What is the code to disable the Global Time Enabled option in Python? I would like to try it in Python as well.

@MartyG-RealSense
Collaborator

I will consult with my Intel RealSense colleagues about your IMU settings on D455. Thanks very much for your patience!

@Yoki-pjx
Author

Okay, thank you Marty.

I have tried the viewer without the Global Time Enabled option. The problem still seems to occur, but it appears to happen later. What is the code to disable the Global Time Enabled option in Python? I would like to try it in Python as well.

@MartyG-RealSense
Collaborator

An example of Python code for disabling Global Time is at #9172 (comment)
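For readers who cannot follow the link, a hedged sketch of that approach is below. The option name rs.option.global_time_enabled is part of the pyrealsense2 API; the import is guarded so the snippet loads even where the library is absent, and actually applying the option requires a connected camera:

```python
# Sketch: disable Global Time on every sensor that reports support for the
# option. Guarded import so this file loads without pyrealsense2 installed.
try:
    import pyrealsense2 as rs
except ImportError:
    rs = None

def disable_global_time(pipeline_profile):
    # pipeline_profile is the object returned by pipeline.start(config)
    for sensor in pipeline_profile.get_device().query_sensors():
        if sensor.supports(rs.option.global_time_enabled):
            sensor.set_option(rs.option.global_time_enabled, 0)
```

Iterating over all sensors avoids having to know in advance whether the depth sensor or the motion module holds the option.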

@Yoki-pjx
Author

Thanks Marty.
I'm looking forward to your reply regarding the hanging problem and the frequency setting.
Have a nice weekend.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Nov 11, 2022

I checked the data sheet document for the 400 Series cameras.

https://dev.intelrealsense.com/docs/intel-realsense-d400-series-product-family-datasheet

The D455 model was originally equipped with the BMI055 IMU component with 63/250 frequencies but it later changed to the BMI085 IMU component which supports 100/200 frequencies. So your D455 hardware is okay and the time-out issue is likely due to either a software-related problem or with using accel / gyro on Pi 4.

[datasheet excerpts: BMI055 and BMI085 IMU specifications]

@Yoki-pjx
Author

Yoki-pjx commented Nov 13, 2022

I have tried disabling Global Time Enabled in Python, but I still get the hanging problem.

I also tried using imu_frames = imu_pipeline.poll_for_frames() instead of imu_frames = imu_pipeline.wait_for_frames().

No frames arrive in the variable imu_frames when it hangs. I believe it is the same cause as the Frame didn't arrive within 5000 error.

The result of print(imu_frames) is <pyrealsense2.frame NULL> rather than <pyrealsense2.frameset MOTION_XYZ32F MOTION_XYZ32F #11 @194751.329000> when it works.

So I am wondering: why is imu_frames NULL here?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Nov 13, 2022

Does your Python IMU script assign fixed index numbers to accel and gyro, such as 0 for accel and 1 for gyro? For example:

accel=accel_data(frames[0].as_motion_frame().get_motion_data())
gyro= gyro_data(frames[1].as_motion_frame().get_motion_data())

If it does assign fixed index numbers to accel and gyro then it is recommended that the streams are assigned dynamically - as described at #4018 (comment) and #4018 (comment) - otherwise the output of the streams can become reversed so that accel values are output for gyro and gyro values are output for accel.
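The dynamic pattern can be illustrated without a camera: the key point is that accel and gyro are retrieved by stream type with first_or_default() rather than by positional index. The classes below are stubs standing in for pyrealsense2 objects so the control flow can run anywhere; they are not part of the real API:

```python
# Dynamic stream lookup: fetch accel and gyro by stream type instead of by
# fixed index, so the two cannot be swapped. FakeFrame/FakeFrameset are
# stubs standing in for pyrealsense2 frames.
class FakeFrame:
    def __init__(self, stream, data):
        self.stream, self.data = stream, data

class FakeFrameset:
    def __init__(self, frames):
        self._frames = frames
    def first_or_default(self, stream):
        for f in self._frames:
            if f.stream == stream:
                return f
        return None

# In real code these keys would be rs.stream.accel / rs.stream.gyro;
# note the frames arrive gyro-first here, yet lookup is still correct.
frames = FakeFrameset([FakeFrame("gyro", (0.0, 0.1, 0.0)),
                       FakeFrame("accel", (0.0, -9.8, 0.0))])

accel = frames.first_or_default("accel")   # found by type, not position
gyro = frames.first_or_default("gyro")
print(accel.data, gyro.data)
```

With fixed indices, the gyro-first arrival order above would have silently swapped the two streams.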

@Yoki-pjx
Author

Yoki-pjx commented Nov 13, 2022

No, they are dynamically assigned, as in the following code:

while True:
    if enable_imu:
        imu_frames = imu_pipeline.poll_for_frames()
        print(imu_frames)

    if enable_imu and imu_frames:
        accel_frame = imu_frames.first_or_default(rs.stream.accel, rs.format.motion_xyz32f)
        gyro_frame = imu_frames.first_or_default(rs.stream.gyro, rs.format.motion_xyz32f)
        accel_sample = accel_frame.as_motion_frame().get_motion_data()
        gyro_sample = gyro_frame.as_motion_frame().get_motion_data()
        print("\tAccel = ", accel_sample.x, ",",  accel_sample.y, ",", accel_sample.z)
        print("\tGyro = ", gyro_sample.x, ",",  gyro_sample.y, ",", gyro_sample.z)

@MartyG-RealSense
Collaborator

poll_for_frames() could in itself contribute to performance problems, as it is recommended to manually control when the CPU is put to sleep and for how long when using this instruction. Otherwise, the CPU percentage utilization may rise towards 100% as described at #2422 (comment)

This link also describes how using try_wait_for_frames() causes a 'false' state to be returned and the program being allowed to continue to run if a timeout occurs instead of a program-exiting error being generated.
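One wrinkle worth noting: in the Python wrapper, try_wait_for_frames() returns a (bool, frameset) tuple rather than a frameset, so the result must be unpacked before frameset methods such as first_or_default() can be used. A sketch of the unpacking pattern, with a stub pipeline so it runs without a camera:

```python
# try_wait_for_frames() in the Python wrapper returns a (bool, frameset)
# tuple, so calling first_or_default() directly on its result raises
# AttributeError. FakePipeline is a stub standing in for rs.pipeline().
class FakePipeline:
    def try_wait_for_frames(self, timeout_ms=5000):
        return (False, None)   # simulate a timeout: no exception, just False

def read_frames_once(pipeline):
    ok, frames = pipeline.try_wait_for_frames(timeout_ms=5000)
    if not ok:
        return None            # timed out; the caller decides how to recover
    return frames

print(read_frames_once(FakePipeline()))
```

Because the timeout is signalled by the boolean rather than by an exception, the loop can keep running and count consecutive failures instead of crashing.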

Does the problem still occur if you use 200 accel and 400 gyro instead of 100 / 200?

@Yoki-pjx
Author

Yoki-pjx commented Nov 14, 2022

  • When I invoke try_wait_for_frames() as in the following code:
while True:
    if enable_imu:
        imu_frames = imu_pipeline.try_wait_for_frames()
        print(imu_frames)

    if enable_imu and imu_frames:
        accel_frame = imu_frames.first_or_default(rs.stream.accel, rs.format.motion_xyz32f)
        gyro_frame = imu_frames.first_or_default(rs.stream.gyro, rs.format.motion_xyz32f)
        accel_sample = accel_frame.as_motion_frame().get_motion_data()
        gyro_sample = gyro_frame.as_motion_frame().get_motion_data()
        print("\tAccel = ", accel_sample.x, ",",  accel_sample.y, ",", accel_sample.z)
        print("\tGyro = ", gyro_sample.x, ",",  gyro_sample.y, ",", gyro_sample.z)

I get the error AttributeError: 'tuple' object has no attribute 'first_or_default'

  • I tried the combinations of 200 accel / 400 gyro, 200 / 200, and 100 / 400. The problem always occurs.

  • I am not sure what "manually control when the CPU is put to sleep" means here. Do you mean adding a time.sleep(0.1) after poll_for_frames()?

@MartyG-RealSense
Collaborator

#9800 (comment) has an example of using poll_for_frames() and time.sleep.

In the header of the script beneath 'import pyrealsense2 as rs', include the instruction import time

[screenshot: poll_for_frames() example with time.sleep]
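The poll-plus-sleep pattern described above can be sketched as follows: poll_for_frames() returns immediately with a NULL frame when nothing is ready, so the loop must sleep explicitly or it will busy-wait at ~100% CPU. A stub pipeline stands in for rs.pipeline() so the snippet runs without a camera:

```python
import time

# FakePipeline is a stub standing in for rs.pipeline(): it delivers a
# frameset on every third poll and None otherwise, mimicking the NULL
# frames poll_for_frames() returns between arrivals.
class FakePipeline:
    def __init__(self):
        self._n = 0
    def poll_for_frames(self):
        self._n += 1
        return "frameset" if self._n % 3 == 0 else None

pipeline = FakePipeline()
received = 0
for _ in range(9):
    frames = pipeline.poll_for_frames()
    if frames:                 # truthy only when a frameset actually arrived
        received += 1
    time.sleep(0.01)           # yield the CPU between polls

print(received)
```

The sleep interval is a tuning knob: too long and samples are missed at high IMU rates, too short and CPU usage climbs.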

@Yoki-pjx
Author

Yoki-pjx commented Nov 14, 2022

The poll_for_frames() + time.sleep() approach doesn't work for me. The problem still occurs: no frames are received from the camera.
[screenshot: console output showing NULL frames]

Do you have an example of using try_wait_for_frames(), given the error I got in my previous attempt?

@Yoki-pjx
Author

Yoki-pjx commented Nov 14, 2022

Hi @MartyG-RealSense,

I am also considering whether I could count the number of null frames, then stop the pipeline and start it again. Is there a function for restarting the pipeline?

@MartyG-RealSense
Collaborator

Hi @Yoki-pjx My apologies for the delay in responding further. The pipeline can be restarted using the hardware_reset() instruction, and its activation can be made conditional on a certain condition being satisfied (for example: If null frame count is greater than a certain value then reset).

A Python example of hardware_reset() for a single camera is below.

import pyrealsense2 as rs2
ctx = rs2.context()
devices = ctx.query_devices()
for dev in devices:
    dev.hardware_reset()

Here is another example that resets a camera with a particular serial number.

import pyrealsense2 as rs2
ctx = rs2.context()
device_list = ctx.query_devices()
for dev in device_list:
    serial = dev.query_sensors()[0].get_info(rs2.camera_info.serial_number)
    # compare to the desired serial number before resetting
    dev.hardware_reset()

@Yoki-pjx
Author

Yoki-pjx commented Nov 18, 2022

Hi @MartyG-RealSense,

I made a variable to count the NULL frames. When it exceeds 50, I stop the pipelines, reset the camera, and restart the pipelines. But after several minutes I get the error: libusb: error [_get_usbfs_fd] libusb couldn't open USB device /dev/bus/usb/002/008: **Too many open files**
The code is as follows:

import pyrealsense2 as rs
ctx = rs.context()
devices = ctx.query_devices()
imu_error_count = 0
while True:
    if enable_rgb or enable_depth:
        frames = pipeline.wait_for_frames()

    if enable_imu:
        imu_frames = imu_pipeline.poll_for_frames()
......
    if not imu_frames:
        imu_error_count += 1
        if imu_error_count > 50:
            imu_pipeline.stop()
            pipeline.stop()
            for dev in devices:
                dev.hardware_reset()
            imu_pipeline.start(imu_config)
            pipeline.start(config)
            imu_error_count = 0

Does Too many open files mean the pipeline creates a new USB connection to the Pi each time?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Nov 18, 2022

In general, when a hardware reset occurs the camera disconnects and then reconnects, which takes 2-3 seconds for the entire disconnect-reconnect process from start to finish. If the reconnection occurs within 5 seconds of the disconnect then the current pipeline should remain open and continue from the point that it left off. So you should not need a pipeline start instruction after dev.hardware_reset()
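Since the disconnect-reconnect cycle takes 2-3 seconds, a script that does restart the pipeline after a reset needs to wait for the device to re-enumerate first. A generic polling helper is sketched below; the probe argument is a hypothetical stand-in for a real check such as len(rs.context().query_devices()) > 0, so the helper itself runs without a camera:

```python
import time

def wait_for_reconnect(probe, timeout_s=10.0, poll_s=0.5):
    """Poll probe() until it returns True or timeout_s elapses.

    probe is any zero-argument callable; with pyrealsense2 it could check
    whether rs.context().query_devices() lists the camera again.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(poll_s)
    return False

# Demonstration with a stand-in probe that succeeds on the third call.
calls = {"n": 0}
def fake_probe():
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_for_reconnect(fake_probe, timeout_s=5.0, poll_s=0.01))
```

Gating the restart on this helper avoids trying to reopen the device while it is still mid-reboot.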

@Yoki-pjx
Author

Is there any way to avoid the 2-3 second reconnection? My system can only tolerate a 1-second interruption.

@MartyG-RealSense
Collaborator

There is no way to make the camera reset process complete faster, unfortunately. Even if the entire USB port was reset with a Linux bash script instead of just the camera with the SDK's hardware_reset() instruction, it would take around the same time for the camera to reboot (since a hardware reset acts in the same way as physically unplugging the camera and plugging it back in).

@Yoki-pjx
Author

Yoki-pjx commented Nov 19, 2022

I'm a bit broken up about this device. hardware_reset() without pipeline.stop and pipeline.start does not work in the system either.

I also installed Ubuntu 20.04.5 LTS (kernel version 5.4), which is mentioned in the notes of the SDK, from the official Raspberry Pi imager today. The error of not receiving frames still occurs in realsense-viewer. Does this mean there is a compatibility problem between the Pi and the camera?

I also found that stopping and starting the IMU pipeline works (no error) when only the IMU pipeline is open. When two pipelines are running, the error libusb: error [_get_usbfs_fd] libusb couldn't open USB device /dev/bus/usb/002/008: Too many open files occurs when I restart the IMU pipeline. I think I should pursue this direction to fix the problem. Why does it work when I stop and start only the IMU pipeline?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Nov 19, 2022

The librealsense SDK can be installed for Ubuntu on Raspberry Pi but there can sometimes be complications compared to installing on a PC. The link below has an example of a guide for installing on Pi 4 and Ubuntu 20.04.

https://ramith.fyi/setting-up-raspberry-pi-4-with-ubuntu-20-04-ros-intel-realsense/

A RealSense user shared at the link below the method that worked for them with 20.04.

https://answers.ros.org/question/363889/intel-realsens-on-ubuntu-2004-ros-noetic-installation-desription/

Another guide:

https://admantium.medium.com/rgb-depth-camera-in-robotics-starting-with-the-realsense-r435-sdk-6f6c5bf3e5e4


Yet another approach is to install on the Raspbian / Raspberry Pi OS instead of Ubuntu, like in the official Intel guide here:

https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_raspbian.md


The install method that has been the most successful for Pi boards in the past has been to build librealsense from source code with the LibUVC backend or the RSUSB backend method, which bypasses the kernel and so is not dependent on Linux versions or kernel versions and does not require kernel patching.

@Yoki-pjx
Author

I installed the SDK by the LibUVC / RSUSB backend method on Ubuntu 20.04. The RGB and depth streams work, but the IMU still has the problem I mentioned in my last comment. So I am wondering: does this mean there is a compatibility problem between the Pi and the camera?

I also found that stopping and starting the IMU pipeline works (no error) when only the IMU pipeline is open. When two pipelines are running, the error libusb: error [_get_usbfs_fd] libusb couldn't open USB device /dev/bus/usb/002/008: Too many open files occurs when I restart the IMU pipeline. I think I should pursue this direction to fix the problem. Why does it work when I stop and start only the IMU pipeline?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Nov 20, 2022

In regard to opening one pipeline or two pipelines, it sounds as though you are referring to the different types of streams, and all of the streams are opened in the same pipeline. When three types of streams are enabled in the same pipeline and one of those stream types is IMU, it can cause one of the stream types to stop receiving frames. It does not matter which order the streams are enabled in, as the trigger for the problem is the enabling of IMU.

A workaround for this in Python is to actually create two separate pipelines, with the depth and color streams placed on one pipeline and the IMU placed alone on the other pipeline. A Python example script for doing so is at #5628 (comment)
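The two-pipeline workaround can be outlined as below. This is a hedged sketch, not the script from the linked comment: the stream parameters are illustrative values for a D455-class camera, and the import is guarded so the file loads without pyrealsense2 (actually starting the pipelines still requires a connected camera):

```python
# Workaround sketch: depth + color on one pipeline, IMU alone on a second,
# so enabling the IMU cannot starve the image streams.
# Guarded import: the module loads even where pyrealsense2 is absent.
try:
    import pyrealsense2 as rs
except ImportError:
    rs = None

def start_pipelines():
    if rs is None:
        return None, None

    # Pipeline 1: depth and color together.
    img_pipe, img_cfg = rs.pipeline(), rs.config()
    img_cfg.enable_stream(rs.stream.depth, 424, 240, rs.format.z16, 30)
    img_cfg.enable_stream(rs.stream.color, 424, 240, rs.format.bgr8, 30)
    img_pipe.start(img_cfg)

    # Pipeline 2: IMU only, at the camera's default rates.
    imu_pipe, imu_cfg = rs.pipeline(), rs.config()
    imu_cfg.enable_stream(rs.stream.accel, rs.format.motion_xyz32f)
    imu_cfg.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f)
    imu_pipe.start(imu_cfg)

    return img_pipe, imu_pipe
```

Each pipeline is then read independently, e.g. wait_for_frames() on the image pipeline and a polling loop on the IMU pipeline.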

@Yoki-pjx
Author

I apologise if I was unclear. Here is my code:

import pyrealsense2.pyrealsense2 as rs
import numpy as np
import cv2
import time
import pandas as pd
import os

# Enable components
device_id = None
enable_imu = True
enable_rgb = True
enable_depth = True

# Define image size
width = 424
height = 240
fps = 30

# Configure imu
if enable_imu:
    imu_pipeline = rs.pipeline()
    imu_config = rs.config()
    
    # if we are provided with a specific device, then enable it
    if device_id is not None:
        imu_config.enable_device(device_id)
        
    # Configure streams
    imu_config.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 100) # acceleration
    imu_config.enable_stream(rs.stream.gyro, rs.format.motion_xyz32f, 200)  # gyroscope
    # Start imu streaming
    imu_profile = imu_pipeline.start(imu_config)
    # Disable global time
    options = imu_profile.get_device().first_depth_sensor()
    options.set_option(rs.option.global_time_enabled, 0)

# Configure image
if enable_depth or enable_rgb:
    pipeline = rs.pipeline()
    config = rs.config()

    if enable_depth:
        config.enable_stream(rs.stream.depth, width, height, rs.format.z16, fps)  # depth
        config.enable_stream(rs.stream.infrared, 1, width, height, rs.format.y8, fps)
        config.enable_stream(rs.stream.infrared, 2, width, height, rs.format.y8, fps)

    if enable_rgb:
        config.enable_stream(rs.stream.color, width, height, rs.format.bgr8, 60)  # rgb

    # Start img streaming
    profile = pipeline.start(config)

    # Getting the depth sensor's depth scale (see rs-align example for explanation)
    if enable_depth:
        depth_sensor = profile.get_device().first_depth_sensor()
        depth_scale = depth_sensor.get_depth_scale()
        print("Depth Scale is: ", depth_scale)
        if enable_depth:
            # Create an align object
            # rs.align allows us to perform alignment of depth frames to others frames
            # The "align_to" is the stream type to which we plan to align depth frames.
            align_to = rs.stream.color
            align = rs.align(align_to)

imu_error_count = 0

try:
    while True:

        # get the frames
        if enable_rgb or enable_depth:
            frames = pipeline.wait_for_frames()

        if enable_imu:
            imu_frames = imu_pipeline.poll_for_frames()
            print(imu_frames)

        if enable_rgb or enable_depth:
            # Align the depth frame to color frame
            aligned_frames = align.process(frames) if enable_depth and enable_rgb else None
            depth_frame = aligned_frames.get_depth_frame() if aligned_frames is not None else frames.get_depth_frame()
            color_frame = aligned_frames.get_color_frame() if aligned_frames is not None else frames.get_color_frame()
            left_frame  = aligned_frames.get_infrared_frame(1)
            right_frame = aligned_frames.get_infrared_frame(2)

            # Convert images to numpy arrays
            depth_image = np.asanyarray(depth_frame.get_data()) if enable_depth else None
            color_image = np.asanyarray(color_frame.get_data()) if enable_rgb else None
            left_image  = np.asanyarray(left_frame.get_data())
            right_image = np.asanyarray(right_frame.get_data())

            # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
            depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET) if enable_depth else None

            # Stack both images horizontally
            images = None
            if enable_rgb:
                images = np.hstack((color_image, depth_colormap)) if enable_depth else color_image
            elif enable_depth:
                images = depth_colormap
                infrared_images = np.hstack((left_image, right_image))

            # Show images
            cv2.namedWindow('RealSense', cv2.WINDOW_AUTOSIZE)
            if images is not None:
                cv2.imshow('RealSense', images)
                cv2.imshow('Infrared image', infrared_images)

        if enable_imu and imu_frames:
            accel_frame = imu_frames.first_or_default(rs.stream.accel, rs.format.motion_xyz32f)
            gyro_frame = imu_frames.first_or_default(rs.stream.gyro, rs.format.motion_xyz32f)
            accel_sample = accel_frame.as_motion_frame().get_motion_data()
            gyro_sample = gyro_frame.as_motion_frame().get_motion_data()
            print("\tAccel = ", accel_sample.x, ",", accel_sample.y, ",", accel_sample.z)
            print("\tGyro  = ", gyro_sample.x, ",", gyro_sample.y, ",", gyro_sample.z)

        if not imu_frames:
            imu_error_count += 1
            if imu_error_count > 100:
                imu_pipeline.stop()
                time.sleep(0.1)
                imu_pipeline.start(imu_config)
                imu_error_count = 0

        # Press esc or 'q' to close the image window
        key = cv2.waitKey(1)
        if key & 0xFF == ord('q') or key == 27:
            cv2.destroyAllWindows()
            break

finally:
    imu_pipeline.stop()
    pipeline.stop()

The script above produces the error (libusb: error [_get_usbfs_fd] libusb couldn't open USB device /dev/bus/usb/002/008: Too many open files) after a couple of minutes.

import pyrealsense2.pyrealsense2 as rs
import numpy as np
import time
import pandas as pd
import os

# Enable components
enable_imu = True

# Define image size
fps = 30

# Configure depth and color streams
pipeline_imu = rs.pipeline()
config_imu = rs.config()

# Configure depth and color streams
config_imu.enable_stream(rs.stream.accel, rs.format.motion_xyz32f, 100)  # Acceleration
config_imu.enable_stream(rs.stream.gyro,  rs.format.motion_xyz32f, 400)  # Gyroscope

# Disable global time enabled
profile = pipeline_imu.start(config_imu)
options = profile.get_device().first_depth_sensor()
options.set_option(rs.option.global_time_enabled,0)

imu_error_count = 0

try:
    while True:
        # get the frames
        frames_imu = pipeline_imu.wait_for_frames()                
        print(frames_imu)

        if frames_imu:
            accel_frame = frames_imu.first_or_default(rs.stream.accel)
            gyro_frame  = frames_imu.first_or_default(rs.stream.gyro)
            accel_sample = accel_frame.as_motion_frame().get_motion_data()
            gyro_sample = gyro_frame.as_motion_frame().get_motion_data()
            print("\tAccel = ", accel_sample.x, ",", accel_sample.y, ",", accel_sample.z)
            print("\tGyro  = ", gyro_sample.x, ",", gyro_sample.y, ",", gyro_sample.z)

        if not frames_imu:
            imu_error_count += 1
            if imu_error_count > 100:
                pipeline_imu.stop()
                time.sleep(0.2)
                pipeline_imu.start(config_imu)
                imu_error_count = 0

finally:
    pipeline_imu.stop()

The second script works without any errors, but I cannot use the image streams with it.

I think I should pursue this direction to fix the problem.
How can I avoid the error (libusb: error [_get_usbfs_fd] libusb couldn't open USB device /dev/bus/usb/002/008: Too many open files) and make the first script work?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Nov 22, 2022

I have not seen that 'Too many open files' error before with RealSense and so unfortunately do not have advice to offer about it, though it does also occur in non-RealSense cases, as shown by googling for the term raspberry pi too many open files

In the second script that does work but does not provide image data, that is because you do not have any code for accessing the depth and color streams in it, right?

In the first script, it seems unusual to me to have two imshow instructions one after another. What happens if you comment out the infrared image line?

cv2.imshow('RealSense', images)
cv2.imshow('Infrared image', infrared_images)
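For what it's worth, 'Too many open files' on Linux generally means the process has hit its per-process file-descriptor limit (each libusb device open consumes descriptors, so a leak from repeated stop/reset/start cycles eventually exhausts it). Checking and, if low, raising that limit is a standard Linux step, not RealSense-specific:

```shell
# Show the current soft and hard per-process open-file limits
# (the soft limit is often 1024 by default).
ulimit -n
ulimit -Hn

# To raise the soft limit for the current shell session (up to the hard
# limit) before launching the Python script, run e.g.:
#   ulimit -n 4096
# To make it permanent, add an entry for the user in
# /etc/security/limits.conf, e.g.:
#   pi  soft  nofile  4096
```

Note that raising the limit only buys time if descriptors are genuinely leaking; the underlying leak still needs fixing.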

@Yoki-pjx
Author


Yes, the second script does work and provides IMU data only.

I tested the first script for two hours, and it also works without any errors. The number of windows I open to show the images does not matter, as shown below.
[screenshot: first script running with image windows open]

Thus, I believe the two pipelines have some conflict when I start the image stream and the IMU stream simultaneously. When the IMU stream stops and starts, it takes up a new USB port, and then I get the error libusb: error [_get_usbfs_fd] libusb couldn't open USB device

@MartyG-RealSense
Collaborator

What happens if you reverse the order of the stop() instructions so pipeline stops first and imu_pipeline stops second?

Does it also make a difference if you put a sleep instruction between the stop() instructions to delay the stopping of the second pipeline for a short time?

@MartyG-RealSense
Collaborator

Hi @Yoki-pjx Do you require further assistance with this case, please? Thanks!

@Yoki-pjx
Author

Yoki-pjx commented Dec 3, 2022

What happens if you reverse the order of the stop() instructions so pipeline stops first and imu_pipeline stops second?

Does it also make a difference if you put a sleep instruction between the stop() instructions to delay the stopping of the second pipeline for a short time?

Hi @MartyG-RealSense ,
I changed the order of the stop(), and added a 0.2-second or 0.5-second delay between the pipeline stopping and restarting. It did not work...

@MartyG-RealSense
Collaborator

At #10238 (comment) I tracked down another Python IMU + depth + color script that uses threading as its approach to resolving the problem of enabling three streams.
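The threading approach in scripts like that one amounts to giving each pipeline its own blocking read loop in its own thread, so a stall on one stream cannot starve the other. A schematic version is below; the pipeline class is a stub so the pattern can run without a camera or pyrealsense2:

```python
import threading
import time

# One blocking read loop per pipeline, each in its own thread, so a stall
# in the IMU loop cannot block the depth/color loop (and vice versa).
# FakePipeline is a stub standing in for rs.pipeline().
class FakePipeline:
    def __init__(self, name):
        self.name = name
    def wait_for_frames(self):
        time.sleep(0.01)       # pretend to block until a frameset arrives
        return f"{self.name}-frameset"

def reader(pipeline, out, n_frames):
    # In a real script this loop would run until shutdown and hand framesets
    # to a queue; here it reads a fixed count so the demo terminates.
    for _ in range(n_frames):
        out.append(pipeline.wait_for_frames())

results = {"img": [], "imu": []}
threads = [
    threading.Thread(target=reader, args=(FakePipeline("img"), results["img"], 5)),
    threading.Thread(target=reader, args=(FakePipeline("imu"), results["imu"], 5)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results["img"]), len(results["imu"]))
```

With real pipelines, each thread would own exactly one pipeline; sharing a pipeline across threads is what tends to cause trouble.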

@MartyG-RealSense
Collaborator

Hi @Yoki-pjx Do you have an update about this case that you can provide, please? Thanks!

@Yoki-pjx
Author

Hi @MartyG-RealSense,
They all failed, so I have bought a separate BNO085 IMU to connect instead. Please try to fix this problem in the SDK in the future, as more than three people (as I mentioned in the first comment) have raised this problem in issues without it being fixed.

@MartyG-RealSense
Collaborator

I have highlighted this case to my Intel RealSense colleagues. Thanks very much @Yoki-pjx

@MartyG-RealSense
Collaborator

After discussion of the Raspberry Pi IMU issue with my Intel RealSense colleagues, it was decided that we will keep this issue open to give further Raspberry Pi users who have this problem the opportunity to add comments about their own experiences. Thanks very much for your report!

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jan 4, 2023

Another couple of issues with IMU on Raspberry Pi were reported at #11292 and #11288 - I have highlighted these new cases to my Intel RealSense colleagues.
