
[ERROR] "RuntimeError: Frame didn't arrive within 5000" #6628

Closed
Kimminsu-94 opened this issue Jun 18, 2020 · 32 comments

Comments

@Kimminsu-94


Required Info
Camera Model: D415
Firmware Version:
Operating System & Version: Linux (Ubuntu 18.04)
Kernel Version (Linux Only): 5.3.0-59-generic
Platform: PC
SDK Version: pyrealsense2 (2.35.2.1937)
Language: Python
Segment:

Issue Description

Addr = ("librealsense/wrappers/python/examples/align-depth2color.py")
When I run this script (Addr) once, it works very well, but if I run it again I get this error:
ERROR MSG = ["RuntimeError: Frame didn't arrive within 5000"]

I found some workarounds, but they are not real solutions:

  1. Physically reconnect the camera.
  2. Run the script on another computer.

As I mentioned, these are not fundamental solutions. Can you tell me how to fix this? If anybody can solve this problem, please help. Thanks.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jun 18, 2020

Hi @Kimminsu-94 There was another case of a user of the Python wrapper who also experienced the "Frame didn't arrive within 5000" error and had to unplug and replug the camera to make align_depth2color.py work. They found that the cause was using their own choice of USB cable.

#5717

Are you using the official short USB cable supplied with the camera or your own choice of cable, please?

@Kimminsu-94
Author

Thank you for your advice.
I followed your suggestion, but the result is the same as before
(I tried both the official USB cable and another one).

Is there any other way? If you know of one, please share it with me.

@MartyG-RealSense
Collaborator

Do you experience problems when using the camera with non-Python RealSense programs in Ubuntu, such as the RealSense Viewer, or is the problem only with Python programs?

@Kimminsu-94
Author

Kimminsu-94 commented Jun 19, 2020

I tried that, but the result is still the same.

@MartyG-RealSense
Collaborator

If an unplug-replug of the camera corrects the problem but you do not want to have to do that each time that program is used, an alternative may be to insert a hardware_reset() instruction into your script before the pipeline start, so that it resets the camera once when the script launches. Here is some example Python code for doing so:

ctx = rs.context()
devices = ctx.query_devices()
for dev in devices:
    dev.hardware_reset()
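One caveat with this approach: hardware_reset() makes the camera disconnect and re-enumerate on the USB bus, so starting the pipeline immediately after the reset can itself raise a RuntimeError. A small retry wrapper is one way to make startup robust. The helper below is hypothetical (not part of pyrealsense2), and a stub stands in for something like `lambda: pipeline.start(config)` so the pattern runs without a camera:

```python
import time

def start_with_retries(start_fn, retries=3, delay=2.0):
    """Call start_fn(), retrying after a pause if it raises RuntimeError.

    start_fn stands in for something like `lambda: pipeline.start(config)`;
    the delay gives a freshly reset camera time to re-enumerate on the bus.
    """
    last_err = None
    for attempt in range(retries):
        try:
            return start_fn()
        except RuntimeError as err:
            last_err = err
            time.sleep(delay)
    raise last_err

# Demonstration with a stub that fails once, then succeeds,
# mimicking a camera that is still re-enumerating after a reset.
calls = {"n": 0}

def fake_start():
    calls["n"] += 1
    if calls["n"] < 2:
        raise RuntimeError("Frame didn't arrive within 5000")
    return "profile"

print(start_with_retries(fake_start, retries=3, delay=0.01))  # prints "profile"
```

With real hardware, `fake_start` would be replaced by the actual `pipeline.start(config)` call and a longer delay (a second or two) would be appropriate.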

@min19828257

import pyrealsense2 as rs
import numpy as np
import cv2

print("reset start")
ctx = rs.context()
devices = ctx.query_devices()
for dev in devices:
    dev.hardware_reset()
print("reset done")

pipeline = rs.pipeline()

config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

profile = pipeline.start(config)

depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()
print("Depth Scale is: " , depth_scale)

clipping_distance_in_meters = 1 
clipping_distance = clipping_distance_in_meters / depth_scale

align_to = rs.stream.color
align = rs.align(align_to)

try:
    while True:
        frames = pipeline.wait_for_frames()
        
        aligned_frames = align.process(frames)
        
        aligned_depth_frame = aligned_frames.get_depth_frame() 
        color_frame = aligned_frames.get_color_frame()
        
        if not aligned_depth_frame or not color_frame:
            continue

        depth_image = np.asanyarray(aligned_depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())
        
        grey_color = 153
        depth_image_3d = np.dstack((depth_image,depth_image,depth_image)) 
        bg_removed = np.where((depth_image_3d > clipping_distance) | (depth_image_3d <= 0), grey_color, color_image)
    
        depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)
        images = np.hstack((bg_removed, depth_colormap))
        cv2.namedWindow('Align Example', cv2.WINDOW_AUTOSIZE)
        cv2.imshow('Align Example', images)
        key = cv2.waitKey(1)
        
        if key & 0xFF == ord('q') or key == 27:
            cv2.destroyAllWindows()
            break
finally:
    pipeline.stop()

=======================================================================

import pyrealsense2 as rs
import numpy as np
import cv2

pipeline = rs.pipeline()

config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

print("reset start")
ctx = rs.context()
devices = ctx.query_devices()
for dev in devices:
    dev.hardware_reset()
print("reset done")

profile = pipeline.start(config)

depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()
print("Depth Scale is: " , depth_scale)

clipping_distance_in_meters = 1 
clipping_distance = clipping_distance_in_meters / depth_scale

align_to = rs.stream.color
align = rs.align(align_to)

try:
    while True:
        frames = pipeline.wait_for_frames()
        
        aligned_frames = align.process(frames)
        
        aligned_depth_frame = aligned_frames.get_depth_frame() 
        color_frame = aligned_frames.get_color_frame()
        
        if not aligned_depth_frame or not color_frame:
            continue

        depth_image = np.asanyarray(aligned_depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())
        
        grey_color = 153
        depth_image_3d = np.dstack((depth_image,depth_image,depth_image)) 
        bg_removed = np.where((depth_image_3d > clipping_distance) | (depth_image_3d <= 0), grey_color, color_image)
    
        depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)
        images = np.hstack((bg_removed, depth_colormap))
        cv2.namedWindow('Align Example', cv2.WINDOW_AUTOSIZE)
        cv2.imshow('Align Example', images)
        key = cv2.waitKey(1)
        
        if key & 0xFF == ord('q') or key == 27:
            cv2.destroyAllWindows()
            break
finally:
    pipeline.stop()

=======================================================================
reset start
reset done
Depth Scale is: 0.0010000000474974513
Traceback (most recent call last):
File "test.py", line 141, in
aligned_frames = align.process(frames)
RuntimeError: Error occured during execution of the processing block! See the log for more info

@min19828257

MartyG-RealSense, your advice is very helpful and I am very grateful, but the results are still bad.
I tried the two cases you can see in my post above.
The difference between them is the location of the reset code, as you suggested.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jun 20, 2020

I should emphasise that Pyrealsense2 coding is not one of my advanced skills, so I apologise in advance for any errors.

There is an alternative reset script for Pyrealsense2 in the link below:

#6132 (comment)

If this script was adapted for the first of the scripts that you posted, then I believe that you could change this line:

depth_sensor = profile.get_device().first_depth_sensor()

And use these 3 lines instead:

device = profile.get_device()
depth_sensor = device.first_depth_sensor()
device.hardware_reset()

So the start of the script, if your original reset code is taken out, may look like this:

import pyrealsense2 as rs
import numpy as np
import cv2

pipeline = rs.pipeline()

config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

profile = pipeline.start(config)

device = profile.get_device()
depth_sensor = device.first_depth_sensor()
device.hardware_reset()

@min19828257

Thanks to your advice, there has been some change. However, a new type of error appears.

The process is as follows: as before, the script succeeds on the first run, but not on the second. I kept re-running it and it succeeded on the fourth attempt.

But this is not perfect either: the result is only a single frame, not a video stream.

Here is the code and the error:

import pyrealsense2 as rs
import numpy as np
import cv2

pipeline = rs.pipeline()

config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

profile = pipeline.start(config)

device = profile.get_device()
depth_sensor = device.first_depth_sensor()
device.hardware_reset()

depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()
print("Depth Scale is: " , depth_scale)

clipping_distance_in_meters = 1 
clipping_distance = clipping_distance_in_meters / depth_scale

align_to = rs.stream.color
align = rs.align(align_to)

try:
    while True:
        frames = pipeline.wait_for_frames()
        
        aligned_frames = align.process(frames)
        
        aligned_depth_frame = aligned_frames.get_depth_frame() 
        color_frame = aligned_frames.get_color_frame()
        
        if not aligned_depth_frame or not color_frame:
            continue

        depth_image = np.asanyarray(aligned_depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())
        
        grey_color = 153
        depth_image_3d = np.dstack((depth_image,depth_image,depth_image)) 
        bg_removed = np.where((depth_image_3d > clipping_distance) | (depth_image_3d <= 0), grey_color, color_image)
    
        depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)
        images = np.hstack((bg_removed, depth_colormap))
        cv2.namedWindow('Align Example', cv2.WINDOW_AUTOSIZE)
        cv2.imshow('Align Example', images)
        key = cv2.waitKey(1)
        
        if key & 0xFF == ord('q') or key == 27:
            cv2.destroyAllWindows()
            break
finally:
    pipeline.stop()

Depth Scale is: 0.0010000000474974513
Traceback (most recent call last):
File "test.py", line 121, in
aligned_frames = align.process(frames)
RuntimeError: Error occured during execution of the processing block! See the log for more info

Depth Scale is: 0.0010000000474974513
Traceback (most recent call last):
File "test.py", line 119, in
frames = pipeline.wait_for_frames()
RuntimeError: Frame didn't arrive within 5000

@MartyG-RealSense
Collaborator

Please remove the second depth_sensor definition line, as you have already defined depth_sensor two lines before.

[image: screenshot highlighting the line to remove]

@Kimminsu-94
Author

import pyrealsense2 as rs
import numpy as np
import cv2

pipeline = rs.pipeline()

config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

profile = pipeline.start(config)

device = profile.get_device()
depth_sensor = device.first_depth_sensor()
device.hardware_reset()

depth_scale = depth_sensor.get_depth_scale()
print("Depth Scale is: " , depth_scale)

clipping_distance_in_meters = 1 
clipping_distance = clipping_distance_in_meters / depth_scale

align_to = rs.stream.color
align = rs.align(align_to)

try:
    while True:
        frames = pipeline.wait_for_frames()
        
        aligned_frames = align.process(frames)
        
        aligned_depth_frame = aligned_frames.get_depth_frame() 
        color_frame = aligned_frames.get_color_frame()
        
        if not aligned_depth_frame or not color_frame:
            continue

        depth_image = np.asanyarray(aligned_depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())
        
        grey_color = 153
        depth_image_3d = np.dstack((depth_image,depth_image,depth_image)) 
        bg_removed = np.where((depth_image_3d > clipping_distance) | (depth_image_3d <= 0), grey_color, color_image)
    
        depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)
        images = np.hstack((bg_removed, depth_colormap))
        cv2.namedWindow('Align Example', cv2.WINDOW_AUTOSIZE)
        cv2.imshow('Align Example', images)
        key = cv2.waitKey(1)
        
        if key & 0xFF == ord('q') or key == 27:
            cv2.destroyAllWindows()
            break
finally:
    pipeline.stop()

This time I ran the code several times and got the error below.
I searched for this error on Google, but so far I couldn't find the correct answer.

Depth Scale is: 0.0010000000474974513
Traceback (most recent call last):
File "test.py", line 122, in
aligned_frames = align.process(frames)
RuntimeError: Error occured during execution of the processing block! See the log for more info

@MartyG-RealSense
Collaborator

Doronhi, the RealSense ROS wrapper developer, once investigated the possible cause of "RuntimeError: Error occurred during execution of the processing block!"

His conclusion was that it may be related to a problem with the computer hardware having difficulty with handling the processing demand. This may be due to the computer being a lower-specification computer device such as a Raspberry Pi board, or because something is currently putting a heavy load on the computer's CPU and it is busy.

IntelRealSense/realsense-ros#652 (comment)

@Kimminsu-94
Author

I read "IntelRealSense/realsense-ros#652 (comment)".
The important part, as I understand it, is CPU speed.
Some people suggest that I should change my computer,
but I am not sure whether my computer's specification is adequate.
Here are my computer specs. Are they sufficient? If you know, can you tell me the solution?

minsukim@minsukim-System-Product-Name:~$ cat /proc/cpuinfo
processor	: 0
vendor_id	: AuthenticAMD
cpu family	: 23
model		: 1
model name	: AMD Ryzen 7 1700X Eight-Core Processor
stepping	: 1
microcode	: 0x8001105
cpu MHz		: 1883.270
cache size	: 512 KB
physical id	: 0
siblings	: 16
core id		: 0
cpu cores	: 8
apicid		: 0
initial apicid	: 0
fpu		: yes
fpu_exception	: yes
cpuid level	: 13
wp		: yes

@MartyG-RealSense
Collaborator

Officially, the RealSense 400 Series cameras are validated to work with any Intel or ARM processor. Unofficially, there have been a few documented cases of the cameras working with AMD Ryzen and Threadripper but the success rate of how well it works may depend upon the particular AMD computer.

So although Ryzen 7 is a powerful processor, AMD compatibility with RealSense 400 Series cameras is not as certain as it is with Intel and ARM based computers / computing devices.

@min19828257

As you said, it was a compatibility issue. I agree. Thanks for the answer.

@MartyG-RealSense
Collaborator

Thanks so much for the update!

@dorodnic dorodnic closed this as completed Jul 9, 2020
@danial880

This error has made my life miserable. I have tried all the solutions given above. I am trying to run frame_queue_example.py from the official repo, using a D435i RealSense camera with an NVIDIA Xavier NX.
OS: Ubuntu 18.04

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Jul 22, 2020

Hi @danial880 If you are able to use the RealSense Viewer program in Ubuntu, would it be possible please to go to the More option at the top of the Viewer's options side-panel and update the camera's firmware driver to the newest 5.12.6 version by selecting Install Recommended Firmware from the More option's drop-down menu?

@steb6

steb6 commented May 16, 2022

For anyone ending up here like me, in a multiprocessing application I was doing:

while True:
    rgb, depth = camera.read()

    for queue in processes.values():
        send(queue, {'rgb': rgb, 'depth': depth})

and I solved it this way:

while True:
    rgb, depth = camera.read()

    rgb_ = copy.deepcopy(rgb)
    depth_ = copy.deepcopy(depth)

    for queue in processes.values():
        send(queue, {'rgb': rgb_, 'depth': depth_})

check the wait_for_frames docstring!
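A likely reason the deep copy helps: the arrays returned by np.asanyarray(frame.get_data()) are views into the SDK's internal frame buffers, which are recycled as the frame queue wraps around, so a view handed to another process can end up pointing at reused memory. Copying detaches the data from that buffer. The view-versus-copy distinction can be demonstrated with plain NumPy, no camera required (the buffer here only stands in for a frame's backing memory):

```python
import numpy as np

# Stand-in for the raw buffer behind a frame's get_data().
buffer = np.zeros((480, 640), dtype=np.uint16)

view = np.asanyarray(buffer)   # shares memory with the buffer
copied = buffer.copy()         # owns its own memory

print(np.shares_memory(buffer, view))    # True
print(np.shares_memory(buffer, copied))  # False

# If the SDK overwrites the buffer (as the frame queue recycles it),
# the view changes underneath you, but the copy does not.
buffer[0, 0] = 1234
print(view[0, 0], copied[0, 0])          # 1234 0
```

This is why sending copies (rather than the original arrays) through the multiprocessing queues resolved the timeout in the snippet above.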

@MartyG-RealSense
Collaborator

Thanks so much @StefanoBerti for sharing your multiprocessing solution with the RealSense community!

@Xngzdai

Xngzdai commented May 27, 2022

If an unplug-replug of the camera corrects the problem but you do not want to have to do that each time that program is used, an alternative may be to insert a hardware_reset() instruction into your script before the pipeline start, so that it resets the camera once when the script launches. Here is some example Python code for doing so:

ctx = rs.context()
devices = ctx.query_devices()
for dev in devices:
    dev.hardware_reset()

Unplug-and-replug works for me for now, though. I might check the solutions in this thread if I no longer want to do it manually.

@svrkrishnavivek

@MartyG-RealSense I notice that I get this error when the camera USB cable runs close to power cables (a source of EMI); if I move the camera USB cable away from any source of EMI, the video feed is fine and the error does not occur.

Always check whether there are external sources of EMI (such as power cables) near the camera USB cable, and try to reroute them.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented May 31, 2022

Hi @svrkrishnavivek The camera hardware should not be affected by an EM field, though the USB cable may be a different matter if it is not sufficiently shielded.

https://www.blackbox.be/en-be/page/28620/Resources/Technical-Resources/Black-Box-Explains/copper-cable/shielded-vs-unshielded-cables

@adrian1875

In my case, this error occurred when I used VideoCapture in OpenCV and wait_for_frames in RealSense together.

@MartyG-RealSense
Collaborator

Hi @adrian1875 Did you solve your error or do you still have a problem, please?

@andyhebiao

Because I inserted the plug into the USB 3.0 port too slowly, the system detected the interface as USB 2.0. I unplugged it and plugged it back in quickly, and it worked again. This has happened to me several times.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Oct 12, 2022

Thanks so much for sharing your experience, @andyhebiao

That is correct, a slow insertion instead of a firm, quick insertion motion increases the chances of a USB 2 mis-detection occurring.
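A mis-detection like this can also be caught in software before the pipeline starts: pyrealsense2 exposes the negotiated link type as rs.camera_info.usb_type_descriptor, a string such as "3.2" or "2.1". The parsing helper below is a hypothetical sketch and runs without a camera; the commented lines show where the real query would go:

```python
def usb3_or_better(descriptor: str) -> bool:
    """Return True if a USB type descriptor string such as '3.2' or '2.1'
    reports at least a USB 3.x link."""
    try:
        major = int(descriptor.strip().split(".")[0])
    except (ValueError, AttributeError):
        return False  # unknown or garbled descriptor: assume the worst
    return major >= 3

# With a camera attached, the descriptor string would come from:
#   import pyrealsense2 as rs
#   dev = rs.context().query_devices()[0]
#   desc = dev.get_info(rs.camera_info.usb_type_descriptor)
for desc in ("3.2", "2.1"):
    print(desc, usb3_or_better(desc))
```

Warning (or refusing to start) when the check fails gives a clearer signal than waiting for the 5000 ms frame timeout.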

@ibrahim810

I can also confirm that the problem is due to the cables used.
In my case, I'm attempting to use four cameras, two connected via USB 3.0 and the other two via USB 2.1, while implementing the solution suggested by @MartyG-RealSense.

If an unplug-replug of the camera corrects the problem but you do not want to have to do that each time that program is used, an alternative may be to insert a hardware_reset() instruction into your script before the pipeline start, so that it resets the camera once when the script launches. Here is some example Python code for doing so:

ctx = rs.context()
devices = ctx.query_devices()
for dev in devices:
    dev.hardware_reset()

Only the two cameras with USB 3.0 cables work.

Here is the updated code to view multiple depth cameras (RGB & Depth).
And it works perfectly with my D435i on Windows10 (don't forget about the cables).

import pyrealsense2 as rs
import numpy as np
import cv2


ctx = rs.context()
serials = []
devices = ctx.query_devices()
for dev in devices:
    dev.hardware_reset()

if len(ctx.devices) > 0:
    for dev in ctx.devices:
        print('Found device:', dev.get_info(rs.camera_info.name), dev.get_info(rs.camera_info.serial_number))
        serials.append(dev.get_info(rs.camera_info.serial_number))
else:
    print("No Intel Device connected")

pipelines = []
windows = []

for serial in serials:
    pipe = rs.pipeline(ctx)
    cfg = rs.config()
    cfg.enable_device(serial)
    cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    pipe.start(cfg)
    pipelines.append(pipe)

    window_name = f"Camera {serial}"
    cv2.namedWindow(window_name, cv2.WINDOW_NORMAL)
    windows.append(window_name)

try:
    while True:
        for pipe, window_name in zip(pipelines, windows):
            frames = pipe.wait_for_frames()
            depth_frame = frames.get_depth_frame()
            color_frame = frames.get_color_frame()
            if not depth_frame or not color_frame:
                continue

            depth_image = np.asanyarray(depth_frame.get_data())
            color_image = np.asanyarray(color_frame.get_data())

            depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.5), cv2.COLORMAP_JET)

            cv2.imshow(window_name, color_image)
            cv2.imshow(window_name + " Depth", depth_colormap)

        key = cv2.waitKey(1)
        if key == 27:  # ESC key
            break

finally:
    for pipe in pipelines:
        pipe.stop()

    cv2.destroyAllWindows()

@MartyG-RealSense
Collaborator

Thanks so much @ibrahim810 for sharing your experience and your code!

There was also a recent case at the link below where a RealSense user had problems with using USB2 with multicam.

https://support.intelrealsense.com/hc/en-us/community/posts/19732914930579

@Boyangs3

Because I inserted the plug into the USB 3.0 port too slowly, the system detected the interface as USB 2.0. I unplugged it and plugged it back in quickly, and it worked again. This has happened to me several times.

Thank you! I finally solved this problem thanks to your advice to "plug in quickly".

@hatfield-c

hatfield-c commented Sep 18, 2024

I wanted to say that this issue, for me, was caused by setting up an rs.config() object but forgetting to pass it to pipe.start(). So essentially I had:

self.rs_config.enable_stream(rs.stream.depth, self.width, self.height, rs.format.z16, self.fps)
self.rs_config.enable_stream(rs.stream.color, self.width, self.height, rs.format.bgr8, self.fps)
self.profile = self.data_pipe.start() # <---- missing self.rs_config argument

Then I tried to read self.data_pipe.wait_for_frames(), and it gave the error. Once I made sure to pass in the config, the error went away.

@MartyG-RealSense
Collaborator

Thanks so much @hatfield-c for sharing what worked for you!
