
Frame didn't arrive within 5000 #12055

Closed

SylvanSi opened this issue Jul 31, 2023 · 52 comments

@SylvanSi


Required Info
Camera Model: D435f
Firmware Version: (Open RealSense Viewer --> Click info)
Operating System & Version: Ubuntu 20.04
Platform:
SDK Version: 2.54.1.0
Language: Python
Segment:

Issue Description

Frame didn't arrive within 5000

Current SDK version: 2.54.1.0
rs-capture can capture video and realsense-viewer opens and runs,
but when I use Python to run my code, it fails at
frames = pipeline.wait_for_frames()
RuntimeError: Frame didn't arrive within 5000

Please take a look at this problem for me.
Thanks :)

@SylvanSi
Author

SylvanSi commented Jul 31, 2023

The code is as follows:

import pyrealsense2 as rs
import numpy as np
import cv2

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

# Reset all connected devices before starting the pipeline
print("reset start")
ctx = rs.context()
devices = ctx.query_devices()
for dev in devices:
    dev.hardware_reset()
print("reset done")

profile = pipeline.start(config)

depth_sensor = profile.get_device().first_depth_sensor()
depth_scale = depth_sensor.get_depth_scale()
print("Depth Scale is: ", depth_scale)

# Clip everything further away than 1 meter
clipping_distance_in_meters = 1
clipping_distance = clipping_distance_in_meters / depth_scale

align_to = rs.stream.color
align = rs.align(align_to)

try:
    while True:
        frames = pipeline.wait_for_frames()

        aligned_frames = align.process(frames)

        aligned_depth_frame = aligned_frames.get_depth_frame()
        color_frame = aligned_frames.get_color_frame()

        if not aligned_depth_frame or not color_frame:
            continue

        depth_image = np.asanyarray(aligned_depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())

        # Replace everything beyond the clipping distance with a grey background
        grey_color = 153
        depth_image_3d = np.dstack((depth_image, depth_image, depth_image))
        bg_removed = np.where((depth_image_3d > clipping_distance) | (depth_image_3d <= 0), grey_color, color_image)

        depth_colormap = cv2.applyColorMap(cv2.convertScaleAbs(depth_image, alpha=0.03), cv2.COLORMAP_JET)
        images = np.hstack((bg_removed, depth_colormap))
        cv2.namedWindow('Align Example', cv2.WINDOW_AUTOSIZE)
        cv2.imshow('Align Example', images)
        key = cv2.waitKey(1)

        if key & 0xFF == ord('q') or key == 27:
            cv2.destroyAllWindows()
            break
finally:
    pipeline.stop()

When I try the code from "[ERROR] RuntimeError: Frame didn't arrive within 5000"
#6628
I get:

reset start
reset done
Depth Scale is: 0.0010000000474974513
Traceback (most recent call last):
  File "/root/pan/install_test2.py", line 32, in <module>
    frames = pipeline.wait_for_frames()
RuntimeError: Frame didn't arrive within 5000

Does this mean that I got a frame at the beginning?

@SylvanSi
Author

I can use pyrealsense2 on my PC.

@SylvanSi
Author

SylvanSi commented Jul 31, 2023

One more issue: the device name shown in the attached screenshots is not D435f. Is that a problem?

[image]
[image]

@SylvanSi
Author

When I use realsense-viewer, it reports:
INFO [255085735026720] (rs.cpp:2697) Framebuffer size changed to 1066 x 652
31/07 08:50:25,480 INFO [255085735026720] (rs.cpp:2697) Window size changed to 1066 x 652
31/07 08:50:38,138 WARNING [255084977647808] (messenger-libusb.cpp:42) control_transfer returned error, index: 300, error: Resource temporarily unavailable, number: b
31/07 08:50:38,149 WARNING [255084977647808] (messenger-libusb.cpp:42) control_transfer returned error, index: 300, error: Resource temporarily unavailable, number: b
31/07 08:50:38,160 WARNING [255084977647808] (messenger-libusb.cpp:42) control_transfer returned error, index: 300, error: Resource temporarily unavailable, number: b

@MartyG-RealSense
Collaborator

Hi @SylvanSi A D435f camera is detected as D435, so this is normal and not something to be concerned about.


It looks as though you have modified the align-depth2color.py example program and added a hardware reset mechanism.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/align-depth2color.py

When a camera is reset, it is disconnected and then reconnected. If it is not re-detected after the disconnection, frames may not arrive within the 5-second period (5000 ms) allowed before the program times out and produces RuntimeError: Frame didn't arrive within 5000.

Does the original align-depth2color.py program work if you run it without changes?
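
If you do keep the reset, here is a rough sketch (not from the SDK examples, just an illustration assuming pyrealsense2 and an arbitrary 10-second budget) of waiting for the camera to re-enumerate before starting the pipeline:

import time
import pyrealsense2 as rs

ctx = rs.context()
for dev in ctx.query_devices():
    dev.hardware_reset()

# The camera drops off the USB bus during a reset, so give it a moment
# and then poll the context until a device re-appears.
time.sleep(2)
deadline = time.time() + 10
while time.time() < deadline and len(list(ctx.query_devices())) == 0:
    time.sleep(0.5)

pipeline = rs.pipeline(ctx)
pipeline.start()  # pass a rs.config() here if specific stream profiles are needed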


The 'Resource temporarily unavailable' control_transfer warnings can indicate a communication problem between the camera and the computer, such as an issue with the USB port or the USB cable.


Are you using the official 1 meter long USB cable supplied with the camera or a longer USB cable of your own choice, please?

@SylvanSi
Author

SylvanSi commented Aug 1, 2023

Hi @MartyG-RealSense, thanks for the reply. The unmodified example doesn't work either. I am using the official 1 m USB cable, and I also tried other cables to make sure the cable is not the cause. As I said before, the camera works on my PC, where I can access it with pyrealsense2.

@MartyG-RealSense
Collaborator

If pyrealsense2 works on your PC, is the 'Frame didn't arrive within 5000' error when you run your script occurring on that same PC or on a different computer / computing device?

@SylvanSi
Author

SylvanSi commented Aug 1, 2023

It won't occur. It works normally.

@MartyG-RealSense
Collaborator

Do you mean you have pyrealsense2 installed on your PC and it works but you have this error when running your program?

@SylvanSi
Author

SylvanSi commented Aug 1, 2023

I am sorry that I didn't explain my problem clearly. When I connect the D435 to my PC and use Python to run my project, the camera shows RGB and depth images properly. But when I connect it to the Ubuntu device to run another program, it doesn't work and shows 'Frame didn't arrive within 5000'. So I used align-depth2color.py to verify, and it still shows 'Frame didn't arrive within 5000'. I just want to explain that there is no problem with my project. Thanks :)

@MartyG-RealSense
Collaborator

So the first time that you run a program it is fine, but the second time that you run a program (even one that is not your own) it says 'Frame didn't arrive within 5000'.

So the problem is not with your script but with running a program after the first one?

@SylvanSi
Author

SylvanSi commented Aug 1, 2023

Sorry again; it works fine on the PC, so we don't need to discuss that. The problem is that when I use the camera on my Ubuntu device, it says 'Frame didn't arrive within 5000'. It doesn't work even once.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Aug 1, 2023

I think the confusion is coming from 'Ubuntu device'. So if the Ubuntu device is another computer, is the PC a Windows machine?

My apologies, and thanks for your patience!

@SylvanSi
Author

SylvanSi commented Aug 1, 2023

I am sorry for the confusion. Yes, the 'PC' is my Windows device, and the 'Ubuntu device' is actually a development board. So what makes this happen? I also noticed that during those 5 seconds the camera's infrared laser was working (it was flashing).

@MartyG-RealSense
Collaborator

It's no problem at all. Thanks very much for the confirmation.

What is the Ubuntu development board that you are using (Raspberry Pi, Nvidia Jetson, etc.)?

@SylvanSi
Author

SylvanSi commented Aug 2, 2023

It is an Atlas 200I DK, with a Huawei Ascend chip. I haven't seen anyone else use it with RealSense.

@MartyG-RealSense
Collaborator

It looks as though this is the model that you are using:

https://e.huawei.com/en/products/computing/ascend/atlas-200

There is no previously reported use of this hardware with RealSense cameras. It has an Ascend 310 AI processor chip. RealSense cameras work with Intel, Arm and sometimes AMD processors. The Ascend 310 processor does not seem to use the architecture of any of these brands, so it may not be fully compatible with the camera.

@SylvanSi
Author

SylvanSi commented Aug 2, 2023

Thanks for your patience. I think it is built on the Arm architecture. Is it a compatibility issue, or is there some other problem? I tried using a C program to access the camera, and it worked successfully: I was able to save a video and play it back without any issues. However, I'm unsure how to access the depth stream. Have there been similar issues on other devices, and are there any solutions available?

@MartyG-RealSense
Collaborator

'Frame didn't arrive within 5000' is one of the most common errors experienced by RealSense users. It basically means that the camera stopped delivering new messages for more than 5 seconds, causing a time-out.

If you are using C (not C++) then there is a C example program called rs-depth at the link below.

https://github.com/IntelRealSense/librealsense/tree/master/examples/C/depth
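
On the Python side, here is a rough sketch (my own illustration, not from the SDK examples) of giving the first frames more time and retrying instead of letting the default 5-second timeout abort the script; timeout_ms is the optional argument that recent pyrealsense2 builds expose on wait_for_frames:

import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
try:
    for attempt in range(3):
        try:
            # Wait up to 15 s for this call instead of the default 5 s.
            frames = pipeline.wait_for_frames(timeout_ms=15000)
            print("got frame number", frames.get_frame_number())
            break
        except RuntimeError as err:
            print("attempt", attempt + 1, "timed out:", err)
finally:
    pipeline.stop()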

@SylvanSi
Author

SylvanSi commented Aug 2, 2023

> 'Frame didn't arrive within 5000' is one of the most common errors experienced by RealSense users. It basically means that the camera stopped delivering new messages for more than 5 seconds, causing a time-out.
>
> If you are using C (not C++) then there is a C example program called rs-depth at the link below.
>
> https://github.com/IntelRealSense/librealsense/tree/master/examples/C/depth

Thanks, but I think I still need to use Python to complete the project. I will have to try again, or get another camera. Thank you again for your patience :)

@MartyG-RealSense
Collaborator

Hi @SylvanSi Do you have an update about this case that you can provide, please? Thanks!

@SylvanSi
Author

not yet

@MartyG-RealSense
Collaborator

Okay, thanks very much @SylvanSi for the update.

@Nataraj-github

I tried to use the code given in the link
https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/python-tutorial-1-depth.py
It didn't work; I am still getting the same error:

C:\Users\Ne1\AppData\Local\Temp\ipykernel_17360\1471619645.py:37: SyntaxWarning: "is" with a literal. Did you mean "=="?
  if y%20 is 19:
Frame didn't arrive within 5000

@MartyG-RealSense
Collaborator

Hi @Nataraj-github Does the error still occur if you insert code at line 18 of the script (before the pipeline start line) to reset the camera when the script is launched?

ctx = rs.context()
devices = ctx.query_devices()
for dev in devices:
    dev.hardware_reset()

@MartyG-RealSense
Collaborator

Hi @Nataraj-github Do you require further assistance with this case, please? Thanks!

@Nataraj-github

Nataraj-github commented Sep 29, 2023 via email

@MartyG-RealSense
Collaborator

The raw depth values of the camera are 'pixel depth' values that do not represent real-world distance in meters. To get the real-world depth value in meters, you can multiply the raw depth value by the depth scale of the particular RealSense camera model being used. The depth scale of the L515 is 0.000250.

3000 x 0.000250 = 0.75 meters, or 750 mm.
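
As a rough illustration of that conversion, here is a minimal pyrealsense2 sketch (my own, not from the thread) that reads the depth scale from the device and converts one raw pixel value to meters:

import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()
try:
    # The SDK reports the scale per device: 0.001 for D400 series, 0.000250 for L515.
    depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()
    frames = pipeline.wait_for_frames()
    depth_image = np.asanyarray(frames.get_depth_frame().get_data())
    h, w = depth_image.shape
    raw = int(depth_image[h // 2, w // 2])  # raw 16-bit 'pixel depth' at the image centre
    print("raw", raw, "* scale", depth_scale, "=", raw * depth_scale, "meters")
finally:
    pipeline.stop()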

@Nataraj-github

Nataraj-github commented Sep 29, 2023 via email

@MartyG-RealSense
Collaborator

The L515 depth scale is not stated in the documentation, but you can confirm it in the RealSense Viewer tool by going to the 'L500 Depth Sensor > Controls' section of the Viewer's options side-panel and seeing the value '0.000250' beside the "Depth Units" option.

Instead of recording a bag file, you could export a separate ply file for each side of the plant from the Viewer's 3D pointcloud mode and then use CloudCompare to stitch the multiple ply files together into a single combined ply. The link below has information about doing so in CloudCompare.

#10640 (comment)
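
If you prefer to script the export instead of using the Viewer, here is a minimal sketch (my own) of saving one frameset as a ply with pyrealsense2's pointcloud class; the file name is arbitrary and the default pipeline profile is assumed to include a color stream:

import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    color = frames.get_color_frame()

    pc = rs.pointcloud()
    pc.map_to(color)                # texture the points with the color stream
    points = pc.calculate(depth)
    points.export_to_ply("side_1.ply", color)
finally:
    pipeline.stop()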

@Nataraj-github

Nataraj-github commented Oct 2, 2023 via email

@MartyG-RealSense
Collaborator

Hi @Nataraj-github Images cannot be posted to this forum by email and must instead be inserted into the comment writing box on the web-page.

The L515 camera scans an infrared laser beam over the entire field of view (FOV). The surfaces reflect the light back to a photodiode component in the camera, and the camera processes the data from the reflected beam. It then outputs a depth point representing a specific point in the scene. A depth pointcloud is generated by aggregating together all of the points in the scene that the camera is observing.

https://dev.intelrealsense.com/docs/lidar-camera-l515-datasheet

So answer 'A' will be the closest to the above explanation.

@Nataraj-github

Nataraj-github commented Oct 2, 2023 via email

@MartyG-RealSense
Collaborator

I cannot view your images, unfortunately. Please try pasting them into the comment box again. When an image is being inserted, it may take some time to load, so the Comment button should not be clicked until the upload is complete; otherwise the comment will just display text saying [image: image.png].

The L515 lidar depth camera works on different principles from stereo depth cameras such as the RealSense 400 Series. The L515 calculates distance based on light reflected back to the camera from the surface of objects, as described in the quote from the L515 data sheet that I provided above.

@MartyG-RealSense
Collaborator

In addition to the data sheet document, a User Guide for the L515 can be downloaded as a PDF file at the link below.

https://support.intelrealsense.com/hc/en-us/articles/360051646094-Intel-RealSense-LiDAR-Camera-L515-User-Guide

@MartyG-RealSense
Collaborator

Hi @Nataraj-github Do you require further assistance with your problem, please? Thanks!

@Nataraj-github

Nataraj-github commented Oct 9, 2023 via email

@MartyG-RealSense
Collaborator

@Nataraj-github The RealSense SDK has a Python example program for measuring volume called box_dimensioner_multicam

https://github.com/IntelRealSense/librealsense/tree/master/wrappers/python/examples/box_dimensioner_multicam
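
Measuring dimensions with a depth camera comes down to deprojecting depth pixels into 3D points. As a much simpler illustration than the multi-camera example (my own sketch, not the box_dimensioner_multicam code), here is how to measure the straight-line distance in meters between two depth pixels:

import math
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    intrin = depth.profile.as_video_stream_profile().get_intrinsics()

    def point_at(x, y):
        # get_distance() returns meters; deprojection gives [X, Y, Z] in camera space.
        return rs.rs2_deproject_pixel_to_point(intrin, [x, y], depth.get_distance(x, y))

    p1 = point_at(200, 240)
    p2 = point_at(440, 240)
    print("distance between the two points:", math.dist(p1, p2), "meters")
finally:
    pipeline.stop()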

@MartyG-RealSense
Collaborator

Hi @Nataraj-github Do you require further assistance with this case, please? Thanks!

@MartyG-RealSense
Collaborator

Case closed due to no further comments received.

@Nataraj-github

Hi Team,

I hope you can now see the images I have attached. I am just not clear whether the reference for measuring distance on the camera is a PLANE or the laser-emitting POINT.
From the images and text I have seen, I assume the reference for measuring distances to any object is the camera PLANE. Correct me if I am wrong. I have taken the screenshots so that you can see the page number and document name, in case you need more details to confirm. Thanks!

  1. L515_User_Guide_v1.0 page numbers 21 and 22
  2. Intel_RealSense_LiDAR_L515_Datasheet_Rev003 page numbers 10 and 11.

[image: laser point as reference point and camera face as reference plane — L515_User_Guide_v1.0]

[image: Lidar measurement reference]

@MartyG-RealSense
Collaborator

As mentioned in the second image, the depth measurement reference of L515 (where depth = 0) is the front glass of the camera. This location is known as the starting point or plane.

Light bounces back from surfaces to the camera's photodiode component and generates a depth point. All individual depth points are combined together into a point cloud image.

@Nataraj-github

Thanks for the reply, but I am still not sure you understood my question.
The distance d1 (in the first image, drawn by me) differs between the case where the laser-emitting point is the starting reference (case A) and the case where the plane is the reference (case B in the same image). In other words, the distance d1 is not the same with the laser-emitting point on the camera as the reference and with the camera's planar face as the reference. So I would like to understand whether the algorithm uses the plane or the emitting point as the reference, because the distances vary with the reference (with the plane they are perpendicular distances, with the emitting point they are inclined).

@MartyG-RealSense
Collaborator

The depth algorithms of RealSense cameras are confidential closed-source information that is not available publicly, unfortunately. If the public data sheet or user guide does not contain the information, then it cannot be disclosed.

However, page 19 of the user guide refers to an observed surface that is being depth-sensed as the plane. Pages 22 and 26 also refer to depth as a plane. Page 18 of the L515 data sheet document describes how the Depth Quality Tool can be used to test distance to plane accuracy.

@Nataraj-github

Nataraj-github commented Oct 26, 2023 via email

@MartyG-RealSense
Collaborator

You are very welcome. Thanks very much for the update!

@mujiwob

mujiwob commented Jan 30, 2024

> It is an Atlas 200I DK, with a Huawei Ascend chip. I haven't seen anyone else use it with RealSense.

Hi @SylvanSi, I'm currently trying to use RealSense on an Atlas 200I DK A2. After I compiled librealsense, I could not detect the camera in realsense-viewer or pyrealsense2. For now I can only get color and infrared frames through OpenCV, but I also need depth frames in my project. Have you found a way to use the RealSense camera on the Atlas 200I DK A2? Thanks!

@MartyG-RealSense
Collaborator

Hi @woblitent Did you build librealsense from source code with CMake and include the flag -DFORCE_RSUSB_BACKEND=TRUE in the CMake build instruction, please? An RSUSB = true source code build of librealsense can work well with 'exotic' computing hardware such as an industrial board that is not like a typical PC computer.

@mujiwob

mujiwob commented Jan 31, 2024

> Hi @woblitent Did you build librealsense from source code with CMake and include the flag -DFORCE_RSUSB_BACKEND=TRUE in the CMake build instruction, please? An RSUSB = true source code build of librealsense can work well with 'exotic' computing hardware such as an industrial board that is not like a typical PC computer.

Thank you for your response! This works!

@MartyG-RealSense
Collaborator

You are very welcome. It's excellent to hear that RSUSB = true resolved your issue. Thanks very much for the update!

@Nataraj-github

Nataraj-github commented Mar 26, 2024 via email

@MartyG-RealSense
Collaborator

Hi @Nataraj-github Are you referring to Visual Studio Code, please? If you are, then the approach that I have seen a couple of times from VS Code users is to install pyrealsense2 from a wheel package with the instruction below:

pip install pyrealsense2

The link below has more details.

#11789

The pip install instruction can be used with Python 3.7 to 3.11 on a PC and Python 3.7 to 3.9 on an Arm-based computer. If you want to use a newer Python version than these then the pyrealsense2 wrapper has to be installed from source code.

#980 (comment) discusses installing pyrealsense2 from source in Visual Studio, though the procedure may not be the same for VS Code.
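
A quick way to confirm that the wheel installed correctly (my own sketch, not from the linked issues) is to import it and list any connected devices:

import pyrealsense2 as rs

ctx = rs.context()
devices = list(ctx.query_devices())
if not devices:
    print("pyrealsense2 imported, but no RealSense device was detected")
for dev in devices:
    print(dev.get_info(rs.camera_info.name), dev.get_info(rs.camera_info.serial_number))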
