Realsense python using post processing and alignment causing either blank depth frames or RuntimeError #11246
Hi @MartinPedersenpp I would first recommend removing the following two lines from the wait_for_exposure_stabilisation code section.
These two instructions are repeated further down the script; at this point all you want to do is have the program skip the initial frames so that the auto-exposure settles before the first frame is processed.
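The frame-skip mechanism being described can be sketched as follows. This is a sketch only: it assumes a pyrealsense2 pipeline, and the variable names, resolution, and skip count are placeholders rather than the poster's actual code. Note that it requires a connected RealSense camera to run.

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 30)
pipeline.start(config)

# Discard roughly the first second of frames (30 frames at 30 FPS) so the
# auto-exposure can settle; the return value is deliberately ignored.
for _ in range(30):
    pipeline.wait_for_frames()
```

After the loop, the first frameset the application actually stores should already have stable exposure.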
Thanks for the feedback @MartyG-RealSense, but if I remove the wait_for_frames() call from wait_for_exposure_stabilisation, will the auto-exposure still get corrected? Is the pipeline fetching 30 FPS all the time and only "saving" the frames that are extracted with wait_for_frames()? Also, what about the post-processing: will the spatial and temporal filters still smooth the data out if I have a low alpha threshold, even if I don't pass my data through the filters?
If you are able to disable auto-exposure and use manual exposure, then you do not need a mechanism to skip frames, as the exposure should be correct from the first frame with manual exposure. If you require auto-exposure, I think that what would work for your script is to implement the skip mechanism in the way described at #9800 (comment).

There is the possibility of the FPS varying when using both depth and color streams. If you have auto-exposure enabled and the RGB option auto-exposure priority disabled, then the SDK will try to enforce a constant FPS. A simple code snippet for disabling it in Python is at #5885 (comment).

I would suggest removing the Spatial filter, as it can take a long time to process whilst not making a large difference to the image. Setting a low alpha for the Temporal filter such as '0.1' can reduce fluctuation in depth values but will cause the image to take longer to update. This can cause a wave effect when observing motion, as the image slowly updates from one state to the next.
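The two settings mentioned above (disabling auto-exposure priority on the RGB sensor and lowering the Temporal filter's alpha) could be sketched like this. The option names come from pyrealsense2, but the surrounding structure is an assumption, and the snippet needs a connected camera to run.

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()

# Disable auto-exposure priority on the RGB sensor so the SDK tries to
# hold a constant FPS while auto-exposure stays enabled (per #5885).
color_sensor = profile.get_device().first_color_sensor()
if color_sensor.supports(rs.option.auto_exposure_priority):
    color_sensor.set_option(rs.option.auto_exposure_priority, 0)

# Temporal filter with a low smoothing alpha, as discussed; lower alpha
# means heavier smoothing and a slower-updating image.
temporal = rs.temporal_filter()
temporal.set_option(rs.option.filter_smooth_alpha, 0.1)
```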
@MartyG-RealSense isn't the solution in #9800 (comment) what I am already doing in wait_for_camera_stabilisation, only without the alignment? Is there any chance that auto-exposure priority will cause wait_for_frames() to pass empty depth frames due to slower processing? You suggest removing the spatial filter, but again, can the filters and their long processing time cause the blank depth images? Aren't wait_for_frames(), align.process() and the post-processing filters all blocking functions, which would force the script to wait for them to finish?
#9800 (comment) is similar to your approach, though in the skip mechanism they call pipe.wait_for_frames() without storing the result, and only use frameset = pipe.wait_for_frames() once the skip has completed. As you are using a powerful Xavier model of Jetson, I would not expect processing to slow down enough to cause blank depth images. The fewer filters used the better, though, as they are processed on the CPU instead of the camera hardware and so have a processing cost. Whilst it is generally recommended to place align after the filters, there are rare cases where aligning before the filters results in significantly better performance. wait_for_frames() is a blocking function; if you use poll_for_frames() instead, frames are returned immediately without blocking.
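The blocking versus non-blocking distinction can be illustrated with a short sketch (assuming an already-started pipeline; this needs a camera attached to actually run):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()

# Blocking: waits until a coherent frameset arrives, or raises on timeout.
frameset = pipeline.wait_for_frames()

# Non-blocking: returns immediately; the frameset may be empty, so check
# its size before using it.
frames = pipeline.poll_for_frames()
if frames.size() > 0:
    depth = frames.get_depth_frame()
```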
@MartyG-RealSense
Thanks very much @MartinPedersenpp for the update. I look forward to your next report. Good luck!
Unfortunately I just got empty frames again. |
If you are closing the pipeline, then all the frames currently in the pipeline at the time of closure will be lost. If you are using an append instruction anywhere in your Python project, I would recommend removing it if possible, as it can cause a RealSense application to stop providing new frames after 15 frames have been generated, as described at #946. If you are using append and it is not possible to remove it, then storing the frames in memory with the SDK's Keep() instruction can be a workaround to resolve the problem: #6146
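A sketch of the Keep() workaround mentioned above, under the assumption of a started pipeline; the `stored` list and the frame count are placeholders for the application's own storage, and a camera is required to run it:

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()

stored = []
for _ in range(100):
    frames = pipeline.wait_for_frames()
    frames.keep()           # keep this frameset alive beyond the SDK's
    stored.append(frames)   # internal frame pool, so append is safe
```

Without keep(), framesets held past the pool limit can cause the pipeline to stop delivering new frames.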
I am not using an append function anywhere. The only thing I am doing is replacing pixels in the depth image that have lower values than in the current frame, and then repeating for up to a second. Now that I am running a more stable script (fewer crashes and closures) I haven't met any empty frames for a while, but I am not sure that the problem has been solved. I am/was using one or two object detection models (TensorRT engines) which are loaded into the GPU of the Jetson on initiation. Is it possible that the post-processing performed on the GPU sometimes gets bottlenecked because of the two models? (Any inference is performed after capturing the frames, but the models are located on the GPU from the start.)
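The per-pixel replacement described above (keeping the farther of two depth readings per pixel) can be sketched in NumPy. The function name and the zero-as-invalid convention are assumptions about the poster's code, not the actual implementation; it only assumes uint16 depth images where 0 means "no data", as is typical for RealSense depth frames.

```python
import numpy as np

def merge_depth(accum, new):
    """Merge two depth images, keeping the farther (larger) reading per
    pixel, where a value of 0 means 'no data' in either image.
    (Hypothetical helper, not the poster's actual code.)"""
    out = accum.copy()
    fill = (out == 0)                  # accumulator has no data here
    out[fill] = new[fill]              # so take whatever the new frame has
    both = (out != 0) & (new != 0)     # both frames have valid data
    out[both] = np.maximum(out[both], new[both])
    return out
```

Repeating this over ~1 second of frames accumulates the farthest valid reading seen at each pixel.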
RealSense post-processing filters are processed on the CPU instead of the GPU. |
Hi @MartinPedersenpp Do you require further assistance with this case, please? Thanks! |
Case closed due to no further comments received. |
Sorry for not closing the issue myself, I have been on my holiday break, but thanks for the help.
No problem at all, @MartinPedersenpp :) |
I am running into some issues when trying to capture frames from my D435 camera. The setup of my stream looks like this:
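For context, a minimal pyrealsense2 setup of this kind could look like the following. This is a hypothetical sketch consistent with the thread (D435, depth and color at 30 FPS, alignment to color, spatial and temporal filters), not the poster's actual code; resolutions, names, and options are all assumptions, and a camera must be attached to run it.

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 30)
profile = pipeline.start(config)

# Align depth onto the color stream.
align = rs.align(rs.stream.color)

# Post-processing filters discussed in this thread.
spatial = rs.spatial_filter()
temporal = rs.temporal_filter()
temporal.set_option(rs.option.filter_smooth_alpha, 0.1)
```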
When everything is set up I have a main thread that looks like this:
On a separate daemon thread, I have an input worker that looks like this:
I use the timed while loop to smooth the depth data as much as possible by replacing any point with the data farthest away.
When I run the setup as it looks here, I sometimes get empty depth frames from the alignment, which causes my script to crash because no data is received.
I read here: #10716 that I should be doing the post-processing before splitting the frameset, and then align the data. I tried moving things around, performing the post-processing on the frameset and then aligning it before extracting the data, but when I do that I run into "RuntimeError: Error occured during execution of the processing block! See the log for more info" after processing a few frames in my timed loop.
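The filter-then-align ordering under discussion can be sketched as follows. This is a sketch, not the poster's code: it assumes filters are applied to the whole frameset via as_frameset() so the processed depth stays paired with its color frame, and it deliberately omits the decimation filter, which changes the depth resolution and is known to interact badly with alignment. A camera is required to run it.

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()

align = rs.align(rs.stream.color)
spatial = rs.spatial_filter()
temporal = rs.temporal_filter()

frames = pipeline.wait_for_frames()

# Post-process first, then align (the #10716 ordering): run the filters on
# the frameset, re-wrap the result as a frameset, then align.
frames = spatial.process(frames).as_frameset()
frames = temporal.process(frames).as_frameset()
aligned = align.process(frames)

depth = aligned.get_depth_frame()
color = aligned.get_color_frame()
```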
Any idea how I can avoid the empty depth frames or the RuntimeError?