D435 Align High CPU #5440
Comments
You may be able to reduce the load on the CPU if you offload some of the processing onto your computer's GPU. If you have an Nvidia GPU then you can build Librealsense with CUDA support. If your GPU is not Nvidia, you could alternatively offload to the GPU with a GLSL processing block, which is vendor-neutral. Details of both methods can be found in the link below. If your recording sessions are short (about 10 seconds) then another option might be to use an instruction called Keep() to store the frames in memory, do the alignment on the frames all at once as a batch operation when the stream is closed, and save the results. The requirement for a short recording duration is because the Keep() process consumes the computer's available memory resources over time.
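For reference, here is a minimal C++ sketch of the Keep() approach described above. It assumes a short capture that fits in RAM; the frame count and stream settings are illustrative, not part of the original comment:

```cpp
#include <librealsense2/rs.hpp>
#include <vector>

int main()
{
    rs2::pipeline pipe;
    pipe.start();

    // Collect roughly 10 seconds of framesets (assuming 30 FPS) without
    // processing them; keep() stops the SDK from recycling the frame memory.
    std::vector<rs2::frameset> frames;
    for (int i = 0; i < 300; ++i)
    {
        rs2::frameset fs = pipe.wait_for_frames();
        fs.keep();
        frames.push_back(fs);
    }
    pipe.stop();

    // Align depth to color in one batch after streaming has finished,
    // so the per-frame CPU cost is not paid while the camera is live.
    rs2::align align_to_color(RS2_STREAM_COLOR);
    for (auto& fs : frames)
    {
        rs2::frameset aligned = align_to_color.process(fs);
        // ... process / save aligned.get_depth_frame() and aligned.get_color_frame()
    }
    return 0;
}
```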
Hi MartyG-Realsense, my GPU is an Intel UHD Graphics 620, so I would have to use the GLSL processing block. Does the C# wrapper implement the GLSL processing block? My C++ knowledge is very poor. About the recording session: the camera will be running and processing continually (24x7), so apart from the fact that CPU consumption is important, I can't use Keep().
I had a look at the GLSL example program for adapting a project for GLSL use, but my programming knowledge isn't sufficiently advanced to offer advice about how convertible it is to C#. Hopefully one of the Intel RealSense staff on this forum can help with this. https://github.com/dorodnic/librealsense/tree/glsl_extension/examples/gl
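For completeness, a rough C++ sketch of what the GLSL path looks like, modeled on the rs-gl example linked above. It assumes librealsense was built with its GLSL extension enabled and that a GLFW OpenGL context already exists; the exact init calls may differ between SDK releases, and this is C++ rather than C#, so treat it as an outline rather than a drop-in solution:

```cpp
#include <librealsense2/rs.hpp>
#include <librealsense2-gl/rs_processing_gl.hpp>   // GLSL processing blocks
#include <GLFW/glfw3.h>

void run_glsl_align(GLFWwindow* gl_context)   // an already-created OpenGL context
{
    // Route supported processing blocks through GLSL instead of the CPU.
    rs2::gl::init_processing(gl_context, true);

    rs2::pipeline pipe;
    pipe.start();

    // GPU-backed counterpart of rs2::align
    rs2::gl::align align_to_color(RS2_STREAM_COLOR);

    while (true)
    {
        rs2::frameset fs = pipe.wait_for_frames();
        rs2::frameset aligned = align_to_color.process(fs);
        // ... consume the aligned depth and color frames here
    }
}
```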
@amatabuena I am not sure how you are using it. But if there is a possibility of merging the final results of the colour and depth image processing (both non-aligned), by say a simple overlap of (x, y) coordinates or a minimum-Euclidean-distance approach, then you can just map the coordinates obtained from the colour image processing into the coordinate system of the depth image, or vice versa. For around 100 different pixels, this takes less than a millisecond.
Hi @amatabuena |
Did anyone try this method?
Hi @ankittecholution, @kafan1986. To put you in context, I process the color image to detect faces and then search for a series of characteristics in exactly the same region of the depth image. So really I would only need (more or less) the coordinates of the bounding box generated for the detected face. How can I calculate the matching of color and depth points using the C# wrapper?
Yes, for my own use case I have done the same. There is no ready-made solution, at least not in the Android wrapper. One needs to call the built-in function from one's own code, which on Android I do via JNI. I can post the code if you want; it is already present in the C++ code of the librealsense SDK.
Hi @kafan1986, it would be nice if you could give me an example. I'm not used to working with C++ and it's a bit difficult for me to understand. Thanks.
I store the extrinsic and intrinsic camera parameters of the depth and colour frames. Do this after any decimation filter step. This will not only let you map from one coordinate system to the other, it will also work even if the resolutions of the colour image and depth image are different.
The code below converts a depth coordinate to a colour coordinate. The z value is expected to be in meters by default.
The function below converts a colour coordinate to a depth coordinate. You need to pass the actual depth frame to the function for this to work, so if you are doing this at a later step, clone that particular depth frame.
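The snippets referred to above were not preserved in this thread; what follows is a sketch of the same mapping built on the helpers in librealsense's `<librealsense2/rsutil.h>` (C++, not the Android/JNI version). The function names and parameters such as `depthMinMeters` are illustrative, so check the argument order against the `rsutil.h` shipped with your SDK version:

```cpp
#include <librealsense2/rs.hpp>
#include <librealsense2/rsutil.h>
#include <cstdint>

// Grab the calibration data once, after any decimation filtering, e.g.:
//   auto dp = depth_frame.get_profile().as<rs2::video_stream_profile>();
//   auto cp = color_frame.get_profile().as<rs2::video_stream_profile>();
//   rs2_intrinsics depth_intrin = dp.get_intrinsics();
//   rs2_intrinsics color_intrin = cp.get_intrinsics();
//   rs2_extrinsics depth_to_color = dp.get_extrinsics_to(cp);
//   rs2_extrinsics color_to_depth = cp.get_extrinsics_to(dp);

// Depth pixel -> colour pixel. depth_m is the depth at that pixel, in meters.
void depth_pixel_to_color_pixel(const float depth_pixel[2], float depth_m,
                                const rs2_intrinsics& depth_intrin,
                                const rs2_intrinsics& color_intrin,
                                const rs2_extrinsics& depth_to_color,
                                float color_pixel[2])
{
    float depth_point[3], color_point[3];
    rs2_deproject_pixel_to_point(depth_point, &depth_intrin, depth_pixel, depth_m);
    rs2_transform_point_to_point(color_point, &depth_to_color, depth_point);
    rs2_project_point_to_pixel(color_pixel, &color_intrin, color_point);
}

// Colour pixel -> depth pixel. The raw depth frame is needed because the depth
// at the target pixel is unknown; the SDK searches [depthMinMeters, depthMaxMeters].
void color_pixel_to_depth_pixel(const float color_pixel[2],
                                const rs2::depth_frame& depth_frame,
                                float depth_scale,   // e.g. rs2::depth_sensor::get_depth_scale()
                                float depthMinMeters, float depthMaxMeters,
                                const rs2_intrinsics& depth_intrin,
                                const rs2_intrinsics& color_intrin,
                                const rs2_extrinsics& depth_to_color,
                                const rs2_extrinsics& color_to_depth,
                                float depth_pixel[2])
{
    rs2_project_color_pixel_to_depth_pixel(
        depth_pixel,
        static_cast<const uint16_t*>(depth_frame.get_data()),
        depth_scale, depthMinMeters, depthMaxMeters,
        &depth_intrin, &color_intrin,
        &color_to_depth, &depth_to_color,
        color_pixel);
}
```

For a use case like the face-detection one above, only a handful of pixels per frame need to be projected this way, which is why it is so much cheaper than aligning the full frames.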
Thanks for the code. I understood the code and how to do it. I have only one doubt: what are these parameters and how do I get them?
It is the depth range in which it searches for the best match. According to the SDK, you can use a suitable value of around 0.1 meters for depthMinMeters and 10 meters for depthMaxMeters.
Thanks, @kafan1986, for your help. I am trying to make an Android application with this where I can find the depth of a frame corresponding to the colour frame. If you have any information about how to get a bitmap from the colour frame (other than converting the GLRenderer output into a Bitmap), that would be great.
Thanks @ankittecholution, I will try it as soon as I can. I will have to find out how to create the methods you mentioned in C++ and how to call them from the C# wrapper.
All the best. You generally need only one of the two functions: either you want everything represented in the depth coordinate system or in the colour coordinate system. I use the colour coordinate system, so I don't need the depth frame.
I believe you have 2 questions:
Hi @amatabuena Will you be needing further assistance with this? Please note that if we don’t hear from you in 7 days, this issue will be closed. Thank you |
Hi, I couldn't find an optimal solution, so I had to increase the hardware specifications in order to ensure performance. (The cost increase is another matter.) Thanks.
@amatabuena Pairing the RealSense with hardware acceleration from an Intel Neural Compute Stick 2 in another USB port may be a less costly method to implement your project. The Stick is sold in the official RealSense online store. https://store.intelrealsense.com/buy-intel-neural-compute-stick-2.html https://www.intelrealsense.com/depth-camera-and-ncs2/ https://github.com/movidius/ncappzoo/tree/master/apps/realsense_object_distance_detection |
Hi @amatabuena Do you need any further assistance with your alignment/CPU-usage question, or would you say we can close this issue? Thanks
Hi. Will you be needing further help with this? If we don’t hear from you in 7 days, this issue will be closed. Thanks |
Hi, I won't need more help with this issue. Thanks.
Thanks! :) |
Issue Description
Hi,
I'm trying to use the D435 with the latest production SDK, and I have realized that CPU usage is quite high (almost 2x) when frame alignment is done. This is roughly the CPU usage:
This usage is on my laptop, an Intel Core i5-6300U, 2.40 GHz, x64 architecture, with 8 GB RAM.
As I need alignment in order to process both color and depth frames together, is there any way to reduce this consumption, or to implement some other kind of alignment?
Thanks,