Is it possible to obtain the depth stream from the D415 camera already converted into the RGB camera's coordinate system at the source? I.e. similar to what the "align" function does, but without the computing-resource investment on the host.
Thank you
Not as far as I know. The Vision Processor D4 hardware component inside the camera performs operations such as image rectification, described in the brochure image in the link below, and then the data travels through the USB cable to the computing device.
If freeing up CPU resources is a concern for you, then you could try offloading the CPU work onto a GPU. On devices with an Nvidia GPU, this can be done by building librealsense with CUDA support.
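As a sketch, a CUDA-enabled build of librealsense from source might look like the following. The `BUILD_WITH_CUDA` CMake flag is part of librealsense's documented build options; the directory layout and `-j4` job count are just illustrative:

```shell
# Clone librealsense and configure an out-of-source build.
git clone https://github.com/IntelRealSense/librealsense.git
cd librealsense
mkdir build && cd build

# BUILD_WITH_CUDA=true moves operations such as align and
# colorizer/pointcloud conversions onto the Nvidia GPU.
cmake .. -DBUILD_WITH_CUDA=true -DCMAKE_BUILD_TYPE=Release
make -j4 && sudo make install
```

With this build, the same `rs2::align` code path runs its heavy lifting on the GPU with no application-side changes.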
For non-Nvidia GPUs, librealsense alternatively offers "GLSL Processing Blocks", though this method may be ineffective on low-power devices.
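To illustrate the two options side by side, here is a minimal sketch of host-side depth-to-color alignment. It assumes the standard `rs2::align` processing block and, for the GLSL route, its GPU counterpart `rs2::gl::align` from the optional `librealsense2-gl` library; the GL variant additionally requires an active OpenGL context and a call to `rs2::gl::init_processing`, and the whole program needs a connected RealSense camera to run:

```cpp
#include <librealsense2/rs.hpp>
// GLSL processing blocks live in the optional librealsense2-gl library,
// enabled at build time with the BUILD_GLSL_EXTENSIONS CMake flag.
#include <librealsense2-gl/rs_processing_gl.hpp>

int main() {
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_DEPTH);
    cfg.enable_stream(RS2_STREAM_COLOR);
    pipe.start(cfg);

    // CPU version: alignment computed on the host processor.
    rs2::align align_cpu(RS2_STREAM_COLOR);

    // GPU version: same processing-block interface, but the work runs
    // in GLSL shaders. An OpenGL context (e.g. from GLFW) must be
    // current and rs2::gl::init_processing() called before use.
    // rs2::gl::align align_gpu(RS2_STREAM_COLOR);

    rs2::frameset frames = pipe.wait_for_frames();
    rs2::frameset aligned = align_cpu.process(frames);
    rs2::depth_frame depth = aligned.get_depth_frame();
    // depth pixels are now expressed in the RGB camera's coordinate frame
    return 0;
}
```

Because both blocks expose the same `process()` interface, switching between CPU and GLSL alignment is a one-line change in application code.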