
Difference in depth value when calculated using get_distance() vs multiplying depth pixel value with depth scale #8150

Closed
sandeshk1 opened this issue Jan 13, 2021 · 5 comments


@sandeshk1

Required Info
Camera Model: D435i
Firmware Version: (Open RealSense Viewer --> Click info)
Operating System & Version: Ubuntu 18.04
Kernel Version (Linux Only): 5.4.0-58-generic
Platform: core i7
SDK Version: RealSense SDK 2.0
Language: C++
Segment: Vision

Issue Description

Please find the code snippet below:

    // Ingest depth frame
    rs2::frameset data = pipe.wait_for_frames();
    rs2::depth_frame depth = data.get_depth_frame();

    // Query frame size (width and height)
    const int w = depth.get_width();
    const int h = depth.get_height();

    // Query depth distance using get_distance()
    float depth_distance = depth.get_distance(100, 100);

    // Query depth scale
    float depth_scale = depth.get_units();

    std::cout << "depth distance: " << depth_distance << "\n";
    std::cout << "depth_scale: " << depth_scale << "\n";

    // Create OpenCV matrix of size (w,h) from the depth data for some processing
    Mat image(Size(w, h), CV_16U, (void*)depth.get_data(), Mat::AUTO_STEP);

    // Use the OpenCV matrix to get the depth information
    uint16_t* fr_data = (uint16_t*)image.data;
    uint16_t pixel = fr_data[100,100];
    float meters = pixel * depth_scale;
    std::cout << "depth distance from converted cv Mat frame:  " << meters << "\n";

I obtained the depth distance from the depth frame for pixel coordinates (100, 100). Then I created a cv Mat out of the depth frame's byte data and tried to get the depth distance by multiplying the depth scale by the depth pixel value of the converted cv Mat depth frame.

I am noticing a difference between the two calculated depth distances. Why is this so?

Please find the output logs below:
depth distance: 3.056
depth_scale: 0.001
depth distance from converted cv Mat frame: 2.35
depth distance: 3.07
depth_scale: 0.001
depth distance from converted cv Mat frame: 2.318
depth distance: 3.14
depth_scale: 0.001
depth distance from converted cv Mat frame: 2.279
depth distance: 3.14
depth_scale: 0.001
depth distance from converted cv Mat frame: 2.358
depth distance: 3.042
depth_scale: 0.001
depth distance from converted cv Mat frame: 2.264

@sandeshk1
Author

I see that even if one doesn't convert the depth frame to a cv Mat and instead calculates the distance directly by multiplying the depth scale by the depth frame's pixel value, there is still a variation in the distance measurement.

Code snippet:

    uint16_t* dp_data = (uint16_t*)depth.get_data();
    uint16_t dp_pixel = dp_data[100,100];
    float dp_meters = dp_pixel * depth_scale;
    std::cout << "depth distance from depth frame: " << dp_meters << "\n";

Output snippet:

depth distance: 3.155
depth_scale: 0.001
depth distance from depth frame: 2.295
depth distance from converted cv Mat frame: 2.295

@ev-mp
Collaborator

ev-mp commented Jan 13, 2021

@sandeshk1, the issue is within these lines:

    uint16_t* dp_data = (uint16_t*)depth.get_data();
    uint16_t dp_pixel = dp_data[100,100];

Casting depth.get_data() gives a pointer to a one-dimensional buffer, but you are then indexing it as if it were a two-dimensional array.
The row width is never taken into account, so dp_data[100,100] does not address the pixel you intend; in C++ the comma operator makes it equivalent to dp_data[100].
You should probably use

    uint16_t* dp_data = (uint16_t*)depth.get_data();
    uint16_t dp_pixel = dp_data[100 * w + 100];

With OpenCV the matrix dimensions are specified explicitly, but since the outcome matches the plain C++ result, I'd suggest checking that access again as well: fr_data[100,100] on the raw image.data pointer has the same problem.
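
For reference, here is a minimal sketch of the corrected access (assuming the same image, w, and depth_scale variables from the first snippet):

    // Correct 1-D access into the underlying buffer: row * width + column
    uint16_t* fr_data = (uint16_t*)image.data;
    uint16_t pixel = fr_data[100 * w + 100];
    std::cout << "meters (raw buffer): " << pixel * depth_scale << "\n";

    // Equivalent access using OpenCV's own indexing; note the at<>(row, col) order
    uint16_t pixel_cv = image.at<uint16_t>(100, 100);
    std::cout << "meters (cv::Mat::at): " << pixel_cv * depth_scale << "\n";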

@sandeshk1
Author

@ev-mp, thanks a lot for the quick reply.

My bad for treating the 1-D buffer as though it were a 2-D array.

With your recommendation I was able to get the correct distance value.

So basically the index into the 1-D array should be "pixel's y-coordinate * width + pixel's x-coordinate", is that right? Could you explain this a bit more for my understanding?

@ev-mp
Collaborator

ev-mp commented Jan 13, 2021

@sandeshk1, that is correct.
In float depth_distance = depth.get_distance(x, y);, (x, y) stands for (column, row).
The depth data is laid out row by row, so the index of the corresponding pixel in the 1-D buffer is y * width + x.
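
To make the mapping concrete, here is a minimal sketch (assuming the depth, w, and depth_scale variables from the earlier snippets); the two values should agree:

    // Depth data is stored row-major, so pixel (x, y) sits at index y * w + x
    int x = 100, y = 100;
    const uint16_t* dp_data = (const uint16_t*)depth.get_data();
    float manual_meters = dp_data[y * w + x] * depth_scale;
    float api_meters = depth.get_distance(x, y);  // get_distance applies the depth units internally
    std::cout << "manual: " << manual_meters << ", get_distance: " << api_meters << "\n";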

@sandeshk1
Author

@ev-mp Thanks for the explanation.

Closing this thread as the solution was provided.
