
D435i outputs mapped Raw16 point cloud ERROR #11543

Closed
weisterki opened this issue Mar 8, 2023 · 31 comments

Comments

@weisterki


Required Info
Camera Model: D435i
Firmware Version: 05.14.00.00
Platform: Windows
SDK Version: 2.53.1
Language: Python
Segment: VR/AR

Issue Description

I want to capture a point cloud with a Raw16 stream via the D435i. I changed the camera configuration to:
config.enable_stream(rs.stream.color, 1920, 1080, rs.format.raw16, 30)
and converted the Raw16 stream to a demosaiced RGB stream in the capture loop:
imgRGB = cv2.cvtColor(imgRAW, cv2.COLOR_BAYER_GBRG2RGB)
In the viewer I get the stream I want:
[screenshot]
However, when I save the point cloud, the colour of the point cloud is incorrect:
[screenshot]
That's weird, because the demosaiced stream should be the same as the RGB8 stream when saving; the demosaiced stream is formatted as uint8. I am looking for the cause of this problem, and I would appreciate any advice!
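For reference, a minimal sketch of the capture path described above; the pipeline setup names and the GBRG Bayer pattern are my assumptions, not confirmed for this sensor:

import pyrealsense2 as rs
import numpy as np
import cv2

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.color, 1920, 1080, rs.format.raw16, 30)
pipeline.start(config)

frames = pipeline.wait_for_frames()
raw_frame = frames.get_color_frame()
imgRAW = np.asanyarray(raw_frame.get_data())               # 16-bit Bayer mosaic
imgRAW8 = np.uint8(imgRAW / 256.0)                         # scale to 8-bit
imgRGB = cv2.cvtColor(imgRAW8, cv2.COLOR_BAYER_GBRG2RGB)   # demosaic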

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Mar 8, 2023

Hi @weisterki Which method are you using to save the pointcloud, please? Is it a .ply pointcloud file or an image file such as .png?

If it is a .ply and you are exporting it with Python code, then .ply color export is unfortunately known to be problematic in Python, with almost no solutions working. If you are exporting a .ply and you are okay with a color .ply without normals, then the export_to_ply based Python script at #6194 (comment) may work for you.
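As a point of reference, the basic export_to_ply flow in Python looks roughly like this (a sketch only, assuming a started `pipeline` with depth and color streams enabled):

import pyrealsense2 as rs

pc = rs.pointcloud()
frames = pipeline.wait_for_frames()
depth_frame = frames.get_depth_frame()
color_frame = frames.get_color_frame()
pc.map_to(color_frame)                  # texture the cloud with the color frame
points = pc.calculate(depth_frame)      # compute vertices + texture coordinates
points.export_to_ply("out.ply", color_frame)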

@weisterki
Author

Yes, I want a .ply point cloud file, and I am indeed exporting with export_to_ply, like:
points.export_to_ply('./output/out'+'%d.ply'%filenum, mapped_frame)
I found that the problem may not be in the export code but in the mapping code. I paste my code here:

while True:

    rawframes = pipeline.wait_for_frames()
    depth_frame = rawframes.get_depth_frame()

    raw_frame = rawframes.get_color_frame()

    depth_frame = decimate.process(depth_frame)  # Process the depth data

    depth_intrinsics = rs.video_stream_profile(
        depth_frame.profile).get_intrinsics()
    w, h = depth_intrinsics.width, depth_intrinsics.height

    depth_image = np.asanyarray(depth_frame.get_data())  # Frame data to array
    imgRAW = np.asanyarray(raw_frame.get_data())

    depth_colormap = np.asanyarray(
        colorizer.colorize(depth_frame).get_data())

    if state.color:
        imgRAW = np.uint8(imgRAW / 256.0)
        imgRGB = cv2.cvtColor(imgRAW, cv2.COLOR_BAYER_GBRG2RGB)
        mapped_frame, color_source = raw_frame, imgRGB
    else:
        mapped_frame, color_source = depth_frame, depth_colormap

    points = pc.calculate(depth_frame)
    pc.map_to(mapped_frame)

    v, t = points.get_vertices(), points.get_texture_coordinates()
    verts = np.asanyarray(v).view(np.float32).reshape(-1, 3)  # xyz (3 cols)
    texcoords = np.asanyarray(t).view(np.float32).reshape(-1, 2)  # uv (2 cols)

    if not state.scale or out.shape[:2] == (h, w):
        pointcloud(out, verts, texcoords, color_source)
    else:
        tmp = np.zeros((h, w, 3), dtype=np.uint8)
        pointcloud(tmp, verts, texcoords, color_source)
        tmp = cv2.resize(
            tmp, out.shape[:2][::-1], interpolation=cv2.INTER_NEAREST)
        np.putmask(out, tmp > 0, tmp)

    cv2.setWindowTitle(
        state.WIN_NAME, "My RealSense Controller: (%dx%d) %dFPS (%.2fms) %s " %
        (w, h, 1.0/dt, dt*1000, "PAUSED" if state.paused else ""))

    cv2.imshow(state.WIN_NAME, out)
    key = cv2.waitKey(1)

    if key == ord("e"):
        file = "./output"
        if not os.path.exists(file):
            os.mkdir(file)
        filenum = filenum + 1
        points.export_to_ply('./output/out' + '%d.ply' % filenum, mapped_frame)
        print("Output Successfully!")

For the viewer, the raw frame is first converted to a demosaiced frame by
imgRGB = cv2.cvtColor(imgRAW, cv2.COLOR_BAYER_GBRG2RGB),
then drawn into the viewer window by the function pointcloud(out, verts, texcoords, color_source).

But saving the point cloud to a PLY file does not go through that path. The raw frame is mapped to the point cloud by pc.map_to(mapped_frame),
and the mapped frame is not affected by the demosaicing code.

So here is the question: mapped_frame is actually raw_frame, and raw_frame carries its colour information in a data buffer. We can read the colour data with raw_frame.get_data(), but we cannot write data back into the frame.
How could I change the colour data of the frame? If I could, I could assign raw_frame.data to any data I want, and therefore
save a point cloud mapped with raw colours.

@MartyG-RealSense
Collaborator

How about defining two separate numpy arrays, such as imgRAW1 and imgRAW2, and storing the same RAW data in both arrays? For example:

imgRAW1 = np.asanyarray(raw_frame.get_data())
imgRAW2 = np.asanyarray(raw_frame.get_data())

Then if the data stored in imgRAW1 gets altered by imgRAW1 = np.uint8(imgRAW1 / 256.0), you should still have the original data in imgRAW2 to go back to.
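One caveat worth noting (my observation, not from the thread): np.asanyarray(raw_frame.get_data()) wraps the frame's buffer without copying, so both names alias the same memory. Rebinding with imgRAW1 = np.uint8(imgRAW1 / 256.0) is safe because it builds a new array, but any in-place edit would show up in both; an explicit copy decouples them:

imgRAW1 = np.asanyarray(raw_frame.get_data()).copy()  # independent copy
imgRAW2 = np.asanyarray(raw_frame.get_data())         # view over the frame buffer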

@weisterki
Author

> How about defining two separate numpy arrays, such as imgRAW1 and imgRAW2, and storing the same RAW data in both arrays?

I mean that if I want to save the point cloud, I should first map it using pc.map_to(), and the argument of that function must be of rs2::frame type; the PLY-saving code export_to_ply('./output/out'+'%d.ply'%filenum, mapped_frame) also needs a frame as input.
Now I want to change the colour data of mapped_frame, so I convert the Bayer Raw16 data imgRAW = raw_frame.get_data() to RGB with imgRGB = cv2.cvtColor(imgRAW, cv2.COLOR_BAYER_GBRG2RGB); I should then pack imgRGB back into a frame so that I can map it to the point cloud with pc.map_to(). I don't know how to return the processed RGB data to a frame.

@MartyG-RealSense
Collaborator

I wonder whether you could reverse the order of the formats in the cvtColor instruction.

imgRGB = cv2.cvtColor(cv2.COLOR_BAYER_GBRG2RGB, imgRAW)

@weisterki
Author

But no matter how you change the data of imgRGB, you cannot pack it into the frame format, and therefore you cannot map it to the point cloud with pc.map_to() or save it as a .ply file.

@weisterki
Author

> I wonder whether you could reverse the order of the formats in the cvtColor instruction.

As I emphasized earlier, pc.map_to() can only map frame-type data to the point cloud, not RGB array data. So I must
pack the RGB data into a frame before mapping or saving. I tried mapped_frame = raw_frame.set_data(imgRGB), but there is no such function.
Where can I find the source code of pc.map_to() and export_to_ply? Or is there another way to do this?

@MartyG-RealSense
Collaborator

The section of src/points.cpp linked to below may be relevant source code for export_to_ply.

void points::export_to_ply( const std::string & fname, const frame_holder & texture )
{
    auto stream_profile = get_stream().get();
    auto video_stream_profile = dynamic_cast< video_stream_profile_interface * >( stream_profile );
    if( ! video_stream_profile )
        throw librealsense::invalid_value_exception( "stream must be video stream" );
    const auto vertices = get_vertices();
    const auto texcoords = get_texture_coordinates();
    std::vector< float3 > new_vertices;
    std::vector< std::tuple< uint8_t, uint8_t, uint8_t > > new_tex;
    std::map< int, int > index2reducedIndex;
    new_vertices.reserve( get_vertex_count() );
    new_tex.reserve( get_vertex_count() );
    assert( get_vertex_count() );
    for( int i = 0; i < get_vertex_count(); ++i )
        if( fabs( vertices[i].x ) >= MIN_DISTANCE || fabs( vertices[i].y ) >= MIN_DISTANCE
            || fabs( vertices[i].z ) >= MIN_DISTANCE )
        {
            index2reducedIndex[i] = (int)new_vertices.size();
            new_vertices.push_back( { vertices[i].x, -1 * vertices[i].y, -1 * vertices[i].z } );
            if( texture )
            {
                auto color = get_texcolor( texture, texcoords[i].x, texcoords[i].y );
                new_tex.push_back( color );
            }
        }

There is not a clear source-code location for pc.map_to, though the pyrs_processing.cpp pyrealsense2 wrapper file has a reference.

.def("map_to", &rs2::pointcloud::map_to, "Map the point cloud to the given color frame.", "mapped"_a);

@weisterki
Author

> There is not a clear source-code location for pc.map_to, though the pyrs_processing.cpp pyrealsense2 wrapper file has a reference.

map_to is a member of the rs2::pointcloud class; is there a pointcloud source file in the rs2 folder?

@MartyG-RealSense
Collaborator

There is src/proc/pointcloud.cpp

https://github.com/IntelRealSense/librealsense/blob/master/src/proc/pointcloud.cpp

@weisterki
Author

> There is src/proc/pointcloud.cpp

It's difficult to solve this from the source code. Could you give any advice on mapping an image to the point cloud?
I just need the points and their corresponding RGB values.

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Mar 9, 2023

In #2948 and #6239, RealSense team members provide advice about finding the correspondence between an RGB coordinate and a depth coordinate.

It may be easier, though, to drop RAW16 images and just use the standard processed RGB stream with map_to.
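For completeness, the usual pyrealsense2 route for a single depth-pixel to colour-pixel correspondence looks something like the sketch below; the pixel coordinates px, py are placeholders, and the frame names are assumed from the surrounding code:

import pyrealsense2 as rs

depth_intrin = depth_frame.profile.as_video_stream_profile().intrinsics
color_intrin = raw_frame.profile.as_video_stream_profile().intrinsics
extrin = depth_frame.profile.get_extrinsics_to(raw_frame.profile)

depth_m = depth_frame.get_distance(px, py)                          # metres at (px, py)
point = rs.rs2_deproject_pixel_to_point(depth_intrin, [px, py], depth_m)
point_c = rs.rs2_transform_point_to_point(extrin, point)
cx, cy = rs.rs2_project_point_to_pixel(color_intrin, point_c)       # colour pixel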

@weisterki
Author

> In #2948 and #6239 RealSense team members provide advice about finding the correspondence between an RGB coordinate and depth coordinate.

Ok, I will try that. By the way, could you please explain the function get_texture_coordinates()?

float2 * points::get_texture_coordinates()
{
    get_frame_data();  // call GetData to ensure data is in main memory
    auto xyz = (float3 *)data.data();
    auto ijs = (float2 *)( xyz + get_vertex_count() );
    return ijs;
}

In the code at #11543 (comment), get_texture_coordinates() seems to generate coordinates, which are then passed to the pointcloud function to be transformed into UV coordinates:

def pointcloud(out, verts, texcoords, color, painter=True):

    if painter:
        # Painter's algo, sort points from back to front

        # get reverse sorted indices by z (in view-space)
        # https://gist.github.com/stevenvo/e3dad127598842459b68
        v = view(verts)
        s = v[:, 2].argsort()[::-1]
        proj = project(v[s])
    else:
        proj = project(view(verts))

    if state.scale:
        proj *= 0.5**state.decimate

    h, w = out.shape[:2]

    # proj now contains 2d image coordinates
    j, i = proj.astype(np.uint32).T  # transposition

    # create a mask to ignore out-of-bound indices
    im = (i >= 0) & (i < h)
    jm = (j >= 0) & (j < w)
    m = im & jm

    cw, ch = color.shape[:2][::-1]  # [::-1] reverses (h, w) to (w, h)
    if painter:
        # sort texcoords with the same indices as above
        # texcoords are [0..1] and relative to top-left pixel corner,
        # multiply by size and add 0.5 to center (texture coordinate)
        v, u = (texcoords[s] * (cw, ch) + 0.5).astype(np.uint32).T
    else:
        v, u = (texcoords * (cw, ch) + 0.5).astype(np.uint32).T
    # clip texcoords to image (limit to 0..ch-1 / 0..cw-1)
    np.clip(u, 0, ch - 1, out=u)
    np.clip(v, 0, cw - 1, out=v)

    # temp = np.full((color.shape[0], color.shape[1], 3), -1)
    # temp[:, :, 0] = color
    # temp[:, :, 1] = color
    # temp[:, :, 2] = color
    # color = temp

    # perform uv-mapping
    out[i[m], j[m]] = color[u[m], v[m]]

Can I use get_texture_coordinates() to get the 3D coordinates of each pixel in the colour image? Or could I do an inverse UV mapping to map the image UV coordinates back to 3D?

@MartyG-RealSense
Collaborator

Another RealSense team member provides a detailed explanation about get_texture_coordinates() in relation to map_to at #1231 (comment)

@weisterki
Author

> Another RealSense team member provides a detailed explanation about get_texture_coordinates() in relation to map_to at #1231 (comment)

That issue indeed gives information about get_texture_coordinates(), but I still don't know how to map UV coordinates back to RGB.

The problem of saving a Raw16-format point cloud may be a bug in both the RealSense SDK and the Viewer: I suspect they treat the Raw16-format Bayer image just like a Y16 grayscale image. That is incorrect, because the pixel values of a Bayer-pattern image encode R, G and B values, which would explain the chaotic colours in the saved point cloud.

@MartyG-RealSense
Collaborator

Is #6234 (comment) helpful for mapping UV coordinates to the color frame?

@weisterki
Author

> Is #6234 (comment) helpful for mapping UV coordinates to the color frame?

I used the code in #6234 (comment), but I don't know what colorLocation is, and I found it is sometimes negative. I thought it was an index into the image. Assume I have a 1920×1080 image (shape: (1080, 1920)); where is the pixel at colorLocation = 2500? Is it at Image[1][579]?

@MartyG-RealSense
Collaborator

I'm not familiar with UV color programming or the colorLocation instruction. At #1429, though, a RealSense team member provides the following advice.

When you calculate the location of the color pixel, note that each pixel is N bytes wide (3 for RGB) when you access its raw data via a char* pointer:

unsigned char r = colorData[colorLocation * size_of_the_reference_pixel];

So unsigned char has a value of [3] for RGB, which is split into a 3-part array:

unsigned char color[3] = {colorStream[colorLocation], colorStream[colorLocation + 1], colorStream[colorLocation + 2]};

where the first colorLocation is the first part, colorLocation + 1 is the second part and colorLocation + 2 is the third part.

The value of colorLocation is given by int colorLocation = y * color_intrinsic.width + x;

where the values of y and x are calculated from:

int x = tex_coords[i].u * color_intrinsic.width;
int y = tex_coords[i].v * color_intrinsic.height;
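In numpy the same arithmetic collapses to plain 2-D indexing, since the image is stored as an (H, W, 3) array rather than a flat byte buffer. A hedged sketch of the equivalent lookup for one vertex i, assuming texcoords is normalised to [0, 1] and color_source is the demosaiced (H, W, 3) uint8 image:

# no byte offsets needed: the channel axis replaces colorLocation arithmetic
x = int(texcoords[i][0] * color_intrinsic.width)
y = int(texcoords[i][1] * color_intrinsic.height)
r, g, b = color_source[y, x]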

@weisterki
Author

weisterki commented Mar 12, 2023

> When you calculate the location of the color pixel, note that each pixel is N bytes wide (3 for RGB) when you access its raw data via a char* pointer [...]

I use Python, so my code is:

while True:

    if not state.paused:
        rawframes = pipeline.wait_for_frames()
        depth_frame = rawframes.get_depth_frame()
        raw_frame = rawframes.get_color_frame()

        depth_intrinsics = rs.video_stream_profile(
            depth_frame.profile).get_intrinsics()
        w, h = depth_intrinsics.width, depth_intrinsics.height
        depth_image = np.asanyarray(depth_frame.get_data())  # Frame data to array
        imgRAW = np.asanyarray(raw_frame.get_data())
        depth_colormap = np.asanyarray(
            colorizer.colorize(depth_frame).get_data())
        imgRAW = np.uint8(imgRAW / 256.0)
        imgRGB = cv2.cvtColor(imgRAW, cv2.COLOR_BAYER_GBRG2RGB)
        mapped_frame, color_source = raw_frame, imgRGB

        points = pc.calculate(depth_frame)
        pc.map_to(mapped_frame)

        # Pointcloud data to arrays
        v, t = points.get_vertices(), points.get_texture_coordinates()
        verts = np.asanyarray(v).view(np.float32).reshape(-1, 3)  # xyz (3 cols)
        texcoords = np.asanyarray(t).view(np.float32).reshape(-1, 2)  # uv (2 cols)

        # Retrieve colour from UV
        color_intrinsic = mapped_frame.profile.as_video_stream_profile().intrinsics
        r = [0]
        g = [0]
        b = [0]
        for i in range(0, verts.shape[0]):
            x = texcoords[i][0] * color_intrinsic.width
            y = texcoords[i][1] * color_intrinsic.height
            colorLocation = y * color_intrinsic.width + x
            r[i] = color_source[colorLocation]
            g[i] = color_source[colorLocation + 1]
            b[i] = color_source[colorLocation + 2]

But it throws an ERROR, because Python does not work like C++ pointers: it does not store the image as a flat 3-part byte buffer but as a 3D array. Therefore I think I cannot simply use r[i] = color_source[colorLocation], g[i] = color_source[colorLocation + 1], b[i] = color_source[colorLocation + 2] as indices. The color_source in my code is a 1920×1080 image with three channels.
I think the sample you gave me can only be used in C++; is there a Python instance?

@weisterki
Author

By the way, my UV coordinates are all 0, as in #1429:
[screenshot]
But I used the right order, according to the answer at #1429 (comment).

@MartyG-RealSense
Collaborator

#6556 suggests using texcoords in Python in the following way.

pc.map_to(mapped_frame)
points = pc.calculate(depth_frame)

# Pointcloud data to arrays
v, t = points.get_vertices(), points.get_texture_coordinates()

verts = np.asanyarray(v).view(np.float32).reshape(-1, 3)  # xyz
texcoords = np.asanyarray(t).view(np.float32).reshape(-1, 2)  # uv

@weisterki
Author

> pc.map_to(mapped_frame)

I've already used that in my code (see #11543 (comment)), but I want the Python code for going from UV back to RGB after getting the texcoords. Any suggestion about my question in #11543 (comment)? Thanks!

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Mar 13, 2023

I researched your question extensively but could not find a specific Python example of UV being converted back to RGB after the texcoords are obtained. However, a RealSense user who knows more about this particular subject than I do once offered the following advice:

"The texture coordinates return uv coordinates mapping to the color image with the coordinates normalised to 0-1. So to get from uv to xy coordinates (i.e pixel coordinates in the color image) you have to multiply the u and v values by the color image width and height".

https://community.intel.com/t5/Items-with-no-label/How-to-project-points-to-pixel-Librealsense-SDK-2-0/m-p/479194#M4774
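In Python that multiplication can be done vectorised for every vertex at once; a sketch under the assumption that texcoords is the (N, 2) array obtained earlier and color_source is the (H, W, 3) demosaiced image:

import numpy as np

u = (texcoords[:, 0] * color_intrinsic.width).astype(np.int32)
v = (texcoords[:, 1] * color_intrinsic.height).astype(np.int32)
np.clip(u, 0, color_intrinsic.width - 1, out=u)    # clamp out-of-FOV hits
np.clip(v, 0, color_intrinsic.height - 1, out=v)
rgb = color_source[v, u]                           # (N, 3), one colour per vertex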

@weisterki
Author

weisterki commented Mar 16, 2023

> "The texture coordinates return uv coordinates mapping to the color image with the coordinates normalised to 0-1."

According to the reply at https://community.intel.com/t5/Items-with-no-label/How-to-project-points-to-pixel-Librealsense-SDK-2-0/m-p/479194#M4774, the texture coordinates return uv coordinates mapping into the colour image, normalised to 0-1. That means the values from get_texture_coordinates() should be in [0, 1]. However, in my tests the values can be higher than 1, or negative.
[screenshot]
I believe the x and y calculated by
x = texcoords[i][0] * color_intrinsic.width
y = texcoords[i][1] * color_intrinsic.height
give the location of the corresponding pixel in the colour image. As I said, the UV coordinates are not in [0, 1], so the calculated locations will not be in [0, 1919] or [0, 1079]. So the questions are:

  1. How should I deal with the out-of-range values? Why are the UV coordinates not in [0, 1]?
  2. I checked some of the related answers again, and I cannot find the C++ demo mentioned at Pyrealsense2: Question about 3D reconstruction problem #1231 (comment). Do you know which one they were talking about?

@MartyG-RealSense
Collaborator

  1. As mentioned earlier, I do not have much knowledge of the subject of UV programming. Further advice is provided though at Pyrealsense2: Question about 3D reconstruction problem #1231 (comment)

  2. The demo that they are referring to is the C++ rs-pointcloud example.

https://github.com/IntelRealSense/librealsense/tree/master/examples/pointcloud

@weisterki
Author

> 2. The demo that they are referring to is the C++ rs-pointcloud example.

Here is my solution for saving a PLY point cloud with raw-stream RGB (plyfile is needed):

rawframes = pipeline.wait_for_frames()
depth_frame = rawframes.get_depth_frame()
raw_frame = rawframes.get_color_frame()
imgRAW = np.asanyarray(raw_frame.get_data())
if state.color:
    imgRAW = np.uint8(imgRAW / 256.0)
    imgRGB = cv2.cvtColor(imgRAW, cv2.COLOR_BAYER_GBRG2RGB)
    mapped_frame, color_source = raw_frame, imgRGB
points = pc.calculate(depth_frame)
pc.map_to(mapped_frame)
depth_frame = decimate.process(depth_frame)
v, t = points.get_vertices(), points.get_texture_coordinates()
verts = np.asanyarray(v).view(np.float32).reshape(-1, 3)  # xyz (3 cols)
texcoords = np.asanyarray(t).view(np.float32).reshape(-1, 2)  # uv (2 cols)
key = cv2.waitKey(1)
if key == ord("x"):
    color_intrinsic = mapped_frame.profile.as_video_stream_profile().intrinsics
    rgb = np.zeros((verts.shape[0], 3))
    state.paused = True
    for i in range(0, verts.shape[0]):
        x = int(texcoords[i][0] * color_intrinsic.width)   # 1920
        y = int(texcoords[i][1] * color_intrinsic.height)  # 1080
        # clamp uv locations that fall outside the colour image
        if x < 0:
            x = 0
        if x > color_intrinsic.width - 1:
            x = color_intrinsic.width - 1
        if y < 0:
            y = 0
        if y > color_intrinsic.height - 1:
            y = color_intrinsic.height - 1
        rgb[i, :] = color_source[y, x, :]
    rgb = rgb.astype(int)
    points = np.zeros((verts.shape[0]), dtype=[('x', 'f'), ('y', 'f'), ('z', 'f'),
                                               ('red', 'u1'), ('green', 'u1'), ('blue', 'u1')])
    for i in range(0, verts.shape[0]):
        x, y, z = verts[i, 0], verts[i, 1], verts[i, 2]
        r = np.uint8(rgb[i, 2])
        g = np.uint8(rgb[i, 1])
        b = np.uint8(rgb[i, 0])
        points[i] = np.array([(x, y, z, r, g, b)],
                             dtype=[('x', 'f'), ('y', 'f'), ('z', 'f'),
                                    ('red', 'u1'), ('green', 'u1'), ('blue', 'u1')])
    print(points.shape)
    outfile = "./output/MyRAWout.ply"
    write_ply(outfile, points)

Hope this is useful to anyone who needs it. I clamp the uv locations that fall outside the image boundary, since I believe they are caused by the depth camera's FOV being larger than the RGB camera's FOV.
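If anyone prefers to stay within plyfile's documented API rather than a custom write_ply helper, the structured points array above can be written directly (a sketch, assuming the same dtype as in the code):

from plyfile import PlyData, PlyElement

el = PlyElement.describe(points, 'vertex')      # wrap the structured array
PlyData([el], text=True).write("./output/MyRAWout.ply")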

@MartyG-RealSense
Collaborator

Thanks so much for sharing your code with the RealSense community!

@weisterki
Author

> Thanks so much for sharing your code with the RealSense community!

Thanks, Marty, and I also hope Intel can fix this bug in the SDK and RealSense Viewer :-)

@MartyG-RealSense
Collaborator

Hi @weisterki As you have achieved a solution, do you require further assistance with this case, please? Thanks!

@weisterki
Author

> Hi @weisterki As you have achieved a solution, do you require further assistance with this case, please? Thanks!

Thanks, Marty. You can close this issue.

@MartyG-RealSense
Collaborator

Thanks very much for the confirmation!
