point cloud generation- x and y coordinates not accurate #12503
Comments
Hi @swishswish123 Exporting RealSense color data to a ply pointcloud file with export_to_ply usually does not work in Python, unfortunately. The only Python export script that has been shown to successfully export color to ply is at #6194 (comment)
Thanks @MartyG-RealSense for the quick reply! So for me the problem is not getting the point cloud itself; I seem to get the point cloud with color, and the coordinate in Z is correct. My problem is that the size of my object in the X and Y directions is smaller than it should be. When you say export_to_ply doesn't usually work, what is usually the problem? From the comment you linked above it seemed they couldn't get both colours and the vertex normals, which isn't an issue I'm having.
export_to_ply usually does not export color data to ply except with that one example, but exporting depth data only to ply works fine. The reason for the color export problem is not known, unfortunately. There is an alternative export instruction called save_to_ply that can export color data, though this is known to have problems too. You are welcome to try the save_to_ply export script at #7747 (comment) to see whether it works for you. If your problem is one of incorrect scale when importing the exported ply file into another program (such as 3D modelling software like MeshLab or Blender), this can occur because the measurement scale in the importing program needs to be set to the same scale as the RealSense SDK that the ply was exported from. The default depth unit scale of the SDK for most RealSense 400 Series cameras (except the D405) is 0.001 meters, or 1 millimeter.
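For reference, here is a minimal sketch of the save_to_ply approach described above, following the pattern of the SDK's own Python export example. It assumes a 400 Series camera is connected; the stream settings and output filename are illustrative, and the script linked in #7747 should be treated as the fuller example.

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    colorized = rs.colorizer().process(frames)   # run the frameset through the colorizer, as in the SDK example

    ply = rs.save_to_ply("out.ply")
    ply.set_option(rs.save_to_ply.option_ply_binary, False)  # write ASCII ply
    ply.set_option(rs.save_to_ply.option_ply_normals, True)  # include vertex normals
    ply.process(colorized)                                   # export the frameset to out.ply
finally:
    pipeline.stop()
```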
Is there a way of getting the points and colours of the point cloud as np arrays without saving it as a .ply file?
The Python script at #4612 (comment) is an example of generating a depth and color pointcloud with pc.calculate, storing the depth and color data in separate numpy arrays and then printing the vertex coordinates and texture coordinates retrieved from those arrays.
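For convenience, a minimal sketch along the lines of the linked approach, assuming a connected 400 Series camera. The stream settings and variable names are illustrative rather than taken from the #4612 script.

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

pc = rs.pointcloud()
try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    color_frame = frames.get_color_frame()

    pc.map_to(color_frame)                # texture-map the cloud onto the color stream
    points = pc.calculate(depth_frame)    # build the pointcloud from the depth frame

    # Vertices are (x, y, z) in meters; texture coordinates are (u, v) in [0, 1].
    vertices = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
    tex_coords = np.asanyarray(points.get_texture_coordinates()).view(np.float32).reshape(-1, 2)
    print(vertices.shape, tex_coords.shape)
finally:
    pipeline.stop()
```

Since the vertices are in meters, an object's width can be estimated by taking the X extent of the vertices that belong to it, which is the measurement being compared against the known 50mm above.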
Thanks @MartyG-RealSense, that is very useful to know! However, when I tried this method of creating point clouds the same thing happens: my object of interest is over 50mm wide, but when I measure the object width in the generated pointcloud it is only 43mm.
Higher resolution can provide better accuracy. So if in the script at #4612 (comment) you change the resolutions from 640x480 to 848x480 for depth and color, does the XY measurement accuracy improve?
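In the sketch above, that change amounts to swapping the two enable_stream lines (assuming the camera in use supports an 848x480 mode):

```python
config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 848, 480, rs.format.bgr8, 30)
```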
I think that works now! Thank you so much for the help :) One final related question so I understand this correctly: when I align the pointcloud to the color stream, does that mean that the pointcloud generated is therefore relative to the RGB sensor? If I now grab an image with an aruco marker that tells me the 3D position relative to the camera, and I use the RGB image for that, would the position I obtain of the marker be the same as in the pointcloud?
Yes, when depth is aligned to color, the center-line of the RGB sensor becomes the depth origin point. During depth to color alignment, the depth field of view resizes to match the RGB sensor's field of view size. The color field of view size does not change. So a coordinate on the RGB image should correspond to the same point on the RGB-aligned pointcloud.
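A minimal sketch of that workflow, assuming a pipeline already started with depth and color streams as in the earlier snippet: depth is aligned to color, and a pixel located in the RGB image (for example an ArUco marker centre, represented here by a placeholder pixel) is deprojected with the color intrinsics into the same RGB-sensor coordinate frame as the aligned pointcloud.

```python
import pyrealsense2 as rs

# Assumes `pipeline` has been started with depth + color streams as in the earlier sketch.
align = rs.align(rs.stream.color)          # align depth frames to the color sensor's viewpoint

frames = pipeline.wait_for_frames()
aligned = align.process(frames)
aligned_depth = aligned.get_depth_frame()
color = aligned.get_color_frame()

# After alignment, pixel (u, v) in the color image and in the aligned depth frame
# refer to the same point, so a marker centre found in the RGB image can be
# deprojected with the color intrinsics into RGB-sensor coordinates.
u, v = 320, 240                                        # placeholder marker-centre pixel
intrin = color.profile.as_video_stream_profile().get_intrinsics()
depth_m = aligned_depth.get_distance(u, v)             # depth in meters at that pixel
xyz = rs.rs2_deproject_pixel_to_point(intrin, [u, v], depth_m)
print(xyz)   # should match the corresponding vertex in the RGB-aligned pointcloud
```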
Perfect, thank you so much for your help @MartyG-RealSense and for the quick responses. One final unrelated question before the issue is closed: I am struggling to get my camera working on my Mac (Monterey) and have looked through most of the relevant issues, but still haven't managed to solve it. Is there a specific issue I should continue the conversation on, or should I start a new issue?
The two main ways to install librealsense for MacOS Monterey are to use the guide at https://lightbuzz.com/realsense-macos/ or to perform a brew installation.
Hi @swishswish123 Do you require further assistance with this case, please? Thanks!
Apologies for the late reply @MartyG-RealSense, the initial case is resolved :) However, I still can't manage to get realsense working on new versions of Mac. Should I post on another related issue, a new one, or continue the conversation here? Thanks so much for the help!
You are very welcome! As far as I am aware, issues with installing librealsense on MacOS Ventura and Sonoma have not yet been resolved, though it is possible to get it working on Monterey. The two main ways to install librealsense for MacOS Monterey are to use the guide at the lightbuzz link above, or to perform a brew installation. The brew method is compatible with Ventura and Sonoma.
Hi @enkaoua Do you require further assistance with this case, please? Thanks!
Unfortunately those methods don't work for me...
pipeline.start(config)
Hi @swishswish123 If you are using brew then you are a MacOS user, yes? Which MacOS version are you using? For example, Monterey can work with RealSense cameras but Ventura has problems that are yet to be resolved. Using a USB C-to-C cable instead of the official USB Type-C (A-to-C) cable can also cause problems on Mac.
Yes, I'm a MacOS user and on Monterey. I've managed to get it working on an old Mac with macOS Big Sur using pyrealsense2-macosx, but on my M1 it's giving the above connection error. I am using a USB-A cable with the official Apple adapter to USB-C; is that okay? My Mac doesn't have USB-A ports.
Adaptors can be problematic, but it is difficult to avoid using them with Macs because of how common it is for them to have only USB-C ports. I note that neither the Lightbuzz source-code install guide nor the brew method worked for you. Does the Viewer still crash if you launch it whilst the camera is already plugged in?
So the viewer only crashes when I plug the camera in.
Which version of the RealSense Viewer has been installed with brew? You can find this without inserting the camera by looking at the version number on top of the Viewer window.
V2.54.2
Does your camera have firmware driver version 5.15.1.0 installed? This is the recommended firmware for 2.54.2. You can check the firmware version in the Viewer on your Big Sur Mac by clicking the Info button near the top of the Viewer's options side-panel.
Ah it seems I had v5.13.0.50. I've now updated it and checked again but it still crashes when I connect the camera. In case it's of any use, this is the log I see in the terminal when I connect the camera:
02/01 14:24:48,929 INFO [0x104d70580] (rs.cpp:2697) Framebuffer size changed to 2116 x 1374
The log message 'Found 1 RealSense devices' indicates that the camera was initially detected when inserted. But then it is unable to be accessed. The subsequent message containing the darwin_usb.c file reference has appeared in a couple of other Monterey Mac cases in the past, suggesting that it is a Mac-specific issue. One RealSense user suggested running the Viewer from the terminal with sudo, as described at #9916 (comment)
What happens when you click OK to close the box? Is the Stereo Module able to be started when clicking on the red 'off' icon?
Unfortunately I can't get to the stereo module, as when I click OK the box just comes back, even if I press "don't show this error again".
I guess that helps give us more information about the error. The script fails in the line
It sounds as though the reset code attempted to reset the camera by disconnecting it and then re-connecting it but then was unable to find it when trying to re-connect after disconnection. When the Viewer or your program is run after the computer is started up, does it always work the first time? There are some cases where a program will work the first time it is run but not the second time.
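For what it's worth, a reset of that kind can also be triggered manually from Python with the standard hardware_reset call, which is sometimes useful for diagnosis. This is only a sketch, not the Viewer's internal reset code.

```python
import time
import pyrealsense2 as rs

ctx = rs.context()
devices = list(ctx.query_devices())        # device_list is iterable
if not devices:
    print("No RealSense device detected")
else:
    dev = devices[0]
    print("Resetting", dev.get_info(rs.camera_info.name))
    dev.hardware_reset()                   # power-cycles the camera over USB
    time.sleep(3)                          # give it a few seconds to re-enumerate
    print("Devices after reset:", len(list(rs.context().query_devices())))
```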
Yes that makes sense as when I run the viewer it takes a few times before it works, giving me the power state error. The number of times I have to run it before it works is unpredictable. Do you know why this happens?
It sounds more like a hardware issue than a program issue. It could be that the brand of USB controller chosen by your computer's manufacturer to control the USB ports may not get along well with your RealSense camera. A way to test this would be to plug the camera into a USB hub instead of directly into the computer's USB port if you are not using a hub already. A hub uses its own brand of USB controller for its ports instead of the one used by the computer's ports. You could also test whether reliability improves if you unplug the camera from the port, wait a couple of seconds and then re-insert it. Doing so would have the same effect as resetting the camera.
I can't seem to get it to work 🙁 I tried with a different USB hub than I was using (I was previously using the official Apple one). Unplugging the camera for a few seconds and plugging it back in seemed to work for the realsense-viewer for a bit, but now it's gone back to being random...
#11815 seems to be relevant to the issues that you are having regarding Monterey, the need for sudo and 'Failed to set power state' errors.
Hi @swishswish123 Was the information in #11815 helpful to your problem, please? Thanks!
Thanks @MartyG-RealSense, sorry for taking so long to reply, I've not yet had the chance to test it out but I will try to do it this week and get back to you.
It's no problem at all. I look forward to your next report. Good luck!
I have the same issue. I still can't get the viewer to work on my Mac running Sonoma 14.2.1 (I haven't tried pyrealsense2-macosx). I'm on v2.54.2 with the latest firmware version, which I flashed using a Windows desktop. I can launch the viewer normally without sudo, but as soon as I plug in the camera it crashes. If I launch the viewer with: no sudo, usb-c to usb-c cable:
If I launch the viewer with: sudo, usb-c to usb-c cable:
If I launch the viewer with: no sudo, usb-a to usb-c cable w/converter:
If I launch the viewer with: sudo, usb-a to usb-c cable w/converter:
You were discussing the cable, but it seems like it doesn't have an effect.
@HasanTheSyrian There was a similar problem at #12307 (comment) with realsense-viewer when using sudo. One RealSense Mac user suggested dealing with the 'Failed to set power state' error by using
Unfortunately did not work.
I tried what jackjansen said in #11815 of trying different ports, and at the moment it seems like the port furthest from the screen, on the side of the power supply, works. I am using a USB-C to USB-A cable, then the Apple USB-A to USB-C adaptor, and connect that to my Mac. When I say works, I mean my Python script AND the realsense-viewer seem to be running properly 🥳 So @HasanTheSyrian maybe you can try that way? You should also try inserting the USB-C connector both ways round, as to me it seems to make a difference which way I plug in the Apple adaptor. (For the realsense-viewer this is of course after doing ...) Let's hope this doesn't stop working like last time! Thanks so much @MartyG-RealSense for the help :)
I ordered a USB-C to USB-A adapter; I'll report back when it comes.
Thanks so much @swishswish123 for your advice to @HasanTheSyrian :)
@MartyG-RealSense By the way, if I install Ubuntu [1,2] on my Mac (yes, there is a native M1 Linux kernel with reverse-engineered drivers and everything) would it work considering it's ARM? This could be a solution for others as well since a lot of us (I think :3) also like Linux.
I cannot recall a past case in which Ubuntu has been used with RealSense on a Mac. It was more usual for Windows to be used via MacOS' Boot Camp Assistant utility. Boot Camp does not work with Apple Silicon Macs, but my understanding is that it is possible to install multiple OS on M1 / M2 Macs and access them via the Start-Up options by holding down the power button. librealsense can be compiled on Ubuntu on Arm devices (such as Raspberry Pi, Nvidia Jetson and other brands of Arm-based single board computers). I do not have an example of its installation and use on Ubuntu on Mac though. Looking at the details of Asahi, it seems like a very good match for M1 and M2 Macs. librealsense can also be built from source code on non-Ubuntu flavors of Linux if compiled in RSUSB = true mode, as RSUSB is not dependent on Linux versions or kernel versions and does not need to have a patch applied to the kernel.
The USB dongle didn't work... but I installed Asahi Linux and... It works!!! I obviously had to build it from scratch so it wasn't very straightforward (only some missing packages), but IT WORKS! I still need to test out Pyrealsense, the SDK, examples, etc. (which compiled successfully). Also, you are right, Asahi Linux is a good match. The amount of work they did is BONKERS. They reverse-engineered firmware for almost all of the Macs' hardware in a very, very short amount of time.
Hi @HasanTheSyrian It's great to hear that you were successful with Asahi Linux! This should be a useful reference for other Mac M1 / M2 users.
I was thinking of adding the steps to the repository, but thought I probably can't because it's not officially supported(?)
RealSense users do still submit Mac-related updates as Pull Requests, although it is rare. https://github.com/IntelRealSense/librealsense/pulls?q=is%3Apr+is%3Aopen+mac
Okay, I'll add the installation steps to the documentation as a PR soon.
Hi @HasanTheSyrian Have you had the opportunity to submit a PR yet? Thanks!
@MartyG-RealSense I'm still experimenting with Asahi, I haven't forgotten. I'll make a draft within the next 1-3 weeks.
@HasanTheSyrian Thanks very much!
Hi @HasanTheSyrian Did you submit a PR about this case since your last comment on it in February 2024, please? Thanks!
@MartyG-RealSense Yes! Long time ago.
Thanks very much for the confirmation. I will add an Enhancement label to this case to affirm that it should be kept open whilst your PR is active. Thanks again!
Issue Description
I am generating a point cloud of an object with known dimensions and known distance from the camera. When I check the coordinates, the z distance seems correct, but the dimensions in the x and y directions are always smaller than they should be.
This is the method I use as part of a class to generate the point cloud from the images:
Would appreciate any help!
Thanks so much