Merging with vision landing project #1
Vision Landing users/devs: Please check this: […]
Hi @kripper, thank you.
I did not do much in the frontend; it just packs the data into a MAVLink packet. It is located at https://github.com/chobitsfan/mavlink-udp-proxy/tree/new_main. In the latest commit I moved to https://github.com/chobitsfan/libcamera-apps/tree/pr_apriltag instead, because Raspberry Pi moved from V4L2 to libcamera.
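Roughly, the packing looks like this (a minimal sketch using the generated MAVLink C headers; the frame and type constants are assumptions for illustration, check the actual code for the exact fields):

```c
#include <stddef.h>
#include <stdint.h>
#include <common/mavlink.h>  /* generated MAVLink C headers */

/* Pack a camera-frame AprilTag pose into a LANDING_TARGET message and
 * serialize it into buf, ready to be sent over the UDP socket. */
size_t pack_landing_target(uint64_t time_usec, float x, float y, float z,
                           const float q[4], uint8_t buf[MAVLINK_MAX_PACKET_LEN])
{
    mavlink_message_t msg;
    mavlink_msg_landing_target_pack(
        1, MAV_COMP_ID_ONBOARD_COMPUTER, &msg,
        time_usec,
        0,                               /* target_num: first/only target */
        MAV_FRAME_BODY_FRD,              /* frame (assumption) */
        0.0f, 0.0f,                      /* angle_x/angle_y: unused when x/y/z are set */
        0.0f, 0.0f, 0.0f,                /* distance, size_x, size_y: unused here */
        x, y, z,                         /* target position relative to the camera */
        q,                               /* orientation quaternion */
        LANDING_TARGET_TYPE_VISION_FIDUCIAL,
        1);                              /* position_valid */
    return mavlink_msg_to_send_buffer(buf, &msg);  /* bytes to send */
}
```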
Ok, I'll take a look. I just finished moving your code into […]. I'm testing with my OpenGL simulator, which generates and sends the video to […]. Here is a preview of two […].
Any particular reason to use this tag family?
I'm now struggling trying to project the (x,y,z) coordinates returned by […].
I'm specifically trying to figure out how to use the same […].
Ok, that means the flight controller receives multiple landing targets (one for each detected marker) and selects which one to use (or decides what to do with this redundant information)? I believe this filtering would be better done before sending the MAVLink messages to the FC, since at that stage we have more information about the markers and their confidence levels.
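For example, something along these lines could run on the companion computer so the FC only ever sees one filtered target (a purely hypothetical sketch; det_t and the threshold are illustrative names, not from either project):

```c
/* One detection per marker, with a confidence score (e.g. derived from
 * AprilTag's decision margin). */
typedef struct { float x, y, z; float confidence; } det_t;

/* Confidence-weighted average of all detections above a threshold.
 * Returns 0 if nothing passes, so no MAVLink message is sent that frame. */
int fuse_detections(const det_t *d, int n, float min_conf, det_t *out)
{
    float wsum = 0.0f, x = 0.0f, y = 0.0f, z = 0.0f;
    for (int i = 0; i < n; i++) {
        if (d[i].confidence < min_conf) continue;  /* drop weak detections */
        x += d[i].confidence * d[i].x;
        y += d[i].confidence * d[i].y;
        z += d[i].confidence * d[i].z;
        wsum += d[i].confidence;
    }
    if (wsum <= 0.0f) return 0;
    out->x = x / wsum; out->y = y / wsum; out->z = z / wsum;
    out->confidence = wsum;
    return 1;
}
```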
BTW, while I was coding I managed to wake up @fnoop from his hibernation process.
Hi @kripper
No. The Raspberry Pi computes the landing point based on which marker it detects. There is only one landing point; the Raspberry Pi knows the offsets from each marker to the landing point.
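The idea is roughly this (a sketch; the offset table and its values are illustrative, and R/t come from the AprilTag pose estimate):

```c
/* Each marker ID carries a known offset from its own center to the shared
 * landing point, expressed in the marker's frame. Values are placeholders. */
typedef struct { int id; float off[3]; } marker_offset_t;

static const marker_offset_t offsets[] = {
    { 0, { 0.00f,  0.00f, 0.0f } },   /* marker 0 sits on the landing point */
    { 7, { 0.25f, -0.10f, 0.0f } },   /* marker 7 is offset from it */
};

/* Landing point in camera frame = R * offset + t, where R (3x3, row-major,
 * marker->camera) and t are the marker's estimated pose. */
void landing_point(const float R[9], const float t[3],
                   const float off[3], float out[3])
{
    for (int i = 0; i < 3; i++)
        out[i] = R[3*i+0]*off[0] + R[3*i+1]*off[1] + R[3*i+2]*off[2] + t[i];
}
```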
The AprilTag dev team recommends it; see https://github.com/AprilRobotics/apriltag/wiki/AprilTag-User-Guide#choosing-a-tag-family
Oh, right. I forgot you were only working with the first detected marker. |
The focal length is used, but lens distortion is not. For the Raspberry Pi camera and my application, it is accurate enough even without lens distortion correction.
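In other words, projecting a camera-frame (x,y,z) back onto the image is just the pinhole model (a sketch; the fx/fy/cx/cy values are placeholders that come from camera calibration):

```c
/* Pinhole projection: only focal length and principal point,
 * no distortion model. */
void project(float x, float y, float z, float *u, float *v)
{
    const float fx = 1000.0f, fy = 1000.0f;   /* focal length in pixels */
    const float cx = 640.0f,  cy = 360.0f;    /* principal point */
    *u = fx * (x / z) + cx;                   /* pixel column */
    *v = fy * (y / z) + cy;                   /* pixel row */
}
```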
The merge is ready; I'm running tests before releasing. I also started addressing the latency drift problem. In our case, we will also have to implement the motion control on our own. What is your experience with latency drift? Pose estimation is never current, so whatever motion instruction you send will always carry some error. Please comment there.
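For reference, one common mitigation (a sketch of an assumed approach, not something implemented in either project) is to timestamp each frame at capture and extrapolate the latest pose forward by the measured pipeline latency with a constant-velocity guess:

```c
#include <stdint.h>

/* A pose sample: position plus the capture timestamp of its frame. */
typedef struct { float p[3]; uint64_t t_usec; } pose_t;

/* Estimate velocity from the two most recent poses and extrapolate the
 * latest one forward to "now", partially compensating pipeline latency. */
void extrapolate(const pose_t *prev, const pose_t *cur,
                 uint64_t now_usec, float out[3])
{
    float dt  = (cur->t_usec - prev->t_usec) * 1e-6f;  /* seconds between poses */
    float lat = (now_usec - cur->t_usec) * 1e-6f;      /* age of latest pose */
    for (int i = 0; i < 3; i++) {
        float v = (dt > 0.0f) ? (cur->p[i] - prev->p[i]) / dt : 0.0f;
        out[i] = cur->p[i] + v * lat;                  /* constant-velocity guess */
    }
}
```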
I published the result of the "merge" here: […]. I included your "IPC communication protocol" (the pose values you were sending to your "frontend") in […]. You might also be interested in the alternative input source "pipe-buffer", which passes raw images with less latency. See the README for more details.
@chobitsfan, could you please declare, here or in the code, the license of this code? There is a kind of public-domain declaration at the top of capture.c, but I think that might be from the V4L2 project?
Hi @fnoop. It is modified from https://www.kernel.org/doc/html/v4.11/media/uapi/v4l/capture.c.html
Hi @chobitsfan,
I'm the current maintainer of RosettaDrone, and we are looking to contribute to an open-source, vision-based precision landing project.
I reviewed your implementation and understood everything (the code is clean).
Your implementation has these pros:
Vision landing's track_targets has these pros:
I believe both projects should be merged somehow, so that a maintainer community can be built around them.
Of course, they are completely different implementations, but the final goal is exactly the same.
Before proposing a project merge strategy, I would like to see the code of the "frontend" where you process the marker data sent via IPC from the backend "capture.c".
There, you are probably doing things like filtering out detection errors, maybe generating AR images for debugging, and other work that is done in the equivalent vision landing Python script.
About the scope:
We are also interested in doing some extrapolation to:
Anyway, I believe the merged project should focus solely on returning the position of the target relative to the camera, including the error filters required to compute a robust and consistent landing target position based on all markers, maybe also taking previous computations into account (see the sketch below).
The rest I mentioned above could be implemented in the flight controller.
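To make the error-filter idea concrete, here is a hypothetical sketch of smoothing the target position across frames with simple outlier rejection (alpha and the jump threshold are placeholders, not tuned values):

```c
#include <math.h>

/* Filter state: the smoothed target position in the camera frame. */
typedef struct { float p[3]; int initialized; } target_filter_t;

void filter_update(target_filter_t *f, const float meas[3])
{
    const float alpha = 0.3f;      /* exponential smoothing factor */
    const float max_jump = 0.5f;   /* reject moves larger than 0.5 m per frame */
    if (!f->initialized) {
        for (int i = 0; i < 3; i++) f->p[i] = meas[i];
        f->initialized = 1;
        return;
    }
    float d2 = 0.0f;
    for (int i = 0; i < 3; i++) {
        float d = meas[i] - f->p[i];
        d2 += d * d;
    }
    if (sqrtf(d2) > max_jump) return;  /* likely a detection error: skip it */
    for (int i = 0; i < 3; i++)
        f->p[i] += alpha * (meas[i] - f->p[i]);
}
```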
BTW, do you know if this has been implemented?
For RosettaDrone, we will need an independent implementation (apart from the FC), so it would be ideal to use a shared library for this: a second layer, separated from both the flight controller and the target tracking/capture layer.