Need help with custom object training #186
Comments
These results are highly encouraging; the centroid is well detected. In your training data, do you think the camera field of view is similar to what you are using? Did you try to move away from the robot? It looks like your training data has the robot somewhat far away. Also, you should train with only a single instance. And since the robot is always going to be on the ground, you could make the poses not as random. I would say 1 or 2 more iterations of data generation; for DOPE I probably did 10-15 iterations of data generation. I am sorry the 3060 is not delivering. I think epoch 20 or 30 should be enough to give you a good idea of whether it is working.
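As a rough illustration of the "less random poses" idea for a ground robot, one could sample only a yaw rotation and a position on the ground plane when placing the object in the synthetic scene. This is only a sketch with assumed NVISII calls and placeholder names/ranges, not the setup actually used in this thread:

```python
import math
import random
import nvisii  # assuming the NVISII Python package used for data generation

def sample_ground_pose(entity_name="robot", ground_z=0.0):
    """Place the object upright on the ground with a random yaw, instead of a
    fully random 6-DoF pose (entity name and ranges are placeholders)."""
    obj = nvisii.entity.get(entity_name)
    # Random position on the ground plane, reasonably close to the camera.
    x = random.uniform(-1.0, 1.0)
    y = random.uniform(-1.0, 1.0)
    obj.get_transform().set_position(nvisii.vec3(x, y, ground_z))
    # Rotate only around the vertical axis, so the robot never goes upside down.
    yaw = random.uniform(0.0, 2.0 * math.pi)
    obj.get_transform().set_rotation(nvisii.angleAxis(yaw, nvisii.vec3(0, 0, 1)))
```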
Can you share your 3D model? Did you try with NVISII, or did you use NDDS?
Looking at it again, it seems like there is a symmetry in your model, e.g., the left and right sides look similar. Is that correct?
It's not fully colored because my engineering team doesn't need fully colored models, so I added colors and also some materials using NVISII.
Yeah, I would just limit the view of the object to one of its sides, and not let it go upside down. Also, you can randomly color the robot; we did that in the robot pose estimation work and it helped. Also make the robot appear closer to the camera. The testing image you shared looks pretty good to me.
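As a quick illustration of the random coloring idea, something along these lines could be done per frame in NVISII (the entity name and value ranges are placeholders, not the setup used here):

```python
import random
import nvisii

# Randomize the robot's base material each frame so the network cannot
# latch onto a specific color or surface finish (entity name is a placeholder).
mat = nvisii.entity.get("robot").get_material()
mat.set_base_color(nvisii.vec3(random.random(), random.random(), random.random()))
mat.set_roughness(random.uniform(0.1, 0.9))
```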
Here's a script that will modify your existing dataset so that it only shows the object from one side:
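(The attached script is not reproduced above. As a rough idea of that kind of filtering, one could drop frames where the camera sees the "wrong" side of the object. The sketch below assumes NDDS/NVISII-style per-frame JSON with an `objects` list containing `class`, `quaternion_xyzw`, and `location` fields; those names and the paths are assumptions.)

```python
import glob
import json
import os

import numpy as np
from scipy.spatial.transform import Rotation

def keep_frame(json_path, obj_class="robot"):
    """Return True if the object's local +X side faces the camera (illustrative heuristic)."""
    with open(json_path) as f:
        frame = json.load(f)
    for obj in frame.get("objects", []):
        if obj_class not in obj.get("class", ""):
            continue
        # Object orientation and position in the camera frame (assumed field names).
        R = Rotation.from_quat(obj["quaternion_xyzw"]).as_matrix()
        side_axis = R[:, 0]  # object's local +X axis in camera coordinates
        to_camera = -np.array(obj["location"], dtype=float)  # vector from object back to camera
        to_camera /= np.linalg.norm(to_camera)
        return float(np.dot(side_axis, to_camera)) > 0.0
    return False

# Delete frames that show the other side (file layout is a placeholder).
for json_file in glob.glob("output/dataset/*.json"):
    if not keep_frame(json_file):
        os.remove(json_file)
        os.remove(json_file.replace(".json", ".png"))
```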
So, @TontonTremblay, I have trained the network as you said and I got quite good results. But in some cases it still can't detect the object. It seems like it really likes the sides of the AGV.
There is a thing on top of your robot (sorry, I just saw this); you should model it. But the results are quite good. Good work :P
Yeah, I know, but I don't need to detect it, only the base and its center, because the robots are going to have different modules on top, and training for that every time would be really time-consuming and not practical.
I am not saying to add it to the pose estimation, but if you had it in the training data, the results would be more stable. But overall, are you happy with the results?
I think it's quite good (of course it could be better :D), but for a network trained basically on synthetic data, I am quite impressed. Of course, further training will be done if the cost of implementation isn't too big, because even on my laptop with a GTX 1050 it's struggling to run; an NVIDIA Jetson NX would probably be required. But everything depends on the people above me :D.
Hello @LTU-Eimantas.
I have been using a dome light: https://github.com/owl-project/NVISII/blob/master/examples/17.materials_visii_interactive.py#L13-L14. I have downloaded a pretty large set from https://polyhaven.com/hdris; this is what I used in https://arxiv.org/abs/2105.13962. I hope this helps. Otherwise, you can use the segmentation mask to do copy and paste onto normal images.
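For reference, a minimal sketch of a dome-light setup in NVISII along the lines of the linked example (the HDRI path is a placeholder, and exact call names may vary between NVISII versions):

```python
import nvisii

nvisii.initialize(headless=True)

# Use an HDRI (e.g. downloaded from polyhaven.com) as the dome light so the
# synthetic scenes get varied, realistic lighting and backgrounds.
dome_tex = nvisii.texture.create_from_file("dome_tex", "hdris/some_environment_4k.hdr")
nvisii.set_dome_light_texture(dome_tex)
nvisii.set_dome_light_intensity(1.0)
```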
Thank you for helping me again. I'll check this method.
Once I am back from vacation, I will try to put a script together to share here.
Here is the script I promised: https://github.com/NVlabs/Deep_Object_Pose/tree/master/scripts/nvisii_data_gen. It is pretty bare-bones. Feel free to send PRs.
Thanks for sharing this work. I haven't had a chance to try this script yet, as I was looking into the loss term; I'll post there if any progress is made.
Hello, I really like the work you have done! But I need help with my model. This is my model:
[model images attached]
So I trained my model with 90k images, plus 10k for testing, all made with NVISII, and ran it for 60 epochs. It was painfully slow and took about 7 days. I got these tensor images from the training algorithm:
[training belief-map images attached]
I have also run the save option and got these annotations.
So after 60 epochs, I got a loss of about 0.01-0.009. After running the inference, I got these tensor results.
But if I try to run it on the real object, I get nothing, and this tensor map:
[belief-map image attached]
So do you have suggestions on how to improve it? It seems like some tensors are good, but others don't make sense. Part of me says it should be fixed if I feed in more data and train more, but that is really time-consuming on my RTX 3060 12 GB, so I thought I might get a second opinion.
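(If it helps with debugging, here is a minimal sketch of overlaying belief maps on the input image to see which keypoints fire; the `[N, h, w]` layout, names, and shapes below are assumptions rather than the actual DOPE output format:)

```python
import matplotlib.pyplot as plt
import numpy as np

def show_belief_maps(image, belief_maps):
    """Overlay each belief map on the input image.
    `image` is an HxWx3 array in [0, 1]; `belief_maps` is an [N, h, w] array (assumed layout)."""
    n = belief_maps.shape[0]
    fig, axes = plt.subplots(1, n, figsize=(3 * n, 3))
    for i, ax in enumerate(np.atleast_1d(axes)):
        ax.imshow(image)
        # Stretch the (usually lower-resolution) belief map over the image as a heat overlay.
        ax.imshow(belief_maps[i], cmap="jet", alpha=0.5,
                  extent=(0, image.shape[1], image.shape[0], 0))
        ax.set_title(f"belief {i}")
        ax.axis("off")
    plt.tight_layout()
    plt.show()
```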