
train on PointPillars and PointRCNN #11

Open
TimGor1997 opened this issue Feb 14, 2024 · 14 comments

Comments

@TimGor1997

Dear author,
May I ask whether only the 500/125 training samples are used when retraining PointPillars and PointRCNN with 500 frames / 125 frames, or the 500/125 annotated samples plus the remaining 3712-500/125 unannotated samples?

@Cliu2 (Owner) commented Feb 15, 2024

Hi,

Thanks for the question. We use the 500/125 human-annotated samples, plus the remaining 3712-500/125 samples (annotated by the MTrans auto-labeler), to train PointPillars/PointRCNN from scratch. So in total there are 500/125 human annotations plus 3712-500/125 neural-network-generated pseudo labels for training.
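
For concreteness, here is a minimal sketch of how such a mixed label set could be assembled; this is not the authors' script, and all paths, split files, and directory names below are hypothetical:

```python
# Minimal sketch (hypothetical paths/splits): build a label directory that mixes
# the 500 human-annotated frames with MTrans pseudo labels for the rest.
import shutil
from pathlib import Path

HUMAN_LABELS = Path("kitti/training/label_2")         # original human annotations
PSEUDO_LABELS = Path("mtrans_output/pseudo_label_2")  # MTrans-generated labels
MIXED_LABELS = Path("kitti/training/label_2_mixed")   # labels used to retrain the detector
ANNOTATED_IDS = Path("splits/train_500.txt")          # ids of the 500 human-annotated frames
TRAIN_IDS = Path("splits/train.txt")                  # all 3712 train-split frame ids

MIXED_LABELS.mkdir(parents=True, exist_ok=True)
annotated = set(ANNOTATED_IDS.read_text().split())

for frame_id in TRAIN_IDS.read_text().split():
    src = (HUMAN_LABELS if frame_id in annotated else PSEUDO_LABELS) / f"{frame_id}.txt"
    if src.exists():  # some frames have no pseudo label; this is discussed later in the thread
        shutil.copy(src, MIXED_LABELS / f"{frame_id}.txt")
```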

@TimGor1997 (Author)

Thank you for your reply~
But I have a question: how are the PointPillars and PointRCNN results obtained with only 500/125 frames, without the remaining 3712-500/125 samples?
[screenshot of the results table attached]

@Cliu2 (Owner) commented Feb 15, 2024

Those rows are the results for PointPillars/PointRCNN trained with 500/125 human annotations only, no pseudo labels. We use the "500f"/"125f" to denote how many human annotations are required for these experiments. Please also check Sec. 5.2 paragraph 1 for details.

@TimGor1997 (Author)

> Those rows are the results for PointPillars/PointRCNN trained with 500/125 human annotations only, no pseudo labels. We use the "500f"/"125f" to denote how many human annotations are required for these experiments. Please also check Sec. 5.2 paragraph 1 for details.

Thank you so much for your great patience and huge help.

Could you check if my understanding is correct?
The second and third lines for PointPillars and PointRCNN are trained with only the 500/125 annotated frames, i.e., 500/125 training samples in total.
The fourth line is trained with the 500/125 human annotations + the remaining 3712-500/125 samples, i.e., 3712 training samples in total.

@Cliu2 (Owner) commented Feb 15, 2024

Yes, that is correct. Thanks.

@TimGor1997 (Author)

> Yes, that is correct. Thanks.
Thank you for all the assistance and for answering so many of my questions. Looking forward to seeing your new research work!
All the Best!

@TimGor1997 (Author)

I'm terribly sorry, but I have another question to ask you.
Why is the number of generated pseudo_labels only 3387 instead of 3769?
When you use OpenPCDet's code to compute AP3D, do you evaluate only these 3387?

@Cliu2 (Owner) commented Feb 20, 2024

We removed labels with too much truncation here. Those target objects have far too few LiDAR points to generate a good enough pseudo label and are therefore omitted, resulting in slightly fewer pseudo labels than human labels. Yes, only the 3387 are used when assessing the quality of the pseudo labels.
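
As a rough illustration of that filtering step (the actual logic and threshold are in the linked MTrans code; the 0.95 cutoff below is purely an assumption):

```python
# Sketch of a truncation filter over KITTI-format labels. The real threshold
# lives in the MTrans repo; 0.95 here is an assumed placeholder.
TRUNCATION_THRESHOLD = 0.95

def filter_truncated(label_lines):
    """Drop label lines whose truncation (2nd KITTI field, in [0, 1]) is too high."""
    kept = []
    for line in label_lines:
        fields = line.split()
        # KITTI label format: type, truncated, occluded, alpha, bbox..., dims..., loc..., ry
        if fields and float(fields[1]) <= TRUNCATION_THRESHOLD:
            kept.append(line)
    return kept
```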

@TimGor1997 (Author)

> We removed labels with too much truncation here. Those target objects have far too few LiDAR points to generate a good enough pseudo label and are therefore omitted, resulting in slightly fewer pseudo labels than human labels. Yes, only the 3387 are used when assessing the quality of the pseudo labels.

Thank you for your reply.
I found that when regenerating the training labels, a total of 3387 pseudo labels were generated; excluding the 500 manually annotated labels used for training, 2727 pseudo labels remain. When training PointPillars/PointRCNN with 500 frames, should the remaining 3712-500-2727=485 samples use manual labels or empty txt files instead?

@Cliu2 (Owner) commented Feb 20, 2024

They are simply not supervised; no loss is calculated for those objects. Empty txt files can be used.
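
A small sketch of what that could look like in practice (paths are hypothetical): create an empty label file for every train-split frame that received no pseudo label, so KITTI-style loaders still find a file per frame:

```python
# Sketch (hypothetical paths): give the ~485 uncovered frames an empty label
# file, so they contribute no objects and hence no supervision loss.
from pathlib import Path

LABEL_DIR = Path("kitti/training/label_2_mixed")
TRAIN_IDS = Path("splits/train.txt")

for frame_id in TRAIN_IDS.read_text().split():
    label_file = LABEL_DIR / f"{frame_id}.txt"
    if not label_file.exists():
        label_file.touch()  # empty txt -> zero objects for this frame
```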

@TimGor1997 (Author)

[two screenshots of reproduced results attached]

@Cliu2 (Owner) commented Feb 27, 2024

In our experiments, we simply replace the label_2 files with the generated pseudo labels. Although results can vary somewhat due to the randomness of different environments, such a large gap is still strange. Have you checked the mIoU of the generated pseudo labels against the ground-truth labels? Does it match the paper?
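
For a quick sanity check along those lines, one could compute a rough BEV mIoU between the pseudo and ground-truth labels. The sketch below is not the paper's metric: it uses axis-aligned BEV boxes (rotation_y is ignored) and greedy same-class matching, and all paths are hypothetical:

```python
# Crude pseudo-label quality check (NOT the paper's metric): for each pseudo box,
# take the best axis-aligned BEV IoU against same-class ground-truth boxes.
from pathlib import Path

def parse_boxes(path):
    """Return (cls, x, z, l, w) BEV footprints from a KITTI label file."""
    boxes = []
    for line in Path(path).read_text().splitlines():
        f = line.split()
        if not f or f[0] == "DontCare":
            continue
        h, w, l = map(float, f[8:11])    # KITTI dims: height, width, length
        x, y, z = map(float, f[11:14])   # KITTI location in camera coordinates
        boxes.append((f[0], x, z, l, w))
    return boxes

def bev_iou(a, b):
    """Axis-aligned BEV IoU; ignores rotation_y, so this is only approximate."""
    _, ax, az, al, aw = a
    _, bx, bz, bl, bw = b
    ix = max(0.0, min(ax + al / 2, bx + bl / 2) - max(ax - al / 2, bx - bl / 2))
    iz = max(0.0, min(az + aw / 2, bz + bw / 2) - max(az - aw / 2, bz - bw / 2))
    inter = ix * iz
    union = al * aw + bl * bw - inter
    return inter / union if union > 0 else 0.0

ious = []
for pseudo_file in sorted(Path("mtrans_output/pseudo_label_2").glob("*.txt")):
    gt_file = Path("kitti/training/label_2") / pseudo_file.name
    if not gt_file.exists():
        continue
    gt_boxes = parse_boxes(gt_file)
    for p in parse_boxes(pseudo_file):
        same_class = [bev_iou(p, g) for g in gt_boxes if g[0] == p[0]]
        if same_class:
            ious.append(max(same_class))

print(f"approx. BEV mIoU over {len(ious)} matched boxes:",
      sum(ious) / len(ious) if ious else float("nan"))
```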
