Jaccard fluctuates seriously during training #18
Comments
@haochenheheda How did you do the backpropagation during training (backprop after all the frames are processed, or backprop through time)? How many samples did you choose from each video? How did you calculate the loss (based on the final soft-aggregated result, or on the output logits)?
@ryancll Hi. 1. After all the frames. 2. For each iteration, I randomly choose three frames from a random video (online). 3. I have tried both, and it doesn't seem to make any difference.
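(For readers reproducing this: below is a minimal sketch of the sampling-and-backprop scheme described above, assuming a PyTorch-style STM model. `model.memorize`, `model.segment`, `model.update_memory`, and the frame objects are hypothetical placeholders, not the authors' actual API.)

```python
import random
import torch

# Hypothetical training step: sample three frames from one random video,
# propagate masks frame by frame, accumulate the loss on the logits, and
# backprop once after all frames have been processed.
def train_step(model, videos, criterion, optimizer):
    video = random.choice(videos)                      # one random video
    idx = sorted(random.sample(range(len(video)), 3))  # three ordered frames
    frames = [video[i] for i in idx]

    memory = model.memorize(frames[0].image, frames[0].mask)  # first frame uses GT mask
    loss = 0.0
    for frame in frames[1:]:
        logits = model.segment(frame.image, memory)
        loss = loss + criterion(logits, frame.mask)    # loss on output logits
        # add the predicted mask to memory (online propagation)
        memory = model.update_memory(memory, frame.image, logits)

    optimizer.zero_grad()
    loss.backward()   # single backward pass "after all the frames"
    optimizer.step()
```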
@haochenheheda Thank you! I ran into the same problem you described, especially during fine-tuning.
Hi, @ryancll. Our model is not too sensitive to the number of objects, since the per-object predictions are combined only at the last step. We simply iterate over the videos in the dataset regardless of the number of objects.
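(The combination step referred to here is the soft aggregation of per-object probability maps. A rough sketch of one common formulation is below; it is an illustration based on the description in the STM paper, and the authors' exact implementation may differ.)

```python
import torch

def soft_aggregate(probs, eps=1e-7):
    """probs: (num_objects, H, W) independently predicted foreground
    probabilities. Returns (num_objects + 1, H, W) with background at
    index 0, normalized across objects."""
    bg = torch.prod(1.0 - probs, dim=0, keepdim=True)   # shared background
    cat = torch.cat([bg, probs], dim=0).clamp(eps, 1.0 - eps)
    logits = torch.log(cat / (1.0 - cat))               # logit transform
    return torch.softmax(logits, dim=0)                 # softmax over objects
```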
@seoungwugoh, thanks for your help! I'll try heavy augmentation.
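(One possible reading of "heavy augmentation" for synthesizing training clips from images is sketched below; the parameter values are illustrative guesses, not the authors' settings.)

```python
import torchvision.transforms as T

# Illustrative "heavy" augmentation pipeline; values are guesses, not the
# authors' settings. In practice the same geometric transform must be
# applied to the mask as well, e.g. by sampling the affine parameters once
# and applying them to both image and mask via
# torchvision.transforms.functional.
heavy_aug = T.Compose([
    T.RandomAffine(degrees=30, translate=(0.2, 0.2),
                   scale=(0.75, 1.25), shear=10),
    T.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3),
])
```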
Hi, thanks for sharing this great work. I have been working on reproducing STM for 2 months, and finally got a Jaccard of 77 on DAVIS-17-val.
I found that during training (both pre-training and fine-tuning), the Jaccard on the val set fluctuates severely. For example, J reaches 70 at iteration 1000, quickly drops to 60 at iteration 1100, and then rises back to 70 at iteration 1200.
The batch size is set to 4 and the optimizer is Adam with an lr of 1e-5, following the setting proposed in the paper. I have tried a larger batch size and a smaller lr, but neither helped. I'd appreciate it if you could help me with this.
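(For completeness, the J score discussed throughout this thread is the region-similarity measure from the DAVIS benchmark: the intersection-over-union between predicted and ground-truth masks, averaged over objects and frames. A minimal sketch for a single mask pair:)

```python
import numpy as np

def jaccard(pred, gt):
    """pred, gt: boolean masks of the same shape."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0
```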