
Evaluation results are inconsistent during training and after saving the trained model #96

Closed
Sharifmhamza opened this issue Aug 31, 2022 · 3 comments


@Sharifmhamza

Sharifmhamza commented Aug 31, 2022

Hi MONAI Team,

First of all, thumbs up for your great work. I am facing inconsistent results with the UNETR model between the training and testing phases. During training I get good results, but when I load the saved trained UNETR model for testing, the results are very poor. I didn't change anything: I just downloaded your GitHub repo and trained the UNETR model, yet the results are inconsistent. I am attaching screenshots from a run where the model was trained for 200 epochs. Even when I train for 5K epochs and save the model, I still get poor results during testing. Please help me figure out the issue. Waiting for your response. Thank you in advance.

[Screenshot: evaluation during training]
[Screenshot: evaluation during testing]

@tangy5
Contributor

tangy5 commented Sep 2, 2022

@Sharifmhamza, thanks for trying UNETR. Is this experiment for BTCV multi-organ segmentation? If so, the evaluation Dice of ~0.27 during training is already not correct somehow (a normal training run can reach a Dice of up to 0.80), which then results in a bad testing stage as well. It might be better to check whether the data is set up correctly, especially the pre-processing. Thanks, and let us know once you have verified the data and dataloader, or we can help check the transforms.
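For reference, a minimal sketch of how one might sanity-check the validation pre-processing with MONAI transforms. The file names and the Spacingd/ScaleIntensityRanged parameters below are placeholders and should be replaced with the values actually used in the training script:

```python
# Sanity-check sketch for the validation pre-processing (placeholder values,
# not the exact BTCV recipe from the repo).
import torch
from monai.transforms import (
    Compose, LoadImaged, EnsureChannelFirstd, Orientationd,
    Spacingd, ScaleIntensityRanged,
)

val_transforms = Compose([
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),
    Orientationd(keys=["image", "label"], axcodes="RAS"),
    Spacingd(keys=["image", "label"], pixdim=(1.5, 1.5, 2.0),
             mode=("bilinear", "nearest")),
    ScaleIntensityRanged(keys=["image"], a_min=-175, a_max=250,
                         b_min=0.0, b_max=1.0, clip=True),
])

# Placeholder file names; use one image/label pair from the validation split.
sample = val_transforms({"image": "img0035.nii.gz", "label": "label0035.nii.gz"})
print(sample["image"].shape, sample["label"].shape)    # spatial sizes should match
print(sample["image"].min(), sample["image"].max())    # intensities should be in [0, 1]
print(torch.unique(torch.as_tensor(sample["label"])))  # BTCV expects labels 0..13
```

If the printed shapes, intensity range, or label values look wrong here, the problem is in the data setup rather than in the model.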

@Sharifmhamza
Author

Sharifmhamza commented Sep 12, 2022

Thank you, but I have figured it out: it is an issue with how the model is loaded during inference. It is resolved by loading the checkpoint with the state_dict specified, i.e. model.load_state_dict(model_dict["state_dict"]), which is missing in the test.py file.
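For clarity, a minimal sketch of the fix described above, assuming the checkpoint was saved as a dictionary that keeps the weights under a "state_dict" key (e.g. torch.save({"state_dict": model.state_dict(), ...}, path)). The UNETR arguments and the checkpoint path are placeholders and must match the training setup:

```python
import torch
from monai.networks.nets import UNETR

# The model must be constructed with the same arguments used during training;
# the values here (1 input channel, 14 classes, 96^3 patches) are an assumption
# for illustration, in the style of the BTCV setup.
model = UNETR(in_channels=1, out_channels=14, img_size=(96, 96, 96))

model_dict = torch.load("best_model.pt", map_location="cpu")  # placeholder path
model.load_state_dict(model_dict["state_dict"])  # select the "state_dict" key, not the whole dict
model.eval()
```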

@YUXIN-commit

Thank you, but I have figured it out: it is an issue with how the model is loaded during inference. It is resolved by loading the checkpoint with the state_dict specified, i.e. model.load_state_dict(model_dict["state_dict"]), which is missing in the test.py file.

Hello, I encountered the same issue as you, but when I followed your method and added the state_dict, I got an error, as shown in the screenshot below.
Could you kindly let me know how you solved it?

[Screenshot: error message]
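Since the error message in the screenshot is not reproduced in the thread, a generic way to diagnose this kind of loading failure is to inspect what the checkpoint actually contains before calling load_state_dict. The sketch below is not tied to the specific error; the model arguments and checkpoint path are placeholders:

```python
import torch
from monai.networks.nets import UNETR

# Same construction arguments as during training (assumed values for illustration).
model = UNETR(in_channels=1, out_channels=14, img_size=(96, 96, 96))

ckpt = torch.load("best_model.pt", map_location="cpu")  # placeholder path
print(type(ckpt), list(ckpt.keys())[:5])

# If there is no "state_dict" key, the file is probably a plain state_dict
# and should be loaded directly.
state_dict = ckpt["state_dict"] if "state_dict" in ckpt else ckpt

# Checkpoints saved from a DataParallel/DistributedDataParallel model carry a
# "module." prefix that must be stripped before loading into a bare model.
state_dict = {k.replace("module.", "", 1): v for k, v in state_dict.items()}

missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)
```

Non-empty missing/unexpected key lists usually point to a mismatch between the saved checkpoint and the model construction arguments.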
