can't get int64 precision output of liteSeg model #3367
Comments
The output of a segmentation model is an index map, so it's unnecessary to use int64, because it will not improve the performance of the model.
So what precision is the exported ONNX model, float32 or float64, and what determines this?
When deploying on RK3588, it is not recommended to export the softmax and argmax operators. Why?
The internal modules of the segmentation model, such as convolution layers, fully connected layers, batch norm...
What I mean is keeping int32, instead of discarding argmax. |
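One way to follow the advice above on RK3588 is to export the model without the argmax operator and compute the index map on the host after inference. A minimal sketch with numpy (the logits shape and the class count of 19 are made-up values for illustration):

```python
import numpy as np

# Fake float32 logits from a segmentation model: (batch, classes, H, W).
# A real deployment would get these from the NPU runtime instead.
rng = np.random.default_rng(0)
logits = rng.random((1, 19, 64, 64), dtype=np.float32)

# Host-side argmax over the class axis yields the index map.
label_map = np.argmax(logits, axis=1)

print(label_map.shape)  # (1, 64, 64)
```

Softmax can be skipped entirely here: it is monotonic per pixel, so argmax over raw logits selects the same class indices.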
Search before asking
Describe the bug
I want to get int64 precision output from the segmentation model, so I changed the code in PaddleSeg/tools/export.py from
"output_dtype = 'int32' if args.output_op == 'argmax' else 'float32'"
to
"output_dtype = 'int64' if args.output_op == 'argmax' else 'float32'",
but I still got an int32 precision model. I also want to ask why I can't get float32 precision output when the argmax op is present.
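If a downstream consumer really needs int64, a simple workaround is to cast the exported model's int32 index map after inference instead of patching the exporter. A hedged numpy sketch (the shape is a placeholder):

```python
import numpy as np

# Suppose this is the int32 index map returned by the exported model.
label_map_i32 = np.zeros((1, 512, 512), dtype=np.int32)

# Widening int32 -> int64 is lossless: every int32 value fits in int64.
label_map_i64 = label_map_i32.astype(np.int64)

print(label_map_i64.dtype)  # int64
```

Since class indices easily fit in int32, this cast only changes the storage width, not the values.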
Environment
- OS: Linux
- PaddleSeg: release/2.8
- Python: 3.8
Bug description confirmation
Are you willing to submit a PR?