can't get int64 precision output of liteSeg model #3367

Closed
ccqedq opened this issue Jul 11, 2023 · 6 comments
Labels
bug Something isn't working

Comments

@ccqedq

ccqedq commented Jul 11, 2023

Search before asking

Describe the Bug

I want to get int64 precision output from the segmentation model, so I changed the code in PaddleSeg/tools/export.py from `output_dtype = 'int32' if args.output_op == 'argmax' else 'float32'` to `output_dtype = 'int64' if args.output_op == 'argmax' else 'float32'`, but I still get an int32 precision model. I also want to ask why I can't get float32 precision output when the argmax op is present.

Environment

- OS: Linux
- PaddleSeg: release/2.8
- Python: 3.8

Bug description confirmation

  • I confirm that the bug replication steps, code change instructions, and environment information have been provided, and the problem can be reproduced.

Are you willing to submit a PR?

  • I'd like to help by submitting a PR!
@ccqedq ccqedq added the bug Something isn't working label Jul 11, 2023
@Asthestarsfalll
Contributor

The output of a segmentation model is an index map, so it is unnecessary to use int64, because it will not improve the performance of the model.

@Asthestarsfalll
Contributor

argmax computes the indices of the maximum elements of the input tensor along the provided axis, so the output should be an integer type.
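
To illustrate with a quick numpy sketch (not PaddleSeg code): the argmax result is just a map of per-pixel class ids, and those small integers fit comfortably in int32:

```python
import numpy as np

# Fake logits for a 4-class model on a 2x3 "image": N x C x H x W.
logits = np.random.rand(1, 4, 2, 3).astype(np.float32)

pred = np.argmax(logits, axis=1)  # per-pixel class ids, shape (1, 2, 3)
print(pred.dtype)                 # int64 on most 64-bit platforms (numpy's default index type)
print(pred.max() <= np.iinfo(np.int32).max)  # True: class ids easily fit in int32
```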

@ccqedq
Author

ccqedq commented Jul 11, 2023

So what precision is the exported ONNX model, float32 or float64, and what determines this?

@ccqedq
Author

ccqedq commented Jul 11, 2023

When deploying with rk3588, it is not recommended to export softmax and argmax operators. Why?

@Asthestarsfalll
Contributor

Asthestarsfalll commented Jul 11, 2023

Internal modules of the segmentation model, such as convolution layers, fully connected layers, batch norm and so on, determine it; they run in float32, so the exported ONNX model is float32.
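
If you want to check the exported model yourself, here is a small sketch using the onnx package (the model path is illustrative):

```python
import onnx

model = onnx.load('output/model.onnx')  # path is illustrative

for out in model.graph.output:
    elem_type = out.type.tensor_type.elem_type
    # Map the numeric enum (e.g. 1 = FLOAT, 6 = INT32, 7 = INT64) to its name.
    print(out.name, onnx.TensorProto.DataType.Name(elem_type))
```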

@Asthestarsfalll
Contributor

Asthestarsfalll commented Jul 11, 2023

When deploying with rk3588, it is not recommended to export softmax and argmax operators. Why?

What I mean is keeping int32, not discarding argmax.
From the perspective of deploying on edge devices, these two operators are perhaps too resource-intensive.
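
If argmax is left out of the exported model for the RK3588, it can be done on the host after inference instead; a minimal numpy sketch (the shapes and names are illustrative):

```python
import numpy as np

def postprocess(logits):
    """Turn raw N x C x H x W float32 logits from the device into an int32 label map."""
    # Softmax is not needed for the final label map: argmax over raw logits
    # picks the same class, so only argmax runs on the host.
    return np.argmax(logits, axis=1).astype(np.int32)

# label_map = postprocess(raw_device_output)
```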
