BigDecimal precision not retained for polymorphic deserialization #2644
Comments
This sounds like a bug, from a quick look. Thank you for providing a reproduction! One thing you can try is to enable the setting
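The specific setting is not named above; presumably it is Jackson's `DeserializationFeature.USE_BIG_DECIMAL_FOR_FLOATS`, which makes the mapper read floating-point JSON numbers as `BigDecimal`. A minimal sketch of enabling it (not taken verbatim from the thread):

```java
import com.fasterxml.jackson.databind.DeserializationFeature;
import com.fasterxml.jackson.databind.ObjectMapper;

public class MapperConfig {
    // Build a mapper that reads all JSON floating-point numbers as BigDecimal,
    // so precision survives even when values are buffered during polymorphic
    // type resolution.
    static ObjectMapper bigDecimalMapper() {
        ObjectMapper mapper = new ObjectMapper();
        mapper.enable(DeserializationFeature.USE_BIG_DECIMAL_FOR_FLOATS);
        return mapper;
    }
}
```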
Hmmh. Probably related to buffering needed for polymorphic types.
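For illustration of that buffering: polymorphic deserialization may copy already-read tokens into a `TokenBuffer` until the type id is found, and by default a floating-point token is copied as a `double`. A rough sketch of the effect, assuming Jackson 2.11 or earlier (later versions retain more precision in the buffer):

```java
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.util.TokenBuffer;
import java.math.BigDecimal;

public class TokenBufferPrecisionDemo {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        JsonParser source = mapper.createParser("9999999999999999.99");
        source.nextToken();

        // Copy the number token into a TokenBuffer, roughly what polymorphic
        // deserialization does while it searches for the type id.
        TokenBuffer buffer = new TokenBuffer(source);
        buffer.copyCurrentEvent(source);

        // Replay the buffered token and ask for a BigDecimal: if the token was
        // stored as a double, the extra digits are already gone at this point.
        JsonParser replay = buffer.asParser();
        replay.nextToken();
        BigDecimal value = replay.getDecimalValue();
        System.out.println(value); // e.g. 1.0E+16 rather than 9999999999999999.99
    }
}
```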
Ah. OK, yes, I will add a failing test, and perhaps some time in the future handling can be improved to defer decoding, to allow full precision even in cases like this. But until then I think you should just enable the feature I mentioned and retain full accuracy.
It works. Thank you for your quick reply :)
@rost5000 glad it works; thank you for reporting the issue and verifying the work-around.
Was able to figure out a way to fix this for 2.12.0.
…regressed handling inside polymorphic types. The TokenBuffer could hold values as BigDecimal objects, which are heavy on the heap, but, critically, also cannot be used to deserialize to a less precise target, e.g. double. This is the inverse problem of FasterXML#2644, where not enough precision was retained, and reintroduces precision loss while buffering floating-point contents.
I used a Gradle project, with the dependency added via implementation:
A simple test fails:
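The test itself is not included in this capture; below is a hedged reconstruction of the kind of JUnit 5 test that matches the report. The type names and the `type` discriminator property are made up for illustration:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.math.BigDecimal;
import org.junit.jupiter.api.Test;

class BigDecimalPolymorphicTest {

    // Hypothetical polymorphic hierarchy; the original issue's classes are not shown here.
    @JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "type")
    @JsonSubTypes(@JsonSubTypes.Type(value = Amount.class, name = "amount"))
    interface Payload { }

    static class Amount implements Payload {
        public BigDecimal value;
    }

    @Test
    void bigDecimalPrecisionSurvivesPolymorphicDeserialization() throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        // The type property comes after the numeric value, so Jackson buffers the value
        // before it knows the concrete type; if the buffered number is held as a double,
        // the trailing digits are lost.
        String json = "{\"value\": 9999999999999999.99, \"type\": \"amount\"}";

        Payload p = mapper.readValue(json, Payload.class);

        assertEquals(new BigDecimal("9999999999999999.99"), ((Amount) p).value);
    }
}
```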
The output is
expected: <9999999999999999.99> but was: <1.0E+16>
What should I do?