Description
Following the 0.1.3 release, the additivity of my logit-linked xgboost local TreeSHAP values broke. I found that pinning my numpy version back to either 1.21.6 or 1.22.4 temporarily resolved the issue. I also tested every numpy version from 1.23.1 to 1.23.5 and found that all numpy 1.23.* versions produced the changed values.
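To make "the additivity broke" concrete, here is roughly the check I am running, boiled down to a minimal sketch: synthetic data, default hyperparameters, a default fasttreeshap.TreeExplainer, and an arbitrary tolerance stand in for my actual pipeline. With the explainer's default raw model output, each row's SHAP values plus the expected value should reproduce the booster's margin (log-odds) prediction.

```python
import numpy as np
import xgboost as xgb
import fasttreeshap  # swap in shap.TreeExplainer to check whether upstream shap shows the same behavior

# Placeholder data -- my real dataset and model are not shareable, so this is only illustrative.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = xgb.XGBClassifier(n_estimators=50, max_depth=3, objective="binary:logistic")
model.fit(X, y)

# The default (raw) model output keeps the SHAP values in log-odds space for
# binary:logistic, which is what I mean by "logit-linked" local explanations.
explainer = fasttreeshap.TreeExplainer(model)
# Note: the library may run its own internal additivity check inside shap_values();
# if that raises first, it is demonstrating the same problem.
shap_values = np.asarray(explainer.shap_values(X))   # expected shape (n_samples, n_features) here
margin = model.predict(X, output_margin=True)        # booster's log-odds predictions

reconstruction = shap_values.sum(axis=1) + explainer.expected_value
print("max additivity gap:", np.abs(reconstruction - margin).max())
assert np.allclose(reconstruction, margin, atol=1e-4), "local SHAP values are not additive"
```

On numpy 1.21.6 and 1.22.4 the equivalent check on my data passes; on every 1.23.* version I tested it does not.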
Brainstorming
It makes me wonder whether there is any numpy or numpy-adjacent usage in FastTreeSHAP that could be reimplemented to stay compatible with newer versions. I would be glad to contribute a fix if that's the case and someone can help narrow the scope of the problem (assuming I can't narrow it myself quickly enough). As we all know from #14, that would be highly preferable to pinning numpy versions. My instinct is that this traces back to an existing NumPy release note or upstream issue; I haven't looked too deeply yet.
Good Release
In any case, I still think the decision to relax the numpy constraint was the right one; anyone who runs into the same behavior can simply pin the version themselves in the short term.
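For anyone landing here, the short-term pin I am using looks like this (either of the two versions I verified above works):

```sh
pip install "numpy==1.22.4"   # or numpy==1.21.6 -- the two versions I verified
```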
Environment Info
Partial result of pip list
uname --all
lscpu
Thanks for your responsiveness to the community!
Nandish Gupta
Senior AI Engineer, SolasAI

Hi Nandish, thanks for raising this issue and providing some temporary resolutions. I just checked the shap GitHub page and found some recent issues related to additivity, e.g., shap/shap#2778 and shap/shap#2777. Just curious whether you have tried your dataset with shap as well and whether the additivity issue still exists. Thanks!