comment:re sklearn -- integer encoding vs 1-hot (py) #1
Comments
There is some discussion on this topic here: https://stackoverflow.com/questions/15821751/how-to-use-dummy-variable-to-represent-categorical-data-in-python-scikit-learn-r So I'm not sure the alternative is better; you are welcome to try it if you'd like by changing https://github.com/szilard/benchm-ml/blob/master/2-rf/2.py, and I would be happy to rerun/time it.
This thread is an interesting read as well: https://www.mail-archive.com/scikit-learn-general@lists.sourceforge.net/msg07366.html
I tried it out:

1. Generate integer-encoded categoricals: https://gist.github.com/szilard/b2e97062025ac9347f84
2. Run the random forest on them: https://gist.github.com/szilard/56706595b4594e297414

You get a >10x speedup, a lower memory footprint and an increase in AUC (n=1M). I would have expected faster training and lower memory usage but a decrease in AUC (or the same AUC in some cases). I think the increase in AUC is because 3 of the variables are actually ordinal (month, day of month and day of week). I should probably use 1-hot encoding for those 3 to have a fair comparison with the previous results.
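For reference, a minimal sketch of what an integer-encoded run looks like with pandas and scikit-learn. The file name, column names and label column (`dep_delayed_15min`) are assumptions for illustration; the gists above contain the actual benchmark code.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical file/column names -- see the linked gists for the real code.
df = pd.read_csv("train-1m.csv")
cat_cols = ["Month", "DayofMonth", "DayOfWeek", "UniqueCarrier", "Origin", "Dest"]
y = (df["dep_delayed_15min"] == "Y").astype(int)

# Integer encoding: one column per variable, levels mapped to arbitrary codes.
X = df[cat_cols].apply(lambda s: pd.factorize(s)[0])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1, random_state=0)
clf = RandomForestClassifier(n_estimators=100, n_jobs=-1).fit(X_tr, y_tr)
print(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```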
Same with mixed encoding (the above 3 variables 1-hot, the rest integer encoding, so that I don't give integer encoding an accuracy advantage by mapping the ordinal variables to integers). This makes sense now: integer encoding vs 1-hot is faster (5x), with a lower memory footprint and the same AUC (though it's still not clear to me when to expect the same AUC and when a lower one, even after this excellent thread: https://www.mail-archive.com/scikit-learn-general@lists.sourceforge.net/msg07366.html ).
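The mixed encoding is a one-liner on top of the same idea; again a sketch with assumed file/column names, not the actual benchmark code:

```python
import pandas as pd

df = pd.read_csv("train-1m.csv")  # hypothetical file name, as above
ordinal_like = ["Month", "DayofMonth", "DayOfWeek"]  # 1-hot encoded
other_cats = ["UniqueCarrier", "Origin", "Dest"]     # integer encoded

X_mixed = pd.concat(
    [pd.get_dummies(df[ordinal_like].astype(str)),
     df[other_cats].apply(lambda s: pd.factorize(s)[0])],
    axis=1,
)
```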
Thanks for the benchmarks! Proper handling of categorical variables is not an easy issue anyway.
When the categories are ordered, it indeed makes more sense to handle them as numerical variables. I don't have a strong argument as to why it may also be better when there is no natural ordering. I guess it could boil down to the fact that one-hot encoded splits are often very unbalanced, while integer-encoded splits may be less so.
Thanks @glouppe. I read a paper somewhere that, AFAIR, suggested sorting the (non-ordered) categoricals by their frequency in the data and encoding them as integers in that order. Any idea what that paper might be?
Yes, it is Breiman's book :) When your output is binary, this strategy is in fact optimal (it will find the best subset among the values of the categorical variable) and linear. See section 3.6.3.2 of my thesis if you don't have the CART book.
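For a binary target, the Breiman/CART result orders the levels of a categorical variable by the mean response within each level and then splits on the resulting codes; with that ordering, the best single threshold split coincides with the best subset split on the raw levels, which is what makes the strategy optimal. A minimal sketch, assuming pandas Series inputs (the `target_ordered_codes` helper is hypothetical, not a library function):

```python
import pandas as pd

def target_ordered_codes(cats: pd.Series, y: pd.Series) -> pd.Series:
    """Encode levels as integers ordered by the mean of a binary target."""
    order = y.groupby(cats).mean().sort_values().index
    mapping = {level: rank for rank, level in enumerate(order)}
    return cats.map(mapping)

# Toy example: levels are ranked b (mean 0.0), c (0.5), a (1.0).
cats = pd.Series(["a", "b", "c", "a", "b", "c"])
y = pd.Series([1, 0, 1, 1, 0, 0])
print(target_ordered_codes(cats, y))  # 2, 0, 1, 2, 0, 1
```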
Great, thanks :)
One-hot encoding can be helpful when the number of categories is small (on the order of 10 to 100). In that case one-hot encoding lets a tree discover interesting interactions like (gender = male) AND (job = teacher), whereas ordering the levels makes such interactions harder to discover (it needs two splits on job). However, there is indeed no unified way of handling categorical features in trees, and what trees have really been good at anyway is ordered continuous features.
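A toy illustration of the split-count point, on hypothetical data: with 1-hot encoding a single threshold split isolates one level, while with integer codes the same level generally needs two splits.

```python
import pandas as pd

jobs = pd.Series(["teacher", "nurse", "engineer", "teacher"])

# 1-hot: a single split on job_teacher > 0.5 isolates the teachers.
print(pd.get_dummies(jobs, prefix="job"))

# Integer codes: "teacher" maps to some code k, and a tree needs two
# threshold splits (code > k - 1 and code <= k) to carve out that level,
# unless k happens to sit at either end of the ordering.
codes, levels = pd.factorize(jobs)
print(codes, list(levels))
```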
Thanks @tqchen for the comments.
For n=10M, the results with integer encoding (along with the previous n=1M result):
@tqchen Hi Tianqi, would you please explain more about why trees are better at ordered continuous features than discrete ones? Is it because there are many more split points for continuous features? Thanks.
(Your post popped up in my Twitter feed.)
I'm not sure why you said you needed to one-hot encode categorical variables for scikit-learn's random forest; I'm fairly certain you do not need to (and probably shouldn't). It's been a while since I looked at the source, but from empirical tests I'm pretty sure it handles categorical variables encoded as a single vector of numbers just fine; performance is almost always worse if the features are one-hot encoded.
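One way to sanity-check that claim is to time both encodings side by side. A sketch on synthetic stand-in data (random categoricals and labels, so only the timing and memory behavior is meaningful; swap in real data to see the AUC effects discussed above):

```python
import time

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in: 6 categorical columns with 100 levels each.
rng = np.random.default_rng(0)
df = pd.DataFrame({f"cat{i}": rng.integers(0, 100, 100_000).astype(str)
                   for i in range(6)})
y = rng.integers(0, 2, 100_000)

X_int = df.apply(lambda s: pd.factorize(s)[0])  # 6 columns
X_onehot = pd.get_dummies(df)                   # ~600 columns

for name, X in [("integer", X_int), ("1-hot", X_onehot)]:
    clf = RandomForestClassifier(n_estimators=50, n_jobs=-1, random_state=0)
    t0 = time.time()
    clf.fit(X, y)
    print(f"{name}: fit in {time.time() - t0:.1f}s")
```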