[FEATURE]: Morais-Lima-Martin (MLM) Sampler #128
Comments
The reference implementation is unfortunately available only as MATLAB p-files (encrypted).
Thanks for sharing this paper! To summarize our preliminary discussion: my initial impression is that there is no clear reason to prefer this method over Kennard-Stone (KS) or random sampling (RS).
Conceptually, I'm not convinced that blending them offers any advantage: why pay the O(N^2) cost of Kennard-Stone to enforce a rigorous, deterministic assignment, only to then partially undo it with random splitting? If the hypothesis is that MLM leads to better test-set performance, that seems at odds with the results presented in the paper itself. Table 1 shows that MLM does not help for datasets 2, 3, 4, and 5, and the same is often true in Table 2. Reporting the mean ± standard deviation from the cross-validation would also have helped in interpreting these results.

It would be worth reading the paper more closely and performing additional analysis with their code and datasets. For example, it would be interesting to compare their implementation against two function calls from astartes: first call KS, then pass the resulting indices to RS. A rough sketch of that two-step idea is below.
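For concreteness, here is a minimal numpy-only sketch of that two-step idea (KS-style selection followed by a random mutation of the split). This is not astartes code and not the paper's actual implementation, which is encrypted; `kennard_stone_indices`, `mlm_style_split`, and `swap_fraction` are hypothetical names, and the greedy max-min loop is only a stand-in for a full Kennard-Stone implementation:

```python
import numpy as np

def kennard_stone_indices(X, n_train):
    """Greedy max-min (Kennard-Stone-style) selection of n_train points.

    Seeds with the two mutually most distant points, then repeatedly adds
    the point whose minimum distance to the selected set is largest.
    O(N^2) in time and memory due to the full pairwise distance matrix.
    """
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    selected = [int(i), int(j)]
    min_dist = np.minimum(dist[i], dist[j])  # distance of each point to the selected set
    while len(selected) < n_train:
        min_dist[selected] = -np.inf  # never re-pick an already-selected point
        nxt = int(np.argmax(min_dist))
        selected.append(nxt)
        min_dist = np.minimum(min_dist, dist[nxt])
    return np.array(selected)

def mlm_style_split(X, train_size=0.8, swap_fraction=0.1, seed=0):
    """Deterministic KS-style split, then randomly swap a fraction of
    points between train and test -- the 'random mutation' step that
    partially undoes the rigorous assignment."""
    rng = np.random.default_rng(seed)
    n = len(X)
    train_idx = kennard_stone_indices(X, int(train_size * n))
    test_idx = np.setdiff1d(np.arange(n), train_idx)
    n_swap = int(swap_fraction * min(len(train_idx), len(test_idx)))
    t = rng.choice(len(train_idx), size=n_swap, replace=False)
    s = rng.choice(len(test_idx), size=n_swap, replace=False)
    swapped = train_idx[t].copy()
    train_idx[t] = test_idx[s]
    test_idx[s] = swapped
    return train_idx, test_idx

# Usage on toy data:
X = np.random.default_rng(42).normal(size=(100, 5))
train_idx, test_idx = mlm_style_split(X)
```

With `swap_fraction=0` this reduces to plain KS, and with `swap_fraction=1` it approaches a random split, which is exactly why I would expect its behavior to interpolate between the two baselines rather than beat both.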
Random-mutation variant of the Kennard-Stone algorithm: article link