
[FEATURE]: Morais-Lima-Martin (MLM) Sampler #128

Closed
JacksonBurns opened this issue Jun 6, 2023 · 2 comments · Fixed by #134 or #176
Labels
enhancement New feature or request

Comments

JacksonBurns (Owner) commented Jun 6, 2023

Random-mutation variant of the Kennard-Stone algorithm: article link

JacksonBurns added the enhancement (New feature or request) label on Jun 6, 2023
JacksonBurns (Owner, Author) commented:

Reference implementation, which is unfortunately available only as MATLAB p-files (encrypted).

kspieks (Collaborator) commented Jun 6, 2023

Thanks for sharing this paper! To summarize our preliminary discussion, my initial thought is that there is no clear reason to prefer this method over Kennard-Stone (KS) or random sampling (RS):

  • KS enforces interpolation since it takes the points furthest away from each other in the X space and places them in the training set, which causes the testing set to be entirely contained within the space of the training set. However, this rigorous enforcement comes at the cost of scaling as O(N^2) since that’s simply the cost of computing a distance matrix. It's up to users to decide if this cost is worth it.
  • RS often results in similar interpolation splits, since random sampling tends to give the training and testing sets similar distributions, especially if the original dataset is large enough. Importantly, the computational cost is dramatically lower.
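For concreteness, the greedy max-min selection and its O(N^2) distance-matrix cost can be sketched roughly like this (a minimal NumPy illustration, not the astartes implementation; `kennard_stone_indices` is a hypothetical helper name):

```python
import numpy as np

def kennard_stone_indices(X, n_train):
    """Greedy max-min (Kennard-Stone-style) selection of n_train training points.

    Builds the full pairwise distance matrix up front -- this is the
    O(N^2) cost discussed above.
    """
    X = np.asarray(X, dtype=float)
    # O(N^2) pairwise Euclidean distance matrix
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    # seed the training set with the two points furthest apart
    i, j = np.unravel_index(np.argmax(dists), dists.shape)
    selected = [int(i), int(j)]
    remaining = set(range(len(X))) - set(selected)
    while len(selected) < n_train:
        # add the remaining point whose nearest selected neighbor is furthest away
        rem = sorted(remaining)
        min_d = dists[np.ix_(rem, selected)].min(axis=1)
        nxt = rem[int(np.argmax(min_d))]
        selected.append(nxt)
        remaining.remove(nxt)
    return np.array(selected), np.array(sorted(remaining))
```

Everything left over after the greedy selection becomes the testing set, which is why it ends up contained within the span of the training set.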

Conceptually, I’m currently not convinced that blending them offers any advantage. After all, why would we eat the O(N^2) cost to enforce interpolation but then use random splitting to undo that rigorous assignment? If the hypothesis is that MLM leads to better testing set performance, this seems at odds with the results presented in this paper. Table 1 shows that MLM doesn’t help for datasets 2, 3, 4, and 5. This is often true when looking at Table 2 as well. Reporting the mean ± standard deviation from the cross-validation would have also helped in interpreting these results.

It would be interesting to read this paper more closely to understand it better. It would also be useful to perform additional analysis with their code and datasets. For example, it would be interesting to compare their implementation to 2 function calls from astartes: first call KS, then pass the resulting indices to RS.
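The "KS then random mutation" step of that comparison could be approximated with a small helper like the one below (a sketch only; `mutate_split` and the 10% default swap fraction are illustrative assumptions, not the paper's exact recipe or the astartes API):

```python
import numpy as np

def mutate_split(train_idx, test_idx, frac=0.1, seed=0):
    """Randomly exchange a fraction of points between an existing train/test split.

    Sketch of the MLM-style 'random mutation' applied on top of a KS split;
    frac=0.1 is an illustrative guess at the swap fraction.
    """
    rng = np.random.default_rng(seed)
    train_idx = np.array(train_idx).copy()
    test_idx = np.array(test_idx).copy()
    n_swap = int(frac * min(len(train_idx), len(test_idx)))
    # choose which positions in each set to exchange
    t_pick = rng.choice(len(train_idx), size=n_swap, replace=False)
    s_pick = rng.choice(len(test_idx), size=n_swap, replace=False)
    # fancy indexing on the RHS copies, so this swap is safe
    train_idx[t_pick], test_idx[s_pick] = test_idx[s_pick], train_idx[t_pick]
    return train_idx, test_idx
```

Feeding KS-selected indices through this and comparing model metrics against plain KS and plain RS would directly test whether the mutation step buys anything.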
