Make model selector metadata to metric more robust #386
Conversation
Codecov Report
@@            Coverage Diff             @@
##           master     #386      +/-   ##
==========================================
+ Coverage   86.85%   86.86%   +<.01%
==========================================
  Files         336      336
  Lines       10950    10956       +6
  Branches      351      578     +227
==========================================
+ Hits         9511     9517       +6
  Misses       1439     1439
Continue to review full report at Codecov.
nm -> JsonUtils.fromString[MultiClassificationMetrics](valsJson).get
case OpEvaluatorNames.Regression.humanFriendlyName =>
case `regression` =>
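For context, a minimal standalone sketch (not code from this PR) of the Scala pattern-matching semantics in the diff above: a lowercase identifier in a `case` is a fresh binding that matches anything, while a backtick-quoted identifier compares against an existing value by equality.

```scala
// Standalone sketch: backtick (stable identifier) patterns vs. binding patterns.
object StableIdentifierPatterns extends App {
  val regression = "Regression"

  def describe(name: String): String = name match {
    // `regression` in backticks checks name == regression;
    // without backticks it would bind a new variable and match everything.
    case `regression` => "matched the existing regression value by equality"
    case other        => s"bound to a fresh variable: $other"
  }

  println(describe("Regression"))          // matched the existing regression value by equality
  println(describe("MultiClassification")) // bound to a fresh variable: MultiClassification
}
```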
Hmm, it is better not to have this hard-coded as text, but I noticed a problem with matching on Enums in my PR as well... @tovbinm is this related to the upgrade?
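A hedged sketch of the direction this comment suggests, matching on enum values directly instead of on their display strings; `EvaluatorKind` below is a stand-in for illustration, not the real `OpEvaluatorNames`:

```scala
object EnumMatchSketch extends App {
  // Stand-in enum for illustration only; not the real OpEvaluatorNames.
  object EvaluatorKind extends Enumeration {
    val Regression, Binary, MultiClassification = Value
  }

  // Matching on the enum value itself avoids hard-coding its display string,
  // which can silently drift between library versions.
  def metricsClassFor(kind: EvaluatorKind.Value): String = kind match {
    case EvaluatorKind.Regression          => "RegressionMetrics"
    case EvaluatorKind.Binary              => "BinaryClassificationMetrics"
    case EvaluatorKind.MultiClassification => "MultiClassificationMetrics"
  }

  println(metricsClassFor(EvaluatorKind.Regression)) // RegressionMetrics
}
```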
Weird. Looking at it...
@erica-chiu the proposed tests are defining the custom evaluator incorrectly. Here is how it is supposed to work: #387
Related issues
Describe the proposed solution
Change the matching used when parsing model selector metadata into metrics so that it dispatches on the metric keys rather than on the metric's name.
Describe alternatives you've considered
N/A
Additional context
Allows parsing of the metadata to be based on its content rather than on the metric's name, which in turn allows multiple metrics of the same type.
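To illustrate the idea, a minimal sketch of dispatching on the keys present in the parsed values rather than on a name string; the key sets below are hypothetical, not the project's real metric fields:

```scala
object MetricKeyDispatch extends App {
  // Hypothetical key sets; the real fields live in the evaluator metrics classes.
  val regressionKeys = Set("RootMeanSquaredError", "MeanAbsoluteError", "R2")
  val multiClassKeys = Set("Precision", "Recall", "F1")

  // Decide which metrics type the values represent from their content (keys),
  // so several metrics of the same type can coexist under different names.
  def metricsTypeOf(vals: Map[String, Any]): String = {
    val keys = vals.keySet
    if ((keys intersect regressionKeys).nonEmpty) "RegressionMetrics"
    else if ((keys intersect multiClassKeys).nonEmpty) "MultiClassificationMetrics"
    else "unknown metrics type"
  }

  println(metricsTypeOf(Map("RootMeanSquaredError" -> 1.2))) // RegressionMetrics
  println(metricsTypeOf(Map("F1" -> 0.9)))                   // MultiClassificationMetrics
}
```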