Metric evaluation error handling #638

Closed · katxiao opened this issue Nov 16, 2021 · 3 comments · Fixed by #652
Labels: feature:evaluation (Related to running metrics or visualizations), feature request (Request for a new feature)
Assignees: katxiao · Milestone: 0.13.1

Comments

katxiao (Contributor) commented Nov 16, 2021

Problem Description

When a metric returns NaN, evaluate drops the metric completely, so a metric that errors out is silently omitted from the results. We should print a warning or display the error, so that end users know the metric was supposed to run but could not be computed successfully.
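
A minimal sketch of the requested behavior, assuming each metric exposes a classmethod-style compute(real_data, synthetic_data) as in sdmetrics; the helper name and the 'error' column are illustrative, not part of the existing API:

```python
import warnings

import pandas as pd


def compute_with_error_reporting(metrics, real_data, synthetic_data):
    """Run every metric and keep a row for it even when it fails."""
    rows = []
    for metric in metrics:
        try:
            score = metric.compute(real_data, synthetic_data)
            error = None
        except Exception as exc:
            # Instead of silently dropping the metric, record the failure
            # and surface it as a warning.
            score = float('nan')
            error = f'{type(exc).__name__}: {exc}'
            warnings.warn(f'Metric {metric.__name__} could not be computed: {error}')

        rows.append({'metric': metric.__name__, 'score': score, 'error': error})

    return pd.DataFrame(rows)
```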

katxiao added the feature request (Request for a new feature) and feature:evaluation (Related to running metrics or visualizations) labels on Nov 16, 2021
npatki (Contributor) commented Nov 16, 2021

When you call evaluate with aggregate=False, it would be nice to print out all the metrics that were attempted (a usage sketch follows this list). That way, it's easy to see:

  • Which metrics are considered. This set should be the same across different synthesizers, and it's unexpected when that's not the case.
  • How many metrics errored and, where possible, for what reasons.
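
A hedged sketch of what that per-metric report could look like, assuming the legacy sdv.evaluation.evaluate signature and that the aggregate=False output gains an 'error' column as proposed above; the column names and toy data are illustrative, not a guaranteed schema:

```python
import pandas as pd

from sdv.evaluation import evaluate

# Toy stand-ins; in practice these are the real table and the synthesizer's output.
real_data = pd.DataFrame({'age': [25, 32, 47, 51], 'status': ['A', 'B', 'A', 'B']})
synthetic_data = pd.DataFrame({'age': [27, 30, 45, 50], 'status': ['A', 'A', 'B', 'B']})

report = evaluate(synthetic_data, real_data, aggregate=False)

# Every attempted metric is listed, including the ones that failed
# ('metric', 'raw_score' and 'error' are assumed column names).
print(report[['metric', 'raw_score', 'error']])

# Summarize how many metrics errored, grouped by the reported reason.
print(report['error'].dropna().value_counts())
```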

katxiao self-assigned this on Dec 10, 2021
katxiao added this to the 0.13.1 milestone on Dec 22, 2021
cafornaca commented

The evaluation example in the official documentation is showing errors: https://sdv.dev/SDV/user_guides/evaluation/evaluation_framework.html

[Screenshot: metric errors shown in the documentation's evaluation output table]

katxiao (Contributor, Author) commented Feb 7, 2022

@cafornaca These errors are expected: those metrics are Machine Learning Efficacy metrics, and the user needs to specify which column is the target column to predict. Since evaluate now reports every attempted metric along with any error it raised, these errors are shown in that table.

We may want to add more actionable errors, or an option to ignore errors (cc @npatki).
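
For anyone hitting those errors in the docs example, a hedged workaround sketch: restrict the evaluation to metrics that do not need a target column. The metric names 'CSTest' and 'KSTest' and the metrics= parameter are assumptions about this SDV version, not verified behavior:

```python
import pandas as pd

from sdv.evaluation import evaluate

# Toy stand-ins for the real table and the synthesizer's output.
real_data = pd.DataFrame({'age': [25, 32, 47, 51], 'status': ['A', 'B', 'A', 'B']})
synthetic_data = pd.DataFrame({'age': [27, 30, 45, 50], 'status': ['A', 'A', 'B', 'B']})

# Only statistical-test metrics are requested, so no target column is needed
# and the Machine Learning Efficacy errors should not appear.
scores = evaluate(
    synthetic_data,
    real_data,
    metrics=['CSTest', 'KSTest'],
    aggregate=False,
)
print(scores)
```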


3 participants