Clean up metric columns #2964
Conversation
…olumns in an experiment's run list.
/assign @Bobgy
Thanks! That sounds reasonable to me. Do you need to ask anyone else's opinion? If not, please unhold
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Bobgy

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@Bobgy Thanks for the review and the update. In b/135048320, Katie lgtm'ed, so unholding.
@jingzhang36: The following test failed, say /retest to rerun all failed tests.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
…olumns in an experiment's run list. (kubeflow#2964)
Internal bug thread b/135048320 reports that the metric columns of some runs are missing when displayed in RunList. This is not actually a bug: the runs in the list are not required to have the same set of metrics, and we currently derive the metric columns from the first run in the list.
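For illustration, here is a minimal sketch of why deriving columns from the first run can drop metrics that appear only in later runs. The Metric and Run shapes and the columnsFromFirstRun helper are hypothetical, not the actual RunList code:

```ts
// Hypothetical shapes; the real kubeflow/pipelines types differ.
interface Metric { name: string; value: number; }
interface Run { id: string; metrics: Metric[]; }

// Behavior described above: columns come from the first run only,
// so a metric that exists only on a later run never gets a column.
function columnsFromFirstRun(runs: Run[]): string[] {
  return runs.length ? runs[0].metrics.map(m => m.name) : [];
}

const runs: Run[] = [
  { id: 'run-a', metrics: [{ name: 'accuracy', value: 0.91 }] },
  { id: 'run-b', metrics: [{ name: 'loss', value: 0.12 }] },
];
console.log(columnsFromFirstRun(runs)); // ['accuracy'] — 'loss' looks "missing"
```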
Reasoning:
Since the main purpose of displaying metric columns in a run list is to compare metrics across runs, metric columns only make sense when the runs under comparison share the same set of metrics. In the AllRuns list, however, the runs come from unrelated pipelines across the system, so their metric sets have no inherent connection and no comparison need. Therefore, we disable the metric columns display in the AllRuns list.
Meanwhile, users who want to compare metrics across runs of the same pipeline (which therefore share the same metric columns) can still do so via the Experiments feature: if they group runs with the same metrics into a single experiment, the run list of that experiment displays the metric columns.
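As a rough sketch of the resulting behavior (the prop name and helper below are hypothetical, not the actual RunList implementation), the list could expose metric columns only when it is scoped to a single experiment:

```ts
// Hypothetical sketch: show metric columns only for experiment-scoped lists.
interface RunListProps {
  // Set when the list shows the runs of one experiment;
  // undefined for the AllRuns view.
  experimentId?: string;
}

function showMetricColumns(props: RunListProps): boolean {
  // AllRuns view: runs come from unrelated pipelines, so their metric
  // sets are not comparable and the columns are hidden.
  return props.experimentId !== undefined;
}
```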