
Fixes for multiple and default metric #1239

Merged
merged 4 commits into from
Jun 5, 2016

Conversation

khotilov (Member)

My PR #1153 had essentially disabled the multiple-eval-metrics functionality by keeping only the last metric. This one should fix it by adding special duplicate detection for eval_metric.
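A minimal sketch of the duplicate-detection idea described above, in plain Python rather than the actual C++ parameter-handling code. The function name `flatten_params` and the key/value-list shape are assumptions for illustration: when parameters are configured, repeated `eval_metric` entries must all survive (deduplicating only exact repeats), while for any other key the last setting wins.

```python
# Hypothetical illustration (not the real xgboost implementation):
# keep every distinct eval_metric, but let the last value win for all
# other parameter keys.
def flatten_params(params):
    other = {}          # last value wins for ordinary keys
    eval_metrics = []   # all distinct eval_metric values are kept
    for key, value in params:
        if key == "eval_metric":
            if value not in eval_metrics:  # drop only exact duplicates
                eval_metrics.append(value)
        else:
            other[key] = value
    return list(other.items()) + [("eval_metric", m) for m in eval_metrics]
```

With this behavior, passing `eval_metric` twice yields two metrics instead of silently discarding all but the last one.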

Also, setting the DefaultEvalMetric right after loading can sometimes create an unwanted default metric. So I've moved the lazy DefaultEvalMetric creation into EvalOneIter, where it is invoked only when really necessary.
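The lazy-creation pattern can be sketched as follows. This is a hypothetical toy class, not xgboost's actual Learner: the class shape, the objective-to-metric mapping, and the placeholder scores are all assumptions. The point is only that the default metric is resolved inside the per-iteration evaluation call (mirroring EvalOneIter), so loading a model never instantiates a default metric that the user did not ask for.

```python
# Hypothetical sketch of lazy default-metric creation: instead of
# instantiating a default metric at load/configure time, fall back to
# it only when an evaluation is actually requested and no metric has
# been configured.
class Learner:
    def __init__(self, objective="reg:linear"):
        self.objective = objective
        self.metrics = []  # user-configured eval metrics, if any

    def _default_metric(self):
        # Assumed mapping for illustration; the real library derives
        # the default metric from the training objective.
        return "error" if self.objective.startswith("binary:") else "rmse"

    def eval_one_iter(self, iteration):
        # Lazy fallback: the default metric exists only for the duration
        # of this call, and only when no metric was configured.
        metrics = self.metrics or [self._default_metric()]
        return {m: 0.0 for m in metrics}  # placeholder scores
```

If the user later sets `metrics = ["auc"]`, the default metric is never created at all, which is the behavior the fix restores.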

khotilov added a commit to khotilov/xgboost that referenced this pull request Jun 5, 2016
@tqchen tqchen merged commit 9a48a40 into dmlc:master Jun 5, 2016
tlorieul pushed a commit to tlorieul/xgboost that referenced this pull request Jun 8, 2016
* fix multiple evaluation metrics

* create DefaultEvalMetric only when really necessary

* py test for dmlc#1239

* make travis happy
lock bot locked as resolved and limited conversation to collaborators Jan 19, 2019