
Enable WitWidget for AI Platform including xgboost models #2194

Merged: jameswex merged 4 commits into tensorflow:master on May 10, 2019

Conversation

@jameswex (Contributor) commented May 6, 2019

  • Motivation for features / changes

Continues the work on CMLE mode for WitWidget, which Cloud has rebranded to AI Platform, and adds support for xgboost models served on AI Platform.

  • Technical description of changes

- Rename CMLE to AI Platform serving.
- Allow examples to be provided as JSON lists (as is the case for xgboost models), not just JSON dicts.
- Allow feature names to be provided separately, for display, when examples are given as flat lists.
- Add better prediction-result parsing logic for the AI Platform use case, handling more models out of the box.
- Add the ability for the user to specify a custom prediction-adjustment function for when the AI Platform model returns results in a format unknown to WIT (a hypothetical usage sketch follows below).
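
To make the new options concrete, here is a minimal, hypothetical sketch of how they could fit together. The key names 'use_aip' and 'adjust_prediction' come from the diff hunks quoted later in this conversation; the surrounding code and sample data are illustrative assumptions, not the committed API.

```python
# Hypothetical sketch only; the exact public API for wiring these options up
# is not shown in this PR.

def adjust_prediction(pred):
  # Example adjustment: an xgboost regression model served on AI Platform may
  # return a bare number per example; wrap it in a list so WIT can treat it as
  # a score it understands.
  return pred if isinstance(pred, list) else [pred]

config = {
    'use_aip': True,                         # fetch predictions from an AI Platform model
    'adjust_prediction': adjust_prediction,  # reshape results WIT does not recognize
}

# For an xgboost model, examples can be flat JSON lists rather than dicts;
# feature names for display are then supplied separately.
examples = [[39, 13, 40], [52, 9, 45]]
feature_names = ['age', 'education_num', 'hours_per_week']
```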

  • Screenshots of UI changes
    N/A

  • Detailed steps to verify changes work correctly (as executed by you)

Verified the changes with two AI Platform models in a Colab notebook, including one that uses a custom prediction-adjustment function.

  • Alternate designs / implementations considered

@jameswex (Author) commented May 6, 2019

@tolga-b please review

@tolga-b (Contributor) left a comment

Thank you James, I added some comments.

    if 'adjust_prediction' in config else None)
self.compare_adjust_fn = (
    config.get('adjust_prediction_2')
    if 'adjust_prediction_2' in config else None)
@tolga-b (Contributor):
It would be better if we were consistent with names (adjust_prediction_2 vs. compare_adjust_fn), as in compare_custom_predict_fn.

@jameswex (Author):
done

if 'custom_predict_fn' in copied_config:
  del copied_config['custom_predict_fn']
if 'compare_custom_predict_fn' in copied_config:
  del copied_config['compare_custom_predict_fn']
if 'adjust_prediction' in copied_config:
@tolga-b (Contributor):
same comment about naming here.

@jameswex (Author):
done

# functions.
if self.config.get('use_aip'):
  self.custom_predict_fn = self._predict_aip_model
if self.config.get('use_aip_2'):
@tolga-b (Contributor):
same comment about naming here.

@jameswex (Author):
done

@@ -14,4 +14,4 @@

"""Contains the version string."""

-VERSION = '1.0.1'
+VERSION = '1.0.2'
@tolga-b (Contributor):
Bump this to 1.0.3 if you already pushed the previous PR to pip. Is it best practice to have a different version string for each PR?

@jameswex (Author):
Haven't pushed out 1.0.2 yet; we should only bump the version on PRs where we plan to push to PyPI when they are submitted.

      feat = feature_names[i]
    else:
      feat = str(i)
    if isinstance(json_ex[i], (int, long)):
@tolga-b (Contributor):
nit, you can pull this code chunk into a function add_single_feature(key, value, ex) to avoid repetition.
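
For illustration, a minimal sketch of the kind of helper being suggested, assuming the target is a tf.train.Example; this is not the code that was ultimately committed.

```python
import tensorflow as tf

def add_single_feature(feat, value, ex):
  # Hypothetical helper: append one value to the named feature of a
  # tf.train.Example, choosing the int64/float/bytes list by value type.
  if isinstance(value, int):
    ex.features.feature[feat].int64_list.value.append(value)
  elif isinstance(value, float):
    ex.features.feature[feat].float_list.value.append(value)
  else:
    ex.features.feature[feat].bytes_list.value.append(str(value).encode('utf-8'))
```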

@jameswex (Author):
done

    self.set_examples(self._add_feature_names_to_examples())
    return self

  def _add_feature_names_to_examples(self):
@tolga-b (Contributor):
Does this fail if I call set_feature_names twice, since int(feat) is now casting a feature-name string to an integer?

@jameswex (Author):
After discussion, made this logic simpler by having feature_names optionally provided along with the examples, so there is no need to ever convert existing tf.Examples to the new feature-name mapping; it is all done at set_examples time now.
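
To illustrate the simplified flow, here is a hypothetical sketch (names and signature are assumptions, building on the add_single_feature sketch earlier in this thread): feature names are consumed once, while converting the incoming flat lists, so existing tf.Examples never need to be rewritten.

```python
def flat_lists_to_examples(json_examples, feature_names=None):
  # Hypothetical sketch of the conversion done once at set_examples time: the
  # key for column i is feature_names[i] when names accompany the examples,
  # otherwise the stringified column index.
  tf_examples = []
  for json_ex in json_examples:
    ex = tf.train.Example()
    for i, value in enumerate(json_ex):
      feat = feature_names[i] if feature_names else str(i)
      add_single_feature(feat, value, ex)  # helper sketched earlier in the thread
    tf_examples.append(ex)
  return tf_examples
```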

@jameswex jameswex requested a review from wchargin May 9, 2019 20:26
@jameswex jameswex merged commit 005be3f into tensorflow:master May 10, 2019
@jameswex jameswex deleted the aip branch May 10, 2019 23:41