
Recommend adding item for human monitoring in E. Deployment #130

Closed
glipstein opened this issue Nov 5, 2021 · 3 comments

glipstein commented Nov 5, 2021

Dropping in issue per Emily's suggestion.

Curious whether you've seen any gap around monitoring in section E: Deployment when discussing the checklist in workshops and elsewhere. The items there right now touch on redress, unintended use, etc., but not really on monitoring the impact on individuals once the model is deployed. This gets discussed, for example, in *Weapons of Math Destruction* and has been coming up more as more models are deployed.

Example: Dutch Prime Minister and entire cabinet resign after investigations reveal that 26,000 innocent families were wrongly accused of social benefits fraud partially due to a discriminatory algorithm

I might phrase the item something like:
**E.1 Human review**: What is our plan for monitoring the impacts of the model at scale on the humans behind the data points, especially for cases where the model performs relatively poorly or with lower confidence?

(and then increment the existing E items by one, so they become E.2 through E.5)
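For concreteness, here is a rough sketch of how the new item might look if the checklist items are defined in a single YAML file. The field names below are only a guess at the layout, not the actual file structure:

```yaml
# Hypothetical sketch only -- guessing at the file layout and field names.
- title: Deployment
  section_id: E
  lines:
    - line_id: E.1
      line_summary: Monitoring and review
      line: >-
        What is our plan for monitoring the impacts of the model at scale on
        the humans behind the data points, especially for cases where the
        model performs relatively poorly or with lower confidence?
    # ...existing deployment items, renumbered E.2 through E.5
```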

This also feels clearly different from concept drift, which is more about the distribution changing relative to model development and less about the impact of the model being used as intended at scale.


jayqi commented Nov 5, 2021

I think this is a good item to add to the list, and I agree on the numbering. This is basically "Does this model work as we intend at all?" The current E.1 speaks to redress when something goes wrong, but the question "are we actually checking whether something goes wrong?" is left implicit.

glipstein commented

Thanks @jayqi ! I'd be glad to make a PR, but I don't know the proper way to add an item (last time I edited the files directly in multiple places, which Emily fixed when implementing).
If this issue is addressed, I'd be glad to beta test the change, following #89.

glipstein commented

Closed by #140 🎉
