Replies: 2 comments 2 replies
-
Thanks for sharing! I will ping Kelly from the town hall to share her workaround and thoughts here as well.
-
Hey Juan, it sounds like this may not directly meet your need (labeling of score configs by separate annotators), but in case it's useful: we have explored score configs with naming conventions that align to the responsible/assigned annotator, i.e., the same scoring rule/setup, with initials denoting who is assigned. Then, filtering the traces lets you compare across users. I still see value in having separate annotators/tags, but this has met our needs so far. We are also running LLM evals that follow the same guidelines as the human annotation, and building cues for human annotation based on LLM failures, which has worked really well so far. Thanks Marc and team for pushing this forward! Excited for what's to come. (Note: I am not a developer; I am a domain expert collaborating with our more tech-inclined teams. I appreciate how accessible this is!)
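The naming-convention workaround described above can be sketched in plain Python. Everything here is illustrative: the score-record shape, the label names, and the annotator initials are assumptions for the sketch, not any product's actual API, and the comparison step is done on mock data rather than fetched traces.

```python
from collections import defaultdict

def split_score_name(name: str) -> tuple[str, str]:
    """Split a convention-named score like 'helpfulness-jd' into
    ('helpfulness', 'jd'): same rubric, annotator initials appended."""
    base, _, initials = name.rpartition("-")
    return base, initials

def compare_annotators(scores: list[dict]) -> dict:
    """Group score values by (trace_id, base label), keyed by annotator,
    so the same trace can be compared across annotators."""
    grouped: dict = defaultdict(dict)
    for s in scores:
        base, annotator = split_score_name(s["name"])
        grouped[(s["trace_id"], base)][annotator] = s["value"]
    return dict(grouped)

# Mock scores: two annotators applying the same rubric to the same trace.
scores = [
    {"trace_id": "t1", "name": "helpfulness-jd", "value": 1},
    {"trace_id": "t1", "name": "helpfulness-mk", "value": 0},
]
print(compare_annotators(scores))
# {('t1', 'helpfulness'): {'jd': 1, 'mk': 0}}
```

The same grouping could be done after exporting scores from the UI or API; the key point of the workaround is that the annotator identity is recoverable from the score name itself.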
-
Describe the feature or potential improvement
Hi team, I’m exploring the annotation functionality, specifically the Score Configs support for annotation. I understand that I can add annotations to each LLM trace, but I would like to know whether it’s possible for multiple users to annotate the same trace independently, using the same set of labels. This would let us collect perspectives from different experts and compare how they apply the same labels.
Additional information
No response