Study how LM Evaluation Harness works and try to implement it #231
Comments
Half the fun in AI, though, is not completely understanding why the results are what they are. I'm only (half) joking; this will obviously be a good thing. Pitting various models against each other in a common environment seems like the right way forward. It would not only help in training better models but also present more options varying in quality, speed, and the resources required to run them.
As far as I can tell, you just have to implement a Python class for the model (a rough sketch follows below). Edit: here is the "model" API usage for Bellard's textsynth API. Edit 2: someone created an issue on their end: EleutherAI/lm-evaluation-harness#417
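To make the previous point concrete, here is a minimal sketch of what such a class might look like, assuming the post-0.4 lm-eval API where custom backends subclass `lm_eval.api.model.LM` and register themselves with `register_model`. The `GgmlBinding` class and the `ggml-sketch` name are hypothetical placeholders, not an existing binding; older harness versions used a different base class (`lm_eval.base.BaseLM`), so method names may differ there.

```python
from lm_eval.api.model import LM
from lm_eval.api.registry import register_model


class GgmlBinding:
    """Stand-in for a real ggml/llama.cpp Python binding (hypothetical)."""

    def __init__(self, model_path: str):
        self.model_path = model_path

    def score(self, context: str, continuation: str) -> tuple[float, bool]:
        # A real binding would return the summed log-probability of
        # `continuation` given `context`, and whether it was the greedy decode.
        raise NotImplementedError

    def generate(self, context: str, stop=None) -> str:
        raise NotImplementedError


@register_model("ggml-sketch")  # name that would be passed via --model
class GgmlEvalModel(LM):
    def __init__(self, model_path: str, **kwargs):
        super().__init__()
        self.backend = GgmlBinding(model_path)

    def loglikelihood(self, requests):
        # Each request carries (context, continuation) in req.args;
        # the harness expects one (log-prob, is-greedy) pair per request.
        return [self.backend.score(*req.args) for req in requests]

    def loglikelihood_rolling(self, requests):
        # Log-likelihood of the whole text, used for perplexity-style tasks.
        return [self.backend.score("", req.args[0])[0] for req in requests]

    def generate_until(self, requests):
        # Free-form generation; gen_kwargs usually carries an "until" stop list.
        return [
            self.backend.generate(context, stop=gen_kwargs.get("until"))
            for context, gen_kwargs in (req.args for req in requests)
        ]
```

In other words, the harness only needs a way to score continuations and to generate text; everything else (task prompts, metrics, aggregation) lives on the harness side.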
Hi! We are quite interested in supporting ggml, but AFAIK nobody on our team has experience with Python bindings for C. Copying from the issue on our side:
We'd be happy to help however we can!
This issue was closed because it has been inactive for 14 days since being marked as stale.
For the record, we successfully integrated this into the eval harness via llama-cpp-python (a rough scoring sketch follows below). Currently it's llama.cpp specific, and extending it to the entire ggml ecosystem would be awesome. Our real bottleneck is limited familiarity with the Python bindings (and manpower).
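For context, the core thing the harness needs from llama-cpp-python is per-token log-probabilities over a prompt plus continuation. A rough sketch of that call is below; the model path is a placeholder, and the OpenAI-style completion arguments (`echo`, `logprobs`, plus `logits_all=True` at load time) are assumed to behave as in recent llama-cpp-python releases.

```python
from llama_cpp import Llama

# logits_all=True keeps logits for every prompt token, which echo+logprobs needs.
llm = Llama(model_path="models/7B/ggml-model-q4_0.gguf", logits_all=True)

out = llm.create_completion(
    "Question: What is the capital of France?\nAnswer: Paris",
    max_tokens=1,      # we mostly care about scores for the prompt tokens
    echo=True,         # include the prompt in the response...
    logprobs=1,        # ...together with its per-token log-probabilities
    temperature=0.0,
)

# OpenAI-style response layout; the first token's logprob is None.
token_logprobs = out["choices"][0]["logprobs"]["token_logprobs"]
print(sum(lp for lp in token_logprobs if lp is not None))
```

Summing the continuation tokens' log-probabilities gives the kind of `loglikelihood` value the harness uses for multiple-choice and perplexity tasks.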
Update 10 Apr 2024: #231 (comment)
It would be great to start doing this kind of quantitative analysis of `ggml`-based inference: https://bellard.org/ts_server/
It looks like Fabrice evaluates the models using something called LM Evaluation Harness:
https://github.com/EleutherAI/lm-evaluation-harness
I have no idea what this is yet, but it would be nice to study it and try to integrate it here and in other `ggml`-based projects. This will be a very important step toward estimating the quality of the generated output and seeing whether we are on the right track.
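For reference, a typical harness run looks roughly like the snippet below. This assumes the current `lm_eval` Python API (`simple_evaluate` and the bundled HuggingFace `hf` backend); the model and task names are examples only, and an integrated ggml backend would slot in as a different `model` string.

```python
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                        # HuggingFace backend shipped with the harness
    model_args="pretrained=gpt2",      # any causal LM the backend can load
    tasks=["hellaswag", "arc_easy"],   # benchmark tasks to score
    num_fewshot=0,
)

# Per-task metrics such as accuracy end up under results["results"].
print(results["results"])
```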