
Compare out of the box completions of Action Events on GGML MPT7B, LLAMA-2, GPT-3.5-Davinci, etc #419

Open · FFFiend opened this issue Jul 21, 2023 · 5 comments
Labels: enhancement (New feature or request), good first issue (Good for newcomers)

Comments

FFFiend (Collaborator) commented Jul 21, 2023

Feature request

We want to see which model performs best out of the box at generating action events, given the reference window state, reference actions, and active window state. We don't yet know which models, other than GPT-3.5/4 for fine-tuning, we would like to add to openadapt.ml_models.provider.

Refer to #327 for a bare-bones, simpler example of testing completions purely on GPT-3.5/4, and to stateful.py for how event dicts are currently sanitized before being passed into the prompt.

It also serves as a good exercise for using different ML APIs (Hugging Face and OpenAI) and understanding how completions are made.
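
As a rough starting point, here is a minimal sketch of what such a comparison could look like. The `build_prompt` helper is hypothetical (the real prompt construction and sanitization lives in stateful.py), the model names are just examples, and it uses the pre-1.0 `openai.ChatCompletion` interface that was current at the time:

```python
# Sketch: compare out-of-the-box completions from a Hugging Face model and GPT-3.5.
# `build_prompt` is a hypothetical placeholder; real sanitization is in stateful.py.
import openai
from transformers import pipeline


def build_prompt(reference_window, reference_actions, active_window):
    # Placeholder serialization of the sanitized event dicts.
    return (
        f"Reference window state: {reference_window}\n"
        f"Reference actions: {reference_actions}\n"
        f"Active window state: {active_window}\n"
        "Generate the next action events:"
    )


def complete_hf(prompt, model_name="mosaicml/mpt-7b-instruct"):
    # Loads a Hugging Face text-generation pipeline; MPT needs trust_remote_code.
    generator = pipeline("text-generation", model=model_name, trust_remote_code=True)
    return generator(prompt, max_new_tokens=256)[0]["generated_text"]


def complete_openai(prompt, model_name="gpt-3.5-turbo"):
    # Pre-1.0 openai client call; requires OPENAI_API_KEY in the environment.
    response = openai.ChatCompletion.create(
        model=model_name,
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]
```

The idea would be to run the same prompt through each backend and compare the generated action events side by side.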

Motivation

Helpful for #379

@FFFiend FFFiend added enhancement New feature or request good first issue Good for newcomers labels Jul 21, 2023
@KrishPatel13 KrishPatel13 self-assigned this Jul 21, 2023
FFFiend (Collaborator, Author) commented Jul 23, 2023

The latest comment in #417 lists some LLaMA-2 models that run on CPU.
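
For anyone trying those out, a minimal sketch of CPU-only inference on a quantized GGML LLaMA-2 build via llama-cpp-python; the model path is a placeholder and assumes you have already downloaded a GGML file locally:

```python
# Sketch: CPU inference on a quantized LLaMA-2 GGML file with llama-cpp-python.
# The model path is a placeholder for whatever GGML build you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-2-7b-chat.ggmlv3.q4_0.bin", n_ctx=2048)
out = llm(
    "Reference actions: [...]\nActive window state: [...]\nNext action events:",
    max_tokens=128,
    temperature=0.2,
)
print(out["choices"][0]["text"])
```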

FFFiend (Collaborator, Author) commented Jul 25, 2023

@KrishPatel13 I can pick this up if work on your other PRs is getting in the way; just let me know 👍

@KrishPatel13 KrishPatel13 removed their assignment Jul 25, 2023
KrishPatel13 (Collaborator) commented:
@FFFiend Sure no worries! I can work on something else 😊.

FFFiend (Collaborator, Author) commented Jul 25, 2023

FFFiend (Collaborator, Author) commented Jul 26, 2023

meta-llama/llama#555: I can't get LLaMA-2 to work inside a pipeline, not sure why.
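
For reference, this is the kind of transformers pipeline call that is failing for me; note the meta-llama repos are gated, so an approved Hugging Face access token is assumed:

```python
# Sketch: loading LLaMA-2 through a transformers text-generation pipeline.
# Assumes approved access to the gated meta-llama repo and a valid HF token
# (e.g. via `huggingface-cli login`), plus `accelerate` for device_map="auto".
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)
result = pipe(
    "Given the reference actions above, the next action event is:",
    max_new_tokens=64,
)
print(result[0]["generated_text"])
```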
