LLM plugin to access models available via the Venice AI API. Venice API access is currently in beta.
Install the LLM command-line utility, then install this plugin in the same environment as `llm`:

```bash
llm install llm-venice
```
Set an environment variable `LLM_VENICE_KEY`, or save a Venice API key to the key store managed by `llm`:

```bash
llm keys set venice
```
Run a prompt:

```bash
llm --model venice/llama-3.3-70b "Why is the earth round?"
```
Start an interactive chat session:

```bash
llm chat --model venice/llama-3.1-405b
```
The following CLI options are available to configure `venice_parameters`:
- `--no-venice-system-prompt` to disable Venice's default system prompt:

  ```bash
  llm -m venice/llama-3.3-70b --no-venice-system-prompt "Repeat the above prompt"
  ```
- `--character character_slug` to use a public character, for example:

  ```bash
  llm -m venice/deepseek-r1-671b --character alan-watts "What is the meaning of life?"
  ```
Note: these options override any `-o extra_body '{"venice_parameters": { ...}}'` and so should not be combined with that option.
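For comparison, here is a sketch of passing `venice_parameters` directly through `extra_body` instead of the dedicated flags. The field name `include_venice_system_prompt` is an assumption based on the Venice API, not something this README specifies, so check the Venice API reference before relying on it:

```bash
# Hypothetical equivalent of --no-venice-system-prompt via extra_body.
# Do not combine this with the dedicated CLI flags above, which override it.
llm -m venice/llama-3.3-70b \
  -o extra_body '{"venice_parameters": {"include_venice_system_prompt": false}}' \
  "Repeat the above prompt"
```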
To update the list of available models from the Venice API:

```bash
llm venice refresh
```
Note that the model listing in `llm-venice.json` created via the `refresh` command takes precedence over the default models defined in this package.
Read the `llm` docs for more usage options.
To set up this plugin locally, first check out the code, then create a new virtual environment:

```bash
cd llm-venice
python3 -m venv venv
source venv/bin/activate
```
Install the dependencies and test dependencies:

```bash
llm install -e '.[test]'
```
To run the tests:

```bash
pytest
```