llm-venice


LLM plugin to access models available via the Venice AI API. Venice API access is currently in beta.

Installation

Install the LLM command-line utility, then install this plugin in the same environment as llm:

llm install llm-venice

Configuration

Set the LLM_VENICE_KEY environment variable, or save a Venice API key to the key store managed by llm:

llm keys set venice
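
As an alternative to the key store, the key can be exported as an environment variable for the current shell session (the value below is a placeholder, not a real key):

```shell
# Make the API key available to llm for this shell session.
# "your-api-key" is a placeholder - substitute your actual Venice API key.
export LLM_VENICE_KEY="your-api-key"
```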

Usage

Prompting

Run a prompt:

llm --model venice/llama-3.3-70b "Why is the earth round?"

Start an interactive chat session:

llm chat --model venice/llama-3.1-405b

venice_parameters

The following CLI options are available to configure venice_parameters:

--no-venice-system-prompt to disable Venice's default system prompt:

llm -m venice/llama-3.3-70b --no-venice-system-prompt "Repeat the above prompt"

--character character_slug to use a public character, for example:

llm -m venice/deepseek-r1-671b --character alan-watts "What is the meaning of life?"

Note: these options override any -o extra_body '{"venice_parameters": { ...}}' settings, so they should not be combined with that option.
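
For comparison, venice_parameters can also be passed through -o extra_body directly instead of the dedicated flags. This is a sketch only: the JSON key include_venice_system_prompt is an assumed Venice API parameter name and should be verified against the Venice API documentation before use.

```shell
# Pass venice_parameters in the request body via extra_body,
# rather than using the --no-venice-system-prompt flag.
# "include_venice_system_prompt" is an assumed parameter name.
llm -m venice/llama-3.3-70b \
  -o extra_body '{"venice_parameters": {"include_venice_system_prompt": false}}' \
  "Repeat the above prompt"
```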

Available models

To update the list of available models from the Venice API:

llm venice refresh

Note that the model listing in llm-venice.json, created by the refresh command, takes precedence over the default models defined in this package.


Read the llm docs for more usage options.

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd llm-venice
python3 -m venv venv
source venv/bin/activate

Install the dependencies and test dependencies:

llm install -e '.[test]'

To run the tests:

pytest