datasette-llm-usage

Track usage of LLM tokens in a SQLite table

This is a very early alpha.

Installation

Install this plugin in the same environment as Datasette.

datasette install datasette-llm-usage

Usage

This plugin adds functionality to track and manage LLM token usage in Datasette. It creates two tables:

  • _llm_usage: Tracks usage of LLM tokens
  • _llm_allowance: Manages credit allowances for LLM usage

Configuration

By default the tables are created in the internal database, the one passed to Datasette using --internal internal.db. You can change that by adding the following to your Datasette plugin configuration:

{
    "plugins": {
        "datasette-llm-usage": {
            "database": "your_database_name"
        }
    }
}

Setting up allowances

Before using LLM models, you need to set up an allowance in the _llm_allowance table. You can do this with SQL like:

insert into _llm_allowance (
    id,
    created,
    credits_remaining,
    daily_reset,
    daily_reset_amount,
    purpose
) values (
    1,
    strftime('%s', 'now'),
    10000,
    0,
    0,
    null
);

The other columns are not yet used.
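
If you prefer to do this from Python rather than raw SQL, here is a minimal sketch using the standard library sqlite3 module. It assumes the internal database lives in a file called internal.db, matching the --internal internal.db example above:

import sqlite3
import time

# Assumes the internal database file is internal.db, as passed to
# Datasette with --internal internal.db
conn = sqlite3.connect("internal.db")
conn.execute(
    """
    insert into _llm_allowance (
        id, created, credits_remaining, daily_reset, daily_reset_amount, purpose
    ) values (?, ?, ?, ?, ?, ?)
    """,
    (1, int(time.time()), 10000, 0, 0, None),
)
conn.commit()
conn.close()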

Using the LLM wrapper

The plugin provides an LLM class that wraps the llm library to track token usage:

from datasette_llm_usage import LLM

llm = LLM(datasette)

# Get available models
models = llm.get_async_models()

# Get a specific model
model = llm.get_async_model("gpt-4o-mini", purpose="my_purpose")

# Use the model
response = await model.prompt("Your prompt here")
text = await response.text()

Usage will be automatically recorded.
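
Note that those await calls need to run inside an async function. Here is a minimal, self-contained sketch; the function name and prompt text are illustrative only, not part of the plugin's API:

from datasette_llm_usage import LLM

# Illustrative helper - "run_prompt" and the prompt text are made up for
# this example. "datasette" is the Datasette instance available inside
# plugin hooks and view functions.
async def run_prompt(datasette):
    llm = LLM(datasette)
    model = llm.get_async_model("gpt-4o-mini", purpose="example")
    response = await model.prompt("Say hello in one sentence")
    # Token usage for this prompt is recorded automatically
    return await response.text()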

Built-in endpoint

The plugin provides a simple demo endpoint at /-/llm-usage-simple-prompt that requires authentication and uses the gpt-4o-mini model.

Supported Models and Pricing

The plugin includes pricing information for various models, including:

  • Gemini models (1.5-flash, 1.5-pro)
  • Claude models (3.5-sonnet, 3-opus, 3-haiku)
  • GPT models (gpt-4o, gpt-4o-mini, o1-preview, o1-mini)

Different models have different input and output token costs.
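
As a rough illustration, a prompt's cost can be thought of as its input tokens charged at one rate plus its output tokens charged at another. The function and rates below are invented for illustration and are not the plugin's actual pricing table:

# Illustrative only: the rates here are made up, not the plugin's real pricing
def prompt_credits(input_tokens, output_tokens, input_rate, output_rate):
    # Charge each input token at input_rate and each output token at output_rate
    return input_tokens * input_rate + output_tokens * output_rate

# Example: 1,200 input tokens and 300 output tokens at hypothetical rates
print(prompt_credits(1200, 300, 0.15, 0.60))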

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd datasette-llm-usage
python -m venv venv
source venv/bin/activate

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

python -m pytest
