Track usage of LLM tokens in a SQLite table
This is a very early alpha.
Install this plugin in the same environment as Datasette.
datasette install datasette-llm-usage
This plugin adds functionality to track and manage LLM token usage in Datasette. It creates two tables:
_llm_usage
: Tracks usage of LLM tokens
_llm_allowance
: Manages credit allowances for LLM usage
By default the tables are created in the internal database passed to Datasette using --internal internal.db. You can change that by setting the following in your Datasette plugin configuration:
{
    "plugins": {
        "datasette-llm-usage": {
            "database": "your_database_name"
        }
    }
}
Before using LLM models, you need to set up an allowance in the _llm_allowance table. You can do this with SQL like:
insert into _llm_allowance (
    id,
    created,
    credits_remaining,
    daily_reset,
    daily_reset_amount,
    purpose
) values (
    1,
    strftime('%s', 'now'),
    10000,
    0,
    0,
    null
);
The daily_reset, daily_reset_amount and purpose columns are not yet used.
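If you would rather run that insert from Python than from a SQL console, a minimal sketch using the standard library sqlite3 module might look like the following. It assumes the plugin has already created the _llm_allowance table, and that the internal database lives in the internal.db file shown above:

import sqlite3
import time

# Assumes the plugin has already created _llm_allowance in internal.db
conn = sqlite3.connect("internal.db")
conn.execute(
    """
    insert into _llm_allowance (
        id, created, credits_remaining, daily_reset, daily_reset_amount, purpose
    ) values (?, ?, ?, ?, ?, ?)
    """,
    (1, int(time.time()), 10000, 0, 0, None),
)
conn.commit()
conn.close()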
The plugin provides an LLM class that wraps the llm library to track token usage:
from datasette_llm_usage import LLM
llm = LLM(datasette)
# Get available models
models = llm.get_async_models()
# Get a specific model
model = llm.get_async_model("gpt-4o-mini", purpose="my_purpose")
# Use the model
response = await model.prompt("Your prompt here")
text = await response.text()
Usage will be automatically recorded in the _llm_usage table.
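Since model.prompt() and response.text() are awaited, this code needs to run in an async context. Here is a hedged sketch of how it could be wired into a Datasette plugin of your own using the register_routes() plugin hook; the /-/summarize path, the prompt text and the "my_purpose" purpose string are illustrative examples, not part of this plugin:

from datasette import hookimpl
from datasette.utils.asgi import Response
from datasette_llm_usage import LLM


async def summarize(request, datasette):
    # Illustrative view: send a fixed prompt and return the response text
    llm = LLM(datasette)
    model = llm.get_async_model("gpt-4o-mini", purpose="my_purpose")
    response = await model.prompt("Summarize Datasette in one sentence")
    return Response.text(await response.text())


@hookimpl
def register_routes():
    # /-/summarize is a hypothetical path chosen for this example
    return [(r"^/-/summarize$", summarize)]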
The plugin provides a simple demo endpoint at /-/llm-usage-simple-prompt that requires authentication and uses the gpt-4o-mini model.
The plugin includes pricing information for various models including:
- Gemini models (1.5-flash, 1.5-pro)
- Claude models (3.5-sonnet, 3-opus, 3-haiku)
- GPT models (gpt-4o, gpt-4o-mini, o1-preview, o1-mini)
Different models have different input and output token costs.
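As a rough illustration of how separate input and output prices translate into a per-request cost, here is a small sketch; the per-million-token rates below are made-up placeholders, not the plugin's actual pricing data:

# Hypothetical per-million-token prices in credits; not the plugin's real values
INPUT_PRICE_PER_MILLION = 15
OUTPUT_PRICE_PER_MILLION = 60


def cost_in_credits(input_tokens: int, output_tokens: int) -> float:
    # Total cost is input and output tokens, each charged at its own rate
    return (
        input_tokens * INPUT_PRICE_PER_MILLION / 1_000_000
        + output_tokens * OUTPUT_PRICE_PER_MILLION / 1_000_000
    )


print(cost_in_credits(1_000, 500))  # 0.045 with the placeholder prices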
To set up this plugin locally, first check out the code. Then create a new virtual environment:
cd datasette-llm-usage
python -m venv venv
source venv/bin/activate
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
python -m pytest