Release 0.19
simonw committed Dec 1, 2024
1 parent ac3d008 commit c018104
Showing 3 changed files with 16 additions and 1 deletion.
11 changes: 11 additions & 0 deletions docs/changelog.md
@@ -1,5 +1,16 @@
# Changelog

## 0.19 (2024-12-01)

- Tokens used by a response are now logged to new `input_tokens` and `output_tokens` integer columns and a `token_details` JSON string column, for the default OpenAI models and models from other plugins that {ref}`implement this feature <advanced-model-plugins-usage>`. [#610](https://github.com/simonw/llm/issues/610)
- `llm prompt` now takes a `-u/--usage` flag to display token usage at the end of the response.
- `llm logs -u/--usage` shows token usage information for logged responses.
- `llm prompt ... --async` responses are now logged to the database. [#641](https://github.com/simonw/llm/issues/641)
- `llm.get_models()` and `llm.get_async_models()` functions, {ref}`documented here <python-api-listing-models>`; see the sketch after this list. [#640](https://github.com/simonw/llm/issues/640)
- `response.usage()` and async response `await response.usage()` methods, returning a `Usage(input=2, output=1, details=None)` dataclass. [#644](https://github.com/simonw/llm/issues/644)
- `response.on_done(callback)` and `await response.on_done(callback)` methods for specifying a callback to be executed when a response has completed, {ref}`documented here <python-api-response-on-done>`. [#653](https://github.com/simonw/llm/issues/653)
- Fix for bug running `llm chat` on Windows 11. Thanks, [Sukhbinder Singh](https://github.com/sukhbinder). [#495](https://github.com/simonw/llm/issues/495)
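
A minimal sketch of how the new Python API pieces above fit together; the model ID and prompt are placeholders, and `get_model()`, `prompt()` and `text()` are the pre-existing API:

```python
import llm

# List every model made available by the default plugin and any others installed.
for model in llm.get_models():
    print(model.model_id)

# Prompt a model and inspect the token accounting added in this release.
model = llm.get_model("gpt-4o-mini")  # placeholder model ID
response = model.prompt("Say hello in one word")
print(response.text())
print(response.usage())  # e.g. Usage(input=..., output=..., details=None)
```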

(v0_19a2)=
## 0.19a2 (2024-11-20)

4 changes: 4 additions & 0 deletions docs/python-api.md
@@ -160,6 +160,8 @@ async for chunk in model.prompt(
print(chunk, end="", flush=True)
```

(python-api-conversations)=

## Conversations

LLM supports *conversations*, where you ask follow-up questions of a model as part of an ongoing conversation.
@@ -195,6 +197,8 @@ response = conversation.prompt(

Access `conversation.responses` for a list of all of the responses that have so far been returned during the conversation.
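
A minimal sketch of the conversation flow described above, assuming a placeholder model ID:

```python
import llm

model = llm.get_model("gpt-4o-mini")  # placeholder model ID
conversation = model.conversation()

# Each prompt is sent along with the previous exchanges in the conversation.
print(conversation.prompt("Name a colour").text())
print(conversation.prompt("Name a fruit of that colour").text())

# All of the responses returned so far.
print(len(conversation.responses))
```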

(python-api-response-on-done)=

## Running code when a response has completed

For some applications, such as tracking the tokens used by an application, it may be useful to execute code as soon as a response has finished executing.
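
A minimal sketch of that token-tracking use case, assuming the callback is passed the completed response:

```python
import llm

def log_usage(response):
    # Runs as soon as the response has finished executing.
    usage = response.usage()
    print(f"input={usage.input} output={usage.output} details={usage.details}")

model = llm.get_model("gpt-4o-mini")  # placeholder model ID
response = model.prompt("Describe the moon in one sentence")
response.on_done(log_usage)
response.text()  # consuming the response fires the registered callback
```
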
2 changes: 1 addition & 1 deletion setup.py
@@ -1,7 +1,7 @@
from setuptools import setup, find_packages
import os

VERSION = "0.19a2"
VERSION = "0.19"


def get_long_description():