openai: update ChatOpenAI api ref (#22324)
Update to reflect that token usage is no longer default in streaming
mode.

Add detail for streaming context under Token Usage section.
ccurme authored May 30, 2024
1 parent 2443e85 commit f343374
Showing 1 changed file with 28 additions and 2 deletions.
libs/partners/openai/langchain_openai/chat_models/base.py (+28 −2)
@@ -1219,7 +1219,6 @@ class ChatOpenAI(BaseChatOpenAI):
             AIMessageChunk(content=' programmation', id='run-9e1517e3-12bf-48f2-bb1b-2e824f7cd7b0')
             AIMessageChunk(content='.', id='run-9e1517e3-12bf-48f2-bb1b-2e824f7cd7b0')
             AIMessageChunk(content='', response_metadata={'finish_reason': 'stop'}, id='run-9e1517e3-12bf-48f2-bb1b-2e824f7cd7b0')
-            AIMessageChunk(content='', id='run-9e1517e3-12bf-48f2-bb1b-2e824f7cd7b0', usage_metadata={'input_tokens': 31, 'output_tokens': 5, 'total_tokens': 36})
 
         .. code-block:: python
@@ -1231,7 +1230,7 @@ class ChatOpenAI(BaseChatOpenAI):
         .. code-block:: python
 
-            AIMessageChunk(content="J'adore la programmation.", response_metadata={'finish_reason': 'stop'}, id='run-bf917526-7f58-4683-84f7-36a6b671d140', usage_metadata={'input_tokens': 31, 'output_tokens': 5, 'total_tokens': 36})
+            AIMessageChunk(content="J'adore la programmation.", response_metadata={'finish_reason': 'stop'}, id='run-bf917526-7f58-4683-84f7-36a6b671d140')
 
     Async:
 
         .. code-block:: python
@@ -1353,6 +1352,33 @@ class Joke(BaseModel):
             {'input_tokens': 28, 'output_tokens': 5, 'total_tokens': 33}
 
+        When streaming, set the ``stream_options`` model kwarg:
+
+        .. code-block:: python
+
+            stream = llm.stream(messages, stream_options={"include_usage": True})
+            full = next(stream)
+            for chunk in stream:
+                full += chunk
+            full.usage_metadata
+
+        .. code-block:: python
+
+            {'input_tokens': 28, 'output_tokens': 5, 'total_tokens': 33}
+
+        Alternatively, setting ``stream_options`` when instantiating the model can be
+        useful when incorporating ``ChatOpenAI`` into LCEL chains, or when using
+        methods like ``.with_structured_output``, which generate chains under the
+        hood.
+
+        .. code-block:: python
+
+            llm = ChatOpenAI(
+                model="gpt-4o",
+                model_kwargs={"stream_options": {"include_usage": True}},
+            )
+
+            structured_llm = llm.with_structured_output(...)
 
     Logprobs:
 
         .. code-block:: python
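Editor's note on the streaming pattern added above: the `full = next(stream); full += chunk` idiom works because `AIMessageChunk` supports `+`, concatenating content and carrying the usage metadata that arrives on the final chunk. A minimal pure-Python sketch of that folding behavior, using a hypothetical `Chunk` class (not langchain's actual implementation):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Chunk:
    """Hypothetical stand-in for a streamed message chunk."""

    content: str = ""
    usage: Optional[dict] = None

    def __add__(self, other: "Chunk") -> "Chunk":
        # Concatenate text; usage metadata arrives only on the final
        # chunk, so taking the later non-None value is enough here.
        return Chunk(
            self.content + other.content,
            other.usage if other.usage is not None else self.usage,
        )


# Simulated stream: text chunks, then a final empty chunk carrying usage.
stream = iter([
    Chunk("J'adore"),
    Chunk(" la"),
    Chunk(" programmation."),
    Chunk("", {"input_tokens": 31, "output_tokens": 5, "total_tokens": 36}),
])

full = next(stream)
for chunk in stream:
    full += chunk

print(full.content)  # J'adore la programmation.
print(full.usage)    # {'input_tokens': 31, 'output_tokens': 5, 'total_tokens': 36}
```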
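The construction-time approach in the diff works because kwargs fixed on the model apply to every generation it makes, including those issued inside chains. A tiny sketch of that merge pattern with a hypothetical `Model` class (an assumption for illustration, not the `ChatOpenAI` internals), where per-call kwargs override construction-time defaults:

```python
class Model:
    """Hypothetical sketch of construction-time default kwargs."""

    def __init__(self, **model_kwargs):
        # Fixed once at construction; applied to every subsequent call.
        self.model_kwargs = model_kwargs

    def invoke_params(self, **call_kwargs) -> dict:
        # Per-call kwargs take precedence over construction-time defaults.
        return {**self.model_kwargs, **call_kwargs}


m = Model(stream_options={"include_usage": True})
print(m.invoke_params())
# {'stream_options': {'include_usage': True}}
print(m.invoke_params(stream_options={"include_usage": False}))
# {'stream_options': {'include_usage': False}}
```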
