feat (ai/core): add onError callback to streamText #4729

Merged · 3 commits · Feb 6, 2025
5 changes: 5 additions & 0 deletions .changeset/flat-tigers-heal.md
@@ -0,0 +1,5 @@
---
'ai': patch
---

feat (ai/core): add onError callback to streamText
19 changes: 19 additions & 0 deletions content/docs/03-ai-sdk-core/05-generating-text.mdx
@@ -94,6 +94,25 @@ It also provides several promises that resolve when the stream is finished:
- `result.finishReason`: The reason the model finished generating text.
- `result.usage`: The usage of the model during text generation.

### `onError` callback

`streamText` starts streaming immediately so that data can be sent without waiting for the model to finish.
Errors become part of the stream rather than being thrown, which prevents, for example, servers from crashing.

To log errors, you can provide an `onError` callback that is triggered when an error occurs.

```tsx highlight="6-8"
import { streamText } from 'ai';

const result = streamText({
model: yourModel,
prompt: 'Invent a new holiday and describe its traditions.',
onError({ error }) {
console.error(error); // your error logging logic here
},
});
```
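
Errors that reach `onError` also surface as `error` parts on `result.fullStream`, so they can be observed while consuming the stream. The following is a minimal sketch of that dispatch under simplified part shapes; `consume` and `mockStream` are hypothetical helpers for illustration, not SDK APIs:

```typescript
// Simplified part shapes; the real fullStream carries more part types.
type StreamPart =
  | { type: 'text-delta'; textDelta: string }
  | { type: 'error'; error: unknown };

// Dispatch parts the way the internal consumer does: an error part
// invokes the callback but does not abort the stream.
async function consume(
  stream: AsyncIterable<StreamPart>,
  onError?: (event: { error: unknown }) => Promise<void> | void,
): Promise<string> {
  let text = '';
  for await (const part of stream) {
    if (part.type === 'error') {
      await onError?.({ error: part.error });
    } else {
      text += part.textDelta;
    }
  }
  return text;
}

// Mock stream that interleaves an error with text deltas:
async function* mockStream(): AsyncIterable<StreamPart> {
  yield { type: 'text-delta', textDelta: 'Hello' };
  yield { type: 'error', error: new Error('model overloaded') };
  yield { type: 'text-delta', textDelta: ', world' };
}

const errors: unknown[] = [];
consume(mockStream(), ({ error }) => {
  errors.push(error);
}).then(text => {
  console.log(text); // "Hello, world"
  console.log(errors.length); // 1
});
```

Note that the callback fires as the error part flows through the stream, so the remaining parts are still delivered afterwards.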

### `onChunk` callback

When using `streamText`, you can provide an `onChunk` callback that is triggered for each chunk of the stream.
19 changes: 19 additions & 0 deletions content/docs/07-reference/01-ai-sdk-core/02-stream-text.mdx
@@ -737,6 +737,25 @@ To see `streamText` in action, check out [these examples](#examples).
},
],
},
{
name: 'onError',
type: '(event: OnErrorResult) => Promise<void> | void',
isOptional: true,
description:
'Callback that is called when an error occurs during streaming. You can use it to log errors.',
properties: [
{
type: 'OnErrorResult',
parameters: [
{
name: 'error',
type: 'unknown',
description: 'The error that occurred.',
},
],
},
],
},
{
name: 'experimental_output',
type: 'Output',
24 changes: 24 additions & 0 deletions packages/ai/core/generate-text/stream-text.test.ts
@@ -1887,6 +1887,30 @@ describe('streamText', () => {
});
});

describe('options.onError', () => {
it('should invoke onError', async () => {
const result: Array<{ error: unknown }> = [];

const { fullStream } = streamText({
model: new MockLanguageModelV1({
doStream: async () => {
throw new Error('test error');
},
}),
prompt: 'test-input',
onError(event) {
result.push(event);
},
});

// consume stream
await convertAsyncIterableToArray(fullStream);

expect(result).toStrictEqual([{ error: new Error('test error') }]);
});
});

describe('options.onFinish', () => {
it('should send correct information', async () => {
let result!: Parameters<
14 changes: 14 additions & 0 deletions packages/ai/core/generate-text/stream-text.ts
@@ -117,6 +117,7 @@ If set and supported by the model, calls will generate deterministic results.
@param experimental_generateMessageId - Generate a unique ID for each message.

@param onChunk - Callback that is called for each chunk of the stream. The stream processing will pause until the callback promise is resolved.
@param onError - Callback that is called when an error occurs during streaming. You can use it to log errors.
@param onStepFinish - Callback that is called when each step (LLM call) is finished, including intermediate steps.
@param onFinish - Callback that is called when the LLM response and all request tool executions
(for tools that have an `execute` function) are finished.
@@ -151,6 +152,7 @@ export function streamText<
experimental_repairToolCall: repairToolCall,
experimental_transform: transform,
onChunk,
onError,
onFinish,
onStepFinish,
_internal: {
@@ -267,6 +269,11 @@ Callback that is called for each chunk of the stream. The stream processing will
>;
}) => Promise<void> | void;

/**
Callback that is invoked when an error occurs during streaming. You can use it to log errors.
*/
onError?: (event: { error: unknown }) => Promise<void> | void;

/**
Callback that is called when the LLM response and all request tool executions
(for tools that have an `execute` function) are finished.
@@ -317,6 +324,7 @@ Internal. For test use only. May change without notice.
continueSteps,
providerOptions,
onChunk,
onError,
onFinish,
onStepFinish,
now,
@@ -478,6 +486,7 @@ class DefaultStreamTextResult<TOOLS extends ToolSet, OUTPUT, PARTIAL_OUTPUT>
continueSteps,
providerOptions,
onChunk,
onError,
onFinish,
onStepFinish,
now,
@@ -520,6 +529,7 @@ class DefaultStreamTextResult<TOOLS extends ToolSet, OUTPUT, PARTIAL_OUTPUT>
}
>;
}) => Promise<void> | void);
onError: undefined | ((event: { error: unknown }) => Promise<void> | void);
onFinish:
| undefined
| ((
@@ -588,6 +598,10 @@ class DefaultStreamTextResult<TOOLS extends ToolSet, OUTPUT, PARTIAL_OUTPUT>
await onChunk?.({ chunk: part });
}

if (part.type === 'error') {
await onError?.({ error: part.error });
}

if (part.type === 'text-delta') {
recordedStepText += part.textDelta;
recordedContinuationText += part.textDelta;