StreamText error handling still not working properly in v4.0.18 #4099
The error will happen after the function in your example has returned the stream text result. I would need to know more about how you use the result / consume the stream.
Thanks @lgrammel for pointing out that the issue occurs after the function has returned the stream text result. This clarification helped me resolve the problem. I truly appreciate your quick response and detailed explanation – it was incredibly helpful. Thanks again for your support!
This was useful:

import { streamText } from 'ai';
try {
const { fullStream } = streamText({
model: yourModel,
prompt: 'Write a vegetarian lasagna recipe for 4 people.',
});
for await (const part of fullStream) {
switch (part.type) {
// ... handle other part types
case 'error': {
const error = part.error;
// handle error
break;
}
}
}
} catch (error) {
// handle error
}

Straight out of the docs: https://sdk.vercel.ai/docs/ai-sdk-core/error-handling#handling-streaming-errors-streaming-with-error-support
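The docs pattern above can be simulated without the SDK. The sketch below is illustrative only (`mockFullStream` and its part shapes are made-up stand-ins, not part of the `ai` package), but it shows the key idea: mid-stream provider failures arrive as `'error'` parts inside the stream rather than as thrown exceptions, which is why an outer try/catch alone never sees them.

```typescript
// Stand-in for the SDK's fullStream: an async generator that yields
// text parts and then surfaces a failure as an 'error' part instead
// of throwing. (Illustrative shapes, not the real SDK types.)
type StreamPart =
  | { type: 'text-delta'; textDelta: string }
  | { type: 'error'; error: unknown };

async function* mockFullStream(): AsyncGenerator<StreamPart> {
  yield { type: 'text-delta', textDelta: 'Hello' };
  // The provider failed mid-stream; the failure is a part, not a throw.
  yield { type: 'error', error: new Error('404 Not Found') };
}

async function consume(): Promise<{ text: string; errors: unknown[] }> {
  const errors: unknown[] = [];
  let text = '';
  for await (const part of mockFullStream()) {
    switch (part.type) {
      case 'text-delta':
        text += part.textDelta;
        break;
      case 'error':
        errors.push(part.error); // handled here, never thrown
        break;
    }
  }
  return { text, errors };
}
```

Because nothing is thrown, a surrounding try/catch stays silent; the error is only visible to code that inspects the parts.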
I don't really understand the streaming process, but how would this work when you return the stream as a response with:

try {
const result = streamText({
model: aiSdkOpenAI(GPT_VERSION),
messages: reqMessages
})
return result.toDataStreamResponse()
} catch (err) {
// TODO: THIS DOES NOT WORK
console.error(err)
throw err
}

In this example, how can I capture any errors? If I add the example from the documentation, it will work and will log the error, but the text will no longer be streamed.

try {
const result = streamText({
model: aiSdkOpenAI(GPT_VERSION),
messages: reqMessages
})
for await (const part of result.fullStream) {
switch (part.type) {
// ... handle other part types
case 'error': {
const error = part.error
// This works
console.error(error)
break
}
}
}
return result.toDataStreamResponse()
} catch (err) {
// TODO: THIS DOES NOT WORK
console.error(err)
throw err
}
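A likely explanation for the "text is no longer streamed" behavior, sketched here with general async-iterable semantics rather than the SDK's actual internals: a stream backed by an async generator can only be consumed once, so draining `fullStream` for error handling leaves nothing for `toDataStreamResponse()` to send.

```typescript
// Minimal sketch (not the SDK's internals): an async generator is
// exhausted after its first full iteration, so a second consumer
// sees an empty stream.
async function* makeStream(): AsyncGenerator<string> {
  yield 'chunk-1';
  yield 'chunk-2';
}

async function drainTwice(): Promise<[string[], string[]]> {
  const stream = makeStream();
  const firstPass: string[] = [];
  for await (const chunk of stream) firstPass.push(chunk); // consumes it
  const secondPass: string[] = [];
  for await (const chunk of stream) secondPass.push(chunk); // already exhausted
  return [firstPass, secondPass];
}
```

Under this assumption, the error-logging loop plays the role of the first pass, and the response becomes the empty second pass.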
I have introduced an onError callback. To log errors, you can provide an onError handler:

import { streamText } from 'ai';
const result = streamText({
model: yourModel,
prompt: 'Invent a new holiday and describe its traditions.',
onError({ error }) {
console.error(error); // your error logging logic here
},
});
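The pattern behind such an onError hook can be sketched in a self-contained way (`pumpStream` and its part shapes are illustrative, not the SDK's real implementation): the stream consumer reports error parts through a caller-supplied callback while continuing to stream, so logging no longer requires draining the stream yourself.

```typescript
type ErrorHandler = (event: { error: unknown }) => void;

// Illustrative onError-style hook: errors encountered while pumping the
// stream are reported through the callback instead of being thrown, and
// text parts keep flowing.
async function pumpStream(
  parts: AsyncIterable<{ type: string; error?: unknown; textDelta?: string }>,
  onError?: ErrorHandler,
): Promise<string> {
  let text = '';
  for await (const part of parts) {
    if (part.type === 'error') {
      onError?.({ error: part.error }); // logging hook; streaming continues
    } else if (part.type === 'text-delta') {
      text += part.textDelta ?? '';
    }
  }
  return text;
}
```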
Ahh, thanks @lgrammel! That's amazing, I will be using that for sure :) Thanks for the help!
Description

Despite the fix in #3987, error handling in streamText is still not working as expected in version 4.0.18. The application crashes instead of properly catching and handling API errors, even with proper try/catch blocks in place.

Code example
Current Behavior
When an API error occurs (e.g., 404 Not Found), the error bypasses all error handling mechanisms and crashes the application with an uncaught exception, despite having proper error handling in place.
Error Log
Minimal Reproduction Code
Environment
AI provider
@ai-sdk/openai version : 1.0.8
Additional context
This issue was supposedly fixed in #3987, but the problem persists in the latest version (4.0.18). The error handling mechanisms (try/catch blocks) are not catching the errors from the streaming process, causing the application to crash unexpectedly.
Expected Behavior
Errors should be properly caught, either by the surrounding try/catch blocks or through a dedicated error-handling mechanism.
Would appreciate guidance on whether this is a known limitation or if there's a workaround available.