
StreamText error handling still not working properly in v4.0.18 #4099

Closed

samihamine opened this issue Dec 16, 2024 · 6 comments

Labels
bug Something isn't working

Comments

@samihamine

Description

Despite the fix in #3987, error handling in streamText is still not working as expected in version 4.0.18. The application crashes instead of properly catching and handling API errors, even with proper try/catch blocks in place.

Current Behavior

When an API error occurs (e.g., 404 Not Found), the error bypasses all error handling mechanisms and crashes the application with an uncaught exception, despite having proper error handling in place.

Error Log

APICallError [AI_APICallError]: Not Found
    at /app/node_modules/@ai-sdk/provider-utils/src/response-handler.ts:72:16
    at processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async postToApi (/app/node_modules/@ai-sdk/provider-utils/src/post-to-api.ts:81:28)
    at async OpenAIChatLanguageModel.doStream (/app/node_modules/@ai-sdk/openai/src/openai-chat-language-model.ts:375:50)
    at async fn (/app/node_modules/ai/core/generate-text/stream-text.ts:532:25)
    at async /app/node_modules/ai/core/telemetry/record-span.ts:18:22
    at async _retryWithExponentialBackoff (/app/node_modules/ai/util/retry-with-exponential-backoff.ts:37:12)
    at async streamStep (/app/node_modules/ai/core/generate-text/stream-text.ts:487:15)
    at async fn (/app/node_modules/ai/core/generate-text/stream-text.ts:991:9)
    at async /app/node_modules/ai/core/telemetry/record-span.ts:18:22 {
  cause: undefined,
  url: XXXXX
  ....

Minimal Reproduction Code

import { streamText, StreamTextResult } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';

async function chatWithAPI(): Promise<StreamTextResult> {
    try {
        const apiProvider = createOpenAI({
            name: 'custom-api',
            apiKey: 'dummy-key',
            baseURL: 'http://api.example.com/v1', // Intentionally wrong URL to trigger 404
        });

        const result = streamText({
            model: apiProvider('model-id'),
            messages: [
                {
                    role: 'user',
                    content: 'Hello'
                }
            ],
            onFinish: async ({ text }) => {
                console.log('Finished:', text);
            },
            async onChunk(chunk) {
                if ('error' in chunk) {
                    console.error('Chunk error:', chunk.error);
                    throw new Error(`API Error: ${chunk.error}`);
                }
            },
            maxRetries: 2
        });
        return result;
    } catch (error) {
        console.error('Error caught:', error);
        throw new Error('Failed to process request');
    }
}

Environment

  • ai version: 4.0.18
  • @ai-sdk/openai version: 1.0.8
  • Node.js version: 18.x

AI provider

@ai-sdk/openai version: 1.0.8

Additional context

This issue was supposedly fixed in #3987, but the problem persists in the latest version (4.0.18). The error handling mechanisms (try/catch blocks) are not catching the errors from the streaming process, causing the application to crash unexpectedly.

Expected Behavior

Errors should be properly caught either:

  1. In the try/catch block
  2. Or propagated as a proper error response through the stream

Would appreciate guidance on whether this is a known limitation or if there's a workaround available.

@samihamine added the bug label Dec 16, 2024
@lgrammel
Collaborator

lgrammel commented Dec 16, 2024

The error will happen after the function in your example has returned the stream text result, and the return type is only a Promise because your method is async (it does not need to be in your example).

Therefore your try block can never catch it. The error happens during the consumption of the stream, not during the creation of the stream text result.

I would need to know more about how you use the result / consume the stream.
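
For illustration, here is a minimal sketch of a consumption pattern where the try/catch does work, based on the "simple streams" example in the error handling docs (the provider setup is taken from the reproduction above; the printing loop is only an assumption about how the result is consumed):

import { streamText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';

const apiProvider = createOpenAI({
  name: 'custom-api',
  apiKey: 'dummy-key',
  baseURL: 'http://api.example.com/v1', // intentionally wrong URL
});

async function main() {
  try {
    const { textStream } = streamText({
      model: apiProvider('model-id'),
      messages: [{ role: 'user', content: 'Hello' }],
    });

    // The error surfaces here, while the stream is consumed,
    // so the try/catch must wrap the consumption loop.
    for await (const textPart of textStream) {
      process.stdout.write(textPart);
    }
  } catch (error) {
    console.error('Error caught during stream consumption:', error);
  }
}

main();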

@samihamine
Author

Thanks @lgrammel for pointing out that the issue occurs after the function has returned the stream text result. This clarification helped me resolve the problem. I truly appreciate your quick response and detailed explanation – it was incredibly helpful.

Thanks again for your support!

@kachar

kachar commented Dec 20, 2024

This was useful:

import { streamText } from 'ai';

try {
  const { fullStream } = streamText({
    model: yourModel,
    prompt: 'Write a vegetarian lasagna recipe for 4 people.',
  });

  for await (const part of fullStream) {
    switch (part.type) {
      // ... handle other part types

      case 'error': {
        const error = part.error;
        // handle error
        break;
      }
    }
  }
} catch (error) {
  // handle error
}

Straight out of the docs: https://sdk.vercel.ai/docs/ai-sdk-core/error-handling#handling-streaming-errors-streaming-with-error-support

@Pascaltib

I don't fully understand the streaming process, but how would this work when you return the stream as a response with toDataStreamResponse()?

try {
  const result = streamText({
    model: aiSdkOpenAI(GPT_VERSION),
    messages: reqMessages
  })

  return result.toDataStreamResponse()
} catch (err) {
  // TODO: THIS DOES NOT WORK
  console.error(err)
  throw err
}

In this example, how can I capture any errors?

If I add the example from the documentation, it works and logs the error, but the text is no longer streamed.

try {
  const result = streamText({
    model: aiSdkOpenAI(GPT_VERSION),
    messages: reqMessages
  })

  for await (const part of result.fullStream) {
    switch (part.type) {
      // ... handle other part types

      case 'error': {
        const error = part.error
        // This works
        console.error(error)
        break
      }
    }
  }

  return result.toDataStreamResponse()
} catch (err) {
  // TODO: THIS DOES NOT WORK
  console.error(err)
  throw err
}
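
One way to surface errors to the client without draining the stream first is a sketch like the following (this assumes the getErrorMessage option that toDataStreamResponse accepts for mapping stream errors to client-visible messages; by default the error message in the response is masked):

const result = streamText({
  model: aiSdkOpenAI(GPT_VERSION),
  messages: reqMessages
})

// Log server-side and forward a readable message as an error
// part of the data stream, instead of the default masked text.
return result.toDataStreamResponse({
  getErrorMessage: (error) => {
    console.error(error)
    return error instanceof Error ? error.message : String(error)
  }
})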

@lgrammel
Collaborator

lgrammel commented Feb 6, 2025

I have introduced an onError callback on streamText: #4729 (ai@4.1.22)

streamText immediately starts streaming to enable sending data without waiting for the model.
Errors become part of the stream and are not thrown, to prevent e.g. servers from crashing.

To log errors, you can provide an onError callback that is triggered when an error occurs.

import { streamText } from 'ai';

const result = streamText({
  model: yourModel,
  prompt: 'Invent a new holiday and describe its traditions.',
  onError({ error }) {
    console.error(error); // your error logging logic here
  },
});
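
Applied to the route-handler case above, a minimal sketch (the handler shape and the aiSdkOpenAI / GPT_VERSION names are carried over from the earlier comment, not prescribed by the SDK):

import { streamText } from 'ai';

export async function POST(req: Request) {
  const { messages: reqMessages } = await req.json();

  const result = streamText({
    model: aiSdkOpenAI(GPT_VERSION),
    messages: reqMessages,
    // Called when an error occurs during streaming; the error
    // becomes part of the stream instead of crashing the server.
    onError({ error }) {
      console.error(error);
    },
  });

  return result.toDataStreamResponse();
}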

@Pascaltib

Ahh thanks @lgrammel!

That's amazing, I will be using that for sure :)

Thanks for the help
