chore (ai/core): move providerMetadata to stable (#4814)
lgrammel authored Feb 10, 2025
1 parent 2a4772d commit 74f0f0e
Showing 54 changed files with 269 additions and 99 deletions.
5 changes: 5 additions & 0 deletions .changeset/nine-clouds-film.md
@@ -0,0 +1,5 @@
---
'ai': patch
---

chore (ai/core): move providerMetadata to stable
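In practice, the rename just drops the `experimental_` prefix when reading provider metadata from a result. A minimal before/after sketch (the model and prompt are illustrative; any provider works, OpenAI is just an example):

```ts
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const result = await generateText({
  model: openai('gpt-4o-mini'), // illustrative model choice
  prompt: 'Say hello.',
});

// before this change (experimental API):
// console.log(result.experimental_providerMetadata?.openai);

// after this change (stable API):
console.log(result.providerMetadata?.openai);
```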
6 changes: 3 additions & 3 deletions content/docs/07-reference/01-ai-sdk-core/01-generate-text.mdx
@@ -676,7 +676,7 @@ To see `generateText` in action, check out [these examples](#examples).
'True when there will be a continuation step with a continuation text.',
},
{
- name: 'experimental_providerMetadata',
+ name: 'providerMetadata',
type: 'Record<string,Record<string,JSONValue>> | undefined',
isOptional: true,
description:
@@ -857,7 +857,7 @@ To see `generateText` in action, check out [these examples](#examples).
'Warnings from the model provider (e.g. unsupported settings).',
},
{
- name: 'experimental_providerMetadata',
+ name: 'providerMetadata',
type: 'Record<string,Record<string,JSONValue>> | undefined',
description:
'Optional metadata from the provider. The outer key is the provider name. The inner values are the metadata. Details depend on the provider.',
@@ -1055,7 +1055,7 @@ To see `generateText` in action, check out [these examples](#examples).
'True when there will be a continuation step with a continuation text.',
},
{
- name: 'experimental_providerMetadata',
+ name: 'providerMetadata',
type: 'Record<string,Record<string,JSONValue>> | undefined',
isOptional: true,
description:
12 changes: 6 additions & 6 deletions content/docs/07-reference/01-ai-sdk-core/02-stream-text.mdx
@@ -968,7 +968,7 @@ To see `streamText` in action, check out [these examples](#examples).
'True when there will be a continuation step with a continuation text.',
},
{
- name: 'experimental_providerMetadata',
+ name: 'providerMetadata',
type: 'Record<string,Record<string,JSONValue>> | undefined',
isOptional: true,
description:
@@ -1022,7 +1022,7 @@ To see `streamText` in action, check out [these examples](#examples).
],
},
{
- name: 'experimental_providerMetadata',
+ name: 'providerMetadata',
type: 'Record<string,Record<string,JSONValue>> | undefined',
description:
'Optional metadata from the provider. The outer key is the provider name. The inner values are the metadata. Details depend on the provider.',
@@ -1191,7 +1191,7 @@ To see `streamText` in action, check out [these examples](#examples).
],
},
{
- name: 'experimental_providerMetadata',
+ name: 'providerMetadata',
type: 'Promise<Record<string,Record<string,JSONValue>> | undefined>',
description:
'Optional metadata from the provider. Resolved when the response is finished. The outer key is the provider name. The inner values are the metadata. Details depend on the provider.',
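Since this field is a promise that resolves once the response finishes, a typical access pattern looks like the following sketch (model and prompt are illustrative):

```ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

const result = streamText({
  model: openai('gpt-4o-mini'),
  prompt: 'Write a haiku about caching.',
});

// consume the stream first
for await (const textPart of result.textStream) {
  process.stdout.write(textPart);
}

// resolves once the response has finished
console.log(await result.providerMetadata);
```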
@@ -1515,7 +1515,7 @@ To see `streamText` in action, check out [these examples](#examples).
'True when there will be a continuation step with a continuation text.',
},
{
- name: 'experimental_providerMetadata',
+ name: 'providerMetadata',
type: 'Record<string,Record<string,JSONValue>> | undefined',
isOptional: true,
description:
@@ -1851,7 +1851,7 @@ To see `streamText` in action, check out [these examples](#examples).
description: 'The reason the model finished generating the text.',
},
{
- name: 'experimental_providerMetadata',
+ name: 'providerMetadata',
type: 'Record<string,Record<string,JSONValue>> | undefined',
isOptional: true,
description:
@@ -1907,7 +1907,7 @@ To see `streamText` in action, check out [these examples](#examples).
],
},
{
- name: 'experimental_providerMetadata',
+ name: 'providerMetadata',
type: 'Record<string,Record<string,JSONValue>> | undefined',
isOptional: true,
description:
content/docs/07-reference/01-ai-sdk-core/03-generate-object.mdx
@@ -601,7 +601,7 @@ To see `generateObject` in action, check out the [additional examples](#more-examples).
'Warnings from the model provider (e.g. unsupported settings).',
},
{
- name: 'experimental_providerMetadata',
+ name: 'providerMetadata',
type: 'Record<string,Record<string,JSONValue>> | undefined',
description:
'Optional metadata from the provider. The outer key is the provider name. The inner values are the metadata. Details depend on the provider.',
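For orientation, a short sketch of reading this field from a `generateObject` result (the schema and prompt are made up for illustration):

```ts
import { openai } from '@ai-sdk/openai';
import { generateObject } from 'ai';
import { z } from 'zod';

const { object, providerMetadata } = await generateObject({
  model: openai('gpt-4o-mini'),
  schema: z.object({ name: z.string(), age: z.number() }),
  prompt: 'Generate a plausible fantasy character.',
});

console.log(object);
// outer key is the provider name; inner values are provider-specific
console.log(providerMetadata?.openai);
```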
4 changes: 2 additions & 2 deletions content/docs/07-reference/01-ai-sdk-core/04-stream-object.mdx
@@ -538,7 +538,7 @@ To see `streamObject` in action, check out the [additional examples](#more-examples).
],
},
{
- name: 'experimental_providerMetadata',
+ name: 'providerMetadata',
type: 'Record<string,Record<string,JSONValue>> | undefined',
description:
'Optional metadata from the provider. The outer key is the provider name. The inner values are the metadata. Details depend on the provider.',
@@ -638,7 +638,7 @@ To see `streamObject` in action, check out the [additional examples](#more-examples).
],
},
{
- name: 'experimental_providerMetadata',
+ name: 'providerMetadata',
type: 'Promise<Record<string,Record<string,JSONValue>> | undefined>',
description:
'Optional metadata from the provider. Resolved when the response is finished. The outer key is the provider name. The inner values are the metadata. Details depend on the provider.',
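As with `streamText`, the promise resolves only after the stream has finished; a hedged sketch of the access pattern (schema and prompt are illustrative):

```ts
import { openai } from '@ai-sdk/openai';
import { streamObject } from 'ai';
import { z } from 'zod';

const result = streamObject({
  model: openai('gpt-4o-mini'),
  schema: z.object({ title: z.string() }),
  prompt: 'Invent a book title.',
});

// drain the partial-object stream before awaiting the metadata
for await (const partialObject of result.partialObjectStream) {
  console.log(partialObject);
}

console.log(await result.providerMetadata);
```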
16 changes: 8 additions & 8 deletions content/providers/01-ai-sdk-providers/01-openai.mdx
@@ -294,10 +294,10 @@ const result = streamText({
```

OpenAI provides usage information for predicted outputs (`acceptedPredictionTokens` and `rejectedPredictionTokens`).
- You can access it in the `experimental_providerMetadata` object.
+ You can access it in the `providerMetadata` object.

```ts highlight="11"
- const openaiMetadata = (await result.experimental_providerMetadata)?.openai;
+ const openaiMetadata = (await result.providerMetadata)?.openai;

const acceptedPredictionTokens = openaiMetadata?.acceptedPredictionTokens;
const rejectedPredictionTokens = openaiMetadata?.rejectedPredictionTokens;
@@ -383,13 +383,13 @@ Reasoning models support additional settings and response metadata:

- the `reasoningEffort` option (or alternatively the `reasoningEffort` model setting), which determines the amount of reasoning the model performs.

- - You can use response `experimental_providerMetadata` to access the number of reasoning tokens that the model generated.
+ - You can use response `providerMetadata` to access the number of reasoning tokens that the model generated.

```ts highlight="4,7-11,17"
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

- const { text, usage, experimental_providerMetadata } = await generateText({
+ const { text, usage, providerMetadata } = await generateText({
model: openai('o3-mini'),
prompt: 'Invent a new holiday and describe its traditions.',
providerOptions: {
@@ -402,7 +402,7 @@
console.log(text);
console.log('Usage:', {
...usage,
- reasoningTokens: experimental_providerMetadata?.openai?.reasoningTokens,
+ reasoningTokens: providerMetadata?.openai?.reasoningTokens,
});
```

@@ -437,7 +437,7 @@ including `gpt-4o`, `gpt-4o-mini`, `o1-preview`, and `o1-mini`.

- Prompt caching is automatically enabled for these models when the prompt is 1024 tokens or longer. It does
not need to be explicitly enabled.
- - You can use response `experimental_providerMetadata` to access the number of prompt tokens that were a cache hit.
+ - You can use response `providerMetadata` to access the number of prompt tokens that were a cache hit.
- Note that caching behavior is dependent on load on OpenAI's infrastructure. Prompt prefixes generally remain in the
cache following 5-10 minutes of inactivity before they are evicted, but during off-peak periods they may persist for up
to an hour.
@@ -446,14 +446,14 @@
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

- const { text, usage, experimental_providerMetadata } = await generateText({
+ const { text, usage, providerMetadata } = await generateText({
model: openai('gpt-4o-mini'),
prompt: `A 1024-token or longer prompt...`,
});

console.log(`usage:`, {
...usage,
- cachedPromptTokens: experimental_providerMetadata?.openai?.cachedPromptTokens,
+ cachedPromptTokens: providerMetadata?.openai?.cachedPromptTokens,
});
```

4 changes: 2 additions & 2 deletions content/providers/01-ai-sdk-providers/05-anthropic.mdx
@@ -107,7 +107,7 @@ Anthropic language models can also be used in the `streamText`, `generateObject`
In the messages and message parts, you can use the `providerOptions` property to set cache control breakpoints.
To mark a cache control breakpoint, set the `anthropic` property in the `providerOptions` object to `{ cacheControl: { type: 'ephemeral' } }`.

- The cache creation input tokens are then returned in the `experimental_providerMetadata` object
+ The cache creation input tokens are then returned in the `providerMetadata` object
for `generateText` and `generateObject`, again under the `anthropic` property.
When you use `streamText` or `streamObject`, the response contains a promise
that resolves to the metadata. Alternatively you can receive it in the
@@ -140,7 +140,7 @@ const result = await generateText({
});

console.log(result.text);
- console.log(result.experimental_providerMetadata?.anthropic);
+ console.log(result.providerMetadata?.anthropic);
// e.g. { cacheCreationInputTokens: 2118, cacheReadInputTokens: 0 }
```
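The `providerOptions` part of this example is collapsed in the diff above; a minimal sketch of setting the breakpoint on a message (the model id and document content are placeholders, per the prose above):

```ts
import { anthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';

const longDocument = '...'; // placeholder for a long, cacheable context

const result = await generateText({
  model: anthropic('claude-3-5-sonnet-20241022'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: longDocument },
        { type: 'text', text: 'Summarize the document above.' },
      ],
      // cache control breakpoint, as described in the prose above
      providerOptions: {
        anthropic: { cacheControl: { type: 'ephemeral' } },
      },
    },
  ],
});

console.log(result.providerMetadata?.anthropic);
// e.g. { cacheCreationInputTokens: 2118, cacheReadInputTokens: 0 }
```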

content/providers/01-ai-sdk-providers/08-amazon-bedrock.mdx
@@ -236,7 +236,7 @@ const result = await generateText({
Tracing information will be returned in the provider metadata if you have tracing enabled.

```ts
- if (result.experimental_providerMetadata?.bedrock.trace) {
+ if (result.providerMetadata?.bedrock.trace) {
// ...
}
```
content/providers/01-ai-sdk-providers/15-google-generative-ai.mdx
@@ -237,7 +237,7 @@ import { google } from '@ai-sdk/google';
import { GoogleGenerativeAIProviderMetadata } from '@ai-sdk/google';
import { generateText } from 'ai';

- const { text, experimental_providerMetadata } = await generateText({
+ const { text, providerMetadata } = await generateText({
model: google('gemini-1.5-pro', {
useSearchGrounding: true,
}),
@@ -248,7 +248,7 @@

// access the grounding metadata. Casting to the provider metadata type
// is optional but provides autocomplete and type safety.
- const metadata = experimental_providerMetadata?.google as
+ const metadata = providerMetadata?.google as
| GoogleGenerativeAIProviderMetadata
| undefined;
const groundingMetadata = metadata?.groundingMetadata;
8 changes: 4 additions & 4 deletions content/providers/01-ai-sdk-providers/11-google-vertex.mdx
@@ -364,7 +364,7 @@ import { vertex } from '@ai-sdk/google-vertex';
import { GoogleGenerativeAIProviderMetadata } from '@ai-sdk/google';
import { generateText } from 'ai';

- const { text, experimental_providerMetadata } = await generateText({
+ const { text, providerMetadata } = await generateText({
model: vertex('gemini-1.5-pro', {
useSearchGrounding: true,
}),
@@ -375,7 +375,7 @@

// access the grounding metadata. Casting to the provider metadata type
// is optional but provides autocomplete and type safety.
- const metadata = experimental_providerMetadata?.google as
+ const metadata = providerMetadata?.google as
| GoogleGenerativeAIProviderMetadata
| undefined;
const groundingMetadata = metadata?.groundingMetadata;
@@ -809,7 +809,7 @@ Anthropic language models can also be used in the `streamText`, `generateObject`
In the messages and message parts, you can use the `providerOptions` property to set cache control breakpoints.
To mark a cache control breakpoint, set the `anthropic` property in the `providerOptions` object to `{ cacheControl: { type: 'ephemeral' } }`.

- The cache creation input tokens are then returned in the `experimental_providerMetadata` object
+ The cache creation input tokens are then returned in the `providerMetadata` object
for `generateText` and `generateObject`, again under the `anthropic` property.
When you use `streamText` or `streamObject`, the response contains a promise
that resolves to the metadata. Alternatively you can receive it in the
@@ -842,7 +842,7 @@ const result = await generateText({
});

console.log(result.text);
- console.log(result.experimental_providerMetadata?.anthropic);
+ console.log(result.providerMetadata?.anthropic);
// e.g. { cacheCreationInputTokens: 2118, cacheReadInputTokens: 0 }
```

4 changes: 2 additions & 2 deletions content/providers/01-ai-sdk-providers/30-deepseek.mdx
@@ -81,7 +81,7 @@ DeepSeek language models can be used in the `streamText` and `streamUI` function

### Cache Token Usage

- DeepSeek provides context caching on disk technology that can significantly reduce token costs for repeated content. You can access the cache hit/miss metrics through the `experimental_providerMetadata` property in the response:
+ DeepSeek provides context caching on disk technology that can significantly reduce token costs for repeated content. You can access the cache hit/miss metrics through the `providerMetadata` property in the response:

```ts
import { deepseek } from '@ai-sdk/deepseek';
@@ -92,7 +92,7 @@ const result = await generateText({
prompt: 'Your prompt here',
});

- console.log(result.experimental_providerMetadata);
+ console.log(result.providerMetadata);
// Example output: { deepseek: { promptCacheHitTokens: 1856, promptCacheMissTokens: 5 } }
```

4 changes: 2 additions & 2 deletions content/providers/01-ai-sdk-providers/70-perplexity.mdx
@@ -82,7 +82,7 @@ Perplexity models can be used in the `streamText` and `streamUI` functions (see

### Provider Metadata

- The Perplexity provider includes additional experimental metadata in the response through `experimental_providerMetadata`:
+ The Perplexity provider includes additional experimental metadata in the response through `providerMetadata`:

```ts
const result = await generateText({
@@ -95,7 +95,7 @@
},
});

- console.log(result.experimental_providerMetadata);
+ console.log(result.providerMetadata);
// Example output:
// {
// perplexity: {
20 changes: 12 additions & 8 deletions examples/ai-core/src/e2e/feature-test-suite.ts
@@ -926,8 +926,9 @@ export function createFeatureTestSuite({
expect(result.text.toLowerCase()).toContain('tokyo');
expect(result.usage?.totalTokens).toBeGreaterThan(0);

- const metadata = result.experimental_providerMetadata
-   ?.google as GoogleGenerativeAIProviderMetadata | undefined;
+ const metadata = result.providerMetadata?.google as
+   | GoogleGenerativeAIProviderMetadata
+   | undefined;
verifyGroundingMetadata(metadata?.groundingMetadata);
});

@@ -942,8 +943,9 @@
chunks.push(chunk);
}

- const metadata = (await result.experimental_providerMetadata)
-   ?.google as GoogleGenerativeAIProviderMetadata | undefined;
+ const metadata = (await result.providerMetadata)?.google as
+   | GoogleGenerativeAIProviderMetadata
+   | undefined;

const completeText = chunks.join('');
expect(completeText).toBeTruthy();
@@ -959,8 +961,9 @@
prompt: 'What is the current population of Tokyo?',
});

- const metadata = result.experimental_providerMetadata
-   ?.google as GoogleGenerativeAIProviderMetadata | undefined;
+ const metadata = result.providerMetadata?.google as
+   | GoogleGenerativeAIProviderMetadata
+   | undefined;
verifySafetyRatings(metadata?.safetyRatings ?? []);
});

@@ -974,8 +977,9 @@
// consume the stream
}

- const metadata = (await result.experimental_providerMetadata)
-   ?.google as GoogleGenerativeAIProviderMetadata | undefined;
+ const metadata = (await result.providerMetadata)?.google as
+   | GoogleGenerativeAIProviderMetadata
+   | undefined;

verifySafetyRatings(metadata?.safetyRatings ?? []);
});
@@ -23,13 +23,7 @@ async function main() {

console.log(result.text);
console.log();
- console.log(
-   JSON.stringify(
-     result.experimental_providerMetadata?.bedrock.trace,
-     null,
-     2,
-   ),
- );
+ console.log(JSON.stringify(result.providerMetadata?.bedrock.trace, null, 2));
}

main().catch(console.error);
@@ -37,7 +37,7 @@ async function main() {
});

console.log(result.text);
- console.log(result.experimental_providerMetadata?.anthropic);
+ console.log(result.providerMetadata?.anthropic);
// e.g. { cacheCreationInputTokens: 2118, cacheReadInputTokens: 0 }
}

@@ -51,7 +51,7 @@ This is a test file.
});

console.log('TEXT', result.text);
- console.log('CACHE', result.experimental_providerMetadata?.anthropic);
+ console.log('CACHE', result.providerMetadata?.anthropic);
console.log();
console.log('EDITOR CONTENT', editorContent);
}
2 changes: 1 addition & 1 deletion examples/ai-core/src/generate-text/deepseek-cache-token.ts
@@ -31,7 +31,7 @@ async function main() {

console.log(result.text);
console.log(result.usage);
- console.log(result.experimental_providerMetadata);
+ console.log(result.providerMetadata);
// "prompt_cache_hit_tokens":1856,"prompt_cache_miss_tokens":5}
}

2 changes: 1 addition & 1 deletion examples/ai-core/src/generate-text/google-grounding.ts
@@ -14,7 +14,7 @@ async function main() {
console.log(result.sources);
console.log();
console.log('PROVIDER METADATA');
- console.log(result.experimental_providerMetadata?.google);
+ console.log(result.providerMetadata?.google);
}

main().catch(console.error);