Commit fb1c0f2

Authored Jun 24, 2024
UsageMetadata - add cachedContentTokenCount field (#178)
Add a `cachedContentTokenCount` field to the `UsageMetadata` interface returned by `generateContent` responses.
1 parent 92662ca · commit fb1c0f2

File tree

5 files changed: +22 −0 lines

.changeset/angry-hotels-learn.md (+5)

```diff
@@ -0,0 +1,5 @@
+---
+"@google/generative-ai": minor
+---
+
+Add a `cachedContentTokenCount` field to the `UsageMetadata` interface returned by `generateContent` responses.
```

common/api-review/generative-ai.api.md (+1)

```diff
@@ -638,6 +638,7 @@ export interface ToolConfig {

 // @public
 export interface UsageMetadata {
+    cachedContentTokenCount?: number;
     candidatesTokenCount: number;
     promptTokenCount: number;
     totalTokenCount: number;
```
docs/reference/main/generative-ai.usagemetadata.cachedcontenttokencount.md (+13)

```diff
@@ -0,0 +1,13 @@
+<!-- Do not edit this file. It is automatically generated by API Documenter. -->
+
+[Home](./index.md) &gt; [@google/generative-ai](./generative-ai.md) &gt; [UsageMetadata](./generative-ai.usagemetadata.md) &gt; [cachedContentTokenCount](./generative-ai.usagemetadata.cachedcontenttokencount.md)
+
+## UsageMetadata.cachedContentTokenCount property
+
+Total token count in the cached part of the prompt, i.e. in the cached content.
+
+**Signature:**
+
+```typescript
+cachedContentTokenCount?: number;
+```
```

docs/reference/main/generative-ai.usagemetadata.md (+1)

```diff
@@ -16,6 +16,7 @@ export interface UsageMetadata

 | Property | Modifiers | Type | Description |
 | --- | --- | --- | --- |
+| [cachedContentTokenCount?](./generative-ai.usagemetadata.cachedcontenttokencount.md) | | number | _(Optional)_ Total token count in the cached part of the prompt, i.e. in the cached content. |
 | [candidatesTokenCount](./generative-ai.usagemetadata.candidatestokencount.md) | | number | Total number of tokens across the generated candidates. |
 | [promptTokenCount](./generative-ai.usagemetadata.prompttokencount.md) | | number | Number of tokens in the prompt. |
 | [totalTokenCount](./generative-ai.usagemetadata.totaltokencount.md) | | number | Total token count for the generation request (prompt + candidates). |
```

packages/main/types/responses.ts (+2)

```diff
@@ -98,6 +98,8 @@ export interface UsageMetadata {
   candidatesTokenCount: number;
   /** Total token count for the generation request (prompt + candidates). */
   totalTokenCount: number;
+  /** Total token count in the cached part of the prompt, i.e. in the cached content. */
+  cachedContentTokenCount?: number;
 }

 /**
```
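The new field is read straight off a response's `usageMetadata`. A minimal sketch of consuming it, with the interface copied from the change above; the `uncachedPromptTokens` helper is hypothetical, and it assumes (not stated in this commit) that `cachedContentTokenCount` counts a subset of `promptTokenCount`:

```typescript
// UsageMetadata as defined in packages/main/types/responses.ts after this commit.
interface UsageMetadata {
  promptTokenCount: number;
  candidatesTokenCount: number;
  totalTokenCount: number;
  /** Total token count in the cached part of the prompt, i.e. in the cached content. */
  cachedContentTokenCount?: number;
}

// Hypothetical helper: prompt tokens that were NOT served from cached content.
// Assumes cachedContentTokenCount is a subset of promptTokenCount.
function uncachedPromptTokens(usage: UsageMetadata): number {
  // The field is optional, so default to 0 when no cached content was used.
  return usage.promptTokenCount - (usage.cachedContentTokenCount ?? 0);
}

const withCache: UsageMetadata = {
  promptTokenCount: 1200,
  candidatesTokenCount: 80,
  totalTokenCount: 1280,
  cachedContentTokenCount: 1000,
};
console.log(uncachedPromptTokens(withCache)); // 200
```

Because the field is optional (`?`), existing callers that never touch it are unaffected, which is why the changeset marks this a `minor` release.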
