It also records a `ai.stream.firstChunk` event when the first chunk of the stream is received.
### streamObject function

`streamObject` records 2 types of spans and 1 type of event:

- `ai.streamObject` (span): the full length of the streamObject call. It contains 1 or more `ai.streamObject.doStream` spans.
  It contains the [basic LLM span information](#basic-llm-span-information) and the following attributes:

  - `operation.name`: `ai.streamObject` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.streamObject"`
  - `ai.prompt`: the prompt that was used when calling `streamObject`
  - `ai.response.object`: the object that was generated (stringified JSON)
  - `ai.settings.mode`: the object generation mode, e.g. `json`
  - `ai.settings.output`: the output type that was used, e.g. `object` or `no-schema`

- `ai.streamObject.doStream` (span): a provider doStream call.
  This span contains an `ai.stream.firstChunk` event.
  It contains the [call LLM span information](#call-llm-span-information) and the following attributes:

  - `operation.name`: `ai.streamObject.doStream` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.streamObject.doStream"`
  - `ai.prompt.format`: the format of the prompt
  - `ai.response.object`: the object that was generated (stringified JSON)
  - `ai.response.msToFirstChunk`: the time it took to receive the first chunk
  - `ai.response.finishReason`: the reason why the generation finished

- `ai.stream.firstChunk` (event): an event that is emitted when the first chunk of the stream is received.
  - `ai.response.msToFirstChunk`: the time it took to receive the first chunk
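As an illustrative sketch (the `SpanData` shape and helper name are hypothetical stand-ins for whatever your OpenTelemetry exporter produces, not part of the SDK), the streaming attributes above can be read back out of exported span data, for example to collect time-to-first-chunk across `ai.streamObject.doStream` spans:

```typescript
// Simplified stand-in for exported OpenTelemetry span data.
interface SpanData {
  name: string;
  attributes: Record<string, string | number>;
}

// Collect `ai.response.msToFirstChunk` from every doStream span,
// using the `ai.operationId` attribute documented above to filter.
function firstChunkLatencies(spans: SpanData[]): number[] {
  return spans
    .filter((s) => s.attributes["ai.operationId"] === "ai.streamObject.doStream")
    .map((s) => Number(s.attributes["ai.response.msToFirstChunk"]));
}
```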
### embed function

`embed` records 2 types of spans:

- `ai.embed` (span): the full length of the embed call. It contains 1 `ai.embed.doEmbed` span.
  It contains the [basic embedding span information](#basic-embedding-span-information) and the following attributes:

  - `operation.name`: `ai.embed` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.embed"`
  - `ai.value`: the value that was passed into the `embed` function
  - `ai.embedding`: a JSON-stringified embedding

- `ai.embed.doEmbed` (span): a provider doEmbed call.
  It contains the [basic embedding span information](#basic-embedding-span-information) and the following attributes:

  - `operation.name`: `ai.embed.doEmbed` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.embed.doEmbed"`
  - `ai.values`: the values that were passed into the provider (array)
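Since `ai.embedding` is documented as a JSON-stringified embedding, the numeric vector can be recovered from exported attributes with `JSON.parse`. A minimal sketch (the helper name and attribute shape are assumptions, not SDK APIs):

```typescript
// Minimal sketch: recover the numeric embedding vector from the
// JSON-stringified `ai.embedding` span attribute.
function parseEmbedding(attributes: Record<string, string>): number[] {
  return JSON.parse(attributes["ai.embedding"]) as number[];
}

parseEmbedding({ "ai.embedding": "[0.1,0.2,0.3]" }); // → [0.1, 0.2, 0.3]
```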
### embedMany function

`embedMany` records 2 types of spans:

- `ai.embedMany` (span): the full length of the embedMany call. It contains 1 or more `ai.embedMany.doEmbed` spans.
  It contains the [basic embedding span information](#basic-embedding-span-information) and the following attributes:

  - `operation.name`: `ai.embedMany` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.embedMany"`
  - `ai.values`: the values that were passed into the `embedMany` function
  - `ai.embeddings`: an array of JSON-stringified embeddings

- `ai.embedMany.doEmbed` (span): a provider doEmbed call.
  It contains the [basic embedding span information](#basic-embedding-span-information) and the following attributes:

  - `operation.name`: `ai.embedMany.doEmbed` and the functionId that was set through `telemetry.functionId`
  - `ai.operationId`: `"ai.embedMany.doEmbed"`
  - `ai.values`: the values that were sent to the provider
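Because `ai.embeddings` is an array of JSON-stringified embeddings, each entry has to be parsed individually. A hypothetical helper (names and shapes assumed for illustration):

```typescript
// Minimal sketch: parse the `ai.embeddings` attribute value, which is
// an array of JSON-stringified embeddings, into numeric vectors.
function parseEmbeddings(embeddings: string[]): number[][] {
  return embeddings.map((e) => JSON.parse(e) as number[]);
}

parseEmbeddings(["[1,2]", "[3,4]"]); // → [[1, 2], [3, 4]]
```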
### Basic LLM span information

Many spans that use LLMs (`ai.generateText`, `ai.generateText.doGenerate`, `ai.streamText`, …) contain the following attributes:

- `ai.telemetry.metadata.*`: the metadata that was passed in through `telemetry.metadata`
- `ai.usage.completionTokens`: the number of completion tokens that were used
- `ai.usage.promptTokens`: the number of prompt tokens that were used

### Call LLM span information

Spans that correspond to individual LLM calls (`ai.generateText.doGenerate`, `ai.streamText.doStream`, `ai.generateObject.doGenerate`, `ai.streamObject.doStream`) contain [basic LLM span information](#basic-llm-span-information) and the following attributes:

- `ai.response.model`: the model that was used to generate the response. This can be different from the model that was requested if the provider supports aliases.
- `ai.response.id`: the id of the response. Uses the ID from the provider when available.
- `ai.response.timestamp`: the timestamp of the response. Uses the timestamp from the provider when available.
- [Semantic Conventions for GenAI operations](https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-spans/)
  - `gen_ai.system`: the provider that was used
  - `gen_ai.request.model`: the model that was requested
  - `gen_ai.request.top_p`: the topP parameter value that was set
  - `gen_ai.request.stop_sequences`: the stop sequences
  - `gen_ai.response.finish_reasons`: the finish reasons that were returned by the provider
  - `gen_ai.response.model`: the model that was used to generate the response. This can be different from the model that was requested if the provider supports aliases.
  - `gen_ai.response.id`: the id of the response. Uses the ID from the provider when available.
  - `gen_ai.usage.input_tokens`: the number of prompt tokens that were used
  - `gen_ai.usage.output_tokens`: the number of completion tokens that were used
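For example, the `gen_ai.usage` attributes make it straightforward to total token consumption across LLM call spans. A hypothetical helper, assuming span attributes are exported as a plain record:

```typescript
// Simplified stand-in for an exported LLM call span.
interface CallSpan {
  attributes: Record<string, string | number>;
}

// Sum input/output token usage across call spans using the
// gen_ai usage attributes documented above.
function totalTokenUsage(spans: CallSpan[]): { input: number; output: number } {
  return spans.reduce(
    (acc, span) => ({
      input: acc.input + Number(span.attributes["gen_ai.usage.input_tokens"] ?? 0),
      output: acc.output + Number(span.attributes["gen_ai.usage.output_tokens"] ?? 0),
    }),
    { input: 0, output: 0 },
  );
}
```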