GenAI: Rename prompt and completion tokens attributes to input and output #1200

Merged 6 commits on Jul 4, 2024
6 changes: 6 additions & 0 deletions .chloggen/1200.yaml
@@ -0,0 +1,6 @@
change_type: enhancement
component: gen_ai
note: >
Rename `gen_ai.usage.prompt_tokens` to `gen_ai.usage.input_tokens` and `gen_ai.usage.completion_tokens` to `gen_ai.usage.output_tokens`
to align terminology between spans and metrics.
issues: [ 1200 ]
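
For instrumentation authors the change is a straight rename of the two usage attributes on GenAI client spans. A minimal sketch using the OpenTelemetry Python API, with illustrative model names and token counts (not part of this PR):

```python
from opentelemetry import trace

tracer = trace.get_tracer("genai.instrumentation.example")

# Hypothetical GenAI client call instrumented with the renamed attributes.
with tracer.start_as_current_span("chat gpt-4") as span:
    span.set_attribute("gen_ai.system", "openai")
    span.set_attribute("gen_ai.request.model", "gpt-4")
    # Previously gen_ai.usage.prompt_tokens / gen_ai.usage.completion_tokens.
    span.set_attribute("gen_ai.usage.input_tokens", 100)
    span.set_attribute("gen_ai.usage.output_tokens", 180)
```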
16 changes: 14 additions & 2 deletions docs/attributes-registry/gen-ai.md
@@ -6,6 +6,9 @@

# Gen AI

- [Gen Ai](#gen-ai-attributes)
- [Gen Ai Deprecated](#gen-ai-deprecated-attributes)

## Gen AI Attributes

This document defines the attributes used to describe telemetry in the context of Generative Artificial Intelligence (GenAI) Models requests and responses.
@@ -28,8 +31,8 @@ This document defines the attributes used to describe telemetry in the context o
| `gen_ai.response.model` | string | The name of the model that generated the response. | `gpt-4-0613` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.system` | string | The Generative AI product as identified by the client or server instrumentation. [3] | `openai` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.token.type` | string | The type of token being counted. | `input`; `output` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.usage.completion_tokens` | int | The number of tokens used in the GenAI response (completion). | `180` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.usage.prompt_tokens` | int | The number of tokens used in the GenAI input or prompt. | `100` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.usage.input_tokens` | int | The number of tokens used in the GenAI input or prompt. | `100` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.usage.output_tokens` | int | The number of tokens used in the GenAI response (completion). | `180` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |

**[1]:** It's RECOMMENDED to format completions as JSON string matching [OpenAI messages format](https://platform.openai.com/docs/guides/text-generation)
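
As an illustration of that recommendation, a completion could be serialized to a JSON string of OpenAI-style messages before being recorded. A sketch only; the message content is made up:

```python
import json

# Hypothetical completion rendered as a JSON string in the OpenAI
# messages format referenced above; this string is what would be recorded.
completion_json = json.dumps(
    [{"role": "assistant", "content": "The capital of France is Paris."}]
)
print(completion_json)
```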

@@ -53,3 +56,12 @@ For custom model, a custom friendly name SHOULD be used. If none of these option
| -------- | ------------------------------------------ | ---------------------------------------------------------------- |
| `input` | Input tokens (prompt, input, etc.) | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `output` | Output tokens (completion, response, etc.) | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
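
The `input`/`output` values of `gen_ai.token.type` are what the renamed span attributes now line up with on the metrics side. A rough sketch of recording token usage with the OpenTelemetry Python metrics API; the `gen_ai.client.token.usage` histogram name and the counts are assumptions for illustration:

```python
from opentelemetry import metrics

meter = metrics.get_meter("genai.instrumentation.example")

# Assumed histogram name; the point is the shared gen_ai.token.type attribute.
token_usage = meter.create_histogram(
    name="gen_ai.client.token.usage",
    unit="{token}",
    description="Number of input and output tokens used",
)

token_usage.record(100, attributes={"gen_ai.token.type": "input"})
token_usage.record(180, attributes={"gen_ai.token.type": "output"})
```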

## Gen AI Deprecated Attributes

Describes deprecated `gen_ai` attributes.

| Attribute | Type | Description | Examples | Stability |
| -------------------------------- | ---- | ----------------------------------------------------- | -------- | ------------------------------------------------------------------------------------------------------------------ |
| `gen_ai.usage.completion_tokens` | int | Deprecated, use `gen_ai.usage.output_tokens` instead. | `42` | ![Deprecated](https://img.shields.io/badge/-deprecated-red)<br>Replaced by `gen_ai.usage.output_tokens` attribute. |
| `gen_ai.usage.prompt_tokens` | int | Deprecated, use `gen_ai.usage.input_tokens` instead. | `42` | ![Deprecated](https://img.shields.io/badge/-deprecated-red)<br>Replaced by `gen_ai.usage.input_tokens` attribute. |
4 changes: 2 additions & 2 deletions docs/gen-ai/gen-ai-spans.md
@@ -57,8 +57,8 @@ These attributes track input data and metadata for a request to an GenAI model.
| [`gen_ai.response.finish_reasons`](/docs/attributes-registry/gen-ai.md) | string[] | Array of reasons the model stopped generating tokens, corresponding to each generation received. | `["stop"]` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.response.id`](/docs/attributes-registry/gen-ai.md) | string | The unique identifier for the completion. | `chatcmpl-123` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.response.model`](/docs/attributes-registry/gen-ai.md) | string | The name of the model that generated the response. [3] | `gpt-4-0613` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.usage.completion_tokens`](/docs/attributes-registry/gen-ai.md) | int | The number of tokens used in the GenAI response (completion). | `180` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.usage.prompt_tokens`](/docs/attributes-registry/gen-ai.md) | int | The number of tokens used in the GenAI input or prompt. | `100` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.usage.input_tokens`](/docs/attributes-registry/gen-ai.md) | int | The number of tokens used in the GenAI input or prompt. | `100` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| [`gen_ai.usage.output_tokens`](/docs/attributes-registry/gen-ai.md) | int | The number of tokens used in the GenAI response (completion). | `180` | `Recommended` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |

**[1]:** The name of the GenAI model a request is being made to. If the model is supplied by a vendor, then the value must be the exact name of the model requested. If the model is a fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned.

17 changes: 17 additions & 0 deletions model/registry/deprecated/gen-ai.yaml
@@ -0,0 +1,17 @@
groups:
- id: registry.gen_ai.deprecated
type: attribute_group
brief: Describes deprecated `gen_ai` attributes.
attributes:
- id: gen_ai.usage.prompt_tokens
type: int
stability: experimental
deprecated: Replaced by `gen_ai.usage.input_tokens` attribute.
brief: "Deprecated, use `gen_ai.usage.input_tokens` instead."
examples: [42]
- id: gen_ai.usage.completion_tokens
type: int
stability: experimental
deprecated: Replaced by `gen_ai.usage.output_tokens` attribute.
brief: "Deprecated, use `gen_ai.usage.output_tokens` instead."
examples: [42]
4 changes: 2 additions & 2 deletions model/registry/gen-ai.yaml
@@ -90,12 +90,12 @@ groups:
type: string[]
brief: Array of reasons the model stopped generating tokens, corresponding to each generation received.
examples: ['stop']
- id: usage.prompt_tokens
- id: usage.input_tokens
stability: experimental
type: int
brief: The number of tokens used in the GenAI input or prompt.
examples: [100]
- id: usage.completion_tokens
- id: usage.output_tokens
stability: experimental
type: int
brief: The number of tokens used in the GenAI response (completion).
4 changes: 2 additions & 2 deletions model/trace/gen-ai.yaml
@@ -36,9 +36,9 @@
fine-tuned custom model, the value should have a more specific name than the base model that's been fine-tuned.
- ref: gen_ai.response.finish_reasons
requirement_level: recommended
- ref: gen_ai.usage.prompt_tokens
- ref: gen_ai.usage.input_tokens
requirement_level: recommended
- ref: gen_ai.usage.completion_tokens
- ref: gen_ai.usage.output_tokens
requirement_level: recommended
events:
- gen_ai.content.prompt
5 changes: 5 additions & 0 deletions schema-next.yaml
@@ -11,6 +11,11 @@ versions:
messaging.rocketmq.client_group: messaging.consumer.group.name
messaging.evenhubs.consumer.group: messaging.consumer.group.name
message.servicebus.destination.subscription_name: messaging.destination.subscription.name
# https://github.com/open-telemetry/semantic-conventions/pull/1200
- rename_attributes:
attribute_map:
gen_ai.usage.completion_tokens: gen_ai.usage.output_tokens
gen_ai.usage.prompt_tokens: gen_ai.usage.input_tokens
spans:
changes:
# https://github.com/open-telemetry/semantic-conventions/pull/1002
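
The `rename_attributes` entry above is what lets schema-aware consumers upgrade telemetry produced with the old names. A minimal, unofficial sketch of applying that map to a recorded attribute set:

```python
# Rename map copied from the schema-next.yaml entry above.
RENAMES = {
    "gen_ai.usage.completion_tokens": "gen_ai.usage.output_tokens",
    "gen_ai.usage.prompt_tokens": "gen_ai.usage.input_tokens",
}

def upgrade_attributes(attributes: dict) -> dict:
    """Return a copy with the deprecated gen_ai usage keys renamed."""
    return {RENAMES.get(key, key): value for key, value in attributes.items()}

print(upgrade_attributes({"gen_ai.usage.prompt_tokens": 100}))
# -> {'gen_ai.usage.input_tokens': 100}
```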