Add additional LLM span attributes
karthikscale3 committed May 22, 2024
1 parent 9adff43 commit ada3a9a
Showing 3 changed files with 18 additions and 0 deletions.
2 changes: 2 additions & 0 deletions docs/attributes-registry/gen-ai.md
@@ -18,6 +18,8 @@ This document defines the attributes used to describe telemetry in the context o
| `gen_ai.request.model` | string | The name of the LLM a request is being made to. | `gpt-4` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.request.temperature` | double | The temperature setting for the LLM request. | `0.0` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.request.top_p` | double | The top_p sampling setting for the LLM request. | `1.0` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.request.stop` | string | Up to 4 sequences where the API will stop generating further tokens. | `\n` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.request.top_k` | double | The top_k sampling setting for the LLM request. | `1.0` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.response.finish_reasons` | string[] | Array of reasons the model stopped generating tokens, corresponding to each generation received. | `stop` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.response.id` | string | The unique identifier for the completion. | `chatcmpl-123` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
| `gen_ai.response.model` | string | The name of the LLM a response was generated from. | `gpt-4-0613` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
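The attributes in the table above are flat key–value pairs set on an LLM client span. As a minimal sketch, a hypothetical helper (not part of this commit or the OpenTelemetry API) can assemble the `gen_ai.request.*` attributes into a dict that an instrumentation would then pass to the span:

```python
# Hypothetical helper: builds the gen_ai.request.* attribute map from the
# table above. In real instrumentation, the resulting dict would be applied
# to a span (e.g. via the OpenTelemetry SDK); here it is plain Python.
def gen_ai_request_attributes(model, temperature=None, top_p=None,
                              top_k=None, stop=None):
    """Build a gen_ai request attribute map; unset parameters are omitted."""
    attrs = {"gen_ai.request.model": model}
    if temperature is not None:
        attrs["gen_ai.request.temperature"] = temperature
    if top_p is not None:
        attrs["gen_ai.request.top_p"] = top_p
    # The two attributes added by this commit:
    if top_k is not None:
        attrs["gen_ai.request.top_k"] = top_k
    if stop is not None:
        attrs["gen_ai.request.stop"] = stop
    return attrs

attrs = gen_ai_request_attributes("gpt-4", temperature=0.0,
                                  top_p=1.0, top_k=1.0, stop="\n")
print(sorted(attrs))
```

Omitting unset parameters matters: semantic-convention attributes are recommended, not required, so an instrumentation should only emit the ones the caller actually supplied.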
12 changes: 12 additions & 0 deletions model/registry/gen-ai.yaml
@@ -45,6 +45,18 @@ groups:
brief: The top_p sampling setting for the LLM request.
examples: [1.0]
tag: llm-generic-request
- id: request.top_k
stability: experimental
type: double
brief: The top_k sampling setting for the LLM request.
examples: [1.0]
tag: llm-generic-request
- id: request.stop
stability: experimental
type: string
brief: The stop sequences to provide.
examples: ['\n']
tag: llm-generic-request
- id: response.id
stability: experimental
type: string
4 changes: 4 additions & 0 deletions model/trace/gen-ai.yaml
@@ -23,6 +23,10 @@ groups:
requirement_level: recommended
- ref: gen_ai.request.top_p
requirement_level: recommended
- ref: gen_ai.request.top_k
requirement_level: recommended
- ref: gen_ai.request.stop
requirement_level: recommended
- ref: gen_ai.response.id
requirement_level: recommended
- ref: gen_ai.response.model
