diff --git a/docs/attributes-registry/gen-ai.md b/docs/attributes-registry/gen-ai.md
index 56710feb2e..9f33f62bd0 100644
--- a/docs/attributes-registry/gen-ai.md
+++ b/docs/attributes-registry/gen-ai.md
@@ -22,7 +22,6 @@ This document defines the attributes used to describe telemetry in the context o
 | `gen_ai.request.top_k` | double | The top_k sampling setting for the LLM request. | `1.0` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
 | `gen_ai.request.frequency_penalty` | double | The frequency penalty setting for the LLM request. | `1.0` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
 | `gen_ai.request.presence_penalty` | double | The presence penalty setting for the LLM request. | `1.0` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
-| `gen_ai.request.top_p` | double | The top_p sampling setting for the LLM request. | `1.0` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
 | `gen_ai.response.finish_reasons` | string[] | Array of reasons the model stopped generating tokens, corresponding to each generation received. | `stop` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
 | `gen_ai.response.id` | string | The unique identifier for the completion. | `chatcmpl-123` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
 | `gen_ai.response.model` | string | The name of the LLM a response was generated from. | `gpt-4-0613` | ![Experimental](https://img.shields.io/badge/-experimental-blue) |
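
For context, here is a minimal sketch of how an instrumentation might record the registry attributes touched by this hunk on a span, using the OpenTelemetry Python API. The span name and literal values are illustrative assumptions (they mirror the example column above), not part of this change; `gen_ai.request.top_p` is omitted because this patch removes it from the registry.

```python
# Illustrative sketch only: set the GenAI request/response attributes from the
# registry table on a manually created span. Values are example placeholders.
from opentelemetry import trace

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("gen_ai.chat_completion") as span:
    # Request-side sampling and penalty settings.
    span.set_attribute("gen_ai.request.top_k", 1.0)
    span.set_attribute("gen_ai.request.frequency_penalty", 1.0)
    span.set_attribute("gen_ai.request.presence_penalty", 1.0)

    # ... model call would happen here ...

    # Response-side attributes, using the example values from the table.
    span.set_attribute("gen_ai.response.id", "chatcmpl-123")
    span.set_attribute("gen_ai.response.model", "gpt-4-0613")
    span.set_attribute("gen_ai.response.finish_reasons", ["stop"])
```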