diff --git a/.github/workflows/publish-schemas.yml b/.github/workflows/publish-schemas.yml
new file mode 100644
index 00000000000..80bfd634bf4
--- /dev/null
+++ b/.github/workflows/publish-schemas.yml
@@ -0,0 +1,35 @@
+name: Update Schema files at OpenTelemetry Website
+
+on:
+ # triggers only on a manual dispatch
+ workflow_dispatch:
+
+jobs:
+ update-docs:
+ runs-on: ubuntu-latest
+ steps:
+ - name: checkout
+ uses: actions/checkout@v2.3.4
+ - name: make-pr
+ env:
+ API_TOKEN_GITHUB: ${{secrets.DOC_UPDATE_TOKEN}}
+ # Destination repo should always be 'open-telemetry/opentelemetry.io'
+ DESTINATION_REPO: open-telemetry/opentelemetry.io
+ # Destination path should be the absolute path to directory to publish in
+ DESTINATION_PATH: static/schemas
+ # Source path should be 'schemas', all files and folders are copied from here to dest
+ SOURCE_PATH: schemas
+ run: |
+ TARGET_DIR=$(mktemp -d)
+ export GITHUB_TOKEN=$API_TOKEN_GITHUB
+ git config --global user.name austinlparker
+ git config --global user.email austin@lightstep.com
+ git clone "https://$API_TOKEN_GITHUB@github.com/$DESTINATION_REPO.git" "$TARGET_DIR"
+ rsync -av --delete "$SOURCE_PATH/" "$TARGET_DIR/$DESTINATION_PATH/"
+ cd "$TARGET_DIR"
+ git checkout -b schemas-$GITHUB_REPOSITORY-$GITHUB_SHA
+ git add .
+ git commit -m "Schemas update from $GITHUB_REPOSITORY"
+ git push -u origin HEAD:schemas-$GITHUB_REPOSITORY-$GITHUB_SHA
+ gh pr create -t "Schemas Update from $GITHUB_REPOSITORY" -b "This is an automated pull request." -B main -H schemas-$GITHUB_REPOSITORY-$GITHUB_SHA
+ echo "done"
diff --git a/.vscode/settings.json b/.vscode/settings.json
index 77fab00c653..45a5840412a 100644
--- a/.vscode/settings.json
+++ b/.vscode/settings.json
@@ -10,7 +10,7 @@
"MD040": false,
},
"yaml.schemas": {
- "https://raw.githubusercontent.com/open-telemetry/build-tools/main/semantic-conventions/semconv.schema.json": [
+ "https://raw.githubusercontent.com/open-telemetry/build-tools/v0.5.0/semantic-conventions/semconv.schema.json": [
"semantic_conventions/**/*.yaml"
]
},
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 4cf5dfdbed6..af10493b84a 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -30,6 +30,14 @@ release.
([#1863](https://github.com/open-telemetry/opentelemetry-specification/pull/1863))
- Lambda instrumentations should check if X-Ray parent context is valid
([#1867](https://github.com/open-telemetry/opentelemetry-specification/pull/1867))
+- Update YAML definitions for events
+ ([#1843](https://github.com/open-telemetry/opentelemetry-specification/pull/1843)):
+ - Mark exception as semconv type "event".
+ - Add YAML definitions for grpc events.
+- Add `messaging.consumer_id` to differentiate between message consumers.
+ ([#1810](https://github.com/open-telemetry/opentelemetry-specification/pull/1810))
+- Clarifications for `http.client_ip` and `http.host`.
+ ([#1890](https://github.com/open-telemetry/opentelemetry-specification/pull/1890))
### Compatibility
@@ -37,6 +45,9 @@ release.
### SDK Configuration
+- Change default value for OTEL_EXPORTER_JAEGER_AGENT_PORT to 6831.
+ ([#1812](https://github.com/open-telemetry/opentelemetry-specification/pull/1812))
+
## v1.6.0 (2021-08-06)
### Context
diff --git a/Makefile b/Makefile
index d7eb60cff3e..989f353ac8e 100644
--- a/Makefile
+++ b/Makefile
@@ -7,8 +7,10 @@ MISSPELL_BINARY=bin/misspell
MISSPELL = $(TOOLS_DIR)/$(MISSPELL_BINARY)
MARKDOWN_LINK_CHECK=markdown-link-check
MARKDOWN_LINT=markdownlint
+
# see https://github.com/open-telemetry/build-tools/releases for semconvgen updates
-SEMCONVGEN_VERSION=0.4.1
+# Keep links in semantic_conventions/README.md and .vscode/settings.json in sync!
+SEMCONVGEN_VERSION=0.5.0
.PHONY: install-misspell
install-misspell:
diff --git a/schemas/1.6.1 b/schemas/1.6.1
new file mode 100644
index 00000000000..1c4ac3172a7
--- /dev/null
+++ b/schemas/1.6.1
@@ -0,0 +1,6 @@
+file_format: 1.0.0
+schema_url: https://opentelemetry.io/schemas/1.6.1
+versions:
+ 1.6.1:
+ 1.5.0:
+ 1.4.0:
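The schema file added above is flat YAML: two top-level fields plus a list of version identifiers. As a non-normative sketch of how a consumer might read such a file (a hand-rolled reader for this flat shape only; a real consumer would use a YAML library, and `parse_schema` is a hypothetical helper):

```python
# Non-normative sketch: reading a flat schema file like schemas/1.6.1 above.
SCHEMA_TEXT = """\
file_format: 1.0.0
schema_url: https://opentelemetry.io/schemas/1.6.1
versions:
  1.6.1:
  1.5.0:
  1.4.0:
"""

def parse_schema(text):
    """Return (top-level fields, version identifiers in file order)."""
    meta, versions = {}, []
    for line in text.splitlines():
        if line.startswith("  ") and line.rstrip().endswith(":"):
            versions.append(line.strip().rstrip(":"))
        elif ":" in line and not line.startswith(" "):
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta, versions

meta, versions = parse_schema(SCHEMA_TEXT)
# The schema_url is expected to end with the newest version listed.
assert meta["file_format"] == "1.0.0"
assert meta["schema_url"].endswith(versions[0])
```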
diff --git a/semantic_conventions/README.md b/semantic_conventions/README.md
index 3b59de62dac..95ce5ba2ef2 100644
--- a/semantic_conventions/README.md
+++ b/semantic_conventions/README.md
@@ -17,12 +17,12 @@ i.e.:
Semantic conventions for the spec MUST adhere to the
[attribute naming conventions](../specification/common/attribute-naming.md).
-Refer to the [syntax](https://github.com/open-telemetry/build-tools/tree/main/semantic-conventions/syntax.md)
+Refer to the [syntax](https://github.com/open-telemetry/build-tools/tree/v0.5.0/semantic-conventions/syntax.md)
for how to write the YAML files for semantic conventions and what the YAML properties mean.
A schema file for VS code is configured in the `/.vscode/settings.json` of this
repository, enabling auto-completion and additional checks. Refer to
-[the generator README](https://github.com/open-telemetry/build-tools/tree/main/semantic-conventions/README.md) for what extension you need.
+[the generator README](https://github.com/open-telemetry/build-tools/tree/v0.5.0/semantic-conventions/README.md) for what extension you need.
## Generating markdown
@@ -33,7 +33,7 @@ formatted Markdown tables for all semantic conventions in the specification. Run
make table-generation
```
-For more information, see the [semantic convention generator](https://github.com/open-telemetry/build-tools/tree/main/semantic-conventions)
+For more information, see the [semantic convention generator](https://github.com/open-telemetry/build-tools/tree/v0.5.0/semantic-conventions)
in the OpenTelemetry build tools repository.
Using this build tool, it is also possible to generate code for use in OpenTelemetry
language projects.
diff --git a/semantic_conventions/trace/exception.yaml b/semantic_conventions/trace/exception.yaml
index 81c1a298ce7..29573771030 100644
--- a/semantic_conventions/trace/exception.yaml
+++ b/semantic_conventions/trace/exception.yaml
@@ -1,6 +1,7 @@
groups:
- id: exception
prefix: exception
+ type: event
brief: >
This document defines the attributes used to
report a single exception associated with a span.
diff --git a/semantic_conventions/trace/http.yaml b/semantic_conventions/trace/http.yaml
index 9c03ccbf500..55b81f433ce 100644
--- a/semantic_conventions/trace/http.yaml
+++ b/semantic_conventions/trace/http.yaml
@@ -28,7 +28,13 @@ groups:
type: string
brief: >
The value of the [HTTP host header](https://tools.ietf.org/html/rfc7230#section-5.4).
- When the header is empty or not present, this attribute should be the same.
+ An empty Host header should also be reported, see note.
+ note: >
+ When the header is present but empty the attribute SHOULD be set to
+ the empty string. Note that this is a valid situation that is expected
+ in certain cases, according to the aforementioned
+ [section of RFC 7230](https://tools.ietf.org/html/rfc7230#section-5.4).
+ When the header is not set the attribute MUST NOT be set.
examples: ['www.example.org']
- id: scheme
type: string
@@ -134,8 +140,18 @@ groups:
brief: >
The IP address of the original client behind all proxies, if
known (e.g. from [X-Forwarded-For](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For)).
- note: >
- This is not necessarily the same as `net.peer.ip`, which would identify the network-level peer, which may be a proxy.
+ note: |
+ This is not necessarily the same as `net.peer.ip`, which would
+ identify the network-level peer, which may be a proxy.
+
+ This attribute should be set when a source of information different
+ from the one used for `net.peer.ip` is available, even if that other
+ source merely confirms the same value as `net.peer.ip`.
+ Rationale: For `net.peer.ip`, one typically does not know if it
+ comes from a proxy, reverse proxy, or the actual client. Setting
+ `http.client_ip` when it's the same as `net.peer.ip` means that
+ one is at least somewhat confident that the address is not that of
+ the closest proxy.
examples: '83.164.160.102'
constraints:
- any_of:
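The `http.client_ip` guidance in the note above can be sketched as follows (a non-normative illustration, not SDK API; header parsing is simplified and `client_ip_attribute` is a hypothetical helper):

```python
def client_ip_attribute(headers, net_peer_ip):
    """Return the value for http.client_ip, or None when it must stay unset.

    Per the note above: only set the attribute when a source other than the
    one behind net.peer.ip (here: X-Forwarded-For) is available, even if it
    merely confirms the same value as net.peer.ip.
    """
    xff = headers.get("X-Forwarded-For")
    if xff is None:
        return None  # no independent source -> leave http.client_ip unset
    # The original client is the first (left-most) entry of the list.
    return xff.split(",")[0].strip()

assert client_ip_attribute(
    {"X-Forwarded-For": "83.164.160.102, 10.0.0.1"}, "10.0.0.1"
) == "83.164.160.102"
assert client_ip_attribute({}, "10.0.0.1") is None
```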
diff --git a/semantic_conventions/trace/messaging.yaml b/semantic_conventions/trace/messaging.yaml
index 6a9630bafdd..03c8baeb176 100644
--- a/semantic_conventions/trace/messaging.yaml
+++ b/semantic_conventions/trace/messaging.yaml
@@ -112,6 +112,14 @@ groups:
[Operation names](#operation-names) section above.
If the operation is "send", this attribute MUST NOT be set, since the
operation can be inferred from the span kind in that case.
+ - id: consumer_id
+ type: string
+ brief: >
+ The identifier for the consumer receiving a message. For Kafka, set it to
+ `{messaging.kafka.consumer_group} - {messaging.kafka.client_id}`, if both are present, or only
+ `messaging.kafka.consumer_group`. For brokers such as RabbitMQ and Artemis, set it to the `client_id`
+ of the client consuming the message.
+ examples: 'mygroup - client-6'
- id: messaging.consumer.synchronous
prefix: messaging
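The composition rule for `messaging.consumer_id` described above could be sketched like this (non-normative; the function name is illustrative, not part of any instrumentation API):

```python
def messaging_consumer_id(consumer_group=None, client_id=None):
    """Derive messaging.consumer_id per the Kafka rule above:
    "{consumer_group} - {client_id}" when both are present, otherwise
    fall back to the consumer group alone."""
    if consumer_group and client_id:
        return f"{consumer_group} - {client_id}"
    return consumer_group

assert messaging_consumer_id("mygroup", "client-6") == "mygroup - client-6"
assert messaging_consumer_id("mygroup") == "mygroup"
```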
diff --git a/semantic_conventions/trace/rpc.yaml b/semantic_conventions/trace/rpc.yaml
index f549cd10ca7..c4399519f23 100644
--- a/semantic_conventions/trace/rpc.yaml
+++ b/semantic_conventions/trace/rpc.yaml
@@ -2,6 +2,7 @@ groups:
- id: rpc
prefix: rpc
brief: 'This document defines semantic conventions for remote procedure calls.'
+ events: [rpc.grpc.message]
attributes:
- id: system
type: string
@@ -141,3 +142,26 @@ groups:
note: >
This is always required for jsonrpc. See the note in the general
RPC conventions for more information.
+ - id: rpc.grpc.message
+ prefix: "message" # TODO: Change the prefix to rpc.grpc.message?
+ type: event
+ brief: "gRPC received/sent message."
+ attributes:
+ - id: type
+ type:
+ members:
+ - id: sent
+ value: "SENT"
+ - id: received
+ value: "RECEIVED"
+ brief: "Whether this is a received or sent message."
+ - id: id
+ type: int
+ brief: "MUST be calculated as two different counters starting from `1`, one for sent messages and one for received messages."
+ note: "This way we guarantee that the values will be consistent between different implementations."
+ - id: compressed_size
+ type: int
+ brief: "Compressed size of the message in bytes."
+ - id: uncompressed_size
+ type: int
+ brief: "Uncompressed size of the message in bytes."
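The two-counter rule for `message.id` above amounts to keeping independent sent/received counters per stream. A non-normative sketch (the class is illustrative, not an SDK type):

```python
class GrpcStreamEvents:
    """Track "message" events for one gRPC stream using the two-counter
    rule above: sent and received ids each start at 1 and grow independently."""

    def __init__(self):
        self._counters = {"SENT": 0, "RECEIVED": 0}
        self.events = []

    def record(self, message_type):
        self._counters[message_type] += 1
        self.events.append({
            "message.type": message_type,
            "message.id": self._counters[message_type],
        })

stream = GrpcStreamEvents()
for t in ("SENT", "RECEIVED", "SENT"):
    stream.record(t)
# Each counter advances independently of the other.
assert [e["message.id"] for e in stream.events] == [1, 1, 2]
```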
diff --git a/spec-compliance-matrix.md b/spec-compliance-matrix.md
index 5280e0fa412..5281aadbc32 100644
--- a/spec-compliance-matrix.md
+++ b/spec-compliance-matrix.md
@@ -163,9 +163,9 @@ Note: Support for environment variables is optional.
| In-memory (mock exporter) | | + | + | + | + | + | + | - | - | + | + | + |
| [OTLP](specification/protocol/otlp.md) | | | | | | | | | | | | |
| OTLP/gRPC Exporter | * | + | + | + | + | | + | | + | + | + | + |
-| OTLP/HTTP binary Protobuf Exporter | * | + | - | + | [-][py1106] | + | + | | | - | - | - |
+| OTLP/HTTP binary Protobuf Exporter | * | + | + | + | [-][py1106] | + | + | | | - | - | - |
| OTLP/HTTP JSON Protobuf Exporter | | + | - | + | [-][py1003] | | - | | | - | - | - |
-| OTLP/HTTP gzip Content-Encoding support | X | + | - | + | + | + | - | | | - | - | - |
+| OTLP/HTTP gzip Content-Encoding support | X | + | + | + | + | + | - | | | - | - | - |
| Concurrent sending | | - | + | + | [-][py1108] | | - | | + | - | - | - |
| Honors retryable responses with backoff | X | + | | + | + | + | - | | | - | - | - |
| Honors non-retryable responses | X | + | | - | + | + | - | | | - | - | - |
diff --git a/specification/context/api-propagators.md b/specification/context/api-propagators.md
index cb33df8d462..4529b182149 100644
--- a/specification/context/api-propagators.md
+++ b/specification/context/api-propagators.md
@@ -101,7 +101,7 @@ in order to preserve any previously existing valid value.
Required arguments:
- A `Context`.
-- The carrier that holds the propagation fields. For example, an incoming message or http response.
+- The carrier that holds the propagation fields. For example, an incoming message or HTTP request.
Returns a new `Context` derived from the `Context` passed as argument,
containing the extracted value, which can be a `SpanContext`,
diff --git a/specification/logs/data-model.md b/specification/logs/data-model.md
index 82ae2703282..b2df17d5864 100644
--- a/specification/logs/data-model.md
+++ b/specification/logs/data-model.md
@@ -668,6 +668,8 @@ Rest of SDIDs -> Attributes["syslog.*"]
### Splunk HEC
+We apply this mapping from HEC to the unified model:
+
    <td>Field</td>
@@ -719,6 +721,35 @@ Rest of SDIDs -> Attributes["syslog.*"]
+When mapping from the unified model to HEC, we apply this additional mapping:
+
+<table>
+  <tr>
+    <td>Unified model element</td>
+    <td>Type</td>
+    <td>Description</td>
+    <td>Maps to HEC</td>
+  </tr>
+  <tr>
+    <td>SeverityText</td>
+    <td>string</td>
+    <td>The severity of the event as a human-readable string.</td>
+    <td>fields['otel.log.severity.text']</td>
+  </tr>
+  <tr>
+    <td>SeverityNumber</td>
+    <td>string</td>
+    <td>The severity of the event as a number.</td>
+    <td>fields['otel.log.severity.number']</td>
+  </tr>
+  <tr>
+    <td>Name</td>
+    <td>string</td>
+    <td>Short event identifier that does not contain varying parts.</td>
+    <td>fields['otel.log.name']</td>
+  </tr>
+</table>
+
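The unified-model-to-HEC mapping in the table above amounts to placing severity and name into HEC `fields`. A non-normative sketch, assuming the log record is a plain dict keyed by unified-model element names:

```python
def to_hec_fields(record):
    """Map the unified-model elements from the table above into HEC fields."""
    mapping = {
        "SeverityText": "otel.log.severity.text",
        "SeverityNumber": "otel.log.severity.number",
        "Name": "otel.log.name",
    }
    return {hec_key: record[model_key]
            for model_key, hec_key in mapping.items() if model_key in record}

fields = to_hec_fields({"SeverityText": "INFO", "SeverityNumber": 9, "Name": "login"})
assert fields == {
    "otel.log.severity.text": "INFO",
    "otel.log.severity.number": 9,
    "otel.log.name": "login",
}
```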
### Log4j
diff --git a/specification/metrics/sdk.md b/specification/metrics/sdk.md
index 8a550608028..17bef8acb21 100644
--- a/specification/metrics/sdk.md
+++ b/specification/metrics/sdk.md
@@ -33,6 +33,12 @@ Table of Contents
## MeterProvider
+A `MeterProvider` MUST provide a way to allow a [Resource](../resource/sdk.md) to
+be specified. If a `Resource` is specified, it SHOULD be associated with all the
+metrics produced by any `Meter` from the `MeterProvider`. The [tracing SDK
+specification](../trace/sdk.md#additional-span-interfaces) has provided some
+suggestions regarding how to implement this efficiently.
+
### Meter Creation
New `Meter` instances are always created through a `MeterProvider` (see
@@ -128,6 +134,9 @@ are the inputs:
applies to [synchronous Instruments](./api.md#synchronous-instrument).
* The `aggregation` (optional) to be used. If not provided, a default
aggregation will be applied by the SDK. The default aggregation is a TODO.
+ * The `exemplar_reservoir` (optional) to use for storing exemplars.
+ This should be a factory or callback similar to aggregation which allows
+ different reservoirs to be chosen by the aggregation.
The SDK SHOULD use the following logic to determine how to process Measurements
made with an Instrument:
@@ -405,6 +414,118 @@ active span](../trace/api.md#context-interaction)).
+------------------+
```
+## Exemplars
+
+An [Exemplar](./datamodel.md#exemplars) is a recorded measurement that exposes
+the following pieces of information:
+
+- The `value` that was recorded.
+- The `time` the measurement was seen.
+- The set of [Attributes](../common/common.md#attributes) associated with the measurement not already included in a metric data point.
+- The associated [trace id and span id](../trace/api.md#retrieving-the-traceid-and-spanid) of the active [Span within Context](../trace/api.md#determining-the-parent-span-from-a-context) of the measurement.
+
+A Metric SDK MUST provide a mechanism to sample `Exemplar`s from measurements.
+
+A Metric SDK MUST allow `Exemplar` sampling to be disabled. In this instance, the SDK SHOULD NOT have overhead related to exemplar sampling.
+
+A Metric SDK MUST sample `Exemplar`s only from measurements within the context of a sampled trace BY DEFAULT.
+
+A Metric SDK MUST allow exemplar sampling to leverage the configuration of a metric aggregator.
+For example, Exemplar sampling of histograms should be able to leverage bucket boundaries.
+
+A Metric SDK SHOULD provide extensible hooks for Exemplar sampling, specifically:
+
+- `ExemplarFilter`: filter which measurements can become exemplars.
+- `ExemplarReservoir`: determine how to store exemplars.
+
+### Exemplar Filter
+
+The `ExemplarFilter` interface MUST provide a method to determine if a
+measurement should be sampled.
+
+This interface SHOULD have access to:
+
+- The value of the measurement.
+- The complete set of `Attributes` of the measurement.
+- The `Context` of the measurement.
+- The timestamp of the measurement.
+
+See [Defaults and Configuration](#defaults-and-configuration) for built-in
+filters.
+
+### Exemplar Reservoir
+
+The `ExemplarReservoir` interface MUST provide a method to offer measurements
+to the reservoir and another to collect accumulated Exemplars.
+
+The "offer" method SHOULD accept measurements, including:
+
+- value
+- `Attributes` (complete set)
+- `Context`
+- timestamp
+
+The "offer" method SHOULD have the ability to pull associated trace and span
+information without needing to record full context. In other words, current
+span context and baggage can be inspected at this point.
+
+The "offer" method does not need to store all measurements it is given and
+MAY further sample beyond the `ExemplarFilter`.
+
+The "collect" method MUST return accumulated `Exemplar`s.
+
+`Exemplar`s MUST retain any attributes available in the measurement that
+are not preserved by aggregation or view configuration. Specifically, at a
+minimum, joining together attributes on an `Exemplar` with those available
+on its associated metric data point should result in the full set of attributes
+from the original sample measurement.
+
+The `ExemplarReservoir` SHOULD avoid allocations when sampling exemplars.
+
+### Exemplar Defaults
+
+The SDK will come with two types of built-in exemplar reservoirs:
+
+1. SimpleFixedSizeExemplarReservoir
+2. AlignedHistogramBucketExemplarReservoir
+
+By default, fixed sized histogram aggregators will use
+`AlignedHistogramBucketExemplarReservoir` and all other aggregators will use
+`SimpleFixedSizeExemplarReservoir`.
+
+*SimpleFixedSizeExemplarReservoir*
+This Exemplar reservoir MAY take a configuration parameter for the size of
+the reservoir pool. The reservoir will accept measurements using an equivalent of
+the [naive reservoir sampling algorithm](https://en.wikipedia.org/wiki/Reservoir_sampling):
+
+ ```
+ index = random_integer(0, num_measurements_seen)
+ if index < reservoir_size then
+ reservoir[index] = measurement
+ end
+ ```
+
+*AlignedHistogramBucketExemplarReservoir*
+This Exemplar reservoir MUST take a configuration parameter that is the
+configuration of a Histogram. This implementation MUST keep the last seen
+measurement that falls within a histogram bucket. The reservoir will accept
+measurements using the equivalent of the following naive algorithm:
+
+ ```
+ bucket = find_histogram_bucket(measurement)
+ if bucket < num_buckets then
+ reservoir[bucket] = measurement
+ end
+
+ def find_histogram_bucket(measurement):
+ for boundary, idx in bucket_boundaries do
+ if measurement <= boundary then
+ return idx
+ end
+ end
+ return bucket_boundaries.length
+ ```
+
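The naive reservoir-sampling pseudocode above can be made concrete as follows (a sketch only; the class and method names are illustrative, not the SDK interface, and the result is probabilistic):

```python
import random

class SimpleFixedSizeReservoir:
    """Fixed-size exemplar pool using the naive reservoir sampling above."""

    def __init__(self, size=4):
        self._pool = [None] * size
        self._seen = 0  # number of measurements offered so far

    def offer(self, measurement):
        # random_integer(0, num_measurements_seen) with inclusive bounds.
        index = random.randint(0, self._seen)
        if index < len(self._pool):
            self._pool[index] = measurement
        self._seen += 1

    def collect(self):
        return [m for m in self._pool if m is not None]

reservoir = SimpleFixedSizeReservoir(size=2)
for value in range(100):
    reservoir.offer(value)
# The pool never exceeds its configured size, regardless of offers seen.
assert 1 <= len(reservoir.collect()) <= 2
```

This keeps each measurement with probability proportional to `size / num_measurements_seen`, which is why the index is drawn over all measurements seen so far rather than over the pool alone.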
## MetricExporter
`MetricExporter` defines the interface that protocol-specific exporters MUST
@@ -453,8 +574,93 @@ Push Metric Exporter sends the data on its own schedule. Here are some examples:
* Sends the data based on a user configured schedule, e.g. every 1 minute.
* Sends the data when there is a severe error.
+#### Interface Definition
+
+A Push Metric Exporter MUST support the following functions:
+
+##### Export(batch)
+
+Exports a batch of `Metrics`. Protocol exporters that will implement this
+function are typically expected to serialize and transmit the data to the
+destination.
+
+`Export` will never be called concurrently for the same exporter instance.
+`Export` can be called again only after the current call returns.
+
+`Export` MUST NOT block indefinitely; there MUST be a reasonable upper limit
+after which the call must time out with an error result (Failure).
+
+Any retry logic that is required by the exporter is the responsibility of the
+exporter. The default SDK SHOULD NOT implement retry logic, as the required
+logic is likely to depend heavily on the specific protocol and backend the metrics
+are being sent to.
+
+**Parameters:**
+
+`batch` - a batch of `Metrics`. The exact data type of the batch is
+language-specific; typically it is some kind of list.
+
+Returns: `ExportResult`
+
+`ExportResult` is one of:
+
+* `Success` - The batch has been successfully exported. For protocol exporters
+ this typically means that the data is sent over the wire and delivered to the
+ destination server.
+* `Failure` - exporting failed. The batch must be dropped. For example, this can
+ happen when the batch contains bad data and cannot be serialized.
+
+Note: this result may be returned via an async mechanism or a callback, if that
+is idiomatic for the language implementation.
+
+##### ForceFlush()
+
+This is a hint to ensure that the export of any `Metrics` the exporter has
+received prior to the call to `ForceFlush` SHOULD be completed as soon as
+possible, preferably before returning from this method.
+
+`ForceFlush` SHOULD provide a way to let the caller know whether it succeeded,
+failed or timed out.
+
+`ForceFlush` SHOULD only be called in cases where it is absolutely necessary,
+such as when using some FaaS providers that may suspend the process after an
+invocation, but before the exporter exports the completed metrics.
+
+`ForceFlush` SHOULD complete or abort within some timeout. `ForceFlush` can be
+implemented as a blocking API or an asynchronous API which notifies the caller
+via a callback or an event. OpenTelemetry client authors can decide if they want
+to make the flush timeout configurable.
+
+##### Shutdown()
+
+Shuts down the exporter. Called when SDK is shut down. This is an opportunity
+for exporter to do any cleanup required.
+
+Shutdown should be called only once for each `MetricExporter` instance. After
+the call to `Shutdown`, subsequent calls to `Export` are not allowed and should
+return a Failure result.
+
+`Shutdown` should not block indefinitely (e.g. if it attempts to flush the data
+and the destination is unavailable). OpenTelemetry client authors can decide if
+they want to make the shutdown timeout configurable.
+
### Pull Metric Exporter
Pull Metric Exporter reacts to the metrics scrapers and reports the data
passively. This pattern has been widely adopted by
[Prometheus](https://prometheus.io/).
+
+## Defaults and Configuration
+
+The SDK MUST provide the following configuration parameters for Exemplar
+sampling:
+
+| Name | Description | Default | Notes |
+|-----------------|---------|-------------|---------|
+| `OTEL_METRICS_EXEMPLAR_FILTER` | Filter for which measurements can become Exemplars. | `"WITH_SAMPLED_TRACE"` | |
+
+Known values for `OTEL_METRICS_EXEMPLAR_FILTER` are:
+
+- `"NONE"`: No measurements are eligible for exemplar sampling.
+- `"ALL"`: All measurements are eligible for exemplar sampling.
+- `"WITH_SAMPLED_TRACE"`: Only allow measurements with a sampled parent span in context.
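The three known filter values could map onto predicates like these (a non-normative sketch; the SDK's actual filter interface shape is not prescribed here, and the predicate signature is an assumption):

```python
import os

# Hypothetical predicates: each takes whether the active span context is sampled.
FILTERS = {
    "NONE": lambda trace_sampled: False,
    "ALL": lambda trace_sampled: True,
    "WITH_SAMPLED_TRACE": lambda trace_sampled: trace_sampled,
}

def exemplar_filter():
    """Resolve the filter from OTEL_METRICS_EXEMPLAR_FILTER, defaulting as above."""
    name = os.environ.get("OTEL_METRICS_EXEMPLAR_FILTER", "WITH_SAMPLED_TRACE")
    return FILTERS[name]

# The default only admits measurements recorded inside a sampled trace.
assert FILTERS["WITH_SAMPLED_TRACE"](True) is True
assert FILTERS["WITH_SAMPLED_TRACE"](False) is False
assert FILTERS["NONE"](True) is False
assert FILTERS["ALL"](False) is True
```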
diff --git a/specification/sdk-environment-variables.md b/specification/sdk-environment-variables.md
index adde7bbfd34..edbf7f67839 100644
--- a/specification/sdk-environment-variables.md
+++ b/specification/sdk-environment-variables.md
@@ -117,12 +117,14 @@ See [OpenTelemetry Protocol Exporter Configuration Options](./protocol/exporter.
| Name | Description | Default |
|---------------------------------|------------------------------------------------------------------|--------------------------------------------------------------------------------------------------|
| OTEL_EXPORTER_JAEGER_AGENT_HOST | Hostname for the Jaeger agent | "localhost" |
-| OTEL_EXPORTER_JAEGER_AGENT_PORT | Port for the Jaeger agent | 6832 |
+| OTEL_EXPORTER_JAEGER_AGENT_PORT | Port for the Jaeger agent `compact` Thrift protocol | 6831 |
| OTEL_EXPORTER_JAEGER_ENDPOINT | HTTP endpoint for Jaeger traces | "http://localhost:14250" |
| OTEL_EXPORTER_JAEGER_TIMEOUT | Maximum time the Jaeger exporter will wait for each batch export | 10s |
| OTEL_EXPORTER_JAEGER_USER | Username to be used for HTTP basic authentication | - |
| OTEL_EXPORTER_JAEGER_PASSWORD | Password to be used for HTTP basic authentication | - |
+See the [Jaeger Agent](https://www.jaegertracing.io/docs/latest/deployment/#agent) documentation.
+
## Zipkin Exporter
**Status**: [Stable](document-status.md)
diff --git a/specification/trace/semantic_conventions/http.md b/specification/trace/semantic_conventions/http.md
index 45d19180f8e..ad2f50d33f1 100644
--- a/specification/trace/semantic_conventions/http.md
+++ b/specification/trace/semantic_conventions/http.md
@@ -58,10 +58,10 @@ Don't set the span status description if the reason can be inferred from `http.s
| `http.method` | string | HTTP request method. | `GET`; `POST`; `HEAD` | Yes |
| `http.url` | string | Full HTTP request URL in the form `scheme://host[:port]/path?query[#fragment]`. Usually the fragment is not transmitted over HTTP, but if it is known, it should be included nevertheless. [1] | `https://www.foo.bar/search?q=OpenTelemetry#SemConv` | No |
| `http.target` | string | The full request target as passed in a HTTP request line or equivalent. | `/path/12314/?q=ddds#123` | No |
-| `http.host` | string | The value of the [HTTP host header](https://tools.ietf.org/html/rfc7230#section-5.4). When the header is empty or not present, this attribute should be the same. | `www.example.org` | No |
+| `http.host` | string | The value of the [HTTP host header](https://tools.ietf.org/html/rfc7230#section-5.4). An empty Host header should also be reported, see note. [2] | `www.example.org` | No |
| `http.scheme` | string | The URI scheme identifying the used protocol. | `http`; `https` | No |
| `http.status_code` | int | [HTTP response status code](https://tools.ietf.org/html/rfc7231#section-6). | `200` | If and only if one was received/sent. |
-| `http.flavor` | string | Kind of HTTP protocol used. [2] | `1.0` | No |
+| `http.flavor` | string | Kind of HTTP protocol used. [3] | `1.0` | No |
| `http.user_agent` | string | Value of the [HTTP User-Agent](https://tools.ietf.org/html/rfc7231#section-5.5.3) header sent by the client. | `CERN-LineMode/2.15 libwww/2.17b3` | No |
| `http.request_content_length` | int | The size of the request payload body in bytes. This is the number of bytes transferred excluding headers and is often, but not always, present as the [Content-Length](https://tools.ietf.org/html/rfc7230#section-3.3.2) header. For requests using transport encoding, this should be the compressed size. | `3495` | No |
| `http.request_content_length_uncompressed` | int | The size of the uncompressed request payload body after transport decoding. Not set if transport encoding not used. | `5493` | No |
@@ -70,7 +70,9 @@ Don't set the span status description if the reason can be inferred from `http.s
**[1]:** `http.url` MUST NOT contain credentials passed via URL in form of `https://username:password@www.example.com/`. In such case the attribute's value should be `https://www.example.com/`.
-**[2]:** If `net.transport` is not specified, it can be assumed to be `IP.TCP` except if `http.flavor` is `QUIC`, in which case `IP.UDP` is assumed.
+**[2]:** When the header is present but empty the attribute SHOULD be set to the empty string. Note that this is a valid situation that is expected in certain cases, according to the aforementioned [section of RFC 7230](https://tools.ietf.org/html/rfc7230#section-5.4). When the header is not set the attribute MUST NOT be set.
+
+**[3]:** If `net.transport` is not specified, it can be assumed to be `IP.TCP` except if `http.flavor` is `QUIC`, in which case `IP.UDP` is assumed.
`http.flavor` MUST be one of the following or, if none of the listed values apply, a custom value:
@@ -179,7 +181,17 @@ If the route cannot be determined, the `name` attribute MUST be set as defined i
**[1]:** `http.url` is usually not readily available on the server side but would have to be assembled in a cumbersome and sometimes lossy process from other information (see e.g. open-telemetry/opentelemetry-python/pull/148). It is thus preferred to supply the raw data that is available.
-**[2]:** This is not necessarily the same as `net.peer.ip`, which would identify the network-level peer, which may be a proxy.
+**[2]:** This is not necessarily the same as `net.peer.ip`, which would
+identify the network-level peer, which may be a proxy.
+
+This attribute should be set when a source of information different
+from the one used for `net.peer.ip` is available, even if that other
+source merely confirms the same value as `net.peer.ip`.
+Rationale: For `net.peer.ip`, one typically does not know if it
+comes from a proxy, reverse proxy, or the actual client. Setting
+`http.client_ip` when it's the same as `net.peer.ip` means that
+one is at least somewhat confident that the address is not that of
+the closest proxy.
**Additional attribute requirements:** At least one of the following sets of attributes is required:
diff --git a/specification/trace/semantic_conventions/messaging.md b/specification/trace/semantic_conventions/messaging.md
index f7317ffff17..41c2341bb4f 100644
--- a/specification/trace/semantic_conventions/messaging.md
+++ b/specification/trace/semantic_conventions/messaging.md
@@ -167,6 +167,7 @@ For message consumers, the following additional attributes may be set:
| Attribute | Type | Description | Examples | Required |
|---|---|---|---|---|
| `messaging.operation` | string | A string identifying the kind of message consumption as defined in the [Operation names](#operation-names) section above. If the operation is "send", this attribute MUST NOT be set, since the operation can be inferred from the span kind in that case. | `receive` | No |
+| `messaging.consumer_id` | string | The identifier for the consumer receiving a message. For Kafka, set it to `{messaging.kafka.consumer_group} - {messaging.kafka.client_id}`, if both are present, or only `messaging.kafka.consumer_group`. For brokers such as RabbitMQ and Artemis, set it to the `client_id` of the client consuming the message. | `mygroup - client-6` | No |
`messaging.operation` MUST be one of the following:
diff --git a/specification/trace/semantic_conventions/rpc.md b/specification/trace/semantic_conventions/rpc.md
index 9a14e465b2f..87879313a6b 100644
--- a/specification/trace/semantic_conventions/rpc.md
+++ b/specification/trace/semantic_conventions/rpc.md
@@ -146,31 +146,29 @@ The [Span Status](../api.md#set-status) MUST be left unset for an `OK` gRPC stat
### Events
In the lifetime of a gRPC stream, an event for each message sent/received on
-client and server spans SHOULD be created with the following attributes:
+client and server spans SHOULD be created. In the case of
+unary calls, only one sent and one received message will be recorded for both
+client and server spans.
-```
--> [time],
- "name" = "message",
- "message.type" = "SENT",
- "message.id" = id
- "message.compressed_size" = ,
- "message.uncompressed_size" =
-```
+The event name MUST be `"message"`.
-```
--> [time],
- "name" = "message",
- "message.type" = "RECEIVED",
- "message.id" = id
- "message.compressed_size" = ,
- "message.uncompressed_size" =
-```
+
+| Attribute | Type | Description | Examples | Required |
+|---|---|---|---|---|
+| `message.type` | string | Whether this is a received or sent message. | `SENT` | No |
+| `message.id` | int | MUST be calculated as two different counters starting from `1`, one for sent messages and one for received messages. [1] | | No |
+| `message.compressed_size` | int | Compressed size of the message in bytes. | | No |
+| `message.uncompressed_size` | int | Uncompressed size of the message in bytes. | | No |
-The `message.id` MUST be calculated as two different counters starting from `1`
-one for sent messages and one for received message. This way we guarantee that
-the values will be consistent between different implementations. In case of
-unary calls only one sent and one received message will be recorded for both
-client and server spans.
+**[1]:** This way we guarantee that the values will be consistent between different implementations.
+
+`message.type` MUST be one of the following:
+
+| Value | Description |
+|---|---|
+| `SENT` | sent |
+| `RECEIVED` | received |
+
## JSON RPC