
Commit

Merge branch 'main' into php-instrumentations-update
svrnm authored Jan 30, 2024
2 parents 7f37b65 + 8161083 commit f0ca690
Showing 311 changed files with 967 additions and 542 deletions.
1 change: 1 addition & 0 deletions .cspell.yml
@@ -23,6 +23,7 @@ languageSettings:
- CodeBlock
words:
- accountingservice
- actix
- adservice
- alibaba
- Alloc
1 change: 1 addition & 0 deletions .github/CODEOWNERS
@@ -41,3 +41,4 @@ content/en/docs/kubernetes/helm/ @open-telemetry/docs-approvers @open-te
content/en/docs/specs/ @open-telemetry/docs-approvers @open-telemetry/specs-approvers
content/en/docs/security/ @open-telemetry/docs-approvers @open-telemetry/sig-security-maintainers
content/en/ecosystem/demo/ @open-telemetry/demo-approvers @open-telemetry/demo-approvers
content/en/docs/contributing/ @open-telemetry/docs-approvers @open-telemetry/docs-maintainers
2 changes: 1 addition & 1 deletion .github/workflows/scripts/update-registry-versions.sh
@@ -97,7 +97,7 @@ for yaml_file in ${FILES}; do
done;

# We use the sha1 over all version updates to uniquely identify the PR.
tag=$(echo body | sha1sum | awk '{print $1;}')
tag=$(echo "${body}" | sha1sum | awk '{print $1;}')
message="Auto-update registry versions (${tag})"
branch="opentelemetrybot/auto-update-registry-${tag}"

3 changes: 3 additions & 0 deletions .textlintrc.yml
@@ -25,6 +25,7 @@ rules:
defaultTerms: false
skip: []
terms:
- Actix
- Ajax
- Apache
- API
@@ -112,6 +113,8 @@ rules:
# https://github.com/sapegin/textlint-rule-terminology/blob/ca36a645c56d21f27cb9d902b5fb9584030c59e3/index.js#L137-L142.
#
- ['3rd[- ]party', third-party]
- ['back[- ]end(s)?', 'backend$1']
- ['bugfix', 'bug fix']
- [cpp, C++]
- # dotnet|.net -> .NET, but NOT for strings like:
# - File extension: file.net
6 changes: 3 additions & 3 deletions content/en/blog/2022/apisix/index.md
@@ -29,8 +29,8 @@ and sends it to OpenTelemetry Collector through HTTP protocol. Apache APISIX
starts to support this feature in v2.13.0.

One of OpenTelemetry's special features is that the agent/SDK of OpenTelemetry
is not locked with back-end implementation, which gives users flexibilities on
choosing their own back-end services. In other words, users can choose the
is not locked with backend implementation, which gives users flexibilities on
choosing their own backend services. In other words, users can choose the
backend services they want, such as Zipkin and Jaeger, without affecting the
application side.
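
For illustration only, a minimal Collector configuration along these lines might look like the sketch below (assuming a distribution that bundles the Zipkin exporter, such as the contrib build; the endpoints are placeholders, not anything from this post). Swapping the exporters is all it takes to change backends:

```yaml
receivers:
  otlp:
    protocols:
      http: # APISIX exports spans to the Collector over OTLP/HTTP
        endpoint: 0.0.0.0:4318

exporters:
  zipkin:
    endpoint: http://zipkin:9411/api/v2/spans # placeholder Zipkin address
  otlp/jaeger:
    endpoint: jaeger:4317 # Jaeger accepts OTLP natively; placeholder address
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [zipkin, otlp/jaeger] # keep either backend, or both, without touching the app
```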

@@ -192,7 +192,7 @@ resulting in a call chain consisting of two spans.
### Step 1: Deploy OpenTelemetry

The following uses `docker compose` as an example. For other deployments, see
[Getting Started](/docs/collector/getting-started/).
[Quick start](/docs/collector/quick-start/).

You can see the following command to deploy[^1]:

8 changes: 4 additions & 4 deletions content/en/blog/2022/frontend-overhaul/index.md
@@ -114,13 +114,13 @@ This proposal was presented to the OpenTelemetry demo SIG during one of the
weekly Monday meetings and we were given the green light to move ahead. As part
of the changes, we decided to use [Next.js](https://nextjs.org/) to not only
work as the primary front-end application but also to work as an aggregation
layer between the front-end and the gRPC back-end services.
layer between the front-end and the gRPC backend services.

![New Front-end Data Flow](data-flow.png)

As you can see in the diagram, the application has two major connectivity
points, one coming from the browser side (REST) to connect to the Next.js
aggregation layer and the other from the aggregation layer to the back-end
aggregation layer and the other from the aggregation layer to the backend
services (gRPC).

## OpenTelemetry Instrumentation
@@ -129,7 +129,7 @@ The next big thing we worked was a way to instrument both sides of the Next.js
app. To do this we had to connect the app twice to the same collector used by
all the microservices.

A simple back-end solution was designed using the
A simple backend solution was designed using the
[official gRPC exporter](https://www.npmjs.com/package/@opentelemetry/exporter-trace-otlp-grpc)
in combination with the
[Node.js SDK](https://www.npmjs.com/package/@opentelemetry/sdk-node).
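
The post doesn't reproduce the Collector side of this setup, but a rough sketch of a matching configuration could look like the following, where the ports are the OTLP defaults and the CORS origin is an assumed value for a locally served frontend:

```yaml
receivers:
  otlp:
    protocols:
      grpc: # traces from the Next.js server side (Node.js SDK + gRPC exporter)
        endpoint: 0.0.0.0:4317
      http: # traces posted from the browser side of the app
        endpoint: 0.0.0.0:4318
        cors:
          allowed_origins:
            - http://localhost:8080 # assumed frontend origin

exporters:
  otlp:
    endpoint: jaeger:4317 # placeholder trace backend
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
```
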
@@ -160,7 +160,7 @@ CORS requests from the web app.

Once the setup is complete, by loading the application from Docker and
interacting with the different features, we can start looking at the full traces
that begin from the front-end user events all the way to the back-end gRPC
that begin from the front-end user events all the way to the backend gRPC
services.

![Front-end Trace Jaeger Visualization](jaeger.png)
4 changes: 2 additions & 2 deletions content/en/blog/2022/k8s-otel-expose/index.md
@@ -106,8 +106,8 @@ in this setup are mentioned in brackets.
[v1.2.1] installed.
- A Kubernetes [v1.23.3] edge cluster to create a test cluster. Using
[Kind](https://kind.sigs.k8s.io/) is recommended.
- Installed [OpenTelemetry Operator](/docs/collector/getting-started) [v0.58.0]
on both ends.
- Installed [OpenTelemetry Operator](/docs/kubernetes/operator/) [v0.58.0] on
both ends.
- Installed [Jaeger Operator](https://www.jaegertracing.io/docs/1.37/operator/)
[v1.37.0] on your public cluster.
- Installed [cert-manager](https://cert-manager.io/) [v1.9.1] on your public
2 changes: 1 addition & 1 deletion content/en/blog/2023/end-user-discussions-01.md
@@ -82,7 +82,7 @@ you will have to send the spans to a centralized service.
#### 3- Bifurcating data in a pipeline

**Q:** If I want to use the Collector to send different sets of data to
different back-ends, what’s the best way to go about it?
different backends, what’s the best way to go about it?

**A:**
[Connectors](https://github.com/open-telemetry/opentelemetry-collector/pull/6140)
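
As a rough sketch of one way this is commonly done, separate from the connectors approach (every endpoint and backend name below is invented for illustration, and the Prometheus remote write exporter assumes the contrib distribution), pipelines can share one receiver while exporting each signal to a different backend:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

processors:
  batch: {}

exporters:
  otlphttp/backend-a:
    endpoint: https://traces.example.com # placeholder trace backend
  prometheusremotewrite/backend-b:
    endpoint: https://metrics.example.com/api/v1/write # placeholder metrics backend

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/backend-a]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheusremotewrite/backend-b]
```
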
26 changes: 13 additions & 13 deletions content/en/blog/2023/end-user-q-and-a-01.md
@@ -28,10 +28,10 @@ OpenTelemetry with [GraphQL](https://graphql.org/).

J and his team embarked on their OpenTelemetry journey for two main reasons:

- J’s company uses a few different observability back-ends. His team had
switched to a vendor back-end that was different from the back-end used by
other teams that they interfaced with. OpenTelemetry allowed them to continue
to get end-to-end Traces in spite of using different vendors.
- J’s company uses a few different observability backends. His team had switched
to a vendor backend that was different from the backend used by other teams
that they interfaced with. OpenTelemetry allowed them to continue to get
end-to-end Traces in spite of using different vendors.
- His team was using GraphQL, and needed to be able to better understand what
was happening behind the scenes with their GraphQL calls.

@@ -58,9 +58,9 @@ Across the organization, different teams have chosen to use different
observability platforms to suit their needs, resulting in a mix of both open
source and proprietary observability tools.

J’s team had recently migrated from one observability back-end to another. After
J’s team had recently migrated from one observability backend to another. After
this migration, they started seeing gaps in trace data, because other teams that
they integrated with were still using a different observability back-end. As a
they integrated with were still using a different observability backend. As a
result, they no longer had an end-to-end picture of their traces. The solution
was to use a standard, vendor-neutral way to emit telemetry: OpenTelemetry.

@@ -133,7 +133,7 @@ is currently discouraging teams from creating their own custom spans. Since they
do a lot of asynchronous programming, it can be very difficult for developers to
understand how the context is going to behave across asynchronous processes.

Traces are sent to their observability back-end using that vendor’s agent, which
Traces are sent to their observability backend using that vendor’s agent, which
is installed on all of their nodes.

### Besides traces, do you use other signals?
@@ -142,7 +142,7 @@ The team has implemented a custom Node.js plugin for getting certain
[metrics](/docs/concepts/signals/metrics/) data about GraphQL, such as
deprecated field usage and overall query usage, which is something that they
can’t get from their traces. These metrics are being sent to the observability
back-end through the
backend through the
[OpenTelemetry Collector](https://github.com/open-telemetry/opentelemetry-collector#-opentelemetry-collector)’s
[OTLP metrics receiver](https://github.com/open-telemetry/opentelemetry-collector/blob/main/receiver/otlpreceiver/README.md).

@@ -158,7 +158,7 @@ The team uses
[Amazon Elasticache](https://en.wikipedia.org/wiki/Amazon_ElastiCache) and the
[ELK stack](https://www.techtarget.com/searchitoperations/definition/Elastic-Stack)
for logging. They are currently doing a proof-of-concept (POC) of migrating .NET
logs to their observability back-end. The ultimate goal is to have
logs to their observability backend. The ultimate goal is to have
[metrics](/docs/concepts/signals/metrics/),
[logs](/docs/concepts/signals/logs/), and
[traces](/docs/concepts/signals/traces/) under one roof.
@@ -171,7 +171,7 @@ link traces and metrics.

### How is the organization sending telemetry data to various observability back-ends?

J’s team uses a combination of the proprietary back-end agent and the
J’s team uses a combination of the proprietary backend agent and the
OpenTelemetry Collector (for metrics). They are one of the primary users of
OpenTelemetry at J’s company, and he hopes to help get more teams to make the
switch.
@@ -245,7 +245,7 @@ which they intend to give back to the OpenTelemetry community.

### Are you planning on instrumenting mainframe code?

The observability back-end used by J’s team provided native instrumentation for
The observability backend used by J’s team provided native instrumentation for
the mainframe. J and his team would have loved to instrument mainframe code
using OpenTelemetry. Unfortunately, there is currently no OpenTelemetry SDK for
PL/I (and other mainframe languages such as
@@ -288,8 +288,8 @@ JavaScript environments are akin to the Wild West of Development due to:

One of J’s suggestions is to treat OTel JavaScript as a hierarchy, which starts
with a Core JavaScript team that splits into two subgroups: front-end web group,
and back-end group. Front-end and back-end would in turn split. For example, for
the back-end, have a separate Deno and Node.js group.
and backend group. Front-end and backend would in turn split. For example, for
the backend, have a separate Deno and Node.js group.

Another suggestion is to have a contrib maintainers group, separate from core
SDK and API maintainers group.
12 changes: 6 additions & 6 deletions content/en/blog/2023/end-user-q-and-a-03.md
@@ -44,7 +44,7 @@ alerting. The team is responsible for maintaining Observability tooling,
managing deployments related to Observability tooling, and educating teams on
instrumenting code using OpenTelemetry.

Iris first started her career as a software engineer, focusing on back-end
Iris first started her career as a software engineer, focusing on backend
development. She eventually moved to a DevOps Engineering role, and it was in
this role that she was introduced to cloud monitoring through products such as
[Amazon CloudWatch](https://aws.amazon.com/cloudwatch/) and
@@ -91,9 +91,9 @@ created by her team. On the open source tooling front:

- [Grafana](https://grafana.com) is used for dashboards
- OpenTelemetry is used for emitting traces, and
[Grafana Tempo](https://grafana.com/oss/tempo/) is used as a tracing back-end
[Grafana Tempo](https://grafana.com/oss/tempo/) is used as a tracing backend
- [Jaeger](https://jaegertracing.io) is still used in some cases for emitting
traces and as a tracing back-end, because some teams have not yet completely
traces and as a tracing backend, because some teams have not yet completely
moved to OpenTelemetry for instrumenting traces
([via Jaeger’s implementation of the OpenTracing API](https://medium.com/velotio-perspectives/a-comprehensive-tutorial-to-implementing-opentracing-with-jaeger-a01752e1a8ce)).
- [Prometheus Thanos](https://github.com/thanos-io/thanos) (highly-available
@@ -141,7 +141,7 @@ They are not fully there yet:

In spite of that, Iris and her team are leveraging the power of the
[OpenTelemetry Collector](/docs/collector/) to gather and send metrics and
traces to various Observability back-ends. Since she and her team started using
traces to various Observability backends. Since she and her team started using
OpenTelemetry, they started instrumenting more traces. In fact, with their
current setup, Iris has happily reported that they went from processing 1,000
spans per second, to processing 40,000 spans per second!
@@ -301,7 +301,7 @@ Are you currently using any processors on the OTel Collector? \
The team is currently experimenting with processors, namely for data masking ([transform processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/transformprocessor),
or [redaction processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/redactionprocessor)),
especially as they move to using OTel Logs, which will contain sensitive data that
they won’t want to transmit to their Observability back-end. They currently, however,
they won’t want to transmit to their Observability backend. They currently, however,
are only using the [batch processor](https://github.com/open-telemetry/opentelemetry-collector/blob/main/processor/batchprocessor/README.md).
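
As a rough illustration of what that could look like (the receiver, the exporter endpoint, and the blocked-value pattern are all assumptions, not the team's actual configuration), the redaction processor can sit in front of the batch processor they already run:

```yaml
receivers:
  otlp:
    protocols:
      grpc:

processors:
  batch: {} # what the team already runs today
  redaction:
    allow_all_keys: true # keep attribute keys ...
    blocked_values: # ... but mask attribute values matching these patterns
      - '4[0-9]{12}(?:[0-9]{3})?' # illustrative credit-card-like regex
    summary: debug

exporters:
  otlphttp:
    endpoint: https://observability-backend.example.com # placeholder backend

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [redaction, batch]
      exporters: [otlphttp]
```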

### Are you aware of any teams using span events?
@@ -344,7 +344,7 @@ instances of the Collector, using around 8GB memory.
This is something that is currently being explored. The team is exploring
[traces/metrics correlation (exemplars)](/docs/specs/otel/metrics/data-model/#exemplars)
through OpenTelemetry; however, they found that this correlation is accomplished
more easily through their tracing back-end, Tempo.
more easily through their tracing backend, Tempo.

### Are you concerned about the amount of data that you end up producing, transporting, and collecting? How do you ensure data quality?

2 changes: 1 addition & 1 deletion content/en/blog/2023/humans-of-otel.md
@@ -181,7 +181,7 @@ together.

And in order to use all of these tools together, you need to have the data
coming in, the telemetry actually be integrated, so you can't have three
separate streams of telemetry. And then on the back-end, be like, I want to
separate streams of telemetry. And then on the backend, be like, I want to
cross-reference. All of that telemetry has to be organized into an actual graph.
You need a graphical data structure that all these individual signals are a part
of. For me, that is what modern Observability is all about.
2 changes: 1 addition & 1 deletion content/en/blog/2023/k8s-runtime-observability/index.md
@@ -472,5 +472,5 @@ are available from the Tracetest repository.
- [Traces For Kubernetes System Components](https://kubernetes.io/docs/concepts/cluster-administration/system-traces/)
- [Tracing on ContainerD](https://github.com/containerd/containerd/blob/main/docs/tracing.md)
- [Kubernetes: Tools for Monitoring Resources](https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-usage-monitoring/)
- [Getting Started with OTel Collector](/docs/collector/getting-started/)
- [OTel Collector quick start](/docs/collector/quick-start/)
- [Boosting Kubernetes container runtime observability with OpenTelemetry](https://kubernetes.io/blog/2022/12/01/runtime-observability-opentelemetry/)
2 changes: 1 addition & 1 deletion content/en/blog/2023/otel-in-focus-06.md
@@ -70,7 +70,7 @@ operator.
0.79.0 of the Operator includes enhancements such as Prometheus metric exporter
support for Node.js auto-instrumentation and the ability to inject the service
version into the environment of the instrumented application. There is also a
bugfix regarding the OpenTelemetry Collector version not displaying properly in
bug fix regarding the OpenTelemetry Collector version not displaying properly in
the status field.

0.78.0 includes enhancements such as updating various packages, support for
2 changes: 1 addition & 1 deletion content/en/blog/2023/testing-otel-demo/index.md
@@ -343,7 +343,7 @@ the demo. This will evaluate all services in the OpenTelemetry Demo.

During the development of the tests, we noticed some differences in the test
results. For example, some minor fixes were made to the Cypress tests, and some
behaviors were observed in the back-end APIs that can be tested and investigated
behaviors were observed in the backend APIs that can be tested and investigated
at a later time. You can find the details in
[this pull request](https://github.com/open-telemetry/opentelemetry-demo/pull/950)
and
15 changes: 7 additions & 8 deletions content/en/docs/collector/_index.md
@@ -3,7 +3,7 @@ title: Collector
description: Vendor-agnostic way to receive, process and export telemetry data.
aliases: [collector/about]
cascade:
vers: 0.92.0
vers: 0.93.0
weight: 10
---

@@ -15,9 +15,9 @@ The OpenTelemetry Collector offers a vendor-agnostic implementation of how to
receive, process and export telemetry data. It removes the need to run, operate,
and maintain multiple agents/collectors. This works with improved scalability
and supports open source observability data formats (e.g. Jaeger, Prometheus,
Fluent Bit, etc.) sending to one or more open source or commercial back-ends.
The local Collector agent is the default location to which instrumentation
libraries export their telemetry data.
Fluent Bit, etc.) sending to one or more open source or commercial backends. The
local Collector agent is the default location to which instrumentation libraries
export their telemetry data.

## Objectives

@@ -48,10 +48,9 @@ it allows your service to offload data quickly and the collector can take care
of additional handling like retries, batching, encryption or even sensitive data
filtering.

It is also easier to [setup a collector](./getting-started) than you might
think: the default OTLP exporters in each language assume a local collector
endpoint, so if you launch a collector it will automatically start receiving
telemetry.
It is also easier to [setup a collector](quick-start) than you might think: the
default OTLP exporters in each language assume a local collector endpoint, so if
you launch a collector it will automatically start receiving telemetry.
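
For example, a bare-bones configuration like the sketch below (the debug exporter is used purely for illustration) listens on the standard OTLP ports that SDK exporters assume, `localhost:4317` for gRPC and `localhost:4318` for HTTP, so locally running instrumented applications show up without any exporter configuration on their side:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317 # default gRPC port assumed by SDK OTLP exporters
      http:
        endpoint: 0.0.0.0:4318 # default HTTP port assumed by SDK OTLP exporters

exporters:
  debug: # print a summary of received telemetry to the Collector's stdout
    verbosity: basic

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      exporters: [debug]
    logs:
      receivers: [otlp]
      exporters: [debug]
```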

## Status and releases

