OTEP: Logger.Enabled #4290

Status: Open. The PR wants to merge 53 commits into the base branch `main`.
Changes from 48 of 53 commits.

Commits:
- `ae8ef9a` otep boilerplate (pellared, Nov 12, 2024)
- `3ddff38` Update file name (pellared, Nov 12, 2024)
- `f75f983` Merge branch 'main' into otep-log-enabled (pellared, Nov 13, 2024)
- `a724fc6` Merge branch 'main' into otep-log-enabled (pellared, Nov 21, 2024)
- `c636a57` Merge branch 'main' into otep-log-enabled (pellared, Nov 29, 2024)
- `fae752b` Merge branch 'main' into otep-log-enabled (pellared, Dec 10, 2024)
- `5a4e371` Update 4290-logger-enabled.md (pellared, Dec 10, 2024)
- `32c39dd` Update 4290-logger-enabled.md (pellared, Dec 10, 2024)
- `acd70b1` Update 4290-logger-enabled.md (pellared, Dec 10, 2024)
- `1e6243e` Update 4290-logger-enabled.md (pellared, Dec 10, 2024)
- `9275c7a` Update 4290-logger-enabled.md (pellared, Dec 10, 2024)
- `38e4c98` Update 4290-logger-enabled.md (pellared, Dec 10, 2024)
- `dc07a15` Update 4290-logger-enabled.md (pellared, Dec 10, 2024)
- `0957402` Update oteps/logs/4290-logger-enabled.md (pellared, Dec 10, 2024)
- `50298f1` Update oteps/logs/4290-logger-enabled.md (pellared, Dec 10, 2024)
- `3cf5017` Update oteps/logs/4290-logger-enabled.md (pellared, Dec 10, 2024)
- `765166a` Update 4290-logger-enabled.md (pellared, Dec 11, 2024)
- `2503bdc` Update 4290-logger-enabled.md (pellared, Dec 11, 2024)
- `9bb6185` Apply suggestions from code review (pellared, Dec 11, 2024)
- `ee60483` Apply suggestions from code review (pellared, Dec 11, 2024)
- `c596398` Update 4290-logger-enabled.md (pellared, Dec 11, 2024)
- `a4257ef` Update 4290-logger-enabled.md (pellared, Dec 16, 2024)
- `7c0b7de` Merge branch 'main' into otep-log-enabled (pellared, Dec 16, 2024)
- `21b28e5` Update 4290-logger-enabled.md (pellared, Dec 16, 2024)
- `0e66e57` Update 4290-logger-enabled.md (pellared, Dec 16, 2024)
- `3db75b9` Update 4290-logger-enabled.md (pellared, Dec 16, 2024)
- `33d36c5` Update 4290-logger-enabled.md (pellared, Dec 16, 2024)
- `f16513d` Update 4290-logger-enabled.md (pellared, Dec 16, 2024)
- `ab8e05e` Update 4290-logger-enabled.md (pellared, Dec 16, 2024)
- `cc0eb1e` Update 4290-logger-enabled.md (pellared, Dec 16, 2024)
- `537a3c0` Update 4290-logger-enabled.md (pellared, Dec 16, 2024)
- `5a76201` Update 4290-logger-enabled.md (pellared, Dec 16, 2024)
- `5046347` Update 4290-logger-enabled.md (pellared, Dec 16, 2024)
- `b62b2c2` Update 4290-logger-enabled.md (pellared, Dec 16, 2024)
- `ada00e4` Update 4290-logger-enabled.md (pellared, Dec 16, 2024)
- `ba779dd` Update 4290-logger-enabled.md (pellared, Dec 16, 2024)
- `a3401bf` Update 4290-logger-enabled.md (pellared, Dec 16, 2024)
- `81bb7e0` Update 4290-logger-enabled.md (pellared, Dec 16, 2024)
- `65683dd` Update 4290-logger-enabled.md (pellared, Dec 16, 2024)
- `db796ec` Update CHANGELOG.md (pellared, Dec 16, 2024)
- `637ec17` Update 4290-logger-enabled.md (pellared, Dec 16, 2024)
- `ec21e17` Update 4290-logger-enabled.md (pellared, Dec 16, 2024)
- `27fa6c8` Update 4290-logger-enabled.md (pellared, Dec 16, 2024)
- `af8f08b` Update 4290-logger-enabled.md (pellared, Dec 16, 2024)
- `568b2b7` Update 4290-logger-enabled.md (pellared, Dec 16, 2024)
- `cb3b52a` Update 4290-logger-enabled.md (pellared, Dec 16, 2024)
- `bdc1eea` Update 4290-logger-enabled.md (pellared, Dec 16, 2024)
- `5db20fc` Merge branch 'main' into otep-log-enabled (pellared, Dec 17, 2024)
- `216a218` Update 4290-logger-enabled.md (pellared, Dec 17, 2024)
- `e5d91d5` Merge branch 'main' into otep-log-enabled (pellared, Dec 17, 2024)
- `5820653` Update 4290-logger-enabled.md (pellared, Dec 18, 2024)
- `5d35d3c` Update 4290-logger-enabled.md (pellared, Dec 18, 2024)
- `80073ea` Update 4290-logger-enabled.md (pellared, Dec 18, 2024)
3 changes: 3 additions & 0 deletions CHANGELOG.md
@@ -35,6 +35,9 @@ release.

### OTEPs

- Create `Logger.Enabled` OTEP.
([#4290](https://github.com/open-telemetry/opentelemetry-specification/pull/4290))

## v1.40.0 (2024-12-12)

### Context
210 changes: 210 additions & 0 deletions oteps/logs/4290-logger-enabled.md
@@ -0,0 +1,210 @@
# Logger.Enabled

## Motivation

In applications requiring high performance,
optimizing the cost of enabled logging is important,
but it is often equally or more critical to minimize the overhead
of logging that is disabled or not exported.

The consumers of OpenTelemetry clients want to:

1. **Correctly** and **efficiently** bridge features
like `LogLevelEnabled` in log bridge/appender implementations.
2. Avoid allocating memory to store a log record,
avoid performing computationally expensive operations,
and avoid exporting
when emitting a log or event record is unnecessary.
3. Configure a minimum log severity level on the SDK level.
4. Filter out log and event records when they are inside a span
that has been sampled out (the span is valid and has a sampled flag of `false`).
5. **Efficiently** support high-performance logging destinations
like [Linux user_events](https://docs.kernel.org/trace/user_events.html)
and [ETW (Event Tracing for Windows)](https://learn.microsoft.com/windows/win32/etw/about-event-tracing).
6. Allow **fine-grained** filtering control for logging pipelines
when using an OpenTelemetry Collector is not feasible,
e.g., for mobile devices, serverless, or IoT.

Without a `Logger.Enabled` check in the OpenTelemetry Logs API
and corresponding implementations in the SDK,
achieving these goals is not feasible.

This OTEP addresses [Specify how Logs SDK implements Enabled #4207](https://github.com/open-telemetry/opentelemetry-specification/issues/4207).

## Explanation

For (1) and (2), the user can call the Logs API `Logger.Enabled` function,
which tells the caller whether a `Logger` is going to emit
a log record for the given arguments.
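
As a sketch of use cases (1) and (2), a log bridge can consult `Logger.Enabled`
before it builds and emits a record. The types below are simplified stand-ins
defined only for this example; they are not the signatures of any particular
Logs API or SDK:

```go
package bridge

import (
	"context"
	"fmt"
)

// Simplified stand-ins for the Logs API (illustration only).
type Severity int

type EnabledParameters struct {
	Severity Severity
}

type Record struct {
	Severity Severity
	Body     string
}

type Logger interface {
	Enabled(ctx context.Context, params EnabledParameters) bool
	Emit(ctx context.Context, record Record)
}

// Log shows what a bridge/appender can do: check Enabled first so that
// formatting and allocating the record are skipped entirely when no SDK
// component would process it.
func Log(ctx context.Context, logger Logger, sev Severity, format string, args ...any) {
	if !logger.Enabled(ctx, EnabledParameters{Severity: sev}) {
		return // cheap early exit: nothing is allocated or formatted
	}
	logger.Emit(ctx, Record{Severity: sev, Body: fmt.Sprintf(format, args...)})
}
```

A `LogLevelEnabled`-style bridge feature maps onto the same `Enabled` check
without emitting anything.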

For (3), the user can declaratively configure the Logs SDK
using `LoggerConfigurator` to set the `minimum_severity_level`
of a `LoggerConfig`.

**@lmolkova** (Contributor) commented on Dec 16, 2024:

I think we'll (eventually) have more config options for severity, sampling, and other things, e.g. a few extra scenarios I can think of:

- minimum level for this event name
- maximum level to sample at (e.g. you only want to sample info and lower)
- minimum level to record exception stack traces at
- sampling on/off rate for this event name
- sample based on spans, but also record all logs without a parent span (e.g. startup logs are quite important)
- throttling config (record this event name, but at most X times per time unit)
- ...

We'll need to think about how to structure them on the LoggerConfig API so that it's extendable but only introduces options we have an immediate need for.

E.g. I think we may need something like `LoggerConfig.sampling_strategy = always_on | span_based` instead of `disabled_on_sampled_out_spans`.

Also we'd need to consider having a callback in addition to pure declarative config (e.g. for cases like sampling based on spans, but also startup logs are all in). For this we should consider having a sampler-like interface which would have implementations similar to the tracer sampler, and then we might want to steal declarative config from `tracer_provider.sampler`.

TL;DR:

- let's figure out what we should do first - declarative config vs API extensibility
- let's come up with extendable options

**@pellared** (Member, Author) replied on Dec 17, 2024:

> We'll need to think about how to structure them on the LoggerConfig API so that it's extendable but only introduces options we have an immediate need for.

Adding a field is a backwards-compatible change. I agree that we should add new fields (options) only if we have a big demand for it.

> we'd need to consider having a callback in addition to pure declarative config

For more advanced scenarios, I think the `LogRecordProcessor` comes into play. It should handle the most advanced scenarios.

I will do my best to describe it in the "Future possibilities" section.

> let's figure out what we should do first - declarative config vs API extensibility

IMO extensibility (`LogRecordProcessor.Enabled`) is more important as it can also solve the (3) and (4) use cases even without the config extensions.

> let's come up with extendable options

I would like to know if there is a use case that the proposal of adding `LogRecordProcessor.Enabled` would not solve.

**@pellared** (Member, Author) replied on Dec 18, 2024:

> Also we'd need to consider having a callback in addition to pure declarative config

I added "Alternatives: Dynamic Evaluation in LoggerConfig".

> I think we'll (eventually) have more config options for severity, sampling, and other things

I added "Future possibilities: Extending LoggerConfig".

We would need to take a closer look at the alternatives when prototyping and during the Development phase of the spec.

For (4), the user can use the Tracing API to check whether
there is a sampled span in the current context before creating
and emitting a log record.
However, the user may also want to declaratively configure the Logs SDK
using `LoggerConfigurator` to set the `disabled_on_sampled_out_spans`
of a `LoggerConfig`.
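
To illustrate (3) and (4) together, the sketch below shows what the extended
`LoggerConfig` and a `LoggerConfigurator` could look like. The Go shapes are
hypothetical and mirror the field names used in this OTEP; they are not the API
of any existing SDK:

```go
package configsketch

// Severity follows the OpenTelemetry severity number model (INFO=9, WARN=13).
type Severity int

const (
	SeverityInfo Severity = 9
	SeverityWarn Severity = 13
)

type InstrumentationScope struct {
	Name string
}

// LoggerConfig as extended by this OTEP.
type LoggerConfig struct {
	Disabled                  bool
	MinimumSeverityLevel      Severity // use case (3)
	DisabledOnSampledOutSpans bool     // use case (4)
}

// LoggerConfigurator maps an instrumentation scope to its LoggerConfig.
type LoggerConfigurator func(scope InstrumentationScope) LoggerConfig

// NewConfigurator returns a configurator where a noisy scope emits only WARN
// and above, and only inside sampled spans; other scopes emit INFO and above.
func NewConfigurator() LoggerConfigurator {
	return func(scope InstrumentationScope) LoggerConfig {
		if scope.Name == "example.com/chatty-instrumentation" {
			return LoggerConfig{
				MinimumSeverityLevel:      SeverityWarn,
				DisabledOnSampledOutSpans: true,
			}
		}
		return LoggerConfig{MinimumSeverityLevel: SeverityInfo}
	}
}
```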

For (5) and (6), the user can hook into `Logger.Enabled` Logs API calls
by registering with the Logs SDK a `LogRecordProcessor` that implements `Enabled`.

## Internal details

Regarding (1) and (2), the Logs API specification has already introduced `Logger.Enabled`:

- [Add Enabled method to Logger #4020](https://github.com/open-telemetry/opentelemetry-specification/pull/4020)
- [Define Enabled parameters for Logger #4203](https://github.com/open-telemetry/opentelemetry-specification/pull/4203)

The main purpose of this OTEP is to extend the SDK's `LoggerConfig`
with `minimum_severity_level` and optionally `disabled_on_sampled_out_spans`
and to extend the `LogRecordProcessor` with an `Enabled` operation.

The addition of `LoggerConfig.minimum_severity_level` is intended
to serve the (3) use case in an easy-to-set-up and efficient way.

The addition of `LoggerConfig.disabled_on_sampled_out_spans` can serve the (4)
use case in a declarative way, configured at the SDK level,
if the user wants to capture only the log records that are
within sampled spans.

The addition of `LogRecordProcessor.Enabled` is necessary for
use cases where filtering is dynamic and coupled to processing,
such as (5) and (6).
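
As an illustration of what a processor might do inside `Enabled`, here is a
sketch of a wrapping, per-pipeline minimum-severity processor (use case (6)).
The types are simplified stand-ins, not the SDK's actual interfaces:

```go
package processorsketch

import "context"

// Simplified stand-ins for SDK types (illustration only).
type Severity int

type EnabledParameters struct {
	Severity Severity
}

type Record struct {
	Severity Severity
	Body     string
}

// LogRecordProcessor extended with the Enabled operation proposed here.
type LogRecordProcessor interface {
	OnEmit(ctx context.Context, record Record)
	Enabled(ctx context.Context, params EnabledParameters) bool
}

// MinSeverityProcessor wraps another processor and filters out records below
// its minimum severity, similar in spirit to the minsev processor referenced
// in the prior art section.
type MinSeverityProcessor struct {
	Min  Severity
	Next LogRecordProcessor
}

func (p *MinSeverityProcessor) Enabled(ctx context.Context, params EnabledParameters) bool {
	if params.Severity < p.Min {
		return false // this pipeline would drop the record anyway
	}
	return p.Next.Enabled(ctx, params)
}

func (p *MinSeverityProcessor) OnEmit(ctx context.Context, record Record) {
	if record.Severity < p.Min {
		return
	}
	p.Next.OnEmit(ctx, record)
}
```

An exporting processor for a destination such as user_events (use case (5))
would instead typically answer `Enabled` based on whether the destination
currently has any listeners registered.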

Both the `LoggerConfig` and the registered `LogRecordProcessors` take part
in the evaluation of the `Logger.Enabled` operation.
The `LoggerConfig` is evaluated first.
If it determines that the logger is enabled,
then the registered `LogRecordProcessors` are evaluated.
In that case, `false` is returned only if all registered
`LogRecordProcessors` return `false` from their `Enabled` calls,
as this means no processor will process the log record.
Pseudo-code:

<!-- markdownlint-disable no-hard-tabs -->
```go
func (l *logger) Enabled(ctx context.Context, param EnabledParameters) bool {
	config := l.config()
	if config.Disabled {
		// The logger is disabled.
		return false
	}
	if param.Severity < config.MinSeverityLevel {
		// The severity is less severe than the logger's minimum level.
		return false
	}
	if config.DisabledOnSampledOutSpans && !trace.SpanContextFromContext(ctx).IsSampled() {
		// The logger is disabled for spans that are sampled out.
		return false
	}

	processors := l.provider.processors()
	for _, p := range processors {
		if p.Enabled(ctx, param) {
			// At least one processor will process the record.
			return true
		}
	}
	// No processor will process the record.
	return false
}
```
<!-- markdownlint-enable no-hard-tabs -->

**@lmolkova** (Contributor) commented on the `p.Enabled(ctx, param)` call, Dec 17, 2024:

What would you expect processors to do inside `Enabled`? If this is a delegating processor, would it call the underlying one? When I'm implementing a processor that does filtering, should I assume it's in a certain position in the pipeline?

I.e. it sounds like there should be just one processor in the pipeline that decides whether a log should be created. If so, should it be a processor responsibility at all, or can we call this component LogFilter, LogSampler, etc. and make it a configurable behavior independent of the processor?

If this component returns true, all processing pipelines would get the log record and can drop it if they are filtering too.
If it returns false, none of them should. This is the same as you have outlined in the OTEP, just extracts filtering from processing.

**@lmolkova** (Contributor) added on Dec 17, 2024:

Some examples of logging filters:

Here's a good old .NET ILogger filter:

```csharp
builder.Logging.AddFilter((provider, category, logLevel) =>
{
    if (provider.Contains("ConsoleLoggerProvider")
        && category.Contains("Controller")
        && logLevel >= LogLevel.Information)
    {
        return true;
    }
    else if (provider.Contains("ConsoleLoggerProvider")
        && category.Contains("Microsoft")
        && logLevel >= LogLevel.Information)
    {
        return true;
    }
    else
    {
        return false;
    }
});
```

or good old log4j:

```java
private void updateLoggers(final Appender appender, final Configuration config) {
    final Level level = null;
    final Filter filter = null;
    for (final LoggerConfig loggerConfig : config.getLoggers().values()) {
        loggerConfig.addAppender(appender, level, filter);
    }
    config.getRootLogger().addAppender(appender, level, filter);
}
```

I.e. composition instead of inheritance - when the logging is configured, each appender/provider pipeline gets one dedicated filter independent of the appender/provider logic.

**@pellared** (Member, Author) replied on Dec 17, 2024:

> What would you expect processors to do inside `Enabled`? If this is a delegating processor, would it call the underlying one? When I'm implementing a processor that does filtering, should I assume it's in a certain position in the pipeline?

Depends on the processor.

1. If this is an exporting processor, e.g. for user_events, then `Enabled` returns results only for itself. Example: https://github.com/open-telemetry/opentelemetry-rust-contrib/blob/30a3f4eeac0be5e4337f020b7e5bfe0273298858/opentelemetry-user-events-logs/src/logs/exporter.rs#L158-L165. Use case (5).
2. For a filtering processor, it needs to wrap some other processor to which it applies the filtering. This makes it possible to have two batching processors with different filters. Example: https://github.com/open-telemetry/opentelemetry-go-contrib/blob/a5391050f37752bbcb62140ead0301981af9d748/processors/minsev/minsev.go#L74-L84. Use case (6).

> I.e. it sounds like there should be just one processor in the pipeline that decides whether a log should be created. If so, should it be a processor responsibility at all, or can we call this component LogFilter, LogSampler, etc. and make it a configurable behavior independent of the processor?
>
> If this component returns true, all processing pipelines would get the log record and can drop it if they are filtering too. If it returns false, none of them should. This is the same as you have outlined in the OTEP, just extracts filtering from processing.

I think it does not suit the (5) or (6) use cases well.
For (5), would someone configuring a user_events exporter have to configure a filterer as well as a processor?
For (6), it would not even be possible to, e.g., allow distinct minimum severity levels for different processors, as the filters would run before the processors.
The other way would be for the processors themselves to accept a filterer, but I do not think we want the Logs SDK to become https://logging.apache.org/log4j/2.x/manual/architecture.html (which, by the way, does not offer an Enabled API).

> I.e. composition instead of inheritance - when the logging is configured, each appender/provider pipeline gets one dedicated filter independent of the appender/provider logic.

The filtering processor mentioned before wraps an existing processor, so it follows the principle of favoring composition over inheritance.

I added "Alternatives: Separate LogRecordFilterer Abstraction".

We would need to take a closer look at the alternatives when prototyping and during the Development phase of the spec.

## Trade-offs and mitigations

For some languages, extending the `LogRecordProcessor` interface may be seen
as a breaking change.
For these languages, implementing `LogRecordProcessor.Enabled` must be optional.
The SDK must treat a `LogRecordProcessor` that does not implement `Enabled`
as if its `Enabled` operation returned `true`.
This approach is currently taken by OpenTelemetry Go.
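
A minimal sketch of how an SDK can treat `Enabled` as optional, assuming a
separate opt-in interface and a helper that defaults to `true`; this mirrors
the spirit of the OpenTelemetry Go approach but is not its actual API:

```go
package compatsketch

import "context"

type EnabledParameters struct {
	Severity int
}

// LogRecordProcessor is the existing, unchanged interface.
type LogRecordProcessor interface {
	OnEmit(ctx context.Context, record any)
}

// enabler is the optional extension; processors opt in by implementing it.
type enabler interface {
	Enabled(ctx context.Context, params EnabledParameters) bool
}

// processorEnabled returns the processor's answer if it implements the
// optional interface and defaults to true otherwise.
func processorEnabled(ctx context.Context, p LogRecordProcessor, params EnabledParameters) bool {
	if e, ok := p.(enabler); ok {
		return e.Enabled(ctx, params)
	}
	return true // processors without Enabled are assumed to always be enabled
}
```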

## Prior art

`Logger.Enabled` is already defined by:

- [OpenTelemetry C++](https://github.com/open-telemetry/opentelemetry-cpp/blob/main/api/include/opentelemetry/logs/logger.h)
- [OpenTelemetry Go](https://github.com/open-telemetry/opentelemetry-go/blob/main/log/logger.go)
- [OpenTelemetry PHP](https://github.com/open-telemetry/opentelemetry-php/blob/main/src/API/Logs/LoggerInterface.php)
- [OpenTelemetry Rust](https://github.com/open-telemetry/opentelemetry-rust/blob/main/opentelemetry/src/logs/logger.rs)

`LoggerConfig` (with only `disabled`) is already defined by:

- [OpenTelemetry Java](https://github.com/open-telemetry/opentelemetry-java/blob/main/sdk/logs/src/main/java/io/opentelemetry/sdk/logs/internal/LoggerConfig.java)
- [OpenTelemetry PHP](https://github.com/open-telemetry/opentelemetry-php/blob/main/src/SDK/Logs/LoggerConfig.php)

`LogRecordProcessor.Enabled` is already defined by:

- [OpenTelemetry Go](https://github.com/open-telemetry/opentelemetry-go/tree/main/sdk/log/internal/x)
- [OpenTelemetry Rust](https://github.com/open-telemetry/opentelemetry-rust/blob/main/opentelemetry-sdk/src/logs/log_processor.rs)

Regarding the (5) use case,
OpenTelemetry Rust provides
[OpenTelemetry Log Exporter for Linux user_events](https://github.com/open-telemetry/opentelemetry-rust-contrib/blob/1cb39edbb6467375f71f5dab25ccbc49ac9bf1d5/opentelemetry-user-events-logs/src/logs/exporter.rs)
enabling efficient log emission to user_events.

Regarding the (6) use case,
OpenTelemetry Go Contrib provides
[`minsev` processor](https://pkg.go.dev/go.opentelemetry.io/contrib/processors/minsev)
allowing distinct minimum severity levels
for different log destinations.

## Alternatives

### Dynamic Evaluation in LoggerConfig

There is a [proposal](https://github.com/open-telemetry/opentelemetry-specification/issues/4207#issuecomment-2501688210)
to support dynamic evaluation in `LoggerConfig`
instead of purely static configuration.
However, since the purpose of `LoggerConfig` is static configuration,
and use cases (5) and (6) are tied to log record processing,
extending `LogRecordProcessor` is more straightforward.

### Separate LogRecordFilterer Abstraction

There is a [proposal](https://github.com/open-telemetry/opentelemetry-specification/issues/4207#issuecomment-2354859647)
to add a distinct `LogRecordFilterer` abstraction.
However, this approach is less suited for use case (5)
and offers limited flexibility for use case (6).

## Open questions

### Need for LoggerConfig.disabled_on_sampled_out_spans

Should `LoggerConfig` include a `disabled_on_sampled_out_spans` field?
It is uncertain whether API callers alone should decide
whether to emit log records for spans that are not sampled.
On the other hand, for instrumentation libraries,
API-level control might be more appropriate, e.g.:

<!-- markdownlint-disable no-hard-tabs -->
```go
if trace.SpanContextFromContext(ctx).IsSampled() && logger.Enabled(ctx, params) {
	logger.Emit(ctx, createLogRecord(payload))
}
```
<!-- markdownlint-enable no-hard-tabs -->

### Need for LogRecordExporter.Enabled

There is a [proposal](https://github.com/open-telemetry/opentelemetry-specification/pull/4290#discussion_r1878379347)
to additionally extend `LogRecordExporter` with an `Enabled` operation.
However, at this point in time, this extension seems unnecessary.
The `LogRecordExporter` abstraction is primarily designed
for batching exporters, whereas the (5) use case is focused
on synchronous exporters, which can be implemented as a `LogRecordProcessor`.
Skipping this additional level of abstraction also reduces overhead.
It is worth noting that `LogRecordExporter.Enabled`
can always be added in the future.

## Future possibilities

The `Enabled` API could be extended in the future
to include additional parameters, such as `Event Name`,
for processing event records.
This would fit well with a simple design where `LogRecordProcessor`
is used for both log records and event records
(see the sketch after the references below).
References:

- [Add EventName parameter to Logger.Enabled #4220](https://github.com/open-telemetry/opentelemetry-specification/issues/4220).
- OpenTelemetry Go: [[Prototype] log: Events support with minimal non-breaking changes #6018](https://github.com/open-telemetry/opentelemetry-go/pull/6018)
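
Below is a sketch of what such an extension could look like, with `EventName`
as an additional parameter; the names are hypothetical:

```go
package eventsketch

// EnabledParameters extended with an event name (illustration only).
type EnabledParameters struct {
	Severity  int
	EventName string // empty when the caller is not emitting an event record
}

// Example: a processor's Enabled implementation that only accepts a specific
// event name while letting all non-event records through.
func eventFilterEnabled(params EnabledParameters) bool {
	return params.EventName == "" || params.EventName == "my.event"
}
```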