forked from open-telemetry/opentelemetry-collector-contrib
[exporter/syslog] Add syslog exporter #2
Closed
kasia-kujawa force-pushed the kkujawa-syslogexporter branch 7 times, most recently from 74d09a9 to 7d2768b on March 20, 2023 11:30
kasia-kujawa force-pushed the kkujawa-syslogexporter branch 4 times, most recently from 7186cbc to b5d1c3c on March 30, 2023 15:26
kasia-kujawa force-pushed the kkujawa-syslogexporter branch from c1942bd to f3a44e6 on April 17, 2023 13:39
Getting 404s when trying to run apt update w/ debian 9 --------- Signed-off-by: Alex Boten <[email protected]>
…emetry#21178) * [chore] [receiver/couchdb] switched to autogenerated status
…dows if the path contains Junction" (open-telemetry#21195) This reverts commit 84d9f48.
Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com> Co-authored-by: Alex Boten <[email protected]>
…ance is part of AWS Parallel Cluster (open-telemetry#20251) Signed-off-by: Dani Louca <[email protected]>
This example is duplicated in the examples/demo folder. I couldn't see a good reason to keep both around.
--------- Signed-off-by: Alex Boten <[email protected]>
…n-telemetry#21191) * [chore] [receiver/elasticsearch] switched to autogenerate status
…-telemetry#21192) * [chore] [receiver/flinkmetrics] switched to autogenerate status
* aws s3 exporter initial version --------- Co-authored-by: Przemek Delewski <[email protected]>
Since the other jobs that use the cache depend on `setup-environment`, we might as well only update the go cache from that job. In some cases, the update causes jobs to time out (like in the case of govulncheck) Signed-off-by: Alex Boten <[email protected]>
…ry#21229)" (open-telemetry#21231) This reverts commit e8ccc37.
…ocessor readme (open-telemetry#21227) Editorial changes to the OTTL section of the filter processor readme Co-authored-by: Evan Bradley <[email protected]>
…metry#21888) * Fix mongodbatlas access log paging
fixed move aws doc link
…ry#21762) * Add more examples * Update README.md * Update README.md
…tegy (open-telemetry#21408) [receiver/kafkareceiver] support configuration of initial offset strategy
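The initial-offset commit above corresponds to a receiver setting along these lines (a hedged sketch: `initial_offset` with values `latest`/`earliest` is my reading of that change, and the broker/topic values are illustrative; verify against the kafkareceiver README):

```yaml
receivers:
  kafka:
    brokers: ["broker:9092"]   # illustrative address
    topic: otlp_logs           # illustrative topic
    initial_offset: earliest   # or `latest`; where to start when no committed offset exists
```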
This will run when PRs with the label "dependencies" are added. In the short term, I'm leaving both dependabot and renovatebot on. Ideally, after seeing renovatebot run this week, we may be able to turn off dependabot for dependencies. --------- Signed-off-by: Alex Boten <[email protected]> Co-authored-by: Antoine Toulme <[email protected]>
) This batcher is intended to be used to split incoming logs batches into profiling and regular logs prior to the processing to simplify the exporter logic. The batcher is written in a way to introduce no overhead if the logs batches don't contain mixed data, which is the most common use case. This change just adds the batcher for now to make review easier. Actual enablement will come next.
…ng (open-telemetry#21909) To simplify the logic and make future improvements possible. Benchmarks shows no performance degradation for typical use cases (regular logs or profiling only) with small improvement on memory allocation. Benchmarks were adjusted to be applied on `ConsumeLogs` in both for before/after states. The only performance hit can be found for batches with both regular and profiling logs, but given that that use case is pretty rare, we can ignore it.
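The zero-overhead splitting described above can be sketched as follows. This is not the exporter's actual code (the real batcher works on `plog` data; the `record` type and `split` helper here are hypothetical stand-ins), but it shows the key trick: a homogeneous batch is returned as-is, so the common case allocates nothing.

```go
// Hypothetical sketch of the batcher idea: split an incoming batch into
// profiling and regular records, returning the original slice untouched
// when the batch is homogeneous.
package main

import "fmt"

type record struct {
	profiling bool
	body      string
}

// split returns (regular, profiling). If the batch contains only one
// kind of record, the input slice itself is returned for that side and
// nil for the other, so no copying occurs in the common case.
func split(batch []record) (regular, profiling []record) {
	n := 0
	for _, r := range batch {
		if r.profiling {
			n++
		}
	}
	switch n {
	case 0:
		return batch, nil // all regular: no copy
	case len(batch):
		return nil, batch // all profiling: no copy
	}
	regular = make([]record, 0, len(batch)-n)
	profiling = make([]record, 0, n)
	for _, r := range batch {
		if r.profiling {
			profiling = append(profiling, r)
		} else {
			regular = append(regular, r)
		}
	}
	return regular, profiling
}

func main() {
	reg, prof := split([]record{{false, "a"}, {true, "p"}, {false, "b"}})
	fmt.Println(len(reg), len(prof))
}
```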
Signed-off-by: Katarzyna Kujawa <[email protected]> Co-authored-by: Raj Nishtala <[email protected]>
problem reported by golangci-lint:

```
sender.go:57:71: unexported-return: exported func Connect returns unexported type *syslogexporter.sender, which can be annoying to use (revive)
func Connect(logger *zap.Logger, cfg *Config, tlsConfig *tls.Config) (*sender, error) {
```
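One common way to satisfy revive's `unexported-return` rule is to make the constructor unexported to match the unexported type (the alternative is exporting the type itself). The sketch below uses hypothetical names, not the exporter's real `sender` signature:

```go
// Minimal sketch of the revive "unexported-return" fix: the exported
// API no longer leaks an unexported type.
package main

import "fmt"

// sender is unexported.
type sender struct{ addr string }

// Before (flagged by revive): an exported constructor returning *sender:
//   func Connect(addr string) (*sender, error)
// After: the constructor is unexported, matching the unexported type.
func connect(addr string) (*sender, error) {
	return &sender{addr: addr}, nil
}

func main() {
	s, err := connect("127.0.0.1:514")
	if err != nil {
		panic(err)
	}
	fmt.Println(s.addr)
}
```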
- replace `protocol` with `network`
- replace `format` with `protocol`
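With the rename applied, a syslog exporter configuration would look roughly like this (a sketch only; the `endpoint` and `port` values are illustrative, and the exact field set should be checked against the exporter's README):

```yaml
exporters:
  syslog:
    endpoint: syslog.example.com   # illustrative host
    port: 514
    network: tcp        # was `protocol` before the rename
    protocol: rfc5424   # was `format` before the rename
```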
kasia-kujawa force-pushed the kkujawa-syslogexporter branch from 9cf9443 to 6d46774 on May 15, 2023 07:27
kasia-kujawa force-pushed the kkujawa-syslogexporter branch from 6d46774 to c626d4c on May 15, 2023 08:34
kasia-kujawa pushed a commit that referenced this pull request on Oct 13, 2024
… Histo --> Histogram (open-telemetry#33824)

## Description

This PR adds a custom metric function to the transformprocessor to convert exponential histograms to explicit histograms.

Link to tracking issue: Resolves open-telemetry#33827

**Function Name**

```
convert_exponential_histogram_to_explicit_histogram
```

**Arguments:**

- `distribution` (_upper, midpoint, uniform, random_)
- `ExplicitBoundaries: []float64`

**Usage example:**

```yaml
processors:
  transform:
    error_mode: propagate
    metric_statements:
      - context: metric
        statements:
          - convert_exponential_histogram_to_explicit_histogram("random", [10.0, 20.0, 30.0, 40.0, 50.0, 60.0, 70.0, 80.0, 90.0, 100.0])
```

**Converts:**

```
Resource SchemaURL:
ScopeMetrics #0
ScopeMetrics SchemaURL:
InstrumentationScope
Metric #0
Descriptor:
     -> Name: response_time
     -> Description:
     -> Unit:
     -> DataType: ExponentialHistogram
     -> AggregationTemporality: Delta
ExponentialHistogramDataPoints #0
Data point attributes:
     -> metric_type: Str(timing)
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2024-07-31 09:35:25.212037 +0000 UTC
Count: 44
Sum: 999.000000
Min: 40.000000
Max: 245.000000
Bucket (32.000000, 64.000000], Count: 10
Bucket (64.000000, 128.000000], Count: 22
Bucket (128.000000, 256.000000], Count: 12
{"kind": "exporter", "data_type": "metrics", "name": "debug"}
```

**To:**

```
Resource SchemaURL:
ScopeMetrics #0
ScopeMetrics SchemaURL:
InstrumentationScope
Metric #0
Descriptor:
     -> Name: response_time
     -> Description:
     -> Unit:
     -> DataType: Histogram
     -> AggregationTemporality: Delta
HistogramDataPoints #0
Data point attributes:
     -> metric_type: Str(timing)
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2024-07-30 21:37:07.830902 +0000 UTC
Count: 44
Sum: 999.000000
Min: 40.000000
Max: 245.000000
ExplicitBounds #0: 10.000000
ExplicitBounds #1: 20.000000
ExplicitBounds #2: 30.000000
ExplicitBounds #3: 40.000000
ExplicitBounds #4: 50.000000
ExplicitBounds #5: 60.000000
ExplicitBounds #6: 70.000000
ExplicitBounds #7: 80.000000
ExplicitBounds #8: 90.000000
ExplicitBounds #9: 100.000000
Buckets #0, Count: 0
Buckets #1, Count: 0
Buckets #2, Count: 0
Buckets #3, Count: 2
Buckets #4, Count: 5
Buckets #5, Count: 0
Buckets #6, Count: 3
Buckets #7, Count: 7
Buckets #8, Count: 2
Buckets #9, Count: 4
Buckets #10, Count: 21
{"kind": "exporter", "data_type": "metrics", "name": "debug"}
```

### Testing

- Several unit tests have been created. We have also tested by ingesting and converting exponential histograms from the `statsdreceiver` as well as directly via the `otlpreceiver` over gRPC over several hours with a large amount of data.
- We have clients that have been running this solution in production for a number of weeks.

### Readme description:

### convert_exponential_hist_to_explicit_hist

`convert_exponential_hist_to_explicit_hist([ExplicitBounds])`

The `convert_exponential_hist_to_explicit_hist` function converts an ExponentialHistogram to an Explicit (_normal_) Histogram.

`ExplicitBounds` represents the list of bucket boundaries for the new histogram. This argument is __required__ and __cannot be empty__.

__WARNING:__ The process of converting an ExponentialHistogram to an Explicit Histogram is not perfect and may result in a loss of precision. It is important to define an appropriate set of bucket boundaries to minimize this loss. For example, selecting boundaries that are too high or too low may result in histogram buckets that are too wide or too narrow, respectively.

---------

Co-authored-by: Kent Quirk <[email protected]>
Co-authored-by: Tyler Helmuth <[email protected]>
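The core of such a conversion can be illustrated with a small sketch. This is not the transformprocessor's implementation; it only shows one strategy (assigning each exponential bucket's count to the explicit bucket containing its midpoint), with the function name and signature invented for the example. It relies on the exponential histogram data model, where bucket `i` spans `(base^(offset+i), base^(offset+i+1)]` with `base = 2^(2^-scale)`:

```go
// Rough sketch of a "midpoint" exponential-to-explicit conversion.
package main

import (
	"fmt"
	"math"
)

// explicitFromExponential distributes the counts of base-2 exponential
// buckets into explicit buckets with the given upper bounds; the final
// slot collects counts above the last bound.
func explicitFromExponential(scale, offset int, counts []uint64, bounds []float64) []uint64 {
	base := math.Pow(2, math.Pow(2, float64(-scale)))
	out := make([]uint64, len(bounds)+1)
	for i, c := range counts {
		lower := math.Pow(base, float64(offset+i))
		upper := math.Pow(base, float64(offset+i+1))
		mid := (lower + upper) / 2
		slot := len(bounds) // overflow bucket by default
		for j, b := range bounds {
			if mid <= b {
				slot = j
				break
			}
		}
		out[slot] += c
	}
	return out
}

func main() {
	// scale 0 => base 2; offset 5 => buckets (32,64], (64,128], (128,256]
	fmt.Println(explicitFromExponential(0, 5, []uint64{10, 22, 12}, []float64{50, 100, 200}))
}
```

As the commit's warning notes, any such mapping loses precision: an entire exponential bucket's count lands in a single explicit bucket regardless of how the underlying values were actually spread.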