
[x/log] Bump zap version and add logging encoder configuration #3377

Merged: 12 commits into m3db:master from luk/add-encoder-configuration, Apr 14, 2021

Conversation

@wybczu (Contributor) commented Mar 19, 2021

What this PR does / why we need it:

  1. Bumps the zap version to the latest release.
  2. Adds a field to the logging configuration for customizing the encoder configuration (see the sketch below).

Special notes for your reviewer:

N/A

Does this PR introduce a user-facing and/or backwards incompatible change?:

It's now possible to configure the log format.

Does this PR require updating code package or user-facing documentation?:


@wybczu wybczu marked this pull request as ready for review March 23, 2021 20:01
@wybczu wybczu changed the title [x/log] Add logging encoder configuration [x/log] Bump zap version and add logging encoder configuration Mar 23, 2021
codecov bot commented Mar 23, 2021

Codecov Report

Merging #3377 (3d5cb21) into master (3d5cb21) will not change coverage.
The diff coverage is n/a.

❗ Current head 3d5cb21 differs from pull request most recent head 7a14899. Consider uploading reports for the commit 7a14899 to get more accurate results


@@           Coverage Diff           @@
##           master    #3377   +/-   ##
=======================================
  Coverage    72.3%    72.3%           
=======================================
  Files        1100     1100           
  Lines      102454   102454           
=======================================
  Hits        74176    74176           
  Misses      23171    23171           
  Partials     5107     5107           
Flag         Coverage Δ
aggregator   76.8% <0.0%> (ø)
cluster      84.9% <0.0%> (ø)
collector    84.3% <0.0%> (ø)
dbnode       78.9% <0.0%> (ø)
m3em         74.4% <0.0%> (ø)
m3ninx       73.6% <0.0%> (ø)
metrics      19.8% <0.0%> (ø)
msg          74.5% <0.0%> (ø)
query        67.0% <0.0%> (ø)
x            80.2% <0.0%> (ø)

Flags with carried forward coverage won't be shown.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 3d5cb21...7a14899.

@robskillington (Collaborator) left a comment:

LGTM

@wesleyk wesleyk merged commit 6e6b4d4 into m3db:master Apr 14, 2021
@wybczu wybczu deleted the luk/add-encoder-configuration branch April 14, 2021 14:22
soundvibe added a commit that referenced this pull request Apr 16, 2021
* master:
  [dbnode] Set default values for BootstrapPeersConfiguration (#3420)
  [integration-tests] Use explicit version for quay.io/m3db/prometheus_remote_client_golang (#3422)
  [dtest] Fix dtest docker compose config: env => environment (#3421)
  Fix broken links to edit pages (#3419)
  [dbnode] Fix races in source_data_test.go (#3418)
  [coordinator] add more information to processed count metric (#3415)
  [dbnode] Avoid use of grow on demand worker pool for fetchTagged and aggregate (#3416)
  [docs] Fix m3aggregagtor typo (#3417)
  [x/log] Bump zap version and add logging encoder configuration (#3377)
  Do not use buffer channels if growOnDemand is true (#3414)
  [dbnode] Fix TestSeriesWriteReadParallel datapoints too far in past with -race flag (#3413)
  [docs] Update m3db operator docs with v0.13.0 features (#3397)
  [aggregator] Fix followerFlushManager metrics (#3411)
  [query] Restore optimization to skip initial fetch for timeShift and unary fns (#3408)