diff --git a/docs/generated/logformats.md b/docs/generated/logformats.md index 8d94fd08e1c8..bae66addd425 100644 --- a/docs/generated/logformats.md +++ b/docs/generated/logformats.md @@ -26,23 +26,25 @@ The supported log output formats are documented below. ## Format `crdb-v1` -This is the legacy file format used from CockroachDB v1.0. -Each log entry is emitted using a common prefix, described below, -followed by: +This is a legacy file format used from CockroachDB v1.0. + +Each log entry is emitted using a common prefix, described below, followed by: - The logging context tags enclosed between `[` and `]`, if any. It is possible for this to be omitted if there were no context tags. +- Optionally, a counter column, if the option 'show-counter' is enabled. See below for details. - the text of the log entry. -Beware that the text of the log entry can span multiple lines. The following caveats apply: - +Beware that the text of the log entry can span multiple lines. +The following caveats apply: - The text of the log entry can start with text enclosed between `[` and `]`. If there were no logging tags to start with, it is not possible to distinguish between logging context tag information and a `[...]` string in the main text of the - log entry. This means that this format is ambiguous. For an unambiguous alternative, - consider `crdb-v1-count`. + log entry. This means that this format is ambiguous. + + To remove this ambiguity, you can use the option 'show-counter'. - The text of the log entry can embed arbitrary application-level strings, including strings that represent log entries. In particular, an accident @@ -75,7 +77,10 @@ if known, and other metadata about the running process. 
Each line of output starts with the following prefix: - Lyymmdd hh:mm:ss.uuuuuu goid [chan@]file:line marker + Lyymmdd hh:mm:ss.uuuuuu goid [chan@]file:line marker tags counter + +As a reminder, the tags may be omitted, and the counter is only printed if the option +'show-counter' is specified. | Field | Description | |-----------------|--------------------------------------------------------------------------------------------------------------------------------------| @@ -89,6 +94,8 @@ Each line of output starts with the following prefix: | file | The file name where the entry originated. | | line | The line number where the entry originated. | | marker | Redactability marker ` + redactableIndicator + ` (see below for details). | +| tags | The logging tags, enclosed between `[` and `]`. May be absent. | +| counter | The entry counter. Only included if 'show-counter' is enabled. | The redactability marker can be empty; in this case, its position in the common prefix is a double ASCII space character which can be used to reliably identify this situation. @@ -99,88 +106,42 @@ fields that are considered sensitive. These markers are automatically recognized by [`cockroach debug zip`](cockroach-debug-zip.html) and [`cockroach debug merge-logs`](cockroach-debug-merge-logs.html) when log redaction is requested. -## Format `crdb-v1-count` - -This is an alternative, backward-compatible legacy file format used from CockroachDB v2.0. - -Each log entry is emitted using a common prefix, described below, -followed by the text of the log entry. - -Beware that the text of the log entry can span multiple lines. The following caveats apply: - - -- The text of the log entry can embed arbitrary application-level strings, - including strings that represent log entries. In particular, an accident - of implementation can cause the common entry prefix (described below) - to also appear on a line of its own, as part of the payload of a previous - log entry.
There is no automated way to recognize when this occurs. - Care must be taken by a human observer to recognize these situations. - -- The log entry parser provided by CockroachDB to read log files is faulty - and is unable to recognize the aforementioned pitfall; nor can it read - entries larger than 64KiB successfully. Generally, use of this internal - log entry parser is discouraged for entries written with this format. - -See the newer format `crdb-v2` for an alternative -without these limitations. +Additional options recognized via `format-options`: -### Header lines +| Option | Description | +|--------|-------------| +| `show-counter` | Whether to include the counter column in the line header. Without it, the format may be ambiguous due to the optionality of tags. | +| `colors` | The color profile to use. Possible values: none, auto, ansi, 256color. Default is auto. | +| `timezone` | The timezone to use for the timestamp column. The value can be any timezone name recognized by the Go standard library. Default is `UTC`. | -At the beginning of each file, a header is printed using a similar format as -regular log entries. This header reports when the file was created, -which parameters were used to start the server, the server identifiers -if known, and other metadata about the running process. -- This header appears to be logged at severity `INFO` (with an `I` prefix - at the start of the line) even though it does not really have a severity. -- The header is printed unconditionally even when a filter is configured to - omit entries at the `INFO` level.
-### Common log entry prefix +## Format `crdb-v1-count` -Each line of output starts with the following prefix: +This format name is an alias for 'crdb-v1' with +the following format option defaults: - Lyymmdd hh:mm:ss.uuuuuu goid [chan@]file:line marker tags counter - -| Field | Description | -|-----------------|--------------------------------------------------------------------------------------------------------------------------------------| -| L | A single character, representing the [log level](logging.html#logging-levels-severities) (e.g., `I` for `INFO`). | -| yy | The year (zero padded; i.e., 2016 is `16`). | -| mm | The month (zero padded; i.e., May is `05`). | -| dd | The day (zero padded). | -| hh:mm:ss.uuuuuu | Time in hours, minutes and fractional seconds. Timezone is UTC. | -| goid | The goroutine id (omitted if zero for use by tests). | -| chan | The channel number (omitted if zero for backward compatibility). | -| file | The file name where the entry originated. | -| line | The line number where the entry originated. | -| marker | Redactability marker ` + redactableIndicator + ` (see below for details). | -| tags | The logging tags, enclosed between `[` and `]`. May be absent. | -| counter | The entry counter. Always present. | - -The redactability marker can be empty; in this case, its position in the common prefix is -a double ASCII space character which can be used to reliably identify this situation. - -If the marker ` + redactableIndicator + ` is present, the remainder of the log entry -contains delimiters (‹...›) around -fields that are considered sensitive. These markers are automatically recognized -by [`cockroach debug zip`](cockroach-debug-zip.html) and [`cockroach debug merge-logs`](cockroach-debug-merge-logs.html) when log redaction is requested. +- `show-counter: true` +- `colors: none` ## Format `crdb-v1-tty` -Same textual format as `crdb-v1`. 
+This format name is an alias for 'crdb-v1' with +the following format option defaults: + +- `show-counter: false` +- `colors: auto` -In addition, if the output stream happens to be a VT-compatible terminal, -and the flag `no-color` was *not* set in the configuration, the entries -are decorated using ANSI color codes. ## Format `crdb-v1-tty-count` -Same textual format as `crdb-v1-count`. +This format name is an alias for 'crdb-v1' with +the following format option defaults: + +- `show-counter: true` +- `colors: auto` -In addition, if the output stream happens to be a VT-compatible terminal, -and the flag `no-color` was *not* set in the configuration, the entries -are decorated using ANSI color codes. ## Format `crdb-v2` @@ -306,73 +267,24 @@ the new format, they are prefixed by the `=` continuation indicator. -## Format `crdb-v2-tty` - -Same textual format as `crdb-v2`. - -In addition, if the output stream happens to be a VT-compatible terminal, -and the flag `no-color` was *not* set in the configuration, the entries -are decorated using ANSI color codes. - -## Format `json` - -This format emits log entries as a JSON payload. - -The JSON object is guaranteed to not contain unescaped newlines -or other special characters, and the entry as a whole is followed -by a newline character. This makes the format suitable for -processing over a stream unambiguously. - -Each entry contains at least the following fields: - -| Field | Description | -|-------|-------------| -| `file` | The name of the source file where the event was emitted. | -| `goroutine` | The identifier of the goroutine where the event was emitted. | -| `line` | The line number where the event was emitted in the source. | -| `redactable` | Whether the payload is redactable (see below for details). | -| `timestamp` | The timestamp at which the event was emitted on the logging channel. | -| `version` | The binary version with which the event was generated. 
| - +Additional options recognized via `format-options`: -After a couple of *header* entries written at the beginning of each log sink, -all subsequent log entries also contain the following fields: - -| Field | Description | -|---------------------|-------------| -| `channel` | The name of the logging channel where the event was sent. | -| `severity` | The severity of the event. | -| `channel_numeric` | The numeric identifier for the logging channel where the event was sent. | -| `entry_counter` | The entry number on this logging sink, relative to the last process restart. | -| `severity_numeric` | The numeric value of the severity of the event. | +| Option | Description | +|--------|-------------| +| `colors` | The color profile to use. Possible values: none, auto, ansi, 256color. Default is auto. | +| `timezone` | The timezone to use for the timestamp column. The value can be any timezone name recognized by the Go standard library. Default is `UTC` | -Additionally, the following fields are conditionally present: - -| Field | Description | -|---------------------|-------------| -| `node_id` | The node ID where the event was generated, once known. Only reported for single-tenant or KV servers. | -| `cluster_id` | The cluster ID where the event was generated, once known. Only reported for single-tenant of KV servers. | -| `instance_id` | The SQL instance ID where the event was generated, once known. Only reported for multi-tenant SQL servers. | -| `tenant_id` | The SQL tenant ID where the event was generated, once known. Only reported for multi-tenant SQL servers. | -| `tags` | The logging context tags for the entry, if there were context tags. | -| `message` | For unstructured events, the flat text payload. | -| `event` | The logging event, if structured (see below for details). | -| `stacks` | Goroutine stacks, for fatal events. | -When an entry is structured, the `event` field maps to a dictionary -whose structure is one of the documented structured events. 
See the [reference documentation](eventlog.html) -for structured events for a list of possible payloads. - -When the entry is marked as `redactable`, the `tags`, `message`, and/or `event` payloads -contain delimiters (‹...›) around -fields that are considered sensitive. These markers are automatically recognized -by [`cockroach debug zip`](cockroach-debug-zip.html) and [`cockroach debug merge-logs`](cockroach-debug-merge-logs.html) when log redaction is requested. +## Format `crdb-v2-tty` +This format name is an alias for 'crdb-v2' with +the following format option defaults: +- `colors: auto` -## Format `json-compact` +## Format `json` This format emits log entries as a JSON payload. @@ -383,40 +295,42 @@ processing over a stream unambiguously. Each entry contains at least the following fields: -| Field | Description | -|-------|-------------| -| `f` | The name of the source file where the event was emitted. | -| `g` | The identifier of the goroutine where the event was emitted. | -| `l` | The line number where the event was emitted in the source. | -| `r` | Whether the payload is redactable (see below for details). | -| `t` | The timestamp at which the event was emitted on the logging channel. | -| `v` | The binary version with which the event was generated. | +| Field name if `tag-style: compact` is specified | Field name if `tag-style: verbose` is specified | Description | +|-------|-------|-------------| +| `tag` | `tag` | (Only if the option `fluent-tag: true` is given.) A Fluent tag for the event, formed by the process name and the logging channel. | +| `d` | `datetime` | The pretty-printed date/time of the event timestamp, if enabled via options. | +| `f` | `file` | The name of the source file where the event was emitted. | +| `g` | `goroutine` | The identifier of the goroutine where the event was emitted. | +| `l` | `line` | The line number where the event was emitted in the source. 
| +| `r` | `redactable` | Whether the payload is redactable (see below for details). | +| `t` | `timestamp` | The timestamp at which the event was emitted on the logging channel. | +| `v` | `version` | The binary version with which the event was generated. | After a couple of *header* entries written at the beginning of each log sink, all subsequent log entries also contain the following fields: -| Field | Description | -|---------------------|-------------| -| `C` | The name of the logging channel where the event was sent. | -| `sev` | The severity of the event. | -| `c` | The numeric identifier for the logging channel where the event was sent. | -| `n` | The entry number on this logging sink, relative to the last process restart. | -| `s` | The numeric value of the severity of the event. | +| Field name if `tag-style: compact` is specified | Field name if `tag-style: verbose` is specified | Description | +|-------|-------|-------------| +| `C` | `channel` | The name of the logging channel where the event was sent. | +| `sev` | `severity` | The severity of the event. | +| `c` | `channel_numeric` | The numeric identifier for the logging channel where the event was sent. | +| `n` | `entry_counter` | The entry number on this logging sink, relative to the last process restart. | +| `s` | `severity_numeric` | The numeric value of the severity of the event. | Additionally, the following fields are conditionally present: -| Field | Description | -|---------------------|-------------| -| `N` | The node ID where the event was generated, once known. Only reported for single-tenant or KV servers. | -| `x` | The cluster ID where the event was generated, once known. Only reported for single-tenant of KV servers. | -| `q` | The SQL instance ID where the event was generated, once known. Only reported for multi-tenant SQL servers. | -| `T` | The SQL tenant ID where the event was generated, once known. Only reported for multi-tenant SQL servers. 
| -| `tags` | The logging context tags for the entry, if there were context tags. | -| `message` | For unstructured events, the flat text payload. | -| `event` | The logging event, if structured (see below for details). | -| `stacks` | Goroutine stacks, for fatal events. | +| Field name if `tag-style: compact` is specified | Field name if `tag-style: verbose` is specified | Description | +|-------|-------|-------------| +| `N` | `node_id` | The node ID where the event was generated, once known. Only reported for single-tenant or KV servers. | +| `x` | `cluster_id` | The cluster ID where the event was generated, once known. Only reported for single-tenant or KV servers. | +| `q` | `instance_id` | The SQL instance ID where the event was generated, once known. Only reported for multi-tenant SQL servers. | +| `T` | `tenant_id` | The SQL tenant ID where the event was generated, once known. Only reported for multi-tenant SQL servers. | +| `tags` | `tags` | The logging context tags for the entry, if there were context tags. | +| `message` | `message` | For unstructured events, the flat text payload. | +| `event` | `event` | The logging event, if structured (see below for details). | +| `stacks` | `stacks` | Goroutine stacks, for fatal events. | When an entry is structured, the `event` field maps to a dictionary whose structure is one of the documented structured events. See the [reference documentation](eventlog.html) @@ -427,124 +341,41 @@ contain delimiters (‹...›) around fields that are considered sensitive. These markers are automatically recognized by [`cockroach debug zip`](cockroach-debug-zip.html) and [`cockroach debug merge-logs`](cockroach-debug-merge-logs.html) when log redaction is requested. +Additional options recognized via `format-options`: +| Option | Description | +|--------|-------------| +| `datetime-format` | The format to use for the `datetime` field. The value can be one of `none`, `iso8601`/`rfc3339` (synonyms), or `rfc1123`. Default is `none`.
| +| `datetime-timezone` | The timezone to use for the `datetime` field. The value can be any timezone name recognized by the Go standard library. Default is `UTC`. | +| `tag-style` | The tags to include in the envelope. The value can be `compact` (one-letter tags) or `verbose` (long-form tags). Default is `verbose`. | +| `fluent-tag` | Whether to produce an additional field called `tag` for Fluent compatibility. Default is `false`. | -## Format `json-fluent` - -This format emits log entries as a JSON payload. - -The JSON object is guaranteed to not contain unescaped newlines -or other special characters, and the entry as a whole is followed -by a newline character. This makes the format suitable for -processing over a stream unambiguously. -Each entry contains at least the following fields: - -| Field | Description | -|-------|-------------| -| `tag` | A Fluent tag for the event, formed by the process name and the logging channel. | -| `file` | The name of the source file where the event was emitted. | -| `goroutine` | The identifier of the goroutine where the event was emitted. | -| `line` | The line number where the event was emitted in the source. | -| `redactable` | Whether the payload is redactable (see below for details). | -| `timestamp` | The timestamp at which the event was emitted on the logging channel. | -| `version` | The binary version with which the event was generated. | - - -After a couple of *header* entries written at the beginning of each log sink, -all subsequent log entries also contain the following fields: - -| Field | Description | -|---------------------|-------------| -| `channel` | The name of the logging channel where the event was sent. | -| `severity` | The severity of the event. | -| `channel_numeric` | The numeric identifier for the logging channel where the event was sent. | -| `entry_counter` | The entry number on this logging sink, relative to the last process restart.
| -| `severity_numeric` | The numeric value of the severity of the event. | +## Format `json-compact` +This format name is an alias for 'json' with +the following format option defaults: -Additionally, the following fields are conditionally present: +- `fluent-tag: false` +- `tag-style: compact` -| Field | Description | -|---------------------|-------------| -| `node_id` | The node ID where the event was generated, once known. Only reported for single-tenant or KV servers. | -| `cluster_id` | The cluster ID where the event was generated, once known. Only reported for single-tenant of KV servers. | -| `instance_id` | The SQL instance ID where the event was generated, once known. Only reported for multi-tenant SQL servers. | -| `tenant_id` | The SQL tenant ID where the event was generated, once known. Only reported for multi-tenant SQL servers. | -| `tags` | The logging context tags for the entry, if there were context tags. | -| `message` | For unstructured events, the flat text payload. | -| `event` | The logging event, if structured (see below for details). | -| `stacks` | Goroutine stacks, for fatal events. | -When an entry is structured, the `event` field maps to a dictionary -whose structure is one of the documented structured events. See the [reference documentation](eventlog.html) -for structured events for a list of possible payloads. - -When the entry is marked as `redactable`, the `tags`, `message`, and/or `event` payloads -contain delimiters (‹...›) around -fields that are considered sensitive. These markers are automatically recognized -by [`cockroach debug zip`](cockroach-debug-zip.html) and [`cockroach debug merge-logs`](cockroach-debug-merge-logs.html) when log redaction is requested. +## Format `json-fluent` +This format name is an alias for 'json' with +the following format option defaults: +- `fluent-tag: true` +- `tag-style: verbose` ## Format `json-fluent-compact` -This format emits log entries as a JSON payload. 
- -The JSON object is guaranteed to not contain unescaped newlines -or other special characters, and the entry as a whole is followed -by a newline character. This makes the format suitable for -processing over a stream unambiguously. - -Each entry contains at least the following fields: - -| Field | Description | -|-------|-------------| -| `tag` | A Fluent tag for the event, formed by the process name and the logging channel. | -| `f` | The name of the source file where the event was emitted. | -| `g` | The identifier of the goroutine where the event was emitted. | -| `l` | The line number where the event was emitted in the source. | -| `r` | Whether the payload is redactable (see below for details). | -| `t` | The timestamp at which the event was emitted on the logging channel. | -| `v` | The binary version with which the event was generated. | - - -After a couple of *header* entries written at the beginning of each log sink, -all subsequent log entries also contain the following fields: - -| Field | Description | -|---------------------|-------------| -| `C` | The name of the logging channel where the event was sent. | -| `sev` | The severity of the event. | -| `c` | The numeric identifier for the logging channel where the event was sent. | -| `n` | The entry number on this logging sink, relative to the last process restart. | -| `s` | The numeric value of the severity of the event. | - - -Additionally, the following fields are conditionally present: - -| Field | Description | -|---------------------|-------------| -| `N` | The node ID where the event was generated, once known. Only reported for single-tenant or KV servers. | -| `x` | The cluster ID where the event was generated, once known. Only reported for single-tenant of KV servers. | -| `q` | The SQL instance ID where the event was generated, once known. Only reported for multi-tenant SQL servers. | -| `T` | The SQL tenant ID where the event was generated, once known. 
Only reported for multi-tenant SQL servers. | -| `tags` | The logging context tags for the entry, if there were context tags. | -| `message` | For unstructured events, the flat text payload. | -| `event` | The logging event, if structured (see below for details). | -| `stacks` | Goroutine stacks, for fatal events. | - -When an entry is structured, the `event` field maps to a dictionary -whose structure is one of the documented structured events. See the [reference documentation](eventlog.html) -for structured events for a list of possible payloads. - -When the entry is marked as `redactable`, the `tags`, `message`, and/or `event` payloads -contain delimiters (‹...›) around -fields that are considered sensitive. These markers are automatically recognized -by [`cockroach debug zip`](cockroach-debug-zip.html) and [`cockroach debug merge-logs`](cockroach-debug-merge-logs.html) when log redaction is requested. - +This format name is an alias for 'json' with +the following format option defaults: +- `fluent-tag: true` +- `tag-style: compact` diff --git a/docs/generated/logsinks.md b/docs/generated/logsinks.md index 220fe712d397..33e5ecf1ee29 100644 --- a/docs/generated/logsinks.md +++ b/docs/generated/logsinks.md @@ -81,6 +81,7 @@ Configuration options shared across all sink types: |--|--| | `filter` | specifies the default minimum severity for log events to be emitted to this sink, when not otherwise specified by the 'channels' sink attribute. | | `format` | the entry format to use. | +| `format-options` | additional options for the format. | | `redact` | whether to strip sensitive information before log events are emitted to this sink. | | `redactable` | whether to keep redaction markers in the sink's output. The presence of redaction markers makes it possible to strip sensitive data reliably. | | `exit-on-error` | whether the logging system should terminate the process if an error is encountered while writing to this sink. 
| @@ -163,6 +164,7 @@ Configuration options shared across all sink types: |--|--| | `filter` | specifies the default minimum severity for log events to be emitted to this sink, when not otherwise specified by the 'channels' sink attribute. | | `format` | the entry format to use. | +| `format-options` | additional options for the format. | | `redact` | whether to strip sensitive information before log events are emitted to this sink. | | `redactable` | whether to keep redaction markers in the sink's output. The presence of redaction markers makes it possible to strip sensitive data reliably. | | `exit-on-error` | whether the logging system should terminate the process if an error is encountered while writing to this sink. | @@ -230,6 +232,7 @@ Configuration options shared across all sink types: |--|--| | `filter` | specifies the default minimum severity for log events to be emitted to this sink, when not otherwise specified by the 'channels' sink attribute. | | `format` | the entry format to use. | +| `format-options` | additional options for the format. | | `redact` | whether to strip sensitive information before log events are emitted to this sink. | | `redactable` | whether to keep redaction markers in the sink's output. The presence of redaction markers makes it possible to strip sensitive data reliably. | | `exit-on-error` | whether the logging system should terminate the process if an error is encountered while writing to this sink. | @@ -279,7 +282,7 @@ Type-specific configuration options: | Field | Description | |--|--| | `channels` | the list of logging channels that use this sink. See the [channel selection configuration](#channel-format) section for details. | -| `no-color` | forces the omission of VT color codes in the output even when stderr is a terminal. | +| `no-color` | forces the omission of VT color codes in the output even when stderr is a terminal. This option is deprecated; its effects are equivalent to 'format-options: {colors: none}'. 
| Configuration options shared across all sink types: @@ -288,6 +291,7 @@ Configuration options shared across all sink types: |--|--| | `filter` | specifies the default minimum severity for log events to be emitted to this sink, when not otherwise specified by the 'channels' sink attribute. | | `format` | the entry format to use. | +| `format-options` | additional options for the format. | | `redact` | whether to strip sensitive information before log events are emitted to this sink. | | `redactable` | whether to keep redaction markers in the sink's output. The presence of redaction markers makes it possible to strip sensitive data reliably. | | `exit-on-error` | whether the logging system should terminate the process if an error is encountered while writing to this sink. | diff --git a/pkg/util/flagutil/flagutil.go b/pkg/util/flagutil/flagutil.go index 2a87c9132929..ee478eae6412 100644 --- a/pkg/util/flagutil/flagutil.go +++ b/pkg/util/flagutil/flagutil.go @@ -25,6 +25,7 @@ import ( // as described in time.Parse. var TimeFormats = []string{ log.MessageTimeFormat, + log.MessageTimeFormatWithTZ, log.FileTimeFormat, time.RFC3339Nano, time.RFC3339, diff --git a/pkg/util/log/flags.go b/pkg/util/log/flags.go index 2efcb9d0cf3a..4a8a0dbcdd8d 100644 --- a/pkg/util/log/flags.go +++ b/pkg/util/log/flags.go @@ -15,6 +15,7 @@ import ( "fmt" "io/fs" "math" + "strings" "github.com/cockroachdb/cockroach/pkg/util/envutil" "github.com/cockroachdb/cockroach/pkg/util/log/channel" @@ -24,6 +25,7 @@ import ( "github.com/cockroachdb/cockroach/pkg/util/log/severity" "github.com/cockroachdb/cockroach/pkg/util/syncutil" "github.com/cockroachdb/errors" + "github.com/cockroachdb/redact" ) type config struct { @@ -132,7 +134,7 @@ func ApplyConfig(config logconfig.Config) (logShutdownFn func(), err error) { // Call the final value of logShutdownFn immediately if returning with error. 
defer func() { - if err != nil { + if err != nil && logShutdownFn != nil { logShutdownFn() } }() @@ -248,6 +250,17 @@ func ApplyConfig(config logconfig.Config) (logShutdownFn func(), err error) { if err := logging.stderrSinkInfoTemplate.applyConfig(config.Sinks.Stderr.CommonSinkConfig); err != nil { return nil, err } + if config.Sinks.Stderr.NoColor { + // This branch exists for backward compatibility with CockroachDB + // v23.1 and previous versions. The same effect can be obtained + // using 'format-options: {colors: none}'. + switch t := logging.stderrSinkInfoTemplate.formatter.(type) { + case *formatCrdbV1: + t.colorProfile = nil + case *formatCrdbV2: + t.colorProfile = nil + } + } logging.stderrSinkInfoTemplate.applyFilters(config.Sinks.Stderr.Channels) // Create the per-channel loggers. @@ -437,9 +450,15 @@ func (l *sinkInfo) applyConfig(c logconfig.CommonSinkConfig) error { l.criticality = *c.Criticality f, ok := formatters[*c.Format] if !ok { - return errors.Newf("unknown format: %q", *c.Format) + return errors.WithHintf(errors.Newf("unknown format: %q", *c.Format), + "Supported formats: %s.", redact.Safe(strings.Join(formatNames, ", "))) + } + l.formatter = f() + for k, v := range c.FormatOptions { + if err := l.formatter.setOption(k, v); err != nil { + return err + } } - l.formatter = f return nil } diff --git a/pkg/util/log/format_crdb.go b/pkg/util/log/format_crdb.go index 292a4d653c78..fdf87bbb156b 100644 --- a/pkg/util/log/format_crdb.go +++ b/pkg/util/log/format_crdb.go @@ -29,6 +29,10 @@ const severityChar = "IWEF" // as used in time.Parse and time.Format. const MessageTimeFormat = "060102 15:04:05.999999" +// MessageTimeFormatWithTZ is like MessageTimeFormat but with a numeric +// time zone included. +const MessageTimeFormatWithTZ = "060102 15:04:05.999999-070000" + // FormatLegacyEntry writes the contents of the legacy log entry struct to the specified writer. 
func FormatLegacyEntry(e logpb.Entry, w io.Writer) error { return FormatLegacyEntryWithOptionalColors(e, w, nil /* cp */) @@ -37,7 +41,7 @@ func FormatLegacyEntry(e logpb.Entry, w io.Writer) error { // FormatLegacyEntryWithOptionalColors is like FormatLegacyEntry but the caller can specify // a color profile. func FormatLegacyEntryWithOptionalColors(e logpb.Entry, w io.Writer, cp ttycolor.Profile) error { - buf := formatLogEntryInternalV1(e, false /* isHeader */, true /* showCounter */, cp) + buf := formatLogEntryInternalV1(e, false /* isHeader */, true /* showCounter */, cp, nil /* loc */) defer putBuffer(buf) _, err := w.Write(buf.Bytes()) return err diff --git a/pkg/util/log/format_crdb_v1.go b/pkg/util/log/format_crdb_v1.go index 5f6068af26b2..2d9525a778cf 100644 --- a/pkg/util/log/format_crdb_v1.go +++ b/pkg/util/log/format_crdb_v1.go @@ -13,6 +13,7 @@ package log import ( "bufio" "bytes" + "fmt" "io" "regexp" "strconv" @@ -23,68 +24,125 @@ import ( "github.com/cockroachdb/cockroach/pkg/util/log/logpb" "github.com/cockroachdb/cockroach/pkg/util/log/severity" "github.com/cockroachdb/cockroach/pkg/util/timeutil" + "github.com/cockroachdb/errors" "github.com/cockroachdb/redact" "github.com/cockroachdb/ttycolor" ) // formatCrdbV1 is the pre-v21.1 canonical log format, without a // counter column. -type formatCrdbV1 struct{} +type formatCrdbV1 struct { + // showCounter determines whether the counter column is printed. + showCounter bool + // colorProfile is used to colorize the output. + colorProfile ttycolor.Profile + // colorProfileName is the name of the color profile, used for + // documentation purposes. + colorProfileName string + // loc is the time zone to use. When non-nil, it forces the + // presentation of a time zone specification after the time stamp. + // The corresponding code path is much slower. 
+ loc *time.Location +} -func (formatCrdbV1) formatterName() string { return "crdb-v1" } +func (f *formatCrdbV1) setOption(k string, v string) error { + switch k { + case "show-counter": + switch v { + case "true": + f.showCounter = true + case "false": + f.showCounter = false + default: + return errors.WithHint( + errors.Newf("unknown show-counter value: %q", redact.Safe(v)), + "Possible values: true, false.") + } + return nil -func (formatCrdbV1) formatEntry(entry logEntry) *buffer { - return formatLogEntryInternalV1(entry.convertToLegacy(), entry.header, false /*showCounter*/, nil) + case "colors": + switch v { + case "none": + f.colorProfile = nil + case "auto": + f.colorProfile = ttycolor.StderrProfile + case "ansi": + f.colorProfile = ttycolor.Profile8 + case "256color": + f.colorProfile = ttycolor.Profile256 + default: + return errors.WithHint( + errors.Newf("unknown colors value: %q", redact.Safe(v)), + "Possible values: none, auto, ansi, 256color.") + } + f.colorProfileName = v + return nil + + case "timezone": + l, err := timeutil.LoadLocation(v) + if err != nil { + return errors.Wrapf(err, "invalid timezone: %q", v) + } + if l == time.UTC { + // Avoid triggering the slow path in the entry formatter. 
+ l = nil + } + f.loc = l + return nil + + default: + return errors.Newf("unknown format option: %q", redact.Safe(k)) + } } -func (formatCrdbV1) doc() string { return formatCrdbV1CommonDoc(false /* withCounter */) } +func (f formatCrdbV1) formatterName() string { + var buf strings.Builder + buf.WriteString("crdb-v1") + if f.colorProfileName != "none" { + buf.WriteString("-tty") + } + if f.showCounter { + buf.WriteString("-count") + } + return buf.String() +} func (formatCrdbV1) contentType() string { return "text/plain" } -func formatCrdbV1CommonDoc(withCounter bool) string { - var buf strings.Builder +func (f formatCrdbV1) doc() string { + if f.formatterName() != "crdb-v1" { + var buf strings.Builder + fmt.Fprintf(&buf, `This format name is an alias for 'crdb-v1' with +the following format option defaults: - if !withCounter { - buf.WriteString(`This is the legacy file format used from CockroachDB v1.0.`) - } else { - buf.WriteString(`This is an alternative, backward-compatible legacy file format used from CockroachDB v2.0.`) +- `+"`show-counter: %v`"+` +- `+"`colors: %v`"+` +`, f.showCounter, f.colorProfileName) + return buf.String() } - buf.WriteString(` + var buf strings.Builder -Each log entry is emitted using a common prefix, described below,`) + buf.WriteString(` +This is a legacy file format used from CockroachDB v1.0. - if withCounter { - buf.WriteString(` -followed by the text of the log entry.`) - } else { - buf.WriteString(` -followed by: +Each log entry is emitted using a common prefix, described below, followed by: - The logging context tags enclosed between ` + "`[`" + ` and ` + "`]`" + `, if any. It is possible for this to be omitted if there were no context tags. -- the text of the log entry.`) - } - - buf.WriteString(` +- Optionally, a counter column, if the option 'show-counter' is enabled. See below for details. +- the text of the log entry. -Beware that the text of the log entry can span multiple lines. 
The following caveats apply: +Beware that the text of the log entry can span multiple lines. +The following caveats apply: -`) - - if !withCounter { - // If there is no counter, the format is ambiguous. Explain that. - buf.WriteString(` - The text of the log entry can start with text enclosed between ` + "`[`" + ` and ` + "`]`" + `. If there were no logging tags to start with, it is not possible to distinguish between logging context tag information and a ` + "`[...]`" + ` string in the main text of the - log entry. This means that this format is ambiguous. For an unambiguous alternative, - consider ` + "`" + formatCrdbV1WithCounter{}.formatterName() + "`" + `. -`) - } + log entry. This means that this format is ambiguous. + + To remove this ambiguity, you can use the option 'show-counter'. - // General disclaimer about the lack of boundaries. - buf.WriteString(` - The text of the log entry can embed arbitrary application-level strings, including strings that represent log entries. In particular, an accident of implementation can cause the common entry prefix (described below) @@ -116,13 +174,10 @@ if known, and other metadata about the running process. Each line of output starts with the following prefix: - Lyymmdd hh:mm:ss.uuuuuu goid [chan@]file:line marker`) + Lyymmdd hh:mm:ss.uuuuuu goid [chan@]file:line marker tags counter - if withCounter { - buf.WriteString(` tags counter`) - } - - buf.WriteString(` +Reminder, the tags may be omitted; and the counter is only printed if the option +'show-counter' is specified. | Field | Description | |-----------------|--------------------------------------------------------------------------------------------------------------------------------------| @@ -135,15 +190,9 @@ Each line of output starts with the following prefix: | chan | The channel number (omitted if zero for backward compatibility). | | file | The file name where the entry originated. | | line | The line number where the entry originated. 
| -| marker | Redactability marker ` + "` + redactableIndicator + `" + ` (see below for details). |`) - - if withCounter { - buf.WriteString(` -| tags | The logging tags, enclosed between ` + "`[`" + ` and ` + "`]`" + `. May be absent. | -| counter | The entry counter. Always present. |`) - } - - buf.WriteString(` +| marker | Redactability marker ` + "` + redactableIndicator + `" + ` (see below for details). | +| tags | The logging tags, enclosed between ` + "`[`" + ` and ` + "`]`" + `. May be absent. | +| counter | The entry counter. Only included if 'show-counter' is enabled. | The redactability marker can be empty; in this case, its position in the common prefix is a double ASCII space character which can be used to reliably identify this situation. @@ -154,92 +203,49 @@ fields that are considered sensitive. These markers are automatically recognized by ` + "[`cockroach debug zip`](cockroach-debug-zip.html)" + ` and ` + "[`cockroach debug merge-logs`](cockroach-debug-merge-logs.html)" + ` when log redaction is requested. -`) - - return buf.String() -} - -// formatCrdbV1WithCounter is the canonical log format including a -// counter column. -type formatCrdbV1WithCounter struct{} -func (formatCrdbV1WithCounter) formatterName() string { return "crdb-v1-count" } -func (formatCrdbV1WithCounter) formatEntry(entry logEntry) *buffer { - return formatLogEntryInternalV1(entry.convertToLegacy(), entry.header, true /*showCounter*/, nil) -} - -func (formatCrdbV1WithCounter) doc() string { return formatCrdbV1CommonDoc(true /* withCounter */) } - -func (formatCrdbV1WithCounter) contentType() string { return "text/plain" } +Additional options recognized via ` + "`format-options`" + `: -// formatCrdbV1TTY is like formatCrdbV1 and includes VT color codes if -// the stderr output is a TTY and -nocolor is not passed on the -// command line. 
-type formatCrdbV1TTY struct{} +| Option | Description | +|--------|-------------| +| ` + "`show-counter`" + ` | Whether to include the counter column in the line header. Without it, the format may be ambiguous due to the optionality of tags. | +| ` + "`colors`" + ` | The color profile to use. Possible values: none, auto, ansi, 256color. Default is auto. | +| ` + "`timezone`" + ` | The timezone to use for the timestamp column. The value can be any timezone name recognized by the Go standard library. Default is ` + "`UTC`" + ` | -func (formatCrdbV1TTY) formatterName() string { return "crdb-v1-tty" } - -func (formatCrdbV1TTY) formatEntry(entry logEntry) *buffer { - cp := ttycolor.StderrProfile - if logging.stderrSink.noColor.Get() { - cp = nil - } - return formatLogEntryInternalV1(entry.convertToLegacy(), entry.header, false /*showCounter*/, cp) -} - -const ttyFormatDoc = ` - -In addition, if the output stream happens to be a VT-compatible terminal, -and the flag ` + "`no-color`" + ` was *not* set in the configuration, the entries -are decorated using ANSI color codes.` - -func (formatCrdbV1TTY) doc() string { - return "Same textual format as `" + formatCrdbV1{}.formatterName() + "`." + ttyFormatDoc -} - -func (formatCrdbV1TTY) contentType() string { return "text/plain" } - -// formatCrdbV1ColorWithCounter is like formatCrdbV1WithCounter and -// includes VT color codes if the stderr output is a TTY and -nocolor -// is not passed on the command line. 
-type formatCrdbV1TTYWithCounter struct{} - -func (formatCrdbV1TTYWithCounter) formatterName() string { return "crdb-v1-tty-count" } +`) -func (formatCrdbV1TTYWithCounter) formatEntry(entry logEntry) *buffer { - cp := ttycolor.StderrProfile - if logging.stderrSink.noColor.Get() { - cp = nil - } - return formatLogEntryInternalV1(entry.convertToLegacy(), entry.header, true /*showCounter*/, cp) + return buf.String() } -func (formatCrdbV1TTYWithCounter) doc() string { - return "Same textual format as `" + formatCrdbV1WithCounter{}.formatterName() + "`." + ttyFormatDoc +func (f formatCrdbV1) formatEntry(entry logEntry) *buffer { + return formatLogEntryInternalV1(entry.convertToLegacy(), + entry.header, f.showCounter, f.colorProfile, f.loc) } -func (formatCrdbV1TTYWithCounter) contentType() string { return "text/plain" } - -// formatEntryInternalV1 renders a log entry. -// Log lines are colorized depending on severity. -// It uses a newly allocated *buffer. The caller is responsible -// for calling putBuffer() afterwards. -func formatLogEntryInternalV1( - entry logpb.Entry, isHeader, showCounter bool, cp ttycolor.Profile, -) *buffer { - buf := getBuffer() - if entry.Line < 0 { - entry.Line = 0 // not a real line number, but acceptable to someDigits +func writeCrdbHeader( + buf *buffer, + cp ttycolor.Profile, + sev logpb.Severity, + ch logpb.Channel, + file string, + line int, + ts int64, + loc *time.Location, + gid int, + redactable bool, +) { + if line < 0 { + line = 0 // not a real line number, but acceptable to someDigits } - if entry.Severity > severity.FATAL || entry.Severity <= severity.UNKNOWN { - entry.Severity = severity.INFO // for safety. + if sev > severity.FATAL || sev <= severity.UNKNOWN { + sev = severity.INFO // for safety. 
} tmp := buf.tmp[:len(buf.tmp)] var n int var prefix []byte - switch entry.Severity { + switch sev { case severity.INFO: prefix = cp[ttycolor.Cyan] case severity.WARNING: @@ -248,50 +254,67 @@ func formatLogEntryInternalV1( prefix = cp[ttycolor.Red] } n += copy(tmp, prefix) - // Avoid Fprintf, for speed. The format is so simple that we can do it quickly by hand. - // It's worth about 3X. Fprintf is hard. - now := timeutil.Unix(0, entry.Time) - year, month, day := now.Date() - hour, minute, second := now.Clock() // Lyymmdd hh:mm:ss.uuuuuu file:line - tmp[n] = severityChar[entry.Severity-1] + tmp[n] = severityChar[sev-1] n++ - if year < 2000 { - year = 2000 + + if loc == nil { + // Default time zone (UTC). + // Avoid Fprintf, for speed. The format is so simple that we can do it quickly by hand. + // It's worth about 3X. Fprintf is hard. + now := timeutil.Unix(0, ts) + year, month, day := now.Date() + hour, minute, second := now.Clock() + if year < 2000 { + year = 2000 + } + n += buf.twoDigits(n, year-2000) + n += buf.twoDigits(n, int(month)) + n += buf.twoDigits(n, day) + n += copy(tmp[n:], cp[ttycolor.Gray]) // gray for time, file & line + tmp[n] = ' ' + n++ + n += buf.twoDigits(n, hour) + tmp[n] = ':' + n++ + n += buf.twoDigits(n, minute) + tmp[n] = ':' + n++ + n += buf.twoDigits(n, second) + tmp[n] = '.' + n++ + n += buf.nDigits(6, n, now.Nanosecond()/1000, '0') + } else { + // Time zone was specified. + // Slooooow path. 
+ buf.Write(tmp[:n]) + n = 0 + now := timeutil.Unix(0, ts).In(loc) + buf.WriteString(now.Format("060102")) + buf.Write(cp[ttycolor.Gray]) + buf.WriteByte(' ') + buf.WriteString(now.Format("15:04:05.000000-070000")) } - n += buf.twoDigits(n, year-2000) - n += buf.twoDigits(n, int(month)) - n += buf.twoDigits(n, day) - n += copy(tmp[n:], cp[ttycolor.Gray]) // gray for time, file & line - tmp[n] = ' ' - n++ - n += buf.twoDigits(n, hour) - tmp[n] = ':' - n++ - n += buf.twoDigits(n, minute) - tmp[n] = ':' - n++ - n += buf.twoDigits(n, second) - tmp[n] = '.' - n++ - n += buf.nDigits(6, n, now.Nanosecond()/1000, '0') + tmp[n] = ' ' n++ - if entry.Goroutine > 0 { - n += buf.someDigits(n, int(entry.Goroutine)) + if gid >= 0 { + n += buf.someDigits(n, gid) tmp[n] = ' ' n++ } - if !isHeader && entry.Channel != 0 { + + if ch != 0 { // Prefix the filename with the channel number. - n += buf.someDigits(n, int(entry.Channel)) + n += buf.someDigits(n, int(ch)) tmp[n] = '@' n++ } + buf.Write(tmp[:n]) - buf.WriteString(entry.File) + buf.WriteString(file) tmp[0] = ':' - n = buf.someDigits(1, int(entry.Line)) + n = buf.someDigits(1, line) n++ // Reset the color to default. n += copy(tmp[n:], cp[ttycolor.Reset]) @@ -300,7 +323,7 @@ func formatLogEntryInternalV1( // If redaction is enabled, indicate that the current entry has // markers. This indicator is used in the log parser to determine // which redaction strategy to adopt. - if entry.Redactable { + if redactable { copy(tmp[n:], redactableIndicatorBytes) n += len(redactableIndicatorBytes) } @@ -311,6 +334,26 @@ func formatLogEntryInternalV1( tmp[n] = ' ' n++ buf.Write(tmp[:n]) +} + +// formatEntryInternalV1 renders a log entry. +// Log lines are colorized depending on severity. +// It uses a newly allocated *buffer. The caller is responsible +// for calling putBuffer() afterwards. 
+func formatLogEntryInternalV1( + entry logpb.Entry, isHeader, showCounter bool, cp ttycolor.Profile, loc *time.Location, +) *buffer { + buf := getBuffer() + ch := entry.Channel + if isHeader { + ch = 0 + } + gid := int(entry.Goroutine) + if gid == 0 { + // In the -v1 format, a goroutine ID of 0 is hidden. + gid = -1 + } + writeCrdbHeader(buf, cp, entry.Severity, ch, entry.File, int(entry.Line), entry.Time, loc, gid, entry.Redactable) // The remainder is variable-length and could exceed // the static size of tmp. But we do have an upper bound. @@ -337,7 +380,8 @@ func formatLogEntryInternalV1( // Display the counter if set and enabled. if showCounter && entry.Counter > 0 { - n = buf.someDigits(0, int(entry.Counter)) + tmp := buf.tmp[:len(buf.tmp)] + n := buf.someDigits(0, int(entry.Counter)) tmp[n] = ' ' n++ buf.Write(tmp[:n]) @@ -360,7 +404,7 @@ func formatLogEntryInternalV1( var entryREV1 = regexp.MustCompile( `(?m)^` + /* Severity */ `([` + severityChar + `])` + - /* Date and time */ `(\d{6} \d{2}:\d{2}:\d{2}.\d{6}) ` + + /* Date and time */ `(\d{6} \d{2}:\d{2}:\d{2}.\d{6}(?:[---+]\d{6})?) ` + /* Goroutine ID */ `(?:(\d+) )?` + /* Channel/File/Line*/ `([^:]+):(\d+) ` + /* Redactable flag */ `((?:` + redactableIndicator + `)?) ` + @@ -373,6 +417,19 @@ type entryDecoderV1 struct { truncatedLastEntry bool } +func decodeTimestamp(fragment []byte) (unixNano int64, err error) { + timeFormat := MessageTimeFormat + if len(fragment) > 7 && (fragment[len(fragment)-7] == '+' || fragment[len(fragment)-7] == '-') { + // The timestamp has a timezone offset. + timeFormat = MessageTimeFormatWithTZ + } + t, err := time.Parse(timeFormat, string(fragment)) + if err != nil { + return 0, err + } + return t.UnixNano(), nil +} + // Decode decodes the next log entry into the provided protobuf message. 
func (d *entryDecoderV1) Decode(entry *logpb.Entry) error { for { @@ -395,11 +452,11 @@ func (d *entryDecoderV1) Decode(entry *logpb.Entry) error { entry.Severity = Severity(strings.IndexByte(severityChar, m[1][0]) + 1) // Process the timestamp. - t, err := time.Parse(MessageTimeFormat, string(m[2])) + var err error + entry.Time, err = decodeTimestamp(m[2]) if err != nil { return err } - entry.Time = t.UnixNano() // Process the goroutine ID. if len(m[3]) > 0 { diff --git a/pkg/util/log/format_crdb_v1_test.go b/pkg/util/log/format_crdb_v1_test.go index 2399e6bba356..7e135ac98096 100644 --- a/pkg/util/log/format_crdb_v1_test.go +++ b/pkg/util/log/format_crdb_v1_test.go @@ -42,6 +42,17 @@ func TestCrdbV1EncodeDecode(t *testing.T) { } case "entries": + var loc *time.Location + if arg, ok := td.Arg("tz"); ok { + var err error + var tz string + arg.Scan(t, 0, &tz) + loc, err = timeutil.LoadLocation(tz) + if err != nil { + td.Fatalf(t, "invalid tz: %v", err) + } + } + var inputEntries []logpb.Entry for _, line := range strings.Split(td.Input, "\n") { entry := templateEntry @@ -54,7 +65,9 @@ func TestCrdbV1EncodeDecode(t *testing.T) { var buf bytes.Buffer // Encode. for _, entry := range inputEntries { - _ = FormatLegacyEntry(entry, &buf) + lbuf := formatLogEntryInternalV1(entry, false /* header */, true /* showCounter */, nil /* colorProfile */, loc) + buf.Write(lbuf.Bytes()) + putBuffer(lbuf) } // Decode. 
entryStr := buf.String() diff --git a/pkg/util/log/format_crdb_v2.go b/pkg/util/log/format_crdb_v2.go index cb1d902e2a5c..164f0917c8a8 100644 --- a/pkg/util/log/format_crdb_v2.go +++ b/pkg/util/log/format_crdb_v2.go @@ -22,6 +22,7 @@ package log import ( "bufio" "bytes" + "fmt" "io" "regexp" "strconv" @@ -30,9 +31,7 @@ import ( "unicode/utf8" "github.com/cockroachdb/cockroach/pkg/base/serverident" - "github.com/cockroachdb/cockroach/pkg/util/log/channel" "github.com/cockroachdb/cockroach/pkg/util/log/logpb" - "github.com/cockroachdb/cockroach/pkg/util/log/severity" "github.com/cockroachdb/cockroach/pkg/util/timeutil" "github.com/cockroachdb/errors" "github.com/cockroachdb/redact" @@ -40,19 +39,77 @@ import ( ) // formatCrdbV2 is the canonical log format. -type formatCrdbV2 struct{} +type formatCrdbV2 struct { + // colorProfile is used to colorize the output. + colorProfile ttycolor.Profile + // colorProfileName is the name of the color profile, used for + // documentation purposes. + colorProfileName string + // loc is the time zone to use. When non-nil, it forces the + // presentation of a time zone specification after the time stamp. + // The corresponding code path is much slower. 
+ loc *time.Location +} + +func (f *formatCrdbV2) setOption(k string, v string) error { + switch k { + case "colors": + switch v { + case "none": + f.colorProfile = nil + case "auto": + f.colorProfile = ttycolor.StderrProfile + case "ansi": + f.colorProfile = ttycolor.Profile8 + case "256color": + f.colorProfile = ttycolor.Profile256 + default: + return errors.WithHint( + errors.Newf("unknown colors value: %q", redact.Safe(v)), + "Possible values: none, auto, ansi, 256color.") + } + f.colorProfileName = v + return nil -func (formatCrdbV2) formatterName() string { return "crdb-v2" } + case "timezone": + l, err := timeutil.LoadLocation(v) + if err != nil { + return errors.Wrapf(err, "invalid timezone: %q", v) + } + if l == time.UTC { + // Avoid triggering the slow path in the entry formatter. + l = nil + } + f.loc = l + return nil -func (formatCrdbV2) formatEntry(entry logEntry) *buffer { - return formatLogEntryInternalV2(entry, nil) + default: + return errors.Newf("unknown format option: %q", redact.Safe(k)) + } } -func (formatCrdbV2) doc() string { return formatCrdbV2CommonDoc() } +func (f formatCrdbV2) formatterName() string { + var buf strings.Builder + buf.WriteString("crdb-v2") + if f.colorProfileName != "none" { + buf.WriteString("-tty") + } + return buf.String() +} func (formatCrdbV2) contentType() string { return "text/plain" } -func formatCrdbV2CommonDoc() string { +func (f formatCrdbV2) doc() string { + if f.formatterName() != "crdb-v2" { + var buf strings.Builder + fmt.Fprintf(&buf, `This format name is an alias for 'crdb-v2' with +the following format option defaults: + +- `+"`colors: %v`"+` +`, f.colorProfileName) + return buf.String() + } + var buf strings.Builder buf.WriteString(`This is the main file format used from CockroachDB v21.1. @@ -176,123 +233,28 @@ Finally, in the previous format, structured entries were prefixed with the string ` + "`" + structuredEntryPrefix + "`" + `. 
In the new format, they are prefixed by the ` + "`=`" + ` continuation indicator. -`) - return buf.String() -} -// formatCrdbV2TTY is like formatCrdbV2 and includes VT color codes if -// the stderr output is a TTY and -nocolor is not passed on the -// command line. -type formatCrdbV2TTY struct{} +Additional options recognized via ` + "`format-options`" + `: -func (formatCrdbV2TTY) formatterName() string { return "crdb-v2-tty" } +| Option | Description | +|--------|-------------| +| ` + "`colors`" + ` | The color profile to use. Possible values: none, auto, ansi, 256color. Default is auto. | +| ` + "`timezone`" + ` | The timezone to use for the timestamp column. The value can be any timezone name recognized by the Go standard library. Default is ` + "`UTC`" + ` | -func (formatCrdbV2TTY) formatEntry(entry logEntry) *buffer { - cp := ttycolor.StderrProfile - if logging.stderrSink.noColor.Get() { - cp = nil - } - return formatLogEntryInternalV2(entry, cp) -} +`) -func (formatCrdbV2TTY) doc() string { - return "Same textual format as `" + formatCrdbV2{}.formatterName() + "`." + ttyFormatDoc + return buf.String() } -func (formatCrdbV2TTY) contentType() string { return "text/plain" } - -// formatEntryInternalV2 renders a log entry. -// Log lines are colorized depending on severity. -// It uses a newly allocated *buffer. The caller is responsible -// for calling putBuffer() afterwards. -// -// Note: the prefix up to and including the logging tags -// needs to remain the same as in crdb-v1, so as to -// preserve cross-version compatibility with at least -// one version backwards. -func formatLogEntryInternalV2(entry logEntry, cp ttycolor.Profile) *buffer { +func (f formatCrdbV2) formatEntry(entry logEntry) *buffer { + // Note: the prefix up to and including the logging tags + // needs to remain the same as in crdb-v1, so as to + // preserve cross-version compatibility with at least + // one version backwards. 
buf := getBuffer() - if entry.line < 0 { - entry.line = 0 // not a real line number, but acceptable to someDigits - } - if entry.sev > severity.FATAL || entry.sev <= severity.UNKNOWN { - entry.sev = severity.INFO // for safety. - } - - tmp := buf.tmp[:len(buf.tmp)] - var n int - var prefix []byte - switch entry.sev { - case severity.INFO: - prefix = cp[ttycolor.Cyan] - case severity.WARNING: - prefix = cp[ttycolor.Yellow] - case severity.ERROR, severity.FATAL: - prefix = cp[ttycolor.Red] - } - n += copy(tmp, prefix) - // Avoid Fprintf, for speed. The format is so simple that we can do it quickly by hand. - // It's worth about 3X. Fprintf is hard. - now := timeutil.Unix(0, entry.ts) - year, month, day := now.Date() - hour, minute, second := now.Clock() - // Lyymmdd hh:mm:ss.uuuuuu file:line - tmp[n] = severityChar[entry.sev-1] - n++ - if year < 2000 { - year = 2000 - } - n += buf.twoDigits(n, year-2000) - n += buf.twoDigits(n, int(month)) - n += buf.twoDigits(n, day) - n += copy(tmp[n:], cp[ttycolor.Gray]) // gray for time, file & line - tmp[n] = ' ' - n++ - n += buf.twoDigits(n, hour) - tmp[n] = ':' - n++ - n += buf.twoDigits(n, minute) - tmp[n] = ':' - n++ - n += buf.twoDigits(n, second) - tmp[n] = '.' - n++ - n += buf.nDigits(6, n, now.Nanosecond()/1000, '0') - tmp[n] = ' ' - n++ - n += buf.someDigits(n, int(entry.gid)) - tmp[n] = ' ' - n++ - if entry.ch != channel.DEV { - // Prefix the filename with the channel number. - n += buf.someDigits(n, int(entry.ch)) - tmp[n] = '@' - n++ - } - buf.Write(tmp[:n]) - buf.WriteString(entry.file) - tmp[0] = ':' - n = buf.someDigits(1, entry.line) - n++ - // Reset the color to default. - n += copy(tmp[n:], cp[ttycolor.Reset]) - tmp[n] = ' ' - n++ - // If redaction is enabled, indicate that the current entry has - // markers. This indicator is used in the log parser to determine - // which redaction strategy to adopt. 
- if entry.payload.redactable { - copy(tmp[n:], redactableIndicatorBytes) - n += len(redactableIndicatorBytes) - } - // Note: when the redactable indicator is not introduced - // there are two spaces next to each other. This is intended - // and should be preserved for backward-compatibility with - // 3rd party log parsers. - tmp[n] = ' ' - n++ - buf.Write(tmp[:n]) + cp := f.colorProfile + writeCrdbHeader(buf, cp, entry.sev, entry.ch, entry.file, entry.line, entry.ts, f.loc, int(entry.gid), entry.payload.redactable) // The remainder is variable-length and could exceed // the static size of tmp. But we do have a best-case upper bound. @@ -322,7 +284,8 @@ func formatLogEntryInternalV2(entry logEntry, cp ttycolor.Profile) *buffer { // Display the counter if set and enabled. if entry.counter > 0 { - n = buf.someDigits(0, int(entry.counter)) + tmp := buf.tmp[:len(buf.tmp)] + n := buf.someDigits(0, int(entry.counter)) buf.Write(cp[ttycolor.Cyan]) buf.Write(tmp[:n]) buf.Write(cp[ttycolor.Reset]) @@ -494,7 +457,7 @@ var ( entryREV2 = regexp.MustCompile( `(?m)^` + /* Severity */ `(?P[` + severityChar + `])` + - /* Date and time */ `(?P\d{6} \d{2}:\d{2}:\d{2}.\d{6}) ` + + /* Date and time */ `(?P\d{6} \d{2}:\d{2}:\d{2}.\d{6}(?:[---+]\d{6})?) 
` +
		/* Goroutine ID */ `(?:(?P<goroutine>\d+) )` +
		/* Go standard library flag */ `(\(gostd\) )?` +
		/* Channel */ `(?:(?P<channel>\d+)@)?` +
@@ -688,11 +651,11 @@ func (f entryDecoderV2Fragment) getGoroutine() int64 {
 }

 func (f entryDecoderV2Fragment) getTimestamp() (unixNano int64) {
-	t, err := time.Parse(MessageTimeFormat, string(f[v2DateTimeIdx]))
+	t, err := decodeTimestamp(f[v2DateTimeIdx])
 	if err != nil {
 		panic(err)
 	}
-	return t.UnixNano()
+	return t
 }

 func (f entryDecoderV2Fragment) getChannel() logpb.Channel {
diff --git a/pkg/util/log/format_crdb_v2_test.go b/pkg/util/log/format_crdb_v2_test.go
index e12adb6fd7ad..7a059d181d02 100644
--- a/pkg/util/log/format_crdb_v2_test.go
+++ b/pkg/util/log/format_crdb_v2_test.go
@@ -23,6 +23,7 @@ import (
 	"github.com/cockroachdb/cockroach/pkg/util/log/channel"
 	"github.com/cockroachdb/cockroach/pkg/util/log/logpb"
 	"github.com/cockroachdb/cockroach/pkg/util/log/severity"
+	"github.com/cockroachdb/cockroach/pkg/util/timeutil"
 	"github.com/cockroachdb/datadriven"
 	"github.com/cockroachdb/logtags"
 	"github.com/kr/pretty"
@@ -53,6 +54,15 @@ func TestFormatCrdbV2(t *testing.T) {
 		t.Fatal(err)
 	}

+	tm2, err := time.Parse(MessageTimeFormatWithTZ, "060102 17:04:05.654321+020000")
+	if err != nil {
+		t.Fatal(err)
+	}
+
+	if tm.UnixNano() != tm2.UnixNano() {
+		t.Fatalf("expected same, got %q vs %q", tm.In(time.UTC), tm2.In(time.UTC))
+	}
+
 	emptyCtx := context.Background()
 	sysCtx := context.Background()
@@ -151,7 +161,18 @@ func TestFormatCrdbV2(t *testing.T) {
 	}

 	// We only use the datadriven framework for the ability to rewrite the output.
- datadriven.RunTest(t, "testdata/crdb_v2", func(t *testing.T, _ *datadriven.TestData) string { + datadriven.RunTest(t, "testdata/crdb_v2", func(t *testing.T, td *datadriven.TestData) string { + var loc *time.Location + if arg, ok := td.Arg("tz"); ok { + var err error + var tz string + arg.Scan(t, 0, &tz) + loc, err = timeutil.LoadLocation(tz) + if err != nil { + td.Fatalf(t, "invalid tz: %v", err) + } + } + var buf bytes.Buffer for _, tc := range testCases { // override non-deterministic fields to stabilize the expected output. @@ -160,7 +181,7 @@ func TestFormatCrdbV2(t *testing.T) { tc.gid = 11 buf.WriteString("#\n") - f := formatCrdbV2{} + f := formatCrdbV2{loc: loc} b := f.formatEntry(tc) fmt.Fprintf(&buf, "%s", b.String()) putBuffer(b) diff --git a/pkg/util/log/format_json.go b/pkg/util/log/format_json.go index 26eebf18eee8..f6ebe2a43218 100644 --- a/pkg/util/log/format_json.go +++ b/pkg/util/log/format_json.go @@ -17,64 +17,107 @@ import ( "sort" "strconv" "strings" + "time" "github.com/cockroachdb/cockroach/pkg/base/serverident" "github.com/cockroachdb/cockroach/pkg/util/jsonbytes" "github.com/cockroachdb/cockroach/pkg/util/log/logpb" + "github.com/cockroachdb/cockroach/pkg/util/timeutil" "github.com/cockroachdb/errors" "github.com/cockroachdb/logtags" "github.com/cockroachdb/redact" ) -type formatFluentJSONCompact struct{} - -func (formatFluentJSONCompact) formatterName() string { return "json-fluent-compact" } - -func (formatFluentJSONCompact) doc() string { return formatJSONDoc(true /* fluent */, tagCompact) } - -func (f formatFluentJSONCompact) formatEntry(entry logEntry) *buffer { - return formatJSON(entry, true /* fluent */, tagCompact) +type formatJSONFull struct { + // fluentTag, if set, will include a Fluent tag in the JSON output. + fluentTag bool + // tags controls the verbosity of the log tags. + tags tagChoice + // datetimeFormat controls the format of the extra timestamp field "datetime". 
+	datetimeFormat string
+	// loc controls the timezone of the extra timestamp field "datetime".
+	loc *time.Location
 }

-func (formatFluentJSONCompact) contentType() string { return "application/json" }
-
-type formatFluentJSONFull struct{}
-
-func (formatFluentJSONFull) formatterName() string { return "json-fluent" }
+func (f *formatJSONFull) setOption(k string, v string) error {
+	switch k {
+	case "fluent-tag":
+		switch v {
+		case "true":
+			f.fluentTag = true
+		case "false":
+			f.fluentTag = false
+		default:
+			return errors.Newf("unknown fluent-tag value: %q", redact.Safe(v))
+		}
+		return nil
+
+	case "tag-style":
+		switch v {
+		case "compact":
+			f.tags = tagCompact
+		case "verbose":
+			f.tags = tagVerbose
+		default:
+			return errors.Newf("unknown tag-style value: %q", redact.Safe(v))
+		}
+		return nil

-func (f formatFluentJSONFull) formatEntry(entry logEntry) *buffer {
-	return formatJSON(entry, true /* fluent */, tagVerbose)
+	case "datetime-timezone":
+		l, err := timeutil.LoadLocation(v)
+		if err != nil {
+			return errors.Wrapf(err, "invalid timezone: %q", v)
+		}
+		f.loc = l
+		return nil
+
+	case "datetime-format":
+		switch v {
+		case "none":
+			f.datetimeFormat = ""
+		case "iso8601", "rfc3339":
+			f.datetimeFormat = time.RFC3339Nano
+		case "rfc1123":
+			f.datetimeFormat = time.RFC1123Z
+		default:
+			if strings.HasPrefix(v, "fmt:") {
+				f.datetimeFormat = v[4:]
+			} else {
+				return errors.Newf("unknown datetime-format value: %q", v)
+			}
+		}
+		return nil
+	default:
+		return errors.Newf("unknown format option: %q", redact.Safe(k))
+	}
 }

-func (formatFluentJSONFull) doc() string { return formatJSONDoc(true /* fluent */, tagVerbose) }
-
-func (formatFluentJSONFull) contentType() string { return "application/json" }
-
-type formatJSONCompact struct{}
-
-func (formatJSONCompact) formatterName() string { return "json-compact" }
-
-func (f formatJSONCompact) formatEntry(entry logEntry) *buffer {
-	return formatJSON(entry, false /* fluent */, tagCompact)
+func (f formatJSONFull)
formatterName() string { + var buf strings.Builder + buf.WriteString("json") + if f.fluentTag { + buf.WriteString("-fluent") + } + if f.tags == tagCompact { + buf.WriteString("-compact") + } + return buf.String() } -func (formatJSONCompact) doc() string { return formatJSONDoc(false /* fluent */, tagCompact) } - -func (formatJSONCompact) contentType() string { return "application/json" } - -type formatJSONFull struct{} - -func (formatJSONFull) formatterName() string { return "json" } - -func (f formatJSONFull) formatEntry(entry logEntry) *buffer { - return formatJSON(entry, false /* fluent */, tagVerbose) -} +func (formatJSONFull) contentType() string { return "application/json" } -func (formatJSONFull) doc() string { return formatJSONDoc(false /* fluent */, tagVerbose) } +func (f formatJSONFull) doc() string { + if f.formatterName() != "json" { + var buf strings.Builder + fmt.Fprintf(&buf, `This format name is an alias for 'json' with +the following format option defaults: -func (formatJSONFull) contentType() string { return "application/json" } +- `+"`fluent-tag: %v`"+` +- `+"`tag-style: %v`"+` +`, f.fluentTag, f.tags) + return buf.String() + } -func formatJSONDoc(forFluent bool, tags tagChoice) string { var buf strings.Builder buf.WriteString(`This format emits log entries as a JSON payload. @@ -85,12 +128,10 @@ processing over a stream unambiguously. Each entry contains at least the following fields: -| Field | Description | -|-------|-------------| +| Field name if ` + "`tag-style: compact`" + ` is specified | Field name if ` + "`tag-style: verbose`" + ` is specified | Description | +|-------|-------|-------------| +| ` + "`tag`" + ` | ` + "`tag`" + ` | (Only if the option ` + "`fluent-tag: true`" + ` is given.) A Fluent tag for the event, formed by the process name and the logging channel. | `) - if forFluent { - buf.WriteString("| `tag` | A Fluent tag for the event, formed by the process name and the logging channel. 
|\n") - } keys := make([]string, 0, len(jsonTags)) for c := range jsonTags { @@ -105,7 +146,10 @@ Each entry contains at least the following fields: if !jsonTags[c].includedInHeader { continue } - fmt.Fprintf(&buf, "| `%s` | %s |\n", jsonTags[c].tags[tags], jsonTags[c].description) + fmt.Fprintf(&buf, "| `%s` | `%s` | %s |\n", + jsonTags[c].tags[tagCompact], + jsonTags[c].tags[tagVerbose], + jsonTags[c].description) } buf.WriteString(` @@ -113,33 +157,39 @@ Each entry contains at least the following fields: After a couple of *header* entries written at the beginning of each log sink, all subsequent log entries also contain the following fields: -| Field | Description | -|---------------------|-------------| +| Field name if ` + "`tag-style: compact`" + ` is specified | Field name if ` + "`tag-style: verbose`" + ` is specified | Description | +|-------|-------|-------------| `) for _, k := range keys { c := k[0] if jsonTags[c].includedInHeader { continue } - fmt.Fprintf(&buf, "| `%s` | %s |\n", jsonTags[c].tags[tags], jsonTags[c].description) + fmt.Fprintf(&buf, "| `%s` | `%s` | %s |\n", + jsonTags[c].tags[tagCompact], + jsonTags[c].tags[tagVerbose], + jsonTags[c].description) } buf.WriteString(` Additionally, the following fields are conditionally present: -| Field | Description | -|---------------------|-------------| +| Field name if ` + "`tag-style: compact`" + ` is specified | Field name if ` + "`tag-style: verbose`" + ` is specified | Description | +|-------|-------|-------------| `) for _, k := range serverIdentifierFields { b := byte(k) - fmt.Fprintf(&buf, "| `%s` | %s |\n", jsonTags[b].tags[tags], jsonTags[b].description) + fmt.Fprintf(&buf, "| `%s` | `%s` | %s |\n", + jsonTags[b].tags[tagCompact], + jsonTags[b].tags[tagVerbose], + jsonTags[b].description) } - buf.WriteString(`| ` + "`tags`" + ` | The logging context tags for the entry, if there were context tags. | -| ` + "`message`" + ` | For unstructured events, the flat text payload. 
|
-| ` + "`event`" + ` | The logging event, if structured (see below for details). |
-| ` + "`stacks`" + ` | Goroutine stacks, for fatal events. |
+	buf.WriteString(`| ` + "`tags`" + ` | ` + "`tags`" + ` | The logging context tags for the entry, if there were context tags. |
+| ` + "`message`" + ` | ` + "`message`" + ` | For unstructured events, the flat text payload. |
+| ` + "`event`" + ` | ` + "`event`" + ` | The logging event, if structured (see below for details). |
+| ` + "`stacks`" + ` | ` + "`stacks`" + ` | Goroutine stacks, for fatal events. |
 
 When an entry is structured, the ` + "`event`" + ` field maps to
 a dictionary whose structure is one of the documented structured events. See the [reference documentation](eventlog.html)
@@ -151,6 +201,14 @@ fields that are considered sensitive. These markers are automatically recognized
 by ` + "[`cockroach debug zip`](cockroach-debug-zip.html)" + ` and
 ` + "[`cockroach debug merge-logs`](cockroach-debug-merge-logs.html)" + ` when log redaction is requested.
 
+Additional options recognized via ` + "`format-options`" + `:
+
+| Option | Description |
+|--------|-------------|
+| ` + "`datetime-format`" + ` | The format to use for the ` + "`datetime`" + ` field. The value can be one of ` + "`none`" + `, ` + "`iso8601`/`rfc3339` (synonyms)" + `, ` + "`rfc1123`" + `, or a custom format string prefixed with ` + "`fmt:`" + ` (interpreted using Go's reference-time layout syntax). Default is ` + "`none`" + `. |
+| ` + "`datetime-timezone`" + ` | The timezone to use for the ` + "`datetime`" + ` field. The value can be any timezone name recognized by the Go standard library. Default is ` + "`UTC`" + `. |
+| ` + "`tag-style`" + ` | The tags to include in the envelope. The value can be ` + "`compact`" + ` (one-letter tags) or ` + "`verbose`" + ` (long-form tags). Default is ` + "`verbose`" + `. |
+| ` + "`fluent-tag`" + ` | Whether to produce an additional field called ` + "`tag`" + ` for Fluent compatibility. Default is ` + "`false`" + `. 
| `) @@ -168,6 +226,8 @@ var jsonTags = map[byte]struct { "The name of the logging channel where the event was sent.", false}, 't': {[2]string{"t", "timestamp"}, "The timestamp at which the event was emitted on the logging channel.", true}, + 'd': {[2]string{"d", "datetime"}, + "The pretty-printed date/time of the event timestamp, if enabled via options.", true}, 's': {[2]string{"s", "severity_numeric"}, "The numeric value of the severity of the event.", false}, 'S': {[2]string{"sev", "severity"}, @@ -205,6 +265,13 @@ const ( TenantIDLogTagKeyJSON string = "tenant_id" ) +func (t tagChoice) String() string { + if t == tagCompact { + return "compact" + } + return "verbose" +} + var channelNamesLowercase = func() map[Channel]string { lnames := make(map[Channel]string, len(logpb.Channel_name)) for ch, s := range logpb.Channel_name { @@ -213,11 +280,11 @@ var channelNamesLowercase = func() map[Channel]string { return lnames }() -func formatJSON(entry logEntry, forFluent bool, tags tagChoice) *buffer { +func (f formatJSONFull) formatEntry(entry logEntry) *buffer { jtags := jsonTags buf := getBuffer() buf.WriteByte('{') - if forFluent { + if f.fluentTag { // Tag: this is the main category for Fluentd events. 
buf.WriteString(`"tag":"`) // Note: fluent prefers if there is no period in the tag other @@ -245,13 +312,13 @@ func formatJSON(entry logEntry, forFluent bool, tags tagChoice) *buffer { } if !entry.header { buf.WriteByte('"') - buf.WriteString(jtags['c'].tags[tags]) + buf.WriteString(jtags['c'].tags[f.tags]) buf.WriteString(`":`) n := buf.someDigits(0, int(entry.ch)) buf.Write(buf.tmp[:n]) - if tags != tagCompact { + if f.tags != tagCompact { buf.WriteString(`,"`) - buf.WriteString(jtags['C'].tags[tags]) + buf.WriteString(jtags['C'].tags[f.tags]) buf.WriteString(`":"`) escapeString(buf, entry.ch.String()) buf.WriteByte('"') @@ -271,7 +338,7 @@ func formatJSON(entry logEntry, forFluent bool, tags tagChoice) *buffer { // precision of the resulting number exceeds json's native float // precision. Fluentd doesn't care and still parses the value properly. buf.WriteByte('"') - buf.WriteString(jtags['t'].tags[tags]) + buf.WriteString(jtags['t'].tags[f.tags]) buf.WriteString(`":"`) n := buf.someDigits(0, int(entry.ts/1000000000)) buf.tmp[n] = '.' @@ -279,30 +346,41 @@ func formatJSON(entry logEntry, forFluent bool, tags tagChoice) *buffer { n += buf.nDigits(9, n, int(entry.ts%1000000000), '0') buf.Write(buf.tmp[:n]) buf.WriteByte('"') - + // Extra "datetime" field if requested. + if len(f.datetimeFormat) > 0 { + t := timeutil.FromUnixNanos(entry.ts) + if f.loc != nil { + t = t.In(f.loc) + } + buf.WriteString(`,"`) + buf.WriteString(jtags['d'].tags[f.tags]) + buf.WriteString(`":"`) + buf.WriteString(t.Format(f.datetimeFormat)) + buf.WriteByte('"') + } // Server identifiers. 
if entry.ClusterID != "" { buf.WriteString(`,"`) - buf.WriteString(jtags['x'].tags[tags]) + buf.WriteString(jtags['x'].tags[f.tags]) buf.WriteString(`":"`) escapeString(buf, entry.ClusterID) buf.WriteByte('"') } if entry.NodeID != "" { buf.WriteString(`,"`) - buf.WriteString(jtags['N'].tags[tags]) + buf.WriteString(jtags['N'].tags[f.tags]) buf.WriteString(`":`) buf.WriteString(entry.NodeID) } if entry.TenantID() != "" { buf.WriteString(`,"`) - buf.WriteString(jtags['T'].tags[tags]) + buf.WriteString(jtags['T'].tags[f.tags]) buf.WriteString(`":`) buf.WriteString(entry.TenantID()) } if entry.SQLInstanceID != "" { buf.WriteString(`,"`) - buf.WriteString(jtags['q'].tags[tags]) + buf.WriteString(jtags['q'].tags[f.tags]) buf.WriteString(`":`) buf.WriteString(entry.SQLInstanceID) } @@ -310,7 +388,7 @@ func formatJSON(entry logEntry, forFluent bool, tags tagChoice) *buffer { // The binary version. if entry.version != "" { buf.WriteString(`,"`) - buf.WriteString(jtags['v'].tags[tags]) + buf.WriteString(jtags['v'].tags[f.tags]) buf.WriteString(`":"`) escapeString(buf, entry.version) buf.WriteByte('"') @@ -320,22 +398,22 @@ func formatJSON(entry logEntry, forFluent bool, tags tagChoice) *buffer { // Severity, both in numeric form (for ease of processing) and // string form (to facilitate human comprehension). 
buf.WriteString(`,"`) - buf.WriteString(jtags['s'].tags[tags]) + buf.WriteString(jtags['s'].tags[f.tags]) buf.WriteString(`":`) n = buf.someDigits(0, int(entry.sev)) buf.Write(buf.tmp[:n]) - if tags == tagCompact { + if f.tags == tagCompact { if entry.sev > 0 && int(entry.sev) <= len(severityChar) { buf.WriteString(`,"`) - buf.WriteString(jtags['S'].tags[tags]) + buf.WriteString(jtags['S'].tags[f.tags]) buf.WriteString(`":"`) buf.WriteByte(severityChar[int(entry.sev)-1]) buf.WriteByte('"') } } else { buf.WriteString(`,"`) - buf.WriteString(jtags['S'].tags[tags]) + buf.WriteString(jtags['S'].tags[f.tags]) buf.WriteString(`":"`) escapeString(buf, entry.sev.String()) buf.WriteByte('"') @@ -344,18 +422,18 @@ func formatJSON(entry logEntry, forFluent bool, tags tagChoice) *buffer { // Goroutine number. buf.WriteString(`,"`) - buf.WriteString(jtags['g'].tags[tags]) + buf.WriteString(jtags['g'].tags[f.tags]) buf.WriteString(`":`) n = buf.someDigits(0, int(entry.gid)) buf.Write(buf.tmp[:n]) // Source location. buf.WriteString(`,"`) - buf.WriteString(jtags['f'].tags[tags]) + buf.WriteString(jtags['f'].tags[f.tags]) buf.WriteString(`":"`) escapeString(buf, entry.file) buf.WriteString(`","`) - buf.WriteString(jtags['l'].tags[tags]) + buf.WriteString(jtags['l'].tags[f.tags]) buf.WriteString(`":`) n = buf.someDigits(0, entry.line) buf.Write(buf.tmp[:n]) @@ -363,7 +441,7 @@ func formatJSON(entry logEntry, forFluent bool, tags tagChoice) *buffer { if !entry.header { // Entry counter. buf.WriteString(`,"`) - buf.WriteString(jtags['n'].tags[tags]) + buf.WriteString(jtags['n'].tags[f.tags]) buf.WriteString(`":`) n = buf.someDigits(0, int(entry.counter)) buf.Write(buf.tmp[:n]) @@ -374,7 +452,7 @@ func formatJSON(entry logEntry, forFluent bool, tags tagChoice) *buffer { // it's likely there will be more redaction formats // in the future. 
buf.WriteString(`,"`) - buf.WriteString(jtags['r'].tags[tags]) + buf.WriteString(jtags['r'].tags[f.tags]) buf.WriteString(`":`) if entry.payload.redactable { buf.WriteByte('1') diff --git a/pkg/util/log/format_json_test.go b/pkg/util/log/format_json_test.go index 6d1c70bdc870..c07374d7575a 100644 --- a/pkg/util/log/format_json_test.go +++ b/pkg/util/log/format_json_test.go @@ -24,6 +24,7 @@ import ( "github.com/cockroachdb/cockroach/pkg/util/log/channel" "github.com/cockroachdb/cockroach/pkg/util/log/logpb" "github.com/cockroachdb/cockroach/pkg/util/log/severity" + "github.com/cockroachdb/cockroach/pkg/util/timeutil" "github.com/cockroachdb/datadriven" "github.com/cockroachdb/logtags" "github.com/kr/pretty" @@ -69,11 +70,16 @@ func TestJSONFormats(t *testing.T) { makeUnstructuredEntry(ctx, severity.ERROR, channel.HEALTH, 0, true, "hello %s", "world"), } + l, err := timeutil.LoadLocation("America/New_York") + if err != nil { + t.Fatal(err) + } formats := []logFormatter{ - formatFluentJSONCompact{}, - formatFluentJSONFull{}, - formatJSONCompact{}, - formatJSONFull{}, + &formatJSONFull{fluentTag: true, tags: tagCompact}, + &formatJSONFull{fluentTag: true, tags: tagVerbose}, + &formatJSONFull{tags: tagCompact}, + &formatJSONFull{}, + &formatJSONFull{datetimeFormat: "2006-01-02 xx 15:04:05+07", loc: l}, } // We only use the datadriven framework for the ability to rewrite the output. diff --git a/pkg/util/log/formats.go b/pkg/util/log/formats.go index c68478b7914b..6d05e892fad9 100644 --- a/pkg/util/log/formats.go +++ b/pkg/util/log/formats.go @@ -10,6 +10,12 @@ package log +import ( + "sort" + + "github.com/cockroachdb/ttycolor" +) + type logFormatter interface { formatterName() string // doc is used to generate the formatter documentation. @@ -18,6 +24,9 @@ type logFormatter interface { // The caller is responsible for calling putBuffer() afterwards. formatEntry(entry logEntry) *buffer + // setOption configures the formatter with the given option. 
+ setOption(key string, value string) error + // contentType is the MIME content-type field to use on // transports which use this metadata. contentType() string @@ -36,30 +45,54 @@ var formatParsers = map[string]string{ "json-fluent-compact": "json-compact", } -var formatters = func() map[string]logFormatter { - m := make(map[string]logFormatter) - r := func(f logFormatter) { - m[f.formatterName()] = f +var formatters = func() map[string]func() logFormatter { + m := make(map[string]func() logFormatter) + r := func(f func() logFormatter) { + name := f().formatterName() + if _, ok := m[name]; ok { + panic("duplicate formatter name: " + name) + } + m[name] = f } - r(formatCrdbV1{}) - r(formatCrdbV1WithCounter{}) - r(formatCrdbV1TTY{}) - r(formatCrdbV1TTYWithCounter{}) - r(formatCrdbV2{}) - r(formatCrdbV2TTY{}) - r(formatFluentJSONCompact{}) - r(formatFluentJSONFull{}) - r(formatJSONCompact{}) - r(formatJSONFull{}) + r(func() logFormatter { + return &formatCrdbV1{showCounter: false, colorProfile: ttycolor.StderrProfile, colorProfileName: "auto"} + }) + r(func() logFormatter { + return &formatCrdbV1{showCounter: false, colorProfileName: "none"} + }) + r(func() logFormatter { + return &formatCrdbV1{showCounter: true, colorProfile: ttycolor.StderrProfile, colorProfileName: "auto"} + }) + r(func() logFormatter { + return &formatCrdbV1{showCounter: true, colorProfileName: "none"} + }) + r(func() logFormatter { + return &formatCrdbV2{colorProfileName: "none"} + }) + r(func() logFormatter { + return &formatCrdbV2{colorProfile: ttycolor.StderrProfile, colorProfileName: "auto"} + }) + r(func() logFormatter { return &formatJSONFull{fluentTag: true, tags: tagCompact} }) + r(func() logFormatter { return &formatJSONFull{fluentTag: true, tags: tagVerbose} }) + r(func() logFormatter { return &formatJSONFull{tags: tagCompact} }) + r(func() logFormatter { return &formatJSONFull{tags: tagVerbose} }) return m }() +var formatNames = func() (res []string) { + for name := range formatters 
{ + res = append(res, name) + } + sort.Strings(res) + return res +}() + // GetFormatterDocs returns the embedded documentation for all the // supported formats. func GetFormatterDocs() map[string]string { m := make(map[string]string) for fmtName, f := range formatters { - m[fmtName] = f.doc() + m[fmtName] = f().doc() } return m } diff --git a/pkg/util/log/formats_test.go b/pkg/util/log/formats_test.go index dc4895602049..9a19413bed17 100644 --- a/pkg/util/log/formats_test.go +++ b/pkg/util/log/formats_test.go @@ -16,7 +16,6 @@ import ( "fmt" "os" "path/filepath" - "sort" "strconv" "testing" @@ -34,13 +33,6 @@ func TestFormatRedaction(t *testing.T) { sc := ScopeWithoutShowLogs(t) defer sc.Close(t) - // Make the test below deterministic. - formatNames := make([]string, 0, len(formatters)) - for n := range formatters { - formatNames = append(formatNames, n) - } - sort.Strings(formatNames) - ctx := context.Background() ctx = logtags.AddTag(ctx, "a", "secret1") ctx = logtags.AddTag(ctx, "b", redact.Sprintf("safe1 %s", "secret2")) diff --git a/pkg/util/log/http_sink.go b/pkg/util/log/http_sink.go index ac8b71e890a4..c512c19df15f 100644 --- a/pkg/util/log/http_sink.go +++ b/pkg/util/log/http_sink.go @@ -51,10 +51,11 @@ func newHTTPSink(c logconfig.HTTPSinkConfig) (*httpSink, error) { hs.doRequest = doGet } - f, ok := formatters[*c.Format] + fConstructor, ok := formatters[*c.Format] if !ok { panic(errors.AssertionFailedf("unknown format: %q", *c.Format)) } + f := fConstructor() if f.contentType() != "" { hs.contentType = f.contentType() } diff --git a/pkg/util/log/intercept.go b/pkg/util/log/intercept.go index d518acbaac3c..c8d08cd951ed 100644 --- a/pkg/util/log/intercept.go +++ b/pkg/util/log/intercept.go @@ -19,6 +19,8 @@ import ( "github.com/cockroachdb/cockroach/pkg/cli/exit" "github.com/cockroachdb/cockroach/pkg/util/log/severity" "github.com/cockroachdb/cockroach/pkg/util/syncutil" + "github.com/cockroachdb/errors" + "github.com/cockroachdb/redact" ) // InterceptWith 
diverts log traffic to the given interceptor `fn`. @@ -71,6 +73,9 @@ func (l *loggingT) newInterceptorSinkInfo() *sinkInfo { // formatInterceptor converts the raw logpb.Entry to JSON. type formatInterceptor struct{} +func (formatInterceptor) setOption(k string, _ string) error { + return errors.Newf("unknown option: %q", redact.Safe(k)) +} func (formatInterceptor) formatterName() string { return "json-intercept" } func (formatInterceptor) doc() string { return "internal only" } func (formatInterceptor) contentType() string { return "application/json" } diff --git a/pkg/util/log/logconfig/config.go b/pkg/util/log/logconfig/config.go index 92a2e4bc5a5a..676570bba792 100644 --- a/pkg/util/log/logconfig/config.go +++ b/pkg/util/log/logconfig/config.go @@ -215,6 +215,9 @@ type CommonSinkConfig struct { // Format indicates the entry format to use. Format *string `yaml:",omitempty"` + // FormatOptions indicates additional options for the format. + FormatOptions map[string]string `yaml:"format-options,omitempty"` + // Redact indicates whether to strip sensitive information before // log events are emitted to this sink. Redact *bool `yaml:",omitempty"` @@ -290,7 +293,8 @@ type StderrSinkConfig struct { Channels ChannelFilters `yaml:",omitempty,flow"` // NoColor forces the omission of VT color codes in the output even - // when stderr is a terminal. + // when stderr is a terminal. This option is deprecated; its effects + // are equivalent to 'format-options: {colors: none}'. NoColor bool `yaml:"no-color,omitempty"` // CommonSinkConfig is the configuration common to all sinks. Note diff --git a/pkg/util/log/logconfig/testdata/validate b/pkg/util/log/logconfig/testdata/validate index aa2724499016..0b110cf31a71 100644 --- a/pkg/util/log/logconfig/testdata/validate +++ b/pkg/util/log/logconfig/testdata/validate @@ -76,6 +76,53 @@ capture-stray-errors: dir: /default-dir max-group-size: 100MiB +# Check that default format propagates. 
+yaml +file-defaults: + format: crdb-v1 +sinks: + file-groups: + custom: + channels: DEV +---- +sinks: + file-groups: + custom: + channels: {INFO: all} + filter: INFO + format: crdb-v1 + stderr: + filter: NONE +capture-stray-errors: + enable: true + dir: /default-dir + max-group-size: 100MiB + +# Check that default format options propagate. +yaml +file-defaults: + format-options: {foo: bar} +sinks: + file-groups: + custom: + channels: DEV +---- +sinks: + file-groups: + custom: + channels: {INFO: all} + filter: INFO + format-options: + foo: bar + stderr: + filter: NONE + format-options: + foo: bar +capture-stray-errors: + enable: true + dir: /default-dir + max-group-size: 100MiB + # Check that fluent default network is filled. yaml sinks: diff --git a/pkg/util/log/logconfig/testdata/yaml b/pkg/util/log/logconfig/testdata/yaml index 90722980ecd4..3ee24a3c77d4 100644 --- a/pkg/util/log/logconfig/testdata/yaml +++ b/pkg/util/log/logconfig/testdata/yaml @@ -135,6 +135,8 @@ yaml file-defaults: dir: /default/dir filter: ERROR + format: foo + format-options: {bar: baz} redact: true redactable: false max-file-size: 10MB @@ -171,6 +173,9 @@ file-defaults: max-file-size: 9.5MiB max-group-size: 95MiB filter: ERROR + format: foo + format-options: + bar: baz redact: true redactable: false sinks: diff --git a/pkg/util/log/testdata/crdb_v1 b/pkg/util/log/testdata/crdb_v1 index e7998a349688..1a6ce370139c 100644 --- a/pkg/util/log/testdata/crdb_v1 +++ b/pkg/util/log/testdata/crdb_v1 @@ -115,3 +115,32 @@ logpb.Entry{Severity:1, Time:946684812300000000, Goroutine:456, File:"somefile.g logpb.Entry{Severity:1, Time:946684812300000000, Goroutine:456, File:"somefile.go", Line:136, Message:" ={\"hello\":123}\nstack trace:\nfoo", Tags:"", Counter:0x2, Redactable:true, Channel:0, StructuredEnd:0x0, StructuredStart:0x0, StackTraceStart:0x0, TenantID:"1"} subtest end + +subtest timezone + +# Check in the following expected results that the Time field after +# parsing the log format output is 
the same as the original timestamp +# at top. + +entries tz=america/new_york +{"counter": 3, "message": "info"} +---- +I691231 19:00:12.300000-050000 456 somefile.go:136 3 info +# after parse: +logpb.Entry{Severity:1, Time:12300000000, Goroutine:456, File:"somefile.go", Line:136, Message:"‹info›", Tags:"", Counter:0x3, Redactable:true, Channel:0, StructuredEnd:0x0, StructuredStart:0x0, StackTraceStart:0x0, TenantID:"1"} + +entries tz=europe/amsterdam +{"counter": 3, "message": "info"} +---- +I700101 01:00:12.300000+010000 456 somefile.go:136 3 info +# after parse: +logpb.Entry{Severity:1, Time:12300000000, Goroutine:456, File:"somefile.go", Line:136, Message:"‹info›", Tags:"", Counter:0x3, Redactable:true, Channel:0, StructuredEnd:0x0, StructuredStart:0x0, StackTraceStart:0x0, TenantID:"1"} + +entries tz=utc +{"counter": 3, "message": "info"} +---- +I700101 00:00:12.300000+000000 456 somefile.go:136 3 info +# after parse: +logpb.Entry{Severity:1, Time:12300000000, Goroutine:456, File:"somefile.go", Line:136, Message:"‹info›", Tags:"", Counter:0x3, Redactable:true, Channel:0, StructuredEnd:0x0, StructuredStart:0x0, StackTraceStart:0x0, TenantID:"1"} + +subtest end diff --git a/pkg/util/log/testdata/crdb_v2 b/pkg/util/log/testdata/crdb_v2 index aa43e48f3937..451a0bf67e49 100644 --- a/pkg/util/log/testdata/crdb_v2 +++ b/pkg/util/log/testdata/crdb_v2 @@ -46,3 +46,52 @@ W060102 15:04:05.654321 11 1@util/log/format_crdb_v2_test.go:123 [T2,noval,p3,l I060102 15:04:05.654321 11 util/log/format_crdb_v2_test.go:123 ⋮ [-] ={"Timestamp":123,"EventType":"rename_database","Event":"‹rename from `hello` to `world`›"} # W060102 15:04:05.654321 11 1@util/log/format_crdb_v2_test.go:123 [-] hello world + +run tz=america/new_york +---- +# +I060102 10:04:05.654321-050000 11 util/log/format_crdb_v2_test.go:123 ⋮ [T1,noval,s‹1›,long=‹2›] hello ‹world› +# +I060102 10:04:05.654321-050000 11 :123 [-] +# +I060102 10:04:05.654321-050000 11 util/log/format_crdb_v2_test.go:123 ⋮ 
[T1,noval,s‹1›,long=‹2›] ={"Timestamp":123,"EventType":"rename_database","Event":"‹rename from `hello` to `world`›"} +# +I060102 10:04:05.654321-050000 11 util/log/format_crdb_v2_test.go:123 ⋮ [T1,noval,s‹1›,long=‹2›] ={"Timestamp":123,"EventType":"rename_database","Event":"‹rename from `hello` to `world`›"} +I060102 10:04:05.654321-050000 11 util/log/format_crdb_v2_test.go:123 ⋮ [T1,noval,s‹1›,long=‹2›] !this is a fake stack +# +W060102 10:04:05.654321-050000 11 1@util/log/format_crdb_v2_test.go:123 [T1,noval,s1,long=2] hello world +# +E060102 10:04:05.654321-050000 11 2@util/log/format_crdb_v2_test.go:123 ⋮ [T1,noval,s‹1›,long=‹2›] hello ‹world› +# +W060102 10:04:05.654321-050000 11 1@util/log/format_crdb_v2_test.go:123 [T1,noval,s1,long=2] 123 hello world +# +E060102 10:04:05.654321-050000 11 2@util/log/format_crdb_v2_test.go:123 ⋮ [T1,noval,s‹1›,long=‹2›] hello ‹stack› +E060102 10:04:05.654321-050000 11 2@util/log/format_crdb_v2_test.go:123 ⋮ [T1,noval,s‹1›,long=‹2›] !this is a fake stack +# +I060102 10:04:05.654321-050000 11 util/log/format_crdb_v2_test.go:123 [T1,noval,s1,long=2] maybe multi +I060102 10:04:05.654321-050000 11 util/log/format_crdb_v2_test.go:123 [T1,noval,s1,long=2] +line +# +I060102 10:04:05.654321-050000 11 util/log/format_crdb_v2_test.go:123 ⋮ [T1,noval,s‹1›,long=‹2›] maybe ‹multi› +I060102 10:04:05.654321-050000 11 util/log/format_crdb_v2_test.go:123 ⋮ [T1,noval,s‹1›,long=‹2›] +‹line› +# +I060102 10:04:05.654321-050000 11 util/log/format_crdb_v2_test.go:123 ⋮ [T1,noval,s‹1›,long=‹2›] maybe ‹multi› +I060102 10:04:05.654321-050000 11 util/log/format_crdb_v2_test.go:123 ⋮ [T1,noval,s‹1›,long=‹2›] +‹line with stack› +I060102 10:04:05.654321-050000 11 util/log/format_crdb_v2_test.go:123 ⋮ [T1,noval,s‹1›,long=‹2›] !this is a fake stack +# +I060102 10:04:05.654321-050000 11 util/log/format_crdb_v2_test.go:123 [T1,noval,s1,long=2] 
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa +I060102 10:04:05.654321-050000 11 util/log/format_crdb_v2_test.go:123 [T1,noval,s1,long=2] |aaaaaaaaaaaa +# +I060102 10:04:05.654321-050000 11 util/log/format_crdb_v2_test.go:123 ⋮ [T1,noval,s‹1›,long=‹2›] 
={"Timestamp":123,"EventType":"rename_database","Event":"‹aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa› +I060102 10:04:05.654321-050000 11 util/log/format_crdb_v2_test.go:123 ⋮ [T1,noval,s‹1›,long=‹2›] |‹aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa›"} +# +E060102 10:04:05.654321-050000 11 2@util/log/format_crdb_v2_test.go:123 ⋮ [T1,noval,s‹1›,long=‹2›] hello ‹stack› +E060102 10:04:05.654321-050000 11 2@util/log/format_crdb_v2_test.go:123 ⋮ [T1,noval,s‹1›,long=‹2›] !this is 
aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa +E060102 10:04:05.654321-050000 11 2@util/log/format_crdb_v2_test.go:123 ⋮ [T1,noval,s‹1›,long=‹2›] |aaaaaaaaaaaaaaaaaaaa fake stack +# +I060102 10:04:05.654321-050000 11 util/log/format_crdb_v2_test.go:123 ⋮ [T2,noval,p‹3›,longKey=‹456›] ={"Timestamp":123,"EventType":"rename_database","Event":"‹rename from `hello` to `world`›"} +# +W060102 10:04:05.654321-050000 11 1@util/log/format_crdb_v2_test.go:123 [T2,noval,p3,longKey=456] hello world +# +I060102 10:04:05.654321-050000 11 util/log/format_crdb_v2_test.go:123 ⋮ [-] ={"Timestamp":123,"EventType":"rename_database","Event":"‹rename from `hello` to `world`›"} +# +W060102 10:04:05.654321-050000 11 1@util/log/format_crdb_v2_test.go:123 [-] hello world diff --git a/pkg/util/log/testdata/json b/pkg/util/log/testdata/json index d70971ceb711..08f4e96ce330 100644 --- a/pkg/util/log/testdata/json +++ b/pkg/util/log/testdata/json @@ -4,34 +4,41 @@ run json-fluent-compact: 
{"tag":"logtest.unknown","header":1,"t":"1136214245.654321000","T":1,"v":"v999.0.0","g":11,"f":"util/log/format_json_test.go","l":123,"r":1,"tags":{"noval":"","s":"‹1›","long":"‹2›"},"message":"hello ‹world›"} json-fluent: {"tag":"logtest.unknown","header":1,"timestamp":"1136214245.654321000","tenant_id":1,"version":"v999.0.0","goroutine":11,"file":"util/log/format_json_test.go","line":123,"redactable":1,"tags":{"noval":"","s":"‹1›","long":"‹2›"},"message":"hello ‹world›"} json-compact: {"header":1,"t":"1136214245.654321000","T":1,"v":"v999.0.0","g":11,"f":"util/log/format_json_test.go","l":123,"r":1,"tags":{"noval":"","s":"‹1›","long":"‹2›"},"message":"hello ‹world›"} - json: {"header":1,"timestamp":"1136214245.654321000","tenant_id":1,"version":"v999.0.0","goroutine":11,"file":"util/log/format_json_test.go","line":123,"redactable":1,"tags":{"noval":"","s":"‹1›","long":"‹2›"},"message":"hello ‹world›"} + json-compact: {"header":1,"t":"1136214245.654321000","T":1,"v":"v999.0.0","g":11,"f":"util/log/format_json_test.go","l":123,"r":1,"tags":{"noval":"","s":"‹1›","long":"‹2›"},"message":"hello ‹world›"} + json-compact: {"header":1,"t":"1136214245.654321000","d":"2006-01-02 xx 10:04:05+07","T":1,"v":"v999.0.0","g":11,"f":"util/log/format_json_test.go","l":123,"r":1,"tags":{"noval":"","s":"‹1›","long":"‹2›"},"message":"hello ‹world›"} # json-fluent-compact: {"tag":"logtest.dev","c":0,"t":"1136214245.654321000","s":0,"g":11,"f":"","l":123,"n":0,"r":0,"message":""} json-fluent: {"tag":"logtest.dev","channel_numeric":0,"channel":"DEV","timestamp":"1136214245.654321000","severity_numeric":0,"severity":"UNKNOWN","goroutine":11,"file":"","line":123,"entry_counter":0,"redactable":0,"message":""} json-compact: {"c":0,"t":"1136214245.654321000","s":0,"g":11,"f":"","l":123,"n":0,"r":0,"message":""} - json: 
{"channel_numeric":0,"channel":"DEV","timestamp":"1136214245.654321000","severity_numeric":0,"severity":"UNKNOWN","goroutine":11,"file":"","line":123,"entry_counter":0,"redactable":0,"message":""} + json-compact: {"c":0,"t":"1136214245.654321000","s":0,"g":11,"f":"","l":123,"n":0,"r":0,"message":""} + json-compact: {"c":0,"t":"1136214245.654321000","d":"2006-01-02 xx 10:04:05+07","s":0,"g":11,"f":"","l":123,"n":0,"r":0,"message":""} # json-fluent-compact: {"tag":"logtest.dev","c":0,"t":"1136214245.654321000","x":"abc","N":123,"T":1,"s":0,"g":11,"f":"","l":123,"n":0,"r":0,"message":""} json-fluent: {"tag":"logtest.dev","channel_numeric":0,"channel":"DEV","timestamp":"1136214245.654321000","cluster_id":"abc","node_id":123,"tenant_id":1,"severity_numeric":0,"severity":"UNKNOWN","goroutine":11,"file":"","line":123,"entry_counter":0,"redactable":0,"message":""} json-compact: {"c":0,"t":"1136214245.654321000","x":"abc","N":123,"T":1,"s":0,"g":11,"f":"","l":123,"n":0,"r":0,"message":""} - json: {"channel_numeric":0,"channel":"DEV","timestamp":"1136214245.654321000","cluster_id":"abc","node_id":123,"tenant_id":1,"severity_numeric":0,"severity":"UNKNOWN","goroutine":11,"file":"","line":123,"entry_counter":0,"redactable":0,"message":""} + json-compact: {"c":0,"t":"1136214245.654321000","x":"abc","N":123,"T":1,"s":0,"g":11,"f":"","l":123,"n":0,"r":0,"message":""} + json-compact: {"c":0,"t":"1136214245.654321000","d":"2006-01-02 xx 10:04:05+07","x":"abc","N":123,"T":1,"s":0,"g":11,"f":"","l":123,"n":0,"r":0,"message":""} # json-fluent-compact: {"tag":"logtest.dev","c":0,"t":"1136214245.654321000","T":456,"q":123,"s":0,"g":11,"f":"","l":123,"n":0,"r":0,"message":""} json-fluent: {"tag":"logtest.dev","channel_numeric":0,"channel":"DEV","timestamp":"1136214245.654321000","tenant_id":456,"instance_id":123,"severity_numeric":0,"severity":"UNKNOWN","goroutine":11,"file":"","line":123,"entry_counter":0,"redactable":0,"message":""} json-compact: 
{"c":0,"t":"1136214245.654321000","T":456,"q":123,"s":0,"g":11,"f":"","l":123,"n":0,"r":0,"message":""} - json: {"channel_numeric":0,"channel":"DEV","timestamp":"1136214245.654321000","tenant_id":456,"instance_id":123,"severity_numeric":0,"severity":"UNKNOWN","goroutine":11,"file":"","line":123,"entry_counter":0,"redactable":0,"message":""} + json-compact: {"c":0,"t":"1136214245.654321000","T":456,"q":123,"s":0,"g":11,"f":"","l":123,"n":0,"r":0,"message":""} + json-compact: {"c":0,"t":"1136214245.654321000","d":"2006-01-02 xx 10:04:05+07","T":456,"q":123,"s":0,"g":11,"f":"","l":123,"n":0,"r":0,"message":""} # json-fluent-compact: {"tag":"logtest.dev","c":0,"t":"1136214245.654321000","T":1,"v":"v999.0.0","s":1,"sev":"I","g":11,"f":"util/log/format_json_test.go","l":123,"n":0,"r":1,"tags":{"noval":"","s":"‹1›","long":"‹2›"},"event":{"Timestamp":123,"EventType":"rename_database","Event":"‹rename from `hello` to `world`›"}} json-fluent: {"tag":"logtest.dev","channel_numeric":0,"channel":"DEV","timestamp":"1136214245.654321000","tenant_id":1,"version":"v999.0.0","severity_numeric":1,"severity":"INFO","goroutine":11,"file":"util/log/format_json_test.go","line":123,"entry_counter":0,"redactable":1,"tags":{"noval":"","s":"‹1›","long":"‹2›"},"event":{"Timestamp":123,"EventType":"rename_database","Event":"‹rename from `hello` to `world`›"}} json-compact: {"c":0,"t":"1136214245.654321000","T":1,"v":"v999.0.0","s":1,"sev":"I","g":11,"f":"util/log/format_json_test.go","l":123,"n":0,"r":1,"tags":{"noval":"","s":"‹1›","long":"‹2›"},"event":{"Timestamp":123,"EventType":"rename_database","Event":"‹rename from `hello` to `world`›"}} - json: 
{"channel_numeric":0,"channel":"DEV","timestamp":"1136214245.654321000","tenant_id":1,"version":"v999.0.0","severity_numeric":1,"severity":"INFO","goroutine":11,"file":"util/log/format_json_test.go","line":123,"entry_counter":0,"redactable":1,"tags":{"noval":"","s":"‹1›","long":"‹2›"},"event":{"Timestamp":123,"EventType":"rename_database","Event":"‹rename from `hello` to `world`›"}} + json-compact: {"c":0,"t":"1136214245.654321000","T":1,"v":"v999.0.0","s":1,"sev":"I","g":11,"f":"util/log/format_json_test.go","l":123,"n":0,"r":1,"tags":{"noval":"","s":"‹1›","long":"‹2›"},"event":{"Timestamp":123,"EventType":"rename_database","Event":"‹rename from `hello` to `world`›"}} + json-compact: {"c":0,"t":"1136214245.654321000","d":"2006-01-02 xx 10:04:05+07","T":1,"v":"v999.0.0","s":1,"sev":"I","g":11,"f":"util/log/format_json_test.go","l":123,"n":0,"r":1,"tags":{"noval":"","s":"‹1›","long":"‹2›"},"event":{"Timestamp":123,"EventType":"rename_database","Event":"‹rename from `hello` to `world`›"}} # json-fluent-compact: {"tag":"logtest.ops","c":1,"t":"1136214245.654321000","T":1,"v":"v999.0.0","s":2,"sev":"W","g":11,"f":"util/log/format_json_test.go","l":123,"n":0,"r":0,"tags":{"noval":"","s":"1","long":"2"},"message":"hello world"} json-fluent: {"tag":"logtest.ops","channel_numeric":1,"channel":"OPS","timestamp":"1136214245.654321000","tenant_id":1,"version":"v999.0.0","severity_numeric":2,"severity":"WARNING","goroutine":11,"file":"util/log/format_json_test.go","line":123,"entry_counter":0,"redactable":0,"tags":{"noval":"","s":"1","long":"2"},"message":"hello world"} json-compact: {"c":1,"t":"1136214245.654321000","T":1,"v":"v999.0.0","s":2,"sev":"W","g":11,"f":"util/log/format_json_test.go","l":123,"n":0,"r":0,"tags":{"noval":"","s":"1","long":"2"},"message":"hello world"} - json: 
{"channel_numeric":1,"channel":"OPS","timestamp":"1136214245.654321000","tenant_id":1,"version":"v999.0.0","severity_numeric":2,"severity":"WARNING","goroutine":11,"file":"util/log/format_json_test.go","line":123,"entry_counter":0,"redactable":0,"tags":{"noval":"","s":"1","long":"2"},"message":"hello world"} + json-compact: {"c":1,"t":"1136214245.654321000","T":1,"v":"v999.0.0","s":2,"sev":"W","g":11,"f":"util/log/format_json_test.go","l":123,"n":0,"r":0,"tags":{"noval":"","s":"1","long":"2"},"message":"hello world"} + json-compact: {"c":1,"t":"1136214245.654321000","d":"2006-01-02 xx 10:04:05+07","T":1,"v":"v999.0.0","s":2,"sev":"W","g":11,"f":"util/log/format_json_test.go","l":123,"n":0,"r":0,"tags":{"noval":"","s":"1","long":"2"},"message":"hello world"} # json-fluent-compact: {"tag":"logtest.health","c":2,"t":"1136214245.654321000","T":1,"v":"v999.0.0","s":3,"sev":"E","g":11,"f":"util/log/format_json_test.go","l":123,"n":0,"r":1,"tags":{"noval":"","s":"‹1›","long":"‹2›"},"message":"hello ‹world›"} json-fluent: {"tag":"logtest.health","channel_numeric":2,"channel":"HEALTH","timestamp":"1136214245.654321000","tenant_id":1,"version":"v999.0.0","severity_numeric":3,"severity":"ERROR","goroutine":11,"file":"util/log/format_json_test.go","line":123,"entry_counter":0,"redactable":1,"tags":{"noval":"","s":"‹1›","long":"‹2›"},"message":"hello ‹world›"} json-compact: {"c":2,"t":"1136214245.654321000","T":1,"v":"v999.0.0","s":3,"sev":"E","g":11,"f":"util/log/format_json_test.go","l":123,"n":0,"r":1,"tags":{"noval":"","s":"‹1›","long":"‹2›"},"message":"hello ‹world›"} - json: {"channel_numeric":2,"channel":"HEALTH","timestamp":"1136214245.654321000","tenant_id":1,"version":"v999.0.0","severity_numeric":3,"severity":"ERROR","goroutine":11,"file":"util/log/format_json_test.go","line":123,"entry_counter":0,"redactable":1,"tags":{"noval":"","s":"‹1›","long":"‹2›"},"message":"hello ‹world›"} + json-compact: 
{"c":2,"t":"1136214245.654321000","T":1,"v":"v999.0.0","s":3,"sev":"E","g":11,"f":"util/log/format_json_test.go","l":123,"n":0,"r":1,"tags":{"noval":"","s":"‹1›","long":"‹2›"},"message":"hello ‹world›"} + json-compact: {"c":2,"t":"1136214245.654321000","d":"2006-01-02 xx 10:04:05+07","T":1,"v":"v999.0.0","s":3,"sev":"E","g":11,"f":"util/log/format_json_test.go","l":123,"n":0,"r":1,"tags":{"noval":"","s":"‹1›","long":"‹2›"},"message":"hello ‹world›"} diff --git a/pkg/util/log/testdata/parse b/pkg/util/log/testdata/parse index db45ccfb949d..8c8395ad011b 100644 --- a/pkg/util/log/testdata/parse +++ b/pkg/util/log/testdata/parse @@ -241,4 +241,21 @@ I060102 15:04:05.654321 11 util/log/format_crdb_v2_test.go:123 ⋮ [T1,noval,s logpb.Entry{Severity:1, Time:1136214245654321000, Goroutine:11, File:"util/log/format_crdb_v2_test.go", Line:123, Message:"{\"Timestamp\":123,\"EventType\":\"rename_database\",\"DatabaseName\":\"‹aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa›‹aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa›\"}\nstack trace:\nthis is a fake stack", Tags:"noval,s‹1›,long=‹2›", Counter:0x17, Redactable:true, Channel:0, StructuredEnd:0x91, StructuredStart:0x0, StackTraceStart:0x92, TenantID:"1"} logpb.Entry{Severity:1, Time:1136214245654321000, Goroutine:11, File:"util/log/format_crdb_v2_test.go", Line:123, Message:"maybe ‹multi›\n‹line›", Tags:"noval,s‹1›,long=‹2›", Counter:0x18, Redactable:true, Channel:0, StructuredEnd:0x0, StructuredStart:0x0, StackTraceStart:0x0, TenantID:"1"} +subtest end + +subtest timezone + +# Check that the computed Time fields are equal. 
+log
+I210116 17:49:17.073282-020000 14 server/node.go:464 ⋮ [-] 23 started with engine type ‹2›
+I210116 21:49:17.073282+020000 14 server/node.go:464 ⋮ [-] 23 started with engine type ‹2›
+I210116 19:49:17.073282+000000 14 server/node.go:464 ⋮ [-] 23 started with engine type ‹2›
+I210116 19:49:17.073282 14 server/node.go:464 ⋮ [-] 23 started with engine type ‹2›
+----
+logpb.Entry{Severity:1, Time:1610826557073282000, Goroutine:14, File:"server/node.go", Line:464, Message:"started with engine type ‹2›", Tags:"", Counter:0x17, Redactable:true, Channel:0, StructuredEnd:0x0, StructuredStart:0x0, StackTraceStart:0x0, TenantID:"1"}
+logpb.Entry{Severity:1, Time:1610826557073282000, Goroutine:14, File:"server/node.go", Line:464, Message:"started with engine type ‹2›", Tags:"", Counter:0x17, Redactable:true, Channel:0, StructuredEnd:0x0, StructuredStart:0x0, StackTraceStart:0x0, TenantID:"1"}
+logpb.Entry{Severity:1, Time:1610826557073282000, Goroutine:14, File:"server/node.go", Line:464, Message:"started with engine type ‹2›", Tags:"", Counter:0x17, Redactable:true, Channel:0, StructuredEnd:0x0, StructuredStart:0x0, StackTraceStart:0x0, TenantID:"1"}
+logpb.Entry{Severity:1, Time:1610826557073282000, Goroutine:14, File:"server/node.go", Line:464, Message:"started with engine type ‹2›", Tags:"", Counter:0x17, Redactable:true, Channel:0, StructuredEnd:0x0, StructuredStart:0x0, StackTraceStart:0x0, TenantID:"1"}
+
+
 subtest end