Logging service uses new config format #41956
Pinging @elastic/kibana-platform
**Log Record Format**

**Level**

Proposal: document as expected behavior, compatible with the Elasticsearch logging format.

**Tags**

Proposal: document as expected behavior, compatible with the Elasticsearch logging format.

**PID**

Proposal: Elasticsearch uses log4j, which uses

**Time**

log4j allows specifying a date format for

Proposal: declaration can be done as

With settings:

**meta**
```js
legacyPlatformLogger.info('info', {
  from: 'v7',
  to: 'v8',
});
```

Output:

```js
{
  "from": "v7",
  "to": "v8",
  //..
}
```
```js
platformLogger.info('info', {
  from: 'v7',
  to: 'v8',
});
```

Output:

```js
{
  "meta": {
    "from": "v7",
    "to": "v8",
  },
  //..
}
```

Proposal:
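The `meta` difference above can be sketched as a small adapter. This is not Kibana code — the function names are hypothetical, purely to illustrate the two record shapes:

```js
// Sketch (hypothetical names): the legacy platform spreads meta fields into
// the top level of the log record, while the new platform nests them under a
// dedicated "meta" key so they cannot collide with built-in record fields.
function toLegacyRecord(message, meta) {
  // Legacy platform: meta fields end up at the top level of the record.
  return { message, ...meta };
}

function toPlatformRecord(message, meta) {
  // New platform: meta fields are namespaced under "meta".
  return { message, meta };
}

const legacy = toLegacyRecord('info', { from: 'v7', to: 'v8' });
const platform = toPlatformRecord('info', { from: 'v7', to: 'v8' });
```

The nesting is what makes the new format a breaking change for consumers that parse the top-level keys of the legacy JSON records.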
Log4j supports logging a map structure via

**Pattern format**

The current format is not compatible with log4j syntax, since it uses curly braces to define parameters. To make it compatible we can switch the parameter declaration to

**Request, response, ops data**

Formatting request, response, and ops data should be considered separately.
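The proposed syntax switch (curly-brace placeholders to log4j-like `%` placeholders) can be illustrated with a minimal converter. The placeholder names are assumptions for illustration, not Kibana's actual pattern vocabulary:

```js
// Sketch: convert a curly-brace pattern such as "{timestamp} [{level}] {message}"
// to a log4j-like "%"-style pattern, "%timestamp [%level] %message".
function curlyToPercent(pattern) {
  // Replace every "{name}" placeholder with "%name"; literal text is untouched.
  return pattern.replace(/\{(\w+)\}/g, '%$1');
}

curlyToPercent('{timestamp} [{level}] {message}');
// -> '%timestamp [%level] %message'
```

A real migration would also have to handle escaping (literal `{`/`%` characters in patterns), which this sketch ignores.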
I agree, documenting new behavior seems sufficient.
I agree that implementing an equivalent to l4j's MDC and NDC is overkill. I'm ok with the proposal
Using ES layout as the default makes sense. Using moment formats seems sufficient.
I don't even know what this should be used for, but handling it only in
Not sure if this is really an issue, or who could say otherwise.
No opinion on that. From the JS world, curly makes more sense than
An alternative solution, which is easier to implement, but less flexible and different from layout:

```yaml
highlight: false
kind: pattern
timestamp:
  format: yyyy-MM-dd'T'HH:mm:ss,SSSZZ
  timezone: America/Los_Angeles
```
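A minimal sketch of how a config object like the one above could drive timestamp rendering, using the built-in `Intl.DateTimeFormat`. This honors only the `timezone` field; translating the log4j-style `format` tokens is left out, so the output shape here is fixed:

```js
// Sketch: render a timestamp in a configured timezone. A real implementation
// would also interpret the "format" string instead of hard-coding the layout.
function formatTimestamp(date, { timezone }) {
  const parts = new Intl.DateTimeFormat('en-CA', {
    timeZone: timezone,
    year: 'numeric', month: '2-digit', day: '2-digit',
    hour: '2-digit', minute: '2-digit', second: '2-digit',
    hour12: false,
  }).formatToParts(date);
  // Index the formatted parts by type (year, month, hour, ...).
  const p = Object.fromEntries(parts.map(({ type, value }) => [type, value]));
  return `${p.year}-${p.month}-${p.day}T${p.hour}:${p.minute}:${p.second}`;
}

formatTimestamp(new Date(Date.UTC(2020, 0, 1, 12, 0, 0)),
                { timezone: 'America/Los_Angeles' });
// -> '2020-01-01T04:00:00'
```

Using `Intl` avoids a dependency, at the cost of supporting only a fixed layout; honoring arbitrary log4j format strings would need a token translator (e.g. via moment-style formats, as discussed above).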
I'm not sure that we should let the language that Kibana is implemented in drive the way our users use and configure the product itself, at least when we can avoid it.

Personally, I think if we're going to get closer to Elasticsearch/log4j's configuration, we should try to make it as close as we can while being practical. I think it's more confusing if the configuration format is 80% similar than if it's 20% similar. Humans tend to pattern match and will assume it's the same pretty quickly. That doesn't mean we need to have full feature parity by any means, but I think that for the features we do choose to implement, we should make them work as closely to our other products as possible OR make them clearly very different. Given that premise, my preference is that we implement the

One thing that might make parsing this simpler is to write a simple grammar using PEG.js, similar to how we parse expressions in @kbn/interpreter.
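As a lighter-weight alternative to a full PEG.js grammar, a `%`-style pattern can be split into tokens with a hand-rolled scanner. This is only a sketch of the idea, not the parser Kibana ended up with:

```js
// Sketch: tokenize a "%"-style pattern such as "[%date] %message" into
// literal and placeholder tokens, which a layout can then render per record.
function tokenize(pattern) {
  const tokens = [];
  const re = /%(\w+)/g;
  let last = 0;
  let m;
  while ((m = re.exec(pattern)) !== null) {
    // Any text between the previous match and this one is a literal.
    if (m.index > last) tokens.push({ type: 'literal', value: pattern.slice(last, m.index) });
    tokens.push({ type: 'placeholder', name: m[1] });
    last = re.lastIndex;
  }
  // Trailing literal text after the final placeholder.
  if (last < pattern.length) tokens.push({ type: 'literal', value: pattern.slice(last) });
  return tokens;
}

tokenize('[%date] %message');
// -> [ { type: 'literal', value: '[' }, { type: 'placeholder', name: 'date' },
//      { type: 'literal', value: '] ' }, { type: 'placeholder', name: 'message' } ]
```

A grammar-based parser becomes more attractive once placeholders take arguments (e.g. date formats or padding), which a regex scan handles poorly.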
Logging in the New Platform was completely rewritten to be compatible with the Elasticsearch logging framework and uses a different configuration format.
Switching to a new config is a breaking change, so we have to organize the process in several steps.
Subtasks:

Done:

- `legacy-appender` appender: `verbose`, `silent`, `quiet` #56480
- `pid` in logs #56982
- `meta`, `timezone` #57433
- use log4j-like pattern syntax: `{level}` ---> `%level`
- document how to reproduce LP logging settings in the NP
- document compatibility requirements for logging config
- set up integration tests for config update (Test NP logging config reload on SIGHUP #57681)
- normalize LogRecord format for the NP & LP logging, document the difference (document difference between log record formats #57798)
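The SIGHUP-driven config reload can be sketched with Node's standard signal handling. The names (`applyLoggingConfig`, the config shape) are hypothetical; the reload logic is kept in a pure helper so it stays testable:

```js
// Sketch (hypothetical structure): swap in a new logging section when the
// process receives SIGHUP, leaving the rest of the config untouched.
function applyLoggingConfig(current, updated) {
  return { ...current, logging: updated.logging };
}

let config = { logging: { root: { level: 'info' } }, server: { port: 5601 } };

process.on('SIGHUP', () => {
  // In a real server this would re-read the config file from disk;
  // here the updated config is hard-coded to illustrate the handler's shape.
  const updated = { logging: { root: { level: 'debug' } } };
  config = applyLoggingConfig(config, updated);
});
```

Keeping the merge pure means the reload path can be unit-tested without actually delivering a signal, with the signal-handler wiring covered separately by integration tests.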
Created as follow-ups:

- `elasticsearch.logQueries`: Provide substitution for elasticsearch.logQueries in the new logging config #57546
- `legacy-appender` removal: Log HTTP requests, responses #13241
- filtering: Implement filtering mechanism in the Logging service #57547
- `logger` / Kibana `logging`: discussion: Unify Kibana & Elasticsearch logging config keys #57551