DeduplicationHandler passes duplicated entries to the next handler #1433
The deduplication only happens between requests; within one request, duplicate messages will all still be sent through. So on the first request, yes, the file is created and everything passes through to the nested handler; on subsequent requests it should stop sending any logs unless there is a new deprecation.
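For context, a typical setup wraps the final handler in a DeduplicationHandler so that repeated errors across requests are suppressed via a shared store file. A minimal sketch (paths and levels are illustrative; constructor arguments per Monolog 1.x/2.x):

```php
<?php
// Illustrative setup, not from this thread.
// DeduplicationHandler buffers records and, on flush, forwards the buffer to
// the nested handler only if at least one record was not already seen in the
// deduplication store file within the time window.
use Monolog\Logger;
use Monolog\Handler\StreamHandler;
use Monolog\Handler\DeduplicationHandler;

$inner = new StreamHandler(__DIR__.'/app.log', Logger::DEBUG);

$dedup = new DeduplicationHandler(
    $inner,
    __DIR__.'/dedup.store', // store file, created on first flush
    Logger::ERROR,          // only records at/above this level are deduplicated
    60                      // time window in seconds
);

$logger = new Logger('app');
$logger->pushHandler($dedup);
```

This is what makes the behavior described above per-request: the store file persists between requests, but within one request everything sits in the in-memory buffer and is flushed together.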
So IMO not a bug, unless I misunderstand what you are saying. At least it seems to work for me as described.
I have the same problem. Within one request, duplicate messages should not be sent.
What's the use case you are trying to achieve exactly?
I use async PHP ( |
Ok, for long-running processes running many jobs, it is recommended to call
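The method name is cut off in this thread, but Monolog's `Logger` does expose a `reset()` method (via `ResettableInterface`, available since 1.23) that flushes and clears buffering handlers, which fits this advice. A hedged sketch for a long-running worker (the job loop and `$jobQueue` are hypothetical):

```php
<?php
use Monolog\Logger;
use Monolog\Handler\StreamHandler;
use Monolog\Handler\DeduplicationHandler;

$logger = new Logger('worker');
$logger->pushHandler(
    new DeduplicationHandler(new StreamHandler('php://stderr'))
);

// Hypothetical job loop: calling reset() between jobs flushes the
// DeduplicationHandler's buffer, so cross-job deduplication via the
// store file kicks in, instead of one giant buffer for the whole
// process lifetime.
foreach ($jobQueue as $job) { // $jobQueue is illustrative
    try {
        $job->run();
    } finally {
        $logger->reset(); // flush + reset all resettable handlers
    }
}
```

Note this only separates jobs from each other; as pointed out below, it does not deduplicate identical messages emitted within a single job's buffer.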
This does not solve the current problem, as within the same iteration there can still be many identical messages.
Yeah, maybe that's still a feature request then, for DeduplicationHandler to have an option to deduplicate even within its own buffer. Feel free to send a PR if you can add that.
I have the same problem. The log entry is duplicated on version 2.x.
This is really a problem. Messages should be deduplicated within the handler's buffer, as there can be thousands of deprecation messages in a worker or even in a single CLI command / web request. E.g. doctrine/orm#7901 triggered a lot of deprecations for us, which overwhelmed Graylog.
Another problem is that the deduplicationStore file can grow indefinitely:
@Tobion this was a quick-fix solution which already served a purpose, but it's definitely not perfect. PRs are definitely welcome, at least to deduplicate within the buffer and to allow setting the buffer size etc. As for the deduplicationStore issues you describe, I'm not sure whether they're fixable within a reasonable amount of added complexity, but if so, that'd also be good to fix.
We have a GraphQL API. When I query some problematic field in a collection item, it triggers the same error for each collection element but does not stop execution, so I get numerous identical log messages within one request, which I tried to avoid by using the deduplication handler. But looking at the code, I came to the same conclusion as the author of this issue.
Monolog version: 1.25.3

I'm using DeduplicationHandler to avoid duplicated deprecation messages, but it still passes all entries via

```php
$this->handler->handleBatch($this->buffer);
```

to the next handler (RotatingFileHandler). I think this is because of the following line:

```php
$passthru = $passthru || !$this->isDuplicate($record);
```

The first call to the `isDuplicate()` method returns `false` because the deduplication store file does not exist yet, so `$passthru` becomes `true` and will never be `false` again.

I'm not sure, but I think the code should be modified a bit to something like this:
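The reporter's proposed snippet is not preserved in this thread. One possible shape of such a change, filtering per record instead of latching a single `$passthru` flag for the whole buffer, might look like the following. This is a sketch against the structure of `DeduplicationHandler::flush()` in Monolog 1.x, not the actual patch:

```php
<?php
// Sketch of a per-record variant of flush(): forward only records that
// are new, instead of forwarding the entire buffer once any one record
// is new. Names (isDuplicate, appendRecord, deduplicationLevel, clear)
// follow the Monolog 1.x DeduplicationHandler internals.
public function flush()
{
    if ($this->bufferSize === 0) {
        return;
    }

    $toForward = [];
    foreach ($this->buffer as $record) {
        if ($record['level'] >= $this->deduplicationLevel) {
            if ($this->isDuplicate($record)) {
                continue; // drop this duplicate, keep checking the rest
            }
            $this->appendRecord($record); // remember it in the store file
        }
        $toForward[] = $record;
    }

    if (count($toForward) > 0) {
        $this->handler->handleBatch($toForward);
    }

    $this->clear();
}
```

A side effect worth noting: unlike the current behavior, records below `deduplicationLevel` would still pass through even when every deduplicated record is dropped, which may or may not be desirable.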
My config: