Fixes and Logging #71
Conversation
During testing with a real config I noticed that my 0-bytes fix was flawed, so I force pushed a better one.
Again, just did a brief review for now, but it's looking pretty good so far.
I wonder if we can use the MatcherSet without having to pass in a logger, though. Hmm.
Would you prefer it if
@ydylla Maybe that's a good idea! Can we try it?
Sure, I will change it tomorrow.
@mholt I have added a logger to the layer4 connection and also removed the
@@ -127,7 +127,8 @@ func wrapRoute(route *Route, logger *zap.Logger) Middleware {
 		// route must match at least one of the matcher sets
 		matched, err := route.matcherSets.AnyMatch(cx)
 		if err != nil {
-			logger.Error("matching connection", zap.Error(err))
+			logger.Error("matching connection", zap.String("remote", cx.RemoteAddr().String()), zap.Error(err))
+			return nil // return nil so the error does not get logged again
Where would it be logged again? I don't see anywhere else we use the value err.
It would be logged again in server.go#L109
Interesting, are you seeing that in your logs? How does err get returned from here?
The return statement is completely new to fix this:
The first bug (at least I think it is one 😄) is that when a matcher returns an error the processing is not stopped. Instead the next matcher is tried and if that matches its handlers are executed even if the connection is broken.
You can see example logs of the processing not stopping in the initial post. When using return err instead, one would see the double logging.
I follow so far, except I don't see where we are using return err anywhere. We always return nextCopy.Handle(cx) instead.
(Sorry if I'm being a bit slow to understand.)
I think we are talking past each other :)
I included the comment to signify we are returning nil on purpose, because otherwise it may look like a bug.
You are right, we mostly return nextCopy.Handle(cx) except for exactly this case, since we want to stop the execution of the chain. But we already logged (handled) the error, so it would be wrong to return the error (as is often done in if err != nil blocks), because the last return value from the chain eventually ends up in server.go#L106 and would thus get logged in server.go#L109, which we do not want since we already logged it.
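To make the control flow concrete, here is a small, self-contained Go sketch of the pattern under discussion. The types are simplified stand-ins, not the plugin's real Route/Middleware/Connection definitions: the matching error is logged exactly once, nil is returned so the server's own error logging does not report it a second time, and no handlers run afterwards.

package main

import (
	"errors"
	"fmt"
)

// Simplified stand-ins for the layer4 types, only to illustrate the pattern;
// they are not the plugin's real definitions.
type connection struct{ remote string }
type handler func(cx *connection) error

// matchOrStop mimics the wrapRoute behaviour discussed above: if matching
// fails with an error, log it once and return nil so the caller does not log
// the same error a second time, while still aborting the handler chain.
func matchOrStop(match func(*connection) (bool, error), next handler) handler {
	return func(cx *connection) error {
		matched, err := match(cx)
		if err != nil {
			fmt.Printf("ERROR matching connection remote=%s error=%v\n", cx.remote, err)
			return nil // already logged; returning err would log it again upstream
		}
		if !matched {
			return nil // no match: nothing else to do in this sketch
		}
		return next(cx)
	}
}

func main() {
	failingMatcher := func(*connection) (bool, error) {
		return false, errors.New("connection reset by peer")
	}
	h := matchOrStop(failingMatcher, func(*connection) error {
		fmt.Println("handler ran (must not happen when matching errored)")
		return nil
	})

	// The server-side caller would log any non-nil error returned here,
	// which is exactly the double logging the nil return avoids.
	if err := h(&connection{remote: "127.0.0.1:54321"}); err != nil {
		fmt.Println("ERROR handling connection:", err)
	}
}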
Hi @mholt, did you find the time to think about this?
@ydylla I'm so sorry for the delay on this PR. (It's not fair to you, that's for sure.) I was traveling this last week and also taking a little break from sprinting on Caddy stuff after the 2.6 release (to avoid burnout), and now I'm way behind on notifications. 😅 I'll be catching up soon!
If we can find another maintainer who is willing to review and approve this and other PRs, that might be a good idea so it's not always blocked by me. I feel bad about that! I might look around and see who would be willing to accept the invitation. This plugin just isn't nearly as active as the main Caddy project so it has fewer candidates.
No problem at all, I understand that you are very busy. Don't feel bad, you are doing good work with Caddy.
But you will have to endure me pinging you from time to time 😄
I just used the delay to clean up the commits and fix the failing tests in the l4socks module.
(Sorry, still working through my backlog. I'm trying to get Caddy 2.6 ready to release. Your PRs are next on my list. Thanks for your patience!)
Hi @mholt,
By convention it's always the 1st field, so it looks nice in console mode and matches the existing logging.
@ydylla I suppose you have tested this more than I will be able to at the moment. I'm willing to merge it in!
Fire away when ready.
(Last night I asked in your other PR why a separate Logger was added for each connection. I guess that is being introduced here, which I now remember. Is having a separate logger for each connection particularly useful?)
Thanks for all you do :)
@mholt Thanks, I will merge it when the logger stuff is settled.
Not sure what you mean by that. Each connection uses the same logger. It's the one from the server that gets passed into
Thanks for this. Sorry it took me so long. Let's give it a try and see how it goes! I may ask for your help if anything comes up.
Again, really appreciate it :D
@mholt Thanks for the merge.

diff --git a/layer4/listener.go b/layer4/listener.go
index 5001d6b..49a6d9f 100644
--- a/layer4/listener.go
+++ b/layer4/listener.go
@@ -119,7 +119,7 @@ func (l *listener) handle(conn net.Conn) {
 	buf.Reset()
 	defer bufPool.Put(buf)
-	cx := WrapConnection(conn, buf)
+	cx := WrapConnection(conn, buf, l.logger)
 	cx.Context = context.WithValue(cx.Context, listenerCtxKey, l)
 	start := time.Now()
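For context, a hedged sketch of what this change amounts to: the listener hands its logger to the wrapped connection, so matchers and handlers can log through it without taking a *zap.Logger parameter. The field and constructor details below are illustrative assumptions, not necessarily the plugin's exact API.

package layer4sketch

import (
	"bytes"
	"net"

	"go.uber.org/zap"
)

// Connection is a simplified stand-in for the layer4 connection: it carries
// the logger it was constructed with, so code operating on the connection can
// log without receiving a *zap.Logger as an extra argument.
type Connection struct {
	net.Conn
	Logger *zap.Logger
	buf    *bytes.Buffer // prefix buffer used for matching (details omitted)
}

// WrapConnection mirrors the call shown in the diff above: the listener passes
// its own logger in, so every connection logs through the server's logger.
func WrapConnection(underlying net.Conn, buf *bytes.Buffer, logger *zap.Logger) *Connection {
	return &Connection{Conn: underlying, Logger: logger, buf: buf}
}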
Hi,
my experiments mentioned in #68 revealed some bugs. I also added some more logging which makes debugging configs a lot easier.
The first bug (at least I think it is one 😄) is that when a matcher returns an error the processing is not stopped. Instead the next matcher is tried and if that matches its handlers are executed even if the connection is broken.
For example, with this config (matcher-bug.json) and this curl command
curl -x socks5://127.0.0.1:8888 https://example.org
(Ctrl-C or wait 5 minutes for the timeout) you will get a confusing log. With the fixes and additional logging, the log shows that only one matcher and no handlers are executed.
The second bug is causing Read() to return 0 bytes and no error on its first call after record() (which I also mentioned here). It's probably not forbidden for an io.Reader to do that, but it's confusing. I wrote a connection test that covers this situation: TestConnection_RecordAndRewind.

The additional remote IP address logging is mainly intended as a kind of connection id; with it, it's possible to correlate multiple log lines on busy servers.
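As an illustration of why a (0, nil) Read is confusing for callers, here is a minimal, self-contained Go sketch; zeroFirstReader is a hypothetical stand-in for the recording connection, not the plugin's actual type.

package main

import (
	"bytes"
	"fmt"
)

// zeroFirstReader mimics the behaviour described above: the first Read after
// rewinding returns (0, nil) even though data is buffered.
type zeroFirstReader struct {
	r       *bytes.Reader
	started bool
}

func (z *zeroFirstReader) Read(p []byte) (int, error) {
	if !z.started {
		z.started = true
		return 0, nil // no data and no error: technically allowed, but confusing
	}
	return z.r.Read(p)
}

func main() {
	r := &zeroFirstReader{r: bytes.NewReader([]byte("hello"))}
	buf := make([]byte, 8)

	n, err := r.Read(buf)
	fmt.Println(n, err) // 0 <nil> -- a matcher could wrongly conclude there is no data

	n, err = r.Read(buf)
	fmt.Println(n, err, string(buf[:n])) // 5 <nil> hello
}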
Besides that I also have a commit which adds a per route configurable matching timeout. But it needs more testing and is probably better discussed in its own PR.
I am also still experimenting with how to prevent matchers from blocking when there is not enough data for them. But I would like to see these fixes merged first, so I can work on top of them. A short read detection in the layer4 connection looks promising. One problem is the http matcher. Since it uses a buffered reader, it will always cause a short read from the viewpoint of the connection, as the sketch below illustrates.
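To show why a buffered reader gets in the way of short-read detection, here is a small, self-contained Go sketch; countingReader is a hypothetical stand-in for the layer4 connection, not the plugin's actual type. bufio tries to fill its whole internal buffer (4096 bytes by default) in one call, so the underlying connection almost always sees a request for more bytes than are currently available, i.e. a short read.

package main

import (
	"bufio"
	"bytes"
	"fmt"
)

// countingReader records how many bytes the caller asked for on the last
// Read; it stands in for the layer4 connection in this illustration.
type countingReader struct {
	r           *bytes.Reader
	lastRequest int
}

func (c *countingReader) Read(p []byte) (int, error) {
	c.lastRequest = len(p) // bufio passes its whole free buffer space here
	return c.r.Read(p)
}

func main() {
	payload := []byte("GET / HTTP/1.1\r\nHost: example.org\r\n\r\n")
	underlying := &countingReader{r: bytes.NewReader(payload)}

	// A matcher that wraps the connection in a buffered reader only consumes
	// the first line, but bufio asks the underlying reader for much more.
	br := bufio.NewReader(underlying)
	line, _ := br.ReadString('\n')

	fmt.Printf("first line: %q\n", line)
	fmt.Printf("bufio asked for %d bytes, only %d were available\n",
		underlying.lastRequest, len(payload))
}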