nginx-ingress grok expression does not handle multiple upstreams #20813

Closed
chendo opened this issue Aug 27, 2020 · 3 comments · Fixed by #21215
Assignees: ChrsMark
Labels: in progress · Team:Platforms · [zube]: In Progress

Comments

chendo (Contributor) commented Aug 27, 2020

From https://discuss.elastic.co/t/nginx-ingress-grok-expression-does-not-handle-multiple-upstreams/246046

If nginx-ingress retries multiple upstreams, the grok expression does not parse the log line correctly; this manifested as missing data at a time when we knew errors were happening.

This is on Filebeat 7.8.0, but the issue does not appear to be resolved in 7.9.0.

Sanitised log output:

2 upstreams attempted

some.domain 172.0.0.0 - - [24/Aug/2020:01:30:24 +0000] "GET https://some.domain/some/path HTTP/1.1" 101 10 "-" "Mozilla/5.0 (Linux; Android 10; SM-XXXX) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.125 Mobile Safari/537.36" 1475 143.855 [upstream-name] [] 10.0.0.1:443, 10.0.0.2:443 0, 0 0.100, 143.757 502, 101 c2c58cd42cb68822aae7d640ddf6583a

3 upstreams attempted

some.domain 172.0.0.0 - - [24/Aug/2020:01:28:53 +0000] "GET https://some.domain/some/path HTTP/2.0" 200 28 "https://switter.at/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36" 60 0.683 [upstream-name] [] 10.0.0.1:443, 10.0.0.2:443, 10.0.0.3:443 0, 0, 39 0.096, 0.092, 0.496 502, 502, 200 d0f69894df9d4b231eca7773a6759ba2

The response code the client sees and the last upstream should be used as http.response.status_code and nginx.ingress_controller.upstream.ip/port, but I'm not sure how the other status codes and upstreams should be emitted. I don't think they should be thrown away, however.
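For illustration only, here is a minimal sketch of a grok fragment that tolerates the comma-separated lists in the upstream section of these log lines. All capture names below (upstream_address_list, upstream_status_list, etc.) are hypothetical placeholders, not the module's actual field names and not necessarily the approach taken in the linked PR:

```
\[%{DATA:upstream_name}\] \[%{DATA:upstream_alternative_name}\] (?<upstream_address_list>%{IPORHOST}:%{POSINT}(, %{IPORHOST}:%{POSINT})*) (?<upstream_length_list>%{NUMBER}(, %{NUMBER})*) (?<upstream_time_list>%{NUMBER}(, %{NUMBER})*) (?<upstream_status_list>%{NUMBER}(, %{NUMBER})*) %{NOTSPACE:request_id}
```

After grokking, the lists could be split on ", " (for example with an ingest split processor) and the last element copied into http.response.status_code and nginx.ingress_controller.upstream.ip/port, while keeping the full arrays so the earlier attempts are not lost.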

Reproduction:

  • Create a Service in Kubernetes that points to multiple IPs, one of which refuses connections from nginx (see the manifest sketch after this list)
  • Create an Ingress that uses this Service
  • Making a request to this Ingress should cause nginx to retry if it happens to hit the IP that is not accepting connections
  • nginx ingress should generate log output similar to the above
  • A request with multiple upstreams will not show up on the built-in dashboards because it is not parsed correctly
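A minimal sketch of manifests that could reproduce this, assuming one of the endpoint IPs actually refuses connections so that nginx-ingress retries the next one; all names, IPs, and the hostname below are hypothetical:

```yaml
# Service without a selector, backed by manually managed Endpoints,
# so it can point at arbitrary IPs (one healthy, one refusing connections).
apiVersion: v1
kind: Service
metadata:
  name: flaky-backend            # hypothetical name
spec:
  ports:
    - port: 443
      targetPort: 443
---
apiVersion: v1
kind: Endpoints
metadata:
  name: flaky-backend            # must match the Service name
subsets:
  - addresses:
      - ip: 10.0.0.1             # refuses connections, so nginx gets a 502 and retries
      - ip: 10.0.0.2             # healthy upstream
    ports:
      - port: 443
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: flaky-backend
spec:
  rules:
    - host: some.domain          # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: flaky-backend
                port:
                  number: 443
```

Repeated requests to the Ingress host should then produce log lines with comma-separated upstream lists like the samples above.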
botelastic bot added the needs_team label (Indicates that the issue/PR needs a Team:* label) Aug 27, 2020
andresrc added the Team:Platforms label (Label for the Integrations - Platforms team) Aug 27, 2020
elasticmachine (Collaborator) commented:

Pinging @elastic/integrations-platforms (Team:Platforms)

botelastic bot removed the needs_team label (Indicates that the issue/PR needs a Team:* label) Aug 27, 2020
ChrsMark self-assigned this Sep 22, 2020
ChrsMark added the in progress (Pull request is currently in progress.) and [zube]: In Progress labels Sep 22, 2020
ChrsMark (Member) commented Sep 22, 2020

Thanks for reporting this @chendo! I opened a PR for this; feel free to have a look.

chendo (Contributor, Author) commented Sep 23, 2020

@ChrsMark no worries! I had a look, and my only concern is that the total response length may not make sense if the client only receives the last response.
