413 Request Entity Too Large #21
Comments
We plan to add the ability to configure the global NGINX parameters through a ConfigMap. To allow customizing the configuration per Ingress resource, we can leverage annotations. As an example:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    nginx/client_max_body_size: 1m
spec:
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80
Will that work for your case? |
@pleshakov yeah that would be great! |
This already works with ConfigMaps such as:
A value of "0" lifts the restriction on client_max_body_size.
For more information on the available config parameters, have a look here. |
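The ConfigMap snippet from that comment is not shown above. A minimal sketch of what such a ConfigMap could look like for this controller, assuming the client-max-body-size key and the nginx-config name and nginx-ingress namespace used in the repo's installation examples:
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config        # assumed name from the installation examples
  namespace: nginx-ingress  # assumed namespace
data:
  # "0" disables NGINX's client request body size check entirely
  client-max-body-size: "0"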
We're adding this feature as well as other NGINX configuration parameters soon. Stay tuned. |
customization of NGINX configuration was added in #33 |
@rawlingsj Please check the example on how to customize NGINX: https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/customization |
This did not work in
|
@Hronom The image for this project -- https://hub.docker.com/r/nginxdemos/nginx-ingress/tags/ |
@pleshakov sorry guys, too many |
@Hronom I'm setting it like that, but it doesn't work. I don't know why? |
In case it helps, the 413 was solved with the
Running |
FYI, the annotation has changed and is now:
Also, I had to restart the nginx pod for the effect to take place. It immediately started working after that. |
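The updated annotation itself is not shown above. As a hedged sketch, assuming the nginx.org/client-max-body-size annotation documented for this controller, a per-Ingress override could look like this (the 10m value is an example, not taken from the comment):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    # assumed annotation name; replaces the earlier nginx/client_max_body_size proposal
    nginx.org/client-max-body-size: "10m"
As the comment notes, a restart of the controller pod may be needed before the change takes effect.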
Setting it to "0" makes the nginx post size unrestricted:
|
Looks like it changed to client-max-body-size for the ConfigMap. I didn't try with annotations: |
For |
I came here to configure the Docker registry helm chart and found:
I added it to the Ingress for the registry and it did the trick. 👌 |
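The helm values the commenter found are not shown above. A hypothetical values fragment, assuming the docker-registry chart exposes ingress.enabled and ingress.annotations and that the community ingress-nginx controller sits in front of the registry:
ingress:
  enabled: true
  annotations:
    # assumed annotation for the community ingress-nginx controller;
    # "0" removes the body size limit so large image layers can be pushed
    nginx.ingress.kubernetes.io/proxy-body-size: "0"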
This solved it for me: |
Hi, I know this is a very old post, but we have the same issue. I've updated nginx.ingress.kubernetes.io/proxy-body-size: 50m in the file, and now I have to restart the NGINX pod for the changes to take effect. Can you please let me know how to restart the Kubernetes NGINX pod? I know we can restart pods this way: kubectl scale deployment name --replicas=0 -n service. Is there any other way I can restart the NGINX pods to reload the configuration? |
@Ganeshkumar1023 in the latest version of kubectl (1.17.0) they have added a restart sub-command under the rollout command. You can use this to restart the pods, so the command would be: kubectl rollout restart deployment/abc |
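For reference, both restart approaches mentioned in the two comments above, written as a sketch with placeholder deployment and namespace names:
# scale down and back up (the older approach; names are placeholders)
kubectl scale deployment nginx-ingress --replicas=0 -n nginx-ingress
kubectl scale deployment nginx-ingress --replicas=1 -n nginx-ingress
# or, with a kubectl version that has the rollout restart sub-command
kubectl rollout restart deployment/nginx-ingress -n nginx-ingress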
I have tried both proxy-body-size and client-max-body-size on the configmap and did a rolling restart of the nginx controller pods, and when I grep the nginx.conf file in the pod it returns the default 1m. I am trying to do this within Azure Kubernetes Service (AKS). I'm working with someone from their support. They said it's not on them since it appears to be an nginx config issue. The weird thing is we had other clusters in Azure where this wasn't an issue until we discovered it with some of the newer deployments. The initial fix they came up with is what is in this thread, but it just refuses to change. Below is my configmap:
After issuing a rolling restart: kubectl rollout restart deployment/nginx-nginx-ingress-controller -n ingress-nginx
Grepping the nginx ingress controller pod to query the value now reveals:
It doesn't matter where I try to change it, on the configmap globally or the Ingress route specifically: this value above never changes. |
@Waterdrips cc |
Thank you
…------------------ Original Message ------------------
From: "Aech1977" <[email protected]>
Sent: Wednesday, March 11, 2020, 7:46 PM
To: "nginxinc/kubernetes-ingress" <[email protected]>
Cc: "蓝鹏" <[email protected]>; "Comment" <[email protected]>
Subject: Re: [nginxinc/kubernetes-ingress] 413 Request Entity Too Large (#21)
I have tried both proxy-body-size and client-max-body-size on the configmap and did a rolling restart of the nginx controller pods, and when I grep the nginx.conf file in the pod it returns the default 1m. I am trying to do this within Azure Kubernetes Service (AKS). I'm working with someone from their support. They said it's not on them since it appears to be an nginx config issue.
The weird thing is we had other clusters in Azure where this wasn't an issue until we discovered it with some of the newer deployments. The initial fix they came up with is what is in this thread, but it just refuses to change.
Below is my configmap:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
apiVersion: v1
data:
  client-max-body-size: 0m
  proxy-connect-timeout: 10s
  proxy-read-timeout: 10s
kind: ConfigMap
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"nginx-nginx-ingress-controller-7b9bff87b8-vxv8q","leaseDurationSeconds":30,"acquireTime":"2020-03-10T20:52:06Z","renewTime":"2020-03-10T20:53:21Z","leaderTransitions":1}'
  creationTimestamp: "2020-03-10T18:34:01Z"
  name: ingress-controller-leader-nginx
  namespace: ingress-nginx
  resourceVersion: "23928"
  selfLink: /api/v1/namespaces/ingress-nginx/configmaps/ingress-controller-leader-nginx
  uid: b68a2143-62fd-11ea-ab45-d67902848a80
After issuing a rolling restart: kubectl rollout restart deployment/nginx-nginx-ingress-controller -n ingress-nginx
Grepping the nginx ingress controller pod to query the value now reveals:
kubectl exec -n ingress-nginx nginx-nginx-ingress-controller-7b9bff87b8-p4ppw cat nginx.conf | grep client_max_body_size
client_max_body_size 1m;
client_max_body_size 1m;
client_max_body_size 1m;
client_max_body_size 1m;
client_max_body_size 21m;
It doesn't matter where I try to change it, on the configmap globally or the Ingress route specifically: this value above never changes.
|
@Aech1977 did you solve it? |
@Aech1977 this key is wrong: client-max-body-size: 0m |
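For context, the setup above appears to use the community ingress-nginx controller ("nginx-nginx-ingress-controller"), where the ConfigMap key mentioned earlier in the thread is proxy-body-size, and the object being edited (ingress-controller-leader-nginx, with the control-plane.alpha.kubernetes.io/leader annotation) looks like the leader-election ConfigMap rather than the controller's configuration ConfigMap. A hedged sketch of the intended change, with an assumed ConfigMap name and namespace:
apiVersion: v1
kind: ConfigMap
metadata:
  # assumed name: the ConfigMap the controller is started with (its --configmap flag),
  # not the leader-election ConfigMap shown above
  name: nginx-nginx-ingress-controller
  namespace: ingress-nginx
data:
  # "0" disables the request body size limit for the community controller
  proxy-body-size: "0"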
We've deployed a docker registry and created an ingress rule to its kubernetes service whilst using the nginx ingress controller. When pushing larger images we quickly hit the nginx limits, giving us the error below.
I've forked the repo and hacked the nginx config, adding a client_max_body_size attribute so we can push larger images. For a proper solution, though, it might be nice to set a value in the kubernetes ingress rule and have that used when the nginx controller is updated?
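A minimal sketch of the kind of hand-edit described, assuming a server block in the controller's generated NGINX config; client_max_body_size is the stock NGINX directive, the host and upstream names are hypothetical, and 0 removes the limit:
server {
    listen 80;
    server_name registry.example.com;  # hypothetical registry host
    # allow large Docker image layer uploads; 0 disables the body size limit
    client_max_body_size 0;
    location / {
        proxy_pass http://registry-backend;  # hypothetical upstream for the registry service
    }
}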