
Protect against scale / k8s bugs with resource statuses that are too large #523

Closed
kdorosh opened this issue Sep 28, 2022 · 1 comment
kdorosh commented Sep 28, 2022

{"level":"fatal","ts":"2022-09-28T18:36:01.811Z","logger":"gloo-ee","caller":"setuputils/main_setup.go:93","msg":"error in setup: creating base VirtualService resource client: list check failed: rpc error: code = ResourceExhausted desc = grpc: trying to send message larger than max (3861539779 vs. 2147483647)","version":"1.12.23","stacktrace":"github.com/solo-io/gloo/pkg/utils/setuputils.Main\n\t/var/home/yuval/Projects/solo/gloo/pkg/utils/setuputils/main_setup.go:93\ngithub.com/solo-io/solo-projects/projects/gloo/pkg/setup.Main\n\t/var/home/yuval/Projects/solo/solo-projects/projects/gloo/pkg/setup/setup.go:29\nmain.main\n\t/var/home/yuval/Projects/solo/solo-projects/projects/gloo/cmd/main.go:11\nruntime.main\n\t/var/home/yuval/bin/go1.19.1.linux-amd64/go/src/runtime/proc.go:250"}

This can happen if the k8s connection to etcd returns data that is too large (in this case more than 3 GB, well past gRPC's 2 GiB max message size).
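
For context, the 2147483647 in the error is gRPC's hard message-size ceiling (math.MaxInt32 bytes). A simplified sketch of the guard grpc-go applies when sending a message; `checkMessageSize` is illustrative, not the actual grpc-go source:

```go
package main

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// checkMessageSize mimics (in simplified form) the check grpc-go performs
// before sending a message: anything above the configured max — capped at
// math.MaxInt32 (2147483647) bytes — is rejected with ResourceExhausted,
// which is exactly the error in the log above.
func checkMessageSize(msgLen, maxSize int) error {
	if msgLen > maxSize {
		return status.Errorf(codes.ResourceExhausted,
			"grpc: trying to send message larger than max (%d vs. %d)", msgLen, maxSize)
	}
	return nil
}
```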

This happened because we had a bug that reported statuses that were too large. In addition to resolving the root cause, we'd also like to truncate the status (which is for human consumption anyway) to a smaller size (say, 1 kilobyte) to protect etcd -> k8s communications.
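
A rough sketch of what that truncation could look like; `truncateStatusReason`, `maxStatusBytes`, and the 1 KB cap are illustrative placeholders, not existing Gloo code:

```go
package status

// maxStatusBytes caps how much status detail we persist on a resource.
// The 1 KB value is the illustrative limit suggested above; the real
// number would be chosen to keep etcd -> k8s payloads comfortably small.
const maxStatusBytes = 1024

// truncateStatusReason trims an overly long status reason so that a buggy
// or very chatty status writer can't bloat the object stored in etcd.
func truncateStatusReason(reason string) string {
	if len(reason) <= maxStatusBytes {
		return reason
	}
	const marker = "... [truncated]"
	// Byte-based slicing may split a multi-byte rune at the boundary;
	// acceptable here since the status is only for human consumption.
	return reason[:maxStatusBytes-len(marker)] + marker
}
```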
