Slow start configuration understanding #36961
Comments
Documentation for slow start mode: https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/load_balancing/slow_start. Original PR adding slow start mode: #13176.
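For anyone landing here from search, a minimal slow start cluster configuration per those docs might look like the sketch below (the cluster name, window, and runtime key are illustrative, not taken from this issue):

```yaml
clusters:
- name: my_service                  # illustrative cluster name
  type: STRICT_DNS
  connect_timeout: 1s
  lb_policy: ROUND_ROBIN            # slow start is supported for Round Robin and Least Request
  round_robin_lb_config:
    slow_start_config:
      slow_start_window: 60s        # ramp newly added endpoints up over 60 seconds
      aggression:
        default_value: 1.0          # 1.0 gives a linear ramp; >1.0 sends more traffic earlier
        runtime_key: upstream.my_service.slow_start.aggression   # illustrative key
      min_weight_percent:
        value: 10.0                 # floor so a brand-new endpoint still receives some traffic
```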
@KBaichoo I have checked the documentation, but nothing there seems to explain why the two pods ramp up so differently, as described above. Is there any other reference you could point me to?
cc @nezdolik, who might be more familiar with this area.
This is being reported quite frequently by users who operate various service mesh technologies or Envoy-based ingresses, where the control plane enables locality-based routing by default. @anupam-meesho can you confirm that your setup does not have pods spread across multiple localities or priorities? (See the slow start docs.)
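To illustrate the interaction being asked about: when the control plane enables locality weighted load balancing, the balancer first splits traffic between localities according to their weights, and slow start only scales an individual endpoint's weight within its locality. A hypothetical snippet (zone names and weights are made up for illustration):

```yaml
clusters:
- name: my_service
  common_lb_config:
    locality_weighted_lb_config: {}   # locality weights drive the first-level traffic split
  load_assignment:
    cluster_name: my_service
    endpoints:
    - locality: { zone: zone-a }
      load_balancing_weight: 90       # this locality gets ~90% of traffic regardless of slow start
      lb_endpoints: []                # endpoints omitted for brevity
    - locality: { zone: zone-b }
      load_balancing_weight: 10       # a new pod here ramps up only within a ~10% share
      lb_endpoints: []
```

If that reading is right, two otherwise identical pods can appear to ramp very differently simply because they land in localities (or priorities) with different weights.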
This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or "no stalebot" or other activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had activity in the last 37 days. If this issue is still valid, please ping a maintainer and ask them to label it as "help wanted" or "no stalebot". Thank you for your contributions.
Hi Team,
Some background: our whole infrastructure runs on Kubernetes, and we have configured Contour as an ingress gateway to route traffic across clusters.
Question: we have configured slow start for some of our services and are seeing two different behaviours across pods. One pod honours the slow start window, while the other takes far longer to ramp up to full capacity. It would be very helpful if you could point us to the documentation or code that governs this behaviour so we can predict it. Below is the configuration we are using:
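For reference, the linked docs describe the slow start ramp roughly as follows (a paraphrase in my own notation; the authoritative formula and its edge cases are in the docs):

```latex
% Effective weight of an endpoint during its slow start window, roughly:
%   W = configured endpoint weight
%   T = slow_start_window
%   a = aggression
%   p = min_weight_percent
W_{\mathrm{eff}}(t) \approx W \cdot \max\!\left(\frac{p}{100},\ \left(\frac{t}{T}\right)^{1/a}\right),
\qquad 0 \le t \le T
```

After the window elapses, the endpoint returns to its full weight W; with aggression = 1.0 the ramp is linear, and larger values shift traffic to the new endpoint earlier.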