func: add mitigation for deployments that take a while to "come alive" #4449
base: main
Conversation
Looks fine - just added one comment.
```diff
@@ -117,9 +117,12 @@ func (c *FuncAppHostClient) Publish(
 func (c *FuncAppHostClient) waitForDeployment(ctx context.Context, location string) (PublishResponse, error) {
 	// This frequency is recommended by the service team.
-	polLDelay := 1 * time.Second
+	pollDelay := 1 * time.Second
```
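For context, a minimal sketch of the kind of bounded, constant-delay polling loop this change implements; the helper name `pollUntilAlive`, the `check` callback, and the retry bound are illustrative, not the PR's actual identifiers:

```go
package example

import (
	"context"
	"fmt"
	"time"
)

// pollUntilAlive is an illustrative sketch, not the PR's actual code: it
// invokes check once per pollDelay until the deployment reports ready,
// bounded by maxRetries, and honors context cancellation while waiting.
func pollUntilAlive(ctx context.Context, check func(context.Context) (bool, error)) error {
	pollDelay := 1 * time.Second // this frequency is recommended by the service team
	maxRetries := 60             // assumed bound; the PR's real limit may differ

	for i := 0; i < maxRetries; i++ {
		done, err := check(ctx)
		if err != nil {
			return err // non-transient failure: stop polling
		}
		if done {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(pollDelay):
		}
	}
	return fmt.Errorf("deployment did not come alive after %d attempts", maxRetries)
}
```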
Have we considered using the go-retry package that we have used in the past? It seems to already support the polling logic with max retries, etc.
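For reference, assuming the package in question is github.com/sethvargo/go-retry, the same bounded constant-delay polling could look roughly like this; the `checkDeployment` stub and the values are illustrative:

```go
package example

import (
	"context"
	"errors"
	"time"

	"github.com/sethvargo/go-retry"
)

// checkDeployment is a stand-in for whatever status probe the client uses.
func checkDeployment(ctx context.Context) (bool, error) { return true, nil }

func waitWithGoRetry(ctx context.Context) error {
	// Constant 1s delay between attempts, capped at 60 retries.
	b := retry.WithMaxRetries(60, retry.NewConstant(1*time.Second))
	return retry.Do(ctx, b, func(ctx context.Context) error {
		done, err := checkDeployment(ctx)
		if err != nil {
			return err // unwrapped errors are treated as permanent: stop retrying
		}
		if !done {
			// Wrap to signal go-retry that this error is worth retrying.
			return retry.RetryableError(errors.New("deployment not yet alive"))
		}
		return nil
	})
}
```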
My preference would be:
- Consider the azure-sdk-for-go retry policies (see the sketch after this comment).
- If implementing HTTP directly, consider handling the error conditions directly.
My main hesitation about go-retry is that 1. it abstracts away for-loops, which "hides" the true performance cost from the programmer, and 2. it makes it too easy to stack inner and outer retries that compound into exponential retry loops.
This is primarily speaking from my own experience on previous teams, where a client tool ended up retrying for an hour because we had added retries at multiple layers. Each layer looked fine in code review -- it's only when you stare at a for-loop long enough that you understand the cost of execution you are imposing on the program.
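For the first option, a rough sketch of what tuning the SDK's built-in retry policy could look like, assuming the azcore/policy options from azure-sdk-for-go; the specific values are placeholders, not recommendations:

```go
package example

import (
	"net/http"
	"time"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
)

// clientOptions sketches configuring the SDK's transport-level retry policy.
var clientOptions = azcore.ClientOptions{
	Retry: policy.RetryOptions{
		MaxRetries:    5,                // retry attempts after the initial try
		RetryDelay:    1 * time.Second,  // initial delay between attempts
		MaxRetryDelay: 30 * time.Second, // cap on the growing backoff delay
		// Only retry on status codes that indicate a transient condition.
		StatusCodes: []int{
			http.StatusBadGateway,
			http.StatusServiceUnavailable,
			http.StatusGatewayTimeout,
		},
	},
}
```

One design note: this policy retries at the HTTP transport layer, so if it is combined with an application-level polling loop like the one in this PR, the effective attempt count is the product of the two layers, which is exactly the compounding this comment warns about.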
Looks good.
Add polling delay and retries for deployments that take a while to "come alive"
Fixes #4439