This repository has been archived by the owner on Feb 9, 2024. It is now read-only.

Increase dns-app update status checks #2324

Merged
knisbet merged 1 commit into master from kevin/master/dns-app-retry-attempts on Nov 15, 2020

Conversation

@knisbet knisbet (Contributor) commented Nov 13, 2020

Description

The number of retry attempts when checking the status of rigging changes may be too low for clusters under heavy scheduler load. We have a customer application that appears to crash during the app update, generating a large number of Kubernetes scheduler events, and in that environment the DNS pods can take more than 2 minutes to start.

Increase the number of retry attempts to allow for ~10 minutes.
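The patch itself isn't shown in this conversation. As a minimal Go sketch of the idea (names and values here are hypothetical, not gravity's actual code), the total wait is the attempt count multiplied by the polling interval, so raising the attempt count stretches the window from ~2 minutes to ~10 without touching the per-check logic:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// retry polls check until it succeeds, the context is cancelled, or the
// attempts are exhausted. The overall wait is roughly attempts * interval,
// so e.g. 120 attempts at a 5s interval gives a ~10 minute window.
func retry(ctx context.Context, attempts int, interval time.Duration, check func(context.Context) error) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		if lastErr = check(ctx); lastErr == nil {
			return nil
		}
		select {
		case <-time.After(interval):
		case <-ctx.Done():
			return ctx.Err()
		}
	}
	return fmt.Errorf("status check failed after %d attempts: %w", attempts, lastErr)
}

func main() {
	// Simulated rigging status check (hypothetical): the DNS pods become
	// ready on the 5th poll.
	calls := 0
	check := func(ctx context.Context) error {
		calls++
		if calls < 5 {
			return errors.New("dns pods not ready")
		}
		return nil
	}
	// Short interval so the demo finishes quickly; the real fix raises the
	// attempt count rather than shrinking the interval.
	if err := retry(context.Background(), 120, 100*time.Millisecond, check); err != nil {
		fmt.Println("update failed:", err)
		return
	}
	fmt.Printf("status check passed after %d polls\n", calls)
}
```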

Type of change

  • Bug fix (non-breaking change which fixes an issue)

Linked tickets and other PRs

Updates #2282

TODOs

  • Self-review the change
  • Address review feedback

@knisbet knisbet requested review from a team, a-palchikov and bernardjkim November 13, 2020 16:46
@knisbet knisbet merged commit 2fd474a into master Nov 15, 2020
knisbet pushed four commits that referenced this pull request Nov 15, 2020, each cherry-picked from commit 2fd474a with the same message as the PR description.
knisbet pushed a commit that referenced this pull request Nov 17, 2020
* Increase dns-app update status checks (#2324) (cherry picked from commit 2fd474a)
* Bump DNS_APP_TAG
knisbet pushed a commit that referenced this pull request Nov 17, 2020
* Increase dns-app update status checks (#2324) (cherry picked from commit 2fd474a)
* Update Makefile
knisbet pushed a commit that referenced this pull request Nov 18, 2020
* Increase dns-app update status checks (#2324) (cherry picked from commit 2fd474a)
* Bump DNS_APP_TAG
@wadells wadells deleted the kevin/master/dns-app-retry-attempts branch April 6, 2021 20:04
helgi pushed a commit to helgi/gravity that referenced this pull request Jun 21, 2021