
Bump github.com/coredns/coredns from 1.6.7 to 1.8.0 #2

Conversation


@dependabot dependabot bot commented on behalf of github Jan 4, 2021

Bumps github.com/coredns/coredns from 1.6.7 to 1.8.0.
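For context, a bump like this amounts to a one-line version change in the module's go.mod (plus the matching go.sum entries). A minimal, illustrative way to reproduce the same update locally with standard Go modules tooling (not part of this PR's generated diff):

    # update the dependency and tidy the module graph
    go get github.com/coredns/[email protected]
    go mod tidy

    # go.mod then requires the new version:
    # require github.com/coredns/coredns v1.8.0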

Commits
  • 054c9ae release: up version to 1.8.0 (#4225)
  • 91fd102 add default reviewers for circleci config (#4222)
  • 3d013b3 auto go mod tidy
  • 1db1e02 build(deps): bump github.com/miekg/dns from 1.1.33 to 1.1.34 (#4217)
  • 6e6aca8 auto go mod tidy
  • 641e2bf build(deps): bump github.com/aws/aws-sdk-go from 1.35.7 to 1.35.9 (#4213)
  • fe335e2 build(deps): bump github.com/golang/protobuf from 1.4.2 to 1.4.3 (#4216)
  • 34d98f1 build(deps): bump github.com/prometheus/client_golang (#4214)
  • 2e63b66 build(deps): bump gopkg.in/DataDog/dd-trace-go.v1 from 1.27.0 to 1.27.1 (#4212)
  • 1f07f7d auto make -f Makefile.doc
  • Additional commits viewable in compare view

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually

@dependabot dependabot bot requested a review from tpantelis as a code owner January 4, 2021 22:47
@dependabot dependabot bot added the dependencies Pull requests that update a dependency file label Jan 4, 2021

dependabot bot commented on behalf of github Jan 4, 2021

Dependabot tried to add @mangelajo and @tpantelis as reviewers to this PR, but received the following error from GitHub:

POST https://api.github.com/repos/tpantelis/lighthouse/pulls/2/requested_reviewers: 422 - Reviews may only be requested from collaborators. One or more of the users or teams you specified is not a collaborator of the tpantelis/lighthouse repository. // See: https://docs.github.com/rest/reference/pulls#request-reviewers-for-a-pull-request


dependabot bot commented on behalf of github Jan 22, 2021

Superseded by #4.

@dependabot dependabot bot closed this Jan 22, 2021
@dependabot dependabot bot deleted the dependabot/go_modules/github.com/coredns/coredns-1.8.0 branch January 22, 2021 07:11
tpantelis added a commit that referenced this pull request Mar 20, 2024
...instead of when local EPS is synced to the broker. This can cause
inconsistency if another service instance is unexported on another
cluster simultaneously. The scenario is:

- a service is exported on C2
- the service is then exported on C1. The local SI is created and it's
  observed that the aggregated SI on the broker already exists.
- the service on C2 is unexported and the aggregated SI is deleted because
  its cluster status is now empty.
- the local EPS on C1 is synced to the broker. At this point, it tries
  to update the aggregated SI with the cluster info, but the aggregated SI
  no longer exists.

There are a couple of ways to address it.

1) Do create-or-update when merging the local cluster info on EPS creation.
   The downside is that this wouldn't do the service type conflict checking,
   although the possibility that the SI was re-created by another cluster
   with a different service type in that window is remote.

2) Add the cluster name to the aggregated SI cluster status at local SI
   creation time (see the illustrative sketch below). This would've prevented
   C1 from deleting the aggregated SI because C2's name would've been present
   in the cluster status. I didn't do it this way, for consistency, so that
   the cluster name and port info are added atomically and only after the EPS
   has been successfully exported, to ensure it's ready for use if a consumer
   observes the cluster info present. But this isn't a requirement.

The consensus is #2.

Signed-off-by: Tom Pantelis <[email protected]>
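For illustration, a rough sketch of option 2 from the commit message above: seed the aggregated ServiceImport's cluster status with the local cluster name at creation time, so a concurrent unexport on another cluster never observes an empty cluster status and deletes the SI. This assumes the mcs-api v1alpha1 types and a controller-runtime client; the function and parameter names (createAggregatedServiceImport, brokerClient, localClusterID) are hypothetical and are not the actual Lighthouse code.

    // Sketch only, not the actual Lighthouse implementation.
    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/controller-runtime/pkg/client"
        mcsv1a1 "sigs.k8s.io/mcs-api/pkg/apis/v1alpha1"
    )

    // createAggregatedServiceImport creates the aggregated ServiceImport on the
    // broker with the local cluster's name already present in the cluster
    // status, so another cluster unexporting at the same time does not see an
    // empty cluster status and delete the SI (the race described above).
    func createAggregatedServiceImport(ctx context.Context, brokerClient client.Client,
        name, namespace, localClusterID string) error {
        aggregated := &mcsv1a1.ServiceImport{
            ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
            Spec: mcsv1a1.ServiceImportSpec{
                Type: mcsv1a1.ClusterSetIP,
            },
            Status: mcsv1a1.ServiceImportStatus{
                // Seed the local cluster name now; the port info is still merged
                // later, when the local EndpointSlice is synced to the broker.
                Clusters: []mcsv1a1.ClusterStatus{{Cluster: localClusterID}},
            },
        }

        if err := brokerClient.Create(ctx, aggregated); err != nil {
            return err
        }

        // If the ServiceImport CRD enables the status subresource, the status
        // set above is dropped on create, so write it explicitly afterwards.
        return brokerClient.Status().Update(ctx, aggregated)
    }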
tpantelis added a commit that referenced this pull request Mar 28, 2024
tpantelis added a commit that referenced this pull request Mar 28, 2024
tpantelis added a commit that referenced this pull request Apr 2, 2024
tpantelis added a commit that referenced this pull request May 7, 2024
tpantelis added a commit that referenced this pull request Aug 26, 2024