forked from submariner-io/lighthouse
Bump github.com/coredns/coredns from 1.6.7 to 1.8.0 #2

Closed

dependabot wants to merge 1 commit into master from dependabot/go_modules/github.com/coredns/coredns-1.8.0
Conversation
Bumps [github.com/coredns/coredns](https://github.com/coredns/coredns) from 1.6.7 to 1.8.0.
- [Release notes](https://github.com/coredns/coredns/releases)
- [Changelog](https://github.com/coredns/coredns/blob/master/Makefile.release)
- [Commits](coredns/coredns@v1.6.7...v1.8.0)

Signed-off-by: dependabot[bot] <[email protected]>
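For context, the proposed change amounts to a one-line edit in go.mod. A minimal sketch, assuming github.com/coredns/coredns is a direct dependency of this module (the surrounding require block is illustrative, not the repository's actual go.mod contents):

```diff
 require (
-	github.com/coredns/coredns v1.6.7
+	github.com/coredns/coredns v1.8.0
 )
```

Applying the same bump by hand would typically be `go get github.com/coredns/coredns@v1.8.0` followed by `go mod tidy`.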
Superseded by #4.
The dependabot bot deleted the dependabot/go_modules/github.com/coredns/coredns-1.8.0 branch on January 22, 2021 at 07:11.
tpantelis added a commit that referenced this pull request on Mar 20, 2024:

...instead of when the local EPS is synced to the broker. This can cause inconsistency if another service instance is unexported on another cluster simultaneously. The scenario is:

- A service is exported on C2.
- The service is then exported on C1. The local SI is created, and it's observed that the aggregated SI on the broker already exists.
- The service on C2 is unexported, and the aggregated SI is deleted because its cluster status is now empty.
- The local EPS on C1 is synced to the broker. At this point, it tries to update the aggregated SI with the cluster info, but it no longer exists.

There are a couple of ways to address this:

1) Do create-or-update when merging the local cluster info on EPS creation. The downside is that this wouldn't do the service type conflict checking, although the possibility that the SI was re-created by another cluster with a different service type in that window would be remote.

2) Add the cluster name to the aggregated SI cluster status when created, on local SI creation. This would've prevented C1 from deleting the aggregated SI because C2's name would've been present in the cluster status. I didn't do it this way for consistency, so that the cluster name and port info are added atomically and after the EPS has been successfully exported, to ensure it's ready for use if a consumer observes the cluster info present. But this isn't a requirement.

The consensus is #2.

Signed-off-by: Tom Pantelis <[email protected]>
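The consensus option (#2 above) can be illustrated with a small, self-contained Go sketch. The types, the in-memory broker, and the function names below are hypothetical stand-ins for illustration only, not the actual Lighthouse or mcs-api code; the point is simply that the exporting cluster's name goes into the aggregated ServiceImport's cluster status at local SI creation time, so a concurrent unexport on another cluster never observes an empty status and deletes the aggregated SI.

```go
// Hypothetical sketch of option 2 from the commit message above: when the
// local ServiceImport is created, immediately add this cluster's name to the
// aggregated ServiceImport's cluster status on the broker, rather than
// waiting for the local EndpointSlice to be synced.
// The types and the broker below are illustrative stand-ins, not the actual
// Lighthouse or mcs-api definitions.
package main

import (
	"errors"
	"fmt"
)

// AggregatedServiceImport is a simplified stand-in for the aggregated SI
// stored on the broker.
type AggregatedServiceImport struct {
	Name     string
	Clusters []string // names of clusters that have exported the service
}

// Broker is a minimal in-memory stand-in for the broker datastore.
type Broker struct {
	imports map[string]*AggregatedServiceImport
}

var errNotFound = errors.New("aggregated ServiceImport not found")

func (b *Broker) get(name string) (*AggregatedServiceImport, error) {
	si, ok := b.imports[name]
	if !ok {
		return nil, errNotFound
	}
	return si, nil
}

func (b *Broker) createOrUpdate(si *AggregatedServiceImport) {
	b.imports[si.Name] = si
}

// onLocalServiceImportCreated models option 2: the exporting cluster adds its
// name to the aggregated SI's cluster status as soon as the local SI is
// created.
func onLocalServiceImportCreated(broker *Broker, name, localCluster string) {
	aggregated, err := broker.get(name)
	if errors.Is(err, errNotFound) {
		// No aggregated SI yet - create it with this cluster already listed.
		aggregated = &AggregatedServiceImport{Name: name}
	}

	for _, c := range aggregated.Clusters {
		if c == localCluster {
			return // already present
		}
	}

	aggregated.Clusters = append(aggregated.Clusters, localCluster)
	broker.createOrUpdate(aggregated)
}

// onServiceUnexported models the other side of the race: a cluster removes its
// name and deletes the aggregated SI only if no other cluster remains.
func onServiceUnexported(broker *Broker, name, cluster string) {
	aggregated, err := broker.get(name)
	if err != nil {
		return
	}

	remaining := aggregated.Clusters[:0]
	for _, c := range aggregated.Clusters {
		if c != cluster {
			remaining = append(remaining, c)
		}
	}
	aggregated.Clusters = remaining

	if len(aggregated.Clusters) == 0 {
		delete(broker.imports, name)
		return
	}
	broker.createOrUpdate(aggregated)
}

func main() {
	broker := &Broker{imports: map[string]*AggregatedServiceImport{}}

	onLocalServiceImportCreated(broker, "nginx", "C2") // service exported on C2
	onLocalServiceImportCreated(broker, "nginx", "C1") // then exported on C1
	onServiceUnexported(broker, "nginx", "C2")         // C2 unexports concurrently

	// With option 2, C1's name is already in the cluster status, so the
	// aggregated SI survives the unexport on C2.
	si, err := broker.get("nginx")
	fmt.Println(si, err)
}
```

Running the sketch exports the service on C2 and then C1, unexports it on C2, and shows the aggregated SI surviving because C1's name is already recorded in its cluster status.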
tpantelis added the same commit, referencing this pull request, again on Mar 28, 2024 (twice), Apr 2, 2024, May 7, 2024, and Aug 26, 2024, each time with an identical commit message.
Bumps github.com/coredns/coredns from 1.6.7 to 1.8.0.
Commits
- 054c9ae release: up version to 1.8.0 (#4225)
- 91fd102 add default reviewers for circleci config (#4222)
- 3d013b3 auto go mod tidy
- 1db1e02 build(deps): bump github.com/miekg/dns from 1.1.33 to 1.1.34 (#4217)
- 6e6aca8 auto go mod tidy
- 641e2bf build(deps): bump github.com/aws/aws-sdk-go from 1.35.7 to 1.35.9 (#4213)
- fe335e2 build(deps): bump github.com/golang/protobuf from 1.4.2 to 1.4.3 (#4216)
- 34d98f1 build(deps): bump github.com/prometheus/client_golang (#4214)
- 2e63b66 build(deps): bump gopkg.in/DataDog/dd-trace-go.v1 from 1.27.0 to 1.27.1 (#4212)
- 1f07f7d auto make -f Makefile.doc

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.

Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:
- @dependabot rebase will rebase this PR
- @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
- @dependabot merge will merge this PR after your CI passes on it
- @dependabot squash and merge will squash and merge this PR after your CI passes on it
- @dependabot cancel merge will cancel a previously requested merge and block automerging
- @dependabot reopen will reopen this PR if it is closed
- @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually