server: refactor the grpc server #33690
Conversation
Reviewable status: complete! 0 of 0 LGTMs obtained
pkg/server/grpc_server.go, line 35 at r1 (raw file):
newGRPCServer initializes a grpcServer whose serveMode is modeInitializing.
The interface could use a bit of brushing up. Callers shouldn't have to reach inside mode.
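To make the suggestion concrete, here is a minimal sketch (not the actual patch; modeOperational, modeDraining and the method names are assumptions) of what keeping mode behind methods could look like:

```go
package server

import "sync/atomic"

type serveMode int32

const (
	modeInitializing serveMode = iota
	modeOperational
	modeDraining
)

// grpcServer keeps its state unexported so callers go through methods
// rather than reaching into mode directly.
type grpcServer struct {
	mode int32 // holds a serveMode; accessed atomically
}

func (s *grpcServer) setMode(m serveMode) { atomic.StoreInt32(&s.mode, int32(m)) }

// operational reports whether the server has left the bootstrapping state.
func (s *grpcServer) operational() bool {
	m := serveMode(atomic.LoadInt32(&s.mode))
	return m == modeOperational || m == modeDraining
}
```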
pkg/server/grpc_server.go, line 73 at r1 (raw file):
}
if _, allowed := interceptors[fullName]; !allowed {
	return WaitingForInitError(fullName)
Hmm, is there anything that keeps clients from running into this error (erroneously) while the node is booting up? (This is not a question about this patch in particular)
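For context, the quoted check would typically live in a gRPC unary interceptor. A hedged sketch of the idea follows; the allowlist contents and the use of codes.Unavailable are assumptions, not taken from the patch:

```go
package server

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// interceptors is the allowlist of RPCs that may be served before the node
// has been initialized; the concrete method name below is illustrative only.
var interceptors = map[string]struct{}{
	"/cockroach.server.serverpb.Init/Bootstrap": {},
}

// waitingForInitError mimics the error quoted above; the Unavailable status
// code is an assumption, not taken from the patch.
func waitingForInitError(fullName string) error {
	return status.Errorf(codes.Unavailable, "node waiting for init; %s not available", fullName)
}

// filterWhileBootstrapping returns a unary interceptor that rejects any RPC
// not on the allowlist until initialized() reports true.
func filterWhileBootstrapping(initialized func() bool) grpc.UnaryServerInterceptor {
	return func(
		ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler,
	) (interface{}, error) {
		if !initialized() {
			if _, allowed := interceptors[info.FullMethod]; !allowed {
				return nil, waitingForInitError(info.FullMethod)
			}
		}
		return handler(ctx, req)
	}
}
```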
Reviewable status: complete! 1 of 0 LGTMs obtained
Force-pushed 831879e to cc51321.
Reviewable status: complete! 1 of 0 LGTMs obtained
pkg/server/grpc_server.go, line 35 at r1 (raw file):
Previously, tbg (Tobias Grieger) wrote…
newGRPCServer initializes a grpcServer whose serveMode is modeInitializing.
The interface could use a bit of brushing up. Callers shouldn't have to reach inside mode.
ok, brushed up some more and shuffled things around. PTAL.
pkg/server/grpc_server.go, line 73 at r1 (raw file):
Previously, tbg (Tobias Grieger) wrote…
Hmm, is there anything that keeps clients from running into this error (erroneously) while the node is booting up? (This is not a question about this patch in particular)
not sure I understand the question... We want clients to run into this while the node is booting up, don't we? Like, only some RPCs are allowed then (btw I've renamed interceptors to rpcsAllowedWhileBootstrapping).
Force-pushed 7359c2d to 9287008.
Reviewed 1 of 3 files at r1, 3 of 3 files at r2.
Reviewable status: complete! 1 of 0 LGTMs obtained (and 1 stale)
pkg/server/grpc_server.go, line 73 at r1 (raw file):
Previously, andreimatei (Andrei Matei) wrote…
not sure I understand the question... We want clients to run into this while the node is booting up, don't we? Like, only some RPCs are allowed then (btw I've renamed interceptors to rpcsAllowedWhileBootstrapping).
Yeah, I suppose that's right. Let me illustrate: n1 power-cycles quickly. It was previously the leaseholder for some ranges, so when it comes back up it immediately receives some BatchRequests. It'll refuse them with waitingForInitError. What happens? Does DistSender realize what's going on? I think it'll treat them as a SendError, which is supposedly correct.
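To make the scenario concrete, here is a toy sketch (explicitly not DistSender's actual code) of a sender that treats a "waiting for init" Unavailable response like any other send error and simply moves on to the next replica:

```go
package client

import (
	"context"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// sendToReplicas is a toy stand-in for the behavior described above: an error
// like waitingForInitError is treated the same as a failure to reach the
// node, so the sender just moves on to the next replica.
func sendToReplicas(
	ctx context.Context, replicas []string, send func(ctx context.Context, addr string) error,
) error {
	var lastErr error
	for _, addr := range replicas {
		err := send(ctx, addr)
		if err == nil {
			return nil
		}
		if status.Code(err) == codes.Unavailable {
			// The node is up but still bootstrapping (or otherwise unavailable);
			// remember the error and try another replica.
			lastErr = err
			continue
		}
		return err // some other, presumably non-retryable, error
	}
	return fmt.Errorf("sending to all replicas failed; last error: %w", lastErr)
}
```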
bors r+
Reviewable status: complete! 1 of 0 LGTMs obtained (and 1 stale)
pkg/server/grpc_server.go, line 73 at r1 (raw file):
Previously, tbg (Tobias Grieger) wrote…
Yeah, I suppose that's right. Let me illustrate: n1 power-cycles quickly. It was previously the leaseholder for some ranges, so when it comes back up it immediately receives some BatchRequests. It'll refuse them with waitingForInitError. What happens? Does DistSender realize what's going on? I think it'll treat them as a SendError, which is supposedly correct.
yeah. Well, stuff like that is exactly why I'm trying to pull initialization out of the Server code. When restarting a node, there should be no time when that node has its grpc server in this "awaiting bootstrap" state. I have a dream.
Merge conflict (retrying...)
Merge conflict
Force-pushed 9287008 to bf2a390.
bors r+
Build failed
Force-pushed bf2a390 to 015aeb8.
bors r+
Reviewable status: complete! 0 of 0 LGTMs obtained (and 2 stale)
craig[bot] staging batch: 33690 (server: refactor the grpc server, r=andreimatei a=andreimatei) together with 34027 (distsqlrun: add metrics for queue size and wait duration, r=ajwerner a=ajwerner).
Build failed (retrying...)
Build failed
bors r+
Reviewable status: complete! 0 of 0 LGTMs obtained (and 2 stale)
Timed out
bors r+
Reviewable status: complete! 0 of 0 LGTMs obtained (and 2 stale)
Build failed (retrying...)
Build failed
bors r+
Reviewable status: complete! 0 of 0 LGTMs obtained (and 2 stale)
Build failed
bors r+
Reviewable status: complete! 0 of 0 LGTMs obtained (and 2 stale)
craig[bot] staging batch: 33690 (server: refactor the grpc server, r=andreimatei a=andreimatei) together with 33728 (roachtest: switch tests to use the no-barrier ext4 option by default, r=andreimatei a=andreimatei) and 34064 (opt: Fix panic in optCatalog.CheckPrivilege, r=andy-kimball a=andy-kimball).
Build succeeded
This patch creates the grpcServer object to encapsulate the grpc.Server
object, the serveMode object and the interceptor that our server.Server
uses to filter out RPCs.

The idea is to work towards decoupling the grpc.Server from our
server.Server object. I'd like to have the grpc server be created before
Server.Start(): I'd like the cluster id and node id to be available by
the time Server.Start() is called so that we can get rid of the
nodeIDContainer that everybody and their dog uses. But to get these
ids, a grpc server needs to be running early to handle the "init" RPC
which bootstraps a cluster.

This patch doesn't accomplish too much - it doesn't do anything about
actually starting to serve any requests without a Server (i.e. create
the needed listeners), but I think the patch stands on its own as a
good refactoring.
Release note: None
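As a rough illustration of the startup ordering this commit message is aiming for, here is a hedged, self-contained sketch; all names, the allowlist contents, and the Unavailable status code are assumptions rather than the actual CockroachDB code:

```go
package server

import (
	"context"
	"net"
	"sync/atomic"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

type serveMode int32

const (
	modeInitializing serveMode = iota
	modeOperational
)

// grpcServer bundles the grpc.Server, its serve mode, and the interceptor
// that filters RPCs while the node is still bootstrapping.
type grpcServer struct {
	*grpc.Server
	mode int32 // holds a serveMode; accessed atomically
}

// rpcsAllowedWhileBootstrapping lists the RPCs that may run before init;
// the method name here is illustrative.
var rpcsAllowedWhileBootstrapping = map[string]struct{}{
	"/cockroach.server.serverpb.Init/Bootstrap": {},
}

func newGRPCServer() *grpcServer {
	s := &grpcServer{}
	s.setMode(modeInitializing)
	s.Server = grpc.NewServer(grpc.UnaryInterceptor(s.intercept))
	return s
}

func (s *grpcServer) setMode(m serveMode) { atomic.StoreInt32(&s.mode, int32(m)) }

func (s *grpcServer) intercept(
	ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler,
) (interface{}, error) {
	if serveMode(atomic.LoadInt32(&s.mode)) == modeInitializing {
		if _, ok := rpcsAllowedWhileBootstrapping[info.FullMethod]; !ok {
			return nil, status.Error(codes.Unavailable, "node waiting for init")
		}
	}
	return handler(ctx, req)
}

// startNode sketches the desired ordering: a gRPC server that can answer the
// "init" RPC exists before the rest of the Server starts, and only once the
// cluster and node IDs are known is it switched to fully operational.
func startNode(listenAddr string, waitForInit func() error) error {
	g := newGRPCServer() // starts out initializing; most RPCs are rejected
	ln, err := net.Listen("tcp", listenAddr)
	if err != nil {
		return err
	}
	go g.Serve(ln) // only the bootstrap allowlist gets through for now
	if err := waitForInit(); err != nil {
		return err // e.g. blocks until the "init" RPC has bootstrapped the cluster
	}
	// The cluster and node IDs would be available here, so the rest of
	// Server.Start() can run with them and the filter can be relaxed.
	g.setMode(modeOperational)
	return nil
}
```

Under this sketch, a freshly started node can answer the "init" RPC immediately, and everything else is rejected until setMode(modeOperational) is called.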