Single node clickhouse init #6903
Conversation
Force-pushed from cf515bc to e725a76
Force-pushed from 63d3ba4 to 79713cb
Force-pushed from e725a76 to 2e7724a
Looks great!
`clickhouse-admin/api/src/lib.rs` (outdated):
/// The single-node server is distinct from both the multi-node servers
/// and its keepers. The sole purpose of this API is to serialize database
/// initialization requests from reconfigurator execution. Multi-node clusters
/// must eventually implement a similar interface, but the implementation will
Nit: I'd probably avoid speculating here about implementation time and detail, and only state that multi-node clusters must implement a similar interface through `clickhouse-admin-server`.
Good call, fixed in 2ddcbd9.
let ctx = rqctx.context();
let initialized = ctx.db_initialized();
let mut initialized = initialized.lock().await;
if !*initialized {
What happens if we want to update the schema? Is there any harm in just letting initialization run through every time? It's supposed to be idempotent, right?
CC @bnaecker
> It's supposed to be idempotent, right?
Yeah, schema updates are idempotent, and also a no-op if the version in the database is at least as high as that on the client side.
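A rough sketch of that version gate, with `SchemaClient` and its methods as hypothetical stand-ins rather than the actual oximeter client API:

```rust
/// Hypothetical client wrapper; not the actual oximeter client API.
struct SchemaClient;

impl SchemaClient {
    /// Read the schema version currently recorded in the database.
    async fn read_db_version(&self) -> Result<u64, String> {
        Ok(1) // placeholder
    }

    /// Apply the upgrade steps; each step is itself idempotent.
    async fn apply_schema_upgrades(&self, _from: u64, _to: u64) -> Result<(), String> {
        Ok(()) // placeholder
    }

    /// No-op when the database is already at least as new as the client.
    async fn ensure_schema(&self, desired: u64) -> Result<(), String> {
        let current = self.read_db_version().await?;
        if current >= desired {
            return Ok(());
        }
        // Idempotent, so re-running after a partial failure is safe.
        self.apply_schema_upgrades(current, desired).await
    }
}
```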
We talked a little about this in an update meeting a few weeks ago; my understanding was that letting initialization run each time and therefore update the schema would represent a change to our current policy of not performing automatic schema updates. This was one of the reasons for preferring a server-side approach rather than initializing the database on every client connection. I'm fine either way; it's a trivial change to the code, but a potentially less trivial change in policy.
> We talked a little about this in an update meeting a few weeks ago; my understanding was that letting initialization run each time and therefore update the schema would represent a change to our current policy of not performing automatic schema updates. This was one of the reasons for preferring a server-side approach rather than initializing the database on every client connection. I'm fine either way; it's a trivial change to the code, but a potentially less trivial change in policy.
Thanks for reminding me of that discussion, Alex. However:
- Currently the schema version comes from a constant in the code, so it won't be updated at all unless the code changes.
- If the admin server crashes or the sled is rebooted, the in-memory state will be lost and the schema will be re-initialized anyway.
I think what we'll want to do in a follow-up is to make schema updates part of the planner, and then only execute them when the plan changes.
Oh, and the API call should then take the schema version to update from the executor.
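As a sketch of that follow-up (hypothetical type and field names, not an existing API):

```rust
use schemars::JsonSchema;
use serde::Deserialize;

/// Hypothetical request body: the reconfigurator executor passes the schema
/// version it planned, instead of the server relying on a compiled-in constant.
#[derive(Debug, Deserialize, JsonSchema)]
pub struct InitRequest {
    pub schema_version: u64,
}
```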
Ok, those are excellent points; you've convinced me. 012b6a1 does unconditional (but still serialized) database initialization. We'll have to take explicit versioning as a follow-up.
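A minimal sketch of what "unconditional but still serialized" means here, with `run_init` as a hypothetical stand-in for the actual initialization:

```rust
use tokio::sync::Mutex;

/// The `initialized` boolean check from the earlier snippet is gone, but the
/// Mutex still ensures only one initialization runs at a time.
async fn init_db(lock: &Mutex<()>) -> Result<(), String> {
    // Concurrent callers queue here for the duration of initialization.
    let _guard = lock.lock().await;
    run_init().await // idempotent, so re-running is harmless
}

/// Hypothetical stand-in for applying the Oximeter schema.
async fn run_init() -> Result<(), String> {
    Ok(())
}
```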
let admin_url = format!("http://{admin_addr}");
let log = opctx.log.new(slog::o!("admin_url" => admin_url.clone()));
let client = ClickhouseSingleClient::new(&admin_url, log.clone());
if let Err(e) = client.init_db().await {
Ok, so I know that I made this change. But afterwards I've been messing with other execution steps, and I think we should instead return the error here and then convert it into a warning in the execution step. The benefit of doing it this way is that it shows up in the background task as a warning in OMDB. I'll put a comment next to where this should go.
Done in c6e98a5.
    &opctx,
    &blueprint.blueprint_zones,
)
.await?;
We should put the warning here. Instead of using `?`, we should do the following:
if let Err(e) = clickhouse::deploy_single_node(
    &opctx,
    &blueprint.blueprint_zones,
)
.await
{
    return StepWarning::new((), e.to_string()).into();
}
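In other words, the suggestion trades the early return of `?` for a `StepWarning` result, so a failed ClickHouse init degrades to a visible warning in OMDB instead of aborting the rest of blueprint execution.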
Force-pushed from fe5e6dc to 83efab7
Co-authored-by: Karen Cárcamo <[email protected]>
Depends on #6894, replaces #6878, fixes #6826 on the server side.
Adds a new Dropshot server, `clickhouse-admin-single`, analogous to `clickhouse-admin-keeper` and `clickhouse-admin-server` (which were split off from `clickhouse-admin` in #6837). Its sole purpose is to initialize the single-node ClickHouse database with the current Oximeter schema. We use a single in-memory lock (a `tokio::sync::Mutex`) to serialize initialization requests. Multi-node ClickHouse clusters will need something analogous as a follow-up.

~~Still needs testing. My `a4x2` cluster is currently being uncooperative, and so this has not been through a successful "spin-up, expunge ClickHouse, regenerate" cycle. Also missing unit tests.~~ Tested on `a4x2` by expunging the ClickHouse zone in a new blueprint, setting that as the target and verifying that the zone is expunged, regenerating a new blueprint, setting that as the target and ensuring that the new zone is instantiated, and ensuring that Oximeter correctly stops inserting while the db is unavailable, but automatically resumes insertions once the new one is up & initialized.

~~The failing test `cockroachdb::test::test_ensure_preserve_downgrade_option` is caused by the fact that we don't currently spin up the new `clickhouse-admin-single` server in a test environment, so when a blueprint containing a ClickHouse zone is executed, the initialization step fails.~~ Thanks to @andrewjstone for fixing the last failing test (#7059).
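A minimal sketch of the endpoint shape this describes, assuming a hypothetical path and names rather than the actual `clickhouse-admin-single` API:

```rust
use dropshot::{endpoint, HttpError, HttpResponseUpdatedNoContent, RequestContext};
use tokio::sync::Mutex;

/// Shared server context; the Mutex is the single in-memory lock that
/// serializes initialization requests.
pub struct AdminContext {
    pub init_lock: Mutex<()>,
}

/// Initialize the single-node ClickHouse database with the Oximeter schema.
#[endpoint {
    method = PUT,
    path = "/init", // hypothetical path
}]
async fn init_db(
    rqctx: RequestContext<AdminContext>,
) -> Result<HttpResponseUpdatedNoContent, HttpError> {
    let ctx = rqctx.context();
    // Hold the lock across initialization so concurrent requests from
    // reconfigurator execution run one at a time.
    let _guard = ctx.init_lock.lock().await;
    // ...apply the current Oximeter schema here (idempotent)...
    Ok(HttpResponseUpdatedNoContent())
}
```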