[reconfigurator] clickhouse_server SMF service and oximeter replicated mode #6343
Conversation
I have made no changes here. This is simply extracted from smf/clickhouse/method_script.sh. This script will be superseded by config file generation via clickward's libraries in a follow up PR.
The PR title was changed from "clickhouse_server SMF service" to "clickhouse_server SMF service and oximeter --replicated mode".
Thanks for putting this together. I have a few questions and suggestions, but looks pretty good overall!
@@ -49,6 +49,12 @@ enum Args {
    /// The socket address at which `oximeter`'s HTTP server runs.
    #[clap(short, long, action)]
    address: SocketAddrV6,

    // TODO (https://github.com/oxidecomputer/omicron/issues/4148): This flag
    // should be removed once single node functionality is removed.
I'll keep saying it: I believe we'll always want to retain the ability to run a single-node, non-replicated ClickHouse database. I'm not sure if this particular argument will change; I'm referring to the functionality itself.
This is stage three of the plan of action to roll out replicated clickhouse in RFD 468. Can we move the discussion of the decision to keep or remove single node functionality there?
In the meantime I'd like to keep these TODOs to keep track of the places that would need to be modified if we decide to remove single node functionality (I'll rephrase to say "if" instead of "once"). If we decide that single node functionality will never be removed I'll make sure to remove all of these comments :)
  </method_environment>
</method_context>
<exec_method type='method' name='start'
  exec='ctrun -l child -o noorphan,regent /opt/oxide/oximeter-collector/bin/oximeter run /var/svc/manifest/site/oximeter/config.toml --address %{config/address} --id %{config/id} --replicated &'
Just to make sure I understand, this whole file is duplicated so that we can pass the --replicated flag to this invocation? If so, is there a way to simplify that at all? E.g., can we provide this as an SMF property or in the TOML configuration file?
Hm, let me think about how I can make this nicer
I've modified the code so that the replicated setting is now a field in the oximeter configuration file instead of a flag in the CLI.
Example:
[db]
batch_size = 1000
batch_interval = 5 # In seconds
replicated = false

[log]
level = "debug"
mode = "file"
path = "/dev/stdout"
if_exists = "append"
We still need to have two configuration files as there really isn't a way to pass some sort of environment variable via omicron-package. But, it's now much more obvious to see what the difference between the files is, and it's cleaner.
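For illustration, a hedged sketch of how that [db] section might be modelled on the oximeter side; the struct shape and serde attributes here are assumptions, only the replicated field itself comes from this change:

// Hypothetical sketch of the [db] config section; only `replicated` is new.
#[derive(serde::Deserialize)]
struct DbConfig {
    batch_size: usize,
    batch_interval: u64, // seconds
    // Assumption: defaults to false so single-node configs can omit it.
    #[serde(default)]
    replicated: bool,
}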
@@ -58,14 +58,23 @@ const OXIMETER_COUNT: usize = 1;
// when Nexus provisions Clickhouse.
// TODO(https://github.com/oxidecomputer/omicron/issues/4000): Use
// omicron_common::policy::CLICKHOUSE_SERVER_REDUNDANCY once we enable
// replicated ClickHouse
// replicated ClickHouse.
// Set to 0 when testing replicated ClickHouse.
It seems like this is the number of single-node ClickHouse servers. If that's right, can we make that clear with some documentation?
I think it might be even better to have one single const here, to avoid confusion and accidentally starting something we didn't intend. For example:
enum ClickHouseSetup {
    Replicated { n_servers: usize, n_keepers: usize },
    SingleNode { n_servers: usize },
}
// We're using single-node for now.
const CLICKHOUSE_SETUP: ClickHouseSetup = ClickHouseSetup::SingleNode { n_servers: 1 };
// If you want to test replication, uncomment this:
// const CLICKHOUSE_SETUP: ClickHouseSetup = ClickHouseSetup::Replicated { n_servers: 2, n_keepers: 3 };
Then later you can match on that type and figure out how many of each kind of zone to start.
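For concreteness, a hedged sketch of what that later match could look like, continuing the hypothetical ClickHouseSetup example above (the variable names are illustrative, not from the PR):

// Derive per-zone-type counts from the single hypothetical constant.
let (n_single_node, n_clickhouse_servers, n_keepers) = match CLICKHOUSE_SETUP {
    ClickHouseSetup::SingleNode { n_servers } => (n_servers, 0, 0),
    ClickHouseSetup::Replicated { n_servers, n_keepers } => {
        (0, n_servers, n_keepers)
    }
};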
We still haven't 100% ruled out implementing stage one of RFD 468. In this scenario we'll be wanting to have the ability to deploy both alongside each other. I'd like to keep these constants as they are unless we decide that we will not implement stage 1.
Just want to add that these variables are going to go away very soon anyway as part of the reconfigurator work.
It's unfortunate that we have to generate a Blueprint from a service plan, rather than a service plan from an initial blueprint. Otherwise, I'd expect this all to be somewhat simpler. In any case, as @karencfv says, these variables are going away soon.
Thanks for taking the time to review @bnaecker 🙇♀️ I've addressed all your comments except for https://github.com/oxidecomputer/omicron/pull/6343/files#r1720042719 . Will play around with this to see if I can come up with something better
The PR title was changed from "clickhouse_server SMF service and oximeter --replicated mode" to "clickhouse_server SMF service and oximeter replicated mode".
@bnaecker @andrewjstone, I think I've addressed all the comments above. Please let me know if you think there's anything missing!
Sorry @karencfv I haven't had a chance to look yet. I'll take a look a bit later tonight.
Thanks for addressing the comments! Looks good, modulo one small fix using lookup_socket_v6(). Thanks!
oximeter/collector/src/agent.rs (Outdated)
let mut address = resolver
    .lookup_socket_v6(ServiceName::ClickhouseServer)
    .await?;
address.set_port(CLICKHOUSE_PORT);
We don't need to set the port here. The call to lookup_socket_v6() has a port in it, we don't want to overwrite it!
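Put differently, a sketch of the suggested fix, assuming nothing else around this call changes:

// Use the resolved address as-is; the SRV lookup already carries the port.
let address = resolver
    .lookup_socket_v6(ServiceName::ClickhouseServer)
    .await?;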
oximeter/collector/src/agent.rs (Outdated)
)
let mut address =
    resolver.lookup_socket_v6(ServiceName::Clickhouse).await?;
address.set_port(CLICKHOUSE_PORT);
Same here, please just use the return value as-is.
smf/clickhouse_server/manifest.xml (Outdated)
Ugh, github got rid of my comment, I'll copy it here so it doesn't get lost:
I have made no changes here. This is simply extracted from smf/clickhouse/method_script.sh. This script will be superseded by config file generation via clickward's libraries in a follow up PR.
Hi Karen,
Thanks for taking this on. It's nice to see work done to package up the new zones. However, I think a large emphasis has been put on distinguishing replicated from single-node setups when it isn't needed. I left specific comments around this. I think without doing this, the PR shrinks quite a bit, and less of it will have to be ripped out in the next week or two.
};
let id = OmicronZoneUuid::new_v4();
let ip = sled.addr_alloc.next().expect("Not enough addrs");
// TODO: This may need to be a different port if/when to have single node
I don't think this is necessary, as they should not be running in the same zones and will therefore have different IP addresses.
So we're definitely not doing stage 1 of RFD 468 then?
I don't think this rules out doing stage 1. I don't want to do that right now.
@@ -394,11 +395,17 @@ impl OximeterAgent {
// database.
let db_address = if let Some(address) = db_config.address {
    address
} else if replicated {
I don't understand why we need a replicated flag at all here. Why not always perform lookups from internal DNS? We just have to ensure that the single node ClickHouse also publishes a DNS entry. Oximeter is always only going to talk to one replica, right? If so it can just look up that replica from DNS. It doesn't have to know if there is only one of them or not because the client interface doesn't change. If there is a failure, then oximeter should perform another lookup so that it can find a healthy server.
Please correct me if I'm in the wrong here, but my understanding is that internal DNS entries for clickhouse and clickhouse_server will look quite different:
omicron/internal-dns/src/config.rs, lines 508 to 516 in ea32c0c:
assert_eq!(ServiceName::Clickhouse.dns_name(), "_clickhouse._tcp",);
assert_eq!(
    ServiceName::ClickhouseKeeper.dns_name(),
    "_clickhouse-keeper._tcp",
);
assert_eq!(
    ServiceName::ClickhouseServer.dns_name(),
    "_clickhouse-server._tcp",
);
Even if we were to use internal DNS we'd have to differentiate between the two zone types no?
Yeah, this was me assuming that single node and multi-node wouldn't be running at the same time. We can't actually guarantee that as you pointed out above. I don't want to entirely rule out step 1 of RFD 468 yet. Sorry for the confusion here. My mind was clearly elsewhere!
// TODO (https://github.com/oxidecomputer/omicron/issues/4148): This field
// should be removed if single node functionality is removed.
/// Whether ClickHouse is running as a replicated cluster or
/// single-node server.
As stated above, I'm unclear why Oximeter has to know anything about whether there are one or more clickhouse servers. I think things would be much simpler without this flag.
Oximeter has to initialize the database. It can't do that by asking whether it's talking to a replicated or single-node ClickHouse server, it has to set up the server. Doesn't that mean we need some kind of information like this?
Unless, based on your other comment, oximeter isn't going to set up the databases in the long run. Is that the job of the clickhouse-admin server in the future? If so, then I agree with you :)
Ah, I get it. I totally forgot about this dichotomy in the setup SQL.
It could certainly be done via clickhouse-admin, but that was not my intent. I still think oximeter should likely run the SQL to configure the database, but it should be told what to do by reconfigurator, especially when the old zone containing the single-node setup gets torn down and a replicated setup put in its place. For now though, I guess that's all done via the oximeter config file. That negates a lot of my comments ;)
Yeah, like @bnaecker says, we need to know which SQL file to use to initialize the database. Even if we use clickhouse-admin to set up the server in the future, we don't have that mechanism in place yet, so we need to have a way to differentiate between the two in the meantime. WDYT?
Ok, that makes sense. One could imagine providing something like the same configuration that we give to clickhouse-admin to oximeter as well. For example, it already needs to register itself with Nexus. That request could provide information about which DB type to set up, for example, or oximeter could have an API through which Nexus / Reconfigurator provide that configuration.
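To make the dichotomy being discussed concrete, a purely illustrative sketch of the branch in question; the client method names are hypothetical, not the actual oximeter client API:

// Hypothetical: choose which init SQL to run based on the `replicated` field.
if config.db.replicated {
    client.init_replicated_db().await?; // replicated schema SQL
} else {
    client.init_single_node_db().await?; // single-node schema SQL
}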
        listen_addr.to_string(),
    )
    .add_property("listen_port", "astring", listen_port)
    .add_property("store", "astring", "/data");
This is fine for testing, but this is not how I anticipate the system working. When the zone is launched, a clickhouse server will not be running. The clickhouse-admin dropshot server instead will be waiting for a ReplicaConfig to be pushed to it so that it can generate a config file and start the service. In the case of an existing dataset, the config file may already be there, along with a cached version of the ReplicaConfig input and a generation number, so that only updates generate new config files. Once there is a valid config file, then clickhouse-admin will start the server.
The only properties passed through should be those needed for starting the clickhouse-admin server. This allows us to use a self-assembling zone and not go through sled-agent when the configuration changes.
ah sweet! Should I leave this code as is for now until we've modified clickhouse-admin to do what you mention? I'd like to leave replicated clickhouse working with this PR even if we remove things later
ah sweet! Should I leave this code as is for now until we've modified clickhouse-admin to do what you mention? I'd like to leave replicated clickhouse working with this PR even if we remove things later
Absolutely leave it for now. Sorry again for the confusion. Everything in this PR works, so no reason to break it for some future code! Thanks again for hanging with me.
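As an aside, a hedged sketch of the generation-gated config handling described a few comments up; the ReplicaConfig shape and the helper function are assumptions, not clickhouse-admin's actual API:

// Hypothetical stand-in for the ReplicaConfig discussed above.
struct ReplicaConfig {
    generation: u64,
    // ... replica settings used to render clickhouse-config.xml ...
}

// Only rewrite clickhouse-config.xml when a strictly newer config arrives.
fn should_rewrite_config(
    cached: Option<&ReplicaConfig>,
    incoming: &ReplicaConfig,
) -> bool {
    match cached {
        Some(existing) => incoming.generation > existing.generation,
        None => true,
    }
}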
@@ -0,0 +1,124 @@
#!/bin/bash
I know this file was mostly just moved over, but I don't expect it to end up looking like this at all in the very near future. It will only start clickhouse-admin which will then wait for a ReplicaConfig. If a clickhouse-config.xml already exists then it should start clickhouse with that config. clickhouse-admin will continue waiting for updates indefinitely and rewrite clickhouse-config.xml when the configuration changes. Clickhouse will automatically reload this configuration.
ah yeah, definitely! I expect this to be gone soon.
#[derive(Clone, Debug, strum::EnumString, strum::Display, ValueEnum)]
#[strum(serialize_all = "kebab-case")]
#[clap(rename_all = "kebab-case")]
pub enum ClickhouseTopology {
I don't believe this is strictly needed. Why do we need to build differently depending upon the way we run clickhouse? I think you can always just build Clickhouse, ClickhouseServer and ClickhouseKeeper zones and package them up, but only deploy zones as decided at runtime. Isn't that exactly what we plan to do for production?
You already have to change the hardcoded variables in RSS to run multi-node, so we can just leave that for now until Reconfigurator is up and running. We'll only start regular Clickhouse zones when configured in the RSS Plan, and start replicated ones otherwise.
The idea here is to have this ClickhouseTopology for as long as we have the ability to deploy single-node and replicated clickhouse services in the repo. My view is that since the hardcoding of constants is temporary, we'd only have to manually change them until reconfigurator takes over.
When we swap over to using replicated cluster as the default installation, we can change the default setting to replicated-cluster. If we decide to remove all single-node functionality, we can get rid of ClickhouseTopology altogether. Does this make sense?
I'm not sure I follow here. This is just about building packages, not about what gets run via RSS, right? If so, why can't we build all the zone packages without specifying here? Then if we choose to remove all single-node functionality, we just remove that zone type. We don't also have to remove this option.
What am I missing?
Thanks for taking the time to review @andrewjstone 🙇♀️ ! I've left some comments, I hope it all makes sense
Overview
This commit introduces a few changes:
- A clickhouse_server smf service which runs the old "replicated" mode from the clickhouse service
- A replicated field for the oximeter configuration file which is consumed by the oximeter binary that runs the replicated SQL against a database. It now connects to the listen address from ServiceName::ClickhouseServer or ServiceName::Clickhouse depending on which zone has been deployed.
- A --clickhouse-topology build target flag which builds artifacts based on either a single-node or replicated-cluster setup. The difference between the two is whether the oximeter SMF service is executing the oximeter CLI with the --replicated flag or not. CAVEAT: It's still necessary to manually change the RSS node count constants to the specified amount for each clickhouse topology mode. This requirement will be short lived as we are moving to use reconfigurator.

Usage
To run single-node ClickHouse nothing changes; artifacts can be built the same way as before.
To run replicated ClickHouse, set the node count constants to the specified amount, and set the build target in the following manner:
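(A hedged sketch of what such an invocation might look like; apart from the --clickhouse-topology flag and its replicated-cluster value introduced by this PR, the omicron-package arguments shown are assumptions about the usual target-creation workflow.)

cargo run --locked --release --bin omicron-package -- \
    target create -i standard -m non-gimlet -s softnpu \
    --clickhouse-topology replicated-cluster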
Purpose
As laid out in RFD 468, to roll out replicated ClickHouse we will need the ability to roll out either replicated or single node ClickHouse for an undetermined amount of time. This commit is a step in that direction. We need to have separate services for running replicated or single-node ClickHouse servers.
Testing
Deploying omicron on a helios box with both modes.
Single node:
Replicated cluster:
Related: #5999