
[sled-agent] Avoid causing UUID conflicts #7266

Merged 5 commits on Dec 18, 2024
Changes from 4 commits
20 changes: 14 additions & 6 deletions sled-agent/src/sled_agent.rs
@@ -962,12 +962,20 @@ impl SledAgent {
    continue;
};

// First, ensure the dataset exists
let dataset_id = zone.id.into_untyped_uuid();
self.inner
    .storage
    .upsert_filesystem(dataset_id, dataset_name)
    .await?;
// NOTE: This code will be deprecated by https://github.com/oxidecomputer/omicron/pull/7160
//
// However, we need to ensure that all blueprints have datasets
// within them before we can remove this back-fill.
//
// Therefore, we do something hairy here: We ensure the filesystem
// exists, but don't specify any dataset UUID value.
//
// This means that:
// - If the dataset exists and has a UUID, this will be a no-op
// - If the dataset doesn't exist, it'll be created
Contributor:
Maybe add something like

Suggested change:
- // - If the dataset doesn't exist, it'll be created
+ // - If the dataset doesn't exist, it'll be created without its oxide:uuid zfs property set

Collaborator Author:
updated

// - If a subsequent call to "datasets_ensure" tries to set a UUID,
// it should be able to get set (once).
self.inner.storage.upsert_filesystem(None, dataset_name).await?;
}

self.inner
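To make the comment above concrete, the "set once" behavior it describes can be summarized by a small standalone helper. This is an illustrative sketch only: reconcile_uuid and its String error are hypothetical names invented here, not part of omicron; the real logic lives in the storage manager's upsert_filesystem path shown in the next file.

use uuid::Uuid;

// Hypothetical helper (illustration only) capturing the "set once" semantics
// of an optional dataset UUID, as described in the comment above.
fn reconcile_uuid(
    existing: Option<Uuid>,
    requested: Option<Uuid>,
) -> Result<Option<Uuid>, String> {
    match (existing, requested) {
        // The dataset already has a UUID and none was requested: no-op.
        (Some(old), None) => Ok(Some(old)),
        // No UUID anywhere: leave the property unset until a later
        // "datasets_ensure" call supplies one.
        (None, None) => Ok(None),
        // No UUID yet, one requested: it can be set (once).
        (None, Some(new)) => Ok(Some(new)),
        // Both present and equal: also a no-op.
        (Some(old), Some(new)) if old == new => Ok(Some(old)),
        // Both present but different: the UUID conflict this PR avoids.
        (Some(old), Some(new)) => {
            Err(format!("UUID mismatch: dataset has {old}, request has {new}"))
        }
    }
}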
55 changes: 29 additions & 26 deletions sled-storage/src/manager.rs
Expand Up @@ -35,7 +35,6 @@ use std::collections::HashSet;
use std::future::Future;
use tokio::sync::{mpsc, oneshot, watch};
use tokio::time::{interval, Duration, MissedTickBehavior};
use uuid::Uuid;

// The size of the mpsc bounded channel used to communicate
// between the `StorageHandle` and `StorageManager`.
@@ -100,7 +99,7 @@ enum StorageManagerState {

#[derive(Debug)]
pub(crate) struct NewFilesystemRequest {
dataset_id: Uuid,
dataset_id: Option<DatasetUuid>,
dataset_name: DatasetName,
responder: DebugIgnore<oneshot::Sender<Result<(), Error>>>,
}
@@ -526,7 +525,7 @@ impl StorageHandle {
// and ask for the set of all datasets from Nexus.
pub async fn upsert_filesystem(
Collaborator Author:
FWIW, this is the only call to upsert_filesystem. Once we remove the invocation in omicron_zones_ensure, we can remove this function (and this "optional dataset UUID" argument) altogether.

&self,
dataset_id: Uuid,
dataset_id: Option<DatasetUuid>,
dataset_name: DatasetName,
) -> Result<(), Error> {
let (tx, rx) = oneshot::channel();
@@ -1499,27 +1498,9 @@ impl StorageManager {
    zoned,
    encryption_details,
    size_details,
    id: Some(DatasetUuid::from_untyped_uuid(request.dataset_id)),
    id: request.dataset_id,
    additional_options: None,
})?;
// Ensure the dataset has a usable UUID.
Contributor:
I'm still a little weirded out by this block of code. Was it strictly unnecessary before? If there's any chance it wasn't, should we keep it but inside an if let Some(dataset_id) { ... } block?
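Concretely, the guarded variant being floated here would look roughly like the sketch below. This is only an illustration of the suggestion (it wraps the check that this diff removes, shown further down, in an if let; the into_untyped_uuid() conversion for the typed ID is an assumption), and it was ultimately dropped in favor of deleting the block entirely.

// Sketch of the suggested alternative: keep the post-ensure UUID check, but
// run it only when a dataset UUID was actually supplied with the request.
if let Some(dataset_id) = request.dataset_id {
    // Assumed conversion from the typed DatasetUuid to a plain Uuid for
    // comparison against the stored property.
    let dataset_id = dataset_id.into_untyped_uuid();
    if let Ok(id_str) = Zfs::get_oxide_value(&fs_name, "uuid") {
        if let Ok(id) = id_str.parse::<Uuid>() {
            if id != dataset_id {
                return Err(Error::UuidMismatch {
                    name: request.dataset_name.full_name(),
                    old: id,
                    new: dataset_id,
                });
            }
            return Ok(());
        }
    }
    Zfs::set_oxide_value(&fs_name, "uuid", &dataset_id.to_string())?;
}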

Collaborator Author:
I think it's a historical artifact -- I'd prefer to remove it if we can, because I think sled agent is plagued by too many of these "two ways to set a value" paths already, and I'd prefer to reduce it to one.

From 256dca4 (2021) -

// Idempotently ensure the named dataset exists as a filesystem with a UUID.
//
// Returns the UUID attached to the ZFS filesystem.
fn ensure_dataset_with_id(fs_name: &str) -> Result<Uuid, Error> {
    Zfs::ensure_filesystem(&fs_name, Mountpoint::Legacy)?;
    // Ensure the dataset has a usable UUID.
    if let Ok(id_str) = Zfs::get_oxide_value(&fs_name, "uuid") {
        if let Ok(id) = id_str.parse::<Uuid>() {
            return Ok(id);
        }
    }
    let id = Uuid::new_v4();
    Zfs::set_oxide_value(&fs_name, "uuid", &id.to_string())?;
    Ok(id)
}
This is one of the earliest forms of "ensure filesystem, then set a value".

From ccc28fe (2023) - this was renamed, but still had the structure of "sled agent calls Zfs::ensure_filesystem, and then sets the UUID":

fn ensure_dataset(
    &mut self,
    dataset_id: Uuid,
    dataset_name: &DatasetName,
) -> Result<(), Error> {
    let zoned = true;
    let fs_name = &dataset_name.full();
    let do_format = true;
    Zfs::ensure_filesystem(
        &dataset_name.full(),
        Mountpoint::Path(PathBuf::from("/data")),
        zoned,
        do_format,
    )?;
    // Ensure the dataset has a usable UUID.
    if let Ok(id_str) = Zfs::get_oxide_value(&fs_name, "uuid") {
        if let Ok(id) = id_str.parse::<Uuid>() {
            if id != dataset_id {
                return Err(Error::UuidMismatch {
                    name: dataset_name.clone(),
                    old: id,
                    new: dataset_id,
                });
            }
            return Ok(());
        }
    }
    Zfs::set_oxide_value(&fs_name, "uuid", &dataset_id.to_string())?;
    Ok(())
}

From de8bc93 (last week) - Zfs::ensure_filesystem (later renamed to ensure_dataset) started being able to explicitly set properties, as part of the optimization to avoid calling zfs set individually for each property.

let props = build_zfs_set_key_value_pairs(size_details, id);
if exists {
    Self::set_values(name, props.as_slice()).map_err(|err| {
        EnsureFilesystemError {
            name: name.to_string(),
            err: err.err.into(),
        }
    })?;

This was added recently, because (using dtrace) I noticed the callstack was basically:

zfs set <size properties, one at a time>
zfs get uuid
zfs set uuid

And by moving all properties into Zfs::ensure_filesystem, this could be a single call to get/set.

All this is to say:

  • After de8bc93, any property-managing code after calling Zfs::ensure_filesystem should not be necessary. If the UUID was supplied, it should have already been checked/set at this point.
  • I'd prefer to remove this block of code, rather than diverge the code which mutates dataset UUIDs further.

Does that sound reasonable?
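Put differently, after de8bc93 the optional UUID is just one more entry in the property list that the single ensure call applies, so there is nothing left for a post-ensure get/set to do. A simplified sketch of that shape (the names below are stand-ins, not the real omicron signatures; the actual code is build_zfs_set_key_value_pairs feeding Zfs::ensure_filesystem):

// Illustration only: the optional UUID rides along with the other ZFS
// properties applied by the one ensure call, replacing the separate
// "zfs get uuid" / "zfs set uuid" invocations.
fn build_props(
    quota_bytes: Option<u64>,
    dataset_id: Option<&str>,
) -> Vec<(String, String)> {
    let mut props = Vec::new();
    if let Some(quota) = quota_bytes {
        props.push(("quota".to_string(), quota.to_string()));
    }
    if let Some(id) = dataset_id {
        // If no UUID was supplied, the oxide:uuid property is simply not
        // touched, matching the upsert_filesystem(None, ...) path.
        props.push(("oxide:uuid".to_string(), id.to_string()));
    }
    props
}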

Contributor:
Hah! Yes, I was doing the same digging and came to the same conclusion (i.e., #7236 made this code unnecessary). 👍 on the removal.

if let Ok(id_str) = Zfs::get_oxide_value(&fs_name, "uuid") {
    if let Ok(id) = id_str.parse::<Uuid>() {
        if id != request.dataset_id {
            return Err(Error::UuidMismatch {
                name: request.dataset_name.full_name(),
                old: id,
                new: request.dataset_id,
            });
        }
        return Ok(());
    }
}
Zfs::set_oxide_value(
    &fs_name,
    "uuid",
    &request.dataset_id.to_string(),
)?;

Ok(())
}
@@ -1544,7 +1525,6 @@ mod tests {
use std::collections::BTreeMap;
use std::str::FromStr;
use std::sync::atomic::Ordering;
use uuid::Uuid;

// A helper struct to advance time.
struct TimeTravel {}
@@ -2005,16 +1985,39 @@
.expect("Ensuring disks should work after key manager is ready");
assert!(!result.has_error(), "{:?}", result);

// Create a filesystem on the newly formatted U.2
let dataset_id = Uuid::new_v4();
// Create a filesystem on the newly formatted U.2.
//
// We can call "upsert_filesystem" both with and without a UUID.
Contributor:
Would it make sense to add a test for calling with None and seeing the dataset come back without an ID?

Collaborator Author:
Sure, added a new test: upsert_filesystem_no_uuid
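For reference, a rough fragment of what that new test presumably asserts (this assumes the same harness, dataset_name, and imports as the existing test shown below; the actual upsert_filesystem_no_uuid test in the PR may differ in its details):

// Ensure the filesystem without supplying a UUID...
harness
    .handle()
    .upsert_filesystem(None, dataset_name.clone())
    .await
    .unwrap();
// ...and confirm the dataset reports no ID, since the oxide:uuid property
// was never set.
let observed_dataset = &Zfs::get_dataset_properties(
    &[dataset_name.full_name()],
    WhichDatasets::SelfOnly,
)
.unwrap()[0];
assert_eq!(observed_dataset.id, None);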

let dataset_id = DatasetUuid::new_v4();
let zpool_name = ZpoolName::new_external(config.disks[0].pool_id);
let dataset_name =
    DatasetName::new(zpool_name.clone(), DatasetKind::Crucible);
harness
    .handle()
    .upsert_filesystem(dataset_id, dataset_name)
    .upsert_filesystem(Some(dataset_id), dataset_name.clone())
    .await
    .unwrap();
// Observe the dataset exists, and the UUID is set.
let observed_dataset = &Zfs::get_dataset_properties(
    &[dataset_name.full_name()],
    WhichDatasets::SelfOnly,
)
.unwrap()[0];
assert_eq!(observed_dataset.id, Some(dataset_id));

harness
    .handle()
    .upsert_filesystem(None, dataset_name.clone())
    .await
    .unwrap();
// Observe the dataset still exists, and the UUID is still set,
// even though we did not ask for a new value explicitly.
let observed_dataset = &Zfs::get_dataset_properties(
    &[dataset_name.full_name()],
    WhichDatasets::SelfOnly,
)
.unwrap()[0];
assert_eq!(observed_dataset.id, Some(dataset_id));

harness.cleanup().await;
logctx.cleanup_successful();