IP pools data model and API rework #4261

Status: Merged (76 commits, merged on Jan 5, 2024)
Changes shown are from 67 commits.

Commits (76)
580594e
migrations and diesel schema for ip_pool_resource join table
david-crespo Oct 10, 2023
3065a61
add pools at rack setup time in the new way, update fetch_default_for
david-crespo Oct 10, 2023
f3809b5
update service pool lookup logic
david-crespo Oct 11, 2023
6404e27
delete silo_id and is_default from ip_pool model, test still passes
david-crespo Oct 11, 2023
94d0ebf
rip out silo_id and is_default on ip_pools in db
david-crespo Oct 11, 2023
1de49fc
let's add an endpoint!
david-crespo Oct 11, 2023
7259531
use joinable to make our joins more cute
david-crespo Oct 11, 2023
4c8c5ea
get all ip_pools integration tests passing
david-crespo Oct 11, 2023
5386043
instance ephemeral IP test passes
david-crespo Oct 11, 2023
a1eb6e4
shuffle migration files in anticipation of merging main
david-crespo Oct 12, 2023
9b4edaa
Merge branch 'main' into ip-pools-rework
david-crespo Oct 18, 2023
49c9927
unauthorized test
david-crespo Oct 18, 2023
9c8d2d5
test for pagination logic
david-crespo Oct 18, 2023
ff3a4e2
exclude service pool from list and block delete/update/modification
david-crespo Oct 18, 2023
db29bd3
fix the tests!
david-crespo Oct 18, 2023
9cbc423
get conflict/update logic right around is_default
david-crespo Oct 19, 2023
ee5549c
add dissociate endpoint as DELETE /associate
david-crespo Oct 20, 2023
0ab0cfc
rename endpoints and params to match convention better
david-crespo Oct 20, 2023
2b4f326
change API to allow associating silo by name. it's beautiful
david-crespo Oct 20, 2023
e2b7600
move migration up a number before merging main
david-crespo Oct 23, 2023
acda55a
Merge branch 'main' into ip-pools-rework
david-crespo Oct 24, 2023
80a4d22
get silo ID so we can test association by name or id
david-crespo Oct 25, 2023
9c101ac
pluralize /associations for restiness, prep for list endpoint
david-crespo Oct 25, 2023
32b2f75
endpoint to list associations for IP pool
david-crespo Oct 25, 2023
9519ed1
don't allow creating IP with not-associated pool, orphan pool test
david-crespo Nov 6, 2023
d7a0bdb
note that we are relying on DB enum order for most specific, flip order
david-crespo Nov 7, 2023
e79b862
fix tests broken by lack of ip association by default
david-crespo Nov 7, 2023
c60ea2a
first pass at checking for allocated IPs in associated pool before de…
david-crespo Nov 7, 2023
fa9317a
move check for outstanding IPs into its own function
david-crespo Nov 8, 2023
2345f35
correct logic for looking up outstanding IPs
david-crespo Nov 8, 2023
3226866
fix dissociate query params type serialization
david-crespo Nov 10, 2023
1dd01c1
these migrations go to 11. avoid conflict
david-crespo Nov 10, 2023
e8d1205
Merge branch 'main' into ip-pools-rework
david-crespo Nov 10, 2023
a47a9a3
git chose to delete something important
david-crespo Nov 10, 2023
6ea14ad
move migration to 19
david-crespo Dec 4, 2023
63e340c
Merge branch 'main' into ip-pools-rework
david-crespo Dec 4, 2023
b871acf
not sure how that became wrong again but whatever
david-crespo Dec 4, 2023
2094a8b
IP pools can't be fleet-wide (#4616)
david-crespo Dec 5, 2023
9da8783
integration_tests::ip_pools all pass
david-crespo Dec 5, 2023
a678c83
make instance integration tests pass
david-crespo Dec 6, 2023
31b476f
fix nexus-db-queries tests
david-crespo Dec 6, 2023
6a55e89
add make_default endpoint and give it one lil test
david-crespo Dec 6, 2023
1b71818
passify a million angry integration tests
david-crespo Dec 6, 2023
75c0290
clean up mess of ip pool test helpers. now pristine-ish
david-crespo Dec 6, 2023
7ff15b9
update migrations for a world without fleet IP pools
david-crespo Dec 7, 2023
85922cd
ip_pool_make_default -> ip_pool_silo_update, much better tests
david-crespo Dec 7, 2023
7194ee2
unauthorized tests pass. it's a miracle
david-crespo Dec 7, 2023
f9ed857
I think... all the tests pass
david-crespo Dec 8, 2023
d1e9b0a
test pool-silo unlink when there are/aren't IPs outstanding
david-crespo Dec 8, 2023
39a4a48
move schema version to v20
david-crespo Dec 11, 2023
00f86a0
Merge branch 'main' into ip-pools-rework
david-crespo Dec 11, 2023
d74ccd4
fix issues introduced by main merge (still some tweaks required)
david-crespo Dec 12, 2023
ee12c4e
Merge branch 'main' into ip-pools-rework
david-crespo Dec 12, 2023
70724cb
ensure silo is linked to IP pool before allowing floating IP allocation
david-crespo Dec 12, 2023
69a482a
ensure no outstanding floating IPs before unlinking silo
david-crespo Dec 12, 2023
7c0ba9d
verify there are no linked silos before deleting pool
david-crespo Dec 12, 2023
5db1f09
always use fully qualified name for SQL enum
david-crespo Dec 13, 2023
bbcd731
set up IP pool properly for e2e test
david-crespo Dec 13, 2023
c761559
move migration one more number up to 21
david-crespo Dec 13, 2023
dcd76b3
Merge branch 'main' into ip-pools-rework
david-crespo Dec 13, 2023
6b2ffbc
update quotas test for new ip pools helpers
david-crespo Dec 13, 2023
e4e969b
don't 500 when there is no default IP pool! be cool instead
david-crespo Dec 13, 2023
80cb5d0
self-review cleanup
david-crespo Dec 13, 2023
bf29052
couldn't help it, had to fix the silo-scoped pools endpoints
david-crespo Dec 14, 2023
32fcbc4
make authz checks slightly more consistent, get rid of some assoc/dis…
david-crespo Dec 14, 2023
dc9de62
PUT link 404s instead of 500 when the link doesn't exist
david-crespo Dec 14, 2023
ee0f363
clarify and clean up migrations, remove a couple of TODOs
david-crespo Dec 14, 2023
3830c2a
unlink takes authz_silo and authz_pool directly and does better auth …
david-crespo Dec 14, 2023
06fcc99
simplify and clarify migrations by using a subquery
david-crespo Dec 14, 2023
e02a0e6
move migration up
david-crespo Jan 3, 2024
ba1eb0e
Merge branch 'main' into ip-pools-rework (with conflicts)
david-crespo Jan 3, 2024
ea4cbfa
resolve conflicts
david-crespo Jan 3, 2024
8aa01b9
add ip pool resource type to list of allowed enum names
david-crespo Jan 4, 2024
584cb14
clean up merge leftovers and TODOs that are becoming issues
david-crespo Jan 4, 2024
4976404
move migrations up one
david-crespo Jan 5, 2024
16cbd29
Merge branch 'main' into ip-pools-rework
david-crespo Jan 5, 2024
1 change: 1 addition & 0 deletions common/src/api/external/mod.rs
@@ -720,6 +720,7 @@ pub enum ResourceType {
LoopbackAddress,
SwitchPortSettings,
IpPool,
IpPoolResource,
InstanceNetworkInterface,
PhysicalDisk,
Rack,
23 changes: 21 additions & 2 deletions end-to-end-tests/src/bin/bootstrap.rs
@@ -4,7 +4,8 @@ use end_to_end_tests::helpers::{generate_name, get_system_ip_pool};
use omicron_test_utils::dev::poll::{wait_for_condition, CondCheckError};
use oxide_client::types::{
ByteCount, DeviceAccessTokenRequest, DeviceAuthRequest, DeviceAuthVerify,
DiskCreate, DiskSource, IpRange, Ipv4Range, SiloQuotasUpdate,
DiskCreate, DiskSource, IpPoolCreate, IpPoolSiloLink, IpRange, Ipv4Range,
NameOrId, SiloQuotasUpdate,
};
use oxide_client::{
ClientDisksExt, ClientHiddenExt, ClientProjectsExt,
@@ -38,9 +39,27 @@ async fn main() -> Result<()> {

// ===== CREATE IP POOL ===== //
eprintln!("creating IP pool... {:?} - {:?}", first, last);
let pool_name = "default";
client
.ip_pool_create()
.body(IpPoolCreate {
name: pool_name.parse().unwrap(),
description: "Default IP pool".to_string(),
})
.send()
.await?;
client
.ip_pool_silo_link()
.pool(pool_name)
.body(IpPoolSiloLink {
silo: NameOrId::Name(params.silo_name().parse().unwrap()),
is_default: true,
})
.send()
.await?;

Comment from david-crespo (author):
Necessary because we no longer create and link a default silo during rack setup. This comes up in a lot of places, most notably (in terms of line count) the integration tests. Where before we used a populate_ip_pool helper that added an IP range to the existing default pool, we now use a create_default_ip_pool helper, which creates the pool, adds a range, and links it to the default silo.

client
.ip_pool_range_add()
.pool("default")
.pool(pool_name)
.body(IpRange::V4(Ipv4Range { first, last }))
.send()
.await?;
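For reference, the create_default_ip_pool test helper described in the author's comment above could be sketched roughly as follows, using the same oxide_client calls shown in this diff. This is a hedged sketch, not the helper from this PR: the integration tests use Nexus's own test client rather than oxide_client, and the function signature here is an assumption.

// Hypothetical sketch of a create_default_ip_pool-style helper: create the
// pool, link it to a silo as that silo's default, and add an address range.
// The oxide_client extension trait providing the ip_pool_* methods is assumed
// to already be in scope, as in bootstrap.rs above.
use anyhow::Result;
use oxide_client::types::{
    IpPoolCreate, IpPoolSiloLink, IpRange, Ipv4Range, NameOrId,
};
use oxide_client::Client;

async fn create_default_ip_pool(
    client: &Client,
    silo_name: &str,
    range: Ipv4Range,
) -> Result<()> {
    // 1. Create the pool. There is no silo or default flag on the pool
    //    itself anymore; those now live on the pool-silo link.
    client
        .ip_pool_create()
        .body(IpPoolCreate {
            name: "default".parse().unwrap(),
            description: "Default IP pool".to_string(),
        })
        .send()
        .await?;
    // 2. Link the pool to the silo and mark the link as the silo's default.
    client
        .ip_pool_silo_link()
        .pool("default")
        .body(IpPoolSiloLink {
            silo: NameOrId::Name(silo_name.parse().unwrap()),
            is_default: true,
        })
        .send()
        .await?;
    // 3. Add a range of addresses the pool can allocate from.
    client
        .ip_pool_range_add()
        .pool("default")
        .body(IpRange::V4(range))
        .send()
        .await?;
    Ok(())
}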
4 changes: 4 additions & 0 deletions end-to-end-tests/src/helpers/ctx.rs
@@ -287,6 +287,10 @@ impl ClientParams {
.build()?;
Ok(Client::new_with_client(&base_url, reqwest_client))
}

pub fn silo_name(&self) -> String {
self.rss_config.recovery_silo.silo_name.to_string()
}
}

async fn wait_for_records(
65 changes: 44 additions & 21 deletions nexus/db-model/src/ip_pool.rs
@@ -5,8 +5,10 @@
//! Model types for IP Pools and the CIDR blocks therein.

use crate::collection::DatastoreCollectionConfig;
use crate::impl_enum_type;
use crate::schema::ip_pool;
use crate::schema::ip_pool_range;
use crate::schema::ip_pool_resource;
use crate::Name;
use chrono::DateTime;
use chrono::Utc;
@@ -35,42 +37,23 @@ pub struct IpPool {
/// Child resource generation number, for optimistic concurrency control of
/// the contained ranges.
pub rcgen: i64,

/// Silo, if IP pool is associated with a particular silo. One special use
/// for this is associating a pool with the internal silo oxide-internal,
/// which is used for internal services. If there is no silo ID, the
/// pool is considered a fleet-wide pool and will be used for allocating
/// instance IPs in silos that don't have their own pool.
pub silo_id: Option<Uuid>,

pub is_default: bool,

Comment from david-crespo (author):
Important! Taking this off the IpPool model is the whole point of this PR.

}

impl IpPool {
pub fn new(
pool_identity: &external::IdentityMetadataCreateParams,
silo_id: Option<Uuid>,
is_default: bool,
) -> Self {
pub fn new(pool_identity: &external::IdentityMetadataCreateParams) -> Self {
Self {
identity: IpPoolIdentity::new(
Uuid::new_v4(),
pool_identity.clone(),
),
rcgen: 0,
silo_id,
is_default,
}
}
}

impl From<IpPool> for views::IpPool {
fn from(pool: IpPool) -> Self {
Self {
identity: pool.identity(),
silo_id: pool.silo_id,
is_default: pool.is_default,
}
Self { identity: pool.identity() }
}
}

@@ -93,6 +76,46 @@ impl From<params::IpPoolUpdate> for IpPoolUpdate {
}
}

impl_enum_type!(
#[derive(SqlType, Debug, Clone, Copy, QueryId)]
#[diesel(postgres_type(name = "ip_pool_resource_type"))]
pub struct IpPoolResourceTypeEnum;

#[derive(Clone, Copy, Debug, AsExpression, FromSqlRow, PartialEq)]
#[diesel(sql_type = IpPoolResourceTypeEnum)]
pub enum IpPoolResourceType;

Silo => b"silo"
);

#[derive(Queryable, Insertable, Selectable, Clone, Debug)]
#[diesel(table_name = ip_pool_resource)]
pub struct IpPoolResource {
pub ip_pool_id: Uuid,
pub resource_type: IpPoolResourceType,
pub resource_id: Uuid,
pub is_default: bool,
}

/// Information required to delete an IP pool association. Comes from request
/// params -- silo is a NameOrId and must be resolved to ID.
#[derive(Clone, Debug)]
pub struct IpPoolResourceDelete {
pub ip_pool_id: Uuid,
pub resource_type: IpPoolResourceType,
pub resource_id: Uuid,
}

impl From<IpPoolResource> for views::IpPoolSilo {
fn from(assoc: IpPoolResource) -> Self {
Self {
ip_pool_id: assoc.ip_pool_id,
silo_id: assoc.resource_id,
is_default: assoc.is_default,
}
}
}

/// A range of IP addresses for an IP Pool.
#[derive(Queryable, Insertable, Selectable, Clone, Debug)]
#[diesel(table_name = ip_pool_range)]
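To make the new model concrete: linking a pool to a silo is now just a row in ip_pool_resource, and is_default lives on that link rather than on the pool. Below is a minimal illustrative sketch (the function is a placeholder, not code from this PR), assuming the same imports as the rest of ip_pool.rs.

// Sketch only: build a silo link for a pool and convert it to the external
// view type via the From impl above. The UUIDs are placeholders.
fn example_silo_link(pool_id: Uuid, silo_id: Uuid) -> views::IpPoolSilo {
    let link = IpPoolResource {
        ip_pool_id: pool_id,
        resource_type: IpPoolResourceType::Silo, // the only variant today
        resource_id: silo_id,
        is_default: true, // default pool *for this silo*, not fleet-wide
    };
    link.into()
}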
19 changes: 16 additions & 3 deletions nexus/db-model/src/schema.rs
@@ -484,7 +484,14 @@ table! {
time_modified -> Timestamptz,
time_deleted -> Nullable<Timestamptz>,
rcgen -> Int8,
silo_id -> Nullable<Uuid>,
}
}

table! {
ip_pool_resource (ip_pool_id, resource_type, resource_id) {
ip_pool_id -> Uuid,
resource_type -> crate::IpPoolResourceTypeEnum,
resource_id -> Uuid,
is_default -> Bool,
}
}
@@ -1333,7 +1340,7 @@ table! {
///
/// This should be updated whenever the schema is changed. For more details,
/// refer to: schema/crdb/README.adoc
pub const SCHEMA_VERSION: SemverVersion = SemverVersion::new(20, 0, 0);
pub const SCHEMA_VERSION: SemverVersion = SemverVersion::new(21, 0, 1);

allow_tables_to_appear_in_same_query!(
system_update,
@@ -1342,8 +1349,9 @@ allow_tables_to_appear_in_same_query!(
);
joinable!(system_update_component_update -> component_update (component_update_id));

allow_tables_to_appear_in_same_query!(ip_pool_range, ip_pool);
allow_tables_to_appear_in_same_query!(ip_pool_range, ip_pool, ip_pool_resource);
joinable!(ip_pool_range -> ip_pool (ip_pool_id));
joinable!(ip_pool_resource -> ip_pool (ip_pool_id));

allow_tables_to_appear_in_same_query!(inv_collection, inv_collection_error);
joinable!(inv_collection_error -> inv_collection (inv_collection_id));
@@ -1393,6 +1401,11 @@ allow_tables_to_appear_in_same_query!(
allow_tables_to_appear_in_same_query!(dns_zone, dns_version, dns_name);
allow_tables_to_appear_in_same_query!(external_ip, service);

// used for query to check whether an IP pool association has any allocated IPs before deleting
allow_tables_to_appear_in_same_query!(external_ip, instance);
allow_tables_to_appear_in_same_query!(external_ip, project);
allow_tables_to_appear_in_same_query!(external_ip, ip_pool_resource);

allow_tables_to_appear_in_same_query!(
switch_port,
switch_port_settings_route_config
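As an illustration of what the new joinable!/allow_tables_to_appear_in_same_query! declarations enable, a Diesel query can now join ip_pool against ip_pool_resource, for example to resolve a silo's default pool. The sketch below is an assumption rather than datastore code from this PR: it uses a synchronous PgConnection and approximate module paths for brevity, whereas Nexus goes through its async connection machinery.

// Sketch only: look up the default IP pool linked to a given silo. Table and
// column names come from the table! definitions above; everything else
// (function name, module paths, sync connection) is assumed.
use diesel::pg::PgConnection;
use diesel::prelude::*;
use nexus_db_model::schema::{ip_pool, ip_pool_resource};
use nexus_db_model::{IpPool, IpPoolResourceType};
use uuid::Uuid;

pub fn default_pool_for_silo(
    conn: &mut PgConnection,
    silo_id: Uuid,
) -> QueryResult<IpPool> {
    ip_pool::table
        // joinable!(ip_pool_resource -> ip_pool (ip_pool_id)) lets Diesel
        // infer the ON clause for this join.
        .inner_join(ip_pool_resource::table)
        .filter(ip_pool_resource::resource_type.eq(IpPoolResourceType::Silo))
        .filter(ip_pool_resource::resource_id.eq(silo_id))
        .filter(ip_pool_resource::is_default.eq(true))
        .filter(ip_pool::time_deleted.is_null())
        .select(ip_pool::all_columns)
        .first::<IpPool>(conn)
}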
31 changes: 16 additions & 15 deletions nexus/db-queries/src/db/datastore/external_ip.rs
@@ -76,22 +76,18 @@ impl DataStore {
.fetch_for(authz::Action::CreateChild)
.await?;

// If the named pool conflicts with user's current scope, i.e.,
// if it has a silo and it's different from the current silo,
// then as far as IP allocation is concerned, that pool doesn't
// exist. If the pool has no silo, it's fleet-scoped and can
// always be used.
let authz_silo_id = opctx.authn.silo_required()?.id();
if let Some(pool_silo_id) = pool.silo_id {
if pool_silo_id != authz_silo_id {
return Err(authz_pool.not_found());
}
// If this pool is not linked to the current silo, 404
if self.ip_pool_fetch_link(opctx, pool.id()).await.is_err() {
return Err(authz_pool.not_found());

Comment from david-crespo (author):
This check now requires looking up the pool in the silo association table instead of just looking at the silo ID on the pool.

Comment from a reviewer:
I was wondering if you'd need to pass through the opctx_alloc here to have permissions, but I see the only permissions check you're doing inside ip_pool_fetch_link is for the silo. Something I learned from the quotas work is that if you explicitly need higher permissions as a side effect of a lower perm thing you can use self.opctx_alloc inside the datastore and pass that as OpContext for elevated permissions. I wonder if that would be better with added authz checks in ip_pool_fetch_link.

}

pool
}
// If no name given, use the default logic
None => self.ip_pools_fetch_default(&opctx).await?,
None => {
let (.., pool) = self.ip_pools_fetch_default(&opctx).await?;
pool
}
};

let pool_id = pool.identity.id;
@@ -147,24 +143,29 @@
) -> CreateResult<ExternalIp> {
let ip_id = Uuid::new_v4();

let pool_id = match params.pool {
// TODO: NameOrId resolution should happen a level higher, in the nexus function
let (.., authz_pool, pool) = match params.pool {
Comment on lines +146 to +147
Comment from a reviewer:
Maybe we could just create a separate issue about this. I think there's a few places in the networking APIs around BGP that have the same thing.

Comment from a reviewer:
I don't think you were planning on it, but let's save the fix for this for another PR.

Some(NameOrId::Name(name)) => {
LookupPath::new(opctx, self)
.ip_pool_name(&Name(name))
.fetch_for(authz::Action::Read)
.await?
.1
}
Some(NameOrId::Id(id)) => {
LookupPath::new(opctx, self)
.ip_pool_id(id)
.fetch_for(authz::Action::Read)
.await?
.1
}
None => self.ip_pools_fetch_default(opctx).await?,
};

let pool_id = pool.id();

// If this pool is not linked to the current silo, 404
if self.ip_pool_fetch_link(opctx, pool_id).await.is_err() {
return Err(authz_pool.not_found());

Comment from david-crespo (author):
Adding the same check (that the specified pool is linked to the current silo) for floating IPs.

}
.id();

let data = if let Some(ip) = params.address {
IncompleteExternalIp::for_floating_explicit(
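For context on the ip_pool_fetch_link check used above: conceptually it resolves the (pool, silo) row in the new ip_pool_resource table and returns an error when no link exists, which callers translate into a 404 on the pool. Below is a hedged sketch of that lookup, with a synchronous connection and assumed names standing in for the real async datastore method.

// Sketch only — not the datastore method from this PR. In Nexus, the silo id
// would come from opctx.authn.silo_required()? and the connection from the
// datastore's async pool; both are simplified here.
use diesel::pg::PgConnection;
use diesel::prelude::*;
use nexus_db_model::schema::ip_pool_resource;
use nexus_db_model::{IpPoolResource, IpPoolResourceType};
use uuid::Uuid;

pub fn ip_pool_silo_link_lookup(
    conn: &mut PgConnection,
    pool_id: Uuid,
    silo_id: Uuid,
) -> QueryResult<IpPoolResource> {
    ip_pool_resource::table
        .filter(ip_pool_resource::ip_pool_id.eq(pool_id))
        .filter(ip_pool_resource::resource_type.eq(IpPoolResourceType::Silo))
        .filter(ip_pool_resource::resource_id.eq(silo_id))
        // Returns Err(NotFound) when the pool is not linked to the silo,
        // which callers surface as a 404 on the pool.
        .first::<IpPoolResource>(conn)
}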