
openapi macro prototype #1

Merged (4 commits) Mar 13, 2020
Conversation

@ahl (Contributor) commented Mar 12, 2020

I think the change to api_http_entrypoints.rs is probably the most interesting part to check out. This just validates how the macro might work; while I'm confident that we'll be able to extract the macro data, type content, etc., I haven't figured that all the way out (and that's the next step).
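A hedged sketch of the idea the prototype validates: the endpoint attribute's arguments (e.g. `method = GET, path = "/projects"`) get collected into a key/value map that the macro then inspects. The real code walks a `proc_macro::TokenStream`; plain string splitting stands in for that here so the example is self-contained, and this `to_map` is illustrative, not the PR's actual implementation.

```rust
use std::collections::HashMap;

// Illustrative stand-in for the macro's attribute parsing: turn
// `key = value` pairs, separated by commas, into a map the macro can
// query for "method", "path", and so on.
fn to_map(attr: &str) -> HashMap<String, String> {
    attr.split(',')
        .filter_map(|pair| {
            let mut kv = pair.splitn(2, '=');
            let key = kv.next()?.trim().to_string();
            // Strip surrounding quotes from string-literal values.
            let value = kv.next()?.trim().trim_matches('"').to_string();
            Some((key, value))
        })
        .collect()
}

fn main() {
    let params = to_map(r#"method = GET, path = "/projects""#);
    assert_eq!(params.get("method").map(String::as_str), Some("GET"));
    assert_eq!(params.get("path").map(String::as_str), Some("/projects"));
    println!("ok");
}
```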

@ahl ahl requested a review from davepacheco March 12, 2020 00:12
@davepacheco (Collaborator) left a comment:

This is really cool. Nearly all of my comments are just asking for more comments and more descriptive variable names. If you're game, I do think it makes sense to clean it up and integrate it.

@@ -0,0 +1,346 @@
extern crate proc_macro;

What do you think about adding a block comment here explaining the purpose, and maybe some comments later in the file about the various pieces?


use std::collections::HashMap;

macro_rules! abort {

A comment on this macro would be useful. Is it for compile-time errors or runtime? Why is it different under cfg!(test)?
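One plausible reading of what the `abort!` macro under review does (speculative, since the question is open): in the real proc macro it would emit a `compile_error!` so the failure surfaces at the macro caller's compile time, while under `cfg!(test)` it panics so unit tests can observe the message. The `panic!` here stands in for both paths to keep the sketch runnable.

```rust
// Hedged sketch, not the PR's macro: report a formatted error tied to
// some span-like value. The real version would produce compile_error!
// tokens; panicking lets tests exercise the failure path.
macro_rules! abort {
    ($span:expr, $($arg:tt)*) => {{
        let _ = &$span; // the real macro attaches the error to this span
        panic!($($arg)*)
    }};
}

// Mirrors the method-validation match from the diff.
fn validate_method(method: &str) -> String {
    match method {
        "DELETE" | "GET" | "PATCH" | "POST" | "PUT" => method.to_string(),
        _ => abort!((), r#"invalid method "{}""#, method),
    }
}

fn main() {
    assert_eq!(validate_method("GET"), "GET");
    // The abort path unwinds, which a test can catch and assert on.
    assert!(std::panic::catch_unwind(|| validate_method("FETCH")).is_err());
    println!("ok");
}
```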

attr: proc_macro::TokenStream,
item: proc_macro::TokenStream,
) -> proc_macro::TokenStream {
//treeify(0, &attr.clone().into());

Drop the commented-out code? (Also at L26)

) -> proc_macro::TokenStream {
//treeify(0, &attr.clone().into());
let a = attr.into();
let mm = to_map(&a);

What about more descriptive names for a and mm? I'm not sure exactly what distinguishes a from attr, but maybe mm could be endpoint_params or something like that?

let method = match mm.get("method") {
Some(MapEntry::Value(method)) => match method.as_str() {
"DELETE" | "GET" | "PATCH" | "POST" | "PUT" => method,
_ => abort!(a, r#"invlid method "{}""#, method),

invlid should be invalid

}
}

fn parse(tt: &TokenStream) -> MapEntry {

Would it make sense to inline this in the 2 places [I found] where it's used? (Otherwise, maybe a more descriptive name?)

}

#[allow(dead_code)]
fn treeify(depth: usize, tt: &TokenStream) {

For debugging?

}

#[cfg(test)]
mod tests {

I love all the tests.

@@ -39,11 +41,14 @@ pub fn api_register_entrypoints(router: &mut HttpRouter) {
"/projects",
HttpRouteHandler::new(api_projects_post),
);
/*

Putting a note here to remind both of us to delete the commented-out block and apply the macro to the other functions before we integrate.

@@ -21,3 +21,4 @@ url = "^2.1"
#url = {version = "1.5", optional = true}
#percent-encoding = {version = "1.0.0", optional = true}
#clap = "2.25"
openapi = { path = "lib/openapi" }

I didn't know you could define crates within the crate like this but it seems like a good call. I imagine we might make this a general-purpose crate later but it'll be much easier to iterate on it here.

@ahl ahl merged commit cee24a9 into master Mar 13, 2020
jmpesp added a commit to jmpesp/omicron that referenced this pull request Mar 7, 2024
Volumes are "checked out" from Nexus for many reasons, some of which
include sending to another service for use in `Volume::construct`. When
that service activates the resulting Volume, this will forcibly take
over any existing downstairs connections based on the Upstairs'
generation number. This is intentional, and was designed so Nexus, in
handing out Volumes with increasing generation numbers, can be sure that
the resulting Volume works no matter what (for example, even if a
previous Upstairs is wedged somehow, even if the service that is running
the previous Upstairs is no longer accepting network connections).

Up until now, Nexus wouldn't allow checking out a Volume if there is any
chance a Propolis could be running that may use that Volume. This meant
restricting certain operations, like creating a snapshot when a disk is
attached to an instance that is stopped: any action Nexus would take to
attempt a snapshot using a Pantry would race with a user's request to
start that instance, and if the Volume checkouts occur in the wrong
order the Pantry would take over connections from Propolis, resulting in
guest OS errors.

Nexus _can_ do this safely though: it has all the information required
to know when a checkout is safe to do, and when it may not be safe.
This commit adds checks to the Volume checkout transaction that are
based on the reason that checkout is occurring, and requires call sites
that are performing a checkout to say why they are. Because these checks
are performed inside a transaction, Nexus can say for sure when it is
safe to allow a Volume to be checked out for a certain reason.

For example, in the scenario of taking a snapshot of a disk attached to
an instance that is stopped, there are two checkout operations that have
the possibility of racing:

1) the one that Nexus will send to a Pantry during a snapshot create
   saga.

2) the one that Nexus will send to a Propolis during an instance start
   saga.

If oxidecomputer#1 occurs before oxidecomputer#2, then Propolis will take over the downstairs
connections that the Pantry has established, and the snapshot create
saga will fail, but the guest OS for that Propolis will not see any
errors. If oxidecomputer#2 occurs before oxidecomputer#1, then the oxidecomputer#1 checkout will fail due to
one of the conditions added in this commit: the checkout is being
performed for use with a Pantry, and a Propolis _may_ exist, so reject
the checkout attempt.
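The decision the commit describes can be sketched as follows. The names `VolumeCheckoutReason` and `propolis_may_exist` are illustrative, not the actual Nexus types; the point is that the reason supplied by the call site, checked inside the checkout transaction, is enough to reject the unsafe ordering.

```rust
// Hypothetical reasons a call site might give for a Volume checkout.
#[derive(Debug, PartialEq)]
enum VolumeCheckoutReason {
    Pantry,
    InstanceStart,
}

// Sketch of the safety check: a Pantry checkout is only safe when no
// Propolis could be running against the same Volume; an instance-start
// checkout intentionally takes over existing downstairs connections
// via the bumped generation number.
fn checkout_allowed(reason: &VolumeCheckoutReason, propolis_may_exist: bool) -> bool {
    match reason {
        VolumeCheckoutReason::Pantry => !propolis_may_exist,
        VolumeCheckoutReason::InstanceStart => true,
    }
}

fn main() {
    assert!(checkout_allowed(&VolumeCheckoutReason::Pantry, false));
    assert!(!checkout_allowed(&VolumeCheckoutReason::Pantry, true));
    assert!(checkout_allowed(&VolumeCheckoutReason::InstanceStart, true));
    println!("ok");
}
```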

Fixes oxidecomputer#3289.