
Support all HTTP Methods #1747

Open
7 of 32 tasks
mightytyphoon opened this issue Nov 9, 2018 · 87 comments
Labels: http (Supporting HTTP features and interactions), registries (Related to any or all spec.openapis.org-hosted registries)

Comments

@mightytyphoon commented Nov 9, 2018

Feature Request

OpenAPI support for all HTTP methods

RFC 2616 updated by RFC 7231 (errata list for 7231)

  • OPTIONS
  • GET
  • HEAD
  • POST
  • PUT
  • DELETE
  • TRACE
  • CONNECT

RFC 2518 updated by RFC 4918 and RFC 5689 for MKCOL

  • PROPFIND
  • PROPPATCH
  • MKCOL
  • COPY
  • MOVE
  • LOCK
  • UNLOCK

RFC 3253 (errata list for 3253)

  • VERSION-CONTROL
  • REPORT
  • CHECKOUT
  • CHECKIN
  • UNCHECKOUT
  • MKWORKSPACE
  • UPDATE
  • LABEL
  • MERGE
  • BASELINE-CONTROL
  • MKACTIVITY

RFC 3648

  • ORDERPATCH

RFC 3744 (errata list for 3744)

  • ACL

RFC 5323

  • SEARCH

draft-ietf-httpbis-safe-method-w-body-03

  • QUERY

draft-snell-link-method-01

  • LINK
  • UNLINK

Related Issues

#658
#1306
#480
swagger-api/swagger-core#1760

IANA link

Another link with methods and their RFCs: IANA

This post will be re-edited soon to add a small description for all methods

Regards.

@MikeRalphson (Member)

HEAD has been supported since at least Swagger v1.2

@mightytyphoon (Author)

@MikeRalphson updated, thanks.

@MikeRalphson (Member) commented Nov 11, 2018

Given the mission statement of the OAS as "standardizing on how REST APIs are described", could you expand on how some of these HTTP methods are being used within REST APIs (e.g. the WebDAV methods are often seen as tied only to that protocol by their definitions)?

@mightytyphoon (Author) commented Nov 12, 2018

Given the mission statement of the OAS as "standardizing on how REST APIs are described", could you expand on how some of these HTTP methods are being used within REST APIs (e.g. the WebDAV methods are often seen as tied only to that protocol by their definitions)?

Yes, I think during next week I will edit this post to describe the methods: whether they have a body, headers, query parameters, etc., how they should be used, and some examples. I think that's what you want, if I understood your answer correctly?

There is an inherent problem with REST because it wants to stay very close to CRUD (Create, Read, Update, Delete) and even SCRUD (S for Search), but a REST API does so much more: letting users subscribe and unsubscribe to a subject, connect to the API to have their data saved and secured, copy something, move some data, lock their account, etc. That's what all these additional methods were made for.

For example, to lock your account you could do a POST to /accounts/lock with your token in a header and a 'lock = true' parameter in the body, but REST could handle it with just a LOCK request to /accounts and the token as a cookie header (or some encrypted data from localStorage as a token), which makes the code clearer and really uses the HTTP method verbs to add some logic to requests.
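
A minimal sketch, assuming a hypothetical version of the spec that accepted arbitrary method names in the Path Item Object (the /accounts path, the cookie parameter, and the LOCK field below are purely illustrative; no published OAS version allows this today):

paths:
  /accounts:
    LOCK:                      # hypothetical field; not valid in OAS 3.0/3.1
      summary: Lock the authenticated user's account
      parameters:
        - name: session        # hypothetical cookie carrying the auth token
          in: cookie
          required: true
          schema:
            type: string
      responses:
        '204':
          description: Account locked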

lock/unlock are described here as 'lock model'.

GET will have query params with an optional body, whereas HEAD will have no body and POST is obliged to have a body; then responses can have a body or not depending on the method. So I have some research to do on this to give the right definitions; I will update very soon.

From Wikipedia: HTTP methods

[image: Wikipedia table of HTTP methods]

Also there should be some methods to ignore; for example CONNECT, which was made to create an SSL tunnel, has, I think, no use anymore now with SSL/TLS certificates. But some could find a use for it if they want to keep an HTTP server and then control the opening of HTTPS connections.

CONNECT method description on Mozilla

EDIT: I also want to add that Express supports these methods:

Express supports the following routing methods that correspond to HTTP methods: get, post, put, head, delete, options, trace, copy, lock, mkcol, move, purge, propfind, proppatch, unlock, report, mkactivity, checkout, merge, m-search, notify, subscribe, unsubscribe, patch, search, and connect.

source

@MikeRalphson (Member)

lock/unlock are described here as 'lock model'.

But my point is that LOCK and UNLOCK as defined there are for use within the WebDAV protocol. They have no defined meaning (on their own) outside that protocol. They may be tied to specific XML request / response formats (forgive me, it has been a long time since I read the WebDAV RFCs).

A picture may be worth a thousand words:

[image]

@darrelmiller (Member)

I'd be open to considering allowing the HTTP method to be an unconstrained string. I wouldn't want to re-enumerate all the methods in the IANA registry in the OpenAPI spec. We should consider what benefit tooling derives from it being an enumerated type.

p.s. Wow, that Wikipedia table is misleading. Describing a GET body as optional is just terrible.

@handrews (Member)

Describing a GET body as optional is just terrible.

Far too many people don't seem to understand that "has no defined semantics" is standards-ese for "FFS don't do this!" and not "sure, do whatever!"

@cmheazel (Contributor)

What is the expected behavior if my Path Item Object has an x-lock field associated with an Operation Object? Would that add Lock to the set of supported HTTP operations? If not, what else do we need to introduce these operations as Draft Features?
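
For illustration, a hedged sketch of what such an extension might look like (the x-lock field and its contents are purely hypothetical, and, as the next comment notes, an extension only means whatever the parties involved agree it means):

paths:
  /documents:
    get:
      summary: List documents
      responses:
        '200':
          description: The document list
    x-lock:                    # hypothetical extension holding an Operation-like object
      summary: Lock the collection for exclusive editing
      responses:
        '200':
          description: Lock granted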

@MikeRalphson (Member) commented Dec 18, 2018

Extensions such as x-lock only have the meaning you give them and agree on with third parties. Nothing more.
The methods LOCK and UNLOCK only have defined meanings within the WebDAV protocol, nowhere else. Outside of WebDAV their meaning also has to be mutually agreed by all parties.

@cmheazel (Contributor) commented Dec 19, 2018

@MikeRalphson Agreed. That is the nature of extensions. But if we can add HTTP operations using the extension mechanism, then we can use the Draft Feature process to measure demand. Draft Feature operations which are not implemented could be safely abandoned. Start with four or five of the least controversial ones.

@MikeRalphson (Member) commented Dec 19, 2018

I'm not against that in principle, like @darrelmiller says, maybe open it up with an extension which allows any HTTP method?

@egriff38

Hello there, does anyone know if any effort has been made to create an extension which broadens the HTTP methods that can be described in OpenAPI 3? I would like to take advantage of the SEARCH verb in my API but it looks like I am limited in my documentation at this time.

@ioggstream (Contributor)

Given the ongoing work on HTTP & HTTPAPI I really suggest that we should:

  • delegate method support to the HTTP specs
  • add interoperability considerations suggesting limiting to the methods referenced in 7231 or the latest httpbis I-D
  • note that tooling might not support methods outside 7231

cc: @MikeRalphson @darrelmiller

@philsturgeon (Contributor)

I'd support changing it to "any method defined in RFC 7231 and RFC 5323" but I'm not sure what value adding all those other ones provides?

I think it's a little telling that SEARCH was what necro-bumped this otherwise inactive topic. Probably just SEARCH is fine.

@ioggstream (Contributor) commented Feb 18, 2021

@philsturgeon I just won't ossify OAS with method decisions which are outside the OAS scope.
If we want to provide stuff for implementers, we can say:

  • implementers MUST support methods defined in 7231;
  • implementers MAY support the other methods reported in the IANA table (see httpbis-semantics).

imho stuffing HTTP elements into OAS could do more harm than good because we have no guarantee that the interpretation of HTTP we put into OAS is consistent with the latest specs (e.g. see https://github.com/httpwg/http-core/pull/653/files).

My 2¢, and thanks for your time!

@philsturgeon (Contributor)

@ioggstream I don't understand any individual parts of that reply, let alone all of it together. I was saying "+ SEARCH" would be a good enough change to satisfy this issue because it's the only one people seem all that fussed about.

@ioggstream (Contributor)

@philsturgeon my opinion is that OAS should not constrain the possible set of HTTP methods. This is because the HTTP specs are the place where methods are defined.

For interoperability reasons, OAS could state that tooling implementers MUST support at least the methods defined in RFC7231.

Stating that methods outside RFC7231 are not supported at all is imho a weak design choice.

@karenetheridge (Member)

I found this issue because I am writing a document that describes a "PURGE" endpoint, which is a non-standard HTTP method but nonetheless is very much in use in my application. Supporting any method matching the pattern "^[a-z]+$" would be very simple and straightforward; much more so than attempting to define a mechanism by which additional HTTP methods could be defined.

@bdunavant

I found this issue because I'm writing docs to describe a MERGE endpoint (which is OData 1.0 and OData 2.0, but not HTTP spec), and really should have been written as PATCH but someone made an unfortunate decision years ago and that ship has sailed. While I understand OpenAPI's mission is documenting true REST, adding this flexibility will help a lot in places where someone made poor choices and/or didn't understand REST very well, and the rest of us have to live with it.

@eli-darkly

My preference would be to have it allow any string, for the same reason @darrelmiller mentioned above. Really the decision of whether any given method is or isn't valid is the business of the service endpoint implementation; I don't see any value gained by having a general-purpose API tool make its own judgments on that.

However, if it is necessary to pick specific methods, I'll add one thing specifically about the REPORT verb. While it unfortunately is sometimes not allowed by HTTP clients, REPORT fills a specific need in HTTP APIs by being the method that 1. has a request body, 2. is idempotent, 3. is cacheable. GET can't (or shouldn't) have a body; POST isn't idempotent. Basically REPORT is what you would use if you want GET-like semantics but can't fit all the parameters into the URL.
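
As an illustration of that use case, a hedged sketch of how a REPORT operation with a request body might be written if the spec allowed the method (the path, the schema, and the REPORT field are hypothetical; OAS 3.x does not currently permit it):

paths:
  /flights:
    REPORT:                    # hypothetical: GET-like semantics, but with a request body
      summary: Search flights using a structured query
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              properties:
                origin:
                  type: string
                destination:
                  type: string
      responses:
        '200':
          description: Matching flights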

@wparad commented Sep 27, 2021

I feel like step 1 is to get an RFC published that has REPORT, QUERY, or SEARCH, and step 2 is to add it. We don't need to support it by being completely agnostic; while that's a great idea (and why would this tool restrict what is allowed?), fundamentally there are tools that use this to implement and automate the spec. It doesn't do any good to enable bad patterns. We should enable good patterns, okay patterns, and necessary patterns. I need SEARCH and I love that REPORT exists for WebDAV, but it doesn't exist for REST, and even if it did, none of the clouds support these verbs, so it wouldn't even have a real usage.

Help me get REPORT published to the right working group, and then it will de facto have to be added here. If we need another one, let's go through a similar process.

@eli-darkly

@wparad:

fundamentally there are tools that use this to implement and automate the spec. It doesn't do any good to enable bad patterns

I don't understand what's meant by "bad patterns" here. Documenting the actual behavior of a web service is not a bad pattern, even if you feel that the web service should have used a more standard HTTP method. The person writing up an OpenAPI spec may not be the person who implemented the service, and may not have any control over its behavior, so punishing them by making it impossible for them to write such a spec is not helpful.

it doesn't exist for REST

There isn't a canonical REST specification that says what methods exist or do not exist. There are only conventions.

and even if it did, none of the clouds support these verbs, so it wouldn't even have a real usage

I don't understand this statement either. Whatever set of commonly used web hosts you're referring to as "the clouds", those are not the only hosts that could be running services that have an OpenAPI spec. OpenAPI can be used for literally any HTTP application.

@SVilgelm (Contributor)

Please add the QUERY method: https://datatracker.ietf.org/doc/draft-ietf-httpbis-safe-method-w-body/

@wparad commented Nov 18, 2024

@ralfhandl Regarding this:

Method names MUST NOT be uppercase versions of the existing field names for HTTP methods, that is GET, PUT, ... (list all of them here), TRACE MUST NOT be used

Is there any reason not to allow the all-caps forms as aliases for the existing lowercase forms and just make it an error to use both forms for the same method? It would allow consistency going forward at what feels to me like a minimal implementation cost. We can actually enforce things like not using both get and GET through the schema.

But I might be missing something here. I'm really just interested in which option is the most benefit for the least impact. If more folks feel like "lowercase for the predefined methods, uppercase for anything else" is easier to think about, I'm all for that!

That is an amazing idea 👍

@ralfhandl (Contributor)

Actually I'm at odds with the existing lower-case fixed fields for HTTP methods because RFC9110, Section 9.1 states that

The method token is case-sensitive because it might be used as a gateway to object-based systems with case-sensitive method names. By convention, standardized methods are defined in all-uppercase US-ASCII letters.

So get is an allowed HTTP method and may mean something different than GET, and the existing OpenAPI fixed field get is misnamed.

There are several ways to fix that going forward and stay compatible with existing OpenAPI versions. The "baseline" rule should be

  1. New fixed fields for standard HTTP methods use HTTP casing: QUERY, SEARCH, ...

We could then discuss what to do with the existing fields:

  1. Uppercase synonyms for the existing fields get, ... are
    1. forbidden (this would reduce burden on implementations)
    2. allowed and have the same meaning as the existing lowercase fields (this would be forward-compatible and put some burden on implementations)

And optionally

  1. The existing lower-case method fields get, ... are deprecated.

@handrews (Member)

@ralfhandl Since we have something that's not as correct as we'd like (get needing to be changed to GET internally to be used), implementors will have to do something here anyway, and the amount of extra work is extremely small. Let's just fix things to properly use uppercase everywhere. There are several ways we could try to make that easier on implementors.

Another option would be to say "if you use any all-caps method fields, you MUST only use all-caps method fields." This preserves the lower-case fields for backwards compatibility, but avoids awkward jumbles like GET: {...}, put: {...}, QUERY: {...}, patch: {...}.

@handrews (Member)

@ralfhandl a benefit of this approach:

Another option would be to say "if you use any all-caps method fields, you MUST only use all-caps method fields."

is that it fits with something I've been feeling increasingly strongly: It's better to add a new mechanism that is as correct, complete, and coherent as possible alongside a less-correct, less-coherent, less-complete existing option than it is to keep trying to patch the old thing.

In this case "all all-caps fields are treated as HTTP methods taking an Operation Object" is the correct, complete, and coherent approach. The limited set of lower-case fields are retained for compatibility, and allowing only one approach or the other simplifies the implementation (you have to support both, but the "weird" one is already supported, and as soon as you see something trying to mix them, you can error out without having to do anything else).

Allowing a mixture of old and new actually makes things less coherent, because now there are many not-quite-correct combinations that need supporting, which has no clear use case (if you can add QUERY or whatever, you can upper-case your other methods as it has no impact on the API itself) and makes support more complicated.

@kevinswiber (Contributor)

@handrews @ralfhandl I'm happy to see this in the 3.2 milestone. I was just looking at using QUERY today.

The pragmatic part of me feels:

  • While methods are case sensitive in HTTP, in practice, I've only seen uppercase methods.
  • If we change to requiring uppercase where we previously (erroneously) required lowercase, that's another logic branch for implementations supporting multiple versions.
  • Can we just ignore the error and move on?

The pedantic part of me feels:

  • I'm incredibly irked that it's always been erroneous.
  • We shouldn't restrict methods to a pre-determined list.
  • Changing from a set of JSON names to a user-defined string actually simplifies implementation logic.

How much we lean on pragmatic vs. pedantic in a minor version release is up for discussion.

@handrews (Member) commented Jan 8, 2025

@kevinswiber What's wrong with the approach I suggested in the previous comment, of allowing lower-case fields for compatibility, but requiring you to use all-caps for all of the fields if you want to add newer / custom methods?

We can't drop the lower-case fields because of compatibility guarantees. Are you proposing that we keep those lowercase and only support new methods in uppercase?

The constraints here are:

  • We can't drop the old fields because of compatibility
  • We can't just say "use whatever you want" because we define some lowercase fields that are not methods (even if it is highly unlikely that those field names would be used for custom methods, it's never impossible, and we should avoid creating collisions)

No matter how we add this, there is the risk of someone defining both get and GET, etc. I'm searching for the simplest way for tools to perform error checking. I think it is more confusing to allow mixing and matching, as then you have to check for each possible duplication separately. It makes more sense to me to say "you can write this Object in compatibility mode (with pre-defined lower-case fields) or new mode (with open-ended upper-case fields), but not a mixture of both."

@baywet (Contributor) commented Jan 9, 2025

@handrews happy to take this one on if you want :)

@kevinswiber (Contributor) commented Jan 9, 2025

@handrews I'm not too fond of allowing a mixture of casing, aesthetically. It would offer a bridge to the next major version while enabling new features in the current major version release line, and I do like that.

In 3.x, we can say the original set of lowercase names are reserved, that they map to the uppercase HTTP methods, and that when parsing uppercase variants of those reserved names (get, put, post, delete, etc.), implementations MUST ignore those operations. Mixed-case wouldn't have collisions, but... don't do it anyway.

get: 
  # fine...
QUERY:
  # fine...
query:
  # fine, but separate from QUERY...
GET:
  # ignored...
GeT:
  # why??? please, don't...

I would feel more comfortable if we had a firm decision to support HTTP methods as they're defined in the next major version of the spec so we know for a fact that we're offering a conceptual bridge (not necessarily a syntactic one).

@handrews (Member) commented Jan 9, 2025

@kevinswiber I think I'm not really getting my point across here. Let me add some examples.

First, something like GeT would never be allowed.

This would be fine as it would be in "compatibility mode":

get: {}
post: {}

This would be fine as it would be in "3.2+ mode":

QUERY: {}
POST: {}

This would NOT be fine, because QUERY wasn't supported in 3.1:

query: {}  # ERROR!
post: {}

This would NOT be fine, because you're mixing modes:

QUERY: {}
post: {} # ERROR! Use of lowercase along with uppercase

I would feel more comfortable if we had a firm decision to support HTTP methods as they're defined in the next major version of the spec

None of this is about 4.0. I assume we'll do something sane like match the RFC case and not mix method fields with non-method fields. But whatever we do, that will be in the Moonwalk repo. This is just about 3.2.

@kevinswiber (Contributor)

@handrews Thanks for the examples! I see what you're saying now. This moves the complexity around a bit. I'll have a think on this!

@handrews (Member)

Yeah, "moving the complexity around a bit" is pretty much the only option I see. Because of compatibility, we can't do the absolute simplest thing. To me, the next simplest thing is "there are two options, and you can immediately tell which one you are in or if there's an error of trying to do both without having to individually compare every potential method field to find collisions." It's easy to code the check, easy to explain to users because it avoids "SHALL be ignored" behavior.

@handrews (Member)

@baywet

@handrews happy to take this one on if you want :)

Please feel free once Kevin and I resolve this last little bit!

@kevinswiber (Contributor)

@handrews This is a big change. Is it time for an experimental features mechanism? That way, authors and tool makers can opt-in.

features:
  method-http-compatibility:
    enabled: true

That way, we don't necessarily break 3.x, but we extend it in a way that could eventually make it into a future OAS release. It also lets us test ideas over time. We could have companion feature RFCs before stabilizing them into a release. This also lets us back out of decisions that prove unhelpful over time.

@handrews (Member)

@kevinswiber This feels like you think we're breaking something. What do you think that we're breaking? I don't think there's any breakage at all.

@handrews (Member)

@kevinswiber I think I need to understand exactly what you think the tooling impact of this change is and why it is such a problem. And let's compare it to a scenario where we did not have a namespace or casing problem, and all we were doing was saying "yeah, you can specify any method now" (just pretend we don't have to worry about name collisions on that). Because there's plenty of enthusiasm for that change, the only wrinkle is how to handle compatibility.

So what is the cost here, and why is it so high as to need additional caution?

@kevinswiber (Contributor) commented Jan 15, 2025

@handrews I think implicit compatibility mode based on casing is going to be very confusing for authors. I'm looking at ways to make it more explicit.

@handrews (Member)

@kevinswiber Ah, I see. For compatibility reasons we can't require someone to specify that it needs to be in compatibility mode. I'm very reluctant to add feature flags in general. I've seen that path before and it becomes a nightmare very fast. But this is worth discussing in more detail, as once we get to 3.3 we will have more situations where there are "new" and "old" ways available.

Global flags are rarely sufficient, as the scope of "global" is complex in a multi-document OAD, and many people will not want to change old endpoints, just use new features in new endpoints.

So I could see using a flag in the Operation Object itself, which would reduce the inevitable pressure to throw in billions of feature flags for this that or the other, and preserve flexibility.

@handrews (Member)

@kevinswiber if we don't want to have per-Operation flexibility, I'd be more in favor of a flag that changes between compatible and new modes globally, rather than per-feature. That simplifies tool development (you're in one mode or the other, no matter whether you're doing HTTP methods, parameter descriptions (likely to change in 3.3), or whatever). It does put more work on API description authors to migrate their whole OAD, but if we provide migration tools then that's less of a community burden than the burden on tool developers. And it avoids complex matrices of feature flags.
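
Purely to illustrate the global-flag idea under discussion (the field name and values below are hypothetical and not part of any OAS release or proposal text):

openapi: 3.3.0               # hypothetical future version
x-syntax-mode: modern        # hypothetical global switch: "compatible" keeps the 3.1 syntax,
                             # "modern" opts the whole document into the newer forms
paths:
  /things:
    QUERY:                   # only meaningful if the document has opted into the new mode
      responses:
        '200':
          description: Query results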

@baywet (Contributor) commented Jan 16, 2025

Adding my notes from the meeting.

Goal: allowing arbitrary methods, not just a known set (my original assumption)

Problem: the new methods, or custom ones, might conflict with already defined fields on the path item object (e.g. parameters)

Solutions:

  • Allow arbitrary methods, except for "reserved" names (e.g. parameters)
  • Only allowing a known set makes it challenging for people using custom methods.
  • Moving anything that's not known in advance into a sub-object, e.g. openOperations. Poses a challenge for overlays and parsers, which have to collect things from both places.
  • Special casing for all the arbitrary methods. A bit strange, but probably the least of all evils at this point. A decision on mixing casing or not still needs to be made.

@karenetheridge (Member)

from the meeting:

Our goal should be: a 3.1.1-valid document should continue to be valid in a 3.2 implementation. If you want to make use of new-in-3.2 features, you might need to make changes to your document, but these should be clear and obvious in a migration guide (and for this feature, limited to just the path-items where you are making use of new HTTP methods) and also easy to implement in tools.

I'm envisioning a "3.2 migration guide" to be published, with information both for OpenAPI description authors (end-users), and for tooling vendors. This guide can describe algorithms that tools can follow to detect the error cases and give suggestions for how to communicate the errors to the users.

This (supporting arbitrary method names when all-uppercased) is not a high burden for anyone editing definitions in a tool (as the tool will be able to do this for you in the places that need it), and if you're editing by hand, an implementation should also be able to clearly identify what the errors are and how to fix them.

@handrews (Member) commented Jan 16, 2025

To summarize my position post-meeting:

  1. We want to give unambiguous feedback to users so they are confident in their OADs
  2. We want to minimize the implementation cost of features

Balancing these two requires finding the least costly way to catch errors. It also requires that all method fields are either used in full or are errors. No silent ignoring of one over the other.

The sub-object option

I have thought more on the sub-object option that @baywet brought up again. I had excluded it because someone objected to it at some point in the past, so I wasn't quite ready to think it through on the call (particularly while struggling to think through a cold).

It does have one attractive quality: We already have many Objects where some combinations of fields are valid and others are not. This is not really a great design, but it is a well-established pattern that would be familiar to tooling authors.

So we could define methods and say that OADs MUST NOT use both methods and any one or more of the legacy method fields. We have plenty of such rules in the OAS today. The schema would look like:

$defs:
  path-item:
    type: object
    properties:
      $ref:
        type: string
        format: uri-reference
      summary:
        type: string
      description:
        type: string
      servers:
        type: array
        items:
          $ref: '#/$defs/server'
      parameters:
        type: array
        items:
          $ref: '#/$defs/parameter-or-reference'
    oneOf:
    - required: [methods]
      properties:
        methods:
          $ref: '#/$defs/methods'
    - not:
        required: [methods]
      properties:
        get:
          $ref: '#/$defs/operation'
        put:
          $ref: '#/$defs/operation'
        post:
          $ref: '#/$defs/operation'
        delete:
          $ref: '#/$defs/operation'
        options:
          $ref: '#/$defs/operation'
        head:
          $ref: '#/$defs/operation'
        patch:
          $ref: '#/$defs/operation'
        trace:
          $ref: '#/$defs/operation'
    $ref: '#/$defs/specification-extensions'
    unevaluatedProperties: false
  methods:
    type: object
    propertyNames:
      # see below regarding non-standard names
      $ref: '#/$defs/methodRegexp'
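
Under that sketch, a Path Item using the proposed methods field might look like this (a hypothetical example; the methods sub-object is only a proposal at this point):

paths:
  /items:
    summary: Item collection
    methods:                  # proposed sub-object: uppercase method names mapping to Operation Objects
      GET:
        responses:
          '200':
            description: List items
      QUERY:
        requestBody:
          content:
            application/json:
              schema:
                type: object
        responses:
          '200':
            description: Query results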

Should we (dis)allow non-standard-format methods with a sub-object?

One advantage of a Methods Object is that it would allow any method name allowed by RFC9110, and not just ones that could be standardized methods. Standardized methods are limited (by convention) to uppercase US-ASCII, but names in general are allowed to use the token syntax:

method = token
token = 1*tchar
tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" / "+" / "-" / "." / "^" / "_" / "`" / "|" / "~" / DIGIT / ALPHA

I am sure there is someone out there who would want this, but probably not many people. The drawback is that this would make get a valid field in the Methods Object, but it would NOT produce a standard GET /foo HTTP/1.1 request line. It would produce a non-standard get /foo HTTP/1.1 request line.

This seems like asking for trouble, so perhaps for 3.2 we should keep the Methods Object restricted to the regex /[A-Z]+/.

A further alternative would be to have an additional Path Item Object field, strictMethods, which defaults to true and restricts the Methods Object used with that Path Item to uppercase US-ASCII. Setting it to false allows any token, and you're on your own for ensuring that you don't mis-capitalize. This would require a bit more implementation complexity, though, and it's not clear to me that there's a big enough use case. When talking about this with interested folks I have always restricted it to all-caps US ASCII methods.

On the other hand, the added complexity is just something like this (Python's match() is implicitly anchored to the beginning of the string):

if not re.match(r"[-a-zA-Z0-9!#$%&'*+.^_`|~]+$", methodName):
    raise InvalidMethodError(methodName)
if strictMethods and not re.match(r"[A-Z]+$", methodName):
    raise InvalidStrictMethodError(methodName)

Interaction with Path Item Object $ref

The syntax rules apply only to Path Item Objects as written, not in-memory Path Item Object-equivalents caused by resolving the special Path Item Object $ref.

This comes from the DOM/ADA work going on in Moonwalk, where the ADA interface presents a "resolved" view of the OAD. The resolved view will not mimic the 3.x syntax; it is intended to be an abstraction layer bridging to 4.0. So in the ADA, we can separate the resolved method set from the syntax problems in 3.x. We want to start laying the groundwork now for tools to shift to this kind of architecture. While that will be a bigger part of 3.3, I expect to include guidance in 3.2 that will start to introduce the concepts without being prescriptive about them.

With ADA, I would not expect tools to construct combined Path Item Objects and apply syntax rules to them. Instead, I would expect them to combine the information in the two Path Item Objects. In terms of information, the case and structure of the fields is irrelevant.

Use with Overlays

@lornajane brought up a point I had not considered with how this works with Overlays.

Overlays and using case for new vs old

I think we need to understand if Overlays can select a field in a case-insensitive way, specifically for the "legacy" method names. This would allow the same selector to work for post vs POST. If anyone actually uses a PARAMETERS method, then they would need to make that selector case-sensitive.

Overlays and a Methods Object

Overlay 1.0 selectors to find every method would need to check for both the Path Item Object fixed set of methods and the open Methods Object set. My impression (which might be wrong) is that JSON Path can select at variable depth, so combining that with case-insensitivity could produce a single selector for everything. Maybe.

But currently, if you want to select only the methods out of the Path Object, you have to enumerate them in your selector somehow, or select each individually. In that case, having to also select the Methods Object separately does not seem like a huge burden. But I don't use Overlays so I'm really not sure. We should work out an example of what it would look like.

Overlays as migration tool

Using POST as an example, it seems to me that one could use an Overlay to just move every post to POST, or post to methods/POST.

@handrews (Member) commented Jan 16, 2025

A few additional things we need to keep in mind: we should include wording to the effect that:

  • Implementations are not required to understand method-specific rules outside of the methods defined in RFC9110 and RFC5789 (the PATCH method)
  • Even if OpenAPI implementations support custom methods, underlying libraries may have limitations; these limitations are outside of the scope of the OAS
  • CONNECT is still not supported because it cannot take a path in its target URI; this is a restriction of the Paths Object and unrelated to method name fields
  • likewise, OPTIONS * is prohibited by the Paths Object, and this change does not impact that limitation

@lornajane (Contributor)

That's not quite what I was trying to say about Overlays, although it's certainly a valid point, and for an Overlay that's used for multiple OpenAPI descriptions, it would make upgrading those versions harder.

The use case I tried to mention on the call is where an Overlay is used to add an operation (something like the approach here: https://apichangelog.substack.com/p/using-openapi-overlays ). Because we're changing what syntax we're allowing, I think this will get messy for a lot of existing pipelines, and it will be a lot messier by the time many people are able to upgrade to 3.2, given that Overlays is already seeing adoption in the 3.1 space.
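
For reference, an Overlay 1.0 action that adds an operation looks roughly like this (the target path and the added operation are illustrative; the pain point is that the update body has to match whichever method syntax the targeted OAS version allows, e.g. post today versus POST or a methods sub-object under some 3.2 proposals):

overlay: 1.0.0
info:
  title: Add a search operation
  version: 1.0.0
actions:
  - target: $.paths['/items']
    description: Add an operation to the /items Path Item
    update:
      post:                   # would need to change if 3.2 moves to a different method syntax
        summary: Search items
        responses:
          '200':
            description: Search results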

Looking at the problem again, I think my preferred options are:

  • expand the existing list of hardcoded field names that are allowed and supported as siblings to the current get/post set. It's not perfect and it's not future proof but it's a very easy way to expand our existing 3.x standard with some new options and very easy to understand.
  • or (my preferred option now I've talked it over with a few people) as a simpler variant of the sub-object approach: add the ability to support any additional HTTP method (but not the ones already named) under an extendedMethods section at the same level as our current get/post/put set (sketched below). This way, we're opening up to add anything that's needed, but no existing OpenAPI description needs changes, now or ever. It will be a bit more complicated for anyone adding operations that handle both the existing methods and the additional ones, and I acknowledge that. I expect users and tool vendors who need additional HTTP verbs to be able to handle the complexity. It's also a very clear separate feature that tooling can say whether it supports or not - and because it's separate and makes no changes to existing content on upgrade, it could also easily be implemented as x-extendedMethods in earlier versions if a tool wanted to support it there.
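
A minimal sketch of that option, assuming a hypothetical extendedMethods field (not part of any published OAS version):

paths:
  /documents:
    get:
      summary: List documents
      responses:
        '200':
          description: The document list
    extendedMethods:          # hypothetical: additional HTTP methods, keyed by method name
      LOCK:
        summary: Lock the collection for batch editing
        responses:
          '200':
            description: Lock granted
      UNLOCK:
        summary: Release the lock
        responses:
          '204':
            description: Lock released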

@lornajane (Contributor)

Overlays cannot be used as a migration tool. The existing content of the document is not available as an input in a (v1) Overlay.

@handrews (Member)

I'm concerned about Overlays suddenly adding even more restrictions on what we can and cannot do in a minor version change. This is going to make it very hard to deliver Moonwalk in meaningful increments.

While this limited case can be worked around, we will not be able to work around the identical concern when we revamp parameters for 3.3. The entire point is that the current parameter syntax is too complex and rickety to extend further. If people want to use new parameter features in 3.3, they will need to migrate to a new way of doing it for that operation.

There is always a migration cost. This is not so much about HTTP methods as it is figuring out how to amortize the migration cost effectively.

We should also look at what could be done on the Overlay side in an Overlay 1.1 (which I know was not in the plan, but perhaps it should be if it's going to constrain what the OAS can do). Adding limited semantic selection capabilities of some sort would also solve this by separating the meaning of the overlay from the exact syntax of the OAS.

Again, change has a cost. We want to amortize the big-bang change of Moonwalk across steps in 3.x (because we've seen how hard big bang changes are), and we want to make sure that none of our specs (OAS, Arazzo, Overlay) become a problematic constraint on the other. That means considering whether a difficult interaction is best fixed in one vs the other spec.

This does not mean that all migration costs should be pushed into 3.x. If something (possibly even a parameters revamp) is still too "big bang" in 3.x then we need to figure out whether to move ahead or to make the 4.0 big bang bigger. I don't know the answer to that at this stage.

Getting back to HTTP, if this is a problem for Overlays, the solution might lie in changing Overlays rather than restricting OAS. And again, we should not think of this as a one-off. As a one-off, it doesn't much matter. But that is the problem with a lot of the OAS (ahem, parameters). Too many one-offs that add up to a lot of complexity and no patterns to make it easier.
