[Discuss] API naming convention #52284

Open
mshustov opened this issue Dec 5, 2019 · 14 comments
Labels
discuss Team:Core Core services & architecture: plugins, logging, config, saved objects, http, ES client, i18n, etc

Comments

@mshustov
Contributor

mshustov commented Dec 5, 2019

The current Kibana style guide enforces an API format compatible with the Elasticsearch API:
https://github.com/elastic/kibana/blob/master/STYLEGUIDE.md#snake_case

Kibana uses snake_case for the entire API, just like Elasticsearch. All urls, paths, query string parameters, values, and bodies should be snake_case formatted.

The reality is that:

  • it's not always possible to follow this convention. For example, when passing around user input:
GET  /api/kibana/settings  
settings: {
  mySetting: { <—— registered by a plugin, Kibana shouldn't change the value
    userValue: ....
  },
}
  • it's hard to enforce consistency across Kibana. For example, a plugin provides data to other plugins both via its contract and via its API. As a result, the same data set has a different shape depending on the way it's consumed. We have this problem with the saved objects API: [Core][WIP] SavedObjects v2 #40380
// pluginB
const data = pluginA.get();
data.someProperty.name

// or plugin B
const data = await fetch('/api/pluginA');
data.some_property.name
  • It forces data transformation between the client and server sides. As a result, both the client and the server have to adapt the shape of the data for their environment. It's not that bad, but all consumers have to adapt their code so they no longer treat the Elasticsearch API as the source of truth.
licensing.features.my_feature // received from elasticsearch
licensing.features.myFeature // across the whole Kibana
licensing.features.my_feature // when plugin interacts with elasticsearch directly

Open questions

  • Do we still want to enforce the Elasticsearch API format?
  • How do we define when a plugin has to follow the convention?
  • Should we follow the convention only in Kibana's public API? How do we define which set of APIs is public?
@mshustov
Contributor Author

mshustov commented Dec 5, 2019

@elastic/kibana-platform @epixa

@mshustov mshustov changed the title API naming convention [Discuss] API naming convention Dec 6, 2019
@pgayvallet
Contributor

pgayvallet commented Dec 10, 2019

Being consistent in public APIs across the whole stack makes sense imho. However:

URLs

I don't see any difficulty in keeping our urls snake_case, as this is something that does not depend on any data.
An exception would be a url like /endpoint/{id}: even if id is, say, a UUID, we are already breaking the convention I guess?

Query params and body payload

This is where the difficulty is imho, as our API conventions kinda conflict with our JS naming conventions. I'm not even speaking about user input here, but just plain, static data structures.

Let's say we have the following request that respects the API naming conventions:

POST  /api/some_endpoint
{
  some_property: 'someValue',
}

which would result in the following type:

type myDataStructure = {
     some_property: string;
}

If this were not constrained by the API convention, the type would probably have been:

type myDataStructure = {
     someProperty: string;
}

However, we are in JS. There is no such distinction as serializable vs. non-serializable types. We don't have the tooling that proper typed languages can offer, such as field annotations, type converters, and more. By default, our JS data structure is what gets serialized.
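
To illustrate with a minimal sketch (the type and value names below are made up): the keys of the in-memory structure are exactly what ends up on the wire, because there is no separate serialization mapping.

interface MyDataStructure {
  someProperty: string; // camelCase, per our JS conventions
}

const body: MyDataStructure = { someProperty: 'someValue' };

// The model *is* the wire format: JSON.stringify emits the keys verbatim,
// so the payload comes out camelCase unless we rename keys by hand.
JSON.stringify(body); // '{"someProperty":"someValue"}'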

So what are our options? From what I see:

  • Enforce our JS naming convention to be snake_case instead of camelCase, on the whole platform.

  • Dissociate the transfer data structure (DTO) from the actual model, and convert everything to DTOs that respect the naming convention when sending/receiving between server and client (heavy boilerplate, as every endpoint needs to create and use DTOs; see the sketch after this list).

  • Add some core data adapter that does the same thing as the previous point, but auto-magically. However, there is no TypeScript utility to do something like MyDtoType = ToSnakeCase<MyType>, so the TS types will not be properly linked.

  • Actually decide that this API convention is more of a best effort.

  • Other?
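
To make the boilerplate of the DTO option concrete, a minimal hand-written sketch (SomeModel, SomeModelDto, toDto, and fromDto are illustrative names, not existing Kibana code):

// Internal model, following the JS camelCase convention.
interface SomeModel {
  someProperty: string;
  perPage: number;
}

// Wire format (DTO), following the snake_case API convention.
interface SomeModelDto {
  some_property: string;
  per_page: number;
}

// Every endpoint needs a pair of hand-written converters like these.
const toDto = (model: SomeModel): SomeModelDto => ({
  some_property: model.someProperty,
  per_page: model.perPage,
});

const fromDto = (dto: SomeModelDto): SomeModel => ({
  someProperty: dto.some_property,
  perPage: dto.per_page,
});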

@joshdover
Contributor

A couple of thoughts:

  • Moving to a snake_case convention in JS code seems like a non-starter to me.
    • Virtually the entire ecosystem uses camelCase. If we used snake_case in our 1st-party code, anywhere we import and use library code, we'd have a jumbled unreadable mess.
  • Writing DTOs by hand just to get the naming convention consistent seems like a terrible ROI to me.

There are other API niceties we'd like to add features for, at least for public APIs. For instance: an OpenAPI spec, breaking change management / detection, etc. We could add an auto-magic renaming layer for public API routes that leverages these features. This automatic renaming could include a feature that allows you to define which keys of the schema should and should not be renamed.
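
As a rough illustration of what such a renaming layer could do at the boundary (this is a sketch, not an existing core API; camelToSnake, deepSnakeCaseKeys, and the excludeKeys option are made up):

const camelToSnake = (key: string): string =>
  key.replace(/([a-z0-9])([A-Z])/g, '$1_$2').toLowerCase();

// Deep camelCase -> snake_case conversion, with an opt-out set for keys that
// must pass through untouched (e.g. user-registered setting names).
const deepSnakeCaseKeys = (value: unknown, excludeKeys: Set<string>): unknown => {
  if (Array.isArray(value)) {
    return value.map((item) => deepSnakeCaseKeys(item, excludeKeys));
  }
  if (value !== null && typeof value === 'object') {
    return Object.fromEntries(
      Object.entries(value as Record<string, unknown>).map(([key, child]) => [
        excludeKeys.has(key) ? key : camelToSnake(key),
        // excluded subtrees are not descended into either, since they may hold user input
        excludeKeys.has(key) ? child : deepSnakeCaseKeys(child, excludeKeys),
      ])
    );
  }
  return value;
};

// Usage: rename everything except the user-provided setting name.
deepSnakeCaseKeys({ perPage: 20, mySetting: { userValue: 1 } }, new Set(['mySetting']));
// => { per_page: 20, mySetting: { userValue: 1 } }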

If we wanted automatic TypeScript support, we could leverage a TypeScript compiler plugin / transformer that would be able to generate the snake_case and camelCase versions of the interface.

All that said, building the tooling to make all this work, just to avoid the boilerplate of writing DTOs, is significant.


The complexity here stems from the mismatch between Elasticsearch's conventions and the JavaScript ecosystem's conventions. I'd like to push back on the assumption that our APIs need to be consistent with Elasticsearch. How valuable is this in the first place?

The primary instance where I would understand Kibana wanting to match Elasticsearch is APIs where part of the request body is actually an Elasticsearch query or filter. Mixing camelCase and snake_case in these requests would be awkward. However, taking a cursory glance at our currently available public REST APIs, none of them appear to expose raw Elasticsearch API interfaces in this way.

This begs the question: do we even plan to have public APIs that give raw access to Elasticsearch features? If not, it seems removing this snake_case API convention from Kibana would be more valuable than keeping it and building tooling to support it.

@pgayvallet
Contributor

Moving to a snake_case convention in JS code seems like a non-starter to me

+1

Writing DTOs by hand just to get the naming convention consistent seems like a terrible ROI to me.

+1

This begs the question: do we even plan to have public APIs that give raw access to Elasticsearch features? If not, it seems removing this snake_case API convention from Kibana would be more valuable than keeping it and building tooling to support it.

+100.

@rudolf
Contributor

rudolf commented Dec 12, 2019

Assuming we keep the API naming convention, I think the best solution is to honour the API convention above the JS naming convention. Although we can introduce renaming layers and TypeScript support (Saved Objects uses a type-safe renaming function), this still adds complexity by introducing subtle differences between property names in code, in network requests, and in the data that's persisted in ES. I'd rather have inconsistent property names than have to wonder whether something is snake_case or camelCase in each of the different contexts.

I also don't see the value in using the same API conventions as ES other than consistency for consistency's sake. Although consistency is more beautiful, I don't think it would have any impact on API consumers.

@mshustov mshustov added the Team:Core Core services & architecture: plugins, logging, config, saved objects, http, ES client, i18n, etc label Dec 16, 2019
@mshustov
Contributor Author

mshustov commented Dec 16, 2019

However, taking a cursory glance at our currently available public REST APIs, none of them appear to expose raw Elasticsearch API interfaces in this way.

Is that the full list of Kibana REST APIs?

Btw, you can see inconsistency even within a single endpoint:

  "updated_at": "2019-07-23T00:11:07.059Z",
  "version": "WzQ0LDFd",
  "attributes": {
    "title": "[Flights] Global Flight Dashboard",
    "hits": 0,
    "description": "Analyze mock flight data for ES-Air, Logstash Airways, Kibana Airlines and JetBeats",
    "panelsJSON": ....
    "optionsJSON": "{\"hidePanelTitles\":false,\"useMargins\":true}",
    "version": 1,
    "timeRestore": true,
    "timeTo": "now",
    "timeFrom": "now-24h",
    "refreshInterval": {
      "display": "15 minutes",
      "pause": false,
      "section": 2,
      "value": 900000
    },
    "kibanaSavedObjectMeta": {...
  }

@epixa
Contributor

epixa commented Dec 16, 2019

The role APIs accept data for both Kibana and Elasticsearch roles, so they essentially just proxy some portion of the request body to Elasticsearch: https://www.elastic.co/guide/en/kibana/current/role-management-api-put.html

@joshdover
Contributor

joshdover commented Dec 16, 2019

Saved Objects uses a type-safe renaming function

@rudolf can you point me to this?

@rudolf
Contributor

rudolf commented Dec 17, 2019

/**
 * Returns a new object with the own properties of `obj`, but the
 * keys renamed according to the `keysMap`.
 *
 * @param keysMap - a map of the form `{oldKey: newKey}`
 * @param obj - the object whose own properties will be renamed
 */
const renameKeys = <T extends Record<string, any>, U extends Record<string, any>>(
  keysMap: Record<keyof T, keyof U>,
  obj: Record<string, any>
) =>
  Object.keys(obj).reduce((acc, key) => {
    return {
      ...acc,
      ...{ [keysMap[key] || key]: obj[key] },
    };
  }, {});

An example of where we use it:

const renameMap = {
  defaultSearchOperator: 'default_search_operator',
  fields: 'fields',
  hasReference: 'has_reference',
  page: 'page',
  perPage: 'per_page',
  search: 'search',
  searchFields: 'search_fields',
  sortField: 'sort_field',
  type: 'type',
  filter: 'filter',
};
const renamedQuery = renameKeys<SavedObjectsFindOptions, any>(renameMap, options);

@joshdover
Contributor

Current plan:

  • Keep public APIs as-is for now
  • New public APIs should respect snake_case conventions on a best effort basis
  • JavaScript code consuming this data should not do any conversion to camelCase
  • When we introduce OpenAPI and work on stabilizing Kibana's public APIs, we will converge on a convention and have tooling to generate types and do data conversion.

@FrankHassanabad
Contributor

FrankHassanabad commented Nov 10, 2020

So, per the earlier comment, within most parts of Security Solution, such as detections, we did follow the advice here:
#52284 (comment)

We kept our REST API snake_case on a best-effort basis. We are potentially revisiting some of this here:
#81964

Not all layers of the stack and conventions throughout Kibana follow the convention we built our detections on, or they do not have public REST APIs (some internal ones, such as the SO-based REST ones mentioned earlier, use camelCase), so what we have ended up with today is mixed casing throughout the code base with regards to camelCase and snake_case. In some instances we cannot easily abstract/translate between snake_case and camelCase, due to the nature of proxying things through something such as the SO objects, which are mixed between camelCase, snake_case, and hyphen-case, so we adjust throughout the code base as best we can. Hyphen-case is rarer, but not 0% within the SO objects.

The pain points we have generally ended up with are:

  • "Hand spun" conversions between snake_case and camelCase and then back again are already checked in. Sometimes devs are writing their own hand conversions, using lodash, or doing .map(deepObject => deep_object). Depending on the dev and where they're at in the code base you can find these utilities peppered in different places. Within Detections Engine we have up to now done mostly destructuring and renaming but as the number of rules has increased this has been a sore point of code complexity. To expound on this complexity, if a dev pushes down the non-destructured object mixed with the converted one, you get two pieces of data where one is camelCase and the other is snake_case.

  • The lack of REST linter tooling means some newer REST inputs have accidentally introduced camelCase (I have done this by accident), so now we either carry the maintenance burden forward or break the customer usage/APIs mixed in there. Without static analysis/linter/test tooling, the risk of more being added in the future remains, which in turn leads to more mixed API naming conventions.

  • Lots of additional overhead utilities, libraries, and then tests supporting each conversion/use case are showing up due to these differences.

  • We still deactivate the camelCase rule in a lot of areas and just move forward with snake_case to make some parts easier to maintain, or we have done it out of fatigue (at least I have ;-) ). This just adds to the already mixed messages and makes it hard to tell the intended direction within the Detection Engine and other teams.

  • It's tricky to enforce this through code reviews and discipline within teams, much less outside of them, so when you're gluing together pieces on the backend between teams you kind of just assume you're going to be doing conversions between naming conventions and styles at this point.

  • Since parts of Kibana and other teams are mixing snake_case and camelCase as well (probably for these same reasons), if we get an object or a property of an object that is snake_case we just move forward and turn off any linter errors if they come up. However, the code base at this point is a mixture of something like 90-95% camelCase and 5-10% snake_case, with some hyphen-case but not a whole lot.

  • Another big part of this is the filters within KQL queries that we send to the backend. Those mostly don't have types or strong validation for us, since the universe of possible types is incredibly wide and anyone can type whatever they want into the filter. For that one, we don't do any conversions and just let it flow from Kibana -> ES and let ES do all the validation.

  • For detections there is also a conversion or "changeup" between the REST interfaces and the glue code we have for the alerting system and its schema-based system. We have defined our "schema" within that as mostly camelCase to keep with the alerting convention, but even within the alerting SO there is a mix of camelCase and snake_case, which adds to the inconsistency and the issue of deciding which naming type to keep.

  • So, finally, we reach the TypeScript "types" or "auto-types" (such as type Foo = typeof schema) which can be generated from Kibana schema-based systems, io-ts, etc. (see the sketch after this list). Some of those types differ from the ES types or are hand-spun, such as the ES filter type, where we do have some mismatches or "as casts" between our derived types and the ES-based ones we proxy to ES. These types are also exploding in number across the different conversions, and some way of auto-translating them would be needed if the decision to keep things 100% camelCase were to be looked at.
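
To make the last point concrete, a minimal io-ts sketch of such a derived "auto-type" (io-ts is one of the schema systems mentioned above; the schema and field names are illustrative, not the actual Detection Engine schema):

import * as t from 'io-ts';

// Runtime schema, written in snake_case per the REST convention.
const ruleResponseSchema = t.type({
  risk_score: t.number,
  from: t.string,
});

// The static type is derived from the runtime schema, so every casing decision
// in the schema leaks directly into the TS types...
type RuleResponse = t.TypeOf<typeof ruleResponseSchema>; // { risk_score: number; from: string }

// ...and any mismatch with hand-written camelCase model types (or the ES types
// we proxy to) has to be bridged with converters or `as` casts.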

Things that would be helpful for us (or at least myself) ...

For our use cases and what is going on, I understand that the JS ecosystem is mostly camelCase. If we could get "boundary" conversions automatically for serialization and de-serialization, that would be incredibly helpful if people are very concerned about keeping things camelCase. This can still be tricky, depending on whether at some point you are dealing with mixed SO objects or mixed REST APIs, as the serialization/de-serialization might or might not be smart enough to know what to auto-convert correctly at this point, but I don't think it's impossible. This might also be tricky when passing through a KQL filter, as another example, but again maybe not impossible.

...or...

Keeping things mixed case, allowing camelCase + snake_case throughout the code base, and discouraging transforms altogether might be another path. It seems like some people are approaching that while others are going in the other direction, but we adjust depending on where we are within the code base.

@mshustov
Contributor Author

@kobelb @stacey-gammon, considering all the pain points outlined above, what do you think about allowing the snake_case format in the code base? Right now we allow it for object properties only:

'snake_case', // keys in elasticsearch requests / responses

but not for variables, parameters, etc., which forces solution teams either to implement those manual conversions or to mute linter errors anyway.
It will definitely increase entropy in the code base, but we already have the same problem:

Since parts of Kibana and other teams are mixing snake_case and camelCase as well (probably for these same reasons), if we get an object or a property of an object that is snake_case we just move forward and turn off any linter errors if they come up. However, the code base at this point is a mixture of something like 90-95% camelCase and 5-10% snake_case, with some hyphen-case but not a whole lot.

We have defined our "schema" within that as mostly camelCase to keep with the alerting convention, but even within the alerting SO there is a mix of camelCase and snake_case, which adds to the inconsistency and the issue of deciding which naming type to keep.

Switching to snake_case might be a significant effort. Instead, we could provide guidance on when snake_case is preferred: when working with the stack APIs, reading/writing SOs, accessing config values, etc.

@kobelb
Contributor

kobelb commented Nov 11, 2020

Long-term, I think we should do what @joshdover previously stated and invest in tooling to do the automatic conversion from our HTTP API standards, which use snake_case, to our code standards, which use camelCase. I haven't seen a compelling reason for us to consider switching the HTTP API standards to camelCase, except that it's easier to do so with our current level of tooling.

Short-term, I'm extremely hesitant to completely remove our eslint protection disallowing snake_case for all code. This moves us further away from where we want to be long-term and allows code using snake_case to proliferate. If a team feels that they are hindered enough by having to disable the eslint rule on a per-file/per-line basis, we can use eslint overrides to remove this rule for a plugin/folder: https://github.com/elastic/kibana/blob/master/.eslintrc.js#L84
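
For illustration, such an override would look roughly like the following in .eslintrc.js (the files glob and the exact rule entries Kibana disables are assumptions in this sketch):

// .eslintrc.js (sketch)
module.exports = {
  // ...existing Kibana eslint config...
  overrides: [
    {
      // placeholder path for the plugin/folder that opts out
      files: ['x-pack/plugins/some_plugin/**/*.{js,ts}'],
      rules: {
        // allow snake_case identifiers within this folder
        camelcase: 'off',
      },
    },
  ],
};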

@mshustov
Contributor Author

@kobelb JFYI: TS v4.1 adds support for key re-mapping (https://devblogs.microsoft.com/typescript/announcing-typescript-4-1/#key-remapping-mapped-types), which could help us automate type conversion.
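
A rough sketch of what that could enable, assuming TS 4.1+ template literal types and key re-mapping (recursion into nested objects and edge cases such as consecutive capitals are left out):

// Convert a camelCase string literal type to snake_case.
type CamelToSnake<S extends string> = S extends `${infer First}${infer Rest}`
  ? First extends Uppercase<First>
    ? First extends Lowercase<First> // non-alphabetic characters pass through
      ? `${First}${CamelToSnake<Rest>}`
      : `_${Lowercase<First>}${CamelToSnake<Rest>}`
    : `${First}${CamelToSnake<Rest>}`
  : S;

// Re-map every key of an interface to its snake_case form.
type ToSnakeCase<T> = {
  [K in keyof T as K extends string ? CamelToSnake<K> : K]: T[K];
};

interface MyType {
  someProperty: string;
  perPage: number;
}

type MyDtoType = ToSnakeCase<MyType>; // { some_property: string; per_page: number }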

If a team feels that they are hindered enough by having to disable the eslint rule on a per-file/per-line basis, we can use eslint overrides to remove this rule for a plugin/folder: https://github.com/elastic/kibana/blob/master/.eslintrc.js#L84

Then we need to state in our docs that we are okay with such an approach.
