
Define API shape to interact with back-end over WebSockets #119

Open
dzintars opened this issue Aug 29, 2020 · 24 comments
Assignees: dzintars
Labels: idea (Ideas and concepts), question (Further information is requested)

Comments

@dzintars
Owner

dzintars commented Aug 29, 2020

Currently I am using just a simple type and payload structure, which works for a small number of services, but as the number of services grows we will run into naming collisions, and naming the methods will get harder in general.

Right now my idea is to mimic the RPC style, where the WSS message defines the service and the method it is calling.
I am also thinking about how to version the methods. Most likely I don't need to version services, as those will be pretty stable, but methods and messages (payloads) will definitely be modified over time.

So, let's define a pseudo message shape for now:

/**
 * Messages
 */
const ListModulesRequestPayload = {
  parent: action.payload.id,
  page_size: action.payload.pageSize,
  page_token: action.payload.pageToken,
}

/**
 * WebsocketActions.Send will be an action creator function taking these three fields.
 * The object below only describes its signature.
 */
WebsocketActions.Send({
  service: 'ModulesService',
  rpc: 'ListModulesRequest',
  message: ListModulesRequestPayload,
})
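
A rough TypeScript typing for that envelope could be something like the following (all names are illustrative, nothing here exists yet):

// Hypothetical typing of the outbound WebSocket envelope sketched above.
interface WebsocketEnvelope<TMessage> {
  service: string   // e.g. 'ModulesService'
  rpc: string       // e.g. 'ListModulesRequest'
  message: TMessage // the actual request payload
}

interface ListModulesRequestPayload {
  parent: string
  page_size: number
  page_token: string
}

// An action creator could then simply wrap the envelope into a Redux action.
const send = <T>(envelope: WebsocketEnvelope<T>) => ({
  type: 'WEBSOCKET_SEND', // assumed action type
  payload: envelope,
})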
@dzintars dzintars self-assigned this Aug 29, 2020
@dzintars dzintars added idea Ideas and concepts question Further information is requested labels Aug 29, 2020
@dzintars
Owner Author

dzintars commented Aug 29, 2020

What if we want to use multiple WSS API endpoints instead of a single one?
We would execute something like Websocket.Connect('wss://someService.api.oswee.com'). At that point the user could have multiple WSS connections open. How will we distinguish which connection a given message should be sent over?
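
One possible approach (just a sketch, nothing implemented yet): keep a registry of open connections keyed by endpoint URL and address the endpoint explicitly when sending.

// Sketch of a connection registry keyed by endpoint URL (hypothetical helper).
const connections = new Map<string, WebSocket>()

function connect(endpoint: string): WebSocket {
  let socket = connections.get(endpoint)
  if (!socket) {
    socket = new WebSocket(endpoint)
    connections.set(endpoint, socket)
  }
  return socket
}

// The send action would then carry the endpoint alongside service/rpc/message.
// (Waiting for the 'open' event is omitted for brevity.)
function send(endpoint: string, envelope: object): void {
  connect(endpoint).send(JSON.stringify(envelope))
}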

@dzintars
Owner Author

dzintars commented Aug 29, 2020

What if we want to manage services at the proxy level? That would mean inspecting every WSS envelope, reading its headers and forwarding it to the right service, which sounds impossible to me over a WebSocket connection.

@dzintars
Owner Author

dzintars commented Aug 29, 2020

Hmmm... seems this could work...
f933e73 and dzintars/wss@be9c394

@dzintars
Owner Author

dzintars commented Aug 30, 2020

Now I need to think about how to incorporate Protobufs for the payload definitions.
Ideally I want to define the message shapes in Protobuf and generate TypeScript typings and Go stubs from them.
That way, at least on the TypeScript/IDE side, I will see the shape of every RPC/method payload.
Ideally it would also generate some kind of Swagger documentation.

@dzintars
Owner Author

dzintars commented Aug 30, 2020

Something feels wrong with the current naming.
I am using ListModulesRequest and ListModulesResponse to distinguish between outbound and inbound actions, and then adding a *Payload suffix to name the actual message.
To be closer to idiomatic Protobuf I would need to use ListModules as the method name and ListModulesRequest as the message name. For the outbound direction that could work, but... what should the API return then? It can't return the same ListModules method name, because that could create a loop: Redux would throw the action straight back into the websocket. Or it would simply do nothing, because the message signature (data) would not be deserializable on the backend.

What if the response looked different?
What if, in onmessage(), we looked at the "message name" instead of the method name?
For example:

const ListModulesResponse = {
  name: "ListModulesResponse",
  body: {
    parent: action.payload.id,
    page_size: action.payload.pageSize,
    page_token: action.payload.pageToken,
  }
}

So from the API we could receive a simple object like this:

{
  service: "ModulesService",
  rpc: "ListModules",
  message: {
    name: "ListModulesResponse",
    body: {
      parent: action.payload.id,
      page_size: action.payload.pageSize,
      page_token: action.payload.pageToken,
    },
  },
}

Then message.name would signal a Redux action.
Moreover... we could potentially augment the message with a version number, some context IDs, or any other extra data.
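
A minimal sketch of that onmessage() handler, assuming message.name maps directly onto the Redux action type:

// Assumed to exist elsewhere: the open socket and the Redux store.
declare const socket: WebSocket
declare const store: { dispatch(action: { type: string; payload: unknown }): void }

socket.onmessage = (event: MessageEvent) => {
  const envelope = JSON.parse(event.data)
  // e.g. { service: 'ModulesService', rpc: 'ListModules',
  //        message: { name: 'ListModulesResponse', body: { ... } } }
  store.dispatch({
    type: envelope.message.name, // 'ListModulesResponse' becomes the Redux action type
    payload: envelope.message.body,
  })
}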

@dzintars
Owner Author

dzintars commented Aug 30, 2020

This seems good for now, but it carries quite a lot of metadata. All that JSON structure wastes quite a few bytes, though it is still much less than a plain POST/GET etc. request.
https://golb.hplar.ch/2020/04/exchange-protobuf-messages.html
I should look into WebSocket sub-protocols like WAMP, MQTT etc. Ideally I would like to send messages in binary over the wire.

I also need to think about message buffering for the case where the client tries to send more data than the server can handle, for example large images or other binary blobs; otherwise the connection will be blocked. A backpressure sketch follows below.
https://www.ably.io/concepts/websockets
https://www.npmjs.com/package/protobufjs

protoc -I=$SRC_DIR --js_out=$DST_DIR $SRC_DIR/addressbook.proto
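
A minimal backpressure sketch using the browser's WebSocket.bufferedAmount property (the threshold and the queue are only illustrative):

// Queue outgoing messages and only flush while the socket's buffer stays small.
const HIGH_WATER_MARK = 64 * 1024 // 64 KiB, arbitrary threshold
const queue: (string | ArrayBuffer)[] = []

function enqueue(socket: WebSocket, data: string | ArrayBuffer): void {
  queue.push(data)
  flush(socket)
}

function flush(socket: WebSocket): void {
  // Only send while the socket's internal buffer is below the threshold.
  while (queue.length > 0 && socket.bufferedAmount < HIGH_WATER_MARK) {
    socket.send(queue.shift()!)
  }
  if (queue.length > 0) {
    // Buffer is full; retry a bit later instead of blocking the connection.
    setTimeout(() => flush(socket), 50)
  }
}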

Motivation:
It's not that JSON is bad; these days browsers have heavily optimized JSON parsing. The main motivation is to have a message format that can be understood by both the client and the server, and that gives developers a good DX. Because Protobuf uses an IDL that can be compiled into many languages (in my case Go and potentially TypeScript), it is a good candidate here. And if we use Protobuf, then we should send messages as ArrayBuffers, so all of the JSON representation above becomes irrelevant (a minimal encoding sketch follows the list below).
Even though JSON parsing is well optimized in browsers, there is still some overhead, and it matters more if we want to build event-intensive communication with lots of small and larger messages.
https://medium.com/samsung-internet-dev/being-fast-and-light-using-binary-data-to-optimise-libraries-on-the-client-and-the-server-5709f06ef105
So, in the end we have two goals here:

  • Make the communication efficient
  • Provide a good DX
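
A minimal sketch of encoding and sending a message as binary with protobufjs (the modules.proto file, the field names and the endpoint URL are assumptions based on the shapes above):

import protobuf from 'protobufjs'

// Load the (hypothetical) proto definition and look up the request type.
const root = await protobuf.load('modules.proto')
const ListModulesRequest = root.lookupType('ListModulesRequest')

// Build and encode the payload into a Uint8Array.
const payload = { parent: 'some-id', pageSize: 25, pageToken: '' }
const message = ListModulesRequest.create(payload)
const bytes = ListModulesRequest.encode(message).finish()

// Illustrative endpoint; send the bytes as binary instead of a JSON string.
const socket = new WebSocket('wss://api.oswee.com/ws')
socket.binaryType = 'arraybuffer'
socket.onopen = () => socket.send(bytes)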

@dzintars
Owner Author

dzintars commented Aug 31, 2020

You could either wrap your messages in an envelope message, for example ...

message Envelope {
  oneof kind {
    SomeMessage a = 1;
    SomeOtherMessage b = 2;
  }
}

or implement a custom header for your purpose.

protobufjs/protobuf.js#689
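
For the receiving side, decoding such an envelope with protobufjs could look roughly like this (envelope.proto and the field names mirror the snippet above; the socket is assumed to exist):

import protobuf from 'protobufjs'

declare const socket: WebSocket

const root = await protobuf.load('envelope.proto')
const Envelope = root.lookupType('Envelope')

socket.onmessage = (event: MessageEvent) => {
  // Decode the binary frame and convert it to a plain object.
  const envelope = Envelope.toObject(Envelope.decode(new Uint8Array(event.data)))
  // Thanks to the oneof, only one of the fields is actually set.
  if (envelope.a) {
    // handle SomeMessage
  } else if (envelope.b) {
    // handle SomeOtherMessage
  }
}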

@dzintars
Owner Author

dzintars commented Aug 31, 2020

Some updates from Go Protobufs
gogo/protobuf#691
https://blog.golang.org/protobuf-apiv2

@dzintars
Owner Author

dzintars commented Aug 31, 2020

The binary protobuf encoding format is not the most CPU efficient for browser clients. Furthermore, the generated code size increases as the total protobuf definition increases.
https://github.com/grpc/grpc-web/blob/master/doc/roadmap.md#non-binary-message-encoding

But I should still think about bandwidth. If this service is hosted in one of the clouds, then every extra byte can be costly for event-intensive communication. The same applies to mobile users on 2G and 3G.

@dzintars
Owner Author

dzintars commented Aug 31, 2020

How do I import type declarations from an external API repository?
https://dev.to/room_js/typescript-how-do-you-share-type-definitions-across-multiple-projects-1203

npm link ?
lerna ?

@dzintars
Owner Author

dzintars commented Sep 1, 2020

At this point I created a simple Module package in the API and imported its type declarations into the app-navigation Redux code.
This link helped a bit with understanding how to use the protobuf-generated declarations - https://github.com/easyCZ/grpc-web-hacker-news/blob/32fd37b82d5dafcd4c51089b82f4c52b8514504f/README.md

My questions ATM:

  • How to namespace all message types per service so that I can import them in TS like @oswee/modules-service/list-modules-response or in some similar way?
  • Should I focus for now on a simple JSON exchange format only?

@dzintars
Owner Author

dzintars commented Sep 2, 2020

Today I got the thought that my current approach of sending the entire object of

export interface Modules {
  readonly entities: { [id: string]: Module }
  readonly ids: string[]
}

is wasteful.

If we are using a WebSocket, we should not send large messages.
Instead, I could send every Module entity as a separate message.
WebSocket is tailored for small "packets"; if we send large blobs, the socket gets blocked.
Also... the current approach could cause some over-fetching of data, while per-entity messages would allow more flexible data manipulation.
From a UX perspective, the user should not have to wait for the entire data set. If the user is interested in the last record created and we stream the latest data first, they immediately see their record and can start working with it. And we get an opportunity to cancel delivery of the remaining entities if the user navigates away from the current view (a small bandwidth optimization).
Also, it seems Protobuf does not directly support an entities: { [id: string]: Module } field type (proto3 map fields exist, but they would still mean sending the whole collection at once).

The downside is that every time a new entity arrives, the store gets updated and components get repainted (see the reducer sketch at the end of this comment).

This could change the communication message shape, as we are moving away from Request/Response thinking and more towards Events.

New navigation/Component Connected callback
1st entity arrived
2nd entity arrived
3rd entity arrived
Nth entity arrived

Basically, the UI should not ask for data. The UI just broadcasts events, and depending on those events, the relevant data gets streamed back.

This is a bit of a mental shift in thinking.
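
A minimal sketch of the store side of this, upserting each Module entity as it arrives (the ModuleReceived action type and the Module shape are assumptions):

// Assumed minimal Module shape; the real one would come from the generated typings.
interface Module {
  id: string
}

interface ModulesState {
  readonly entities: { [id: string]: Module }
  readonly ids: string[]
}

const initialState: ModulesState = { entities: {}, ids: [] }

// Hypothetical reducer mirroring the "Nth entity arrived" events above.
function modulesReducer(
  state: ModulesState = initialState,
  action: { type: string; payload: Module },
): ModulesState {
  switch (action.type) {
    case 'ModuleReceived':
      return {
        entities: { ...state.entities, [action.payload.id]: action.payload },
        ids: state.ids.includes(action.payload.id) ? state.ids : [...state.ids, action.payload.id],
      }
    default:
      return state
  }
}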

@dzintars
Owner Author

dzintars commented Sep 2, 2020

Every UI component can be an independent subscriber.
connectedCallback() just subscribes to some "channel" and that's it; data gets streamed in.
disconnectedCallback() unsubscribes from the "channel".

There is something to this... I think I like it.

We basically do not care what data we need to request. As components get connected (rendered), they subscribe to their data streams.
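
A rough LitElement sketch of that pattern (the subscribe() channel helper is an assumption, not existing code):

import { LitElement, html } from 'lit-element'

// Hypothetical channel helper; the real subscribe mechanism is not defined yet.
// It returns an unsubscribe function.
declare function subscribe(channel: string, onData: (entity: unknown) => void): () => void

class ModulesList extends LitElement {
  private unsubscribe?: () => void

  connectedCallback() {
    super.connectedCallback()
    // Subscribing is all the component does; data is streamed in afterwards.
    this.unsubscribe = subscribe('ModulesService/Modules', () => this.requestUpdate())
  }

  disconnectedCallback() {
    if (this.unsubscribe) this.unsubscribe()
    super.disconnectedCallback()
  }

  render() {
    return html`<p>modules stream in here</p>`
  }
}

customElements.define('modules-list', ModulesList)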

@dzintars dzintars pinned this issue Sep 2, 2020
@dzintars
Owner Author

dzintars commented Sep 2, 2020

At the system-shell level we could listen for connected components and subscribe them to their respective channels. This would introduce coupling and a dependency on the system-shell team, but it would centralize all subscriptions. Need to think about it. 50/50.

@dzintars
Owner Author

dzintars commented Sep 2, 2020

{
  service: "ModulesService",
  rpc: "SubscribeModulesChannel",
  message: {
    name: "SubscribeModulesChannelRequest",
    body: {
      parent: action.payload.id,
      page_size: action.payload.pageSize,
      page_token: action.payload.pageToken,
    },
  },
}

{
  service: "ModulesService",
  rpc: "SubscribeModulesChannel",
  message: {
    name: "SubscribeModulesChannelResponse",
    body: {
      module: {},
      total_entities: 47,
      entity_no: 9,
    },
  },
}

@dzintars
Owner Author

dzintars commented Sep 2, 2020

All channels could be grouped into Services.

ModulesService {
  Modules
  ModulesCreate
  ModulesUpdates
}

@dzintars
Owner Author

dzintars commented Sep 2, 2020

When a new form gets opened, we create a draft entity. On a form field update we emit an update event. On form submit we change the entity status from draft to created. On form cancel we change the status to draftCanceled.
WRONG! Think in terms of events!

<form id="user">NewUserFocus
<input id="user-username" onfocus="">UsernameFocus
UsernameChange
UsernameChange
UsernameBlur
EmailFocus
EmailChange
EmailChange
EmailBlur
UserSubmit or UserReset

The idea is to utilize native HTML5 events as much as we can and thus stay closer to the native platform API. The event target gives us information about the object we are interacting with. Most likely every form will map to some kind of domain, like User, Order, OrderLine etc.: <form id="new-delivery-order">.

It would be hardcore to stream every UI-level event, but that is the general idea for now. Forms typically already represent some kind of entity, and the fields are just entity properties.

Also keep in mind ArrayBuffers, IndexedDB and service workers.

UPDATE: I think I went down a rabbit hole there. 😃

@dzintars
Owner Author

dzintars commented Sep 3, 2020

Found this project for documenting WebSocket APIs - https://www.asyncapi.com/docs/getting-started/coming-from-openapi/

@dzintars
Owner Author

dzintars commented Sep 9, 2020

Status update
Basically, at this point I went into the Bazel rabbit hole. https://github.com/dzintars/oswee
The main idea is that I have (or will have) quite a few protobufs, and managing them with shell scripts is not the best DX. So, in order to automate that, I picked Bazel, where protobufs are more or less first-class citizens. With a single bazel build I can regenerate all the Go and TypeScript code.
The other argument is that I am not familiar with build systems, and I think now is the right time to learn at least one. IMO Bazel is a good option.
The challenge is that Bazel is mono-repo centric and does not generate artifacts in the repository itself. This means I can't import the artifacts as regular files and am forced to move to a mono-repo setup so that Bazel takes care of all the dependencies for me.
I tried to tie together two projects - the Protobuf API and the WSS gateway. It seems to be working well. Moreover, I can bazel run every executable or containerize them.
Overall I feel optimistic about integrating Bazel into my workflow.
Will see how it goes.
Most likely I will merge this repository into the Prime repo.

@dzintars
Owner Author

Another conclusion from the talks I watched is that it is better to adopt a Bazel workflow as early as possible, because migrating a monolithic-ish codebase into small Bazel modules is not an easy task.
Today I extended the Prime repository structure and created the first LitElement component.
Now I need to see how to run an application which depends on this element.
Another thought is that I should try to write my very first tests. :) Without tests, updating dependencies in a monorepo could be a pain in the long term.
I should also look into ibazel to run the application in watch mode (dev server).

@dzintars
Owner Author

UPDATE: 2021
At this point I am in the CI/CD rabbit hole.
In order to manage this whole zoo, I need to automate my setup.
This led to many new interconnected tools:
Ansible => Jenkins => MinIO => Terraform => Vault => QEMU
The idea is simple IaC: code the whole setup and deploy it with Jenkins.
This way I will get familiar with all the required tooling, and it seems to be working pretty well.
The primary target is to deploy some flavour of K8s cluster so that I can Bazel-build containers directly into it without going through the Quay image registry.

@dzintars
Owner Author

Still shaving the yak, but I think it is worth it in the long term.
Working on shared Ansible roles and Terraform modules. Gluing things together.

@dzintars
Owner Author

At this point I got really bored of working with Terraform and Ansible in the oswee/ansible and oswee/infa repositories. I attempted to prepare a fully automated workstation setup for an OpenShift cluster, but... things in this area change a lot, and sometimes I was chasing upstream bugs where I had no idea how they work, look, smell or... whatever. There are almost zero resources on this kind of stuff, as most of it is used in large organizations with lots of NDAs on top.
In the end, I went the simple Minikube route. LOL
I do not regret the time I spent on that. I learned a lot about CI/CD and GitOps. In one form or another I got almost all the dots connected, and at least now I think I have a high-level picture of how agile software development could be done.
I will eventually improve those repositories. Clean up. Implement more Molecule tests. And so on.
For now I will shape the Prime repository to make the DX smooth.
ATM I can publish and expose the services into K8s, which was the ultimate goal.
So now it is just a matter of making a good folder structure and sample packages for every tech I use there.

@dzintars
Owner Author

ATM I am mostly working on automation.
In order to test Ansible roles with Molecule, I need to speed up VM image creation; otherwise each iteration is painfully slow. To solve that, I need to build a base image factory using HashiCorp Packer.
So... I should have pre-baked and hardened images for each infra component/host.
This will also speed up Terraform apply/destroy tasks, as all the images will be pre-configured.
Packer seems to work well with my current Ansible layout, but I am still trying to understand how hostnames work so that I can use, for example, --limit workstations --tags nvim. Or... do I need that at all!?
Once the base images are done, I will go back to the "build" image to configure Buildkite agents, which could run my CI tasks on my remote servers.
After that I need to set up a Bazel remote cache and remote execution, so that my workstation is freed from all of that.
