proposal: New architecture of Apache APISIX Ingress controller #610
Comments
Agree +1
+1
+1
This issue has been marked as stale due to 90 days of inactivity. It will be closed in 30 days if no further activity occurs. If this issue is still relevant, please simply write any comment. Even if closed, you can still revive the issue at any time or discuss it on the dev@apisix.apache.org list. Thank you for your contributions.
In the simplest terms: to make APISIX easier to scale and manage, etcd must be removed. So there is a high probability that we will no longer use the Admin API. I guess APISIX may become a child process managed by another component we implement.
What about using an etcd adapter to let the custom component expose the etcd APIs, so that we can avoid any changes to APISIX?
APISIX standalone mode replaces the full configuration on every update, which will have some impact on health checks and caching.
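For context, a minimal sketch of the standalone-mode `apisix.yaml` (the route and upstream address are placeholders). APISIX polls this file and, on any change, swaps in the whole file as its new configuration, which is why incremental state such as health-checker status is affected:

```yaml
# apisix.yaml — standalone mode; APISIX reloads this file as a whole,
# so every change replaces the entire in-memory route/upstream set.
routes:
  - uri: /hello
    upstream:
      type: roundrobin
      nodes:
        "10.0.0.11:8080": 1   # placeholder upstream address
#END   # APISIX ignores the file until this end-of-file marker is present
```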
Why use gRPC? I recommend using the default APISIX Admin API.
@sober-wang If there is no storage component, then we will drop the Admin API. Using gRPC allows the server to actively push configuration. While the current mode is really simple, I obviously want it to be more powerful.
I'm new to APISIX, so I started reviewing the Architecture and Deployment modes, alongside the documentation of the Ingress Controller itself (since my goal is to manage APISIX via Kubernetes CRDs and use it as an alternate IngressClass for K8s Ingresses). If my understanding is correct, the APISIX Ingress Controller makes the APISIX control plane (and the Admin API, for that matter) almost entirely obsolete, at least in the context being discussed here: the actual configuration of APISIX is now done via Custom Resources, which are ultimately persisted in the Kubernetes cluster's own etcd. While it does what it's meant to do, it also introduces certain problems, some of which deserve serious consideration:
It would be great if the Ingress Controller could talk directly to the Data Plane in standalone mode.
You are right! That's the main reason why I came up with this idea. This will be my third priority: I will deal with #1465 first and then release v1.6. Then I will start working on this one. It won't be long before I post my thoughts 💡 here to discuss with you all.
I have a new idea. APISIX v3 added a gRPC client capability, and some optimizations have been made to the CP/DP deployment model in v3. This way, the data plane APISIX is exactly the same as normal APISIX, not in standalone mode, so you can use all of APISIX's capabilities without any modification to APISIX. WDYT?
Sounds good to me, though I'm not the most familiar with the APISIX architecture, especially not when it comes to the gRPC components.
In the new architecture, the ingress controller is a stateless component. It can just read and store resource status in Kubernetes resources. For authentication, we can add certificates to protect the connection.
Is it possible that the controller just modifies the apisix.yaml ConfigMap, so that standalone APISIX instances can watch these changes?
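A hedged sketch of what that suggestion would look like (names are illustrative, and note the maintainer's reply below): the controller would render routes into a ConfigMap, each standalone instance would mount it, and the kubelet would sync changes to the mounted file:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: apisix-standalone-config   # illustrative name
data:
  apisix.yaml: |
    routes:
      - uri: /hello
        upstream:
          type: roundrobin
          nodes:
            "10.0.0.11:8080": 1
    #END
```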
@caibirdme No, this is not designed for standalone mode. Are you using standalone mode? I want to understand your use case.
It looks like APISIX pulls its configuration from apisix-ingress-controller. Maybe I'm misunderstanding, so can you clarify the direction of data flow?
Currently, APISIX v3 already supports a decoupled mode in which the CP provides an etcd-like service. In the new architecture of APISIX Ingress, we only need to let the Ingress controller assume the role of the CP.
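As a rough illustration of that decoupled deployment (the controller's service name and port here are assumptions, not the project's published defaults), the data plane's `config.yaml` points its etcd config provider at the controller's etcd-like endpoint:

```yaml
# config.yaml for a data-plane APISIX in decoupled mode
deployment:
  role: data_plane
  role_data_plane:
    config_provider: etcd   # fetch configuration over the etcd protocol
  etcd:
    host:
      - "http://apisix-ingress-controller:12379"   # assumed controller endpoint
```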
I'm using APISIX in standalone mode as the ingress gateway. I don't want to use Ingress, because it's only designed for HTTP, and I don't want to deploy an etcd cluster either.
Let's discuss a scenario. If these four situations are met:
the data plane (upstream) will reference obsolete Pod IPs, which leads to:
how about
Is there any decision on how to implement this feature? Folks at my current company are willing to spend some engineering time on this.
Cool! Let's keep in touch.
Can you please share the code somewhere showing how the standalone-mode setup is done? I mean, how are you referring to backend Services in different namespaces and passing the apisix.yaml config in the deployment? I can't find that capability in the present Helm chart of APISIX.
Great idea! This can elevate apisix-ingress-controller to a core position. In fact, the current approach is equivalent to storing two sets of data in two etcd instances: one for the Ingress CRDs and the other for APISIX's own data. But when the ingress controller goes down, it would require re-fetching, generating, and distributing the route entries upon restart. In scenarios with a large number of Ingresses, might this put more pressure on the apiserver, or result in longer recovery times?
I didn't fully understand your meaning; are you referring to the new architecture or the existing one?
The existing one, and I mean that your new design would avoid this. That is the benefit.
https://github.com/apache/apisix-ingress-controller/releases/tag/v1.7.0 v1.7.0 has been released with this feature. Thanks all!!!
Thanks @tao12345666333! Is there documentation ready around the new feature?
I must be missing something, but doesn't APISIX already support reading/monitoring its config from a YAML file? The YAML file can/should be mounted as a ConfigMap by APISIX pods, so all the ingress controller needs to do is monitor Ingress/CRD records and update the ConfigMap as necessary. A new architecture that mimics an etcd service seems like WAAAY overkill, no?
Not if you want to use CRDs.
Can you please say more? Is there something in CRDs that is not available in the configuration file in standalone mode?
I partially agree with @mlasevich: since APISIX can be controlled by a ConfigMap, this ConfigMap could be produced and updated by the ingress controller, and we would not need a mock etcd. It does not rule out CRDs. However, I assume that you can't have everything at once, and I am able to accept the mock etcd as a transitional state. I have a question about the Pod configuration. The composite.md documentation shows an example in which one Pod has both apisix-ingress-controller and apisix-gateway. This certainly makes testing easier, but I'm not sure whether it's intended to be used that way "in production". The same scheme was duplicated within the actual ingress-controller chart, making the chart from apisix-gateway unnecessary and causing the ingress controller to be scaled simultaneously with apisix-gateway. Was this intended? If not, then I am able to prepare a fix for the chart.
It seems that there are still some doubts about this, so let me answer the questions involved.
1. Why not use standalone mode directly? We know that APISIX has a standalone mode (obviously), but standalone mode covers only a subset of APISIX's functions; it cannot provide all of APISIX's capabilities (especially the dynamic abilities that matter for production environments). This is why we spent a lot of time and energy implementing an etcd-mocked server: we hope to provide the complete capabilities of APISIX.
2. About Pod configuration: the essence of this architecture is to simplify deployment, eliminate the need for users to maintain etcd services, and make scaling easier. Therefore, in the current state, we recommend deploying them in the same Pod.
3. About the Helm chart: as the configuration of APISIX is required in this mode, the APISIX configuration file has been added. Similarly, not all APISIX configurations are fully supported at this stage. We look forward to hearing more feedback and test results from the community.
I understand the intention of these changes: they simplify a lot, but at the same time lose a LOT of functionality that we had with a separate APISIX deployment. For example:
Is the desired way forward to add this missing functionality back to the apisix-ingress-controller Helm chart?
Sir, thank you! I am finally getting to this now :)
Hi all! I want to share my thoughts on this matter. The data plane in composite mode only receives changes to its own configuration; it does not change the configuration of Kubernetes resources: Deployments, Services, ConfigMaps, etc. For example:
I think it would be nice to look towards the Operator SDK, which would allow the use of CRs to configure and deploy standalone APISIX instances. This approach would not only allow you to configure standalone instances, but also allow you to manage the Kubernetes resources for APISIX. I think this approach would be more Kubernetes-native. Have you thought about this?
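If it helps the discussion, here is a purely hypothetical sketch of what an operator-managed instance might look like (this CRD does not exist in the project; the group, kind, and fields are invented for illustration):

```yaml
apiVersion: apisix.example.org/v1alpha1   # hypothetical group/version
kind: ApisixInstance                      # hypothetical kind
metadata:
  name: edge-gateway
spec:
  replicas: 3                  # the operator would reconcile a Deployment to this size
  image: apache/apisix:3.4.0   # assumed image tag
  service:
    type: LoadBalancer         # the operator would also manage the Service
```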
In the current architecture of the Apache APISIX Ingress controller, we use the Apache APISIX Ingress controller as a control plane component.
The user creates a CR of a specified type in Kubernetes, and the Apache APISIX Ingress controller converts it into a data structure that Apache APISIX can accept, then creates, modifies, or deletes it by calling the Admin API.
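For example, routing rules are expressed through CRs such as ApisixRoute. A typical one looks like this (the host, path, and backend Service are placeholders; the apiVersion may differ by release):

```yaml
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  name: httpbin-route
spec:
  http:
    - name: rule1
      match:
        hosts:
          - httpbin.example.com   # placeholder host
        paths:
          - /get
      backends:
        - serviceName: httpbin    # placeholder backend Service
          servicePort: 80
```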
Such an architecture has the following advantages:
But such an architecture also has its disadvantages:
Users need to maintain a complete Apache APISIX cluster, which cannot be scaled simply by modifying a replicas field the way the Apache APISIX Ingress controller can.
I hope to introduce an architecture similar to ingress-nginx, which is widely used in Kubernetes.
In this way, users can complete the deployment directly through a single Pod. At the same time, users can simply modify the replicas parameter to scale.
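To make the intended shape concrete, here is a hedged sketch of such a single-Pod deployment (names and image tags are assumptions), where scaling the whole gateway is one field change:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apisix-composite   # illustrative name
spec:
  replicas: 3              # scale the whole gateway by editing one field
  selector:
    matchLabels:
      app: apisix-composite
  template:
    metadata:
      labels:
        app: apisix-composite
    spec:
      containers:
        - name: ingress-controller                        # watches CRs, serves config
          image: apache/apisix-ingress-controller:1.7.0   # assumed tag
        - name: apisix                                    # data plane in the same Pod
          image: apache/apisix:3.4.0                      # assumed tag
```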
Synced from the mailing list: https://lists.apache.org/thread.html/r929a6dfa9620d96874056750c6b07b8139b4952c8f168670553dfb86%40%3Cdev.apisix.apache.org%3E