OAM component versioning mechanism #336
Comments
Some mechanical implementation feedback:
This isn't to say that we shouldn't have versions; just that the initial proposal isn't feasible.
My original idea was that revision is implicit in OAM, but it's tricky in the ref model (the main reason Matt put a "name+version" example in the current spec). My current thinking is that the version could be annotated on the object, with the controller responsible for tracking and maintaining the revision history. @ryanzhang-oss please check out the StatefulSet code and the DaemonSet doc for reference.
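For reference, StatefulSet and DaemonSet track their history with apps/v1 ControllerRevision objects: each revision is an immutable snapshot of the template plus a monotonically increasing revision number, and the workload's status points at the current/updated revision names. A rough sketch, with illustrative object and label names:

apiVersion: apps/v1
kind: ControllerRevision
metadata:
  name: web-statefulset-7c4d5b9f   # illustrative name; the controller appends a hash
  labels:
    app: web                       # copied from the owner's selector
revision: 3                        # monotonically increasing revision number
data:                              # immutable snapshot of the pod template at this revision
  spec:
    template:
      spec:
        containers:
        - name: web
          image: nginx:1.19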
I agree with @resouer and proposed similar thoughts in #342 (comment). Let's continue the discussion there.
We need a way to assign a version to a workload instance if we want to support upgrade in OAM. We have a separate discussion on whether we need to support upgrade at all; assuming that we do, here are the more concrete design proposals from Alibaba.

Background

In the current v1alpha2 spec, a workload instance is rendered from a Component plus the parameter values supplied in an ApplicationConfiguration. For example:
kind: Component
metadata:
name: web-service
spec:
workload:
apiVersion: core.oam.dev/v1alpha2
kind: ContainerizedWorkload
spec:
containers:
- name: c
image: default
port: 8080
parameter:
- name: image
fieldPaths:
- spec.containers[0].image
---
kind: ApplicationConfiguration
spec:
components:
- componentName: web-service
parameterValue:
- name: image
value: nginx:v3

The actual instance of a workload is:

apiVersion: core.oam.dev/v1alpha2
kind: ContainerizedWorkload
spec:
containers:
- name: c
image: nginx:v3
port: 8080

We propose requiring an OAM platform to track the revisions of the generated workload instances. OAM users can list/get the revision history of a workload so that they can perform upgrade/rollback operations.
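As a sketch of what the tracked history could look like (the label keys and naming convention here are illustrative, not part of the spec), the platform could stamp each generated workload instance with its component name and revision number, so that the history can be listed and a specific revision retrieved:

apiVersion: core.oam.dev/v1alpha2
kind: ContainerizedWorkload
metadata:
  name: web-service-workload-v2                # illustrative: <component>-workload-<revision>
  labels:
    workload.oam.dev/component: web-service    # illustrative label, not defined by the spec
    workload.oam.dev/revision: "2"             # illustrative label, not defined by the spec
spec:
  containers:
  - name: c
    image: nginx:v3
    port: 8080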
However, since the workload instances are generated from the combination of a specific component and a specific appConfig revision, we have two designs for generating the revision of a single workload.

Maintain two types of revisions

For example, an app developer updates a component from v1 to v2:
---
## v1
kind: Component
metadata:
name: web-service
spec:
workload:
apiVersion: core.oam.dev/v1alpha2
kind: ContainerizedWorkload
spec:
containers:
- name: c
image: default
port: 8080
parameter:
- name: image
fieldPaths:
- spec.containers[0].image
---
## v2
kind: Component
metadata:
name: web-service
spec:
workload:
apiVersion: core.oam.dev/v1alpha2
kind: ContainerizedWorkload
spec:
containers:
- name: c
image: nginx:latest
port: 8080

The OAM platform generates the two objects below internally:

kind: ComponentRevision
metadata:
name: web-service-v1
spec:
workload:
apiVersion: core.oam.dev/v1alpha2
kind: ContainerizedWorkload
spec:
containers:
- name: c
image: default
port: 8080
parameter:
- name: image
fieldPaths:
- spec.containers[0].image
---
kind: ComponentRevision
metadata:
name: web-service-v2
spec:
workload:
apiVersion: core.oam.dev/v1alpha2
kind: ContainerizedWorkload
spec:
containers:
- name: c
image: nginx:latest
port: 8080

A user can get the revisions of a component from the OAM platform and use them in the applicationConfiguration to refer to a specific revision of that component:

---
kind: ApplicationConfiguration
metadata:
name: webApp
spec:
components:
- component: web-service
revision: web-service-v1
parameterValue:
- name: image
value: nginx:v3
traits:
- trait:
manualScaler:
replica: 3
---

From this, the OAM platform also generates and tracks a revision of the rendered workload instance (the second type of revision):

kind: WorkloadRevision
metadata:
name: web-service-workload-v1
spec:
workload:
apiVersion: core.oam.dev/v1alpha2
kind: ContainerizedWorkload
spec:
containers:
- name: c
image: nginx:v3
port: 8080

This is for the case where a user needs to refer to a specific workload revision. We have this type of usage internally for traffic-shifting traits:

kind: ApplicationConfiguration
spec:
components:
- componentName: web-service
revision: web-service-v1
parameterValue:
- name: image
value: nginx:v1
traits:
- trait:
traffic:
- revision: current
weight: "60%"
- revision: web-service-workload-v1
weight: "30%"
- revision: web-service-workload-v2
weight: "10%" In this trait, we are directing 60% of the workload to the current workload version while shifting 30 and 10 percent to the previous workload instances Pro: This approach allows an OAM user to change a component. Cons: The workload instance is hidden from the user while its revision is exposed to the user. |
Create the workload instance

For example, an app developer can create the two resources below:

---
kind: Component
metadata:
name: web-service-v1
spec:
workload:
apiVersion: core.oam.dev/v1alpha2
kind: ContainerizedWorkload
spec:
containers:
- name: c
image: default
port: 8080
parameter:
- name: image
fieldPaths:
- spec.containers[0].image
---
kind: ComponentInstance
spec:
components:
- componentName: web-service-v1
parameterValue:
- name: image
value: nginx:v3

We ask OAM platforms to track the revisions of the ComponentInstance, so we will have:

kind: ComponentInstanceRevision
metadata:
name: web-service-v1-v3
spec:
components:
- componentName: web-service-v1
spec:
apiVersion: core.oam.dev/v1alpha2
kind: ContainerizedWorkload
spec:
containers:
- name: c
image: nginx:v3
port: 8080

Finally, an OAM operator will create the applicationConfiguration:

kind: ApplicationConfiguration
spec:
components:
- ComponentRevision: web-service-v1-v3
traits:
- trait:
traffic:
- revision: web-service-v1-v3
weight: "60%"
- revision: web-service-v1-v2
weight: "30%"
- revision: web-service-v1-v1
weight: "10%" Pro: This is a more explicit way to express workload versions. It also clearly defines an application deployment vs an application configuration. Con: This approach changes the current specs user experience greatly. It is also not easy to make a component immutable in practice. |
So, is there any conclusion about this?
We are going to discuss this in our community meeting; you are welcome to join.
Make a component an instance
For example, an app developer will create two components that use different image versions:

---
kind: Component
metadata:
name: web-service
spec:
workload:
apiVersion: core.oam.dev/v1alpha2
kind: ContainerizedWorkload
spec:
containers:
- name: c
image: nginx:v1
port: 8080
---
kind: Component
metadata:
name: web-service
spec:
workload:
apiVersion: core.oam.dev/v1alpha2
kind: ContainerizedWorkload
spec:
containers:
- name: c
image: nginx:v2
port: 8080

We ask OAM platforms to track the revisions of the Component, so we will have:

kind: ComponentRevision
metadata:
name: web-service-v1
spec:
workload:
apiVersion: core.oam.dev/v1alpha2
kind: ContainerizedWorkload
spec:
containers:
- name: c
image: nginx:v1
port: 8080
---
kind: ComponentRevision
metadata:
name: web-service-v2
spec:
workload:
apiVersion: core.oam.dev/v1alpha2
kind: ContainerizedWorkload
spec:
containers:
- name: c
image: nginx:v2
port: 8080

Finally, an OAM operator creates the applicationConfiguration:

kind: ApplicationConfiguration
spec:
components:
- ComponentRevision: web-service-v1
traits:
- trait:
traffic:
- revision: web-service-v1-v1
weight: "60%"
- revision: web-service-v1-v2
weight: "30%" In the case that we have application configurations with traits that modify the component (i.e manualScaler), we would recommend OAM users to keep track of applicationConfiguration snapshots outside of OAM spec. This is outside of the scope of OAM. Pro: It clearly defines the role of an application developer vs an operator. Con: A component is not reusable across different application configurations. OAM can have many identical components. This is also not backward compatible. |
I would not add a new concept to handle version (or revision). Versioning components makes more sense IMO; it is easier to understand. I saw the comment about it not being supported in Kubernetes - that can be handled at the implementation layer: have the implementation of the spec append the version to the name automatically, while the spec itself keeps the separation between name and version. I am in favor of the original idea with minor changes:

Component
ApplicationConfig
(I will use the term "revision" to distinguish it from the version of an app or the version of an API.)

@artursouza The main issue with revisioning an OAM Component is the "parameterValues" feature, which violates the integrity of the Component object. This would be the main topic we need to discuss in the next community meeting (check the 3 solutions above listed by Ryan). Where to maintain the revision information is an independent issue.
I think it's still infeasible: Component itself, as a CR, is also a K8s resource, so we can't add unsupported fields to its metadata. I am very hesitant to reinvent a metadata object in the implementation layer, because that means we cannot use many of the existing building blocks in the K8s community such as controller-runtime, client-go, etc.; I'm afraid I don't even know how to write a k8s controller in that case 😄.

Btw, if we agree on making OAM Component pure "data", then my 2 cents is to either:
- use a runtime-layer object (e.g. ControllerRevision) to track history, or
- define the whole component.spec as a Revision object.
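A rough sketch of the first option, reusing the Kubernetes apps/v1 ControllerRevision to snapshot the whole Component (the name pattern and label key are illustrative, not from the spec):

apiVersion: apps/v1
kind: ControllerRevision
metadata:
  name: web-service-v2                          # illustrative: <component-name>-<revision>
  labels:
    controller.oam.dev/component: web-service   # illustrative label tying the revision to its component
revision: 2
data:                                           # full snapshot of the Component at this revision
  apiVersion: core.oam.dev/v1alpha2
  kind: Component
  metadata:
    name: web-service
  spec:
    workload:
      apiVersion: core.oam.dev/v1alpha2
      kind: ContainerizedWorkload
      spec:
        containers:
        - name: c
          image: nginx:latest
          port: 8080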
I can’t seem to see this comment on the webpage
… On May 21, 2020, at 4:34 AM, 大雄 ***@***.***> wrote:
(I will use the term "revision" to differ from version of app or version of api):
@artursouza <https://github.com/artursouza> The main issue for revisioning OAM Component is "parameterValues" feature which violated the integrity of Component object. This would be the main topic we need to discuss in next community meeting (check 3 solutions above listed by Ryan).
For where to maintain the revision information, it's an independent issue.
to have the implementation of the spec to append the version to the name automatically but the spec itself keeps the separation between name and version
I think it's still infeasible as Component itself as a CR is also a K8s resource, so we can't add unsupported fields in its metadata. I am very hesitate to reinvent a metadata object in implementation layer which means we cannot use many of existing building blocks like controller runtime, client-go etc in K8s community, I'm afraid I don't even know how to write a k8s controller in that case 😄 .
btw, if we agree on making OAM Component pure "data", then my 2 cent is either
use runtime layer object (e.g. ControllerRevision <https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#controllerrevision-v1beta2-apps> ) to track history or
define the whole component.spec as a Revision object.
Could we use the Component rendered by "parameterValues" as the ComponentRevision? Otherwise we need to keep the ComponentRevision and WorkloadRevision both.
Yes, there are some good points we want to discuss, but I'm not sure who the author is. Maybe the comment was deleted or is outdated?
Yes, it's me. I found it had been a long time since the latest comment, so I deleted the comments. And I really like the proposal to "remove the parameter in AppConfig", since Component is already not a template. In that way, if we use two revisions …
And I think the … In that situation, the … If we keep …
We will treat …
Closing since it's already implemented.
An ApplicationConfiguration author does not have a way to point to a specific version of a component in the current spec. The reason is that the OAM spec uses a component's name as the only information to reference it.
This is needed in any mission-critical production service that relies on rollback as the default emergency fallback. In this case, operators need a way to specify a previous version during the rollback. One can work around this issue by taking the risk and rolling forward the deployment; however, this is not acceptable for most cloud service providers.

Moreover, a real workload in the OAM v1alpha2 spec is generated by templating a component with values filled in from an applicationConfig. This two-level composition nature of a workload makes it more difficult to expose the real version of a component. Here, we will add three proposals below that can solve this problem.
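To illustrate the limitation, this is a minimal sketch based on the examples in this issue: an ApplicationConfiguration in the current v1alpha2 spec references a component only by name, so there is nothing like the "revision" field used in the proposals above to pin or roll back to a previous state.

kind: ApplicationConfiguration
metadata:
  name: webApp
spec:
  components:
  - componentName: web-service   # the name is the only handle; no field exists to select a previous revision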