
"kn service export --with-revisions" should create revision COs directly #728

Closed
rhuss opened this issue Mar 10, 2020 · 17 comments · Fixed by #819


rhuss commented Mar 10, 2020

Currently, kn service export --with-revisions exports only Services which are updated one after the other. However, as it is allowed to create Revisions directly, it makes more sense to export those directly, with the proper owner references.

A kn service import could then also adapt the revision names in case of collisions.
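
For context, a rough usage sketch of the commands under discussion (kn service import is only the proposed counterpart and does not exist yet; the service name is made up):

kn service export envtest --with-revisions -o yaml > export.yaml
# proposed counterpart, not implemented yet:
kn service import export.yaml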

@itsmurugappan

/assign @itsmurugappan

@itsmurugappan

@rhuss I looked at this today.
When we do kn service export --with-revisions, we would have to export the service and all the active revisions.
When we import using kn import, we would have to get the owner uid (of the configuration) after creating the service, plug it into each revision and create it.
With this approach, kubectl apply would not work directly. The user would have to plug in the owner uid manually and then use kubectl apply.
Is that right? Am I missing something?
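
A minimal sketch of what that manual stitching would look like without kn helping (file and resource names are hypothetical):

kubectl apply -f service.yaml
CONFIG_UID=$(kubectl get configuration envtest -o jsonpath='{.metadata.uid}')
# paste ${CONFIG_UID} into metadata.ownerReferences[0].uid of each exported Revision, then:
kubectl apply -f revision-envtest-v1.yaml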


rhuss commented Mar 18, 2020

When we do kn service export --with-revisions, we would have to export the service and all the active revisions.
Correct.

When we import using kn import, we would have to get the owner uid (of the configuration) after creating the service, plug it into each revision and create it.
With this approach, kubectl apply would not work directly. The user would have to plug in the owner uid manually and then use kubectl apply.

Yes, that's correct. Stitching the owner reference together properly is the task of kn import.

However, we could also export the Configuration, too, so that we can already set the owner references properly in the YAML file. I think we should follow this path and export the full graph. Wdyt?


itsmurugappan commented Mar 18, 2020

My 2 cents: this approach works if the intention is to export the current state of the service and let kn import do some extra work on import. But if we want to import it as it is, there are some problems with this approach. Please see the example below.

Exporting the service with 3 revisions (2 active) will result in the artifacts below:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: envtest
  namespace: test-alpha
  uid: 08cd1946-97cf-4877-b1aa-1fe5d141f83f
spec:
  template:
    metadata:
      annotations:
        client.knative.dev/user-image: murugappans/envtest:v1
      name: envtest-v3
    spec:
      containerConcurrency: 0
      containers:
      - env:
        - name: greet
          value: vanakkam
        image: murugappans/envtest:v1
        name: user-container
        resources: {}
        securityContext:
          runAsUser: 1001
      timeoutSeconds: 300
  traffic:
  - latestRevision: false
    percent: 50
    revisionName: envtest-v1
  - latestRevision: false
    percent: 50
    revisionName: envtest-v2
---
apiVersion: serving.knative.dev/v1
kind: Configuration
metadata:
  labels:
    serving.knative.dev/route: envtest
    serving.knative.dev/service: envtest
  name: envtest
  namespace: test-alpha
  ownerReferences:
  - apiVersion: serving.knative.dev/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: Service
    name: envtest
    uid: 08cd1946-97cf-4877-b1aa-1fe5d141f83f
  uid: f68ddab9-f868-47ec-b405-829dd236270e
spec:
  template:
    metadata:
      annotations:
        client.knative.dev/user-image: murugappans/envtest:v1
      name: envtest-v3
    spec:
      containerConcurrency: 0
      containers:
      - env:
        - name: greet
          value: vanakkam
        image: murugappans/envtest:v1
        name: user-container
        resources: {}
        securityContext:
          runAsUser: 1001
      timeoutSeconds: 300
---
apiVersion: serving.knative.dev/v1
kind: Revision
metadata:
  labels:
    serving.knative.dev/configuration: envtest
    serving.knative.dev/route: envtest
    serving.knative.dev/service: envtest
  name: envtest-v1
  namespace: test-alpha
  ownerReferences:
  - apiVersion: serving.knative.dev/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: Configuration
    name: envtest
    uid: f68ddab9-f868-47ec-b405-829dd236270e
spec:
  containerConcurrency: 0
  containers:
  - env:
    - name: greet
      value: hi
    image: murugappans/envtest:v1
    name: user-container
    resources: {}
    securityContext:
      runAsUser: 1001
  timeoutSeconds: 300
---
apiVersion: serving.knative.dev/v1
kind: Revision
metadata:
  labels:
    serving.knative.dev/configuration: envtest
    serving.knative.dev/route: envtest
    serving.knative.dev/service: envtest
  name: envtest-v2
  namespace: test-alpha
  ownerReferences:
  - apiVersion: serving.knative.dev/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: Configuration
    name: envtest
    uid: f68ddab9-f868-47ec-b405-829dd236270e
spec:
  containerConcurrency: 0
  containers:
  - env:
    - name: greet
      value: hola
    image: murugappans/envtest:v1
    name: user-container
    resources: {}
    securityContext:
      runAsUser: 1001
  timeoutSeconds: 300

When we create the service again, a new uid and a new configuration will be generated, so the exported configuration cannot be used. The uid in the owner references of the revisions then has to be changed to point to the newly generated configuration.

With that being said, we can export the service with its revisions (without the configuration) and kn import can update the uid in the owner references. This will result in only one pod starting.

With the current approach of exporting services, more pods will be created, but the output can also be used with kubectl and the generation numbers will be clean.


rhuss commented Mar 20, 2020

That sounds like a good approach. I have to understand the uid part a bit better; I wonder whether we could just strip it out (and let the controller add it)? I'm not sure what the uid is for, as the other coordinates are already sufficient for uniquely identifying the service.


rhuss commented Mar 20, 2020

UIDs are described here, but it is not clear whether a UID is mandatory. It would be interesting to see what happens if one just removes the UID. Would this work?

Otherwise I agree that we should rely on kn import for the import.
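
One way to check this empirically (just a sketch; the file name is hypothetical): take an exported Revision, drop the uid from its ownerReference and try to apply it:

kubectl apply -f revision-without-owner-uid.yaml
# Note: in the Kubernetes API, uid is (like apiVersion, kind and name) a required
# field of OwnerReference, so the API server is likely to reject this.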


rhuss commented Mar 20, 2020

Another interesting issue is that we would have to create the Service first, which implicitly creates its own Configuration and Revisions. If we then import our exported Configuration and Revisions (fixing the ownerRef to the service and deleting the implicitly created objects), we would pretty surely run into races or confuse the controller.

I wonder whether a serving-side solution wouldn't be better? E.g. allowing a Service to inline revisions that are referenced in a traffic declaration; on initial reconciliation the service could then remove that list. (Just brainstorming here.)


rhuss commented Mar 20, 2020

After some Slack discussion with @markusthoemmes I think we should allow these two modes for kn service export --with-revisions:

kn service export --with-revisions

This will export the objects as described in this issue. The exported format is suitable only for kn service import (so I wonder whether we should make this explicit by selecting another export format which can't be used by kubectl).

An import with kn import would work like this:

  • First import all Revisions, without ownerReference
  • Import the Configuration, without ownerReference
  • Set all Revisions owner reference to the just created Configuration
  • Create the Service. The service will go into an error state because the already existing Configuration with the same name as the service is not owned by the service.
  • Update the Configuration's ownerReference with the Service's coordinates.
  • The Service eventually will reconcile into a Ready = True state.

Not 100% sure how the Route reconciliation fits into this picture, but I think this should be the best way to rebuild a service with a traffic configuration.
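
A minimal sketch of that sequence, spelled out as manual kubectl steps (file and resource names are hypothetical; kn import would perform these steps internally):

kubectl apply -f revision-v1.yaml -f revision-v2.yaml   # Revisions without ownerReferences
kubectl apply -f configuration.yaml                     # Configuration without ownerReference
CONFIG_UID=$(kubectl get configuration envtest -o jsonpath='{.metadata.uid}')
# point each Revision's ownerReference at the just created Configuration
kubectl patch revision envtest-v1 --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"serving.knative.dev/v1\",\"kind\":\"Configuration\",\"name\":\"envtest\",\"uid\":\"${CONFIG_UID}\",\"controller\":true,\"blockOwnerDeletion\":true}]}}"
kubectl apply -f service.yaml
# finally, update the Configuration's ownerReference analogously to point at the Service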

kn service export --with-revisions --kubernetes-resources

This would export the service as we do it now, in a form that can be used with kubectl apply. Here's a summary of the algorithm:

Let's assume we have a Service "svc" with two Revisions "r1" and "r2" that are referenced in a traffic declaration. With kn service export a ServiceList with 3 Services is created:

  • A service with the podspec from "r1" and name "svc", without traffic targets.
  • A service with the podspec from "r2" and name "svc", without traffic targets.
  • "svc" itself, with traffic targets

This ServiceList can then be imported afresh with kubectl apply, which will:

  • Create a service "svc" from the "r1" spec
  • Update the service "svc" from the "r2" spec
  • Update the service "svc" from the previously exported svc service

So actually this command just "replays" what a human user would do when incrementally building up the revisions and services.
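
For illustration, roughly the same replay done by hand with kn (image and revision names are made up; the exported ServiceList would carry the real ones):

kn service create svc --image example.com/app:v1 --revision-name svc-r1
kn service update svc --image example.com/app:v2 --revision-name svc-r2
kn service update svc --traffic svc-r1=50 --traffic svc-r2=50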

@itsmurugappan

I tried the above steps. Please let me know if I am missing something:

  1. Created the 2 revisions without owner references, which immediately spins up 2 pods that keep crashing.
  2. Created the configuration.
  3. Updated the owner references in the revisions.
  4. Created the service.
  5. Updated the owner reference in the configuration and updated its labels to add "serving.knative.dev/service"; also updated the labels of the revisions.
  6. The pods are now running, but the service doesn't become ready; the error is "revision missing".

I don't see the Istio VirtualService getting created.
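
A couple of checks that help to pin down this state (namespace and names assumed from the example above):

kubectl get ksvc envtest -n test-alpha -o jsonpath='{range .status.conditions[*]}{.type}={.status} ({.reason}){"\n"}{end}'
kubectl get virtualservice -n test-alpha   # with Istio as ingress, the Route should result in one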


rhuss commented Mar 23, 2020

Yes, that looks like a serving issue when reconciling a service whose configuration is initially not owned and is then turned into an owned configuration by setting the configuration's ownerRef. I would expect the service to eventually reconcile into the ready state, including working routes. @mattmoor is this a correct assumption? If so, I would carry that specific issue over to serving.


rhuss commented Mar 23, 2020

  1. Created the 2 revisions without owner references, which immediately spins up 2 pods that keep crashing.

This alone is not what we want. This might be avoided once the start-with-scale-0 feature arrives, so that no pod gets started (which is also important to avoid any application-specific side effects).

@itsmurugappan

@rhuss I think we can go ahead with the 2 options you mentioned below.

  1. kn service export --with-revisions
  • Even though the import has some challenges, it can be useful to show the current state of the service and can be persisted in a CM system.
  2. kn service export --with-revisions --kubernetes-resources
  • Can be used with kubectl.


rhuss commented Mar 25, 2020

@itsmurugappan Yes, that looks fine to me.

Another idea by @evankanderson:

Another option is to populate the current state of Service/Configuration/Route, and then backfill the
Revisions after the “latest” is installed, which would allow you to create/update each object exactly once:

  • Create kSvc “A”, reconcilers create Route “A”, Config “A”, Revision “A100”.
  • Create Revisions “A99”-“A1” after this, setting the ownerReference during creation to point to Config “A”.

The challenge would be how to deal with the traffic (route) configuration of the service when it's not pointing 100% to latest. The question is how the traffic configuration can then be added as a last step without serving any traffic in the meantime. Is there some kind of service setting that marks the service as inactive? Except maybe using an always-false readiness probe during the import action, but that would affect the latest revision, too. It would be cool to have a field in the service to deactivate it which is not reflected in a revision (i.e. does not create a new revision if changed).

Of course, that export format can only be imported again by kn itself, as it has to perform extra actions. I wonder whether we should already make this explicit by choosing a different export schema and not a k8s-resource lookalike, which could accidentally even be imported by kubectl.


rhuss commented Mar 25, 2020

@itsmurugappan also you might want to have a look at this PR which introduces a migration feature to kn: https://github.com/knative/client-contrib/pull/6/files#diff-c0d82cc6f6d3bf38a38a9131cf5c3bb7R45 — it also already deals with how to set UIDs. // cc: @zhangtbj


itsmurugappan commented Mar 25, 2020

@rhuss the approach used by @zhangtbj is similar to what we discussed in #728 (comment) and #728 (comment).

This is the import flow:

  1. Create the service (which creates the config and a revision).
  2. Get the config uid, plug it into the other revisions and create them (except the latest).

@zhanggbj please correct me if I am wrong.

@itsmurugappan

Another option is to populate the current state of Service/Configuration/Route, and then backfill the
Revisions after the “latest” is installed, which would allow you to create/update each object exactly once:

  • Create kSvc “A”, reconcilers create Route “A”, Config “A”, Revision “A100”.
  • Create Revisions “A99”-“A1” after this, setting the ownerReference during creation to point to Config “A”.

I had tried this approach earlier and it works:

Create the service with the traffic split (even though the revisions don't exist yet) -> get the config uid, plug it into the revisions which were not created and create them.

This creates exactly one pod, and the traffic split seemed to work correctly.
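
A minimal sketch of that flow, assuming the exported Revisions carry a placeholder instead of the configuration uid (file names and the placeholder are hypothetical):

kubectl apply -f service-with-traffic.yaml   # creates Route, Configuration and the latest Revision
CONFIG_UID=$(kubectl get configuration envtest -o jsonpath='{.metadata.uid}')
sed "s/CONFIG_UID_PLACEHOLDER/${CONFIG_UID}/" revision-envtest-v1.yaml | kubectl apply -f -
sed "s/CONFIG_UID_PLACEHOLDER/${CONFIG_UID}/" revision-envtest-v2.yaml | kubectl apply -f -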


rhuss commented Mar 26, 2020

Sounds good, so we might want to try to do this.

@navidshaikh navidshaikh added this to the v0.15.0 milestone May 14, 2020
coryrc pushed a commit to coryrc/client that referenced this issue May 14, 2020