
[problem] unstructured field is dropped when using v1 API #50

Closed
gaocegege opened this issue Mar 1, 2021 · 8 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@gaocegege

gaocegege commented Mar 1, 2021

Hi, I am trying to use unstructured in the CRD like this:

// BatchSpec defines the desired state of Batch
type BatchSpec struct {
	// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
	// Important: Run "make" to regenerate code after modifying this file

	// Foo is an example field of Batch. Edit Batch_types.go to remove/update
	TrialTemplate *TrialTemplate `json:"trialTemplate,omitempty"`
}

type TrialTemplate struct {
	// Source for trial template (unstructured structure or config map)
	TrialSource `json:",inline"`
}

// TrialSource represent the source for trial template
// Only one source can be specified
type TrialSource struct {
	// TrialSpec represents trial template in unstructured format
	TrialSpec *unstructured.Unstructured `json:"trialSpec,omitempty"`
}
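
[Editor's note: a hedged sketch, not part of the original report. Under apiextensions.k8s.io/v1, object properties with no declared sub-properties are pruned unless the schema sets x-kubernetes-preserve-unknown-fields: true. With controller-gen (v0.4.1 is what generated the CRD below), that is typically requested with the pruning marker on the field; the TrialSource type above might then look like:]

```go
// TrialSource represent the source for trial template
// Only one source can be specified
type TrialSource struct {
	// TrialSpec represents trial template in unstructured format.
	// The marker below asks controller-gen to emit
	// x-kubernetes-preserve-unknown-fields: true for this property,
	// so the v1 API server does not prune its contents.
	// +kubebuilder:pruning:PreserveUnknownFields
	TrialSpec *unstructured.Unstructured `json:"trialSpec,omitempty"`
}
```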

But the unstructured content is dropped when we use v1. Here is the generated v1 CRD:

---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.4.1
  creationTimestamp: null
  name: batches.appv1.zhengwu.io
spec:
  group: appv1.zhengwu.io
  names:
    kind: Batch
    listKind: BatchList
    plural: batches
    singular: batch
  scope: Namespaced
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        description: Batch is the Schema for the batches API
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation
              of an object. Servers should convert recognized schemas to the latest
              internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this
              object represents. Servers may infer this from the endpoint the client
              submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: BatchSpec defines the desired state of Batch
            properties:
              trialTemplate:
                description: Foo is an example field of Batch. Edit Batch_types.go
                  to remove/update
                properties:
                  trialSpec:
                    description: TrialSpec represents trial template in unstructured
                      format
                    type: object
                type: object
            type: object
          status:
            description: BatchStatus defines the observed state of Batch
            type: object
        type: object
status:
  acceptedNames:
    kind: ""
    plural: ""
  conditions: []
  storedVersions: []

BTW, it works well when we use v1beta1. I was diving into the code base but I could not find the reason.

Here is the diff between the v1 CRD and the v1beta1 CRD. It seems the v1beta1 one has some additional managed fields.

41a42
>         f:preserveUnknownFields: {}
42a44,83
>         f:validation:
>           .: {}
>           f:openAPIV3Schema:
>             .: {}
>             f:description: {}
>             f:properties:
>               .: {}
>               f:apiVersion:
>                 .: {}
>                 f:description: {}
>                 f:type: {}
>               f:kind:
>                 .: {}
>                 f:description: {}
>                 f:type: {}
>               f:metadata:
>                 .: {}
>                 f:type: {}
>               f:spec:
>                 .: {}
>                 f:description: {}
>                 f:properties:
>                   .: {}
>                   f:trialTemplate:
>                     .: {}
>                     f:description: {}
>                     f:properties:
>                       .: {}
>                       f:trialSpec:
>                         .: {}
>                         f:description: {}
>                         f:type: {}
>                     f:type: {}
>                 f:type: {}
>               f:status:
>                 .: {}
>                 f:description: {}
>                 f:type: {}
>             f:type: {}
>         f:version: {}

I'd appreciate it if anyone could help me.

Thanks 🥂 🍻

@Zheaoli

Zheaoli commented Mar 2, 2021

I could not reproduce your issue on Kubernetes 1.20.

Would you mind giving more information about your cluster version and the error output?

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 31, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 30, 2021
@alculquicondor
Member

Maybe you have to add x-kubernetes-preserve-unknown-fields: true to trialSpec
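
[Editor's note: a sketch for context, not taken from the generated manifest. With that extension set, the trialSpec property in the v1 CRD schema would look roughly like:]

```yaml
trialSpec:
  description: TrialSpec represents trial template in unstructured format
  type: object
  # Tells the v1 API server to keep fields not declared in this schema,
  # instead of pruning them on write.
  x-kubernetes-preserve-unknown-fields: true
```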

@gaocegege
Author

I already added it.

@alculquicondor
Member

So it doesn't work? If it doesn't, can you please open an issue in kubernetes/kubernetes instead? I don't think people pay much attention to issues here.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
