[VC-35377] Reduce memory by excluding Helm release Secrets and some standard Secret types #554
Conversation
Your changes are sensible and I don't think anything will break as a consequence. I can confirm that the Agent no longer receives/unmarshals Helm secrets. To double check that, I used mitmproxy and looked at the responses to the requests sent to the API server:
Before:
GET https://me:58453/api/v1/secrets?limit=500&resourceVersion=0 HTTP/2.0
accept: application/json
user-agent: preflight/v0.0.0 (darwin/arm64) kubernetes/$Format
After:
GET https://localhost:58453/api/v1/secrets?fieldSelector=%2Ctype%21%3Dkubernetes.io%2Fservice-account-token%2Ctype%21%3Dkubernetes.io%2Fdockercfg%2Ctype%21%3Dkubernetes.io%2Fdockerconfigjson%2Ctype%21%3Dkubernetes.io%2Fbasic-auth%2Ctype%21%3Dkubernetes.io%2Fssh-auth%2Ctype%21%3Dbootstrap.kubernetes.io%2Ftoken%2Ctype%21%3Dhelm.sh%2Frelease.v1&limit=500&resourceVersion=0 HTTP/2.0
accept: application/json
user-agent: preflight/v0.0.0 (darwin/arm64) kubernetes/$Format
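For readability, that URL-encoded fieldSelector parameter decodes to:
,type!=kubernetes.io/service-account-token,type!=kubernetes.io/dockercfg,type!=kubernetes.io/dockerconfigjson,type!=kubernetes.io/basic-auth,type!=kubernetes.io/ssh-auth,type!=bootstrap.kubernetes.io/token,type!=helm.sh/release.v1
(Note the stray leading comma, which looks like the same comma artifact discussed in the review comments further down.)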
This should dramatically reduce the memory used at boot time when the Agent unmarshals the whole response at once.
I got a little confused by the ordering of your diff; shouldn't the master branch be
Do you think using selectors decreases the memory usage (and CPU usage) of the API server when the Agent lists the secrets? My belief was that the API server still had to keep all the secrets in memory before filtering them, meaning that the memory load on the API server would be the same with and without the field selectors.
Additional info on how to use mitmproxy with the agent: First, install mitmproxy. Then get the script that prevents mitmproxy from blocking on long-lasting requests:
curl -L https://raw.githubusercontent.com/maelvls/kubectl-incluster/main/watch-stream.py >/tmp/watch-stream.py
Then, set:
export HTTPS_PROXY=http://localhost:9090
export KUBECONFIG=/tmp/kube
KUBECONFIG= HTTPS_PROXY= kubectl incluster --replace-ca-cert ~/.mitmproxy/mitmproxy-ca-cert.pem | sed 's|127.0.0.1|me|' >/tmp/kube
Then, make sure to have an entry for "me" (pointing at 127.0.0.1) in your /etc/hosts. (Read https://maelvls.dev/go-ignores-proxy-localhost/ to learn why.) Create a key pair for the Private Key JWT authentication:
Finally, run the agent:
go run . agent -c examples/one-shot-secrets.yaml \
--client-id $(jq -r .client_id agent-credentials.json) \
--private-key-path <(jq -r .private_key agent-credentials.json) \
  -p 5m
@wallrj - would we want to make this filtering optional, and off by default? I think that discovering all identities regardless of their secret type is a key part of the product. But from a security point of view, you don't want to filter any out? You actually want to find all those certs that might have been hidden on purpose. For example, I was able to just apply whatever secret:
apiVersion: v1
data:
  tls.crt: <REDACT>
  tls.key: <REDACT>
kind: Secret
metadata:
  name: definitely-not-a-cert
  namespace: default
type: my-company.io/durp
It seems some types require certain fields, but I'm wondering if it's easy to just hide a cert in some other secret. In those cases we would be missing certs and not giving the expected visibility?
Thanks for pointing it out. Richard went with a query parameter that selects the "open" set defined by the complement of the set of excluded types. This is a good thing as it doesn't exclude unknown types that may appear in the future. In your example, the custom type my-company.io/durp would still be selected and the Secret would still be reported.
Imagining that a person really wants to hide a cert, you are right: they could very well use one of the excluded types above. But I don't think people aim to hide certificates; why would they? Unfortunately, there isn't a good way of filtering out types based on what they contain server-side. For example, we can't write a field selector that only matches Secrets containing a tls.crt key.
I think excluding these types is a risk worth taking compared to the downside of a huge memory consumption (more than 3 GiB) in the first 30 seconds after starting the agent. Side note: if we were to parse the YAML contained in these...
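As a small self-contained illustration of that "complement" behaviour (a sketch using k8s.io/apimachinery/pkg/fields with only two of the excluded types, to keep it short; this is not the agent's code):

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/fields"
)

func main() {
	// An exclusion-based selector: anything whose type is not on the list matches.
	sel, err := fields.ParseSelector("type!=helm.sh/release.v1,type!=kubernetes.io/service-account-token")
	if err != nil {
		panic(err)
	}
	fmt.Println(sel.Matches(fields.Set{"type": "my-company.io/durp"})) // true: unknown/custom types are still collected
	fmt.Println(sel.Matches(fields.Set{"type": "helm.sh/release.v1"})) // false: excluded
}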
By the way, I think we should also look at the other resources (besides secrets). On large clusters with tons of Pods, listing all the pods is also a problem... But I can't see a good way of filtering out "some" pods server-side (the only filtering that can happen is client-side, e.g. only report pods that mount secrets).
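For what it's worth, here is a rough sketch of the kind of client-side filtering mentioned above (keep only the Pods that mount a Secret volume); it is illustrative only, not agent code, and it ignores Secrets referenced via env/envFrom:

package example

import (
	corev1 "k8s.io/api/core/v1"
)

// podsMountingSecrets returns only the Pods that mount at least one Secret volume.
func podsMountingSecrets(pods []corev1.Pod) []corev1.Pod {
	var out []corev1.Pod
	for _, pod := range pods {
		for _, vol := range pod.Spec.Volumes {
			if vol.Secret != nil { // a volume backed by a Secret
				out = append(out, pod)
				break
			}
		}
	}
	return out
}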
For example:
echo "{}" > dockerconfig.json
k create secret docker-registry no-looky-here --from-file=.dockerconfigjson=dockerconfig.json --from-file=tls.crt=a-cert.crt
Thanks @wallrj & @maelvls. As discussed at the standup, and as Mael states above, there's nothing to stop someone adding extra keys to those excluded Secret types. That said, right now someone could actually just change the data field key name and we also wouldn't detect that, because, let's say:
» k create secret docker-registry real-docker-config-promise --from-file=.dockerconfigjson=dockerconfig.json --from-file=token=a-cert.crt
secret/real-docker-config-promise created
» k get secret real-docker-config-promise -o yaml | yq '.data | keys'
- .dockerconfigjson
- token
So actually, regardless of the type, it should be fairly easy to hide a cert from our agent, if someone really wanted to. Technically your PR looks like it solves the problem at hand.
Making the filtering configurable could be useful for testing purposes (without a code change), and would give customers the option to include / exclude secret types depending on their performance needs.
I believe it should, for the reasons given in the WatchList design document. But I haven't measured it and therefore I've removed that as a motivation from the PR description.
I take your point, but it seems unlikely that someone would add a tls.crt key to one of the excluded Secret types. The UI does show leaf (not CA) Certificates found in any type of Kubernetes Secret. And with the following config and dockerconfigjson Secret containing the leaf cert from example.com, the cert does show up in the UI.
organization_id: "richardw"
cluster_id: "richardw-1"
period: 1m
data-gatherers:
  - kind: "k8s-dynamic"
    name: "k8s/secrets"
    config:
      include-namespaces: ["default"]
      resource-type:
        version: v1
        resource: secrets
server: "https://api.venafi.cloud/"
venafi-cloud:
  uploader_id: "uploader-id-1"
  upload_path: "/v1/tlspk/upload/clusterdata"

apiVersion: v1
kind: Secret
metadata:
generateName: test-
type: kubernetes.io/dockerconfigjson
stringData:
.dockerconfigjson: |
{
"auths": {
"registry.example.com": {
"username": "user-1",
"password": "password-1"
}
}
}
tls.crt: |
-----BEGIN CERTIFICATE-----
MIIHbjCCBlagAwIBAgIQB1vO8waJyK3fE+Ua9K/hhzANBgkqhkiG9w0BAQsFADBZ
MQswCQYDVQQGEwJVUzEVMBMGA1UEChMMRGlnaUNlcnQgSW5jMTMwMQYDVQQDEypE
aWdpQ2VydCBHbG9iYWwgRzIgVExTIFJTQSBTSEEyNTYgMjAyMCBDQTEwHhcNMjQw
MTMwMDAwMDAwWhcNMjUwMzAxMjM1OTU5WjCBljELMAkGA1UEBhMCVVMxEzARBgNV
BAgTCkNhbGlmb3JuaWExFDASBgNVBAcTC0xvcyBBbmdlbGVzMUIwQAYDVQQKDDlJ
bnRlcm5ldMKgQ29ycG9yYXRpb27CoGZvcsKgQXNzaWduZWTCoE5hbWVzwqBhbmTC
oE51bWJlcnMxGDAWBgNVBAMTD3d3dy5leGFtcGxlLm9yZzCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBAIaFD7sO+cpf2fXgCjIsM9mqDgcpqC8IrXi9wga/
9y0rpqcnPVOmTMNLsid3INbBVEm4CNr5cKlh9rJJnWlX2vttJDRyLkfwBD+dsVvi
vGYxWTLmqX6/1LDUZPVrynv/cltemtg/1Aay88jcj2ZaRoRmqBgVeacIzgU8+zmJ
7236TnFSe7fkoKSclsBhPaQKcE3Djs1uszJs8sdECQTdoFX9I6UgeLKFXtg7rRf/
hcW5dI0zubhXbrW8aWXbCzySVZn0c7RkJMpnTCiZzNxnPXnHFpwr5quqqjVyN/aB
KkjoP04Zmr+eRqoyk/+lslq0sS8eaYSSHbC5ja/yMWyVhvMCAwEAAaOCA/IwggPu
MB8GA1UdIwQYMBaAFHSFgMBmx9833s+9KTeqAx2+7c0XMB0GA1UdDgQWBBRM/tAS
TS4hz2v68vK4TEkCHTGRijCBgQYDVR0RBHoweIIPd3d3LmV4YW1wbGUub3Jnggtl
eGFtcGxlLm5ldIILZXhhbXBsZS5lZHWCC2V4YW1wbGUuY29tggtleGFtcGxlLm9y
Z4IPd3d3LmV4YW1wbGUuY29tgg93d3cuZXhhbXBsZS5lZHWCD3d3dy5leGFtcGxl
Lm5ldDA+BgNVHSAENzA1MDMGBmeBDAECAjApMCcGCCsGAQUFBwIBFhtodHRwOi8v
d3d3LmRpZ2ljZXJ0LmNvbS9DUFMwDgYDVR0PAQH/BAQDAgWgMB0GA1UdJQQWMBQG
CCsGAQUFBwMBBggrBgEFBQcDAjCBnwYDVR0fBIGXMIGUMEigRqBEhkJodHRwOi8v
Y3JsMy5kaWdpY2VydC5jb20vRGlnaUNlcnRHbG9iYWxHMlRMU1JTQVNIQTI1NjIw
MjBDQTEtMS5jcmwwSKBGoESGQmh0dHA6Ly9jcmw0LmRpZ2ljZXJ0LmNvbS9EaWdp
Q2VydEdsb2JhbEcyVExTUlNBU0hBMjU2MjAyMENBMS0xLmNybDCBhwYIKwYBBQUH
AQEEezB5MCQGCCsGAQUFBzABhhhodHRwOi8vb2NzcC5kaWdpY2VydC5jb20wUQYI
KwYBBQUHMAKGRWh0dHA6Ly9jYWNlcnRzLmRpZ2ljZXJ0LmNvbS9EaWdpQ2VydEds
b2JhbEcyVExTUlNBU0hBMjU2MjAyMENBMS0xLmNydDAMBgNVHRMBAf8EAjAAMIIB
fQYKKwYBBAHWeQIEAgSCAW0EggFpAWcAdABOdaMnXJoQwzhbbNTfP1LrHfDgjhuN
acCx+mSxYpo53wAAAY1b0vxkAAAEAwBFMEMCH0BRCgxPbBBVxhcWZ26a8JCe83P1
JZ6wmv56GsVcyMACIDgpMbEo5HJITTRPnoyT4mG8cLrWjEvhchUdEcWUuk1TAHYA
fVkeEuF4KnscYWd8Xv340IdcFKBOlZ65Ay/ZDowuebgAAAGNW9L8MAAABAMARzBF
AiBdv5Z3pZFbfgoM3tGpCTM3ZxBMQsxBRSdTS6d8d2NAcwIhALLoCT9mTMN9OyFz
IBV5MkXVLyuTf2OAzAOa7d8x2H6XAHcA5tIxY0B3jMEQQQbXcbnOwdJA9paEhvu6
hzId/R43jlAAAAGNW9L8XwAABAMASDBGAiEA4Koh/VizdQU1tjZ2E2VGgWSXXkwn
QmiYhmAeKcVLHeACIQD7JIGFsdGol7kss2pe4lYrCgPVc+iGZkuqnj26hqhr0TAN
BgkqhkiG9w0BAQsFAAOCAQEABOFuAj4N4yNG9OOWNQWTNSICC4Rd4nOG1HRP/Bsn
rz7KrcPORtb6D+Jx+Q0amhO31QhIvVBYs14gY4Ypyj7MzHgm4VmPXcqLvEkxb2G9
Qv9hYuEiNSQmm1fr5QAN/0AzbEbCM3cImLJ69kP5bUjfv/76KB57is8tYf9sh5ik
LGKauxCM/zRIcGa3bXLDafk5S2g5Vr2hs230d/NGW1wZrE+zdGuMxfGJzJP+DAFv
iBfcQnFg4+1zMEKcqS87oniOyG+60RMM0MdejBD7AS43m9us96Gsun/4kufLQUTI
FfnzxLutUV++3seshgefQOy5C/ayi8y1VTNmujPCxPCi6Q==
-----END CERTIFICATE-----
@wallrj I think my "objection" can be taken offline. I think my focus is on complete coverage, but it seems that we already have gaps that can be exploited, and therefore it can be a separate conversation. I was about to suggest something, but I see you have already done it here; you must have read my mind! Thanks @wallrj for making the effort to have this configurable.
Generated by make update-helm-docs.
@hawksight I added ignoredSecretTypes to the Helm chart values, as you suggested.
@@ -64,7 +64,7 @@ resource referenced in the `kind` for that datagatherer.
There is an example `ClusterRole` and `ClusterRoleBinding` which can be found in
[`./deployment/kubernetes/base/00-rbac.yaml`](./deployment/kubernetes/base/00-rbac.yaml).

-# Secrets
+## Secrets
Drive-by fix. This was the wrong heading level.
@@ -52,6 +54,7 @@ func (c *ConfigDynamic) UnmarshalYAML(unmarshal func(interface{}) error) error {
	} `yaml:"resource-type"`
	ExcludeNamespaces []string `yaml:"exclude-namespaces"`
	IncludeNamespaces []string `yaml:"include-namespaces"`
+	FieldSelectors []string `yaml:"field-selectors"`
It's a bit weird that the yaml field name hints are added in two places, but I decided to follow the existing pattern.
Perhaps the reason is so that for example, the configuration file can contain strings which are unmarshalled and then converted into some other rich type?
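If it helps, this is the general shape of that pattern (a generic sketch, not this repo's actual types): the yaml tag appears once on the exported struct and once on the anonymous struct inside UnmarshalYAML, so the raw strings can be decoded first and then converted into richer types afterwards.

package example

import (
	"k8s.io/apimachinery/pkg/fields"
)

// Config mirrors the "tag in two places" pattern described above.
type Config struct {
	FieldSelectors []string `yaml:"field-selectors"`

	// parsed is a richer representation built from the raw strings.
	parsed []fields.Selector
}

// UnmarshalYAML decodes into a plain auxiliary struct first, then converts
// the raw selector strings into fields.Selector values.
func (c *Config) UnmarshalYAML(unmarshal func(interface{}) error) error {
	aux := struct {
		FieldSelectors []string `yaml:"field-selectors"`
	}{}
	if err := unmarshal(&aux); err != nil {
		return err
	}
	c.FieldSelectors = aux.FieldSelectors
	c.parsed = nil
	for _, s := range aux.FieldSelectors {
		sel, err := fields.ParseSelector(s)
		if err != nil {
			return err
		}
		c.parsed = append(c.parsed, sel)
	}
	return nil
}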
for i, selectorString := range c.FieldSelectors {
	if selectorString == "" {
		errors = append(errors, fmt.Sprintf("invalid field selector %d: must not be empty", i))
	}
	_, err := fields.ParseSelector(selectorString)
	if err != nil {
		errors = append(errors, fmt.Sprintf("invalid field selector %d: %s", i, err))
	}
}
The selectors are validated here but the validation is not very good.
ParseSelector will happily parse strings like "type != foo" into:
fields.Requirement{
	Op:    "!=",
	Field: "type ", // with trailing space
	Value: " foo",  // with leading space
}
And then later the client-go Reflector will fail with an error like "unknown field: type " but you won't be able to see the trailing space, because it's not quoted.
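A minimal sketch (not a suggestion for this exact PR, just to make the point concrete) of a stricter check that rejects the whitespace up front and quotes the offending string in the error:

package example

import (
	"fmt"
	"strings"

	"k8s.io/apimachinery/pkg/fields"
)

// validateFieldSelector rejects empty or whitespace-containing selectors before
// handing them to fields.ParseSelector, so the problem is visible at config
// validation time rather than when the Reflector first lists.
func validateFieldSelector(i int, s string) error {
	if s == "" {
		return fmt.Errorf("invalid field selector %d: must not be empty", i)
	}
	if strings.ContainsAny(s, " \t") {
		return fmt.Errorf("invalid field selector %d: must not contain whitespace: %q", i, s)
	}
	if _, err := fields.ParseSelector(s); err != nil {
		return fmt.Errorf("invalid field selector %d: %w", i, err)
	}
	return nil
}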
@@ -150,7 +164,15 @@ func (c *ConfigDynamic) newDataGathererWithClient(ctx context.Context, cl dynami
	return nil, err
}
// init shared informer for selected namespaces
-fieldSelector := generateFieldSelector(c.ExcludeNamespaces)
+fieldSelector := generateExcludedNamespacesFieldSelector(c.ExcludeNamespaces)
I gave this function a much more specific name to avoid any confusion about how it fits in with the user-supplied field-selectors.
Also changed the return type to fields.Selector so that the user-supplied selectors can be easily ANDed to it.
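Roughly, the convenience looks like this (a sketch of the idea rather than the PR's exact code): both sources become fields.Selector values and are combined with fields.AndSelectors, so no manual string concatenation (and no stray commas) is involved.

package example

import (
	"k8s.io/apimachinery/pkg/fields"
)

// combinedFieldSelector ANDs the namespace exclusions with any user-supplied
// field selectors and returns the string to put in ListOptions.FieldSelector.
func combinedFieldSelector(excludeNamespaces, userSelectors []string) (string, error) {
	var sels []fields.Selector
	for _, ns := range excludeNamespaces {
		sels = append(sels, fields.OneTermNotEqualSelector("metadata.namespace", ns))
	}
	for _, s := range userSelectors {
		parsed, err := fields.ParseSelector(s)
		if err != nil {
			return "", err
		}
		sels = append(sels, parsed)
	}
	if len(sels) == 0 {
		// An empty selector string means "no server-side filtering".
		return fields.Everything().String(), nil
	}
	return fields.AndSelectors(sels...).String(), nil
}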
// generateFieldSelector creates a field selector string from a list of
// namespaces to exclude.
func generateFieldSelector(excludeNamespaces []string) string {
	fieldSelector := fields.Nothing()
I think there was a bug in this function. This initial fields.Nothing() caused the returned selector to always have a trailing comma, which was ugly when I appended the user-supplied field selectors.
@@ -105,8 +105,12 @@ func sortGatheredResources(list []*api.GatheredResource) {
func TestNewDataGathererWithClientAndDynamicInformer(t *testing.T) {
	ctx := context.Background()
	config := ConfigDynamic{
-		IncludeNamespaces: []string{"a"},
+		ExcludeNamespaces: []string{"kube-system"},
The validate function prevents you having both Include and Exclude namespaces, so I removed the IncludeNamespaces in this test because I wanted to see how the exclude namespace field selector got merged with the user-supplied field selectors.
@@ -300,19 +342,19 @@ func TestGenerateFieldSelector(t *testing.T) {
			ExcludeNamespaces: []string{
				"kube-system",
			},
-			ExpectedFieldSelector: "metadata.namespace!=kube-system,",
+			ExpectedFieldSelector: "metadata.namespace!=kube-system",
This is the trailing comma bug that I referred to in an earlier comment.
- kubernetes.io/dockercfg
- kubernetes.io/dockerconfigjson
- kubernetes.io/basic-auth
- kubernetes.io/ssh-auth,
Is the trailing comma expected here?
Fixed.
Thanks @achuchev
Here's the difference you'll see in the rendered Helm manifests as a result of this feature:
$ diff -u \
<(helm template oci://registry.venafi.cloud/charts/venafi-kubernetes-agent --version 0.1.49) \
<(helm template deploy/charts/venafi-kubernetes-agent/)
Pulled: registry.venafi.cloud/charts/venafi-kubernetes-agent:0.1.49
Digest: sha256:0cc0244761fbc8fb48b3c7a94e58c4f7611b24432481f1ecf95ab4aa399033b9
--- /dev/fd/63 2024-08-22 11:27:00.292015674 +0100
+++ /dev/fd/62 2024-08-22 11:27:00.292015674 +0100
@@ -106,6 +106,14 @@
resource-type:
version: v1
resource: secrets
+ field-selectors:
+ - type!=kubernetes.io/service-account-token
+ - type!=kubernetes.io/dockercfg
+ - type!=kubernetes.io/dockerconfigjson
+ - type!=kubernetes.io/basic-auth
+ - type!=kubernetes.io/ssh-auth
+ - type!=bootstrap.kubernetes.io/token
+ - type!=helm.sh/release.v1
- kind: "k8s-dynamic"
name: "k8s/certificates"
config:
I've done a cursory reading of the changes and found nothing out of the ordinary.
I haven't tested the changes myself either through running the binary locally, or through the helm chart.
To give an idea of the size of Helm release secrets, I installed every version of cert-manager since 1.4 and realised that Helm does prune all but the last 10 Secrets:
curl -fsSL https://charts.jetstack.io/index.yaml \
| yq '.entries.cert-manager[] | select(.annotations["artifacthub.io/prerelease"] == "false" and .version == "v1.*" ) | .version' \
| tac \
| while read version; do \
helm upgrade cert-manager cert-manager \
--repo https://charts.jetstack.io \
--install \
--create-namespace \
--namespace=cert-manager \
--set installCRDs=true \
--wait \
--version "${version}"; \
done

$ kubectl get secret --field-selector=type=helm.sh/release.v1 -A
NAMESPACE NAME TYPE DATA AGE
cert-manager sh.helm.release.v1.cert-manager.v58 helm.sh/release.v1 1 46m
cert-manager sh.helm.release.v1.cert-manager.v59 helm.sh/release.v1 1 46m
cert-manager sh.helm.release.v1.cert-manager.v60 helm.sh/release.v1 1 45m
cert-manager sh.helm.release.v1.cert-manager.v61 helm.sh/release.v1 1 45m
cert-manager sh.helm.release.v1.cert-manager.v62 helm.sh/release.v1 1 45m
cert-manager sh.helm.release.v1.cert-manager.v63 helm.sh/release.v1 1 44m
cert-manager sh.helm.release.v1.cert-manager.v64 helm.sh/release.v1 1 44m
cert-manager sh.helm.release.v1.cert-manager.v65 helm.sh/release.v1 1 43m
cert-manager sh.helm.release.v1.cert-manager.v66 helm.sh/release.v1 1 43m
cert-manager sh.helm.release.v1.cert-manager.v67 helm.sh/release.v1 1 43m
venafi sh.helm.release.v1.venafi-kubernetes-agent.v1 helm.sh/release.v1 1 26h
venafi sh.helm.release.v1.venafi-kubernetes-agent.v2 helm.sh/release.v1 1 3h27m
venafi sh.helm.release.v1.venafi-kubernetes-agent.v3 helm.sh/release.v1 1 3h21m

$ kubectl get secret --field-selector=type=helm.sh/release.v1 -A -o json | dd of=/dev/null
7437+1 records in
7437+1 records out
3807929 bytes (3.8 MB, 3.6 MiB) copied, 0.61285 s, 6.2 MB/s
Thanks @maelvls. I also rebased on top of #552 and re-ran the memory measurement:
# Before
$ grep HWM /proc/$(pidof preflight)/status
VmHWM: 72944 kB
# After
$ grep HWM /proc/$(pidof preflight)/status
VmHWM: 57260 kB
This PR aims to solve the problem of large startup memory use in the agent.
The agent lists all Secrets when it starts up and this balloons the startup memory usage, because all the data of all the Secrets is returned from the API server in one SecretList which must be held in memory while it is unmarshalled.
This is a symptom of the List then Watch behaviour of the client-go informer abstraction. (See "Why does agent request all the Secrets from the K8S API server when it starts up?" below.)
The venafi-kubernetes-agent lists, watches and caches all Secret resources in the cluster.
It sends to Venafi some metadata of all Secrets
and the data for tls.crt and ca.crt files if those keys exist in the Secret.
(See jetstack-secure/pkg/datagatherer/k8s/fieldfilter.go, lines 12 to 26 at commit 473277b.)
Supplying a field selector to the client causes the server to only return the resources which match the selector, which reduces the size of the response and the size of the SecretList that is held in memory after the initial listing.
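For context, this is roughly where a field selector plugs into a client-go dynamic informer (a simplified sketch, not the PR's exact code; the resync period and the two example exclusions are placeholders): the tweak function is applied to every List and Watch request the informer makes, so the filtering happens server-side.

package example

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/dynamic/dynamicinformer"
	"k8s.io/client-go/rest"
)

func newSecretsInformerFactory(cfg *rest.Config) (dynamicinformer.DynamicSharedInformerFactory, error) {
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	tweak := func(options *metav1.ListOptions) {
		// Applied to both the initial List and the subsequent Watch.
		options.FieldSelector = "type!=helm.sh/release.v1,type!=kubernetes.io/service-account-token"
	}
	factory := dynamicinformer.NewFilteredDynamicSharedInformerFactory(client, 5*time.Minute, metav1.NamespaceAll, tweak)
	secretsGVR := schema.GroupVersionResource{Version: "v1", Resource: "secrets"}
	factory.ForResource(secretsGVR) // register the Secrets informer
	return factory, nil
}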
In this PR I add a field selector to exclude Helm release Secrets and all the standard Kubernetes Secret types which are not expected to contain tls.crt or ca.crt files. I wanted to exclude Helm Secrets in particular because they are often large and often numerous.
This approach was also considered for cert-manager cainjector but rejected in favour of using the metadata-only API. See
There is a chance that this change will cause the agent to ignore some public TLS certificates which it previously reported. This is because users might add tls.crt and ca.crt files to kubernetes.io/dockerconfigjson Secrets, and @hawksight demonstrates this in the comments below.

Why does the agent request all the Secrets from the K8S API server when it starts up?
To populate its in-memory cache. It sends the following requests:
The first request is a List request, which returns all the Secret resources, to populate the cache.
The second request is a Watch request, which streams all changes to all Secrets in the cluster, so that the agent can keep its cache up-to-date.
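As a simplified illustration of those two requests (generic client-go usage with a typed clientset, not the agent's code, which uses the dynamic client): the Reflector behind the informer first calls ListFunc to seed its cache and then WatchFunc to stream changes, and the field selector applies to both.

package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

func secretsListWatch(ctx context.Context, client kubernetes.Interface, fieldSelector string) *cache.ListWatch {
	return &cache.ListWatch{
		// One big List to populate the cache...
		ListFunc: func(options metav1.ListOptions) (runtime.Object, error) {
			options.FieldSelector = fieldSelector
			return client.CoreV1().Secrets(metav1.NamespaceAll).List(ctx, options)
		},
		// ...then a long-running Watch to keep it up to date.
		WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) {
			options.FieldSelector = fieldSelector
			return client.CoreV1().Secrets(metav1.NamespaceAll).Watch(ctx, options)
		},
	}
}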
Alternatives considered
Use metadata-only API
We could in theory list only the metadata of the Secrets, inspect the field-owner metadata for the tls.crt and ca.crt keys and then get the full data of only those Secrets. But this would be more complex to implement and would be obsolete as soon as the WatchList feature becomes GA. And when WatchList is usable the Agent can efficiently get the data of all Secrets and do deeper inspection of all the files for public certificates... not just the tls.crt and ca.crt files.
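For illustration, the rejected metadata-only approach would look roughly like this (a hypothetical sketch, not agent code): list only Secret metadata, check the managed-fields entries for a tls.crt key, and only then fetch the full Secret for the matches.

package example

import (
	"bytes"
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/metadata"
	"k8s.io/client-go/rest"
)

// secretsLikelyContainingCerts lists Secret metadata only and returns the
// namespace/name of Secrets whose managed fields mention a tls.crt data key.
func secretsLikelyContainingCerts(ctx context.Context, cfg *rest.Config) ([]string, error) {
	mc, err := metadata.NewForConfig(cfg)
	if err != nil {
		return nil, err
	}
	gvr := schema.GroupVersionResource{Version: "v1", Resource: "secrets"}
	list, err := mc.Resource(gvr).List(ctx, metav1.ListOptions{})
	if err != nil {
		return nil, err
	}
	var names []string
	for _, item := range list.Items {
		for _, mf := range item.ManagedFields {
			// Managed-fields entries record which data keys each manager set, e.g. f:data -> f:tls.crt.
			if mf.FieldsV1 != nil && bytes.Contains(mf.FieldsV1.Raw, []byte(`f:tls.crt`)) {
				names = append(names, item.Namespace+"/"+item.Name)
				break
			}
		}
	}
	return names, nil // a full GET of each of these Secrets would follow
}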
Use the WatchList feature
In future, the agent can use the WatchList feature but it is not yet stable.
This will remove the initial requests to List all resources; instead the agent will be able to stream and incrementally decode all the resources that it discovers, which should dramatically reduce the startup memory.
Only list Opaque and kubernetes.io/tls Secret types
Because those are the Secret types which commonly contain a tls.crt and ca.crt file.
Rejected for the following reasons:
- It is not possible to write a field selector that matches Opaque OR kubernetes.io/tls. See:
- It would exclude Secrets which have some other type but which contain tls.crt and ca.crt keys.

Testing
Given a cluster containing the following Secrets: