[Feature] VerifyImages - add validated images digests/fetched KMS pubkeys to cache #4732
Comments
I'll throw in support for this feature. We recently added in-memory caching to our custom-built image verification system because the number of calls it was making back to the registry was getting out of control, especially during cluster maintenance when we'd evict thousands of pods at once.
Hi @zswanson - can you describe the cache behaviors you are configuring for your clusters? Is this something you would like to contribute back upstream as a PR?
Our signature process is totally custom, but what we did was add a small TTL memstore in the verifier: it keeps image digests after approval and stores them for a configurable amount of time. At approval time, the controller first checks that cache before doing any of the more expensive signature checks. We used jellydator/ttlcache for the in-memory cache. There could be better packages for that; we just needed one in a hurry.
I'll chip in with our use case too. We cosign every image that runs in our cluster, and we turn over roughly 1,000 pods a day on average, but about 95% of those reuse images. We're using AWS KMS as an attestor and are starting to exceed the free-tier limit (20K requests/month). If it were possible to simply cache image SHAs that had been validated against the attestor (for a configurable TTL), we'd see a massive reduction in AWS KMS queries.
@zswanson - how do you deal with mutable tags?
@JimBugwadia a few ways
Hi @zswanson, I have created a PR that adds the basic structure of an image verify cache. We store the cache on a per-policy basis (i.e. we cache whether an image was verified by a given rule in a policy, essentially the result of applying a policy to an image ref). Entries are stored with a TTL. It would be really helpful if you could review it: #7890
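A minimal sketch of what such a per-policy cache key might look like, so that updating one policy can invalidate only its own entries (field names are illustrative, not the PR's actual types):

```go
package main

import "fmt"

// CacheKey scopes a verification result to the policy and rule
// that produced it, plus the image reference that was checked.
type CacheKey struct {
	Policy   string
	Rule     string
	ImageRef string
}

func (k CacheKey) String() string {
	return fmt.Sprintf("%s/%s/%s", k.Policy, k.Rule, k.ImageRef)
}

func main() {
	// Cache the boolean outcome of applying a rule to an image ref.
	results := map[CacheKey]bool{}
	k := CacheKey{
		Policy:   "check-signatures",
		Rule:     "verify-cosign",
		ImageRef: "ghcr.io/example/app@sha256:abc123",
	}
	results[k] = true
	fmt.Println(results[k]) // true: cached verdict for this policy/rule/image
}
```

Scoping entries this way means a policy change does not serve stale verdicts from the old rule definition.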
Hi, I was mostly commenting in support of the original issue. We are not currently doing image verification with Kyverno, so I can't do any testing; I would defer to @galsolo for that.
Problem Statement
The current Kyverno VerifyImages implementation does not cache validated images.
In a rapid-deployment, dynamic cluster where operators spawn the same image thousands of times a day, the same image is validated over and over again.
Solution Description
Add a cache to the VerifyImages process.
Alternatives
No response
Additional Context
Just one metric from today: a single image was deployed 15,000 times, when it could have been validated once.
Slack discussion
https://kubernetes.slack.com/archives/CLGR9BJU9/p1664009503764459
Research