Make configurable the Resource Quota enforcement at tenant level #50
Comments
Nice feature, but honestly I wouldn't go for a CLI argument: that would be a global option, enforced for any deployed Tenant. Rather, it would be great to have this option at Tenant level, maybe as follows:

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: oil
spec:
  resourceQuotas:
  - hard:
      limits.cpu: "8"
      limits.memory: 16Gi
      requests.cpu: "8"
      requests.memory: 16Gi
    scopes:
    - NotTerminating
    shared: true
  - hard:
      pods: "10"
    shared: true
  - hard:
      requests.storage: 100Gi
    shared: true # this is the option
```

With this implementation, we can ensure a high degree of customization.
Do you mean:
- a single flag at Tenant level, valid for all the quotas, or
- a flag per single quota item?
From the snippet above, it looks like you mean the latter. IMHO, the first option is preferable (same behaviour for the whole tenant), because the latter can be confusing for the final users: since they do not access the Tenant resource, they will not be aware of which behaviour is defined for the different quotas (unless we put an annotation in each quota). If we go with the first option, the tenant snippet would be the following:

```yaml
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: oil
spec:
  sharedResources: true # this is the option
  resourceQuotas:
  - hard:
      limits.cpu: "8"
      limits.memory: 16Gi
      requests.cpu: "8"
      requests.memory: 16Gi
    scopes:
    - NotTerminating
  - hard:
      pods: "10"
  - hard:
      requests.storage: 100Gi
```
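To illustrate the annotation idea mentioned above, the ResourceQuota replicated into each tenant namespace could expose the enforcement mode to users. A minimal sketch, assuming a hypothetical annotation key and object name (not an actual project convention):

```yaml
# Sketch: the ResourceQuota replicated into a tenant namespace, annotated so
# tenant users can see whether it is enforced tenant-wide or per namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: capsule-oil-0          # hypothetical name for the first quota item
  namespace: oil-production    # hypothetical tenant namespace
  annotations:
    capsule.clastix.io/shared: "true"  # hypothetical key exposing the mode
spec:
  hard:
    limits.cpu: "8"
    limits.memory: 16Gi
    requests.cpu: "8"
    requests.memory: 16Gi
  scopes:
  - NotTerminating
```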
With the Tenant level, I was referring to making this flag configurable per instance rather than globally. The YAML snippet you suggested is good enough: just wondering if …
@prometherion I had a discussion with a customer about disabling quota enforcement at tenant level. It would be nice to have this implemented. We should change the priority of this enhancement and consider implementing it in the next major release.
Are you referring to 0.1.0 or 0.5.0? Getting this done should be dead simple: just wondering how we should change the API. Honestly, using the following looks reasonable:

```yaml
resourceQuotas:
  type: TenantLevel # enum: TenantLevel, NamespaceLevel
  items:
  - ...
```

But implementing this change would be breaking, so we've got two options here. Having feedback from the community would be appreciated too, so @GlassOfWhiskey @MaxFedotov @bsctl @gdurifw please share your thoughts, or vote by reacting to this message with 1️⃣ or 2️⃣
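For context, a fuller version of that enum-based shape, folded into the Tenant manifest from the earlier snippets: a sketch of the proposal, not a verified final API.

```yaml
# Sketch: the enum-based API change proposed above, applied to the full Tenant.
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: oil
spec:
  resourceQuotas:
    type: TenantLevel # enum: TenantLevel, NamespaceLevel
    items:
    - hard:
        limits.cpu: "8"
        limits.memory: 16Gi
      scopes:
      - NotTerminating
    - hard:
        pods: "10"
```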
@prometherion imho both the options are ok
@MaxFedotov may I ask you for feedback on this? Since we're delivering …, wondering if …
@prometherion what do you think about …
KISS: I like it! 😂
Describe the feature
It would be useful to make the resource quota enforcement at tenant level configurable via a CLI argument, e.g. `--force-tenant-quota=true`. The default should be `true`, leaving the cluster admin the option to disable the quota enforcement at tenant level.

What would the new user story look like?
We would like to address the case where the cluster admin wants to assign resources only at namespace level, so the total resource assignment (at tenant level) is statically calculated as `assigned_resources_in_namespace x namespaceQuota`.

For example, with `--force-tenant-quota=false`, the cluster admin can assign 128GB of RAM per namespace and a namespace quota of 3 to a given tenant. So the permitted amount of RAM for that tenant will be 128GB x 3 = 384GB, but each namespace has a strict quota of 128GB. On the opposite, with `--force-tenant-quota=true`, the cluster admin can assign 384GB to the tenant and a namespace quota of 3. So the permitted amount of RAM for a single namespace will be up to 384GB, but this amount of RAM has to be shared between all namespaces.
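As a sketch of the first scenario, assuming the v1alpha1 Tenant API shown in the snippets above (the `namespaceQuota` field and quota layout are taken from those snippets, not a verified final API):

```yaml
# Sketch: with the proposed --force-tenant-quota=false, each of the tenant's
# 3 namespaces gets its own 128Gi quota; total usage caps at 3 x 128Gi = 384Gi.
apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: oil
spec:
  namespaceQuota: 3        # at most 3 namespaces for this tenant
  resourceQuotas:
  - hard:
      requests.memory: 128Gi
      limits.memory: 128Gi
```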
Expected behavior

When `--force-tenant-quota=true` (default), we keep the current behaviour: creation of a ResourceQuota at namespace level plus the cross-namespace (tenant) quota check. When `--force-tenant-quota=false`, the creation of the ResourceQuota at namespace level is still in place, but there is no check at cross-namespace (tenant) level: only the namespace-level check performed by the regular Kubernetes ResourceQuota admission controller.
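To show where the proposed flag would live, here is a minimal sketch of the controller Deployment arguments. The flag exists only as a proposal in this issue, and the manifest layout and image reference are illustrative:

```yaml
# Hypothetical: passing the proposed flag to the Capsule controller manager.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: capsule-controller-manager
  namespace: capsule-system
spec:
  selector:
    matchLabels:
      control-plane: controller-manager
  template:
    metadata:
      labels:
        control-plane: controller-manager
    spec:
      containers:
      - name: manager
        image: clastix/capsule   # illustrative image reference
        args:
        - --force-tenant-quota=false  # proposed flag: keep per-namespace quotas, skip the tenant-wide check
```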