Make configurable the Resource Quota enforcement at tenant level #50

Closed
bsctl opened this issue Aug 11, 2020 · 9 comments · Fixed by #356
Labels
enhancement New feature or request medium-priority Feature Request with medium-priority
Comments

@bsctl
Member

bsctl commented Aug 11, 2020

Describe the feature

It would be useful to make the resource quota enforcement at tenant level configurable via a CLI argument, e.g. --force-tenant-quota=true. The default should be true, leaving the cluster admin the option to disable the resource quota enforcement at tenant level.
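
Just as a sketch of where such a flag would live, assuming it is wired as a controller argument (the flag does not exist yet, and the container and image names below are illustrative only):

containers:
  - name: manager
    image: clastix/capsule            # illustrative image name
    args:
      - --force-tenant-quota=false    # proposed flag: skip the cross-namespace (tenant) quota check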

What would the new user story look like?

We would like to address the case where the cluster admin wants to assign resources only at namespace level, so that the total resource assignment (at tenant level) is statically calculated as:

assigned_resources_in_namespace x namespaceQuota

For example, with --force-tenant-quota=false, the cluster admin can assign 128GB of RAM per namespace and a namespace quota of 3 to a given tenant. The permitted amount of RAM for that tenant will then be 128GB x 3 = 384GB, but each namespace has a strict quota of 128GB.

Conversely, with --force-tenant-quota=true, the cluster admin can assign 384GB of RAM to the tenant and a namespace quota of 3. A single namespace may then consume up to 384GB, but this amount of RAM has to be shared among all the namespaces of the tenant.

Expected behavior

When --force-tenant-quota=true (default), we keep the current behaviour: a ResourceQuota is created at namespace level and a cross-namespace (tenant) quota check is performed.

When --force-tenant-quota=false, the ResourceQuota is still created at namespace level, but no check is performed at cross-namespace (tenant) level. Only the namespace-level check is performed, by the regular Kubernetes ResourceQuota admission controller.
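
For illustration, a minimal sketch of the per-namespace ResourceQuota being discussed, following the 128GB example above (the object name and namespace are hypothetical, not Capsule's actual naming):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: capsule-oil-0          # hypothetical name
  namespace: oil-production    # hypothetical namespace belonging to the tenant
spec:
  hard:
    limits.memory: 128Gi       # strict per-namespace limit, no cross-namespace (tenant) aggregation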

@bsctl bsctl added enhancement New feature or request needs-discussion No outline on the feature, discussion is welcome low-priority Feature Request with low-priority labels Aug 11, 2020
@bsctl bsctl changed the title Make the configurable the Resource Quota enforcement at tenant level Make configurable the Resource Quota enforcement at tenant level Aug 11, 2020
@prometherion
Member

Nice feature, but honestly I wouldn't go for a CLI argument: that would be a global option and enforced for any deployed Tenant.

Rather, it would be great to have this option at Tenant level, maybe as follows:

apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: oil
spec:
  resourceQuotas:
    -
      hard:
        limits.cpu: "8"
        limits.memory: 16Gi
        requests.cpu: "8"
        requests.memory: 16Gi
      scopes:
        - NotTerminating
      shared: true
    -
      hard:
        pods: "10"
      shared: true
    -
      hard:
        requests.storage: 100Gi
      shared: true  # this is the option

shared could be a boolean value: true would set up a ResourceQuota used and shared by all the Namespaces in the Tenant, while false would set up a ResourceQuota per single Namespace.

With this implementation, we can ensure a high degree of customization.

@bsctl
Member Author

bsctl commented Aug 12, 2020

Rather, it would be great to have this option at Tenant level

Do you mean:

  • "at tenant level", i.e. same behaviour for all quotas belonging to same tenant, or
  • "at quota level", i.e. different behaviour for different quotas of the same tenant?

From the snippet above, it looks like you mean the latter.

IMHO, the first option is preferable (same behaviour for the whole tenant) because the latter can be confusing for the end users: since they cannot access the Tenant resource, they will not be aware of which behaviour is defined for the different quotas (unless we put an annotation on each quota).

If we go with the first option, the tenant snippet would be the following:

apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: oil
spec:
  sharedResources: true  # this is the option
  resourceQuotas:
    -
      hard:
        limits.cpu: "8"
        limits.memory: 16Gi
        requests.cpu: "8"
        requests.memory: 16Gi
      scopes:
        - NotTerminating
    -
      hard:
        pods: "10"
    -
      hard:
        requests.storage: 100Gi

@prometherion
Member

With Tenant level I was referring to making this flag configurable per Tenant instance, rather than globally.

The YAML snippet you suggested is good enough: just wondering if sharedResources is self-explanatory enough 🤔

@prometherion prometherion added v0.1.0 To be implemented/hotfixed for 0.1.0 and removed needs-discussion No outline on the feature, discussion is welcome labels Sep 29, 2020
@bsctl
Member Author

bsctl commented Mar 6, 2021

@prometherion I had a discussion with a customer about disabling quota enforcement at tenant level. It would be nice to have this implemented. We should change the priority of this enhancement and consider implementing it in the next major release.

@bsctl bsctl added medium-priority Feature Request with medium-priority and removed low-priority Feature Request with low-priority labels Mar 6, 2021
@prometherion
Member

We should change the priority of this enhancement and consider implementing it in the next major release.

Are you referring to 0.1.0 or 0.5.0?

Getting this done should be dead simple: just wondering how we should change the API.

Honestly, using the .spec.resourceQuotas key as a list of ResourceQuotaSpec items has turned out to be a bad idea, since the new flag that would control this behaviour should be encapsulated in a proper key, as follows.

resourceQuotas:
  type: TenantLevel # enum: TenantLevel, NamespaceLevel
  items:
  - ...

But implementing this change would be breaking, so we have two options here.

  1. Using an annotation that will be converted when a new API version is delivered
  2. Adding a new attribute to the .spec struct, although it would add more unstructured data / non-grouped keys.

Feedback from the community would be appreciated too, so @GlassOfWhiskey @MaxFedotov @bsctl @gdurifw please share your thoughts or vote by reacting to this message with 1️⃣ or 2️⃣.

@bsctl
Member Author

bsctl commented Mar 8, 2021

@prometherion imho both the options are ok

@prometherion prometherion added this to the v0.1.0 milestone May 4, 2021
@prometherion prometherion removed the v0.1.0 To be implemented/hotfixed for 0.1.0 label May 4, 2021
@prometherion
Member

@MaxFedotov may I ask you for feedback on this?

Since we're delivering v1alpha2, I guess we could go for an annotation (resourcequota.capsule.clastix.io/type={TenantLevel,NamespaceLevel}).

Wondering if {TenantScoped,NamespacedScoped} would fit better as naming. 🤔
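
For illustration only, a sketch of how the proposed annotation might be applied to a Tenant (the spec layout follows the v1alpha1 snippets above; the annotation key and its values are just the proposal from this thread, not a confirmed API):

apiVersion: capsule.clastix.io/v1alpha1
kind: Tenant
metadata:
  name: oil
  annotations:
    # proposed: TenantLevel = shared, cross-namespace check; NamespaceLevel = per-namespace check only
    resourcequota.capsule.clastix.io/type: NamespaceLevel
spec:
  resourceQuotas:
    - hard:
        requests.storage: 100Gi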

@MaxFedotov
Collaborator

@prometherion what do you think about resourcequota.capsule.clastix.io/scope = {Tenant/Namespace}?
and then

resourceQuotas:
  scope: Tenant # enum: Tenant, Namespace
  items:
  - ...

@prometherion
Member

KISS: I like it! 😂
