QCL turnup: add profile options based off linked earth #2332

Merged
merged 13 commits on Mar 14, 2023
158 changes: 158 additions & 0 deletions config/clusters/qcl/common.values.yaml
@@ -46,6 +46,7 @@ jupyterhub:
- jtkmckenna
- pnasrat
JupyterHub:
enable_auth_state: true
authenticator_class: github
GitHubOAuthenticator:
populate_teams_in_auth_state: true
@@ -54,3 +55,160 @@ jupyterhub:
- quantifiedcarbon:JupyterHub
scope:
- read:org
singleuser:
image:
# pangeo/pangeo-notebook is maintained at: https://github.com/pangeo-data/pangeo-docker-images
name: pangeo/pangeo-notebook
# pullPolicy is set to "Always" because we use the tag "latest", which
# changes over time.
pullPolicy: Always
tag: "latest"
profileList:
# NOTE: About node sharing
#
# CPU/memory requests and limits are still under active consideration. This
# profile list is set up for node sharing as discussed in
# https://github.com/2i2c-org/infrastructure/issues/2121.
#
# - Memory requests differ from the description: they are based on what
#   remains allocatable in k8s, minus 1GiB of overhead for misc system
#   pods, and convert the GB figure in the description to GiB for
#   mem_guarantee. See
#   https://cloud.google.com/kubernetes-engine/docs/concepts/plan-node-sizes.
# - CPU requests are set to 10% of the CPU figure in the description
#   (see the worked example below).
#
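# Worked example of the rules above, using the "Small" profile's
# n2-highmem-4 nodes: the full-node choice guarantees 26.738G of memory,
# and each smaller choice is a power-of-two fraction of it (e.g.
# mem_8 -> 26.738 / 4 ≈ 6.684G). CPU guarantees are 10% of the share's
# nominal CPU (e.g. the ~1.0 CPU share gets cpu_guarantee: 0.1).
#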
- display_name: "Small: up to 4 CPU / 32 GB RAM"
description: &profile_list_description "Start a container with at least a chosen share of capacity on a node of this type"
slug: small
default: true
allowed_teams:
- 2i2c-org:hub-access-for-2i2c-staff
- quantifiedcarbon:JupyterHub
profile_options:
requests:
# NOTE: Node share choices are in active development, see comment
# next to profileList: above.
display_name: Node share
choices:
mem_1:
default: true
display_name: ~1 GB, ~0.125 CPU
kubespawner_override:
mem_guarantee: 0.836G
cpu_guarantee: 0.013
mem_2:
display_name: ~2 GB, ~0.25 CPU
kubespawner_override:
mem_guarantee: 1.671G
cpu_guarantee: 0.025
mem_4:
display_name: ~4 GB, ~0.5 CPU
kubespawner_override:
mem_guarantee: 3.342G
cpu_guarantee: 0.05
mem_8:
display_name: ~8 GB, ~1.0 CPU
kubespawner_override:
mem_guarantee: 6.684G
cpu_guarantee: 0.1
mem_16:
display_name: ~16 GB, ~2.0 CPU
kubespawner_override:
mem_guarantee: 13.369G
cpu_guarantee: 0.2
mem_32:
display_name: ~32 GB, ~4.0 CPU
kubespawner_override:
mem_guarantee: 26.738G
cpu_guarantee: 0.4
kubespawner_override:
cpu_limit: null
mem_limit: null
node_selector:
node.kubernetes.io/instance-type: n2-highmem-4
- display_name: "Medium: up to 16 CPU / 128 GB RAM"
description: *profile_list_description
slug: medium
allowed_teams:
- 2i2c-org:hub-access-for-2i2c-staff
- quantifiedcarbon:JupyterHub
profile_options:
requests:
# NOTE: Node share choices are in active development, see comment
# next to profileList: above.
display_name: Node share
choices:
mem_1:
display_name: ~1 GB, ~0.125 CPU
kubespawner_override:
mem_guarantee: 0.903G
cpu_guarantee: 0.013
mem_2:
display_name: ~2 GB, ~0.25 CPU
kubespawner_override:
mem_guarantee: 1.805G
cpu_guarantee: 0.025
mem_4:
default: true
display_name: ~4 GB, ~0.5 CPU
kubespawner_override:
mem_guarantee: 3.611G
cpu_guarantee: 0.05
mem_8:
display_name: ~8 GB, ~1.0 CPU
kubespawner_override:
mem_guarantee: 7.222G
cpu_guarantee: 0.1
mem_16:
display_name: ~16 GB, ~2.0 CPU
kubespawner_override:
mem_guarantee: 14.444G
cpu_guarantee: 0.2
mem_32:
display_name: ~32 GB, ~4.0 CPU
kubespawner_override:
mem_guarantee: 28.887G
cpu_guarantee: 0.4
mem_64:
display_name: ~64 GB, ~8.0 CPU
kubespawner_override:
mem_guarantee: 57.775G
cpu_guarantee: 0.8
mem_128:
display_name: ~128 GB, ~16.0 CPU
kubespawner_override:
mem_guarantee: 115.549G
cpu_guarantee: 1.6
kubespawner_override:
cpu_limit: null
mem_limit: null
node_selector:
node.kubernetes.io/instance-type: n2-highmem-16
- display_name: "n2-highcpu-32: 32 CPU / 32 GB RAM"
description: "Start a container on a dedicated node"
slug: "n2_highcpu_32"
allowed_teams:
- 2i2c-org:hub-access-for-2i2c-staff
- quantifiedcarbon:JupyterHub
kubespawner_override:
node_selector:
node.kubernetes.io/instance-type: n2-highcpu-32
mem_guarantee: 27G
cpu_guarantee: 3.2
cpu_limit: null
mem_limit: null
# TODO(pnasrat): check this value once the node is running
- display_name: "n2-highcpu-96: 96 CPU / 96 GB RAM"
description: "Start a container on a dedicated node"
slug: "n2_highcpu_96"
allowed_teams:
- 2i2c-org:hub-access-for-2i2c-staff
- quantifiedcarbon:JupyterHub
kubespawner_override:
node_selector:
node.kubernetes.io/instance-type: n2-highcpu-96
cpu_limit: null
mem_limit: null
mem_guarantee: 86G
cpu_guarantee: 9.6
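
The node-share arithmetic described in the comments above can be generated rather than hand-computed. The sketch below is a hypothetical helper (not part of this PR, and not an existing script in the repo) that builds a `choices` mapping of the same shape from a machine type's advertised memory/CPU and an assumed usable-memory figure; the values in the config may differ slightly in rounding.

```python
import math


def node_share_choices(node_mem_gb: int, node_cpu: int, usable_mem_gb: float) -> dict:
    """Build a profile_options ``choices`` mapping for power-of-two node shares.

    node_mem_gb:   advertised memory of the machine type (e.g. 32 for n2-highmem-4)
    node_cpu:      advertised CPU count of the machine type (e.g. 4)
    usable_mem_gb: memory left for user pods, i.e. roughly what Kubernetes
                   reports as allocatable minus ~1 GiB for misc system pods
    """
    choices = {}
    # Shares go 1 GB, 2 GB, 4 GB, ... up to the full advertised node memory.
    for exponent in range(int(math.log2(node_mem_gb)) + 1):
        share_gb = 2 ** exponent
        fraction = share_gb / node_mem_gb
        share_cpu = node_cpu * fraction
        choices[f"mem_{share_gb}"] = {
            "display_name": f"~{share_gb} GB, ~{share_cpu:g} CPU",
            "kubespawner_override": {
                # Memory guarantee is the proportional slice of usable memory.
                "mem_guarantee": f"{usable_mem_gb * fraction:.3f}G",
                # CPU guarantee is 10% of the share's nominal CPU.
                "cpu_guarantee": round(share_cpu * 0.10, 3),
            },
        }
    return choices


if __name__ == "__main__":
    # With usable_mem_gb=26.738 this lands close to the "Small" profile above,
    # e.g. the 8 GB share gets a mem_guarantee of about 6.68G and cpu_guarantee 0.1.
    for name, choice in node_share_choices(32, 4, 26.738).items():
        print(name, choice["kubespawner_override"])
```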