Add utoronto staging hub #866

Merged · 6 commits · Dec 6, 2021
config/hubs/utoronto.cluster.yaml: 125 additions & 0 deletions
@@ -0,0 +1,125 @@
name: utoronto
provider: kubeconfig
kubeconfig:
  file: secrets/utoronto.yaml
support:
  config:
    prometheus:
      server:
        resources:
          requests:
            cpu: 0.5
            memory: 4Gi
          limits:
            cpu: 2
            memory: 16Gi
    grafana:
      ingress:
        hosts:
          - grafana.utoronto.2i2c.cloud

Review comment (Member): DNS change needs to be done for this.

        tls:
          - secretName: grafana-tls
            hosts:
              - grafana.utoronto.2i2c.cloud
hubs:
  - name: staging
    domain: staging.utoronto.2i2c.cloud
    template: basehub
    auth0:
      enabled: false
    config: &utorontoHubConfig
      azureFile:
        enabled: true
      nfs:
        enabled: false
      shareCreator:
        enabled: false
      jupyterhub:
        custom:
          homepage:
            templateVars:
              org:
                name: University of Toronto
                logo_url: https://raw.githubusercontent.com/utoronto-2i2c/homepage/master/extra-assets/images/home-hero.png
                url: https://www.utoronto.ca/
              designed_by:
                name: 2i2c
                url: https://2i2c.org
              operated_by:
                name: 2i2c
                url: https://2i2c.org
              funded_by:
                name: University of Toronto
                url: https://www.utoronto.ca/
        singleuser:
          image:
            name: quay.io/2i2c/utoronto-image
            tag: 83a724f5b829
          storage:
            type: none
            extraVolumes:
              - name: home
                persistentVolumeClaim:
                  claimName: home-azurefile
        scheduling:
          userPlaceholder:
            enabled: false
            replicas: 0
          userScheduler:
            resources:
              requests:
                cpu: 0.1
                memory: 512Mi
              limits:
                memory: 512Mi
        proxy:
          chp:
            resources:
              requests:
                cpu: 0.1
                memory: 128Mi
              limits:
                memory: 512Mi
          traefik:
            resources:
              requests:
                cpu: 0.1
                memory: 256Mi
              limits:
                memory: 512Mi

Review thread on lines +71 to +89:
Review comment (Member Author):
These are some resource limits from the old utoronto config. I wasn't sure whether to still use those or go with the default values in the basehub template.

Review comment (Member):
IMO we should stick with the current UToronto configuration values.

Review comment (Contributor):
Mmm... can I suggest otherwise, maybe? 😉
I mean, do we know if these values are the proper ones for the current incarnation of our hubs from this repo?
If we set the values without validation (and only because we inherit them), we may be setting conditions that are not actually needed (nor optimal), and that makes the whole thing more complex than it should be.
Maybe we can trace back why these values were needed, and infer whether we need to tweak them here as well, before actually doing it?

Review comment (Member Author):
That was my feeling too @damianavila, especially since I believe the utoronto hub was the first version of our infra.

> Maybe we can trace back why these values were needed and infer if we need to tweak them here as well before actually doing it?

I will dig into it to try to find out whether these were tweaked afterwards or were part of the initial deployment.

Review comment (@choldgraf, Member, Dec 3, 2021):
Ah, I was making an assumption here; let me re-phrase my suggestion:

  • If they have explicitly requested those limits, then we should keep them.
  • If they have not, and this was just set by us without an explicit rationale for special-casing it, then we should go with whatever our other configurations use.

From this conversation it sounds like we were the ones that chose those limits, not UToronto, so if that's true then I'm good w/ option 2.

Review comment (Member Author):
Looking at the commit history, it seems that the limits were set by us.
But I'm not 100% sure this means that there wasn't a request from utoronto, or a limitation observed at some point.

I would love it if @yuvipanda could confirm this 👀

Here is a side-by-side diff of the resource values; hopefully this makes it easier to follow:
[screenshot: resource-diffs]
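
For concreteness, "option 2" above would simply mean dropping the inherited resource overrides from this file so the basehub chart defaults apply. A minimal, illustrative sketch of what that part of the config would shrink to (an assumption about the outcome, not part of this diff):

      jupyterhub:
        scheduling:
          userPlaceholder:
            enabled: false
            replicas: 0
        # no userScheduler, proxy.chp, proxy.traefik, or hub resource overrides:
        # the requests/limits defined by the basehub template would be used instead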

        hub:
          resources:
            requests:
              cpu: 0.3
              memory: 512Mi
            limits:
              memory: 2Gi
          allowNamedServers: true
          readinessProbe:
            enabled: false
          config:
            Authenticator:
              enable_auth_state: false
              admin_users:
                - 7c76d04b-2a80-4db1-b985-a2d2fa2f708c
                - 09056164-42f5-4113-9fd7-dd852e63ff1d
                - adb7ebad-9fb8-481a-bc2c-6c0a8b6de670
            JupyterHub:
              authenticator_class: azuread
            AzureAdOAuthenticator:
              username_claim: oid
              login_service: "University of Toronto ID"
              oauth_callback_url: https://staging.utoronto.2i2c.cloud/hub/oauth_callback
              tenant_id: 78aac226-2f03-4b4d-9037-b46d56c55210
          extraConfig:
            10-dynamic-subpath: |
              import os
              pod_namespace = os.environ['POD_NAMESPACE']
              # FIXME: This isn't setting up _shared dirs properly
              c.KubeSpawner.volume_mounts = [
                {
                  "mountPath": "/home/jovyan",
                  "name": "home",
                  "subPath": f"{pod_namespace}/{{username}}"
                },
              ]
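
Side note on the &utorontoHubConfig anchor above: anchoring the whole config block suggests it is meant to be reused by a later hub entry (for example a production hub) via a YAML alias. A minimal sketch of how that reuse might look; the prod entry and its domain are hypothetical and not part of this PR:

  # Hypothetical second hub entry (not in this diff) reusing the anchored config
  - name: prod
    domain: prod.utoronto.2i2c.cloud  # assumed domain, for illustration only
    template: basehub
    auth0:
      enabled: false
    config: *utorontoHubConfig  # alias: reuses everything defined under &utorontoHubConfig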
secrets/config/hubs/utoronto.cluster.yaml: 23 additions & 0 deletions
@@ -0,0 +1,23 @@
hubs:
  - name: ENC[AES256_GCM,data:IAeq1UJtaQ==,iv:uw3u83ri3uxGI0kgRsg6F+eEFFf4p6OZXGJRicLMl00=,tag:2OZjavRba3MlN+0CJxVzaA==,type:str]
    config:
      basehub:
        jupyterhub:
          hub:
            config:
              AzureAdOAuthenticator:
                client_id: ENC[AES256_GCM,data:rdlhojMF1P28Vj3SkPSDjuO0qwo5WugMlQvIRA5C6/lyw/0Z,iv:m0/O3JbHq8+3ZW0e3Ha/VsI1djMjT1eVJupEcuEHNag=,tag:WC7Imb3ZcDv+eEynBdKTng==,type:str]
                client_secret: ENC[AES256_GCM,data:6WAV6hc4aJFhlLeXsW95s4gsYnamKN5xRDPqcfD7Df+vHQ==,iv:tAwyHQmZ9kWRw9mPh0Tl3g+XFlkzXAbS+86lI0UIKAg=,tag:BhStdrmpyq5uD6OiQdt6cQ==,type:str]
sops:
  kms: []
  gcp_kms:
    - resource_id: projects/two-eye-two-see/locations/global/keyRings/sops-keys/cryptoKeys/similar-hubs
      created_at: '2021-10-20T15:32:22Z'
      enc: CiQA4OM7eOPsuFFNlwtrI+0j6+kZJFl/p2t0LC6ePKzDQ02zTH0SSQC9ZQbLbGzX4ZOsrFbkYAC9644szTsZiwVKRMdwNTzSpSizEwgWN81QUfZaGjoUUCkBEH5409uO9+nq6QHxQ6BSJtBDRQ6t/0w=
  azure_kv: []
  hc_vault: []
  lastmodified: '2021-12-02T15:38:24Z'
  mac: ENC[AES256_GCM,data:fkAlJcNvL108DIbxtSwMYyeFeS7YNe6dYvBWxK6XaAR/b+JlMzXtHUml7FkVWNVNOXFzoiW7PUT4M0ku1E3VW00TmxK0wUCoGC9a/ZJ8avBU8ly7+856tFq6mcOqi6n1w0+ahTZ/oJB/kI7ZCV9S+gG3pJyTU7BXD3urVPtBh2o=,iv:FOtsiZ/E058YUnDcUlZ0xRVYz4bqczMeyj4JnbzjDuw=,tag:zJJkVzqiwrzGLeQfMkGkmw==,type:str]
  pgp: []
  unencrypted_suffix: _unencrypted
  version: 3.7.1
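
The sops metadata above shows this file is encrypted against a GCP KMS key. For reference, a minimal sketch of the kind of .sops.yaml creation rule that would produce it; the path_regex is an assumption, and the repository's actual rules may differ:

creation_rules:
  # Assumed rule: encrypt files under secrets/ with the GCP KMS key referenced
  # in the sops metadata above
  - path_regex: secrets/.*
    gcp_kms: projects/two-eye-two-see/locations/global/keyRings/sops-keys/cryptoKeys/similar-hubs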