Create k8s config for new spack-gantry service #826
Conversation
We should be able to use a k8s ServiceAccount associated with an IAM role to grant the pod access to the S3 bucket, instead of creating long-lived credentials and encoding them as a secret. I can set up that role/service account + the S3 bucket.
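For context, a minimal sketch of the pattern being proposed here (EKS IAM Roles for Service Accounts); the names and role ARN are hypothetical placeholders:

```yaml
# Hypothetical names/ARN; sketch of the IRSA pattern described above.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: spack-gantry
  namespace: spack
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/spack-gantry-s3
```

The pod then opts in via `serviceAccountName: spack-gantry`, and the AWS SDK inside the pod obtains short-lived credentials instead of relying on a long-lived secret.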
I think using a service account instead of storing IAM credentials eliminates the need for this, but let us know if you still need this info and we can get it for you.
Thanks for setting this up!
We also need to store the GitLab API token alongside the S3 creds, if secrets are the right way to do this.

I updated some of the files in response to your comments; I appreciate the feedback. One question: I'm seeing this line on some of the deployments:
Should I add that to this PR as well?
I added that here - 655968a. Let me know if you need a higher scope/more scopes than
Yes, that should be added (that's needed for Karpenter to schedule the pods correctly).
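The specific line isn't quoted above, so purely as a hypothetical illustration, Karpenter scheduling constraints usually surface as a nodeSelector (or toleration) on the pod template:

```yaml
# Hypothetical sketch; the actual key/value used by these deployments may differ.
spec:
  template:
    spec:
      nodeSelector:
        karpenter.sh/capacity-type: on-demand  # well-known Karpenter label
```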
Thanks for adding the API token; that level of access is perfect. Once the webhook is set up, we'll also need to store the webhook secret.
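As a sketch of how such secrets are typically surfaced to the app (all names hypothetical), a Secret key can be injected as an environment variable in the container spec:

```yaml
# Hypothetical names; standard k8s pattern for consuming a Secret as an env var.
env:
  - name: GITLAB_API_TOKEN
    valueFrom:
      secretKeyRef:
        name: spack-gantry-secrets
        key: gitlab-api-token
```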
Hey @mvandenburgh, this is almost ready. Would it be possible to add a webhook for all job status changes? I am not sure what the FQDN will be inside the cluster, but it should be pointed to
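For reference, in-cluster DNS for a Service follows the standard `<service>.<namespace>.svc.cluster.local` pattern; assuming a Service named `spack-gantry` in a `spack` namespace (both names hypothetical), the webhook URL would resolve against:

```
spack-gantry.spack.svc.cluster.local
```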
Thanks! Is there a webhook secret that gets set? If not, I'll need to remove that as a required env variable in the app.
GitLab doesn't provide one, no. And since the service isn't exposed publicly, it is safe to allow unauthenticated requests.
Sweet! Thanks @mvandenburgh :D
Thanks! I'll do some last-minute checks and mark this as ready. Once deployed, would it be possible to get CLI access to the pods via kubectl? I imagine there will be some unexpected issues and it might be easier for me to debug this way. If this doesn't fit with the way you guys access the cluster, no worries.
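If access is granted, a minimal sketch of the kind of debugging this enables (the namespace and label are assumptions, not the cluster's actual values):

```sh
# Inspect, tail logs from, and open a shell inside the gantry pods.
kubectl -n spack get pods -l app=spack-gantry
kubectl -n spack logs deploy/spack-gantry --tail=100
kubectl -n spack exec -it deploy/spack-gantry -- /bin/sh
```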
- volumes.configMap does not need a namespace field, as it will inherit the pod's namespace
- update subPath for litestream config given its location in `terraform/modules/spack/spack_gantry.tf`
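A sketch of the resulting pod-spec fragment (names hypothetical), showing a configMap volume with no namespace field and a subPath mount that exposes a single key as the litestream config file:

```yaml
# Hypothetical names; fragment of a pod spec illustrating both notes above.
volumes:
  - name: litestream-config
    configMap:
      name: spack-gantry-config  # no namespace field; the pod's is used
containers:
  - name: spack-gantry
    volumeMounts:
      - name: litestream-config
        mountPath: /etc/litestream.yml
        subPath: litestream.yml  # mount only this key as a file
```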
Should be ready for merging now
Do you have an IAM user on AWS?
Not at the moment
This is my first pass at deploying the dynamic allocation service into the k8s cluster. Please let me know if I'm missing something important or if you see any glaring issues.
One thing that's missing is the secrets; is there a process for generating the `sealed-secrets.yaml` file? We will also need to create an S3 bucket along with creds; docs here: https://litestream.io/guides/s3/
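If sealed secrets are the mechanism in use, a minimal sketch of generating that file with kubeseal (the secret name, namespace, and key are hypothetical):

```sh
# Create a regular Secret locally, then encrypt it for the cluster's
# sealed-secrets controller; only the sealed file gets committed.
kubectl create secret generic spack-gantry-secrets \
  --namespace spack \
  --from-literal=gitlab-api-token=<token> \
  --dry-run=client -o yaml \
  | kubeseal --format yaml > sealed-secrets.yaml
```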
todo:
closes spack/spack-gantry#7
@alecbcs fyi