Helm S3 Support #2558
Hm, personally I like the idea that ArgoCD could access Helm charts or repositories residing in S3 buckets. Maybe native S3 support for accessing repositories hosted on S3 could work in ArgoCD, i.e. by leveraging the Minio S3 client (https://github.com/minio/minio-go). This way, we could manage the credentials just like any other repository credentials (i.e. access key as username, secret key as password) and also use custom TLS certificates, in case someone hosts charts on private Minio instances exposing the S3 API. What do you think about this idea?
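If such native support existed, registering an S3-backed repository might look much like registering any other Helm repository today. The following is a purely hypothetical invocation (an `s3://` repo type is not supported by the `argocd` CLI), shown only to illustrate the credentials-as-username/password idea:

```sh
# Hypothetical: not a working command today, since ArgoCD has no s3:// support.
argocd repo add s3://my-chart-bucket/charts \
  --type helm \
  --name my-s3-charts \
  --username "$AWS_ACCESS_KEY_ID" \
  --password "$AWS_SECRET_ACCESS_KEY"
```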
Yeah, I was not sure how much demand there is for this feature, which is why I tried to go the S3 plugin route, which we use in our current CI process (and used with Helmsman prior to the Argo migration). If the team is able to add S3 protocol support via Minio, I would be super happy, as it would reduce the need for us to build a custom image for this feature. Edit: I would prefer that we also allow the possibility of using IAM roles for accessing the S3 repo (the Argo repo server would be granted read-only access); it would make key rotation less stressful for us.
Hm, I've just had a look at our Helm integration code, and it seems we use golang's HTTP client to fetch the repository index. On second thought, maybe using Chart Museum would be another viable choice, since it already supports many backends for storing the charts. Finally, I can understand the desire to use IAM roles for authentication & authorization, but I think managing those is a little out of scope for ArgoCD, since they're very vendor-specific. But that's just my opinion; there might be others. @alexec You know the Helm stuff way better than me, so I'd love to hear your thoughts on this topic.
That is an interesting thought; I'll play around with the permissions on the bucket. Perhaps Chart Museum is a better product, however that would require us to retool around it. The attractive piece about using S3 is that we no longer need to manage the basic auth credentials, relieving the stress of key rotation and secrets management. I am not advocating for Argo CD to have special provisions for IAM roles (any more than it has now); we would leverage the IAM roles granted to the pods via Kube2IAM/KIAM/IRSA.
So after updating the permissions on the bucket, it looks like using the HTTPS URL instead of the S3 URL works, provided the Helm S3 plugin is installed in our custom image. After adding the chart repo I was able to pull down an existing chart, and the UI displays all the charts.
Having a workaround is some good news; I hope the installation of charts works that way as well. I was thinking a little more about native S3 support meanwhile, and I guess it becomes quite difficult when dealing with fetching dependencies, although I have not yet looked at how (or if) the code currently resolves them. I'm not an avid user of Helm, so I'm unsure how that behaves. As a further workaround, instead of using a custom-built image, maybe an init container could install the plugin into the repo server.
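An init-container variant of that idea might look roughly like the following strategic-merge patch for the repo-server Deployment. Every image tag, path, and env var name here is an assumption that would need checking against the Helm version in use; this is a sketch, not a tested configuration:

```yaml
# Sketch only: install helm-s3 into a shared emptyDir at pod start
# instead of baking it into a custom image.
spec:
  template:
    spec:
      volumes:
        - name: helm-plugins
          emptyDir: {}
      initContainers:
        - name: install-helm-s3
          image: alpine/helm:2.16.1      # assumed image/tag providing a helm client
          command: ["sh", "-c"]
          args:
            - helm init --client-only &&
              helm plugin install https://github.com/hypnoglow/helm-s3.git &&
              cp -r "$(helm home)/plugins/." /plugins/
          volumeMounts:
            - name: helm-plugins
              mountPath: /plugins
      containers:
        - name: argocd-repo-server
          env:
            - name: HELM_PLUGIN          # plugin-dir env var; the name varies by Helm version
              value: /helm-plugins
          volumeMounts:
            - name: helm-plugins
              mountPath: /helm-plugins
```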
This is correct. Our code does not currently support S3. Argo CD is un-opinionated about which cloud provider you use, and importing an S3 library would break that principle. Our first preference would be to recommend people use HTTP rather than a custom protocol. Have you read this? https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingBucket.html#access-bucket-intro
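The virtual-hosted-style addressing described in the AWS docs means an `s3://` repo URL can be rewritten to plain HTTPS mechanically. A minimal sketch (the bucket and key names are illustrative, and regional endpoints such as `s3.eu-west-3.amazonaws.com` may differ):

```shell
# Rewrite s3://bucket/key to the virtual-hosted HTTPS form.
s3_to_https() {
  rest=${1#s3://}            # drop the s3:// scheme
  bucket=${rest%%/*}         # bucket is everything before the first slash
  key=${rest#"$bucket"}      # key is the remainder (including leading slash)
  printf 'https://%s.s3.amazonaws.com%s\n' "$bucket" "$key"
}

s3_to_https "s3://my-chart-bucket/stable/myapp"
# → https://my-chart-bucket.s3.amazonaws.com/stable/myapp
```

The resulting HTTPS URL is what you would register as the Helm repository URL in Argo CD, provided the bucket policy allows the relevant reads.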
Yeah, I figured that would be the case. The workaround works for us right now: register the S3 bucket via the HTTPS URL and ensure that the Helm S3 plugin is present in the Argo CD repo server. I tried using the init container, but since I would have needed a custom image containing the plugin anyway, it turned out cleaner to build the plugin directly into the image.
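For reference, a custom repo-server image along those lines might look like the following Dockerfile. The base image tag is a placeholder to pin to your own Argo CD version; the plugin URL is the helm-s3 upstream repository, and this sketch assumes a Helm 2 client in the base image:

```dockerfile
# Sketch only: pin versions that match your Argo CD / Helm versions.
FROM argoproj/argocd:v1.3.0

USER root
# git is needed by `helm plugin install` when installing from a repo URL
RUN apt-get update && apt-get install -y --no-install-recommends git && \
    rm -rf /var/lib/apt/lists/*

USER argocd
# Initialize the Helm 2 client home, then install the S3 protocol plugin
RUN helm init --client-only && \
    helm plugin install https://github.com/hypnoglow/helm-s3.git
```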
See #1105.
@theelthair it would be amazing if you could document this for other users; just edit this page: https://github.com/argoproj/argo-cd/edit/master/docs/user-guide/helm.md
Of course. I have some bandwidth coming up in which I can contribute an example.
@theeltahir do you have an updated way of doing this with Helm 3?
Based on discussions from argoproj#2558, argoproj#2789, argoproj#1105, and argoproj#1153: I've spent a lot of time putting everything together to get the integration with GCS working. I think it's worth having documentation for it.
Hi, after building this customized image, Argo CD still could not access the S3-hosted Helm repo, failing with 403 Forbidden. Do you know how it accesses IAM credentials? I have exec'd into the argo-repo container, which does have an IAM service account attached with the Amazon S3 full-access policy:
Any idea?
A few things to look for:
It's not working for me without further changes.
I've added a proxy (not great) but at least it worked!

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: s3-proxy
    app.kubernetes.io/name: argocd-s3-proxy
    app.kubernetes.io/part-of: argocd
  namespace: argocd
  name: argocd-s3-proxy
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: argocd-s3-proxy
  template:
    metadata:
      labels:
        app.kubernetes.io/name: argocd-s3-proxy
    spec:
      containers:
        - name: s3-proxy
          image: pottava/s3-proxy:2.0
          env:
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: argocd-aws-user
                  key: AWS_ACCESS_KEY_ID
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: argocd-aws-user
                  key: AWS_SECRET_ACCESS_KEY
            - name: AWS_REGION
              value: eu-west-3
            - name: AWS_S3_BUCKET
              value: qonto-helm-charts
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: 50Mi
              cpu: 10m
            limits:
              memory: 100Mi
              cpu: 50m
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: s3-proxy
    app.kubernetes.io/name: argocd-s3-proxy
    app.kubernetes.io/part-of: argocd
  name: argocd-s3-proxy
  namespace: argocd
spec:
  ports:
    - name: server
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app.kubernetes.io/name: argocd-s3-proxy
```

Now in the UI, I can add the repository with the proxy service URL. To check dependency resolution I exec'd into the repo server:

```console
argocd@argocd-repo-server-5c5ff5c684-wmlsr:~$ cd /tmp/[TEMPORARY_GIT_PROJECT
argocd@argocd-repo-server-5c5ff5c684-wmlsr:/tmp/https:XXXX/applications/pdf$ helm init --client-only
argocd@argocd-repo-server-5c5ff5c684-wmlsr:/tmp/https:XXXX/applications/pdf$ helm2 dependency build
Hang tight while we grab the latest from your chart repositories...
...Unable to get an update from the "local" chart repository (http://127.0.0.1:8879/charts):
	Get http://127.0.0.1:8879/charts/index.yaml: dial tcp 127.0.0.1:8879: connect: connection refused
...Successfully got an update from the "stable" chart repository
Update Complete.
Saving 1 charts
Downloading XXX from repo http://XXXX
Deleting outdated charts
```

EDIT: Using Helm 3 fixed the issue altogether. The S3 proxy workaround still applies.
Yep.
Hi @victorboissiere, you wrote that you switched to Helm 3, but you didn't write where :)
Could you please describe how you resolved the problem in a little more detail?

EDIT: If you push the chart with a relative path, then you will be able to create an application in ArgoCD. However, if you push the chart with a relative path, you can't delete the old chart afterwards.
Hey folks! Any plan on this one? It would be super to have this support; it would simplify a lot.
Summary
Repo server should leverage the Helm CLI for some of the calls to enable better Helm plugin support
Motivation
Currently our team uses S3 to store Helm charts. We are excited to use the first-class Helm support features slated to arrive in Argo CD 1.3. After building a custom image with the Helm S3 plugin, adding an S3 bucket as a Helm repo results in the following errors:
After digging into the code further, it looks like the GetIndex method acquires the chart index via HTTP calls and not the Helm CLI:
argo-cd/util/helm/client.go
Lines 153 to 201 in 5706a17
This leads me to believe that our image with the Helm S3 plugin is ineffective, as the HTTP client does not understand S3 calls.
Proposal
I have not spent enough time with the code base to know if it is feasible, but if we leveraged the Helm CLI for some of these calls, it might allow the Helm S3 plugin to function properly.
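The idea being that if the repo server built and executed an ordinary `helm` command line instead of issuing raw HTTP requests itself, any installed downloader plugins (like helm-s3) would get a chance to handle the repository URL. A sketch of the command construction; the function and argument names here are hypothetical, not anything in the Argo CD code base:

```shell
# Hypothetical: build the command the repo server could exec instead of
# fetching over HTTP itself. With Helm 2 this is `helm fetch`;
# Helm 3 renamed it to `helm pull`.
helm_fetch_cmd() {
  repo_url=$1; chart=$2; version=$3
  printf 'helm fetch --repo %s --version %s %s' "$repo_url" "$version" "$chart"
}

helm_fetch_cmd "s3://my-chart-bucket/stable" "myapp" "1.2.3"
# → helm fetch --repo s3://my-chart-bucket/stable --version 1.2.3 myapp
```

Because the `s3://` URL is passed through to the Helm CLI untouched, protocol handling stays the plugin's problem rather than Argo CD's.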