
multiple fixes for helm chart #5522

Closed

Conversation

Keramblock
Contributor

Hi, if I want to use the ingress from your helm chart, that does not necessarily mean that I want to use Traefik, so I moved the option that enables Traefik to a separate place.
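For illustration, a hypothetical sketch of how the two toggles could be separated in values.yaml (keys and values are illustrative, not the chart's actual schema):

```yaml
# Hypothetical values.yaml layout: enabling the ingress no longer implies Traefik,
# so any ingress controller can be used.
ingress:
  enabled: true
  className: nginx          # e.g. ingress-nginx instead of Traefik
  hosts:
    - host: cvat.example.com

traefik:
  enabled: false            # Traefik is now opted into separately
```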

@Keramblock Keramblock requested a review from nmanovic as a code owner December 28, 2022 10:17
@Keramblock
Contributor Author

Keramblock commented Dec 28, 2022

Hi @azhavoro, could you please help me with the helm chart?

Right now I see that you have created 3 deployments there, and all of them use the same PVC for different things, e.g.:
https://github.com/opencv/cvat/blob/develop/helm-chart/templates/cvat_backend/worker_default/deployment.yml#L162
https://github.com/opencv/cvat/blob/develop/helm-chart/templates/cvat_backend/worker_low/deployment.yml#L162
https://github.com/opencv/cvat/blob/develop/helm-chart/templates/cvat_backend/server/deployment.yml#L207

That leads to a broken state in k8s with the default cloud RWO storage mode because of the conflict:

So is there any reason to use the PVC like that? Do the different servers use the same data, or is that a typo? I could fix that in the current MR, because in its current state it sadly does not work, at least for me.
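For context, a minimal sketch of the pattern that breaks on RWO storage (all names, images, paths and sizes here are illustrative, not the chart's actual manifests): several Deployments reference the same claim, which can only be attached to one node at a time under ReadWriteOnce.

```yaml
# Hypothetical illustration of the conflicting pattern, not the chart's actual manifests.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cvat-backend-data
spec:
  accessModes:
    - ReadWriteOnce            # default cloud storage classes typically only offer RWO
  resources:
    requests:
      storage: 20Gi
---
# server, worker_default and worker_low each define a Deployment like this one;
# with RWO the volume is attached to a single node, so pods scheduled elsewhere
# get stuck with a Multi-Attach / FailedAttachVolume error.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cvat-backend-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cvat-backend-server
  template:
    metadata:
      labels:
        app: cvat-backend-server
    spec:
      containers:
        - name: cvat-backend
          image: cvat/server                 # illustrative image name
          volumeMounts:
            - mountPath: /home/django/data   # illustrative mount path
              name: cvat-backend-data
      volumes:
        - name: cvat-backend-data
          persistentVolumeClaim:
            claimName: cvat-backend-data     # the same claim reused by all three Deployments
```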

@Keramblock
Contributor Author

Keramblock commented Dec 28, 2022

Also, what are they currently used for? Right now I don't see any info about them in the main or helm README =(

@Keramblock
Contributor Author

@nmanovic or maybe you know the answer? Could you please help me with these issues? As far as I can see, the helm chart is not usable as is and needs to be fixed =(

@Keramblock
Contributor Author

Keramblock commented Dec 28, 2022

Also fixed the broken OPA version.
UPD: reverted by the changes in develop

@Keramblock Keramblock changed the title allow to use all other ingresses multiple fixes for helm chart Dec 29, 2022
@azhavoro
Contributor

Also, what are they currently used for? Right now I don't see any info about them in the main or helm README =(

@Keramblock Hi, sorry for the late response
server, worker_default and worker_low use the same data, so in the general case RWX storage like NFS is needed here (or you have to use affinity to assign all CVAT pods to a single node). I agree that this should have been described in the README at least. Currently I have only one idea how to improve it: use an intermediate NFS server backed by RWO storage and create a corresponding NFS-backed PV and PVC that can be used by many CVAT pods, but I'm not sure about the performance in this case. What do you think?
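A minimal sketch of that idea, assuming an in-cluster NFS server exposed through a Service (the Service name, export path and sizes are illustrative): the NFS export is published as an RWX PV/PVC that many CVAT pods can mount, while the NFS server itself sits on an ordinary RWO disk.

```yaml
# Hypothetical sketch: expose an RWO disk through an in-cluster NFS server,
# then publish it as an NFS-backed PV/PVC that many CVAT pods can share.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: cvat-nfs-pv
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany          # NFS allows many clients even though the disk below is RWO
  nfs:
    server: nfs-server.default.svc.cluster.local   # assumed NFS Service name
    path: /
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cvat-backend-data
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""       # bind to the PV above instead of a dynamic provisioner
  volumeName: cvat-nfs-pv
  resources:
    requests:
      storage: 20Gi
```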

@Keramblock
Contributor Author

@azhavoro RWX is usually absent from default cloud k8s setups (GKE/AKS/etc.), so IMO we should avoid relying on it; we could use one RWO volume with readOnly flags for multiple reads, if that is suitable for these services.
Otherwise this chart will not be useful for a large portion (my estimate is >95%) of Kubernetes clusters worldwide.
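A minimal sketch of the read-only variant for a worker Deployment (names, images and paths are illustrative; whether read-only mounts of an RWO volume work across nodes still depends on the CSI driver, since attachment is per node for most cloud block storage):

```yaml
# Hypothetical sketch: only the server mounts the claim read-write, while
# workers mount the same claim with the readOnly flags set.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cvat-backend-worker-default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cvat-backend-worker-default
  template:
    metadata:
      labels:
        app: cvat-backend-worker-default
    spec:
      containers:
        - name: cvat-worker
          image: cvat/server                 # illustrative image name
          volumeMounts:
            - mountPath: /home/django/data   # illustrative mount path
              name: cvat-backend-data
              readOnly: true                 # the worker only reads the shared data
      volumes:
        - name: cvat-backend-data
          persistentVolumeClaim:
            claimName: cvat-backend-data
            readOnly: true
```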

@nmanovic nmanovic requested review from azhavoro and removed request for nmanovic March 27, 2023 09:54
@Keramblock
Contributor Author

Keramblock commented Apr 19, 2023

Hi @azhavoro, I see that this PR is effectively dead and I can't dedicate enough time to the CVAT helm chart right now, but I do need some changes and I believe they are necessary for the whole community. Could I create issues instead, so you/someone else could fix them?

UPD: created #6043 for that

@Keramblock
Contributor Author

This PR is too outdated, so I am closing it.
