Add more environment variables to configure CVAT container #445
@ppanczyk, this is a very specific request and I'm not sure we will be able to do it without your support. We are not trying to run CVAT on Kubernetes at the moment, so it would be extremely difficult for us to test a solution for your use case. Could you please send us a PR? We will be happy to integrate it.
Hey, I am having the same issue. It is about the two services that CVAT requires (cvat_db:5432 and cvat_redis:6379). Kubernetes does not allow underscores (_) in DNS names. Is there any way to change the default names of the services that the CVAT container waits for before starting?
I am having the same issue.
@amiralipour have you found a workaround for this?
@inealey
@amiralipour, could you please prepare a PR to improve CVAT?
@amiralipour could you share your steps for deploying CVAT on k8s in more detail? It would be greatly appreciated.
I am looking to get CVAT running in my K8s cluster too, and I am facing a few issues regarding the frontend and backend connection. Especially the URL schemes hardcoded at build time are not ideal. Has anyone managed to dynamically insert the backend address with env vars at runtime? Why did you define:
@Langhalsdino, in the case of docker-compose you can easily specify these environment variables in docker-compose.yml. I have no idea how to do that with a K8s cluster.
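For illustration, a minimal sketch of what that could look like in a docker-compose override file; the variable names follow those discussed later in this thread and assume a patched CVAT, not the stock configuration:

```yaml
# docker-compose.override.yml -- hypothetical sketch, not the stock CVAT file
services:
  cvat:
    environment:
      ALLOWED_HOSTS: "my.fancy.domain.com"
      CVAT_REDIS_HOST: "cvat_redis"
      CVAT_DB_HOST: "cvat_db"
```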
The main challenge is that k8s only deploys pre-built containers from a registry. Therefore the env vars need to be inserted at runtime of the nginx container, not at build time.
@Langhalsdino, perhaps you can mount /etc/environment from the host. It is just an idea.
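A rough sketch of that idea as a pod-spec fragment, assuming the cluster allows hostPath volumes (illustrative only, not a tested CVAT configuration):

```yaml
# Pod spec fragment -- hypothetical sketch of mounting the host's /etc/environment
spec:
  containers:
    - name: cvat-backend-app-container
      volumeMounts:
        - name: host-environment
          mountPath: /etc/environment
          readOnly: true
  volumes:
    - name: host-environment
      hostPath:
        path: /etc/environment
        type: File
```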
Any plan to support this?
You don't need to rebuild the image every time; just set the environment variables at runtime, e.g. in the k8s YAML that describes the deployment. The cvat-backend container has the following env vars that can be adjusted:

```yaml
env:
  - name: ALLOWED_HOSTS
    value: "my.fancy.domain.com"
```

Furthermore, the cvat-frontend has the following configuration:

```yaml
env:
  - name: REACT_APP_API_PROTOCOL
    value: "http"
  - name: REACT_APP_API_HOST
    value: "My.fancy.domain.com"
  - name: REACT_APP_API_PORT
    value: "8080"
```

Furthermore, I adjusted the frontend to read the backend URL from the environment:

```js
const backendAPI = process.env.REACT_APP_API_FULL_URL;
```

I needed to make some modifications to CVAT itself in order to get it working, which might introduce new bugs and disable features. If someone is interested in the changes and K8s YAML files, feel free to 👍 and I will try to sort everything out and share it :)
@Langhalsdino I hate to hijack this issue, but I'm very interested in your modifications to facilitate K8s deployment.
I just changed the references to each container to use an environment variable that is set in the k8s deployment config. In cvat/settings/production.py:

```python
for key in RQ_QUEUES:
    RQ_QUEUES[key]['HOST'] = os.getenv('CVAT_REDIS_HOST', 'cvat-redis')

CACHEOPS_REDIS['host'] = os.getenv('CVAT_REDIS_HOST', 'cvat-redis')
```

For the supervisord config I used environment variables too (supervisord can expand them in the config file with the %(ENV_VAR_NAME)s syntax). From my current knowledge, the other changes I made have been about fixing things like ffmpeg quality, ...
@Langhalsdino in my use-case cvat containers (
We do this with environment variables that are injected into the container and read by CVAT itself. For example, here is one of our configuration YAMLs:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cvat-backend
  namespace: production
  labels:
    app: cvat-app
    tier: backend
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: cvat-app
      tier: backend
  template:
    metadata:
      labels:
        app: cvat-app
        tier: backend
    spec:
      containers:
        - name: cvat-backend-app-container
          image: privat.registry.com/.../cvat/backend:latest
          imagePullPolicy: Always
          resources:
            requests:
              cpu: 10m
              memory: 100Mi
          env:
            - name: DJANGO_MODWSGI_EXTRA_ARGS
              value: ""
            - name: UI_PORT
              value: "80"
            - name: UI_HOST
              value: "cvat-frontend-service"
            - name: ALLOWED_HOSTS
              value: ".domain.com"
            - name: CVAT_REDIS_HOST
              value: "cvat-redis-service"
            - name: CVAT_DB_HOST
              value: "cvat-postgres-service"
            - name: CVAT_NAME
              valueFrom:
                secretKeyRef:
                  name: cvat-postgres-secret
                  key: POSTGRES_DB
            - name: CVAT_USER
              valueFrom:
                secretKeyRef:
                  name: cvat-postgres-secret
                  key: POSTGRES_USER
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cvat-postgres-secret
                  key: POSTGRES_PASSWORD
          ports:
            - containerPort: 8080
          volumeMounts:
            - mountPath: /home/django/data
              name: cvat-backend-data
              subPath: data
            - mountPath: /home/django/keys
              name: cvat-backend-data
              subPath: keys
            - mountPath: /home/django/logs
              name: cvat-backend-data
              subPath: logs
            - mountPath: /home/django/models
              name: cvat-backend-data
              subPath: models
      initContainers:
        - name: user-data-permission-fix
          image: busybox
          command: ["/bin/chmod", "-R", "777", "/home/django"]
          volumeMounts:
            - mountPath: /home/django/data
              name: cvat-backend-data
              subPath: data
            - mountPath: /home/django/keys
              name: cvat-backend-data
              subPath: keys
            - mountPath: /home/django/logs
              name: cvat-backend-data
              subPath: logs
            - mountPath: /home/django/models
              name: cvat-backend-data
              subPath: models
      volumes:
        - name: cvat-backend-data
          persistentVolumeClaim:
            claimName: cvat-backend-data
```

Our custom K8s resources consist of the following:

I will try to publish useful K8s YAML templates as soon as public Docker containers are available.
@Langhalsdino what changes are needed to enable the share path (https://github.com/openvinotoolkit/cvat/blob/master/cvat/apps/documentation/installation.md#share-path) using persistent volumes? I am building a data ingestion pipeline from AWS S3 that needs to push the bulk images directly into CVAT.
Since you shared via email that you are using the [PR k8s template](https://github.com/apic-ai/cvat/tree/release-1.1.0/kubernetes-templates), my suggestions refer to [PR 1962](https://github.com//pull/1962).

Since Kubernetes manages a cluster, to my knowledge it is not possible to expose a host path on a machine to a pod directly. I am not sure what you are trying to build exactly, but I think two approaches might be of interest to you:

1. Sync the S3 bucket into the share volume with an init container, e.g.:

   ```yaml
   initContainers:
     - name: kubectl-aws
       image: amazon/aws-cli
       # env/... -> get your credentials inside the container
       command: ["/bin/bash", "-c", "aws s3 sync s3://my-bucket/ /home/data/share"]
   ```

2. Back the share path with a persistent volume whose access mode fits your setup; see the [table of supported access modes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes).

Even though everything is subjective, I would say that the first approach is easier than the second. Please share your experience and the way you choose to solve the issue.
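For the second approach, a minimal sketch of a PVC that could back the share path, assuming a storage class that supports ReadWriteMany (the name and storage class below are placeholders, not part of the templates above):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cvat-share-data
  namespace: production
spec:
  accessModes:
    - ReadWriteMany          # needed if several pods must access the share concurrently
  resources:
    requests:
      storage: 50Gi
  storageClassName: nfs-client   # placeholder: any RWX-capable storage class
```

Such a claim would then be mounted at /home/django/share in the backend deployment, analogous to the cvat-backend-data volume shown earlier.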
@TaSeeMba @Langhalsdino I am also looking for a way to configure an S3-compatible bucket as the share path of CVAT. I found that the https://github.com/IBM/dataset-lifecycle-framework project can be used to set up an S3 bucket as a PVC and mount it into the CVAT backend as the share path. Brief steps:

1. Install dataset-lifecycle-framework.
2. Create a dataset with the S3 bucket credentials and endpoint.
3. Configure cvat-backend-deployment.yml to use the dataset.

The YAML looks something like:

```yaml
metadata:
  name: cvat-backend
  namespace: cvat
  labels:
    app: cvat-app
    tier: backend
    dataset.0.id: "cos-dataset"
    dataset.0.useas: "mount"
...
        - mountPath: /home/django/share
          name: "cos-dataset"
...
      volumes:
        - name: cvat-backend-data
          persistentVolumeClaim:
            claimName: cvat-backend-data
        - name: cos-dataset
          persistentVolumeClaim:
            claimName: cos-dataset
```
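For step 2, the Dataset custom resource looks roughly like the sketch below, based on the dataset-lifecycle-framework README; all values are placeholders and the exact apiVersion and field names may differ between DLF releases, so check the project documentation:

```yaml
# Hypothetical sketch of a DLF Dataset backed by an S3-compatible bucket
apiVersion: com.ie.ibm.hpsys/v1alpha1
kind: Dataset
metadata:
  name: cos-dataset
  namespace: cvat
spec:
  local:
    type: "COS"                        # S3 / Cloud Object Storage backend
    accessKeyID: "<access-key>"
    secretAccessKey: "<secret-key>"
    endpoint: "https://s3.example.com"
    bucket: "my-cvat-share-bucket"
```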
Hello,
it would be nice to be able to configure the database and Redis connections via environment variables. Currently the hostnames (cvat_db and cvat_redis) are hardcoded (I see 'cvat_redis' not only in settings/production.py but also in supervisord.conf). It would also be good to configure authentication for Postgres and Redis this way.
I tried to run CVAT on Kubernetes and got stuck on this: I used modified Helm charts to deploy Postgres and Redis (I changed the service names), but Helm complained that the underscore is not a valid character in a DNS name.