
CVAT deployment on Kubernetes with Helm #7605

Closed · joon612 opened this issue Mar 14, 2024 · 5 comments

joon612 commented Mar 14, 2024

Is the "creating superuser" section of this document up to date? I encountered an error while running it.
I only made a few changes in helm-chart/values.override.yaml:

postgresql:
  secret:
    password: XXXX
    postgres_password: XXXX
    replication_password: XXXX
traefik:
  service:
    externalIPs:
      - 192.168.49.2
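
For reference, overrides like the above are applied by passing the file to helm; a sketch of the deploy command, where the release name `cvat-latest` and the chart path are assumptions based on the pod names below:

```shell
# Install or upgrade the release with the override file applied last,
# so its values win over the chart defaults.
# Release name "cvat-latest" and chart path "./helm-chart" are assumptions.
helm upgrade --install cvat-latest ./helm-chart \
  --namespace cvat --create-namespace \
  -f ./helm-chart/values.yaml \
  -f ./helm-chart/values.override.yaml
```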

CVAT version: v2.11.1
System: Windows 11
Docker version: Docker Desktop 4.28.0 (139021)
Minikube version: v1.32.0
Helm version: v3.14.2

Ref: https://opencv.github.io/cvat/v2.11.2/docs/administration/advanced/k8s_deployment_with_helm/#post-deployment-configuration

$ kubectl get pods -n cvat
NAME                                                           READY   STATUS             RESTARTS          AGE
cvat-latest-backend-server-77cb6788c4-tdmtq                    1/1     Running            0                 45h
cvat-latest-backend-utils-8d74459c6-zblm9                      1/1     Running            0                 45h
cvat-latest-backend-worker-analyticsreports-75ff865966-pvbcp   1/1     Running            0                 45h
cvat-latest-backend-worker-annotation-7fc4f669c7-jmr87         1/1     Running            0                 45h
cvat-latest-backend-worker-export-85bf7fdfdf-dmnsk             1/1     Running            0                 45h
cvat-latest-backend-worker-export-85bf7fdfdf-n9p5f             1/1     Running            0                 45h
cvat-latest-backend-worker-import-854c7cf5fc-4cjj7             1/1     Running            0                 45h
cvat-latest-backend-worker-import-854c7cf5fc-8qqwr             1/1     Running            0                 45h
cvat-latest-backend-worker-qualityreports-6b48d9cdd-rzdz4      1/1     Running            0                 45h
cvat-latest-backend-worker-webhooks-869cb8f549-sp6rn           1/1     Running            0                 45h
cvat-latest-clickhouse-shard0-0                                1/1     Running            0                 45h
cvat-latest-frontend-6b9668fbf5-f7v9z                          1/1     Running            0                 45h
cvat-latest-grafana-7f88fbdb49-4rjjk                           1/1     Running            106 (22h ago)     44h
cvat-latest-kvrocks-0                                          1/1     Running            0                 45h
cvat-latest-opa-6fcc67cd6b-d668r                               1/1     Running            0                 45h
cvat-latest-postgresql-0                                       1/1     Running            0                 45h
cvat-latest-redis-master-0                                     1/1     Running            0                 45h
cvat-latest-vector-0                                           0/1     CrashLoopBackOff   114 (3m32s ago)   45h

$ kubectl exec -it --namespace cvat cvat-latest-backend-server-77cb6788c4-tdmtq -c cvat-backend-app-container -- python manage.py createsuperuser
Error from server (BadRequest): container cvat-backend-app-container is not valid for pod cvat-latest-backend-server-77cb6788c4-tdmtq
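
The error means no container named `cvat-backend-app-container` exists in that pod. One way to check which container names the pod actually defines (illustrative command, using the pod name from the listing above):

```shell
# List the container names in the backend server pod;
# the -c argument to `kubectl exec` must match one of these.
kubectl get pod cvat-latest-backend-server-77cb6788c4-tdmtq -n cvat \
  -o jsonpath='{.spec.containers[*].name}'
```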

By the way, cvat-latest-vector-0 cannot start for no apparent reason. Here is the pod log:

$ kubectl logs cvat-latest-vector-0 -n cvat
2024-03-14T05:39:35.760927Z  INFO vector::app: Internal log rate limit configured. internal_log_rate_secs=10
2024-03-14T05:39:35.761003Z  INFO vector::app: Log level is enabled. level="vector=info,codec=info,vrl=info,file_source=info,tower_limit=trace,rdkafka=info,buffers=info,lapin=info,kube=info"
2024-03-14T05:39:35.761056Z  INFO vector::app: Loading configs. paths=["/etc/vector"]
2024-03-14T05:39:35.761244Z ERROR vector::cli: Configuration error. error=No sources defined in the config.
2024-03-14T05:39:35.761262Z ERROR vector::cli: Configuration error. error=No sinks defined in the config.
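
Those two errors mean Vector found an empty (or unreadable) config under /etc/vector: a valid Vector config needs at least one source and one sink. A minimal standalone illustration of the shape Vector expects — not the config the CVAT Helm chart actually ships, just an example:

```yaml
# Minimal illustrative Vector config with one source and one sink.
# The real config mounted by the CVAT chart differs.
sources:
  demo:
    type: demo_logs      # built-in generator of sample log events
    format: json
sinks:
  console:
    type: console        # write events to stdout
    inputs: ["demo"]
    encoding:
      codec: json
```

If the chart's ConfigMap fails to mount (e.g. a volume issue on Windows/minikube), Vector would see an empty directory and fail exactly like this.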
abhi-bhatra (Contributor) commented

I believe the documentation is not up to date: it fetches a different pod when I use this command:
BACKEND_POD_NAME=$(kubectl get pod --namespace $HELM_RELEASE_NAMESPACE -l tier=backend,app.kubernetes.io/instance=$HELM_RELEASE_NAME -o jsonpath='{.items[0].metadata.name}')

The correct commands you can use are as follows:

BACKEND_POD_NAME=$(kubectl get pod --namespace $HELM_RELEASE_NAMESPACE -l tier=backend,app.kubernetes.io/instance=$HELM_RELEASE_NAME,component=server -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it --namespace $HELM_RELEASE_NAMESPACE $BACKEND_POD_NAME -c cvat-backend -- python manage.py createsuperuser

joon612 (Author) commented Mar 18, 2024

Thanks for the help. Do you know why cvat-latest-vector-0 fails to run on Windows but works on Linux?

abhi-bhatra (Contributor) commented

Still investigating this!

azhavoro pushed a commit that referenced this issue Mar 26, 2024
### Motivation and context
**Issue:** #7605
Currently, the command used in the official documentation does not work and gives an error:
1. **Command used**:
```shell
BACKEND_POD_NAME=$(kubectl get pod --namespace $HELM_RELEASE_NAMESPACE -l tier=backend,app.kubernetes.io/instance=$HELM_RELEASE_NAME -o jsonpath='{.items[0].metadata.name}') &&\
kubectl exec -it --namespace $HELM_RELEASE_NAMESPACE $BACKEND_POD_NAME -c cvat-backend-app-container -- python manage.py createsuperuser
```

2. **Actual output**:

![image](https://github.com/opencv/cvat/assets/63901956/bd826a8a-aacb-48e1-8501-04a9c730fa86)

3. **Expected output**:

![image](https://github.com/opencv/cvat/assets/63901956/20a31418-3136-46bc-a04d-3c17e44dcbde)


### How has this been tested?

I ran a minikube cluster and executed the documented command as-is, which gives this error:
```shell
error: cannot exec into a container in a completed pod; current phase is Succeeded
```

### Checklist
- [x] I submit my changes into the `develop` branch
- [x] I have created a changelog fragment
- [x] I have updated the documentation accordingly
- [x] I have added tests to cover my changes
- [x] I have linked related issues (see [GitHub docs](https://help.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword))
- [x] I have increased versions of npm packages if it is necessary ([cvat-canvas](https://github.com/opencv/cvat/tree/develop/cvat-canvas#versioning), [cvat-core](https://github.com/opencv/cvat/tree/develop/cvat-core#versioning), [cvat-data](https://github.com/opencv/cvat/tree/develop/cvat-data#versioning) and [cvat-ui](https://github.com/opencv/cvat/tree/develop/cvat-ui#versioning))

### License

- [x] I submit _my code changes_ under the same [MIT License](https://github.com/opencv/cvat/blob/develop/LICENSE) that covers the project.
  Feel free to contact the maintainers if that's a concern.

Co-authored-by: abhinav.sharma <[email protected]>
g-kartik pushed a commit to g-kartik/cvat that referenced this issue Mar 29, 2024
…7631)

joon612 (Author) commented Apr 1, 2024

> Still investigating this!

Is there any update? @abhi-bhatra

azhavoro (Contributor) commented
Fixed in #7631
