
Install KintoHub on K3s #21

Open
bakayolo opened this issue Mar 29, 2021 · 12 comments

Is your feature request related to a problem? Please describe.
We have confirmed that KintoHub installs on Minikube: https://www.kintohub.com/installation/minikube
We also need to be able to install it on K3s and provide documentation for that.

Describe the solution you'd like
Test the deployment of KintoHub on K3s and create a page for it in our kinto-docs.
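A rough sketch of what that test could look like with k3d (the Helm repo URL below is a placeholder, not a confirmed value):

```bash
# Sketch only: create a local K3s cluster with k3d, then install the chart.
k3d cluster create kinto
helm repo add kintohub <kinto-helm-repo-url>   # placeholder URL
helm install kinto kintohub/kinto --namespace kintohub --create-namespace
```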

bakayolo added the documentation label Mar 29, 2021

Utwo commented Apr 7, 2021

I deployed KintoHub on k3d following the Minikube documentation and everything runs smoothly. I had only one problem: when deploying a service, it gets stuck on "Allocating resources. This may take a few seconds."


bakayolo commented Apr 7, 2021

Hey @Utwo,
Thanks for trying.
Can you run `kubectl get pod -n kintohub` while the deployment is running, then `kubectl logs -n kintohub [pod with error status]` to see what the issue is?
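In full, with a placeholder for the failing pod's name:

```bash
# While the deployment is in progress, list the pods:
kubectl get pod -n kintohub
# Then pull the logs of any pod in an error state:
kubectl logs -n kintohub <pod-with-error-status>
```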
Thanks.


Utwo commented Apr 7, 2021

Checked, no errors, everything is up.
[screenshot]


bakayolo commented Apr 7, 2021

You need to trigger a deployment first, because the workflow pods have a TTL.
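Since the workflow pods are short-lived, watching the namespace while the deployment runs makes them easier to catch:

```bash
# Streams pod status changes; workflow pods show up during a deployment
# and are removed again once their TTL expires.
kubectl get pods -n kintohub --watch
```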


Utwo commented Apr 7, 2021

Yes, I did that.
[screenshot]


bakayolo commented Apr 7, 2021

That should not be possible.

You must trigger a new deployment (delete this service) and immediately run the commands above.
The pods used for the deployment workflow last at most 5 minutes, if I am not mistaken.

If it still does not work, make sure Argo is running fine.
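A quick way to check that (the grep filter is a guess at the pod names; adjust to whatever your cluster actually shows):

```bash
# Find the Argo pods in the namespace and inspect the controller's logs.
kubectl get pods -n kintohub | grep -i argo
kubectl logs -n kintohub <argo-workflow-controller-pod>
```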

If yes, please connect with me on slack.kintohub.com.


Utwo commented Apr 8, 2021

Yes, you were right! Argo had an error. On the latest k3d you must set `--set controller.containerRuntimeExecutor=kubelet`.
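For anyone else hitting this: the flag belongs to the chart that deploys the Argo workflow controller. A sketch, assuming Argo is installed from its own chart (release and repo names may differ in your setup):

```bash
# Switch the Argo workflow executor from the default (docker) to kubelet,
# since k3s/k3d nodes run containerd rather than Docker.
helm upgrade --install argo argo/argo \
  --namespace kintohub \
  --set controller.containerRuntimeExecutor=kubelet

# If Argo ships as a subchart of kintohub/kinto, the value is likely
# prefixed (assumption; check the chart's values.yaml):
#   --set argo.controller.containerRuntimeExecutor=kubelet
```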


bakayolo commented Apr 8, 2021

@Utwo yeah, that's because Kubernetes now uses containerd as its container runtime.
So everything works as expected?
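You can confirm which runtime the nodes use; the CONTAINER-RUNTIME column should show containerd:// on k3s/k3d:

```bash
kubectl get nodes -o wide
```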


Utwo commented Apr 8, 2021

Yes and no. I can trigger a new deploy now, and I can see the logs building the Docker image, but at the end it fails. I attached the logs below.

But it's fine; I managed to get this working on Minikube on the first try. 🔥

```
INFO[0135] cmd: EXPOSE
INFO[0135] Adding exposed port: 80/tcp                  
INFO[0135] No files changed in this command, skipping snapshotting. 
INFO[0135] RUN rm -rf /usr/share/nginx/html/*           
INFO[0135] Taking snapshot of full filesystem...        
INFO[0136] cmd: /bin/sh                                 
INFO[0136] args: [-c rm -rf /usr/share/nginx/html/*]    
INFO[0136] Running: [/bin/sh -c rm -rf /usr/share/nginx/html/*] 
INFO[0136] Taking snapshot of full filesystem...        
INFO[0136] COPY --from=builder /app/dist /usr/share/nginx/html/ 
INFO[0136] Pushing layer index.docker.io/utwo/de-urgenta-client/cache:5a0f34582005e3172884b3a10116977277bb3aadbf845ffbb49a6e3d6a96384f to cache now 
INFO[0136] Pushing image to index.docker.io/utwo/de-urgenta-client/cache:5a0f34582005e3172884b3a10116977277bb3aadbf845ffbb49a6e3d6a96384f 
INFO[0136] Taking snapshot of files...                  
INFO[0136] RUN sed -i '/location \/ {/a \ \ \ \ \ \ \ \ try_files $uri $uri/ /index.html;' /etc/nginx/conf.d/default.conf 
INFO[0136] cmd: /bin/sh                                 
INFO[0136] args: [-c sed -i '/location \/ {/a \ \ \ \ \ \ \ \ try_files $uri $uri/ /index.html;' /etc/nginx/conf.d/default.conf] 
INFO[0136] Running: [/bin/sh -c sed -i '/location \/ {/a \ \ \ \ \ \ \ \ try_files $uri $uri/ /index.html;' /etc/nginx/conf.d/default.conf] 
INFO[0136] Taking snapshot of full filesystem...        
INFO[0136] Pushing layer index.docker.io/utwo/de-urgenta-client/cache:c173c9641128b08c02506456d92c25c50c3b00bac9ac9ac6522d6af7b305b975 to cache now 
INFO[0136] Pushing image to index.docker.io/utwo/de-urgenta-client/cache:c173c9641128b08c02506456d92c25c50c3b00bac9ac9ac6522d6af7b305b975 
WARN[0137] error uploading layer to cache: failed to push to destination index.docker.io/utwo/de-urgenta-client/cache:5a0f34582005e3172884b3a10116977277bb3aadbf845ffbb49a6e3d6a96384f: HEAD https://index.docker.io/v2/utwo/de-urgenta-client/cache/blobs/sha256:e672c91c6a836747594e09109dd1b8e30be3390f4c653c3cf707b6f2dd16227a: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details) 
INFO[0137] Pushing image to utwo/de-urgenta-client:f2e0130 
INFO[0141] Pushed image to 1 destinations               
Deploying service...
Deploying service...
DBG Successfully loaded env var: IMAGE_REGISTRY_HOST=utwo
DBG Successfully loaded env var: NAMESPACE=606f4a23fed9234f585c77d7
DBG Successfully loaded env var: RELEASE_TYPE=DEPLOY
DBG Successfully loaded env var: BLOCK_NAME=de-urgenta-client
DBG Successfully loaded env var: PROXLESS_FQDN=kinto-proxless.kintohub.svc.cluster.local
W0408 18:26:40.557828     173 client_config.go:543] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
DBG Deploying release &{f2e01301-a6f7-4fc0-83f7-8ad6b0d72bf5 9f0fa358-4c0a-4f2d-9032-db91256ce2e9 de-urgenta-client 606f4a23fed9234f585c77d7 utwo/de-urgenta-client:f2e0130   map[app:de-urgenta-client env:606f4a23fed9234f585c77d7 owner:kinto] map[PORT:3000] 60 kinto-builder-workflow-docker   2 <nil> 80 [de-urgenta-client-3ca91c.kinto.dev] <nil> <nil>}
DBG Start watching pods with labels map[app:de-urgenta-client env:606f4a23fed9234f585c77d7 owner:kinto release:f2e01301-a6f7-4fc0-83f7-8ad6b0d72bf5]
start watching logs for service instance de-urgenta-client-f2e01301-a6f7-4fc0-83f7-8ad6b0d72bf5-697rst5p
DEBUG Calling kinto core kinto-core:8080
DEBUG Sending build status - blockName:"de-urgenta-client" envId:"606f4a23fed9234f585c77d7" releaseId:"f2e01301-a6f7-4fc0-83f7-8ad6b0d72bf5" status:<state:FAILURE finishTime:<seconds:1617906425 nanos:486994705 > > 
DEBUG Status FAILURE updated
```
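The WARN line above shows the cache-layer push to index.docker.io/utwo/de-urgenta-client/cache failing with 401 Unauthorized. One thing worth checking (an observation, not confirmed in this thread): Docker Hub only accepts two-level repository names (user/repo), so a nested .../cache path can be rejected even with valid credentials. The credentials themselves can be sanity-checked outside the cluster:

```bash
# Verify the same credentials passed to builder.workflow.docker.* can
# actually push to Docker Hub (placeholders, not real values):
docker login index.docker.io -u <user>
docker pull alpine
docker tag alpine <user>/test-repo:kinto-check
docker push <user>/test-repo:kinto-check
```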


bakayolo commented Apr 8, 2021


Utwo commented Apr 12, 2021

Yes, I used the exact same config as for Minikube, and that worked perfectly. Anyway, I will test this on Minikube for the moment, and if I have any more questions I'll let you know on Slack. Thank you for your time and for this great product!

These are my configs, just for reference:

```bash
helm upgrade --install kinto \
--set minio.resources.requests.memory=null \
--set minio.makeBucketJob.resources.requests.memory=null \
--set builder.env.IMAGE_REGISTRY_HOST=utwo \
--set builder.workflow.docker.registry=https://index.docker.io/v1/ \
--set builder.workflow.docker.email=[email] \
--set builder.workflow.docker.username=[user] \
--set builder.workflow.docker.password=[pass] \
--set common.domainName=kinto.dev \
--set builder.env.ARGO_WORKFLOW_VOLUME_SIZE=2Gi \
--namespace kintohub kintohub/kinto
```

@bakayolo

@Utwo ok cool, glad to hear that, and thanks for checking.
I'll test this when I have more time.
