Track which dev services work using kubedock #34051
Comments
/cc @geoand (devservices), @stuartwdouglas (devservices)
cc @maxandersen
Most of the quick starts I tried failed. From what I understand, in most cases the issue has to do with file permissions. There are specific guidelines on how to fix those images, but I am not sure we want to go down that road.

jdbc-postgres (hibernate-orm-quickstart) - Status: FAILED
jdbc-mysql (hibernate-orm-quickstart) - Status: FAILED (the MySQL container starts and the logs seem clear, but the schema never gets created and the tests eventually fail)
jdbc-oracle (hibernate-orm-quickstart) - Status: FAILED
jdbc-mariadb (hibernate-orm-quickstart) - Status: FAILED
jdbc-db2 (hibernate-orm-quickstart) - Status: FAILED
mongodb (mongodb-quickstart) - Status: FAILED
smallrye-reactive-messaging-kafka (kafka-quickstart) - Status: FAILED
redis-client (redis-quickstart) - Status: OK
keycloak-authorization (security-keycloak-authorization) - Status: OK
Thanks for diving into this @iocanel!
+1. Many popular images are designed around running as root and fall apart when a different UID is passed in. Or they expect a fixed non-root UID, which Kube/OpenShift then overrides with a random one. Another common failure case is a pre-created image path whose files are owned by root and get squashed by the storage provider to a different UID than the one running the container. Once userns support lands in Kube/OpenShift (coming soon), the situation should eventually improve, since fixed UIDs then become fine (they get remapped to a different UID).
IMO it is worth working with upstream image providers to improve their images. Aside from using our own layers as a workaround, we might be able to patch kubedock to support (if it doesn't already) init containers to fix up file-related issues.
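To make the init-container idea from the comment above a bit more concrete, here is a rough, purely hypothetical sketch (not an existing kubedock feature; the image names, UIDs and paths are illustrative) of a pod in which an init container fixes ownership of a shared volume before the non-root service container starts:

apiVersion: v1
kind: Pod
metadata:
  # Hypothetical pod, roughly the shape of what kubedock creates for a dev service container
  name: db-devservice-example
spec:
  volumes:
    - name: data
      emptyDir: {}
  initContainers:
    # Illustrative init container: chown the data directory to the UID/GID the
    # database image expects, so the non-root main container can write to it.
    # On a restricted cluster the init container itself may need extra
    # permissions to run chown, so this is only a sketch of the idea.
    - name: fix-permissions
      image: busybox
      command: ["sh", "-c", "chown -R 999:0 /var/lib/postgresql/data && chmod -R g=u /var/lib/postgresql/data"]
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  containers:
    - name: postgres
      image: postgres:15
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data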
What I find interesting is that some of the failing images work if they are used with
This does not apply to all issues, but it might nevertheless help to get more insight. Since volume mounts are difficult to configure correctly in CI, and may not even be possible if Testcontainers is using a remote host (e.g. Testcontainers Cloud), a lot of the modules copy config into the container instead of using volumes. One of the gaps when simulating the Docker API on Kubernetes is that in Docker you can copy files to a not-yet-started container, while in Kubernetes you can't, since there is no unstarted state. So kubedock will "always" start a container when files are being copied to it ("always" because there is a --pre-archive flag which works around this using configmaps).
This approach, however, can lead to race conditions (e.g. the config is copied only after the container process already requires it) or to permission issues. Usually this is overcome by implementing a script that waits until the config is present in the container; however, that script might have a destination that is not writable for non-root users, like /. The Kafka dev service has its own implementation and implements it in a pattern that can fit both worlds. See this PR as well: #19736. This is another interesting PR, tackling a similar issue: testcontainers/testcontainers-java#7524
Another thing that worries me about using a shared cluster like this with Testcontainers is that most (if not all) containers are hosted on Docker Hub, and a shared cluster is likely to run into rate limits. We need to take this into consideration as well and see if we can find a workaround, for example by caching images in the local OpenShift cluster.
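One possible mitigation, using only the image-name knobs already shown elsewhere in this thread (the registry below is purely illustrative): point the dev service images at an internal mirror or pull-through cache instead of Docker Hub, e.g.:

'%test':
  quarkus:
    datasource:
      devservices:
        # registry.internal.example.com is a hypothetical in-cluster mirror /
        # pull-through cache of the upstream Docker Hub images
        image-name: registry.internal.example.com/mirror/postgres:15
    kafka:
      devservices:
        image-name: registry.internal.example.com/mirror/vectorized/redpanda:v22.3.4

Whether this is practical depends on keeping the mirrored tags in sync with what the dev services expect by default.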
I am trying to run Testcontainers as Kubernetes pods in a GitLab pipeline, using the GitLab Kubernetes executor as the runner. I added kubedock as a GitLab CI service. This is my current dev services configuration:

'%test':
  quarkus:
    devservices:
      enabled: true
      timeout: 300
    datasource:
      devservices:
        image-name: mariadb:10.11
    elasticsearch:
      devservices:
        image-name: docker.elastic.co/elasticsearch/elasticsearch:8.11.0
        # Works with opensearch distribution
        # distribution: opensearch
        # image-name: docker.io/opensearchproject/opensearch:2.9.0
    kafka:
      devservices:
        image-name: docker.io/vectorized/redpanda:v22.3.4

This works when using opensearch as the distribution for elasticsearch, but fails when I use the default (elastic) distribution. The other two containers (kafka and mariadb) are started correctly.
The log from the elasticsearch pod does not contain any errors. When observing the whole process, it looks to me like the pod is killed before elasticsearch finishes initializing.
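For reference, a minimal, hypothetical sketch of the GitLab side of the setup described in the comment above (the image tag, flags, port and RBAC details should be checked against the kubedock documentation; 2475 is assumed to be kubedock's default API port):

test:
  stage: test
  image: maven:3.9-eclipse-temurin-17
  services:
    # kubedock sidecar: translates Docker API calls from Testcontainers / dev services
    # into pods on the cluster the runner is deployed in
    - name: joyrex2001/kubedock:latest
      alias: kubedock
      command: ["server", "--port-forward"]
  variables:
    # Point Testcontainers / dev services at the kubedock sidecar instead of a Docker daemon
    DOCKER_HOST: "tcp://kubedock:2475"
    # Ryuk (the Testcontainers reaper) is commonly disabled when running against kubedock
    TESTCONTAINERS_RYUK_DISABLED: "true"
  script:
    - mvn verify

kubedock also needs a service account with permission to create pods in the namespace the runner uses; the exact RBAC setup depends on the cluster.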
After some debugging, I found out that Elasticsearch fails because Testcontainers makes a request to retrieve the cert. Edit: It seems the request for retrieving the cert comes too early; running the request a bit later, after the container has started, successfully downloads the cert. Update: Solved with joyrex2001/kubedock#60 (comment).
We did the superhero workshop in Dev Spaces yesterday, so I can confirm that with Quarkus 3.9.2 jdbc-postgresql works without any additional configuration.
Hello,

Changes detected in configuration. Updating the server image.
Updating the configuration and installing your custom providers, if any. Please wait.
2024-09-02 06:28:56,117 INFO [io.qua.dep.QuarkusAugmentor] (main) Quarkus augmentation completed in 10602ms
Server configuration updated and persisted. Run the following command to review the configuration:
kc.sh show-config
Next time you run the server, just run:
kc.sh start --http-enabled=true --hostname-strict=false --spi-user-profile-declarative-user-profile-config-file=/opt/keycloak/upconfig.json --optimized
ERROR: Unexpected error when starting the server in (production) mode
ERROR: Failed to start quarkus
ERROR: Failed to reaad default user profile configuration: /opt/keycloak/upconfig.json
ERROR: /opt/keycloak/upconfig.json (No such file or directory)
For more details run the same command passing the '--verbose' option. Also you can use '--help' to see the details about the usage of the particular command.
Description
There has been discussion about running dev services over kubedock, to support dockerless environments.
We should first check which dev services work with kubedock and which do not.
Failing Dev Services
Here's a list of Dev Services that do not work
See more details & logs in the comments below
Working Dev Services
Here's a list of Dev Services that do work
Core extensions with DevService support