RFC: Services #84
Conversation
Signed-off-by: Aidan Oldershaw <[email protected]>
This greatly simplifies the architecture.
Signed-off-by: Aidan Oldershaw <[email protected]>
My preference would be to maintain the standard docker-compose configuration. Perhaps the service key could point directly to a docker-compose file; that way, one could easily use a docker-compose file from a git repo if needed. Concourse would also render any variables in this compose file, like it does with task files. I guess the issue with this approach is that the networking configuration may not match up with what Concourse can supply. I would propose that Concourse create a compose container such that the Docker host network is internal to the compose container.
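For illustration, a service definition along these lines is what the suggestion seems to imply. This is purely a hypothetical sketch: the `compose_file` and `vars` keys are assumptions, not part of the RFC, and the repo/file paths are made up.

```yaml
# Hypothetical sketch (not part of the RFC): a service that delegates
# to a docker-compose file fetched from a git resource, with Concourse
# rendering ((vars)) in the compose file the way it does for task files.
services:
- name: backing-services
  compose_file: my-repo/docker-compose.yml
  vars:
    postgres_tag: "13"
```

Under this model, Concourse would presumably have to reconcile the compose file's own networking configuration with its container networking, which is the concern raised above.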
@samgurtman by "maintain the standard docker-compose configuration", do you mean being able to use your existing docker-compose files? In the RFC (the 2nd snippet under Service Configuration), I gave an example of running a service using a theoretical `docker-compose` setup.
I think running …
Currently, it returns only the IP address of the container's virtual ethernet pair, as well as any additional properties that are set as a raw map (e.g. grace time) - so it's not Garden compliant, but Concourse clearly doesn't need it to be (we currently don't even use the container info endpoint). The motivation for this can be seen in the newly added integration test: finding a container's IP address so that containers on the same host can communicate via the bridge network. This will make concourse/rfcs#84 possible with a containerd runtime.
Signed-off-by: Aidan Oldershaw <[email protected]>
And remove the "expose host port" alternative, since that's totally unnecessary.
Signed-off-by: Aidan Oldershaw <[email protected]>
As the image in https://github.com/aoldershaw/rfcs/blob/services/084-services/proposal.md#networking states port 8080 specifically, I just wanted to point out that there are images that listen on other ports. MySQL is a good example of that.
it was just confusing Signed-off-by: Aidan Oldershaw <[email protected]>
@elgohr fair point, it's confusing - an earlier form of the diagram was tracing a single request to a specific example service endpoint, and I never ended up changing the port from 8080. Updated the diagram. Worth noting that the use-case of different/multiple ports is supported - you can configure as many ports as you wish when configuring your service (https://github.com/aoldershaw/rfcs/blob/services/084-services/proposal.md#service-configuration). e.g.

```yaml
task: ...
services:
- name: mysql
  ...
  ports:
  - name: default
    number: 3306
  - name: admin
    number: 33062
```
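As a rough sketch of how a task might then consume those named ports, the snippet below assumes an interpolation form along the lines of the RFC's `((.svc:…))` vars; the exact syntax is defined in the Service Configuration section of the proposal, so treat these var names as illustrative only.

```yaml
# Hypothetical usage sketch: passing each named service port to a task.
# The ((.svc:mysql:default)) / ((.svc:mysql:admin)) forms are assumed,
# not quoted from the RFC.
params:
  MYSQL_ADDRESS: ((.svc:mysql:default))   # address for the "default" port (3306)
  MYSQL_ADMIN_ADDRESS: ((.svc:mysql:admin))  # address for the "admin" port (33062)
```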
I did an initial implementation of the Services RFC: concourse/concourse#8673. I'd like some feedback from Concourse maintainers before finishing it, so I don't waste my time if they won't accept it. It works great so far.
The only thing I'd like to see, in addition to launching per-task services, would be allowing tunneled access to services per worker that can be assumed to exist (i.e. a …). It would fit the tag on the can, but I am not convinced whether it's (a) a good fit for the concept, or (b) bloating the RFC too much and should be extracted / not included.
Hi there..
Rendered
Please comment on individual lines, not at the top-level.