Conversation
And what about Pylint/Flake8 and other non-OCA repositories?
It's already installed by the …
I'm not talking about installing them. I'm talking about running them.
The usage is added to the README. Just use it as usual, but instead of adding env variables as Travis does, add them to the `docker run` command.
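To make that concrete, here is a hypothetical usage sketch (not confirmed by this PR): the variables a `.travis.yml` would export become `-e` flags on the `docker run` command. The image name and mount path below are assumptions, and the command is only assembled and printed so the sketch runs without Docker installed.

```shell
# Assemble the docker run command that replaces Travis env vars with -e flags.
# Image name and mount path are assumptions for illustration.
build_cmd() {
  echo docker run --rm \
    -e "LINT_CHECK=$1" -e "TESTS=$2" \
    -v "$PWD:/root/build/OCA/server-tools:ro" \
    oca/maintainer-quality-tools
}
build_cmd 1 0
```

In a real setup you would drop the `echo` and let the container run the selected MQT job directly.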
I still don't get it, but maybe that's because of my lack of knowledge about Docker. I see that you run the command for testing the server here: https://github.com/OCA/maintainer-quality-tools/pull/477/files#diff-3254677a7917c6c01f55212f86c57fbfR64, and that's why I'm saying that you should also run pylint/flake8, or run neither.
AFAICS, see `travis/travis_run_tests`, lines 51 to 59 at e28607f.
Currently Travis sets those in its `sample_files/.travis.yml`, lines 71 to 75 and lines 99 to 100 at e28607f.
So, what you should do now is:

```yaml
env:
  global:
    - OWNER=OCA
    - REPO=server-tools  # for example. Not sure if we get these automatically from Travis
    - VERSION=10.0
    - ODOO_REPO=odoo
  matrix:
    - LINT_CHECK="1"
    - TRANSIFEX="1"
    - TESTS="1" ODOO_REPO="odoo"
    - TESTS="1" ODOO_REPO="OCB"
script:
  - docker container run -it --rm -e "LINT_CHECK=$LINT_CHECK" -e "TRANSIFEX=$TRANSIFEX" -e "TESTS=$TESTS" -v "$(pwd):/root/build/$OWNER/$REPO:ro,z" oca/maintainer-quality-tools:$ODOO_REPO-$VERSION
```

Now let me explain the script part by part for the Docker-unaware readers:
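As a hypothetical part-by-part breakdown of that `script:` line (the promised explanation was lost from this thread), the command below is only assembled and printed as a string so the sketch runs without Docker; the variable values are the sample ones from the matrix above.

```shell
# Sample values matching the env matrix above (assumptions for illustration).
OWNER=OCA; REPO=server-tools; VERSION=10.0; ODOO_REPO=odoo; LINT_CHECK=1

cmd="docker container run -it --rm"                  # interactive TTY; remove container on exit
cmd="$cmd -e LINT_CHECK=$LINT_CHECK"                 # env var selects which MQT job to run
cmd="$cmd -v $PWD:/root/build/$OWNER/$REPO:ro,z"     # mount the repo read-only; 'z' relabels for SELinux
cmd="$cmd oca/maintainer-quality-tools:$ODOO_REPO-$VERSION"  # tag encodes Odoo flavour and version
echo "$cmd"
```

Only one of `LINT_CHECK`, `TRANSIFEX`, or `TESTS` is normally set per matrix row, so each Travis job boots the same image but runs a different check.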
Thanks for the explanations, @yajo. What do you think, @lasley, @moylop260? I'm thinking that this would be incompatible with travis2docker in its current form.
Do you have planned changes to the runbot-travis2docker module to make it compatible with this change? Because this change will require privileged mode and so on.
I don't like the thought of my build images having privileged host access. This is a blocker IMO.
I'm pretty sure Travis is spinning up VMs for their sudo builds, which allows isolation of the actual hosts. This is not the case with Runbot, and we definitely can't have those builds being aware of the build host in any way.
Well, all
I have not looked into that yet; this would serve the purpose of speeding up & standardizing Travis builds, not runbot ones. However, I think we could build a new image that runs the tests (instead of mounting the volumes, use a Dockerfile, then run tests, then commit and push the image). Then the runbot could run that same image.
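A minimal sketch of that idea, with all names (base image tag, repo path, entrypoint script) being assumptions rather than anything this PR defines: bake the repo into the image at build time so that runbot can later boot the exact image that Travis tested.

```dockerfile
# Hypothetical sketch: copy the repo into the image instead of mounting it.
# Base image tag, paths, and entrypoint are assumptions for illustration.
FROM oca/maintainer-quality-tools:odoo-10.0
COPY . /root/build/OCA/server-tools
# Run the tests while building; a failing test aborts the build,
# so only green images ever get pushed for runbot to reuse.
RUN LINT_CHECK=1 /entrypoint.sh
```

The resulting image would then be committed and pushed, and runbot would only need to `docker run` it.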
Travis needs that, but as long as you have a proper Docker image, all runbot has to do is boot it. Yes, runbot needs access to a Docker socket (as it does now), but you can always use dind if you don't want it to use the main one. Or we could dump Travis, use http://drone.io/ and be happy. 😛
You are not understanding me: if you switch .travis.yml to this new form, runbot won't work, as it uses the .travis.yml file and travis2docker to create a dockerized runbot build, and I doubt that this new .travis.yml is compatible with it. This is also why @lasley talks about privileged host access (not on Travis, but on runbot).
Ah OK, I thought you were talking about Travis only. Any ideas on how to make t2d work with this?
I've recently been experimenting with a Docker inception strategy for my Runbot so that the build workers don't need privileged host access. In a standard setup where Docker containers build Docker containers, you:
You can modify this strategy slightly by removing steps 3 and 4, which will make the build worker spawn Docker containers inside of itself instead of on the host. I'm running into some major walls on cleanup though: the Docker build worker basically won't stop once you get a few containers running inside of it, and killing it leaves massive memory, mounted-volume, and port leaks, so that's not advisable either. There's also the issue of accessing the containers built by the worker from outside of the worker. I'm using two sets of Traefiks for that, which route around all the network stuff that is Docker-ception. I dunno how close I am to solving this though.
Well, I recently had to do something similar in our CI. Just spin up a dind (Docker in Docker) container, put it in the same network as your runbot container, and add … You will probably have to use a couple of traefiks as you do now, nothing much special. Remember to set …
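The dind setup described above could be sketched roughly as the following compose file; every service name, image, and port here is an assumption for illustration, not something this thread specifies:

```yaml
# Hypothetical docker-compose sketch: a dind sidecar so the runbot worker
# builds containers without privileged access to the host daemon.
version: "3"
services:
  dind:
    image: docker:dind
    privileged: true        # privilege is confined to this sidecar, not the host
  runbot:
    image: example/runbot   # assumption: your runbot worker image
    environment:
      # Point the worker's Docker client at the sidecar instead of the host socket.
      DOCKER_HOST: tcp://dind:2375
    depends_on:
      - dind
```

With `DOCKER_HOST` set, every `docker` call inside the runbot container lands on the dind daemon, so killing the sidecar cleans up all its nested containers at once.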
If you'll allow me the thought, and I don't mean to offend anyone: IMHO the travis2docker approach is wrong by definition. Travis can perfectly well run Docker containers, so this should be docker2travis instead: first, have a base image where to inject code and run tests; then run that image in Travis for unit tests and in runbot for human tests. Doing it the other way around is what complicates everything.
I dunno if it's necessarily wrong, it's just that it was built with the goal of using the existing OCA workflows from within another build agent. I think this is instead an example of feature creep at its finest that's now causing us scaling issues, as with any sort of legacy code. There's a point where giant leaps have to be made, and it seems we may be there.
Yeah, OK, so I think we first need to take a step back and identify what exactly it is that MQT is bringing to the table here. We're basically talking about uprooting our entire build system for this, am I correct?
OK, nope, we're not. I traced this Dockerfile a few times, then launched it locally, and now I understand more. Basically we're just supplying the Dockerfile that T2D would normally create. So in this instance, we would simply need a new module …
What about supporting the "sudo" and "docker service" options in travis2docker?
But why? In the case here, we already have a Dockerfile. With the goal of T2D being to create a Dockerfile from a Travis file, I feel this would be circular logic: creating a Dockerfile from a Travis file whose sole purpose is to run a different Dockerfile.
Maybe I'm a little lost.
What @lasley tries to say, if I'm not wrong: What we do now:
What we could then do after having this:
As you can see, @moylop260, under such a scenario: Pros:
Cons:
Please remember this PR is WIP; I just thought it's easier to talk over code than over nothing 😊
@moylop260 Now you have a script that would do all the work for you and a sample … Still unhandled: how to keep the docker login password protected so that PRs cannot leak it, and also the combination between Travis and runbot, since those steps would still require some extra things, such as setting up an oca organization in the Docker Hub and a … I hope that you get the point now at least (although I'd be surprised if this worked as it is right now).
* I'm currently in the task of plugging MQT into our gitlab-ci, so I'll use it to check that it works. If it does, it should work on Travis too.
* Edit: Surprise, it built! https://hub.docker.com/r/tecnativa/maintainer-quality-tools/tags/
Well, we will have sass install problems if we install the packages twice (the wrong way first). That was the last issue. We had the

```yaml
- ENV: WEB_REPO=1
- BEFORE_INSTALL: install sass wrong
```

If we hit this same error in any architecture, we will have sass install issues again. 😄 I understand the architecture, but I'm talking about the configuration.
Could you point me to resources on the current runbot implementation, please? I didn't dig enough into that part.
Our Compass/Sass problems seem to have come back, actually. I've been meaning to look into it. Basically, what this boils down to for me is that we're maintaining build compatibility with two different images, neither of which is OCA-controlled. This adds unpredictability to our builds and regularly causes failures in one system or the other. We could easily adapt this strategy to support multiple images too, such as maybe a CentOS build. In doing that, though, we'd still have a unified file, application, and user hierarchy. This is where our issues are stemming from, I think.
Source code?
In the past, the t2d tool created a bash file to run directly the … We can create this piece of code again in order to avoid creating two configuration files and changing all of our current CI architecture.
Yeah, because that branch has the error mentioned in #477 (comment) in the .travis.yml.
I'm having some problems setting this up; I think it mostly boils down to the current MQT being almost entirely Travis-specific. This makes wrapping it all in a Dockerfile quite hard, and making that image work in another CI even harder. Not sure where to go from here; a plug&play Docker image for testing (both automated and manual) definitely seems to be the way to go, but making the current code work that way is quite hard. Just as an example, pylint is not finding any modules to lint, even when everything is properly set in place...
We can start with …
It means:
Regards.
Yikes, I forgot this is still open. I'm gonna close it, since I'm not gonna develop it further. IMO the easiest way to run these tests would be to have a small collection of little Docker images, each one testing one purpose. This PR is definitely wrong and highly related to the final conclusion (if it ever comes) of OCA/runbot-addons#144, so closing.
The objective of this PR is to enable us to run the full MQT power in a dockerized, reproducible environment, and to let anyone plug it quickly and easily into any CI pipeline.
I'm trying to reinvent as little as possible while keeping the Docker logic in place.
I'd love to be able to add oca dependencies in a way that fits better with https://github.com/Tecnativa/docker-odoo-base, although I'd prefer to get some help from MQT pros around.
Of course, after merging, to see the effect, we need to create a https://hub.docker.com/r/oca organization and configure an automatic build there. I configured https://hub.docker.com/r/tecnativa/maintainer-quality-tools for testing purposes; it will rebuild on each push to the PR branch. I could do it myself if I were a PSC here, but I don't feel like I know enough about what I'm doing here yet.
... and guess what? phantomjs, sass, compass, and bootstrap-sass are preinstalled. Deal with it, #471. 😎
@Tecnativa