[RFC] Future of Runbot #144
@lasley You were a visionary of the Odoo.sh platform 😉
I don't think Odoo will invest more time in runbot from now on.
Hello Dave, the next step is to create the GUI to see all these instances; that part is related to this issue and is still undefined.
I love your insight on the matter, @lasley. I didn't think of this problem, but it's pretty obvious that runbot will die soon. Basically, the plan we have made is that @Tecnativa would be pleased to donate https://github.com/Tecnativa/docker-odoo-base (we can call it Doodba for friends 😋) to @OCA, and then keep on using it to [almost] completely replace https://github.com/OCA/maintainer-quality-tools. After some days talking to many OCA+Docker contributors/integrators, I can finally say I have a clear roadmap, as explained below; and although it's pretty clear and not extremely hard (a little bit long, but not really hard), I'm in no way able to do all of this work by myself. This is an integral change to how OCA works, and its useful surface would be way bigger, with way less work and fewer tools (not only specific CI, but a full agnostic deployment stack). So we'd love to see some community commitment in this issue. If we're not gonna get to the end of it, it's not worth it to start... The main use case that gave birth to doodba wasn't CI and runbot at the beginning, but nowadays we are using it internally in our CI pipeline, and it turns out the need for a runbot disappeared as a natural side effect: we just ask our CI to boot the project somewhere, and we have a runbot. A piece of cake. We need to implement some changes in the project to make it more coupled to OCA:
After that, the OCA itself needs to make some changes to adapt to doodba:
To make it all work as smoothly as possible, there are still other needs:
And that's why we didn't have much to report in the code sprint, BTW... Coming up with this roadmap, teaching the tool... was quite a lot of work by itself. We only finished the tasks specified above. Although the roadmap seems a little bit tricky, I think it's pretty clear, and at the end of the road we'd have new tools that could be plugged in anywhere with almost no effort, providing a real open alternative to Odoo.sh and runbot, where Fabien decided that "Docker is not for an ERP" 🤣 Thoughts?
Looking back at my RFC - wow. It seems I am 😆
We have a version 9 here. I did a quick v10 upgrade that works, but I think it would be nicer to have something new - this is really bad code.
Clouder is definitely out - It's been disbanded.
Hooray - I figured this was the plan at some point, if not just so we have an official image.
I love it!
I can add a decent amount of resources to this if we can couple it to a full deployment pipeline. It sounds like this is the plan, so count me in. I still need something for my PaaS orchestration, and this would take me closer.
Exactly
This point has bugged me as well. Can we not instead modify the
Agreed - such as our static image for Runbot. This is built directly in Docker, but I would love to have a push mechanism. This could even allow for some things like a private repo clone during build, then a subsequent deletion of the SSH keys, and then push of the image. With the build code not actually exposed, the SSH key is safe, but the private repo is still there. We've been doing this in our private images, but it would be nice to be able to do in the public ones too.
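One way to sketch that build-then-strip-secrets flow is with Docker's multi-stage builds (available since Docker 17.05). This is only an illustration of the idea, not an existing OCA image: the image names, repo URL, and build argument below are all hypothetical, and a throwaway deploy key should be used since `ARG` values can still be recovered from the build cache on the build host.

```dockerfile
# Stage 1: clone the private repo using a build-time deploy key.
# These layers are discarded and never pushed to the registry.
FROM alpine/git AS fetcher
ARG SSH_PRIVATE_KEY
RUN mkdir -p /root/.ssh \
    && echo "$SSH_PRIVATE_KEY" > /root/.ssh/id_rsa \
    && chmod 600 /root/.ssh/id_rsa \
    && ssh-keyscan github.com >> /root/.ssh/known_hosts
RUN git clone git@github.com:example/private-addons.git /src

# Stage 2: the image that actually gets pushed -- it contains the
# private code, but the SSH key never existed in any of its layers.
FROM odoo:10.0
COPY --from=fetcher /src /mnt/extra-addons
```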
This is done in Tecnativa/doodba#71. I'll rebase to clear the conflict and, hopefully, the spurious error
Our [OCA] Runbot servers are physical. Some prices came up in our last board call, when we voted to create the new build worker for the sprint. We're getting a better deal than we could get on AWS, I think. That said, LasLabs has about 6-7 lab servers in our inventory. I would be willing to slice one off for the cause of our development, which would allow us not to have to worry about costing and such until we have a real system. I'd just have to make some proper network rules and all that, so our corporate network isn't accessible.
I just registered
We're using CamptoCamp's Mopper, which works pretty nicely for this. Plenty of other stuff around though.
I think we could get around these requirements initially. These builds are pretty light, honestly - coming in at a little over 200 megs of RAM and 0 CPU usage. A dyno-type system would be freaking awesome, but I don't think the development effort it would cost is worth the tradeoff. If we were to pursue something like this, I think making Odoo compatible with true serverless computing would be a better use of resources (such as AWS Lambda). Just in case you're curious, I gave you access to our Rancher's Runbot environment (https://rancher.laslabs.com - just use your GitHub login - don't hack me bro). Go to Infrastructure => Hosts => laslabs-runbot-01. From there, just use the hell out of Runbot to see if resources spike. Things only really get bad if there are a lot of consecutive builds (this worker can handle 8 well - one per CPU core, basically), or a lot of instances with activity. Idling instances haven't really caused me issues. Here's the last week of historical overview:
Docker is for everything, dammit! 🚀
cc @gurneyalex
Note I snuck something in with an edit about the dynos vs. serverless computing:
We have been working internally a lot on a flexible Docker image with automatic fetching of recursive repositories (public, private, PR) and automatic build of odoo.conf for all versions since v7.
Throwing this on the list of features that are required - I'll handle it myself. The new system should support XML output (Coverage.py outputs a format compatible with Cobertura) and parse the results properly. I've been missing this from our past build systems for a long time now. We would use pycobertura and parse the results into something meaningful. Reporting would come later.
Hi,

### Current status

MQT is a key element in ensuring the quality of the code developed by OCA. Today these scripts cover at least the following needs:
Today, with the announced end of runbot, there is also the need to set up a system allowing the automatic deployment of these changes. These needs are common needs shared by the whole OCA community, and many initiatives have been taken by different members to come up with a solution that is easy to implement outside of runbot. Indeed, today, one of the biggest problems of MQT is that it is not very modular and is designed to run on runbot. At this stage of reflection, I think it is important to separate tools from platforms. It is also important to avoid reinventing the wheel, by taking advantage of what the python ecosystem can bring. But what do we really need?

### Managing dependencies

In our developments/deployments we must deal with dependencies between Odoo addons but also on pure python modules. At the genesis of MQT nothing existed to take on this big challenge. A process based on two files has been put in place
Thanks to this process, we are able to initialize a test server with all the requirements available on the server. This process is at some point a little bit tedious and requires duplicating, in external files, part of the information already present in the module's manifest. Over the past two years, a lot of work has been done by @sbidoul to make our Odoo addons compatible with the dependency management system of python (pip). At the same time, a bot has been put in place to package and publish all the OCA addons (from 8.0) as wheels on pypi: https://pypi.python.org/pypi?:action=browse&show=all&c=624 Therefore we no longer need to provide our own way to manage dependencies, and we can take advantage of pip to safely install Odoo and the addons to test (https://pypi.python.org/pypi/odoo-autodiscover, https://pypi.python.org/pypi/setuptools-odoo). For example, to build and install a requirements-test.txt for a repo's addons:
```shell
pip install acsoo
for addon in $(acsoo addons -s " " list); do echo "-e ./setup/$addon" >> requirements-test.txt; done
pip install -r requirements-test.txt
```

### Testing addons

To test our addons, the following pattern is applied:
Even if this pattern is clear, it's difficult (if not impossible) to reuse MQT outside travis to apply this pattern to our daily work. This observation was the starting point for the development of a new python utility module, providing parts of the logic of MQT as a set of command-line utilities, to facilitate our development workflow at Acsone: https://pypi.python.org/pypi/acsoo/1.5.0 With this kind of utility module, it becomes easy to explain the different steps in our CI and to give contributors a way to apply the same pattern in whatever environment they want to use (travis, gitlab, docker, a virtualenv on their own machine). The only technical skill required is to be able to install an odoo module. In three words:
Check code quality with flake8
Launch Odoo to install all the addons on which the addons to be tested depend, and check the log for errors

```shell
ADDONS_INST=$(acsoo addons list-depends --exclude=....)
unbuffer ${VIRTUAL_ENV}/bin/odoo -d ${DB_NAME} --stop-after-init -i ${ADDONS_INST} | acsoo checklog
```
Launch Odoo with coverage to run the tests of the addons to be tested, and check the log for errors
```shell
ADDONS_TEST=$(acsoo addons list --exclude=....)
unbuffer coverage run ${VIRTUAL_ENV}/bin/odoo -d ${DB_NAME} --stop-after-init -i ${ADDONS_TEST} --test-enable | acsoo checklog
```

This approach, based on standards and lightweight tools, is used to test alfodoo: https://github.com/acsone/alfodoo/blob/10.0/.travis.yml

### Travis, Docker, Runbot, ...

At this stage, we have all we need to test our addons in a simple and lightweight way. This approach is used on our internal projects (with gitlab ci) but also to test alfodoo with travis: https://github.com/acsone/alfodoo/blob/10.0/.travis.yml From here we can put in place some specific layer to speed up the tests on travis (or elsewhere). We could provide some docker image with odoo already pip installed, for example... I'm not a specialist of these technologies, but IMO these should be layers embedding this lightweight process.

### What about runbot...

As @yajo says, what we need is to be able to deploy the result of our builds somewhere, and that somewhere has to manage the lifecycle of these deployments. One benefit of using pip to install our addons into a virtualenv is that we can use standard commands to extract and build all the python packages required to deploy our Odoo server on another Linux machine. In two commands you can create the list of python modules (Odoo addons included) and build all the wheels used to install the server.
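Those two commands might look like this (a sketch with plain pip; the `release` directory name is arbitrary, and building the wheels may need network access to fetch sdists):

```shell
# 1. Freeze the exact set of python packages (Odoo addons included)
#    installed in the current virtualenv.
mkdir -p release
pip freeze > release/requirements.txt
# 2. Build wheels for everything in that list, into the release directory.
pip wheel -r release/requirements.txt -w release/
```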
The result is a list of python wheels in the release directory. You can copy this directory wherever you want. Once copied, your server is installed in one command.
@lmignon - thank you for the great evaluation. I was looking forward to seeing what you guys had to say about this, particularly because of the work your team has done on the pip side of things (seriously thank you @sbidoul). On the high level, I fully agree with you. We should absolutely leverage every pre-existing system we have in order to keep our work (both now and future) minimal. The current fracturing of the ecosystem definitely leads us to most of our problems; maintaining one logical pathway for the installation, and subsequent testing/running of Odoo and associated addons just seems like a good idea honestly. It's also PEP-20 compliant:
So yeah, my biggest question on the pip side of things is: how would we handle dependencies that haven't been merged, and thus aren't on PyPI yet? Edit: In hindsight, the complex cases are out of scope
@lasley there is an old PR of mine illustrating this. The gist of it is these 3 lines. So basically, in normal situations a repo has no (or empty) requirements.txt. When setting up the test environment, all addons of the repo are added to it. Then that requirements.txt is pip installed which pulls all other dependencies automatically. If a PR requires a specific branch of a dependency (an Odoo addon or a python external dependency), you can add that branch in the requirements.txt of the PR. For example, say I make a PR for mis_builder 11 which depends on a yet unmerged date_range 11. My mis_builder branch would include a requirements.txt such as this:
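For illustration only (the fork URL and branch name here are hypothetical; a real PR would point at the actual unmerged date_range branch, and the package name follows the setuptools-odoo `odoo<series>-addon-<name>` scheme), such a requirements.txt could contain a single line like:

```
-e git+https://github.com/my-fork/server-ux.git@11.0-date_range#egg=odoo11_addon_date_range&subdirectory=setup/date_range
```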
Thanks for the elaboration, @sbidoul - this totally makes sense, and basically mirrors what we already do in the Looking at the requirements command you posted, I notice the I honestly didn't even know this folder existed until someone submitted a PR to one of my branches adding it. Looking back on this, I am now connecting the dots. If we went this direction, this does add some Python knowledge on top of the standard Odoo dev knowledge, which could increase the learning curve for our stuff. I think I'm overall for it, I just feel it's necessary to point stuff like this out. Should we think about pairing the implementation details of this with our planned apps store? Realistically that's a pipeline too - so maybe we add some concept of PyPI repos + Git repos to the mix. We could then leverage our apps store for CI, which in turn leverages pip for actual installation/dependency resolution. Sounds like killing a few necessary birds with one giant boulder, I think.
My above PR generates the missing setup.py automatically in the branch being tested. But in general, yes, they need to be added. There is a tool that helps you do that. I don't think it increases the learning curve compared to another dependency management solution. And what people would learn with it is mostly generic python ecosystem knowledge, which is arguably more useful than local knowledge such as oca_dependencies.txt or repos.yml. An OCA app store should definitely implement PEP 503 -- Simple Repository API.
The case that's still missing for pip installs is when you need to merge 2 or more PRs for the same addon, so we should still be able to support both install methods, as we currently do in doodba. I'd also still miss the feature of installing odoo itself with pip (which didn't work last time I checked). In any case, all of this reveals a conceptual problem right now in OCA repositories: All of this said, we still have one thing to keep in mind: OCA currently has 151 repositories, most of them for addons. This means that any system we deploy must be backwards-compatible until all of them are updated. The main problem with your comments is that they blur the roadmap I stated here: #144 (comment). I'm not exactly sure how to get to the end of it using your suggestions, so I'd appreciate help with that. One last reflection: it's interesting how each one of us came to a different solution to the same problem. The fact that runbot is starting to fade away just made the problem visible, but it's quite obvious this has been a problem for a long time, and it has to be fixed collaboratively. 😊
This case is not missing. We have developed git-aggregator for this use case. When we need to merge 2 or more PRs for the same addon, we use git-aggregator to build a consolidated branch, and we pip install the addon from this consolidated branch:

```shell
pip install -e 'git+https://github.com/acsone/web.git@MY_BRANCH#egg=odoo9_addon_web_m2x_options&subdirectory=setup/web_m2x_options'
```

If your branch is local:
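Presumably the local variant is an editable install straight from the checkout (the path here is hypothetical, assuming the consolidated branch is checked out at ./web):

```shell
pip install -e ./web/setup/web_m2x_options
```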
This file is not required with pip. All the dependencies are managed by pip. (Odoo and python dependencies) https://pypi.python.org/pypi/setuptools-odoo
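For context, the per-addon setup.py that setuptools-odoo relies on is a tiny stub; per its documentation, it boils down to roughly this, with the manifest's `depends` and `external_dependencies` translated to setuptools metadata automatically:

```python
import setuptools

setuptools.setup(
    # setuptools-odoo reads __manifest__.py and fills in name,
    # version, install_requires, etc. on our behalf.
    setup_requires=['setuptools-odoo'],
    odoo_addon=True,
)
```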
My main concern is to separate tools from platforms. As a developer, I never need to use docker to test my devs. I just need tools that are easy to use, easy to install and easy to understand, to help me build, test and deploy my devs. I'm not interested in tools that reinvent the wheel. I have been a python developer since 1998, and as a python developer I expect to be able to reuse what the python ecosystem provides to all python developers. In my #144 (comment) I try to show how simple it can be to build, test and deploy our devs with simple tools. These tools exist and are independent of any specific platform. IMO the first step is to clean up the way MQT works before introducing new complex layers. MQT must be based on standard python tools and patterns.
I agree with @lmignon. The current approach from acsone is IMHO easy to use, it is quickly being adopted by many, and it clearly separates layers. We also face the need to decide when to use docker and when not to. If there are a few corner cases that require extra tools, we can discuss how to implement them. I think that we need to take advantage of this huge work.
Definitely agree here: you end up downloading way too much useless stuff.
I feel that the pypi direction is the right one (basically because it has been running for so long that it is stable and well documented).
For "advanced" level:
That means too:
In short, I see a lot of technical discussion and I just don't want to lose the end-user perspective: what are the plans for the end-user interface?
The end user interface would be just a link at the end of any PR that leads you to the running testbot. Nothing else. It can be improved later if the need arises. @lmignon I don't understand your comment, see...
This is a contradiction. 🤔 Do you mean that you never used Docker and thus you feel it's complex? Because I could say the same about pip-installing addons, and I'm pretty sure your facepalm would be almost as big as mine right now... 😏
Well, we're talking about mostly reinventing MQT and runbot here, so are you saying you're not interested in this whole thread?
I'm not against it, as long as it is based on Docker, although the plain truth is that a well-made Dockerfile can quickly deprecate the need for other systems.
I don't like forcing everybody to maintain their own forks, although it's an acceptable strategy if you like it, of course. 😉👍 Anyway, setting differences apart, you use git-aggregator too, so at the end of the day we're going to need it anyway; I don't really see that the two standpoints collide. So, after all, what the roadmap should change is just:
Right? @lmignon said no need for All this said, one of the funny things about Docker is that it encourages the black-box paradigm (put this here, that there... and it all just works), so switching from one system to another would be pretty simple, no matter the direction we take with the first approach. OTOH, nobody has yet talked about backwards compatibility with pip...
IMO we need at least a Runbot-esque interface. Aside from functional reviews, my customers are all trained to use Runbot when reporting issues. We absolutely require a way for anyone to easily dive into a live Odoo instance of production branch Y.

**Docker or Pip**

We're talking apples and oranges here. A Docker-based solution would require whatever methods we devise for the non-Docker method anyway. The question here is: where does our code live? All Docker does is give us a unified platform and syntax for the code - really it's just a bunch of bash files. There is no real difference, except that one is called a VM and one is called Docker. I do agree that we need a Docker-based solution though, and the reason is compatibility. If you can guarantee me that whatever we come up with here is going to be compatible with all systems, then great, let's remove the layer. The problem is this guarantee, if provided, would be false. Windows and Mac simply do not run things the same way as Linux. Core-level things like OpenSSL implementations are different, which leads to unexpected results somewhere. We can absolutely provide this guarantee with a Docker-based solution. To me, this is worth the layer.
I think I'm missing the point here. What additional work would be required to make pip backwards compatible? The naming schema on PyPI is defined by the bot and based on the addon name, so it's just a matter of knowing the dependencies in the manifest.
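As an illustration of that naming schema (the helper below is hypothetical; the scheme itself, `odoo<series>-addon-<addon_name>`, is the one setuptools-odoo and the bot use, with pip treating `_` and `-` as equivalent):

```python
def addon_to_pkg(addon, series):
    """Map an addon name + Odoo series to the bot's PyPI package name.

    E.g. the 10.0 version of mis_builder is published as
    odoo10-addon-mis_builder.
    """
    return "odoo%d-addon-%s" % (series, addon)

print(addon_to_pkg("mis_builder", 10))  # odoo10-addon-mis_builder
```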
Hi all, Here are a few more thoughts. If I remember correctly, OCA/maintainer-quality-tools#343 was preserving backward compatibility with oca_dependencies.txt, so that should not be hard to achieve. We could finish that PR without much effort, btw. requirements.txt is not necessary when installing the addons to test with pip (because external python dependencies are handled automatically by setuptools-odoo). Actually, we must NOT have a requirements.txt at all in python library repos such as the OCA addons repos. requirements.txt is only necessary to reference specific branches of dependencies when a PR depends on an unmerged addon in another repo, as explained in #144 (comment). Such a requirements.txt must not be merged in the main branch. In the very rare cases when a PR would depend on several unmerged branches of an addon in another repo, we can maintain that temporary branch manually and reference it in a requirements.txt. I don't see git-aggregator fitting anywhere in an automated OCA CI flow. [As a side note, even in the simplest projects, we always run git-aggregator manually in a controlled way, never in an automated flow: it's too dangerous.] My 2 cents regarding the roadmap:
From reading the docker-odoo-base README and Dockerfile, addons.yaml/repos.yaml seem to reinvent what pip is made for.
That's exactly my point!
Oops, I forgot the most important part in my flowchart - the testing interfaces! Added
@lasley LGTM.
Good point, I want to keep Travis too. This is where the Docker image comes in - MQT, I think, would just be a wrapper allowing for the execution of our tools (like @sbidoul laid out in #144 (comment)). I'll figure out how to adjust the flowchart for that one - it's a bit complicated to represent visually, I think
Note: I also want Gitlab CI too
Indeed it might be simpler for the OCA app store to feed itself from https://wheelhouse.odoo-community.org/oca-simple/ (wheels being essentially zip files already).
@yajo you are not the only one.
@yajo I don't share the same POV in your comparison database.
You only need to create the setup folders for new addons. A bot generates the missing setup folders every night for all the OCA addons. Moreover, generating a setup folder is done by launching one command in the addons folder
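If I recall correctly, that one command is setuptools-odoo's default-setup generator (invocation sketched from its documentation; it is also what the nightly bot runs):

```shell
# Generate setup/<addon>/setup.py stubs for every installable addon
# found in the current directory:
setuptools-odoo-make-default --addons-dir .
```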
???? You are able to merge all the branches you need for your addon. You have no limitation. Even more, you can select individually, for each addon, the branch you want to merge. I have experienced problems in the past with the approach based on addons in the same file tree, when the merge of one patch needed for one addon impacted another addon used in the repo. The pip approach is more flexible and less error-prone.
For our clients, we never deploy from git. We always generate the wheels to deploy on the target machine from our staging server. The installation on these machines is done in three commands:
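Presumably something along these lines (a sketch only: the venv path, wheel directory and config file location are made up, and the `release` directory is assumed to have been copied over from the build machine):

```shell
# 1. Create a fresh virtualenv on the target machine
python3 -m venv odoo-venv
# 2. Install everything from the copied wheel directory, fully offline
odoo-venv/bin/pip install --no-index --find-links=release -r release/requirements.txt
# 3. Start the server
odoo-venv/bin/odoo -c odoo.cfg
```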
When you deploy from git, you must have access to git from the remote machine. It's not always possible for some clients.
Are you a python developer 😏 😏 😏 ? I agree that we use some advanced features of pip, but you can find a lot of documentation in the python community. Moreover, this knowledge is valid for all python development, not only for Odoo.
For sure, staging and production environments must be as secure as possible (no dev packages, no dev envs, no ssh certificates allowing access to external systems, ...). But all these envs are installed in the same way. IMO it is also very important that the images created to validate our PRs as a replacement for our runbot are as light as possible. With the proposed approach, we would just need Odoo's non-python dependencies plus the wheels generated/obtained by travis during its build process, and that's all.
It's only true in our dev envs, for addons not available as wheels. @yajo I think that in this discussion we can continue to argue endlessly for one solution or another. For me, the most important thing is that the tools provided by OCA are examples applying best practices and based on known standards.
Wao, I have been reading the thread for an hour, and there are A LOT of half-truths on SEVERAL arguments here, almost all of them subjective: "we can't, we must, blah blah blah". There are some facts:
Then my opinion (conclusion): let's document more, not change any format, just complete it, and let's allow the @yajo approach to mature naturally for 1 or 2 years until we can have more solid work there. I think, as he said (and I totally agree on that), it will naturally evolve to the easiest way; in my opinion the easiest way is the best documented, not necessarily the best tool, so let's document. What do you think? Another point: we need things done in Odoo efficiently, because that is what we maintain; we do not need tools that almost turn on a mixer ;-). I hope you take these facts into account in your decision; at the end, the best maintained one will win (the tools do not matter), just what gets done. Thanks for reading, and sorry for the long email.
Why do you oppose pip and docker? It's not a fact. We use docker with voodoo (buildout inside) and we'll convert voodoo to pip for odoo modules.
@lmignon rejected some negative points, although none of them were shown to be false, just not so bad/hard in my view.
Best practice: KISS. Known standard: Docker. I also agree with many of @nhomar's points, although if we are facing tool & format deprecations (not just for the sake of it, but because the replacement is really better 😉... and PRs welcome!), then IMHO we should invest the documenting efforts in the new ones.
Thanks for that, pal 😊 In any case, no matter whether we are in the runbot-deprecation, pip-enforcement, or any-custom-odoo-up-and-running-in-10-minutes side of the discussion, one clear 1st step we can do is to add a I'd be glad to start doing that ASAP. That's a common, non-hurting base that almost everyone will benefit from. If people at OCA refuse doodba, we can keep it for ourselves. It's a shame, because with some little improvements, as I explained long ago, it would possibly improve the whole situation in MQT, runbot, contributor development, and integrator deployments; but I can't force anyone... So do you all agree on starting with this little step, and then somebody from the pip lobby can provide their alternative roadmap? 😊
Sorry if I expressed myself incorrectly. I am not against Docker; I am just saying that comments about Odoo not picking docker are incorrect, AND our effort must be around something "docker or not docker": simply a package that automates Odoo-related tasks. That said, IF we pick @yajo's approach, it cannot rely on Docker by default; docker is a brand, not a standard as you say. The standard is this one, BTW (which is a WIP). Then let's focus on Odoo's layer; where it will run (Docker, LXC, Odoo.sh directly on Amazon AWS, my machine, guindows) should be irrelevant. Docker is just "where Odoo will land". That's my point.
There is another point, and I think the most important one in my statement: both sides are making arguments with "we can't" or "we must not" which ARE NOT REAL. I did a little counting of the number of lines of code in mqt and mt. They have:
IMHO that is fairly little (and can be reduced further); digging into tens of new pseudo-tools will also kill the contributions. Right now I can look in 1 and only 1 place for all the tools - why look in other places? Let's document better. About mqt:
Let's plan the cleanup. You will all see that if you pick ANOTHER option, it will represent the SAME effort that has already been done. Why the re-work? Why not clean up, or maybe refactor?
Ok so we have a lot of differing opinions here, and I highly doubt we're going to settle on one. We all obviously have our own ways that we want things done, and we all have our own infrastructure and existing methodologies that need to be considered. We're definitely out of scope of just a Runbot conversation, but I feel that it's all totally related and absolutely good to be discussing as a whole. I think the first step here is to identify what it is that we need from an entire build pipeline in the abstract. Aligning the conversations and staying technology agnostic, I see the following:
Once we nail down our abstracts, I think the best approach is to rebuild the existing MQT to provide helpers, plus an interface/adapter mechanism to allow for our particular build pipelines. Our pipelines can then be installed as Python packages, or possibly Odoo modules (in the case of a Runbot-esque GUI) |
I mapped Dave's concepts with:
I'm sorry I didn't include
Some final words: mqt, pip and Docker are not competitors. They are different tools solving different layers of the problem.
FYI we have repackaged some parts of MQT into a set of command-line utilities to facilitate our Odoo development workflow at Acsone: https://github.com/acsone/acsoo For example, I also use these utilities to test alfodoo with travis (https://github.com/acsone/alfodoo/blob/10.0/.travis.yml)
I don't understand this pip vs docker debate. It seems this is rather a pip vs doodba debate, actually. But I see no reason why pip could not be used within a docker image. Now, regarding Docker, I think it's very nice for a CI process, where you test your images in, for instance, Travis, and spawn them somewhere else. It doesn't mean this Docker image cannot be built using pip... We should be really cautious not to have tight coupling between the Docker image distribution used and the tools used for the tests, checks, deployment and so on. I'd say one should be able to use the tools on any Docker image, or even on any system outside of Docker. My opinion: the image should be kept to the bare minimum required; containing only the sources, the dependencies and a small entrypoint to generate the config file and install addons at startup is enough. The new mqt tools should be independent and outside of the Docker image. The way the source code is injected into the image (pip, git repos copied in the image with COPY, whatever) doesn't matter much. Once an image is generated by Travis / Gitlab CI / Jenkins / ... it can be pushed to a runbot-like server (which should support any image). I can share what our CI process is @ Camptocamp, if it can be of any help. Our project images are based on https://github.com/camptocamp/docker-odoo-project/ We use Travis, which builds the image, runs lints and creates a container to run the tests. If they succeed, the image generated during the build is pushed to the hub registry. Then, it sends a POST request to a small app we call "Rancher Minions". Rancher Minions is our internal runbot-like tool, running on top of Rancher: a small 500-line Flask application. It leverages Rancher and is mostly a nicer graphical interface on top of Rancher to display odoo instances grouped by branch. It spawns the new stacks and destroys the old Rancher stacks when it receives new requests.
The hard stuff (dns, proxy routing, instance state, logs, cleaning, ...) is handled by Rancher. Also, as the build of the images is done by Travis, the spawned instances "only" have to start and create a db (automatically when they start). At the moment, Rancher Minions is an internal tool, but if there is some interest in the OCA, we can share :)
@dreispt 👍 If I understood you correctly, @dreispt: if we want to use docker, then we need to change MQT to support it (other issue, other PR, other matter). And if your custom project wants to use pip odoo addon packages, then you will need a way to support that (environment variables, configuration files and so on). If you run the same scripts as .travis.yml in another compatible environment, then currently MQT is compatible with:
The bad point for MQT is that we have ugly, separate scripts instead of a good, correctly documented package. Someone told me: "I can't use MQT with gitlab". And there are so many ways to create the same thing that it is impossible to move forward using one and just one. What about a command like:

```python
import os

def get_addons_dependencies():
    # Fetch addon dependencies from whichever manifest the repo provides
    if os.path.isfile("oca_dependencies.txt"):
        clone_oca_dependencies()
    elif os.path.isfile("git_aggregator.cfg"):
        clone_git_aggregator()
    elif os.path.isfile("odoo_addons_pip_requirements.txt"):
        pip_install_odoo_addons()
```

If you run this command in a docker, in a travis, in a gitlab, in a shippable, in a
Hello all. Following the topic discussed (make MQT a clean python package, step by step, in the right way), I started the work myself here: OCA/maintainer-quality-tools#500 The TODOs are on the PR. I did not know about the existence of https://github.com/acsone/acsoo; in any case I will try to copy whatever is adaptable to MQT (I think it would be helpful just to start the discussion with "I HAVE THIS DONE, GUYS" @lmignon, and I am open to putting it under the OCA umbrella), because that's exactly the very first step I am re-doing now, backward compatible. The only point is that MQT grew over time by adding a huge number of use cases (almost all important ones, really complex to debug in some cases), and I do not want to lose a single use case in the move. We need to stay backward compatible with all the repositories, not break the instances of the people using it right now (like ourselves), and have time to move. One important point is that MQT has not been Travis-only oriented for some time, BUT the names of the scripts were kept, and that was a wrong move because it conveys the wrong approach. I will try to have, this week, a working environment documented as well as I can, to move forward on the MQT layer. Once it is done I will try to include git-aggregator as a dependency, BUT we need to move git-aggregator under the OCA umbrella (in order to add the specific necessary features there) and clean up some python stuff (no big change, just making it usable as a module and not just a command line); then we will be able to use addons.yml if you like (and maybe move some modules to see how it works). This feature is agnostic of Docker, BTW, so adding it should not be difficult, and it may help people to keep their un-merged environments (that's heresy for me, but that's my opinion ;-)). I would not like to create yet another package for deployment purposes (by importing what is needed, we can add the necessary subcommands using the right tool for the job).
Best regards and happy hacking. |
FYI, we have no intention to drop the runbot at the moment. It is not even discussed. We discuss adding features to the runbot. If you want to create new (and clean) test tools, go for it, but don't take that decision based on the fact that we will stop the runbot... |
@guewen Thank you for this clear summary of the situation, which makes it possible to refocus the debate on the real issues at stake in this discussion. I completely agree with what is being said. |
Why do we need it for MQT? As @sbidoul says: "Please keep MQT focused on OCA needs only. The needs of OCA (testing addons libraries) are quite different from the needs of integrators (testing integrated projects). If we try to have one solution that fits all use cases, we create something that is very complex and hard to maintain. On the other hand, if MQT is built around small components with separated concerns, such components provide value to everyone." Moreover, you are free to contribute to git-aggregator even if it's not under the OCA umbrella. |
@lmignon I think exactly as you explained: MQT will be the place where all the testing process and automation land (at least we agreed on that now), including using it as part of the testing flow to pre-merge things and download the linked repositories, per @yajo's proposal. If you think it is not needed, then maybe I understood something wrong; the proposal is to use a dockerized environment and, instead of using oca_dependencies, have a tool to pre-merge pending PRs. Did I understand well?
The point is that we should not make our main tools brand-dependent, and the package basically has the Acsone brand on it; I think it is not fair to do that, since nobody else does it in their OCA contributions. If we started to name all the work we contribute vauxooX instead of something generic, it would not be seen well, I think (at least if I am not wrong). BTW, I understand you are OK if we copy some things you re-did that were already done in MQT (whatever the reason, I understand some of them) and add some other new ones; I can do the job of putting that under a brand-agnostic name in MQT, no problem. Now I understand your technical point better, BTW. |
Thank you all, I believe we now agree that there is no Docker vs Pip vs MQT: these are different tools, solving different problems, that can even work together in the same solution. The "Freedom" FOSS value also means that we should be free to choose the tools that best fit our use cases, so we should really avoid locking down our tools to opinionated choices unless really needed. @nhomar I agree with the vision that MQT should make the switch from a test/CI suite to a CLI tool, so OCA/maintainer-quality-tools#500 can be a step in the right direction. I have some concerns with MQT, though. IMO the project needs quite some refactoring; for example, the self-test approach is far from the best, and the environment-variable communication between scripts is also not ideal. I believe that the best approach is to leave MQT as the OCA CI suite and create a new project, upstream of MQT, like we did for pylint-odoo. This would be a new CLI tool solving the problems MQT needs solved, such as defining configurations, resolving dependencies (Git or Pip) and providing entry points for running and testing Odoo. Maybe acsoo can provide a good starting point for that, and we can fork it into an OCA tool and repo? And maybe addons.yaml can be extracted from Doodba to move into this tool? (Sorry for focusing on MQT in a Runbot thread 😞 ) |
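The "test suite to CLI tool" switch proposed above could look like a single `mqt` entry point with subcommands, in the spirit of pylint-odoo being an upstream package. A minimal argparse sketch; the subcommand names (`deps`, `test`) and flags are hypothetical, not an existing MQT interface:

```python
# Minimal sketch of MQT exposed as a CLI tool with subcommands, as proposed
# above. The subcommand names ("deps", "test") and options are hypothetical.
import argparse


def build_parser():
    parser = argparse.ArgumentParser(prog="mqt")
    sub = parser.add_subparsers(dest="command")

    deps = sub.add_parser("deps", help="resolve addon dependencies")
    deps.add_argument("--source", choices=["git", "pip"], default="git")

    test = sub.add_parser("test", help="run the addon test suite")
    test.add_argument("--addons", default="", help="comma-separated addons")
    return parser


args = build_parser().parse_args(["deps", "--source", "pip"])
print(args.command, args.source)  # → deps pip
```

A design like this keeps each concern (dependency resolution, testing) a separate, importable component, which is exactly what lets other CIs (GitLab, Jenkins, ...) reuse the pieces without the Travis-shaped scripts.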
@nhomar IMO for OCA we don't need to pre-merge things; we just need to get all the dependencies for a given branch of a repo, in one way or another.
This tool is a generic tool that can be used everywhere, not only in the python ecosystem. It's a tool for the git users community. The name is neutral, without reference to Acsone. To install it you just need to type 'pip install git-aggregator' (once again, no reference to Acsone). More generally, I hope it is possible to use tools or libraries external to OCA. If not, we have a problem: we would have to rewrite all the tools and libraries we use. I have always seen OCA as a community open to the outside world, where we try to take the best of what exists when it is consistent with OCA values. Am I wrong? |
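For readers unfamiliar with git-aggregator: it is driven by a small YAML file describing, per target directory, which remotes to fetch and which branches or PRs to merge together. A typical entry looks roughly like this (the repo path, branch and PR number are illustrative only):

```yaml
# repos.yaml for git-aggregator: clone odoo and merge a pending PR on top.
# Path, branch and PR number below are illustrative, not from this thread.
./src/odoo:
  remotes:
    odoo: https://github.com/odoo/odoo.git
  merges:
    - odoo 11.0
    - odoo refs/pull/12345/head
  target: odoo 11.0
```

Running `gitaggregate -c repos.yaml` then produces the aggregated working copy, which is how the tool blurs the "un-merged environment" problem mentioned elsewhere in this thread.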
Hmm, the problem with having too much agnosticism is that it becomes harder to KISS. #144 (comment) explains very well the options we have. Since it's gonna be quite hard to get to an agreement on best practices for integrator needs, I feel we should just focus on specific OCA needs and the easiest way to get them from an idea to a working tool, without breaking backwards compatibility and while maintaining a sensible level of pluggability to let integrators leverage those tools. The specific OCA needs are just automated CI and manual CI. The point of donating doodba is that we could provide a tool for CI, integrators and developers that supports development, testing and production environments; it would make the Odoo landing experience much smoother, and since one of those environments is testing, it can easily be used for both CI needs. But to make it straightforward it also needs to make decisions, and since those are not liked by most OCA users, I guess it's best to keep it out. 😞 In that case, I think we can drop the idea of git-aggregator and addons.yaml for OCA too. They are used in doodba to blur the differences among supported environments, but again, if OCA is gonna support only one, it doesn't need such blurring. Hopefully some day pipenv gets mature enough to let us drop addons.yaml, but that is not today. All the pip stuff is quite hard to understand and maintain IMHO, but the truth is it's already there, so we should be able to use it for CI. So, a good roadmap could now be:
MQT has a lot of hacks to test just the addon you are pushing in your PR, but the truth is that since v11 we have reset our branches, so any added code shouldn't break preexisting code. This makes those hacks no longer needed. About the GUI: if we just turn the manual CI environment (runbot) on and off on demand via the smart proxy, as Odoo.sh does, you need no GUI; only a link to enter the runbot, and maybe a splash screen telling you to "wait while we boot the instance". If we complete all of this:
If tomorrow we want to use another container-based tool (Rancher, Kubernetes, Docker Swarm...), adaptation should be easy too, since all of them use the same basic unit as we'd be using (an image). The bad part:
... but that's how it's always been anyways... |
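The on-demand boot behaviour described above ("no GUI, only a link and a splash screen") reduces to a tiny piece of proxy logic: route to the instance if it is running, otherwise show the splash and ask the orchestrator to boot it. A sketch of that decision, with all names hypothetical:

```python
# Sketch of the smart-proxy decision described above: route to a running
# instance for the requested branch, or answer with a splash page and ask
# the orchestrator to boot one. All names here are hypothetical.

RUNNING = {}  # branch -> backend URL; normally queried from the orchestrator


def route(host, boot_callback):
    """Map e.g. '11-0-fix-foo.runbot.example.com' to a backend or a splash."""
    branch = host.split(".", 1)[0]
    backend = RUNNING.get(branch)
    if backend:
        return ("proxy", backend)
    boot_callback(branch)  # e.g. a call to the Rancher/Kubernetes/Swarm API
    return ("splash", "Please wait while we boot the instance for %s" % branch)


booted = []
action, target = route("11-0-fix-foo.runbot.example.com", booted.append)
print(action, booted)  # → splash ['11-0-fix-foo']
```

Because the unit being booted is just an image, this same routing logic works unchanged whichever container orchestrator sits behind it, which is the portability point made above.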
I don't get why... Maybe it's because, like Laurent, I have used it for ages, but I struggle to understand what's hard in there. In fact, I think that the current way of working around the non-usage of py packages IS the hard part, and it led all of us to re-invent our own wheel to make our deploys reproducible.
Am I wrong or are you saying that MQT should just work inside a container?
Why? You don't need a docker image for this already AFAIK.
Maybe that is true for integrators, but for (py) developers relying on py packages is a must-have, and I think it is going to ease the on-boarding of those who are scared off by this flaw in the Odoo ecosystem. |
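For context on the py-packages workflow being discussed: Odoo addons published on PyPI follow a predictable distribution naming scheme (the one popularized by setuptools-odoo), so going from an addon's technical name to its pip requirement is purely mechanical. A small sketch of that convention (the helper function itself is hypothetical):

```python
# Sketch of the distribution-name convention used by setuptools-odoo for
# addons on PyPI: an Odoo 11 addon `partner_firstname` is distributed as
# `odoo11-addon-partner-firstname`. The helper function is hypothetical.


def addon_to_requirement(addon, series=11):
    """Map an addon technical name to its pip distribution name."""
    return "odoo{}-addon-{}".format(series, addon.replace("_", "-"))


print(addon_to_requirement("partner_firstname"))
# → odoo11-addon-partner-firstname
```

This predictability is what makes `pip install` a viable dependency-resolution backend for MQT-style tooling: a requirements file can be generated from a plain addon list.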
Dude, pip is hard to understand when you do not know it and have not read enough about it. I think if you read this, maybe it will change your mind. The problem is that:
Your dream will come true here (WIP). We made a mistake at the very beginning of that repository by not open-sourcing it in the right way, and this discussion is the very example of the consequences; we will fix that, I am on it with you all. Just remember why I do not want to start a new **anything**:
SUPER +1. @yajo, I need to understand this observation a little better. "Docker is nice", but if all you have is a hammer, you cannot treat everything as a nail.
SUPER +1 here. Imagine node people creating weird docker images with .js in selected folders ;-) |
+1 on this, annotated in the roadmap of the refactor.
|
@lmignon I think the same (even if that feature looks weird to me), but that was the only argument from @yajo for proposing to fully replace oca_dependencies and refactor the whole way we work; that's why I considered it. If everybody thinks that's out of scope, or there is no other reason, then let's move that feature out. |
I think that this issue could be closed since the good comment about: Feel free to re-open it |
Maybe this effort can grow useful at some point... https://github.com/xoe-labs/odoo-operator |
Also there is review apps integration from GitLab with any random k8s cluster (don't fear k8s's complexities! It's pristinely elegant 😉 )... |
So with the invention of Odoo.sh, it is obvious that the community will soon be left high and dry without a functional testing platform. Runbot is built on the old API, which will be deprecated with v9 one year from now.
This issue is to discuss what the hell we plan on doing about that. IMO Runbot is very important to our workflows, so I believe we must take on the brunt of this maintenance.
The big question is whether we start completely fresh, or try to upgrade the garbage that is the Runbot code. I honestly think it might just be easier to build it from scratch than upgrade, but maybe someone has another opinion.
I outlined some stuff in #88, but I think we could probably simplify. This would probably also go into our MQT redesign, and our planned OCA distribution platform.
cc @moylop260 @yajo