This repository has been archived by the owner on Sep 16, 2024. It is now read-only.

[RFC] Future of Runbot #144

Closed
lasley opened this issue Oct 4, 2017 · 101 comments

@lasley
Contributor

lasley commented Oct 4, 2017

So with the invention of Odoo.sh, it is obvious that the community will soon be left high and dry without a functional testing platform. Runbot is built on the old API, which will be deprecated with v9 one year from now.

This issue is to discuss what the hell we plan on doing about that. IMO Runbot is very important to our workflows, so I believe we must take on the brunt of this maintenance.

The big question is whether we start completely fresh, or try to upgrade the garbage that is the Runbot code. I honestly think it might just be easier to build it from scratch than upgrade, but maybe someone has another opinion.

I outlined some stuff in #88, but I think we could probably simplify. This would probably also go into our MQT redesign, and our planned OCA distribution platform.

cc @moylop260 @yajo

@lasley lasley added the question label Oct 4, 2017
@elicoidal

@lasley You were a visionary of the Odoo.sh platform 😉
I think we need to move forward with the idea but clearly need to simplify / prioritize the steps:

  1. runbot clean on docker/v10 (or any container techno to improve the load)
  2. deployment platform (not sure whether clouder can do the trick: I do not know it)

I don't think Odoo will invest more time in runbot from now on.

@moylop260
Contributor

Hello Dave,
Yajo and I talked at the OCA sprint, and he has a new way to build an instance that you are using too. (Yajo, you can reply with the explanation.)
The first step is to create the first PR working this way, with Travis green and Runbot with t2d disabled.
The next step is to create a live instance using just the GitHub status to add the link to connect.

The next one is to create the GUI to see all these instances; that part relates to this issue and is still undefined.

@yajo
Member

yajo commented Oct 5, 2017

I love your insight on the matter, @lasley. I didn't think of this problem, but it's pretty obvious that runbot will die soon.

Basically the plan we have made is that @Tecnativa would be pleased to donate https://github.com/Tecnativa/docker-odoo-base (We can call it Doodba for friends 😋 ) to @OCA, and then keep on using it to [almost] completely replace https://github.com/OCA/maintainer-quality-tools. After some days talking to many OCA+Docker contributors/integrators, I can finally say I have a clear roadmap, as explained below; and although it's pretty clear and not extremely hard (a little bit long, but not hard really), I'm in no way able to do all of this work by myself. This is an integral change to how OCA works, and its useful surface would be much bigger, with much less work and fewer tools (not only specific CI, but a full agnostic deployment stack). So we'd love to see some community commitment in the issue. If we're not gonna get to the end of it, it's not worth it to start...

The main use case that gave birth to doodba wasn't CI and runbot, but nowadays we are using it internally in our CI pipeline, and the need for a runbot disappeared as a natural side effect: we just ask our CI to boot the project somewhere, and we have a runbot. A piece of cake.

We need to implement some changes in the project to couple it more closely to OCA:

  1. It needs to temporarily support recursive oca_dependencies.txt resolution as MQT does ➡️ Code sprint WIP by @PCatinean in [WIP] Autogenerate repos.yaml for missing addons Tecnativa/doodba#86 (I'm not sure if he plans to keep on developing it or hopes I will finish it...).
  2. It needs to be able to auto-clone repos added into addons.yaml that are not found in repos.yaml (so, if you define e.g. server-tools: ["module_auto_update"] and it is not found, we assume you want it bare from OCA, and keep manual definitions in repos.yaml just for the exceptions, to keep it DRY and make @nhomar happy 😝); see the sketch after this list.
  3. It needs to keep static and onbuild images separate, so it can be reused by a wider audience. Some OCA members already have their own docker pipelines and they'd want to share just the base image instead of the whole project with onbuilds and scaffolding.
  4. It needs v11 image. ✔️ done at code sprint
  5. It needs to support wildcards in addons.yaml. This is essential to be able to deprecate oca_dependencies.txt in favor of addons.yaml (more on that below). ✔️ finished at code sprint
  6. It needs to support different addons per environment. ✔️ finished at code sprint
  7. It needs to have a utility that lets you find (and install, update, test...) addons from private, extra and core sources separately (or not). ✔️ finished at code sprint
  8. It needs to detect and install pip dependencies from addons instead of having to specify requirements.txt at repo level. ✔️ finished
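As a rough illustration of item 2, a minimal sketch of how the two files could relate, assuming doodba's addons.yaml/repos.yaml file names and its odoo/custom/src/ layout (the auto-clone fallback itself is the proposal, not current behaviour):

# Hedged sketch (proposed behaviour, not what doodba does today):
# addons.yaml only names the addons you want enabled.
cat > odoo/custom/src/addons.yaml <<'EOF'
server-tools:
  - module_auto_update   # repo missing from repos.yaml -> assume a bare clone from OCA
web:
  - "*"                  # wildcard support (item 5)
EOF
# repos.yaml would then keep only the exceptions that really need a manual
# definition (forks, pinned branches, merged PRs...).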

After that, the OCA itself needs to make some changes to adapt to doodba:

  1. It needs to decide where will we run the new runbot (maybe in AWS, maybe in the servers that are now used for runbot... wherever) and basically install a docker engine there.
  2. It needs an organization in Docker Hub. @moylop260 donated https://hub.docker.com/u/ocaimages although we'd prefer the owner of https://hub.docker.com/u/odoocommunity to donate it. We tried with just "oca" but it turns out you need 4 chars min. Suggestions welcome.
  3. The pylint-odoo project should provide the needed configurations for the OCA workflow, all bundled in a reusable Docker image, to streamline and speed up the linting process (see the usage sketch after this list).
  4. Although we have no special flake8 plugin or fork, we should provide a pluggable Docker image for it too for the same reason.
  5. Maybe the same for odoolint? Although it's not OCA's right now...
  6. Doodba should then be donated to OCA.
  7. MQT should get doodba support.
  8. Single repositories would need to start making the switch to the new MQT doodba mode. In this process, the oca_dependencies.txt files found in repos should get replaced by addons.yaml files. This process should be done step by step to avoid breaking the whole OCA in case something fails.
  9. Once all OCA repos are moved and working fine:
    • Support for oca_dependencies.txt should be removed from doodba.
    • OCA runbots should be completely dropped.
    • MQT old mode, travis2docker and runbot_travis2docker tools should be completely dropped (or deprecated, if any of you still needs them).
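A hypothetical usage sketch for the reusable lint images proposed in items 3 and 4 (the ocadocker/* image names are placeholders, they do not exist; the pylint invocation mirrors the current pylint-odoo plugin usage, with the OCA rcfile assumed to ship inside the image):

# run the linters against the repository mounted at /mnt
docker run --rm -v "$PWD:/mnt" -w /mnt ocadocker/pylint-odoo \
    pylint --load-plugins=pylint_odoo my_module   # my_module is a placeholder addon
docker run --rm -v "$PWD:/mnt" -w /mnt ocadocker/flake8 flake8 .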

To make it all work as smoothly as possible, there are still other needs:

  1. Since the new runbot would now be in effect a volatile FaaS, we would need a tool (docker image, of course 😆) that handles resources properly, such as what you saw in Odoo.sh. I imagine there must be some non-odoo-specific project that makes this (in fact, I wouldn't do it odoo-specific if we are forced to do it from scratch), I just didn't have the time to investigate enough. It should:
    • Clean up old images/containers/networks/volumes (see the cron sketch after this list).
    • Switch off new projects where odoo has not been used for i.e. 10 minutes.
    • Boot the container and redirect to it (maybe relying on Traefik for that) when a new request enters.
  2. Doodba only supports Odoo versions 8 (hopefully soon deprecated) to 11 right now. Those that use older versions and need CI should choose among migrating to newer versions, providing support for older versions in doodba (I'm no way gonna do it 😆), or not having CI.
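As one hedged option for the cleanup bullet above (the schedule and the until filter are illustrative and need a reasonably recent Docker engine):

# crontab entry: every night at 03:00, drop stopped containers, dangling
# images and unused networks older than a day; volumes are deliberately
# left alone so build databases are not wiped by accident.
0 3 * * * docker system prune --force --filter "until=24h"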

And that's why we didn't have much to report in the code sprint, BTW... Coming up with this roadmap, teaching the tool... was quite a bit of work by itself. We only finished the tasks specified above.

Although the roadmap seems a little bit tricky, I think it's pretty clear, and at the end of the road we'd have new tools that could be plugged anywhere with almost no effort, providing a real open alternative to Odoo.sh and runbot where Fabien decided that "Docker is not for an ERP" 🤣

Thoughts?

@lasley
Contributor Author

lasley commented Oct 5, 2017

You were a visionary of the Odoo.sh platform 😉

Looking back at my RFC - wow. It seems I am 😆

runbot clean on docker/v10 (or any container techno to improve the load)

We have a version 9 here. I did a quick v10 upgrade that works, but I think it would be nicer to have something new - this is really bad code.

deployment platform (not sure whether clouder can do the trick: I do not know it)

Clouder is definitely out - It's been disbanded.

Basically the plan we have made is that @Tecnativa would be pleased to donate https://github.com/Tecnativa/docker-odoo-base

Hooray - I figured this was the plan at some point, if not just so we have an official image.

(We can call it Doodba for friends 😋

I love it!

So we'd love to see some community commitment in the issue. If we're not gonna get to the end of it, it's not worth it to start...

I can add a decent amount of resources into this if we can couple it into a full deployment pipeline. It sounds like this is the plan, so count me in. I still need something for my PaaS orchestration, and this would take me closer.

we just ask our CI to boot the project somewhere, and we have a runbot. A piece of cake.

Exactly

It needs to be able to auto-clone repos added into addons.yaml that are not found in repos.yaml (so, if you define e.g. server-tools: ["module_auto_update"] and it is not found, we assume you want it bare from OCA, and keep manual definitions in repos.yaml just for the exceptions, to keep it DRY and make @nhomar happy 😝).

This point has bugged me as well. Can we not instead modify the repos.yml to support what we need? I think unification is good, but we should be Pythonic about this I think (explicit ftw)

It needs to keep static and onbuild images separate, so it can be reused by a wider audience. Some OCA members already have their own docker pipelines and they'd want to share just the base image instead of the whole project with onbuilds and scaffolding.

Agreed - such as our static image for Runbot. This is built directly in Docker, but I would love to have a push mechanism.

This could even allow for some things like a private repo clone during build, then a subsequent deletion of the SSH keys, and then push of the image. With the build code not actually exposed, the SSH key is safe, but the private repo is still there. We've been doing this in our private images, but it would be nice to be able to do in the public ones too.

It needs to detect and install pip dependencies from addons instead of having to specify requirements.txt at repo level.

This is done in Tecnativa/doodba#71. I'll rebase to clear conflict and hopefully the erroneous error

It needs to decide where will we run the new runbot (maybe in AWS, maybe in the servers that are now used for runbot... wherever) and basically install a docker engine there.

Our [OCA] Runbot servers are physical. Some prices came up in our last board call when we voted to create the new build worker for the sprint. We're getting a better deal than we could get on AWS, I think.

That said, LasLabs has about 6-7 lab servers in our inventory. I would be willing to slice one off for the cause of our development, which would allow us to not have to worry about costing and stuff like that until we have a real system. I'd just have to make some proper network rules and all that so our corporate network isn't accessible.

It needs an organization in Docker Hub. @moylop260 donated https://hub.docker.com/u/ocaimages although we'd prefer the owner of https://hub.docker.com/u/odoocommunity to donate it. We tried with just "oca" but it turns out you need 4 chars min. Suggestions welcome.

I just registered oca4odoo. That's the best idea I have ¯\_(ツ)_/¯

Since the new runbot would now be in effect a volatile FaaS, we would need a tool (docker image, of course 😆) that handles resources properly, such as what you saw in Odoo.sh. I imagine there must be some non-odoo-specific project that makes this (in fact, I wouldn't do it odoo-specific if we are forced to do it from scratch), I just didn't have the time to investigate enough. It should:

Clean up old images/containers/networks/volumes.

We're using CamptoCamp's Mopper, which works pretty nicely for this. Plenty of other stuff around though.

Switch off new projects where odoo has not been used for i.e. 10 minutes.
Boot the container and redirect to it (maybe relying on Traefik for that) when a new request enters.

I think we could get around these requirements initially. These builds are pretty light honestly - coming in at a little over 200 megs of RAM, and 0 CPU usage.

A dyno-type system would be freaking awesome, but I don't think the development effort it would cost is worth the tradeoff. If we were to pursue something like this, I think making Odoo compatible with true serverless computing would be a better use of resources (such as AWS Lambda).

Just in case you're curious, I gave you access to our Rancher's Runbot environment (https://rancher.laslabs.com - just use your Github login - don't hack me bro). Go to Infrastructure => Hosts => laslabs-runbot-01. From there, I just start using the hell out of Runbot to see if resources spike. Things only really get bad if there are a lot of consecutive builds (this worker can handle 8 well - one per CPU core basically), or a lot of instances with activity. Idling instances haven't really caused me issues. Here's the last week's historical overview:

[screenshot: laslabs-runbot-01 resource usage for the last week]

Fabien decided that "Docker is not for an ERP"

Docker is for everything, dammit! 🚀

@yajo
Member

yajo commented Oct 5, 2017

cc @gurneyalex

@lasley
Contributor Author

lasley commented Oct 5, 2017

Note I snuck something in with an edit about dynos vs. serverless computing:

A dyno-type system would be freaking awesome, but I don't think the development effort it would cost is worth the tradeoff. If we were to pursue something like this, I think making Odoo compatible with true serverless computing would be a better use of resources (such as AWS Lambda).

@elicoidal

We have been working internally a lot on a flexible docker image with automatic fetching of recursive repositories (public, private, PR) and automatic build of odoo.conf for all versions since v7.
Our work is available here: https://github.com/Elico-Corp/odoo-docker
We are more than happy to participate in an image for the community cc @seb-elico
Names: ocaimages, oca4odoo, odoocommunity are possible.
I would also suggest checking oxide, ocadocker, ocadock.

@lasley
Contributor Author

lasley commented Oct 6, 2017

Throwing this on the list of required features - I'll handle it myself. The new system should support XML output (Coverage.py outputs a format compatible with Cobertura), and parse the results properly. I've been missing this from our past build systems for a long time now.

We would use pycobertura, and parse the results into something meaningful. Reporting would come later.
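A hedged sketch of that step (pycobertura's show command is assumed from its documentation; the database, paths and module name are placeholders):

pip install coverage pycobertura
coverage run ${VIRTUAL_ENV}/bin/odoo -d testdb --stop-after-init -i my_module --test-enable
coverage xml -o coverage.xml     # Cobertura-compatible XML output
pycobertura show coverage.xml    # parse the results into a readable report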

@lmignon

lmignon commented Oct 6, 2017

Hi,

Current status

MQT is a key element in ensuring the quality of the code developed by OCA.
Over time, the scripts have evolved to try to cover Odoo-specific aspects in the implementation of a continuous integration system.

Today these scripts cover at least the following needs:

  • dependencies management (addons and python modules)
  • analysis of the quality of the code
  • the initialization of an Odoo instance with all the modules on which the modules to be tested depend, and the check for errors in this process
  • the execution of the tests and the check for errors in this process

Today, with the announced end of runbot, there is also the need to set up a system allowing the automatic deployment of these changes.

These needs are common needs shared by the whole OCA community, and many initiatives have been taken by different members to come up with a solution that is easy to implement outside of runbot. Indeed, today, one of the biggest problems of MQT is that it is not very modular and it's designed to be run on runbot.

At this stage of reflection, I think it is important to separate tools from platforms. It is also important to avoid reinventing the wheel by taking advantage of what the python ecosystem can bring. But what do we really need?

Managing dependencies

In our developments/deployments we must deal with dependencies between Odoo addons but also on pure python modules. At the genesis of MQT nothing existed to take on this big challenge. A process based on two files was put in place:

  • oca_dependencies.txt is used to provide the list of additional repositories to check out recursively. These external repositories provide the additional addons required by the addons to be tested (example below).
  • requirements.txt is used to list the python modules required by the Odoo addons.
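A hedged example of the two files described above (repository names and the branch are illustrative). oca_dependencies.txt takes one repository per line, optionally followed by a git URL and a branch:

web
server-tools https://github.com/OCA/server-tools 10.0

requirements.txt just lists the plain python libraries needed by the addons:

unidecode
vobject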

Thanks to this process, we are able to initialize a test server with all the requirements available on the server.

This process is at some point a little bit tedious and requires duplicating, in external files, part of the information already present in the module's manifest.

Over the past two years, a lot of work has been done by @sbidoul to make our Odoo addons compatible with the dependency management system of python (pip). At the same time, a bot has been put in place to package and publish all the OCA addons (from 8.0) as wheels on pypi: https://pypi.python.org/pypi?:action=browse&show=all&c=624 Therefore we no longer need to provide our own way to manage dependencies and we can take advantage of pip to safely install Odoo and the addons to test. (https://pypi.python.org/pypi/odoo-autodiscover, https://pypi.python.org/pypi/setuptools-odoo)

requirements-test.txt (initial content):

--find-links https://wheelhouse.acsone.eu/manylinux1
--extra-index-url https://wheelhouse.odoo-community.org/oca-simple

# odoo
odoo-autodiscover>=2.0.0b1
-r https://raw.githubusercontent.com/odoo/odoo/10.0/requirements.txt
https://nightly.odoo.com/10.0/nightly/src/odoo_10.0.latest.zip

Then append the repo's own addons and install everything:

pip install acsoo
for addon in $(acsoo addons -s " " list); do echo "-e ./setup/$addon" >> requirements-test.txt; done
pip install -r requirements-test.txt

Testing addons

To test our addons the following pattern is applied:

  1. Check code quality with flake8
  2. Check code quality with pylint
  3. Launch Odoo to install all the addons on which the addons to be tested depend and check log for errors
  4. Launch Odoo in test mode to install all the addons to be tested and check log for errors

Even if this pattern is clear, it's difficult (if not impossible) to reuse MQT outside travis to apply this pattern in our daily work. This observation was the starting point for the development of a new python utility module that provides parts of the logic of MQT as a set of command-line utilities to facilitate our development workflow at Acsone. https://pypi.python.org/pypi/acsoo/1.5.0

With this kind of utility module, it becomes easy to explain the different steps in our CI and to give contributors a way to apply the same pattern in whatever environment they want to use (travis, gitlab, docker, a virtualenv on their own machine). The only technical skill required is to be able to install an odoo module. In three words: pip install acsoo. IMO it's very important to lower the technical skills barrier required by the OCA, to attract new developers and ease the adoption of our coding/quality standards in the day-to-day work of all the contributors.

Check code quality with flake8

acsoo flake8

Check code quality with pylint

acsoo pylint

Launch Odoo to install all the addons on which the addons to be tested depend and check log for errors

  1. Build the list of addons to install
ADDONS_INST=$(acsoo addons list-depends --exclude=....)
  2. Launch odoo and check the logs
unbuffer ${VIRTUAL_ENV}/bin/odoo -d ${DB_NAME} --stop-after-init -i ${ADDONS_INST} | acsoo checklog

Launch Odoo in test mode to install all the addons to be tested and check log for errors

  1. Build the list of addons to test
ADDONS_TEST=$(acsoo addons list --exclude=....)
  2. Launch odoo in test mode and check the logs
unbuffer coverage run ${VIRTUAL_ENV}/bin/odoo -d ${DB_NAME} --stop-after-init -i ${ADDONS_TEST} --test-enable | acsoo checklog

This approach based on standards and lightweight tools is used to test alfodoo. https://github.com/acsone/alfodoo/blob/10.0/.travis.yml

Travis, Docker, Runbot, ....

At this stage, we have all we need to test our addons in a simple and lightweight way. This approach is used on our internal projects (with gitlab ci) but also to test alfodoo with travis. https://github.com/acsone/alfodoo/blob/10.0/.travis.yml

From here we can put in place some specific layers to speed up the tests on travis (or elsewhere). We could provide a docker image with odoo already pip-installed, for example... I'm not a specialist in these technologies, but IMO these should/must be layers embedding this lightweight process.

What about runbot....

As @yajo says, what we need is to be able to deploy the result of our builders somewhere, somewhere that has to manage the lifecycle of these deployments.
As previously stated I'm not a specialist in docker etc... Therefore I've no opinion on the way we can easily manage the lifecycle of our deployed servers. Nevertheless, the deployment must be as simple as possible and, as for the testing, use standard tools as much as possible.

One benefit of using pip to install our addons into a virtualenv is that we can use standard commands to extract and build all the python packages required to deploy our Odoo server on another linux machine. In two commands you can create the list of python modules (Odoo addons included) and build all the wheels needed to install the server.

pip freeze > server-requirements.txt
pip wheel -r server-requirements.txt --wheel-dir release

The result is a list of python wheels in the release directory. You can copy this directory wherever you want. Once copied, your server is installed in one command: pip install release/*

@lasley
Contributor Author

lasley commented Oct 7, 2017

@lmignon - thank you for the great evaluation. I was looking forward to seeing what you guys had to say about this, particularly because of the work your team has done on the pip side of things (seriously thank you @sbidoul).

On the high level, I fully agree with you. We should absolutely leverage every pre-existing system we have in order to keep our work (both now and future) minimal.

The current fracturing of the ecosystem definitely leads us to most of our problems; maintaining one logical pathway for the installation, and subsequent testing/running of Odoo and associated addons just seems like a good idea honestly. It's also PEP-20 compliant:

There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.

So yeah my biggest question on the pip side of things is how would we handle dependencies that haven't been merged, and thus aren't on PyPi yet? Or even more complex cases, such as when there are multiple bug fix branches being applied against the same module in order to test long term? Sorry if I missed this in the acsoo docs - I'm still perusing those.

Edit: In hindsight the complex cases are out of scope

@sbidoul
Member

sbidoul commented Oct 7, 2017

@lasley there is an old PR of mine illustrating this. The gist of it is these 3 lines.

So basically, in normal situations a repo has no (or empty) requirements.txt. When setting up the test environment, all addons of the repo are added to it. Then that requirements.txt is pip installed which pulls all other dependencies automatically.
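A hedged sketch of that setup (the setuptools-odoo-make-default helper and the OCA wheelhouse index are both mentioned elsewhere in this thread; the file name is illustrative):

pip install setuptools-odoo
setuptools-odoo-make-default -d .      # generate any missing setup/<addon>/ packaging stubs
rm -f test-requirements.txt
for addon in setup/*/; do echo "-e ./${addon}" >> test-requirements.txt; done
pip install --extra-index-url https://wheelhouse.odoo-community.org/oca-simple -r test-requirements.txt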

If a PR requires a specific branch of a dependency (an Odoo addon or a python external dependency), you can add that branch in the requirements.txt of the PR. For example, say I make a PR for mis_builder 11 which depends on a yet unmerged date_range 11. My mis_builder branch would include a requirements.txt such as this:

-e git+https://github.com/OCA/[email protected]_range#egg=odoo11-addon-date_range&subdirectory=setup/date_range

@lasley
Contributor Author

lasley commented Oct 7, 2017

Thanks for the elaboration @sbidoul - this totally makes sense, and basically mirrors what we already do in the oca_dependencies.txt file just in another context.

Looking at the requirements command you posted, I notice the subdirectory argument there. Would we also need to manually create the setup folder if we go in this direction?

I honestly didn't even know this folder existed until someone submitted a PR to one of my branches adding it. Looking back on this, I am now connecting the dots.

If we went this direction, this does add some Python knowledge over the standard Odoo dev knowledge, which could increase the learning curve for our stuff. I think I'm overall for it, I just feel it's necessary to point stuff like this out.

Should we think about pairing the implementation details of this with our planned apps store?

Realistically that's a pipeline too - so if we maybe added some concept of PyPi repos + Git repos to the mix. We could then leverage our apps store for CI, which in turn leverages pip for actual installation/dependency resolution. Sounds like killing a few necessary birds with one giant boulder I think.

@sbidoul
Member

sbidoul commented Oct 7, 2017

My above PR generates the missing setup.py automatically in the branch being tested. But in general yes, they need to be added. There is a tool that helps you do that.

I don't think it increases the learning curve compared to another dependency management solution. And what people would learn with that is mostly generic python ecosystem knowledge, which is arguably more useful than local knowledge such as oca_dependencies.txt or repos.yml.

An OCA app store should definitely implement PEP 503 -- Simple Repository API.

@yajo
Member

yajo commented Oct 9, 2017

The case that's still missing for pip installers is when you need to merge 2 or more PR for the same addon, so we should still be able to support both install methods, as we currently do at doodba.

I'd also still miss the feature of installing odoo itself with pip then (which didn't work last time I checked).

In any case, all of this reveals a conceptual problem right now at OCA repositories: requirements.txt shouldn't be at repo level, but at addon level. Indeed, if that file included odoo addon requirements, all of the CI would become much easier.

All of this said, we still have one thing to keep in mind: OCA has currently 151 repositories, most of them for addons. This means that any system we deploy must be backwards-compatible until all of them are updated.

The main problem with your comments is that they blur the roadmap I stated here: #144 (comment). I'm not exactly sure how to get to the end of it by using your suggestions, so I'd appreciate help on that.


One last reflection: it's interesting how each one of us came to a different solution to the same problem. The fact that runbot starts fading away just made the problem visible, but it's quite obvious this has been a problem for a long time, and it has to be fixed collaboratively. 😊

@lmignon

lmignon commented Oct 9, 2017

The case that's still missing for pip installers is when you need to merge 2 or more PR for the same addon, so we should still be able to support both install methods, as we currently do at doodba.

This case is not missing. We have developed git-aggregator for this use case. When we need to merge 2 or more PR for the same addon, we use git-aggregator to build a consolidated branch and we pip install the addon from this consolidated branch (see the sketch below).
If your branch is remote:

pip install -e 'git+https://github.com/acsone/web.git@MY_BRANCH#egg=odoo9_addon_web_m2x_options&subdirectory=setup/web_m2x_options'

If your branch is local:

cd src/web.git
pip install -e setup/web_m2x_options
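For reference, a hedged sketch of the whole flow described above (the repository remotes, PR numbers and branch names are placeholders):

pip install git-aggregator
cat > repos.yaml <<'EOF'
./src/web:
  remotes:
    oca: https://github.com/OCA/web.git
    acsone: https://github.com/acsone/web.git
  merges:
    - oca 9.0
    - oca refs/pull/123/head   # first unmerged PR
    - oca refs/pull/456/head   # second unmerged PR
  target: acsone 9.0-consolidated
EOF
gitaggregate -c repos.yaml
pip install -e ./src/web/setup/web_m2x_options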

In any case, all of this reveals a conceptual problem right now at OCA repositories: requirements.txt shouldn't be at repo level, but at addon level. Indeed, if that file included odoo addon requirements, all of the CI would become much easier.

This file is not required with pip. All the dependencies (Odoo and python) are managed by pip. https://pypi.python.org/pypi/setuptools-odoo
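For illustration, a hedged sketch of where that information already lives: setuptools-odoo reads the standard manifest keys, so no separate requirements file is needed (the addon name and its dependencies here are only examples):

cat > my_module/__manifest__.py <<'EOF'
{
    "name": "My Module",
    "version": "10.0.1.0.0",
    "depends": ["date_range", "web_m2x_options"],      # resolved as odoo10-addon-* packages
    "external_dependencies": {"python": ["vobject"]},  # resolved as plain PyPI packages
}
EOF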

The main problem with your comments is that they blur the roadmap I stated here: #144 (comment). I'm not exactly sure how to get to the end of it by using your suggestions, so I'd appreciate help on that.

My main concern is to separate tools from platforms. As a developer I never need to use docker to test all my devs. I just need tools easy to use, easy to install and easy to understand to help me build, test and deploy my devs. I'm not interested in tools that reinvent the wheel. I've been a python developer since 1998, and as a python developer I expect to be able to reuse what the python ecosystem provides to every python developer.

In my #144 (comment) I try to show you how simple it can be to build, test and deploy our devs with simple tools. These tools exist and are independent of a specific platform. IMO the first step is to clean up the way MQT works before introducing new complex layers. MQT must be based on standard python tools and patterns.

@JordiBForgeFlow
Member

I agree with @lmignon. The current approach from acsone is IMHO easy to use, it is quickly being adopted by many, and it clearly separates layers. We also face the need to decide when to use docker and when not to. If there are a few corner cases that require extra tools, we can discuss how to implement them. I think we need to take advantage of this huge body of work.

@elicoidal

In any case, all of this reveals a conceptual problem right now at OCA repositories: requirements.txt shouldn't be at repo level, but at addon level. Indeed, if that file included odoo addon requirements, all of the CI would become much easier.

Definitely agree here: you end up downloading way too much useless stuff.

@elicoidal

elicoidal commented Oct 9, 2017

I feel that the pypi direction is the right one (basically because it has been running for so long that it is stable and well documented).
It can easily be embedded in Docker if anyone needs it
As a functional consultant (or customer), the only thing I am worried about is easy access to the test database:

  • I click on a link in github
  • (I actively start a test interface)
  • I enter my password

For "advanced" level:

  • I can rebuild/check logs if any issue

That means too:

  • that I don't have to check with a developer every time I have to test something.
  • I don't need a test database for every commit, but I'd rather have a 1 min waiting time to build the test DB I need

In short, I see a lot of technical discussions and just don't want to lose the end-user perspective: what is the plan for the end-user interface?

  • Similar to runbot?
  • Similar to odoosh?
  • Something else?

@yajo
Member

yajo commented Oct 9, 2017

The end user interface would be just a link at the end of any PR that leads you to the running testbot. Nothing else. It can be improved later if the need arises.

@lmignon I don't understand your comment, see...

I never need to use docker [...] I just need tools easy to use, easy to install and easy to understand

This is a contradiction. 🤔

Do you mean that you never used Docker and thus you feel it's complex? Because I could say the same about pip-installing addons, and I'm pretty sure your facepalm would be almost as big as mine's right now... 😏

I'm not interested in tools that reinvent the wheel

Well, we're talking about mostly reinventing MQT and runbot here, so are you saying you're not interested in this whole thread?

MQT must be based on standard python tools and patterns.

I'm not against it, as long as it is based on Docker, although the plain truth is that a well-made Dockerfile can quickly deprecate any need for other systems.

We have developed git-aggregator for this use case. When we need to merge 2 or more PR for the same addon we use git-aggregator to build a consolidated branch and we pip install the addon from this consolidated branch.

I don't like to force everybody to maintain own forks, although it's an acceptable strategy if you like it, of course. 😉👍

Anyway, setting differences aside, you use git-aggregator too, so at the end of the day we're gonna need it anyway; I don't really see that the two standpoints collide.

So, after all, what the roadmap should change is just:

  1. Add a requirements.txt file per addon instead.
  2. Copy the current addons into where they should be.
  3. for r in addons/*/requirements.txt; do pip install -r $r; done
  4. ... and all the other magic.

Right? @lmignon said no need for requirements.txt, but there it is today, so I hope you can explain that a little bit further. Can you provide an easier to use/understand way of expressing addon dependencies?

All this said, one of the funny things about Docker is that it encourages the black box paradigm (put this here, that there... and it all just works), so switching from one system to another would be pretty simple, no matter the direction we take with the first approach.

OTOH nobody yet talked about the backwards compatibility with pip...

@lasley
Contributor Author

lasley commented Oct 9, 2017

The end user interface would be just a link at the end of any PR that leads you to the running testbot. Nothing else. It can be improved later if the need arises.

IMO we need at least a Runbot-esque interface. Aside from functional reviews, my customers are all trained to use Runbot when reporting issues. We absolutely require a way for anyone to easily dive into a live Odoo instance of production branch Y.

Docker or Pip

We're talking apples to oranges here. Using a Docker based solution would require whatever methods we have to devise for the non-Docker method.

The question here is where does our code lie? All Docker does is give us a unified platform and syntax for the code - but really it's just a bunch of bash files. There really is no difference except the fact one is called VM and one is called Docker.

I do agree that we need a Docker based solution though, and that reason is compatibility. If you can guarantee me that whatever we come up with here is going to be compatible with all systems, then that's great let's remove the layer.

The problem is this guarantee, if provided, would be false. Windows and Mac simply do not run things the same way as Linux. Core-level things like OpenSSL implementations are different, which leads to unexpected results somewhere.

We can absolutely provide this guarantee with a Docker based solution. To me, this is worth the layer.

OTOH nobody yet talked about the backwards compatibility with pip...

I think I'm missing the point here. What more would be required in order to make pip backwards compatible? The naming scheme on PyPI is defined by the bot and based on the addon name, so it's just a matter of knowing the dependencies in the manifest.
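For context, a hedged example of that naming scheme in practice (the index URL is the one quoted earlier in the thread; the addon is just an example):

# published names follow odoo<major>-addon-<technical_name>
pip install --extra-index-url https://wheelhouse.odoo-community.org/oca-simple odoo10-addon-module_auto_update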

@sbidoul
Member

sbidoul commented Oct 9, 2017

Hi all,

Here are a few more thoughts.

If I remember correctly, OCA/maintainer-quality-tools#343 was preserving backward compatibility with oca_dependencies.txt, so that should not be hard to achieve. We could finish that PR without much effort, btw.

requirements.txt is not necessary when installing the addons to test with pip (because external python dependencies are handled automatically by setuptools-odoo). Actually, we must NOT have requirements.txt at all in python library repos such as OCA addons.

requirements.txt is only necessary to reference specific branches of dependencies when a PR depends on an unmerged addon in another repo, as explained in #144 (comment). Such a requirements.txt must not be merged in the main branch.

In the very rare cases when a PR would depend on several unmerged branches of an addon in another repo, we can maintain that temporary branch manually and reference it in a requirements.txt. I don't see git-aggregate fitting anywhere in an automated OCA CI flow.

[as a side note, even in the simplest projects, we always run git-aggregate manually in a controlled way, never in an automated flow: it's too dangerous]

My 2 cents regarding the roadmap:

  • It's a good idea to use docker with a base image that is used both by travis and runbot, because 1/ that simplifies installation of non-python dependencies on the runbot server, and 2/ that reduces the startup time of test jobs.
  • we should not reinvent another dependency management mechanism when we have one that works and is pythonic; so the question is not "docker OR pip", it's "repos.yaml/oca_dependencies.txt OR pip"
  • if we start doing heavy lifting in MQT, we should seize that opportunity to make it more modular to ease maintenance (ie make it a thin layer that invokes standalone tools such as odoo-pylint, checklog, addons list, addons list-depends); such standalone, easy to compose tools can be very valuable outside OCA, while trying to make MQT usable both inside OCA and outside makes it too complex, IMHO

@hparfr

hparfr commented Oct 9, 2017

From reading the docker-odoo-base README and Dockerfile, addons.yaml/repos.yaml seems to reinvent what pip is made for.

@elicoidal

@lasley

IMO we need at least a Runbot-esque interface. Aside from functional reviews, my customers are all trained to use Runbot when reporting issues. We absolutely require a way for anyone to easily dive into a live Odoo instance of production branch Y.

That's exactly my point!

@lasley
Contributor Author

lasley commented Oct 9, 2017

From a 100k view, this is what I envision. I think this nails the modularity we need in terms of isolated systems, with most of the debatable specifics left out. I think we pretty much all agree on pip for the dependency platform, so I've hard-coded that:

[flowchart: oca-ci pipeline overview]

@lasley
Contributor Author

lasley commented Oct 9, 2017

Oops I forgot the most important part in my flowchart - the testing interfaces! Added

@elicoidal

@lasley LGTM.
Nevertheless, about CI, I am not sure: I'd rather keep Travis than invest in another server. For me we just need an environment for testing.
Concretely, maybe we could split that box in two: runbot test env and CI.
NB: interface-github is currently linked to OCA apps.
NB2: the current approach on OCA Apps is to actually git-clone the repos (which is redundant in this case with the runbot), so I think we definitely should share this service!
NB3: the current approach on OCA Apps is to generate zip files out of the cloned content. Another point that should depend on pip/distribution in the future.
(in short 1000% for your approach)

@lasley
Contributor Author

lasley commented Oct 9, 2017

Good point, I want to keep Travis too. This is where the Docker image comes in - MQT, I think, would just be a wrapper allowing for the execution of our tools (like @sbidoul laid out in #144 (comment)). I'll figure out how to adjust the flowchart on that one - it's a bit complicated to represent visually I think.

@lasley
Contributor Author

lasley commented Oct 9, 2017

Note: I also want Gitlab CI too

@sbidoul
Member

sbidoul commented Oct 9, 2017

NB3: the current approach on OCA Apps is to generate zip files out of the cloned content. Another point that should depend on pip/distribution in the future

Indeed it might be simpler for the OCA app store to feed itself from https://wheelhouse.odoo-community.org/oca-simple/ (wheels being essentially zip files already).

@aek

aek commented Oct 24, 2017

@yajo you are not the only one.
I have the feeling that this issue has become an Odoo pip usage discussion, losing sight of the initial needs.

@lmignon

lmignon commented Oct 24, 2017

@yajo I don't share the same POV in your comparison database.

👎 You are forced to maintain the setup folder

You only need to create the setup folders for new addons. A bot generates the missing setup folders every night for all the OCA addons. Moreover, generating a setup folder is done by launching one command in the addons folder: setuptools-odoo-make-default -d .

👎  Can't merge more than 1 PR per addon

???? You are able to merge all the branches you need for your addon. You have no limitation. Even more, you can select the branch you want to merge for each addon individually. In the past I have experienced problems with the approach based on addons in the same file tree, when the merge of a patch needed for one addon impacted another addon used in the repo. The pip approach is more flexible and less error prone.

👎 Forced to change installation url (possibly a git one) when you customer needs a hot patch (because OCA wheels are built nightly)

For our clients, we never deploy from git. We always generate the wheels to deploy on the target machine from our staging server. The installation on these machines is done in three commands:

mkdir $VENV
virtualenv $VENV
$VENV/bin/pip install --no-deps --no-index $RELEASE/*.whl

When you deploy from git, you must have access to git from the remote machine. It's not always possible for some clients.

👎 Steep learning curve (odoo-autodiscover, setuptools-odoo, wheelhouses, setup folders, pip install options unkown to the average pip user...)

Are you a python developer 😏 😏 😏 ? I agree that we use some advanced features of pip, but you can find a lot of documentation in the python community. Moreover, this knowledge is valid for all python developments, not only for Odoo.

👎 Introduces important divergences among environments*

For sure staging or production environments must be as secure as possible (no dev packages, no dev envs, no ssh certificates allowing access to external systems, ...). But all these envs are installed in the same way: $VENV/bin/pip install --no-deps --no-index $RELEASE/*.whl

IMO it is also very important that the images that would be created to validate our PRs as a replacement for our runbot are as light as possible. With the proposed approach, we would just need Odoo's non-python dependencies, plus the wheels generated/obtained by travis during its build process, and that's all.

👎 When installing from git, it clones all the history, and it possibly always will

It's only true on our dev envs for addons not available as wheels.

@yajo I think that in this discussion we can continue to argue endlessly for one solution or another. For me, the most important thing is that the tools provided by OCA are examples applying best practice and based on known standards.

@nhomar
Member

nhomar commented Oct 25, 2017

Wow, I read the thread for an hour, and there are A LOT of half-truths in SEVERAL arguments here; almost all of them are subjective ("We can't, we must, blah blah blah").

There are some facts:

  • WE HAVE now a set of tools "cost 0, effort 0, knowledge shared to improve".
  • Odoo itself, by definition (with the fact that it does not respect the python structure to declare a module), IS NOT a tool designed to be deployed with PIP, AND I LOVE THAT FACT.
  • The PIP approach increases the maintenance (yes, somebody did the job of publishing them, and yes, we have a script that 2 or 3 people understand that creates the packages), but factually speaking it adds 2 folders and 1 file per module. AND the statement of @sbidoul where he says oca_dependencies.txt declares what is in manifest.py is not true: basically the manifest says "who I depend on" and oca_dependencies.txt says "where it is". I do not understand how that can be simpler.
  • I saw the addons.yml approach and basically IMHO it is what we have already, but with another format. I do not get the point of including a different format for the same task just "because we do not understand the current one or because I want to".
  • "The tool is hard-linked to travis": NOT TRUE, we use gitlab-ci with mqt like a charm. IF that is the impression in some situation, then we MUST consider it a bug, and at least here at Vauxoo we can fix it. @moylop260 is almost 100% dedicated to SQA in Vauxoo, plus @JesusZapata, so we are more than open to sharing not just code but time to maintain a more generic tool (but please not PIP, I have proof of REALLY ugly and non-auditable deployments with that). Repeat after me: #odooisnotdjango
  • You can always use PIP in your deployments with what @lmignon's team did, so we have it, nothing to complain about there, but factually it is not OCA-managed. Even so, that team extracted some sub-features very nicely, and that's nice; let's contribute that back. What do you think?
  • Whatever decisions we take must make contributing back as easy as possible (copying and pasting code as PIP does simply separates the coding part from the deployment part and makes it harder to contribute back [it is an actual problem now in the python community]).
  • The approach of DEPLOYING FROM SOURCES is even the default in modern languages like Go (remember python standards are really old), so doing that by default is not bad.
  • Docker or not docker, mqt can be improved to manage that; basically, creating a new format is simply extra work that I think is not necessary. We need to deploy "Odoo", not 100 tools.
  • Maybe extending mqt to mqt-deploy could be nice, that's a gap on our side also, but basically there is not 1 option, there are hundreds, and what ALL of them lack is doing it in the strange Odoo way (but let's be honest, that's technically putting a line in a config file and cloning a repo): "why make it more complicated, let's automate JUST that part".
  • Docker is SUPER powerful, and yes I love it, but pip lovers are talking from ignorance; basically, once you know docker you regret you did not know it before.
  • Odoo.sh did not use docker for platform reasons (and very well supported ones): it is less resource-consuming to use systemd-nspawn. Let's be honest, Docker is the most popular because it is well documented and has huge marketing, not because it is the best (I know Odoo WILL need to rely on it at some moment because they MUST allow setting up local environments for local or complex deployments), so let's try to learn from their decisions instead of criticizing them. I saw that power with my own eyes and I will copy some concepts to my own environment.
    • Odoo.sh uses the power of GIT to maintain the filesystem, which IMHO is another reason why they re-implemented that layer instead of docker: they can have THOUSANDS of instances available just by using the git power. It does not matter what PIP lovers say, that's something only GIT and the magical power of git can do (remember what @moylop260 says).
  • @lasley you CAN use mqt on gitlab and deploy with the tool you want (we already have a tool called deployv, pluggable and wired to odoo, ready for use BTW), but you can use whatever you want; basically it automates OUR decisions. And I've learned that a developer/good sysadmin will ALWAYS dislike some tool and is able to spend 1 year of work because of that 1 tool; so I use dedicated servers for the CI workers, maybe you want Amazon, whatever, that's ok, you just need one little plugin and voilà. Why refactor everything? Here is the .yaml file if you want to see proof that it uses just 2 tools, mqt and deployv.
    [screenshot: the CI .yaml file showing only mqt and deployv in use]
  • What I think we need, AND that's factual, is better documentation of the use cases where these flows are used (is somebody willing to finance that with people and/or money?).
  • PIP does not have a recursive check of repository origins (a git subcommand has it, but with other problems) and oca_dependencies solved that very, very simply.
  • mqt needs a cleanup; remember A LOT of technical decisions were taken because of our backward-compatibility issues. WHY not fix that? Why not understand what we have instead of re-inventing what is working?
  • mqt is for odoo and to test odoo; the travis dependency is because we use travis, BUT as you saw in the .yaml file it is a python file in the end. Why re-invent yet another format like addons.yml instead of re-using .travis (which is very powerful BTW)? Did you know travis2dockerfile? Let's finish that concept, what do you think? In the end you will need to learn the template system for whatever tool you use: travis, jenkins, gitlab-ci, blah blah blah, so in the end we will have yet another one? We already have manifest.py + oca_dependencies: AGAIN, "who I depend on" and "where it is" (we are open to questions if you think you can't do something with that, and we can answer how).
  • oca_dependencies.txt is not difficult to use; it is almost perfect and pythonic: 1 line says only 1 thing in only 1 place. What option is that simple? Any tool around it just needs to read that file (maybe that's the modularity we need in mqt: being able to do import mqt; mqt.parse_dep() and return a list, but honestly guys, that's really simple to integrate in python into any CI/CD).
  • Even the Python community is having problems because of the PIPpocalypse; they use Docker for their pretty new warehouse and that project is taking more than 4 years. Please do not make our small number of contributors walk that path; let's DOCUMENT instead of building new tools.
  • It does not matter what set of tools we decide on/build, sysadmins WILL do their own, because they can. We need something "simple", fast, with a pretty well-done user interface, and I think we are having problems there; let's make the RFC for that part, not for the backend, which is pretty much done now and we just need to finish it, I think.
  • Maybe runbot will be dead, but migrating it to the new API is not work Odoo skipped because they want to kill it; maybe they are refactoring it, maybe @odony can help with the vision here. But in the end, if I were Fabien, I would not do that now either: when hiring new developers, the priority is not the CI, it is the product itself.
  • BTW Gitlab-ci is the road, if you ask me, where you can have ALL THE FLOW integrated, and it is pretty easy to set up. But HOW? We have tens of tools and formats; pick one and that's it. Even if we create a new one, it will have gaps and it will not add any value to the organization; let's make more features, I think that adds more value.

So my opinion (conclusion): let's document more, not change any format, just complete it, and let's allow the @yajo approach to mature naturally for 1 or 2 years until we have more solid work there. I think, as he said (and I totally agree), it will naturally evolve to the easiest way; in my opinion the easiest way is the best documented, not necessarily the best tool. Let's document. What do you think?

Another point: we need to do things in Odoo efficiently, because that is what we maintain; we do not need to use tools that are almost like turning on a mixer ;-).

I hope you take these facts into account in your decision; in the end the best-maintained option will win (the tools do not matter), just what gets done.

Thanks for reading, and sorry for the long email.

@bealdav
Member

bealdav commented Oct 25, 2017

@nhomar

Docker is SUPER powerful, and yes I love it, but pip lovers are talking from ignorance; basically, once you know docker you regret you did not know it before.

Why do you oppose pip and docker? They are not mutually exclusive.

We use docker with voodoo (buildout inside) and we'll convert voodoo to pip for odoo modules

@yajo
Member

yajo commented Oct 25, 2017

@lmignon rejected some negative points, although none of them were said to be fake, just not as bad/hard as I see them.

the most important thing is that the tools provided by OCA are examples applying best practice and based on known standards.

Best practice: KISS. Known standard: Docker.


I also agree with many of @nhomar's points, although if we are facing tool & format deprecations (not just for the sake of it, but because it is really better 😉... and PRs welcome!), then we should invest the documentation effort in the new ones IMHO.

let's allow the @yajo approach to mature naturally for 1 or 2 years until we have more solid work there. I think, as he said (and I totally agree), it will naturally evolve to the easiest way

Thanks for that, pal 😊


In any case, no matter if we are on the runbot deprecation, pip enforcement lobby, or any-custom-odoo-up-and-running-in-10-minutes side of the discussion, one clear first step we can take is to add a Dockerfile to each of the OCB branches that builds 2 versions of it: one with all the required dependencies, and another on top of that that includes OCB itself (see the sketch below).
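A hedged sketch of those two images (the package list is abbreviated, the tag and file names are placeholders, and exactly how OCB gets baked in is left open):

# image 1: only the build/runtime dependencies of an OCB branch
cat > Dockerfile.deps <<'EOF'
FROM python:2.7-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential git libpq-dev libxml2-dev libxslt1-dev \
        libldap2-dev libsasl2-dev libjpeg-dev node-less \
    && rm -rf /var/lib/apt/lists/*
ADD https://raw.githubusercontent.com/OCA/OCB/10.0/requirements.txt /tmp/requirements.txt
RUN pip install --no-cache-dir -r /tmp/requirements.txt
EOF

# image 2: built FROM the first one, adding OCB itself on top
cat > Dockerfile <<'EOF'
FROM ocadocker/ocb-deps:10.0
RUN git clone --depth 1 -b 10.0 https://github.com/OCA/OCB.git /opt/ocb
CMD ["/opt/ocb/odoo-bin"]
EOF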

I'd be glad to start doing that ASAP.

That's a common, non-hurting base that almost everyone will benefit from.

If people at OCA refuse doodba, we can keep it for ourselves. It's a shame, because with some small improvements, as I explained long ago, it would possibly improve the whole situation in MQT, runbot, contributor development, and integrator deployments; but I can't force anyone...
It's sad because it seems nobody here has proposed an alternative roadmap to #144 (comment) yet.

So do you all agree on starting with this little step and then somebody from the pip lobby can provide their alternative roadmap? 😊

@nhomar
Member

nhomar commented Oct 25, 2017

@bealdav

Sorry if I expressed myself incorrectly. I am not against Docker; I am just saying that the comments about Odoo not picking docker are incorrect, AND our effort must be around something that, "docker or not docker", is simply a package that automates the Odoo-related tasks.

That said, IF we pick @yajo's approach it cannot rely on Docker by default; docker is a brand, not a standard as you say.

The standard is this one BTW (which is a WIP).

So let's focus on Odoo's layer; what it will rely on (Docker, LXC, Odoo.sh directly on Amazon AWS, my machine, Windows) should be irrelevant.

Docker is just "where Odoo will land".

That's my point.

@nhomar
Member

nhomar commented Oct 25, 2017

There is another point, and I think the most important one in my statement:

@yajo @lmignon

Both of you are making arguments with "we can't" or "we must not" which ARE NOT REAL.

I did a little counting of the lines of code in mqt and mt:

That have:

Python:
  nFiles: 20
  blank: 382
  comment: 424
  code: 2036
YAML:
  nFiles: 1
  blank: 0
  comment: 284
  code: 48
Bourne Shell:
  nFiles: 1
  blank: 3
  comment: 6
  code: 27
SUM:
  blank: 385
  code: 2111
  comment: 714
  nFiles: 22

IMHO that is fairly little (but can be reduced); digging into tens of pseudo-tools will also kill the contributions. Right now I can look in 1 and only 1 place for all the tools ("Why look in other places?"); let's document better.

About mqt:

Python:
  nFiles: 37
  blank: 287
  comment: 281
  code: 1512
Bourne Again Shell:
  nFiles: 3
  blank: 27
  comment: 25
  code: 127
YAML:
  nFiles: 5
  blank: 15
  comment: 52
  code: 121
XML:
  nFiles: 2
  blank: 3
  comment: 2
  code: 30
Javascript:
  nFiles: 2
  blank: 0
  comment: 3
  code: 11
SUM:
  blank: 332
  code: 1801
  comment: 363
  nFiles: 49

Let's plan the cleanup.

You will all see that if you pick ANOTHER option, it will represent the SAME effort that has already been done. Why the re-work? Why not clean up, or maybe refactor it?

@lasley
Contributor Author

lasley commented Oct 25, 2017

Ok so we have a lot of differing opinions here, and I highly doubt we're going to settle on one. We all obviously have our own ways that we want things done, and we all have our own infrastructure and existing methodologies that need to be considered. We're definitely out of scope of just a Runbot conversation, but I feel that it's all totally related and absolutely good to be discussing as a whole.

I think the first step here is to identify what it is that we need from an entire build pipeline in the abstract. Aligning the conversations and staying technology agnostic, I see the following:

  • Version control - the actual source files
  • Dependency management - the combination of source files & installations required for a specific Odoo instance
    • Operating System - specifically mentioned because of our existing problems with different builds producing different results
    • Binary
    • Python
    • Odoo modules
  • Configuration management - writing a configuration file for a specific Odoo instance
  • Entrypoint - actually launching the Odoo instance with any necessary command line parameters
    • Runs tests
    • Runs live instance
  • Database Handler - Provisioning new databases, deleting old, serving current databases, authenticating Odoo instances
    • This could be database servers/containers, or simply databases in one PSQL instance
  • GUI - Whether it be full-fledged or simply interfacing with pre-existing systems like GitHub or Gitlab

Once we nail down our abstracts, I think the best approach is to rebuild the existing MQT to provide helpers, plus an interface/adapter mechanism to allow for our particular build pipelines. Our pipelines can then be installed as Python packages, or possibly Odoo modules (in the case of a Runbot-esque GUI)
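
To make the interface/adapter idea concrete, here is a minimal sketch, assuming a hypothetical package layout; the class and method names (BuildPipeline, fetch_dependencies, write_config, run) are invented for illustration and are not an existing MQT API.

from abc import ABC, abstractmethod
import subprocess


class BuildPipeline(ABC):
    """Contract a concrete pipeline (Docker, pip, plain git...) would fulfil."""

    @abstractmethod
    def fetch_dependencies(self, repo_path: str) -> None:
        """Resolve and download the addon dependencies for the repo."""

    @abstractmethod
    def write_config(self, repo_path: str, db_name: str) -> str:
        """Write an odoo.conf for this instance and return its path."""

    @abstractmethod
    def run(self, config_path: str, test: bool = False) -> int:
        """Launch Odoo (optionally with tests) and return its exit code."""


class PipPipeline(BuildPipeline):
    """One possible implementation, installing addons as Python packages."""

    def fetch_dependencies(self, repo_path: str) -> None:
        subprocess.check_call(
            ["pip", "install", "-r", f"{repo_path}/requirements.txt"])

    def write_config(self, repo_path: str, db_name: str) -> str:
        config_path = f"{repo_path}/odoo.conf"
        with open(config_path, "w") as fp:
            fp.write(f"[options]\ndb_name = {db_name}\n")
        return config_path

    def run(self, config_path: str, test: bool = False) -> int:
        cmd = ["odoo-bin", "-c", config_path]
        if test:
            cmd += ["--test-enable", "--stop-after-init"]
        return subprocess.call(cmd)

A GUI or CI runner would then only talk to the BuildPipeline interface, and each team could ship its own implementation as a Python package.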

@dreispt
Copy link
Member

dreispt commented Oct 26, 2017

I mapped Dave's concepts with:

  • mqt: OCA/maintainer-quality-tools, and
  • pip: setuptools-odoo + Python/pip

I'm sorry I didn't include addons.yaml but I'm not knowledgeable enough about it. @yajo you're welcome to add it. It also seems tied to a Docker distribution; I wonder if there are plans to make it standalone?

  • Version Control:

    • mqt: plain Git commit (at repo level)
    • pip: packaged Module versions (at modules level)
  • Configuration management:

    • manual: write your own shell commands or scripts
    • mqt: uses an "oca_dependencies.txt" with Git repos
    • pip: "requirements.txt" file with individual Modules and Git repos, or module names from a PyPi index
  • Dependency management:

    • manual: just Git clone and set Odoo addons path
    • mqt: relies on "oca_dependencies.txt" accuracy; resolves "oca_dependencies.txt" Git repo dependency tree
    • pip: relies on prebuilt "setup.py" module files; resolves dependencies automatically (from requirements.txt repos or a wheelhouse/PyPi)
  • Entrypoint:

    • manual: ./odoo-bin -c <my.conf> --addons=<my/addons>
    • mqt: was designed for test running only (travis_run_tests)
    • pip: ./odoo-bin -c <my.conf> (addons path automatically handled)
  • Database Handler:

    • out of scope for mqt and pip --> own discussion on Runbot/Buildbot/Docker
  • GUI:

    • out of scope for mqt and pip --> own discussion on Runbot/Buildbot/Docker

Some final words: mqt, pip and Docker are not competitors. They are different tools solving different layers of the problem.

  1. Odoo provisioning. MQT bundles that, and depends on Travis CI for the platform side. Docker surely provides a more complete way to do it.
  2. Version Control/Configuration management/Dependency management. There should be some options about it. Pip is one way to do it. A tool using Git directly could be used, extracted from mqt and/or addons.yaml.
  3. Entrypoint. MQT is a narrowly oriented implementation of that. Pip handles addons-path, but that's it. Plain odoo-bin + conf files is not that hard, but I believe that there is room for improvement: either a helper command-line tool, or improvements to odoo-bin, or both.
  4. Instance Managing. That's where Runbot comes in. MQT or pip are no answer for that. Nor plain Docker, although it may be an enabler to use Docker based tools for that.

@lmignon
Copy link

lmignon commented Oct 26, 2017

@dreispt

Entrypoint. MQT is a narrowly oriented implementation of that. Pip handles addons-path, but that's it. Plain odoo-bin + conf files is not that hard, but I believe that there is room for improvement: either a helper command-line tool, or improvements to odoo-bin, or both.

FYI we have repackaged some part of MQT into a set of command-line utilities to facilitate our Odoo development workflow at Acsone. https://github.com/acsone/acsoo

For example, I also use these utilities to test alfodoo with Travis (https://github.com/acsone/alfodoo/blob/10.0/.travis.yml)

@guewen
Copy link
Member

guewen commented Oct 26, 2017

I don't understand this debate of pip vs Docker. It seems this is rather a debate of pip vs Doodba, actually. But I see no reason why pip could not be used with a Docker image.
Those who speak against pip should really try it. I tried it during the OCA code sprint in Louvain-la-Neuve and I was really impressed: in a few commands you can set up your environment (and I say that as a Docker enthusiast). As some say: there is no Docker vs pip.

Now, regarding Docker, I think it's very nice for a CI process, where you test your images in, for instance, Travis, and spawn them somewhere else. It doesn't mean this Docker image cannot be built using pip...

We should be really cautious not to have a tight coupling between the Docker image distribution used and the tools used for the tests, checks, deployment and so on. I'd say one should be able to use the tools on any Docker image or even on any system outside of Docker.

My opinion: the image should be kept to the bare minimum required; containing only the sources, the dependencies, and a small entrypoint that generates the config file and installs addons at startup is enough. The new mqt tools should be independent and live outside of the Docker image. The way the source code is injected into the image (pip, git repos copied in the image with "COPY", whatever) doesn't matter much. Once an image is generated by Travis / GitLab CI / Jenkins / ... it can be pushed to a runbot-like server (which should support any image).
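
To illustrate the kind of "small entrypoint" being described, here is a minimal sketch that only renders a config file from environment variables and then hands control to Odoo; the variable names and paths are assumptions for illustration, not something any existing image mandates.

#!/usr/bin/env python3
# Minimal entrypoint sketch: render odoo.conf from environment variables,
# then exec the real Odoo process. Variable names and paths are illustrative only.
import os

CONF_TEMPLATE = """[options]
db_host = {db_host}
db_user = {db_user}
db_password = {db_password}
addons_path = {addons_path}
"""

def main():
    conf = CONF_TEMPLATE.format(
        db_host=os.environ.get("PGHOST", "db"),
        db_user=os.environ.get("PGUSER", "odoo"),
        db_password=os.environ.get("PGPASSWORD", "odoo"),
        addons_path=os.environ.get("ADDONS_PATH", "/opt/odoo/addons"),
    )
    with open("/etc/odoo/odoo.conf", "w") as fp:
        fp.write(conf)
    # Hand PID 1 over to Odoo itself.
    os.execvp("odoo-bin", ["odoo-bin", "-c", "/etc/odoo/odoo.conf"])

if __name__ == "__main__":
    main()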

I can share what is our CI process @ Camptocamp if it can be of any help.

Our project images are based on https://github.com/camptocamp/docker-odoo-project/
This image uses the tools marabunta and anthem (sadly poorly documented); they are executed at the start of a container and used to migrate Odoo databases from a file describing the migrations. In our projects, we have git submodules that we inject into the Docker container with "COPY" instructions, but we also pip install the requirements.txt at build time, which means we can use git submodules, local code, and also install addons through pip.

We use Travis, which builds the image, runs lints and creates a container to run the tests. If they succeed, the image generated during the build is pushed to the hub registry. Then, it sends a POST request to a small app we call "Rancher Minions".

Rancher Minions is our internal runbot-like tool running on top of Rancher: a small 500-line Flask application. It leverages Rancher and is mostly a nicer graphical interface on top of Rancher to display Odoo instances grouped by branch. It spawns the new stacks and destroys the old Rancher stacks when it receives new requests. The hard stuff (DNS, proxy routing, instance state, logs, cleaning, ...) is handled by Rancher. Also, as the build of the images is done by Travis, the spawned instances "only" have to start and create a db (automatically when they start).
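
For the curious, a webhook receiver of this kind can be sketched in a few lines of Flask. This is NOT the actual Rancher Minions code; the route, the payload fields and the stack helpers are invented for illustration, and the real orchestrator calls are left as placeholders.

# Minimal sketch of a "Rancher Minions"-style receiver, not the real Camptocamp tool.
from flask import Flask, request, jsonify

app = Flask(__name__)
STACKS = {}  # branch -> image currently deployed


@app.route("/builds", methods=["POST"])
def new_build():
    payload = request.get_json(force=True)
    branch = payload["branch"]
    image = payload["image"]       # e.g. an image tag pushed by the CI
    if branch in STACKS:
        destroy_stack(branch)      # placeholder: would call the orchestrator API
    create_stack(branch, image)    # placeholder: would call the orchestrator API
    STACKS[branch] = image
    return jsonify(status="ok", branch=branch), 201


def create_stack(branch, image):
    print(f"spawning stack for {branch} with {image}")


def destroy_stack(branch):
    print(f"destroying stack for {branch}")


if __name__ == "__main__":
    app.run(port=8080)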

Rancher Minions: (screenshot: selection_083)

Corresponding stacks on Rancher: (screenshot: selection_084)

At the moment, Rancher Minions is an internal tool, but if there is some interest in the OCA, we can share :)

@moylop260
Copy link
Contributor

moylop260 commented Oct 26, 2017

@dreispt 👍

If I understood you correctly, @dreispt:

If we want to use Docker, then we need to change MQT to support it. (other issue, other PR, other matter)
If we want to use git-aggregator, then we need to change MQT to support it. (other issue, other PR, other matter)
If we want to use Doodba, then we need to change MQT to support it. (other issue, other PR, other matter)
If we want to use "pip odoo addons packages", then we need to change MQT to support it. Like @sbidoul's PR OCA/maintainer-quality-tools#343 👍

And if your custom project wants to use pip Odoo addons packages, then you will need a way to support it (environment variables, configuration files and so on).
If your custom project wants to use oca_dependencies, then you will need to use an oca_dependencies.txt file (like the OCA projects do).

If you run the same scripts as .travis.yml in another compatible environment, then currently MQT is compatible with:

The bad point for MQT is that we have ugly and separated scripts instead of a good and correctly documented package.

Someone told me: "I can't use MQT with GitLab".
Yes, you can.
But if you don't understand MQT, you will need so much R&D time to understand the MQT scripts that it seems better to create other ones.

And there are so many ways to create the same thing that it is impossible to pick one and just one and go forward.

What if we used a command like:
mqt get_addons_dependencies

import os

def get_addons_dependencies(self):
    # Detect which dependency mechanism the repository uses and delegate to it.
    # The three helpers are placeholders for the real implementations.
    if os.path.isfile("oca_dependencies.txt"):
        clone_oca_dependencies()
    elif os.path.isfile("git_aggregator.cfg"):
        clone_git_aggregator()
    elif os.path.isfile("odoo_addons_pip_requirements.txt"):
        pip_install_odoo_addons()

If you run this command in Docker, in Travis, in GitLab, in Shippable, or in any other compatible environment, we will have the same result, without the discussion about how to clone turning into a Docker discussion.

@nhomar
Copy link
Member

nhomar commented Oct 27, 2017

Hello all.

Following the topic discussed:

To make mqt a clean Python package step by step and in the right way, I have started the work myself here:

OCA/maintainer-quality-tools#500

The TODOs are on the PR.

I did not know about the existence of https://github.com/acsone/acsoo; in any case I will try to copy whatever can be adapted to mqt (I think it would have been helpful to just start the discussion with "I HAVE THIS DONE, GUYS" @lmignon, and I am open to putting it under the OCA umbrella), because that's exactly the very first step I am now redoing in a backward-compatible way.

The only point is that, as mqt has grown by adding a huge number of use cases (almost all of them important, and some really complex to debug), I do not want to lose a single use case in the move; I want to stay backward compatible with all the repositories, not break the instances of the people using it right now (like ourselves), and have time to move.

One important point is that, for some time now, mqt is no longer Travis-only oriented, BUT the names of the scripts were kept, and that was a wrong move because it sells the wrong approach.

I will try to have, this week, a working environment documented as well as I can, to move forward on the mqt layer and try to .

Once it is done, I will try to include git-aggregator as a dependency, BUT we need to move git-aggregator under the OCA umbrella (in order to add the specific necessary features there) and clean up some Python stuff (not too big a change, just making it usable as a module too and not just as a command line). Then we will be able to use addons.yml if you like (and maybe move some modules to see how it works). This feature is agnostic of Docker BTW, so adding it should not be difficult, and it may help people to have their un-merged environment (that's heresy for me, but that's my opinion ;-)).

I would not like to create yet another package for deployment purposes (by easily importing what is needed, we can add the necessary subcommands using the right tool for the job).

Best regards and happy hacking.

@mart-e
Copy link

mart-e commented Oct 27, 2017

FYI, we have no intention to drop the runbot at the moment. It is not even discussed. We are discussing adding features to the runbot.
See @dbo-odoo's comment for our developer point of view.

If you want to create new (and clean) test tools, go for it, but don't base that decision on the assumption that we will stop the runbot...

@lmignon
Copy link

lmignon commented Oct 27, 2017

@guewen Thank you for this clear summary of the situation, which makes it possible to refocus the debate on the real issues at stake in this discussion. I completely agree with what is being said.
The approach outlined for the continuous integration process (from the installation to the setting up of an instance for tests/functional tests or ...) seems to me to be very pragmatic and modular. My main concern is to have composable/orchestrable tools. For my part, I would be very happy if we could use this same approach for the OCA and elsewhere. The question now arises as to how this approach could be implemented. (If there is a consensus for your approach).
In #144 (comment) I tried (perhaps not very clearly) to break down the functionalities now covered by MQT and to show how they could be handled by a set of simple command line utilities easy to install and use. I'm not saying we need to reuse these utilities, but I'm just trying to explain what's done today with MQT and how it can be done in a more modular way. All this seems perfectly in line with what you are proposing: simplicity and modularity.

@lmignon
Copy link

lmignon commented Oct 27, 2017

@nhomar

(I think it would be helpful just start the discussion with I HAVE THIS DONE GUYS @lmignon and I am open to put it under the OCA umbrella)

#144 (comment)

Once it is done I will try to include git-aggregator as a dependency BUT we need to move git-aggregator under OCA umbrella

Why do we need it for MQT? As @sbidoul says: "Please keep MQT focused on OCA needs only. The needs of OCA (testing addons libraries) are quite different from the needs of integrators (testing integrated projects). If we try to have one solution that fits all use cases, we create something that is very complex and hard to maintain. On the other hand, if MQT is built around small components with separated concerns, such components provide value to everyone."

Moreover, you are free to contribute to git-aggregator even if it's not under the OCA umbrella.

@nhomar
Copy link
Member

nhomar commented Oct 27, 2017

Why do we need it for MQT?

@lmignon I think that, exactly as you explained, mqt will be the place where all the testing process and automation will land (at least unless we change that now), and the point is using it as part of the testing flow to pre-merge things and download the linked repositories with @yajo's proposal.

If you think it is not needed, then maybe I misunderstood something: the proposal is to use a dockerized environment and, instead of using oca_dependencies, have a tool to pre-merge pending PRs. Did I understand correctly?

#144 (comment)

The point is that we should not make our main tools brand-dependent, and basically the package has the Acsone brand on it; I think it is not fair to do that, and nobody else is doing that in their OCA contributions.

If we started naming all the work we contribute vauxooX instead of something generic, it would not be seen well, I think (at least if I am not wrong).

BTW I understand you are OK if we copy some things you redid that were already done in mqt (whatever the reason, I understand some of them) and add some other new ones. I can do the job of putting that under a brand-agnostic name in mqt, no problem; now I understand your technical point better, BTW.

@dreispt
Copy link
Member

dreispt commented Oct 27, 2017

Thank you all, I believe we now agree that there is no Docker vs Pip vs MQT: these are different tools, solving different problems, that can even work together for the same solution.

The "Freedom" FOSS value also means that we should be free to choose the tools that best fit our use cases. So we should really avoid locking down our tools to opinionated choices, unless really needed.

@nhomar I agree with the vision that MQT should make the switch from a test/CI suite to a CLI tool, so OCA/maintainer-quality-tools#500 can be a step in the right direction.

I have some concerns with MQT though.

IMO the project needs quite some refactoring. For example, the self-test approach is far from the best. The communication between scripts through system variables is also not ideal.
But on the other hand MQT is a very sensitive project, and refactorings are not easy to get merged (for example: OCA/maintainer-quality-tools#337).

I believe that the best approach is to leave MQT as the OCA CI suite, and create a new project, upstream of MQT, like we did for pylint-odoo. This would be a new CLI tool, solving the problems MQT needs solved, such as defining configurations, resolving dependencies (Git or pip) and providing entry points for running and testing Odoo.

Maybe acsoo can provide a good starting point for that, and we can fork it into an OCA tool and repo? And maybe addons.yaml can be extracted from Doodba to move into this tools?

(Sorry for focusing on MQT in a Runbot thread 😞 )

@lmignon
Copy link

lmignon commented Oct 27, 2017

@nhomar IMO for OCA we don't need to pre-merge things; we just need to get all the dependencies for a given branch of a repo in one way or another.

The point is that we should not make our main tools brand-dependent, and basically the package has the Acsone brand on it; I think it is not fair to do that, and nobody else is doing that in their OCA contributions.

This tool is a generic tool that can be used everywhere, not only in the Python ecosystem. It's a tool for the git users community. The name is neutral, without reference to Acsone. To install it you just need to type 'pip install git-aggregator' (once again, no reference to Acsone). More generally, I hope it is possible to use tools or libraries external to OCA. If not, we have a problem: we would have to rewrite all the tools and libraries we use. I have always seen OCA as a community open to the outside world, where we try to take the best of what exists when it is consistent with OCA values. Am I wrong?

@yajo
Copy link
Member

yajo commented Oct 27, 2017

Hmm, the problem with having too much agnosticism is that it becomes harder to KISS. #144 (comment) explains very well the options we have. Since it's going to be quite hard to reach an agreement on best practices for integrator needs, I feel we should just focus on specific OCA needs and the easiest way to get them from an idea to a working tool, without breaking backwards compatibility and while maintaining a sensible level of pluggability to allow integrators to leverage those tools.

Specific OCA needs are just automated CI and manual CI.

The point of donating doodba is that we could provide a tool for CI, integrators and developers that supports development, testing and production environments; it would make the Odoo landing experience much smoother, and since one of the environments is testing, it can easily be used for both of the needed CI purposes. But to make it straightforward it also needs to make decisions, and since those are not liked by most OCA users, I guess it's best to keep it out. 😞

In such case, I think we can dump the idea of git-aggregator and addons.yaml for OCA also. They are used in doodba to blur the differences among supported environments, but again, if OCA is gonna support only one, it doesn't need such blurring. Hopefully some day pipenv gets mature enough to make us able to dump addons.yaml, but that is not today.

All the pip stuff is quite hard to understand and maintain IMHO, but the truth is it's already there, so we could use it for CI.

So, a good roadmap could be now:

  1. Add minimal docker images for OCB branches. This image should have just 2 entrypoint scripts (a minimal sketch of the first one follows this list):
    • Wait for postgres to be listening
    • Generate config file from env variables.
  2. Make a new tool (if none exists) that replaces runbot, and possibly it should be a smart proxy such as Odoo.sh has:
    • Not odoo specific.
    • Has a webhook to be notified about a new image to be booted.
    • Boots the tests environment
    • Notifies another webhook about test environment being ready (configurable: github/gitlab...)
    • Powers off environments not used in X minutes.
    • Deletes environments when they reach X used space on disk or are older than X hours/days.
  3. Package and document MQT, but change it to just do:
    1. Make static checks (lints)
    2. Install current addons repo at current commit in OCB Docker image using pip. Build this as an image.
    3. Run unit tests inside that image.
    4. Push image to a garbage images registry in Docker Hub.
    5. Notify new runbot about this new image; new runbot will do the rest from here.
    6. Publish new pip packages on commits to main branches. Seriously, this shouldn't be done nightly but on each new commit.
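
As referenced in point 1, here is a minimal sketch of the "wait for postgres to be listening" entrypoint script, assuming the host and port come from environment variables; the variable names are illustrative only.

# Minimal sketch: block until Postgres accepts TCP connections, then return.
import os
import socket
import sys
import time

def wait_for_postgres(host, port, timeout=60):
    """Return once a TCP connection to Postgres succeeds, or exit after `timeout` seconds."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=3):
                return
        except OSError:
            time.sleep(1)
    sys.exit(f"Postgres at {host}:{port} not reachable after {timeout}s")

if __name__ == "__main__":
    wait_for_postgres(os.environ.get("PGHOST", "db"),
                      int(os.environ.get("PGPORT", "5432")))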

MQT has a lot of hacks to run tests for just the addon you are pushing in your PR, but the truth is that since v11 we have reset our branches, so any added code shouldn't break preexisting code. This makes those hacks no longer needed.

About the GUI: if we just turn the manual CI environment (runbot) on and off on demand via the smart proxy, as Odoo.sh does, you need no GUI, only a link to enter the runbot, and maybe a splash screen telling you to "wait while we boot the instance".

If we complete all of this:

  • We would have official docker images used for CI. If we split among those including only dependencies and those including full odoo code, we cover all possible subimage uses.
  • We contribute a good non-odoo-specific tool to the broader container community that can be easily used to run test environments anywhere.
  • MQT becomes pip-packaged and thus is easier to plug into any environment
  • With MQT using a Docker image, it would be way easier to include odoolint from @Vauxoo, for instance.
  • We leverage the current pip packaging of odoo addons so we don't have to support oca_dependencies anymore.
  • Runbot becomes smarter, less error-prone, and with almost no variability.

If tomorrow we want to use another container-based tool (rancher, kubernetes, docker swarm...), since all of them use the same basic unit as we'd be using (an image), adaptation should be easy too.

The bad part:

  • Developers and integrators are left on their own.

... but that's how it's always been anyways...

@simahawk
Copy link

All the pip stuff is quite hard to understand and maintain IMHO

I don't get why... Maybe it's because, like Laurent, I have used it for ages, but I struggle to understand what's hard in there. In fact, I think that the current way of working around the non-usage of Python packages IS the hard part, and it led all of us to reinvent our own wheel to make our deploys reproducible.

Package and document MQT, but change it to just do:
Make static checks (lints)
Install current addons repo at current commit in OCB Docker image using pip. Build this as an image.

Am I wrong or are you saying that MQT should just work inside a container?
I think that, as far as possible, we should be able to pip install oca-mqt (whatever the name) and run oca-mqt module_name or something like that.
Not sure if this statement is already satisfied by your "MQT becomes pip-packaged and thus is easier to plug into any environment" 😛

By MQT using Docker image, it would be way easier to include odoolint from @Vauxoo for instance.

Why? You don't need a Docker image for this already, AFAIK.

The bad part: Developers and integrators are left on their own.

Maybe that is true for integrators, but for (Python) developers relying on Python packages is a must-have, and I think it is going to ease the on-boarding of those who are scared off by this flaw in the Odoo ecosystem.
And using Docker is kind of a daily habit for most of them nowadays.

@nhomar
Copy link
Member

nhomar commented Oct 28, 2017

@yajo

All the pip stuff is quite hard to understand and maintain IMHO

Dude, pip is hard to understand when you do not know it and have not read enough about it. I think that if you read this, maybe it will change your mind. The problem is that:

  1. The ones that adapted themselves to Odoo think that Python is Odoo, and it is not; Odoo oversimplifies a lot of things around the distribution of an Odoo package. I do not want to change that, but saying that about pip is not true; if you say that, then you do not know the power of real Python yet ;-) <trollmodeon/>

@simahawk

Am I wrong or are you saying that MQT should just work inside a container?
I think that, as far as possible, we should be able to pip install oca-mqt (whatever the name) and run oca-mqt module_name or something like that.
Not sure if this statement is already satisfied by your "MQT becomes pip-packaged and thus is easier to plug into any environment" 😛

Your dream will come true here (WIP). We made a mistake at the very beginning of that repository by not open-sourcing it in the right way, and this discussion is the very example of the consequences; we will fix that, I am on it with you all. Just remember why I do not want to start anything new:

  1. Several real use cases, down to the details, are considered in the current code.
  2. That's not the way to open-source things; if we did something wrong, we rewrite it ourselves, but we don't kill the child.
  3. Actually it is completely possible to do that refactor without killing anything, and progressively, so we don't stop hundreds of quality processes on several repositories (well, at least in my case; changing that in hundreds of repositories is absurd).
  4. Putting things in another place is not right (this is about @dreispt's proposal to make a new package); that's not necessary, my friend. I will do the cleanup and you will see a very simple way forward in this discussion; once that is clean enough, it will be THE easiest way to do things, <joke>even if you run Odoo on a [washing machine with Linux](https://www.ruinelli.ch/bilder/linux_waschmittel_2.jpg)<joke/>.

Why? You don't need a Docker image for this already, AFAIK.

SUPER +1. @yajo, you need to understand this observation a little better: "Docker is nice", but if all you have is a hammer, you cannot treat everything as a nail.

Maybe that is true for integrators, but for (Python) developers relying on Python packages is a must-have, and I think it is going to ease the on-boarding of those who are scared off by this flaw in the Odoo ecosystem.
And using Docker is kind of a daily habit for most of them nowadays.

SUPER +1 here.

Imagine Node people creating weird Docker images with .js files in selected folders ;-)

@nhomar
Copy link
Member

nhomar commented Oct 28, 2017

@yajo

  1. Package and document MQT, but change it to just do:
  1. Make static checks (lints)
  2. Install current addons repo at current commit in OCB Docker image using pip. Build this as an image.
  3. Run unit tests inside that image.
  4. Push image to a grabage images registry in Docker Hub.
  5. Notify new runbot about this new image; new runbot will do the rest from here.
  6. Publish new pip packages on commits to main branches. Seriously, this shouldn't be done nightly but on each new commit.

+1 on this, annotated in the roadmap of the refactor.

  • Point 4: a PR must be done; mqt should not be Docker-dependent, but it is not mandatory. Once it's clean, we can add it in the simplest way.
  • Point 5: that is what hooks on the CI do (AFAIU); nothing to do there.

@nhomar
Copy link
Member

nhomar commented Oct 28, 2017

@nhomar IMO for OCA we don't need to pre-merge things; we just need to get all the dependencies for a given branch of a repo in one way or another.

@lmignon I think the same (that feature even looks weird to me), but that was the only argument from @yajo for proposing to fully replace oca_dependencies and refactor the whole way we have worked; that's why I considered it. If everybody thinks that it's out of scope, or there is no other reason, then let's move that feature out.

@moylop260
Copy link
Contributor

I think that this issue could be closed, given the good comment above.

Feel free to re-open it.

@blaggacao
Copy link

blaggacao commented Jul 29, 2018

Maybe this effort can grow useful at some point... https://github.com/xoe-labs/odoo-operator

@blaggacao
Copy link

blaggacao commented Jul 29, 2018

Also there is review apps integration from GitLab with any random k8s cluster (don't fear k8s's complexities! It's pristinely elegant 😉 )...
https://docs.gitlab.com/ee/user/project/clusters/
So probably this whole question is soon to be outsourced to a fellow ecosystem and we'd be able to re-focus.
/cc @yelizariev
