Dev Cookbook
Tastylicious Quantum Leap recipes.
We develop on macOS, but the various scripts we use to set up the dev and
test environment should (in principle!) work on most Linuxes too. They
definitely work on Ubuntu Xenial, which is the default Travis build
environment where we build and test Quantum Leap. If you're into IDEs,
try PyCharm for a smooth coding experience: you should be able to just
define a project out of the repo and start coding right away. We use
Docker and Docker Compose to build and test Quantum Leap images locally,
so you should install them; on the Mac you can try Docker Desktop, which
ships with both. To actually write Python code, you'll need Python
(currently `3.8.3`, see the `Pipfile`) and `pipenv`:

```
$ pip install pipenv
```

or, on macOS, if you have Homebrew:

```
$ brew install pipenv
```
After you've installed the dev tools, you're ready to hack away:

```
$ git clone https://github.com/smartsdk/ngsi-timeseries-api.git
$ cd ngsi-timeseries-api
$ pipenv install
$ source setup_dev_env.sh
$ charm .
```
`pipenv install` installs all Python deps from our `Pipfile` in the
root dir. You'll have to run this when starting from scratch or when
a new lib gets added to the `Pipfile`. Also, you should start your fave
editor in a shell after sourcing our `setup_dev_env.sh` script, which
exports a whole bunch of vars you'll need to run and test Quantum Leap.
For the impatient, here's how to run our test suite if you're starting
from scratch:

```
$ cd where/you/cloned/ngsi-timeseries-api
$ pipenv install
$ source setup_dev_env.sh
$ sh run_tests.sh
```
Let's have a look at the details now.
Test files go in the `tests` directory of the module you're testing,
e.g. `src/reporter/tests/`. Our test framework is pytest.
Each test file name should start with `test_`, and so should test
function names within the file. To run test cases, you'll have to
prep your shell env, which you do by running the `setup_dev_env.sh`
script in the repo root dir:

```
$ cd ngsi-timeseries-api/
$ source setup_dev_env.sh
```
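As a minimal sketch of the naming convention, a test file might look
like the one below. The file name, function names, and the toy function
being tested are all made up for illustration:

```python
# test_example.py -- hypothetical file name following the test_ convention,
# so pytest picks it up automatically during collection.


def is_even(n):
    """Toy function standing in for real Quantum Leap code under test."""
    return n % 2 == 0


def test_is_even():
    # pytest collects this function because its name starts with test_.
    assert is_even(2)
    assert not is_even(3)
```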
Then you can run every test case known to man with:

```
$ sh run_tests.sh
```
This is painstakingly slow though (about 20 mins on my Mac) since most
tests are integration or end-to-end tests (rather than unit!) which
run in a containerised environment through Docker Compose. So you'd
typically only do that before a commit, to double-check the new commit
doesn't wreak havoc. To speed up development a notch, you could try
running just the test cases in your module, e.g. those in
`src/reporter/tests/`.
Each test dir comes with a `run_tests.sh` script that's called by the
main `run_tests.sh` in the root dir. These scripts build the QL container
image, bring up the containerised env as spec'd by `docker-compose.yml`
(in the same test dir) and finally start a `pytest` session to run the
module's test cases. Long story short, to run e.g. the reporter tests:

```
$ cd src/reporter/tests/
$ sh run_tests.sh
```
If you have any unit tests, you can run them separately using your
IDE or `pytest` directly from the shell. This way you won't have to
twiddle your thumbs waiting on integration and e2e tests to cross the
finishing line...
Sometimes you really need a debugger to figure out what the heck Quantum
Leap is up to. Here's one way to debug a Quantum Leap server process
connected to the various back-ends: DB, cache, etc. First off, as usual,
you'll need to prep your shell and start your IDE from there:

```
$ cd ngsi-timeseries-api/
$ source setup_dev_env.sh
$ charm .
```
Now edit `src/reporter/tests/docker-compose.yml` to comment out the
whole `quantumleap` service block. Then:

```
$ cd src/reporter/tests
$ docker-compose up -d
```
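For reference, the edited compose file would look roughly like this.
The service and image names below are only a sketch, check the actual
`docker-compose.yml` for the real contents:

```yaml
services:
  # quantumleap:                     # commented out: we run and debug QL
  #   image: smartsdk/quantumleap    # from the IDE instead
  #   ports:
  #     - "8668:8668"

  crate:                             # back-end services stay up
    image: crate
```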
This brings up a Docker network with all the back-end services Quantum
Leap will connect to. (QL finds the services through the env vars that
`setup_dev_env.sh` exports.) Wait a few secs to make sure all the
services are up and running, then start debugging `src/app.py`.
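The env-var lookup works along these lines. `CRATE_HOST` here is just an
example name for illustration; the actual variables are whatever
`setup_dev_env.sh` exports:

```python
import os

# Fall back to localhost when the variable isn't set, which is handy
# when the back-end containers publish their ports on the local machine.
crate_host = os.environ.get("CRATE_HOST", "localhost")
print(crate_host)
```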
Hammer away, e.g. `curl` a request or start another debug session with
one of the reporter tests. When done, don't forget to clean up after
yourself:

```
$ docker-compose down -v
```
Our Docker-based dependency management will get us, from time to time,
into dependency hell. In fact, in general, building and pulling images
are time-dependent operations. In other words, the image you pull today
from DockerHub for a given tag could be different from the one you pulled
yesterday for the same tag, and likewise building the same Docker file
twice might output different images. We should also make sure the various
Python envs we juggle never get out of synch w/r/t each other.
So when updating dependencies:

- Always prefer more specific Docker tags when available. In particular,
  in our Quantum Leap and Timescale setup Docker files, we should use
  `python:M.m.p-alpineN.n` (e.g. `python:3.6.10-alpine3.12`) rather than
  `python:M.m.p-alpine` (e.g. `python:3.6.10-alpine`). This doesn't
  really solve the problem since even e.g. `python:3.6.10-alpine3.12`
  will get updates from time to time, but hopefully it'll always refer
  to the same Alpine version.
- Stop the Travis build in its tracks if Docker fails to build the QL
  image, so it should be easier to spot a problem image that could
  compromise the test run. (See `set -e` in `setup_dev_env.sh`.)
- Use the exact same Python base env across the board: dev env, Travis,
  test suite, QL and Timescale setup images. In detail: use the same
  Python version `M.m.p` (i.e. down to the patch level as in `3.8.3`)
  in the QL and Timescale Docker files, Travis build (`.travis.yml`),
  and `Pipfile`; also, install that exact version in your dev env.

Ideally we should pin QL Python deps in the `Pipfile` too, but that's
not done yet. Ditto for `apk` packages in the Docker files.
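To make the tag pinning concrete, here's roughly what the top of such a
Docker file would look like with a fully pinned base image (version
numbers taken from the example above, not necessarily current):

```dockerfile
# Pin both the Python patch version and the Alpine release, so the base
# image stays as stable as a floating tag allows.
FROM python:3.6.10-alpine3.12
```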
From time to time, you may run into dependency hell. We've been hit a
few times (see #273 and the other issues linked there as additional
context) and it wasn't fun. Surely there are still things we can improve
on our side, but, truth be told, while the mainstream dependency
management story is decent, it isn't as good as it could be; think
Nix/NixOS and Guix/GuixSD, the gold standard in the field IMHO. (I wish
they were more popular among Pythonistas and DevOps folks, but I'm
digressing :-) Anyway, here are some home remedies for dependency snake
bites; use at your own risk.
Quantum Leap depends on a bunch of libs that in turn depend on many
others. Unless you've got a compelling reason not to, you should
develop, test and run Quantum Leap in a sandboxed environment instead of
installing stuff or relying on libs already in your global Python
environment. We've seen all sorts of wonderful things caused by slightly
different versions of dependent libraries. Often those are indirect
dependencies, which makes it tricky to unravel the weirdness you may be
experiencing, e.g. a test fails because expected and actual time index
are one second apart, but it passes with flying colours on another
machine.
We're using `pipenv` to create dev and test sandboxes. Our `Pipfile`
pins the Python version and that's the one you should install on your
box. If `pipenv` can't find it, it'll complain loudly. If you already
have other Python versions installed, it might take some fiddling to
install yet another version, e.g. you've got `3.8.4` but our `Pipfile`
requires `3.8.3`. While you could change the version in the `Pipfile`
to match yours (e.g. `3.8.4` or even `3.8`), it's best not to since
we had much grief in the past with supposedly compatible versions,
e.g. `3.6.7` and `3.6.10`.
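For reference, the version pin lives in the `Pipfile`'s `[requires]`
section, along these lines (the exact key and version depend on the
actual file):

```toml
[requires]
python_version = "3.8.3"
```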
If `pipenv` starts acting up, like:

```
pkg_resources.DistributionNotFound: The 'pipenv==2018.11.26'
distribution was not found and is required by the application
```

you'll have to reinstall it:

```
$ brew reinstall pipenv
```
Does your QL dev image work differently from, e.g., prod? Do test
results on Travis look different from yours? In general, building and
pulling Docker images are time-dependent operations. So it could be
that even if you're looking at the same tags across the board, the
actual images are slightly different.
In the case of a Travis build, probably your Docker registry has cached
an older Alpine image, so you can try zapping all Alpine images in your
local Docker registry and then rebuilding your QL image. If that still
doesn't work, you might need heavy-handed tactics:

```
$ docker system prune -a
```
The case of two QL images with the same tag (e.g. yours and prod) doing slightly different things can be quite hairy. You'll have to figure out what got built when and possibly look at the history of all involved Docker files...