I'd like to register the same set of tests a few times, with a different before hook
#3366
-
Here is the problem I have. I'm testing a script which gets some data from the database and creates reports from it. So far, I have done it like this: […]

Now, the reports are taking a lot of time, so I decided to precalculate some data and cache it in the database to speed them up. And now I want to make sure that the script which does the precalculations works correctly, too. Here's the scenario: the whole system runs for some time and accumulates data in the database, the caching script runs and updates the cached data, more data comes in, the caching script runs again and updates the cached data, etc. I want to make sure that the generated report is the same, irrespective of when the caching script runs. (I don't want to go into details, but this is something not "obviously correct" – to generate the cache for the newly accumulated data, some of the cache for the previous data is needed, and it's all very dependent on things like timestamps in the database, so it's potentially very easy to screw up and hence I want it covered by tests.)

So, basically, I want to run the seeding/caching scripts in various configurations several times, rerun all the tests and make sure the results are still the same. (This is why mocking the database is not the way I want to go – it would require basically reimplementing it rather than just mocking the results of a few queries.) Ideally, I'd like to register the same set of tests several times, each time with a different before hook.

Here is what I tried so far: registering them in a loop, with a different before hook each time […]

One thing I could do is to pass the parameters to the scripts on the command line. This is not ideal, though, since it would mean that I need to run the test suite several times manually. (Of course, it's easy to write a Bash script for that, but then I no longer run one suite of tests, but a few.)

So far, the best idea I had is this: have my tests in a separate module/file and write a few test files, each setting up a different before hook.

I have a feeling that AVA might not be the ideal tool for this – it's rather far from the ideal unit test example. Still, I already have and use AVA in the project, so I'd prefer not to introduce another testing tool just for this particular set of tests. (Note: this is not a criticism of AVA, every tool has its scope and it's just possible that my use case is a bit outside AVA's.)

Any ideas?
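To make it concrete, what I'm after is roughly this shape (the helper names are made up; the per-configuration setup ends up inside the test body, since test.before() hooks apply to the whole file and run once before the first test):

```js
import test from 'ava';

// Placeholder helpers standing in for the real seeding/caching/report scripts.
import {seedAndCache, generateReport, expectedReport} from './_helpers.js';

const configurations = [
  {name: 'caching script runs after every batch', cacheEvery: 1},
  {name: 'caching script runs once at the end', cacheEvery: 0},
];

for (const config of configurations) {
  test.serial(`report is unchanged when the ${config.name}`, async t => {
    // before() hooks can't vary per configuration within one file,
    // so the setup lives in the test body here.
    await seedAndCache(config);
    const report = await generateReport();
    t.deepEqual(report, expectedReport);
  });
}
```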
Replies: 1 comment
-
Taking the setup out of the test run altogether may be the approach that's easiest to understand and maintain.

You could also create your tests in a helper file (e.g. with a `_` prefix) and import it from several actual test files. IIRC you don't need to call `test.before()` before you declare the tests.
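For illustration, a minimal sketch of that layout might look like this (file and helper names are just placeholders):

```js
// _report-tests.js — the underscore prefix keeps AVA from treating this as a test file.
import test from 'ava';
import {generateReport} from './_helpers.js'; // placeholder for the real report script

// Declares the shared tests; called once by each importing test file.
export function defineReportTests(expectedReport) {
  test('report matches the baseline', async t => {
    const report = await generateReport();
    t.deepEqual(report, expectedReport);
  });
}
```

```js
// cache-after-each-batch.test.js — one of several real test files,
// each differing only in its before() setup (names are placeholders).
import test from 'ava';
import {defineReportTests} from './_report-tests.js';
import {seedAndCache, expectedReport} from './_helpers.js'; // placeholders

test.before(async () => {
  await seedAndCache({cacheEvery: 1});
});

defineReportTests(expectedReport);
```

Each actual test file then differs only in its `before()` setup while the shared assertions live in one place – and as noted above, the `test.before()` registration and the `defineReportTests()` call can come in either order.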