Cartesian products and more powerful unpacking #29
base: master
Conversation
Representing items supplied by the data and file_data decorators as instances of classes Params and ParamsFailure will simplify implementation of multi-level unpacking (the unpack() method) and nested data and file_data decorators (combine two instances by summing them up).
The data and file_data decorators will just create and store instances of InlineParamsSet and FileParamsSet, respectively. Both classes implement the same interface so the ddt decorator will no longer need to distinguish between them when generating tests for individual combinations of test parameters.
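To make the shared interface concrete, here is a minimal sketch of what the description above could look like. The class names come from this PR, but the constructor signatures and the exact loading behavior are assumptions, not necessarily what the branch implements:

```python
import json


class Params(object):
    """One combination of test parameters, as yielded by both sets."""
    def __init__(self, values):
        self.values = values


class InlineParamsSet(object):
    """Created by @data; stores the literal values it was given."""
    def __init__(self, *values):
        self._values = values

    def __iter__(self):
        for value in self._values:
            yield Params(value)


class FileParamsSet(object):
    """Created by @file_data; loads its values from a JSON file."""
    def __init__(self, path):
        self._path = path

    def __iter__(self):
        with open(self._path) as f:
            for value in json.load(f):
                yield Params(value)
```

Because both classes are plain iterables of Params, the ddt decorator can consume either one with a single loop and never needs to know which kind of set it was handed.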
First, I do not like the idea of masking Python3 capabilities (exceptions in this case) just because we support Python2, too. Second, I want to test it properly, meaning separate tests for Python2 and Python3 where the details differ. Perhaps there is a cleaner way to test version-specific behavior and pass all pep8 tests, but I do not know it.
The change breaks some older tests. I am not fixing them in this commit to make backward-incompatible changes more obvious.
8 tests were broken due to the following changes:

1. The unpack decorator must precede a data or file_data decorator. This is the only change that is by design and cannot be undone.
2. Generated names are numbered from 0 instead of 1 - there is no obvious gain in starting from 1, and the "+1" makes the code less clear.
3. Sequences of underscores in value representations are reduced to a single underscore; there is no meaning in five consecutive underscores. (See the naming sketch below.)
4. The file_data decorator reports failures in data file processing in a more structured way. Details depend on the Python version used. See also commit 204ecae.
5. Some tests were implementation-dependent, and the implementation changed significantly.
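As an illustration of changes 2 and 3, name generation now amounts to roughly the following; make_test_name is a hypothetical helper for illustration, not the actual function in this branch:

```python
import re


def make_test_name(base, index, value):
    # Indices start at 0, and runs of underscores in the value's
    # representation collapse to a single underscore.
    raw = "{0}_{1}_{2}".format(base, index, value)
    return re.sub(r"_+", "_", raw)


make_test_name("test_add", 0, "a__b")  # -> 'test_add_0_a_b'
```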
With more functionality, I felt the need to structure the tutorial a bit more.
Thanks! One small question before I look at this in detail (maybe tomorrow): how does this compare with #25? I imagine myself having to choose between them.
""" | ||
Method decorator to add to your test methods. | ||
Method decorator to supply parameters to your test method from a JSON file. |
Can I suggest that while you're changing this, you move to allow YAML data? It will still parse JSON, but also allows a more readable style of data.
I'm pretty open to the idea and understand that YAML is a superset of JSON (pyyaml hopefully thinks so, too). :) I just do not have any experience with it. Is it really as simple as replacing json.load() with a pyyaml counterpart? Should we perhaps document what features (more complex data types, perhaps?) we do not support?
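For what it's worth, the swap could be roughly this simple, assuming pyyaml is installed; load_data_file is a hypothetical helper, not part of ddt:

```python
import yaml  # pip install pyyaml


def load_data_file(path):
    with open(path) as f:
        # safe_load also parses plain JSON, since YAML is
        # (for all practical purposes) a superset of JSON.
        return yaml.safe_load(f)
```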
I like the idea of supporting YAML. But I'd make that out of the scope of this refactor. Maybe open a separate issue? Which I just did: #30
#29 should supersede #25 feature-wise. I would have based my changes on #25 if I had realized up front what #25 brings:
#25 backward incompatibility:
#29 brings:
#29 backward incompatibility:
Anyway, I will need to clean the code a bit to pass all the checks. Hold on until I let you know, please (perhaps later today).
Thanks to both @cpennington and @mycharis for your work and this joint session today! I understand the reason for the backwards incompatibility, and I think it's the right thing to do. We will release this as ddt 2.0 - I'll try to do a better job at release notes this time.
…mentation

InlineDataValues - Represents values passed to a @data decorator.
FileDataValues - Represents values passed to a @file_data decorator.

A common base class (formerly BaseParamsSet) has been dropped since the classes share a common interface rather than implementation.
IOError is an alias for OSError since Python 3.3, so there is no need for workarounds or guarded Python 3 code that fails on Python 2.
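For illustration, on Python 3.3+:

```python
# The two names refer to the same class, so a single except
# clause covers both.
assert IOError is OSError

try:
    open("/no/such/file")
except OSError as exc:  # would also catch an exception raised as IOError
    print(type(exc).__name__)
```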
…ython2" This reverts commit ae0a393. The code has been updated so that the workaround is no longer needed.
Always time to learn something new. Now I know what all the # pylint comments are good for. :-)
...which is loading a JSON file. Instances of Params are created in FileDataValues.__iter__(); instances of ParamsFailure should be created there as well.
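A hedged sketch of what that could look like, reusing Params from the earlier sketch and assuming a ParamsFailure constructor that simply wraps the exception; the exact exception types caught are also assumptions:

```python
import json


class FileDataValues(object):
    def __init__(self, path):
        self._path = path

    def __iter__(self):
        try:
            with open(self._path) as f:
                values = json.load(f)
        except (IOError, ValueError) as exc:
            # Instead of raising at decoration time, yield a failure
            # item; the test generated from it reports the original error.
            yield ParamsFailure(exc)
            return
        for value in values:
            yield Params(value)
```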
Thanks @txels for your support today. Having decorators for unpacking is much cleaner, imho.
The only thing that I do not like about the current syntax is verbose multi-level unpacking, but it can be solved by an optional argument to the decorators.

```python
@unpack
@data([{"a": 1, "b": 2}])
def test_one(arg):
    ...
```

will still unpack one level and call the function with test_one({"a": 1, "b": 2}), while

```python
@unpack(2)
@data([{"a": 1, "b": 2}])
def test_two(a, b):
    ...
```

will unpack two levels and call the function with test_two(a=1, b=2). Same for file_data.
This PR looks like quite a big change indeed.
Hi @mycharis, I'm really sorry about having neglected this for so long. I have been considering whether to just make this into a new library. How does that sound? You are already using your fork anyway, I imagine... Would be good to join forces on a new, cleaner library.
When you say a new library, do you mean a backwards-incompatible ddt 2.0?
Disregarding backwards compatibility and releasing a 2.0.0 is something I would like to see.
We started a discussion about design for ddt 2.0 and this is my current thinking about where I'd like to take this library: #44 (comment) In terms of API it's a detour from the current one, but I think it simplifies solving the use cases that triggered you to write this PR, so I'd love for you guys to take a look at that issue and add some notes there. Once the main ideas there are clear and we are convinced it's the way forward, I will decide whether it's better to start the new version out of this branch, current master, or mostly rewrite from scratch and pick pieces from here and there.
Is anyone still interested in this? |
We implemented something similar to this here: https://github.com/Qiskit/qiskit-terra/blob/209d1b6a936fde427de76a49899ebd901f89ff9c/test/__init__.py
Qiskit project is interested :) |
What started as a small exercise to get Cartesian products into ddt resulted in a package rewritten almost from scratch. The implementation changes go in the direction of edx:cartesian-products but way beyond it. In the end, it rather aims at becoming ddt 2.0. :) It is still mostly backwards-compatible, and I do not intend to create a new package when one already exists.
Here is what I would like to ask you:
Parts of the implementation might be controversial, especially with respect to Python2. I am working exclusively in Python3 and studied Python2 only to the extent needed to get all tests to pass. :)
On the other hand:
There are still some minor items on my backlog. I will be happy to complete them if you indicate that my changes stand a chance.
Thanks. :)