Cartesian products and more powerful unpacking #29

Open · wants to merge 17 commits into base: master
Conversation

mycharis

What started as a small exercise to get Cartesian products in ddt resulted in a package rewritten almost from scratch. Implementation changes go in the direction of edx:cartesian-products but way beyond that. In the end, it rather aims at becoming ddt 2.0. :) It is still mostly backwards-compatible and I do not intend to create a new package when one already exists.

Here is what I would like to ask you:

  1. Build the documentation and read the tutorial to see what I have done and why.
  2. If you will generally approve the motivation, review the commits and their comments.
  3. Review the code if you get this far and let me know what needs to be done to get it merged. :)

Parts of the implementation might be controversial, especially with respect to Python2. I am working exclusively in Python3 and studied Python2 only enough to get all tests passing. :)

On the other hand:

  • You can name data with @data, too.
  • Tests have 100% coverage (99% with fixed PYTHONHASHSEED).
  • Documentation now includes the output of nosetests (this does not work with master).

There are still some minor items on my backlog. I will be happy to complete them if you indicate that my changes stand a chance.

  • Better terminology, reflected in the class names. I am not very happy with the current names but could not come up with better ones.
  • API documentation. It currently serves more like comments to the code than API documentation. This largely depends on the previous point.
  • I would like to introduce another section to the documentation - public interface/decorators, as a reference guide. The decorators are a bit lost in the API section.
  • Find an easy-to-grasp example of a Cartesian product of three sets that would work in Python2. Unfortunately, str.split() does not accept keyword arguments in Python2.
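For what it's worth, the combinatorics behind such an example can be shown with itertools.product alone. This is plain Python, not ddt syntax, and the three sets are made up for illustration:

```python
from itertools import product

# Cartesian product of three small sets: every combination of one
# element from each set, 2 * 2 * 2 = 8 combinations in total.
widths = [10, 20]
heights = [3, 4]
units = ["px", "em"]

combos = list(product(widths, heights, units))
print(len(combos))  # -> 8
print(combos[0])    # -> (10, 3, 'px')
```

itertools.product iterates the rightmost set fastest, so the ordering of generated combinations is deterministic, which matters for stable test names.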

Thanks. :)

Jan Holeček added 7 commits May 14, 2015 13:31
Representing items supplied by the data and file_data decorators as instances of
classes Params and ParamsFailure will simplify implementation of multi-level unpacking
(the unpack() method) and nested data and file_data decorators (combine two instances
by summing them up).
The data and file_data decorators will just create and store instances of InlineParamsSet
and FileParamsSet, respectively. Both classes implement the same interface so the ddt
decorator will no longer need to distinguish between them when generating tests for
individual combinations of test parameters.
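The design described in this commit might look roughly like the sketch below. Only the class names InlineParamsSet and FileParamsSet come from the commit; Params' shape and everything else here are assumptions:

```python
import json

class Params(object):
    """Hypothetical wrapper for one test's parameter values."""
    def __init__(self, values):
        self.values = values

class InlineParamsSet(object):
    """Parameters supplied inline via the data decorator."""
    def __init__(self, *values):
        self._values = values

    def __iter__(self):
        for value in self._values:
            yield Params(value)

class FileParamsSet(object):
    """Parameters loaded lazily from a JSON file via file_data."""
    def __init__(self, path):
        self._path = path

    def __iter__(self):
        with open(self._path) as data_file:
            for value in json.load(data_file):
                yield Params(value)
```

Because both classes yield Params instances from __iter__(), a consumer can iterate either one without caring where the values came from, which is the point of the shared interface.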
First, I do not like the idea of masking Python3 capabilities (exceptions in this case)
just because we support Python2, too. Second, I want to test it properly, meaning separate
tests for Python2 and Python3 where the details differ.

Perhaps there is a cleaner way to test version-specific behavior and pass all pep8 tests,
but I do not know it.
The change breaks some older tests. I am not fixing them in this commit to make
backward-incompatible changes more obvious.
8 tests were broken due to the following changes:

1. The unpack decorator must precede a data or file_data decorator. This is the only
  change that is by design and cannot be undone.

2. Generated names are numbered from 0 instead of 1 - there is no obvious gain
  in starting from 1 but the "+1" makes code less clear.

3. Sequences of underscores in value representations are reduced to a single
  underscore. There is no meaning in five consecutive underscores.

4. The file_data decorator reports failures in data file processing in a more structured
  way. Details depend on Python version used. See also commit
  204ecae.

5. Some tests were implementation-dependent. The implementation changed
  significantly.
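The underscore collapsing in point 3 is a one-liner with the re module; the function name below is made up for illustration:

```python
import re

def collapse_underscores(name):
    # Reduce any run of consecutive underscores in a generated
    # test name to a single underscore.
    return re.sub(r"_+", "_", name)

print(collapse_underscores("test_1_____value"))  # -> test_1_value
```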
With more functionality, I felt the need to structure the tutorial a bit more.
@landscape-bot

Code Health
Repository health decreased by 3% when pulling 9f6afbd on mycharis:mycharis-ddt-2.0 into 9e9998b on txels:master.

@txels
Collaborator

txels commented May 14, 2015

Thanks!
Yes I know the feeling. This package started as a really small module that was useful for me. I didn't even imagine the use cases people would come up with. With all those contributions, the code was starting to feel quite messy to me. A rewrite needs to happen sooner or later.

One small question before I look at this in detail (maybe tomorrow): how does this compare with #25? I imagine I will have to choose one of the two.

@txels txels mentioned this pull request May 14, 2015
Method decorator to add to your test methods.
Method decorator to supply parameters to your test method from a JSON file.
Contributor

Can I suggest that, while you're changing this, you move to allowing YAML data? It will parse JSON, but also allows for a more readable style of data.

Author

I am pretty open to the idea and understand that YAML is a superset of JSON (pyyaml hopefully thinks so, too). :) I just do not have any experience with it. Is it really as simple as replacing json.load() with a pyyaml counterpart? Should we perhaps document which features (more complex data types, perhaps?) we do not support?
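It would likely be close to that simple. A sketch of a file_data-style loader with a pluggable parser (the function name is made up, not ddt's internals):

```python
import json

def load_data_file(path, loader=json.load):
    # Swapping in yaml.safe_load for json.load would be the drop-in
    # replacement discussed here: PyYAML's safe_load accepts an open
    # file object just like json.load, and YAML is (very nearly) a
    # superset of JSON, so existing JSON data files keep working.
    with open(path) as data_file:
        return loader(data_file)
```

YAML-only constructs (dates, anchors, merge keys) would be the features worth documenting as supported or not.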

Collaborator

I like the idea of supporting YAML. But I'd make that out of the scope of this refactor. Maybe open a separate issue? Which I just did: #30

@mycharis
Author

#29 should supersede #25 feature-wise. I would have based my changes on #25 if I had realized up front why file_data decorator processing is deferred (the file path is relative to the containing class).

#25 brings:

  • Nested data and file_data decorators

#25 backward incompatibility:

  • No incompatible changes

#29 brings:

  • Nested data and file_data decorators
  • Unpacking per data and file_data decorator
  • Multi-level unpacking
  • Custom names for inline data (keyword arguments for data decorator)
  • Repeatable numbering of generated tests for named values (JSON dict, inline keyword arguments) - values are processed in the order of sorted keys
  • encoding argument of file_data
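The repeatable numbering of named values can be sketched as follows; the helper name is illustrative, not ddt's actual internals:

```python
def number_named_values(values):
    """Yield (index, name, value) in a repeatable order.

    Sorting the names makes the generated test numbers independent
    of dict iteration order (and of PYTHONHASHSEED).
    """
    for index, name in enumerate(sorted(values)):
        yield index, name, values[name]

print(list(number_named_values({"b": 2, "a": 1})))
# -> [(0, 'a', 1), (1, 'b', 2)]
```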

#29 backward incompatibility:

Anyway, I will need to clean the code a bit to pass all the checks. Hold on until I let you know, please (perhaps later today).

@txels
Collaborator

txels commented May 14, 2015

Thanks both @cpennington and @mycharis for your work and this joint session today!

I understand the reason for the backwards incompatibility, and I think it's the right thing to do. When we introduced unpack, there was a discussion of alternative APIs to achieve that purpose; given that we supported a single @data decorator at the time, what we did looked like the simplest approach. Doing away with that decorator and modifying data (e.g. with an additional _unpack argument) could be an alternative, although it can be tricky and carried a small risk of backwards incompatibility. Since we are already introducing some backwards incompatibility for good reasons, we could re-evaluate the option for 2.0 of using _-prefixed keyword arguments in data and warn against using them as kwarg names for data values.

We will release this as ddt 2.0 - I'll try to do a better job at release notes this time.

Jan Holeček added 10 commits May 14, 2015 17:53
…mentation

InlineDataValues - Represents values passed to a @data decorator.
FileDataValues - Represents values passed to a @file_data decorator.

A common base class (formerly BaseParamsSet) has been dropped since the classes
share a common interface rather than implementation.
IOError is an alias for OSError since Python 3.3 so there is no need for workarounds
or guarded Python 3 code that fails on Python 2.
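This aliasing is easy to check directly on Python 3.3 or later:

```python
# Since Python 3.3, IOError is literally the same class as OSError,
# so a single except clause covers both without version guards.
assert IOError is OSError

try:
    open("/no/such/file/anywhere")
except OSError as exc:
    # FileNotFoundError is an OSError subclass; the situations that
    # raised IOError on Python 2 are folded into OSError on Python 3.
    print(type(exc).__name__)  # -> FileNotFoundError
```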
…ython2"

This reverts commit ae0a393.

The code has been updated so that the workaround is no longer needed.
Always time to learn something new. Now I know what all the # pylint comments are good for. :-)
...which is loading a JSON file. Instances of Params are created in FileDataValues.__iter__();
instances of ParamsFailure should be created there as well.
@landscape-bot

Code Health
Code quality remained the same when pulling 25bf37c on mycharis:mycharis-ddt-2.0 into 9e9998b on txels:master.

@mycharis
Author

Thanks @txels for your support today.

I do not like the idea of _unpack keyword argument very much. It is hard enough to write legible inline data as it is. The keyword argument would be lost.

Having decorators for unpacking is much cleaner, imho. If you worry about backward compatibility and interpretation of @unpack, how about:

  • @unpacknext to unpack values in the following data set,
  • @unpackall to unpack values in all data sets (the position of this decorator does not matter), and
  • @unpack as a deprecated alias for @unpackall?

The only thing that I do not like about the current syntax is the verbose multi-level unpacking, but that can be solved by an optional argument to the decorators.

@unpack
@data([{"a": 1, "b": 2}])
def test_one(arg):
    ...

will still unpack one level and call the function with

test_one({"a": 1, "b": 2})

while

@unpack(2)
@data([{"a": 1, "b": 2}])
def test_two(a, b):
    ...

will unpack two levels and call the function with

test_two(a=1, b=2)

Same for @unpackall. See 9828a50.
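A hypothetical helper matching the semantics of the two examples above could look like this; it is a sketch of the proposed behaviour, not ddt's implementation:

```python
def call_with_unpacking(func, value, levels):
    """Call func with `value` unpacked `levels` container levels deep."""
    # Peel all but the last level; each peel assumes a single-element
    # container, as in the examples above.
    for _ in range(levels - 1):
        (value,) = value
    # The last level decides positional vs. keyword arguments.
    if isinstance(value, dict):
        return func(**value)
    return func(*value)

# One level: the outer list is unpacked, the dict stays intact.
call_with_unpacking(lambda arg: arg, [{"a": 1, "b": 2}], 1)
# -> {'a': 1, 'b': 2}

# Two levels: the dict is unpacked into keyword arguments.
call_with_unpacking(lambda a, b: (a, b), [{"a": 1, "b": 2}], 2)
# -> (1, 2)
```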

@keradus

keradus commented Sep 22, 2015

This PR looks quite a big change indeed.
What is the status of it?

@landscape-bot

Code Health
Code quality remained the same when pulling 25bf37c on mycharis:mycharis-ddt-2.0 into 2aba978 on txels:master.

@txels
Collaborator

txels commented Nov 18, 2015

Hi @mycharis I'm really sorry about having neglected this for so long.

I have been considering turning this into a new ddt2 and not caring about backwards compatibility: just redesigning the API to cover the more extensive usage scenarios that I was not contemplating in the initial design.

How does that sound? You are already using your fork anyway, I imagine... It would be good to join forces on a new, cleaner library.

@cpennington
Contributor

When you say ddt2, do you mean a new package with that name, or just a new version of the ddt package? I'd encourage the latter.

@pradyunsg
Contributor

Disregarding backwards compatibility and releasing a 2.0.0 is something I would like to see.

@txels
Collaborator

txels commented Aug 5, 2016

We started a discussion about design for ddt 2.0 and this is my current thinking about where I'd like to take this library: #44 (comment)

In terms of API it's a detour from the current one but I think it simplifies solving the use cases that triggered you to write this PR, so I'd love for you guys to take a look at that issue and add some notes there.

Once the main ideas there are clear and we are convinced it's the way forward, I will decide whether it's better to start the new version out of this branch, current master, or mostly rewrite from scratch and pick pieces from here and there.

@wswld
Contributor

wswld commented Jun 25, 2018

Is anyone still interested in this?

@1ucian0

1ucian0 commented Jan 17, 2020

We implemented something similar to this here: https://github.com/Qiskit/qiskit-terra/blob/209d1b6a936fde427de76a49899ebd901f89ff9c/test/__init__.py

Is anyone still interested in this?

Qiskit project is interested :)
