Issues when running tests #13
Closed
pmli opened this issue Feb 11, 2022 · 8 comments

@pmli

pmli commented Feb 11, 2022

I noticed the following issues when running pytest pynumdiff (with Python 3.7.11 and PyNumDiff installed with pip install -e .):

  • in test_optimize.py, the exception TypeError: pi_control() got an unexpected keyword argument 'simdt' is raised
  • after fixing the above issue by removing simdt=0.01:
    • some tests fail if cvxopt is not installed (maybe pip install -r requirements.txt is required before running the tests?)
    • in test_optimize.py, both test_chebydiff and test_template fail
    • many PendingDeprecationWarnings are raised because of np.matrix usage (with NumPy 1.21.0); see the sketch below this list
    • a DeprecationWarning is raised by scipy.sparse.linalg.cg (with SciPy 1.7.0)
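
For the np.matrix warnings, a rough sketch of the replacement NumPy recommends (an illustrative example only, not code from PyNumDiff):

    import numpy as np

    # np.matrix triggers PendingDeprecationWarning on recent NumPy versions:
    # M = np.matrix([[1.0, 2.0], [3.0, 4.0]])

    # Plain 2-D ndarrays are the recommended replacement; use @ for products.
    M = np.array([[1.0, 2.0], [3.0, 4.0]])
    v = np.array([1.0, -1.0])
    result = M @ v  # matrix-vector product without np.matrix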

(This is related to my JOSS review: openjournals/joss-reviews#4078)

pmli mentioned this issue Feb 11, 2022
@florisvb
Owner

I fixed all of these in the most recent push.

Due to a previous issue regarding installing the package, I decided to make the code install and work as well as possible without cvxpy or pychebfun. I think this is the right move, because many of the functions (most, really) work perfectly without these packages, and I don't want to force users to jump through the hoops of installing either of these less common packages.

This does, however, cause issues when running the tests without these packages installed. My solution, perhaps unorthodox (I am open to suggestions), was to check in the tests that require cvxpy or pychebfun whether they are installed, and to skip those tests if they are not. I emit logging.info messages for those cases, but pytest does not print them to the terminal by default, so the tests skip silently. I would prefer a warning to show up; I'm open to suggestions there.
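
Roughly, the check looks like this (a sketch only; the class and test names are placeholders, not the exact PyNumDiff test code):

    import logging
    import unittest

    try:
        import cvxpy
        CVXPY_INSTALLED = True
    except ImportError:
        CVXPY_INSTALLED = False

    class TestConvexMethods(unittest.TestCase):  # placeholder test class
        def test_smooth_acceleration(self):
            if not CVXPY_INSTALLED:
                # pytest hides this message by default, so the skip is silent
                logging.info('cvxpy not installed; skipping test_smooth_acceleration')
                return
            ...  # assertions that need cvxpy would go here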

All tests pass in Python 3.5.2 with cvxpy and pychebfun installed.

In Python 3.7.9 without cvxpy or pychebfun installed, all tests pass, though the tests that require them are skipped silently.

@florisvb
Owner

Let me know what you think of my solution for the tests. If it's okay, I'll close the issue; if not, a suggestion for a better way to handle it would be very helpful.

@pmli
Author

pmli commented Feb 23, 2022

Due to a previous issue regarding installing the package, I decided to make the code install and work as well as possible without cvxpy or pychebfun. I think this is the right move, because many of the functions (most, really) work perfectly without these packages, and I don't want to force users to jump through the hoops of installing either of these less common packages.

That sounds good to me.

This does, however, cause issues when running the tests without these packages installed. My solution, perhaps unorthodox (I am open to suggestions), was to check in the tests that require cvxpy or pychebfun whether they are installed, and to skip those tests if they are not. I emit logging.info messages for those cases, but pytest does not print them to the terminal by default, so the tests skip silently. I would prefer a warning to show up; I'm open to suggestions there.

I would suggest using pytest.skip instead of logging. That way, pytest will report how many tests were skipped.
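
For example (a sketch only; test_convex_smooth is a made-up name, and pytest.importorskip is a convenient variant of the same idea):

    import pytest

    def test_chebydiff():
        # Skips the test, and pytest reports it in its summary, if pychebfun is missing.
        pytest.importorskip('pychebfun')
        ...  # the actual chebydiff assertions

    def test_convex_smooth():
        try:
            import cvxpy  # noqa: F401
        except ImportError:
            pytest.skip('cvxpy is not installed')
        ...  # the actual cvxpy-based assertions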

All tests pass in Python 3.5.2 with cvxpy and pychebfun installed.

For me, with Python 3.7.11, one test fails:

    def test_iterative_velocity(self):
        params_1, val_1 = iterative_velocity(x, dt, params=None, tvgamma=tvgamma, dxdt_truth=dxdt_truth)
        params_2, val_2 = iterative_velocity(x, dt, params=None, tvgamma=0, dxdt_truth=None)
>       self.assertListEqual(params_1, [2, 0.0001])
E       AssertionError: Lists differ: [1, 0.0001] != [2, 0.0001]
E       
E       First differing element 0:
E       1
E       2
E       
E       - [1, 0.0001]
E       ?  ^
E       
E       + [2, 0.0001]
E       ?  ^

pynumdiff/tests/test_optimize.py:100: AssertionError

@florisvb
Owner

I replaced the logging calls with pytest.skip, and I tried to make the test that failed for you more robust. It's hard for me to fully debug, though, since I get a different value (i.e., 2 instead of 1). What versions of scipy and numpy do you have?
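
A relaxation along these lines, for example (a sketch; the actual change may differ), reusing the names from the failing test above:

    def test_iterative_velocity(self):
        params_1, val_1 = iterative_velocity(x, dt, params=None,
                                             tvgamma=tvgamma, dxdt_truth=dxdt_truth)
        # Accept either optimum the optimizer may land on across environments,
        # instead of requiring an exact list match.
        self.assertIn(params_1[0], [1, 2])
        self.assertAlmostEqual(params_1[1], 0.0001)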

@pmli
Author

pmli commented Feb 23, 2022

I replaced the logging calls with pytest.skip,

Great, it looks good.

and I tried to make the test that failed for you more robust. Hard for me to fully debug though, since I get a different value (i.e. 2 instead of 1). What version of scipy and numpy do you have?

I used the versions from requirements.txt.

@florisvb
Owner

That is odd; I'm using the same versions of scipy and numpy on Python 3.7.

In any case, could you rerun the tests with the latest code to see if they pass now?

@pmli
Author

pmli commented Feb 24, 2022

Yes, the relaxed test passes.

@florisvb
Owner

Excellent, thanks.

I believe everything from this issue is now addressed, so I'm going to close the issue. If I missed something, let me know.
