Issues when running tests #13
Comments
I fixed all of these in the most recent push. Because of a previous issue with installing the package, I decided to make the code work as well as possible without cvxpy or pychebfun installed. I think this is the right move: most of the functions work perfectly without these packages, and I don't want to force users to jump through the hoops of installing either of these less common packages.

This does, however, cause issues when running the tests without those packages installed. My solution, perhaps unorthodox (I am open to suggestions), was to check inside the tests that require cvxpy or pychebfun whether they are installed, and skip those tests if not. I have `logging.info` messages for those cases, but pytest does not print them to the terminal by default, so the tests skip silently. I would prefer a warning to show up; I'm open to suggestions there.

All tests pass in Python 3.5.2 with cvxpy and pychebfun installed. In Python 3.7.9 without cvxpy or pychebfun installed, all tests also pass, though the tests that require those packages are skipped silently.

Let me know what you think of my solution for the tests. If it's okay I'll close; if not, a suggestion for a better way to handle it would be very helpful.
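For context, the optional-dependency pattern described above can look roughly like the following. This is a minimal sketch with hypothetical names (`_has_cvxpy`, `velocity_via_cvx`), not the actual pynumdiff source:

```python
import numpy as np

# Probe the optional dependency once at import time and remember the result,
# so the rest of the module works whether or not the package is installed.
try:
    import cvxpy
    _has_cvxpy = True
except ImportError:
    _has_cvxpy = False


def velocity_via_cvx(x, dt):
    """Hypothetical method that genuinely needs the optional solver."""
    if not _has_cvxpy:
        raise ImportError(
            "cvxpy is required for this differentiation method; "
            "install it with `pip install cvxpy`"
        )
    # ... set up and solve the convex problem with cvxpy here ...


def finite_difference(x, dt):
    """Most functions, like this one, work without the optional packages."""
    return np.gradient(np.asarray(x), dt)
```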
That sounds good to me.
I would suggest using `pytest.skip` for those cases, so the skipped tests show up in the pytest summary.
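For example, a minimal sketch of what that could look like (hypothetical test bodies; `pytest.importorskip` is the built-in shorthand, and skipped tests are counted in the pytest summary instead of passing silently):

```python
import pytest


def test_chebydiff():
    # importorskip returns the module when it is available and skips the
    # test otherwise; the skip shows up as "s" / "N skipped" in the summary.
    pychebfun = pytest.importorskip("pychebfun")
    assert pychebfun is not None
    # ... exercise the chebydiff code path using pychebfun ...


def test_velocity_cvxpy():
    try:
        import cvxpy  # noqa: F401
    except ImportError:
        pytest.skip("cvxpy not installed; skipping cvxpy-dependent test")
    # ... exercise the cvxpy-dependent code path ...
```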
For me, with Python 3.7.11, one test fails:

    def test_iterative_velocity(self):
        params_1, val_1 = iterative_velocity(x, dt, params=None, tvgamma=tvgamma, dxdt_truth=dxdt_truth)
        params_2, val_2 = iterative_velocity(x, dt, params=None, tvgamma=0, dxdt_truth=None)
    >   self.assertListEqual(params_1, [2, 0.0001])
    E   AssertionError: Lists differ: [1, 0.0001] != [2, 0.0001]
    E
    E   First differing element 0:
    E   1
    E   2
    E
    E   - [1, 0.0001]
    E   ?  ^
    E
    E   + [2, 0.0001]
    E   ?  ^

    pynumdiff/tests/test_optimize.py:100: AssertionError
I replaced the logging calls with `pytest.skip`, and I tried to make the test that failed for you more robust. It's hard for me to fully debug, though, since I get a different value (2 instead of 1). What versions of scipy and numpy do you have?
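For reference, one way a relaxed assertion could look, sketched with stand-in values rather than the real optimizer output (in the actual test the list comes from `iterative_velocity`; this is not necessarily the exact change that was made):

```python
import unittest


class TestRelaxedParams(unittest.TestCase):
    def test_relaxed_params(self):
        # Stand-in for the optimizer result: one machine returned [1, 0.0001],
        # another [2, 0.0001].
        params_1 = [1, 0.0001]

        # Accept either nearby integer for the solver-dependent first entry,
        # and only pin down the regularization parameter exactly.
        self.assertIn(params_1[0], [1, 2])
        self.assertAlmostEqual(params_1[1], 0.0001)


if __name__ == "__main__":
    unittest.main()
```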
Great, it looks good.
I used the versions from
That is odd; I'm using the same versions of scipy and numpy, on Python 3.7. In any case, could you rerun the tests with the latest code to see if they pass now?
Yes, the relaxed test passes.
Excellent, thanks. I believe everything from this issue is now addressed, so I'm going to close it. If I missed something, let me know.
I noticed the following issues when running `pytest pynumdiff` (with Python 3.7.11 and PyNumDiff installed with `pip install -e .`):

- In `test_optimize.py`, the exception `TypeError: pi_control() got an unexpected keyword argument 'simdt'` is raised (`simdt=0.01`)
- `cvxopt` is not installed (maybe `pip install -r requirements.txt` is required before running the tests?)
- In `test_optimize.py`, both `test_chebydiff` and `test_template` fail
- `PendingDeprecationWarnings` are raised because of `np.matrix` usage (with NumPy 1.21.0); see the sketch after this list
- A `DeprecationWarning` is raised by `scipy.sparse.linalg.cg` (with SciPy 1.7.0)

(This is related to my JOSS review: openjournals/joss-reviews#4078)
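On the `np.matrix` item above, here is a minimal sketch of the usual remedy, assuming the affected code only needs 2-D arrays and matrix products via the `@` operator (an illustration, not the specific change made in pynumdiff):

```python
import numpy as np

# np.matrix is the source of the PendingDeprecationWarning; plain ndarrays
# together with the @ operator cover the same matrix-product use cases.
A_matrix = np.matrix([[2.0, 0.0], [0.0, 3.0]])   # old style, triggers the warning
A_array = np.asarray([[2.0, 0.0], [0.0, 3.0]])   # drop-in 2-D ndarray

x = np.array([1.0, 2.0])
print(A_array @ x)        # matrix-vector product -> [2. 6.]
print(A_array @ A_array)  # matrix-matrix product, equivalent to A_matrix * A_matrix
```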