Add acceptance tests #1360
Conversation
Comparing the results of running this against @7df8caa, it appears that this added 48 tests, meaning it is scanning 48 modules. Looking at my system Python, I would expect the count to be closer to 197. Any idea why this is missing so many modules?
LIB_DIR = os.path.dirname(os.__file__)
MODULES_TO_CHECK = [f for f in os.listdir(LIB_DIR) if f.endswith(".py")
                    or os.path.exists(os.path.join(LIB_DIR, f, '__init__.py'))]
MODULES_TO_CHECK = MODULES_TO_CHECK
What is the reason for this line of code?
@degustaf I had a slice there to limit the module count during test runs while implementing this. Obviously it needs to be deleted.
I'll try to revisit the module count sometime around this weekend.
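For reference, a quick standalone way to check how many modules the listing logic above picks up on a given interpreter (this reuses the comprehension from the diff; it is a sketch for checking the count, not code from the PR):

import os

# Same collection logic as the patch, run standalone to count candidates.
lib_dir = os.path.dirname(os.__file__)
modules = [f for f in os.listdir(lib_dir)
           if f.endswith(".py")
           or os.path.exists(os.path.join(lib_dir, f, "__init__.py"))]
print(len(modules))  # roughly 200 top-level modules/packages on a typical CPython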
@rogalski: Looking at Travis, I see why you disabled all the messages to speed up the test suite. But 3.6 had 3 failures and wouldn't finish without disabling the actual checks. After this change, there is just one failure. We might need to reconsider which parts of the standard library to lint, or look into optimizing pylint to run faster.
I think this is interesting, but I'm worried about the increase in test running time, which tripled from what I'm seeing, and it does not provide much value after running it the first time and fixing the bugs it found. It would probably make more sense to have this under a flag, which we can enable before doing a release, rather than running it with every commit. What do you think @rogalski and @degustaf?
@PCManticore I agree that this only needs to be run periodically. I think having it runnable on demand is a good idea. Perhaps move the acceptance directory out of test, and add documentation about running it with pytest.
Hey @rogalski, can you follow up on this? I think we just need to expose this under a flag, which we can run before every release.
Yeah, sure, although probably somewhere around next week. I'd probably go for custom markers and a reasonable default configuration for pytest, but feel free to suggest a better solution.
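A minimal sketch of the custom-marker idea, assuming pytest: the marker name "acceptance" and the --acceptance option are illustrative, not what was ultimately merged.

# conftest.py
import pytest

def pytest_addoption(parser):
    parser.addoption("--acceptance", action="store_true", default=False,
                     help="run slow acceptance tests against the stdlib")

def pytest_configure(config):
    config.addinivalue_line(
        "markers", "acceptance: slow test that lints the standard library")

def pytest_collection_modifyitems(config, items):
    if config.getoption("--acceptance"):
        return  # flag given: run everything, including acceptance tests
    skip = pytest.mark.skip(reason="needs --acceptance option to run")
    for item in items:
        if "acceptance" in item.keywords:
            item.add_marker(skip)

With something like this, a plain pytest run stays fast by default, and the full acceptance suite is opted into with the extra flag before a release.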
That sounds like a good idea @rogalski
Force-pushed from 5973961 to 4542476
AppVeyor issue seems unrelated.
It's a follow-up on the discussion in the #1072 comments.
One failing case was found on Py 3.6 and one on PyPy.
Obviously these tests shouldn't run on every check-in, but having them in the source tree and easily runnable may be beneficial.
Just an idea, we can think about it a little bit more.
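A minimal sketch of what such an acceptance test could look like, assuming pylint is driven through pylint.lint.Run and parametrized over the collected stdlib modules; the test name, the acceptance marker, and the --disable=all choice are illustrative, not the exact code in this PR.

import os

import pytest
from pylint.lint import Run

# Collect top-level stdlib modules and packages, as in the reviewed diff.
LIB_DIR = os.path.dirname(os.__file__)
MODULES_TO_CHECK = sorted(
    f for f in os.listdir(LIB_DIR)
    if f.endswith(".py")
    or os.path.exists(os.path.join(LIB_DIR, f, "__init__.py")))

@pytest.mark.acceptance
@pytest.mark.parametrize("module", MODULES_TO_CHECK)
def test_lint_stdlib_module_does_not_crash(module):
    # exit=False keeps Run() from calling sys.exit() after linting;
    # --disable=all keeps only the "does pylint crash?" aspect of the run.
    Run([os.path.join(LIB_DIR, module), "--disable=all"], exit=False)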