[READY] Migrate to unittest #1591
Conversation
Force-pushed from 9af7f0f to 988e577.
Codecov Report

```
@@            Coverage Diff             @@
##           master    #1591      +/-   ##
==========================================
+ Coverage   96.32%   96.40%   +0.07%
==========================================
  Files          90       90
  Lines        7838     7838
  Branches      164      164
==========================================
+ Hits         7550     7556       +6
+ Misses        235      230       -5
+ Partials       53       52       -1
```
Finally. It only took me slightly less than a week. Ready for review, with the updated description. Hopefully, the description will help with the review.
With a few minor nits and a couple of questions. In particular the one about the `pattern` with the load tests protocol is worth checking.
Reviewed 123 of 123 files at r1, all commit messages.
Reviewable status: 1 of 2 LGTMs obtained (waiting on @bstaletic)
a discussion (no related file):
do we need to update .vimspector.json? I can do this later if you want.
a discussion (no related file):
(optional) we've added `# noqa` to a number of places where we're importing `setUpModule`. The purpose of the import is to only have to write it once, I get that; however, in many cases:
- the imports are added to existing import lists
- the new import lists are > 80 chars

so the `noqa` is suppressing some other errors. I'd prefer to put these imports individually:

```python
from blah import setUpModule     # noqa
from blah import tearDownModule  # noqa
```

this narrows the exclusion and makes it a bit clearer what it's for.
But this is a bit of a pain and only do it if you have the energy. I really don't mind that much.
Same for many of the other comments, they are just bikeshedding nits and not that important.
run_tests.py, line 164 at r1 (raw file):
```python
action = 'store_true', help = 'Run tests inside valgrind.' )
parser.add_argument( '--no-parallel', action='store_true',
```
it would be a shame to lose parallel running. I take it that's not implemented/supported in unittest?
ycmd/tests/init.py, line 33 at r1 (raw file):
```python
"""Defines a decorator to be attached to tests of this package. This
decorator passes the shared ycmd application as a parameter. Do NOT attach
it to test generators but directly to the yielded tests."""
```
is this terminology right now? I think we don't yield tests anymore but use subTest or something?
ycmd/tests/init.py, line 41 at r1 (raw file):
```python
def Wrapper( *args, **kwargs ):
  ClearCompletionsCache()
  return test( args[ 0 ], shared_app, *args[ 1: ], **kwargs )
```

using `args[ 0 ]` here seems odd. better to name the arg in the wrapper spec: `def Wrapper( test_case_instance, *args, **kwargs )`?
ycmd/tests/init.py, line 50 at r1 (raw file):
```python
def Wrapper( *args, **kwargs ):
  with IsolatedApp( custom_options ) as app:
    test( args[ 0 ], app, *args[ 1: ], **kwargs )
```

here too? `def Wrapper( test_case, *args, **kwargs )`
ycmd/tests/identifier_utils_test.py, line 410 at r1 (raw file):
```python
[ 'Simple', '1stNonNumber', 'Subidentifier', 'Unicode' ] )
for i, args in enumerate( zipped ):
  with self.subTest( args[ 1 ], ident = args[ 0 ], i = i ):
```

it seems a bit pointless to include `i` here as we're including both of the arguments anyway. The index is largely irrelevant if we know the ident and the ids. No problem with it, though I would have written this like:

```python
for ident, id in zipped:
  with self.subTest( ident = ident, id = id ):
    ...
```

or am I missing something?
ycmd/tests/shutdown_test.py, line 121 at r1 (raw file):
```python
def load_tests( loader: unittest.TestLoader, tests, pattern ):
```

do we need to apply the `pattern` filter ourselves here or does that happen before or after this is called? Have you tried `-k test_FromWatchdog*` or something?
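For reference, a minimal sketch of a `load_tests` hook that applies the `pattern` argument itself, using only the stdlib. This is illustrative, not the actual `shutdown_test.py` implementation, and note that in the load_tests protocol `pattern` is the discovery file glob (often `None`), which is distinct from `-k`.

```python
import fnmatch
import unittest


class ShutdownTest( unittest.TestCase ):
  def test_FromWatchdogWithoutSubserver( self ):
    pass

  def test_FromHandlerWithoutSubserver( self ):
    pass


def load_tests( loader, standard_tests, pattern ):
  # unittest calls this hook instead of its default loading for the module.
  # Respect the pattern when one is given; fall back to every test method
  # otherwise.
  suite = unittest.TestSuite()
  for name in loader.getTestCaseNames( ShutdownTest ):
    if pattern is None or fnmatch.fnmatch( name, pattern ):
      suite.addTest( ShutdownTest( name ) )
  return suite
```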
ycmd/tests/test_utils.py, line 412 at r1 (raw file):
```python
if os.environ.get( 'YCM_TEST_NO_RETRY' ) == 'XFAIL':
  return expectedFailure
```

I think this means that the test will FAIL if the test passes. From the docs for `expectedFailure`:

> If the test passes, it will be considered a failure.

I think that's wrong as it will make the test flaky if it happens to succeed first time. Better to raise `SkipTest` after the first failure.
ycmd/tests/test_utils.py, line 430 at r1 (raw file):
```python
except Exception as test_exception:
  run += 1
  if run == opts[ 'reruns' ]:
```

maybe add:

```python
elif os.environ.get( 'YCM_TEST_NO_RETRY', None ) == 'XFAIL':
  raise SkipTest()
```
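A self-contained sketch of the retry-versus-skip behaviour being discussed. The `YCM_TEST_NO_RETRY` name matches the thread; the decorator itself is illustrative, not the real `test_utils.py` code.

```python
import functools
import os
from unittest import SkipTest


def WithRetry( test, reruns = 20 ):
  """Re-run a flaky test up to 'reruns' times. With YCM_TEST_NO_RETRY set
  to 'XFAIL', skip after the first failure instead of retrying, so full
  runs stay fast without marking a first-try pass as a failure (which is
  what expectedFailure would do)."""
  @functools.wraps( test )
  def Wrapper( *args, **kwargs ):
    run = 0
    while True:
      try:
        return test( *args, **kwargs )
      except SkipTest:
        # Don't swallow deliberate skips.
        raise
      except Exception:
        run += 1
        if run == reruns:
          raise
        if os.environ.get( 'YCM_TEST_NO_RETRY' ) == 'XFAIL':
          raise SkipTest( 'flaky test failed once; not retrying' )
  return Wrapper
```

Unlike `expectedFailure`, a test that happens to pass on the first attempt is still reported as a pass; only actual failures are converted into skips.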
ycmd/tests/clangd/signature_help_test.py, line 25 at r1 (raw file):
```python
from ycmd import handlers
from ycmd.tests.clangd import PathToTestFile, SharedYcmd, setUpModule, tearDownModule, IsolatedYcmd # noqa
```

I think we can make this 80 characters even if we're STFUing flake8
ycmd/tests/cs/init.py, line 108 at r1 (raw file):
```python
if filepath not in shared_filepaths.setdefault( app, [] ):
  # StartCompleterServer( app, 'cs', filepath )
```
why commented out ? if it's not needed, prefer to remove it (perhaps with a comment).
ycmd/tests/cs/signature_help_test.py, line 30 at r1 (raw file):
```python
IsolatedYcmd, SharedYcmd, WrapOmniSharpServer, setUpModule, tearDownModule )
```

would prefer stylistically to put these on their own line. I really abhor bin-packing, but I particularly don't like mixing styles in one command :)
ycmd/tests/go/diagnostics_test.py, line 28 at r1 (raw file):
```python
import json
from ycmd.tests.go import PathToTestFile, SharedYcmd, setUpModule, tearDownModule # noqa
```

check 80 chars.
ycmd/tests/rust/testdata/common/Cargo.lock, line 3 at r1 (raw file):
```
# This file is automatically @generated by Cargo.
# It is not intended for manual editing.
version = 3
```
why was this file added? I have no problem, just curious what is creating it.
Reviewable status: 0 of 2 LGTMs obtained (and 1 stale) (waiting on @bstaletic and @puremourning)
a discussion (no related file):
Previously, puremourning (Ben Jackson) wrote…
(optional) we've added `# noqa` to a number of places where we're importing `setUpModule`. The purpose of the import is to only have to write it once, I get that; however, in many cases:
- the imports are added to existing import lists
- the new import lists are > 80 chars

so the `noqa` is suppressing some other errors. I'd prefer to put these imports individually:

```python
from blah import setUpModule     # noqa
from blah import tearDownModule  # noqa
```

this narrows the exclusion and makes it a bit clearer what it's for.
But this is a bit of a pain and only do it if you have the energy. I really don't mind that much.
Same for many of the other comments, they are just bikeshedding nits and not that important.
Right, my laziness is showing. Done.
a discussion (no related file):
Previously, puremourning (Ben Jackson) wrote…
do we need to update .vimspector.json? I can do this later if you want.
Please check the change to `.vimspector.json`, but I think I got it right.
run_tests.py, line 164 at r1 (raw file):
Previously, puremourning (Ben Jackson) wrote…
it would be a shame to lose parallel running. I take it that's not implemented/supported in unittest?
Not supported out of the box. We can opt for a different test runner. I tried using pytest as just a runner, but that fails with `self.subTest` context managers.
Maybe nose2? A quick StackOverflow search led me to testtools.
Haven't really looked into alternative test runners beyond that.
ycmd/tests/init.py, line 33 at r1 (raw file):
Previously, puremourning (Ben Jackson) wrote…
is this terminology right now? I think we don't yield tests anymore but use subTest or something?
That's what happens when you (read: I) dig through ancient commits to get the old code back. That comment is definitely wrong!
Comments deleted.
ycmd/tests/init.py, line 41 at r1 (raw file):
Previously, puremourning (Ben Jackson) wrote…
using `args[ 0 ]` here seems odd. better to name the arg in the wrapper spec: `def Wrapper( test_case_instance, *args, **kwargs )`?
Definitely reads better. Done.
ycmd/tests/init.py, line 50 at r1 (raw file):
Previously, puremourning (Ben Jackson) wrote…
here too? `def Wrapper( test_case, *args, **kwargs )`
Done.
ycmd/tests/identifier_utils_test.py, line 410 at r1 (raw file):
Previously, puremourning (Ben Jackson) wrote…
it seems a bit pointless to include `i` here as we're including both of the arguments anyway. The index is largely irrelevant if we know the ident and the ids. No problem with it, though I would have written this like:

```python
for ident, id in zipped:
  with self.subTest( ident = ident, id = id ):
    ...
```

or am I missing something?
You're overlooking that the index is passed down to `LoopExpectedLongestIdentifier()`, which then passes it over to C++ as `end_index`.
That was the reason these two tests were written in such an ugly way.
In Python 2 the following was possible:

```python
for i, (descr, identifier) in enumerate(zipped):
```
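A runnable sketch of the subTest-in-a-loop idiom under discussion, keeping the index because it is forwarded to another helper. The data and the `LoopExpectedLongestIdentifier_Stub` helper are made up for illustration; the real helper hands `end_index` over to C++.

```python
import unittest


def LoopExpectedLongestIdentifier_Stub( ident, end_index ):
  # Stand-in for the real helper that forwards end_index to C++.
  return len( ident ) if end_index >= 0 else -1


class IdentifierSubTestExample( unittest.TestCase ):
  def test_Identifiers( self ):
    zipped = zip( [ 'foo', 'bar2', 'bäz' ],
                  [ 'Simple', '1stNonNumber', 'Unicode' ] )
    for i, ( ident, description ) in enumerate( zipped ):
      # On failure, each iteration is reported as its own sub-test with
      # the description, the identifier and the index in the message.
      with self.subTest( description, ident = ident, i = i ):
        self.assertGreater(
            LoopExpectedLongestIdentifier_Stub( ident, i ), 0 )
```

The `for i, ( ident, description ) in enumerate( zipped )` unpacking works in Python 3 as well; only tuple parameters in function signatures were removed.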
ycmd/tests/shutdown_test.py, line 121 at r1 (raw file):
Previously, puremourning (Ben Jackson) wrote…
do we need to apply the `pattern` filter ourselves here or does that happen before or after this is called? Have you tried `-k test_FromWatchdog*` or something?
Right, we're completely ignoring the pattern here. Pattern matching isn't just a substring match, so I put it aside. Particularly because `-k` is allowed to match fully qualified module names: https://docs.python.org/3/library/unittest.html#cmdoption-unittest-k
ycmd/tests/test_utils.py, line 412 at r1 (raw file):
Previously, puremourning (Ben Jackson) wrote…
```python
if os.environ.get( 'YCM_TEST_NO_RETRY' ) == 'XFAIL':
  return expectedFailure
```

I think this means that the test will FAIL if the test passes. From the docs for `expectedFailure`:

> If the test passes, it will be considered a failure.

I think that's wrong as it will make the test flaky if it happens to succeed first time. Better to raise `SkipTest` after the first failure.
Isn't the point of expected failures to fail if the assertion passes?
To be honest, I don't remember when this `YCM_TEST_NO_RETRY == XFAIL` thing was added. Do you remember?
ycmd/tests/test_utils.py, line 430 at r1 (raw file):
Previously, puremourning (Ben Jackson) wrote…
maybe add:

```python
elif os.environ.get( 'YCM_TEST_NO_RETRY', None ) == 'XFAIL':
  raise SkipTest()
```
Sure, done.
ycmd/tests/clangd/signature_help_test.py, line 25 at r1 (raw file):
Previously, puremourning (Ben Jackson) wrote…
I think we can make this 80 characters even if we're STFUing flake8
Done.
ycmd/tests/cs/init.py, line 108 at r1 (raw file):
Previously, puremourning (Ben Jackson) wrote…
why commented out ? if it's not needed, prefer to remove it (perhaps with a comment).
Good question. Probably a debugging remnant. Will look into it.
ycmd/tests/cs/signature_help_test.py, line 30 at r1 (raw file):
Previously, puremourning (Ben Jackson) wrote…
would prefer stylistically to put these on their own line. I really abhor bin-packing, but I particularly don't like mixing styles in one command :)
As said before, laziness. Done.
ycmd/tests/go/diagnostics_test.py, line 28 at r1 (raw file):
Previously, puremourning (Ben Jackson) wrote…
check 80 chars.
Done.
ycmd/tests/rust/testdata/common/Cargo.lock, line 3 at r1 (raw file):
Previously, puremourning (Ben Jackson) wrote…
why was this file added? I have no problem, just curious what is creating it.
I think this is an artifact of rust-analyzer running `cargo check` on our test data. We could put it in `.gitignore` if you'd prefer.
Rust docs say that libraries should not commit `Cargo.lock`, but that executables should. We're neither, so we don't need to listen to others.
Reviewable status: 0 of 2 LGTMs obtained (and 1 stale) (waiting on @puremourning)
ycmd/tests/cs/init.py, line 108 at r1 (raw file):
Previously, bstaletic (Boris Staletic) wrote…
Good question. Probably a debugging remnant. Will look into it.
Turns out it's not my debugging remnant. It was like that before, in `conftest.py`. Probably even before pytest, as I know I didn't write `cs/__init__.py` or `cs/conftest.py`. Well, my name is on them, but I am not the one who originally came up with `WrapOmniSharpServer`.
Considering that things have been working with that thing commented out, I decided to remove the line. Not pushed, because I'm waiting to see if there will be other review comments.
Reviewable status: 0 of 2 LGTMs obtained (and 1 stale) (waiting on @puremourning)
.vimspector.json, line 4 at r2 (raw file):
```json
"$schema": "https://puremourning.github.io/vimspector/schema/vimspector.schema.json",
"configurations": {
  "python - launch pytest": {
```

This should say `unittest` instead of `pytest`.
Reviewed 65 of 65 files at r2.
Reviewable status: 0 of 2 LGTMs obtained (and 1 stale) (waiting on @bstaletic and @puremourning)
.vimspector.json, line 37 at r2 (raw file):
```json
"args": [ "-v", "${Test}"
```

I'm thinking that we will need to change this to be more useful, as my current compiler plugin will set `${Test}` to `<filename>:<test_function>`. In order to do that we'd need the args to be `-k <test_function> <filename>` I think.
I'll sort that out next time I happen to use it. I suspect the answer is to switch to using a splat `*${TestArgs}`, then I'll change my custom stuff to pass the arguments as a string like `-k <test_function> <filename>`. Vimspector will then splice the args array together.
run_tests.py, line 164 at r1 (raw file):
Previously, bstaletic (Boris Staletic) wrote…
Not supported out of the box. We can opt for a different test runner. I tried using pytest as just a runner, but that fails with
self.subTest
context managers.Maybe nose2? Quick StackOverflow lead me to testtools.
Haven't really looked into alternative test runners beyond that.
Not urgent right now. We can get this back if we find a way in future. We didn't have it for years. It's a shame to lose it, but it's not the end of the world. Stable tests are always the priority.
ycmd/tests/shutdown_test.py, line 121 at r1 (raw file):
Previously, bstaletic (Boris Staletic) wrote…
Right, we're completely ignoring the pattern here. Pattern matching isn't just a substring match, so I put it aside. Particularly because
-k
is allowed to match fully qualified module names.https://docs.python.org/3/library/unittest.html#cmdoption-unittest-k
Does this mean it's no longer possible to run a specific test from within this module? That would be kinda sad.
ycmd/tests/test_utils.py, line 412 at r1 (raw file):
Previously, bstaletic (Boris Staletic) wrote…
Isn't the point of expected failures to fail if the assertion passes?
To be honest, I don't remember when this `YCM_TEST_NO_RETRY == XFAIL` thing was added. Do you remember?
I think the idea was, rather than retrying, if the test fails, mark it as expected failure. This just speeds up test runs and allows you to focus on things which are actually failing. However, the current implementation would require the test to fail first time.
If I added the 'XFAIL' value for YCM_TEST_NO_RETRY, I don't remember doing it and I had totally forgotten about it, though I guess I added it for the reason above, speed up full test runs (particularly java) where there are flaky tests that just require re-running a number of times.
Anyway
ycmd/tests/test_utils.py, line 430 at r1 (raw file):
Previously, bstaletic (Boris Staletic) wrote…
Sure, done.
This is good, but we need to remove the block above, which makes this unreachable.
ycmd/tests/test_utils.py, line 411 at r2 (raw file):
```python
opts = { 'reruns': 20, 'reruns_delay': 0.5 }
opts.update( kwargs )
if os.environ.get( 'YCM_TEST_NO_RETRY' ) == 'XFAIL':
```

remove these two lines
ycmd/tests/cs/init.py, line 108 at r1 (raw file):
Previously, bstaletic (Boris Staletic) wrote…
Turns out it's not my debugging remnant. It was like that before, in `conftest.py`. Probably even before pytest, as I know I didn't write `cs/__init__.py` or `cs/conftest.py`. Well, my name is on them, but I am not the one who originally came up with `WrapOmniSharpServer`.
Considering that things have been working with that thing commented out, I decided to remove the line. Not pushed, because I'm waiting to see if there will be other review comments.
SGTM
ycmd/tests/language_server/init.py, line 56 at r2 (raw file):
```python
with IsolatedApp( custom_options ) as app:
  try:
    test( args[ 0 ], app, *args[ 1: ], **kwargs )
```

`test_case_instance`
ycmd/tests/rust/testdata/common/Cargo.lock, line 3 at r1 (raw file):
Previously, bstaletic (Boris Staletic) wrote…
I think this is an artifact of rust-analyzer running `cargo check` on our test data. We could put it in `.gitignore` if you'd prefer.
Rust docs say that libraries should not commit `Cargo.lock`, but that executables should. We're neither, so we don't need to listen to others.
thanks for the explanation, this is fine.
Reviewed 1 of 1 files at r3, all commit messages.
Reviewable status: 0 of 2 LGTMs obtained (and 1 stale) (waiting on @bstaletic)
Reviewable status: 0 of 2 LGTMs obtained (and 1 stale) (waiting on @bstaletic and @puremourning)
.vimspector.json, line 37 at r2 (raw file):
Previously, puremourning (Ben Jackson) wrote…
I'm thinking that we will need to change this to be more useful, as my current compiler plugin will set `${Test}` to `<filename>:<test_function>`. In order to do that we'd need the args to be `-k <test_function> <filename>` I think.
I'll sort that out next time I happen to use it. I suspect the answer is to switch to using a splat `*${TestArgs}`, then I'll change my custom stuff to pass the arguments as a string like `-k <test_function> <filename>`. Vimspector will then splice the args array together.
You could try matching `module.name.ClassTest.test_function`. Do you want me to revert these changes? Or...
run_tests.py, line 164 at r1 (raw file):
Previously, puremourning (Ben Jackson) wrote…
Not urgent right now. We can get this back if we find a way in future. We didn't have it for years. It's a shame to lose it, but it's not the end of the world. Stable tests is always the priority.
I also found `unittest-parallel`, but it wasn't packaged for my distro, so I didn't bother with setting up a virtual env.
ycmd/tests/shutdown_test.py, line 121 at r1 (raw file):
Might be. Let me check. If the answer is yes, I'll simply match member function names. `-k` matching other parts of `module.Class.test` seems to be inconsistent:

> Changed in version 3.5: Discovery no longer checks package names for matching pattern due to the impossibility of package names matching the default pattern.
ycmd/tests/test_utils.py, line 412 at r1 (raw file):
Previously, puremourning (Ben Jackson) wrote…
I think the idea was, rather than retrying, if the test fails, mark it as expected failure. This just speeds up test runs and allows you to focus on things which are actually failing. However, the current implementation would require the test to fail first time.
If I added the 'XFAIL' value for YCM_TEST_NO_RETRY, I don't remember doing it and I had totally forgotten about it, though I guess I added it for the reason above, speed up full test runs (particularly java) where there are flaky tests that just require re-running a number of times.
Anyway
Okay, I see what the problem is with `expectedFailure` here. Also, irrelevant with these two lines removed, in favour of below.
ycmd/tests/test_utils.py, line 430 at r1 (raw file):
Previously, puremourning (Ben Jackson) wrote…
This is good, but we need to remove the block above, which makes this unreachable.
Done that too.
ycmd/tests/test_utils.py, line 411 at r2 (raw file):
Previously, puremourning (Ben Jackson) wrote…
remove these two lines
Done.
ycmd/tests/language_server/init.py, line 56 at r2 (raw file):
Previously, puremourning (Ben Jackson) wrote…
test_case_instance
Good catch! Done.
ycmd/tests/rust/testdata/common/Cargo.lock, line 3 at r1 (raw file):
Previously, puremourning (Ben Jackson) wrote…
thanks for the explanation, this is fine.
Decided to remove it, because some users complain about drive space.
Reviewable status: 0 of 2 LGTMs obtained (and 1 stale) (waiting on @puremourning)
ycmd/tests/shutdown_test.py, line 121 at r1 (raw file):
Previously, bstaletic (Boris Staletic) wrote…
Might be. Let me check. If the answer is yes, I'll simply match member function names.
-k
matching other parts ofmodule.Class.test
seems to be inconsistent.Changed in version 3.5: Discovery no longer checks package names for matching pattern due to the impossibility of package names matching the default pattern.
Okay, somehow the pattern is still being respected:

```
bstaletic@Gallifrey ycmd (git)-[unittest]-% ./run_tests.py --skip-b --no-f ycmd/tests/shutdown_test.py -k Without -v
Running tests on Python 3.9.7
test_FromHandlerWithoutSubserver (ycmd.tests.shutdown_test.ShutdownTest) ... ok
test_FromWatchdogWithoutSubserver (ycmd.tests.shutdown_test.ShutdownTest) ... ok
/usr/lib/python3.9/subprocess.py:1052: ResourceWarning: subprocess 3250 is still running
  _warn("subprocess %s is still running" % self.pid,
ResourceWarning: Enable tracemalloc to get the object allocation traceback
----------------------------------------------------------------------
Ran 2 tests in 4.322s

OK
bstaletic@Gallifrey ycmd (git)-[unittest]-% ./run_tests.py --skip-b --no-f ycmd/tests/shutdown_test.py -k Watch -v
Running tests on Python 3.9.7
test_FromWatchdogWithSubservers (ycmd.tests.shutdown_test.ShutdownTest) ... ok
test_FromWatchdogWithoutSubserver (ycmd.tests.shutdown_test.ShutdownTest) ... ok
----------------------------------------------------------------------
Ran 2 tests in 17.807s

OK
```

I guess nothing to do here.
Force-pushed from c4d3352 to 02a10a8.
Reviewed 4 of 5 files at r4, 4 of 4 files at r5, all commit messages.
Reviewable status: 0 of 2 LGTMs obtained (and 1 stale) (waiting on @bstaletic)
.vimspector.json, line 37 at r2 (raw file):
Previously, bstaletic (Boris Staletic) wrote…
You could try matching
module.name.ClassTest.test_function
. Do you want me to revert these changes? Or...
nah, this is fine; I'll patch it up when I need to.
ycmd/tests/cs/debug_info_test.py, line 131 at r5 (raw file):
```python
@SharedYcmd
def test_DebugInfo_UsesSuperfolderHint( self, app ):
  print( app )
```
debug code ? do we need to keep it?
Things wrong with pytest:

1. It's trying to do too much and ends up getting in the way.
   - Most notably, rerunning tests is pretty broken - see pytest-dev/pytest-rerunfailures#51
2. Everything needs to be a fixture, instead of being compatible with argument-altering decorators.
3. Yet, parametrizing fixtures is an annoying chore.
4. Weird interaction with `@patch` and its arguments - they are in reversed order, which messes up quite a few things.

As a result, we're moving to the standard unittest, which looks similar to nose. Here are the relevant differences:

- Writing tests:
  - Each test needs to be a member function, named `test_whatever()`.
  - Each test class needs to:
    - Inherit from `unittest.TestCase`
    - Be called `ThingTest`
  - Instead of `yield`ing generated tests, `self.subTest()` is called in a loop: https://docs.python.org/3/library/unittest.html#distinguishing-test-iterations-using-subtests
- Running tests:
  - There are far fewer command line options and the biggest problem was `--ignore`.
  - We have used `--ignore` to omit libclang tests from that one CI run. Now we add a `load_tests()` to `ycmd/tests/clang/__init__.py`. Reference: https://docs.python.org/3/library/unittest.html#load-tests-protocol
Reviewable status: 0 of 2 LGTMs obtained (and 1 stale) (waiting on @puremourning)
ycmd/tests/cs/debug_info_test.py, line 131 at r5 (raw file):
Previously, puremourning (Ben Jackson) wrote…
debug code ? do we need to keep it?
Probably a leftover of "am I doing what I think I am doing" from when I was rewriting the decorators.
Removed.
Reviewed 1 of 1 files at r6, all commit messages.
Dismissed @bstaletic from a discussion.
Reviewable status: 1 of 2 LGTMs obtained (waiting on @bstaletic)
Thanks for sending a PR!
Manual merge required - I touched
Finally ready.
Port to unittest
Things wrong with pytest:

1. It's trying to do too much and ends up getting in the way.
   - Most notably, rerunning tests is pretty broken - see "Fixtures marked (scope='session') are run on every re-run" pytest-dev/pytest-rerunfailures#51
2. Everything needs to be a fixture, instead of being compatible with argument-altering decorators.
3. Yet, parametrizing fixtures is an annoying chore.
4. Weird interaction with `@patch` and its arguments - they are in reversed order, which messes up quite a few things.

As a result, we're moving to the standard unittest, which looks similar to nose. Here are the relevant differences:

- Writing tests:
  - Each test needs to be a member function, named `test_whatever()`.
  - Each test class needs to:
    - Inherit from `unittest.TestCase`
    - Be called `ThingTest`
  - Instead of `yield`ing generated tests, `self.subTest()` is called in a loop: https://docs.python.org/3/library/unittest.html#distinguishing-test-iterations-using-subtests
- Running tests:
  - There are far fewer command line options and the biggest problem was `--ignore`.
  - We have used `--ignore` to omit libclang tests from that one CI run. Now we add a `load_tests()` to `ycmd/tests/clang/__init__.py`. Reference: https://docs.python.org/3/library/unittest.html#load-tests-protocol
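The `load_tests()` replacement for `--ignore` can be sketched as a package-level hook. This is an illustrative sketch only: the `YCM_TEST_NO_LIBCLANG` environment knob is hypothetical, not ycmd's actual switch.

```python
# Sketch of a package __init__.py using the load_tests protocol to omit an
# entire subpackage from discovery (e.g. when a CI run has no libclang).
import os
import unittest


def load_tests( loader, standard_tests, pattern ):
  # Returning an empty suite drops every test discovered in this package;
  # otherwise hand discovery's suite back unchanged.
  if os.environ.get( 'YCM_TEST_NO_LIBCLANG' ):
    return unittest.TestSuite()
  return standard_tests
```

Because discovery calls the hook while walking packages, no command line option is needed: the decision lives next to the tests it affects.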
What to review

- Every `__init__.py`, as they are similar enough that every time I revisit them all, there are silly copy-paste bugs.
- `run_tests.py`
- `shutdown_test.py` - it's the one that replaced `@valgrind_skip` with `load_tests()`.
- `test_utils.py`
- `utils_test.py` - it's the one that just dropped `@valgrind_skip`.

In case you are a particularly brave reviewer

That is, if you want to review everything. First of all, make sure to ignore whitespace changes. Most of this is just a few mechanical changes:

- Wrap the tests in a `TestCase` class: `class FooTest( TestCase ):` before the first test.
- Rename `Module_TestDescription_test` to `test_Module_TestDescription`.
- Add a `self` parameter.
- Replace `@patch` arguments with what's sensible, if there are any of those.