Conditional Coverage Reporting with Platform Spoofing #1262
Conversation
Force-pushed from 8445c11 to ca8d858
@@ -167,12 +165,12 @@ def run_app(
             raise BriefcaseCommandError(f"Unable to start app {app.app_name}.")
         finally:
             # Ensure the App also terminates when exiting
-            if app_pid:
+            if app_pid:  # pragma: no-cover-if-is-py310
                 with suppress(ProcessLookupError):
                     self.tools.os.kill(app_pid, SIGTERM)
I don't know why this is reported uncovered only on 3.10...
Force-pushed from 4bf3196 to 990fb34
Force-pushed from 17fcca1 to edc2704
@freakboy3742, as long as you don't think I went too overboard with the additional `tox` targets...
I think the additional tox targets all make sense; no objections there. My bigger concerns are around the testing and CI configs:
- The approach of mocking the platform for a test, rather than `#pragma` excluding a test. As I've indicated inline, I think it's more likely that we'll mess up a mock than conditional coverage will mess up branch coverage; on that basis, I'd rather see a platform run fewer tests, but get coverage in the aggregate.
- I'm not sure I understand the coverage reporting approach. If each individual platform + Python version that we test has 100% coverage, any "per platform", "per Python version", or "overall" coverage report doesn't really serve a purpose, does it?
There are only two benefits I can see.
Firstly, there's a minor improvement in developer ergonomics - if I add a new branch that isn't Python version or platform specific, and miss coverage, I'll then get 10+ failing coverage reports in CI rather than 1. That's bad... but then, I can run tox on one platform and get the same report.
Secondly, there's a possible edge case where we have an "if False" conditional coverage equivalent. In that case, a unified report might report the dead code. However, I'm not sure how we'd get this in practice, outside of an error in the conditional specifier.
However, both of these cases seem pretty niche, especially given the complications that the CI configuration gains in order to support various merged reports.
Am I missing something here? What's the benefit of this approach over having a simple coverage report at the end of each test run, which we require to be 100%?
tox.ini (Outdated)
print("COVERAGE_FILE", os.environ.get("COVERAGE_FILE", "<not set>")); \ | ||
print("COVERAGE_PLATFORM", os.environ.get("COVERAGE_PLATFORM", "<not set>")); \ | ||
print("COVERAGE_EXCLUDE_PYTHON_VERSION", os.environ.get("COVERAGE_EXCLUDE_PYTHON_VERSION", "<not set>")); \ | ||
print("COVERAGE_EXCLUDE_PLATFORM", os.environ.get("COVERAGE_EXCLUDE_PLATFORM", "<not set>"))' |
I get that these may be useful for debugging purposes, but do they serve a purpose in the long term?
Definitely of debatable usefulness now. Because there really isn't any way to introspect what `coverage-conditional-plugin` is doing, these were extremely helpful in getting everything working. They may prove useful in the future if things go wrong, but we can just add them back if the information would be useful then.
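(For context, a rough sketch of how pragmas like `no-cover-if-is-py310` are typically wired up with coverage-conditional-plugin — setup.cfg-style shown here; the rule names mirror the pragmas used in this PR, but the actual conditions and config location in this branch may differ.)

```ini
[coverage:run]
plugins =
    coverage_conditional_plugin

[coverage:coverage_conditional_plugin]
# When a rule's condition evaluates true, lines marked with the matching
# "# pragma: <name>" are excluded from coverage reporting.
rules =
    "sys_version_info[:2] == (3, 10)": no-cover-if-is-py310
    "sys_version_info >= (3, 10)": no-cover-if-gte-py310
    "sys_platform == 'win32'": no-cover-if-is-windows
```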
    )
    env = dev_command.get_environment(first_app, test_mode=False)
    expected = "default" if platform == "windows" else "missing"
    assert env.get(PYTHONMALLOC, "missing") == expected
I'm not 100% sold on the approach here. There are effectively two options:
- Mark the branch as no-cover, and only test it on Windows
- Monkeypatch the platform and test the branch on every platform.
My inclination is that it's better to do the former, as our mock of a platform is more likely to be in error than coverage's identification of the platform.
Unless I'm not following entirely, I did implement option 2. I switched to option 1, though. I had to use `# pragma: no cover`, since no platform will take both branches.
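To make the trade-off concrete, here is a simplified, generic sketch of the two options (not the actual Briefcase code; `get_environment` here is a stand-in for the real method):

```python
import sys


# Option 1 (what was settled on): the platform-specific branch is excluded from
# coverage and only exercised on the platform where it actually runs.
def get_environment():
    env = {}
    if sys.platform == "win32":  # pragma: no cover
        env["PYTHONMALLOC"] = "default"
    return env


# Option 2 (platform spoofing): patch the platform so every OS takes the branch.
# This is the approach being argued against above.
def test_pythonmalloc_default_on_windows(monkeypatch):
    monkeypatch.setattr(sys, "platform", "win32")
    env = get_environment()
    assert env.get("PYTHONMALLOC") == "default"
```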
def test_verify(create_command, monkeypatch):
    """If you're on macOS, you can verify tools."""
    create_command.tools.host_os = "Darwin"
What coverage is this adding? Is it purely to evaluate "verify macOS works" on Windows/Linux?
I wouldn't consider this one to be adding coverage per se, although it does add coverage on Windows and Linux.
In my mind, this adds "proper" testing and coverage for iOS verification since it explicitly ensures the expected tools are verified. I think that all of the platforms with tools should have this test.
Furthermore, I've added all of these tests in #1093.
I definitely agree that the rest of the test is a much more comprehensive test of the core iOS verification. I guess my question is whether we gain anything by running this test on Windows/Linux. However, it's only verifying that the right workflow is executed (i.e., the right verify methods are invoked), so I guess it doesn't hurt.
…inting coverage env vars in tox
👍🏼
I did try to add
I don't think you're missing anything, necessarily. Basically, if we don't run platform or project coverage with all the conditional coverage rules disabled, we won't have any real verification that everything is covered. I'm not sure how likely it is, or how easily someone could create the situation, where platform + Python version coverage passes but project coverage would not, though; I did create mock-ups where such conditions would arise, but they were highly contrived.

As for the complications to the CI config, I think some of those should have already existed; for instance, checking whether a specific step failed instead of just the job as a whole. I'd say the most unnecessary thing is uploading all of these HTML reports; I'd definitely be fine with just uploading the final project coverage report. After all, if you download the coverage zip file, one could create any one of these HTML reports themselves.

At the end of the day, I think the platform and project coverage reporting is a good thing to do to verify any conditional coverage rules usage. However, I don't think it's absolutely critical.
    assert os.access(
        cmdline_tools_base_path / "latest" / "bin" / "sdkmanager",
        os.X_OK,
    )
This is the kind of test I was referring to in my comments about mocking vs platform coverage. We're explicitly testing the behaviour of Linux on Windows, and we need to insert a platform mock of `os.access` to make that work. I'm not sure that's actually gaining us anything.
ah, yeah; I think this is another one that'll require `# pragma: no cover`. I'll verify to be sure.
I'm going to finish up the docs tomorrow and do one more review, with fresh eyes, of the interplay between CI and `tox`. Also, the cert for https://docs.appimage.org expired... so docs builds are failing.
I'll do a quick pass to see if anything stands out.
🙄
Modulo documentation, this is all looking solid. Pretty sure you've earned your Eagle Scout Merit Badge for Tox wrangling :-)
        # fmt: off
        # Python zip unpacking ignores permission metadata.
        # On non-Windows, we manually fix permissions.
-       if self.tools.host_os != "Windows":
+       if self.tools.host_os != "Windows":  # pragma: no branch no-cover-if-is-windows
            for binpath in (self.cmdline_tools_path / "bin").glob("*"):
                if not self.tools.os.access(binpath, self.tools.os.X_OK):
                    binpath.chmod(0o755)
        # fmt: on
Ugh. All bad options, IMHO. I have a mild "we chose black, so we should live with its sub-optimal choices" preference, but not enough to actually block a merge if you disagree.
Force-pushed from 0723c8d to db1a2ea
- Only produce and upload an HTML report for final project coverage
- Convert `COVERAGE_PLATFORM` env var to individual factors for each platform for `tox -e coverage`
- Use `-ci` factor to drive using the platform coverage files created in CI; also useful if the CI coverage files are downloaded manually
@@ -17,6 +17,9 @@ The recommended way of setting up your development environment for Briefcase is
to use a `virtual environment <https://docs.python.org/3/library/venv.html>`__,
and then install the development version of Briefcase and its dependencies:

Clone Briefcase and create virtual environment
Along with adding documentation for conditional coverage, I also wanted to make these docs easier to skim. So, I added a lot more headers so you can easily scroll down the page and understand what's included...on top of the TOC on the right.
Additionally, I did play around with adding a tl;dr section....but something was telling me you probably wouldn't like it....
Nonetheless, the motivation for such a section stems from the not-insignificant number of contributors who seemingly don't do most of what's on this page...
Honestly - I don't hate this idea. Our getting started guide is long - for good reason - and there's probably a non-trivial number of "I know what I'm doing" contributors who start, get put off by the "I already know what git is" aspect, and ignore some of the details.
I'm not sure it will completely fix the problems we're seeing with people not following the contribution guide; experience has shown me that you can't fix process problems with documentation, because there's literally no way to make people read the documentation... but anything we can do to guide people towards the right process can't hurt.
To that end, we should probably also link to this contribution guide in `CONTRIBUTION.md`. Again, I don't know how much that will change behavior, but it can't hurt to make the path as explicit as possible.
(I'm also happy with these changes being a follow up PR, rather than being part of this one).
@@ -83,8 +104,8 @@ commands =
    all : python -m sphinx {[docs]sphinx_args_extra} -b html . {[docs]build_dir}/html

[testenv:package]
package_env = none
This workaround doesn't seem necessary anymore.
ha, this was definitely a journey. I think I've finally made it to the end, though. Now I can finish the docs.
I've made a couple of cosmetic tweaks, but I've also got some follow-up questions about the usage of the `-coverage` options.
running tox in parallel, by running ``tox p`` (or ``tox run-parallel``). When
you run the test suite in parallel, you'll get less feedback on the progress of
the test suite as it runs, but you'll still get a summary of any problems found
at the end of the test run.
FYI: I've modified this to suggest parallel operation as an option, rather than the default, for two reasons:
- parallel mode seems to have some interesting failure modes if the test suite doesn't pass - I've seen a bunch of weird "can't clean up temp folder" errors, but only when the test fails.
- parallel mode is very quiet - this is nice when you know it's working, but if it's your first time running the test suite, it can be a bit off-putting to not see a pytest suite actually running.
> parallel mode seems to have some interesting failure modes if the test suite doesn't pass - I've seen a bunch of weird "can't clean up temp folder" errors, but only when the test fails.
I saw this too....but only on my macOS VM; wasn't sure if it was virtualization or something else going on. Good to have two datapoints, though.
docs/how-to/contribute-code.rst (Outdated)
@@ -354,18 +458,101 @@ After running the test suite, generate a coverage report by running:

    (venv) C:\...>tox -e coverage

You can run the test suite and coverage together by including the testing
environment to run, e.g. ``py,coverage`` or ``py310,coverage``.
To run the test suite along with this coverage reporting, run:
This seems a little misleading - `tox p` also runs the test suite and gives coverage. The difference is that it doesn't do the towncrier, docs and pre-commit checks. Feels like there's an "only" or "just" missing here.
Oh... this was only to show how you can run tests and get coverage reporting... as opposed to just running `tox -e coverage` to get the report.

But, tbh, the use case of running just `tox -e coverage` is a little niche... I mean, it just creates a coverage report from existing coverage files. The most likely use case for getting coverage is to run the tests over code you just changed and get the coverage report.

So, it might be better to first introduce methods to run tests and get the coverage report... and in a note at the end mention that `tox -e coverage` can reproduce a coverage report.
Coverage report for host platform
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If all supported versions of Python are available to tox, then coverage for the
The distinction between `-e coverage` and `-e coverage-platform` isn't clear to me based on this description - if `coverage` fills in the gaps for missing platforms, what is `-platform` for? Also - there's no mention of Python version-specific variants. Functionally - how do I get 100% coverage for `tox -e py`?
I'm super steeped in this configuration at this point; so, this feedback helps me understand how it's perceived by others. I'll need to take this into consideration to convey what's happening with some updates.

To answer the question for the time being, though: running `tox -e coverage` leaves both "python version exclusions" as well as "platform exclusions" enabled; running `tox -e coverage-platform` disables "python version exclusions" while keeping "platform exclusions" enabled.

Basically, `tox -e coverage` asks if the current platform and Python version is covered; `tox -e coverage-platform` asks if the current platform is covered for all Python versions.
> Basically, `tox -e coverage` asks if the current platform and Python version is covered; `tox -e coverage-platform` asks if the current platform is covered for all Python versions.

So... either I'm misunderstanding your terminology, or... no it doesn't?

`tox -e py && tox -e coverage` runs a single python version on the current platform... and then fails coverage.

`tox -e py38,py39,py310,py311 && tox -e coverage` passes coverage when run on a single platform.

What am I missing here?
The difference is which conditional coverage rules are in effect when the coverage report is produced.

> `tox -e py && tox -e coverage` runs a single python version on the current platform... and then fails coverage.

This runs with both "platform exclusions" and "Python version exclusions" enabled; so, if the host platform and Python version are fully covered, then this succeeds. (However, this has a caveat: `tox -e coverage` will look for the oldest version of Python installed; so, technically, `tox -e py` could run for the version of Python in your virtual environment, but `tox -e coverage` may find an older version of Python that exists on the system. So, the safest way to run `tox -e py,coverage` is with an explicit Python version, i.e. `tox -e py311,coverage311`.)

> `tox -e py38,py39,py310,py311 && tox -e coverage` passes coverage when run on a single platform.

As above, this also runs with "platform exclusions" and "Python version exclusions" enabled. Producing a coverage report for all Python versions only really makes sense for `tox -e coverage-platform`.

The difference between `tox -e coverage` and `tox -e coverage-platform` is whether "python version exclusions" should be disabled... as they are for the `-e coverage-platform` command.

To round this out, running `tox -e coverage-project` disables the "platform exclusions" as well as the "python version exclusions"; this'll only work, though, if you have coverage files produced on Linux, macOS, and Windows.
Does this help? I'm not sure I can tell where we're missing each other...
> Does this help? I'm not sure I can tell where we're missing each other...

Not really... maybe I'm misunderstanding the "double negative" aspect of what "platform exclusions enabled" means. I read it as the `#pragma` lines for no-cover-if-windows (et al) are in effect, so you shouldn't get a coverage miss if "exclusions are enabled".

To take this from another angle: what invocation of coverage would let me run a single test run for Python 3.10 on macOS (i.e., `tox -e py310`), and get a 100% coverage report?

And, the inverse - how would I generate a failing report out of `tox -e py38,py39,py310,py311` on macOS (i.e., I have coverage for all Python versions, but I haven't run on Linux and Windows)?
hmm...I was concerned this would remain as confusing as it was to me when I started this...

First, your understanding of "coverage exclusions" is correct; when an exclusion like `no-cover-if-is-windows` is "enabled", then coverage for that code on Windows is ignored.

Second, though, there are two levels of exclusions:

- Python version exclusions
  - When these are enabled, it doesn't matter which version of Python is used for coverage reporting
  - (however, there is the issue of "phantom branches" when using a different version for reporting coverage versus creating the coverage files)
- Platform exclusions
  - When these are enabled, it doesn't matter which platform is used for coverage reporting

So, from the user perspective trying to create coverage reports:

- Running `tox -e coverage`
  - This has both "python version" and "platform" exclusions enabled
  - So, it doesn't matter which platform or Python version is being used to produce the coverage report
  - (however, again, there's the caveat that `coverage` should be run using the same version of Python used to produce the coverage files, because of how that all works)
- Running `tox -e coverage-platform`
  - This disables "python version" exclusions... so, for instance, the `no-cover-if-gte-py310` exclusion is disabled (i.e. ignored) and coverage is verified for those code blocks
  - However, "platform" exclusions, like `no-cover-if-is-windows`, remain in effect and that coverage is ignored
- Running `tox -e coverage-project`
  - This disables "python version" and "platform" exclusions; so, all of the conditional coverage rules are ignored
So, specifically:

> what invocation of coverage would let me run a single test run for Python 3.10 on macOS (i.e., `tox -e py310`), and get a 100% coverage report?

Running `tox -e coverage310` after `tox -e py310` will produce 100% coverage because, by default, all coverage exclusions are enabled.

> how would I generate a failing report out of `tox -e py38,py39,py310,py311` on macOS (i.e., I have coverage for all Python versions, but I haven't run on Linux and Windows)?

Running `tox -e coverage-project` would disable all exclusions and detect that Linux-specific and Windows-specific code was not covered.
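Pulling the invocations from this thread together (environment names as described above; a summary sketch, not a definitive reference):

```console
# Single Python version on the current platform; all exclusions enabled,
# so this should report 100% on its own.
(venv) $ tox -e py310,coverage310

# Current platform across all installed Python versions; "python version"
# exclusions disabled, "platform" exclusions still enabled.
(venv) $ tox -e py38,py39,py310,py311 && tox -e coverage-platform

# Whole project; all exclusions disabled. Requires coverage data files from
# Linux, macOS, and Windows (e.g. downloaded from CI).
(venv) $ tox -e coverage-project
```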
What about this? And how much of this should be in the docs? Because clearly I've internalized all this, but it is far from obvious...
> hmm...I was concerned this would remain as confusing as it was to me when I started this...
>
> First, your understanding of "coverage exclusions" is correct; when an exclusion like `no-cover-if-is-windows` is "enabled", then coverage for that code on Windows is ignored.

Ok - good to know I haven't missed something fundamental here :-)

> What about this? And how much of this should be in the docs? Because clearly I've internalized all this, but it is far from obvious...
Eureka - I've finally worked out where the gap is!
`tox -e py` runs the test suite on the current default Python 3 (3.11 on my machine). However, `tox -e coverage` runs on the oldest version of Python installed on your machine (3.8 for me).

I was seeing a bunch of coverage failures, and assuming they were all platform/version mismatches, but didn't actually verify that they were caused by Python version/platform branches. It turns out they were all 3.8/3.11 branching discrepancies. Once I worked that out, everything fell into place.

That means on my machine, `tox -e py && tox -e coverage311` and `tox -e py38 && tox -e coverage` both pass clean. `tox -e py311 && tox -e coverage311-platform` fails on the Python version branches.

So - the issue is with explaining the difference in the "default python version" that is used between the `py` and `coverage` targets. The fact that `tox -e py && tox -e coverage` doesn't pass clean (unless you only have a single Python version installed) definitely isn't intuitively obvious (as my ... journey... has revealed :-).
I guess the alternative would be a "test and coverage for a single python version" target, but I'm not sure I see an easy way to add that without another set of complications to the tox config.
So, considering all this and your other comment above... I'm thinking we just remove talking about `tox -e coverage` and instead keep it simple for a beginner and just say "use `tox -e py310,coverage310` to produce a coverage report", and mention that "310" should be swapped out for the version of Python they are using.

> I guess the alternative would be a "test and coverage for a single python version" target, but I'm not sure I see an easy way to add that without another set of complications to the tox config.

It wouldn't be too difficult to add additional labels for each Python version; so `tox -m test39`, `tox -m test310`, etc. to run the tests and produce the coverage report. And then, we could just talk about these Python version-specific labels in the coverage section... as well as producing platform and project coverage with `tox -m test-platform` and `tox -m test-project`, respectively.

Ultimately, there are a lot of options in this version of the `tox.ini`... but we don't have to expose it all to anyone up front. An adventurous dev can poke around in `tox.ini` to discover this.

What do you think?

P.S. A side concern is people not noticing the difference between `tox -e` and `tox -m`... especially since tox will run `tox -e test` and kinda look like it's doing something... but really do nothing at all... I haven't found a way to make tox error for an "unknown/undefined" environment.
> So, considering all this and your other comment above... I'm thinking we just remove talking about `tox -e coverage` and instead keep it simple for a beginner and just say "use `tox -e py310,coverage310` to produce a coverage report" and mention that "310" should be swapped out for the version of Python they are using.
Sounds like a good approach to me.
> I guess the alternative would be a "test and coverage for a single python version" target, but I'm not sure I see an easy way to add that without another set of complications to the tox config.

> It wouldn't be too difficult to add additional labels for each Python version; so `tox -m test39`, `tox -m test310`, etc. to run the tests and produce the coverage report. And then, we could just talk about these Python version-specific labels in the coverage section... as well as producing platform and project coverage with `tox -m test-platform` and `tox -m test-project`, respectively.
This also seems like a good idea; it provides a simple path into the most likely "just run the tests and give me coverage" use case.
> Ultimately, there are a lot of options in this version of the `tox.ini`... but we don't have to expose it all to anyone up front. An adventurous dev can poke around in `tox.ini` to discover this. What do you think?
Definitely agreed. We don't need to document everything in the contributor guide. We just need to document enough to explain the common developer use cases. To my mind, that means:
- Run the tests quickly to prove they all pass
- Run the tests on one platform and confirm I've got coverage
- Run as many versions of the tests as I can on one platform, and give me coverage
- Do as much of a CI-alike run as I can on one platform.
> P.S. A side concern is people not noticing the difference between `tox -e` and `tox -m`... especially since tox will run `tox -e test` and kinda look like it's doing something... but really do nothing at all... I haven't found a way to make tox error for an "unknown/undefined" environment.
Yeah - that's not great - but if the "public testing interface" is all `tox -m` calls, then it's a lot less likely that people will get caught out.
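A rough sketch of what those labels could look like in `tox.ini` (illustrative only; the label and environment names follow the ones discussed above, not necessarily what landed in this PR):

```ini
[tox]
# Hypothetical label definitions; invoked as "tox -m <label>"
# (or "tox p -m <label>" for a parallel run).
labels =
    test310 = py310, coverage310
    test311 = py311, coverage311
    test-platform = py38, py39, py310, py311, coverage-platform
```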
docs/how-to/contribute-code.rst (Outdated)
   C:\...>venv\Scripts\activate
-  (venv) C:\...>python3 -m pip install -Ue .[dev]
+  (venv) C:\...>python -m pip install -Ue .[dev]
`python3` did not resolve to the Python in the virtual environment that was just activated... but `python` does.
- Add tl;dr section to top
- Add tox labels to run tests and coverage for specific versions of Python
- Add a section to create a new git branch for changes
Update coverage docs to say to use … I also added the …
One last pass of minor cosmetic doc tweaks, and I think we're done! Nice work!
Changes

- … the `unit-tests` job … the `coverage` job … `tox` to produce reports from `coverage` files created on other platforms... it does not allow you to create `coverage` files for other platforms.
- `tox` commands:
  - `tox p -m test`: only runs tests for all available platforms
  - `tox p -m test-fast`: only runs tests with multi-process for all available platforms
  - `tox p -m test-platform`: runs tests for all available platforms and produces a coverage report
  - `tox p -m ci`: intended to most closely mimic GitHub CI
- `tox` (or `tox p`) helps cover the most likely default case where the host env only has a single Python in `PATH`
- … `tox -e py310,coverage310`
- … `tox p -qmci`

Notes

- … `pytest` fails... but I don't think that's too bad because the tests run pretty quickly and all jobs should finish or also error quickly.
- … the `coverage` job instead of `unit-tests`.

PR Checklist: