Transition to hatch #752
Conversation
FYI, this draft PR is not forgotten. I just can't wrap my head around the interaction between versioneer and the man page generation / datalad-buildsupport monster. When I remove the versioneer.py version determination, man page generation fails...
I have pushed a seemingly complete setup now. It would make sense to me if you rebase and consolidate the commits in the PR, and we merge after addressing the last two TODO items. Regarding documentation, I think we can take most, if not all, from datalad/datalad-core#10
I solved the manpage problem by disabling manpage generation. This is not nice, but it is also a minor issue. I filed #758 with thoughts on how to deal with this better.
pypa/hatch#1677 is an issue that prevents using `hatch test --cover` on Windows.
This aligns the management setup with the `datasalad` library and the newer `datalad-core` and `dlcmd` projects.
We aim to transition away from setuptools. The manpage generation, however, is implemented as a custom setuptools command. It most likely needs to be rewritten as a hatch custom build hook.
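For orientation, such a hook could look roughly like the following. This is a minimal sketch, assuming hatchling as the build backend and a custom hook registered via `[tool.hatch.build.hooks.custom]` in `pyproject.toml`; the file name `hatch_build.py`, the class name, and the manpage content are illustrative, not the actual implementation.

```py
# hatch_build.py -- sketch of a custom hatchling build hook;
# the manpage rendering below is a placeholder, not the real logic.
from pathlib import Path

from hatchling.builders.hooks.plugin.interface import BuildHookInterface


class ManpageBuildHook(BuildHookInterface):
    """Generate man pages before the build artifact is assembled."""

    def initialize(self, version, build_data):
        outdir = Path(self.root) / "build" / "man"
        outdir.mkdir(parents=True, exist_ok=True)
        # Placeholder content; a real hook would render one page per
        # command, as the setuptools command used to do.
        (outdir / "datalad-next.1").write_text(
            ".TH DATALAD-NEXT 1\n.SH NAME\ndatalad-next\n"
        )
        # Ship the generated pages inside the built artifact.
        build_data.setdefault("force_include", {})[str(outdir)] = "man"
```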
Most of this is taken from recent work on datalad-core.
Some of this functionality still has to be converted to a hatch-based build hook, but this dead code does not need to linger here until then.
This allows for (more) quickly checking a subset of tests in a matrix run:

```
hatch run tests:run datalad_next/config/tests
```
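For reference, a hatch environment script that forwards a test path could be defined roughly like this in `pyproject.toml`. The environment name `tests` matches the command above; the dependency list is an assumption, not the PR's actual configuration.

```toml
# Sketch of a hatch test environment; the exact dependencies are
# assumptions, not this PR's actual setup.
[tool.hatch.envs.tests]
dependencies = [
  "pytest",
]

[tool.hatch.envs.tests.scripts]
# {args} forwards anything given on the command line, so
# `hatch run tests:run datalad_next/config/tests` runs only that subset.
run = "pytest {args}"
```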
Hatch has some problems with UTF-8 characters in `pyproject.toml` on Windows. Refs: pypa/hatch#1677
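Until that is fixed upstream, one possible workaround (an assumption, not verified against the hatch issue) is to avoid literal non-ASCII characters in `pyproject.toml` by using TOML's `\uXXXX` escapes in basic strings:

```toml
# Hypothetical workaround: spell non-ASCII characters as escapes.
# "Micha\u00ebl" is TOML for "Michaël"; the name is made up.
authors = [
  { name = "Micha\u00ebl Example", email = "example@example.com" },
]
```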
Also use the latest Python for Windows and macOS, and the soon-to-be-oldest Python 3.9 on Linux.
Some tests are not robust, and we do not want to chase after them all the time. Ultimately, they should be marked individually:

```py
@pytest.mark.flaky(reruns=5)
def test_example():
    ...
```

Refs: https://pypi.org/project/pytest-rerunfailures/#re-run-individual-failures
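The `flaky` marker comes from the pytest-rerunfailures plugin, so it has to be installed in the test environment. With hatch's built-in `hatch test` environment that would look roughly like this; the exact section is an assumption about this PR's setup:

```toml
# Sketch: make pytest-rerunfailures (which provides
# @pytest.mark.flaky) available in hatch's built-in test environment.
[tool.hatch.envs.hatch-test]
extra-dependencies = [
  "pytest-rerunfailures",
]
```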
The reason for this is unclear and requires a dedicated investigation. However, the general test setup is needed now, and this will have to wait for a moment. Refs: datalad#759
I toyed with hatch following #723. It's pretty neat, and I wanted to leave a work-in-progress draft PR.
A bunch of tests now fail. These failures smell like test assumptions that are no longer valid. The failures are captured in #759.