Port GH actions and other fixes to master #3061
Conversation
Fixes MDAnalysis#3036

## Work done in this PR

- Adds a continuous integration workflow based on GitHub Actions.
- Disables the previous continuous integration workflow based on TravisCI.
- Fixes minor VisibleDeprecationWarnings in the hole2 tests.
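For readers unfamiliar with GitHub Actions, the sketch below shows roughly the shape such a workflow takes. It is not the actual gh-ci.yaml added by this PR; the matrix values, action versions, and paths (package/, testsuite/MDAnalysisTests) are assumptions for illustration only.

```yaml
# Hedged sketch of a GitHub Actions CI workflow -- not the PR's actual gh-ci.yaml.
name: GH Actions CI
on:
  push:
    branches:
      - master
  pull_request:
    branches:
      - master

jobs:
  main_tests:
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, macOS-latest]
        python-version: [3.6, 3.7, 3.8]
    steps:
      - uses: actions/checkout@v2
      - name: setup_miniconda
        uses: conda-incubator/setup-miniconda@v2
        with:
          python-version: ${{ matrix.python-version }}
          auto-update-conda: true
      - name: install_package
        shell: bash -l {0}
        run: pip install -e package/
      - name: run_tests
        shell: bash -l {0}
        run: pytest -n 2 --cov=MDAnalysis testsuite/MDAnalysisTests
```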
Fixes MDAnalysis#3046. Deploys docs in gh-ci.yaml script and removes Travis
* TST, CI: add ARM64 Graviton 2 to CI
  * start testing MDAnalysis on ARM64 using the new AWS Graviton 2 architecture available in Travis CI (a hedged sketch of such a job entry follows this commit list)
  * testing on my fork shows only two failures in the MDAnalysis test suite using a fairly minimal set of dependencies; we can probably either fix those or skip them and open an issue
  * the test failures are `test_written_remarks_property` and `TestEncoreClustering.test_clustering_three_ensembles_two_identical`
  * run time for this CI entry is about 23 minutes, which is solid for ARM; we save a lot of time by using experimental upstream wheels in some cases and excluding (source builds of) dependencies where possible until the ARM64 binary wheel ecosystem matures a bit more
* MAINT: ARM64 shims
  * adjust the `CAffinityPropagation()` C-level function to use `double` instead of `float` in some strategic locations, which allows the `test_clustering_three_ensembles_two_identical()` test to pass on the gcc115 ARM64 machine
  * mark `test_written_remarks_property()` as a known failure on ARM64; it fails in Travis CI on that platform, but not on the gcc115 ARM64 machine; that said, this test is already known to be flaky on 32-bit Windows (it may depend on the machine's character settings)
* MAINT: MR 2956 revisions
  * reduce the optimization level of MDAnalysis builds on ARM64 to avoid problems with `ap.c` causing test failures on that architecture
  * revert any source changes to `ap.c`
Fixes MDAnalysis#2993 Redirect URLs in stable/sitemap.xml and dev/sitemap.xml to correct locations
…2953)
* fix NameError when catching NoDataError
* add test + missing imports
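Regarding the ARM64 Graviton 2 commit above: for illustration, a Travis CI job entry targeting that architecture might look roughly like the following. The virt/group keys and the env values are assumptions rather than a copy of the PR's .travis.yml.

```yaml
# Hedged sketch of a Travis CI ARM64 (Graviton 2) job entry -- not the PR's actual config.
jobs:
  include:
    - os: linux
      arch: arm64-graviton2   # assumed; Graviton 2 builds also need virt/group set
      virt: vm
      group: edge
      python: 3.8
      env:
        - CODECOV="true"
        - SETUP_CMD="${PYTEST_FLAGS}"
```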
Hello @lilyminium! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found:
Comment last updated at 2020-12-08 22:23:00 UTC
Force-pushed from 2b65941 to 43e964c.
…DAnalysis#3029) Fixes MDAnalysis#3028
* Improves the performance of the ParmEd converter by using a dictionary lookup for the atomgroup-to-universe index mapping.
Force-pushed from a5c97a8 to 590c180.
Codecov Report
@@            Coverage Diff             @@
##           master    #3061      +/-   ##
==========================================
+ Coverage   91.46%   92.38%   +0.91%
==========================================
  Files         173      181       +8
  Lines       24203    24755     +552
  Branches     3193     3194       +1
==========================================
+ Hits        22138    22870     +732
- Misses       1438     1789     +351
+ Partials      627       96     -531

Continue to review the full report at Codecov.
Force-pushed from 426664a to 3574cf8.
The current thing I'm working on is that Python 2.7 takes forever to install dependencies -- I haven't let it time out yet, but it's been stuck at that step for 1 hr+. Not sure why, because everything installs fine on my own device.
Force-pushed from e2df870 to e79c80c.
MDA_CONDA_MIN_DEPS: "pip pytest mmtf-python biopython networkx cython matplotlib scipy griddataformats hypothesis gsd codecov"
MDA_CONDA_EXTRA_DEPS: "seaborn"
MDA_CONDA_PY3_DEPS: "netcdf4 scikit-learn joblib>=0.12 clustalw=2.1 chemfiles mock"
MDA_CONDA_PY27_DEPS: "clustalw=2.1 netcdf4=1.3.1 six=1.15.0 funcsigs<=1.0.2 mock<=3.0.5"
MDA_CONDA_PY35_DEPS: "netcdf4 scikit-learn joblib>=0.12 clustalw=2.1 mock<=3.0.5"
MDA_PIP_MIN_DEPS: 'coveralls coverage<5 pytest-cov pytest-xdist'
MDA_PIP_EXTRA_DEPS: 'duecredit parmed'
MDA_PIP_PY27_DEPS: "setuptools<45.0.0 chemfiles scikit-learn joblib>=0.12"
MDA_PIP_PY3_DEPS: ""
MDA_PIP_PY35_DEPS: ""
This and adding the 2.7 and 3.5 jobs are the primary changes I made. Py3.5 and 2.7 seemed to require a lot of pinning in GH Actions for conda to resolve dependencies.
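For illustration, an extra matrix entry for the Python 2.7 job could look something like the snippet below. The key names mirror those visible in this diff (numpy, extra_deps, conda-version); the job name and OS are assumptions.

```yaml
# Hedged sketch of an additional matrix entry for a Python 2.7 job.
strategy:
  fail-fast: false
  matrix:
    include:
      - name: python_27           # hypothetical job name
        os: ubuntu-latest
        python-version: 2.7
        numpy: numpy=1.16
        extra_deps: "EXTRA PY27"  # selects MDA_CONDA_PY27_DEPS / MDA_PIP_PY27_DEPS
        conda-version: 4.7.10
        codecov: true
```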
If we have to pin versions, should this be reflected in our setup (or maybe it already is)? We probably want to avoid further matplotlib-like issues in 1.0.1?
So conda never refused to resolve dependencies in the 1-2 hours I would let it run. In addition, on both my Linux and Mac machines it resolved the original unpinned versions easily within a couple of minutes. Moreover, Travis didn't require pinning. I'm more inclined to think this is a GH Actions quirk than anything else. On the other hand, seeing as we know those versions work, I am up for it.
To be honest this sounds more like miniconda trickery than a GH Actions issue :/ I don't know; I guess we can't release a new version that goes beyond tested limits.
codecov: true
numpy: numpy=1.16
extra_deps: "EXTRA PY27"
conda-version: 4.7.10
Pinning conda didn't really help in resolving dependencies, but I found astropy's reason for pinning persuasive (conda is not always so stable from update to update).
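To show how a per-entry pin like this could be consumed, one possibility (an assumption, not necessarily what the workflow actually does) is a setup step that installs the requested conda version into the base environment before building the test environment:

```yaml
# Hedged sketch: honour an optional per-job conda pin from the matrix.
- name: pin_conda
  shell: bash -l {0}
  run: |
    if [[ -n "${{ matrix.conda-version }}" ]]; then
      conda install -n base -y conda=${{ matrix.conda-version }}
    fi
    conda --version
```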
Given that the old CI behaviour was just to pin for everything, wouldn't it be cleaner to enforce this as a blanket thing for the entire matrix?
I think it broke for a 3.7 job?
How odd, is this the same version that the ci helpers were pinning to?
conda_deps="$conda_deps ${!conda_var}"
echo $conda_deps
pip_deps="$pip_deps ${!pip_var}"
done
I changed this part from @IAlibay's original setup because setting up all the environment variables was a little bit of a pain. I am, however, happy to go back to the original construct now that we've figured out which dependencies to list.
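For context, the snippet above is the tail of a loop that folds the per-version variables into two flat dependency strings via bash indirect expansion. A self-contained sketch of that idea (the variable and matrix key names follow the MDA_CONDA_*/MDA_PIP_* patterns shown earlier; the exact step is an assumption) could look like this:

```yaml
# Hedged sketch of assembling conda/pip dependency lists from the MDA_* variables.
- name: assemble_dependencies
  shell: bash
  run: |
    conda_deps="${MDA_CONDA_MIN_DEPS}"
    pip_deps="${MDA_PIP_MIN_DEPS}"
    # matrix.extra_deps holds suffixes such as "EXTRA PY27"
    for dep_type in ${{ matrix.extra_deps }}; do
      conda_var="MDA_CONDA_${dep_type}_DEPS"
      pip_var="MDA_PIP_${dep_type}_DEPS"
      conda_deps="$conda_deps ${!conda_var}"   # bash indirect expansion
      pip_deps="$pip_deps ${!pip_var}"
    done
    echo "conda deps: $conda_deps"
    echo "pip deps: $pip_deps"
```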
I don't particularly think we should push for keeping things the same as they are on develop (they are both completely different kettles of fish). That being said, at least to me, the dependency list is getting quite bloated/hard to follow. As a thought here, is there maybe a way we could improve readability/maintainability by splitting the job up into multiple versions of it? It would mean a lot of duplication, but if it comes with increased readability, I personally think it might be a good tradeoff.
Could do -- we could also have environment.yml files for Python 2.7, 3.5, fully featured, and minimal. That also saves users from having to work this stuff out.
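A hedged sketch of what such a file could look like for the Python 2.7 job, reusing the pins listed earlier in this thread (the file name and channel list are assumptions):

```yaml
# Hypothetical maintainer/conda/environment-py27.yml -- a sketch, not a file in this PR.
name: mdanalysis-py27
channels:
  - conda-forge
dependencies:
  - python=2.7
  - pip
  - pytest
  - netcdf4=1.3.1
  - six=1.15.0
  - funcsigs<=1.0.2
  - mock<=3.0.5
  - clustalw=2.1
  - pip:
    - setuptools<45.0.0
    - chemfiles
    - scikit-learn
    - joblib>=0.12
```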
So the reason I chose to avoid env files in develop is that they cause really big slowdowns for some reason -- conda setup time was going from ~3 mins to ~15 mins. That, and mamba 100% hates pip in environment files. I'd prefer to stay away from them on develop, but if we need them to clean up the matrix here, I don't have any objections (as long as CI actually completes).
Thanks for working on this @lilyminium. I have a few comments (particularly re: aarch64 inclusion), but I don't think any of them are blocking, so I'll set it to approve.
@@ -111,6 +111,31 @@ jobs:
    INSTALL_HOLE="false"
    CODECOV="true" SETUP_CMD="${PYTEST_FLAGS} --cov=MDAnalysis"

- os: linux
Are we really planning to bring aarch64 support to 1.0.x? Given it's still experimental (and I think it requires as-yet unreleased numpy versions), maybe it's best to leave it out for now at least?
Perhaps @tylerjereddy has a better view of whether it fits in a near-future 1.0.1 release?
- name: minimal-ubuntu
  os: ubuntu-latest
  python-version: 3.6
  extra_deps: ""
This has an empty extra_deps, but macOS doesn't? Can we get away with nothing here?
Superseded by #3067.
In aid of #2768.
Possibly this will help fix the last remaining obstacle, #3004?
Added all commits to develop that did not involve a .. versionchanged:: 2.0.0 docstring. Please let me know if any were premature.
PR Checklist