
Repository version number? #539

Closed
dkeeney opened this issue Jul 3, 2019 · 27 comments

dkeeney commented Jul 3, 2019

Currently our version is set to 1.0.8.community0 in the file VERSION.
PYPI does not like non-numeric version numbers, so this should be changed ... but what should it be? Do we need to retain an association with the version number in numenta's nupic.core?

Is it time to change it to 1.0.0? Or maybe even 2.0.0?

ctrl-z-9000-times (Collaborator) commented:

I think we should bump it to 2.0.0
Version numbers should always increase or else it will confuse the computers.

I"m assuming that this version number is in the usual format: MAJOR.MINOR.BUGFIX

  • Major changes break backwards compat
  • Minor changes add features


breznak commented Jul 3, 2019

I think we should bump it to 2.0.0

Agree on 2.0. I could bump that, but we don't have releases, so there's no use for it anyway.
#366 #19


dkeeney commented Jul 3, 2019

PYPI needs something like MAJOR.MINOR.BUGFIX
If it really does not matter at this point, let's make it 2.0.0 so we can make PYPI happy (if we ever get that set up).


dkeeney commented Aug 23, 2019

I am going to use this issue to continue the discussion about our Release project, since the PRs go away each time we try something.

I must confess that I am having a problem staying motivated on this project for some reason. I understand so little about how these tools work that reading the docs doesn't make sense to me. Every time I start a search on deployment strategies I end up reading about elaborate processes with dev branches and release branches with pre-releases, and my eyes glaze over ... and I go work on something else. Anyway, I want to keep trying to finish this.

To summarize, the current approach for a Release is to:

  • manually set a tag on the master using git and push it to GitHub.
  • GitHub sends out the tagged event to all three CIs.
  • Each CI starts a build and generates artifacts.
  • When travis finishes its build of the master, it runs this script to collect the artifacts and send them to GitHub and PYPI for the release.

The problem with this is that the script on travis times out before the other CIs finish. The assumption is that we need all of the artifacts present before we can create a release.


dkeeney commented Aug 23, 2019

Today I stumbled on "Github Releases". I am sure I saw it before, but I thought it was a generic term for the process of downloading the releases that were created by a CI. But this is a different approach where we initiate the release from Github rather than by setting a tag.

There is a "Draft a new Release" button in the Github releases page https://github.com/htm-community/htm.core/releases Not sure I totally understand this yet but it appears that if you click that button (and you have write permissions for the repository) it will provide you with a page to create a release.

  • A 'Release' is created in GitHub with no artifacts, and it sets the tag on the main branch.
  • This causes all three CIs to build and generate artifacts.
  • As each CI finishes it uploads its own artifacts to the 'Release' in GitHub. They don't have to be uploaded at the same time.
  • The PYPI packages may also be uploaded one at a time as part of the 'Release Build'.

I am just starting to think about what changes I might need to make to the CI scripts to make this work. @breznak have you had any experience with this?
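
As far as I can tell, the "Draft a new Release" button is just a front end for the GitHub Releases API, so the same thing could be scripted. A rough, untested sketch (the tag name here is made up):

# Create a release (and its tag) on GitHub -- roughly what the
# "Draft a new Release" button does. Needs a token with repo scope.
curl -s -X POST \
  -H "Authorization: token $GITHUB_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"tag_name": "v2.0.0", "name": "v2.0.0", "body": "htm.core release", "draft": false, "prerelease": false}' \
  https://api.github.com/repos/htm-community/htm.core/releases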


breznak commented Aug 23, 2019

To summarize, the current approach for a Release is to:

all points correct

[GH Release] it will provide you with a page to create a release.

yes, it's a "special" term, for releases done on GH.

As each CI finishes it uploads its own artifacts to the 'Release' in GitHub. They don't have to be uploaded at the same time.

Yes to the first two points, but this one, no.

The "manual release" (let's call it that), does the above 2 steps as you say, but then does nothing (the release is empty, resp with sources zip only). There's no logic telling the CIs to upload anything (the CI are 3rd party to GH anyway). What actually happens is the CI will generate artifacts on the new tag, and Travis would time-out before others finish.

In other words, the manual GH release is equivalent to command line:

git tag v6.6.6
git push --tags origin

what changes I might need to make to the CI scripts to make this work

Off the bat, possible workarounds for the current problem with the Travis timeout:

  • make travis run the longest, i.e. have travis run both Debug and then Release mode (currently OSX does that), and speed up the other CIs
  • tags trigger artifact builds, and a release is done only during the scheduled nightly build (iff there is a new tag); see the sketch below
  • some voodoo with the version regexp, i.e. a pre2.0.0 tag builds all 3 artifacts, v2.0.0 does just-deploy and fetches the previously known-version artifacts
  • we move the deployment (the fetch-artifacts script) to another CI with a larger timeout (CircleCI must have a long timeout if it manages to run the ARM build)
  • wait for the new Github Actions CI that will integrate the build service
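
For the nightly-build option, the scheduled job could simply check whether the current commit carries a release tag and only deploy in that case. An untested sketch (the deploy script name is just a placeholder for whatever fetch-artifacts/deploy step we end up with):

# in the scheduled nightly job: deploy only if HEAD has an exact release tag
TAG=$(git describe --tags --exact-match 2>/dev/null || true)
if [ -n "$TAG" ]; then
  echo "Found release tag $TAG, running deployment"
  ./ci/deploy.sh "$TAG" "$GITHUB_TOKEN"   # placeholder for the fetch-artifacts/deploy step
else
  echo "No release tag on HEAD, skipping deployment"
fi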

What do you think, @dkeeney ?


dkeeney commented Aug 23, 2019

I went to "GitHub Releases" and clicked the "Draft a new Release" button, creating release v2.0.8 to see what would happen. I waited a while... then checked each CI.

  • CircleCI showed no new jobs.
  • Travis started a full build on the master. Successful build, package generated locally. The job timed out because nothing came from AppVeyor; it never even got to the point of checking CircleCI.
  • AppVeyor started a full build on master. Successful. Package generated locally.

So, fetch-artifacts.sh is not working for AppVeyor, and CircleCI did not even start. No binary artifacts ended up in GitHub Releases. Nothing ended up in PYPI test (I was not expecting anything there).

Next steps are:

  • Find out what happened to CircleCI; that should have started.
  • Research how to upload artifacts to GitHub Releases from each CI. It does not seem to be automatic, although Travis might have done it if it had not timed out.


dkeeney commented Aug 23, 2019

But giving the script longer to run is not going to help because the API request to download from AppVeyor did not work even though the artifact was generated and waiting.
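
For context, a download through the AppVeyor REST API looks roughly like this, based on my reading of their docs (untested sketch; the account name, project slug, and artifact file name below are just guesses):

# look up the latest AppVeyor build for the project and grab the first job id
JOB_ID=$(curl -s https://ci.appveyor.com/api/projects/htm-community/htm-core \
  | python -c "import sys, json; print(json.load(sys.stdin)['build']['jobs'][0]['jobId'])")

# list that job's artifacts, then download one of them
curl -s https://ci.appveyor.com/api/buildjobs/$JOB_ID/artifacts
curl -sL -o htm_core-windows64.whl.zip \
  "https://ci.appveyor.com/api/buildjobs/$JOB_ID/artifacts/htm_core-windows64.whl.zip"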


breznak commented Aug 23, 2019

because the API request to download from AppVeyor did not work even though the artifact was generated and waiting.

That's a good 1st step. So the AppVeyor API is not working?


dkeeney commented Aug 24, 2019

Find out what happened to CircleCI; that should have started.

For CircleCI, tagged builds are not run by default, so we need a special job to run tagged master builds. It probably does not need the debug build in this case.

Research how to upload artifacts to GitHub Releases from each CI.

Artifacts can be uploaded independently from each CI (at least the GitHub API supports it).

  • For CircleCI, I have to run a go script (or run our own curl command) to do the artifact uploads.
  • For Travis, there is deploy with provider: releases for GitHub and provider: pypi for PYPI deploys.
  • For AppVeyor, there is a deploy: with provider: GitHub, but this creates the entire release in GitHub, so I might have to do my own curl command.

Working on updates to the CI configuration files.
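
For the "run our own curl command" option, uploading an artifact to an existing GitHub Release should look roughly like this (untested; the tag and wheel file names are just placeholders):

# find the release that belongs to the tag and pull out its numeric id
RELEASE_ID=$(curl -s -H "Authorization: token $GITHUB_TOKEN" \
  https://api.github.com/repos/htm-community/htm.core/releases/tags/v2.0.8 \
  | python -c "import sys, json; print(json.load(sys.stdin)['id'])")

# upload one artifact to that release (note the uploads.github.com host)
curl -s -H "Authorization: token $GITHUB_TOKEN" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @dist/htm.core-2.0.8-cp37-cp37m-win_amd64.whl \
  "https://uploads.github.com/repos/htm-community/htm.core/releases/$RELEASE_ID/assets?name=htm.core-2.0.8-cp37-cp37m-win_amd64.whl"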


dkeeney commented Aug 25, 2019

Current status: All three CIs are kicking off builds triggered on a merge of a PR. It does not hurt anything, I guess, but it is probably not necessary. We don't need the artifacts until after the master is tagged, which should start yet another build.

The merge builds for Travis and AppVeyor were successful and did not perform the deployment. However, CircleCI did try to deploy, so I needed to add the filter.


dkeeney commented Aug 26, 2019

Current status: Tagged build on master
Travis:

  • Built correctly
  • no sign of GitHub deploy running at all
  • PYPI deploy ran but failed trying to find setup.py. This error probably aborted the flow.

CircleCI:

  • Did not trigger at all, so no build.

AppVeyor:

  • Built correctly
  • Tried to deploy to GitHub: "Error creating GitHub release: Provider setting not found or it's value is empty." This error probably aborted the flow.
  • No sign it tried to deploy to PYPI.

===============
Next things to try:

  • Study the filters on CircleCI to try to figure out why it did not trigger for the tagged build. I know the default is NOT to trigger, but I thought I had the right settings to tell it to run.
  • On Travis, for PYPI, restore the entire build folder so setup.py can run.
  • On AppVeyor, something is messed up with the GitHub deploy settings.


dkeeney commented Aug 28, 2019

Run number 6.
CircleCI did not trigger at all. Studying the docs some more, I ran across this statement in the section about scheduling jobs:

It can be inefficient and expensive to run a workflow for every commit for every branch. Instead, you can schedule a workflow to run at a certain time for specific branches. This will disable commits from triggering jobs on those branches.

This implies that if there is a schedule on a branch then it cannot be triggered by a commit (and that may include commits due to being tagged). So I want to try removing the nightly build for ARM64 and see if that allows the release build to run. If this works then maybe we can consider putting the ARM64 build on a different branch or something.

But I don't think this is the only problem with my configuration. Re-reading the section on building with tags, it appears that if a job is marked with a filter for a tag, then its 'requires' jobs must also allow the same tags. So if I am reading this right, I cannot use the 'osx-build-release' job for both tagged and un-tagged builds.


breznak commented Aug 29, 2019

@dkeeney the VERSION seems to be broken. What I get running cmake:

Packaging version: v2,0,10.2,0,10.2,0,10 in CPACK
git diff
diff --git a/VERSION b/VERSION
index de87a17ee..544c60f40 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-v2.0.5
\ No newline at end of file
+v2,0,10.2,0,10.2,0,10
\ No newline at end of file


dkeeney commented Aug 29, 2019

yep, I broke it.


dkeeney commented Aug 30, 2019

I built a script to perform the PYPI upload and ran it locally here on my machine. I got this error:

dave@ubuntu1810:~/htm$ ./ci/deploy.sh "2.0.11" $GITHUB_TOKEN
Uploading distributions to https://test.pypi.org/legacy/
Uploading htm.core-2.0.11-cp37-cp37m-linux_x86_64.whl
100%|██████████████████████████████████████| 2.33M/2.33M [00:01<00:00, 1.57MB/s]
NOTE: Try --verbose to see response content.
HTTPError: 400 Client Error: Binary wheel 'htm.core-2.0.11-cp37-cp37m-linux_x86_64.whl' has an unsupported platform tag 'linux_x86_64'. for url: https://test.pypi.org/legacy/

Googling this gives: pypi/legacy#120

Here is a work-around:
https://github.com/pypa/manylinux
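
From what I can tell, the work-around means building the wheel inside one of the pypa manylinux docker images and then fixing the platform tag with auditwheel, roughly like this (untested sketch; the image tag and paths are assumptions, and auditwheel can be pip-installed or run inside the same container):

# build the wheel inside the manylinux container so it links against an old glibc
docker run --rm -v "$(pwd)":/io quay.io/pypa/manylinux2010_x86_64 \
  /opt/python/cp37-cp37m/bin/pip wheel /io -w /io/dist/

# repair the wheel so it gets a manylinux platform tag instead of linux_x86_64
auditwheel repair dist/htm.core-2.0.11-cp37-cp37m-linux_x86_64.whl -w wheelhouse/

# the upload to test PYPI should then be accepted
twine upload --repository-url https://test.pypi.org/legacy/ wheelhouse/*.whl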


dkeeney commented Aug 30, 2019

That may be why the travis-ci builds never deployed to PYPI.


breznak commented Aug 30, 2019

manylinux

Oh, that sucks. If I understand it correctly, for deployment we'd have to build using Docker with manylinux?


dkeeney commented Aug 30, 2019

  • I have a fix for the version number scrambling thing.
  • All three seem to be triggering on tagged master now.
  • The merge-to-master build is still running on travis and circleci. Not needed, but probably ok.
  • The deploy sections of all three fail. For travis it was probably the linux problem on PYPI. The other two do not seem to have a GITHUB_TOKEN in the environment.
  • I made a script that allows me to debug the artifact uploads by running them locally. I will get those working and then I can add the script to all three CIs, instead of their deployment scripts.

@breznak when you return, will you do two things for me that require admin privileges:

  • make sure that the GITHUB_TOKEN is added to the environments on all three CIs.
  • Add this to all three environments: TWINE_PASSWORD="pypi-AgENdGVzdC5weXBpLm9yZwIkOTk0YmZjNGYtZTgxNS00Yjk2LTg5ZTAtODE1MGI4MjZhNGZlAAIleyJwZXJtaXNzaW9ucyI6ICJ1c2VyIiwgInZlcnNpb24iOiAxfQAABiDXJOuxvodsEDoD5dOH-e0td1DdUSwrl2NCl_lP_vy6RA" This is the token from my account for the test version of PYPI. When we have everything working and are ready to go live, we can replace this with a real token created by an admin.
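
For reference, once those variables are in place the upload script only needs the environment; with a PYPI API token the user name is the literal string __token__. A small sketch of how I expect to call it (not tested on the CIs yet):

# twine reads TWINE_USERNAME / TWINE_PASSWORD from the environment;
# for PYPI API tokens the username is literally "__token__"
export TWINE_USERNAME=__token__
export TWINE_PASSWORD="pypi-...the test token above..."
twine upload --repository-url https://test.pypi.org/legacy/ dist/*.whl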


dkeeney commented Aug 30, 2019

I will continue to work on this today, but I will not be available this weekend (my kids are coming to visit).


dkeeney commented Sep 4, 2019

Ok, back to addressing the deployment issues...

With that last push, #652, I added our own script for uploading artifacts using the GitHub and PYPI APIs. However, these will not actually work yet because the login tokens need to be created and set in the environment for each CI, and that needs to be done by someone with admin privs.

Once we have the tokens in place I should be able to get the GitHub Releases working. The PYPI deployment should work for OSX and Windows. But we still have the issue of what to do about Linux machines not being binary compatible unless we build with an environment as old as, or older than, what any user might have.

So, until the Tokens are ready, I will go work on the EncoderRegion plugin.

breznak self-assigned this Sep 4, 2019

breznak commented Sep 4, 2019

I'll do the tokens in a day or 2, sorry for delays.


breznak commented Sep 4, 2019

But we still have the issue of what to do about Linux machines not being binary compatible unless we build with an environment as old as, or older than, what any user might have.

Ok, this sucks. So that's why Numenta has the "manylinux" docker images for deployment. I'd suggest something similar to what we do with the docker ARM build: keep normal (linux) PRs on modern compilers, C++, etc., and on release do an extra docker build with an old compiler.

Q: Is (std)libc the component that has to be the same or older for the PYPI wheels to work for our linux users? So we need a docker image with a reasonably old distro? What if we bundled a static libc, would it then work with any (i.e. without the same-or-older requirement)?


dkeeney commented Sep 4, 2019

My feeling is that we should do a "manylinux" build for whatever we happen to be building on travis, and everyone else for whom that does not work will need to build from sources.

If you really do want to build one that works for everyone, there is a base Docker image that PYPI maintains for that purpose. Don't know if that supports the newer compilers.


dkeeney commented Sep 4, 2019

I was wondering if we should convert to using Docker for all builds.
... of course, this again is another thing I know absolutely nothing about.


breznak commented Sep 4, 2019

there is a base Docker image that PYPI maintains for that purpose. Don't know if that supports the newer compilers.

This could be interesting, iff the image is recent enough for our C++11/17 compilers. I'd like to keep the CI as close as possible to a modern, real platform, and have the docker build only for the releases.
That way we catch compiler errors and other incompatibilities. Our source build is now really lean, so I want to keep it that way.

if we should convert to using Docker for all [CI] builds.

Likely no, for performance reasons.


breznak commented Jun 2, 2020

We now have automated GH & PyPI releases
https://github.com/htm-community/htm.core/releases/tag/v2.1.15
https://test.pypi.org/project/htm.core/

Closing the PyPI-related issues with #819.

breznak closed this as completed Jun 2, 2020