
🚀 Pre-release master -> staging_sevenPeaks2 #5029

Closed
10 of 17 tasks
matusdrobuliak66 opened this issue Nov 14, 2023 · 1 comment
Labels
release Preparation for pre-release/release t:maintenance Some planned maintenance work

matusdrobuliak66 commented Nov 14, 2023

What kind of pre-release?

master branch

Sprint Name

sevenPeaks

Pre-release version

2

Commit SHA

a4d9fec

Did the commit CI succeed?

  • The commit CI succeeded.

Motivation

  • Weekly release

What Changed

Devops check ⚠️ devops

e2e testing check 🧪

No response

Summary 📝

  • make release-staging name=sevenPeaks version=2 git_sha=a4d9fecdbc53dc354a5c44fc4a2cb1d82ceab3a9
    • https://github.com/ITISFoundation/osparc-simcore/releases/new?prerelease=1&target=<commit_sha>&tag=staging_<sprint_name><version>&title=Staging%20<sprint_name><version>
  • Draft pre-release
  • Announce (add redis key maintenance in every concerned deployment)
    {"start": "2023-02-01T12:30:00.000Z", "end": "2023-02-01T13:00:00.000Z", "reason": "Release sevenPeaks2"}
  • Announce release in Mattermost
    :loud_sound:  Maintenance scheduled for **NAMED_DAY DD. MM from START_TIME - END_TIME**.
    =========================================================================
    
    @all Be aware that you will automatically be logged out and your projects stopped and saved during the maintenance time. Affected:
    *   [https://staging.osparc.io](https://staging.osparc.io/)
    *   [https://staging.s4l-lite.io](https://staging.s4l-lite.io/)
    
    and on premises:
    *   [https://osparc-staging.speag.com](https://osparc-staging.speag.com/)
    *   [https://tip-staging.speag.com](https://tip-staging.speag.com/)
    *   [https://s4l-staging.speag.com](https://s4l-staging.speag.com/)
    *   [https://s4l-lite-staging.speag.com](https://s4l-lite-staging.speag.com/)
    
    
    Reason: Scheduled staging-release of STAGING_NAME_AND_VERSION.
    
    Thanks for your understanding and sorry for the inconvenience,
    
    Your friendly oSparc Team
    
    

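The draft-release URL in the Summary checklist is assembled from the sprint parameters. A minimal shell sketch, using the values from this issue (note the `make release-staging` target does this, and more, for you):

```shell
# Build the GitHub pre-release URL from this issue's sprint parameters
# (name=sevenPeaks, version=2). Sketch only, mirroring the URL template
# quoted in the Summary checklist.
name=sevenPeaks
version=2
git_sha=a4d9fecdbc53dc354a5c44fc4a2cb1d82ceab3a9
release_url="https://github.com/ITISFoundation/osparc-simcore/releases/new?prerelease=1&target=${git_sha}&tag=staging_${name}${version}&title=Staging%20${name}${version}"
echo "${release_url}"
```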
Releasing

  • Release (release draft)
  • Check Release CI
  • Check hanging sidecars. Helper command to run in the director-v2 CLI: simcore-service-director-v2 close-and-save-service <uuid>
  • Check deployed
    • aws deploy
    • dalco deploy
  • Delete announcement
  • Check e2e runs
  • Announce
https://github.com/ITISFoundation/osparc-simcore/releases/tag/staging_<sprint_name><version>
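The "check hanging sidecars" step above can be scripted. A hedged sketch, assuming the dynamic-sidecar swarm services are named with a `dy-sidecar_<uuid>` prefix (that prefix is an assumption, not confirmed by this issue):

```shell
# Print the director-v2 CLI command for each (assumed) dy-sidecar service.
# The "dy-sidecar_" service-name prefix is hypothetical; adapt it to your
# deployment.
close_cmd() {
  printf 'simcore-service-director-v2 close-and-save-service %s\n' "${1#dy-sidecar_}"
}

# Against a live swarm you would feed it real service names, e.g.:
#   docker service ls --format '{{.Name}}' | grep '^dy-sidecar_' \
#     | while read -r svc; do close_cmd "$svc"; done
close_cmd "dy-sidecar_hypothetical-uuid"
```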
@matusdrobuliak66 matusdrobuliak66 added t:maintenance Some planned maintenance work release Preparation for pre-release/release labels Nov 14, 2023
@matusdrobuliak66 matusdrobuliak66 added this to the 7peaks milestone Nov 15, 2023
matusdrobuliak66 commented:

Summary:

  • In dalco, the migration service claimed the job was done with the correct alembic version, but when we checked the DB it had not actually been migrated. We needed to restart the migration service.
  • New finding: when you re-pull the image and restart the service, it does not pick up the latest image!
  • Director v0 had a health-check problem, so it kept restarting and could not start (assumption: docker-swarm/docker resource problems; after all other services were deployed, scaling the director to 0 and back up helped).
  • The clusters-keeper environment variable CLUSTERS_KEEPER_EC2_SECRET_ACCESS_KEY was wrongly set up in the docker-compose.yml file, so an empty value was propagated (it was fixed manually in Portainer for now; the next PR will fix it in the code).
  • The clusters keeper could not start because it did not have enough resources (OPS will look into it).
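The image and scaling workarounds above can be summarised as a dry-run shell sketch (the service and image names are assumptions for illustration; `run` only prints each command instead of executing it):

```shell
# Dry-run helper: print the command instead of executing it.
run() { printf '%s\n' "$*"; }

# Force swarm to re-resolve and redeploy the image; a plain service restart
# may keep the previously pulled image (names here are hypothetical).
run docker service update --force --image itisfoundation/migration:staging-latest dalco_migration

# Director v0 recovery: scale to 0 and back up once other services run.
run docker service scale dalco_director=0
run docker service scale dalco_director=1

# Render the resolved compose file before deploying, so an empty
# CLUSTERS_KEEPER_EC2_SECRET_ACCESS_KEY would be caught early:
run docker compose config
```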
