
Proposal: Package release process #86

Closed

ruflin opened this issue Jun 22, 2020 · 21 comments

Comments

@ruflin
Contributor

ruflin commented Jun 22, 2020

This is a proposal on how the deployment of the package registry and package storage should change.

Problem

With the current way the package-storage is deployed, there is no easy way to have different environments for production, QA, or snapshots. If someone builds a package and wants to share it with others for testing, the tester is required to set up the testing environment locally and run it with a local registry.

There is also no process to have certain packages in a QA stage before they are shipped to production.

Goals

The following proposal dives into how the above can be solved. But even though multiple stages of deployment are introduced, it should still be as easy as possible to release a new package. The different stages of deployment can be followed, but this is not a strict requirement.

Note: At the moment we have an additional "experimental" environment which will go away in the near future and is ignored on purpose in this proposal.

Environments

To make testing and snapshot builds possible, three environments are needed:

  • production
  • staging (QA)
  • snapshot

Each of these environments is tied to a specific version of the package-storage and the package-registry and has different rules on how packages are added. These are described in more detail below.

Snapshot environment

The snapshot environment exists to share the latest versions of packages and to test them together with the snapshot builds of the Elastic Stack. New packages are added and updated at any time, in most cases fully automated. On the snapshot branch it is allowed to overwrite / update existing versions.

Packages are added through direct commits to the branch. It is expected that any package updates pushed to the branch have already been pre-checked by CI by the contributor.

The branch used for the snapshot packages is called snapshot in the package-storage repository. The related package-registry is the one in the master branch. Every time a new commit is pushed to the package-storage snapshot branch or the package-registry master branch, a new build is kicked off. The build is expected to pass since prechecks should already have happened; if not, the contributors are pinged and deployment does not happen.

Taking the integrations repository as an example: every time a PR is merged, a script will trigger a commit to the snapshot branch with the updated version. This triggers a build, and a new registry is deployed.
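
As a rough illustration of that flow, here is a minimal sketch of such a script (the script name, directory layout, and package name are assumptions, not the actual integrations tooling):

```python
#!/usr/bin/env python3
"""Hypothetical sketch: publish a built package to the package-storage snapshot branch.

Assumes the built package lives in ./build/packages/<name>/<version> and that a
package-storage checkout sits next to this repo; the real integrations script
may work differently.
"""
import shutil
import subprocess
from pathlib import Path


def publish_to_snapshot(name: str, version: str,
                        storage: Path = Path("../package-storage")) -> None:
    src = Path("build/packages") / name / version
    dst = storage / "packages" / name / version

    subprocess.run(["git", "-C", str(storage), "checkout", "snapshot"], check=True)

    # Overwriting an existing version is allowed on the snapshot branch.
    if dst.exists():
        shutil.rmtree(dst)
    shutil.copytree(src, dst)

    subprocess.run(["git", "-C", str(storage), "add", "packages"], check=True)
    subprocess.run(["git", "-C", str(storage), "commit", "-m",
                    f"Update {name}-{version} (snapshot)"], check=True)
    subprocess.run(["git", "-C", str(storage), "push", "origin", "snapshot"], check=True)


if __name__ == "__main__":
    publish_to_snapshot("nginx", "0.1.0")  # hypothetical package
```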

The snapshot registry is available under epr-snapshot.elastic.co.

Snapshot is NOT a package development environment. All changes and tests should happen outside the snapshot branch and only the final result of changes is pushed.

The snapshot registry is a combination of production + staging + snapshot packages. If the same version exists in both production and snapshot, the production version is taken, as these should not conflict. Having all packages together in snapshot also allows us to do upgrade tests.
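
Conceptually, the snapshot registry resolves each (package, version) pair to exactly one source, with the more stable branch winning; a toy sketch of that precedence rule (all names and data are illustrative):

```python
# Toy sketch of the precedence rule: if the same package version exists in
# several branches, the more stable one wins (production > staging > snapshot).
def merge_package_sets(production: dict, staging: dict, snapshot: dict) -> dict:
    served: dict = {}
    # Apply the least stable set first so more stable entries overwrite duplicates.
    for source in (snapshot, staging, production):
        served.update(source)
    return served


snapshot = {("nginx", "0.2.0"): "snapshot"}
staging = {("nginx", "0.2.0"): "staging", ("mysql", "0.1.0"): "staging"}
production = {("nginx", "0.1.0"): "production"}

# ('nginx', '0.2.0') is served from staging, everything else from its only source.
print(merge_package_sets(production, staging, snapshot))
```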

Staging environment

The staging environment is meant for testing packages which are ready for deployment. In directory terms, moving to staging means moving a directory from snapshot to staging, so the package is removed from snapshot. In most cases, when a package enters staging, it is expected not to change anymore. But if issues are found, changes can be picked again from the snapshot branch. A script is expected to do the work of taking a package from snapshot and pushing it to staging.
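
A minimal sketch of what such a promotion script could do with plain git (the branch names come from this proposal; the paths and function names are assumptions):

```python
#!/usr/bin/env python3
"""Hypothetical sketch of a snapshot -> staging promotion inside a package-storage clone."""
import subprocess


def git(*args: str) -> None:
    subprocess.run(["git", *args], check=True)


def promote(name: str, version: str, src: str = "snapshot", dst: str = "staging") -> None:
    pkg = f"packages/{name}/{version}"

    # Copy the package directory from the source branch into the target branch.
    git("checkout", dst)
    git("checkout", src, "--", pkg)  # stages the copied files
    git("commit", "-m", f"Promote {name}-{version} from {src} to {dst}")

    # Remove it from the source branch, as described above.
    git("checkout", src)
    git("rm", "-r", pkg)
    git("commit", "-m", f"Remove {name}-{version} after promotion to {dst}")

    git("push", "origin", dst, src)


if __name__ == "__main__":
    promote("nginx", "0.2.0")  # hypothetical package
```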

From the deployment perspective it is the same as the snapshot build: it is continuously built.

It links to a specific release version of the registry. If the registry is updated, the registry reference has to be updated manually. This is to ensure all CI builds are consistent.

The staging environment is expected to be used by *-SNAPSHOT builds of the stack.

The URL used is epr-staging.elastic.co.

Production environment

The production branch is used for all released packages. These packages should never be changed / updated after they are released. Contributions to the production branch happen through a PR to make sure CI checks can run in advance. These PRs are normally opened by a script taking a package from the staging branch. As soon as a package is merged into production, the staging version should be removed. For now, merging these PRs needs to happen manually, but it could be automated if CI is green for "trusted" contributors.

The registry is tied to a specific release tag. As soon as a PR is merged into the production branch, it is deployed automatically to epr.elastic.co.
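
Since production additions go through a PR, the promotion script could open that PR via the GitHub REST API; a hedged sketch (the repository default, token handling, and release branch name are assumptions):

```python
#!/usr/bin/env python3
"""Hypothetical sketch: open the staging -> production PR via the GitHub REST API."""
import os

import requests


def open_production_pr(name: str, version: str,
                       repo: str = "elastic/package-storage") -> str:
    resp = requests.post(
        f"https://api.github.com/repos/{repo}/pulls",
        headers={"Authorization": f"token {os.environ['GITHUB_TOKEN']}"},
        json={
            "title": f"Release {name}-{version} to production",
            # Assumes the promotion commit was pushed to this branch beforehand.
            "head": f"release-{name}-{version}",
            "base": "production",
            "body": f"Promotes {name} {version} from staging. CI must be green before merge.",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["html_url"]


if __name__ == "__main__":
    print(open_production_pr("nginx", "0.2.0"))  # hypothetical package
```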

Summary

In summary, the above gives us 3 different environments. Deployment of each environment happens fully automatically as soon as a commit is added to the branch. Only in the case of production does the addition of a package have to go through a PR, to guarantee a CI check beforehand.

Below is a summary table of the 3 environments:

|                   | Snapshot                | Staging                | Production            |
|-------------------|-------------------------|------------------------|-----------------------|
| URL               | epr-snapshot.elastic.co | epr-staging.elastic.co | epr.elastic.co        |
| Add package       | Commit                  | Commit                 | PR                    |
| Version overwrite | yes                     | if needed              | no                    |
| Stack Version     | *-SNAPSHOT              | *-SNAPSHOT             | Released (candidates) |
| Registry Version  | master                  | Stable release         | Stable release        |
| Branch            | snapshot                | staging                | production            |
| Packages          | snapshot+staging+prod   | staging+production     | production            |
| Release           | Automated               | Automated              | Automated             |
| Docker image      | snapshot                | staging                | production            |

How to get to this new deployment

Today all packages are in the master branch. To make the above possible, the integrations repository script has to be adjusted, and an additional script has to be built to move packages between the different stages.

On the Kibana side, a way must be found so that SNAPSHOT builds point to a different version of the registry than development. Or the production one could just be the default, and it would need manual adjustment.

Why Git and branches

Instead of using Git for the above, it would also be possible to just use a directory structure on S3 or similar. But the nice thing about Git is that it shows us exactly what happened, when, and by whom, and in extreme cases allows us to roll back changes if needed. In addition, for the production registry it allows us to use a manual PR review to have CI checks in advance. The part that is a bit more cumbersome is moving packages between branches, but scripts can do this for us.

@ruflin ruflin self-assigned this Jun 22, 2020
@ruflin
Contributor Author

ruflin commented Jun 22, 2020

Overall I would like to move on this proposal very quickly so we can get the changes in place soon.

@kuisathaverat
Contributor

> Snapshot is NOT a package development environment. All changes and tests should happen outside the snapshot branch and only the final result of changes is pushed.

For development, we can use the dev-next environment; there we can configure a package-registry inside the cluster.

@kuisathaverat
Contributor

> Would be great to get your feedback on the above from a "deployment" perspective

For snapshot and staging the deploy process is clear: every good merge in the branch publishes the images and kicks off the rollout in the service to grab the new images. In the snapshot environment, we can make a simple update of the pods; for staging I guess it is valid too, the downtime should be short.
For the production environment, I have a few doubts: When do we trigger the release process? Would we make it manual? Which version do we promote, the latest, or one of the latest 10 (or whatever)? Here the deployment is more critical, so we have to choose something robust like blue-green or canary deployments; we can experiment on staging and decide which one is best for us.

@ruflin
Contributor Author

ruflin commented Jun 23, 2020

  • For development we also have all the tooling in the "integrations" repo. What I was getting at with the above comment is that only things that are already "tested" should be pushed to snapshot, meaning they went through a PR review / CI in another repo. dev-next will be great for having it point to the snapshot registry
  • staging / snapshot environment: Why is there a downtime? Isn't this the same as we have at the moment?
  • prod: I like the blue/green approach; canary is probably more than we need at the moment. For triggering the release process, I don't think it should be manual. It would happen after a PR is merged (basically approving the PR is approving the release). My assumption is that all versions would be deployed serially. So if 3 PRs are merged closely together, each of these will be rolled out fully and the next release is only triggered after the previous one has completed.

If we want to go with the manual release for production for now, we can come back to the Jenkins approval process we discussed earlier.

@kuisathaverat
Contributor

> staging / snapshot environment: Why is there a downtime? Isn't this the same as we have at the moment?

The downtime is because we have to start the new pods, stop the old pods, and change the service to point to the new pods. It is really just a few seconds, but there is an outage.

> It would happen after a PR is merged (basically approving the PR is approving the release).

On the merge, we would run all the tests again. If we use blue-green deployment, we can deploy without disturbing the current green deployment; meanwhile, we can run additional tests over the blue deployment before switching it to green.
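
For illustration, one common way to do this on Kubernetes is to run two Deployments (blue and green) and flip the Service selector once the new color has been validated; a minimal sketch, with names and labels assumed rather than taken from the actual EPR manifests:

```yaml
# Hypothetical sketch: the Service points at the "green" pods; after the new
# "blue" Deployment is rolled out and tested, switching traffic is just a
# change of this selector, with no downtime for clients.
apiVersion: v1
kind: Service
metadata:
  name: package-registry
spec:
  selector:
    app: package-registry
    color: green   # flip to "blue" once the blue deployment is validated
  ports:
    - port: 80
      targetPort: 8080
```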

> My assumption is that all versions would be deployed serially. So if 3 PRs are merged closely together, each of these will be rolled out fully and the next release is only triggered after the previous one has completed.

We can put additional logic in place to allow only a certain number of changes per hour, or other controls, but keeping it simple is always the best way, so I think serial deployment of all changes is fine.

> If we want to go with the manual release for production for now, we can come back to the Jenkins approval process we discussed earlier.

If we have to make the PR in any case, maybe the approval of the PR and the merge is enough approval.

@jsoriano
Member

> The branch used for the snapshot packages is called snapshot in the package-storage repository. The related package-registry is the one in the master branch.

Why use the master branch of the package registry to serve snapshot packages? I think we should decouple development of packages from development of the registry and its storage backends. In my opinion, packages in all stages should be served by released versions of the registry, ideally the same versions. This way we avoid blocking development of packages on issues introduced in the registry, or introducing features in packages that make them unsuitable to be promoted to staging/production.
If packages need a new feature in the registry, this feature should be developed, tested, and released in the registry independently. The release process of the registry should be independent of the release process of packages, and it may at some point include using different versions in the same environments (e.g. for the mentioned blue-green deployments).
The snapshot environment shouldn't be an environment to test the latest registry code if this environment is also used to test and share packages.
If someone wants to test a certain change in a package with a certain change in the registry, I think the place to do it is a local environment.

> Taking the integrations repository as an example: every time a PR is merged, a script will trigger a commit to the snapshot branch with the updated version. This triggers a build, and a new registry is deployed.

We have to keep in mind that (hopefully) at some point there may be other repositories with integrations that will need a way to release to a registry too.

> On the Kibana side, a way must be found so that SNAPSHOT builds point to a different version of the registry than development. Or the production one could just be the default, and it would need manual adjustment.

Have we considered supporting multiple registries? This could be interesting for organizations that develop their own packages; they could use epr.elastic.co and their own registry. It could also be a solution for having different stages: production and staging registries could be included in Kibana, with staging disabled by default.
This has more implications, like the priority of registries when the same package is available in multiple registries, but I think it could be a nice feature.

> the nice thing about Git is that it shows us exactly what happened, when, and by whom

I guess there can be some time between when a version of a package is "released" and when it is actually published and available in each environment. It could still be interesting to register when packages are effectively published.

> and in extreme cases allows us to roll back changes if needed.

I don't think we should support "unreleasing" released packages, even if a git-based approach makes it possible.

@jonathan-buttner
Contributor

@ruflin I think it might be helpful to have a CI script (maybe in Jenkins?) where you could enter or select the package you want to move from snapshot to staging, or staging to production, and then click a button to have it moved to the next environment for you. I think this would effectively open a PR from one branch to the next and handle the cleanup.

> In directory terms, moving to staging means moving a directory from snapshot to staging, so the package is removed from snapshot.

What's the benefit of removing it from snapshot? Reducing the clutter? Once it's deployed to staging it will need to be redeployed to snapshot, right, since snapshot has staging and production packages?

@jsoriano

> We have to keep in mind that (hopefully) at some point there may be other repositories with integrations that will need a way to release to a registry too.

Yeah, currently the security/endpoint team has their own repo: https://github.com/elastic/endpoint-package. We're hoping to leverage this release process as well.

@EricDavisX
Contributor

I'm loving this conversation. There is just so much to digest here...
I think having stable versions of the code and packages to run against would help testing, and allow automated tests to run more smoothly, for sure.

I appreciate @jsoriano's comment about having the actual registry code pulled at a known stable version. I suppose it depends. I would guess that Beats & Kibana CI would generally want to pull known stable versions, while the Registry & Packages CI may more generally want to pull the latest versions in development?

I still think the process discussion around package update communications with customers will be important to flesh out, either here or in another ticket. Though I may still be thinking about this differently than what we're planning to support. One question comes up:

Will an ELK stack admin be able to 'downgrade' to the latest-minus-1 package if needed for some reason, for the System and Endpoint packages? (The others can be uninstalled, but I was thinking those that are built in would maybe need yet more special logic.)

@ruflin
Contributor Author

ruflin commented Jun 24, 2020

@kuisathaverat

  • We had a discussion in the past around the problem of what happens if k8s scales up the containers and newer ones are available: we get a version disconnect. This led us to the internal version hashes etc. Do we still need this?

@jsoriano

  • Decouple registry development from integrations: I'm on board with this; it also leads to reproducible CI. But I assume the snapshot one will most of the time use the latest version.
  • Multiple package development repos: Yes, integrations was just an example here
  • Multiple registries: It came up in other conversations. It is probably something we need to support in the long run on the Kibana side but for now would like to stay away from it to not introduce complexity on the Kibana side because of the package development process.
  • Register when package is released: How would you do that?
  • Roll back: Agree, rollback should never happen in prod.

@jonathan-buttner

  • Fully on board, even to be able to do this in Jenkins. So far I had only thought of a script, but if we can have a UI for it too, even better.
  • Removing it from staging and snapshot is to reduce clutter. But we don't have to.

@EricDavisX

  • Downgrade: Not yet, but we have had discussions that we might support this at some stage.

@jsoriano
Member

@EricDavisX

> I appreciate @jsoriano's comment about having the actual registry code pulled at a known stable version. I suppose it depends. I would guess that Beats & Kibana CI would generally want to pull known stable versions, while the Registry & Packages CI may more generally want to pull the latest versions in development?

Yep, it depends on the use we make of each environment, but if the environments described here are going to be used for the package release process, and as a way to share packages during development and testing, I think they should use controlled versions, ideally stable ones. Registry and Packages CI should probably be separate things, each with their own dependencies.

@ruflin

> Register when package is released: How would you do that?

Not sure how to do it. Not sure if it's even necessary :) I only wanted to remark that with Git alone we may not have full control over when things happen. For example with S3, or a normal filesystem, you can know when certain files were modified, and you can also audit modifications.

@kuisathaverat
Contributor

> @ruflin I think it might be helpful to have a CI script (maybe in Jenkins?) where you could enter or select the package you want to move from snapshot to staging, or staging to production, and then click a button to have it moved to the next environment for you. I think this would effectively open a PR from one branch to the next and handle the cleanup.

That's my initial idea for promoting the binaries, but we have three "release" branches and each one is a different stage, so I guess what we promote are the changes between those branches, and then we release the binary.

> @kuisathaverat
> We had a discussion in the past around the problem of what happens if k8s scales up the containers and newer ones are available: we get a version disconnect. This led us to the internal version hashes etc. Do we still need this?

You always need to differentiate the old pod from the new pod in some way, so you need to tag new versions with a different tag, whatever you want.

The thing here is that we have two branches on package-registry and package-storage: master for development (with releases cut via tags or whatever), and experimental, which is another development branch. Here I am not sure why we need an experimental branch; I guess it is because we want to have packages in two different versions, or packages in some kind of incubator. I see the point of it in package-registry, but I do not understand this branch in package-storage; I guess it is there to add features for the experimental packages without adding them to the master branch until the package is ready. This will generate painful cherry-picks between experimental and master, plus backporting fixes between master and experimental; I see a lot of maintenance work there. Are the changes required for the experimental packages always breaking changes? Can we incorporate those changes into master and use a feature-flag strategy to disable them in the stable releases?

If so, we simplify the release process a lot. We can release a stable version of the package-registry, for example with a git tag; this version is bundled in a binary with the stable version of the package-storage (stable binary) and with the experimental version of the package-storage (experimental binary). The experimental binary is then deployed in the experimental environment for testing and validation. The stable binary is deployed in the staging environment for testing and validation, where we run automated tests, acceptance tests, stress tests, manual tests, and so on; when the version is marked as good enough, we promote the same binary to production with a blue-green deployment (for example), test the blue deployment again, and make the version switch when it is ready.

@ruflin
Contributor Author

ruflin commented Jun 24, 2020

@kuisathaverat Quoting from my initial description:

> Note: At the moment we have an additional "experimental" environment which will go away in the near future and is ignored on purpose in this proposal.

Based on the comment from @jsoriano, I think we can assume that each version of a package-storage branch will be tied to a very specific version of the registry moving forward (not the case today).

@kuisathaverat
Contributor

> Based on the comment from @jsoriano, I think we can assume that each version of a package-storage branch will be tied to a very specific version of the registry moving forward (not the case today).

If this is the case, you only have one version number and two flavors (stable and experimental).

@ruflin
Contributor Author

ruflin commented Jun 25, 2020

I had a follow-up chat with @kuisathaverat to dive a bit more into this. What I plan to do as a next step to move this forward is to create the snapshot branch with a POC. It will contain the existing packages + everything we need for the above. The Docker containers which will be created are:

elastic.co/package-registry/distribution:snapshot
elastic.co/package-registry/distribution:staging
elastic.co/package-registry/distribution:production
elastic.co/package-registry/distribution:{{commit-hash}}
elastic.co/package-registry/distribution:{{specific-version-if-we-need-it}}

We need the commit-hash Docker images to make the deployment reproducible; see the sketch below.
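
For example, a deployment manifest could reference the commit-hash tag instead of a moving one (a sketch; the real manifests may differ):

```yaml
# Hypothetical deployment fragment: pinning the registry container to a
# commit-hash tag means a pod rescheduled by Kubernetes cannot silently pick
# up a newer build, unlike the moving :snapshot / :staging / :production tags.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: package-registry
spec:
  replicas: 2
  selector:
    matchLabels:
      app: package-registry
  template:
    metadata:
      labels:
        app: package-registry
    spec:
      containers:
        - name: package-registry
          image: elastic.co/package-registry/distribution:{{commit-hash}}
```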

Ivan will start to look into the blue-green deployment.

As soon as I have a branch ready, I'll share it here.

@ruflin
Contributor Author

ruflin commented Jun 25, 2020

I branched off master into snapshot and am now adding the logic for the above discussion to it. The PR is here (WIP): #88

@mtojek
Contributor

mtojek commented Jun 25, 2020

A side note: this thread is getting a bit too long; it would be nice to write up a summary once this is decided, so that everyone can review it.

@ruflin
Contributor Author

ruflin commented Jun 30, 2020

Here is a quick summary of the current state and the steps forward. The main thing that changed from the initial proposal is that all branches use a fixed version of the package-registry. We haven't settled yet on the final way the deployment of production will work. Here is the updated table:

|                   | Snapshot                | Staging                | Production            |
|-------------------|-------------------------|------------------------|-----------------------|
| URL               | epr-snapshot.elastic.co | epr-staging.elastic.co | epr.elastic.co        |
| Add package       | Commit                  | Commit                 | PR                    |
| Version overwrite | yes                     | if needed              | no                    |
| Stack Version     | *-SNAPSHOT              | *-SNAPSHOT             | Released (candidates) |
| Registry Version  | fixed version           | Stable release         | Stable release        |
| Branch            | snapshot                | staging                | production            |
| Packages          | snapshot+staging+prod   | staging+production     | production            |
| Release           | Automated               | Automated              | Manual                |
| Docker image      | snapshot                | staging                | production            |

Now let's dig into the phases.

Phase 1 - Basic setup

Phase 1 is about getting all the basics in place to start working on top of it.

  • Create branches in package-storage repository (all 3 branches already exist)
  • Create docker images for each build (already done, images exist)
  • Get the snapshot registry deployed; we already have staging and production running

Phase 2 - Move over development / release

In Phase 2, the development / releases should be switched over to the new branches:

  • Integrations repo pushes to snapshot
  • Release script available to push packages through the branches
  • Update Kibana to point to the correct registries

Phase 3 - Cleanup

Cleanup of the existing branches should happen: master should be removed, replaced with main or similar, and should only contain docs about how the deployment works.

  • Remove the packages currently shipped with the registry Docker container

ruflin added a commit to ruflin/integrations that referenced this issue Jul 1, 2020
As part of elastic/package-storage#86 the way packages are released is changing. The release process will be snapshot -> staging -> production. This PR first switches over to directly opening PRs against production, to move away from master. This allows us to update the deployment of epr.elastic.co to point to the production branch instead, and to start cleaning up / removing the master branch.

The second step will be to adjust the script so that it pushes directly to snapshot; we will then have a release script to promote packages from snapshot to staging to production.
@jfsiii

jfsiii commented Jul 1, 2020

@ruflin @jen-huang In Kibana, we can determine if a cluster is running a snapshot by using the version.build_snapshot boolean from the /api/status response

https://github.com/elastic/kibana/blob/c8c20e4ca8e768bcce2e471a2f80aef03d4a2a62/packages/kbn-dev-utils/src/kbn_client/kbn_client_status.ts#L39

I don't know if it's available when we initially assign the registry URL, but we could work the logic into https://github.com/elastic/kibana/blob/a00051f91471378fb2e1f882eb88b15c2fbb1e97/x-pack/plugins/ingest_manager/server/services/epm/registry/registry_url.ts#L23 without much issue

Is it possible for the various environments to start Kibana with the xpack.ingestManager.epm.registryUrl value set to the desired registry? We only allow changing the registry for Gold+ licenses, so I don't know if it's applicable, but it's worth mentioning.
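
For reference, pointing an environment at a specific registry via that setting would just be a one-line kibana.yml entry (a sketch using the setting named above; the license gating noted above still applies):

```yaml
# kibana.yml: hypothetical example pointing Ingest Manager at the staging registry.
xpack.ingestManager.epm.registryUrl: "https://epr-staging.elastic.co"
```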

ruflin added a commit to elastic/integrations that referenced this issue Jul 1, 2020
@ruflin
Contributor Author

ruflin commented Jul 1, 2020

@jfsiii Would be great to have some logic like the above. We could probably also do without special logic by saying: master always points to the snapshot registry and everything else to production. Then the only time we have to be careful to reset it is when we branch off the new 8.x from master. For testing, users would manually change it to staging / snapshot etc. as needed.

ruflin added a commit to ruflin/package-storage that referenced this issue Jul 2, 2020
As part of elastic#86 the master branch will not be used anymore. By now, all packages have been moved over to the production branch and it is ready to be deployed under `epr.elastic.co`. All future package contributions should go to the production branch, or, as soon as the staging and snapshot deployments plus the promotion script are fully available, follow the new process.

The master branch will need to be updated to remove most code, and the README should be updated to contain details about the branch and its usage instead. I'm removing most packages first to prevent accidental contributions to the master branch while still keeping the tooling working. Further cleanups will follow.
ruflin added a commit to ruflin/package-storage that referenced this issue Jul 2, 2020
ruflin added a commit to ruflin/package-registry that referenced this issue Jul 2, 2020
The Dockerfile for the registry itself should not contain any packages. Instead it should be empty, and there are other distributions with packages; see elastic/package-storage#86. This removes the packages from the default Docker build.
@ruflin
Contributor Author

ruflin commented Jul 2, 2020

Update on the current status of this migration:

  • 3 branches: In the package-storage repo the production, staging and snapshot branches now exist. production contains all the packages from master. No further changes should be made to the master packages; these will be removed. snapshot and staging each contain only 1 package for now. These packages are called snapshot and staging, which helps if you are not sure which registry someone is running: you can search for this package.
  • epr.elastic.co on production: epr.elastic.co is now served from production.
  • epr-staging.elastic.co is now served from the staging branch
  • epr-snapshot.elastic.co is still WIP, but should be available in the next few days.
  • Docker images: All package-registry/distribution:* images are available and updated automatically
  • The package-registry pure Docker image will soon not contain any packages anymore: Remove packages from registry Docker build package-registry#583

The integrations repo was updated to open PRs against production for now, but a release script is still needed to allow the new workflow. An initial idea of how the promote script could look is open here: #110

ruflin added a commit to ruflin/package-registry that referenced this issue Jul 2, 2020
The Dockerfile for the registry itself should not contain any packages. Instead it should be empty, and there are other distributions with packages; see elastic/package-storage#86. This removes the packages from the default Docker build.

A few additional changes in this PR:

* Split up the Dockerfile into two stages to reduce the size of the image. Thanks @jsoriano for the contribution
* Switch over to production packages instead of master
* Select /packages as the default path
ruflin added a commit to elastic/package-registry that referenced this issue Jul 3, 2020
ruflin added a commit to ruflin/package-storage that referenced this issue Jul 3, 2020
As part of elastic#86 the master branch will not be used anymore. By now, all packages have been moved over to the production branch and it is ready to be deployed under `epr.elastic.co`. All future package contributions should go to the production branch, or, as soon as the staging and snapshot deployments plus the promotion script are fully available, follow the new process.

This PR removes all packages and a big chunk of the code. Further cleanup will be needed. The goal of this PR is to make sure this registry is not used anymore moving forward.
ruflin added a commit that referenced this issue Jul 3, 2020
ruflin added a commit to elastic/kibana that referenced this issue Jul 3, 2020
…70687)

With elastic/package-storage#86 we now have 3 registries available: production, staging and snapshot. Our current master snapshot build should point to the snapshot registry. The 7.x and 7.8 branches should both point to the production registry. This means that whoever runs the master snapshot builds always has the most recent packages available.

This also ensures we don't accidentally ship with the production registry. The only time we need to be careful is when we branch off 8.x from master. At that stage, we need to switch the registry in 8.x back to prod.

The registry URL used is https://epr-snapshot.ea-web.elastic.dev. The reason is that the CDN URL is not deployed yet. As soon as the CDN is available, we should switch it over to https://epr-snapshot.elastic.co. The reason I'm already switching over now is to make sure we can use the snapshot branch as soon as possible.
@ruflin
Contributor Author

ruflin commented Jul 3, 2020

Update on the progress:

  • Snapshot registry: This is now available under https://epr-snapshot.ea-web.elastic.dev/. Soon the CDN variant under https://epr-snapshot.elastic.co/ will also be available.
  • The registry Docker image itself now does not contain any packages. All release branches were updated to make use of the new registry and add their own packages.
  • Kibana master was updated to point to the snapshot registry

The main piece missing now is the promotion script. I'm going to close this issue now and follow up with a separate issue on the release script.

@ruflin ruflin closed this as completed Jul 3, 2020
eyalkraft pushed a commit to build-security/integrations that referenced this issue Mar 30, 2022