Support running publishing bot for containerd #320

Closed
5 of 9 tasks
akhilerm opened this issue Mar 1, 2023 · 17 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@akhilerm
Member

akhilerm commented Mar 1, 2023

Currently the publishing-bot is centered on running only against the kubernetes/kubernetes repo. We want to run the publishing bot on the containerd/containerd repo to publish the api/ directory from containerd to a new containerd/api repository.

The following changes must be made so that this can be achieved without breaking the existing publishing-bot workflow for k8s:

@BenTheElder
Member

Personally: I would really suggest considering just making containerd/api a Go package / module instead.

The staging hack is not something Kubernetes can easily reconsider, but it can cause a lot of problems with IDEs (e.g. gopls won't support multiple modules without go.work), and it lets you develop libraries without thinking as much about breaking changes as you would if you consumed them as a library yourself ...
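Since Go 1.18, a go.work file is how gopls and the go tool resolve several modules in a single checkout; a minimal sketch, assuming a kubernetes-style tree with one nested staging module (paths illustrative):

```
go 1.21

use (
	.
	./staging/src/k8s.io/api
)
```

With this file at the repo root, gopls loads both modules in one editor session instead of failing on the nested module.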

@akhilerm
Member Author

@BenTheElder Sorry for the delay in response.

For the containerd repo, it won't cause an issue with IDEs. The contribution workflow and the code organization in containerd/containerd will not change; we will not have a staging-like directory in containerd. Just the /api directory and some dependent directories will be made available as the containerd/api repo. containerd/containerd will keep consuming the APIs from the directory itself; only projects that need just the containerd API will import and use the containerd/api module.

I would really suggest considering just making containerd/api a go package / module instead.

This was tried earlier in containerd (ref: containerd/containerd#5716), but it didn't work out and the module changes had to be reverted (ref: containerd/containerd#6439).

@BenTheElder
Member

just the /api directory and some dependent directories will be made available as containerd/api repo.

I still don't recommend this because it means you're not experiencing how other users will experience using the package, which makes it too easy to ignore breaking changes etc.

This was earlier tried in containerd. Ref: containerd/containerd#5716. But didn't work out and had to revert the module changes. Ref: containerd/containerd#6439

I mean actually moving it out to another repo.

@akhilerm
Member Author

I mean actually moving it out to another repo.

Didn't get this part. Do you mean to completely move the API to another repository and then import containerd/api in containerd/containerd?

@BenTheElder
Member

Yes.

@akhilerm
Member Author

@BenTheElder That will cause issues with tagging and with using containerd/api in containerd/containerd, as it will involve back-and-forth tagging between the repositories to use the latest release version in containerd/containerd.

Hence we are following the publishing-bot approach, as it will handle tagging as well, and the containerd community will not have to change their workflow.

Only consumers of containerd/api will have to import the new module instead of containerd/containerd as a whole.
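To illustrate the consumer-side change, a hedged go.mod sketch (the module path and versions are assumptions based on the proposal above, not published releases):

```
module example.com/consumer

go 1.21

// Before: depending on all of containerd just for the API types.
// require github.com/containerd/containerd v1.7.0

// After (as proposed): only the published API module.
require github.com/containerd/api v0.1.0 // hypothetical path and version
```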

@akhilerm
Member Author

akhilerm commented Jun 6, 2023

@BenTheElder Do you have more thoughts on this?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
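The timeline the bot describes can be sketched as a small function; this is an illustration of the quoted rules only, not the bot's actual code, and it assumes the inactivity clock is never reset:

```go
package main

import "fmt"

// lifecycle reports the state the triage bot's rules above would reach
// after the given number of days with no activity: stale at 90d,
// rotten 30d later, closed 30d after that.
func lifecycle(daysInactive int) string {
	switch {
	case daysInactive >= 150:
		return "closed"
	case daysInactive >= 120:
		return "lifecycle/rotten"
	case daysInactive >= 90:
		return "lifecycle/stale"
	default:
		return "active"
	}
}

func main() {
	fmt.Println(lifecycle(100)) // prints "lifecycle/stale"
}
```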

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 29, 2024
@akhilerm
Member Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 31, 2024
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 30, 2024
@akhilerm
Member Author

akhilerm commented May 1, 2024

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 1, 2024
@k8s-triage-robot

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 30, 2024
@akhilerm
Member Author

/close as containerd has moved to using the containerd/containerd/api module for releasing APIs
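For context on that resolution: a nested Go module is typically consumed by the parent repo via a require plus a local replace, so in-repo development uses the in-tree copy while external consumers use tags. A sketch of the pattern (version illustrative, not containerd's actual go.mod):

```
module github.com/containerd/containerd

go 1.21

require github.com/containerd/containerd/api v1.7.0 // version illustrative

// Develop against the in-tree copy instead of the tagged release.
replace github.com/containerd/containerd/api => ./api
```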

@akhilerm
Member Author

/reopen

@k8s-ci-robot k8s-ci-robot reopened this Jul 30, 2024
@k8s-ci-robot
Contributor

@akhilerm: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@akhilerm
Member Author

/close

@k8s-ci-robot
Contributor

@akhilerm: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

Development

No branches or pull requests

4 participants