
Balancer - main.go and yamls #5643

Merged Apr 12, 2023

Conversation

mwielgus (Contributor)

What type of PR is this?

/kind feature

What this PR does / why we need it:

Adds main.go, Makefile, Dockerfile and setup and example yamls.

Which issue(s) this PR fixes:

None

Special notes for your reviewer:

k8s-ci-robot added labels: kind/feature, cncf-cla: yes, size/XL (Mar 31, 2023)
k8s-ci-robot added labels: area/balancer, approved (Mar 31, 2023)
mwielgus (Contributor, Author)

/assign @pbetkier

balancer/examples/nginx-hpa.yaml — two outdated comment threads, resolved
@@ -0,0 +1,119 @@
#
# Balancer scaling 2 deployments using priority policy and hpa.
Contributor:

The examples at the moment are pretty artificial: 2 identical deployments (just different labels and names). I think users could greatly benefit from more real-life examples like equal ratio between zones or overflow to other machine types. Even if an example would require tuning in a specific cluster deployment (e.g. modify zone names) I think it's still more useful than having users figure it out on their own.

Contributor (Author):

These examples can be deployed anywhere and work out of the box. If someone is using Balancer they are probably advanced enough to figure out node selectors on their own ;).

func init() {
flag.StringVar(&kubeconfig, "kubeconfig", "", "Path to a kubeconfig. Only required if out-of-cluster.")
flag.StringVar(&masterURL, "master", "", "The address of the Kubernetes API server. Overrides any value in kubeconfig. Only required if out-of-cluster.")
flag.IntVar(&balancerReprocessPeriodSec, "reprocess-period-sec", 15, "How often (in seconds) balancers are processed")
Contributor:

Perhaps use DurationVar instead of an int in seconds, and name the flag reconciliation-interval (or sync-period, which I believe is how this option is typically named in k8s)?

balancer/main.go — outdated comment thread, resolved
)

var (
masterURL string
Contributor:

I don't see masterURL being configurable for the other k8s.io/autoscaler components and all of them can be run out-of-cluster nicely. When looking at CA and VPA, I saw that both use BuildConfigFromFlags as well, but set masterURL: "" and don't allow for overriding it in the commandline. Do we have a good reason to deviate from this pattern? If not, I think it makes sense to have people configure the k8s.io/autoscaler components in the same way.

Contributor (Author):

This is quite handy if you try to run balancer locally for development (and communicate with cluster via kubectl proxy).

balancer/Makefile — outdated comment thread, resolved
serviceAccountName: balancer-controller
containers:
- name: controller
image: gcr.io/gke-autoscaling-gcr/balancer:0.1.1
Contributor:

Should we rather point to an official k8s repo here, or is that something we can only do after the code has been merged and released from the k8s repo for the first time?

Contributor (Author):

Let's update it after we do the official oss release.

balancer/deploy/controller.yaml — outdated comment thread, resolved
mwielgus force-pushed the balancer-main branch 2 times, most recently from 43a9af8 to 084b0b3, on April 11, 2023
pbetkier (Contributor) left a comment:

/lgtm

k8s-ci-robot added the lgtm label (Apr 12, 2023)
k8s-ci-robot:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: mwielgus, pbetkier

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

k8s-ci-robot merged commit f173888 into kubernetes:master on Apr 12, 2023