New Network and Compute services which can be used by top level reconciler #139
Conversation
/assign @tariq1890 @justaugustus @juan-lee
@awesomenix: GitHub didn't allow me to assign the following users: juan-lee. Note that only kubernetes-sigs members and repo collaborators can be assigned and that issues/PRs can only have 10 assignees at the same time.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@awesomenix -- Noticed you pushed some extra commits. As a contributor pro-tip, you can mark a PR as work-in-progress by including "WIP" anywhere in the PR title (I usually do something like …). That said, let me know if this is ready to review and I'll do so when I have a chance. :)
Sorry, I was testing out the internal load balancer, so I had to move some settings to a common place so they can be referred to across multiple files. The PR is ready for review. I apologize for the massive PR; I started off with changes to the base network only, but while integrating, the machine actuator got tangled in.
Thanks for taking time to review @justaugustus
I have uploaded the azure provider to a personal docker repo for testing (if you want to dry run …).
@@ -36,6 +36,9 @@ type AzureClusterProviderSpec struct {
    ResourceGroup string `json:"resourceGroup"`
    Location      string `json:"location"`

    // Generated PublicIPName generated, since it should be unique
    PublicIPName string `json:"publicIPName,omitempty"`
if this is generated I'd put it in status.
True. Currently it uses consistent hashing; if that method changes in the future to something random, then during pivot the target cluster could end up generating a different name.
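For illustration only, here is a minimal sketch of how a deterministic ("consistent") public IP name could be derived from the cluster name so that repeated reconciles, including one run by a pivoted target cluster, compute the same value. The function name and hashing scheme below are assumptions, not the provider's actual code:

package main

import (
    "crypto/sha256"
    "encoding/hex"
    "fmt"
)

// generatePublicIPName derives a stable name from the cluster name, so every
// reconcile computes the same value. Hypothetical sketch; the real provider's
// hashing may differ.
func generatePublicIPName(clusterName string) string {
    sum := sha256.Sum256([]byte(clusterName))
    return clusterName + "-pip-" + hex.EncodeToString(sum[:])[:8]
}

func main() {
    // Same input always yields the same name.
    fmt.Println(generatePublicIPName("my-cluster"))
    fmt.Println(generatePublicIPName("my-cluster"))
}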
This is probably fine to stay in the Spec, as eventually we may want to allow users to supply a PIP that already exists. PIP reconcile should then check for the field and store it in Status.
@@ -48,6 +51,15 @@ type AzureClusterProviderSpec struct {
    // SAKeyPair is the service account key pair.
    SAKeyPair KeyPair `json:"saKeyPair,omitempty"`

    // AdminKubeconfig generated using the certificates part of the spec
Need to find a better place for these. Specs should not contain any secrets. Since these are already in the model, I'm open to addressing this in a separate PR.
Agreed.
Can we add a TODO here to address later?
I think it's more of a design decision on where the secrets should live, and we don't yet know where the secret generation part is headed (will the upstream cluster-api project own it, or will the provider own it?). I'd rather have it logged as an issue or a feature request than a TODO here.
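As a rough illustration of the "specs should not contain secrets" direction (not how this PR works), the generated kubeconfig could be stored in a Kubernetes Secret and only referenced from the provider spec. The Secret name, namespace, and key below are made-up assumptions:

package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/kubernetes/fake"
)

// storeAdminKubeconfig writes the generated kubeconfig into a Secret instead
// of embedding it in the cluster provider spec. Sketch only.
func storeAdminKubeconfig(ctx context.Context, cs kubernetes.Interface, clusterName, kubeconfig string) error {
    secret := &corev1.Secret{
        ObjectMeta: metav1.ObjectMeta{
            Name:      clusterName + "-kubeconfig", // hypothetical naming convention
            Namespace: "default",
        },
        Type: corev1.SecretTypeOpaque,
        Data: map[string][]byte{"value": []byte(kubeconfig)},
    }
    _, err := cs.CoreV1().Secrets(secret.Namespace).Create(ctx, secret, metav1.CreateOptions{})
    return err
}

func main() {
    // Exercise the helper against a fake clientset for demonstration.
    cs := fake.NewSimpleClientset()
    if err := storeAdminKubeconfig(context.TODO(), cs, "my-cluster", "apiVersion: v1\nkind: Config\n"); err != nil {
        panic(err)
    }
}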
//	return errors.Wrapf(err, "failed to retrieve kubeconfig while creating machine %q", machine.Name)
// }

    networkInterfaceSpec := networkinterfaces.Spec{
consider a factory method to return the networkinterfaces.Spec.
e.g.
func makeNICSpec(role NodeRole) *networkinterfaces.Spec
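To make the suggestion concrete, here is a hedged sketch of what such a factory could look like; nicSpec and nodeRole below stand in for the real networkinterfaces.Spec and role types, whose actual fields may differ:

package main

import "fmt"

// nicSpec stands in for networkinterfaces.Spec; the field names here are
// illustrative assumptions, not the repo's actual API.
type nicSpec struct {
    Name         string
    SubnetName   string
    PublicLBName string
}

type nodeRole string

const (
    roleControlPlane nodeRole = "controlplane"
    roleNode         nodeRole = "node"
)

// makeNICSpec centralizes NIC spec construction so the machine actuator's
// call sites stay small and role-specific differences live in one place.
func makeNICSpec(role nodeRole, machineName, subnet string) *nicSpec {
    spec := &nicSpec{
        Name:       machineName + "-nic",
        SubnetName: subnet,
    }
    // In this sketch only control-plane NICs join the API server load balancer.
    if role == roleControlPlane {
        spec.PublicLBName = machineName + "-public-lb"
    }
    return spec
}

func main() {
    fmt.Printf("%+v\n", makeNICSpec(roleControlPlane, "master-0", "controlplane-subnet"))
}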
Next PR please; it's already too much of a change.
Fine with punting that to next PR.
    return publicKeyPins, nil
}

// CreateNewBootstrapToken creates new bootstrap token using in cluster config.
-func CreateNewBootstrapToken() (string, error) {
+func CreateNewBootstrapToken(kubeconfig string, tokenTTL time.Duration) (string, error) {
consider converting parameters into a struct. This makes plumbing easier in the future.
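A minimal sketch of the suggested refactor, with an assumed options struct (field names are illustrative) and a stubbed body so it compiles:

package main

import (
    "errors"
    "fmt"
    "time"
)

// BootstrapTokenOptions groups the inputs so future parameters (e.g. token
// usages or groups) can be added without changing every caller.
type BootstrapTokenOptions struct {
    Kubeconfig string
    TokenTTL   time.Duration
}

// CreateNewBootstrapToken is a stub illustrating the suggested signature;
// the real implementation would create the token against the target cluster.
func CreateNewBootstrapToken(opts BootstrapTokenOptions) (string, error) {
    if opts.TokenTTL <= 0 {
        return "", errors.New("token TTL must be positive")
    }
    return "abcdef.0123456789abcdef", nil
}

func main() {
    token, err := CreateNewBootstrapToken(BootstrapTokenOptions{
        Kubeconfig: "<kubeconfig contents>",
        TokenTTL:   15 * time.Minute,
    })
    if err != nil {
        panic(err)
    }
    fmt.Println(token)
}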
+1 to a struct here.
Keeping it for now, will be addressed when refactored
    return nil
}

// CreateOrUpdate Helper function so this can be unittested.
-func (s *Service) CreateOrUpdate(ctx context.Context) error {
+func (s *Service) CreateOrUpdate(ctx context.Context, spec interface{}) error {
    klog.V(2).Infof("generating certificates")
    clusterName := s.scope.Cluster.Name
    tmpDirName := "/tmp/cluster-api/" + clusterName
Consider extracting this to remove the dependency on the path and on the concrete file system. Relying on io.Reader makes it easy to test without touching the filesystem.
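A sketch of the io.Reader idea, under the assumption that the caller can hand the kubeconfig bytes over as a stream; the function and parameter names below are made up:

package main

import (
    "fmt"
    "io"
    "strings"
)

// loadKubeconfig accepts any io.Reader, so unit tests can pass a
// strings.Reader instead of writing files under /tmp/cluster-api/<cluster>.
func loadKubeconfig(r io.Reader) (string, error) {
    data, err := io.ReadAll(r)
    if err != nil {
        return "", fmt.Errorf("reading kubeconfig: %w", err)
    }
    return string(data), nil
}

func main() {
    cfg, err := loadKubeconfig(strings.NewReader("apiVersion: v1\nkind: Config\n"))
    if err != nil {
        panic(err)
    }
    fmt.Println(cfg)
}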
Agreed; it's a side effect of using kubeadm, which always writes to the filesystem. I'm not sure if there is an internal API to achieve that. Please file an issue and it will be addressed in a future PR.
Let's punt to another PR. @serbrech -- please file an issue for this.
limitations under the License.
*/

package internalloadbalancers
why do we want a separate package for internalloadbalancers?
Each service has a purpose and defines a target behaviour rather than mimicking the Azure service itself. Agreed that the code could be 50% the same, but it's better to keep it in different packages to test the behaviour.
I'd like loadbalancers to be consolidated into a single package. The requesting code path can do something like:
- Specify NetworkSpec
- NetworkSpec will hold something like NetworkType: {public,private} or defaultLoadBalancerType: {public,internal|private}
From there, load balancer methods should take that as a parameter and have:
func ReconcileLoadBalancer(lbType string) error {
    switch lbType {
    case "public":
        // ...
    case "private", "internal":
        // ...
    }
    return nil
}
Let's minimize duplication of code where we can, as the probe, backend pool, frontendIp logic will largely be the same as well.
Branches are the cause of most bugs in software; I would rather separate the implementations into different files and accept any duplication. Looking at the differences between the two (as of now, which can change in the future):
Common:
- Load Balancing rules
- Probes
- Get
- Delete
Differences:
InternalLoadBalancers:
- Static Address
- Uses Master Subnet
PublicLoadBalancers:
- Dynamic
- Public IP Address
- Nat Rules
There is quite a bit of difference, and saving 25-30% of duplicated code is not worth sacrificing code readability.
Maybe in the future it would be nice to separate the configuration into a separate file; I'd rather postpone that until it is absolutely required.
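One way to keep separate internalloadbalancers and publicloadbalancers packages while still sharing the common pieces (rules, probes) would be a small shared helper that both import. Everything below is a hypothetical sketch, not the repo's actual layout or types:

package main

import "fmt"

// probeSpec is a stand-in for the SDK's probe type; both the internal and the
// public load balancer services could build their probes through one helper,
// so only the address and NAT handling differs between the two packages.
type probeSpec struct {
    Name     string
    Port     int32
    Protocol string
}

// defaultAPIServerProbe returns the probe both load balancer flavours share.
func defaultAPIServerProbe(lbName string) probeSpec {
    return probeSpec{
        Name:     lbName + "-apiserver-probe",
        Port:     6443,
        Protocol: "TCP",
    }
}

func main() {
    fmt.Printf("%+v\n", defaultAPIServerProbe("my-cluster-internal-lb"))
}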
@awesomenix -- Done with a first-pass review. This is more style introspection than logic-specific.
I'll give another pass once we address some of my comments.
Adding Nick for review as well.
/assign @soggiest
@awesomenix -- Additional request: Do we know if this results in a "working" (by our current loose definition of working) cluster? If so, can you post explicit instructions about how you're building the cluster in this branch?
…r to pure ARM calls
fix: New Compute and Network micro services converting machine, cluster to pure ARM calls
fix: New Network services which can be used by top level reconciler
Add Virtual Machine and Virtual Machine Extensions as well; unfortunately this is a massive PR
At least create single node clusters
Add Azure Cloud provider config to all masters and client
Use Internal LB IP for nodes to communicate with Control plane
Move all defaults to a separate file under services
Minor fix to remove any accidental tabs present in yaml file used in startup script
Move AdminKubeconfig and DiscoveryHashes back to ClusterConfig; since kubeadm uses on-disk certificates, we only update if the spec certs are empty, causing mismatch
Address review comments: convert spec to a defined interface and use ref rather than value
@justaugustus updated the PR, can you please take another look?
Yes, the instructions are exactly the same as mentioned in getting started. Just need a new release to be cut.
[screenshot]
@awesomenix -- That screenshot is AWESOME! :)
@justaugustus if possible can you please do a final review? It would be nice to get this in before the weekend, so I can work on further important improvements such as tests, availability zones, and machine actuator cleanup.
@awesomenix -- Fabulous job! @awesomenix / @juan-lee / @serbrech / @tariq1890 / @soggiest -- As we start to work more and more of the backlog, I need everyone to explicitly declare the things they'll be working on and create/assign issues, if they don't already exist, to prevent collisions and large PRs. /lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: awesomenix, justaugustus
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing …
…r to pure ARM calls (#139)
fix: New Compute and Network micro services converting machine, cluster to pure ARM calls
fix: New Network services which can be used by top level reconciler
Add Virtual Machine and Virtual Machine Extensions as well; unfortunately this is a massive PR
At least create single node clusters
Add Azure Cloud provider config to all masters and client
Use Internal LB IP for nodes to communicate with Control plane
Move all defaults to a separate file under services
Minor fix to remove any accidental tabs present in yaml file used in startup script
Move AdminKubeconfig and DiscoveryHashes back to ClusterConfig; since kubeadm uses on-disk certificates, we only update if the spec certs are empty, causing mismatch
Address review comments: convert spec to a defined interface and use ref rather than value
What this PR does / why we need it:
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #119, #105, #111
Special notes for your reviewer:
Please confirm that if this PR changes any image versions, then that's the sole change this PR makes.
Release note: