
Replace and remove deprecated linters #1848

Merged: 1 commit merged into kubernetes-sigs:main from the sync-linters branch on Nov 17, 2021

Conversation

mboersma (Contributor)

What type of PR is this?:

/kind cleanup

What this PR does / why we need it:

Removes the deprecated interfacer linter and replaces golint with revive in the golangci-lint configuration. This silences two warnings and moves CAPZ closer to CAPI's linting configuration.

Which issue(s) this PR fixes:

N/A

Special notes for your reviewer:

TODOs:

  • squashed commits
  • includes documentation
  • adds unit tests

Release note:

NONE

@k8s-ci-robot added labels on Nov 10, 2021: release-note-none (Denotes a PR that doesn't merit a release note.), kind/cleanup (Categorizes issue or PR as related to cleaning up code, process, or technical debt.), cncf-cla: yes (Indicates the PR's author has signed the CNCF CLA.)
@k8s-ci-robot added labels on Nov 10, 2021: area/provider/azure (Issues or PRs related to azure provider), sig/cluster-lifecycle (Categorizes an issue or PR as relevant to SIG Cluster Lifecycle.), size/M (Denotes a PR that changes 30-99 lines, ignoring generated files.)
@@ -477,7 +477,7 @@ func MachinePoolToInfrastructureMapFunc(gvk schema.GroupVersionKind, log logr.Lo

 // AzureClusterToAzureMachinePoolsFunc is a handler.MapFunc to be used to enqueue
 // requests for reconciliation of AzureMachinePools.
-func AzureClusterToAzureMachinePoolsFunc(ctx context.Context, kClient client.Client, log logr.Logger) handler.MapFunc {
+func AzureClusterToAzureMachinePoolsFunc(ctx context.Context, cli client.Client, log logr.Logger) handler.MapFunc {
Contributor:

Curious, was this a suggestion from the linter? Minor, but I kind of like the additional information from having the k while reading the code.

mboersma (Contributor Author):

Yes, the revive linter complained about this:

exp/controllers/helpers_test.go:569:5: var-naming: don't use leading k in Go names; var kClient should be client (revive)

I agree that kClient is a more informative name than cli. I didn't use client because that's a package name already imported in those files (although we could import it with a different name).

I'm open to suggestions if there's a better var name to use here (that doesn't start with k apparently).
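
For illustration only (this isn't what the PR does), aliasing the imported controller-runtime client package would free up client as a parameter name; a minimal sketch, with getSomething being a hypothetical helper:

package example

import (
	"context"

	"k8s.io/apimachinery/pkg/types"
	ctrlclient "sigs.k8s.io/controller-runtime/pkg/client" // aliased so "client" is free to use as a variable name
)

// getSomething is a hypothetical helper, not code from this PR. With the import
// aliased, the parameter can be called client without shadowing the package and
// without tripping revive's leading-k check the way kClient does.
func getSomething(ctx context.Context, client ctrlclient.Client, key types.NamespacedName, obj ctrlclient.Object) error {
	return client.Get(ctx, key, obj)
}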

Contributor:

I don't have a great one; apparently it is just variables starting with k with an uppercase letter after: https://github.com/mgechev/revive/blob/76b8c5732985b4332321ab368730fc16810726a1/rule/var-naming.go#L94-L101

So something like k8sClient would work, but I'm not sure how I actually feel about it. cli is probably fine; I was more interested in the leading-k variable thing 😄
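
For reference, a few hypothetical declarations (not from this PR) showing what that check flags, going by the rule linked above:

package example

// hypotheticalNames is not from this PR; it only illustrates the var-naming
// check described above: a name starting with a lowercase k followed by an
// uppercase letter gets flagged.
func hypotheticalNames() {
	var kClient int    // flagged: "don't use leading k in Go names; var kClient should be client"
	var k8sClient int  // not flagged: '8' after the k is not an uppercase letter
	var kubeClient int // not flagged: 'u' after the k is lowercase
	_, _, _ = kClient, k8sClient, kubeClient
}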

Contributor:

Looking at the rest of the codebase (https://github.com/kubernetes-sigs/cluster-api-provider-azure/search?q=client.Client), we're using client, c, k8sClient, and kubeClient in different places. We should probably standardize on one instead of introducing yet another name for it.

Contributor:

Makes sense to standardize; my vote would be for kubeClient or k8sClient.

mboersma (Contributor Author):

I like k8sClient and kubeClient. I'll try to hunt down the references and standardize them.

mboersma (Contributor Author):

I just went with local convention, as I should have done initially. So that's c in helpers.go and fakeClient in helpers_test.go.

We do have a hodgepodge of variable names for this, and so does CAPI. Seems like c is the winner overall. I'm not sure it would be worth standardizing all of them; consistency is nice but maybe not that nice.
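
For what it's worth, a minimal sketch (hypothetical, not this PR's code) of the test-side convention, assuming controller-runtime's fake client builder:

package example_test

import (
	"testing"

	"k8s.io/apimachinery/pkg/runtime"
	clientgoscheme "k8s.io/client-go/kubernetes/scheme"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"
)

// TestFakeClientName is a hypothetical test, not code from this PR; it only
// shows the fakeClient naming convention mentioned above.
func TestFakeClientName(t *testing.T) {
	scheme := runtime.NewScheme()
	if err := clientgoscheme.AddToScheme(scheme); err != nil {
		t.Fatal(err)
	}

	fakeClient := fake.NewClientBuilder().WithScheme(scheme).Build()
	if fakeClient == nil {
		t.Fatal("expected a fake client")
	}
}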

mboersma (Contributor Author):

/retest

mboersma (Contributor Author):

/retest

But that's an interesting and unrelated test failure:

Error: resource name may not be empty
• Failure [1488.583 seconds]
Workload cluster creation
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:43
  Creating a VMSS cluster
  /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:333
    with a single control plane node and an AzureMachinePool with 2 Linux and 2 Windows worker nodes [It]
    /home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:334
    Unexpected error:
    <*errors.errorString | 0xc0014e3e20>: {
        s: "resource name may not be empty",
    }
    resource name may not be empty
occurred
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_machinepool_drain.go:204
Full Stack Trace
sigs.k8s.io/cluster-api-provider-azure/test/e2e.labelNodesWithMachinePoolName(0x2583ae0, 0xc000124018, 0x259e4e0, 0xc0000241c0, 0xc00116e800, 0x1b, 0xc00126d500, 0x2, 0x2)
	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_machinepool_drain.go:204 +0x191
sigs.k8s.io/cluster-api-provider-azure/test/e2e.testMachinePoolCordonAndDrain(0x2583ae0, 0xc000124018, 0x25a20b0, 0xc0004f5bb0, 0x25a2120, 0xc00113c800, 0xc000c39b60, 0x10, 0xc000caffb0, 0x27, ...)
	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_machinepool_drain.go:147 +0x432
sigs.k8s.io/cluster-api-provider-azure/test/e2e.AzureMachinePoolDrainSpec(0x2583ae0, 0xc000124018, 0xc00054acc8)
	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_machinepool_drain.go:89 +0x798
sigs.k8s.io/cluster-api-provider-azure/test/e2e.glob..func1.6.1.4()
	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:394 +0x6c
github.com/onsi/ginkgo/internal/suite.(*Suite).PushContainerNode(0xc00019c9a0, 0x22a5666, 0x1a, 0xc000936ff0, 0x0, 0x2aa3f71, 0x4f, 0x189, 0xc000553800, 0xab7)
	/home/prow/go/pkg/mod/github.com/onsi/[email protected]/internal/suite/suite.go:181 +0x323
github.com/onsi/ginkgo.Context(0x22a5666, 0x1a, 0xc000936ff0, 0xc0004f5b01)
	/home/prow/go/pkg/mod/github.com/onsi/[email protected]/ginkgo_dsl.go:347 +0xaa
sigs.k8s.io/cluster-api-provider-azure/test/e2e.glob..func1.6.1()
	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:393 +0x89e
github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc00039dd40, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/home/prow/go/pkg/mod/github.com/onsi/[email protected]/internal/leafnodes/runner.go:113 +0xa3
github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc00039dd40, 0x3, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/home/prow/go/pkg/mod/github.com/onsi/[email protected]/internal/leafnodes/runner.go:64 +0x15c
github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc00043e460, 0x253f180, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/home/prow/go/pkg/mod/github.com/onsi/[email protected]/internal/leafnodes/it_node.go:26 +0x87
github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc0007004b0, 0x0, 0x253f180, 0xc00016e8c0)
	/home/prow/go/pkg/mod/github.com/onsi/[email protected]/internal/spec/spec.go:215 +0x72f
github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc0007004b0, 0x253f180, 0xc00016e8c0)
	/home/prow/go/pkg/mod/github.com/onsi/[email protected]/internal/spec/spec.go:138 +0xf2
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0006b6420, 0xc0007004b0, 0x1)
	/home/prow/go/pkg/mod/github.com/onsi/[email protected]/internal/specrunner/spec_runner.go:200 +0x111
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0006b6420, 0x1)
	/home/prow/go/pkg/mod/github.com/onsi/[email protected]/internal/specrunner/spec_runner.go:170 +0x147
github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0006b6420, 0xc000124840)
	/home/prow/go/pkg/mod/github.com/onsi/[email protected]/internal/specrunner/spec_runner.go:66 +0x117
github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc00019c9a0, 0x7f96f416bcb8, 0xc000582a80, 0x2286541, 0x8, 0xc0002a3c40, 0x2, 0x2, 0x258ced8, 0xc00016e8c0, ...)
	/home/prow/go/pkg/mod/github.com/onsi/[email protected]/internal/suite/suite.go:79 +0x546
github.com/onsi/ginkgo.runSpecsWithCustomReporters(0x2540fa0, 0xc000582a80, 0x2286541, 0x8, 0xc0002a3c20, 0x2, 0x2, 0x2)
	/home/prow/go/pkg/mod/github.com/onsi/[email protected]/ginkgo_dsl.go:245 +0x218
github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x2540fa0, 0xc000582a80, 0x2286541, 0x8, 0xc000096f30, 0x1, 0x1, 0xc00029f748)
	/home/prow/go/pkg/mod/github.com/onsi/[email protected]/ginkgo_dsl.go:228 +0x136
sigs.k8s.io/cluster-api-provider-azure/test/e2e.TestE2E(0xc000582a80)
	/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/e2e_suite_test.go:257 +0x1da
testing.tRunner(0xc000582a80, 0x23783f0)
	/usr/local/go/src/testing/testing.go:1193 +0xef
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1238 +0x2b3

SSSSSSSSSSSSSSSSSS

Workload cluster creation
With 3 control-plane nodes and 2 Linux and 2 Windows worker nodes
/home/prow/go/src/sigs.k8s.io/cluster-api-provider-azure/test/e2e/azure_test.go:205

cc: @devigned

devigned (Contributor):

> But that's an interesting and unrelated test failure:
>
> Error: resource name may not be empty

That's a new one on me.

@mboersma mentioned this pull request on Nov 12, 2021
CecileRobertMichon (Contributor) left a comment:

/lgtm
/assign @jsturtevant

@k8s-ci-robot added the lgtm label ("Looks good to me", indicates that a PR is ready to be merged.) on Nov 16, 2021
jsturtevant (Contributor):

/approve

CecileRobertMichon (Contributor):

/approve

k8s-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: CecileRobertMichon, jsturtevant

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved label (Indicates a PR has been approved by an approver from all required OWNERS files.) on Nov 17, 2021
@k8s-ci-robot merged commit 8be8e43 into kubernetes-sigs:main on Nov 17, 2021
@k8s-ci-robot added this to the v1.1 milestone on Nov 17, 2021
@mboersma deleted the sync-linters branch on November 17, 2021 at 16:47