From e16e8551ef796d842689f070a461f65b854c9b40 Mon Sep 17 00:00:00 2001 From: Rishabh Patel <66425093+rishabh-11@users.noreply.github.com> Date: Wed, 23 Oct 2024 12:14:18 +0530 Subject: [PATCH] Sync with upstream `v1.29.4` (#327) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit * Update with make generate * Add pdb filtering to remainingPdbTracker * Convert replicated, system, not-safe-to-evict, and local storage pods to drainability rules * Convert scale-down pdb check to drainability rule * Pass DeleteOptions once during default rule creation * Split out custom controller and common checks into separate drainability rules * Filter out disabled drainability rules during creation * Refactor GetPodsForDeletion logic and tests into simulator * Fix custom controller drainability rule and add test coverage * Add unit test for long-terminating pod past grace period * Removed node drainer, kept node termination handler * Add HasNodeGroupStartedScaleUp to cluster state registry. - HasNodeGroupStartedScaleUp checks whether a scale up request exists without checking any upcoming nodes. * Add kwiesmueller to OWNERS jbartosik et al are transitioning off of workload autoscalers (incl vpa and addon-resizer). kwiesmueller is on the new team and has agreed to take on reviewer/approver responsibilities. * Add information about provisioning-class-name annotation. * Remove redundant if branch * Add mechanism to override drainability status * Log drainability override * fix(cluster-autoscaler-chart): if secretKeyRefNameOverride is true, don't create secret Signed-off-by: Jonathan Raymond * fix: correct version bump Signed-off-by: Jonathan Raymond * Initialize default drainability rules * feat: each node pool can now have different init configs * ClusterAPI: Allow overriding the kubernetes.io/arch label set by the scale from zero method via environment variable The architecture label in the build generic labels method of the cluster API (CAPI) provider is now populated using the GetDefaultScaleFromZeroArchitecture().Name() method. The method allows CAPI users deploying the cluster-autoscaler to define the default architecture to be used by the cluster-autoscaler for scale up from zero via the env var CAPI_SCALE_ZERO_DEFAULT_ARCH. Amd64 is kept as a fallback for historical reasons. The introduced changes will not take into account the case of nodes heterogeneous in architecture. The labels generation to infer properties like the cpu architecture from the node groups' features should be considered a CAPI provider specific implementation (a sketch of this env var lookup follows below). * Update image builder to use Go 1.21.3 Some of Cluster Autoscaler code is now using features only available in Go 1.21. * Add node-delete-delay-after-taint to FAQ * Reports node taints. * Add debugging-snapshot-enabled back * Rename comments, logs, structs, and vars from packet to equinix metal * Rename types * fix: provider name to be used in builder to provide backward compatibility Signed-off-by: Ayush Rangwala * Rename comments, logs, structs, and vars from packet to equinix metal * Created a new env var for metal to replace/support packet env vars as usual * Support backward compatibility for PACKET_MANAGER env var Signed-off-by: Ayush Rangwala * fix: refactor cloud provider names Signed-off-by: Ayush Rangwala * Documents startup/status/ignore node taints.
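A minimal sketch of the CAPI scale-from-zero architecture lookup described above, assuming only what the commit message states (the env var name and the amd64 fallback); `defaultScaleFromZeroArch` is a hypothetical stand-in for the provider's `GetDefaultScaleFromZeroArchitecture().Name()` helper, not the real code:

```go
// Hypothetical sketch: resolve the default scale-from-zero architecture
// from CAPI_SCALE_ZERO_DEFAULT_ARCH, falling back to amd64 as before.
package main

import (
	"fmt"
	"os"
)

func defaultScaleFromZeroArch() string {
	if arch := os.Getenv("CAPI_SCALE_ZERO_DEFAULT_ARCH"); arch != "" {
		return arch // e.g. "arm64"
	}
	return "amd64" // historical fallback
}

func main() {
	fmt.Println("kubernetes.io/arch =", defaultScaleFromZeroArch())
}
```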
* Adding price info for c3d (Price for preemptible instances is calculated as: (Spot price / On-demand price) * instance price; a worked example follows below) * Bump CA golang to 1.21.3 * cloudprovider/exoscale: update limits/quotas URL https://portal.exoscale.com/account/limits has been deprecated in favor of https://portal.exoscale.com/organization/quotas. Update README accordingly. * Add the AppVersion to cluster-autoscaler.labels as app.kubernetes.io/version * Bump version in chart.yaml * add note for CRD and RBAC handling for VPA (>=1.0.0) * feat(helm): add support for exoscale provider Signed-off-by: Thomas Stadler * Add TOC link in README for EvictionRequirement example * Fix 'evictionRequirements.resources' to be plural in yaml * Run 'hack/generate-crd-yamls.sh' * Adapt AEP to have 'resources' in plural * Remove deprecated dependency: gogo/protobuf * Fix klog formatting directives in cluster-autoscaler package. * Update kubernetes dependencies to 1.29.0-alpha.3. * Change scheduler framework function names after recent refactor in kubernetes scheduler. * chore(helm): bump version of cluster-autoscaler Signed-off-by: Thomas Stadler * chore(helm): docs, update README template Signed-off-by: Thomas Stadler * Fix capacityType label in AWS ManagedNodeGroup Fixes an issue where the capacityType label inferred from an empty EKS ManagedNodeGroup does not match the same label on the node after it is created and joins the cluster * Cleanup: Remove deprecated github.com/golang/protobuf usage - Regenerate cloudprovider/externalgrpc proto - go mod tidy * Remove maps.Copy usage. * chore: upgrade vpa go and k8s dependencies Signed-off-by: Amir Alavi * ScaleUp is only ever called when there are unscheduled pods * Bump golang from 1.21.2 to 1.21.4 in /vertical-pod-autoscaler/builder Bumps golang from 1.21.2 to 1.21.4. --- updated-dependencies: - dependency-name: golang dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] * Disambiguate the resource usage node removal eligibility messages * Cleanup: Remove separate client for k8s events Remove RateLimiting options - rely on APF for apiserver protection. Details: https://github.com/kubernetes/kubernetes/issues/111880 * Update Chart.yaml * Remove gce-expander-ephemeral-storage-support flag Always enable the feature * Add min/max/current asg size to log * Clarify that log line updates cache, now AWS * Update README.md: Link to Cluster-API Add Link to Cluster API. * azure: add owner-jackfrancis * Update OWNERS - typo * Update README.md * Template the autoDiscovery.clusterName variable in the Helm chart * fix: Add revisionHistoryLimit override to cluster-autoscaler Signed-off-by: Matt Dainty * allow users to avoid aws instance not found spam * fix: alicloud function NodeGroupForNode is incorrect * Update README.md Fix error in text * fix: handle error when listing machines Signed-off-by: Cyrill Troxler * AWS: cache instance requirements * fix: update node annotation used to limit log spam with valid key * Removes unnecessary check * Allow overriding domain suffix in GCE cloud provider.
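As a worked example of the preemptible pricing formula quoted in the c3d entry above, with made-up prices (illustrative numbers only, not real GCE list prices):

```go
// Illustrative only: preemptible price = (spot price / on-demand price) * instance price.
package main

import "fmt"

// preemptiblePrice applies the spot/on-demand discount ratio of a reference
// pair to a given instance's on-demand price.
func preemptiblePrice(spotRef, onDemandRef, instanceOnDemand float64) float64 {
	return (spotRef / onDemandRef) * instanceOnDemand
}

func main() {
	// (0.025 / 0.100) * 1.00 = 0.25 -> a $1.00/h instance costs $0.25/h preemptible.
	fmt.Printf("$%.2f/h\n", preemptiblePrice(0.025, 0.100, 1.00))
}
```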
* chore(deps): update vendored hcloud-go to 2.4.0 Generated by: ``` UPSTREAM_REF=v2.4.0 hack/update-vendor.sh ``` * Add new pod list processors for clearing TPU requests & filtering out expendable pods Treat pods that have not been processed yet as unschedulable * Fix multiple comments and update flags * Add new test for new behaviour and revert changes made to other tests * Allow users to specify which schedulers to ignore * Update flags, Improve tests readability & use Bypass instead of ignore in naming * Update static_autoscaler tests & handle pod list processors errors as warnings * Fix: Include restartable init containers in Pod utilization calc Reuse k/k resourcehelper func * Implement ProvReq service * Set Go versions to the same settings kubernetes/kubernetes uses Looks like specifying the Go patch version in go.mod might've been a mistake: https://github.com/kubernetes/kubernetes/pull/121808. * feat: implement kwok cloudprovider feat: wip implement `CloudProvider` interface boilerplate for `kwok` provider Signed-off-by: vadasambar feat: add builder for `kwok` - add logic to scale up and scale down nodes in `kwok` provider Signed-off-by: vadasambar feat: wip parse node templates from file Signed-off-by: vadasambar docs: add short README Signed-off-by: vadasambar feat: implement remaining things - to get the provider in a somewhat working state Signed-off-by: vadasambar docs: add in-cluster `kwok` as pre-requisite in the README Signed-off-by: vadasambar fix: templates file not correctly marshalling into node list Signed-off-by: vadasambar fix: `invalid leading UTF-8 octet` error during template parsing - remove encoding using `gob` - not required Signed-off-by: vadasambar fix: use lister to get and list - instead of uncached kube client - add lister as a field on the provider and nodegroup struct Signed-off-by: vadasambar fix: `did not find nodegroup annotation` error - CA was thinking the annotation is not present even though it is - fix a bug with parsing annotation Signed-off-by: vadasambar fix: CA not recognizing fake nodegroups - add provider ID to nodes in the format `kwok:` - fix invalid `KwokManagedAnnotation` - sanitize template nodes (remove `resourceVersion` etc.) - not sanitizing the node leads to error during creation of new nodes - abstract code to get NG name into a separate function `getNGNameFromAnnotation` Signed-off-by: vadasambar fix: node not getting deleted Signed-off-by: vadasambar test: add empty test file Signed-off-by: vadasambar chore: add OWNERS file Signed-off-by: vadasambar feat: wip kwok provider config - add samples for static and dynamic template nodes Signed-off-by: vadasambar feat: wip implement pulling node templates from cluster - add status field to kwok provider config - this is to capture how the nodes would be grouped (can be by annotation or label) - use kwok provider config status to get ng name from the node template Signed-off-by: vadasambar fix: syntax error in calling `loadNodeTemplatesFromCluster` Signed-off-by: vadasambar feat: first draft of dynamic node templates - this allows node templates to be pulled from the cluster - instead of having to specify static templates manually Signed-off-by: vadasambar fix: syntax error Signed-off-by: vadasambar refactor: abstract out related code into separate files - use named constants instead of hardcoded values Signed-off-by: vadasambar feat: cleanup kwok nodes when CA is exiting - so that the user doesn't have to clean up the fake nodes themselves Signed-off-by: vadasambar refactor: return `nil` instead of
err for `HasInstance` - because there is no underlying cloud provider (hence no reason to return `cloudprovider.ErrNotImplemented`) Signed-off-by: vadasambar test: start working on tests for kwok provider config Signed-off-by: vadasambar feat: add `gpuLabelKey` under `nodes` field in kwok provider config - fix validation for kwok provider config Signed-off-by: vadasambar docs: add motivation doc - update README with more details Signed-off-by: vadasambar feat: update kwok provider config example to support pulling gpu labels and types from existing providers - still needs to be implemented in the code Signed-off-by: vadasambar feat: wip update kwok provider config to get gpu label and available types Signed-off-by: vadasambar feat: wip read gpu label and available types from specified provider - add available gpu types in kwok provider config status Signed-off-by: vadasambar feat: add validation for gpu fields in kwok provider config - load gpu related fields in kwok provider config status Signed-off-by: vadasambar feat: implement `GetAvailableGPUTypes` Signed-off-by: vadasambar feat: add support to install and uninstall kwok - add option to disable installation - add option to manually specify kwok release tag - add future scope in readme Signed-off-by: vadasambar docs: add future scope 'evaluate adding support to check if kwok controller already exists' Signed-off-by: vadasambar fix: vendor conflict and cyclic import - remove support to get gpu config from the specified provider (can't be used because it leads to cyclic import) Signed-off-by: vadasambar docs: add a TODO 'get gpu config from other providers' Signed-off-by: vadasambar refactor: rename `file` -> `configmap` - load config and templates from configmap instead of file - move `nodes` and `nodegroups` config to top level - add helper to encode configmap data into `[]byte` - add helper to get current pod namespace Signed-off-by: vadasambar feat: add new options to the kwok provider config - auto install kwok only if the version is >= v0.4.0 - add test for `GPULabel()` - use `kubectl apply` way of installing kwok instead of kustomize - add test for kwok helpers - add test for kwok config - inject service account name in CA deployment - add example configmap for node templates and kwok provider config in CA helm chart - add permission to create `clusterrolebinding` (so that kwok provider can create a clusterrolebinding with `cluster-admin` role and create/delete upstream manifests) - update kwok provider sample configs - update `README` Signed-off-by: vadasambar chore: update go.mod to use v1.28 packages Signed-off-by: vadasambar chore: `go mod tidy` and `go mod vendor` (again) Signed-off-by: vadasambar refactor: kwok installation code - add functions to create and delete clusterrolebinding to create kwok resources - refactor kwok install and uninstall fns - delete manifests in the opposite order of install - add cleaning up left-over kwok installation to future scope Signed-off-by: vadasambar fix: nil ptr error - add `TODO` in README for adding docs around kwok config fields Signed-off-by: vadasambar refactor: remove code to automatically install and uninstall `kwok` - installing/uninstalling requires strong permissions to be granted to `kwok` - granting strong permissions to `kwok` means granting strong permissions to the entire CA codebase - this can pose a security risk - I have removed the code related to install and uninstall for now - will proceed after discussion with the community Signed-off-by: vadasambar chore: run `go
mod tidy` and `go mod vendor` Signed-off-by: vadasambar fix: add permission to create nodes - to fix permissions error for kwok provider Signed-off-by: vadasambar test: add more unit tests - add tests for kwok helpers - fix and update kwok config tests - fix a bug where gpu label was getting assigned to `kwokConfig.status.key` - expose `loadConfigFile` -> `LoadConfigFile` - throw error if templates configmap does not have `templates` key (value of which is node templates) - finish test for `GPULabel()` - add tests for `NodeGroupForNode()` - expose `loadNodeTemplatesFromConfigMap` -> `LoadNodeTemplatesFromConfigMap` - fix bug where `KwokCloudProvider`'s kwok config was empty (this caused `GPULabel()` to return empty) Signed-off-by: vadasambar refactor: abstract provider ID code into `getProviderID` fn - fix provider name in test `kwok` -> `kwok:kind-worker-xxx` Signed-off-by: vadasambar chore: run `go mod vendor` and `go mod tidy` Signed-off-by: vadasambar docs(cloudprovider/kwok): update info on creating nodegroups based on `hostname/label` Signed-off-by: vadasambar refactor(charts): replace fromLabelKey value `"kubernetes.io/hostname"` -> `"kwok-nodegroup"` - `"kubernetes.io/hostname"` leads to infinite scale-up Signed-off-by: vadasambar feat: support running CA with kwok provider locally Signed-off-by: vadasambar refactor: use global informer factory Signed-off-by: vadasambar refactor: use `fromNodeLabelKey: "kwok-nodegroup"` in test templates Signed-off-by: vadasambar refactor: `Cleanup()` logic - clean up only nodes managed by the kwok provider Signed-off-by: vadasambar fix/refactor: nodegroup creation logic - fix issue where fake node was getting created which caused fatal error - use ng annotation to keep track of nodegroups - (when creating nodegroups) don't process nodes which don't have the right ng label - suffix ng name with unix timestamp Signed-off-by: vadasambar refactor/test(cloudprovider/kwok): write tests for `BuildKwokProvider` and `Cleanup` - pass only the required node lister to cloud provider instead of the entire informer factory - pass the required configmap name to `LoadNodeTemplatesFromConfigMap` instead of passing the entire kwok provider config - implement fake node lister for testing Signed-off-by: vadasambar test: add test case for dynamic templates in `TestNodeGroupForNode` - remove non-required fields from template node Signed-off-by: vadasambar test: add tests for `NodeGroups()` - add extra node template without ng selector label to add more variability in the test Signed-off-by: vadasambar test: write tests for `GetNodeGpuConfig()` Signed-off-by: vadasambar test: add test for `GetAvailableGPUTypes` Signed-off-by: vadasambar test: add test for `GetResourceLimiter()` Signed-off-by: vadasambar test(cloudprovider/kwok): add tests for nodegroup's `IncreaseSize()` - abstract error msgs into variables to use them in tests Signed-off-by: vadasambar test(cloudprovider/kwok): add test for ng `DeleteNodes()` fn - add check for deleting too many nodes - rename err msg var names to make them consistent Signed-off-by: vadasambar test(cloudprovider/kwok): add tests for ng `DecreaseTargetSize()` - abstract error msgs into variables (for easy use in tests) Signed-off-by: vadasambar test(cloudprovider/kwok): add test for ng `Nodes()` - add extra test case for `DecreaseTargetSize()` to check lister error Signed-off-by: vadasambar test(cloudprovider/kwok): add test for ng `TemplateNodeInfo` Signed-off-by: vadasambar test(cloudprovider/kwok): improve tests for `BuildKwokProvider()` - add
more test cases - refactor lister for `TestBuildKwokProvider()` and `TestCleanUp()` Signed-off-by: vadasambar test(cloudprovider/kwok): add test for ng `GetOptions` Signed-off-by: vadasambar test(cloudprovider/kwok): unset `KWOK_CONFIG_MAP_NAME` at the end of the test - not doing so leads to failure in other tests - remove `kwokRelease` field from kwok config (not used anymore) - this was causing the tests to fail Signed-off-by: vadasambar chore: bump CA chart version - this is because of changes made related to kwok - fix typo `everwhere` -> `everywhere` Signed-off-by: vadasambar chore: fix linting checks Signed-off-by: vadasambar chore: address CI lint errors Signed-off-by: vadasambar chore: generate helm docs for `kwokConfigMapName` - remove `KWOK_CONFIG_MAP_KEY` (not being used in the code) - bump helm chart version Signed-off-by: vadasambar docs: revise the outline for README - add AEP link to the motivation doc Signed-off-by: vadasambar docs: wip create an outline for the README - remove `kwok` field from examples (not needed right now) Signed-off-by: vadasambar docs: add outline for ascii gifs Signed-off-by: vadasambar refactor: rename env variable `KWOK_CONFIG_MAP_NAME` -> `KWOK_PROVIDER_CONFIGMAP` Signed-off-by: vadasambar docs: update README with info around installation and benefits of using kwok provider - add `Kwok` as a provider in main CA README Signed-off-by: vadasambar chore: run `go mod vendor` - remove TODOs that are not needed anymore Signed-off-by: vadasambar docs: finish first draft of README Signed-off-by: vadasambar fix: env variable in chart `KWOK_CONFIG_MAP_NAME` -> `KWOK_PROVIDER_CONFIGMAP` Signed-off-by: vadasambar refactor: remove redundant/deprecated code Signed-off-by: vadasambar chore: bump chart version `9.30.1` -> `9.30.2` - because of kwok provider related changes Signed-off-by: vadasambar chore: fix typo `offical` -> `official` Signed-off-by: vadasambar chore: remove debug log msg Signed-off-by: vadasambar docs: add links for getting help Signed-off-by: vadasambar refactor: fix typo in log `external cluster` -> `cluster` Signed-off-by: vadasambar chore: add newline in chart.yaml to fix CI lint Signed-off-by: vadasambar docs: fix mistake `sig-kwok` -> `sig-scheduling` - kwok is a part of sig-scheduling (there is no sig-kwok) Signed-off-by: vadasambar docs: fix typo `release"` -> `"release"` Signed-off-by: vadasambar refactor: pass informer instead of lister to cloud provider builder fn (a sketch of the kwok provider's label-based nodegroup grouping follows below) Signed-off-by: vadasambar * add unit test for function getScalingInstancesByGroup * Azure: Remove AKS vmType Signed-off-by: Jack Francis * Implement TemplateNodeInfo for civo cloudprovider Signed-off-by: Vishal Anarse * Add comment for type and function Signed-off-by: Vishal Anarse * refactor(*): move getKubeClient to utils/kubernetes (cherry picked from commit b9f636d2efba48689fc0bb23361e5959c2177269) Signed-off-by: qianlei.qianl refactor: move logic to create client to utils/kubernetes pkg - expose `CreateKubeClient` as public function - make `GetKubeConfig` into a private `getKubeConfig` function (can be exposed as a public function in the future if needed) Signed-off-by: vadasambar fix: CI failing because cloudproviders were not updated to use new autoscaling option fields Signed-off-by: vadasambar refactor: define errors as constants Signed-off-by: vadasambar refactor: pass kube client options by value Signed-off-by: vadasambar * Calculate real value for template using node group Signed-off-by: Vishal Anarse * Fix lint error * Fix tests Signed-off-by: Vishal Anarse
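The kwok entries above describe grouping template nodes into nodegroups by the value of a configured label (`fromNodeLabelKey`, e.g. `"kwok-nodegroup"`). A rough sketch of that grouping under simplified, hypothetical types (not the provider's real ones):

```go
// Hypothetical sketch: bucket kwok template nodes into nodegroups by the
// value of a configured label key; nodes without the label are skipped.
package main

import "fmt"

type node struct {
	name   string
	labels map[string]string
}

func groupByLabel(nodes []node, key string) map[string][]string {
	groups := map[string][]string{}
	for _, n := range nodes {
		ng, ok := n.labels[key]
		if !ok {
			continue // no nodegroup label: don't create a nodegroup for it
		}
		groups[ng] = append(groups[ng], n.name)
	}
	return groups
}

func main() {
	nodes := []node{
		{name: "n1", labels: map[string]string{"kwok-nodegroup": "pool-a"}},
		{name: "n2", labels: map[string]string{"kwok-nodegroup": "pool-a"}},
		{name: "n3", labels: map[string]string{}}, // ignored
	}
	fmt.Println(groupByLabel(nodes, "kwok-nodegroup")) // map[pool-a:[n1 n2]]
}
```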
* Update aws-sdk-go to 1.48.7 via tarball Remove *_test.go, models/, examples * + Added SDK version in the log + Update version in README + command * Switch to multistage build Dockerfiles for VPA * Adding 33 instance types * helm chart - update cluster-autoscaler to 1.28 * Bump builder images to go 1.21.5 * feat: add metrics to show target size of every node group * deprecate unused node-autoprovisioning-enabled and max-autoprovisioned-node-group-count flags Signed-off-by: Prashant Rewar <108176843+prashantrewar@users.noreply.github.com> * fix(hetzner): insufficient nodes when boot fails The Hetzner Cloud API returns "Actions" for anything asynchronous that happens inside the backend. When creating a new server, multiple actions are returned: `create_server`, `start_server`, `attach_to_network` (if set). Our current code waits for the `create_server` action and, if it fails, makes sure to delete the server so cluster-autoscaler can create a new one immediately to provide the required capacity. If one of the "follow up" actions fails though, we do not handle this. This causes issues when the server for whatever reason did not start properly on the first try, as then the customer has a shutdown server, is paying for it, but does not receive the additional capacity for their Kubernetes cluster. This commit fixes the bug by awaiting all actions returned by the create server API call, and deleting the server if any of them fail (a simplified sketch follows below). * Add VSCode workspace files to .gitignore * Remove vpa/builder and switch dependabot updates to component Dockerfiles * fix: updated readme for hetzner cloud provider * Add error details to autoscaling backoff. Change-Id: I3b5c62ba13c2e048ce2d7170016af07182c11eee * Make backoff.Status.ErrorInfo non-pointer. Change-Id: I1f812d4d6f42db97670ef7304fc0e895c837a13b * allow specifying grpc timeout rather than hardcoded 5 seconds Signed-off-by: lizhen * [GCE] Support paginated instance listing * azure: fix chart bugs after AKS vmType deprecation Signed-off-by: Jack Francis * Update VPA release README to reference 1.X VPA versions. * implement priority based evictor and refactor drain logic * Update dependencies to kubernetes 1.29.0 * [civo] Add Gpu count to node template Signed-off-by: Vishal Anarase (cherry picked from commit 8703ff98ce8f6a571be218fdcb5c698e60951d29) * Restore flags for setting QPS limit in CA Partially undo #6274. I noticed that with this change CA gets rate limited and slows down significantly (especially during large scale downs).
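The fix(hetzner) entry above boils down to awaiting every action returned by the create-server call instead of only `create_server`. A simplified sketch of that pattern with stand-in types (not the real hcloud-go API):

```go
// Simplified sketch: wait for all actions from a server-create call and
// delete the half-provisioned server if any action failed.
package main

import (
	"errors"
	"fmt"
)

type action struct {
	name string
	err  error // non-nil if the backend reports this action as failed
}

// awaitAll returns the first failure among the returned actions, if any.
func awaitAll(actions []action) error {
	for _, a := range actions {
		if a.err != nil {
			return fmt.Errorf("action %q failed: %w", a.name, a.err)
		}
	}
	return nil
}

func createServer() error {
	// e.g. a boot failure surfaces on start_server, not create_server
	actions := []action{
		{name: "create_server"},
		{name: "start_server", err: errors.New("boot failure")},
		{name: "attach_to_network"},
	}
	if err := awaitAll(actions); err != nil {
		fmt.Println("deleting server so the autoscaler can retry:", err)
		return err
	}
	return nil
}

func main() { _ = createServer() }
```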
* Pass Burst and QPS client params to capi k8s clients * Dependency update for CA 1.29.1 * feat: support `--scale-down-delay-after-*` per nodegroup Signed-off-by: vadasambar feat: update scale down status after every scale up - move scaledown delay status to cluster state/registry - enable scale down if `ScaleDownDelayTypeLocal` is enabled - add new funcs on cluster state to get and update scale down delay status - use timestamp instead of booleans to track scale down delay status Signed-off-by: vadasambar refactor: use existing fields on clusterstate - uses `scaleUpRequests`, `scaleDownRequests` and `scaleUpFailures` instead of `ScaleUpDelayStatus` - changed the above existing fields a little to make them more convenient for use - moved initializing scale down delay processor to static autoscaler (because clusterstate is not available in main.go) Signed-off-by: vadasambar refactor: remove note saying only `scale-down-after-add` is supported - because we are supporting all the flags Signed-off-by: vadasambar fix: evaluate `scaleDownInCooldown` the old way only if `ScaleDownDelayTypeLocal` is set to `false` Signed-off-by: vadasambar refactor: remove line saying `--scale-down-delay-type-local` is only supported for `--scale-down-delay-after-add` - because it is not true anymore - we are supporting all `--scale-down-delay-after-*` flags per nodegroup Signed-off-by: vadasambar test: fix clusterstate tests failing Signed-off-by: vadasambar refactor: move initializing processors logic back from static autoscaler to main - we don't want to initialize processors in static autoscaler because anyone implementing an alternative to static_autoscaler has to initialize the processors - and initializing specific processors is making static autoscaler aware of an implementation detail which might not be the best practice Signed-off-by: vadasambar refactor: revert changes related to `clusterstate` - since I am going with observer pattern Signed-off-by: vadasambar feat: add observer interface for state of scaling - to implement observer pattern for tracking state of scale up/downs (as opposed to using clusterstate to do the same) - refactor `ScaleDownCandidatesDelayProcessor` to use fields from the new observer Signed-off-by: vadasambar refactor: remove params passed to `clearScaleUpFailures` - not needed anymore Signed-off-by: vadasambar refactor: revert clusterstate tests - approach has changed - I am not making any changes in clusterstate now Signed-off-by: vadasambar refactor: add accidentally deleted lines for clusterstate test Signed-off-by: vadasambar feat: implement `Add` fn for scale state observer - to easily add new observers - re-word comments - remove redundant params from `NewDefaultScaleDownCandidatesProcessor` Signed-off-by: vadasambar fix: CI complaining because no comments on fn definitions Signed-off-by: vadasambar feat: initialize parent `ScaleDownCandidatesProcessor` - instead of `ScaleDownCandidatesSortingProcessor` and `ScaleDownCandidatesDelayProcessor` separately Signed-off-by: vadasambar refactor: add scale state notifier to list of default processors - initialize processors for `NewDefaultScaleDownCandidatesProcessor` outside and pass them to the fn - this allows more flexibility Signed-off-by: vadasambar refactor: add observer interface - create a separate observer directory - implement `RegisterScaleUp` function in the clusterstate - TODO: resolve syntax errors Signed-off-by: vadasambar feat: use `scaleStateNotifier` in place of `clusterstate` - delete leftover
`scale_state_observer.go` (new one is already present in `observers` directory) - register `clusterstate` with `scaleStateNotifier` - use `Register` instead of `Add` function in `scaleStateNotifier` - fix `go build` - wip: fixing tests Signed-off-by: vadasambar test: fix syntax errors - add utils package `pointers` for converting `time` to pointer (without having to initialize a new variable) Signed-off-by: vadasambar feat: wip track scale down failures along with scale up failures - I was tracking scale up failures but not scale down failures - fix copyright year 2017 -> 2023 for the new `pointers` package Signed-off-by: vadasambar feat: register failed scale down with scale state notifier - wip writing tests for `scale_down_candidates_delay_processor` - fix CI lint errors - remove test file for `scale_down_candidates_processor` (there is not much to test as of now) Signed-off-by: vadasambar test: wip tests for `ScaleDownCandidatesDelayProcessor` Signed-off-by: vadasambar test: add unit tests for `ScaleDownCandidatesDelayProcessor` Signed-off-by: vadasambar refactor: don't track scale up failures in `ScaleDownCandidatesDelayProcessor` - not needed Signed-off-by: vadasambar test: better doc comments for `TestGetScaleDownCandidates` Signed-off-by: vadasambar refactor: don't ignore error in `NGChangeObserver` - return it instead and let the caller decide what to do with it Signed-off-by: vadasambar refactor: change pointers to values in `NGChangeObserver` interface - easier to work with - remove `expectedAddTime` param from `RegisterScaleUp` (not needed for now) - add tests for clusterstate's `RegisterScaleUp` Signed-off-by: vadasambar refactor: conditions in `GetScaleDownCandidates` - set scale down in cool down if the number of scale down candidates is 0 Signed-off-by: vadasambar test: use `ng1` instead of `ng2` in existing test Signed-off-by: vadasambar feat: wip static autoscaler tests Signed-off-by: vadasambar refactor: assign directly instead of using `sdProcessor` variable - variable is not needed Signed-off-by: vadasambar test: first working test for static autoscaler Signed-off-by: vadasambar test: continue working on static autoscaler tests Signed-off-by: vadasambar test: wip second static autoscaler test Signed-off-by: vadasambar refactor: remove `Println` used for debugging Signed-off-by: vadasambar test: add static_autoscaler tests for scale down delay per nodegroup flags Signed-off-by: vadasambar chore: rebase off the latest `master` - change scale state observer interface's `RegisterFailedScaleup` to reflect latest changes around clusterstate's `RegisterFailedScaleup` in `master` Signed-off-by: vadasambar test: fix clusterstate test failing Signed-off-by: vadasambar test: fix failing orchestrator test Signed-off-by: vadasambar refactor: rename `defaultScaleDownCandidatesProcessor` -> `combinedScaleDownCandidatesProcessor` - describes the processor better Signed-off-by: vadasambar refactor: replace `NGChangeObserver` -> `NodeGroupChangeObserver` - makes it easier to understand for someone not familiar with the codebase Signed-off-by: vadasambar docs: reword code comment `after` -> `for which` Signed-off-by: vadasambar refactor: don't return error from `RegisterScaleDown` - not needed as of now (no implementer function returns a non-nil error for this function) Signed-off-by: vadasambar refactor: address review comments around ng change observer interface - change dir structure of nodegroup change observer package - stop returning errors wherever it is not needed in the
nodegroup change observer interface - rename `NGChangeObserver` -> `NodeGroupChangeObserver` interface (makes it easier to understand) Signed-off-by: vadasambar refactor: make nodegroupchange observer thread-safe Signed-off-by: vadasambar docs: add TODO to consider using multiple mutexes in nodegroupchange observer Signed-off-by: vadasambar refactor: use `time.Now()` directly instead of assigning a variable to it Signed-off-by: vadasambar refactor: share code for checking if there was a recent scale-up/down/failure Signed-off-by: vadasambar test: convert `ScaleDownCandidatesDelayProcessor` into table tests Signed-off-by: vadasambar refactor: change scale state notifier's `Register()` -> `RegisterForNotifications()` - makes it easier to understand what the function does Signed-off-by: vadasambar test: replace scale state notifier `Register` -> `RegisterForNotifications` in test - to fix syntax errors since it is already renamed in the actual code Signed-off-by: vadasambar refactor: remove `clusterStateRegistry` from `delete_in_batch` tests - not needed anymore since we have `scaleStateNotifier` Signed-off-by: vadasambar refactor: address PR review comments Signed-off-by: vadasambar fix: add empty `RegisterFailedScaleDown` for clusterstate - fix syntax error in static autoscaler test Signed-off-by: vadasambar (cherry picked from commit 5de49a11fb06eddc8682de03e09d7dc6d6ab3961) * Backport #6522 [CA] Bump go version into CA1.29 * Backport #6491 and #6494 [CA] Add informer argument to the CloudProviders builder into CA1.29 * Merge pull request #6617 from ionos-cloud/update-ionos-sdk ionoscloud: Update ionos-cloud sdk-go and add metrics * CA - Update k/k vendor to 1.29.3 * [v1.29][Hetzner] Fix missing ephemeral storage definition This fixes pods with ephemeral storage requests being denied due to insufficient ephemeral storage on the Hetzner provider. Backport of #6574 to `v1.29` branch. * Use cache to track vms pools * fx * Add UTs * Fix boilerplate header * Add const * Rename vmsPoolSet * [v1.29][Hetzner] Fix Autoscaling for worker nodes with invalid ProviderID This change fixes a bug that arises when the user's cluster includes worker nodes not from Hetzner Cloud, such as a Hetzner Dedicated server or any server resource other than Hetzner. It also corrects the behavior when a server has been physically deleted from Hetzner Cloud. Signed-off-by: Maksim Paskal * [v1.29] fix(hetzner): hostname label is not considered The Node Group info we currently return does not include the `kubernetes.io/hostname` label, which is usually set on every node. This causes issues when the user has an unscheduled pod with a `topologySpreadConstraint` on `topologyKey: kubernetes.io/hostname`. cluster-autoscaler is unable to fulfill this constraint and does not scale up any of the node groups. Related to #6715 * Remove shadow err variable in deleteCreatedNodesWithErrors func * fix: scale up broken for providers not implementing NodeGroup.GetOptions() Properly handle calls to `NodeGroup.GetOptions()` that return `cloudprovider.ErrNotImplemented` in the scale up path.
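A minimal sketch of the `GetOptions()` handling described in the entry above: treat `cloudprovider.ErrNotImplemented` as "fall back to the global defaults" rather than a hard error. The names below are illustrative stand-ins, not the real cloudprovider package types:

```go
// Sketch: a provider that doesn't implement per-nodegroup options should
// not break scale-up; fall back to the autoscaler-wide defaults instead.
package main

import (
	"errors"
	"fmt"
)

// errNotImplemented mirrors cloudprovider.ErrNotImplemented.
var errNotImplemented = errors.New("not implemented")

type options struct{ scaleDownUtilizationThreshold float64 }

type nodeGroup interface {
	GetOptions(defaults options) (*options, error)
}

func effectiveOptions(ng nodeGroup, defaults options) (options, error) {
	opts, err := ng.GetOptions(defaults)
	if err != nil {
		if errors.Is(err, errNotImplemented) {
			return defaults, nil // provider has no per-group options
		}
		return options{}, err
	}
	return *opts, nil
}

type bareGroup struct{} // a nodegroup with no per-group options support

func (bareGroup) GetOptions(options) (*options, error) { return nil, errNotImplemented }

func main() {
	opts, err := effectiveOptions(bareGroup{}, options{scaleDownUtilizationThreshold: 0.5})
	fmt.Println(opts, err) // {0.5} <nil>
}
```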
* Update k/k vendor to 1.29.5 for CA 1.29 * Rebase * Fx gomock * Rename ARM_BASE_URL * Backport #6528 [CA] Fix expectedToRegister to respect instances with nil status into CA1.29 * Backport #6750 [CA] fix(hetzner): missing error return in scale up/down into CA1.29 * PR#6911 Backport for 1.29: Fix/aws asg unsafe decommission #5829 * CA - 1.29.4 Pre-release AWS Instance Types Update * Update vendor to use k8s 1.29.6 * adjust logs --------- Signed-off-by: Jonathan Raymond Signed-off-by: Ayush Rangwala Signed-off-by: Thomas Stadler Signed-off-by: Amir Alavi Signed-off-by: dependabot[bot] Signed-off-by: Matt Dainty Signed-off-by: Cyrill Troxler Signed-off-by: vadasambar Signed-off-by: Jack Francis Signed-off-by: Vishal Anarse Signed-off-by: Prashant Rewar <108176843+prashantrewar@users.noreply.github.com> Signed-off-by: lizhen Signed-off-by: Maksim Paskal Co-authored-by: Mathieu Bruneau Co-authored-by: Artem Minyaylov Co-authored-by: Kubernetes Prow Robot Co-authored-by: Dumlu Timuralp <34840364+dumlutimuralp@users.noreply.github.com> Co-authored-by: Hakan Bostan Co-authored-by: Rich Gowman Co-authored-by: Daniel Gutowski Co-authored-by: mikutas <23391543+mikutas@users.noreply.github.com> Co-authored-by: Jonathan Raymond Co-authored-by: Johnnie Ho Co-authored-by: aleskandro Co-authored-by: Kuba Tużnik Co-authored-by: lisenet Co-authored-by: Piotr Wrótniak Co-authored-by: Ayush Rangwala Co-authored-by: Dixita Narang Co-authored-by: Artur Żyliński Co-authored-by: Alexandros Afentoulis Co-authored-by: jw-maynard Co-authored-by: xiaoqing Co-authored-by: Thomas Stadler Co-authored-by: Marco Voelz Co-authored-by: Aleksandra Gacek Co-authored-by: Luis Ramirez Co-authored-by: piotrwrotniak <91665466+piotrwrotniak@users.noreply.github.com> Co-authored-by: Amir Alavi Co-authored-by: Michael Grosser Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: shapirus Co-authored-by: Guy Templeton Co-authored-by: Mads Hartmann Co-authored-by: Thomas Güttler Co-authored-by: Prachi Gandhi Co-authored-by: Prachi Gandhi <73401862+gandhipr@users.noreply.github.com> Co-authored-by: Mike Tougeron Co-authored-by: Matt Dainty Co-authored-by: Guo Peng <370090914@qq.com> Co-authored-by: Alex Serbul <22218473+AlexanderSerbul@users.noreply.github.com> Co-authored-by: Cyrill Troxler Co-authored-by: alexanderConstantinescu Co-authored-by: Brydon Cheyney Co-authored-by: Julian Tölle Co-authored-by: Mahmoud Atwa Co-authored-by: Yaroslava Serdiuk Co-authored-by: vadasambar Co-authored-by: Jack Francis Co-authored-by: Vishal Anarse Co-authored-by: qianlei.qianl Co-authored-by: Andrea Scarpino Co-authored-by: Prashant Rewar <108176843+prashantrewar@users.noreply.github.com> Co-authored-by: Jont828 Co-authored-by: Pascal Co-authored-by: Walid Ghallab Co-authored-by: lizhen Co-authored-by: Daniel Kłobuszewski Co-authored-by: Luiz Antonio Co-authored-by: damikag Co-authored-by: Maciek Pytel Co-authored-by: Joachim Bartosik Co-authored-by: Kyle Weaver Co-authored-by: shubham82 Co-authored-by: Kubernetes Prow Robot <20407524+k8s-ci-robot@users.noreply.github.com> Co-authored-by: wenxuanW Co-authored-by: Maksim Paskal Co-authored-by: Bartłomiej Wróblewski Co-authored-by: Krishna Sarabu --- charts/cluster-autoscaler/README.md | 2 - charts/cluster-autoscaler/values.yaml | 9 - .../cloudprovider/aws/auto_scaling_groups.go | 117 +- .../aws/aws-sdk-go/aws/awserr/error.go | 80 +- .../aws-sdk-go/aws/client/default_retryer.go | 1 + .../aws/corehandlers/awsinternal.go | 2 +- 
.../aws/credentials/chain_provider.go | 23 +- .../aws-sdk-go/aws/credentials/credentials.go | 51 +- .../ec2rolecreds/ec2_role_provider.go | 20 +- .../aws/credentials/endpointcreds/provider.go | 31 +- .../aws/credentials/plugincreds/provider.go | 94 +- .../aws/credentials/processcreds/provider.go | 46 +- .../aws/credentials/ssocreds/doc.go | 54 +- .../credentials/ssocreds/sso_cached_token.go | 2 +- .../aws/aws-sdk-go/aws/csm/doc.go | 58 +- .../aws/aws-sdk-go/aws/csm/enable.go | 22 +- .../aws/aws-sdk-go/aws/csm/reporter.go | 18 +- .../cloudprovider/aws/aws-sdk-go/aws/doc.go | 32 +- .../aws/aws-sdk-go/aws/endpoints/defaults.go | 8 +- .../aws/aws-sdk-go/aws/logger.go | 7 +- .../aws/aws-sdk-go/aws/request/request.go | 16 +- .../aws/request/request_pagination.go | 20 +- .../aws/request/timeout_read_closer.go | 2 +- .../aws/aws-sdk-go/aws/session/doc.go | 142 +- .../aws/aws-sdk-go/awstesting/assert.go | 2 +- .../awstesting/cmd/bucket_cleanup/main.go | 3 +- .../aws/aws-sdk-go/internal/ini/doc.go | 53 +- .../aws-sdk-go/internal/ini/literal_tokens.go | 6 +- .../internal/s3shared/arn/accesspoint_arn.go | 5 +- .../internal/s3shared/arn/outpost_arn.go | 10 +- .../aws/aws-sdk-go/internal/sdkmath/floor.go | 1 - .../internal/sdkmath/floor_go1.9.go | 1 - .../aws/aws-sdk-go/private/model/api/api.go | 4 +- .../private/model/api/legacy_jsonvalue.go | 142 +- .../aws/aws-sdk-go/private/model/api/load.go | 10 +- .../private/model/cli/gen-endpoints/main.go | 7 +- .../eventstream/eventstreamtest/testing.go | 2 +- .../aws/aws_cloud_provider_test.go | 115 +- .../cloudprovider/aws/ec2_instance_types.go | 244 +- .../cloudprovider/azure/azure_client.go | 131 + .../cloudprovider/azure/azure_config.go | 20 +- .../cloudprovider/azure/azure_manager_test.go | 6 + .../azure/azure_mock_agentpool_client.go | 94 + .../cloudprovider/cloud_provider.go | 2 +- .../hetzner/hetzner_node_group.go | 90 +- .../cloudprovider/mcm/mcm_cloud_provider.go | 2 +- .../cloudprovider/mcm/mcm_manager.go | 1 + .../github.com/gofrs/flock/.gitignore | 8 - .../common/netretry_test.go | 2 +- .../clusterstate/clusterstate.go | 2 +- .../core/scaleup/orchestrator/orchestrator.go | 6 +- cluster-autoscaler/core/static_autoscaler.go | 7 +- cluster-autoscaler/go.mod | 131 +- cluster-autoscaler/go.sum | 181 +- .../Azure/azure-sdk-for-go-extensions/LICENSE | 21 + .../pkg/middleware/arm_client_opts.go | 66 + .../pkg/middleware/arm_error_collector.go | 243 + .../pkg/middleware/default_http_client.go | 102 + .../pkg/middleware/poller.go | 28 + .../pkg/middleware/queryparam_policy.go | 37 + .../azure-sdk-for-go/sdk/azcore/CHANGELOG.md | 745 ++ .../azure-sdk-for-go/sdk/azcore/LICENSE.txt | 21 + .../azure-sdk-for-go/sdk/azcore/README.md | 39 + .../azure-sdk-for-go/sdk/azcore/arm/client.go | 72 + .../azure-sdk-for-go/sdk/azcore/arm/doc.go | 9 + .../internal/resource/resource_identifier.go | 224 + .../arm/internal/resource/resource_type.go | 114 + .../sdk/azcore/arm/policy/policy.go | 98 + .../sdk/azcore/arm/resource_identifier.go | 23 + .../sdk/azcore/arm/resource_type.go | 40 + .../sdk/azcore/arm/runtime/pipeline.go | 65 + .../azcore/arm/runtime/policy_bearer_token.go | 145 + .../azcore/arm/runtime/policy_register_rp.go | 347 + .../arm/runtime/policy_trace_namespace.go | 30 + .../sdk/azcore/arm/runtime/runtime.go | 24 + .../Azure/azure-sdk-for-go/sdk/azcore/ci.yml | 29 + .../sdk/azcore/cloud/cloud.go | 44 + .../azure-sdk-for-go/sdk/azcore/cloud/doc.go | 53 + .../Azure/azure-sdk-for-go/sdk/azcore/core.go | 154 + .../Azure/azure-sdk-for-go/sdk/azcore/doc.go | 264 + 
.../azure-sdk-for-go/sdk/azcore/errors.go | 14 + .../Azure/azure-sdk-for-go/sdk/azcore/etag.go | 48 + .../sdk/azcore/internal/exported/exported.go | 175 + .../sdk/azcore/internal/exported/pipeline.go | 77 + .../sdk/azcore/internal/exported/request.go | 223 + .../internal/exported/response_error.go | 157 + .../sdk/azcore/internal/log/log.go | 38 + .../azcore/internal/pollers/async/async.go | 159 + .../sdk/azcore/internal/pollers/body/body.go | 135 + .../sdk/azcore/internal/pollers/fake/fake.go | 133 + .../sdk/azcore/internal/pollers/loc/loc.go | 119 + .../sdk/azcore/internal/pollers/op/op.go | 145 + .../sdk/azcore/internal/pollers/poller.go | 24 + .../sdk/azcore/internal/pollers/util.go | 187 + .../sdk/azcore/internal/shared/constants.go | 44 + .../sdk/azcore/internal/shared/shared.go | 149 + .../azure-sdk-for-go/sdk/azcore/log/doc.go | 10 + .../azure-sdk-for-go/sdk/azcore/log/log.go | 50 + .../azure-sdk-for-go/sdk/azcore/policy/doc.go | 10 + .../sdk/azcore/policy/policy.go | 187 + .../sdk/azcore/runtime/doc.go | 10 + .../sdk/azcore/runtime/errors.go | 19 + .../sdk/azcore/runtime/pager.go | 128 + .../sdk/azcore/runtime/pipeline.go | 94 + .../sdk/azcore/runtime/policy_api_version.go | 75 + .../sdk/azcore/runtime/policy_bearer_token.go | 121 + .../azcore/runtime/policy_body_download.go | 72 + .../sdk/azcore/runtime/policy_http_header.go | 40 + .../sdk/azcore/runtime/policy_http_trace.go | 143 + .../azcore/runtime/policy_include_response.go | 35 + .../azcore/runtime/policy_key_credential.go | 57 + .../sdk/azcore/runtime/policy_logging.go | 264 + .../sdk/azcore/runtime/policy_request_id.go | 34 + .../sdk/azcore/runtime/policy_retry.go | 255 + .../azcore/runtime/policy_sas_credential.go | 47 + .../sdk/azcore/runtime/policy_telemetry.go | 83 + .../sdk/azcore/runtime/poller.go | 384 + .../sdk/azcore/runtime/request.go | 179 + .../sdk/azcore/runtime/response.go | 109 + .../runtime/transport_default_dialer_other.go | 15 + .../runtime/transport_default_dialer_wasm.go | 15 + .../runtime/transport_default_http_client.go | 48 + .../sdk/azcore/streaming/doc.go | 9 + .../sdk/azcore/streaming/progress.go | 75 + .../azure-sdk-for-go/sdk/azcore/to/doc.go | 9 + .../azure-sdk-for-go/sdk/azcore/to/to.go | 21 + .../sdk/azcore/tracing/constants.go | 41 + .../sdk/azcore/tracing/tracing.go | 191 + .../sdk/azidentity/CHANGELOG.md | 457 + .../sdk/azidentity/LICENSE.txt | 21 + .../sdk/azidentity/MIGRATION.md | 307 + .../azure-sdk-for-go/sdk/azidentity/README.md | 243 + .../sdk/azidentity/TROUBLESHOOTING.md | 207 + .../sdk/azidentity/assets.json | 6 + .../sdk/azidentity/azidentity.go | 178 + .../sdk/azidentity/azure_cli_credential.go | 183 + .../azidentity/chained_token_credential.go | 138 + .../azure-sdk-for-go/sdk/azidentity/ci.yml | 34 + .../azidentity/client_assertion_credential.go | 75 + .../client_certificate_credential.go | 160 + .../azidentity/client_secret_credential.go | 65 + .../sdk/azidentity/confidential_client.go | 156 + .../azidentity/default_azure_credential.go | 196 + .../sdk/azidentity/device_code_credential.go | 106 + .../sdk/azidentity/environment_credential.go | 164 + .../azure-sdk-for-go/sdk/azidentity/errors.go | 128 + .../interactive_browser_credential.go | 86 + .../sdk/azidentity/logging.go | 14 + .../sdk/azidentity/managed_identity_client.go | 404 + .../azidentity/managed_identity_credential.go | 113 + .../sdk/azidentity/on_behalf_of_credential.go | 92 + .../sdk/azidentity/public_client.go | 178 + .../sdk/azidentity/test-resources-pre.ps1 | 36 + .../sdk/azidentity/test-resources.bicep | 1 + 
.../username_password_credential.go | 66 + .../sdk/azidentity/version.go | 15 + .../sdk/azidentity/workload_identity.go | 126 + .../azure-sdk-for-go/sdk/internal/LICENSE.txt | 21 + .../sdk/internal/diag/diag.go | 51 + .../azure-sdk-for-go/sdk/internal/diag/doc.go | 7 + .../sdk/internal/errorinfo/doc.go | 7 + .../sdk/internal/errorinfo/errorinfo.go | 46 + .../sdk/internal/exported/exported.go | 129 + .../azure-sdk-for-go/sdk/internal/log/doc.go | 7 + .../azure-sdk-for-go/sdk/internal/log/log.go | 104 + .../sdk/internal/poller/util.go | 155 + .../sdk/internal/temporal/resource.go | 123 + .../azure-sdk-for-go/sdk/internal/uuid/doc.go | 7 + .../sdk/internal/uuid/uuid.go | 76 + .../armcontainerservice/v4/CHANGELOG.md | 726 ++ .../armcontainerservice/v4/LICENSE.txt | 21 + .../armcontainerservice/v4/README.md | 108 + .../v4/agentpools_client.go | 739 ++ .../armcontainerservice/v4/assets.json | 6 + .../armcontainerservice/v4/autorest.md | 13 + .../armcontainerservice/v4/build.go | 7 + .../armcontainerservice/v4/ci.yml | 28 + .../armcontainerservice/v4/client_factory.go | 140 + .../armcontainerservice/v4/constants.go | 1143 +++ .../armcontainerservice/v4/date_type.go | 58 + .../armcontainerservice/v4/machines_client.go | 187 + .../v4/maintenanceconfigurations_client.go | 313 + .../v4/managedclusters_client.go | 2232 +++++ .../v4/managedclustersnapshots_client.go | 417 + .../armcontainerservice/v4/models.go | 3026 +++++++ .../armcontainerservice/v4/models_serde.go | 7426 +++++++++++++++++ .../v4/operations_client.go | 89 + .../v4/operationstatusresult_client.go | 254 + .../armcontainerservice/v4/options.go | 459 + .../v4/privateendpointconnections_client.go | 334 + .../v4/privatelinkresources_client.go | 109 + .../v4/resolveprivatelinkserviceid_client.go | 113 + .../armcontainerservice/v4/responses.go | 437 + .../v4/snapshots_client.go | 413 + .../armcontainerservice/v4/time_rfc3339.go | 110 + .../v4/trustedaccessrolebindings_client.go | 345 + .../v4/trustedaccessroles_client.go | 104 + .../Azure/go-armbalancer/CODE_OF_CONDUCT.md | 9 + .../github.com/Azure/go-armbalancer/LICENSE | 21 + .../github.com/Azure/go-armbalancer/README.md | 56 + .../Azure/go-armbalancer/SECURITY.md | 41 + .../Azure/go-armbalancer/SUPPORT.md | 25 + .../Azure/go-armbalancer/armbalancer.go | 200 + .../LICENSE | 21 + .../apps/cache/cache.go | 54 + .../apps/confidential/confidential.go | 685 ++ .../apps/errors/error_design.md | 111 + .../apps/errors/errors.go | 89 + .../apps/internal/base/base.go | 467 ++ .../internal/base/internal/storage/items.go | 203 + .../internal/storage/partitioned_storage.go | 436 + .../internal/base/internal/storage/storage.go | 516 ++ .../apps/internal/exported/exported.go | 34 + .../apps/internal/json/design.md | 140 + .../apps/internal/json/json.go | 184 + .../apps/internal/json/mapslice.go | 333 + .../apps/internal/json/marshal.go | 346 + .../apps/internal/json/struct.go | 290 + .../apps/internal/json/types/time/time.go | 70 + .../apps/internal/local/server.go | 177 + .../apps/internal/oauth/oauth.go | 353 + .../oauth/ops/accesstokens/accesstokens.go | 451 + .../oauth/ops/accesstokens/apptype_string.go | 25 + .../internal/oauth/ops/accesstokens/tokens.go | 338 + .../internal/oauth/ops/authority/authority.go | 552 ++ .../ops/authority/authorizetype_string.go | 30 + .../internal/oauth/ops/internal/comm/comm.go | 320 + .../oauth/ops/internal/comm/compress.go | 33 + .../oauth/ops/internal/grant/grant.go | 17 + .../apps/internal/oauth/ops/ops.go | 56 + .../ops/wstrust/defs/endpointtype_string.go | 25 + 
.../wstrust/defs/mex_document_definitions.go | 394 + .../defs/saml_assertion_definitions.go | 230 + .../oauth/ops/wstrust/defs/version_string.go | 25 + .../ops/wstrust/defs/wstrust_endpoint.go | 199 + .../ops/wstrust/defs/wstrust_mex_document.go | 159 + .../internal/oauth/ops/wstrust/wstrust.go | 136 + .../apps/internal/oauth/resolvers.go | 149 + .../apps/internal/options/options.go | 52 + .../apps/internal/shared/shared.go | 72 + .../apps/internal/version/version.go | 8 + .../apps/public/public.go | 713 ++ .../github.com/felixge/httpsnoop/.travis.yml | 6 - .../github.com/felixge/httpsnoop/Makefile | 2 +- .../github.com/felixge/httpsnoop/README.md | 4 +- .../felixge/httpsnoop/capture_metrics.go | 2 +- .../httpsnoop/wrap_generated_gteq_1.8.go | 2 +- .../httpsnoop/wrap_generated_lt_1.8.go | 2 +- .../github.com/golang-jwt/jwt/v5/.gitignore | 4 + .../github.com/golang-jwt/jwt/v5/LICENSE | 9 + .../golang-jwt/jwt/v5/MIGRATION_GUIDE.md | 185 + .../github.com/golang-jwt/jwt/v5/README.md | 167 + .../github.com/golang-jwt/jwt/v5/SECURITY.md | 19 + .../golang-jwt/jwt/v5/VERSION_HISTORY.md | 137 + .../github.com/golang-jwt/jwt/v5/claims.go | 16 + .../github.com/golang-jwt/jwt/v5/doc.go | 4 + .../github.com/golang-jwt/jwt/v5/ecdsa.go | 134 + .../golang-jwt/jwt/v5/ecdsa_utils.go | 69 + .../github.com/golang-jwt/jwt/v5/ed25519.go | 80 + .../golang-jwt/jwt/v5/ed25519_utils.go | 64 + .../github.com/golang-jwt/jwt/v5/errors.go | 49 + .../golang-jwt/jwt/v5/errors_go1_20.go | 47 + .../golang-jwt/jwt/v5/errors_go_other.go | 78 + .../github.com/golang-jwt/jwt/v5/hmac.go | 104 + .../golang-jwt/jwt/v5/map_claims.go | 109 + .../github.com/golang-jwt/jwt/v5/none.go | 50 + .../github.com/golang-jwt/jwt/v5/parser.go | 215 + .../golang-jwt/jwt/v5/parser_option.go | 120 + .../golang-jwt/jwt/v5/registered_claims.go | 63 + .../github.com/golang-jwt/jwt/v5/rsa.go | 93 + .../github.com/golang-jwt/jwt/v5/rsa_pss.go | 135 + .../github.com/golang-jwt/jwt/v5/rsa_utils.go | 107 + .../golang-jwt/jwt/v5/signing_method.go | 49 + .../golang-jwt/jwt/v5/staticcheck.conf | 1 + .../github.com/golang-jwt/jwt/v5/token.go | 86 + .../golang-jwt/jwt/v5/token_option.go | 5 + .../github.com/golang-jwt/jwt/v5/types.go | 150 + .../github.com/golang-jwt/jwt/v5/validator.go | 301 + .../github.com/kylelemons/godebug/LICENSE | 202 + .../kylelemons/godebug/diff/diff.go | 186 + .../kylelemons/godebug/pretty/.gitignore | 5 + .../kylelemons/godebug/pretty/doc.go | 25 + .../kylelemons/godebug/pretty/public.go | 188 + .../kylelemons/godebug/pretty/reflect.go | 241 + .../kylelemons/godebug/pretty/structure.go | 223 + .../vendor/github.com/pkg/browser/LICENSE | 23 + .../vendor/github.com/pkg/browser/README.md | 55 + .../vendor/github.com/pkg/browser/browser.go | 57 + .../github.com/pkg/browser/browser_darwin.go | 5 + .../github.com/pkg/browser/browser_freebsd.go | 14 + .../github.com/pkg/browser/browser_linux.go | 21 + .../github.com/pkg/browser/browser_netbsd.go | 14 + .../github.com/pkg/browser/browser_openbsd.go | 14 + .../pkg/browser/browser_unsupported.go | 12 + .../github.com/pkg/browser/browser_windows.go | 7 + .../net/http/otelhttp/common.go | 4 +- .../net/http/otelhttp/config.go | 7 +- .../net/http/otelhttp/handler.go | 18 +- .../net/http/otelhttp/version.go | 2 +- .../go.opentelemetry.io/otel/.gitignore | 5 +- .../go.opentelemetry.io/otel/.golangci.yml | 17 +- .../go.opentelemetry.io/otel/CHANGELOG.md | 85 +- .../go.opentelemetry.io/otel/CONTRIBUTING.md | 4 + .../vendor/go.opentelemetry.io/otel/Makefile | 29 +- 
.../vendor/go.opentelemetry.io/otel/README.md | 15 +- .../otel/baggage/baggage.go | 4 +- .../otel/exporters/otlp/otlptrace/README.md | 51 - .../otel/exporters/otlp/otlptrace/doc.go | 21 + .../otel/exporters/otlp/otlptrace/exporter.go | 6 +- .../otlp/otlptrace/otlptracegrpc/client.go | 22 +- .../otlp/otlptrace/otlptracegrpc/doc.go | 77 + .../internal/envconfig/envconfig.go | 4 +- .../internal/otlpconfig/options.go | 3 - .../otlp/otlptrace/otlptracegrpc/options.go | 8 +- .../otel/exporters/otlp/otlptrace/version.go | 2 +- .../otel/internal/global/instruments.go | 60 +- .../otel/internal/global/trace.go | 7 + .../go.opentelemetry.io/otel/metric/doc.go | 2 +- .../otel/metric/instrument.go | 23 + .../otel/metric/syncfloat64.go | 10 +- .../otel/metric/syncint64.go | 10 +- .../otel/propagation/trace_context.go | 6 +- .../go.opentelemetry.io/otel/requirements.txt | 2 +- .../otel/sdk/resource/auto.go | 10 +- .../otel/sdk/resource/env.go | 10 +- .../otel/sdk/resource/os.go | 7 +- .../otel/sdk/resource/process.go | 36 +- .../otel/sdk/trace/provider.go | 8 +- .../otel/sdk/trace/sampling.go | 4 +- .../otel/sdk/trace/span.go | 11 +- .../otel/sdk/trace/tracer.go | 3 + .../go.opentelemetry.io/otel/sdk/version.go | 2 +- .../go.opentelemetry.io/otel/trace/config.go | 1 + .../go.opentelemetry.io/otel/trace/doc.go | 64 + .../otel/trace/embedded/embedded.go | 56 + .../go.opentelemetry.io/otel/trace/noop.go | 10 +- .../otel/trace/noop/noop.go | 118 + .../go.opentelemetry.io/otel/trace/trace.go | 40 +- .../otel/trace/tracestate.go | 38 +- .../go.opentelemetry.io/otel/version.go | 2 +- .../go.opentelemetry.io/otel/versions.yaml | 7 +- .../vendor/go.uber.org/mock/AUTHORS | 12 + .../vendor/go.uber.org/mock/LICENSE | 202 + .../vendor/go.uber.org/mock/gomock/call.go | 508 ++ .../vendor/go.uber.org/mock/gomock/callset.go | 164 + .../go.uber.org/mock/gomock/controller.go | 318 + .../vendor/go.uber.org/mock/gomock/doc.go | 60 + .../go.uber.org/mock/gomock/matchers.go | 443 + .../go.uber.org/mock/mockgen/model/model.go | 533 ++ .../x/crypto/internal/poly1305/bits_compat.go | 39 - .../x/crypto/internal/poly1305/bits_go1.13.go | 21 - .../x/crypto/internal/poly1305/sum_generic.go | 43 +- .../x/crypto/internal/poly1305/sum_ppc64le.s | 14 +- .../vendor/golang.org/x/net/html/token.go | 12 +- .../vendor/golang.org/x/net/http2/frame.go | 42 +- .../vendor/golang.org/x/net/http2/pipe.go | 11 +- .../vendor/golang.org/x/net/http2/server.go | 13 +- .../vendor/golang.org/x/net/http2/testsync.go | 331 + .../golang.org/x/net/http2/transport.go | 307 +- .../golang.org/x/net/websocket/client.go | 55 +- .../vendor/golang.org/x/net/websocket/dial.go | 11 +- .../x/oauth2/google/appengine_gen1.go | 1 - .../x/oauth2/google/appengine_gen2_flex.go | 1 - .../x/oauth2/internal/client_appengine.go | 1 - .../vendor/golang.org/x/sys/unix/aliases.go | 2 +- .../vendor/golang.org/x/sys/unix/mkerrors.sh | 39 +- .../x/sys/unix/syscall_darwin_libSystem.go | 2 +- .../golang.org/x/sys/unix/syscall_freebsd.go | 12 +- .../golang.org/x/sys/unix/syscall_linux.go | 99 + .../golang.org/x/sys/unix/zerrors_linux.go | 90 +- .../x/sys/unix/zerrors_linux_386.go | 3 + .../x/sys/unix/zerrors_linux_amd64.go | 3 + .../x/sys/unix/zerrors_linux_arm.go | 3 + .../x/sys/unix/zerrors_linux_arm64.go | 3 + .../x/sys/unix/zerrors_linux_loong64.go | 3 + .../x/sys/unix/zerrors_linux_mips.go | 3 + .../x/sys/unix/zerrors_linux_mips64.go | 3 + .../x/sys/unix/zerrors_linux_mips64le.go | 3 + .../x/sys/unix/zerrors_linux_mipsle.go | 3 + .../x/sys/unix/zerrors_linux_ppc.go | 3 + 
.../x/sys/unix/zerrors_linux_ppc64.go | 3 + .../x/sys/unix/zerrors_linux_ppc64le.go | 3 + .../x/sys/unix/zerrors_linux_riscv64.go | 3 + .../x/sys/unix/zerrors_linux_s390x.go | 3 + .../x/sys/unix/zerrors_linux_sparc64.go | 3 + .../golang.org/x/sys/unix/zsyscall_linux.go | 10 + .../x/sys/unix/zsyscall_openbsd_386.go | 2 - .../x/sys/unix/zsyscall_openbsd_amd64.go | 2 - .../x/sys/unix/zsyscall_openbsd_arm.go | 2 - .../x/sys/unix/zsyscall_openbsd_arm64.go | 2 - .../x/sys/unix/zsyscall_openbsd_mips64.go | 2 - .../x/sys/unix/zsyscall_openbsd_ppc64.go | 2 - .../x/sys/unix/zsyscall_openbsd_riscv64.go | 2 - .../x/sys/unix/zsysnum_linux_386.go | 4 + .../x/sys/unix/zsysnum_linux_amd64.go | 3 + .../x/sys/unix/zsysnum_linux_arm.go | 4 + .../x/sys/unix/zsysnum_linux_arm64.go | 4 + .../x/sys/unix/zsysnum_linux_loong64.go | 4 + .../x/sys/unix/zsysnum_linux_mips.go | 4 + .../x/sys/unix/zsysnum_linux_mips64.go | 4 + .../x/sys/unix/zsysnum_linux_mips64le.go | 4 + .../x/sys/unix/zsysnum_linux_mipsle.go | 4 + .../x/sys/unix/zsysnum_linux_ppc.go | 4 + .../x/sys/unix/zsysnum_linux_ppc64.go | 4 + .../x/sys/unix/zsysnum_linux_ppc64le.go | 4 + .../x/sys/unix/zsysnum_linux_riscv64.go | 4 + .../x/sys/unix/zsysnum_linux_s390x.go | 4 + .../x/sys/unix/zsysnum_linux_sparc64.go | 4 + .../golang.org/x/sys/unix/ztypes_linux.go | 185 +- .../golang.org/x/sys/windows/env_windows.go | 17 +- .../x/sys/windows/syscall_windows.go | 4 +- .../x/sys/windows/zsyscall_windows.go | 9 + .../api/annotations/field_behavior.pb.go | 22 +- .../vendor/google.golang.org/grpc/README.md | 2 +- .../grpc/attributes/attributes.go | 4 +- .../grpc/balancer/balancer.go | 15 + .../google.golang.org/grpc/clientconn.go | 18 +- .../google.golang.org/grpc/dialoptions.go | 5 +- .../grpc/encoding/encoding.go | 13 +- .../health/grpc_health_v1/health_grpc.pb.go | 22 +- .../grpc/internal/backoff/backoff.go | 36 + .../grpc/internal/internal.go | 6 + .../grpc/internal/status/status.go | 28 + .../grpc/internal/transport/handler_server.go | 13 +- .../grpc/internal/transport/http2_client.go | 13 +- .../grpc/internal/transport/http2_server.go | 14 +- .../grpc/internal/transport/http_util.go | 18 +- .../grpc/internal/transport/transport.go | 2 +- .../grpc/resolver/manual/manual.go | 27 +- .../vendor/google.golang.org/grpc/server.go | 136 +- .../vendor/google.golang.org/grpc/tap/tap.go | 6 + .../vendor/google.golang.org/grpc/version.go | 2 +- .../vendor/google.golang.org/grpc/vet.sh | 3 + .../vendor/k8s.io/api/core/v1/generated.proto | 2 +- .../vendor/k8s.io/api/core/v1/types.go | 2 +- .../core/v1/types_swagger_doc_generated.go | 2 +- .../pkg/util/httpstream/wsstream/conn.go | 2 +- .../pkg/admission/plugin/cel/composition.go | 2 +- .../controller_reconcile.go | 5 +- .../apiserver/pkg/cel/environment/base.go | 16 +- .../apiserver/pkg/features/kube_features.go | 17 + .../apiserver/pkg/storage/cacher/cacher.go | 15 +- .../pkg/storage/cacher/lister_watcher.go | 30 +- .../pkg/storage/cacher/watch_cache.go | 10 +- .../storage/cacher/watch_cache_interval.go | 17 + .../pkg/storage/cacher/watch_progress.go | 9 +- .../pkg/storage/etcd3/metrics/metrics.go | 12 +- .../tools/remotecommand/websocket.go | 23 +- .../generate-internal-groups.sh | 148 +- .../metrics/prometheus/slis/metrics.go | 1 + .../storage/volume/helpers.go | 14 + .../k8s.io/kubernetes/pkg/apis/core/types.go | 2 +- .../pkg/apis/core/validation/validation.go | 42 +- .../kubernetes/pkg/features/kube_features.go | 8 +- .../kubernetes/pkg/kubelet/kubelet_volumes.go | 8 +- .../cache/actual_state_of_world.go | 10 +- 
.../volumemanager/reconciler/reconstruct.go | 12 +- .../kubelet/winstats/perfcounter_nodestats.go | 3 +- .../pkg/scheduler/framework/interface.go | 5 +- .../plugins/nodeaffinity/node_affinity.go | 45 +- .../nodeunschedulable/node_unschedulable.go | 28 +- .../framework/preemption/preemption.go | 17 +- .../scheduler/framework/runtime/framework.go | 3 +- .../pkg/scheduler/framework/types.go | 5 + .../internal/queue/scheduling_queue.go | 5 + .../pkg/scheduler/metrics/metrics.go | 2 +- .../kubernetes/pkg/scheduler/schedule_one.go | 12 +- .../kubernetes/pkg/scheduler/scheduler.go | 4 +- .../k8s.io/kubernetes/pkg/util/slice/slice.go | 75 + .../kubernetes/pkg/volume/csi/csi_plugin.go | 3 + .../kubernetes/pkg/volume/csi/expander.go | 2 +- .../k8s.io/kubernetes/pkg/volume/plugins.go | 2 +- .../operationexecutor/operation_executor.go | 4 + .../operationexecutor/operation_generator.go | 14 + .../kubernetes/pkg/volume/util/types/types.go | 17 + .../azure/azure_managedDiskController.go | 5 + cluster-autoscaler/vendor/modules.txt | 221 +- cluster-autoscaler/version/version.go | 2 +- .../pkg/apis/autoscaling.k8s.io/v1/types.go | 2 +- 479 files changed, 50964 insertions(+), 1675 deletions(-) create mode 100644 cluster-autoscaler/cloudprovider/azure/azure_mock_agentpool_client.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go-extensions/LICENSE create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go-extensions/pkg/middleware/arm_client_opts.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go-extensions/pkg/middleware/arm_error_collector.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go-extensions/pkg/middleware/default_http_client.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go-extensions/pkg/middleware/poller.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go-extensions/pkg/middleware/queryparam_policy.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/CHANGELOG.md create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/LICENSE.txt create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/README.md create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/client.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/doc.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/internal/resource/resource_identifier.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/internal/resource/resource_type.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/policy/policy.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/resource_identifier.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/resource_type.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/pipeline.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/policy_bearer_token.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/policy_register_rp.go create mode 100644 
cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/policy_trace_namespace.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/runtime.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/ci.yml create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud/cloud.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud/doc.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/core.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/doc.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/errors.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/etag.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported/exported.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported/pipeline.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported/request.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported/response_error.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log/log.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/async/async.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/body/body.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/fake/fake.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/loc/loc.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/op/op.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/poller.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/util.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared/constants.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared/shared.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/log/doc.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/log/log.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/policy/doc.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/policy/policy.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/doc.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/errors.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/pager.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/pipeline.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_api_version.go create mode 
100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_bearer_token.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_body_download.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_http_header.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_http_trace.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_include_response.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_key_credential.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_logging.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_request_id.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_retry.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_sas_credential.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_telemetry.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/poller.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/request.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/response.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/transport_default_dialer_other.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/transport_default_dialer_wasm.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/transport_default_http_client.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming/doc.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming/progress.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/to/doc.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/to/to.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing/constants.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing/tracing.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/CHANGELOG.md create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/LICENSE.txt create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/MIGRATION.md create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/README.md create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/TROUBLESHOOTING.md create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/assets.json create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/azidentity.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/azure_cli_credential.go create mode 100644 
cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/chained_token_credential.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/ci.yml create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/client_assertion_credential.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/client_certificate_credential.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/client_secret_credential.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/confidential_client.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/default_azure_credential.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/device_code_credential.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/environment_credential.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/errors.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/interactive_browser_credential.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/logging.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/managed_identity_client.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/managed_identity_credential.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/on_behalf_of_credential.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/public_client.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/test-resources-pre.ps1 create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/test-resources.bicep create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/username_password_credential.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/version.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/workload_identity.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/LICENSE.txt create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/diag/diag.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/diag/doc.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/errorinfo/doc.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/errorinfo/errorinfo.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/exported/exported.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/log/doc.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/log/log.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/poller/util.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/temporal/resource.go create mode 100644 
cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/uuid/doc.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/uuid/uuid.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/CHANGELOG.md create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/LICENSE.txt create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/README.md create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/agentpools_client.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/assets.json create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/autorest.md create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/build.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/ci.yml create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/client_factory.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/constants.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/date_type.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/machines_client.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/maintenanceconfigurations_client.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/managedclusters_client.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/managedclustersnapshots_client.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/models.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/models_serde.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/operations_client.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/operationstatusresult_client.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/options.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/privateendpointconnections_client.go create mode 100644 
cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/privatelinkresources_client.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/resolveprivatelinkserviceid_client.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/responses.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/snapshots_client.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/time_rfc3339.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/trustedaccessrolebindings_client.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/trustedaccessroles_client.go create mode 100644 cluster-autoscaler/vendor/github.com/Azure/go-armbalancer/CODE_OF_CONDUCT.md create mode 100644 cluster-autoscaler/vendor/github.com/Azure/go-armbalancer/LICENSE create mode 100644 cluster-autoscaler/vendor/github.com/Azure/go-armbalancer/README.md create mode 100644 cluster-autoscaler/vendor/github.com/Azure/go-armbalancer/SECURITY.md create mode 100644 cluster-autoscaler/vendor/github.com/Azure/go-armbalancer/SUPPORT.md create mode 100644 cluster-autoscaler/vendor/github.com/Azure/go-armbalancer/armbalancer.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/LICENSE create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/cache/cache.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/confidential/confidential.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/errors/error_design.md create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/errors/errors.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/base.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/internal/storage/items.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/internal/storage/partitioned_storage.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/internal/storage/storage.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/exported/exported.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/design.md create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/json.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/mapslice.go create mode 100644 
cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/marshal.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/struct.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/types/time/time.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/local/server.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/oauth.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens/accesstokens.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens/apptype_string.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens/tokens.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority/authority.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority/authorizetype_string.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/comm/comm.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/comm/compress.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/grant/grant.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/ops.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/endpointtype_string.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/mex_document_definitions.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/saml_assertion_definitions.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/version_string.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/wstrust_endpoint.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/wstrust_mex_document.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/wstrust.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/resolvers.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/options/options.go create mode 100644 
cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/shared/shared.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/version/version.go create mode 100644 cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/public/public.go delete mode 100644 cluster-autoscaler/vendor/github.com/felixge/httpsnoop/.travis.yml create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/.gitignore create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/LICENSE create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/MIGRATION_GUIDE.md create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/README.md create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/SECURITY.md create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/VERSION_HISTORY.md create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/claims.go create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/doc.go create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/ecdsa.go create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/ecdsa_utils.go create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/ed25519.go create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/ed25519_utils.go create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/errors.go create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/errors_go1_20.go create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/errors_go_other.go create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/hmac.go create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/map_claims.go create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/none.go create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/parser.go create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/parser_option.go create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/registered_claims.go create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/rsa.go create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/rsa_pss.go create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/rsa_utils.go create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/signing_method.go create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/staticcheck.conf create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/token.go create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/token_option.go create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/types.go create mode 100644 cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/validator.go create mode 100644 cluster-autoscaler/vendor/github.com/kylelemons/godebug/LICENSE create mode 100644 cluster-autoscaler/vendor/github.com/kylelemons/godebug/diff/diff.go create mode 100644 cluster-autoscaler/vendor/github.com/kylelemons/godebug/pretty/.gitignore create mode 100644 cluster-autoscaler/vendor/github.com/kylelemons/godebug/pretty/doc.go create mode 100644 cluster-autoscaler/vendor/github.com/kylelemons/godebug/pretty/public.go create mode 100644 
cluster-autoscaler/vendor/github.com/kylelemons/godebug/pretty/reflect.go create mode 100644 cluster-autoscaler/vendor/github.com/kylelemons/godebug/pretty/structure.go create mode 100644 cluster-autoscaler/vendor/github.com/pkg/browser/LICENSE create mode 100644 cluster-autoscaler/vendor/github.com/pkg/browser/README.md create mode 100644 cluster-autoscaler/vendor/github.com/pkg/browser/browser.go create mode 100644 cluster-autoscaler/vendor/github.com/pkg/browser/browser_darwin.go create mode 100644 cluster-autoscaler/vendor/github.com/pkg/browser/browser_freebsd.go create mode 100644 cluster-autoscaler/vendor/github.com/pkg/browser/browser_linux.go create mode 100644 cluster-autoscaler/vendor/github.com/pkg/browser/browser_netbsd.go create mode 100644 cluster-autoscaler/vendor/github.com/pkg/browser/browser_openbsd.go create mode 100644 cluster-autoscaler/vendor/github.com/pkg/browser/browser_unsupported.go create mode 100644 cluster-autoscaler/vendor/github.com/pkg/browser/browser_windows.go delete mode 100644 cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/README.md create mode 100644 cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/doc.go create mode 100644 cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/doc.go create mode 100644 cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/embedded/embedded.go create mode 100644 cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/noop/noop.go create mode 100644 cluster-autoscaler/vendor/go.uber.org/mock/AUTHORS create mode 100644 cluster-autoscaler/vendor/go.uber.org/mock/LICENSE create mode 100644 cluster-autoscaler/vendor/go.uber.org/mock/gomock/call.go create mode 100644 cluster-autoscaler/vendor/go.uber.org/mock/gomock/callset.go create mode 100644 cluster-autoscaler/vendor/go.uber.org/mock/gomock/controller.go create mode 100644 cluster-autoscaler/vendor/go.uber.org/mock/gomock/doc.go create mode 100644 cluster-autoscaler/vendor/go.uber.org/mock/gomock/matchers.go create mode 100644 cluster-autoscaler/vendor/go.uber.org/mock/mockgen/model/model.go delete mode 100644 cluster-autoscaler/vendor/golang.org/x/crypto/internal/poly1305/bits_compat.go delete mode 100644 cluster-autoscaler/vendor/golang.org/x/crypto/internal/poly1305/bits_go1.13.go create mode 100644 cluster-autoscaler/vendor/golang.org/x/net/http2/testsync.go create mode 100644 cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/slice/slice.go diff --git a/charts/cluster-autoscaler/README.md b/charts/cluster-autoscaler/README.md index 2d13c3b0a44f..9bc2619ca048 100644 --- a/charts/cluster-autoscaler/README.md +++ b/charts/cluster-autoscaler/README.md @@ -382,8 +382,6 @@ vpa: | awsSecretAccessKey | string | `""` | AWS access secret key ([if AWS user keys used](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md#using-aws-credentials)) | | azureClientID | string | `""` | Service Principal ClientID with contributor permission to Cluster and Node ResourceGroup. Required if `cloudProvider=azure` | | azureClientSecret | string | `""` | Service Principal ClientSecret with contributor permission to Cluster and Node ResourceGroup. Required if `cloudProvider=azure` | -| azureClusterName | string | `""` | Azure AKS cluster name. Required if `cloudProvider=azure` | -| azureNodeResourceGroup | string | `""` | Azure resource group where the cluster's nodes are located, typically set as `MC___`. 
Required if `cloudProvider=azure` | | azureResourceGroup | string | `""` | Azure resource group that the cluster is located. Required if `cloudProvider=azure` | | azureSubscriptionID | string | `""` | Azure subscription where the resources are located. Required if `cloudProvider=azure` | | azureTenantID | string | `""` | Azure tenant where the resources are located. Required if `cloudProvider=azure` | diff --git a/charts/cluster-autoscaler/values.yaml b/charts/cluster-autoscaler/values.yaml index edc17f9addd8..15e3177a1c8a 100644 --- a/charts/cluster-autoscaler/values.yaml +++ b/charts/cluster-autoscaler/values.yaml @@ -75,14 +75,6 @@ azureClientID: "" # Required if `cloudProvider=azure` azureClientSecret: "" -# azureClusterName -- Azure AKS cluster name. -# Required if `cloudProvider=azure` -azureClusterName: "" - -# azureNodeResourceGroup -- Azure resource group where the cluster's nodes are located, typically set as `MC___`. -# Required if `cloudProvider=azure` -azureNodeResourceGroup: "" - # azureResourceGroup -- Azure resource group that the cluster is located. # Required if `cloudProvider=azure` azureResourceGroup: "" @@ -375,7 +367,6 @@ serviceMonitor: # serviceMonitor.metricRelabelings -- MetricRelabelConfigs to apply to samples before ingestion. metricRelabelings: {} - # tolerations -- List of node taints to tolerate (requires Kubernetes >= 1.6). tolerations: [] diff --git a/cluster-autoscaler/cloudprovider/aws/auto_scaling_groups.go b/cluster-autoscaler/cloudprovider/aws/auto_scaling_groups.go index 284b5b5c0b32..6c34b7c726dc 100644 --- a/cluster-autoscaler/cloudprovider/aws/auto_scaling_groups.go +++ b/cluster-autoscaler/cloudprovider/aws/auto_scaling_groups.go @@ -141,8 +141,6 @@ func (m *asgCache) register(asg *asg) *asg { return existing } - klog.V(4).Infof("Updating ASG %s", asg.AwsRef.Name) - // Explicit registered groups should always use the manually provided min/max // values and the not the ones returned by the API if !m.explicitlyConfigured[asg.AwsRef] { @@ -310,46 +308,78 @@ func (m *asgCache) DeleteInstances(instances []*AwsInstanceRef) error { } } + placeHolderInstancesCount := m.GetPlaceHolderInstancesCount(instances) + // Check if there are any placeholder instances in the list. + if placeHolderInstancesCount > 0 { + // Log how many placeholder instances were detected in the ASG. + klog.V(4).Infof("Detected %d placeholder instance(s) in ASG %s", + placeHolderInstancesCount, commonAsg.Name) + + asgNames := []string{commonAsg.Name} + asgDetail, err := m.awsService.getAutoscalingGroupsByNames(asgNames) + + if err != nil { + klog.Errorf("Error retrieving details for ASG %s: %v", commonAsg.Name, err) + return err + } + + activeInstancesInAsg := len(asgDetail[0].Instances) + desiredCapacityInAsg := int(*asgDetail[0].DesiredCapacity) + klog.V(4).Infof("ASG %s has placeholder instances: desired capacity = %d, active instances = %d; updating ASG size to match the active instance count", + commonAsg.Name, desiredCapacityInAsg, activeInstancesInAsg) + + // If the desired capacity exceeds the number of active instances, the ASG is + // under-provisioned: some requested instances were never actually created. 
+ // In this case, reduce the ASG size to the count of active instances so the + // placeholder (never-provisioned) instances are no longer requested. + + err = m.setAsgSizeNoLock(commonAsg, activeInstancesInAsg) + + if err != nil { + klog.Errorf("Error reducing ASG %s size to %d: %v", commonAsg.Name, activeInstancesInAsg, err) + return err + } + } + for _, instance := range instances { - // check if the instance is a placeholder - a requested instance that was never created by the node group - // if it is, just decrease the size of the node group, as there's no specific instance we can remove - if m.isPlaceholderInstance(instance) { - klog.V(4).Infof("instance %s is detected as a placeholder, decreasing ASG requested size instead "+ - "of deleting instance", instance.Name) - m.decreaseAsgSizeByOneNoLock(commonAsg) - } else { - // check if the instance is already terminating - if it is, don't bother terminating again - // as doing so causes unnecessary API calls and can cause the curSize cached value to decrement - // unnecessarily. - lifecycle, err := m.findInstanceLifecycle(*instance) - if err != nil { - return err - } - if lifecycle != nil && - *lifecycle == autoscaling.LifecycleStateTerminated || - *lifecycle == autoscaling.LifecycleStateTerminating || - *lifecycle == autoscaling.LifecycleStateTerminatingWait || - *lifecycle == autoscaling.LifecycleStateTerminatingProceed { - klog.V(2).Infof("instance %s is already terminating in state %s, will skip instead", instance.Name, *lifecycle) - continue - } + if m.isPlaceholderInstance(instance) { + // Skip placeholders: they never existed as real instances, and the + // ASG size was already reduced during the placeholder check above. + continue + } + // check if the instance is already terminating - if it is, don't bother terminating again + // as doing so causes unnecessary API calls and can cause the curSize cached value to decrement + // unnecessarily. 
+ lifecycle, err := m.findInstanceLifecycle(*instance) + if err != nil { + return err + } - params := &autoscaling.TerminateInstanceInAutoScalingGroupInput{ - InstanceId: aws.String(instance.Name), - ShouldDecrementDesiredCapacity: aws.Bool(true), - } - start := time.Now() - resp, err := m.awsService.TerminateInstanceInAutoScalingGroup(params) - observeAWSRequest("TerminateInstanceInAutoScalingGroup", err, start) - if err != nil { - return err - } - klog.V(4).Infof(*resp.Activity.Description) + if lifecycle != nil && + *lifecycle == autoscaling.LifecycleStateTerminated || + *lifecycle == autoscaling.LifecycleStateTerminating || + *lifecycle == autoscaling.LifecycleStateTerminatingWait || + *lifecycle == autoscaling.LifecycleStateTerminatingProceed { + klog.V(2).Infof("instance %s is already terminating in state %s, will skip instead", instance.Name, *lifecycle) + continue + } - // Proactively decrement the size so autoscaler makes better decisions - commonAsg.curSize-- + params := &autoscaling.TerminateInstanceInAutoScalingGroupInput{ + InstanceId: aws.String(instance.Name), + ShouldDecrementDesiredCapacity: aws.Bool(true), + } + start := time.Now() + resp, err := m.awsService.TerminateInstanceInAutoScalingGroup(params) + observeAWSRequest("TerminateInstanceInAutoScalingGroup", err, start) + if err != nil { + return err } + klog.V(4).Infof(*resp.Activity.Description) + + // Proactively decrement the size so autoscaler makes better decisions + commonAsg.curSize-- + } return nil } @@ -626,3 +656,14 @@ func (m *asgCache) buildInstanceRefFromAWS(instance *autoscaling.Instance) AwsIn func (m *asgCache) Cleanup() { close(m.interrupt) } + +// GetPlaceHolderInstancesCount returns the number of placeholder instances in the given list +func (m *asgCache) GetPlaceHolderInstancesCount(instances []*AwsInstanceRef) int { + placeholderInstancesCount := 0 + for _, instance := range instances { + if strings.HasPrefix(instance.Name, placeholderInstanceNamePrefix) { + placeholderInstancesCount++ + } + } + return placeholderInstancesCount +} diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/awserr/error.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/awserr/error.go index 1e876ffe8f2c..99849c0e19c0 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/awserr/error.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/awserr/error.go @@ -10,23 +10,24 @@ package awserr // // Example: // -// output, err := s3manage.Upload(svc, input, opts) -// if err != nil { -// if awsErr, ok := err.(awserr.Error); ok { -// // Get error details -// log.Println("Error:", awsErr.Code(), awsErr.Message()) -// -// // Prints out full error message, including original error if there was one. -// log.Println("Error:", awsErr.Error()) -// -// // Get original error -// if origErr := awsErr.OrigErr(); origErr != nil { -// // operate on original error. -// } -// } else { -// fmt.Println(err.Error()) -// } -// } +// output, err := s3manage.Upload(svc, input, opts) +// if err != nil { +// if awsErr, ok := err.(awserr.Error); ok { +// // Get error details +// log.Println("Error:", awsErr.Code(), awsErr.Message()) +// +// // Prints out full error message, including original error if there was one. +// log.Println("Error:", awsErr.Error()) +// +// // Get original error +// if origErr := awsErr.OrigErr(); origErr != nil { +// // operate on original error. +// } +// } else { +// fmt.Println(err.Error()) +// } +// } +// type Error interface { // Satisfy the generic error interface. 
error @@ -99,31 +100,32 @@ func NewBatchError(code, message string, errs []error) BatchedErrors { // // Example: // -// output, err := s3manage.Upload(svc, input, opts) -// if err != nil { -// if reqerr, ok := err.(RequestFailure); ok { -// log.Println("Request failed", reqerr.Code(), reqerr.Message(), reqerr.RequestID()) -// } else { -// log.Println("Error:", err.Error()) -// } -// } +// output, err := s3manage.Upload(svc, input, opts) +// if err != nil { +// if reqerr, ok := err.(RequestFailure); ok { +// log.Println("Request failed", reqerr.Code(), reqerr.Message(), reqerr.RequestID()) +// } else { +// log.Println("Error:", err.Error()) +// } +// } // // Combined with awserr.Error: // -// output, err := s3manage.Upload(svc, input, opts) -// if err != nil { -// if awsErr, ok := err.(awserr.Error); ok { -// // Generic AWS Error with Code, Message, and original error (if any) -// fmt.Println(awsErr.Code(), awsErr.Message(), awsErr.OrigErr()) -// -// if reqErr, ok := err.(awserr.RequestFailure); ok { -// // A service error occurred -// fmt.Println(reqErr.StatusCode(), reqErr.RequestID()) -// } -// } else { -// fmt.Println(err.Error()) -// } -// } +// output, err := s3manage.Upload(svc, input, opts) +// if err != nil { +// if awsErr, ok := err.(awserr.Error); ok { +// // Generic AWS Error with Code, Message, and original error (if any) +// fmt.Println(awsErr.Code(), awsErr.Message(), awsErr.OrigErr()) +// +// if reqErr, ok := err.(awserr.RequestFailure); ok { +// // A service error occurred +// fmt.Println(reqErr.StatusCode(), reqErr.RequestID()) +// } +// } else { +// fmt.Println(err.Error()) +// } +// } +// type RequestFailure interface { Error diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/client/default_retryer.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/client/default_retryer.go index 784a0144f0d9..4d2f7d7b100d 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/client/default_retryer.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/client/default_retryer.go @@ -12,6 +12,7 @@ import ( // DefaultRetryer implements basic retry logic using exponential backoff for // most services. If you want to implement custom retry logic, you can implement the // request.Retryer interface. +// type DefaultRetryer struct { // Num max Retries is the number of max retries that will be performed. // By default, this is zero. 
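For reviewers, here is a minimal, self-contained sketch of the placeholder-handling flow that the auto_scaling_groups.go hunk above introduces. Everything in it is a simplified stand-in, not the real cloudprovider/aws code: the asg struct, deleteInstances, countPlaceholders, and the "i-placeholder" prefix value are illustrative assumptions, and the real AWS calls (setAsgSizeNoLock, TerminateInstanceInAutoScalingGroup) are replaced with prints.

package main

import (
	"fmt"
	"strings"
)

// Stand-in for the provider's placeholderInstanceNamePrefix constant (assumed value).
const placeholderPrefix = "i-placeholder"

// asg is a toy model of an Auto Scaling group: its desired capacity and the
// instances that actually exist in AWS.
type asg struct {
	name            string
	desiredCapacity int
	activeInstances []string
}

// countPlaceholders mirrors GetPlaceHolderInstancesCount: a placeholder is a
// requested instance that was never created, identified by a name prefix.
func countPlaceholders(names []string) int {
	n := 0
	for _, name := range names {
		if strings.HasPrefix(name, placeholderPrefix) {
			n++
		}
	}
	return n
}

// deleteInstances mirrors the new DeleteInstances flow: when placeholders are
// present, shrink the desired capacity to the active-instance count once
// (setAsgSizeNoLock in the real code), then terminate only real instances,
// decrementing the desired capacity per termination.
func deleteInstances(g *asg, names []string) {
	if countPlaceholders(names) > 0 {
		fmt.Printf("ASG %s: desired=%d, active=%d; shrinking desired capacity to active count\n",
			g.name, g.desiredCapacity, len(g.activeInstances))
		g.desiredCapacity = len(g.activeInstances)
	}
	for _, name := range names {
		if strings.HasPrefix(name, placeholderPrefix) {
			// Placeholders were already accounted for by the capacity reduction.
			continue
		}
		fmt.Printf("terminating %s (ShouldDecrementDesiredCapacity=true in the real code)\n", name)
		g.desiredCapacity--
	}
}

func main() {
	g := &asg{name: "my-asg", desiredCapacity: 4, activeInstances: []string{"i-0aaa", "i-0bbb"}}
	deleteInstances(g, []string{"i-placeholder-my-asg-2", "i-0aaa"})
	fmt.Printf("final desired capacity for %s: %d\n", g.name, g.desiredCapacity)
}

The design change mirrored here is that a single up-front capacity reduction replaces the old per-placeholder decreaseAsgSizeByOneNoLock calls, so the termination loop can skip placeholders entirely.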
diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/corehandlers/awsinternal.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/corehandlers/awsinternal.go index ec6fb00e6ab1..140242dd1b8d 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/corehandlers/awsinternal.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/corehandlers/awsinternal.go @@ -1,4 +1,4 @@ // DO NOT EDIT package corehandlers -const isAwsInternal = "" +const isAwsInternal = "" \ No newline at end of file diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/chain_provider.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/chain_provider.go index 28bad8364a97..3348aed28124 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/chain_provider.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/chain_provider.go @@ -37,18 +37,19 @@ var ( // does not return any credentials ChainProvider will return the error // ErrNoValidProvidersFoundInChain // -// creds := credentials.NewChainCredentials( -// []credentials.Provider{ -// &credentials.EnvProvider{}, -// &ec2rolecreds.EC2RoleProvider{ -// Client: ec2metadata.New(sess), -// }, -// }) +// creds := credentials.NewChainCredentials( +// []credentials.Provider{ +// &credentials.EnvProvider{}, +// &ec2rolecreds.EC2RoleProvider{ +// Client: ec2metadata.New(sess), +// }, +// }) +// +// // Usage of ChainCredentials with aws.Config +// svc := ec2.New(session.Must(session.NewSession(&aws.Config{ +// Credentials: creds, +// }))) // -// // Usage of ChainCredentials with aws.Config -// svc := ec2.New(session.Must(session.NewSession(&aws.Config{ -// Credentials: creds, -// }))) type ChainProvider struct { Providers []Provider curr Provider diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/credentials.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/credentials.go index 74f204d27a0a..89a733428b88 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/credentials.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/credentials.go @@ -14,36 +14,38 @@ // // Example of using the environment variable credentials. // -// creds := credentials.NewEnvCredentials() +// creds := credentials.NewEnvCredentials() // -// // Retrieve the credentials value -// credValue, err := creds.Get() -// if err != nil { -// // handle error -// } +// // Retrieve the credentials value +// credValue, err := creds.Get() +// if err != nil { +// // handle error +// } // // Example of forcing credentials to expire and be refreshed on the next Get(). // This may be helpful to proactively expire credentials and refresh them sooner // than they would naturally expire on their own. // -// creds := credentials.NewCredentials(&ec2rolecreds.EC2RoleProvider{}) -// creds.Expire() -// credsValue, err := creds.Get() -// // New credentials will be retrieved instead of from cache. +// creds := credentials.NewCredentials(&ec2rolecreds.EC2RoleProvider{}) +// creds.Expire() +// credsValue, err := creds.Get() +// // New credentials will be retrieved instead of from cache. // -// # Custom Provider +// +// Custom Provider // // Each Provider built into this package also provides a helper method to generate // a Credentials pointer setup with the provider. To use a custom Provider just // create a type which satisfies the Provider interface and pass it to the // NewCredentials method. 
// -// type MyProvider struct{} -// func (m *MyProvider) Retrieve() (Value, error) {...} -// func (m *MyProvider) IsExpired() bool {...} +// type MyProvider struct{} +// func (m *MyProvider) Retrieve() (Value, error) {...} +// func (m *MyProvider) IsExpired() bool {...} +// +// creds := credentials.NewCredentials(&MyProvider{}) +// credValue, err := creds.Get() // -// creds := credentials.NewCredentials(&MyProvider{}) -// credValue, err := creds.Get() package credentials import ( @@ -62,10 +64,10 @@ import ( // when making service API calls. For example, when accessing public // s3 buckets. // -// svc := s3.New(session.Must(session.NewSession(&aws.Config{ -// Credentials: credentials.AnonymousCredentials, -// }))) -// // Access public S3 buckets. +// svc := s3.New(session.Must(session.NewSession(&aws.Config{ +// Credentials: credentials.AnonymousCredentials, +// }))) +// // Access public S3 buckets. var AnonymousCredentials = NewStaticCredentials("", "", "") // A Value is the AWS credentials value for individual credential fields. @@ -148,11 +150,10 @@ func (p ErrorProvider) IsExpired() bool { // provider's struct. // // Example: -// -// type EC2RoleProvider struct { -// Expiry -// ... -// } +// type EC2RoleProvider struct { +// Expiry +// ... +// } type Expiry struct { // The date/time when to expire on expiration time.Time diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/ec2rolecreds/ec2_role_provider.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/ec2rolecreds/ec2_role_provider.go index fcb327f9959f..ff60f4c28acd 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/ec2rolecreds/ec2_role_provider.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/ec2rolecreds/ec2_role_provider.go @@ -25,17 +25,17 @@ const ProviderName = "EC2RoleProvider" // Example how to configure the EC2RoleProvider with custom http Client, Endpoint // or ExpiryWindow // -// p := &ec2rolecreds.EC2RoleProvider{ -// // Pass in a custom timeout to be used when requesting -// // IAM EC2 Role credentials. -// Client: ec2metadata.New(sess, aws.Config{ -// HTTPClient: &http.Client{Timeout: 10 * time.Second}, -// }), +// p := &ec2rolecreds.EC2RoleProvider{ +// // Pass in a custom timeout to be used when requesting +// // IAM EC2 Role credentials. +// Client: ec2metadata.New(sess, aws.Config{ +// HTTPClient: &http.Client{Timeout: 10 * time.Second}, +// }), // -// // Do not use early expiry of credentials. If a non zero value is -// // specified the credentials will be expired early -// ExpiryWindow: 0, -// } +// // Do not use early expiry of credentials. If a non zero value is +// // specified the credentials will be expired early +// ExpiryWindow: 0, +// } type EC2RoleProvider struct { credentials.Expiry diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/endpointcreds/provider.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/endpointcreds/provider.go index f15b3e5c973c..1e30db39160b 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/endpointcreds/provider.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/endpointcreds/provider.go @@ -7,29 +7,26 @@ // // Static credentials will never expire once they have been retrieved. 
The format // of the static credentials response: -// -// { -// "AccessKeyId" : "MUA...", -// "SecretAccessKey" : "/7PC5om....", -// } +// { +// "AccessKeyId" : "MUA...", +// "SecretAccessKey" : "/7PC5om....", +// } // // Refreshable credentials will expire within the "ExpiryWindow" of the Expiration // value in the response. The format of the refreshable credentials response: -// -// { -// "AccessKeyId" : "MUA...", -// "SecretAccessKey" : "/7PC5om....", -// "Token" : "AQoDY....=", -// "Expiration" : "2016-02-25T06:03:31Z" -// } +// { +// "AccessKeyId" : "MUA...", +// "SecretAccessKey" : "/7PC5om....", +// "Token" : "AQoDY....=", +// "Expiration" : "2016-02-25T06:03:31Z" +// } // // Errors should be returned in the following format and only returned with 400 // or 500 HTTP status codes. -// -// { -// "code": "ErrorCode", -// "message": "Helpful error message." -// } +// { +// "code": "ErrorCode", +// "message": "Helpful error message." +// } package endpointcreds import ( diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/plugincreds/provider.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/plugincreds/provider.go index ebc70fb52eb3..714c13ef5e76 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/plugincreds/provider.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/plugincreds/provider.go @@ -7,7 +7,7 @@ // // As of Go 1.8 plugins are only supported on the Linux platform. // -// # Plugin Symbol Name +// Plugin Symbol Name // // The "GetAWSSDKCredentialProvider" is the symbol name that will be used to // lookup the credentials provider getter from the plugin. If you want to use a @@ -18,52 +18,52 @@ // retrieve the credentials, and another to determine if the credentials have // expired. // -// # Plugin Symbol Signature +// Plugin Symbol Signature // // The plugin credential provider requires the symbol to match the // following signature. // -// func() (RetrieveFn func() (key, secret, token string, err error), IsExpiredFn func() bool) +// func() (RetrieveFn func() (key, secret, token string, err error), IsExpiredFn func() bool) // -// # Plugin Implementation Example +// Plugin Implementation Example // // The following is an example implementation of a SDK credential provider using // the plugin provider in this package. See the SDK's example/aws/credential/plugincreds/plugin // folder for a runnable example of this. // -// package main +// package main // -// func main() {} +// func main() {} // -// var myCredProvider provider +// var myCredProvider provider // -// // Build: go build -o plugin.so -buildmode=plugin plugin.go -// func init() { -// // Initialize a mock credential provider with stubs -// myCredProvider = provider{"a","b","c"} -// } +// // Build: go build -o plugin.so -buildmode=plugin plugin.go +// func init() { +// // Initialize a mock credential provider with stubs +// myCredProvider = provider{"a","b","c"} +// } // -// // GetAWSSDKCredentialProvider is the symbol SDK will lookup and use to -// // get the credential provider's retrieve and isExpired functions. -// func GetAWSSDKCredentialProvider() (func() (key, secret, token string, err error), func() bool) { -// return myCredProvider.Retrieve, myCredProvider.IsExpired -// } +// // GetAWSSDKCredentialProvider is the symbol SDK will lookup and use to +// // get the credential provider's retrieve and isExpired functions. 
+// func GetAWSSDKCredentialProvider() (func() (key, secret, token string, err error), func() bool) { +// return myCredProvider.Retrieve, myCredProvider.IsExpired +// } // -// // mock implementation of a type that returns retrieves credentials and -// // returns if they have expired. -// type provider struct { -// key, secret, token string -// } +// // mock implementation of a type that retrieves credentials and +// // returns if they have expired. +// type provider struct { +// key, secret, token string +// } // -// func (p provider) Retrieve() (key, secret, token string, err error) { -// return p.key, p.secret, p.token, nil -// } +// func (p provider) Retrieve() (key, secret, token string, err error) { +// return p.key, p.secret, p.token, nil +// } // -// func (p *provider) IsExpired() bool { -// return false; -// } +// func (p *provider) IsExpired() bool { +// return false; +// } // -// # Configuring SDK for Plugin Credentials +// Configuring SDK for Plugin Credentials // // To configure the SDK to use a plugin's credential provider you'll need to first // open the plugin file using the plugin standard library package. Once you have @@ -72,24 +72,24 @@ // credentials loader of a Session or Config. See the SDK's example/aws/credential/plugincreds // folder for a runnable example of this. // -// // Open plugin, and load it into the process. -// p, err := plugin.Open("somefile.so") -// if err != nil { -// return nil, err -// } -// -// // Create a new Credentials value which will source the provider's Retrieve -// // and IsExpired functions from the plugin. -// creds, err := plugincreds.NewCredentials(p) -// if err != nil { -// return nil, err -// } -// -// // Example to configure a Session with the newly created credentials that -// // will be sourced using the plugin's functionality. -// sess := session.Must(session.NewSession(&aws.Config{ -// Credentials: creds, -// })) +// // Open plugin, and load it into the process. +// p, err := plugin.Open("somefile.so") +// if err != nil { +// return nil, err +// } +// +// // Create a new Credentials value which will source the provider's Retrieve +// // and IsExpired functions from the plugin. +// creds, err := plugincreds.NewCredentials(p) +// if err != nil { +// return nil, err +// } +// +// // Example to configure a Session with the newly created credentials that +// // will be sourced using the plugin's functionality. +// sess := session.Must(session.NewSession(&aws.Config{ +// Credentials: creds, +// })) package plugincreds import ( diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/processcreds/provider.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/processcreds/provider.go index 6047724a87bc..2097a616a78c 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/processcreds/provider.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/processcreds/provider.go @@ -15,52 +15,52 @@ location, with the `credential_process` key and the command you want to be called. You also need to set the AWS_SDK_LOAD_CONFIG environment variable (e.g., `export AWS_SDK_LOAD_CONFIG=1`) to use the shared config file. - [default] - credential_process = /command/to/call + [default] + credential_process = /command/to/call Creating a new session will use the credential process to retrieve credentials. NOTE: If there are credentials in the profile you are using, the credential process will not be used. - // Initialize a session to load credentials. 
- sess, _ := session.NewSession(&aws.Config{ - Region: aws.String("us-east-1")}, - ) + // Initialize a session to load credentials. + sess, _ := session.NewSession(&aws.Config{ + Region: aws.String("us-east-1")}, + ) - // Create S3 service client to use the credentials. - svc := s3.New(sess) + // Create S3 service client to use the credentials. + svc := s3.New(sess) Another way to use the `credential_process` method is by using `credentials.NewCredentials()` and providing a command to be executed to retrieve credentials: - // Create credentials using the ProcessProvider. - creds := processcreds.NewCredentials("/path/to/command") + // Create credentials using the ProcessProvider. + creds := processcreds.NewCredentials("/path/to/command") - // Create service client value configured for credentials. - svc := s3.New(sess, &aws.Config{Credentials: creds}) + // Create service client value configured for credentials. + svc := s3.New(sess, &aws.Config{Credentials: creds}) You can set a non-default timeout for the `credential_process` with another constructor, `credentials.NewCredentialsTimeout()`, providing the timeout. To set a one minute timeout: - // Create credentials using the ProcessProvider. - creds := processcreds.NewCredentialsTimeout( - "/path/to/command", - time.Duration(500) * time.Millisecond) + // Create credentials using the ProcessProvider. + creds := processcreds.NewCredentialsTimeout( + "/path/to/command", + time.Duration(500) * time.Millisecond) If you need more control, you can set any configurable options in the credentials using one or more option functions. For example, you can set a two minute timeout, a credential duration of 60 minutes, and a maximum stdout buffer size of 2k. - creds := processcreds.NewCredentials( - "/path/to/command", - func(opt *ProcessProvider) { - opt.Timeout = time.Duration(2) * time.Minute - opt.Duration = time.Duration(60) * time.Minute - opt.MaxBufSize = 2048 - }) + creds := processcreds.NewCredentials( + "/path/to/command", + func(opt *ProcessProvider) { + opt.Timeout = time.Duration(2) * time.Minute + opt.Duration = time.Duration(60) * time.Minute + opt.MaxBufSize = 2048 + }) You can also use your own `exec.Cmd`: diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/ssocreds/doc.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/ssocreds/doc.go index e531c45533d8..18c940ab3c36 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/ssocreds/doc.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/ssocreds/doc.go @@ -5,54 +5,54 @@ // some other mechanism. The provider must find a valid non-expired access token for the AWS SSO user portal URL in // ~/.aws/sso/cache. If a cached token is not found, it is expired, or the file is malformed an error will be returned. // -// # Loading AWS SSO credentials with the AWS shared configuration file +// Loading AWS SSO credentials with the AWS shared configuration file // // You can use configure AWS SSO credentials from the AWS shared configuration file by // providing the specifying the required keys in the profile: // -// sso_account_id -// sso_region -// sso_role_name -// sso_start_url +// sso_account_id +// sso_region +// sso_role_name +// sso_start_url // // For example, the following defines a profile "devsso" and specifies the AWS SSO parameters that defines the target // account, role, sign-on portal, and the region where the user portal is located. 
Note: all SSO arguments must be // provided, or an error will be returned. // -// [profile devsso] -// sso_start_url = https://my-sso-portal.awsapps.com/start -// sso_role_name = SSOReadOnlyRole -// sso_region = us-east-1 -// sso_account_id = 123456789012 +// [profile devsso] +// sso_start_url = https://my-sso-portal.awsapps.com/start +// sso_role_name = SSOReadOnlyRole +// sso_region = us-east-1 +// sso_account_id = 123456789012 // // Using the config module, you can load the AWS SDK shared configuration, and specify that this profile be used to // retrieve credentials. For example: // -// sess, err := session.NewSessionWithOptions(session.Options{ -// SharedConfigState: session.SharedConfigEnable, -// Profile: "devsso", -// }) -// if err != nil { -// return err -// } +// sess, err := session.NewSessionWithOptions(session.Options{ +// SharedConfigState: session.SharedConfigEnable, +// Profile: "devsso", +// }) +// if err != nil { +// return err +// } // -// # Programmatically loading AWS SSO credentials directly +// Programmatically loading AWS SSO credentials directly // // You can programmatically construct the AWS SSO Provider in your application, and provide the necessary information // to load and retrieve temporary credentials using an access token from ~/.aws/sso/cache. // -// svc := sso.New(sess, &aws.Config{ -// Region: aws.String("us-west-2"), // Client Region must correspond to the AWS SSO user portal region -// }) +// svc := sso.New(sess, &aws.Config{ +// Region: aws.String("us-west-2"), // Client Region must correspond to the AWS SSO user portal region +// }) // -// provider := ssocreds.NewCredentialsWithClient(svc, "123456789012", "SSOReadOnlyRole", "https://my-sso-portal.awsapps.com/start") +// provider := ssocreds.NewCredentialsWithClient(svc, "123456789012", "SSOReadOnlyRole", "https://my-sso-portal.awsapps.com/start") // -// credentials, err := provider.Get() -// if err != nil { -// return err -// } +// credentials, err := provider.Get() +// if err != nil { +// return err +// } // -// # Additional Resources +// Additional Resources // // Configuring the AWS CLI to use AWS Single Sign-On: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-sso.html // diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/ssocreds/sso_cached_token.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/ssocreds/sso_cached_token.go index 7c8ca837432f..54e59f0a2f81 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/ssocreds/sso_cached_token.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/credentials/ssocreds/sso_cached_token.go @@ -5,8 +5,8 @@ import ( "encoding/hex" "encoding/json" "fmt" - "io/ioutil" "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/internal/shareddefaults" + "io/ioutil" "os" "path/filepath" "strconv" diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/csm/doc.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/csm/doc.go index 7c7780f0616b..25a66d1dda22 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/csm/doc.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/csm/doc.go @@ -3,7 +3,7 @@ // control options, and configuration for the CSM client. The client can be // controlled manually, or automatically via the SDK's Session configuration. 
// -// # Enabling CSM client via SDK's Session configuration +// Enabling CSM client via SDK's Session configuration // // The CSM client can be enabled automatically via SDK's Session configuration. // The SDK's session configuration enables the CSM client if the AWS_CSM_PORT @@ -12,39 +12,39 @@ // The configuration options for the CSM client via the SDK's session // configuration are: // -// - AWS_CSM_PORT= -// The port number the CSM agent will receive metrics on. +// * AWS_CSM_PORT= +// The port number the CSM agent will receive metrics on. // -// - AWS_CSM_HOST= -// The hostname, or IP address the CSM agent will receive metrics on. -// Without port number. +// * AWS_CSM_HOST= +// The hostname, or IP address the CSM agent will receive metrics on. +// Without port number. // -// # Manually enabling the CSM client +// Manually enabling the CSM client // // The CSM client can be started, paused, and resumed manually. The Start // function will enable the CSM client to publish metrics to the CSM agent. It // is safe to call Start concurrently, but if Start is called additional times // with different ClientID or address it will panic. // -// r, err := csm.Start("clientID", ":31000") -// if err != nil { -// panic(fmt.Errorf("failed starting CSM: %v", err)) -// } +// r, err := csm.Start("clientID", ":31000") +// if err != nil { +// panic(fmt.Errorf("failed starting CSM: %v", err)) +// } // // When controlling the CSM client manually, you must also inject its request // handlers into the SDK's Session configuration for the SDK's API clients to // publish metrics. // -// sess, err := session.NewSession(&aws.Config{}) -// if err != nil { -// panic(fmt.Errorf("failed loading session: %v", err)) -// } +// sess, err := session.NewSession(&aws.Config{}) +// if err != nil { +// panic(fmt.Errorf("failed loading session: %v", err)) +// } // -// // Add CSM client's metric publishing request handlers to the SDK's -// // Session Configuration. -// r.InjectHandlers(&sess.Handlers) +// // Add CSM client's metric publishing request handlers to the SDK's +// // Session Configuration. +// r.InjectHandlers(&sess.Handlers) // -// # Controlling CSM client +// Controlling CSM client // // Once the CSM client has been enabled the Get function will return a Reporter // value that you can use to pause and resume the metrics published to the CSM @@ -54,16 +54,16 @@ // The Pause method can be called to stop the CSM client publishing metrics to // the CSM agent. The Continue method will resume metric publishing. // -// // Get the CSM client Reporter. -// r := csm.Get() +// // Get the CSM client Reporter. 
+// r := csm.Get() // -// // Will pause monitoring -// r.Pause() -// resp, err = client.GetObject(&s3.GetObjectInput{ -// Bucket: aws.String("bucket"), -// Key: aws.String("key"), -// }) +// // Will pause monitoring +// r.Pause() +// resp, err = client.GetObject(&s3.GetObjectInput{ +// Bucket: aws.String("bucket"), +// Key: aws.String("key"), +// }) // -// // Resume monitoring -// r.Continue() +// // Resume monitoring +// r.Continue() package csm diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/csm/enable.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/csm/enable.go index 6f3222d43cad..4b19e2800e3c 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/csm/enable.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/csm/enable.go @@ -43,18 +43,18 @@ func AddressWithDefaults(host, port string) string { // start the metric listener once and will panic if a different // client ID or port is passed in. // -// r, err := csm.Start("clientID", "127.0.0.1:31000") -// if err != nil { -// panic(fmt.Errorf("expected no error, but received %v", err)) -// } -// sess := session.NewSession() -// r.InjectHandlers(sess.Handlers) +// r, err := csm.Start("clientID", "127.0.0.1:31000") +// if err != nil { +// panic(fmt.Errorf("expected no error, but received %v", err)) +// } +// sess := session.NewSession() +// r.InjectHandlers(sess.Handlers) // -// svc := s3.New(sess) -// out, err := svc.GetObject(&s3.GetObjectInput{ -// Bucket: aws.String("bucket"), -// Key: aws.String("key"), -// }) +// svc := s3.New(sess) +// out, err := svc.GetObject(&s3.GetObjectInput{ +// Bucket: aws.String("bucket"), +// Key: aws.String("key"), +// }) func Start(clientID string, url string) (*Reporter, error) { lock.Lock() defer lock.Unlock() diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/csm/reporter.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/csm/reporter.go index 3a1ec0f8954d..a5ad5999d5d7 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/csm/reporter.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/csm/reporter.go @@ -227,17 +227,17 @@ const ( // InjectHandlers is NOT safe to call concurrently. Calling InjectHandlers // multiple times may lead to unexpected behavior, (e.g. duplicate metrics). // -// // Start must be called in order to inject the correct handlers -// r, err := csm.Start("clientID", "127.0.0.1:8094") -// if err != nil { -// panic(fmt.Errorf("expected no error, but received %v", err)) -// } +// // Start must be called in order to inject the correct handlers +// r, err := csm.Start("clientID", "127.0.0.1:8094") +// if err != nil { +// panic(fmt.Errorf("expected no error, but received %v", err)) +// } // -// sess := session.NewSession() -// r.InjectHandlers(&sess.Handlers) +// sess := session.NewSession() +// r.InjectHandlers(&sess.Handlers) // -// // create a new service client with our client side metric session -// svc := s3.New(sess) +// // create a new service client with our client side metric session +// svc := s3.New(sess) func (rep *Reporter) InjectHandlers(handlers *request.Handlers) { if rep == nil { return diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/doc.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/doc.go index 917618c75fb6..4fcb6161848e 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/doc.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/doc.go @@ -1,7 +1,7 @@ // Package aws provides the core SDK's utilities and shared types. 
Use this package's // utilities to simplify setting and reading API operation parameters. // -// # Value and Pointer Conversion Utilities +// Value and Pointer Conversion Utilities // // This package includes a helper conversion utility for each scalar type the SDK's // APIs use. These utilities make getting a pointer of the scalar, and dereferencing @@ -16,33 +16,33 @@ // to get pointer of a literal string value, because getting the address of a // literal requires assigning the value to a variable first. // -// var strPtr *string +// var strPtr *string // -// // Without the SDK's conversion functions -// str := "my string" -// strPtr = &str +// // Without the SDK's conversion functions +// str := "my string" +// strPtr = &str // -// // With the SDK's conversion functions -// strPtr = aws.String("my string") +// // With the SDK's conversion functions +// strPtr = aws.String("my string") // -// // Convert *string to string value -// str = aws.StringValue(strPtr) +// // Convert *string to string value +// str = aws.StringValue(strPtr) // // In addition to scalars the aws package also includes conversion utilities for // maps and slices of types commonly used in API parameters. The map and slice // conversion functions use a similar naming pattern to the scalar conversion // functions. // -// var strPtrs []*string -// var strs []string = []string{"Go", "Gophers", "Go"} +// var strPtrs []*string +// var strs []string = []string{"Go", "Gophers", "Go"} // -// // Convert []string to []*string -// strPtrs = aws.StringSlice(strs) +// // Convert []string to []*string +// strPtrs = aws.StringSlice(strs) // -// // Convert []*string to []string -// strs = aws.StringValueSlice(strPtrs) +// // Convert []*string to []string +// strs = aws.StringValueSlice(strPtrs) // -// # SDK Default HTTP Client +// SDK Default HTTP Client // // The SDK will use the http.DefaultClient if a HTTP client is not provided to // the SDK's Session, or service client constructor. This means that if the diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/endpoints/defaults.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/endpoints/defaults.go index ce7e1169688b..fca694c3f7c8 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/endpoints/defaults.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/endpoints/defaults.go @@ -89,10 +89,10 @@ func DefaultResolver() Resolver { // DefaultPartitions returns a list of the partitions the SDK is bundled // with. The available partitions are: AWS Standard, AWS China, AWS GovCloud (US), AWS ISO (US), AWS ISOB (US), AWS ISOE (Europe), and AWS ISOF. // -// partitions := endpoints.DefaultPartitions -// for _, p := range partitions { -// // ... inspect partitions -// } +// partitions := endpoints.DefaultPartitions +// for _, p := range partitions { +// // ... inspect partitions +// } func DefaultPartitions() []Partition { return defaultPartitions.Partitions() } diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/logger.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/logger.go index c8b8203e0cdc..49674cc79ebd 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/logger.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/logger.go @@ -92,10 +92,9 @@ type Logger interface { // list of arguments and wrap it so the Logger interface can be used. // // Example: -// -// s3.New(sess, &aws.Config{Logger: aws.LoggerFunc(func(args ...interface{}) { -// fmt.Fprintln(os.Stdout, args...)
-// })}) +// s3.New(sess, &aws.Config{Logger: aws.LoggerFunc(func(args ...interface{}) { +// fmt.Fprintln(os.Stdout, args...) +// })}) type LoggerFunc func(...interface{}) // Log calls the wrapped function with the arguments provided diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/request/request.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/request/request.go index b7532f35fa3f..5ab5fddd0121 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/request/request.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/request/request.go @@ -182,11 +182,11 @@ type Option func(*Request) // // This Option can be used multiple times with a single API operation. // -// var id2, versionID string -// svc.PutObjectWithContext(ctx, params, -// request.WithGetResponseHeader("x-amz-id-2", &id2), -// request.WithGetResponseHeader("x-amz-version-id", &versionID), -// ) +// var id2, versionID string +// svc.PutObjectWithContext(ctx, params, +// request.WithGetResponseHeader("x-amz-id-2", &id2), +// request.WithGetResponseHeader("x-amz-version-id", &versionID), +// ) func WithGetResponseHeader(key string, val *string) Option { return func(r *Request) { r.Handlers.Complete.PushBack(func(req *Request) { @@ -199,8 +199,8 @@ func WithGetResponseHeader(key string, val *string) Option { // headers from the HTTP response and assign them to the passed in headers // variable. The passed in headers pointer must be non-nil. // -// var headers http.Header -// svc.PutObjectWithContext(ctx, params, request.WithGetResponseHeaders(&headers)) +// var headers http.Header +// svc.PutObjectWithContext(ctx, params, request.WithGetResponseHeaders(&headers)) func WithGetResponseHeaders(headers *http.Header) Option { return func(r *Request) { r.Handlers.Complete.PushBack(func(req *Request) { @@ -212,7 +212,7 @@ func WithGetResponseHeaders(headers *http.Header) Option { // WithLogLevel is a request option that will set the request to use a specific // log level when the request is made. // -// svc.PutObjectWithContext(ctx, params, request.WithLogLevel(aws.LogDebugWithHTTPBody) +// svc.PutObjectWithContext(ctx, params, request.WithLogLevel(aws.LogDebugWithHTTPBody)) func WithLogLevel(l aws.LogLevelType) Option { return func(r *Request) { r.Config.LogLevel = aws.LogLevel(l) diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/request/request_pagination.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/request/request_pagination.go index ef1d71f1ed2f..3a043a962002 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/request/request_pagination.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/request/request_pagination.go @@ -17,14 +17,14 @@ import ( // does the pagination between API operations, and Paginator defines the // configuration that will be used per page request. // -// for p.Next() { -// data := p.Page().(*s3.ListObjectsOutput) -// // process the page's data -// // ... -// // break out of loop to stop fetching additional pages -// } +// for p.Next() { +// data := p.Page().(*s3.ListObjectsOutput) +// // process the page's data +// // ... +// // break out of loop to stop fetching additional pages +// } // -// return p.Err() +// return p.Err() // // See service client API operation Pages methods for examples how the SDK will // use the Pagination type. @@ -237,9 +237,9 @@ func (r *Request) NextPage() *Request { // EachPage iterates over each page of a paginated request object.
The fn // parameter should be a function with the following sample signature: // -// func(page *T, lastPage bool) bool { -// return true // return false to stop iterating -// } +// func(page *T, lastPage bool) bool { +// return true // return false to stop iterating +// } // // Where "T" is the structure type matching the output structure of the given // operation. For example, a request object generated by diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/request/timeout_read_closer.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/request/timeout_read_closer.go index a04a20374bf8..694d154b1dfe 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/request/timeout_read_closer.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/request/timeout_read_closer.go @@ -71,7 +71,7 @@ func adaptToResponseTimeoutError(req *Request) { // This will allow for per read timeouts. If a timeout occurred, we will return the // ErrCodeResponseTimeout. // -// svc.PutObjectWithContext(ctx, params, request.WithTimeoutReadCloser(30 * time.Second) +// svc.PutObjectWithContext(ctx, params, request.WithTimeoutReadCloser(30 * time.Second)) func WithResponseReadTimeout(duration time.Duration) Option { return func(r *Request) { diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/session/doc.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/session/doc.go index 3e53a8746113..ff3cc012ae37 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/session/doc.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/session/doc.go @@ -9,7 +9,7 @@ files each time the Session is created. Sharing the Session value across all of your service clients will ensure the configuration is loaded the fewest number of times possible. -# Sessions options from Shared Config +Sessions options from Shared Config By default NewSession will only load credentials from the shared credentials file (~/.aws/credentials). If the AWS_SDK_LOAD_CONFIG environment variable is @@ -19,27 +19,27 @@ values from the shared config (~/.aws/config) and shared credentials SharedConfigState set to SharedConfigEnable will create the session as if the AWS_SDK_LOAD_CONFIG environment variable was set. -# Credential and config loading order +Credential and config loading order The Session will attempt to load configuration and credentials from the environment, configuration files, and other credential sources. The order configuration is loaded in is: - - Environment Variables - - Shared Credentials file - - Shared Configuration file (if SharedConfig is enabled) - - EC2 Instance Metadata (credentials only) + * Environment Variables + * Shared Credentials file + * Shared Configuration file (if SharedConfig is enabled) + * EC2 Instance Metadata (credentials only) The Environment variables for credentials will have precedence over shared config even if SharedConfig is enabled. To override this behavior, and use shared config credentials instead, specify the session.Options.Profile (e.g. when using credential_source=Environment to assume a role). - sess, err := session.NewSessionWithOptions(session.Options{ - Profile: "myProfile", - }) + sess, err := session.NewSessionWithOptions(session.Options{ + Profile: "myProfile", + }) -# Creating Sessions +Creating Sessions Creating a Session without additional options will load the credentials, region, and profile from the environment and shared config automatically. See, @@ -49,6 +49,7 @@ by Session.
// Create Session sess, err := session.NewSession() + When creating Sessions optional aws.Config values can be passed in that will override the default, or loaded, config values the Session is being created with. This allows you to provide additional, or case based, configuration as needed. @@ -81,7 +82,7 @@ profile, or override the shared config state, (AWS_SDK_LOAD_CONFIG). SharedConfigState: session.SharedConfigEnable, }) -# Adding Handlers +Adding Handlers You can add handlers to a session to decorate API operations (e.g. adding HTTP headers). All clients that use the Session receive a copy of the Session's @@ -98,7 +99,7 @@ every requests made. r.ClientInfo.ServiceName, r.Operation, r.Params) }) -# Shared Config Fields +Shared Config Fields By default the SDK will only load the shared credentials file's (~/.aws/credentials) credentials values, and all other config is provided by @@ -130,7 +131,7 @@ other two fields. ; region only supported if SharedConfigEnabled. region = us-east-1 -# Assume Role configuration +Assume Role configuration The role_arn field allows you to configure the SDK to assume an IAM role using a set of credentials from another source, such as when paired with static @@ -145,18 +146,19 @@ specified, such as "source_profile", "credential_source", or mfa_serial = role_session_name = session_name + The SDK supports assuming a role with MFA token. If "mfa_serial" is set, you must also set the Session Option.AssumeRoleTokenProvider. The Session will fail to load if the AssumeRoleTokenProvider is not specified. - sess := session.Must(session.NewSessionWithOptions(session.Options{ - AssumeRoleTokenProvider: stscreds.StdinTokenProvider, - })) + sess := session.Must(session.NewSessionWithOptions(session.Options{ + AssumeRoleTokenProvider: stscreds.StdinTokenProvider, + })) To set up Assume Role outside of a session see the stscreds.AssumeRoleProvider documentation. -# Environment Variables +Environment Variables When a Session is created several environment variables can be set to adjust how the SDK functions, and what configuration data it loads when creating @@ -206,7 +208,7 @@ env values as well. AWS_SDK_LOAD_CONFIG=1 -# Custom Shared Config and Credential Files +Custom Shared Config and Credential Files Shared credentials file path can be set to instruct the SDK to use an alternative file for the shared credentials. If not set the file will be loaded from @@ -222,7 +224,7 @@ $HOME/.aws/config on Linux/Unix based systems, and AWS_CONFIG_FILE=$HOME/my_shared_config -# Custom CA Bundle +Custom CA Bundle Path to a custom Certificate Authority (CA) bundle PEM file that the SDK will use instead of the default system's root CA bundle. Use this only @@ -244,7 +246,7 @@ Setting a custom HTTPClient in the aws.Config options will override this setting To use this option and custom HTTP client, the HTTP client needs to be provided when creating the session, not the service client. -# Custom Client TLS Certificate +Custom Client TLS Certificate The SDK supports the environment and session option being configured with Client TLS certificates that are sent as a part of the client's TLS handshake @@ -265,26 +267,26 @@ This can also be configured via the session.Options ClientTLSCert and ClientTLSK ClientTLSKey: myKeyFile, }) -# Custom EC2 IMDS Endpoint +Custom EC2 IMDS Endpoint The endpoint of the EC2 IMDS client can be configured via the environment variable, AWS_EC2_METADATA_SERVICE_ENDPOINT when creating the client with a Session. See Options.EC2IMDSEndpoint for more details.
- AWS_EC2_METADATA_SERVICE_ENDPOINT=http://169.254.169.254 + AWS_EC2_METADATA_SERVICE_ENDPOINT=http://169.254.169.254 If using a URL with an IPv6 address literal, the IPv6 address component must be enclosed in square brackets. - AWS_EC2_METADATA_SERVICE_ENDPOINT=http://[::1] + AWS_EC2_METADATA_SERVICE_ENDPOINT=http://[::1] The custom EC2 IMDS endpoint can also be specified via the Session options. - sess, err := session.NewSessionWithOptions(session.Options{ - EC2MetadataEndpoint: "http://[::1]", - }) + sess, err := session.NewSessionWithOptions(session.Options{ + EC2MetadataEndpoint: "http://[::1]", + }) -# FIPS and DualStack Endpoints +FIPS and DualStack Endpoints The SDK can be configured to resolve an endpoint with certain capabilities such as FIPS and DualStack. @@ -294,36 +296,36 @@ or programmatically. To configure a FIPS endpoint, set the AWS_USE_FIPS_ENDPOINT environment variable to true or false to enable or disable FIPS endpoint resolution. - AWS_USE_FIPS_ENDPOINT=true + AWS_USE_FIPS_ENDPOINT=true To configure a FIPS endpoint using shared config, set use_fips_endpoint to true or false to enable or disable FIPS endpoint resolution. - [profile myprofile] - region=us-west-2 - use_fips_endpoint=true + [profile myprofile] + region=us-west-2 + use_fips_endpoint=true To configure a FIPS endpoint programmatically - // Option 1: Configure it on a session for all clients - sess, err := session.NewSessionWithOptions(session.Options{ - UseFIPSEndpoint: endpoints.FIPSEndpointStateEnabled, - }) - if err != nil { - // handle error - } + // Option 1: Configure it on a session for all clients + sess, err := session.NewSessionWithOptions(session.Options{ + UseFIPSEndpoint: endpoints.FIPSEndpointStateEnabled, + }) + if err != nil { + // handle error + } - client := s3.New(sess) + client := s3.New(sess) - // Option 2: Configure it per client - sess, err := session.NewSession() - if err != nil { - // handle error - } + // Option 2: Configure it per client + sess, err := session.NewSession() + if err != nil { + // handle error + } - client := s3.New(sess, &aws.Config{ - UseFIPSEndpoint: endpoints.FIPSEndpointStateEnabled, - }) + client := s3.New(sess, &aws.Config{ + UseFIPSEndpoint: endpoints.FIPSEndpointStateEnabled, + }) You can configure a DualStack endpoint using an environment variable, shared config ($HOME/.aws/config), or programmatically. @@ -331,35 +333,35 @@ or programmatically. To configure a DualStack endpoint, set the AWS_USE_DUALSTACK_ENDPOINT environment variable to true or false to enable or disable DualStack endpoint resolution.
- [profile myprofile] - region=us-west-2 - use_dualstack_endpoint=true + [profile myprofile] + region=us-west-2 + use_dualstack_endpoint=true To configure a DualStack endpoint programmatically - // Option 1: Configure it on a session for all clients - sess, err := session.NewSessionWithOptions(session.Options{ - UseDualStackEndpoint: endpoints.DualStackEndpointStateEnabled, - }) - if err != nil { - // handle error - } - - client := s3.New(sess) - - // Option 2: Configure it per client - sess, err := session.NewSession() - if err != nil { - // handle error - } - - client := s3.New(sess, &aws.Config{ - UseDualStackEndpoint: endpoints.DualStackEndpointStateEnabled, - }) + // Option 1: Configure it on a session for all clients + sess, err := session.NewSessionWithOptions(session.Options{ + UseDualStackEndpoint: endpoints.DualStackEndpointStateEnabled, + }) + if err != nil { + // handle error + } + + client := s3.New(sess) + + // Option 2: Configure it per client + sess, err := session.NewSession() + if err != nil { + // handle error + } + + client := s3.New(sess, &aws.Config{ + UseDualStackEndpoint: endpoints.DualStackEndpointStateEnabled, + }) */ package session diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/awstesting/assert.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/awstesting/assert.go index 0bd7c0c4a635..cc7e22a9aa95 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/awstesting/assert.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/awstesting/assert.go @@ -154,7 +154,7 @@ func objectsAreEqual(expected, actual interface{}) bool { // Equal asserts that two objects are equal. // -// assert.Equal(t, 123, 123, "123 and 123 should be equal") +// assert.Equal(t, 123, 123, "123 and 123 should be equal") // // Returns whether the assertion was successful (true) or not (false). // diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/awstesting/cmd/bucket_cleanup/main.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/awstesting/cmd/bucket_cleanup/main.go index 2af49f1287ea..8771ee330311 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/awstesting/cmd/bucket_cleanup/main.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/awstesting/cmd/bucket_cleanup/main.go @@ -18,8 +18,7 @@ import ( // to confirm bucket should be deleted. Positive confirmation is required. 
// // Usage: -// -// go run deleteBuckets.go +// go run deleteBuckets.go func main() { sess := session.Must(session.NewSession()) diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/internal/ini/doc.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/internal/ini/doc.go index fdd5321b4c64..1e55bbd07b91 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/internal/ini/doc.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/internal/ini/doc.go @@ -13,31 +13,30 @@ // } // // Below is the BNF that describes this parser -// -// Grammar: -// stmt -> section | stmt' -// stmt' -> epsilon | expr -// expr -> value (stmt)* | equal_expr (stmt)* -// equal_expr -> value ( ':' | '=' ) equal_expr' -// equal_expr' -> number | string | quoted_string -// quoted_string -> " quoted_string' -// quoted_string' -> string quoted_string_end -// quoted_string_end -> " -// -// section -> [ section' -// section' -> section_value section_close -// section_value -> number | string_subset | boolean | quoted_string_subset -// quoted_string_subset -> " quoted_string_subset' -// quoted_string_subset' -> string_subset quoted_string_end -// quoted_string_subset -> " -// section_close -> ] -// -// value -> number | string_subset | boolean -// string -> ? UTF-8 Code-Points except '\n' (U+000A) and '\r\n' (U+000D U+000A) ? -// string_subset -> ? Code-points excepted by grammar except ':' (U+003A), '=' (U+003D), '[' (U+005B), and ']' (U+005D) ? -// -// SkipState will skip (NL WS)+ -// -// comment -> # comment' | ; comment' -// comment' -> epsilon | value +// Grammar: +// stmt -> section | stmt' +// stmt' -> epsilon | expr +// expr -> value (stmt)* | equal_expr (stmt)* +// equal_expr -> value ( ':' | '=' ) equal_expr' +// equal_expr' -> number | string | quoted_string +// quoted_string -> " quoted_string' +// quoted_string' -> string quoted_string_end +// quoted_string_end -> " +// +// section -> [ section' +// section' -> section_value section_close +// section_value -> number | string_subset | boolean | quoted_string_subset +// quoted_string_subset -> " quoted_string_subset' +// quoted_string_subset' -> string_subset quoted_string_end +// quoted_string_subset -> " +// section_close -> ] +// +// value -> number | string_subset | boolean +// string -> ? UTF-8 Code-Points except '\n' (U+000A) and '\r\n' (U+000D U+000A) ? +// string_subset -> ? Code-points excepted by grammar except ':' (U+003A), '=' (U+003D), '[' (U+005B), and ']' (U+005D) ? 
+// +// SkipState will skip (NL WS)+ +// +// comment -> # comment' | ; comment' +// comment' -> epsilon | value package ini diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/internal/ini/literal_tokens.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/internal/ini/literal_tokens.go index 6deb8ae4fb4d..b1b686086a9c 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/internal/ini/literal_tokens.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/internal/ini/literal_tokens.go @@ -153,7 +153,7 @@ func (v ValueType) String() string { // ValueType enums const ( - NoneType = ValueType(iota) + NoneType = ValueType(iota) DecimalType // deprecated IntegerType // deprecated StringType @@ -166,9 +166,9 @@ type Value struct { Type ValueType raw []rune - integer int64 // deprecated + integer int64 // deprecated decimal float64 // deprecated - boolean bool // deprecated + boolean bool // deprecated str string } diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/internal/s3shared/arn/accesspoint_arn.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/internal/s3shared/arn/accesspoint_arn.go index 7b6778e0bd67..24648493d4ba 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/internal/s3shared/arn/accesspoint_arn.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/internal/s3shared/arn/accesspoint_arn.go @@ -21,8 +21,9 @@ func (a AccessPointARN) GetARN() arn.ARN { // AccessPoint resource. // // Supported Access point resource format: -// - Access point format: arn:{partition}:s3:{region}:{accountId}:accesspoint/{accesspointName} -// - example: arn.aws.s3.us-west-2.012345678901:accesspoint/myaccesspoint +// - Access point format: arn:{partition}:s3:{region}:{accountId}:accesspoint/{accesspointName} +// - example: arn.aws.s3.us-west-2.012345678901:accesspoint/myaccesspoint +// func ParseAccessPointResource(a arn.ARN, resParts []string) (AccessPointARN, error) { if len(a.Region) == 0 { return AccessPointARN{}, InvalidARNError{ARN: a, Reason: "region not set"} diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/internal/s3shared/arn/outpost_arn.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/internal/s3shared/arn/outpost_arn.go index d06f1f18d994..e2fafd1a8f9f 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/internal/s3shared/arn/outpost_arn.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/internal/s3shared/arn/outpost_arn.go @@ -17,14 +17,15 @@ type OutpostARN interface { // // Currently supported outpost ARN formats: // * Outpost AccessPoint ARN format: -// - ARN format: arn:{partition}:s3-outposts:{region}:{accountId}:outpost/{outpostId}/accesspoint/{accesspointName} -// - example: arn:aws:s3-outposts:us-west-2:012345678901:outpost/op-1234567890123456/accesspoint/myaccesspoint +// - ARN format: arn:{partition}:s3-outposts:{region}:{accountId}:outpost/{outpostId}/accesspoint/{accesspointName} +// - example: arn:aws:s3-outposts:us-west-2:012345678901:outpost/op-1234567890123456/accesspoint/myaccesspoint // // * Outpost Bucket ARN format: -// - ARN format: arn:{partition}:s3-outposts:{region}:{accountId}:outpost/{outpostId}/bucket/{bucketName} -// - example: arn:aws:s3-outposts:us-west-2:012345678901:outpost/op-1234567890123456/bucket/mybucket +// - ARN format: arn:{partition}:s3-outposts:{region}:{accountId}:outpost/{outpostId}/bucket/{bucketName} +// - example: arn:aws:s3-outposts:us-west-2:012345678901:outpost/op-1234567890123456/bucket/mybucket // // Other outpost ARN formats may be supported and added in the future. 
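+// A hedged usage sketch (an editorial illustration, not upstream doc text):
+// parse the raw ARN with this SDK's arn package first, then hand the resource
+// segments, minus the leading "outpost" label, to the parser. The segment
+// slice below is split by hand purely for illustration; the SDK normally
+// derives it from the parsed ARN's Resource field.
+//
+//	a, err := arn.Parse("arn:aws:s3-outposts:us-west-2:012345678901:outpost/op-1234567890123456/accesspoint/myaccesspoint")
+//	if err != nil {
+//		return err
+//	}
+//	resParts := []string{"op-1234567890123456", "accesspoint", "myaccesspoint"} // assumed segment layout
+//	outpostARN, err := ParseOutpostARNResource(a, resParts)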
+// func ParseOutpostARNResource(a arn.ARN, resParts []string) (OutpostARN, error) { if len(a.Region) == 0 { return nil, InvalidARNError{ARN: a, Reason: "region not set"} @@ -108,6 +109,7 @@ func (o OutpostBucketARN) GetARN() arn.ARN { // bucket resource id. // // parseBucketResource only parses the bucket resource id. +// func parseBucketResource(a arn.ARN, resParts []string) (bucketName string, err error) { if len(resParts) == 0 { return bucketName, InvalidARNError{ARN: a, Reason: "bucket resource-id not set"} diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/internal/sdkmath/floor.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/internal/sdkmath/floor.go index 326efc9eca31..a8452878324d 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/internal/sdkmath/floor.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/internal/sdkmath/floor.go @@ -8,7 +8,6 @@ import "math" // Round returns the nearest integer, rounding half away from zero. // // Special cases are: -// // Round(±0) = ±0 // Round(±Inf) = ±Inf // Round(NaN) = NaN diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/internal/sdkmath/floor_go1.9.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/internal/sdkmath/floor_go1.9.go index 24232660dd2c..a3ae3e5dba8c 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/internal/sdkmath/floor_go1.9.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/internal/sdkmath/floor_go1.9.go @@ -19,7 +19,6 @@ const ( // Round returns the nearest integer, rounding half away from zero. // // Special cases are: -// // Round(±0) = ±0 // Round(±Inf) = ±Inf // Round(NaN) = NaN diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/private/model/api/api.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/private/model/api/api.go index deb0cb674d72..738b31ecd792 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/private/model/api/api.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/private/model/api/api.go @@ -95,11 +95,11 @@ type Metadata struct { EndpointsID string ServiceID string - NoResolveEndpoint bool + NoResolveEndpoint bool AWSQueryCompatible *awsQueryCompatible } -type awsQueryCompatible struct{} +type awsQueryCompatible struct {} // ProtocolSettings define how the SDK should handle requests in the context // of a protocol.
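The sdkmath Round documentation above pins down half-away-from-zero rounding with three special cases. A minimal, self-contained sketch of those semantics (equivalent to math.Round, which Go 1.10+ provides directly; the helper name below is illustrative, not the SDK's internal code):

	package main

	import (
		"fmt"
		"math"
	)

	// roundHalfAwayFromZero rounds halves away from zero and preserves the
	// documented special cases: Round(±0) = ±0, Round(±Inf) = ±Inf, Round(NaN) = NaN.
	func roundHalfAwayFromZero(x float64) float64 {
		if math.IsNaN(x) || math.IsInf(x, 0) || x == 0 {
			return x
		}
		if x > 0 {
			return math.Floor(x + 0.5)
		}
		return math.Ceil(x - 0.5)
	}

	func main() {
		fmt.Println(roundHalfAwayFromZero(2.5), roundHalfAwayFromZero(-2.5), roundHalfAwayFromZero(1.4)) // 3 -3 1
	}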
diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/private/model/api/legacy_jsonvalue.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/private/model/api/legacy_jsonvalue.go index 6e424bd52950..4d59ebf31890 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/private/model/api/legacy_jsonvalue.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/private/model/api/legacy_jsonvalue.go @@ -11,196 +11,196 @@ type legacyJSONValues struct { } var legacyJSONValueShapes = map[string]map[string]legacyJSONValues{ - "braket": { - "CreateQuantumTaskRequest": { + "braket": map[string]legacyJSONValues{ + "CreateQuantumTaskRequest": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ - "action": {}, - "deviceParameters": {}, + "action": struct{}{}, + "deviceParameters": struct{}{}, }, }, - "GetDeviceResponse": { + "GetDeviceResponse": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ - "deviceCapabilities": {}, + "deviceCapabilities": struct{}{}, }, }, - "GetQuantumTaskResponse": { + "GetQuantumTaskResponse": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ - "deviceParameters": {}, + "deviceParameters": struct{}{}, }, }, }, - "cloudwatchrum": { - "RumEvent": { + "cloudwatchrum": map[string]legacyJSONValues{ + "RumEvent": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ - "details": {}, - "metadata": {}, + "details": struct{}{}, + "metadata": struct{}{}, }, }, }, - "lexruntimeservice": { - "PostContentRequest": { + "lexruntimeservice": map[string]legacyJSONValues{ + "PostContentRequest": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ - "requestAttributes": {}, + "requestAttributes": struct{}{}, //"ActiveContexts": struct{}{}, - Disabled because JSON List - "sessionAttributes": {}, + "sessionAttributes": struct{}{}, }, }, - "PostContentResponse": { + "PostContentResponse": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ // "alternativeIntents": struct{}{}, - Disabled because JSON List - "sessionAttributes": {}, - "nluIntentConfidence": {}, - "slots": {}, + "sessionAttributes": struct{}{}, + "nluIntentConfidence": struct{}{}, + "slots": struct{}{}, //"activeContexts": struct{}{}, - Disabled because JSON List }, }, - "PutSessionResponse": { + "PutSessionResponse": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ // "activeContexts": struct{}{}, - Disabled because JSON List - "slots": {}, - "sessionAttributes": {}, + "slots": struct{}{}, + "sessionAttributes": struct{}{}, }, }, }, - "lookoutequipment": { - "DatasetSchema": { + "lookoutequipment": map[string]legacyJSONValues{ + "DatasetSchema": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ - "InlineDataSchema": {}, + "InlineDataSchema": struct{}{}, }, }, - "DescribeDatasetResponse": { + "DescribeDatasetResponse": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ - "Schema": {}, + "Schema": struct{}{}, }, }, - "DescribeModelResponse": { + "DescribeModelResponse": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ - "Schema": {}, - "ModelMetrics": {}, + "Schema": struct{}{}, + "ModelMetrics": struct{}{}, }, }, }, - "networkmanager": { - "CoreNetworkPolicy": { + "networkmanager": map[string]legacyJSONValues{ + "CoreNetworkPolicy": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ - "PolicyDocument": {}, + "PolicyDocument": struct{}{}, }, }, - "GetResourcePolicyResponse": { + 
"GetResourcePolicyResponse": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ - "PolicyDocument": {}, + "PolicyDocument": struct{}{}, }, }, - "PutCoreNetworkPolicyRequest": { + "PutCoreNetworkPolicyRequest": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ - "PolicyDocument": {}, + "PolicyDocument": struct{}{}, }, }, - "PutResourcePolicyRequest": { + "PutResourcePolicyRequest": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ - "PolicyDocument": {}, + "PolicyDocument": struct{}{}, }, }, }, - "personalizeevents": { - "Event": { + "personalizeevents": map[string]legacyJSONValues{ + "Event": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ - "properties": {}, + "properties": struct{}{}, }, }, - "Item": { + "Item": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ - "properties": {}, + "properties": struct{}{}, }, }, - "User": { + "User": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ - "properties": {}, + "properties": struct{}{}, }, }, }, - "pricing": { - "PriceListJsonItems": { + "pricing": map[string]legacyJSONValues{ + "PriceListJsonItems": legacyJSONValues{ Type: "list", ListMemberRef: true, }, }, - "rekognition": { - "HumanLoopActivationOutput": { + "rekognition": map[string]legacyJSONValues{ + "HumanLoopActivationOutput": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ - "HumanLoopActivationConditionsEvaluationResults": {}, + "HumanLoopActivationConditionsEvaluationResults": struct{}{}, }, }, }, - "sagemaker": { - "HumanLoopActivationConditionsConfig": { + "sagemaker": map[string]legacyJSONValues{ + "HumanLoopActivationConditionsConfig": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ - "HumanLoopActivationConditions": {}, + "HumanLoopActivationConditions": struct{}{}, }, }, }, - "schemas": { - "GetResourcePolicyResponse": { + "schemas": map[string]legacyJSONValues{ + "GetResourcePolicyResponse": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ - "Policy": {}, + "Policy": struct{}{}, }, }, - "PutResourcePolicyRequest": { + "PutResourcePolicyRequest": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ - "Policy": {}, + "Policy": struct{}{}, }, }, - "PutResourcePolicyResponse": { + "PutResourcePolicyResponse": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ - "Policy": {}, + "Policy": struct{}{}, }, }, - "GetResourcePolicyOutput": { + "GetResourcePolicyOutput": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ - "Policy": {}, + "Policy": struct{}{}, }, }, - "PutResourcePolicyInput": { + "PutResourcePolicyInput": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ - "Policy": {}, + "Policy": struct{}{}, }, }, - "PutResourcePolicyOutput": { + "PutResourcePolicyOutput": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ - "Policy": {}, + "Policy": struct{}{}, }, }, }, - "textract": { - "HumanLoopActivationOutput": { + "textract": map[string]legacyJSONValues{ + "HumanLoopActivationOutput": legacyJSONValues{ Type: "structure", StructMembers: map[string]struct{}{ - "HumanLoopActivationConditionsEvaluationResults": {}, + "HumanLoopActivationConditionsEvaluationResults": struct{}{}, }, }, }, diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/private/model/api/load.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/private/model/api/load.go index 
a9091d77aa01..2ff3614d1caf 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/private/model/api/load.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/private/model/api/load.go @@ -123,11 +123,11 @@ func attachModelFiles(modelPath string, modelFiles ...modelLoader) error { // pattern passed in. Returns the path of the model file to be loaded. Includes // all versions of a service model. // -// e.g: -// models/apis/*/*/api-2.json +// e.g: +// models/apis/*/*/api-2.json // -// Or with specific model file: -// models/apis/service/version/api-2.json +// Or with specific model file: +// models/apis/service/version/api-2.json func ExpandModelGlobPath(globs ...string) ([]string, error) { modelPaths := []string{} @@ -150,7 +150,7 @@ func ExpandModelGlobPath(globs ...string) ([]string, error) { // Uses the third from last path element to determine unique service. Only one // service version will be included. // -// models/apis/service/version/api-2.json +// models/apis/service/version/api-2.json func TrimModelServiceVersions(modelPaths []string) (include, exclude []string) { sort.Strings(modelPaths) diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/private/model/cli/gen-endpoints/main.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/private/model/cli/gen-endpoints/main.go index ed33fbce3656..f686c03231df 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/private/model/cli/gen-endpoints/main.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/private/model/cli/gen-endpoints/main.go @@ -4,7 +4,7 @@ // Command gen-endpoints parses a JSON description of the AWS endpoint // discovery logic and generates a Go file which returns an endpoint. // -// aws-gen-goendpoints apis/_endpoints.json aws/endpoints_map.go +// aws-gen-goendpoints apis/_endpoints.json aws/endpoints_map.go package main import ( @@ -18,9 +18,8 @@ import ( // Generates the endpoints from json description // // Args: -// -// -model The definition file to use -// -out The output file to generate +// -model The definition file to use +// -out The output file to generate func main() { var modelName, outName string flag.StringVar(&modelName, "model", "", "Endpoints definition model") diff --git a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/private/protocol/eventstream/eventstreamtest/testing.go b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/private/protocol/eventstream/eventstreamtest/testing.go index aeff141263ce..74d6bb420ba3 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/private/protocol/eventstream/eventstreamtest/testing.go +++ b/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/private/protocol/eventstream/eventstreamtest/testing.go @@ -16,12 +16,12 @@ import ( "testing" "time" - "golang.org/x/net/http2" "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws" "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws/session" "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/awstesting/unit" "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/private/protocol" "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/private/protocol/eventstream" + "golang.org/x/net/http2" ) const ( diff --git a/cluster-autoscaler/cloudprovider/aws/aws_cloud_provider_test.go b/cluster-autoscaler/cloudprovider/aws/aws_cloud_provider_test.go index 0033d27c68ea..478a13112779 100644 --- a/cluster-autoscaler/cloudprovider/aws/aws_cloud_provider_test.go +++ b/cluster-autoscaler/cloudprovider/aws/aws_cloud_provider_test.go @@ 
-17,8 +17,7 @@ limitations under the License. package aws import ( - "testing" - + "fmt" "github.com/stretchr/testify/assert" "github.com/stretchr/testify/mock" apiv1 "k8s.io/api/core/v1" @@ -27,6 +26,7 @@ import ( "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/aws" "k8s.io/autoscaler/cluster-autoscaler/cloudprovider/aws/aws-sdk-go/service/autoscaling" "k8s.io/autoscaler/cluster-autoscaler/config" + "testing" ) var testAwsManager = &AwsManager{ @@ -603,7 +603,7 @@ func TestDeleteNodesWithPlaceholder(t *testing.T) { err = asgs[0].DeleteNodes([]*apiv1.Node{node}) assert.NoError(t, err) a.AssertNumberOfCalls(t, "SetDesiredCapacity", 1) - a.AssertNumberOfCalls(t, "DescribeAutoScalingGroupsPages", 1) + a.AssertNumberOfCalls(t, "DescribeAutoScalingGroupsPages", 2) newSize, err := asgs[0].TargetSize() assert.NoError(t, err) @@ -648,6 +648,115 @@ func TestDeleteNodesAfterMultipleRefreshes(t *testing.T) { assert.NoError(t, err) } +func TestDeleteNodesWithPlaceholderAndStaleCache(t *testing.T) { + // This test validates the scenario where the ASG cache is not in sync with the AutoScaling configuration. + // We take an example where the ASG size is 10 and the cache has 3 instances: "i-0000", "i-0001" and "i-0002", + // but the ASG actually has 6 instances, i-0000 to i-0005. When DeleteInstances is called with 2 instances ("i-0000", "i-0001") + // and the placeholders, CAS will terminate only these 2 instances after reducing the ASG size by the count of placeholders. + + a := &autoScalingMock{} + provider := testProvider(t, newTestAwsManagerWithAsgs(t, a, nil, []string{"1:10:test-asg"})) + asgs := provider.NodeGroups() + commonAsg := &asg{ + AwsRef: AwsRef{Name: asgs[0].Id()}, + minSize: asgs[0].MinSize(), + maxSize: asgs[0].MaxSize(), + } + + // The desired capacity will be set to 6 because the ASG has 4 placeholders. + a.On("SetDesiredCapacity", &autoscaling.SetDesiredCapacityInput{ + AutoScalingGroupName: aws.String(asgs[0].Id()), + DesiredCapacity: aws.Int64(6), + HonorCooldown: aws.Bool(false), + }).Return(&autoscaling.SetDesiredCapacityOutput{}) + + // Look up the current number of instances...
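+ // The mock below returns the stale desired capacity of 10 together with the
+ // six real instances on the first DescribeAutoScalingGroupsPages call (the
+ // Refresh), then switches expectedInstancesCount to 4 so the second describe,
+ // asserted at the end of the test, observes the shrunken group.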
+ var expectedInstancesCount int64 = 10 + a.On("DescribeAutoScalingGroupsPages", + &autoscaling.DescribeAutoScalingGroupsInput{ + AutoScalingGroupNames: aws.StringSlice([]string{"test-asg"}), + MaxRecords: aws.Int64(maxRecordsReturnedByAPI), + }, + mock.AnythingOfType("func(*autoscaling.DescribeAutoScalingGroupsOutput, bool) bool"), + ).Run(func(args mock.Arguments) { + fn := args.Get(1).(func(*autoscaling.DescribeAutoScalingGroupsOutput, bool) bool) + fn(testNamedDescribeAutoScalingGroupsOutput("test-asg", expectedInstancesCount, "i-0000", "i-0001", "i-0002", "i-0003", "i-0004", "i-0005"), false) + + expectedInstancesCount = 4 + }).Return(nil) + + a.On("DescribeScalingActivities", + &autoscaling.DescribeScalingActivitiesInput{ + AutoScalingGroupName: aws.String("test-asg"), + }, + ).Return(&autoscaling.DescribeScalingActivitiesOutput{}, nil) + + provider.Refresh() + + initialSize, err := asgs[0].TargetSize() + assert.NoError(t, err) + assert.Equal(t, 10, initialSize) + + var awsInstanceRefs []AwsInstanceRef + instanceToAsg := make(map[AwsInstanceRef]*asg) + + var nodes []*apiv1.Node + for i := 3; i <= 9; i++ { + providerId := fmt.Sprintf("aws:///us-east-1a/i-placeholder-test-asg-%d", i) + node := &apiv1.Node{ + Spec: apiv1.NodeSpec{ + ProviderID: providerId, + }, + } + nodes = append(nodes, node) + awsInstanceRef := AwsInstanceRef{ + ProviderID: providerId, + Name: fmt.Sprintf("i-placeholder-test-asg-%d", i), + } + awsInstanceRefs = append(awsInstanceRefs, awsInstanceRef) + instanceToAsg[awsInstanceRef] = commonAsg + } + + for i := 0; i <= 2; i++ { + providerId := fmt.Sprintf("aws:///us-east-1a/i-000%d", i) + node := &apiv1.Node{ + Spec: apiv1.NodeSpec{ + ProviderID: providerId, + }, + } + // only set 2 of the 3 active instances to be terminated + if i < 2 { + nodes = append(nodes, node) + a.On("TerminateInstanceInAutoScalingGroup", &autoscaling.TerminateInstanceInAutoScalingGroupInput{ + InstanceId: aws.String(fmt.Sprintf("i-000%d", i)), + ShouldDecrementDesiredCapacity: aws.Bool(true), + }).Return(&autoscaling.TerminateInstanceInAutoScalingGroupOutput{ + Activity: &autoscaling.Activity{Description: aws.String("Deleted instance")}, + }) + } + awsInstanceRef := AwsInstanceRef{ + ProviderID: providerId, + Name: fmt.Sprintf("i-000%d", i), + } + awsInstanceRefs = append(awsInstanceRefs, awsInstanceRef) + instanceToAsg[awsInstanceRef] = commonAsg + } + + // modify the provider cache to create a disparity between the ASG and the cache + provider.awsManager.asgCache.asgToInstances[AwsRef{Name: "test-asg"}] = awsInstanceRefs + provider.awsManager.asgCache.instanceToAsg = instanceToAsg + + // call DeleteNodes with the 2 nodes and the remaining placeholders + err = asgs[0].DeleteNodes(nodes) + assert.NoError(t, err) + a.AssertNumberOfCalls(t, "SetDesiredCapacity", 1) + a.AssertNumberOfCalls(t, "DescribeAutoScalingGroupsPages", 2) + + // This ensures only the 2 instances mocked in this unit test are terminated. + a.AssertNumberOfCalls(t, "TerminateInstanceInAutoScalingGroup", 2) + +} + func TestGetResourceLimiter(t *testing.T) { mockAutoScaling := &autoScalingMock{} mockEC2 := &ec2Mock{} diff --git a/cluster-autoscaler/cloudprovider/aws/ec2_instance_types.go b/cluster-autoscaler/cloudprovider/aws/ec2_instance_types.go index 4781bbbfd073..2d3c552c858a 100644 --- a/cluster-autoscaler/cloudprovider/aws/ec2_instance_types.go +++ b/cluster-autoscaler/cloudprovider/aws/ec2_instance_types.go @@ -28,7 +28,7 @@ type InstanceType struct { } // StaticListLastUpdateTime is a string declaring the last time the static
list was updated. -var StaticListLastUpdateTime = "2023-11-29" +var StaticListLastUpdateTime = "2024-07-15" // InstanceTypes is a map of ec2 resources var InstanceTypes = map[string]*InstanceType{ @@ -1110,6 +1110,13 @@ var InstanceTypes = map[string]*InstanceType{ GPU: 0, Architecture: "arm64", }, + "c7gd.metal": { + InstanceType: "c7gd.metal", + VCPU: 64, + MemoryMb: 131072, + GPU: 0, + Architecture: "arm64", + }, "c7gd.xlarge": { InstanceType: "c7gd.xlarge", VCPU: 4, @@ -1166,6 +1173,13 @@ var InstanceTypes = map[string]*InstanceType{ GPU: 0, Architecture: "arm64", }, + "c7gn.metal": { + InstanceType: "c7gn.metal", + VCPU: 64, + MemoryMb: 131072, + GPU: 0, + Architecture: "arm64", + }, "c7gn.xlarge": { InstanceType: "c7gn.xlarge", VCPU: 4, @@ -1376,20 +1390,6 @@ var InstanceTypes = map[string]*InstanceType{ GPU: 0, Architecture: "amd64", }, - "g2.2xlarge": { - InstanceType: "g2.2xlarge", - VCPU: 8, - MemoryMb: 15360, - GPU: 1, - Architecture: "amd64", - }, - "g2.8xlarge": { - InstanceType: "g2.8xlarge", - VCPU: 32, - MemoryMb: 61440, - GPU: 4, - Architecture: "amd64", - }, "g3.16xlarge": { InstanceType: "g3.16xlarge", VCPU: 64, @@ -1600,6 +1600,76 @@ var InstanceTypes = map[string]*InstanceType{ GPU: 1, Architecture: "arm64", }, + "g6.12xlarge": { + InstanceType: "g6.12xlarge", + VCPU: 48, + MemoryMb: 196608, + GPU: 4, + Architecture: "amd64", + }, + "g6.16xlarge": { + InstanceType: "g6.16xlarge", + VCPU: 64, + MemoryMb: 262144, + GPU: 1, + Architecture: "amd64", + }, + "g6.24xlarge": { + InstanceType: "g6.24xlarge", + VCPU: 96, + MemoryMb: 393216, + GPU: 4, + Architecture: "amd64", + }, + "g6.2xlarge": { + InstanceType: "g6.2xlarge", + VCPU: 8, + MemoryMb: 32768, + GPU: 1, + Architecture: "amd64", + }, + "g6.48xlarge": { + InstanceType: "g6.48xlarge", + VCPU: 192, + MemoryMb: 786432, + GPU: 8, + Architecture: "amd64", + }, + "g6.4xlarge": { + InstanceType: "g6.4xlarge", + VCPU: 16, + MemoryMb: 65536, + GPU: 1, + Architecture: "amd64", + }, + "g6.8xlarge": { + InstanceType: "g6.8xlarge", + VCPU: 32, + MemoryMb: 131072, + GPU: 1, + Architecture: "amd64", + }, + "g6.xlarge": { + InstanceType: "g6.xlarge", + VCPU: 4, + MemoryMb: 16384, + GPU: 1, + Architecture: "amd64", + }, + "gr6.4xlarge": { + InstanceType: "gr6.4xlarge", + VCPU: 16, + MemoryMb: 131072, + GPU: 1, + Architecture: "amd64", + }, + "gr6.8xlarge": { + InstanceType: "gr6.8xlarge", + VCPU: 32, + MemoryMb: 262144, + GPU: 1, + Architecture: "amd64", + }, "h1.16xlarge": { InstanceType: "h1.16xlarge", VCPU: 64, @@ -3245,6 +3315,13 @@ var InstanceTypes = map[string]*InstanceType{ GPU: 0, Architecture: "arm64", }, + "m7gd.metal": { + InstanceType: "m7gd.metal", + VCPU: 64, + MemoryMb: 262144, + GPU: 0, + Architecture: "arm64", + }, "m7gd.xlarge": { InstanceType: "m7gd.xlarge", VCPU: 4, @@ -3371,6 +3448,13 @@ var InstanceTypes = map[string]*InstanceType{ GPU: 0, Architecture: "amd64", }, + "mac2-m1ultra.metal": { + InstanceType: "mac2-m1ultra.metal", + VCPU: 20, + MemoryMb: 131072, + GPU: 0, + Architecture: "amd64", + }, "mac2-m2.metal": { InstanceType: "mac2-m2.metal", VCPU: 8, @@ -3378,6 +3462,13 @@ var InstanceTypes = map[string]*InstanceType{ GPU: 0, Architecture: "amd64", }, + "mac2-m2pro.metal": { + InstanceType: "mac2-m2pro.metal", + VCPU: 12, + MemoryMb: 32768, + GPU: 0, + Architecture: "amd64", + }, "mac2.metal": { InstanceType: "mac2.metal", VCPU: 8, @@ -4592,7 +4683,7 @@ var InstanceTypes = map[string]*InstanceType{ "r7gd.12xlarge": { InstanceType: "r7gd.12xlarge", VCPU: 48, - MemoryMb: 262144, + MemoryMb: 393216, GPU: 0, 
Architecture: "arm64", }, @@ -4620,7 +4711,7 @@ var InstanceTypes = map[string]*InstanceType{ "r7gd.8xlarge": { InstanceType: "r7gd.8xlarge", VCPU: 32, - MemoryMb: 196608, + MemoryMb: 262144, GPU: 0, Architecture: "arm64", }, @@ -4638,6 +4729,13 @@ var InstanceTypes = map[string]*InstanceType{ GPU: 0, Architecture: "arm64", }, + "r7gd.metal": { + InstanceType: "r7gd.metal", + VCPU: 64, + MemoryMb: 524288, + GPU: 0, + Architecture: "arm64", + }, "r7gd.xlarge": { InstanceType: "r7gd.xlarge", VCPU: 4, @@ -4792,6 +4890,90 @@ var InstanceTypes = map[string]*InstanceType{ GPU: 0, Architecture: "amd64", }, + "r8g.12xlarge": { + InstanceType: "r8g.12xlarge", + VCPU: 48, + MemoryMb: 393216, + GPU: 0, + Architecture: "arm64", + }, + "r8g.16xlarge": { + InstanceType: "r8g.16xlarge", + VCPU: 64, + MemoryMb: 524288, + GPU: 0, + Architecture: "arm64", + }, + "r8g.24xlarge": { + InstanceType: "r8g.24xlarge", + VCPU: 96, + MemoryMb: 786432, + GPU: 0, + Architecture: "arm64", + }, + "r8g.2xlarge": { + InstanceType: "r8g.2xlarge", + VCPU: 8, + MemoryMb: 65536, + GPU: 0, + Architecture: "arm64", + }, + "r8g.48xlarge": { + InstanceType: "r8g.48xlarge", + VCPU: 192, + MemoryMb: 1572864, + GPU: 0, + Architecture: "arm64", + }, + "r8g.4xlarge": { + InstanceType: "r8g.4xlarge", + VCPU: 16, + MemoryMb: 131072, + GPU: 0, + Architecture: "arm64", + }, + "r8g.8xlarge": { + InstanceType: "r8g.8xlarge", + VCPU: 32, + MemoryMb: 262144, + GPU: 0, + Architecture: "arm64", + }, + "r8g.large": { + InstanceType: "r8g.large", + VCPU: 2, + MemoryMb: 16384, + GPU: 0, + Architecture: "arm64", + }, + "r8g.medium": { + InstanceType: "r8g.medium", + VCPU: 1, + MemoryMb: 8192, + GPU: 0, + Architecture: "arm64", + }, + "r8g.metal-24xl": { + InstanceType: "r8g.metal-24xl", + VCPU: 96, + MemoryMb: 786432, + GPU: 0, + Architecture: "arm64", + }, + "r8g.metal-48xl": { + InstanceType: "r8g.metal-48xl", + VCPU: 192, + MemoryMb: 1572864, + GPU: 0, + Architecture: "arm64", + }, + "r8g.xlarge": { + InstanceType: "r8g.xlarge", + VCPU: 4, + MemoryMb: 32768, + GPU: 0, + Architecture: "arm64", + }, "t1.micro": { InstanceType: "t1.micro", VCPU: 1, @@ -5065,6 +5247,34 @@ var InstanceTypes = map[string]*InstanceType{ GPU: 0, Architecture: "amd64", }, + "u7i-12tb.224xlarge": { + InstanceType: "u7i-12tb.224xlarge", + VCPU: 896, + MemoryMb: 12582912, + GPU: 0, + Architecture: "amd64", + }, + "u7in-16tb.224xlarge": { + InstanceType: "u7in-16tb.224xlarge", + VCPU: 896, + MemoryMb: 16777216, + GPU: 0, + Architecture: "amd64", + }, + "u7in-24tb.224xlarge": { + InstanceType: "u7in-24tb.224xlarge", + VCPU: 896, + MemoryMb: 25165824, + GPU: 0, + Architecture: "amd64", + }, + "u7in-32tb.224xlarge": { + InstanceType: "u7in-32tb.224xlarge", + VCPU: 896, + MemoryMb: 33554432, + GPU: 0, + Architecture: "amd64", + }, "vt1.24xlarge": { InstanceType: "vt1.24xlarge", VCPU: 96, diff --git a/cluster-autoscaler/cloudprovider/azure/azure_client.go b/cluster-autoscaler/cloudprovider/azure/azure_client.go index f8123f3f90b6..4928670f913c 100644 --- a/cluster-autoscaler/cloudprovider/azure/azure_client.go +++ b/cluster-autoscaler/cloudprovider/azure/azure_client.go @@ -24,6 +24,16 @@ import ( "os" "time" + _ "go.uber.org/mock/mockgen/model" // for go:generate + + azextensions "github.com/Azure/azure-sdk-for-go-extensions/pkg/middleware" + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/policy" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud" + azurecore_policy "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + 
"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" + "github.com/Azure/azure-sdk-for-go/sdk/azidentity" + "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4" "github.com/Azure/azure-sdk-for-go/services/compute/mgmt/2019-07-01/compute" "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2017-05-10/resources" "github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2021-02-01/storage" @@ -138,6 +148,118 @@ func (az *azDeploymentsClient) Delete(ctx context.Context, resourceGroupName, de return future.Response(), err } +//go:generate sh -c "mockgen k8s.io/autoscaler/cluster-autoscaler/cloudprovider/azure AgentPoolsClient >./agentpool_client.go" + +// AgentPoolsClient interface defines the methods needed for scaling vms pool. +// it is implemented by track2 sdk armcontainerservice.AgentPoolsClient +type AgentPoolsClient interface { + Get(ctx context.Context, + resourceGroupName, resourceName, agentPoolName string, + options *armcontainerservice.AgentPoolsClientGetOptions) ( + armcontainerservice.AgentPoolsClientGetResponse, error) + BeginCreateOrUpdate( + ctx context.Context, + resourceGroupName, resourceName, agentPoolName string, + parameters armcontainerservice.AgentPool, + options *armcontainerservice.AgentPoolsClientBeginCreateOrUpdateOptions) ( + *runtime.Poller[armcontainerservice.AgentPoolsClientCreateOrUpdateResponse], error) + BeginDeleteMachines( + ctx context.Context, + resourceGroupName, resourceName, agentPoolName string, + machines armcontainerservice.AgentPoolDeleteMachinesParameter, + options *armcontainerservice.AgentPoolsClientBeginDeleteMachinesOptions) ( + *runtime.Poller[armcontainerservice.AgentPoolsClientDeleteMachinesResponse], error) +} + +func getAgentpoolClientCredentials(cfg *Config) (azcore.TokenCredential, error) { + var cred azcore.TokenCredential + var err error + if cfg.AuthMethod == authMethodCLI { + cred, err = azidentity.NewAzureCLICredential(&azidentity.AzureCLICredentialOptions{ + TenantID: cfg.TenantID}) + if err != nil { + klog.Errorf("NewAzureCLICredential failed: %v", err) + return nil, err + } + } else if cfg.AuthMethod == "" || cfg.AuthMethod == authMethodPrincipal { + cred, err = azidentity.NewClientSecretCredential(cfg.TenantID, cfg.AADClientID, cfg.AADClientSecret, nil) + if err != nil { + klog.Errorf("NewClientSecretCredential failed: %v", err) + return nil, err + } + } else { + return nil, fmt.Errorf("unsupported authorization method: %s", cfg.AuthMethod) + } + return cred, nil +} + +func getAgentpoolClientRetryOptions(cfg *Config) azurecore_policy.RetryOptions { + if cfg.AuthMethod == authMethodCLI { + return azurecore_policy.RetryOptions{ + MaxRetries: -1, // no retry when using CLI auth for UT + } + } + return azextensions.DefaultRetryOpts() +} + +func newAgentpoolClient(cfg *Config) (AgentPoolsClient, error) { + retryOptions := getAgentpoolClientRetryOptions(cfg) + + if cfg.ARMBaseURLForAPClient != "" { + klog.V(10).Infof("Using ARMBaseURLForAPClient to create agent pool client") + return newAgentpoolClientWithConfig(cfg.SubscriptionID, nil, cfg.ARMBaseURLForAPClient, "UNKNOWN", retryOptions) + } + + return newAgentpoolClientWithPublicEndpoint(cfg, retryOptions) +} + +func newAgentpoolClientWithConfig(subscriptionID string, cred azcore.TokenCredential, + cloudCfgEndpoint, cloudCfgAudience string, retryOptions azurecore_policy.RetryOptions) (AgentPoolsClient, error) { + agentPoolsClient, err := armcontainerservice.NewAgentPoolsClient(subscriptionID, cred, + &policy.ClientOptions{ + 
ClientOptions: azurecore_policy.ClientOptions{ + Cloud: cloud.Configuration{ + Services: map[cloud.ServiceName]cloud.ServiceConfiguration{ + cloud.ResourceManager: { + Endpoint: cloudCfgEndpoint, + Audience: cloudCfgAudience, + }, + }, + }, + Telemetry: azextensions.DefaultTelemetryOpts(getUserAgentExtension()), + Transport: azextensions.DefaultHTTPClient(), + Retry: retryOptions, + }, + }) + + if err != nil { + return nil, fmt.Errorf("failed to init cluster agent pools client: %w", err) + } + + klog.V(10).Infof("Successfully created agent pool client with ARMBaseURLForAPClient") + return agentPoolsClient, nil +} + +func newAgentpoolClientWithPublicEndpoint(cfg *Config, retryOptions azurecore_policy.RetryOptions) (AgentPoolsClient, error) { + cred, err := getAgentpoolClientCredentials(cfg) + if err != nil { + klog.Errorf("failed to get agent pool client credentials: %v", err) + return nil, err + } + + // default to public cloud + env := azure.PublicCloud + if cfg.Cloud != "" { + env, err = azure.EnvironmentFromName(cfg.Cloud) + if err != nil { + klog.Errorf("failed to get environment from name %s: with error: %v", cfg.Cloud, err) + return nil, err + } + } + + return newAgentpoolClientWithConfig(cfg.SubscriptionID, cred, env.ResourceManagerEndpoint, env.TokenAudience, retryOptions) +} + type azAccountsClient struct { client storage.AccountsClient } @@ -151,6 +273,7 @@ type azClient struct { disksClient diskclient.Interface storageAccountsClient storageaccountclient.Interface skuClient compute.ResourceSkusClient + agentPoolClient AgentPoolsClient } // newServicePrincipalTokenFromCredentials creates a new ServicePrincipalToken using values of the @@ -278,6 +401,13 @@ func newAzClient(cfg *Config, env *azure.Environment) (*azClient, error) { skuClient.Authorizer = azClientConfig.Authorizer klog.V(5).Infof("Created sku client with authorizer: %v", skuClient) + agentPoolClient, err := newAgentpoolClient(cfg) + if err != nil { + // we don't want to fail the whole process so we don't break any existing functionality + // since this may not be fatal - it is only used by vms pool which is still under development. + klog.Warningf("newAgentpoolClient failed with error: %s", err) + } + return &azClient{ disksClient: disksClient, interfacesClient: interfacesClient, @@ -287,5 +417,6 @@ func newAzClient(cfg *Config, env *azure.Environment) (*azClient, error) { virtualMachinesClient: virtualMachinesClient, storageAccountsClient: storageAccountsClient, skuClient: skuClient, + agentPoolClient: agentPoolClient, }, nil } diff --git a/cluster-autoscaler/cloudprovider/azure/azure_config.go b/cluster-autoscaler/cloudprovider/azure/azure_config.go index 9fe559d07e14..aa366f3a6e5d 100644 --- a/cluster-autoscaler/cloudprovider/azure/azure_config.go +++ b/cluster-autoscaler/cloudprovider/azure/azure_config.go @@ -87,8 +87,16 @@ type Config struct { Location string `json:"location" yaml:"location"` TenantID string `json:"tenantId" yaml:"tenantId"` SubscriptionID string `json:"subscriptionId" yaml:"subscriptionId"` - ResourceGroup string `json:"resourceGroup" yaml:"resourceGroup"` - VMType string `json:"vmType" yaml:"vmType"` + ClusterName string `json:"clusterName" yaml:"clusterName"` + // ResourceGroup is the MC_ resource group where the nodes are located. + ResourceGroup string `json:"resourceGroup" yaml:"resourceGroup"` + // ClusterResourceGroup is the resource group where the cluster is located. 
+ ClusterResourceGroup string `json:"clusterResourceGroup" yaml:"clusterResourceGroup"` + VMType string `json:"vmType" yaml:"vmType"` + + // ARMBaseURLForAPClient is the URL to use for operations for the VMs pool. + // It can override the default public ARM endpoint for VMs pool scale operations. + ARMBaseURLForAPClient string `json:"armBaseURLForAPClient" yaml:"armBaseURLForAPClient"` // AuthMethod determines how to authorize requests for the Azure // cloud. Valid options are "principal" (= the traditional @@ -294,6 +302,12 @@ func BuildAzureConfig(configReader io.Reader) (*Config, error) { } } } + + // always read the following from environment variables since azure.json doesn't have these fields + cfg.ClusterName = os.Getenv("CLUSTER_NAME") + cfg.ClusterResourceGroup = os.Getenv("ARM_CLUSTER_RESOURCE_GROUP") + cfg.ARMBaseURLForAPClient = os.Getenv("ARM_BASE_URL_FOR_AP_CLIENT") + cfg.TrimSpace() if cloudProviderRateLimit := os.Getenv("CLOUD_PROVIDER_RATE_LIMIT"); cloudProviderRateLimit != "" { @@ -460,7 +474,9 @@ func (cfg *Config) TrimSpace() { cfg.Location = strings.TrimSpace(cfg.Location) cfg.TenantID = strings.TrimSpace(cfg.TenantID) cfg.SubscriptionID = strings.TrimSpace(cfg.SubscriptionID) + cfg.ClusterName = strings.TrimSpace(cfg.ClusterName) cfg.ResourceGroup = strings.TrimSpace(cfg.ResourceGroup) + cfg.ClusterResourceGroup = strings.TrimSpace(cfg.ClusterResourceGroup) cfg.VMType = strings.TrimSpace(cfg.VMType) cfg.AADClientID = strings.TrimSpace(cfg.AADClientID) cfg.AADClientSecret = strings.TrimSpace(cfg.AADClientSecret) diff --git a/cluster-autoscaler/cloudprovider/azure/azure_manager_test.go b/cluster-autoscaler/cloudprovider/azure/azure_manager_test.go index 83ca223e059b..95e76041690d 100644 --- a/cluster-autoscaler/cloudprovider/azure/azure_manager_test.go +++ b/cluster-autoscaler/cloudprovider/azure/azure_manager_test.go @@ -352,6 +352,9 @@ func TestCreateAzureManagerWithNilConfig(t *testing.T) { TenantID: "tenantId", SubscriptionID: "subscriptionId", ResourceGroup: "resourceGroup", + ClusterName: "mycluster", + ClusterResourceGroup: "myrg", + ARMBaseURLForAPClient: "nodeprovisioner-svc.nodeprovisioner.svc.cluster.local", VMType: "vmss", AADClientID: "aadClientId", AADClientSecret: "aadClientSecret", @@ -449,6 +452,9 @@ func TestCreateAzureManagerWithNilConfig(t *testing.T) { t.Setenv("BACKOFF_DURATION", "1") t.Setenv("BACKOFF_JITTER", "1") t.Setenv("CLOUD_PROVIDER_RATE_LIMIT", "true") + t.Setenv("CLUSTER_NAME", "mycluster") + t.Setenv("ARM_CLUSTER_RESOURCE_GROUP", "myrg") + t.Setenv("ARM_BASE_URL_FOR_AP_CLIENT", "nodeprovisioner-svc.nodeprovisioner.svc.cluster.local") t.Run("environment variables correctly set", func(t *testing.T) { manager, err := createAzureManagerInternal(nil, cloudprovider.NodeGroupDiscoveryOptions{}, mockAzClient) diff --git a/cluster-autoscaler/cloudprovider/azure/azure_mock_agentpool_client.go b/cluster-autoscaler/cloudprovider/azure/azure_mock_agentpool_client.go new file mode 100644 index 000000000000..eaad11f01d4c --- /dev/null +++ b/cluster-autoscaler/cloudprovider/azure/azure_mock_agentpool_client.go @@ -0,0 +1,94 @@ +/* +Copyright 2020 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package azure + +import ( + context "context" + reflect "reflect" + + runtime "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" + armcontainerservice "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4" + gomock "go.uber.org/mock/gomock" +) + +// MockAgentPoolsClient is a mock of AgentPoolsClient interface. +type MockAgentPoolsClient struct { + ctrl *gomock.Controller + recorder *MockAgentPoolsClientMockRecorder +} + +// MockAgentPoolsClientMockRecorder is the mock recorder for MockAgentPoolsClient. +type MockAgentPoolsClientMockRecorder struct { + mock *MockAgentPoolsClient +} + +// NewMockAgentPoolsClient creates a new mock instance. +func NewMockAgentPoolsClient(ctrl *gomock.Controller) *MockAgentPoolsClient { + mock := &MockAgentPoolsClient{ctrl: ctrl} + mock.recorder = &MockAgentPoolsClientMockRecorder{mock} + return mock +} + +// EXPECT returns an object that allows the caller to indicate expected use. +func (m *MockAgentPoolsClient) EXPECT() *MockAgentPoolsClientMockRecorder { + return m.recorder +} + +// BeginCreateOrUpdate mocks base method. +func (m *MockAgentPoolsClient) BeginCreateOrUpdate(arg0 context.Context, arg1, arg2, arg3 string, arg4 armcontainerservice.AgentPool, arg5 *armcontainerservice.AgentPoolsClientBeginCreateOrUpdateOptions) (*runtime.Poller[armcontainerservice.AgentPoolsClientCreateOrUpdateResponse], error) { + m.ctrl.T.Helper() + ret := m.ctrl.Call(m, "BeginCreateOrUpdate", arg0, arg1, arg2, arg3, arg4, arg5) + ret0, _ := ret[0].(*runtime.Poller[armcontainerservice.AgentPoolsClientCreateOrUpdateResponse]) + ret1, _ := ret[1].(error) + return ret0, ret1 +} + +// BeginCreateOrUpdate indicates an expected call of BeginCreateOrUpdate. +func (mr *MockAgentPoolsClientMockRecorder) BeginCreateOrUpdate(arg0, arg1, arg2, arg3, arg4, arg5 any) *gomock.Call { + mr.mock.ctrl.T.Helper() + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "BeginCreateOrUpdate", reflect.TypeOf((*MockAgentPoolsClient)(nil).BeginCreateOrUpdate), arg0, arg1, arg2, arg3, arg4, arg5) +} + +// BeginDeleteMachines mocks base method. +func (m *MockAgentPoolsClient) BeginDeleteMachines(arg0 context.Context, arg1, arg2, arg3 string, arg4 armcontainerservice.AgentPoolDeleteMachinesParameter, arg5 *armcontainerservice.AgentPoolsClientBeginDeleteMachinesOptions) (*runtime.Poller[armcontainerservice.AgentPoolsClientDeleteMachinesResponse], error) { + m.ctrl.T.Helper() + ret := m.ctrl.Call(m, "BeginDeleteMachines", arg0, arg1, arg2, arg3, arg4, arg5) + ret0, _ := ret[0].(*runtime.Poller[armcontainerservice.AgentPoolsClientDeleteMachinesResponse]) + ret1, _ := ret[1].(error) + return ret0, ret1 +} + +// BeginDeleteMachines indicates an expected call of BeginDeleteMachines. +func (mr *MockAgentPoolsClientMockRecorder) BeginDeleteMachines(arg0, arg1, arg2, arg3, arg4, arg5 any) *gomock.Call { + mr.mock.ctrl.T.Helper() + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "BeginDeleteMachines", reflect.TypeOf((*MockAgentPoolsClient)(nil).BeginDeleteMachines), arg0, arg1, arg2, arg3, arg4, arg5) +} + +// Get mocks base method. 
+func (m *MockAgentPoolsClient) Get(arg0 context.Context, arg1, arg2, arg3 string, arg4 *armcontainerservice.AgentPoolsClientGetOptions) (armcontainerservice.AgentPoolsClientGetResponse, error) { + m.ctrl.T.Helper() + ret := m.ctrl.Call(m, "Get", arg0, arg1, arg2, arg3, arg4) + ret0, _ := ret[0].(armcontainerservice.AgentPoolsClientGetResponse) + ret1, _ := ret[1].(error) + return ret0, ret1 +} + +// Get indicates an expected call of Get. +func (mr *MockAgentPoolsClientMockRecorder) Get(arg0, arg1, arg2, arg3, arg4 any) *gomock.Call { + mr.mock.ctrl.T.Helper() + return mr.mock.ctrl.RecordCallWithMethodType(mr.mock, "Get", reflect.TypeOf((*MockAgentPoolsClient)(nil).Get), arg0, arg1, arg2, arg3, arg4) +} diff --git a/cluster-autoscaler/cloudprovider/cloud_provider.go b/cluster-autoscaler/cloudprovider/cloud_provider.go index e0bdc45b68fe..3410d14e336b 100644 --- a/cluster-autoscaler/cloudprovider/cloud_provider.go +++ b/cluster-autoscaler/cloudprovider/cloud_provider.go @@ -234,7 +234,7 @@ type NodeGroup interface { // GetOptions returns NodeGroupAutoscalingOptions that should be used for this particular // NodeGroup. Returning a nil will result in using default options. - // Implementation optional. + // Implementation optional. Callers MUST handle `cloudprovider.ErrNotImplemented`. GetOptions(defaults config.NodeGroupAutoscalingOptions) (*config.NodeGroupAutoscalingOptions, error) } diff --git a/cluster-autoscaler/cloudprovider/hetzner/hetzner_node_group.go b/cluster-autoscaler/cloudprovider/hetzner/hetzner_node_group.go index 179c92a5b23a..234c5700914d 100644 --- a/cluster-autoscaler/cloudprovider/hetzner/hetzner_node_group.go +++ b/cluster-autoscaler/cloudprovider/hetzner/hetzner_node_group.go @@ -18,14 +18,14 @@ package hetzner import ( "context" + "errors" "fmt" + "maps" "math/rand" "strings" "sync" "time" - "maps" - apiv1 "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/api/resource" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" @@ -92,12 +92,14 @@ func (n *hetznerNodeGroup) IncreaseSize(delta int) error { return fmt.Errorf("delta must be positive, have: %d", delta) } - targetSize := n.targetSize + delta - if targetSize > n.MaxSize() { - return fmt.Errorf("size increase is too large. current: %d desired: %d max: %d", n.targetSize, targetSize, n.MaxSize()) + desiredTargetSize := n.targetSize + delta + if desiredTargetSize > n.MaxSize() { + return fmt.Errorf("size increase is too large. current: %d desired: %d max: %d", n.targetSize, desiredTargetSize, n.MaxSize()) } - klog.V(4).Infof("Scaling Instance Pool %s to %d", n.id, targetSize) + actualDelta := delta + + klog.V(4).Infof("Scaling Instance Pool %s to %d", n.id, desiredTargetSize) n.clusterUpdateMutex.Lock() defer n.clusterUpdateMutex.Unlock() @@ -110,25 +112,43 @@ func (n *hetznerNodeGroup) IncreaseSize(delta int) error { return fmt.Errorf("server type %s not available in region %s", n.instanceType, n.region) } + defer func() { + // create new servers cache + if _, err := n.manager.cachedServers.servers(); err != nil { + klog.Errorf("failed to update servers cache: %v", err) + } + + // Update target size + n.resetTargetSize(actualDelta) + }() + + // There is no "Server Group" in Hetzner Cloud, we need to create every + // server manually. This operation might fail for some of the servers + // because of quotas, rate limiting or server type availability. We need to + // collect the errors and inform cluster-autoscaler about this, so it can + // try other node groups if configured. 
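// A minimal, self-contained sketch of the error-collection pattern the
// comment above describes (a buffered error channel drained after Wait,
// aggregated with errors.Join); createServers and its create callback are
// hypothetical stand-ins, not the Hetzner provider's actual API.
package main

import (
	"errors"
	"fmt"
	"sync"
)

// createServers launches one goroutine per requested server and returns a
// single aggregated error if any creation failed. The channel is buffered
// to delta so no goroutine ever blocks on send.
func createServers(delta int, create func(i int) error) error {
	var wg sync.WaitGroup
	errsCh := make(chan error, delta)
	for i := 0; i < delta; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			if err := create(i); err != nil {
				errsCh <- err
			}
		}(i)
	}
	wg.Wait()
	close(errsCh) // safe: all senders have returned

	var errs []error
	for err := range errsCh {
		errs = append(errs, err)
	}
	if len(errs) > 0 {
		return fmt.Errorf("failed to create all servers: %w", errors.Join(errs...))
	}
	return nil
}

func main() {
	// Two of three creations succeed; the aggregated error names the failure.
	err := createServers(3, func(i int) error {
		if i == 1 {
			return fmt.Errorf("server %d: quota exceeded", i)
		}
		return nil
	})
	fmt.Println(err)
}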
waitGroup := sync.WaitGroup{} + errsCh := make(chan error, delta) for i := 0; i < delta; i++ { waitGroup.Add(1) go func() { defer waitGroup.Done() err := createServer(n) if err != nil { - targetSize-- - klog.Errorf("failed to create error: %v", err) + actualDelta-- + errsCh <- err } }() } waitGroup.Wait() + close(errsCh) - n.targetSize = targetSize - - // create new servers cache - if _, err := n.manager.cachedServers.servers(); err != nil { - klog.Errorf("failed to get servers: %v", err) + errs := make([]error, 0, delta) + for err = range errsCh { + errs = append(errs, err) + } + if len(errs) > 0 { + return fmt.Errorf("failed to create all servers: %w", errors.Join(errs...)) } return nil @@ -142,13 +162,26 @@ func (n *hetznerNodeGroup) DeleteNodes(nodes []*apiv1.Node) error { n.clusterUpdateMutex.Lock() defer n.clusterUpdateMutex.Unlock() - targetSize := n.targetSize - len(nodes) + delta := len(nodes) + + targetSize := n.targetSize - delta if targetSize < n.MinSize() { return fmt.Errorf("size decrease is too large. current: %d desired: %d min: %d", n.targetSize, targetSize, n.MinSize()) } - waitGroup := sync.WaitGroup{} + actualDelta := delta + + defer func() { + // create new servers cache + if _, err := n.manager.cachedServers.servers(); err != nil { + klog.Errorf("failed to update servers cache: %v", err) + } + + n.resetTargetSize(-actualDelta) + }() + waitGroup := sync.WaitGroup{} + errsCh := make(chan error, len(nodes)) for _, node := range nodes { waitGroup.Add(1) go func(node *apiv1.Node) { @@ -156,20 +189,23 @@ func (n *hetznerNodeGroup) DeleteNodes(nodes []*apiv1.Node) error { err := n.manager.deleteByNode(node) if err != nil { - klog.Errorf("failed to delete server ID %s error: %v", node.Name, err) + actualDelta-- + errsCh <- fmt.Errorf("failed to delete server for node %q: %w", node.Name, err) } waitGroup.Done() }(node) } waitGroup.Wait() + close(errsCh) - // create new servers cache - if _, err := n.manager.cachedServers.servers(); err != nil { - klog.Errorf("failed to get servers: %v", err) + errs := make([]error, 0, len(nodes)) + for err := range errsCh { + errs = append(errs, err) + } + if len(errs) > 0 { + return fmt.Errorf("failed to delete all nodes: %w", errors.Join(errs...)) } - - n.resetTargetSize(-len(nodes)) return nil } @@ -224,10 +260,14 @@ func (n *hetznerNodeGroup) TemplateNodeInfo() (*schedulerframework.NodeInfo, err return nil, fmt.Errorf("failed to create resource list for node group %s error: %v", n.id, err) } + nodeName := newNodeName(n) + node := apiv1.Node{ ObjectMeta: metav1.ObjectMeta{ - Name: newNodeName(n), - Labels: map[string]string{}, + Name: nodeName, + Labels: map[string]string{ + apiv1.LabelHostname: nodeName, + }, }, Status: apiv1.NodeStatus{ Capacity: resourceList, @@ -553,8 +593,8 @@ func waitForServerAction(m *hetznerManager, serverName string, action *hcloud.Ac func (n *hetznerNodeGroup) resetTargetSize(expectedDelta int) { servers, err := n.manager.allServers(n.id) if err != nil { - klog.Errorf("failed to set node pool %s size, using delta %d error: %v", n.id, expectedDelta, err) - n.targetSize = n.targetSize - expectedDelta + klog.Warningf("failed to set node pool %s size, using delta %d error: %v", n.id, expectedDelta, err) + n.targetSize = n.targetSize + expectedDelta } else { klog.Infof("Set node group %s size from %d to %d, expected delta %d", n.id, n.targetSize, len(servers), expectedDelta) n.targetSize = len(servers) diff --git a/cluster-autoscaler/cloudprovider/mcm/mcm_cloud_provider.go 
b/cluster-autoscaler/cloudprovider/mcm/mcm_cloud_provider.go index 5d8fc293b919..d2c9f6782121 100644 --- a/cluster-autoscaler/cloudprovider/mcm/mcm_cloud_provider.go +++ b/cluster-autoscaler/cloudprovider/mcm/mcm_cloud_provider.go @@ -287,7 +287,7 @@ func ReferenceFromProviderID(m *McmManager, id string) (*Ref, error) { if Name == "" { // Could not find any machine corresponds to node %+v", id - klog.V(4).Infof("No machine found for node ID %q", id) + klog.V(2).Infof("No machine found for node ID %q", id) return nil, nil } return &Ref{ diff --git a/cluster-autoscaler/cloudprovider/mcm/mcm_manager.go b/cluster-autoscaler/cloudprovider/mcm/mcm_manager.go index 78701699a9bb..2edc1645096b 100644 --- a/cluster-autoscaler/cloudprovider/mcm/mcm_manager.go +++ b/cluster-autoscaler/cloudprovider/mcm/mcm_manager.go @@ -441,6 +441,7 @@ func (m *McmManager) SetMachineDeploymentSize(ctx context.Context, machinedeploy // DeleteMachines deletes the Machines and also reduces the desired replicas of the MachineDeployment in parallel. func (m *McmManager) DeleteMachines(targetMachineRefs []*Ref) error { if len(targetMachineRefs) == 0 { + klog.V(2).Infof("[DeleteMachines] No machines to delete") return nil } commonMachineDeployment, err := m.GetMachineDeploymentForMachine(targetMachineRefs[0]) diff --git a/cluster-autoscaler/cloudprovider/oci/vendor-internal/github.com/gofrs/flock/.gitignore b/cluster-autoscaler/cloudprovider/oci/vendor-internal/github.com/gofrs/flock/.gitignore index da9d9d00b474..daf913b1b347 100644 --- a/cluster-autoscaler/cloudprovider/oci/vendor-internal/github.com/gofrs/flock/.gitignore +++ b/cluster-autoscaler/cloudprovider/oci/vendor-internal/github.com/gofrs/flock/.gitignore @@ -6,7 +6,6 @@ # Folders _obj _test -vendor # Architecture specific extensions/prefixes *.[568vq] @@ -23,10 +22,3 @@ _testmain.go *.exe *.test *.prof -*.pprof -*.out -*.log - -/bin -cover.out -cover.html diff --git a/cluster-autoscaler/cloudprovider/tencentcloud/tencentcloud-sdk-go/common/netretry_test.go b/cluster-autoscaler/cloudprovider/tencentcloud/tencentcloud-sdk-go/common/netretry_test.go index ac8b52f467b3..769556376c4a 100644 --- a/cluster-autoscaler/cloudprovider/tencentcloud/tencentcloud-sdk-go/common/netretry_test.go +++ b/cluster-autoscaler/cloudprovider/tencentcloud/tencentcloud-sdk-go/common/netretry_test.go @@ -1,5 +1,5 @@ /* -Copyright 2023 The Kubernetes Authors. +Copyright 2021 The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. diff --git a/cluster-autoscaler/clusterstate/clusterstate.go b/cluster-autoscaler/clusterstate/clusterstate.go index fc8602f58607..54002840ef60 100644 --- a/cluster-autoscaler/clusterstate/clusterstate.go +++ b/cluster-autoscaler/clusterstate/clusterstate.go @@ -1044,7 +1044,7 @@ func getNotRegisteredNodes(allNodes []*apiv1.Node, cloudProviderNodeInstances ma } func expectedToRegister(instance cloudprovider.Instance) bool { - return instance.Status != nil && instance.Status.State != cloudprovider.InstanceDeleting && instance.Status.ErrorInfo == nil + return instance.Status == nil || (instance.Status.State != cloudprovider.InstanceDeleting && instance.Status.ErrorInfo == nil) } // Calculates which of the registered nodes in Kubernetes that do not exist in cloud provider. 
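The expectedToRegister change above flips the handling of instances whose Status is nil: they are now treated as still expected to register instead of being ignored, so the autoscaler keeps waiting for them rather than counting them as already gone. Below is a minimal sketch of the new predicate, with the cloudprovider types reduced to hypothetical local stand-ins (the real code uses cloudprovider.Instance and cloudprovider.InstanceErrorInfo):

package main

import "fmt"

// Reduced stand-ins for the cloudprovider types used by clusterstate.
type InstanceState int

const (
	InstanceRunning InstanceState = iota
	InstanceCreating
	InstanceDeleting
)

type InstanceStatus struct {
	State     InstanceState
	ErrorInfo *string
}

type Instance struct {
	ID     string
	Status *InstanceStatus
}

// expectedToRegister mirrors the patched logic: an instance with no status
// information is now *expected* to register (previously it was not), while
// deleting or errored instances remain excluded.
func expectedToRegister(instance Instance) bool {
	return instance.Status == nil ||
		(instance.Status.State != InstanceDeleting && instance.Status.ErrorInfo == nil)
}

func main() {
	bootFailure := "boot failure"
	fmt.Println(expectedToRegister(Instance{ID: "a"}))                                                   // true: nil Status now counts as registering
	fmt.Println(expectedToRegister(Instance{ID: "b", Status: &InstanceStatus{State: InstanceRunning}}))  // true
	fmt.Println(expectedToRegister(Instance{ID: "c", Status: &InstanceStatus{State: InstanceDeleting}})) // false
	fmt.Println(expectedToRegister(Instance{ID: "d", Status: &InstanceStatus{ErrorInfo: &bootFailure}})) // false
}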
diff --git a/cluster-autoscaler/core/scaleup/orchestrator/orchestrator.go b/cluster-autoscaler/core/scaleup/orchestrator/orchestrator.go index 0b04988177ab..043f1eef16fe 100644 --- a/cluster-autoscaler/core/scaleup/orchestrator/orchestrator.go +++ b/cluster-autoscaler/core/scaleup/orchestrator/orchestrator.go @@ -443,7 +443,7 @@ func (o *ScaleUpOrchestrator) filterValidScaleUpNodeGroups( continue } autoscalingOptions, err := nodeGroup.GetOptions(o.autoscalingContext.NodeGroupDefaults) - if err != nil { + if err != nil && err != cloudprovider.ErrNotImplemented { klog.Errorf("Couldn't get autoscaling options for ng: %v", nodeGroup.Id()) } numNodes := 1 @@ -500,7 +500,7 @@ func (o *ScaleUpOrchestrator) ComputeExpansionOption( metrics.UpdateDurationFromStart(metrics.Estimate, estimateStart) autoscalingOptions, err := nodeGroup.GetOptions(o.autoscalingContext.NodeGroupDefaults) - if err != nil { + if err != nil && err != cloudprovider.ErrNotImplemented { klog.Errorf("Failed to get autoscaling options for node group %s: %v", nodeGroup.Id(), err) } if autoscalingOptions != nil && autoscalingOptions.ZeroOrMaxNodeScaling { @@ -640,7 +640,7 @@ func (o *ScaleUpOrchestrator) ComputeSimilarNodeGroups( } autoscalingOptions, err := nodeGroup.GetOptions(o.autoscalingContext.NodeGroupDefaults) - if err != nil { + if err != nil && err != cloudprovider.ErrNotImplemented { klog.Errorf("Failed to get autoscaling options for node group %s: %v", nodeGroup.Id(), err) } if autoscalingOptions != nil && autoscalingOptions.ZeroOrMaxNodeScaling { diff --git a/cluster-autoscaler/core/static_autoscaler.go b/cluster-autoscaler/core/static_autoscaler.go index 5b2c4ad2d8b8..6adba6661f18 100644 --- a/cluster-autoscaler/core/static_autoscaler.go +++ b/cluster-autoscaler/core/static_autoscaler.go @@ -782,7 +782,7 @@ func (a *StaticAutoscaler) removeOldUnregisteredNodes(allUnregisteredNodes []clu nodesToDelete := toNodes(unregisteredNodesToDelete) opts, err := nodeGroup.GetOptions(a.NodeGroupDefaults) - if err != nil { + if err != nil && !errors.Is(err, cloudprovider.ErrNotImplemented) { klog.Warningf("Failed to get node group options for %s: %s", nodeGroupId, err) continue } @@ -860,8 +860,9 @@ func (a *StaticAutoscaler) deleteCreatedNodesWithErrors() (bool, error) { if nodeGroup == nil { err = fmt.Errorf("node group %s not found", nodeGroupId) } else { - opts, err := nodeGroup.GetOptions(a.NodeGroupDefaults) - if err != nil { + var opts *config.NodeGroupAutoscalingOptions + opts, err = nodeGroup.GetOptions(a.NodeGroupDefaults) + if err != nil && !errors.Is(err, cloudprovider.ErrNotImplemented) { klog.Warningf("Failed to get node group options for %s: %s", nodeGroupId, err) continue } diff --git a/cluster-autoscaler/go.mod b/cluster-autoscaler/go.mod index 777c3c9a8e03..a2e1cfb09028 100644 --- a/cluster-autoscaler/go.mod +++ b/cluster-autoscaler/go.mod @@ -5,6 +5,10 @@ go 1.21 require ( cloud.google.com/go/compute/metadata v0.2.3 github.com/Azure/azure-sdk-for-go v68.0.0+incompatible + github.com/Azure/azure-sdk-for-go-extensions v0.1.6 + github.com/Azure/azure-sdk-for-go/sdk/azcore v1.9.2 + github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.4.0 + github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4 v4.8.0-beta.1 github.com/Azure/go-autorest/autorest v0.11.29 github.com/Azure/go-autorest/autorest/adal v0.9.23 github.com/Azure/go-autorest/autorest/azure/auth v0.5.8 @@ -31,27 +35,28 @@ require ( github.com/satori/go.uuid v1.2.0 github.com/spf13/pflag v1.0.5 github.com/stretchr/testify 
v1.8.4 - golang.org/x/crypto v0.16.0 - golang.org/x/net v0.19.0 - golang.org/x/oauth2 v0.10.0 - golang.org/x/sys v0.15.0 + go.uber.org/mock v0.4.0 + golang.org/x/crypto v0.21.0 + golang.org/x/net v0.23.0 + golang.org/x/oauth2 v0.11.0 + golang.org/x/sys v0.18.0 google.golang.org/api v0.126.0 - google.golang.org/grpc v1.58.3 + google.golang.org/grpc v1.59.0 google.golang.org/protobuf v1.33.0 gopkg.in/gcfg.v1 v1.2.3 gopkg.in/yaml.v2 v2.4.0 - k8s.io/api v0.29.3 - k8s.io/apimachinery v0.29.3 - k8s.io/apiserver v0.29.3 - k8s.io/client-go v0.29.3 - k8s.io/cloud-provider v0.29.3 + k8s.io/api v0.29.6 + k8s.io/apimachinery v0.29.6 + k8s.io/apiserver v0.29.6 + k8s.io/client-go v0.29.6 + k8s.io/cloud-provider v0.29.6 k8s.io/cloud-provider-aws v1.27.0 - k8s.io/code-generator v0.29.3 - k8s.io/component-base v0.29.3 - k8s.io/component-helpers v0.29.3 + k8s.io/code-generator v0.29.6 + k8s.io/component-base v0.29.6 + k8s.io/component-helpers v0.29.6 k8s.io/klog/v2 v2.110.1 - k8s.io/kubelet v0.29.3 - k8s.io/kubernetes v1.29.0 + k8s.io/kubelet v0.29.6 + k8s.io/kubernetes v1.29.6 k8s.io/legacy-cloud-providers v0.0.0 k8s.io/utils v0.0.0-20230726121419-3b25d923346b sigs.k8s.io/cloud-provider-azure v1.28.0 @@ -61,12 +66,15 @@ require ( require ( cloud.google.com/go/compute v1.23.0 // indirect + github.com/Azure/azure-sdk-for-go/sdk/internal v1.5.2 // indirect + github.com/Azure/go-armbalancer v0.0.2 // indirect github.com/Azure/go-autorest v14.2.0+incompatible // indirect github.com/Azure/go-autorest/autorest/azure/cli v0.4.2 // indirect github.com/Azure/go-autorest/autorest/mocks v0.4.2 // indirect github.com/Azure/go-autorest/autorest/validation v0.3.1 // indirect github.com/Azure/go-autorest/logger v0.2.1 // indirect github.com/Azure/go-autorest/tracing v0.6.0 // indirect + github.com/AzureAD/microsoft-authentication-library-for-go v1.1.1 // indirect github.com/GoogleCloudPlatform/k8s-cloud-provider v1.18.1-0.20220218231025-f11817397a1b // indirect github.com/JeffAshton/win_pdh v0.0.0-20161109143554-76bb4ee9f0ab // indirect github.com/Microsoft/go-winio v0.6.0 // indirect @@ -94,7 +102,7 @@ require ( github.com/emicklei/go-restful/v3 v3.11.0 // indirect github.com/euank/go-kmsg-parser v2.0.0+incompatible // indirect github.com/evanphx/json-patch v5.6.0+incompatible // indirect - github.com/felixge/httpsnoop v1.0.3 // indirect + github.com/felixge/httpsnoop v1.0.4 // indirect github.com/fsnotify/fsnotify v1.7.0 // indirect github.com/go-logr/logr v1.3.0 // indirect github.com/go-logr/stdr v1.2.2 // indirect @@ -106,6 +114,7 @@ require ( github.com/godbus/dbus/v5 v5.1.0 // indirect github.com/gogo/protobuf v1.3.2 // indirect github.com/golang-jwt/jwt/v4 v4.5.0 // indirect + github.com/golang-jwt/jwt/v5 v5.0.0 // indirect github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect github.com/golang/protobuf v1.5.4 // indirect github.com/google/cadvisor v0.48.1 // indirect @@ -123,6 +132,7 @@ require ( github.com/inconshreveable/mousetrap v1.1.0 // indirect github.com/josharian/intern v1.0.0 // indirect github.com/karrick/godirwalk v1.17.0 // indirect + github.com/kylelemons/godebug v1.1.0 // indirect github.com/libopenstorage/openstorage v1.0.0 // indirect github.com/mailru/easyjson v0.7.7 // indirect github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect @@ -140,6 +150,7 @@ require ( github.com/opencontainers/runc v1.1.10 // indirect github.com/opencontainers/runtime-spec v1.0.3-0.20220909204839-494a5a6aca78 // indirect github.com/opencontainers/selinux v1.11.0 // indirect + 
github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8 // indirect github.com/pmezard/go-difflib v1.0.0 // indirect github.com/prometheus/client_model v0.4.0 // indirect github.com/prometheus/common v0.44.0 // indirect @@ -160,13 +171,13 @@ require ( go.opencensus.io v0.24.0 // indirect go.opentelemetry.io/contrib/instrumentation/github.com/emicklei/go-restful/otelrestful v0.42.0 // indirect go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.42.0 // indirect - go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.44.0 // indirect - go.opentelemetry.io/otel v1.19.0 // indirect - go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.19.0 // indirect - go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.19.0 // indirect - go.opentelemetry.io/otel/metric v1.19.0 // indirect - go.opentelemetry.io/otel/sdk v1.19.0 // indirect - go.opentelemetry.io/otel/trace v1.19.0 // indirect + go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.46.1 // indirect + go.opentelemetry.io/otel v1.21.0 // indirect + go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0 // indirect + go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.21.0 // indirect + go.opentelemetry.io/otel/metric v1.21.0 // indirect + go.opentelemetry.io/otel/sdk v1.21.0 // indirect + go.opentelemetry.io/otel/trace v1.21.0 // indirect go.opentelemetry.io/proto/otlp v1.0.0 // indirect go.uber.org/atomic v1.10.0 // indirect go.uber.org/multierr v1.11.0 // indirect @@ -174,25 +185,25 @@ require ( golang.org/x/exp v0.0.0-20230905200255-921286631fa9 // indirect golang.org/x/mod v0.14.0 // indirect golang.org/x/sync v0.5.0 // indirect - golang.org/x/term v0.15.0 // indirect + golang.org/x/term v0.18.0 // indirect golang.org/x/text v0.14.0 // indirect golang.org/x/time v0.3.0 // indirect golang.org/x/tools v0.16.1 // indirect google.golang.org/appengine v1.6.7 // indirect - google.golang.org/genproto v0.0.0-20230803162519-f966b187b2e5 // indirect - google.golang.org/genproto/googleapis/api v0.0.0-20230726155614-23370e0ffb3e // indirect + google.golang.org/genproto v0.0.0-20230822172742-b8732ec3820d // indirect + google.golang.org/genproto/googleapis/api v0.0.0-20230822172742-b8732ec3820d // indirect google.golang.org/genproto/googleapis/rpc v0.0.0-20230822172742-b8732ec3820d // indirect gopkg.in/inf.v0 v0.9.1 // indirect gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect gopkg.in/warnings.v0 v0.1.2 // indirect gopkg.in/yaml.v3 v3.0.1 // indirect k8s.io/apiextensions-apiserver v0.29.3 // indirect - k8s.io/controller-manager v0.29.3 // indirect - k8s.io/cri-api v0.29.3 // indirect + k8s.io/controller-manager v0.29.6 // indirect + k8s.io/cri-api v0.29.6 // indirect k8s.io/csi-translation-lib v0.27.0 // indirect k8s.io/dynamic-resource-allocation v0.0.0 // indirect k8s.io/gengo v0.0.0-20230829151522-9cce18d56c01 // indirect - k8s.io/kms v0.29.3 // indirect + k8s.io/kms v0.29.6 // indirect k8s.io/kube-openapi v0.0.0-20231010175941-2dd684a91f00 // indirect k8s.io/kube-scheduler v0.0.0 // indirect k8s.io/kubectl v0.28.0 // indirect @@ -207,64 +218,62 @@ replace github.com/digitalocean/godo => github.com/digitalocean/godo v1.27.0 replace github.com/rancher/go-rancher => github.com/rancher/go-rancher v0.1.0 -replace k8s.io/api => k8s.io/api v0.29.0 +replace k8s.io/api => k8s.io/api v0.29.6 -replace k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.29.0 +replace k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.29.6 -replace 
k8s.io/apimachinery => k8s.io/apimachinery v0.29.0 +replace k8s.io/apimachinery => k8s.io/apimachinery v0.29.6 -replace k8s.io/apiserver => k8s.io/apiserver v0.29.0 +replace k8s.io/apiserver => k8s.io/apiserver v0.29.6 -replace k8s.io/cli-runtime => k8s.io/cli-runtime v0.29.0 +replace k8s.io/cli-runtime => k8s.io/cli-runtime v0.29.6 -replace k8s.io/client-go => k8s.io/client-go v0.29.0 +replace k8s.io/client-go => k8s.io/client-go v0.29.6 -replace k8s.io/cloud-provider => k8s.io/cloud-provider v0.29.0 +replace k8s.io/cloud-provider => k8s.io/cloud-provider v0.29.6 -replace k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.29.0 +replace k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.29.6 -replace k8s.io/code-generator => k8s.io/code-generator v0.29.0 +replace k8s.io/code-generator => k8s.io/code-generator v0.29.6 -replace k8s.io/component-base => k8s.io/component-base v0.29.0 +replace k8s.io/component-base => k8s.io/component-base v0.29.6 -replace k8s.io/component-helpers => k8s.io/component-helpers v0.29.0 +replace k8s.io/component-helpers => k8s.io/component-helpers v0.29.6 -replace k8s.io/controller-manager => k8s.io/controller-manager v0.29.0 +replace k8s.io/controller-manager => k8s.io/controller-manager v0.29.6 -replace k8s.io/cri-api => k8s.io/cri-api v0.29.0 +replace k8s.io/cri-api => k8s.io/cri-api v0.29.6 -replace k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.29.0 +replace k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.29.6 -replace k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.29.0 +replace k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.29.6 -replace k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.29.0 +replace k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.29.6 -replace k8s.io/kube-proxy => k8s.io/kube-proxy v0.29.0 +replace k8s.io/kube-proxy => k8s.io/kube-proxy v0.29.6 -replace k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.29.0 +replace k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.29.6 -replace k8s.io/kubectl => k8s.io/kubectl v0.29.0 +replace k8s.io/kubectl => k8s.io/kubectl v0.29.6 -replace k8s.io/kubelet => k8s.io/kubelet v0.29.0 +replace k8s.io/kubelet => k8s.io/kubelet v0.29.6 -replace k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.29.0 +replace k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.29.6 -replace k8s.io/metrics => k8s.io/metrics v0.29.0 +replace k8s.io/metrics => k8s.io/metrics v0.29.6 -replace k8s.io/mount-utils => k8s.io/mount-utils v0.30.0-alpha.0 +replace k8s.io/mount-utils => k8s.io/mount-utils v0.29.6 -replace k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.29.0 +replace k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.29.6 -replace k8s.io/sample-cli-plugin => k8s.io/sample-cli-plugin v0.29.0 +replace k8s.io/sample-cli-plugin => k8s.io/sample-cli-plugin v0.29.6 -replace k8s.io/sample-controller => k8s.io/sample-controller v0.29.0 +replace k8s.io/sample-controller => k8s.io/sample-controller v0.29.6 -replace k8s.io/pod-security-admission => k8s.io/pod-security-admission v0.29.0 +replace k8s.io/pod-security-admission => k8s.io/pod-security-admission v0.29.6 -replace k8s.io/dynamic-resource-allocation => k8s.io/dynamic-resource-allocation v0.29.0 +replace k8s.io/dynamic-resource-allocation => k8s.io/dynamic-resource-allocation v0.29.6 -replace k8s.io/kms => k8s.io/kms v0.29.0 +replace k8s.io/kms => k8s.io/kms v0.29.6 -replace k8s.io/noderesourcetopology-api => k8s.io/noderesourcetopology-api v0.27.0 - -replace 
k8s.io/endpointslice => k8s.io/endpointslice v0.29.0 +replace k8s.io/endpointslice => k8s.io/endpointslice v0.29.6 diff --git a/cluster-autoscaler/go.sum b/cluster-autoscaler/go.sum index 4c5e460af988..7207677282c4 100644 --- a/cluster-autoscaler/go.sum +++ b/cluster-autoscaler/go.sum @@ -51,6 +51,24 @@ dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7 github.com/Azure/azure-sdk-for-go v46.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc= github.com/Azure/azure-sdk-for-go v68.0.0+incompatible h1:fcYLmCpyNYRnvJbPerq7U0hS+6+I79yEDJBqVNcqUzU= github.com/Azure/azure-sdk-for-go v68.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc= +github.com/Azure/azure-sdk-for-go-extensions v0.1.6 h1:EXGvDcj54u98XfaI/Cy65Ds6vNsIJeGKYf0eNLB1y4Q= +github.com/Azure/azure-sdk-for-go-extensions v0.1.6/go.mod h1:27StPiXJp6Xzkq2AQL7gPK7VC0hgmCnUKlco1dO1jaM= +github.com/Azure/azure-sdk-for-go/sdk/azcore v1.9.2 h1:c4k2FIYIh4xtwqrQwV0Ct1v5+ehlNXj5NI/MWVsiTkQ= +github.com/Azure/azure-sdk-for-go/sdk/azcore v1.9.2/go.mod h1:5FDJtLEO/GxwNgUxbwrY3LP0pEoThTQJtk2oysdXHxM= +github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.4.0 h1:BMAjVKJM0U/CYF27gA0ZMmXGkOcvfFtD0oHVZ1TIPRI= +github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.4.0/go.mod h1:1fXstnBMas5kzG+S3q8UoJcmyU6nUeunJcMDHcRYHhs= +github.com/Azure/azure-sdk-for-go/sdk/internal v1.5.2 h1:LqbJ/WzJUwBf8UiaSzgX7aMclParm9/5Vgp+TY51uBQ= +github.com/Azure/azure-sdk-for-go/sdk/internal v1.5.2/go.mod h1:yInRyqWXAuaPrgI7p70+lDDgh3mlBohis29jGMISnmc= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v2 v2.4.0 h1:1u/K2BFv0MwkG6he8RYuUcbbeK22rkoZbg4lKa/msZU= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v2 v2.4.0/go.mod h1:U5gpsREQZE6SLk1t/cFfc1eMhYAlYpEzvaYXuDfefy8= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4 v4.8.0-beta.1 h1:6RFNcR7iE8Ka8j76gE0a/b28eAX6AZF4zqSw0XnFWbg= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4 v4.8.0-beta.1/go.mod h1:gYq8wyDgv6JLhGbAU6gg8amCPgQWRE+aCvrV2gyzdfs= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/internal/v2 v2.0.0 h1:PTFGRSlMKCQelWwxUyYVEUqseBJVemLyqWJjvMyt0do= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/internal/v2 v2.0.0/go.mod h1:LRr2FzBTQlONPPa5HREE5+RjSCTXl7BwOvYOaWTqCaI= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armresources v1.2.0 h1:Dd+RhdJn0OTtVGaeDLZpcumkIVCtA/3/Fo42+eoYvVM= +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armresources v1.2.0/go.mod h1:5kakwfW5CjC9KK+Q4wjXAg+ShuIm2mBMua0ZFj2C8PE= +github.com/Azure/go-armbalancer v0.0.2 h1:NVnxsTWHI5/fEzL6k6TjxPUfcB/3Si3+HFOZXOu0QtA= +github.com/Azure/go-armbalancer v0.0.2/go.mod h1:yTg7MA/8YnfKQc9o97tzAJ7fbdVkod1xGsIvKmhYPRE= github.com/Azure/go-autorest v14.2.0+incompatible h1:V5VMDjClD3GiElqLWO7mz2MxNAK/vTfRHdAubSIPRgs= github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24= github.com/Azure/go-autorest/autorest v0.11.4/go.mod h1:JFgpikqFJ/MleTTxwepExTKnFUKKszPS8UavbQYUMuw= @@ -86,6 +104,8 @@ github.com/Azure/go-autorest/tracing v0.6.0 h1:TYi4+3m5t6K48TGI9AUdb+IzbnSxvnvUM github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU= github.com/Azure/skewer v0.0.14 h1:0mzUJhspECkajYyynYsOCp//E2PSnYXrgP45bcskqfQ= github.com/Azure/skewer v0.0.14/go.mod 
h1:6WTecuPyfGtuvS8Mh4JYWuHhO4kcWycGfsUBB+XTFG4= +github.com/AzureAD/microsoft-authentication-library-for-go v1.1.1 h1:WpB/QDNLpMw72xHJc34BNNykqSOeEJDAWkhf0u12/Jk= +github.com/AzureAD/microsoft-authentication-library-for-go v1.1.1/go.mod h1:wP83P5OoQ5p6ip3ScPr0BAq0BvuPAvacpEuSzyouqAI= github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU= github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo= github.com/GoogleCloudPlatform/k8s-cloud-provider v1.18.1-0.20220218231025-f11817397a1b h1:Heo1J/ttaQFgGJSVnCZquy3e5eH5j1nqxBuomztB3P0= @@ -220,8 +240,8 @@ github.com/euank/go-kmsg-parser v2.0.0+incompatible h1:cHD53+PLQuuQyLZeriD1V/esu github.com/euank/go-kmsg-parser v2.0.0+incompatible/go.mod h1:MhmAMZ8V4CYH4ybgdRwPr2TU5ThnS43puaKEMpja1uw= github.com/evanphx/json-patch v5.6.0+incompatible h1:jBYDEEiFBPxA0v50tFdvOzQQTCvpL6mnFh5mB2/l16U= github.com/evanphx/json-patch v5.6.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk= -github.com/felixge/httpsnoop v1.0.3 h1:s/nj+GCswXYzN5v2DpNMuMQYe+0DDwt5WVCU6CWBdXk= -github.com/felixge/httpsnoop v1.0.3/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U= +github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg= +github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U= github.com/form3tech-oss/jwt-go v3.2.2+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k= github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM0I9ntUbOk+k= github.com/frankban/quicktest v1.14.0 h1:+cqqvzZV87b4adx/5ayVOaYZ2CrvM4ejQvUdBzPPUss= @@ -273,9 +293,11 @@ github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69 github.com/golang-jwt/jwt/v4 v4.0.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg= github.com/golang-jwt/jwt/v4 v4.5.0 h1:7cYmW1XlMY7h7ii7UhUyChSgS5wUJEnm9uZVTGqOWzg= github.com/golang-jwt/jwt/v4 v4.5.0/go.mod h1:m21LjoU+eqJr34lmDMbreY2eSTRJ1cv77w39/MY0Ch0= +github.com/golang-jwt/jwt/v5 v5.0.0 h1:1n1XNM9hk7O9mnQoNBGolZvzebBQ7p93ULHRc28XJUE= +github.com/golang-jwt/jwt/v5 v5.0.0/go.mod h1:pqrtFR0X4osieyHYxtmOUWsAWrfe1Q5UVIyoH402zdk= github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q= -github.com/golang/glog v1.1.0 h1:/d3pCKDPWNnvIWe0vVUpNP32qc8U3PDVxySP/y360qE= -github.com/golang/glog v1.1.0/go.mod h1:pfYeQZ3JWZoXTV5sFc986z3HTpwQs9At6P4ImfuP3NQ= +github.com/golang/glog v1.1.2 h1:DVjP2PbBOzHyzA+dn3WhHIq4NdVu3Q+pvivFICf/7fo= +github.com/golang/glog v1.1.2/go.mod h1:zR+okUeTbrL6EL3xHUDxZuEtGv04p5shwip1+mL/rLQ= github.com/golang/groupcache v0.0.0-20190129154638-5b532d6fd5ef/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/groupcache v0.0.0-20190702054246-869f871628b6/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= github.com/golang/groupcache v0.0.0-20191227052852-215e87163ea7/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc= @@ -430,6 +452,8 @@ github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ= github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI= github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY= github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= +github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc= +github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw= 
github.com/libopenstorage/openstorage v1.0.0 h1:GLPam7/0mpdP8ZZtKjbfcXJBTIA/T1O6CBErVEFEyIM= github.com/libopenstorage/openstorage v1.0.0/go.mod h1:Sp1sIObHjat1BeXhfMqLZ14wnOzEhNx2YQedreMcUyc= github.com/lithammer/dedent v1.1.0 h1:VNzHMVCBNG1j0fh3OrsFRkVUwStdDArbgBWoPAffktY= @@ -481,6 +505,8 @@ github.com/opencontainers/runtime-spec v1.0.3-0.20220909204839-494a5a6aca78/go.m github.com/opencontainers/selinux v1.11.0 h1:+5Zbo97w3Lbmb3PeqQtpmTkMwsW5nRI3YaLpt7tQ7oU= github.com/opencontainers/selinux v1.11.0/go.mod h1:E5dMC3VPuVvVHDYmi78qvhJp8+M586T4DlDRYpFkyec= github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic= +github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8 h1:KoWmjvw+nsYOo29YJK9vDA65RGE3NrOnUtO7a+RF9HU= +github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8/go.mod h1:HKlIX3XHQyzLZPlr7++PzdhaXEj94dEiJgZDTsxEqUI= github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= @@ -611,22 +637,22 @@ go.opentelemetry.io/contrib/instrumentation/github.com/emicklei/go-restful/otelr go.opentelemetry.io/contrib/instrumentation/github.com/emicklei/go-restful/otelrestful v0.42.0/go.mod h1:XiglO+8SPMqM3Mqh5/rtxR1VHc63o8tb38QrU6tm4mU= go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.42.0 h1:ZOLJc06r4CB42laIXg/7udr0pbZyuAihN10A/XuiQRY= go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.42.0/go.mod h1:5z+/ZWJQKXa9YT34fQNx5K8Hd1EoIhvtUygUQPqEOgQ= -go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.44.0 h1:KfYpVmrjI7JuToy5k8XV3nkapjWx48k4E4JOtVstzQI= -go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.44.0/go.mod h1:SeQhzAEccGVZVEy7aH87Nh0km+utSpo1pTv6eMMop48= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.46.1 h1:aFJWCqJMNjENlcleuuOkGAPH82y0yULBScfXcIEdS24= +go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.46.1/go.mod h1:sEGXWArGqc3tVa+ekntsN65DmVbVeW+7lTKTjZF3/Fo= go.opentelemetry.io/contrib/propagators/b3 v1.17.0 h1:ImOVvHnku8jijXqkwCSyYKRDt2YrnGXD4BbhcpfbfJo= go.opentelemetry.io/contrib/propagators/b3 v1.17.0/go.mod h1:IkfUfMpKWmynvvE0264trz0sf32NRTZL4nuAN9AbWRc= -go.opentelemetry.io/otel v1.19.0 h1:MuS/TNf4/j4IXsZuJegVzI1cwut7Qc00344rgH7p8bs= -go.opentelemetry.io/otel v1.19.0/go.mod h1:i0QyjOq3UPoTzff0PJB2N66fb4S0+rSbSB15/oyH9fY= -go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.19.0 h1:Mne5On7VWdx7omSrSSZvM4Kw7cS7NQkOOmLcgscI51U= -go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.19.0/go.mod h1:IPtUMKL4O3tH5y+iXVyAXqpAwMuzC1IrxVS81rummfE= -go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.19.0 h1:3d+S281UTjM+AbF31XSOYn1qXn3BgIdWl8HNEpx08Jk= -go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.19.0/go.mod h1:0+KuTDyKL4gjKCF75pHOX4wuzYDUZYfAQdSu43o+Z2I= -go.opentelemetry.io/otel/metric v1.19.0 h1:aTzpGtV0ar9wlV4Sna9sdJyII5jTVJEvKETPiOKwvpE= -go.opentelemetry.io/otel/metric v1.19.0/go.mod h1:L5rUsV9kM1IxCj1MmSdS+JQAcVm319EUrDVLrt7jqt8= -go.opentelemetry.io/otel/sdk v1.19.0 h1:6USY6zH+L8uMH8L3t1enZPR3WFEmSTADlqldyHtJi3o= -go.opentelemetry.io/otel/sdk v1.19.0/go.mod h1:NedEbbS4w3C6zElbLdPJKOpJQOrGUJ+GfzpjUvI0v1A= -go.opentelemetry.io/otel/trace v1.19.0 h1:DFVQmlVbfVeOuBRrwdtaehRrWiL1JoVs9CPIQ1Dzxpg= -go.opentelemetry.io/otel/trace v1.19.0/go.mod h1:mfaSyvGyEJEI0nyV2I4qhNQnbBOUUmYZpYojqMnX2vo= +go.opentelemetry.io/otel 
v1.21.0 h1:hzLeKBZEL7Okw2mGzZ0cc4k/A7Fta0uoPgaJCr8fsFc= +go.opentelemetry.io/otel v1.21.0/go.mod h1:QZzNPQPm1zLX4gZK4cMi+71eaorMSGT3A4znnUvNNEo= +go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0 h1:cl5P5/GIfFh4t6xyruOgJP5QiA1pw4fYYdv6nc6CBWw= +go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0/go.mod h1:zgBdWWAu7oEEMC06MMKc5NLbA/1YDXV1sMpSqEeLQLg= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.21.0 h1:tIqheXEFWAZ7O8A7m+J0aPTmpJN3YQ7qetUAdkkkKpk= +go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.21.0/go.mod h1:nUeKExfxAQVbiVFn32YXpXZZHZ61Cc3s3Rn1pDBGAb0= +go.opentelemetry.io/otel/metric v1.21.0 h1:tlYWfeo+Bocx5kLEloTjbcDwBuELRrIFxwdQ36PlJu4= +go.opentelemetry.io/otel/metric v1.21.0/go.mod h1:o1p3CA8nNHW8j5yuQLdc1eeqEaPfzug24uvsyIEJRWM= +go.opentelemetry.io/otel/sdk v1.21.0 h1:FTt8qirL1EysG6sTQRZ5TokkU8d0ugCj8htOgThZXQ8= +go.opentelemetry.io/otel/sdk v1.21.0/go.mod h1:Nna6Yv7PWTdgJHVRD9hIYywQBRx7pbox6nwBnZIxl/E= +go.opentelemetry.io/otel/trace v1.21.0 h1:WD9i5gzvoUPuXIXH24ZNBudiarZDKuekPqi/E8fpfLc= +go.opentelemetry.io/otel/trace v1.21.0/go.mod h1:LGbsEB0f9LGjN+OZaQQ26sohbOmiMR+BaslueVtS/qQ= go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI= go.opentelemetry.io/proto/otlp v1.0.0 h1:T0TX0tmXU8a3CbNXzEKGeU5mIVOdf0oykP+u2lIVU/I= go.opentelemetry.io/proto/otlp v1.0.0/go.mod h1:Sy6pihPLfYHkr3NkUbEhGHFhINUSI/v80hjKIs5JXpM= @@ -635,8 +661,10 @@ go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc= go.uber.org/atomic v1.10.0 h1:9qC72Qh0+3MqyJbAn8YU5xVq1frD8bn3JtD2oXtafVQ= go.uber.org/atomic v1.10.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0= go.uber.org/goleak v1.1.10/go.mod h1:8a7PlsEVH3e/a/GLqe5IIrQx6GzcnRmZEufDUTk4A7A= -go.uber.org/goleak v1.2.1 h1:NBol2c7O1ZokfZ0LEU9K6Whx/KnwvepVetCUhtKja4A= -go.uber.org/goleak v1.2.1/go.mod h1:qlT2yGI9QafXHhZZLxlSuNsMw3FFLxBr+tBRlmO1xH4= +go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= +go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE= +go.uber.org/mock v0.4.0 h1:VcM4ZOtdbR4f6VXfiOpwpVJDL6lCReaZ6mw31wqh7KU= +go.uber.org/mock v0.4.0/go.mod h1:a6FSlNadKUHUa9IP5Vyt1zh4fC7uAwxMutEAscFbkZc= go.uber.org/multierr v1.1.0/go.mod h1:wR5kodmAFQ0UK8QlbwjlSNy0Z68gJhDJUG5sjR94q/0= go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU= go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0= @@ -657,8 +685,8 @@ golang.org/x/crypto v0.0.0-20210513164829-c07d793c2f9a/go.mod h1:P+XmwS30IXTQdn5 golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc= golang.org/x/crypto v0.0.0-20220722155217-630584e8d5aa/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4= golang.org/x/crypto v0.6.0/go.mod h1:OFC/31mSvZgRz0V1QTNCzfAI1aIRzbiufJtkMIlEp58= -golang.org/x/crypto v0.16.0 h1:mMMrFzRSCF0GvB7Ne27XVtVAaXLrPmgPC7/v0tkwHaY= -golang.org/x/crypto v0.16.0/go.mod h1:gCAAfMLgwOJRpTjQ2zCCt2OcSfYMTeZVSRtQlPC7Nq4= +golang.org/x/crypto v0.21.0 h1:X31++rzVUdKhX5sWmSOFZxx8UW/ldWx55cbf08iNAMA= +golang.org/x/crypto v0.21.0/go.mod h1:0BP7YvVV9gBbVKyeTG0Gyn+gZm94bibOW5BjDEYAOMs= golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA= golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8= @@ -741,8 +769,8 @@ 
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qx golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c= golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco= golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs= -golang.org/x/net v0.19.0 h1:zTwKpTd2XuCqf8huc7Fo2iSy+4RHPd10s4KzeTnVr1c= -golang.org/x/net v0.19.0/go.mod h1:CfAk/cbD4CthTvqiEl8NpboMuiuOYsAr/7NOjZJtv1U= +golang.org/x/net v0.23.0 h1:7EYJ93RZ9vYSZAIb2x3lnuvqO5zneoD6IvWjuhfxjTs= +golang.org/x/net v0.23.0/go.mod h1:JKghWKKOSdJwpW2GEx0Ja7fmaKnMsbu+MWVZTokSYmg= golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U= golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= golang.org/x/oauth2 v0.0.0-20190402181905-9f3314589c9a/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw= @@ -760,8 +788,8 @@ golang.org/x/oauth2 v0.0.0-20210628180205-a41e5a781914/go.mod h1:KelEdhl1UZF7XfJ golang.org/x/oauth2 v0.0.0-20210805134026-6f1e6394065a/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= golang.org/x/oauth2 v0.0.0-20210819190943-2bc19b11175f/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= golang.org/x/oauth2 v0.0.0-20211005180243-6b3c2da341f1/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A= -golang.org/x/oauth2 v0.10.0 h1:zHCpF2Khkwy4mMB4bv0U37YtJdTGW8jI0glAApi0Kh8= -golang.org/x/oauth2 v0.10.0/go.mod h1:kTpgurOux7LqtuxjuyZa4Gj2gdezIt/jQtGnNFfypQI= +golang.org/x/oauth2 v0.11.0 h1:vPL4xzxBM4niKCW6g9whtaWVXTJf1U5e4aZxxFx/gbU= +golang.org/x/oauth2 v0.11.0/go.mod h1:LdF7O/8bLR/qWK9DrpXmbHLTouvRHK0SgJl0GmDBchk= golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= @@ -825,6 +853,7 @@ golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBc golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210603125802-9665404d3644/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= +golang.org/x/sys v0.0.0-20210616045830-e2b7044e8c71/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= @@ -836,14 +865,14 @@ golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBc golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg= -golang.org/x/sys v0.15.0 h1:h48lPFYpsTvQJZF4EKyI4aLHaev3CxivZmv7yZig9pc= -golang.org/x/sys v0.15.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= +golang.org/x/sys v0.18.0 h1:DBdB3niSjOA/O0blCZBqDefyWNYveAYMNF1Wum0DYQ4= +golang.org/x/sys v0.18.0/go.mod 
h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA= golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo= golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8= golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k= -golang.org/x/term v0.15.0 h1:y/Oo/a/q3IXu26lQgl04j/gjuBDOBlx7X6Om1j2CPW4= -golang.org/x/term v0.15.0/go.mod h1:BDl952bC7+uMoWR75FIrCDx79TPU9oHkTZ9yRbYOrX0= +golang.org/x/term v0.18.0 h1:FcHjZXDMxI8mM3nwhX9HlKop4C0YQvCVCdwYl2wOtE8= +golang.org/x/term v0.18.0/go.mod h1:ILwASektA3OnRv7amZ1xhE/KTR+u50pbXfZ03+6Nx58= golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= @@ -1023,10 +1052,10 @@ google.golang.org/genproto v0.0.0-20210903162649-d08c68adba83/go.mod h1:eFjDcFEc google.golang.org/genproto v0.0.0-20210909211513-a8c4777a87af/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY= google.golang.org/genproto v0.0.0-20210924002016-3dee208752a0/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= google.golang.org/genproto v0.0.0-20211021150943-2b146023228c/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc= -google.golang.org/genproto v0.0.0-20230803162519-f966b187b2e5 h1:L6iMMGrtzgHsWofoFcihmDEMYeDR9KN/ThbPWGrh++g= -google.golang.org/genproto v0.0.0-20230803162519-f966b187b2e5/go.mod h1:oH/ZOT02u4kWEp7oYBGYFFkCdKS/uYR9Z7+0/xuuFp8= -google.golang.org/genproto/googleapis/api v0.0.0-20230726155614-23370e0ffb3e h1:z3vDksarJxsAKM5dmEGv0GHwE2hKJ096wZra71Vs4sw= -google.golang.org/genproto/googleapis/api v0.0.0-20230726155614-23370e0ffb3e/go.mod h1:rsr7RhLuwsDKL7RmgDDCUc6yaGr1iqceVb5Wv6f6YvQ= +google.golang.org/genproto v0.0.0-20230822172742-b8732ec3820d h1:VBu5YqKPv6XiJ199exd8Br+Aetz+o08F+PLMnwJQHAY= +google.golang.org/genproto v0.0.0-20230822172742-b8732ec3820d/go.mod h1:yZTlhN0tQnXo3h00fuXNCxJdLdIdnVFVBaRJ5LWBbw4= +google.golang.org/genproto/googleapis/api v0.0.0-20230822172742-b8732ec3820d h1:DoPTO70H+bcDXcd39vOqb2viZxgqeBeSGtZ55yZU4/Q= +google.golang.org/genproto/googleapis/api v0.0.0-20230822172742-b8732ec3820d/go.mod h1:KjSP20unUpOx5kyQUFa7k4OJg0qeJ7DEZflGDu2p6Bk= google.golang.org/genproto/googleapis/rpc v0.0.0-20230822172742-b8732ec3820d h1:uvYuEyMHKNt+lT4K3bN6fGswmK8qSvcreM3BwjDh+y4= google.golang.org/genproto/googleapis/rpc v0.0.0-20230822172742-b8732ec3820d/go.mod h1:+Bk1OCOj40wS2hwAMA+aCW9ypzm63QTBBHp6lQ3p+9M= google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c= @@ -1055,8 +1084,8 @@ google.golang.org/grpc v1.38.0/go.mod h1:NREThFqKR1f3iQ6oBuvc5LadQuXVGo9rkm5ZGrQ google.golang.org/grpc v1.39.0/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE= google.golang.org/grpc v1.39.1/go.mod h1:PImNr+rS9TWYb2O4/emRugxiyHZ5JyHW5F+RPnDzfrE= google.golang.org/grpc v1.40.0/go.mod h1:ogyxbiOoUXAkP+4+xa6PZSE9DZgIHtSpzjDTB9KAK34= -google.golang.org/grpc v1.58.3 h1:BjnpXut1btbtgN/6sp+brB2Kbm2LjNXnidYujAVbSoQ= -google.golang.org/grpc v1.58.3/go.mod h1:tgX3ZQDlNJGU96V6yHh1T/JeoBQ2TXdr43YbYSsCJk0= +google.golang.org/grpc v1.59.0 h1:Z5Iec2pjwb+LEOqzpB2MR12/eKFhDPhuqW91O+4bwUk= +google.golang.org/grpc v1.59.0/go.mod h1:aUPDwccQo6OTjy7Hct4AfBPD1GptF4fyUjIkQ9YtF98= google.golang.org/grpc/cmd/protoc-gen-go-grpc 
v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw= google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8= google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0= @@ -1108,56 +1137,56 @@ honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWh honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg= honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k= honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k= -k8s.io/api v0.29.0 h1:NiCdQMY1QOp1H8lfRyeEf8eOwV6+0xA6XEE44ohDX2A= -k8s.io/api v0.29.0/go.mod h1:sdVmXoz2Bo/cb77Pxi71IPTSErEW32xa4aXwKH7gfBA= -k8s.io/apiextensions-apiserver v0.29.0 h1:0VuspFG7Hj+SxyF/Z/2T0uFbI5gb5LRgEyUVE3Q4lV0= -k8s.io/apiextensions-apiserver v0.29.0/go.mod h1:TKmpy3bTS0mr9pylH0nOt/QzQRrW7/h7yLdRForMZwc= -k8s.io/apimachinery v0.29.0 h1:+ACVktwyicPz0oc6MTMLwa2Pw3ouLAfAon1wPLtG48o= -k8s.io/apimachinery v0.29.0/go.mod h1:eVBxQ/cwiJxH58eK/jd/vAk4mrxmVlnpBH5J2GbMeis= -k8s.io/apiserver v0.29.0 h1:Y1xEMjJkP+BIi0GSEv1BBrf1jLU9UPfAnnGGbbDdp7o= -k8s.io/apiserver v0.29.0/go.mod h1:31n78PsRKPmfpee7/l9NYEv67u6hOL6AfcE761HapDM= -k8s.io/client-go v0.29.0 h1:KmlDtFcrdUzOYrBhXHgKw5ycWzc3ryPX5mQe0SkG3y8= -k8s.io/client-go v0.29.0/go.mod h1:yLkXH4HKMAywcrD82KMSmfYg2DlE8mepPR4JGSo5n38= -k8s.io/cloud-provider v0.29.0 h1:Qgk/jHsSKGRk/ltTlN6e7eaNuuamLROOzVBd0RPp94M= -k8s.io/cloud-provider v0.29.0/go.mod h1:gBCt7YYKFV4oUcJ/0xF9lS/9il4MxKunJ+ZKvh39WGo= +k8s.io/api v0.29.6 h1:eDxIl8+PeEpwbe2YyS5RXJ9vdn4hnKWMBf4WUJP9DQM= +k8s.io/api v0.29.6/go.mod h1:ZuUPMhJV74DJXapldbg6upaHfiOjrBb+0ffUbBi1jaw= +k8s.io/apiextensions-apiserver v0.29.6 h1:tUu1N6Zt9GT8KVcPF5aGDqfISz1mveM4yFh7eL5bxmE= +k8s.io/apiextensions-apiserver v0.29.6/go.mod h1:iw1EbwZat08I219qrQKoFMHGo7J9KxPqMpVKxCbNbCs= +k8s.io/apimachinery v0.29.6 h1:CLjJ5b0hWW7531n/njRE3rnusw3rhVGCFftPfnG54CI= +k8s.io/apimachinery v0.29.6/go.mod h1:i3FJVwhvSp/6n8Fl4K97PJEP8C+MM+aoDq4+ZJBf70Y= +k8s.io/apiserver v0.29.6 h1:JxgDbpgahOgqoDOf+zVl2mI+rQcHcLQnK6YhhtsjbNs= +k8s.io/apiserver v0.29.6/go.mod h1:HrQwfPWxhwEa+n8/+5YwSF5yT2WXbeyFjqq6KEXHTX8= +k8s.io/client-go v0.29.6 h1:5E2ebuB/p0F0THuQatyvhDvPL2SIeqwTPrtnrwKob/8= +k8s.io/client-go v0.29.6/go.mod h1:jHZcrQqDplyv20v7eu+iFM4gTpglZSZoMVcKrh8sRGg= +k8s.io/cloud-provider v0.29.6 h1:W6dafBlIRQlv8oDkK5UmuM0ZGw1lDCReh+BYtTBsBSI= +k8s.io/cloud-provider v0.29.6/go.mod h1:+bjtIdnbIW+Ubs5xbnWvYURymR3tcrmxWPgQp4lwoN0= k8s.io/cloud-provider-aws v1.27.0 h1:PF8YrH8QcN6JoXB3Xxlaz84SBDYMPunJuCc0cPuCWXA= k8s.io/cloud-provider-aws v1.27.0/go.mod h1:9vUb5mnVnReSRDBWcBxB1b0HOeEc472iOPmrnwpN9SA= -k8s.io/code-generator v0.29.0 h1:2LQfayGDhaIlaamXjIjEQlCMy4JNCH9lrzas4DNW1GQ= -k8s.io/code-generator v0.29.0/go.mod h1:5bqIZoCxs2zTRKMWNYqyQWW/bajc+ah4rh0tMY8zdGA= -k8s.io/component-base v0.29.0 h1:T7rjd5wvLnPBV1vC4zWd/iWRbV8Mdxs+nGaoaFzGw3s= -k8s.io/component-base v0.29.0/go.mod h1:sADonFTQ9Zc9yFLghpDpmNXEdHyQmFIGbiuZbqAXQ1M= -k8s.io/component-helpers v0.29.0 h1:Y8W70NGeitKxWwhsPo/vEQbQx5VqJV+3xfLpP3V1VxU= -k8s.io/component-helpers v0.29.0/go.mod h1:j2coxVfmzTOXWSE6sta0MTgNSr572Dcx68F6DD+8fWc= -k8s.io/controller-manager v0.29.0 h1:kEv9sKLnjDkoSqeouWp2lZ8P33an5wrDJpOMqoyD7pc= -k8s.io/controller-manager v0.29.0/go.mod h1:UKtadWkULF5bfX7vu3hHppzY/hz88C03t70GItg/x08= -k8s.io/cri-api v0.29.0 h1:atenAqOltRsFqcCQlFFpDnl/R4aGfOELoNLTDJfd7t8= -k8s.io/cri-api v0.29.0/go.mod 
h1:Rls2JoVwfC7kW3tndm7267kriuRukQ02qfht0PCRuIc= -k8s.io/csi-translation-lib v0.29.0 h1:we4X1yUlDikvm5Rv0dwMuPHNw6KwjwsQiAuOPWXha8M= -k8s.io/csi-translation-lib v0.29.0/go.mod h1:Cp6t3CNBSm1dXS17V8IImUjkqfIB6KCj8Fs8wf6uyTA= -k8s.io/dynamic-resource-allocation v0.29.0 h1:JQW5erdoOsvhst7DxMfEpnXhrfm9SmNTnvyaXdqTLAE= -k8s.io/dynamic-resource-allocation v0.29.0/go.mod h1:4T9Fg4J8B2SdRVVj5/1hM7hGAUrBqzyEUIGFY+wQl4Q= +k8s.io/code-generator v0.29.6 h1:Z8T9VMR0mr7V5GG66c6GVAZrIiEy2uFoQwbeVeWLqPA= +k8s.io/code-generator v0.29.6/go.mod h1:7TYnI0dYItL2cKuhhgPSuF3WED9uMdELgbVXFfn/joE= +k8s.io/component-base v0.29.6 h1:XkVJI67FvBgNb/3kKqvaGKokxUrIR0RrksCPNI+JYCs= +k8s.io/component-base v0.29.6/go.mod h1:kIahZm8aw9lV8Vw17LF89REmeBrv5+QEl3v7HsrmITY= +k8s.io/component-helpers v0.29.6 h1:kG/tK0gXPXj6n3Oxn5Eul8nYzer3SejZI3ClwiWkreQ= +k8s.io/component-helpers v0.29.6/go.mod h1:Ltb44cbXci9fy9rytWwYsu8vHfi4fjyQdSwk6UlCR4E= +k8s.io/controller-manager v0.29.6 h1:bcn/i+8HncLlCbg1ccQxgn9PQ3YDeomDcwGF11a/Svo= +k8s.io/controller-manager v0.29.6/go.mod h1:LJr7QA1iBNXnoiNb0+oGJv1jHlXcc+cxon5DG2KxzKQ= +k8s.io/cri-api v0.29.6 h1:95kqUc2TzkxOiRBI9tSA+HY6JA/Zyg5jx6u541oJY9k= +k8s.io/cri-api v0.29.6/go.mod h1:A6pdbjzML2xi9B0Clqn5qt1HJ3Ik12x2j+jv/TkqjRE= +k8s.io/csi-translation-lib v0.29.6 h1:a1cs5yob5Nj85gZ7Vlr1IVt/W28Ju0MZS8rzh/K+M1s= +k8s.io/csi-translation-lib v0.29.6/go.mod h1:SkPOKuBw94YX+DTcSSPeRXkG/sCQfjMnXHA3Q4JZNPQ= +k8s.io/dynamic-resource-allocation v0.29.6 h1:1/Gx02V5+kdT4fq0wBKqXBEge8WEMtXIXlT3wU755WU= +k8s.io/dynamic-resource-allocation v0.29.6/go.mod h1:BRaEJZtSil21NNzuhPrFVput+dD8Sr+xG77MsEuxRJw= k8s.io/gengo v0.0.0-20230829151522-9cce18d56c01 h1:pWEwq4Asjm4vjW7vcsmijwBhOr1/shsbSYiWXmNGlks= k8s.io/gengo v0.0.0-20230829151522-9cce18d56c01/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E= k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE= k8s.io/klog/v2 v2.2.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y= k8s.io/klog/v2 v2.110.1 h1:U/Af64HJf7FcwMcXyKm2RPM22WZzyR7OSpYj5tg3cL0= k8s.io/klog/v2 v2.110.1/go.mod h1:YGtd1984u+GgbuZ7e08/yBuAfKLSO0+uR1Fhi6ExXjo= -k8s.io/kms v0.29.0 h1:KJ1zaZt74CgvgV3NR7tnURJ/mJOKC5X3nwon/WdwgxI= -k8s.io/kms v0.29.0/go.mod h1:mB0f9HLxRXeXUfHfn1A7rpwOlzXI1gIWu86z6buNoYA= +k8s.io/kms v0.29.6 h1:Aa+4zZDqUaFacjNGzCHIC0ilqnEhA1qHqvyn9igirPQ= +k8s.io/kms v0.29.6/go.mod h1:vWVImKkJd+1BQY4tBwdfSwjQBiLrnbNtHADcDEDQFtk= k8s.io/kube-openapi v0.0.0-20231010175941-2dd684a91f00 h1:aVUu9fTY98ivBPKR9Y5w/AuzbMm96cd3YHRTU83I780= k8s.io/kube-openapi v0.0.0-20231010175941-2dd684a91f00/go.mod h1:AsvuZPBlUDVuCdzJ87iajxtXuR9oktsTctW/R9wwouA= -k8s.io/kube-scheduler v0.29.0 h1:n4v68EvxYhy7o5Q/LFPgqBEGi7lKoiAxwQ0gQyMoj9M= -k8s.io/kube-scheduler v0.29.0/go.mod h1:mJMGpqS+aC6/Qf6SDpaqvM6/kLENHN5U7SACSdrZV7o= -k8s.io/kubectl v0.29.0 h1:Oqi48gXjikDhrBF67AYuZRTcJV4lg2l42GmvsP7FmYI= -k8s.io/kubectl v0.29.0/go.mod h1:0jMjGWIcMIQzmUaMgAzhSELv5WtHo2a8pq67DtviAJs= -k8s.io/kubelet v0.29.0 h1:SX5hlznTBcGIrS1scaf8r8p6m3e475KMifwt9i12iOk= -k8s.io/kubelet v0.29.0/go.mod h1:kvKS2+Bz2tgDOG1S1q0TH2z1DasNuVF+8p6Aw7xvKkI= -k8s.io/kubernetes v1.29.0 h1:DOLN7g8+nnAYBi8JHoW0+/MCrZKDPIqAxzLCXDXd0cg= -k8s.io/kubernetes v1.29.0/go.mod h1:9kztbUQf9stVDcIYXx+BX3nuGCsAQDsuClkGMpPs3pA= -k8s.io/legacy-cloud-providers v0.29.0 h1:fjGV9OhqseUTp3R8xOm2TBoAxyuRTOS6B2zFTSJ80RE= -k8s.io/legacy-cloud-providers v0.29.0/go.mod h1:qR0/QoI19+G4zI71hSSZ1SEgiwdV358SU8u3jCwvK8c= -k8s.io/mount-utils v0.30.0-alpha.0 h1:hga3HePjAnYTofEKHmjvTZZzSCTV4auLQ9D172n1Gig= -k8s.io/mount-utils v0.30.0-alpha.0/go.mod 
h1:N3lDK/G1B8R/IkAt4NhHyqB07OqEr7P763z3TNge94U= +k8s.io/kube-scheduler v0.29.6 h1:E1J5JIpNx4N+0rCpTtsfl8PFqofcrXhEde+DgwmqhNU= +k8s.io/kube-scheduler v0.29.6/go.mod h1:TPkLhhoWRwysksJ4LHhW9vbWyHGdDmKaoUD/fm6KLjc= +k8s.io/kubectl v0.29.6 h1:hmkOMyH2uSUV16gIB3Qp2dv09fM2+PGEXz5SH1gwp7Y= +k8s.io/kubectl v0.29.6/go.mod h1:IUpyXy2OCbIMuBMAisDHM9shh5/Nseij4w+HIt0aq6A= +k8s.io/kubelet v0.29.6 h1:jXnnBNHK/KNNEJesmlIZmCvlYC3a5/e04BIS9VPM49M= +k8s.io/kubelet v0.29.6/go.mod h1:kGEUqodVM120YTTQLSCTXzZP4XMFDp7qLf7iU3hrRE4= +k8s.io/kubernetes v1.29.6 h1:jn8kA/oVOAWZOeoorx6xZ4d+KgGp+Evgi90x9bEI/DE= +k8s.io/kubernetes v1.29.6/go.mod h1:28sDhcb87LX5z3GWAKYmLrhrifxi4W9bEWua4DRTIvk= +k8s.io/legacy-cloud-providers v0.29.6 h1:WvmX09CTrwZd7VfTZviiM9dDDCXqSWDY5EErBb5TzuU= +k8s.io/legacy-cloud-providers v0.29.6/go.mod h1:K/JsdfhAzfRWclCbLjQaz0CwloAMCDDF3/CWCeF1M7U= +k8s.io/mount-utils v0.29.6 h1:44NDngKV5z/vt9YsYFVT0mPD68XjSfbqYfBEvUSwKb0= +k8s.io/mount-utils v0.29.6/go.mod h1:SHUMR9n3b6tLgEmlyT36cL6fV6Sjwa5CJhc0guCXvb0= k8s.io/utils v0.0.0-20230726121419-3b25d923346b h1:sgn3ZU783SCgtaSJjpcVVlRqd6GSnlTLKgpAAttJvpI= k8s.io/utils v0.0.0-20230726121419-3b25d923346b/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= rsc.io/binaryregexp v0.2.0/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8= diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go-extensions/LICENSE b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go-extensions/LICENSE new file mode 100644 index 000000000000..9e841e7a26e4 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go-extensions/LICENSE @@ -0,0 +1,21 @@ + MIT License + + Copyright (c) Microsoft Corporation. + + Permission is hereby granted, free of charge, to any person obtaining a copy + of this software and associated documentation files (the "Software"), to deal + in the Software without restriction, including without limitation the rights + to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + copies of the Software, and to permit persons to whom the Software is + furnished to do so, subject to the following conditions: + + The above copyright notice and this permission notice shall be included in all + copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE + SOFTWARE diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go-extensions/pkg/middleware/arm_client_opts.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go-extensions/pkg/middleware/arm_client_opts.go new file mode 100644 index 000000000000..3a6473766ae8 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go-extensions/pkg/middleware/arm_client_opts.go @@ -0,0 +1,66 @@ +/* +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package middleware
+
+import (
+	"time"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/arm"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+)
+
+// DefaultArmOpts returns ARM client options wired up with the shared HTTP client,
+// the default retry and telemetry options, and the request-metric policy that
+// feeds logCollector.
+func DefaultArmOpts(userAgent string, logCollector ArmRequestMetricCollector, customPerCallPolicies ...policy.Policy) *arm.ClientOptions {
+	opts := &arm.ClientOptions{}
+	opts.Telemetry = DefaultTelemetryOpts(userAgent)
+	opts.Retry = DefaultRetryOpts()
+	opts.Transport = DefaultHTTPClient()
+	// We add the metric policy to the PerRetryPolicies so we can track
+	// any retries that happened.
+	opts.PerRetryPolicies = []policy.Policy{
+		runtime.NewRequestIDPolicy(),
+		&ArmRequestMetricPolicy{Collector: logCollector},
+	}
+	opts.PerCallPolicies = []policy.Policy{}
+	if customPerCallPolicies != nil {
+		opts.PerCallPolicies = append(opts.PerCallPolicies, customPerCallPolicies...)
+	}
+	return opts
+}
+
+// DefaultRetryOpts returns the retry options used by DefaultArmOpts.
+func DefaultRetryOpts() policy.RetryOptions {
+	return policy.RetryOptions{
+		MaxRetries: 5,
+		// Note the default retry behavior is exponential backoff.
+		RetryDelay: time.Second * 5,
+		// StatusCodes specifies the HTTP status codes that indicate the operation should be retried.
+		// A nil slice will use the following values.
+		//   http.StatusRequestTimeout      408
+		//   http.StatusTooManyRequests     429
+		//   http.StatusInternalServerError 500
+		//   http.StatusBadGateway          502
+		//   http.StatusServiceUnavailable  503
+		//   http.StatusGatewayTimeout      504
+		// Specifying values will replace the default values.
+		// Specifying an empty slice will disable retries for HTTP status codes.
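+		// A hypothetical override (not set here) could look like:
+		//   StatusCodes: []int{http.StatusTooManyRequests, http.StatusServiceUnavailable},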
+		// StatusCodes: nil,
+	}
+}
+
+// DefaultTelemetryOpts returns the telemetry options used by DefaultArmOpts.
+func DefaultTelemetryOpts(userAgent string) policy.TelemetryOptions {
+	return policy.TelemetryOptions{
+		ApplicationID: userAgent,
+	}
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go-extensions/pkg/middleware/arm_error_collector.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go-extensions/pkg/middleware/arm_error_collector.go
new file mode 100644
index 000000000000..ea8ffb0289ad
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go-extensions/pkg/middleware/arm_error_collector.go
@@ -0,0 +1,243 @@
+package middleware
+
+import (
+	"context"
+	"crypto/tls"
+	"errors"
+	"fmt"
+	"net/http"
+	"net/http/httptrace"
+	"time"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/arm"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
+)
+
+const (
+	headerKeyRequestID     = "X-Ms-Client-Request-Id"
+	headerKeyCorrelationID = "X-Ms-Correlation-Request-id"
+	ArmErrorCodeCastToArmResponseErrorFailed ArmErrorCode = "CastToArmResponseErrorFailed"
+	ArmErrorCodeTransportError               ArmErrorCode = "TransportError"
+	ArmErrorCodeUnexpectedTransportError     ArmErrorCode = "UnexpectedTransportError"
+	ArmErrorCodeContextCanceled              ArmErrorCode = "ContextCanceled"
+	ArmErrorCodeContextDeadlineExceeded      ArmErrorCode = "ContextDeadlineExceeded"
+)
+
+// ArmError is the unified error experience across Azure Resource Manager; it contains a Code and a Message.
+type ArmError struct {
+	Code    ArmErrorCode `json:"code"`
+	Message string       `json:"message"`
+}
+
+type ArmErrorCode string
+
+type RequestInfo struct {
+	Request  *http.Request
+	ArmResId *arm.ResourceID
+}
+
+func newRequestInfo(req *http.Request, resId *arm.ResourceID) *RequestInfo {
+	return &RequestInfo{Request: req, ArmResId: resId}
+}
+
+type ResponseInfo struct {
+	Response      *http.Response
+	Error         *ArmError
+	Latency       time.Duration
+	RequestId     string
+	CorrelationId string
+	ConnTracking  *HttpConnTracking
+}
+
+type HttpConnTracking struct {
+	TotalLatency string
+	DnsLatency   string
+	ConnLatency  string
+	TlsLatency   string
+	Protocol     string
+	ReqConnInfo  *httptrace.GotConnInfo
+}
+
+// ArmRequestMetricCollector is an interface that collectors need to implement.
+// TODO: use *policy.Request or *http.Request?
+type ArmRequestMetricCollector interface {
+	// RequestStarted is called when a request is about to be sent.
+	// The context is not provided; get it from RequestInfo.Request.Context().
+	RequestStarted(*RequestInfo)
+
+	// RequestCompleted is called when a request is finished.
+	// The context is not provided; get it from RequestInfo.Request.Context().
+	// If an error occurred, ResponseInfo.Error will be populated.
+	RequestCompleted(*RequestInfo, *ResponseInfo)
+}
+
+// ArmRequestMetricPolicy is a policy that collects metrics/telemetry for ARM requests.
+type ArmRequestMetricPolicy struct {
+	Collector ArmRequestMetricCollector
+}
+
+// Do implements the azcore/policy.Policy interface.
+func (p *ArmRequestMetricPolicy) Do(req *policy.Request) (*http.Response, error) {
+	httpReq := req.Raw()
+	if httpReq == nil || httpReq.URL == nil {
+		// not able to collect telemetry, just pass through
+		return req.Next()
+	}
+
+	armResId, err := arm.ParseResourceID(httpReq.URL.Path)
+	if err != nil {
+		// TODO: handle the error without breaking the request.
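+		// For now a parse failure is tolerated: the request proceeds and the
+		// RequestInfo below is built with a nil ArmResId, which collectors must tolerate.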
+	}
+
+	connTracking := &HttpConnTracking{}
+	// We have to add the tracing hooks to the context first, then clone the
+	// policy.Request struct; this is what allows the connection tracing to happen.
+	// Otherwise we can't change the underlying http request of req, so we have to
+	// use newARMReq instead.
+	newCtx := addConnectionTracingToRequestContext(httpReq.Context(), connTracking)
+	newARMReq := req.Clone(newCtx)
+	requestInfo := newRequestInfo(httpReq, armResId)
+	started := time.Now()
+
+	p.requestStarted(requestInfo)
+
+	var resp *http.Response
+	var reqErr error
+
+	// Defer this function in case there's a panic somewhere down the pipeline.
+	// It's the calling user's responsibility to handle panics, not this policy's.
+	defer func() {
+		latency := time.Since(started)
+		respInfo := &ResponseInfo{
+			Response:     resp,
+			Latency:      latency,
+			ConnTracking: connTracking,
+		}
+
+		if reqErr != nil {
+			// Either it's a transport error
+			// or it was already handled by a previous policy.
+			respInfo.Error = parseTransportError(reqErr)
+		} else {
+			respInfo.Error = parseArmErrorFromResponse(resp)
+		}
+
+		// We need to read the request id and correlation id back from the response's
+		// request headers because the policies that set them might run after this policy.
+		if resp != nil && resp.Request != nil {
+			respInfo.RequestId = resp.Request.Header.Get(headerKeyRequestID)
+			respInfo.CorrelationId = resp.Request.Header.Get(headerKeyCorrelationID)
+		}
+
+		p.requestCompleted(requestInfo, respInfo)
+	}()
+
+	resp, reqErr = newARMReq.Next()
+	return resp, reqErr
+}
+
+// shortcut function to handle a nil collector
+func (p *ArmRequestMetricPolicy) requestStarted(iReq *RequestInfo) {
+	if p.Collector != nil {
+		p.Collector.RequestStarted(iReq)
+	}
+}
+
+// shortcut function to handle a nil collector
+func (p *ArmRequestMetricPolicy) requestCompleted(iReq *RequestInfo, iResp *ResponseInfo) {
+	if p.Collector != nil {
+		p.Collector.RequestCompleted(iReq, iResp)
+	}
+}
+
+func parseArmErrorFromResponse(resp *http.Response) *ArmError {
+	if resp == nil {
+		return &ArmError{Code: ArmErrorCodeUnexpectedTransportError, Message: "nil response"}
+	}
+	if resp.StatusCode > 399 {
+		// For 4xx and 5xx responses, ARM should include {error:{code, message}} in the body.
+		err := runtime.NewResponseError(resp)
+		respErr := &azcore.ResponseError{}
+		if errors.As(err, &respErr) {
+			return &ArmError{Code: ArmErrorCode(respErr.ErrorCode), Message: respErr.Error()}
+		}
+		return &ArmError{Code: ArmErrorCodeCastToArmResponseErrorFailed, Message: fmt.Sprintf("Response body is not in ARM error form: {error:{code, message}}: %s", err.Error())}
+	}
+	return nil
+}
+
+// parseTransportError distinguishes between:
+// - Context Cancelled (the request configured its context with a timeout or cancellation)
+// - ClientTimeout (context still valid, but the http client has a timeout configured)
+// - Transport Error (DNS/Dial/TLS/ServerTimeout)
+func parseTransportError(err error) *ArmError {
+	if err == nil {
+		return nil
+	}
+	if errors.Is(err, context.Canceled) {
+		return &ArmError{Code: ArmErrorCodeContextCanceled, Message: err.Error()}
+	}
+	if errors.Is(err, context.DeadlineExceeded) {
+		return &ArmError{Code: ArmErrorCodeContextDeadlineExceeded, Message: err.Error()}
+	}
+	return &ArmError{Code: ArmErrorCodeTransportError, Message: err.Error()}
+}
+
+func addConnectionTracingToRequestContext(ctx context.Context, connTracking *HttpConnTracking) context.Context {
+	var getConn, dnsStart, connStart, tlsStart *time.Time
+
+	trace := &httptrace.ClientTrace{
+		GetConn: func(hostPort string) {
+			getConn = to.Ptr(time.Now())
+		},
+		GotConn:
func(connInfo httptrace.GotConnInfo) {
+			if getConn != nil {
+				connTracking.TotalLatency = fmt.Sprintf("%dms", time.Since(*getConn).Milliseconds())
+			}
+
+			connTracking.ReqConnInfo = &connInfo
+		},
+		DNSStart: func(_ httptrace.DNSStartInfo) {
+			dnsStart = to.Ptr(time.Now())
+		},
+		DNSDone: func(dnsInfo httptrace.DNSDoneInfo) {
+			if dnsInfo.Err == nil {
+				if dnsStart != nil {
+					connTracking.DnsLatency = fmt.Sprintf("%dms", time.Since(*dnsStart).Milliseconds())
+				}
+			} else {
+				connTracking.DnsLatency = dnsInfo.Err.Error()
+			}
+		},
+		ConnectStart: func(_, _ string) {
+			connStart = to.Ptr(time.Now())
+		},
+		ConnectDone: func(_, _ string, err error) {
+			if err == nil {
+				if connStart != nil {
+					connTracking.ConnLatency = fmt.Sprintf("%dms", time.Since(*connStart).Milliseconds())
+				}
+			} else {
+				connTracking.ConnLatency = err.Error()
+			}
+		},
+		TLSHandshakeStart: func() {
+			tlsStart = to.Ptr(time.Now())
+		},
+		TLSHandshakeDone: func(t tls.ConnectionState, err error) {
+			if err == nil {
+				if tlsStart != nil {
+					connTracking.TlsLatency = fmt.Sprintf("%dms", time.Since(*tlsStart).Milliseconds())
+				}
+				connTracking.Protocol = t.NegotiatedProtocol
+			} else {
+				connTracking.TlsLatency = err.Error()
+			}
+		},
+	}
+	ctx = httptrace.WithClientTrace(ctx, trace)
+	return ctx
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go-extensions/pkg/middleware/default_http_client.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go-extensions/pkg/middleware/default_http_client.go
new file mode 100644
index 000000000000..97d4a754013e
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go-extensions/pkg/middleware/default_http_client.go
@@ -0,0 +1,102 @@
+/*
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package middleware
+
+import (
+	"net"
+	"net/http"
+	"time"
+
+	"github.com/Azure/go-armbalancer"
+	"go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp"
+	"go.opentelemetry.io/otel/propagation"
+	"golang.org/x/net/http2"
+)
+
+var (
+	// defaultHTTPClient is configured with the defaultRoundTripper
+	defaultHTTPClient *http.Client
+	// defaultTransport is a pre-configured *http.Transport for http/2
+	defaultTransport *http.Transport
+	// defaultRoundTripper wraps the defaultTransport with arm balancer and otel propagation
+	defaultRoundTripper http.RoundTripper
+)
+
+// DefaultHTTPClient returns a shared http.Client whose transport leverages armbalancer
+// for client-side load balancing, so we can use HTTP/2 without getting throttled by ARM
+// at the instance level.
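+//
+// A typical use, mirroring DefaultArmOpts in this package, is to set it as the ARM
+// client transport:
+//
+//	opts := &arm.ClientOptions{}
+//	opts.Transport = DefaultHTTPClient()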
+func DefaultHTTPClient() *http.Client {
+	return defaultHTTPClient
+}
+
+func init() {
+	defaultTransport = &http.Transport{
+		Proxy: http.ProxyFromEnvironment,
+		DialContext: (&net.Dialer{
+			Timeout:   30 * time.Second,
+			KeepAlive: 30 * time.Second,
+		}).DialContext,
+		ForceAttemptHTTP2:     true,
+		MaxIdleConns:          100,
+		MaxIdleConnsPerHost:   100,
+		IdleConnTimeout:       90 * time.Second,
+		TLSHandshakeTimeout:   10 * time.Second,
+		ExpectContinueTimeout: 1 * time.Second,
+	}
+	// We call configureHttp2TransportPing() in the package init to ensure that our defaultTransport is always configured
+	// with the http2 additional settings that work around the issue described here:
+	// https://github.com/golang/go/issues/59690
+	// The related azure sdk issue is here:
+	// https://github.com/Azure/azure-sdk-for-go/issues/21346#issuecomment-1699665586
+	configureHttp2TransportPing(defaultTransport)
+	defaultRoundTripper = armbalancer.New(armbalancer.Options{
+		// PoolSize is the number of clientside http/2 persistent connections
+		// we want to have configured in our transport. Note that without clientside
+		// loadbalancing with arm, HTTP/2 will force persistent connections to stick
+		// to a single arm instance, which results in a substantial amount of throttling.
+		PoolSize:  100,
+		Transport: defaultTransport,
+	})
+
+	defaultHTTPClient = &http.Client{
+		Transport: otelhttp.NewTransport(
+			defaultRoundTripper,
+			otelhttp.WithPropagators(propagation.TraceContext{}),
+		),
+	}
+}
+
+// configureHttp2TransportPing ensures that our defaultTransport is configured
+// with the http2 additional settings that work around the issue described here:
+// https://github.com/golang/go/issues/59690
+// The related azure sdk issue is here:
+// https://github.com/Azure/azure-sdk-for-go/issues/21346#issuecomment-1699665586
+// This is called by the package init to ensure that our defaultTransport is always
+// configured; you should not call this anywhere else.
+func configureHttp2TransportPing(tr *http.Transport) {
+	// http2Transport holds a reference to the default transport and configures "h2" middlewares that
+	// will use the below settings, making the standard http.Transport behave correctly for dropped connections.
+	http2Transport, err := http2.ConfigureTransports(tr)
+	if err != nil {
+		// By initializing in init(), we know this is only called once;
+		// it errors if called twice for the same transport.
+		panic(err)
+	}
+	// We give the server 10s to respond to the ping. If no response is received,
+	// the transport will close the connection, so that the next request will open a
+	// new connection and not hit a context deadline exceeded error.
+	http2Transport.PingTimeout = 10 * time.Second
+	// If no frame is received for 30s, the transport will issue a ping health check to the server.
+	http2Transport.ReadIdleTimeout = 30 * time.Second
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go-extensions/pkg/middleware/poller.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go-extensions/pkg/middleware/poller.go
new file mode 100644
index 000000000000..0702fea911d7
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go-extensions/pkg/middleware/poller.go
@@ -0,0 +1,28 @@
+/*
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+	http://www.apache.org/licenses/LICENSE-2.0
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package middleware
+
+import (
+	"time"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+)
+
+// AggressivePollingOptions is a very aggressive set of poller options.
+func AggressivePollingOptions() *runtime.PollUntilDoneOptions {
+	return &runtime.PollUntilDoneOptions{
+		// Frequency is the time to wait between polling intervals in absence of a Retry-After header.
+		// The allowed minimum is one second.
+		Frequency: time.Second * 1,
+	}
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go-extensions/pkg/middleware/queryparam_policy.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go-extensions/pkg/middleware/queryparam_policy.go
new file mode 100644
index 000000000000..25c88252cd70
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go-extensions/pkg/middleware/queryparam_policy.go
@@ -0,0 +1,37 @@
+package middleware
+
+import (
+	"fmt"
+	"net/http"
+	"net/url"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+)
+
+// QueryParameterPolicy appends a query parameter to every request, or replaces
+// its existing value when Replace is set.
+type QueryParameterPolicy struct {
+	Name    string
+	Value   string
+	Replace bool
+}
+
+// Do implements the azcore/policy.Policy interface.
+func (p *QueryParameterPolicy) Do(req *policy.Request) (*http.Response, error) {
+	if !p.Replace {
+		// append behavior
+		if req.Raw().URL.RawQuery != "" {
+			req.Raw().URL.RawQuery += "&"
+		}
+
+		req.Raw().URL.RawQuery += url.QueryEscape(p.Name) + "=" + url.QueryEscape(p.Value)
+		return req.Next()
+	}
+
+	// replace behavior
+	originalQueryParams, err := url.ParseQuery(req.Raw().URL.RawQuery)
+	if err != nil {
+		return nil, fmt.Errorf("cannot replace url query parameter due to parsing err: %w", err)
+	}
+
+	originalQueryParams.Set(p.Name, p.Value)
+	req.Raw().URL.RawQuery = originalQueryParams.Encode()
+	return req.Next()
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/CHANGELOG.md b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/CHANGELOG.md
new file mode 100644
index 000000000000..7a0a524e3321
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/CHANGELOG.md
@@ -0,0 +1,745 @@
+# Release History
+
+## 1.9.2 (2024-02-06)
+
+### Bugs Fixed
+
+* `runtime.MarshalAsByteArray` and `runtime.MarshalAsJSON` will preserve the preexisting value of the `Content-Type` header.
+
+### Other Changes
+
+* Update to latest version of `internal`.
+
+## 1.9.1 (2023-12-11)
+
+### Bugs Fixed
+
+* The `retry-after-ms` and `x-ms-retry-after-ms` headers weren't being checked during retries.
+
+### Other Changes
+
+* Update dependencies.
+
+## 1.9.0 (2023-11-06)
+
+### Breaking Changes
+> These changes affect only code written against previous beta versions of `v1.7.0` and `v1.8.0`
+* The function `NewTokenCredential` has been removed from the `fake` package. Use a literal `&fake.TokenCredential{}` instead.
+* The field `TracingNamespace` in `runtime.PipelineOptions` has been replaced by `TracingOptions`.
+
+### Bugs Fixed
+
+* Fixed an issue that could cause some allowed HTTP header values to not show up in logs.
+* Include error text instead of error type in traces when the transport returns an error.
+* Fixed an issue that could cause an HTTP/2 request to hang when the TCP connection becomes unresponsive.
+* Block key and SAS authentication for non-TLS-protected endpoints.
+* Passing a `nil` credential value will no longer cause a panic. Instead, the authentication is skipped.
+* Calling `Error` on a zero-value `azcore.ResponseError` will no longer panic.
+* Fixed an issue in `fake.PagerResponder[T]` that would cause a trailing error to be omitted when iterating over pages.
+* Context values created by `azcore` will no longer flow across disjoint HTTP requests.
+
+### Other Changes
+
+* Skip generating trace info for no-op tracers.
+* The `clientName` parameter in client constructors has been renamed to `moduleName`.
+
+## 1.9.0-beta.1 (2023-10-05)
+
+### Other Changes
+
+* The beta features for tracing and fakes have been reinstated.
+
+## 1.8.0 (2023-10-05)
+
+### Features Added
+
+* This includes the following features from `v1.8.0-beta.N` releases.
+  * Claims and CAE for authentication.
+  * New `messaging` package.
+  * Various helpers in the `runtime` package.
+  * Deprecation of `runtime.With*` funcs and their replacements in the `policy` package.
+* Added types `KeyCredential` and `SASCredential` to the `azcore` package.
+  * Includes their respective constructor functions.
+* Added types `KeyCredentialPolicy` and `SASCredentialPolicy` to the `azcore/runtime` package.
+  * Includes their respective constructor functions and options types.
+
+### Breaking Changes
+> These changes affect only code written against beta versions of `v1.8.0`
+* The beta features for tracing and fakes have been omitted for this release.
+
+### Bugs Fixed
+
+* Fixed an issue that could cause some ARM RPs to not be automatically registered.
+* Block bearer token authentication for non-TLS-protected endpoints.
+
+### Other Changes
+
+* Updated dependencies.
+
+## 1.8.0-beta.3 (2023-09-07)
+
+### Features Added
+
+* Added function `FetcherForNextLink` and `FetcherForNextLinkOptions` to the `runtime` package to centralize creation of `Pager[T].Fetcher` from a next link URL.
+
+### Bugs Fixed
+
+* Suppress creating spans for nested SDK API calls. The HTTP span will be a child of the outer API span.
+
+### Other Changes
+
+* The following functions in the `runtime` package are now exposed from the `policy` package, and the `runtime` versions have been deprecated.
+  * `WithCaptureResponse`
+  * `WithHTTPHeader`
+  * `WithRetryOptions`
+
+## 1.7.2 (2023-09-06)
+
+### Bugs Fixed
+
+* Fix default HTTP transport to work in WASM modules.
+
+## 1.8.0-beta.2 (2023-08-14)
+
+### Features Added
+
+* Added function `SanitizePagerPollerPath` to the `server` package to centralize sanitization and formalize the contract.
+* Added `TokenRequestOptions.EnableCAE` to indicate whether to request a CAE token.
+
+### Breaking Changes
+
+> This change affects only code written against beta version `v1.8.0-beta.1`.
+* `messaging.CloudEvent` deserializes JSON objects as `[]byte`, instead of `json.RawMessage`. See the documentation for CloudEvent.Data for more information.
+
+> This change affects only code written against beta versions `v1.7.0-beta.2` and `v1.8.0-beta.1`.
+* Removed parameter from method `Span.End()` and its type `tracing.SpanEndOptions`. This API GA'ed in `v1.2.0` so we cannot change it.
+
+### Bugs Fixed
+
+* Propagate any query parameters when constructing a fake poller and/or injecting next links.
+
+## 1.7.1 (2023-08-14)
+
+### Bugs Fixed
+
+* Enable TLS renegotiation in the default transport policy.
+
+## 1.8.0-beta.1 (2023-07-12)
+
+### Features Added
+
+* `messaging/CloudEvent` allows you to serialize/deserialize CloudEvents, as described in the CloudEvents 1.0 specification: [link](https://github.com/cloudevents/spec)
+
+### Other Changes
+
+* The beta features for CAE, tracing, and fakes have been reinstated.
+
+## 1.7.0 (2023-07-12)
+
+### Features Added
+* Added method `WithClientName()` to type `azcore.Client` to support shallow cloning of a client with a new name used for tracing.
+
+### Breaking Changes
+> These changes affect only code written against beta versions v1.7.0-beta.1 or v1.7.0-beta.2
+* The beta features for CAE, tracing, and fakes have been omitted for this release.
+
+## 1.7.0-beta.2 (2023-06-06)
+
+### Breaking Changes
+> These changes affect only code written against beta version v1.7.0-beta.1
+* Method `SpanFromContext()` on type `tracing.Tracer` had the `bool` return value removed.
+  * This includes the field `SpanFromContext` in supporting type `tracing.TracerOptions`.
+* Method `AddError()` has been removed from type `tracing.Span`.
+* Method `Span.End()` now requires an argument of type `*tracing.SpanEndOptions`.
+
+## 1.6.1 (2023-06-06)
+
+### Bugs Fixed
+* Fixed an issue in `azcore.NewClient()` and `arm.NewClient()` that could cause an incorrect module name to be used in telemetry.
+
+### Other Changes
+* This version contains all bug fixes from `v1.7.0-beta.1`
+
+## 1.7.0-beta.1 (2023-05-24)
+
+### Features Added
+* Restored CAE support for ARM clients.
+* Added supporting features to enable distributed tracing.
+  * Added func `runtime.StartSpan()` for use by SDKs to start spans.
+  * Added method `WithContext()` to `runtime.Request` to support shallow cloning with a new context.
+  * Added field `TracingNamespace` to `runtime.PipelineOptions`.
+  * Added field `Tracer` to `runtime.NewPollerOptions` and `runtime.NewPollerFromResumeTokenOptions` types.
+  * Added field `SpanFromContext` to `tracing.TracerOptions`.
+  * Added methods `Enabled()`, `SetAttributes()`, and `SpanFromContext()` to `tracing.Tracer`.
+  * Added supporting pipeline policies to include HTTP spans when creating clients.
+* Added package `fake` to support generated fakes packages in SDKs.
+  * The package contains public surface area exposed by fake servers and supporting APIs intended only for use by the fake server implementations.
+  * Added an internal fake poller implementation.
+
+### Bugs Fixed
+* Retry policy always clones the underlying `*http.Request` before invoking the next policy.
+* Added some non-standard error codes to the list of error codes for unregistered resource providers.
+
+## 1.6.0 (2023-05-04)
+
+### Features Added
+* Added support for ARM cross-tenant authentication. Set the `AuxiliaryTenants` field of `arm.ClientOptions` to enable.
+* Added `TenantID` field to `policy.TokenRequestOptions`.
+
+## 1.5.0 (2023-04-06)
+
+### Features Added
+* Added `ShouldRetry` to `policy.RetryOptions` for finer-grained control over when to retry.
+
+### Breaking Changes
+> These changes affect only code written against a beta version such as v1.5.0-beta.1
+> These features will return in v1.6.0-beta.1.
+* Removed `TokenRequestOptions.Claims` and `.TenantID`
+* Removed ARM client support for CAE and cross-tenant auth.
+
+### Bugs Fixed
+* Added non-conformant LRO terminal states `Cancelled` and `Completed`.
+
+### Other Changes
+* Updated to latest `internal` module.
+ +## 1.5.0-beta.1 (2023-03-02) + +### Features Added +* This release includes the features added in v1.4.0-beta.1 + +## 1.4.0 (2023-03-02) +> This release doesn't include features added in v1.4.0-beta.1. They will return in v1.5.0-beta.1. + +### Features Added +* Add `Clone()` method for `arm/policy.ClientOptions`. + +### Bugs Fixed +* ARM's RP registration policy will no longer swallow unrecognized errors. +* Fixed an issue in `runtime.NewPollerFromResumeToken()` when resuming a `Poller` with a custom `PollingHandler`. +* Fixed wrong policy copy in `arm/runtime.NewPipeline()`. + +## 1.4.0-beta.1 (2023-02-02) + +### Features Added +* Added support for ARM cross-tenant authentication. Set the `AuxiliaryTenants` field of `arm.ClientOptions` to enable. +* Added `Claims` and `TenantID` fields to `policy.TokenRequestOptions`. +* ARM bearer token policy handles CAE challenges. + +## 1.3.1 (2023-02-02) + +### Other Changes +* Update dependencies to latest versions. + +## 1.3.0 (2023-01-06) + +### Features Added +* Added `BearerTokenOptions.AuthorizationHandler` to enable extending `runtime.BearerTokenPolicy` + with custom authorization logic +* Added `Client` types and matching constructors to the `azcore` and `arm` packages. These represent a basic client for HTTP and ARM respectively. + +### Other Changes +* Updated `internal` module to latest version. +* `policy/Request.SetBody()` allows replacing a request's body with an empty one + +## 1.2.0 (2022-11-04) + +### Features Added +* Added `ClientOptions.APIVersion` field, which overrides the default version a client + requests of the service, if the client supports this (all ARM clients do). +* Added package `tracing` that contains the building blocks for distributed tracing. +* Added field `TracingProvider` to type `policy.ClientOptions` that will be used to set the per-client tracing implementation. + +### Bugs Fixed +* Fixed an issue in `runtime.SetMultipartFormData` to properly handle slices of `io.ReadSeekCloser`. +* Fixed the MaxRetryDelay default to be 60s. +* Failure to poll the state of an LRO will now return an `*azcore.ResponseError` for poller types that require this behavior. +* Fixed a bug in `runtime.NewPipeline` that would cause pipeline-specified allowed headers and query parameters to be lost. + +### Other Changes +* Retain contents of read-only fields when sending requests. + +## 1.1.4 (2022-10-06) + +### Bugs Fixed +* Don't retry a request if the `Retry-After` delay is greater than the configured `RetryOptions.MaxRetryDelay`. +* `runtime.JoinPaths`: do not unconditionally add a forward slash before the query string + +### Other Changes +* Removed logging URL from retry policy as it's redundant. +* Retry policy logs when it exits due to a non-retriable status code. + +## 1.1.3 (2022-09-01) + +### Bugs Fixed +* Adjusted the initial retry delay to 800ms per the Azure SDK guidelines. + +## 1.1.2 (2022-08-09) + +### Other Changes +* Fixed various doc bugs. + +## 1.1.1 (2022-06-30) + +### Bugs Fixed +* Avoid polling when a RELO LRO synchronously terminates. + +## 1.1.0 (2022-06-03) + +### Other Changes +* The one-second floor for `Frequency` when calling `PollUntilDone()` has been removed when running tests. + +## 1.0.0 (2022-05-12) + +### Features Added +* Added interface `runtime.PollingHandler` to support custom poller implementations. + * Added field `PollingHandler` of this type to `runtime.NewPollerOptions[T]` and `runtime.NewPollerFromResumeTokenOptions[T]`. 
+ +### Breaking Changes +* Renamed `cloud.Configuration.LoginEndpoint` to `.ActiveDirectoryAuthorityHost` +* Renamed `cloud.AzurePublicCloud` to `cloud.AzurePublic` +* Removed `AuxiliaryTenants` field from `arm/ClientOptions` and `arm/policy/BearerTokenOptions` +* Removed `TokenRequestOptions.TenantID` +* `Poller[T].PollUntilDone()` now takes an `options *PollUntilDoneOptions` param instead of `freq time.Duration` +* Removed `arm/runtime.Poller[T]`, `arm/runtime.NewPoller[T]()` and `arm/runtime.NewPollerFromResumeToken[T]()` +* Removed `arm/runtime.FinalStateVia` and related `const` values +* Renamed `runtime.PageProcessor` to `runtime.PagingHandler` +* The `arm/runtime.ProviderRepsonse` and `arm/runtime.Provider` types are no longer exported. +* Renamed `NewRequestIdPolicy()` to `NewRequestIDPolicy()` +* `TokenCredential.GetToken` now returns `AccessToken` by value. + +### Bugs Fixed +* When per-try timeouts are enabled, only cancel the context after the body has been read and closed. +* The `Operation-Location` poller now properly handles `final-state-via` values. +* Improvements in `runtime.Poller[T]` + * `Poll()` shouldn't cache errors, allowing for additional retries when in a non-terminal state. + * `Result()` will cache the terminal result or error but not transient errors, allowing for additional retries. + +### Other Changes +* Updated to latest `internal` module and absorbed breaking changes. + * Use `temporal.Resource` and deleted copy. +* The internal poller implementation has been refactored. + * The implementation in `internal/pollers/poller.go` has been merged into `runtime/poller.go` with some slight modification. + * The internal poller types had their methods updated to conform to the `runtime.PollingHandler` interface. + * The creation of resume tokens has been refactored so that implementers of `runtime.PollingHandler` don't need to know about it. +* `NewPipeline()` places policies from `ClientOptions` after policies from `PipelineOptions` +* Default User-Agent headers no longer include `azcore` version information + +## 0.23.1 (2022-04-14) + +### Bugs Fixed +* Include XML header when marshalling XML content. +* Handle XML namespaces when searching for error code. +* Handle `odata.error` when searching for error code. + +## 0.23.0 (2022-04-04) + +### Features Added +* Added `runtime.Pager[T any]` and `runtime.Poller[T any]` supporting types for central, generic, implementations. +* Added `cloud` package with a new API for cloud configuration +* Added `FinalStateVia` field to `runtime.NewPollerOptions[T any]` type. + +### Breaking Changes +* Removed the `Poller` type-alias to the internal poller implementation. +* Added `Ptr[T any]` and `SliceOfPtrs[T any]` in the `to` package and removed all non-generic implementations. +* `NullValue` and `IsNullValue` now take a generic type parameter instead of an interface func parameter. +* Replaced `arm.Endpoint` with `cloud` API + * Removed the `endpoint` parameter from `NewRPRegistrationPolicy()` + * `arm/runtime.NewPipeline()` and `.NewRPRegistrationPolicy()` now return an `error` +* Refactored `NewPoller` and `NewPollerFromResumeToken` funcs in `arm/runtime` and `runtime` packages. + * Removed the `pollerID` parameter as it's no longer required. + * Created optional parameter structs and moved optional parameters into them. +* Changed `FinalStateVia` field to a `const` type. + +### Other Changes +* Converted expiring resource and dependent types to use generics. 
+
+## 0.22.0 (2022-03-03)
+
+### Features Added
+* Added header `WWW-Authenticate` to the default allow-list of headers for logging.
+* Added a pipeline policy that enables the retrieval of HTTP responses from API calls.
+  * Added `runtime.WithCaptureResponse` to enable the policy at the API level (off by default).
+
+### Breaking Changes
+* Moved `WithHTTPHeader` and `WithRetryOptions` from the `policy` package to the `runtime` package.
+
+## 0.21.1 (2022-02-04)
+
+### Bugs Fixed
+* Restore response body after reading in `Poller.FinalResponse()`. (#16911)
+* Fixed bug in `NullValue` that could lead to incorrect comparisons for empty maps/slices (#16969)
+
+### Other Changes
+* `BearerTokenPolicy` is more resilient to transient authentication failures. (#16789)
+
+## 0.21.0 (2022-01-11)
+
+### Features Added
+* Added `AllowedHeaders` and `AllowedQueryParams` to `policy.LogOptions` to control which headers and query parameters are written to the logger.
+* Added `azcore.ResponseError` type which is returned from APIs when a non-success HTTP status code is received.
+
+### Breaking Changes
+* Moved `[]policy.Policy` parameters of `arm/runtime.NewPipeline` and `runtime.NewPipeline` into a new struct, `runtime.PipelineOptions`
+* Renamed `arm/ClientOptions.Host` to `.Endpoint`
+* Moved `Request.SkipBodyDownload` method to function `runtime.SkipBodyDownload`
+* Removed `azcore.HTTPResponse` interface type
+* `arm.NewPoller()` and `runtime.NewPoller()` no longer require an `eu` parameter
+* `runtime.NewResponseError()` no longer requires an `error` parameter
+
+## 0.20.0 (2021-10-22)
+
+### Breaking Changes
+* Removed `arm.Connection`
+* Removed `azcore.Credential` and `.NewAnonymousCredential()`
+  * `NewRPRegistrationPolicy` now requires an `azcore.TokenCredential`
+* `runtime.NewPipeline` has a new signature that simplifies implementing custom authentication
+* `arm/runtime.RegistrationOptions` embeds `policy.ClientOptions`
+* Contents in the `log` package have been slightly renamed.
+* Removed `AuthenticationOptions` in favor of `policy.BearerTokenOptions`
+* Changed parameters for `NewBearerTokenPolicy()`
+* Moved policy config options out of `arm/runtime` and into `arm/policy`
+
+### Features Added
+* Updated documentation
+* Added string typedef `arm.Endpoint` to provide a hint toward expected ARM client endpoints
+* `azcore.ClientOptions` contains common pipeline configuration settings
+* Added support for multi-tenant authorization in `arm/runtime`
+* Require one second minimum when calling `PollUntilDone()`
+
+### Bug Fixes
+* Fixed a potential panic when creating the default Transporter.
+* Close LRO initial response body when creating a poller.
+* Fixed a panic when recursively cloning structs that contain time.Time.
+
+## 0.19.0 (2021-08-25)
+
+### Breaking Changes
+* Split content out of `azcore` into various packages. The intent is to separate content based on its usage (common, uncommon, SDK authors).
+  * `azcore` has all core functionality.
+  * `log` contains facilities for configuring in-box logging.
+  * `policy` is used for configuring pipeline options and creating custom pipeline policies.
+  * `runtime` contains various helpers used by SDK authors and generated content.
+  * `streaming` has helpers for streaming IO operations.
+* `NewTelemetryPolicy()` now requires module and version parameters and the `Value` option has been removed.
+  * As a result, the `Request.Telemetry()` method has been removed.
+* The telemetry policy now includes the SDK prefix `azsdk-go-` so callers no longer need to provide it. +* The `*http.Request` in `runtime.Request` is no longer anonymously embedded. Use the `Raw()` method to access it. +* The `UserAgent` and `Version` constants have been made internal, `Module` and `Version` respectively. + +### Bug Fixes +* Fixed an issue in the retry policy where the request body could be overwritten after a rewind. + +### Other Changes +* Moved modules `armcore` and `to` content into `arm` and `to` packages respectively. + * The `Pipeline()` method on `armcore.Connection` has been replaced by `NewPipeline()` in `arm.Connection`. It takes module and version parameters used by the telemetry policy. +* Poller logic has been consolidated across ARM and core implementations. + * This required some changes to the internal interfaces for core pollers. +* The core poller types have been improved, including more logging and test coverage. + +## 0.18.1 (2021-08-20) + +### Features Added +* Adds an `ETag` type for comparing etags and handling etags on requests +* Simplifies the `requestBodyProgess` and `responseBodyProgress` into a single `progress` object + +### Bugs Fixed +* `JoinPaths` will preserve query parameters encoded in the `root` url. + +### Other Changes +* Bumps dependency on `internal` module to the latest version (v0.7.0) + +## 0.18.0 (2021-07-29) +### Features Added +* Replaces methods from Logger type with two package methods for interacting with the logging functionality. +* `azcore.SetClassifications` replaces `azcore.Logger().SetClassifications` +* `azcore.SetListener` replaces `azcore.Logger().SetListener` + +### Breaking Changes +* Removes `Logger` type from `azcore` + + +## 0.17.0 (2021-07-27) +### Features Added +* Adding TenantID to TokenRequestOptions (https://github.com/Azure/azure-sdk-for-go/pull/14879) +* Adding AuxiliaryTenants to AuthenticationOptions (https://github.com/Azure/azure-sdk-for-go/pull/15123) + +### Breaking Changes +* Rename `AnonymousCredential` to `NewAnonymousCredential` (https://github.com/Azure/azure-sdk-for-go/pull/15104) +* rename `AuthenticationPolicyOptions` to `AuthenticationOptions` (https://github.com/Azure/azure-sdk-for-go/pull/15103) +* Make Header constants private (https://github.com/Azure/azure-sdk-for-go/pull/15038) + + +## 0.16.2 (2021-05-26) +### Features Added +* Improved support for byte arrays [#14715](https://github.com/Azure/azure-sdk-for-go/pull/14715) + + +## 0.16.1 (2021-05-19) +### Features Added +* Add license.txt to azcore module [#14682](https://github.com/Azure/azure-sdk-for-go/pull/14682) + + +## 0.16.0 (2021-05-07) +### Features Added +* Remove extra `*` in UnmarshalAsByteArray() [#14642](https://github.com/Azure/azure-sdk-for-go/pull/14642) + + +## 0.15.1 (2021-05-06) +### Features Added +* Cache the original request body on Request [#14634](https://github.com/Azure/azure-sdk-for-go/pull/14634) + + +## 0.15.0 (2021-05-05) +### Features Added +* Add support for null map and slice +* Export `Response.Payload` method + +### Breaking Changes +* remove `Response.UnmarshalError` as it's no longer required + + +## 0.14.5 (2021-04-23) +### Features Added +* Add `UnmarshalError()` on `azcore.Response` + + +## 0.14.4 (2021-04-22) +### Features Added +* Support for basic LRO polling +* Added type `LROPoller` and supporting types for basic polling on long running operations. +* rename poller param and added doc comment + +### Bugs Fixed +* Fixed content type detection bug in logging. 
+ + +## 0.14.3 (2021-03-29) +### Features Added +* Add support for multi-part form data +* Added method `WriteMultipartFormData()` to Request. + + +## 0.14.2 (2021-03-17) +### Features Added +* Add support for encoding JSON null values +* Adds `NullValue()` and `IsNullValue()` functions for setting and detecting sentinel values used for encoding a JSON null. +* Documentation fixes + +### Bugs Fixed +* Fixed improper error wrapping + + +## 0.14.1 (2021-02-08) +### Features Added +* Add `Pager` and `Poller` interfaces to azcore + + +## 0.14.0 (2021-01-12) +### Features Added +* Accept zero-value options for default values +* Specify zero-value options structs to accept default values. +* Remove `DefaultXxxOptions()` methods. +* Do not silently change TryTimeout on negative values +* make per-try timeout opt-in + + +## 0.13.4 (2020-11-20) +### Features Added +* Include telemetry string in User Agent + + +## 0.13.3 (2020-11-20) +### Features Added +* Updating response body handling on `azcore.Response` + + +## 0.13.2 (2020-11-13) +### Features Added +* Remove implementation of stateless policies as first-class functions. + + +## 0.13.1 (2020-11-05) +### Features Added +* Add `Telemetry()` method to `azcore.Request()` + + +## 0.13.0 (2020-10-14) +### Features Added +* Rename `log` to `logger` to avoid name collision with the log package. +* Documentation improvements +* Simplified `DefaultHTTPClientTransport()` implementation + + +## 0.12.1 (2020-10-13) +### Features Added +* Update `internal` module dependence to `v0.5.0` + + +## 0.12.0 (2020-10-08) +### Features Added +* Removed storage specific content +* Removed internal content to prevent API clutter +* Refactored various policy options to conform with our options pattern + + +## 0.11.0 (2020-09-22) +### Features Added + +* Removed `LogError` and `LogSlowResponse`. +* Renamed `options` in `RequestLogOptions`. +* Updated `NewRequestLogPolicy()` to follow standard pattern for options. +* Refactored `requestLogPolicy.Do()` per above changes. +* Cleaned up/added logging in retry policy. +* Export `NewResponseError()` +* Fix `RequestLogOptions` comment + + +## 0.10.1 (2020-09-17) +### Features Added +* Add default console logger +* Default console logger writes to stderr. To enable it, set env var `AZURE_SDK_GO_LOGGING` to the value 'all'. +* Added `Logger.Writef()` to reduce the need for `ShouldLog()` checks. +* Add `LogLongRunningOperation` + + +## 0.10.0 (2020-09-10) +### Features Added +* The `request` and `transport` interfaces have been refactored to align with the patterns in the standard library. +* `NewRequest()` now uses `http.NewRequestWithContext()` and performs additional validation, it also requires a context parameter. +* The `Policy` and `Transport` interfaces have had their context parameter removed as the context is associated with the underlying `http.Request`. +* `Pipeline.Do()` will validate the HTTP request before sending it through the pipeline, avoiding retries on a malformed request. +* The `Retrier` interface has been replaced with the `NonRetriableError` interface, and the retry policy updated to test for this. +* `Request.SetBody()` now requires a content type parameter for setting the request's MIME type. +* moved path concatenation into `JoinPaths()` func + + +## 0.9.6 (2020-08-18) +### Features Added +* Improvements to body download policy +* Always download the response body for error responses, i.e. HTTP status codes >= 400. 
+* Simplify variable declarations
+
+
+## 0.9.5 (2020-08-11)
+### Features Added
+* Set the Content-Length header in `Request.SetBody`
+
+
+## 0.9.4 (2020-08-03)
+### Features Added
+* Fix cancellation of per-try timeout
+* The per-try timeout is used to ensure that an HTTP operation doesn't take too long, e.g. that a GET on some URL doesn't take an inordinate amount of time.
+* The per-try timeout should be cancelled once the HTTP request returns, not when the response has been read to completion.
+* Do not drain response body if there are no more retries
+* Do not retry non-idempotent operations when body download fails
+
+
+## 0.9.3 (2020-07-28)
+### Features Added
+* Add support for custom HTTP request headers
+* Inserts an internal policy into the pipeline that can extract HTTP header values from the caller's context, adding them to the request.
+* Use `azcore.WithHTTPHeader` to add HTTP headers to a context.
+* Remove method specific to Go 1.14
+
+
+## 0.9.2 (2020-07-28)
+### Features Added
+* Omit read-only content from request payloads
+* If any field in a payload's object graph contains `azure:"ro"`, make a clone of the object graph, omitting all fields with this annotation.
+* Verify no fields were dropped
+* Handle embedded struct types
+* Added test for cloning by value
+* Add messages to failures
+
+
+## 0.9.1 (2020-07-22)
+### Features Added
+* Updated dependency on internal module to fix race condition.
+
+
+## 0.9.0 (2020-07-09)
+### Features Added
+* Add `HTTPResponse` interface to be used by callers to access the raw HTTP response from an error in the event of an API call failure.
+* Updated `sdk/internal` dependency to latest version.
+* Rename package alias
+
+
+## 0.8.2 (2020-06-29)
+### Features Added
+* Added missing documentation comments
+
+### Bugs Fixed
+* Fixed a bug in body download policy.
+
+
+## 0.8.1 (2020-06-26)
+### Features Added
+* Miscellaneous clean-up reported by linters
+
+
+## 0.8.0 (2020-06-01)
+### Features Added
+* Differentiate between standard and URL encoding.
+
+
+## 0.7.1 (2020-05-27)
+### Features Added
+* Add support for base64 encoding and decoding of payloads.
+
+
+## 0.7.0 (2020-05-12)
+### Features Added
+* Change `RetryAfter()` to a function.
+
+
+## 0.6.0 (2020-04-29)
+### Features Added
+* Updating `RetryAfter` to only return the duration in the RetryAfter header
+
+
+## 0.5.0 (2020-03-23)
+### Features Added
+* Export `TransportFunc`
+
+### Breaking Changes
+* Removed `IterationDone`
+
+
+## 0.4.1 (2020-02-25)
+### Features Added
+* Ensure per-try timeout is properly cancelled
+* Explicitly cancel the per-try timeout when the response body has been read/closed by the body download policy.
+* When the response body is returned to the caller for reading/closing, wrap it in a `responseBodyReader` that will cancel the timeout when the body is closed.
+* `Logger.Should()` will return false if no listener is set.
+
+
+## 0.4.0 (2020-02-18)
+### Features Added
+* Enable custom `RetryOptions` to be specified per API call
+* Added `WithRetryOptions()` that adds a custom `RetryOptions` to the provided context, allowing custom settings per API call.
+* Remove 429 from the list of default HTTP status codes for retry.
+* Change StatusCodesForRetry to a slice so consumers can append to it.
+* Added support for retry-after in HTTP-date format.
+* Cleaned up some comments specific to storage.
+* Remove `Request.SetQueryParam()`
+* Renamed `MaxTries` to `MaxRetries`
+
+## 0.3.0 (2020-01-16)
+### Features Added
+* Added `DefaultRetryOptions` to create initialized default options.
+
+### Breaking Changes
+* Removed `Response.CheckStatusCode()`
+
+
+## 0.2.0 (2020-01-15)
+### Features Added
+* Add support for marshalling and unmarshalling JSON
+* Removed `Response.Payload` field
+* Exit early when unmarshalling if there is no payload
+
+
+## 0.1.0 (2020-01-10)
+### Features Added
+* Initial release
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/LICENSE.txt b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/LICENSE.txt
new file mode 100644
index 000000000000..48ea6616b5b8
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/LICENSE.txt
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) Microsoft Corporation.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/README.md b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/README.md
new file mode 100644
index 000000000000..35a74e18d09a
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/README.md
@@ -0,0 +1,39 @@
+# Azure Core Client Module for Go
+
+[![PkgGoDev](https://pkg.go.dev/badge/github.com/Azure/azure-sdk-for-go/sdk/azcore)](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore)
+[![Build Status](https://dev.azure.com/azure-sdk/public/_apis/build/status/go/go%20-%20azcore%20-%20ci?branchName=main)](https://dev.azure.com/azure-sdk/public/_build/latest?definitionId=1843&branchName=main)
+[![Code Coverage](https://img.shields.io/azure-devops/coverage/azure-sdk/public/1843/main)](https://img.shields.io/azure-devops/coverage/azure-sdk/public/1843/main)
+
+The `azcore` module provides a set of common interfaces and types for Go SDK client modules.
+These modules follow the [Azure SDK Design Guidelines for Go](https://azure.github.io/azure-sdk/golang_introduction.html).
+
+## Getting started
+
+This project uses [Go modules](https://github.com/golang/go/wiki/Modules) for versioning and dependency management.
+
+Typically, you will not need to explicitly install `azcore` as it will be installed as a client module dependency.
+To add the latest version to your `go.mod` file, execute the following command.
+
+```bash
+go get github.com/Azure/azure-sdk-for-go/sdk/azcore
+```
+
+General documentation and examples can be found on [pkg.go.dev](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore).
+
+## Contributing
+This project welcomes contributions and suggestions. Most contributions require
+you to agree to a Contributor License Agreement (CLA) declaring that you have
+the right to, and actually do, grant us the rights to use your contribution.
+For details, visit [https://cla.microsoft.com](https://cla.microsoft.com).
+
+When you submit a pull request, a CLA-bot will automatically determine whether
+you need to provide a CLA and decorate the PR appropriately (e.g., label,
+comment). Simply follow the instructions provided by the bot. You will only
+need to do this once across all repos using our CLA.
+
+This project has adopted the
+[Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
+For more information, see the
+[Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
+or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any
+additional questions or comments.
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/client.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/client.go
new file mode 100644
index 000000000000..c373cc43fd50
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/client.go
@@ -0,0 +1,72 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package arm
+
+import (
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	armpolicy "github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/policy"
+	armruntime "github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing"
+)
+
+// ClientOptions contains configuration settings for a client's pipeline.
+type ClientOptions = armpolicy.ClientOptions
+
+// Client is an HTTP client for use with ARM endpoints. It consists of an endpoint, pipeline, and tracing provider.
+type Client struct {
+	ep string
+	pl runtime.Pipeline
+	tr tracing.Tracer
+}
+
+// NewClient creates a new Client instance with the provided values.
+// This client is intended to be used with Azure Resource Manager endpoints.
+// - moduleName - the fully qualified name of the module where the client is defined; used by the telemetry policy and tracing provider.
+// - moduleVersion - the semantic version of the module; used by the telemetry policy and tracing provider.
+// - cred - the TokenCredential used to authenticate the request +// - options - optional client configurations; pass nil to accept the default values +func NewClient(moduleName, moduleVersion string, cred azcore.TokenCredential, options *ClientOptions) (*Client, error) { + if options == nil { + options = &ClientOptions{} + } + + if !options.Telemetry.Disabled { + if err := shared.ValidateModVer(moduleVersion); err != nil { + return nil, err + } + } + + ep := cloud.AzurePublic.Services[cloud.ResourceManager].Endpoint + if c, ok := options.Cloud.Services[cloud.ResourceManager]; ok { + ep = c.Endpoint + } + pl, err := armruntime.NewPipeline(moduleName, moduleVersion, cred, runtime.PipelineOptions{}, options) + if err != nil { + return nil, err + } + + tr := options.TracingProvider.NewTracer(moduleName, moduleVersion) + return &Client{ep: ep, pl: pl, tr: tr}, nil +} + +// Endpoint returns the service's base URL for this client. +func (c *Client) Endpoint() string { + return c.ep +} + +// Pipeline returns the pipeline for this client. +func (c *Client) Pipeline() runtime.Pipeline { + return c.pl +} + +// Tracer returns the tracer for this client. +func (c *Client) Tracer() tracing.Tracer { + return c.tr +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/doc.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/doc.go new file mode 100644 index 000000000000..1bdd16a3d034 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/doc.go @@ -0,0 +1,9 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright 2017 Microsoft Corporation. All rights reserved. +// Use of this source code is governed by an MIT +// license that can be found in the LICENSE file. + +// Package arm contains functionality specific to Azure Resource Manager clients. +package arm diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/internal/resource/resource_identifier.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/internal/resource/resource_identifier.go new file mode 100644 index 000000000000..187fe82b97c4 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/internal/resource/resource_identifier.go @@ -0,0 +1,224 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package resource + +import ( + "fmt" + "strings" +) + +const ( + providersKey = "providers" + subscriptionsKey = "subscriptions" + resourceGroupsLowerKey = "resourcegroups" + locationsKey = "locations" + builtInResourceNamespace = "Microsoft.Resources" +) + +// RootResourceID defines the tenant as the root parent of all other ResourceID. +var RootResourceID = &ResourceID{ + Parent: nil, + ResourceType: TenantResourceType, + Name: "", +} + +// ResourceID represents a resource ID such as `/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRg`. +// Don't create this type directly, use ParseResourceID instead. +type ResourceID struct { + // Parent is the parent ResourceID of this instance. + // Can be nil if there is no parent. + Parent *ResourceID + + // SubscriptionID is the subscription ID in this resource ID. + // The value can be empty if the resource ID does not contain a subscription ID. + SubscriptionID string + + // ResourceGroupName is the resource group name in this resource ID. 
+ // The value can be empty if the resource ID does not contain a resource group name. + ResourceGroupName string + + // Provider represents the provider name in this resource ID. + // This is only valid when the resource ID represents a resource provider. + // Example: `/subscriptions/00000000-0000-0000-0000-000000000000/providers/Microsoft.Insights` + Provider string + + // Location is the location in this resource ID. + // The value can be empty if the resource ID does not contain a location name. + Location string + + // ResourceType represents the type of this resource ID. + ResourceType ResourceType + + // Name is the resource name of this resource ID. + Name string + + isChild bool + stringValue string +} + +// ParseResourceID parses a string to an instance of ResourceID +func ParseResourceID(id string) (*ResourceID, error) { + if len(id) == 0 { + return nil, fmt.Errorf("invalid resource ID: id cannot be empty") + } + + if !strings.HasPrefix(id, "/") { + return nil, fmt.Errorf("invalid resource ID: resource id '%s' must start with '/'", id) + } + + parts := splitStringAndOmitEmpty(id, "/") + + if len(parts) < 2 { + return nil, fmt.Errorf("invalid resource ID: %s", id) + } + + if !strings.EqualFold(parts[0], subscriptionsKey) && !strings.EqualFold(parts[0], providersKey) { + return nil, fmt.Errorf("invalid resource ID: %s", id) + } + + return appendNext(RootResourceID, parts, id) +} + +// String returns the string of the ResourceID +func (id *ResourceID) String() string { + if len(id.stringValue) > 0 { + return id.stringValue + } + + if id.Parent == nil { + return "" + } + + builder := strings.Builder{} + builder.WriteString(id.Parent.String()) + + if id.isChild { + builder.WriteString(fmt.Sprintf("/%s", id.ResourceType.lastType())) + if len(id.Name) > 0 { + builder.WriteString(fmt.Sprintf("/%s", id.Name)) + } + } else { + builder.WriteString(fmt.Sprintf("/providers/%s/%s/%s", id.ResourceType.Namespace, id.ResourceType.Type, id.Name)) + } + + id.stringValue = builder.String() + + return id.stringValue +} + +func newResourceID(parent *ResourceID, resourceTypeName string, resourceName string) *ResourceID { + id := &ResourceID{} + id.init(parent, chooseResourceType(resourceTypeName, parent), resourceName, true) + return id +} + +func newResourceIDWithResourceType(parent *ResourceID, resourceType ResourceType, resourceName string) *ResourceID { + id := &ResourceID{} + id.init(parent, resourceType, resourceName, true) + return id +} + +func newResourceIDWithProvider(parent *ResourceID, providerNamespace, resourceTypeName, resourceName string) *ResourceID { + id := &ResourceID{} + id.init(parent, NewResourceType(providerNamespace, resourceTypeName), resourceName, false) + return id +} + +func chooseResourceType(resourceTypeName string, parent *ResourceID) ResourceType { + if strings.EqualFold(resourceTypeName, resourceGroupsLowerKey) { + return ResourceGroupResourceType + } else if strings.EqualFold(resourceTypeName, subscriptionsKey) && parent != nil && parent.ResourceType.String() == TenantResourceType.String() { + return SubscriptionResourceType + } + + return parent.ResourceType.AppendChild(resourceTypeName) +} + +func (id *ResourceID) init(parent *ResourceID, resourceType ResourceType, name string, isChild bool) { + if parent != nil { + id.Provider = parent.Provider + id.SubscriptionID = parent.SubscriptionID + id.ResourceGroupName = parent.ResourceGroupName + id.Location = parent.Location + } + + if resourceType.String() == SubscriptionResourceType.String() { + id.SubscriptionID = 
name + } + + if resourceType.lastType() == locationsKey { + id.Location = name + } + + if resourceType.String() == ResourceGroupResourceType.String() { + id.ResourceGroupName = name + } + + if resourceType.String() == ProviderResourceType.String() { + id.Provider = name + } + + if parent == nil { + id.Parent = RootResourceID + } else { + id.Parent = parent + } + id.isChild = isChild + id.ResourceType = resourceType + id.Name = name +} + +func appendNext(parent *ResourceID, parts []string, id string) (*ResourceID, error) { + if len(parts) == 0 { + return parent, nil + } + + if len(parts) == 1 { + // subscriptions and resourceGroups are not valid ids without their names + if strings.EqualFold(parts[0], subscriptionsKey) || strings.EqualFold(parts[0], resourceGroupsLowerKey) { + return nil, fmt.Errorf("invalid resource ID: %s", id) + } + + // resourceGroup must contain either child or provider resource type + if parent.ResourceType.String() == ResourceGroupResourceType.String() { + return nil, fmt.Errorf("invalid resource ID: %s", id) + } + + return newResourceID(parent, parts[0], ""), nil + } + + if strings.EqualFold(parts[0], providersKey) && (len(parts) == 2 || strings.EqualFold(parts[2], providersKey)) { + //provider resource can only be on a tenant or a subscription parent + if parent.ResourceType.String() != SubscriptionResourceType.String() && parent.ResourceType.String() != TenantResourceType.String() { + return nil, fmt.Errorf("invalid resource ID: %s", id) + } + + return appendNext(newResourceIDWithResourceType(parent, ProviderResourceType, parts[1]), parts[2:], id) + } + + if len(parts) > 3 && strings.EqualFold(parts[0], providersKey) { + return appendNext(newResourceIDWithProvider(parent, parts[1], parts[2], parts[3]), parts[4:], id) + } + + if len(parts) > 1 && !strings.EqualFold(parts[0], providersKey) { + return appendNext(newResourceID(parent, parts[0], parts[1]), parts[2:], id) + } + + return nil, fmt.Errorf("invalid resource ID: %s", id) +} + +func splitStringAndOmitEmpty(v, sep string) []string { + r := make([]string, 0) + for _, s := range strings.Split(v, sep) { + if len(s) == 0 { + continue + } + r = append(r, s) + } + + return r +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/internal/resource/resource_type.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/internal/resource/resource_type.go new file mode 100644 index 000000000000..ca03ac9713d5 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/internal/resource/resource_type.go @@ -0,0 +1,114 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package resource + +import ( + "fmt" + "strings" +) + +// SubscriptionResourceType is the ResourceType of a subscription +var SubscriptionResourceType = NewResourceType(builtInResourceNamespace, "subscriptions") + +// ResourceGroupResourceType is the ResourceType of a resource group +var ResourceGroupResourceType = NewResourceType(builtInResourceNamespace, "resourceGroups") + +// TenantResourceType is the ResourceType of a tenant +var TenantResourceType = NewResourceType(builtInResourceNamespace, "tenants") + +// ProviderResourceType is the ResourceType of a provider +var ProviderResourceType = NewResourceType(builtInResourceNamespace, "providers") + +// ResourceType represents an Azure resource type, e.g. "Microsoft.Network/virtualNetworks/subnets". 
+// Don't create this type directly, use ParseResourceType or NewResourceType instead.
+type ResourceType struct {
+	// Namespace is the namespace of the resource type.
+	// e.g. "Microsoft.Network" in resource type "Microsoft.Network/virtualNetworks/subnets"
+	Namespace string
+
+	// Type is the full type name of the resource type.
+	// e.g. "virtualNetworks/subnets" in resource type "Microsoft.Network/virtualNetworks/subnets"
+	Type string
+
+	// Types is the slice of all the sub-types of this resource type.
+	// e.g. ["virtualNetworks", "subnets"] in resource type "Microsoft.Network/virtualNetworks/subnets"
+	Types []string
+
+	stringValue string
+}
+
+// String returns the string representation of the ResourceType.
+func (t ResourceType) String() string {
+	return t.stringValue
+}
+
+// IsParentOf returns true when the receiver is the parent resource type of the child.
+func (t ResourceType) IsParentOf(child ResourceType) bool {
+	if !strings.EqualFold(t.Namespace, child.Namespace) {
+		return false
+	}
+	if len(t.Types) >= len(child.Types) {
+		return false
+	}
+	for i := range t.Types {
+		if !strings.EqualFold(t.Types[i], child.Types[i]) {
+			return false
+		}
+	}
+
+	return true
+}
+
+// AppendChild creates an instance of ResourceType using the receiver as the parent with childType appended to it.
+func (t ResourceType) AppendChild(childType string) ResourceType {
+	return NewResourceType(t.Namespace, fmt.Sprintf("%s/%s", t.Type, childType))
+}
+
+// NewResourceType creates an instance of ResourceType using a provider namespace
+// such as "Microsoft.Network" and type such as "virtualNetworks/subnets".
+func NewResourceType(providerNamespace, typeName string) ResourceType {
+	return ResourceType{
+		Namespace:   providerNamespace,
+		Type:        typeName,
+		Types:       splitStringAndOmitEmpty(typeName, "/"),
+		stringValue: fmt.Sprintf("%s/%s", providerNamespace, typeName),
+	}
+}
+
+// ParseResourceType parses the ResourceType from a resource type string (e.g. Microsoft.Network/virtualNetworks/subnets)
+// or a resource identifier string.
+// e.g. /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRg/providers/Microsoft.Network/virtualNetworks/vnet/subnets/mySubnet)
+func ParseResourceType(resourceIDOrType string) (ResourceType, error) {
+	// split the path into segments
+	parts := splitStringAndOmitEmpty(resourceIDOrType, "/")
+
+	// There must be at least a namespace and type name
+	if len(parts) < 1 {
+		return ResourceType{}, fmt.Errorf("invalid resource ID or type: %s", resourceIDOrType)
+	}
+
+	// if the type is just subscriptions, it is a built-in type in the Microsoft.Resources namespace
+	if len(parts) == 1 {
+		// Simple resource type
+		return NewResourceType(builtInResourceNamespace, parts[0]), nil
+	} else if strings.Contains(parts[0], ".") {
+		// Handle resource types (Microsoft.Compute/virtualMachines, Microsoft.Network/virtualNetworks/subnets)
+		// it is a full type name
+		return NewResourceType(parts[0], strings.Join(parts[1:], "/")), nil
+	} else {
+		// Check if ResourceID
+		id, err := ParseResourceID(resourceIDOrType)
+		if err != nil {
+			return ResourceType{}, err
+		}
+		return NewResourceType(id.ResourceType.Namespace, id.ResourceType.Type), nil
+	}
+}
+
+func (t ResourceType) lastType() string {
+	return t.Types[len(t.Types)-1]
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/policy/policy.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/policy/policy.go
new file mode 100644
index 000000000000..83cf91e3ecb5
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/policy/policy.go
@@ -0,0 +1,98 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package policy
+
+import (
+	"time"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+)
+
+// BearerTokenOptions configures the bearer token policy's behavior.
+type BearerTokenOptions struct {
+	// AuxiliaryTenants are additional tenant IDs for authenticating cross-tenant requests.
+	// The policy will add a token from each of these tenants to every request. The
+	// authenticating user or service principal must be a guest in these tenants, and the
+	// policy's credential must support multitenant authentication.
+	AuxiliaryTenants []string
+
+	// Scopes contains the list of permission scopes required for the token.
+	Scopes []string
+}
+
+// RegistrationOptions configures the registration policy's behavior.
+// All zero-value fields will be initialized with their default values.
+type RegistrationOptions struct {
+	policy.ClientOptions
+
+	// MaxAttempts is the total number of times to attempt automatic registration
+	// in the event that an attempt fails.
+	// The default value is 3.
+	// Set to a value less than zero to disable the policy.
+	MaxAttempts int
+
+	// PollingDelay is the amount of time to sleep between polling intervals.
+	// The default value is 15 seconds.
+	// A value less than zero means no delay between polling intervals (not recommended).
+	PollingDelay time.Duration
+
+	// PollingDuration is the amount of time to wait before abandoning polling.
+	// The default value is 5 minutes.
+	// NOTE: Setting this to a small value might cause the policy to prematurely fail.
+	PollingDuration time.Duration
+}
+
+// ClientOptions contains configuration settings for a client's pipeline.
+type ClientOptions struct {
+	policy.ClientOptions
+
+	// AuxiliaryTenants are additional tenant IDs for authenticating cross-tenant requests.
+	// The client will add a token from each of these tenants to every request. The
+	// authenticating user or service principal must be a guest in these tenants, and the
+	// client's credential must support multitenant authentication.
+	AuxiliaryTenants []string
+
+	// DisableRPRegistration disables the auto-RP registration policy. Defaults to false.
+	DisableRPRegistration bool
+}
+
+// Clone returns a deep copy of the current options.
+func (o *ClientOptions) Clone() *ClientOptions {
+	if o == nil {
+		return nil
+	}
+	copiedOptions := *o
+	copiedOptions.Cloud.Services = copyMap(copiedOptions.Cloud.Services)
+	copiedOptions.Logging.AllowedHeaders = copyArray(copiedOptions.Logging.AllowedHeaders)
+	copiedOptions.Logging.AllowedQueryParams = copyArray(copiedOptions.Logging.AllowedQueryParams)
+	copiedOptions.Retry.StatusCodes = copyArray(copiedOptions.Retry.StatusCodes)
+	copiedOptions.PerRetryPolicies = copyArray(copiedOptions.PerRetryPolicies)
+	copiedOptions.PerCallPolicies = copyArray(copiedOptions.PerCallPolicies)
+	return &copiedOptions
+}
+
+// copyMap returns a new map with all the key/value pairs in the src map.
+func copyMap[K comparable, V any](src map[K]V) map[K]V {
+	if src == nil {
+		return nil
+	}
+	copiedMap := make(map[K]V)
+	for k, v := range src {
+		copiedMap[k] = v
+	}
+	return copiedMap
+}
+
+// copyArray returns a new slice with all the elements in the src slice.
+func copyArray[T any](src []T) []T {
+	if src == nil {
+		return nil
+	}
+	copiedArray := make([]T, len(src))
+	copy(copiedArray, src)
+	return copiedArray
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/resource_identifier.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/resource_identifier.go
new file mode 100644
index 000000000000..d35d6374fdc7
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/resource_identifier.go
@@ -0,0 +1,23 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package arm
+
+import (
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/internal/resource"
+)
+
+// RootResourceID defines the tenant as the root parent of all other ResourceID.
+var RootResourceID = resource.RootResourceID
+
+// ResourceID represents a resource ID such as `/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRg`.
+// Don't create this type directly, use ParseResourceID instead.
+type ResourceID = resource.ResourceID
+
+// ParseResourceID parses a string to an instance of ResourceID
+func ParseResourceID(id string) (*ResourceID, error) {
+	return resource.ParseResourceID(id)
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/resource_type.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/resource_type.go
new file mode 100644
index 000000000000..fc7fbffd2603
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/resource_type.go
@@ -0,0 +1,40 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package arm
+
+import (
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/internal/resource"
+)
+
+// SubscriptionResourceType is the ResourceType of a subscription
+var SubscriptionResourceType = resource.SubscriptionResourceType
+
+// ResourceGroupResourceType is the ResourceType of a resource group
+var ResourceGroupResourceType = resource.ResourceGroupResourceType
+
+// TenantResourceType is the ResourceType of a tenant
+var TenantResourceType = resource.TenantResourceType
+
+// ProviderResourceType is the ResourceType of a provider
+var ProviderResourceType = resource.ProviderResourceType
+
+// ResourceType represents an Azure resource type, e.g. "Microsoft.Network/virtualNetworks/subnets".
+// Don't create this type directly, use ParseResourceType or NewResourceType instead.
+type ResourceType = resource.ResourceType
+
+// NewResourceType creates an instance of ResourceType using a provider namespace
+// such as "Microsoft.Network" and type such as "virtualNetworks/subnets".
+func NewResourceType(providerNamespace, typeName string) ResourceType {
+	return resource.NewResourceType(providerNamespace, typeName)
+}
+
+// ParseResourceType parses the ResourceType from a resource type string (e.g. Microsoft.Network/virtualNetworks/subnets)
+// or a resource identifier string.
+// e.g. /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/myRg/providers/Microsoft.Network/virtualNetworks/vnet/subnets/mySubnet)
+func ParseResourceType(resourceIDOrType string) (ResourceType, error) {
+	return resource.ParseResourceType(resourceIDOrType)
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/pipeline.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/pipeline.go
new file mode 100644
index 000000000000..302c19cd4265
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/pipeline.go
@@ -0,0 +1,65 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"errors"
+	"reflect"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	armpolicy "github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	azpolicy "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	azruntime "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+)
+
+// NewPipeline creates a pipeline from connection options. Policies from ClientOptions are
+// placed after policies from PipelineOptions. The telemetry policy, when enabled, will
+// use the specified module and version info.
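+// Pass nil for options to accept the defaults; as the configuration lookup below shows, the
+// pipeline targets Azure Public Cloud unless options.Cloud supplies a different Azure Resource
+// Manager configuration.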
+func NewPipeline(module, version string, cred azcore.TokenCredential, plOpts azruntime.PipelineOptions, options *armpolicy.ClientOptions) (azruntime.Pipeline, error) { + if options == nil { + options = &armpolicy.ClientOptions{} + } + conf, err := getConfiguration(&options.ClientOptions) + if err != nil { + return azruntime.Pipeline{}, err + } + authPolicy := NewBearerTokenPolicy(cred, &armpolicy.BearerTokenOptions{ + AuxiliaryTenants: options.AuxiliaryTenants, + Scopes: []string{conf.Audience + "/.default"}, + }) + perRetry := make([]azpolicy.Policy, len(plOpts.PerRetry), len(plOpts.PerRetry)+1) + copy(perRetry, plOpts.PerRetry) + plOpts.PerRetry = append(perRetry, authPolicy, exported.PolicyFunc(httpTraceNamespacePolicy)) + if !options.DisableRPRegistration { + regRPOpts := armpolicy.RegistrationOptions{ClientOptions: options.ClientOptions} + regPolicy, err := NewRPRegistrationPolicy(cred, ®RPOpts) + if err != nil { + return azruntime.Pipeline{}, err + } + perCall := make([]azpolicy.Policy, len(plOpts.PerCall), len(plOpts.PerCall)+1) + copy(perCall, plOpts.PerCall) + plOpts.PerCall = append(perCall, regPolicy) + } + if plOpts.APIVersion.Name == "" { + plOpts.APIVersion.Name = "api-version" + } + return azruntime.NewPipeline(module, version, plOpts, &options.ClientOptions), nil +} + +func getConfiguration(o *azpolicy.ClientOptions) (cloud.ServiceConfiguration, error) { + c := cloud.AzurePublic + if !reflect.ValueOf(o.Cloud).IsZero() { + c = o.Cloud + } + if conf, ok := c.Services[cloud.ResourceManager]; ok && conf.Endpoint != "" && conf.Audience != "" { + return conf, nil + } else { + return conf, errors.New("provided Cloud field is missing Azure Resource Manager configuration") + } +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/policy_bearer_token.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/policy_bearer_token.go new file mode 100644 index 000000000000..54b3bb78d859 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/policy_bearer_token.go @@ -0,0 +1,145 @@ +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package runtime + +import ( + "context" + "encoding/base64" + "fmt" + "net/http" + "strings" + "time" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + armpolicy "github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/policy" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared" + azpolicy "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + azruntime "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" + "github.com/Azure/azure-sdk-for-go/sdk/internal/temporal" +) + +const headerAuxiliaryAuthorization = "x-ms-authorization-auxiliary" + +// acquiringResourceState holds data for an auxiliary token request +type acquiringResourceState struct { + ctx context.Context + p *BearerTokenPolicy + tenant string +} + +// acquireAuxToken acquires a token from an auxiliary tenant. Only one thread/goroutine at a time ever calls this function. 
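+// It serves as the acquisition callback for a temporal.Resource, which caches the returned token
+// and its expiration so that repeated requests can generally reuse a cached auxiliary token.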
+func acquireAuxToken(state acquiringResourceState) (newResource azcore.AccessToken, newExpiration time.Time, err error) { + tk, err := state.p.cred.GetToken(state.ctx, azpolicy.TokenRequestOptions{ + EnableCAE: true, + Scopes: state.p.scopes, + TenantID: state.tenant, + }) + if err != nil { + return azcore.AccessToken{}, time.Time{}, err + } + return tk, tk.ExpiresOn, nil +} + +// BearerTokenPolicy authorizes requests with bearer tokens acquired from a TokenCredential. +type BearerTokenPolicy struct { + auxResources map[string]*temporal.Resource[azcore.AccessToken, acquiringResourceState] + btp *azruntime.BearerTokenPolicy + cred azcore.TokenCredential + scopes []string +} + +// NewBearerTokenPolicy creates a policy object that authorizes requests with bearer tokens. +// cred: an azcore.TokenCredential implementation such as a credential object from azidentity +// opts: optional settings. Pass nil to accept default values; this is the same as passing a zero-value options. +func NewBearerTokenPolicy(cred azcore.TokenCredential, opts *armpolicy.BearerTokenOptions) *BearerTokenPolicy { + if opts == nil { + opts = &armpolicy.BearerTokenOptions{} + } + p := &BearerTokenPolicy{cred: cred} + p.auxResources = make(map[string]*temporal.Resource[azcore.AccessToken, acquiringResourceState], len(opts.AuxiliaryTenants)) + for _, t := range opts.AuxiliaryTenants { + p.auxResources[t] = temporal.NewResource(acquireAuxToken) + } + p.scopes = make([]string, len(opts.Scopes)) + copy(p.scopes, opts.Scopes) + p.btp = azruntime.NewBearerTokenPolicy(cred, opts.Scopes, &azpolicy.BearerTokenOptions{ + AuthorizationHandler: azpolicy.AuthorizationHandler{ + OnChallenge: p.onChallenge, + OnRequest: p.onRequest, + }, + }) + return p +} + +func (b *BearerTokenPolicy) onChallenge(req *azpolicy.Request, res *http.Response, authNZ func(azpolicy.TokenRequestOptions) error) error { + challenge := res.Header.Get(shared.HeaderWWWAuthenticate) + claims, err := parseChallenge(challenge) + if err != nil { + // the challenge contains claims we can't parse + return err + } else if claims != "" { + // request a new token having the specified claims, send the request again + return authNZ(azpolicy.TokenRequestOptions{Claims: claims, EnableCAE: true, Scopes: b.scopes}) + } + // auth challenge didn't include claims, so this is a simple authorization failure + return azruntime.NewResponseError(res) +} + +// onRequest authorizes requests with one or more bearer tokens +func (b *BearerTokenPolicy) onRequest(req *azpolicy.Request, authNZ func(azpolicy.TokenRequestOptions) error) error { + // authorize the request with a token for the primary tenant + err := authNZ(azpolicy.TokenRequestOptions{EnableCAE: true, Scopes: b.scopes}) + if err != nil || len(b.auxResources) == 0 { + return err + } + // add tokens for auxiliary tenants + as := acquiringResourceState{ + ctx: req.Raw().Context(), + p: b, + } + auxTokens := make([]string, 0, len(b.auxResources)) + for tenant, er := range b.auxResources { + as.tenant = tenant + auxTk, err := er.Get(as) + if err != nil { + return err + } + auxTokens = append(auxTokens, fmt.Sprintf("%s%s", shared.BearerTokenPrefix, auxTk.Token)) + } + req.Raw().Header.Set(headerAuxiliaryAuthorization, strings.Join(auxTokens, ", ")) + return nil +} + +// Do authorizes a request with a bearer token +func (b *BearerTokenPolicy) Do(req *azpolicy.Request) (*http.Response, error) { + return b.btp.Do(req) +} + +// parseChallenge parses claims from an authentication challenge issued by ARM so a client can request a token +// 
that will satisfy conditional access policies. It returns a non-nil error when the given value contains
+// claims it can't parse. If the value contains no claims, it returns an empty string and a nil error.
+func parseChallenge(wwwAuthenticate string) (string, error) {
+	claims := ""
+	var err error
+	for _, param := range strings.Split(wwwAuthenticate, ",") {
+		if _, after, found := strings.Cut(param, "claims="); found {
+			if claims != "" {
+				// The header contains multiple challenges, at least two of which specify claims. The specs allow this
+				// but it's unclear what a client should do in this case and there's as yet no concrete example of it.
+				err = fmt.Errorf("found multiple claims challenges in %q", wwwAuthenticate)
+				break
+			}
+			// trim stuff that would get an error from RawURLEncoding; claims may or may not be padded
+			claims = strings.Trim(after, `\"=`)
+			// we don't return this error because it's something unhelpful like "illegal base64 data at input byte 42"
+			if b, decErr := base64.RawURLEncoding.DecodeString(claims); decErr == nil {
+				claims = string(b)
+			} else {
+				err = fmt.Errorf("failed to parse claims from %q", wwwAuthenticate)
+				break
+			}
+		}
+	}
+	return claims, err
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/policy_register_rp.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/policy_register_rp.go
new file mode 100644
index 000000000000..83e15949aa36
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/policy_register_rp.go
@@ -0,0 +1,347 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"net/http"
+	"net/url"
+	"strings"
+	"time"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	armpolicy "github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	azpolicy "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/log"
+)
+
+const (
+	// LogRPRegistration entries contain information specific to the automatic registration of an RP.
+	// Entries of this classification are written IFF the policy needs to take any action.
+	LogRPRegistration log.Event = "RPRegistration"
+)
+
+// setDefaults sets any default values
+func setDefaults(r *armpolicy.RegistrationOptions) {
+	if r.MaxAttempts == 0 {
+		r.MaxAttempts = 3
+	} else if r.MaxAttempts < 0 {
+		r.MaxAttempts = 0
+	}
+	if r.PollingDelay == 0 {
+		r.PollingDelay = 15 * time.Second
+	} else if r.PollingDelay < 0 {
+		r.PollingDelay = 0
+	}
+	if r.PollingDuration == 0 {
+		r.PollingDuration = 5 * time.Minute
+	}
+}
+
+// NewRPRegistrationPolicy creates a policy object configured using the specified options.
+// The policy controls whether an unregistered resource provider should automatically be
+// registered. See https://aka.ms/rps-not-found for more information.
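+// Pass nil for o to accept the defaults applied by setDefaults above (3 attempts, a 15 second
+// polling delay, and a 5 minute polling duration); a negative MaxAttempts disables the policy.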
+func NewRPRegistrationPolicy(cred azcore.TokenCredential, o *armpolicy.RegistrationOptions) (azpolicy.Policy, error) { + if o == nil { + o = &armpolicy.RegistrationOptions{} + } + conf, err := getConfiguration(&o.ClientOptions) + if err != nil { + return nil, err + } + authPolicy := NewBearerTokenPolicy(cred, &armpolicy.BearerTokenOptions{Scopes: []string{conf.Audience + "/.default"}}) + p := &rpRegistrationPolicy{ + endpoint: conf.Endpoint, + pipeline: runtime.NewPipeline(shared.Module, shared.Version, runtime.PipelineOptions{PerRetry: []azpolicy.Policy{authPolicy}}, &o.ClientOptions), + options: *o, + } + // init the copy + setDefaults(&p.options) + return p, nil +} + +type rpRegistrationPolicy struct { + endpoint string + pipeline runtime.Pipeline + options armpolicy.RegistrationOptions +} + +func (r *rpRegistrationPolicy) Do(req *azpolicy.Request) (*http.Response, error) { + if r.options.MaxAttempts == 0 { + // policy is disabled + return req.Next() + } + const registeredState = "Registered" + var rp string + var resp *http.Response + for attempts := 0; attempts < r.options.MaxAttempts; attempts++ { + var err error + // make the original request + resp, err = req.Next() + // getting a 409 is the first indication that the RP might need to be registered, check error response + if err != nil || resp.StatusCode != http.StatusConflict { + return resp, err + } + var reqErr requestError + if err = runtime.UnmarshalAsJSON(resp, &reqErr); err != nil { + return resp, err + } + if reqErr.ServiceError == nil { + // missing service error info. just return the response + // to the caller so its error unmarshalling will kick in + return resp, err + } + if !isUnregisteredRPCode(reqErr.ServiceError.Code) { + // not a 409 due to unregistered RP. just return the response + // to the caller so its error unmarshalling will kick in + return resp, err + } + // RP needs to be registered. 
start by getting the subscription ID from the original request + subID, err := getSubscription(req.Raw().URL.Path) + if err != nil { + return resp, err + } + // now get the RP from the error + rp, err = getProvider(reqErr) + if err != nil { + return resp, err + } + logRegistrationExit := func(v interface{}) { + log.Writef(LogRPRegistration, "END registration for %s: %v", rp, v) + } + log.Writef(LogRPRegistration, "BEGIN registration for %s", rp) + // create client and make the registration request + // we use the scheme and host from the original request + rpOps := &providersOperations{ + p: r.pipeline, + u: r.endpoint, + subID: subID, + } + if _, err = rpOps.Register(&shared.ContextWithDeniedValues{Context: req.Raw().Context()}, rp); err != nil { + logRegistrationExit(err) + return resp, err + } + + // RP was registered, however we need to wait for the registration to complete + pollCtx, pollCancel := context.WithTimeout(&shared.ContextWithDeniedValues{Context: req.Raw().Context()}, r.options.PollingDuration) + var lastRegState string + for { + // get the current registration state + getResp, err := rpOps.Get(pollCtx, rp) + if err != nil { + pollCancel() + logRegistrationExit(err) + return resp, err + } + if getResp.Provider.RegistrationState != nil && !strings.EqualFold(*getResp.Provider.RegistrationState, lastRegState) { + // registration state has changed, or was updated for the first time + lastRegState = *getResp.Provider.RegistrationState + log.Writef(LogRPRegistration, "registration state is %s", lastRegState) + } + if strings.EqualFold(lastRegState, registeredState) { + // registration complete + pollCancel() + logRegistrationExit(lastRegState) + break + } + // wait before trying again + select { + case <-time.After(r.options.PollingDelay): + // continue polling + case <-pollCtx.Done(): + pollCancel() + logRegistrationExit(pollCtx.Err()) + return resp, pollCtx.Err() + } + } + // RP was successfully registered, retry the original request + err = req.RewindBody() + if err != nil { + return resp, err + } + } + // if we get here it means we exceeded the number of attempts + return resp, fmt.Errorf("exceeded attempts to register %s", rp) +} + +var unregisteredRPCodes = []string{ + "MissingSubscriptionRegistration", + "MissingRegistrationForResourceProvider", + "Subscription Not Registered", + "SubscriptionNotRegistered", +} + +func isUnregisteredRPCode(errorCode string) bool { + for _, code := range unregisteredRPCodes { + if strings.EqualFold(errorCode, code) { + return true + } + } + return false +} + +func getSubscription(path string) (string, error) { + parts := strings.Split(path, "/") + for i, v := range parts { + if v == "subscriptions" && (i+1) < len(parts) { + return parts[i+1], nil + } + } + return "", fmt.Errorf("failed to obtain subscription ID from %s", path) +} + +func getProvider(re requestError) (string, error) { + if len(re.ServiceError.Details) > 0 { + return re.ServiceError.Details[0].Target, nil + } + return "", errors.New("unexpected empty Details") +} + +// minimal error definitions to simplify detection +type requestError struct { + ServiceError *serviceError `json:"error"` +} + +type serviceError struct { + Code string `json:"code"` + Details []serviceErrorDetails `json:"details"` +} + +type serviceErrorDetails struct { + Code string `json:"code"` + Target string `json:"target"` +} + +/////////////////////////////////////////////////////////////////////////////////////////////// +// the following code was copied from module armresources, providers.go and 
models.go +// only the minimum amount of code was copied to get this working and some edits were made. +/////////////////////////////////////////////////////////////////////////////////////////////// + +type providersOperations struct { + p runtime.Pipeline + u string + subID string +} + +// Get - Gets the specified resource provider. +func (client *providersOperations) Get(ctx context.Context, resourceProviderNamespace string) (providerResponse, error) { + req, err := client.getCreateRequest(ctx, resourceProviderNamespace) + if err != nil { + return providerResponse{}, err + } + resp, err := client.p.Do(req) + if err != nil { + return providerResponse{}, err + } + result, err := client.getHandleResponse(resp) + if err != nil { + return providerResponse{}, err + } + return result, nil +} + +// getCreateRequest creates the Get request. +func (client *providersOperations) getCreateRequest(ctx context.Context, resourceProviderNamespace string) (*azpolicy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/providers/{resourceProviderNamespace}" + urlPath = strings.ReplaceAll(urlPath, "{resourceProviderNamespace}", url.PathEscape(resourceProviderNamespace)) + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subID)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.u, urlPath)) + if err != nil { + return nil, err + } + query := req.Raw().URL.Query() + query.Set("api-version", "2019-05-01") + req.Raw().URL.RawQuery = query.Encode() + return req, nil +} + +// getHandleResponse handles the Get response. +func (client *providersOperations) getHandleResponse(resp *http.Response) (providerResponse, error) { + if !runtime.HasStatusCode(resp, http.StatusOK) { + return providerResponse{}, exported.NewResponseError(resp) + } + result := providerResponse{RawResponse: resp} + err := runtime.UnmarshalAsJSON(resp, &result.Provider) + if err != nil { + return providerResponse{}, err + } + return result, err +} + +// Register - Registers a subscription with a resource provider. +func (client *providersOperations) Register(ctx context.Context, resourceProviderNamespace string) (providerResponse, error) { + req, err := client.registerCreateRequest(ctx, resourceProviderNamespace) + if err != nil { + return providerResponse{}, err + } + resp, err := client.p.Do(req) + if err != nil { + return providerResponse{}, err + } + result, err := client.registerHandleResponse(resp) + if err != nil { + return providerResponse{}, err + } + return result, nil +} + +// registerCreateRequest creates the Register request. +func (client *providersOperations) registerCreateRequest(ctx context.Context, resourceProviderNamespace string) (*azpolicy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/providers/{resourceProviderNamespace}/register" + urlPath = strings.ReplaceAll(urlPath, "{resourceProviderNamespace}", url.PathEscape(resourceProviderNamespace)) + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subID)) + req, err := runtime.NewRequest(ctx, http.MethodPost, runtime.JoinPaths(client.u, urlPath)) + if err != nil { + return nil, err + } + query := req.Raw().URL.Query() + query.Set("api-version", "2019-05-01") + req.Raw().URL.RawQuery = query.Encode() + return req, nil +} + +// registerHandleResponse handles the Register response. 
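+// Any status code other than 200 is surfaced to the caller via exported.NewResponseError.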
+func (client *providersOperations) registerHandleResponse(resp *http.Response) (providerResponse, error) {
+	if !runtime.HasStatusCode(resp, http.StatusOK) {
+		return providerResponse{}, exported.NewResponseError(resp)
+	}
+	result := providerResponse{RawResponse: resp}
+	err := runtime.UnmarshalAsJSON(resp, &result.Provider)
+	if err != nil {
+		return providerResponse{}, err
+	}
+	return result, err
+}
+
+// providerResponse is the response envelope for operations that return a provider type.
+type providerResponse struct {
+	// Resource provider information.
+	Provider *provider
+
+	// RawResponse contains the underlying HTTP response.
+	RawResponse *http.Response
+}
+
+// provider - Resource provider information.
+type provider struct {
+	// The provider ID.
+	ID *string `json:"id,omitempty"`
+
+	// The namespace of the resource provider.
+	Namespace *string `json:"namespace,omitempty"`
+
+	// The registration policy of the resource provider.
+	RegistrationPolicy *string `json:"registrationPolicy,omitempty"`
+
+	// The registration state of the resource provider.
+	RegistrationState *string `json:"registrationState,omitempty"`
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/policy_trace_namespace.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/policy_trace_namespace.go
new file mode 100644
index 000000000000..6cea184240f2
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/policy_trace_namespace.go
@@ -0,0 +1,30 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"net/http"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/internal/resource"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing"
+)
+
+// httpTraceNamespacePolicy is a policy that adds the az.namespace attribute to the current Span
+func httpTraceNamespacePolicy(req *policy.Request) (resp *http.Response, err error) {
+	rawTracer := req.Raw().Context().Value(shared.CtxWithTracingTracer{})
+	if tracer, ok := rawTracer.(tracing.Tracer); ok && tracer.Enabled() {
+		rt, err := resource.ParseResourceType(req.Raw().URL.Path)
+		if err == nil {
+			// add the namespace attribute to the current span
+			span := tracer.SpanFromContext(req.Raw().Context())
+			span.SetAttributes(tracing.Attribute{Key: shared.TracingNamespaceAttrName, Value: rt.Namespace})
+		}
+	}
+	return req.Next()
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/runtime.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/runtime.go
new file mode 100644
index 000000000000..1400d43799f3
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime/runtime.go
@@ -0,0 +1,24 @@
+//go:build go1.16
+// +build go1.16
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+ +package runtime + +import "github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud" + +func init() { + cloud.AzureChina.Services[cloud.ResourceManager] = cloud.ServiceConfiguration{ + Audience: "https://management.core.chinacloudapi.cn", + Endpoint: "https://management.chinacloudapi.cn", + } + cloud.AzureGovernment.Services[cloud.ResourceManager] = cloud.ServiceConfiguration{ + Audience: "https://management.core.usgovcloudapi.net", + Endpoint: "https://management.usgovcloudapi.net", + } + cloud.AzurePublic.Services[cloud.ResourceManager] = cloud.ServiceConfiguration{ + Audience: "https://management.core.windows.net/", + Endpoint: "https://management.azure.com", + } +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/ci.yml b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/ci.yml new file mode 100644 index 000000000000..aab9218538da --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/ci.yml @@ -0,0 +1,29 @@ +# NOTE: Please refer to https://aka.ms/azsdk/engsys/ci-yaml before editing this file. +trigger: + branches: + include: + - main + - feature/* + - hotfix/* + - release/* + paths: + include: + - sdk/azcore/ + - eng/ + +pr: + branches: + include: + - main + - feature/* + - hotfix/* + - release/* + paths: + include: + - sdk/azcore/ + - eng/ + +stages: +- template: /eng/pipelines/templates/jobs/archetype-sdk-client.yml + parameters: + ServiceDirectory: azcore diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud/cloud.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud/cloud.go new file mode 100644 index 000000000000..9d077a3e1260 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud/cloud.go @@ -0,0 +1,44 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package cloud + +var ( + // AzureChina contains configuration for Azure China. + AzureChina = Configuration{ + ActiveDirectoryAuthorityHost: "https://login.chinacloudapi.cn/", Services: map[ServiceName]ServiceConfiguration{}, + } + // AzureGovernment contains configuration for Azure Government. + AzureGovernment = Configuration{ + ActiveDirectoryAuthorityHost: "https://login.microsoftonline.us/", Services: map[ServiceName]ServiceConfiguration{}, + } + // AzurePublic contains configuration for Azure Public Cloud. + AzurePublic = Configuration{ + ActiveDirectoryAuthorityHost: "https://login.microsoftonline.com/", Services: map[ServiceName]ServiceConfiguration{}, + } +) + +// ServiceName identifies a cloud service. +type ServiceName string + +// ResourceManager is a global constant identifying Azure Resource Manager. +const ResourceManager ServiceName = "resourceManager" + +// ServiceConfiguration configures a specific cloud service such as Azure Resource Manager. +type ServiceConfiguration struct { + // Audience is the audience the client will request for its access tokens. + Audience string + // Endpoint is the service's base URL. + Endpoint string +} + +// Configuration configures a cloud. +type Configuration struct { + // ActiveDirectoryAuthorityHost is the base URL of the cloud's Azure Active Directory. + ActiveDirectoryAuthorityHost string + // Services contains configuration for the cloud's services. 
+ Services map[ServiceName]ServiceConfiguration +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud/doc.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud/doc.go new file mode 100644 index 000000000000..985b1bde2f2d --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud/doc.go @@ -0,0 +1,53 @@ +//go:build go1.16 +// +build go1.16 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +/* +Package cloud implements a configuration API for applications deployed to sovereign or private Azure clouds. + +Azure SDK client configuration defaults are appropriate for Azure Public Cloud (sometimes referred to as +"Azure Commercial" or simply "Microsoft Azure"). This package enables applications deployed to other +Azure Clouds to configure clients appropriately. + +This package contains predefined configuration for well-known sovereign clouds such as Azure Government and +Azure China. Azure SDK clients accept this configuration via the Cloud field of azcore.ClientOptions. For +example, configuring a credential and ARM client for Azure Government: + + opts := azcore.ClientOptions{Cloud: cloud.AzureGovernment} + cred, err := azidentity.NewDefaultAzureCredential( + &azidentity.DefaultAzureCredentialOptions{ClientOptions: opts}, + ) + handle(err) + + client, err := armsubscription.NewClient( + cred, &arm.ClientOptions{ClientOptions: opts}, + ) + handle(err) + +Applications deployed to a private cloud such as Azure Stack create a Configuration object with +appropriate values: + + c := cloud.Configuration{ + ActiveDirectoryAuthorityHost: "https://...", + Services: map[cloud.ServiceName]cloud.ServiceConfiguration{ + cloud.ResourceManager: { + Audience: "...", + Endpoint: "https://...", + }, + }, + } + opts := azcore.ClientOptions{Cloud: c} + + cred, err := azidentity.NewDefaultAzureCredential( + &azidentity.DefaultAzureCredentialOptions{ClientOptions: opts}, + ) + handle(err) + + client, err := armsubscription.NewClient( + cred, &arm.ClientOptions{ClientOptions: opts}, + ) + handle(err) +*/ +package cloud diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/core.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/core.go new file mode 100644 index 000000000000..8eef8633a7e8 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/core.go @@ -0,0 +1,154 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package azcore + +import ( + "reflect" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing" +) + +// AccessToken represents an Azure service bearer access token with expiry information. +type AccessToken = exported.AccessToken + +// TokenCredential represents a credential capable of providing an OAuth token. +type TokenCredential = exported.TokenCredential + +// KeyCredential contains an authentication key used to authenticate to an Azure service. +type KeyCredential = exported.KeyCredential + +// NewKeyCredential creates a new instance of [KeyCredential] with the specified values. 
+// - key is the authentication key +func NewKeyCredential(key string) *KeyCredential { + return exported.NewKeyCredential(key) +} + +// SASCredential contains a shared access signature used to authenticate to an Azure service. +type SASCredential = exported.SASCredential + +// NewSASCredential creates a new instance of [SASCredential] with the specified values. +// - sas is the shared access signature +func NewSASCredential(sas string) *SASCredential { + return exported.NewSASCredential(sas) +} + +// holds sentinel values used to send nulls +var nullables map[reflect.Type]interface{} = map[reflect.Type]interface{}{} + +// NullValue is used to send an explicit 'null' within a request. +// This is typically used in JSON-MERGE-PATCH operations to delete a value. +func NullValue[T any]() T { + t := shared.TypeOfT[T]() + v, found := nullables[t] + if !found { + var o reflect.Value + if k := t.Kind(); k == reflect.Map { + o = reflect.MakeMap(t) + } else if k == reflect.Slice { + // empty slices appear to all point to the same data block + // which causes comparisons to become ambiguous. so we create + // a slice with len/cap of one which ensures a unique address. + o = reflect.MakeSlice(t, 1, 1) + } else { + o = reflect.New(t.Elem()) + } + v = o.Interface() + nullables[t] = v + } + // return the sentinel object + return v.(T) +} + +// IsNullValue returns true if the field contains a null sentinel value. +// This is used by custom marshallers to properly encode a null value. +func IsNullValue[T any](v T) bool { + // see if our map has a sentinel object for this *T + t := reflect.TypeOf(v) + if o, found := nullables[t]; found { + o1 := reflect.ValueOf(o) + v1 := reflect.ValueOf(v) + // we found it; return true if v points to the sentinel object. + // NOTE: maps and slices can only be compared to nil, else you get + // a runtime panic. so we compare addresses instead. + return o1.Pointer() == v1.Pointer() + } + // no sentinel object for this *t + return false +} + +// ClientOptions contains optional settings for a client's pipeline. +// Instances can be shared across calls to SDK client constructors when uniform configuration is desired. +// Zero-value fields will have their specified default values applied during use. +type ClientOptions = policy.ClientOptions + +// Client is a basic HTTP client. It consists of a pipeline and tracing provider. +type Client struct { + pl runtime.Pipeline + tr tracing.Tracer + + // cached on the client to support shallow copying with new values + tp tracing.Provider + modVer string + namespace string +} + +// NewClient creates a new Client instance with the provided values. +// - moduleName - the fully qualified name of the module where the client is defined; used by the telemetry policy and tracing provider. +// - moduleVersion - the semantic version of the module; used by the telemetry policy and tracing provider. 
+// - plOpts - pipeline configuration options; can be the zero-value
+// - options - optional client configurations; pass nil to accept the default values
+func NewClient(moduleName, moduleVersion string, plOpts runtime.PipelineOptions, options *ClientOptions) (*Client, error) {
+	if options == nil {
+		options = &ClientOptions{}
+	}
+
+	if !options.Telemetry.Disabled {
+		if err := shared.ValidateModVer(moduleVersion); err != nil {
+			return nil, err
+		}
+	}
+
+	pl := runtime.NewPipeline(moduleName, moduleVersion, plOpts, options)
+
+	tr := options.TracingProvider.NewTracer(moduleName, moduleVersion)
+	if tr.Enabled() && plOpts.Tracing.Namespace != "" {
+		tr.SetAttributes(tracing.Attribute{Key: shared.TracingNamespaceAttrName, Value: plOpts.Tracing.Namespace})
+	}
+
+	return &Client{
+		pl:        pl,
+		tr:        tr,
+		tp:        options.TracingProvider,
+		modVer:    moduleVersion,
+		namespace: plOpts.Tracing.Namespace,
+	}, nil
+}
+
+// Pipeline returns the pipeline for this client.
+func (c *Client) Pipeline() runtime.Pipeline {
+	return c.pl
+}
+
+// Tracer returns the tracer for this client.
+func (c *Client) Tracer() tracing.Tracer {
+	return c.tr
+}
+
+// WithClientName returns a shallow copy of the Client with its tracing client name changed to clientName.
+// Note that the values for module name and version will be preserved from the source Client.
+// - clientName - the fully qualified name of the client ("package.Client"); this is used by the tracing provider when creating spans
+func (c *Client) WithClientName(clientName string) *Client {
+	tr := c.tp.NewTracer(clientName, c.modVer)
+	if tr.Enabled() && c.namespace != "" {
+		tr.SetAttributes(tracing.Attribute{Key: shared.TracingNamespaceAttrName, Value: c.namespace})
+	}
+	return &Client{pl: c.pl, tr: tr, tp: c.tp, modVer: c.modVer, namespace: c.namespace}
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/doc.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/doc.go
new file mode 100644
index 000000000000..654a5f404314
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/doc.go
@@ -0,0 +1,264 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright 2017 Microsoft Corporation. All rights reserved.
+// Use of this source code is governed by an MIT
+// license that can be found in the LICENSE file.
+
+/*
+Package azcore implements an HTTP request/response middleware pipeline used by Azure SDK clients.
+
+The middleware consists of three components.
+
+  - One or more Policy instances.
+  - A Transporter instance.
+  - A Pipeline instance that combines the Policy and Transporter instances.
+
+# Implementing the Policy Interface
+
+A Policy can be implemented in two ways: as a first-class function for a stateless Policy, or as
+a method on a type for a stateful Policy. Note that HTTP requests made via the same pipeline share
+the same Policy instances, so if a Policy mutates its state it MUST be properly synchronized to
+avoid race conditions.
+
+A Policy's Do method is called when an HTTP request wants to be sent over the network. The Do method can
+perform any operation(s) it desires. For example, it can log the outgoing request, mutate the URL, headers,
+and/or query parameters, inject a failure, etc. Once the Policy has successfully completed its request
+work, it must call the Next() method on the *policy.Request instance in order to pass the request to the
+next Policy in the chain.
+
+When an HTTP response comes back, the Policy then gets a chance to process the response/error. The Policy instance
+can log the response, retry the operation if it failed due to a transient error or timeout, unmarshal the response
+body, etc. Once the Policy has successfully completed its response work, it must return the *http.Response
+and error instances to its caller.
+
+Template for implementing a stateless Policy:
+
+	type policyFunc func(*policy.Request) (*http.Response, error)
+
+	// Do implements the Policy interface on policyFunc.
+	func (pf policyFunc) Do(req *policy.Request) (*http.Response, error) {
+		return pf(req)
+	}
+
+	func NewMyStatelessPolicy() policy.Policy {
+		return policyFunc(func(req *policy.Request) (*http.Response, error) {
+			// TODO: mutate/process Request here
+
+			// forward Request to next Policy & get Response/error
+			resp, err := req.Next()
+
+			// TODO: mutate/process Response/error here
+
+			// return Response/error to previous Policy
+			return resp, err
+		})
+	}
+
+Template for implementing a stateful Policy:
+
+	type MyStatefulPolicy struct {
+		// TODO: add configuration/setting fields here
+	}
+
+	// TODO: add initialization args to NewMyStatefulPolicy()
+	func NewMyStatefulPolicy() policy.Policy {
+		return &MyStatefulPolicy{
+			// TODO: initialize configuration/setting fields here
+		}
+	}
+
+	func (p *MyStatefulPolicy) Do(req *policy.Request) (resp *http.Response, err error) {
+		// TODO: mutate/process Request here
+
+		// forward Request to next Policy & get Response/error
+		resp, err = req.Next()
+
+		// TODO: mutate/process Response/error here
+
+		// return Response/error to previous Policy
+		return resp, err
+	}
+
+# Implementing the Transporter Interface
+
+The Transporter interface is responsible for sending the HTTP request and returning the corresponding
+HTTP response or error. The Transporter is invoked by the last Policy in the chain. The default Transporter
+implementation uses a shared http.Client from the standard library.
+
+The same stateful/stateless rules for Policy implementations apply to Transporter implementations.
+
+# Using Policy and Transporter Instances Via a Pipeline
+
+To use the Policy and Transporter instances, an application passes them to the runtime.NewPipeline function.
+
+	func NewPipeline(transport Transporter, policies ...Policy) Pipeline
+
+The specified Policy instances form a chain and are invoked in the order provided to NewPipeline
+followed by the Transporter.
+
+Once the Pipeline has been created, create a runtime.Request instance and pass it to Pipeline's Do method.
+
+	func NewRequest(ctx context.Context, httpMethod string, endpoint string) (*Request, error)
+
+	func (p Pipeline) Do(req *Request) (*http.Response, error)
+
+The Pipeline.Do method sends the specified Request through the chain of Policy and Transporter
+instances. The response/error is then sent through the same chain of Policy instances in reverse
+order. For example, assuming there are Policy types PolicyA, PolicyB, and PolicyC along with
+TransportA.
+
+	pipeline := NewPipeline(TransportA, PolicyA, PolicyB, PolicyC)
+
+The flow of Request and Response looks like the following:
+
+	policy.Request -> PolicyA -> PolicyB -> PolicyC -> TransportA -----+
+	                                                                   |
+	                                                    HTTP(S) endpoint
+	                                                                   |
+	caller <--------- PolicyA <- PolicyB <- PolicyC <- http.Response-+
+
+# Creating a Request Instance
+
+The Request instance passed to Pipeline's Do method is a wrapper around an *http.Request.
It also
+contains some internal state and provides various convenience methods. You create a Request instance
+by calling the runtime.NewRequest function:
+
+	func NewRequest(ctx context.Context, httpMethod string, endpoint string) (*Request, error)
+
+If the Request should contain a body, call the SetBody method.
+
+	func (req *Request) SetBody(body ReadSeekCloser, contentType string) error
+
+A seekable stream is required so that upon retry, the retry Policy instance can seek the stream
+back to the beginning before retrying the network request and re-uploading the body.
+
+# Sending an Explicit Null
+
+Operations like JSON-MERGE-PATCH send a JSON null to indicate a value should be deleted.
+
+	{
+		"delete-me": null
+	}
+
+This requirement conflicts with the SDK's default marshalling that specifies "omitempty" as
+a means to resolve the ambiguity between a field to be excluded and its zero-value.
+
+	type Widget struct {
+		Name  *string `json:",omitempty"`
+		Count *int    `json:",omitempty"`
+	}
+
+In the above example, Name and Count are defined as pointer-to-type to disambiguate between
+a missing value (nil) and a zero-value (0) which might have semantic differences.
+
+In a PATCH operation, any fields left as nil are to have their values preserved. When updating
+a Widget's count, one simply specifies the new value for Count, leaving Name nil.
+
+To fulfill the requirement for sending a JSON null, the NullValue() function can be used.
+
+	w := Widget{
+		Count: azcore.NullValue[*int](),
+	}
+
+This sends an explicit "null" for Count, indicating that any current value for Count should be deleted.
+
+# Processing the Response
+
+When the HTTP response is received, the *http.Response is returned directly. Each Policy instance
+can inspect/mutate the *http.Response.
+
+# Built-in Logging
+
+To enable logging, set environment variable AZURE_SDK_GO_LOGGING to "all" before executing your program.
+
+By default the logger writes to stderr. This can be customized by calling log.SetListener, providing
+a callback that writes to the desired location. Any custom logging implementation MUST provide its
+own synchronization to handle concurrent invocations.
+
+See the docs for the log package for further details.
+
+# Pageable Operations
+
+Pageable operations return potentially large data sets spread over multiple GET requests. The result of
+each GET is a "page" of data consisting of a slice of items.
+
+Pageable operations can be identified by their New*Pager naming convention and return type of *runtime.Pager[T].
+
+	func (c *WidgetClient) NewListWidgetsPager(o *Options) *runtime.Pager[PageResponse]
+
+The call to WidgetClient.NewListWidgetsPager() returns an instance of *runtime.Pager[T] for fetching pages
+and determining if there are more pages to fetch. No IO calls are made until the NextPage() method is invoked.
+
+	pager := widgetClient.NewListWidgetsPager(nil)
+	for pager.More() {
+		page, err := pager.NextPage(context.TODO())
+		// handle err
+		for _, widget := range page.Values {
+			// process widget
+		}
+	}
+
+# Long-Running Operations
+
+Long-running operations (LROs) are operations consisting of an initial request to start the operation followed
+by polling to determine when the operation has reached a terminal state. An LRO's terminal state is one
+of the following values.
+
+  - Succeeded - the LRO completed successfully
+  - Failed - the LRO failed to complete
+  - Canceled - the LRO was canceled
+
+LROs can be identified by their Begin* prefix and their return type of *runtime.Poller[T].
+ + func (c *WidgetClient) BeginCreateOrUpdate(ctx context.Context, w Widget, o *Options) (*runtime.Poller[Response], error) + +When a call to WidgetClient.BeginCreateOrUpdate() returns a nil error, it means that the LRO has started. +It does _not_ mean that the widget has been created or updated (or failed to be created/updated). + +The *runtime.Poller[T] provides APIs for determining the state of the LRO. To wait for the LRO to complete, +call the PollUntilDone() method. + + poller, err := widgetClient.BeginCreateOrUpdate(context.TODO(), Widget{}, nil) + // handle err + result, err := poller.PollUntilDone(context.TODO(), nil) + // handle err + // use result + +The call to PollUntilDone() will block the current goroutine until the LRO has reached a terminal state or the +context is canceled/timed out. + +Note that LROs can take anywhere from several seconds to several minutes. The duration is operation-dependent. Due to +this variant behavior, pollers do _not_ have a preconfigured time-out. Use a context with the appropriate cancellation +mechanism as required. + +# Resume Tokens + +Pollers provide the ability to serialize their state into a "resume token" which can be used by another process to +recreate the poller. This is achieved via the runtime.Poller[T].ResumeToken() method. + + token, err := poller.ResumeToken() + // handle error + +Note that a token can only be obtained for a poller that's in a non-terminal state. Also note that any subsequent calls +to poller.Poll() might change the poller's state. In this case, a new token should be created. + +After the token has been obtained, it can be used to recreate an instance of the originating poller. + + poller, err := widgetClient.BeginCreateOrUpdate(nil, Widget{}, &Options{ + ResumeToken: token, + }) + +When resuming a poller, no IO is performed, and zero-value arguments can be used for everything but the Options.ResumeToken. + +Resume tokens are unique per service client and operation. Attempting to resume a poller for LRO BeginB() with a token from LRO +BeginA() will result in an error. + +# Fakes + +The fake package contains types used for constructing in-memory fake servers used in unit tests. +This allows writing tests to cover various success/error conditions without the need for connecting to a live service. + +Please see https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/samples/fakes for details and examples on how to use fakes. +*/ +package azcore diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/errors.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/errors.go new file mode 100644 index 000000000000..17bd50c67320 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/errors.go @@ -0,0 +1,14 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package azcore + +import "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported" + +// ResponseError is returned when a request is made to a service and +// the service returns a non-success HTTP status code. +// Use errors.As() to access this type in the error chain. 
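+//
+// An illustrative sketch of inspecting a failed call (any error returned by a
+// client method may wrap a *ResponseError):
+//
+//	var respErr *azcore.ResponseError
+//	if errors.As(err, &respErr) {
+//		// respErr.StatusCode and respErr.ErrorCode describe the failure
+//	}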
+type ResponseError = exported.ResponseError
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/etag.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/etag.go
new file mode 100644
index 000000000000..23ea7e7c8eac
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/etag.go
@@ -0,0 +1,48 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azcore
+
+import (
+	"strings"
+)
+
+// ETag is a property used for optimistic concurrency during updates.
+// It is a validator based on https://tools.ietf.org/html/rfc7232#section-2.3.2.
+// An ETag can be empty ("").
+type ETag string
+
+// ETagAny is an ETag that represents everything; its value is "*".
+const ETagAny ETag = "*"
+
+// Equals does a strong comparison of two ETags. Equals returns true when both
+// ETags are not weak and the values of the underlying strings are equal.
+func (e ETag) Equals(other ETag) bool {
+	return !e.IsWeak() && !other.IsWeak() && e == other
+}
+
+// WeakEquals does a weak comparison of two ETags. Two ETags are equivalent if their opaque-tags match
+// character-by-character, regardless of either or both being tagged as "weak".
+func (e ETag) WeakEquals(other ETag) bool {
+	getStart := func(e1 ETag) int {
+		if e1.IsWeak() {
+			return 2
+		}
+		return 0
+	}
+	aStart := getStart(e)
+	bStart := getStart(other)
+
+	aVal := e[aStart:]
+	bVal := other[bStart:]
+
+	return aVal == bVal
+}
+
+// IsWeak specifies whether the ETag is strong or weak.
+func (e ETag) IsWeak() bool {
+	return len(e) >= 4 && strings.HasPrefix(string(e), "W/\"") && strings.HasSuffix(string(e), "\"")
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported/exported.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported/exported.go
new file mode 100644
index 000000000000..f2b296b6dc7c
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported/exported.go
@@ -0,0 +1,175 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package exported
+
+import (
+	"context"
+	"encoding/base64"
+	"fmt"
+	"io"
+	"net/http"
+	"sync/atomic"
+	"time"
+)
+
+type nopCloser struct {
+	io.ReadSeeker
+}
+
+func (n nopCloser) Close() error {
+	return nil
+}
+
+// NopCloser returns a ReadSeekCloser with a no-op close method wrapping the provided io.ReadSeeker.
+// Exported as streaming.NopCloser().
+func NopCloser(rs io.ReadSeeker) io.ReadSeekCloser {
+	return nopCloser{rs}
+}
+
+// HasStatusCode returns true if the Response's status code is one of the specified values.
+// Exported as runtime.HasStatusCode().
+func HasStatusCode(resp *http.Response, statusCodes ...int) bool {
+	if resp == nil {
+		return false
+	}
+	for _, sc := range statusCodes {
+		if resp.StatusCode == sc {
+			return true
+		}
+	}
+	return false
+}
+
+// AccessToken represents an Azure service bearer access token with expiry information.
+// Exported as azcore.AccessToken.
+type AccessToken struct {
+	Token     string
+	ExpiresOn time.Time
+}
+
+// TokenRequestOptions contains parameters that may be used by credential types when attempting to get a token.
+// Exported as policy.TokenRequestOptions.
+type TokenRequestOptions struct { + // Claims are any additional claims required for the token to satisfy a conditional access policy, such as a + // service may return in a claims challenge following an authorization failure. If a service returned the + // claims value base64 encoded, it must be decoded before setting this field. + Claims string + + // EnableCAE indicates whether to enable Continuous Access Evaluation (CAE) for the requested token. When true, + // azidentity credentials request CAE tokens for resource APIs supporting CAE. Clients are responsible for + // handling CAE challenges. If a client that doesn't handle CAE challenges receives a CAE token, it may end up + // in a loop retrying an API call with a token that has been revoked due to CAE. + EnableCAE bool + + // Scopes contains the list of permission scopes required for the token. + Scopes []string + + // TenantID identifies the tenant from which to request the token. azidentity credentials authenticate in + // their configured default tenants when this field isn't set. + TenantID string +} + +// TokenCredential represents a credential capable of providing an OAuth token. +// Exported as azcore.TokenCredential. +type TokenCredential interface { + // GetToken requests an access token for the specified set of scopes. + GetToken(ctx context.Context, options TokenRequestOptions) (AccessToken, error) +} + +// DecodeByteArray will base-64 decode the provided string into v. +// Exported as runtime.DecodeByteArray() +func DecodeByteArray(s string, v *[]byte, format Base64Encoding) error { + if len(s) == 0 { + return nil + } + payload := string(s) + if payload[0] == '"' { + // remove surrounding quotes + payload = payload[1 : len(payload)-1] + } + switch format { + case Base64StdFormat: + decoded, err := base64.StdEncoding.DecodeString(payload) + if err == nil { + *v = decoded + return nil + } + return err + case Base64URLFormat: + // use raw encoding as URL format should not contain any '=' characters + decoded, err := base64.RawURLEncoding.DecodeString(payload) + if err == nil { + *v = decoded + return nil + } + return err + default: + return fmt.Errorf("unrecognized byte array format: %d", format) + } +} + +// KeyCredential contains an authentication key used to authenticate to an Azure service. +// Exported as azcore.KeyCredential. +type KeyCredential struct { + cred *keyCredential +} + +// NewKeyCredential creates a new instance of [KeyCredential] with the specified values. +// - key is the authentication key +func NewKeyCredential(key string) *KeyCredential { + return &KeyCredential{cred: newKeyCredential(key)} +} + +// Update replaces the existing key with the specified value. +func (k *KeyCredential) Update(key string) { + k.cred.Update(key) +} + +// SASCredential contains a shared access signature used to authenticate to an Azure service. +// Exported as azcore.SASCredential. +type SASCredential struct { + cred *keyCredential +} + +// NewSASCredential creates a new instance of [SASCredential] with the specified values. +// - sas is the shared access signature +func NewSASCredential(sas string) *SASCredential { + return &SASCredential{cred: newKeyCredential(sas)} +} + +// Update replaces the existing shared access signature with the specified value. +func (k *SASCredential) Update(sas string) { + k.cred.Update(sas) +} + +// KeyCredentialGet returns the key for cred. +func KeyCredentialGet(cred *KeyCredential) string { + return cred.cred.Get() +} + +// SASCredentialGet returns the shared access sig for cred. 
+func SASCredentialGet(cred *SASCredential) string {
+	return cred.cred.Get()
+}
+
+type keyCredential struct {
+	key atomic.Value // string
+}
+
+func newKeyCredential(key string) *keyCredential {
+	keyCred := keyCredential{}
+	keyCred.key.Store(key)
+	return &keyCred
+}
+
+func (k *keyCredential) Get() string {
+	return k.key.Load().(string)
+}
+
+func (k *keyCredential) Update(key string) {
+	k.key.Store(key)
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported/pipeline.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported/pipeline.go
new file mode 100644
index 000000000000..e45f831ed2a4
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported/pipeline.go
@@ -0,0 +1,77 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package exported
+
+import (
+	"errors"
+	"net/http"
+)
+
+// Policy represents an extensibility point for the Pipeline that can mutate the specified
+// Request and react to the received Response.
+// Exported as policy.Policy.
+type Policy interface {
+	// Do applies the policy to the specified Request. When implementing a Policy, mutate the
+	// request before calling req.Next() to move on to the next policy, and respond to the result
+	// before returning to the caller.
+	Do(req *Request) (*http.Response, error)
+}
+
+// Pipeline represents a primitive for sending HTTP requests and receiving responses.
+// Its behavior can be extended by specifying policies during construction.
+// Exported as runtime.Pipeline.
+type Pipeline struct {
+	policies []Policy
+}
+
+// Transporter represents an HTTP pipeline transport used to send HTTP requests and receive responses.
+// Exported as policy.Transporter.
+type Transporter interface {
+	// Do sends the HTTP request and returns the HTTP response or error.
+	Do(req *http.Request) (*http.Response, error)
+}
+
+// used to adapt a Transporter to a Policy
+type transportPolicy struct {
+	trans Transporter
+}
+
+func (tp transportPolicy) Do(req *Request) (*http.Response, error) {
+	if tp.trans == nil {
+		return nil, errors.New("missing transporter")
+	}
+	resp, err := tp.trans.Do(req.Raw())
+	if err != nil {
+		return nil, err
+	} else if resp == nil {
+		// there was no response and no error (rare but can happen)
+		// this ensures the retry policy will retry the request
+		return nil, errors.New("received nil response")
+	}
+	return resp, nil
+}
+
+// NewPipeline creates a new Pipeline object from the specified Policies.
+// Not directly exported, but used as part of runtime.NewPipeline().
+func NewPipeline(transport Transporter, policies ...Policy) Pipeline {
+	// transport policy must always be the last in the slice
+	policies = append(policies, transportPolicy{trans: transport})
+	return Pipeline{
+		policies: policies,
+	}
+}
+
+// Do is called for each and every HTTP request. It passes the request through all
+// the Policy objects (which can transform the Request's URL/query parameters/headers)
+// and ultimately sends the transformed HTTP request over the network.
+func (p Pipeline) Do(req *Request) (*http.Response, error) {
+	if req == nil {
+		return nil, errors.New("request cannot be nil")
+	}
+	req.policies = p.policies
+	return req.Next()
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported/request.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported/request.go
new file mode 100644
index 000000000000..8d1ae213c958
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported/request.go
@@ -0,0 +1,223 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package exported
+
+import (
+	"context"
+	"encoding/base64"
+	"errors"
+	"fmt"
+	"io"
+	"net/http"
+	"reflect"
+	"strconv"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+)
+
+// Base64Encoding is used to specify which base-64 encoder/decoder to use when
+// encoding/decoding a slice of bytes to/from a string.
+// Exported as runtime.Base64Encoding
+type Base64Encoding int
+
+const (
+	// Base64StdFormat uses base64.StdEncoding for encoding and decoding payloads.
+	Base64StdFormat Base64Encoding = 0
+
+	// Base64URLFormat uses base64.RawURLEncoding for encoding and decoding payloads.
+	Base64URLFormat Base64Encoding = 1
+)
+
+// EncodeByteArray will base-64 encode the byte slice v.
+// Exported as runtime.EncodeByteArray()
+func EncodeByteArray(v []byte, format Base64Encoding) string {
+	if format == Base64URLFormat {
+		return base64.RawURLEncoding.EncodeToString(v)
+	}
+	return base64.StdEncoding.EncodeToString(v)
+}
+
+// Request is an abstraction over the creation of an HTTP request as it passes through the pipeline.
+// Don't use this type directly, use NewRequest() instead.
+// Exported as policy.Request.
+type Request struct {
+	req      *http.Request
+	body     io.ReadSeekCloser
+	policies []Policy
+	values   opValues
+}
+
+type opValues map[reflect.Type]interface{}
+
+// Set adds/changes a value
+func (ov opValues) set(value interface{}) {
+	ov[reflect.TypeOf(value)] = value
+}
+
+// Get looks for a value set by SetValue first
+func (ov opValues) get(value interface{}) bool {
+	v, ok := ov[reflect.ValueOf(value).Elem().Type()]
+	if ok {
+		reflect.ValueOf(value).Elem().Set(reflect.ValueOf(v))
+	}
+	return ok
+}
+
+// NewRequest creates a new Request with the specified input.
+// Exported as runtime.NewRequest().
+func NewRequest(ctx context.Context, httpMethod string, endpoint string) (*Request, error) {
+	req, err := http.NewRequestWithContext(ctx, httpMethod, endpoint, nil)
+	if err != nil {
+		return nil, err
+	}
+	if req.URL.Host == "" {
+		return nil, errors.New("no Host in request URL")
+	}
+	if !(req.URL.Scheme == "http" || req.URL.Scheme == "https") {
+		return nil, fmt.Errorf("unsupported protocol scheme %s", req.URL.Scheme)
+	}
+	return &Request{req: req}, nil
+}
+
+// Body returns the original body specified when the Request was created.
+func (req *Request) Body() io.ReadSeekCloser {
+	return req.body
+}
+
+// Raw returns the underlying HTTP request.
+func (req *Request) Raw() *http.Request {
+	return req.req
+}
+
+// Next calls the next policy in the pipeline.
+// If there are no more policies, nil and an error are returned.
+// This method is intended to be called from pipeline policies.
+// To send a request through a pipeline call Pipeline.Do().
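+// Each policy receives a shallow copy of the request whose remaining policy
+// list has been advanced past the current policy; the final policy in the
+// chain is the transport, which performs the actual HTTP call.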
+func (req *Request) Next() (*http.Response, error) {
+	if len(req.policies) == 0 {
+		return nil, errors.New("no more policies")
+	}
+	nextPolicy := req.policies[0]
+	nextReq := *req
+	nextReq.policies = nextReq.policies[1:]
+	return nextPolicy.Do(&nextReq)
+}
+
+// SetOperationValue adds/changes a mutable key/value associated with a single operation.
+func (req *Request) SetOperationValue(value interface{}) {
+	if req.values == nil {
+		req.values = opValues{}
+	}
+	req.values.set(value)
+}
+
+// OperationValue looks for a value set by SetOperationValue().
+func (req *Request) OperationValue(value interface{}) bool {
+	if req.values == nil {
+		return false
+	}
+	return req.values.get(value)
+}
+
+// SetBody sets the specified ReadSeekCloser as the HTTP request body, and sets Content-Type and Content-Length
+// accordingly. If the ReadSeekCloser is nil or empty, Content-Length won't be set. If contentType is "",
+// Content-Type won't be set, and if it was set, will be deleted.
+// Use streaming.NopCloser to turn an io.ReadSeeker into an io.ReadSeekCloser.
+func (req *Request) SetBody(body io.ReadSeekCloser, contentType string) error {
+	// clobber the existing Content-Type to preserve behavior
+	return SetBody(req, body, contentType, true)
+}
+
+// RewindBody seeks the request's Body stream back to the beginning so it can be resent when retrying an operation.
+func (req *Request) RewindBody() error {
+	if req.body != nil {
+		// Reset the stream back to the beginning and restore the body
+		_, err := req.body.Seek(0, io.SeekStart)
+		req.req.Body = req.body
+		return err
+	}
+	return nil
+}
+
+// Close closes the request body.
+func (req *Request) Close() error {
+	if req.body == nil {
+		return nil
+	}
+	return req.body.Close()
+}
+
+// Clone returns a deep copy of the request with its context changed to ctx.
+func (req *Request) Clone(ctx context.Context) *Request {
+	r2 := *req
+	r2.req = req.req.Clone(ctx)
+	return &r2
+}
+
+// WithContext returns a shallow copy of the request with its context changed to ctx.
+func (req *Request) WithContext(ctx context.Context) *Request {
+	r2 := new(Request)
+	*r2 = *req
+	r2.req = r2.req.WithContext(ctx)
+	return r2
+}
+
+// not exported but dependent on Request
+
+// PolicyFunc is a type that implements the Policy interface.
+// Use this type when implementing a stateless policy as a first-class function.
+type PolicyFunc func(*Request) (*http.Response, error)
+
+// Do implements the Policy interface on PolicyFunc.
+func (pf PolicyFunc) Do(req *Request) (*http.Response, error) {
+	return pf(req)
+}
+
+// SetBody sets the specified ReadSeekCloser as the HTTP request body, and sets Content-Type and Content-Length accordingly.
+// - req is the request to modify +// - body is the request body; if nil or empty, Content-Length won't be set +// - contentType is the value for the Content-Type header; if empty, Content-Type will be deleted +// - clobberContentType when true, will overwrite the existing value of Content-Type with contentType +func SetBody(req *Request, body io.ReadSeekCloser, contentType string, clobberContentType bool) error { + var err error + var size int64 + if body != nil { + size, err = body.Seek(0, io.SeekEnd) // Seek to the end to get the stream's size + if err != nil { + return err + } + } + if size == 0 { + // treat an empty stream the same as a nil one: assign req a nil body + body = nil + // RFC 9110 specifies a client shouldn't set Content-Length on a request containing no content + // (Del is a no-op when the header has no value) + req.req.Header.Del(shared.HeaderContentLength) + } else { + _, err = body.Seek(0, io.SeekStart) + if err != nil { + return err + } + req.req.Header.Set(shared.HeaderContentLength, strconv.FormatInt(size, 10)) + req.Raw().GetBody = func() (io.ReadCloser, error) { + _, err := body.Seek(0, io.SeekStart) // Seek back to the beginning of the stream + return body, err + } + } + // keep a copy of the body argument. this is to handle cases + // where req.Body is replaced, e.g. httputil.DumpRequest and friends. + req.body = body + req.req.Body = body + req.req.ContentLength = size + if contentType == "" { + // Del is a no-op when the header has no value + req.req.Header.Del(shared.HeaderContentType) + } else if req.req.Header.Get(shared.HeaderContentType) == "" || clobberContentType { + req.req.Header.Set(shared.HeaderContentType, contentType) + } + return nil +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported/response_error.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported/response_error.go new file mode 100644 index 000000000000..f243552885d1 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported/response_error.go @@ -0,0 +1,157 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package exported + +import ( + "bytes" + "encoding/json" + "fmt" + "net/http" + "regexp" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared" + "github.com/Azure/azure-sdk-for-go/sdk/internal/exported" +) + +// NewResponseError creates a new *ResponseError from the provided HTTP response. +// Exported as runtime.NewResponseError(). +func NewResponseError(resp *http.Response) error { + respErr := &ResponseError{ + StatusCode: resp.StatusCode, + RawResponse: resp, + } + + // prefer the error code in the response header + if ec := resp.Header.Get(shared.HeaderXMSErrorCode); ec != "" { + respErr.ErrorCode = ec + return respErr + } + + // if we didn't get x-ms-error-code, check in the response body + body, err := exported.Payload(resp, nil) + if err != nil { + return err + } + + if len(body) > 0 { + if code := extractErrorCodeJSON(body); code != "" { + respErr.ErrorCode = code + } else if code := extractErrorCodeXML(body); code != "" { + respErr.ErrorCode = code + } + } + + return respErr +} + +func extractErrorCodeJSON(body []byte) string { + var rawObj map[string]interface{} + if err := json.Unmarshal(body, &rawObj); err != nil { + // not a JSON object + return "" + } + + // check if this is a wrapped error, i.e. { "error": { ... 
} }
+	// if so then unwrap it
+	if wrapped, ok := rawObj["error"]; ok {
+		unwrapped, ok := wrapped.(map[string]interface{})
+		if !ok {
+			return ""
+		}
+		rawObj = unwrapped
+	} else if wrapped, ok := rawObj["odata.error"]; ok {
+		// check if this is a wrapped odata error, i.e. { "odata.error": { ... } }
+		unwrapped, ok := wrapped.(map[string]any)
+		if !ok {
+			return ""
+		}
+		rawObj = unwrapped
+	}
+
+	// now check for the error code
+	code, ok := rawObj["code"]
+	if !ok {
+		return ""
+	}
+	codeStr, ok := code.(string)
+	if !ok {
+		return ""
+	}
+	return codeStr
+}
+
+func extractErrorCodeXML(body []byte) string {
+	// regular expression is much easier than dealing with the XML parser
+	rx := regexp.MustCompile(`<(?:\w+:)?[cC]ode>\s*(\w+)\s*<\/(?:\w+:)?[cC]ode>`)
+	res := rx.FindStringSubmatch(string(body))
+	if len(res) != 2 {
+		return ""
+	}
+	// first submatch is the entire thing, second one is the captured error code
+	return res[1]
+}
+
+// ResponseError is returned when a request is made to a service and
+// the service returns a non-success HTTP status code.
+// Use errors.As() to access this type in the error chain.
+// Exported as azcore.ResponseError.
+type ResponseError struct {
+	// ErrorCode is the error code returned by the resource provider if available.
+	ErrorCode string
+
+	// StatusCode is the HTTP status code as defined in https://pkg.go.dev/net/http#pkg-constants.
+	StatusCode int
+
+	// RawResponse is the underlying HTTP response.
+	RawResponse *http.Response
+}
+
+// Error implements the error interface for type ResponseError.
+// Note that the message contents are not contractual and can change over time.
+func (e *ResponseError) Error() string {
+	const separator = "--------------------------------------------------------------------------------"
+	// write the request method and URL with response status code
+	msg := &bytes.Buffer{}
+	if e.RawResponse != nil {
+		if e.RawResponse.Request != nil {
+			fmt.Fprintf(msg, "%s %s://%s%s\n", e.RawResponse.Request.Method, e.RawResponse.Request.URL.Scheme, e.RawResponse.Request.URL.Host, e.RawResponse.Request.URL.Path)
+		} else {
+			fmt.Fprintln(msg, "Request information not available")
+		}
+		fmt.Fprintln(msg, separator)
+		fmt.Fprintf(msg, "RESPONSE %d: %s\n", e.RawResponse.StatusCode, e.RawResponse.Status)
+	} else {
+		fmt.Fprintln(msg, "Missing RawResponse")
+		fmt.Fprintln(msg, separator)
+	}
+	if e.ErrorCode != "" {
+		fmt.Fprintf(msg, "ERROR CODE: %s\n", e.ErrorCode)
+	} else {
+		fmt.Fprintln(msg, "ERROR CODE UNAVAILABLE")
+	}
+	if e.RawResponse != nil {
+		fmt.Fprintln(msg, separator)
+		body, err := exported.Payload(e.RawResponse, nil)
+		if err != nil {
+			// this really shouldn't fail at this point as the response
+			// body is already cached (it was read in NewResponseError)
+			fmt.Fprintf(msg, "Error reading response body: %v", err)
+		} else if len(body) > 0 {
+			if err := json.Indent(msg, body, "", "  "); err != nil {
+				// failed to pretty-print so just dump it verbatim
+				fmt.Fprint(msg, string(body))
+			}
+			// the standard library doesn't have a pretty-printer for XML
+			fmt.Fprintln(msg)
+		} else {
+			fmt.Fprintln(msg, "Response contained no body")
+		}
+	}
+	fmt.Fprintln(msg, separator)
+
+	return msg.String()
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log/log.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log/log.go
new file mode 100644
index 000000000000..0684cb317390
--- /dev/null
+++ 
b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log/log.go @@ -0,0 +1,38 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +// This is an internal helper package to combine the complete logging APIs. +package log + +import ( + azlog "github.com/Azure/azure-sdk-for-go/sdk/azcore/log" + "github.com/Azure/azure-sdk-for-go/sdk/internal/log" +) + +type Event = log.Event + +const ( + EventRequest = azlog.EventRequest + EventResponse = azlog.EventResponse + EventRetryPolicy = azlog.EventRetryPolicy + EventLRO = azlog.EventLRO +) + +func Write(cls log.Event, msg string) { + log.Write(cls, msg) +} + +func Writef(cls log.Event, format string, a ...interface{}) { + log.Writef(cls, format, a...) +} + +func SetListener(lst func(Event, string)) { + log.SetListener(lst) +} + +func Should(cls log.Event) bool { + return log.Should(cls) +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/async/async.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/async/async.go new file mode 100644 index 000000000000..b05bd8b38d2b --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/async/async.go @@ -0,0 +1,159 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package async + +import ( + "context" + "errors" + "fmt" + "net/http" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared" + "github.com/Azure/azure-sdk-for-go/sdk/internal/poller" +) + +// see https://github.com/Azure/azure-resource-manager-rpc/blob/master/v1.0/async-api-reference.md + +// Applicable returns true if the LRO is using Azure-AsyncOperation. +func Applicable(resp *http.Response) bool { + return resp.Header.Get(shared.HeaderAzureAsync) != "" +} + +// CanResume returns true if the token can rehydrate this poller type. +func CanResume(token map[string]interface{}) bool { + _, ok := token["asyncURL"] + return ok +} + +// Poller is an LRO poller that uses the Azure-AsyncOperation pattern. +type Poller[T any] struct { + pl exported.Pipeline + + resp *http.Response + + // The URL from Azure-AsyncOperation header. + AsyncURL string `json:"asyncURL"` + + // The URL from Location header. + LocURL string `json:"locURL"` + + // The URL from the initial LRO request. + OrigURL string `json:"origURL"` + + // The HTTP method from the initial LRO request. + Method string `json:"method"` + + // The value of final-state-via from swagger, can be the empty string. + FinalState pollers.FinalStateVia `json:"finalState"` + + // The LRO's current state. + CurState string `json:"state"` +} + +// New creates a new Poller from the provided initial response and final-state type. +// Pass nil for response to create an empty Poller for rehydration. 
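+// The finalState value determines which URL, if any, the final GET is issued
+// against when a POST-initiated LRO completes (see Result).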
+func New[T any](pl exported.Pipeline, resp *http.Response, finalState pollers.FinalStateVia) (*Poller[T], error) { + if resp == nil { + log.Write(log.EventLRO, "Resuming Azure-AsyncOperation poller.") + return &Poller[T]{pl: pl}, nil + } + log.Write(log.EventLRO, "Using Azure-AsyncOperation poller.") + asyncURL := resp.Header.Get(shared.HeaderAzureAsync) + if asyncURL == "" { + return nil, errors.New("response is missing Azure-AsyncOperation header") + } + if !poller.IsValidURL(asyncURL) { + return nil, fmt.Errorf("invalid polling URL %s", asyncURL) + } + // check for provisioning state. if the operation is a RELO + // and terminates synchronously this will prevent extra polling. + // it's ok if there's no provisioning state. + state, _ := poller.GetProvisioningState(resp) + if state == "" { + state = poller.StatusInProgress + } + p := &Poller[T]{ + pl: pl, + resp: resp, + AsyncURL: asyncURL, + LocURL: resp.Header.Get(shared.HeaderLocation), + OrigURL: resp.Request.URL.String(), + Method: resp.Request.Method, + FinalState: finalState, + CurState: state, + } + return p, nil +} + +// Done returns true if the LRO is in a terminal state. +func (p *Poller[T]) Done() bool { + return poller.IsTerminalState(p.CurState) +} + +// Poll retrieves the current state of the LRO. +func (p *Poller[T]) Poll(ctx context.Context) (*http.Response, error) { + err := pollers.PollHelper(ctx, p.AsyncURL, p.pl, func(resp *http.Response) (string, error) { + if !poller.StatusCodeValid(resp) { + p.resp = resp + return "", exported.NewResponseError(resp) + } + state, err := poller.GetStatus(resp) + if err != nil { + return "", err + } else if state == "" { + return "", errors.New("the response did not contain a status") + } + p.resp = resp + p.CurState = state + return p.CurState, nil + }) + if err != nil { + return nil, err + } + return p.resp, nil +} + +func (p *Poller[T]) Result(ctx context.Context, out *T) error { + if p.resp.StatusCode == http.StatusNoContent { + return nil + } else if poller.Failed(p.CurState) { + return exported.NewResponseError(p.resp) + } + var req *exported.Request + var err error + if p.Method == http.MethodPatch || p.Method == http.MethodPut { + // for PATCH and PUT, the final GET is on the original resource URL + req, err = exported.NewRequest(ctx, http.MethodGet, p.OrigURL) + } else if p.Method == http.MethodPost { + if p.FinalState == pollers.FinalStateViaAzureAsyncOp { + // no final GET required + } else if p.FinalState == pollers.FinalStateViaOriginalURI { + req, err = exported.NewRequest(ctx, http.MethodGet, p.OrigURL) + } else if p.LocURL != "" { + // ideally FinalState would be set to "location" but it isn't always. + // must check last due to more permissive condition. 
+			req, err = exported.NewRequest(ctx, http.MethodGet, p.LocURL)
+		}
+	}
+	if err != nil {
+		return err
+	}
+
+	// if a final GET request has been created, execute it
+	if req != nil {
+		resp, err := p.pl.Do(req)
+		if err != nil {
+			return err
+		}
+		p.resp = resp
+	}
+
+	return pollers.ResultHelper(p.resp, poller.Failed(p.CurState), out)
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/body/body.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/body/body.go
new file mode 100644
index 000000000000..2bb9e105b666
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/body/body.go
@@ -0,0 +1,135 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package body
+
+import (
+	"context"
+	"errors"
+	"net/http"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/poller"
+)
+
+// kind is the identifier of this type in a resume token.
+const kind = "body"
+
+// Applicable returns true if the LRO is using no headers, just provisioning state.
+// This is only applicable to PATCH and PUT methods and assumes no polling headers.
+func Applicable(resp *http.Response) bool {
+	// we can't check for absence of headers due to some misbehaving services
+	// like redis that return a Location header but don't actually use that protocol
+	return resp.Request.Method == http.MethodPatch || resp.Request.Method == http.MethodPut
+}
+
+// CanResume returns true if the token can rehydrate this poller type.
+func CanResume(token map[string]interface{}) bool {
+	t, ok := token["type"]
+	if !ok {
+		return false
+	}
+	tt, ok := t.(string)
+	if !ok {
+		return false
+	}
+	return tt == kind
+}
+
+// Poller is an LRO poller that uses the Body pattern.
+type Poller[T any] struct {
+	pl exported.Pipeline
+
+	resp *http.Response
+
+	// The poller's type, used for resume token processing.
+	Type string `json:"type"`
+
+	// The URL for polling.
+	PollURL string `json:"pollURL"`
+
+	// The LRO's current state.
+	CurState string `json:"state"`
+}
+
+// New creates a new Poller from the provided initial response.
+// Pass nil for response to create an empty Poller for rehydration.
+func New[T any](pl exported.Pipeline, resp *http.Response) (*Poller[T], error) {
+	if resp == nil {
+		log.Write(log.EventLRO, "Resuming Body poller.")
+		return &Poller[T]{pl: pl}, nil
+	}
+	log.Write(log.EventLRO, "Using Body poller.")
+	p := &Poller[T]{
+		pl:      pl,
+		resp:    resp,
+		Type:    kind,
+		PollURL: resp.Request.URL.String(),
+	}
+	// default initial state to InProgress. depending on the HTTP
+	// status code and provisioning state, we might change the value.
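+	// in summary: a 201 adopts the provisioning state when present, a 200
+	// adopts it when present and otherwise means success, and a 204 always
+	// means success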
+	curState := poller.StatusInProgress
+	provState, err := poller.GetProvisioningState(resp)
+	if err != nil && !errors.Is(err, poller.ErrNoBody) {
+		return nil, err
+	}
+	if resp.StatusCode == http.StatusCreated && provState != "" {
+		// for a 201, adopt the provisioning state when present; its absence
+		// simply means the operation is still in progress
+		curState = provState
+	} else if resp.StatusCode == http.StatusOK {
+		if provState != "" {
+			curState = provState
+		} else if provState == "" {
+			// for a 200, absence of provisioning state indicates success
+			curState = poller.StatusSucceeded
+		}
+	} else if resp.StatusCode == http.StatusNoContent {
+		curState = poller.StatusSucceeded
+	}
+	p.CurState = curState
+	return p, nil
+}
+
+func (p *Poller[T]) Done() bool {
+	return poller.IsTerminalState(p.CurState)
+}
+
+func (p *Poller[T]) Poll(ctx context.Context) (*http.Response, error) {
+	err := pollers.PollHelper(ctx, p.PollURL, p.pl, func(resp *http.Response) (string, error) {
+		if !poller.StatusCodeValid(resp) {
+			p.resp = resp
+			return "", exported.NewResponseError(resp)
+		}
+		if resp.StatusCode == http.StatusNoContent {
+			p.resp = resp
+			p.CurState = poller.StatusSucceeded
+			return p.CurState, nil
+		}
+		state, err := poller.GetProvisioningState(resp)
+		if errors.Is(err, poller.ErrNoBody) {
+			// a missing response body in non-204 case is an error
			return "", err
+		} else if state == "" {
+			// a response body without provisioning state is considered terminal success
+			state = poller.StatusSucceeded
+		} else if err != nil {
+			return "", err
+		}
+		p.resp = resp
+		p.CurState = state
+		return p.CurState, nil
+	})
+	if err != nil {
+		return nil, err
+	}
+	return p.resp, nil
+}
+
+func (p *Poller[T]) Result(ctx context.Context, out *T) error {
+	return pollers.ResultHelper(p.resp, poller.Failed(p.CurState), out)
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/fake/fake.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/fake/fake.go
new file mode 100644
index 000000000000..25983471867b
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/fake/fake.go
@@ -0,0 +1,133 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package fake
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"net/http"
+	"strings"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/poller"
+)
+
+// Applicable returns true if the LRO is a fake.
+func Applicable(resp *http.Response) bool {
+	return resp.Header.Get(shared.HeaderFakePollerStatus) != ""
+}
+
+// CanResume returns true if the token can rehydrate this poller type.
+func CanResume(token map[string]interface{}) bool {
+	_, ok := token["fakeURL"]
+	return ok
+}
+
+// Poller is an LRO poller that uses the Core-Fake-Poller pattern.
+type Poller[T any] struct {
+	pl exported.Pipeline
+
+	resp *http.Response
+
+	// The API name from CtxAPINameKey
+	APIName string `json:"apiName"`
+
+	// The URL from Core-Fake-Poller header.
+	FakeURL string `json:"fakeURL"`
+
+	// The LRO's current state.
+	FakeStatus string `json:"status"`
+}
+
+// lroStatusURLSuffix is the URL path suffix for a faked LRO.
+const lroStatusURLSuffix = "/get/fake/status" + +// New creates a new Poller from the provided initial response. +// Pass nil for response to create an empty Poller for rehydration. +func New[T any](pl exported.Pipeline, resp *http.Response) (*Poller[T], error) { + if resp == nil { + log.Write(log.EventLRO, "Resuming Core-Fake-Poller poller.") + return &Poller[T]{pl: pl}, nil + } + + log.Write(log.EventLRO, "Using Core-Fake-Poller poller.") + fakeStatus := resp.Header.Get(shared.HeaderFakePollerStatus) + if fakeStatus == "" { + return nil, errors.New("response is missing Fake-Poller-Status header") + } + + ctxVal := resp.Request.Context().Value(shared.CtxAPINameKey{}) + if ctxVal == nil { + return nil, errors.New("missing value for CtxAPINameKey") + } + + apiName, ok := ctxVal.(string) + if !ok { + return nil, fmt.Errorf("expected string for CtxAPINameKey, the type was %T", ctxVal) + } + + qp := "" + if resp.Request.URL.RawQuery != "" { + qp = "?" + resp.Request.URL.RawQuery + } + + p := &Poller[T]{ + pl: pl, + resp: resp, + APIName: apiName, + // NOTE: any changes to this path format MUST be reflected in SanitizePollerPath() + FakeURL: fmt.Sprintf("%s://%s%s%s%s", resp.Request.URL.Scheme, resp.Request.URL.Host, resp.Request.URL.Path, lroStatusURLSuffix, qp), + FakeStatus: fakeStatus, + } + return p, nil +} + +// Done returns true if the LRO is in a terminal state. +func (p *Poller[T]) Done() bool { + return poller.IsTerminalState(p.FakeStatus) +} + +// Poll retrieves the current state of the LRO. +func (p *Poller[T]) Poll(ctx context.Context) (*http.Response, error) { + ctx = context.WithValue(ctx, shared.CtxAPINameKey{}, p.APIName) + err := pollers.PollHelper(ctx, p.FakeURL, p.pl, func(resp *http.Response) (string, error) { + if !poller.StatusCodeValid(resp) { + p.resp = resp + return "", exported.NewResponseError(resp) + } + fakeStatus := resp.Header.Get(shared.HeaderFakePollerStatus) + if fakeStatus == "" { + return "", errors.New("response is missing Fake-Poller-Status header") + } + p.resp = resp + p.FakeStatus = fakeStatus + return p.FakeStatus, nil + }) + if err != nil { + return nil, err + } + return p.resp, nil +} + +func (p *Poller[T]) Result(ctx context.Context, out *T) error { + if p.resp.StatusCode == http.StatusNoContent { + return nil + } else if poller.Failed(p.FakeStatus) { + return exported.NewResponseError(p.resp) + } + + return pollers.ResultHelper(p.resp, poller.Failed(p.FakeStatus), out) +} + +// SanitizePollerPath removes any fake-appended suffix from a URL's path. +func SanitizePollerPath(path string) string { + return strings.TrimSuffix(path, lroStatusURLSuffix) +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/loc/loc.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/loc/loc.go new file mode 100644 index 000000000000..d6be89876aba --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/loc/loc.go @@ -0,0 +1,119 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. 
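+
+// The poller in this file implements the Location-header polling pattern: it
+// issues GETs against the URL from the Location response header until the
+// operation reaches a terminal state.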
+
+package loc
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"net/http"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/poller"
+)
+
+// kind is the identifier of this type in a resume token.
+const kind = "loc"
+
+// Applicable returns true if the LRO is using Location.
+func Applicable(resp *http.Response) bool {
+	return resp.Header.Get(shared.HeaderLocation) != ""
+}
+
+// CanResume returns true if the token can rehydrate this poller type.
+func CanResume(token map[string]interface{}) bool {
+	t, ok := token["type"]
+	if !ok {
+		return false
+	}
+	tt, ok := t.(string)
+	if !ok {
+		return false
+	}
+	return tt == kind
+}
+
+// Poller is an LRO poller that uses the Location pattern.
+type Poller[T any] struct {
+	pl   exported.Pipeline
+	resp *http.Response
+
+	Type     string `json:"type"`
+	PollURL  string `json:"pollURL"`
+	CurState string `json:"state"`
+}
+
+// New creates a new Poller from the provided initial response.
+// Pass nil for response to create an empty Poller for rehydration.
+func New[T any](pl exported.Pipeline, resp *http.Response) (*Poller[T], error) {
+	if resp == nil {
+		log.Write(log.EventLRO, "Resuming Location poller.")
+		return &Poller[T]{pl: pl}, nil
+	}
+	log.Write(log.EventLRO, "Using Location poller.")
+	locURL := resp.Header.Get(shared.HeaderLocation)
+	if locURL == "" {
+		return nil, errors.New("response is missing Location header")
+	}
+	if !poller.IsValidURL(locURL) {
+		return nil, fmt.Errorf("invalid polling URL %s", locURL)
+	}
+	// check for provisioning state. if the operation is a RELO
+	// and terminates synchronously this will prevent extra polling.
+	// it's ok if there's no provisioning state.
+	state, _ := poller.GetProvisioningState(resp)
+	if state == "" {
+		state = poller.StatusInProgress
+	}
+	return &Poller[T]{
+		pl:       pl,
+		resp:     resp,
+		Type:     kind,
+		PollURL:  locURL,
+		CurState: state,
+	}, nil
+}
+
+func (p *Poller[T]) Done() bool {
+	return poller.IsTerminalState(p.CurState)
+}
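The Location pattern this poller implements boils down to: GET the Location URL, treat 202 as in-progress, treat any other 2xx as terminal, and pick up an updated Location header along the way. A minimal self-contained sketch of that loop, with a toy server standing in for ARM (all names are hypothetical):

package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

func main() {
	// a toy endpoint that reports in-progress twice, then success
	polls := 0
	mux := http.NewServeMux()
	mux.HandleFunc("/status", func(w http.ResponseWriter, r *http.Request) {
		polls++
		if polls < 3 {
			w.WriteHeader(http.StatusAccepted) // 202: keep polling
			return
		}
		w.WriteHeader(http.StatusOK) // any other 2xx: done
	})
	srv := httptest.NewServer(mux)
	defer srv.Close()

	pollURL := srv.URL + "/status"
	for {
		resp, err := http.Get(pollURL)
		if err != nil {
			panic(err)
		}
		resp.Body.Close()
		// location polling can return an updated polling URL
		if loc := resp.Header.Get("Location"); loc != "" {
			pollURL = loc
		}
		if resp.StatusCode != http.StatusAccepted {
			fmt.Println("terminal status:", resp.StatusCode)
			break
		}
	}
}

+
+func (p *Poller[T]) Poll(ctx context.Context) (*http.Response, error) {
+	err := pollers.PollHelper(ctx, p.PollURL, p.pl, func(resp *http.Response) (string, error) {
+		// location polling can return an updated polling URL
+		if h := resp.Header.Get(shared.HeaderLocation); h != "" {
+			p.PollURL = h
+		}
+		// if provisioning state is available, use that. this is only
+		// for some ARM LRO scenarios (e.g. DELETE with a Location header)
+		// so if it's missing then use HTTP status code.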
+ provState, _ := poller.GetProvisioningState(resp) + p.resp = resp + if provState != "" { + p.CurState = provState + } else if resp.StatusCode == http.StatusAccepted { + p.CurState = poller.StatusInProgress + } else if resp.StatusCode > 199 && resp.StatusCode < 300 { + // any 2xx other than a 202 indicates success + p.CurState = poller.StatusSucceeded + } else { + p.CurState = poller.StatusFailed + } + return p.CurState, nil + }) + if err != nil { + return nil, err + } + return p.resp, nil +} + +func (p *Poller[T]) Result(ctx context.Context, out *T) error { + return pollers.ResultHelper(p.resp, poller.Failed(p.CurState), out) +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/op/op.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/op/op.go new file mode 100644 index 000000000000..1bc7ad0acedb --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/op/op.go @@ -0,0 +1,145 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package op + +import ( + "context" + "errors" + "fmt" + "net/http" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared" + "github.com/Azure/azure-sdk-for-go/sdk/internal/poller" +) + +// Applicable returns true if the LRO is using Operation-Location. +func Applicable(resp *http.Response) bool { + return resp.Header.Get(shared.HeaderOperationLocation) != "" +} + +// CanResume returns true if the token can rehydrate this poller type. +func CanResume(token map[string]interface{}) bool { + _, ok := token["oplocURL"] + return ok +} + +// Poller is an LRO poller that uses the Operation-Location pattern. +type Poller[T any] struct { + pl exported.Pipeline + resp *http.Response + + OpLocURL string `json:"oplocURL"` + LocURL string `json:"locURL"` + OrigURL string `json:"origURL"` + Method string `json:"method"` + FinalState pollers.FinalStateVia `json:"finalState"` + CurState string `json:"state"` +} + +// New creates a new Poller from the provided initial response. +// Pass nil for response to create an empty Poller for rehydration. +func New[T any](pl exported.Pipeline, resp *http.Response, finalState pollers.FinalStateVia) (*Poller[T], error) { + if resp == nil { + log.Write(log.EventLRO, "Resuming Operation-Location poller.") + return &Poller[T]{pl: pl}, nil + } + log.Write(log.EventLRO, "Using Operation-Location poller.") + opURL := resp.Header.Get(shared.HeaderOperationLocation) + if opURL == "" { + return nil, errors.New("response is missing Operation-Location header") + } + if !poller.IsValidURL(opURL) { + return nil, fmt.Errorf("invalid Operation-Location URL %s", opURL) + } + locURL := resp.Header.Get(shared.HeaderLocation) + // Location header is optional + if locURL != "" && !poller.IsValidURL(locURL) { + return nil, fmt.Errorf("invalid Location URL %s", locURL) + } + // default initial state to InProgress. if the + // service sent us a status then use that instead. 
+ curState := poller.StatusInProgress + status, err := poller.GetStatus(resp) + if err != nil && !errors.Is(err, poller.ErrNoBody) { + return nil, err + } + if status != "" { + curState = status + } + + return &Poller[T]{ + pl: pl, + resp: resp, + OpLocURL: opURL, + LocURL: locURL, + OrigURL: resp.Request.URL.String(), + Method: resp.Request.Method, + FinalState: finalState, + CurState: curState, + }, nil +} + +func (p *Poller[T]) Done() bool { + return poller.IsTerminalState(p.CurState) +} + +func (p *Poller[T]) Poll(ctx context.Context) (*http.Response, error) { + err := pollers.PollHelper(ctx, p.OpLocURL, p.pl, func(resp *http.Response) (string, error) { + if !poller.StatusCodeValid(resp) { + p.resp = resp + return "", exported.NewResponseError(resp) + } + state, err := poller.GetStatus(resp) + if err != nil { + return "", err + } else if state == "" { + return "", errors.New("the response did not contain a status") + } + p.resp = resp + p.CurState = state + return p.CurState, nil + }) + if err != nil { + return nil, err + } + return p.resp, nil +} + +func (p *Poller[T]) Result(ctx context.Context, out *T) error { + var req *exported.Request + var err error + if p.FinalState == pollers.FinalStateViaLocation && p.LocURL != "" { + req, err = exported.NewRequest(ctx, http.MethodGet, p.LocURL) + } else if p.FinalState == pollers.FinalStateViaOpLocation && p.Method == http.MethodPost { + // no final GET required, terminal response should have it + } else if rl, rlErr := poller.GetResourceLocation(p.resp); rlErr != nil && !errors.Is(rlErr, poller.ErrNoBody) { + return rlErr + } else if rl != "" { + req, err = exported.NewRequest(ctx, http.MethodGet, rl) + } else if p.Method == http.MethodPatch || p.Method == http.MethodPut { + req, err = exported.NewRequest(ctx, http.MethodGet, p.OrigURL) + } else if p.Method == http.MethodPost && p.LocURL != "" { + req, err = exported.NewRequest(ctx, http.MethodGet, p.LocURL) + } + if err != nil { + return err + } + + // if a final GET request has been created, execute it + if req != nil { + resp, err := p.pl.Do(req) + if err != nil { + return err + } + p.resp = resp + } + + return pollers.ResultHelper(p.resp, poller.Failed(p.CurState), out) +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/poller.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/poller.go new file mode 100644 index 000000000000..37ed647f4e0d --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/poller.go @@ -0,0 +1,24 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package pollers + +// FinalStateVia is the enumerated type for the possible final-state-via values. +type FinalStateVia string + +const ( + // FinalStateViaAzureAsyncOp indicates the final payload comes from the Azure-AsyncOperation URL. + FinalStateViaAzureAsyncOp FinalStateVia = "azure-async-operation" + + // FinalStateViaLocation indicates the final payload comes from the Location URL. + FinalStateViaLocation FinalStateVia = "location" + + // FinalStateViaOriginalURI indicates the final payload comes from the original URL. + FinalStateViaOriginalURI FinalStateVia = "original-uri" + + // FinalStateViaOpLocation indicates the final payload comes from the Operation-Location URL. 
+	FinalStateViaOpLocation FinalStateVia = "operation-location"
+)
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/util.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/util.go
new file mode 100644
index 000000000000..d8d86a46c2de
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/util.go
@@ -0,0 +1,187 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package pollers
+
+import (
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"net/http"
+	"reflect"
+
+	azexported "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/poller"
+)
+
+// getTokenTypeName creates a type name from the type parameter T.
+func getTokenTypeName[T any]() (string, error) {
+	tt := shared.TypeOfT[T]()
+	var n string
+	if tt.Kind() == reflect.Pointer {
+		n = "*"
+		tt = tt.Elem()
+	}
+	n += tt.Name()
+	if n == "" {
+		return "", errors.New("nameless types are not allowed")
+	}
+	return n, nil
+}
+
+type resumeTokenWrapper[T any] struct {
+	Type  string `json:"type"`
+	Token T      `json:"token"`
+}
+
+// NewResumeToken creates a resume token from the specified type.
+// An error is returned if the generic type has no name (e.g. struct{}).
+func NewResumeToken[TResult, TSource any](from TSource) (string, error) {
+	n, err := getTokenTypeName[TResult]()
+	if err != nil {
+		return "", err
+	}
+	b, err := json.Marshal(resumeTokenWrapper[TSource]{
+		Type:  n,
+		Token: from,
+	})
+	if err != nil {
+		return "", err
+	}
+	return string(b), nil
+}
+
+// ExtractToken returns the poller-specific token information from the provided token value.
+func ExtractToken(token string) ([]byte, error) {
+	raw := map[string]json.RawMessage{}
+	if err := json.Unmarshal([]byte(token), &raw); err != nil {
+		return nil, err
+	}
+	// this is dependent on the type resumeTokenWrapper[T]
+	tk, ok := raw["token"]
+	if !ok {
+		return nil, errors.New("missing token value")
+	}
+	return tk, nil
+}
+
+// IsTokenValid returns an error if the specified token isn't applicable for generic type T.
+func IsTokenValid[T any](token string) error {
+	raw := map[string]interface{}{}
+	if err := json.Unmarshal([]byte(token), &raw); err != nil {
+		return err
+	}
+	t, ok := raw["type"]
+	if !ok {
+		return errors.New("missing type value")
+	}
+	tt, ok := t.(string)
+	if !ok {
+		return fmt.Errorf("invalid type format %T", t)
+	}
+	n, err := getTokenTypeName[T]()
+	if err != nil {
+		return err
+	}
+	if tt != n {
+		return fmt.Errorf("cannot resume from this poller token. token is for type %s, not %s", tt, n)
+	}
+	return nil
+}
+
+// NopPoller is used if the operation completed synchronously.
+type NopPoller[T any] struct {
+	resp   *http.Response
+	result T
+}
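The resume token defined above is plain JSON: a type discriminator plus an opaque, poller-specific payload. A small sketch of the round trip that NewResumeToken and ExtractToken perform, using local stand-in types (the real functions additionally validate the type name against T):

package main

import (
	"encoding/json"
	"fmt"
)

// wrapper mirrors resumeTokenWrapper: a type discriminator plus the poller's state.
type wrapper struct {
	Type  string          `json:"type"`
	Token json.RawMessage `json:"token"`
}

func main() {
	// what NewResumeToken produces for a hypothetical Location poller state
	state, _ := json.Marshal(map[string]string{"pollURL": "https://example.com/op/1", "state": "InProgress"})
	token, _ := json.Marshal(wrapper{Type: "Widget", Token: state})
	fmt.Println(string(token))

	// what ExtractToken recovers: the inner, poller-specific payload
	var w wrapper
	if err := json.Unmarshal(token, &w); err != nil {
		panic(err)
	}
	fmt.Println(string(w.Token))
}

+
+// NewNopPoller creates a NopPoller from the provided response.
+// It unmarshals the response body into an instance of T.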
+func NewNopPoller[T any](resp *http.Response) (*NopPoller[T], error) {
+	np := &NopPoller[T]{resp: resp}
+	if resp.StatusCode == http.StatusNoContent {
+		return np, nil
+	}
+	payload, err := exported.Payload(resp, nil)
+	if err != nil {
+		return nil, err
+	}
+	if len(payload) == 0 {
+		return np, nil
+	}
+	if err = json.Unmarshal(payload, &np.result); err != nil {
+		return nil, err
+	}
+	return np, nil
+}
+
+func (*NopPoller[T]) Done() bool {
+	return true
+}
+
+func (p *NopPoller[T]) Poll(context.Context) (*http.Response, error) {
+	return p.resp, nil
+}
+
+func (p *NopPoller[T]) Result(ctx context.Context, out *T) error {
+	*out = p.result
+	return nil
+}
+
+// PollHelper creates and executes the request, calling update() with the response.
+// If the request fails, the update func is not called.
+// The update func returns the state of the operation for logging purposes or an error
+// if it fails to extract the required state from the response.
+func PollHelper(ctx context.Context, endpoint string, pl azexported.Pipeline, update func(resp *http.Response) (string, error)) error {
+	req, err := azexported.NewRequest(ctx, http.MethodGet, endpoint)
+	if err != nil {
+		return err
+	}
+	resp, err := pl.Do(req)
+	if err != nil {
+		return err
+	}
+	state, err := update(resp)
+	if err != nil {
+		return err
+	}
+	log.Writef(log.EventLRO, "State %s", state)
+	return nil
+}
+
+// ResultHelper processes the response as success or failure.
+// In the success case, it unmarshals the payload into either a new instance of T or out.
+// In the failure case, it creates an *azcore.ResponseError from the response.
+func ResultHelper[T any](resp *http.Response, failed bool, out *T) error {
+	// short-circuit the simple success case with no response body to unmarshal
+	if resp.StatusCode == http.StatusNoContent {
+		return nil
+	}
+
+	defer resp.Body.Close()
+	if !poller.StatusCodeValid(resp) || failed {
+		// the LRO failed. unmarshal the error and update state
+		return azexported.NewResponseError(resp)
+	}
+
+	// success case
+	payload, err := exported.Payload(resp, nil)
+	if err != nil {
+		return err
+	}
+	if len(payload) == 0 {
+		return nil
+	}
+
+	if err = json.Unmarshal(payload, out); err != nil {
+		return err
+	}
+	return nil
+}
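PollHelper and ResultHelper exist so that each poller type only contributes state extraction; the drive loop that sits on top of Done/Poll/Result is generic (the public PollUntilDone API works this way). A minimal sketch of that loop, with a hypothetical stand-in poller:

package main

import (
	"context"
	"fmt"
)

// pollable is a hypothetical stand-in for the pollers in this package.
type pollable interface {
	Done() bool
	Poll(ctx context.Context) error
	Result(ctx context.Context, out *string) error
}

// countdown reaches a terminal state after n polls.
type countdown struct{ n int }

func (c *countdown) Done() bool                 { return c.n <= 0 }
func (c *countdown) Poll(context.Context) error { c.n--; return nil }
func (c *countdown) Result(_ context.Context, out *string) error {
	*out = "Succeeded"
	return nil
}

func main() {
	ctx := context.Background()
	var p pollable = &countdown{n: 3}
	// poll until a terminal state, then fetch the result
	for !p.Done() {
		if err := p.Poll(ctx); err != nil {
			panic(err)
		}
	}
	var result string
	if err := p.Result(ctx, &result); err != nil {
		panic(err)
	}
	fmt.Println(result) // Succeeded
}

diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared/constants.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared/constants.go
new file mode 100644
index 000000000000..8f749f48d9bf
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared/constants.go
@@ -0,0 +1,44 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.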
+ +package shared + +const ( + ContentTypeAppJSON = "application/json" + ContentTypeAppXML = "application/xml" + ContentTypeTextPlain = "text/plain" +) + +const ( + HeaderAuthorization = "Authorization" + HeaderAuxiliaryAuthorization = "x-ms-authorization-auxiliary" + HeaderAzureAsync = "Azure-AsyncOperation" + HeaderContentLength = "Content-Length" + HeaderContentType = "Content-Type" + HeaderFakePollerStatus = "Fake-Poller-Status" + HeaderLocation = "Location" + HeaderOperationLocation = "Operation-Location" + HeaderRetryAfter = "Retry-After" + HeaderRetryAfterMS = "Retry-After-Ms" + HeaderUserAgent = "User-Agent" + HeaderWWWAuthenticate = "WWW-Authenticate" + HeaderXMSClientRequestID = "x-ms-client-request-id" + HeaderXMSRequestID = "x-ms-request-id" + HeaderXMSErrorCode = "x-ms-error-code" + HeaderXMSRetryAfterMS = "x-ms-retry-after-ms" +) + +const BearerTokenPrefix = "Bearer " + +const TracingNamespaceAttrName = "az.namespace" + +const ( + // Module is the name of the calling module used in telemetry data. + Module = "azcore" + + // Version is the semantic version (see http://semver.org) of this module. + Version = "v1.9.2" +) diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared/shared.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared/shared.go new file mode 100644 index 000000000000..d3da2c5fdfa3 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared/shared.go @@ -0,0 +1,149 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package shared + +import ( + "context" + "fmt" + "net/http" + "reflect" + "regexp" + "strconv" + "time" +) + +// NOTE: when adding a new context key type, it likely needs to be +// added to the deny-list of key types in ContextWithDeniedValues + +// CtxWithHTTPHeaderKey is used as a context key for adding/retrieving http.Header. +type CtxWithHTTPHeaderKey struct{} + +// CtxWithRetryOptionsKey is used as a context key for adding/retrieving RetryOptions. +type CtxWithRetryOptionsKey struct{} + +// CtxWithCaptureResponse is used as a context key for retrieving the raw response. +type CtxWithCaptureResponse struct{} + +// CtxWithTracingTracer is used as a context key for adding/retrieving tracing.Tracer. +type CtxWithTracingTracer struct{} + +// CtxAPINameKey is used as a context key for adding/retrieving the API name. +type CtxAPINameKey struct{} + +// Delay waits for the duration to elapse or the context to be cancelled. +func Delay(ctx context.Context, delay time.Duration) error { + select { + case <-time.After(delay): + return nil + case <-ctx.Done(): + return ctx.Err() + } +} + +// RetryAfter returns non-zero if the response contains one of the headers with a "retry after" value. +// Headers are checked in the following order: retry-after-ms, x-ms-retry-after-ms, retry-after +func RetryAfter(resp *http.Response) time.Duration { + if resp == nil { + return 0 + } + + type retryData struct { + header string + units time.Duration + + // custom is used when the regular algorithm failed and is optional. + // the returned duration is used verbatim (units is not applied). 
+ custom func(string) time.Duration + } + + nop := func(string) time.Duration { return 0 } + + // the headers are listed in order of preference + retries := []retryData{ + { + header: HeaderRetryAfterMS, + units: time.Millisecond, + custom: nop, + }, + { + header: HeaderXMSRetryAfterMS, + units: time.Millisecond, + custom: nop, + }, + { + header: HeaderRetryAfter, + units: time.Second, + + // retry-after values are expressed in either number of + // seconds or an HTTP-date indicating when to try again + custom: func(ra string) time.Duration { + t, err := time.Parse(time.RFC1123, ra) + if err != nil { + return 0 + } + return time.Until(t) + }, + }, + } + + for _, retry := range retries { + v := resp.Header.Get(retry.header) + if v == "" { + continue + } + if retryAfter, _ := strconv.Atoi(v); retryAfter > 0 { + return time.Duration(retryAfter) * retry.units + } else if d := retry.custom(v); d > 0 { + return d + } + } + + return 0 +} + +// TypeOfT returns the type of the generic type param. +func TypeOfT[T any]() reflect.Type { + // you can't, at present, obtain the type of + // a type parameter, so this is the trick + return reflect.TypeOf((*T)(nil)).Elem() +} + +// TransportFunc is a helper to use a first-class func to satisfy the Transporter interface. +type TransportFunc func(*http.Request) (*http.Response, error) + +// Do implements the Transporter interface for the TransportFunc type. +func (pf TransportFunc) Do(req *http.Request) (*http.Response, error) { + return pf(req) +} + +// ValidateModVer verifies that moduleVersion is a valid semver 2.0 string. +func ValidateModVer(moduleVersion string) error { + modVerRegx := regexp.MustCompile(`^v\d+\.\d+\.\d+(?:-[a-zA-Z0-9_.-]+)?$`) + if !modVerRegx.MatchString(moduleVersion) { + return fmt.Errorf("malformed moduleVersion param value %s", moduleVersion) + } + return nil +} + +// ContextWithDeniedValues wraps an existing [context.Context], denying access to certain context values. +// Pipeline policies that create new requests to be sent down their own pipeline MUST wrap the caller's +// context with an instance of this type. This is to prevent context values from flowing across disjoint +// requests which can have unintended side-effects. +type ContextWithDeniedValues struct { + context.Context +} + +// Value implements part of the [context.Context] interface. +// It acts as a deny-list for certain context keys. +func (c *ContextWithDeniedValues) Value(key any) any { + switch key.(type) { + case CtxAPINameKey, CtxWithCaptureResponse, CtxWithHTTPHeaderKey, CtxWithRetryOptionsKey, CtxWithTracingTracer: + return nil + default: + return c.Context.Value(key) + } +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/log/doc.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/log/doc.go new file mode 100644 index 000000000000..2f3901bff3c4 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/log/doc.go @@ -0,0 +1,10 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright 2017 Microsoft Corporation. All rights reserved. +// Use of this source code is governed by an MIT +// license that can be found in the LICENSE file. + +// Package log contains functionality for configuring logging behavior. +// Default logging to stderr can be enabled by setting environment variable AZURE_SDK_GO_LOGGING to "all". 
+package log
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/log/log.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/log/log.go
new file mode 100644
index 000000000000..7bde29d0a462
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/log/log.go
@@ -0,0 +1,50 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+// Package log provides functionality for configuring logging facilities.
+package log
+
+import (
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/log"
+)
+
+// Event is used to group entries. Each group can be toggled on or off.
+type Event = log.Event
+
+const (
+	// EventRequest entries contain information about HTTP requests.
+	// This includes information like the URL, query parameters, and headers.
+	EventRequest Event = "Request"
+
+	// EventResponse entries contain information about HTTP responses.
+	// This includes information like the HTTP status code, headers, and request URL.
+	EventResponse Event = "Response"
+
+	// EventRetryPolicy entries contain information specific to the retry policy in use.
+	EventRetryPolicy Event = "Retry"
+
+	// EventLRO entries contain information specific to long-running operations.
+	// This includes information like polling location, operation state, and sleep intervals.
+	EventLRO Event = "LongRunningOperation"
+)
+
+// SetEvents is used to control which events are written to
+// the log. By default all log events are written.
+// NOTE: this is not goroutine safe and should be called before using SDK clients.
+func SetEvents(cls ...Event) {
+	log.SetEvents(cls...)
+}
+
+// SetListener will set the Logger to write to the specified Listener.
+// NOTE: this is not goroutine safe and should be called before using SDK clients.
+func SetListener(lst func(Event, string)) {
+	log.SetListener(lst)
+}
+
+// for testing purposes
+func resetEvents() {
+	log.TestResetEvents()
+}
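For consumers, this package's surface mirrors sdk/internal/log one-for-one; typical wiring looks like the sketch below (the listener body is illustrative):

package main

import (
	"fmt"

	azlog "github.com/Azure/azure-sdk-for-go/sdk/azcore/log"
)

func main() {
	// route SDK log messages to stdout; set before constructing any clients
	azlog.SetListener(func(ev azlog.Event, msg string) {
		fmt.Printf("[%s] %s\n", ev, msg)
	})
	// restrict output to retry and long-running-operation events
	azlog.SetEvents(azlog.EventRetryPolicy, azlog.EventLRO)
}

diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/policy/doc.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/policy/doc.go
new file mode 100644
index 000000000000..fad2579ed6c5
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/policy/doc.go
@@ -0,0 +1,10 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright 2017 Microsoft Corporation. All rights reserved.
+// Use of this source code is governed by an MIT
+// license that can be found in the LICENSE file.
+
+// Package policy contains the definitions needed for configuring in-box pipeline policies
+// and creating custom policies.
+package policy
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/policy/policy.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/policy/policy.go
new file mode 100644
index 000000000000..d934f1dc5fa3
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/policy/policy.go
@@ -0,0 +1,187 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.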
+ +package policy + +import ( + "context" + "net/http" + "time" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing" +) + +// Policy represents an extensibility point for the Pipeline that can mutate the specified +// Request and react to the received Response. +type Policy = exported.Policy + +// Transporter represents an HTTP pipeline transport used to send HTTP requests and receive responses. +type Transporter = exported.Transporter + +// Request is an abstraction over the creation of an HTTP request as it passes through the pipeline. +// Don't use this type directly, use runtime.NewRequest() instead. +type Request = exported.Request + +// ClientOptions contains optional settings for a client's pipeline. +// Instances can be shared across calls to SDK client constructors when uniform configuration is desired. +// Zero-value fields will have their specified default values applied during use. +type ClientOptions struct { + // APIVersion overrides the default version requested of the service. + // Set with caution as this package version has not been tested with arbitrary service versions. + APIVersion string + + // Cloud specifies a cloud for the client. The default is Azure Public Cloud. + Cloud cloud.Configuration + + // Logging configures the built-in logging policy. + Logging LogOptions + + // Retry configures the built-in retry policy. + Retry RetryOptions + + // Telemetry configures the built-in telemetry policy. + Telemetry TelemetryOptions + + // TracingProvider configures the tracing provider. + // It defaults to a no-op tracer. + TracingProvider tracing.Provider + + // Transport sets the transport for HTTP requests. + Transport Transporter + + // PerCallPolicies contains custom policies to inject into the pipeline. + // Each policy is executed once per request. + PerCallPolicies []Policy + + // PerRetryPolicies contains custom policies to inject into the pipeline. + // Each policy is executed once per request, and for each retry of that request. + PerRetryPolicies []Policy +} + +// LogOptions configures the logging policy's behavior. +type LogOptions struct { + // IncludeBody indicates if request and response bodies should be included in logging. + // The default value is false. + // NOTE: enabling this can lead to disclosure of sensitive information, use with care. + IncludeBody bool + + // AllowedHeaders is the slice of headers to log with their values intact. + // All headers not in the slice will have their values REDACTED. + // Applies to request and response headers. + AllowedHeaders []string + + // AllowedQueryParams is the slice of query parameters to log with their values intact. + // All query parameters not in the slice will have their values REDACTED. + AllowedQueryParams []string +} + +// RetryOptions configures the retry policy's behavior. +// Zero-value fields will have their specified default values applied during use. +// This allows for modification of a subset of fields. +type RetryOptions struct { + // MaxRetries specifies the maximum number of attempts a failed operation will be retried + // before producing an error. + // The default value is three. A value less than zero means one try and no retries. + MaxRetries int32 + + // TryTimeout indicates the maximum time allowed for any single try of an HTTP request. + // This is disabled by default. 
Specify a value greater than zero to enable.
+	// NOTE: Setting this to a small value might cause premature HTTP request time-outs.
+	TryTimeout time.Duration
+
+	// RetryDelay specifies the initial amount of delay to use before retrying an operation.
+	// The value is used only if the HTTP response does not contain a Retry-After header.
+	// The delay increases exponentially with each retry up to the maximum specified by MaxRetryDelay.
+	// The default value is four seconds. A value less than zero means no delay between retries.
+	RetryDelay time.Duration
+
+	// MaxRetryDelay specifies the maximum delay allowed before retrying an operation.
+	// Typically the value is greater than or equal to the value specified in RetryDelay.
+	// The default value is 60 seconds. A value less than zero means there is no cap.
+	MaxRetryDelay time.Duration
+
+	// StatusCodes specifies the HTTP status codes that indicate the operation should be retried.
+	// A nil slice will use the following values.
+	//   http.StatusRequestTimeout      408
+	//   http.StatusTooManyRequests     429
+	//   http.StatusInternalServerError 500
+	//   http.StatusBadGateway          502
+	//   http.StatusServiceUnavailable  503
+	//   http.StatusGatewayTimeout      504
+	// Specifying values will replace the default values.
+	// Specifying an empty slice will disable retries for HTTP status codes.
+	StatusCodes []int
+
+	// ShouldRetry evaluates if the retry policy should retry the request.
+	// When specified, the function overrides comparison against the list of
+	// HTTP status codes and error checking within the retry policy. Context
+	// and NonRetriable errors remain evaluated before calling ShouldRetry.
+	// The *http.Response and error parameters are mutually exclusive, i.e.
+	// if one is nil, the other is not nil.
+	// A return value of true means the retry policy should retry.
+	ShouldRetry func(*http.Response, error) bool
+}
+
+// TelemetryOptions configures the telemetry policy's behavior.
+type TelemetryOptions struct {
+	// ApplicationID is an application-specific identification string to add to the User-Agent.
+	// It has a maximum length of 24 characters and must not contain any spaces.
+	ApplicationID string
+
+	// Disabled will prevent the addition of any telemetry data to the User-Agent.
+	Disabled bool
+}
+
+// TokenRequestOptions contains specific parameters that may be used by credential types when attempting to get a token.
+type TokenRequestOptions = exported.TokenRequestOptions
+
+// BearerTokenOptions configures the bearer token policy's behavior.
+type BearerTokenOptions struct {
+	// AuthorizationHandler allows SDK developers to run client-specific logic when BearerTokenPolicy must authorize a request.
+	// When this field isn't set, the policy follows its default behavior of authorizing every request with a bearer token from
+	// its given credential.
+	AuthorizationHandler AuthorizationHandler
+}
+
+// AuthorizationHandler allows SDK developers to insert custom logic that runs when BearerTokenPolicy must authorize a request.
+type AuthorizationHandler struct {
+	// OnRequest is called each time the policy receives a request. Its func parameter authorizes the request with a token
+	// from the policy's given credential. Implementations that need to perform I/O should use the Request's context,
+	// available from Request.Raw().Context(). When OnRequest returns an error, the policy propagates that error and doesn't
+	// send the request. When OnRequest is nil, the policy follows its default behavior, authorizing the request with a
+	// token from its credential according to its configuration.
+	OnRequest func(*Request, func(TokenRequestOptions) error) error
+
+	// OnChallenge is called when the policy receives a 401 response, allowing the AuthorizationHandler to re-authorize the
+	// request according to an authentication challenge (the Response's WWW-Authenticate header). OnChallenge is responsible
+	// for parsing parameters from the challenge. Its func parameter will authorize the request with a token from the policy's
+	// given credential. Implementations that need to perform I/O should use the Request's context, available from
+	// Request.Raw().Context(). When OnChallenge returns nil, the policy will send the request again. When OnChallenge is nil,
+	// the policy will return any 401 response to the client.
+	OnChallenge func(*Request, *http.Response, func(TokenRequestOptions) error) error
+}
+
+// WithCaptureResponse applies the HTTP response retrieval annotation to the parent context.
+// The resp parameter will contain the HTTP response after the request has completed.
func WithCaptureResponse(parent context.Context, resp **http.Response) context.Context {
+	return context.WithValue(parent, shared.CtxWithCaptureResponse{}, resp)
+}
+
+// WithHTTPHeader adds the specified http.Header to the parent context.
+// Use this to specify custom HTTP headers at the API-call level.
+// Any overlapping headers will have their values replaced with the values specified here.
+func WithHTTPHeader(parent context.Context, header http.Header) context.Context {
+	return context.WithValue(parent, shared.CtxWithHTTPHeaderKey{}, header)
+}
+
+// WithRetryOptions adds the specified RetryOptions to the parent context.
+// Use this to specify custom RetryOptions at the API-call level.
+func WithRetryOptions(parent context.Context, options RetryOptions) context.Context {
+	return context.WithValue(parent, shared.CtxWithRetryOptionsKey{}, options)
+}
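These context helpers are the public entry points for the include-response and HTTP-header policies that appear later in this patch. A sketch of the capture pattern, where doSomething is a hypothetical stand-in for any SDK client method that takes the decorated context:

package main

import (
	"context"
	"net/http"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
)

// callWithCapture returns the raw HTTP response of whatever call doSomething makes.
func callWithCapture(ctx context.Context, doSomething func(context.Context) error) (*http.Response, error) {
	var rawResp *http.Response
	ctx = policy.WithCaptureResponse(ctx, &rawResp)
	if err := doSomething(ctx); err != nil {
		return nil, err
	}
	// the includeResponsePolicy at the head of the pipeline filled rawResp in
	return rawResp, nil
}

func main() {}

diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/doc.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/doc.go
new file mode 100644
index 000000000000..c9cfa438cb34
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/doc.go
@@ -0,0 +1,10 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright 2017 Microsoft Corporation. All rights reserved.
+// Use of this source code is governed by an MIT
+// license that can be found in the LICENSE file.
+
+// Package runtime contains various facilities for creating requests and handling responses.
+// The content is intended for SDK authors.
+package runtime
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/errors.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/errors.go
new file mode 100644
index 000000000000..6d03b291ebff
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/errors.go
@@ -0,0 +1,19 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"net/http"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+)
+
+// NewResponseError creates an *azcore.ResponseError from the provided HTTP response.
+// Call this when a service request returns a non-successful status code.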
+func NewResponseError(resp *http.Response) error { + return exported.NewResponseError(resp) +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/pager.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/pager.go new file mode 100644 index 000000000000..cffe692d7e30 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/pager.go @@ -0,0 +1,128 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package runtime + +import ( + "context" + "encoding/json" + "errors" + "fmt" + "net/http" + "reflect" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing" +) + +// PagingHandler contains the required data for constructing a Pager. +type PagingHandler[T any] struct { + // More returns a boolean indicating if there are more pages to fetch. + // It uses the provided page to make the determination. + More func(T) bool + + // Fetcher fetches the first and subsequent pages. + Fetcher func(context.Context, *T) (T, error) + + // Tracer contains the Tracer from the client that's creating the Pager. + Tracer tracing.Tracer +} + +// Pager provides operations for iterating over paged responses. +type Pager[T any] struct { + current *T + handler PagingHandler[T] + tracer tracing.Tracer + firstPage bool +} + +// NewPager creates an instance of Pager using the specified PagingHandler. +// Pass a non-nil T for firstPage if the first page has already been retrieved. +func NewPager[T any](handler PagingHandler[T]) *Pager[T] { + return &Pager[T]{ + handler: handler, + tracer: handler.Tracer, + firstPage: true, + } +} + +// More returns true if there are more pages to retrieve. +func (p *Pager[T]) More() bool { + if p.current != nil { + return p.handler.More(*p.current) + } + return true +} + +// NextPage advances the pager to the next page. +func (p *Pager[T]) NextPage(ctx context.Context) (T, error) { + if p.current != nil { + if p.firstPage { + // we get here if it's an LRO-pager, we already have the first page + p.firstPage = false + return *p.current, nil + } else if !p.handler.More(*p.current) { + return *new(T), errors.New("no more pages") + } + } else { + // non-LRO case, first page + p.firstPage = false + } + + var err error + ctx, endSpan := StartSpan(ctx, fmt.Sprintf("%s.NextPage", shortenTypeName(reflect.TypeOf(*p).Name())), p.tracer, nil) + defer func() { endSpan(err) }() + + resp, err := p.handler.Fetcher(ctx, p.current) + if err != nil { + return *new(T), err + } + p.current = &resp + return *p.current, nil +} + +// UnmarshalJSON implements the json.Unmarshaler interface for Pager[T]. +func (p *Pager[T]) UnmarshalJSON(data []byte) error { + return json.Unmarshal(data, &p.current) +} + +// FetcherForNextLinkOptions contains the optional values for [FetcherForNextLink]. +type FetcherForNextLinkOptions struct { + // NextReq is the func to be called when requesting subsequent pages. + // Used for paged operations that have a custom next link operation. + NextReq func(context.Context, string) (*policy.Request, error) +} + +// FetcherForNextLink is a helper containing boilerplate code to simplify creating a PagingHandler[T].Fetcher from a next link URL. 
+// - ctx is the [context.Context] controlling the lifetime of the HTTP operation +// - pl is the [Pipeline] used to dispatch the HTTP request +// - nextLink is the URL used to fetch the next page. the empty string indicates the first page is to be requested +// - firstReq is the func to be called when creating the request for the first page +// - options contains any optional parameters, pass nil to accept the default values +func FetcherForNextLink(ctx context.Context, pl Pipeline, nextLink string, firstReq func(context.Context) (*policy.Request, error), options *FetcherForNextLinkOptions) (*http.Response, error) { + var req *policy.Request + var err error + if nextLink == "" { + req, err = firstReq(ctx) + } else if nextLink, err = EncodeQueryParams(nextLink); err == nil { + if options != nil && options.NextReq != nil { + req, err = options.NextReq(ctx, nextLink) + } else { + req, err = NewRequest(ctx, http.MethodGet, nextLink) + } + } + if err != nil { + return nil, err + } + resp, err := pl.Do(req) + if err != nil { + return nil, err + } + if !HasStatusCode(resp, http.StatusOK) { + return nil, NewResponseError(resp) + } + return resp, nil +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/pipeline.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/pipeline.go new file mode 100644 index 000000000000..6b1f5c083eb6 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/pipeline.go @@ -0,0 +1,94 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package runtime + +import ( + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" +) + +// PipelineOptions contains Pipeline options for SDK developers +type PipelineOptions struct { + // AllowedHeaders is the slice of headers to log with their values intact. + // All headers not in the slice will have their values REDACTED. + // Applies to request and response headers. + AllowedHeaders []string + + // AllowedQueryParameters is the slice of query parameters to log with their values intact. + // All query parameters not in the slice will have their values REDACTED. + AllowedQueryParameters []string + + // APIVersion overrides the default version requested of the service. + // Set with caution as this package version has not been tested with arbitrary service versions. + APIVersion APIVersionOptions + + // PerCall contains custom policies to inject into the pipeline. + // Each policy is executed once per request. + PerCall []policy.Policy + + // PerRetry contains custom policies to inject into the pipeline. + // Each policy is executed once per request, and for each retry of that request. + PerRetry []policy.Policy + + // Tracing contains options used to configure distributed tracing. + Tracing TracingOptions +} + +// TracingOptions contains tracing options for SDK developers. +type TracingOptions struct { + // Namespace contains the value to use for the az.namespace span attribute. + Namespace string +} + +// Pipeline represents a primitive for sending HTTP requests and receiving responses. +// Its behavior can be extended by specifying policies during construction. +type Pipeline = exported.Pipeline + +// NewPipeline creates a pipeline from connection options, with any additional policies as specified. 
+// Policies from ClientOptions are placed after policies from PipelineOptions.
+// The module and version parameters are used by the telemetry policy, when enabled.
+func NewPipeline(module, version string, plOpts PipelineOptions, options *policy.ClientOptions) Pipeline {
+	cp := policy.ClientOptions{}
+	if options != nil {
+		cp = *options
+	}
+	if len(plOpts.AllowedHeaders) > 0 {
+		headers := make([]string, 0, len(plOpts.AllowedHeaders)+len(cp.Logging.AllowedHeaders))
+		headers = append(headers, plOpts.AllowedHeaders...)
+		headers = append(headers, cp.Logging.AllowedHeaders...)
+		cp.Logging.AllowedHeaders = headers
+	}
+	if len(plOpts.AllowedQueryParameters) > 0 {
+		qp := make([]string, 0, len(plOpts.AllowedQueryParameters)+len(cp.Logging.AllowedQueryParams))
+		qp = append(qp, plOpts.AllowedQueryParameters...)
+		qp = append(qp, cp.Logging.AllowedQueryParams...)
+		cp.Logging.AllowedQueryParams = qp
+	}
+	// we put the includeResponsePolicy at the very beginning so that the raw response
+	// is populated with the final response (some policies might mutate the response)
+	policies := []policy.Policy{exported.PolicyFunc(includeResponsePolicy)}
+	if cp.APIVersion != "" {
+		policies = append(policies, newAPIVersionPolicy(cp.APIVersion, &plOpts.APIVersion))
+	}
+	if !cp.Telemetry.Disabled {
+		policies = append(policies, NewTelemetryPolicy(module, version, &cp.Telemetry))
+	}
+	policies = append(policies, plOpts.PerCall...)
+	policies = append(policies, cp.PerCallPolicies...)
+	policies = append(policies, NewRetryPolicy(&cp.Retry))
+	policies = append(policies, plOpts.PerRetry...)
+	policies = append(policies, cp.PerRetryPolicies...)
+	policies = append(policies, exported.PolicyFunc(httpHeaderPolicy))
+	policies = append(policies, newHTTPTracePolicy(cp.Logging.AllowedQueryParams))
+	policies = append(policies, NewLogPolicy(&cp.Logging))
+	policies = append(policies, exported.PolicyFunc(bodyDownloadPolicy))
+	transport := cp.Transport
+	if transport == nil {
+		transport = defaultHTTPClient
+	}
+	return exported.NewPipeline(transport, policies...)
+}
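Putting the pieces together, a client author hands NewPipeline its custom policies and lets it assemble the fixed ones around them. A compact sketch (the module name, header, and URL are hypothetical):

package main

import (
	"context"
	"fmt"
	"net/http"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
)

// headerPolicy is a minimal custom per-call policy.
type headerPolicy struct{}

func (headerPolicy) Do(req *policy.Request) (*http.Response, error) {
	req.Raw().Header.Set("x-example", "demo") // mutate the request, then continue the pipeline
	return req.Next()
}

func main() {
	pl := runtime.NewPipeline("example", "v0.1.0", runtime.PipelineOptions{
		PerCall: []policy.Policy{headerPolicy{}},
	}, nil)
	req, err := runtime.NewRequest(context.Background(), http.MethodGet, "https://example.com")
	if err != nil {
		panic(err)
	}
	resp, err := pl.Do(req)
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Status)
}

diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_api_version.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_api_version.go
new file mode 100644
index 000000000000..e5309aa6c15b
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_api_version.go
@@ -0,0 +1,75 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"errors"
+	"fmt"
+	"net/http"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+)
+
+// APIVersionOptions contains options for API versions
+type APIVersionOptions struct {
+	// Location indicates where to set the version on a request, for example in a header or query param
+	Location APIVersionLocation
+	// Name is the name of the header or query parameter, for example "api-version"
+	Name string
+}
+
+// APIVersionLocation indicates which part of a request identifies the service version
+type APIVersionLocation int
+
+const (
+	// APIVersionLocationQueryParam indicates a query parameter
+	APIVersionLocationQueryParam = 0
+	// APIVersionLocationHeader indicates a header
+	APIVersionLocationHeader = 1
+)
+
+// newAPIVersionPolicy constructs an APIVersionPolicy. If version is "", Do will be a no-op. If version
+// isn't empty and opts.Name is empty, Do will return an error.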
+func newAPIVersionPolicy(version string, opts *APIVersionOptions) *apiVersionPolicy {
+	if opts == nil {
+		opts = &APIVersionOptions{}
+	}
+	return &apiVersionPolicy{location: opts.Location, name: opts.Name, version: version}
+}
+
+// apiVersionPolicy enables users to set the API version of every request a client sends.
+type apiVersionPolicy struct {
+	// location indicates whether "name" refers to a query parameter or header.
+	location APIVersionLocation
+
+	// name of the query param or header whose value should be overridden; provided by the client.
+	name string
+
+	// version is the value (provided by the user) that replaces the default version value.
+	version string
+}
+
+// Do sets the request's API version, if the policy is configured to do so, replacing any prior value.
+func (a *apiVersionPolicy) Do(req *policy.Request) (*http.Response, error) {
+	if a.version != "" {
+		if a.name == "" {
+			// user set ClientOptions.APIVersion but the client ctor didn't set PipelineOptions.APIVersionOptions
+			return nil, errors.New("this client doesn't support overriding its API version")
+		}
+		switch a.location {
+		case APIVersionLocationHeader:
+			req.Raw().Header.Set(a.name, a.version)
+		case APIVersionLocationQueryParam:
+			q := req.Raw().URL.Query()
+			q.Set(a.name, a.version)
+			req.Raw().URL.RawQuery = q.Encode()
+		default:
+			return nil, fmt.Errorf("unknown APIVersionLocation %d", a.location)
+		}
+	}
+	return req.Next()
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_bearer_token.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_bearer_token.go
new file mode 100644
index 000000000000..f0f280355955
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_bearer_token.go
@@ -0,0 +1,121 @@
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"errors"
+	"net/http"
+	"strings"
+	"time"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/errorinfo"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/temporal"
+)
+
+// BearerTokenPolicy authorizes requests with bearer tokens acquired from a TokenCredential.
+type BearerTokenPolicy struct {
+	// mainResource is the resource to be retrieved using the tenant specified in the credential
+	mainResource *temporal.Resource[exported.AccessToken, acquiringResourceState]
+	// the following fields are read-only
+	authzHandler policy.AuthorizationHandler
+	cred         exported.TokenCredential
+	scopes       []string
+}
+
+type acquiringResourceState struct {
+	req *policy.Request
+	p   *BearerTokenPolicy
+	tro policy.TokenRequestOptions
+}
+
+// acquire acquires or updates the resource; only one
+// thread/goroutine at a time ever calls this function
+func acquire(state acquiringResourceState) (newResource exported.AccessToken, newExpiration time.Time, err error) {
+	tk, err := state.p.cred.GetToken(&shared.ContextWithDeniedValues{Context: state.req.Raw().Context()}, state.tro)
+	if err != nil {
+		return exported.AccessToken{}, time.Time{}, err
+	}
+	return tk, tk.ExpiresOn, nil
+}
+
+// NewBearerTokenPolicy creates a policy object that authorizes requests with bearer tokens.
+// cred: an azcore.TokenCredential implementation such as a credential object from azidentity +// scopes: the list of permission scopes required for the token. +// opts: optional settings. Pass nil to accept default values; this is the same as passing a zero-value options. +func NewBearerTokenPolicy(cred exported.TokenCredential, scopes []string, opts *policy.BearerTokenOptions) *BearerTokenPolicy { + if opts == nil { + opts = &policy.BearerTokenOptions{} + } + return &BearerTokenPolicy{ + authzHandler: opts.AuthorizationHandler, + cred: cred, + scopes: scopes, + mainResource: temporal.NewResource(acquire), + } +} + +// authenticateAndAuthorize returns a function which authorizes req with a token from the policy's credential +func (b *BearerTokenPolicy) authenticateAndAuthorize(req *policy.Request) func(policy.TokenRequestOptions) error { + return func(tro policy.TokenRequestOptions) error { + as := acquiringResourceState{p: b, req: req, tro: tro} + tk, err := b.mainResource.Get(as) + if err != nil { + return err + } + req.Raw().Header.Set(shared.HeaderAuthorization, shared.BearerTokenPrefix+tk.Token) + return nil + } +} + +// Do authorizes a request with a bearer token +func (b *BearerTokenPolicy) Do(req *policy.Request) (*http.Response, error) { + // skip adding the authorization header if no TokenCredential was provided. + // this prevents a panic that might be hard to diagnose and allows testing + // against http endpoints that don't require authentication. + if b.cred == nil { + return req.Next() + } + + if err := checkHTTPSForAuth(req); err != nil { + return nil, err + } + + var err error + if b.authzHandler.OnRequest != nil { + err = b.authzHandler.OnRequest(req, b.authenticateAndAuthorize(req)) + } else { + err = b.authenticateAndAuthorize(req)(policy.TokenRequestOptions{Scopes: b.scopes}) + } + if err != nil { + return nil, errorinfo.NonRetriableError(err) + } + + res, err := req.Next() + if err != nil { + return nil, err + } + + if res.StatusCode == http.StatusUnauthorized { + b.mainResource.Expire() + if res.Header.Get("WWW-Authenticate") != "" && b.authzHandler.OnChallenge != nil { + if err = b.authzHandler.OnChallenge(req, res, b.authenticateAndAuthorize(req)); err == nil { + res, err = req.Next() + } + } + } + if err != nil { + err = errorinfo.NonRetriableError(err) + } + return res, err +} + +func checkHTTPSForAuth(req *policy.Request) error { + if strings.ToLower(req.Raw().URL.Scheme) != "https" { + return errorinfo.NonRetriableError(errors.New("authenticated requests are not permitted for non TLS protected (https) endpoints")) + } + return nil +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_body_download.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_body_download.go new file mode 100644 index 000000000000..99dc029f0c17 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_body_download.go @@ -0,0 +1,72 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package runtime + +import ( + "fmt" + "net/http" + "strings" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/Azure/azure-sdk-for-go/sdk/internal/errorinfo" +) + +// bodyDownloadPolicy creates a policy object that downloads the response's body to a []byte. 
+func bodyDownloadPolicy(req *policy.Request) (*http.Response, error) {
+	resp, err := req.Next()
+	if err != nil {
+		return resp, err
+	}
+	var opValues bodyDownloadPolicyOpValues
+	// don't skip downloading error response bodies
+	if req.OperationValue(&opValues); opValues.Skip && resp.StatusCode < 400 {
+		return resp, err
+	}
+	// Either bodyDownloadPolicyOpValues was not specified (so skip is false)
+	// or it was specified and skip is false: don't skip downloading the body
+	_, err = Payload(resp)
+	if err != nil {
+		return resp, newBodyDownloadError(err, req)
+	}
+	return resp, err
+}
+
+// bodyDownloadPolicyOpValues is the struct containing the per-operation values
+type bodyDownloadPolicyOpValues struct {
+	Skip bool
+}
+
+type bodyDownloadError struct {
+	err error
+}
+
+func newBodyDownloadError(err error, req *policy.Request) error {
+	// on failure, only retry the request for idempotent operations.
+	// we currently identify them as DELETE, GET, and PUT requests.
+	if m := strings.ToUpper(req.Raw().Method); m == http.MethodDelete || m == http.MethodGet || m == http.MethodPut {
+		// error is safe for retry
+		return err
+	}
+	// wrap error to avoid retries
+	return &bodyDownloadError{
+		err: err,
+	}
+}
+
+func (b *bodyDownloadError) Error() string {
+	return fmt.Sprintf("body download policy: %s", b.err.Error())
+}
+
+func (b *bodyDownloadError) NonRetriable() {
+	// marker method
+}
+
+func (b *bodyDownloadError) Unwrap() error {
+	return b.err
+}
+
+var _ errorinfo.NonRetriable = (*bodyDownloadError)(nil)
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_http_header.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_http_header.go
new file mode 100644
index 000000000000..c230af0afa89
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_http_header.go
@@ -0,0 +1,40 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"context"
+	"net/http"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+)
+
+// httpHeaderPolicy adds custom HTTP headers from the request's context to the request
+func httpHeaderPolicy(req *policy.Request) (*http.Response, error) {
+	// check if any custom HTTP headers have been specified
+	if header := req.Raw().Context().Value(shared.CtxWithHTTPHeaderKey{}); header != nil {
+		for k, v := range header.(http.Header) {
+			// use Set to replace any existing value
+			// it also canonicalizes the header key
+			req.Raw().Header.Set(k, v[0])
+			// add any remaining values
+			for i := 1; i < len(v); i++ {
+				req.Raw().Header.Add(k, v[i])
+			}
+		}
+	}
+	return req.Next()
+}
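End to end, a caller attaches headers to a context via policy.WithHTTPHeader and this policy copies them onto the outgoing request. A short sketch (the header name and value are hypothetical):

package main

import (
	"context"
	"net/http"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
)

func main() {
	hdr := http.Header{}
	hdr.Set("x-correlation-id", "demo-123")
	// any request sent with this context gets the header applied by httpHeaderPolicy
	ctx := policy.WithHTTPHeader(context.Background(), hdr)
	_ = ctx // pass ctx to an SDK client method
}

+
+// WithHTTPHeader adds the specified http.Header to the parent context.
+// Use this to specify custom HTTP headers at the API-call level.
+// Any overlapping headers will have their values replaced with the values specified here.
+// Deprecated: use [policy.WithHTTPHeader] instead.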
+func WithHTTPHeader(parent context.Context, header http.Header) context.Context { + return policy.WithHTTPHeader(parent, header) +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_http_trace.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_http_trace.go new file mode 100644 index 000000000000..3df1c1218901 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_http_trace.go @@ -0,0 +1,143 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package runtime + +import ( + "context" + "errors" + "fmt" + "net/http" + "net/url" + "strings" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing" +) + +const ( + attrHTTPMethod = "http.method" + attrHTTPURL = "http.url" + attrHTTPUserAgent = "http.user_agent" + attrHTTPStatusCode = "http.status_code" + + attrAZClientReqID = "az.client_request_id" + attrAZServiceReqID = "az.service_request_id" + + attrNetPeerName = "net.peer.name" +) + +// newHTTPTracePolicy creates a new instance of the httpTracePolicy. +// - allowedQueryParams contains the user-specified query parameters that don't need to be redacted from the trace +func newHTTPTracePolicy(allowedQueryParams []string) exported.Policy { + return &httpTracePolicy{allowedQP: getAllowedQueryParams(allowedQueryParams)} +} + +// httpTracePolicy is a policy that creates a trace for the HTTP request and its response +type httpTracePolicy struct { + allowedQP map[string]struct{} +} + +// Do implements the pipeline.Policy interfaces for the httpTracePolicy type. +func (h *httpTracePolicy) Do(req *policy.Request) (resp *http.Response, err error) { + rawTracer := req.Raw().Context().Value(shared.CtxWithTracingTracer{}) + if tracer, ok := rawTracer.(tracing.Tracer); ok && tracer.Enabled() { + attributes := []tracing.Attribute{ + {Key: attrHTTPMethod, Value: req.Raw().Method}, + {Key: attrHTTPURL, Value: getSanitizedURL(*req.Raw().URL, h.allowedQP)}, + {Key: attrNetPeerName, Value: req.Raw().URL.Host}, + } + + if ua := req.Raw().Header.Get(shared.HeaderUserAgent); ua != "" { + attributes = append(attributes, tracing.Attribute{Key: attrHTTPUserAgent, Value: ua}) + } + if reqID := req.Raw().Header.Get(shared.HeaderXMSClientRequestID); reqID != "" { + attributes = append(attributes, tracing.Attribute{Key: attrAZClientReqID, Value: reqID}) + } + + ctx := req.Raw().Context() + ctx, span := tracer.Start(ctx, "HTTP "+req.Raw().Method, &tracing.SpanOptions{ + Kind: tracing.SpanKindClient, + Attributes: attributes, + }) + + defer func() { + if resp != nil { + span.SetAttributes(tracing.Attribute{Key: attrHTTPStatusCode, Value: resp.StatusCode}) + if resp.StatusCode > 399 { + span.SetStatus(tracing.SpanStatusError, resp.Status) + } + if reqID := resp.Header.Get(shared.HeaderXMSRequestID); reqID != "" { + span.SetAttributes(tracing.Attribute{Key: attrAZServiceReqID, Value: reqID}) + } + } else if err != nil { + var urlErr *url.Error + if errors.As(err, &urlErr) { + // calling *url.Error.Error() will include the unsanitized URL + // which we don't want. 
in addition, we already have the HTTP verb + // and sanitized URL in the trace so we aren't losing any info + err = urlErr.Err + } + span.SetStatus(tracing.SpanStatusError, err.Error()) + } + span.End() + }() + + req = req.WithContext(ctx) + } + resp, err = req.Next() + return +} + +// StartSpanOptions contains the optional values for StartSpan. +type StartSpanOptions struct { + // for future expansion +} + +// StartSpan starts a new tracing span. +// You must call the returned func to terminate the span. Pass the applicable error +// if the span will exit with an error condition. +// - ctx is the parent context of the newly created context +// - name is the name of the span. this is typically the fully qualified name of an API ("Client.Method") +// - tracer is the client's Tracer for creating spans +// - options contains optional values. pass nil to accept any default values +func StartSpan(ctx context.Context, name string, tracer tracing.Tracer, options *StartSpanOptions) (context.Context, func(error)) { + if !tracer.Enabled() { + return ctx, func(err error) {} + } + + // we MUST propagate the active tracer before returning so that the trace policy can access it + ctx = context.WithValue(ctx, shared.CtxWithTracingTracer{}, tracer) + + const newSpanKind = tracing.SpanKindInternal + if activeSpan := ctx.Value(ctxActiveSpan{}); activeSpan != nil { + // per the design guidelines, if a SDK method Foo() calls SDK method Bar(), + // then the span for Bar() must be suppressed. however, if Bar() makes a REST + // call, then Bar's HTTP span must be a child of Foo's span. + // however, there is an exception to this rule. if the SDK method Foo() is a + // messaging producer/consumer, and it takes a callback that's a SDK method + // Bar(), then the span for Bar() must _not_ be suppressed. + if kind := activeSpan.(tracing.SpanKind); kind == tracing.SpanKindClient || kind == tracing.SpanKindInternal { + return ctx, func(err error) {} + } + } + ctx, span := tracer.Start(ctx, name, &tracing.SpanOptions{ + Kind: newSpanKind, + }) + ctx = context.WithValue(ctx, ctxActiveSpan{}, newSpanKind) + return ctx, func(err error) { + if err != nil { + errType := strings.Replace(fmt.Sprintf("%T", err), "*exported.", "*azcore.", 1) + span.SetStatus(tracing.SpanStatusError, fmt.Sprintf("%s:\n%s", errType, err.Error())) + } + span.End() + } +} + +// ctxActiveSpan is used as a context key for indicating a SDK client span is in progress. +type ctxActiveSpan struct{} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_include_response.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_include_response.go new file mode 100644 index 000000000000..bb00f6c2fdb7 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_include_response.go @@ -0,0 +1,35 @@ +//go:build go1.16 +// +build go1.16 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. 
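As an illustration of the StartSpan contract above, a hypothetical client method (the Client type and its tracer field are assumptions, not part of this patch) wraps its body like so; the HTTP span created by httpTracePolicy then becomes a child of this method span:

    func (c *Client) GetWidget(ctx context.Context, name string) (result WidgetResponse, err error) {
        ctx, endSpan := runtime.StartSpan(ctx, "Client.GetWidget", c.tracer, nil)
        defer func() { endSpan(err) }()
        // issue the request through the client's pipeline using ctx
        return
    }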
+
+package runtime
+
+import (
+	"context"
+	"net/http"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+)
+
+// includeResponsePolicy creates a policy that retrieves the raw HTTP response upon request
+func includeResponsePolicy(req *policy.Request) (*http.Response, error) {
+	resp, err := req.Next()
+	if resp == nil {
+		return resp, err
+	}
+	if httpOutRaw := req.Raw().Context().Value(shared.CtxWithCaptureResponse{}); httpOutRaw != nil {
+		httpOut := httpOutRaw.(**http.Response)
+		*httpOut = resp
+	}
+	return resp, err
+}
+
+// WithCaptureResponse applies the HTTP response retrieval annotation to the parent context.
+// The resp parameter will contain the HTTP response after the request has completed.
+// Deprecated: use [policy.WithCaptureResponse] instead.
+func WithCaptureResponse(parent context.Context, resp **http.Response) context.Context {
+	return policy.WithCaptureResponse(parent, resp)
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_key_credential.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_key_credential.go
new file mode 100644
index 000000000000..6f577fa7a9e4
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_key_credential.go
@@ -0,0 +1,57 @@
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"net/http"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+)
+
+// KeyCredentialPolicy authorizes requests with a [azcore.KeyCredential].
+type KeyCredentialPolicy struct {
+	cred   *exported.KeyCredential
+	header string
+	prefix string
+}
+
+// KeyCredentialPolicyOptions contains the optional values configuring [KeyCredentialPolicy].
+type KeyCredentialPolicyOptions struct {
+	// Prefix is used if the key requires a prefix before it's inserted into the HTTP request.
+	Prefix string
+}
+
+// NewKeyCredentialPolicy creates a new instance of [KeyCredentialPolicy].
+// - cred is the [azcore.KeyCredential] used to authenticate with the service
+// - header is the name of the HTTP request header in which the key is placed
+// - options contains optional configuration, pass nil to accept the default values
+func NewKeyCredentialPolicy(cred *exported.KeyCredential, header string, options *KeyCredentialPolicyOptions) *KeyCredentialPolicy {
+	if options == nil {
+		options = &KeyCredentialPolicyOptions{}
+	}
+	return &KeyCredentialPolicy{
+		cred:   cred,
+		header: header,
+		prefix: options.Prefix,
+	}
+}
+
+// Do implements the Do method on the [policy.Policy] interface.
+func (k *KeyCredentialPolicy) Do(req *policy.Request) (*http.Response, error) {
+	// skip adding the authorization header if no KeyCredential was provided.
+	// this prevents a panic that might be hard to diagnose and allows testing
+	// against http endpoints that don't require authentication.
+ if k.cred != nil { + if err := checkHTTPSForAuth(req); err != nil { + return nil, err + } + val := exported.KeyCredentialGet(k.cred) + if k.prefix != "" { + val = k.prefix + val + } + req.Raw().Header.Add(k.header, val) + } + return req.Next() +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_logging.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_logging.go new file mode 100644 index 000000000000..f048d7fb53f5 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_logging.go @@ -0,0 +1,264 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package runtime + +import ( + "bytes" + "fmt" + "io" + "net/http" + "net/url" + "sort" + "strings" + "time" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/Azure/azure-sdk-for-go/sdk/internal/diag" +) + +type logPolicy struct { + includeBody bool + allowedHeaders map[string]struct{} + allowedQP map[string]struct{} +} + +// NewLogPolicy creates a request/response logging policy object configured using the specified options. +// Pass nil to accept the default values; this is the same as passing a zero-value options. +func NewLogPolicy(o *policy.LogOptions) policy.Policy { + if o == nil { + o = &policy.LogOptions{} + } + // construct default hash set of allowed headers + allowedHeaders := map[string]struct{}{ + "accept": {}, + "cache-control": {}, + "connection": {}, + "content-length": {}, + "content-type": {}, + "date": {}, + "etag": {}, + "expires": {}, + "if-match": {}, + "if-modified-since": {}, + "if-none-match": {}, + "if-unmodified-since": {}, + "last-modified": {}, + "ms-cv": {}, + "pragma": {}, + "request-id": {}, + "retry-after": {}, + "server": {}, + "traceparent": {}, + "transfer-encoding": {}, + "user-agent": {}, + "www-authenticate": {}, + "x-ms-request-id": {}, + "x-ms-client-request-id": {}, + "x-ms-return-client-request-id": {}, + } + // add any caller-specified allowed headers to the set + for _, ah := range o.AllowedHeaders { + allowedHeaders[strings.ToLower(ah)] = struct{}{} + } + // now do the same thing for query params + allowedQP := getAllowedQueryParams(o.AllowedQueryParams) + return &logPolicy{ + includeBody: o.IncludeBody, + allowedHeaders: allowedHeaders, + allowedQP: allowedQP, + } +} + +// getAllowedQueryParams merges the default set of allowed query parameters +// with a custom set (usually comes from client options). +func getAllowedQueryParams(customAllowedQP []string) map[string]struct{} { + allowedQP := map[string]struct{}{ + "api-version": {}, + } + for _, qp := range customAllowedQP { + allowedQP[strings.ToLower(qp)] = struct{}{} + } + return allowedQP +} + +// logPolicyOpValues is the struct containing the per-operation values +type logPolicyOpValues struct { + try int32 + start time.Time +} + +func (p *logPolicy) Do(req *policy.Request) (*http.Response, error) { + // Get the per-operation values. These are saved in the Message's map so that they persist across each retry calling into this policy object. 
+ var opValues logPolicyOpValues + if req.OperationValue(&opValues); opValues.start.IsZero() { + opValues.start = time.Now() // If this is the 1st try, record this operation's start time + } + opValues.try++ // The first try is #1 (not #0) + req.SetOperationValue(opValues) + + // Log the outgoing request as informational + if log.Should(log.EventRequest) { + b := &bytes.Buffer{} + fmt.Fprintf(b, "==> OUTGOING REQUEST (Try=%d)\n", opValues.try) + p.writeRequestWithResponse(b, req, nil, nil) + var err error + if p.includeBody { + err = writeReqBody(req, b) + } + log.Write(log.EventRequest, b.String()) + if err != nil { + return nil, err + } + } + + // Set the time for this particular retry operation and then Do the operation. + tryStart := time.Now() + response, err := req.Next() // Make the request + tryEnd := time.Now() + tryDuration := tryEnd.Sub(tryStart) + opDuration := tryEnd.Sub(opValues.start) + + if log.Should(log.EventResponse) { + // We're going to log this; build the string to log + b := &bytes.Buffer{} + fmt.Fprintf(b, "==> REQUEST/RESPONSE (Try=%d/%v, OpTime=%v) -- ", opValues.try, tryDuration, opDuration) + if err != nil { // This HTTP request did not get a response from the service + fmt.Fprint(b, "REQUEST ERROR\n") + } else { + fmt.Fprint(b, "RESPONSE RECEIVED\n") + } + + p.writeRequestWithResponse(b, req, response, err) + if err != nil { + // skip frames runtime.Callers() and runtime.StackTrace() + b.WriteString(diag.StackTrace(2, 32)) + } else if p.includeBody { + err = writeRespBody(response, b) + } + log.Write(log.EventResponse, b.String()) + } + return response, err +} + +const redactedValue = "REDACTED" + +// getSanitizedURL returns a sanitized string for the provided url.URL +func getSanitizedURL(u url.URL, allowedQueryParams map[string]struct{}) string { + // redact applicable query params + qp := u.Query() + for k := range qp { + if _, ok := allowedQueryParams[strings.ToLower(k)]; !ok { + qp.Set(k, redactedValue) + } + } + u.RawQuery = qp.Encode() + return u.String() +} + +// writeRequestWithResponse appends a formatted HTTP request into a Buffer. If request and/or err are +// not nil, then these are also written into the Buffer. +func (p *logPolicy) writeRequestWithResponse(b *bytes.Buffer, req *policy.Request, resp *http.Response, err error) { + // Write the request into the buffer. + fmt.Fprint(b, " "+req.Raw().Method+" "+getSanitizedURL(*req.Raw().URL, p.allowedQP)+"\n") + p.writeHeader(b, req.Raw().Header) + if resp != nil { + fmt.Fprintln(b, " --------------------------------------------------------------------------------") + fmt.Fprint(b, " RESPONSE Status: "+resp.Status+"\n") + p.writeHeader(b, resp.Header) + } + if err != nil { + fmt.Fprintln(b, " --------------------------------------------------------------------------------") + fmt.Fprint(b, " ERROR:\n"+err.Error()+"\n") + } +} + +// formatHeaders appends an HTTP request's or response's header into a Buffer. 
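To connect the redaction machinery above to its configuration surface, a caller extends the allow-lists through policy.LogOptions when constructing the policy; the header and query parameter names below are examples only. Anything not on the merged allow-lists is written as REDACTED.

    lp := runtime.NewLogPolicy(&policy.LogOptions{
        IncludeBody:        true,
        AllowedHeaders:     []string{"x-ms-ratelimit-remaining-requests"},
        AllowedQueryParams: []string{"timeout"},
    })
    _ = lp // in practice this is wired into pipeline construction via client options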
+func (p *logPolicy) writeHeader(b *bytes.Buffer, header http.Header) { + if len(header) == 0 { + b.WriteString(" (no headers)\n") + return + } + keys := make([]string, 0, len(header)) + // Alphabetize the headers + for k := range header { + keys = append(keys, k) + } + sort.Strings(keys) + for _, k := range keys { + // don't use Get() as it will canonicalize k which might cause a mismatch + value := header[k][0] + // redact all header values not in the allow-list + if _, ok := p.allowedHeaders[strings.ToLower(k)]; !ok { + value = redactedValue + } + fmt.Fprintf(b, " %s: %+v\n", k, value) + } +} + +// returns true if the request/response body should be logged. +// this is determined by looking at the content-type header value. +func shouldLogBody(b *bytes.Buffer, contentType string) bool { + contentType = strings.ToLower(contentType) + if strings.HasPrefix(contentType, "text") || + strings.Contains(contentType, "json") || + strings.Contains(contentType, "xml") { + return true + } + fmt.Fprintf(b, " Skip logging body for %s\n", contentType) + return false +} + +// writes to a buffer, used for logging purposes +func writeReqBody(req *policy.Request, b *bytes.Buffer) error { + if req.Raw().Body == nil { + fmt.Fprint(b, " Request contained no body\n") + return nil + } + if ct := req.Raw().Header.Get(shared.HeaderContentType); !shouldLogBody(b, ct) { + return nil + } + body, err := io.ReadAll(req.Raw().Body) + if err != nil { + fmt.Fprintf(b, " Failed to read request body: %s\n", err.Error()) + return err + } + if err := req.RewindBody(); err != nil { + return err + } + logBody(b, body) + return nil +} + +// writes to a buffer, used for logging purposes +func writeRespBody(resp *http.Response, b *bytes.Buffer) error { + ct := resp.Header.Get(shared.HeaderContentType) + if ct == "" { + fmt.Fprint(b, " Response contained no body\n") + return nil + } else if !shouldLogBody(b, ct) { + return nil + } + body, err := Payload(resp) + if err != nil { + fmt.Fprintf(b, " Failed to read response body: %s\n", err.Error()) + return err + } + if len(body) > 0 { + logBody(b, body) + } else { + fmt.Fprint(b, " Response contained no body\n") + } + return nil +} + +func logBody(b *bytes.Buffer, body []byte) { + fmt.Fprintln(b, " --------------------------------------------------------------------------------") + fmt.Fprintln(b, string(body)) + fmt.Fprintln(b, " --------------------------------------------------------------------------------") +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_request_id.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_request_id.go new file mode 100644 index 000000000000..360a7f2118a3 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_request_id.go @@ -0,0 +1,34 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. 
+
+package runtime
+
+import (
+	"net/http"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/uuid"
+)
+
+type requestIDPolicy struct{}
+
+// NewRequestIDPolicy returns a policy that adds the x-ms-client-request-id header
+func NewRequestIDPolicy() policy.Policy {
+	return &requestIDPolicy{}
+}
+
+func (r *requestIDPolicy) Do(req *policy.Request) (*http.Response, error) {
+	if req.Raw().Header.Get(shared.HeaderXMSClientRequestID) == "" {
+		id, err := uuid.New()
+		if err != nil {
+			return nil, err
+		}
+		req.Raw().Header.Set(shared.HeaderXMSClientRequestID, id.String())
+	}
+
+	return req.Next()
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_retry.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_retry.go
new file mode 100644
index 000000000000..04d7bb4ecbc6
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_retry.go
@@ -0,0 +1,255 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"context"
+	"errors"
+	"io"
+	"math"
+	"math/rand"
+	"net/http"
+	"time"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/errorinfo"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/exported"
+)
+
+const (
+	defaultMaxRetries = 3
+)
+
+func setDefaults(o *policy.RetryOptions) {
+	if o.MaxRetries == 0 {
+		o.MaxRetries = defaultMaxRetries
+	} else if o.MaxRetries < 0 {
+		o.MaxRetries = 0
+	}
+
+	// SDK guidelines specify the default MaxRetryDelay is 60 seconds
+	if o.MaxRetryDelay == 0 {
+		o.MaxRetryDelay = 60 * time.Second
+	} else if o.MaxRetryDelay < 0 {
+		// not really an unlimited cap, but sufficiently large enough to be considered as such
+		o.MaxRetryDelay = math.MaxInt64
+	}
+	if o.RetryDelay == 0 {
+		o.RetryDelay = 800 * time.Millisecond
+	} else if o.RetryDelay < 0 {
+		o.RetryDelay = 0
+	}
+	if o.StatusCodes == nil {
+		// NOTE: if you change this list, you MUST update the docs in policy/policy.go
+		o.StatusCodes = []int{
+			http.StatusRequestTimeout,      // 408
+			http.StatusTooManyRequests,     // 429
+			http.StatusInternalServerError, // 500
+			http.StatusBadGateway,          // 502
+			http.StatusServiceUnavailable,  // 503
+			http.StatusGatewayTimeout,      // 504
+		}
+	}
+}
+
+func calcDelay(o policy.RetryOptions, try int32) time.Duration { // try is >=1; never 0
+	delay := time.Duration((1<<try)-1) * o.RetryDelay
+
+	// Introduce some jitter:  [0.0, 1.0) / 2 = [0.0, 0.5) + 0.8 = [0.8, 1.3)
+	delay = time.Duration(delay.Seconds() * (rand.Float64()/2 + 0.8) * float64(time.Second)) // NOTE: We want math/rand; not crypto/rand
+	if delay > o.MaxRetryDelay {
+		delay = o.MaxRetryDelay
+	}
+	return delay
+}
+
+// NewRetryPolicy creates a policy object configured using the specified options.
+// Pass nil to accept the default values; this is the same as passing a zero-value options.
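For intuition on calcDelay above: with the defaults (RetryDelay of 800ms), tries 1, 2, and 3 produce base delays of (2^1-1)*0.8s = 0.8s, (2^2-1)*0.8s = 2.4s, and (2^3-1)*0.8s = 5.6s; each is then scaled by a jitter factor in [0.8, 1.3) and capped at MaxRetryDelay.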
+func NewRetryPolicy(o *policy.RetryOptions) policy.Policy { + if o == nil { + o = &policy.RetryOptions{} + } + p := &retryPolicy{options: *o} + return p +} + +type retryPolicy struct { + options policy.RetryOptions +} + +func (p *retryPolicy) Do(req *policy.Request) (resp *http.Response, err error) { + options := p.options + // check if the retry options have been overridden for this call + if override := req.Raw().Context().Value(shared.CtxWithRetryOptionsKey{}); override != nil { + options = override.(policy.RetryOptions) + } + setDefaults(&options) + // Exponential retry algorithm: ((2 ^ attempt) - 1) * delay * random(0.8, 1.2) + // When to retry: connection failure or temporary/timeout. + var rwbody *retryableRequestBody + if req.Body() != nil { + // wrap the body so we control when it's actually closed. + // do this outside the for loop so defers don't accumulate. + rwbody = &retryableRequestBody{body: req.Body()} + defer rwbody.realClose() + } + try := int32(1) + for { + resp = nil // reset + log.Writef(log.EventRetryPolicy, "=====> Try=%d", try) + + // For each try, seek to the beginning of the Body stream. We do this even for the 1st try because + // the stream may not be at offset 0 when we first get it and we want the same behavior for the + // 1st try as for additional tries. + err = req.RewindBody() + if err != nil { + return + } + // RewindBody() restores Raw().Body to its original state, so set our rewindable after + if rwbody != nil { + req.Raw().Body = rwbody + } + + if options.TryTimeout == 0 { + clone := req.Clone(req.Raw().Context()) + resp, err = clone.Next() + } else { + // Set the per-try time for this particular retry operation and then Do the operation. + tryCtx, tryCancel := context.WithTimeout(req.Raw().Context(), options.TryTimeout) + clone := req.Clone(tryCtx) + resp, err = clone.Next() // Make the request + // if the body was already downloaded or there was an error it's safe to cancel the context now + if err != nil { + tryCancel() + } else if exported.PayloadDownloaded(resp) { + tryCancel() + } else { + // must cancel the context after the body has been read and closed + resp.Body = &contextCancelReadCloser{cf: tryCancel, body: resp.Body} + } + } + if err == nil { + log.Writef(log.EventRetryPolicy, "response %d", resp.StatusCode) + } else { + log.Writef(log.EventRetryPolicy, "error %v", err) + } + + if ctxErr := req.Raw().Context().Err(); ctxErr != nil { + // don't retry if the parent context has been cancelled or its deadline exceeded + err = ctxErr + log.Writef(log.EventRetryPolicy, "abort due to %v", err) + return + } + + // check if the error is not retriable + var nre errorinfo.NonRetriable + if errors.As(err, &nre) { + // the error says it's not retriable so don't retry + log.Writef(log.EventRetryPolicy, "non-retriable error %T", nre) + return + } + + if options.ShouldRetry != nil { + // a non-nil ShouldRetry overrides our HTTP status code check + if !options.ShouldRetry(resp, err) { + // predicate says we shouldn't retry + log.Write(log.EventRetryPolicy, "exit due to ShouldRetry") + return + } + } else if err == nil && !HasStatusCode(resp, options.StatusCodes...) { + // if there is no error and the response code isn't in the list of retry codes then we're done. 
+			log.Write(log.EventRetryPolicy, "exit due to non-retriable status code")
+			return
+		}
+
+		if try == options.MaxRetries+1 {
+			// max number of tries has been reached, don't sleep again
+			log.Writef(log.EventRetryPolicy, "MaxRetries %d exceeded", options.MaxRetries)
+			return
+		}
+
+		// use the delay from retry-after if available
+		delay := shared.RetryAfter(resp)
+		if delay <= 0 {
+			delay = calcDelay(options, try)
+		} else if delay > options.MaxRetryDelay {
+			// the retry-after delay exceeds the cap so don't retry
+			log.Writef(log.EventRetryPolicy, "Retry-After delay %s exceeds MaxRetryDelay of %s", delay, options.MaxRetryDelay)
+			return
+		}
+
+		// drain before retrying so nothing is leaked
+		Drain(resp)
+
+		log.Writef(log.EventRetryPolicy, "End Try #%d, Delay=%v", try, delay)
+		select {
+		case <-time.After(delay):
+			try++
+		case <-req.Raw().Context().Done():
+			err = req.Raw().Context().Err()
+			log.Writef(log.EventRetryPolicy, "abort due to %v", err)
+			return
+		}
+	}
+}
+
+// WithRetryOptions adds the specified RetryOptions to the parent context.
+// Use this to specify custom RetryOptions at the API-call level.
+// Deprecated: use [policy.WithRetryOptions] instead.
+func WithRetryOptions(parent context.Context, options policy.RetryOptions) context.Context {
+	return policy.WithRetryOptions(parent, options)
+}
+
+// ********** The following type/methods implement the retryableRequestBody (a ReadSeekCloser)
+
+// This struct is used when sending a body to the network
+type retryableRequestBody struct {
+	body io.ReadSeeker // Seeking is required to support retries
+}
+
+// Read reads a block of data from an inner stream and reports progress
+func (b *retryableRequestBody) Read(p []byte) (n int, err error) {
+	return b.body.Read(p)
+}
+
+func (b *retryableRequestBody) Seek(offset int64, whence int) (offsetFromStart int64, err error) {
+	return b.body.Seek(offset, whence)
+}
+
+func (b *retryableRequestBody) Close() error {
+	// We don't want the underlying transport to close the request body on transient failures so this is a nop.
+	// The retry policy closes the request body upon success.
+	return nil
+}
+
+func (b *retryableRequestBody) realClose() error {
+	if c, ok := b.body.(io.Closer); ok {
+		return c.Close()
+	}
+	return nil
+}
+
+// ********** The following type/methods implement the contextCancelReadCloser
+
+// contextCancelReadCloser combines an io.ReadCloser with a cancel func.
+// it ensures the cancel func is invoked once the body has been read and closed.
+type contextCancelReadCloser struct {
+	cf   context.CancelFunc
+	body io.ReadCloser
+}
+
+func (rc *contextCancelReadCloser) Read(p []byte) (n int, err error) {
+	return rc.body.Read(p)
+}
+
+func (rc *contextCancelReadCloser) Close() error {
+	err := rc.body.Close()
+	rc.cf()
+	return err
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_sas_credential.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_sas_credential.go
new file mode 100644
index 000000000000..ebe2b7772ba7
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_sas_credential.go
@@ -0,0 +1,47 @@
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
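A sketch of the per-call override path checked at the top of retryPolicy.Do, using the policy.WithRetryOptions helper that the deprecated runtime wrapper above forwards to; the client call is illustrative:

    ctx := policy.WithRetryOptions(context.Background(), policy.RetryOptions{
        MaxRetries:    5,
        RetryDelay:    2 * time.Second,
        MaxRetryDelay: 30 * time.Second,
    })
    _, err := client.GetWidget(ctx, "example")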
+
+package runtime
+
+import (
+	"net/http"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+)
+
+// SASCredentialPolicy authorizes requests with a [azcore.SASCredential].
+type SASCredentialPolicy struct {
+	cred   *exported.SASCredential
+	header string
+}
+
+// SASCredentialPolicyOptions contains the optional values configuring [SASCredentialPolicy].
+type SASCredentialPolicyOptions struct {
+	// placeholder for future optional values
+}
+
+// NewSASCredentialPolicy creates a new instance of [SASCredentialPolicy].
+// - cred is the [azcore.SASCredential] used to authenticate with the service
+// - header is the name of the HTTP request header in which the shared access signature is placed
+// - options contains optional configuration, pass nil to accept the default values
+func NewSASCredentialPolicy(cred *exported.SASCredential, header string, options *SASCredentialPolicyOptions) *SASCredentialPolicy {
+	return &SASCredentialPolicy{
+		cred:   cred,
+		header: header,
+	}
+}
+
+// Do implements the Do method on the [policy.Policy] interface.
+func (k *SASCredentialPolicy) Do(req *policy.Request) (*http.Response, error) {
+	// skip adding the authorization header if no SASCredential was provided.
+	// this prevents a panic that might be hard to diagnose and allows testing
+	// against http endpoints that don't require authentication.
+	if k.cred != nil {
+		if err := checkHTTPSForAuth(req); err != nil {
+			return nil, err
+		}
+		req.Raw().Header.Add(k.header, exported.SASCredentialGet(k.cred))
+	}
+	return req.Next()
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_telemetry.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_telemetry.go
new file mode 100644
index 000000000000..80a903546193
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/policy_telemetry.go
@@ -0,0 +1,83 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package runtime
+
+import (
+	"bytes"
+	"fmt"
+	"net/http"
+	"os"
+	"runtime"
+	"strings"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+)
+
+type telemetryPolicy struct {
+	telemetryValue string
+}
+
+// NewTelemetryPolicy creates a telemetry policy object that adds telemetry information to outgoing HTTP requests.
+// The format is [<application_id> ]azsdk-go-<mod>/<ver> <platform_info>.
+// Pass nil to accept the default values; this is the same as passing a zero-value options.
+func NewTelemetryPolicy(mod, ver string, o *policy.TelemetryOptions) policy.Policy {
+	if o == nil {
+		o = &policy.TelemetryOptions{}
+	}
+	tp := telemetryPolicy{}
+	if o.Disabled {
+		return &tp
+	}
+	b := &bytes.Buffer{}
+	// normalize ApplicationID
+	if o.ApplicationID != "" {
+		o.ApplicationID = strings.ReplaceAll(o.ApplicationID, " ", "/")
+		if len(o.ApplicationID) > 24 {
+			o.ApplicationID = o.ApplicationID[:24]
+		}
+		b.WriteString(o.ApplicationID)
+		b.WriteRune(' ')
+	}
+	// mod might be the fully qualified name.
in that case, we just want the package name + if i := strings.LastIndex(mod, "/"); i > -1 { + mod = mod[i+1:] + } + b.WriteString(formatTelemetry(mod, ver)) + b.WriteRune(' ') + b.WriteString(platformInfo) + tp.telemetryValue = b.String() + return &tp +} + +func formatTelemetry(comp, ver string) string { + return fmt.Sprintf("azsdk-go-%s/%s", comp, ver) +} + +func (p telemetryPolicy) Do(req *policy.Request) (*http.Response, error) { + if p.telemetryValue == "" { + return req.Next() + } + // preserve the existing User-Agent string + if ua := req.Raw().Header.Get(shared.HeaderUserAgent); ua != "" { + p.telemetryValue = fmt.Sprintf("%s %s", p.telemetryValue, ua) + } + req.Raw().Header.Set(shared.HeaderUserAgent, p.telemetryValue) + return req.Next() +} + +// NOTE: the ONLY function that should write to this variable is this func +var platformInfo = func() string { + operatingSystem := runtime.GOOS // Default OS string + switch operatingSystem { + case "windows": + operatingSystem = os.Getenv("OS") // Get more specific OS information + case "linux": // accept default OS info + case "freebsd": // accept default OS info + } + return fmt.Sprintf("(%s; %s)", runtime.Version(), operatingSystem) +}() diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/poller.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/poller.go new file mode 100644 index 000000000000..c373f68962e3 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/poller.go @@ -0,0 +1,384 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package runtime + +import ( + "context" + "encoding/json" + "errors" + "flag" + "fmt" + "net/http" + "reflect" + "strings" + "time" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/async" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/body" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/fake" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/loc" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/op" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing" + "github.com/Azure/azure-sdk-for-go/sdk/internal/poller" +) + +// FinalStateVia is the enumerated type for the possible final-state-via values. +type FinalStateVia = pollers.FinalStateVia + +const ( + // FinalStateViaAzureAsyncOp indicates the final payload comes from the Azure-AsyncOperation URL. + FinalStateViaAzureAsyncOp = pollers.FinalStateViaAzureAsyncOp + + // FinalStateViaLocation indicates the final payload comes from the Location URL. + FinalStateViaLocation = pollers.FinalStateViaLocation + + // FinalStateViaOriginalURI indicates the final payload comes from the original URL. + FinalStateViaOriginalURI = pollers.FinalStateViaOriginalURI + + // FinalStateViaOpLocation indicates the final payload comes from the Operation-Location URL. + FinalStateViaOpLocation = pollers.FinalStateViaOpLocation +) + +// NewPollerOptions contains the optional parameters for NewPoller. +type NewPollerOptions[T any] struct { + // FinalStateVia contains the final-state-via value for the LRO. 
+ FinalStateVia FinalStateVia + + // Response contains a preconstructed response type. + // The final payload will be unmarshaled into it and returned. + Response *T + + // Handler[T] contains a custom polling implementation. + Handler PollingHandler[T] + + // Tracer contains the Tracer from the client that's creating the Poller. + Tracer tracing.Tracer +} + +// NewPoller creates a Poller based on the provided initial response. +func NewPoller[T any](resp *http.Response, pl exported.Pipeline, options *NewPollerOptions[T]) (*Poller[T], error) { + if options == nil { + options = &NewPollerOptions[T]{} + } + result := options.Response + if result == nil { + result = new(T) + } + if options.Handler != nil { + return &Poller[T]{ + op: options.Handler, + resp: resp, + result: result, + tracer: options.Tracer, + }, nil + } + + defer resp.Body.Close() + // this is a back-stop in case the swagger is incorrect (i.e. missing one or more status codes for success). + // ideally the codegen should return an error if the initial response failed and not even create a poller. + if !poller.StatusCodeValid(resp) { + return nil, errors.New("the operation failed or was cancelled") + } + + // determine the polling method + var opr PollingHandler[T] + var err error + if fake.Applicable(resp) { + opr, err = fake.New[T](pl, resp) + } else if async.Applicable(resp) { + // async poller must be checked first as it can also have a location header + opr, err = async.New[T](pl, resp, options.FinalStateVia) + } else if op.Applicable(resp) { + // op poller must be checked before loc as it can also have a location header + opr, err = op.New[T](pl, resp, options.FinalStateVia) + } else if loc.Applicable(resp) { + opr, err = loc.New[T](pl, resp) + } else if body.Applicable(resp) { + // must test body poller last as it's a subset of the other pollers. + // TODO: this is ambiguous for PATCH/PUT if it returns a 200 with no polling headers (sync completion) + opr, err = body.New[T](pl, resp) + } else if m := resp.Request.Method; resp.StatusCode == http.StatusAccepted && (m == http.MethodDelete || m == http.MethodPost) { + // if we get here it means we have a 202 with no polling headers. + // for DELETE and POST this is a hard error per ARM RPC spec. + return nil, errors.New("response is missing polling URL") + } else { + opr, err = pollers.NewNopPoller[T](resp) + } + + if err != nil { + return nil, err + } + return &Poller[T]{ + op: opr, + resp: resp, + result: result, + tracer: options.Tracer, + }, nil +} + +// NewPollerFromResumeTokenOptions contains the optional parameters for NewPollerFromResumeToken. +type NewPollerFromResumeTokenOptions[T any] struct { + // Response contains a preconstructed response type. + // The final payload will be unmarshaled into it and returned. + Response *T + + // Handler[T] contains a custom polling implementation. + Handler PollingHandler[T] + + // Tracer contains the Tracer from the client that's creating the Poller. + Tracer tracing.Tracer +} + +// NewPollerFromResumeToken creates a Poller from a resume token string. 
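For orientation, a hypothetical Begin-style client method feeding its initial response to NewPoller; the Client type, its pipeline field, and WidgetResponse are illustrative assumptions:

    func (c *Client) BeginCreateWidget(ctx context.Context) (*runtime.Poller[WidgetResponse], error) {
        req, err := runtime.NewRequest(ctx, http.MethodPut, "https://example.com/widgets/1")
        if err != nil {
            return nil, err
        }
        resp, err := c.pl.Do(req)
        if err != nil {
            return nil, err
        }
        return runtime.NewPoller[WidgetResponse](resp, c.pl, nil)
    }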
+func NewPollerFromResumeToken[T any](token string, pl exported.Pipeline, options *NewPollerFromResumeTokenOptions[T]) (*Poller[T], error) { + if options == nil { + options = &NewPollerFromResumeTokenOptions[T]{} + } + result := options.Response + if result == nil { + result = new(T) + } + + if err := pollers.IsTokenValid[T](token); err != nil { + return nil, err + } + raw, err := pollers.ExtractToken(token) + if err != nil { + return nil, err + } + var asJSON map[string]interface{} + if err := json.Unmarshal(raw, &asJSON); err != nil { + return nil, err + } + + opr := options.Handler + // now rehydrate the poller based on the encoded poller type + if fake.CanResume(asJSON) { + opr, _ = fake.New[T](pl, nil) + } else if opr != nil { + log.Writef(log.EventLRO, "Resuming custom poller %T.", opr) + } else if async.CanResume(asJSON) { + opr, _ = async.New[T](pl, nil, "") + } else if body.CanResume(asJSON) { + opr, _ = body.New[T](pl, nil) + } else if loc.CanResume(asJSON) { + opr, _ = loc.New[T](pl, nil) + } else if op.CanResume(asJSON) { + opr, _ = op.New[T](pl, nil, "") + } else { + return nil, fmt.Errorf("unhandled poller token %s", string(raw)) + } + if err := json.Unmarshal(raw, &opr); err != nil { + return nil, err + } + return &Poller[T]{ + op: opr, + result: result, + tracer: options.Tracer, + }, nil +} + +// PollingHandler[T] abstracts the differences among poller implementations. +type PollingHandler[T any] interface { + // Done returns true if the LRO has reached a terminal state. + Done() bool + + // Poll fetches the latest state of the LRO. + Poll(context.Context) (*http.Response, error) + + // Result is called once the LRO has reached a terminal state. It populates the out parameter + // with the result of the operation. + Result(ctx context.Context, out *T) error +} + +// Poller encapsulates a long-running operation, providing polling facilities until the operation reaches a terminal state. +type Poller[T any] struct { + op PollingHandler[T] + resp *http.Response + err error + result *T + tracer tracing.Tracer + done bool +} + +// PollUntilDoneOptions contains the optional values for the Poller[T].PollUntilDone() method. +type PollUntilDoneOptions struct { + // Frequency is the time to wait between polling intervals in absence of a Retry-After header. Allowed minimum is one second. + // Pass zero to accept the default value (30s). + Frequency time.Duration +} + +// PollUntilDone will poll the service endpoint until a terminal state is reached, an error is received, or the context expires. +// It internally uses Poll(), Done(), and Result() in its polling loop, sleeping for the specified duration between intervals. +// options: pass nil to accept the default values. +// NOTE: the default polling frequency is 30 seconds which works well for most operations. However, some operations might +// benefit from a shorter or longer duration. 
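Typical consumption of the poller, assuming the BeginCreateWidget sketch earlier; passing nil options accepts the 30-second default frequency:

    poller, err := client.BeginCreateWidget(ctx)
    if err != nil {
        return err
    }
    res, err := poller.PollUntilDone(ctx, &runtime.PollUntilDoneOptions{
        Frequency: 5 * time.Second, // used only when the service sends no Retry-After header
    })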
+func (p *Poller[T]) PollUntilDone(ctx context.Context, options *PollUntilDoneOptions) (res T, err error) { + if options == nil { + options = &PollUntilDoneOptions{} + } + cp := *options + if cp.Frequency == 0 { + cp.Frequency = 30 * time.Second + } + + ctx, endSpan := StartSpan(ctx, fmt.Sprintf("%s.PollUntilDone", shortenTypeName(reflect.TypeOf(*p).Name())), p.tracer, nil) + defer func() { endSpan(err) }() + + // skip the floor check when executing tests so they don't take so long + if isTest := flag.Lookup("test.v"); isTest == nil && cp.Frequency < time.Second { + err = errors.New("polling frequency minimum is one second") + return + } + + start := time.Now() + logPollUntilDoneExit := func(v interface{}) { + log.Writef(log.EventLRO, "END PollUntilDone() for %T: %v, total time: %s", p.op, v, time.Since(start)) + } + log.Writef(log.EventLRO, "BEGIN PollUntilDone() for %T", p.op) + if p.resp != nil { + // initial check for a retry-after header existing on the initial response + if retryAfter := shared.RetryAfter(p.resp); retryAfter > 0 { + log.Writef(log.EventLRO, "initial Retry-After delay for %s", retryAfter.String()) + if err = shared.Delay(ctx, retryAfter); err != nil { + logPollUntilDoneExit(err) + return + } + } + } + // begin polling the endpoint until a terminal state is reached + for { + var resp *http.Response + resp, err = p.Poll(ctx) + if err != nil { + logPollUntilDoneExit(err) + return + } + if p.Done() { + logPollUntilDoneExit("succeeded") + res, err = p.Result(ctx) + return + } + d := cp.Frequency + if retryAfter := shared.RetryAfter(resp); retryAfter > 0 { + log.Writef(log.EventLRO, "Retry-After delay for %s", retryAfter.String()) + d = retryAfter + } else { + log.Writef(log.EventLRO, "delay for %s", d.String()) + } + if err = shared.Delay(ctx, d); err != nil { + logPollUntilDoneExit(err) + return + } + } +} + +// Poll fetches the latest state of the LRO. It returns an HTTP response or error. +// If Poll succeeds, the poller's state is updated and the HTTP response is returned. +// If Poll fails, the poller's state is unmodified and the error is returned. +// Calling Poll on an LRO that has reached a terminal state will return the last HTTP response. +func (p *Poller[T]) Poll(ctx context.Context) (resp *http.Response, err error) { + if p.Done() { + // the LRO has reached a terminal state, don't poll again + resp = p.resp + return + } + + ctx, endSpan := StartSpan(ctx, fmt.Sprintf("%s.Poll", shortenTypeName(reflect.TypeOf(*p).Name())), p.tracer, nil) + defer func() { endSpan(err) }() + + resp, err = p.op.Poll(ctx) + if err != nil { + return + } + p.resp = resp + return +} + +// Done returns true if the LRO has reached a terminal state. +// Once a terminal state is reached, call Result(). +func (p *Poller[T]) Done() bool { + return p.op.Done() +} + +// Result returns the result of the LRO and is meant to be used in conjunction with Poll and Done. +// If the LRO completed successfully, a populated instance of T is returned. +// If the LRO failed or was canceled, an *azcore.ResponseError error is returned. +// Calling this on an LRO in a non-terminal state will return an error. 
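Callers needing finer control can drive the same state machine manually; a sketch (production code should honor Retry-After as PollUntilDone does). The ResumeToken/NewPollerFromResumeToken pair in this file supports handing such a loop off to another process.

    for !poller.Done() {
        if _, err := poller.Poll(ctx); err != nil {
            return err
        }
        time.Sleep(5 * time.Second)
    }
    res, err := poller.Result(ctx)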
+func (p *Poller[T]) Result(ctx context.Context) (res T, err error) { + if !p.Done() { + err = errors.New("poller is in a non-terminal state") + return + } + if p.done { + // the result has already been retrieved, return the cached value + if p.err != nil { + err = p.err + return + } + res = *p.result + return + } + + ctx, endSpan := StartSpan(ctx, fmt.Sprintf("%s.Result", shortenTypeName(reflect.TypeOf(*p).Name())), p.tracer, nil) + defer func() { endSpan(err) }() + + err = p.op.Result(ctx, p.result) + var respErr *exported.ResponseError + if errors.As(err, &respErr) { + // the LRO failed. record the error + p.err = err + } else if err != nil { + // the call to Result failed, don't cache anything in this case + return + } + p.done = true + if p.err != nil { + err = p.err + return + } + res = *p.result + return +} + +// ResumeToken returns a value representing the poller that can be used to resume +// the LRO at a later time. ResumeTokens are unique per service operation. +// The token's format should be considered opaque and is subject to change. +// Calling this on an LRO in a terminal state will return an error. +func (p *Poller[T]) ResumeToken() (string, error) { + if p.Done() { + return "", errors.New("poller is in a terminal state") + } + tk, err := pollers.NewResumeToken[T](p.op) + if err != nil { + return "", err + } + return tk, err +} + +// extracts the type name from the string returned from reflect.Value.Name() +func shortenTypeName(s string) string { + // the value is formatted as follows + // Poller[module/Package.Type].Method + // we want to shorten the generic type parameter string to Type + // anything we don't recognize will be left as-is + begin := strings.Index(s, "[") + end := strings.Index(s, "]") + if begin == -1 || end == -1 { + return s + } + + typeName := s[begin+1 : end] + if i := strings.LastIndex(typeName, "."); i > -1 { + typeName = typeName[i+1:] + } + return s[:begin+1] + typeName + s[end:] +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/request.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/request.go new file mode 100644 index 000000000000..5d1569c8dd2b --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/request.go @@ -0,0 +1,179 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package runtime + +import ( + "bytes" + "context" + "encoding/json" + "encoding/xml" + "fmt" + "io" + "mime/multipart" + "net/url" + "path" + "strings" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" +) + +// Base64Encoding is usesd to specify which base-64 encoder/decoder to use when +// encoding/decoding a slice of bytes to/from a string. +type Base64Encoding = exported.Base64Encoding + +const ( + // Base64StdFormat uses base64.StdEncoding for encoding and decoding payloads. + Base64StdFormat Base64Encoding = exported.Base64StdFormat + + // Base64URLFormat uses base64.RawURLEncoding for encoding and decoding payloads. + Base64URLFormat Base64Encoding = exported.Base64URLFormat +) + +// NewRequest creates a new policy.Request with the specified input. +// The endpoint MUST be properly encoded before calling this function. 
+func NewRequest(ctx context.Context, httpMethod string, endpoint string) (*policy.Request, error) { + return exported.NewRequest(ctx, httpMethod, endpoint) +} + +// EncodeQueryParams will parse and encode any query parameters in the specified URL. +func EncodeQueryParams(u string) (string, error) { + before, after, found := strings.Cut(u, "?") + if !found { + return u, nil + } + qp, err := url.ParseQuery(after) + if err != nil { + return "", err + } + return before + "?" + qp.Encode(), nil +} + +// JoinPaths concatenates multiple URL path segments into one path, +// inserting path separation characters as required. JoinPaths will preserve +// query parameters in the root path +func JoinPaths(root string, paths ...string) string { + if len(paths) == 0 { + return root + } + + qps := "" + if strings.Contains(root, "?") { + splitPath := strings.Split(root, "?") + root, qps = splitPath[0], splitPath[1] + } + + p := path.Join(paths...) + // path.Join will remove any trailing slashes. + // if one was provided, preserve it. + if strings.HasSuffix(paths[len(paths)-1], "/") && !strings.HasSuffix(p, "/") { + p += "/" + } + + if qps != "" { + p = p + "?" + qps + } + + if strings.HasSuffix(root, "/") && strings.HasPrefix(p, "/") { + root = root[:len(root)-1] + } else if !strings.HasSuffix(root, "/") && !strings.HasPrefix(p, "/") { + p = "/" + p + } + return root + p +} + +// EncodeByteArray will base-64 encode the byte slice v. +func EncodeByteArray(v []byte, format Base64Encoding) string { + return exported.EncodeByteArray(v, format) +} + +// MarshalAsByteArray will base-64 encode the byte slice v, then calls SetBody. +// The encoded value is treated as a JSON string. +func MarshalAsByteArray(req *policy.Request, v []byte, format Base64Encoding) error { + // send as a JSON string + encode := fmt.Sprintf("\"%s\"", EncodeByteArray(v, format)) + // tsp generated code can set Content-Type so we must prefer that + return exported.SetBody(req, exported.NopCloser(strings.NewReader(encode)), shared.ContentTypeAppJSON, false) +} + +// MarshalAsJSON calls json.Marshal() to get the JSON encoding of v then calls SetBody. +func MarshalAsJSON(req *policy.Request, v interface{}) error { + b, err := json.Marshal(v) + if err != nil { + return fmt.Errorf("error marshalling type %T: %s", v, err) + } + // tsp generated code can set Content-Type so we must prefer that + return exported.SetBody(req, exported.NopCloser(bytes.NewReader(b)), shared.ContentTypeAppJSON, false) +} + +// MarshalAsXML calls xml.Marshal() to get the XML encoding of v then calls SetBody. +func MarshalAsXML(req *policy.Request, v interface{}) error { + b, err := xml.Marshal(v) + if err != nil { + return fmt.Errorf("error marshalling type %T: %s", v, err) + } + // inclue the XML header as some services require it + b = []byte(xml.Header + string(b)) + return req.SetBody(exported.NopCloser(bytes.NewReader(b)), shared.ContentTypeAppXML) +} + +// SetMultipartFormData writes the specified keys/values as multi-part form +// fields with the specified value. File content must be specified as a ReadSeekCloser. +// All other values are treated as string values. 
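A concrete example of the query-preserving behavior of JoinPaths above:

    u := runtime.JoinPaths("https://example.com/api?api-version=2023-01-01", "widgets", "123")
    // u == "https://example.com/api/widgets/123?api-version=2023-01-01"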
+func SetMultipartFormData(req *policy.Request, formData map[string]interface{}) error { + body := bytes.Buffer{} + writer := multipart.NewWriter(&body) + + writeContent := func(fieldname, filename string, src io.Reader) error { + fd, err := writer.CreateFormFile(fieldname, filename) + if err != nil { + return err + } + // copy the data to the form file + if _, err = io.Copy(fd, src); err != nil { + return err + } + return nil + } + + for k, v := range formData { + if rsc, ok := v.(io.ReadSeekCloser); ok { + if err := writeContent(k, k, rsc); err != nil { + return err + } + continue + } else if rscs, ok := v.([]io.ReadSeekCloser); ok { + for _, rsc := range rscs { + if err := writeContent(k, k, rsc); err != nil { + return err + } + } + continue + } + // ensure the value is in string format + s, ok := v.(string) + if !ok { + s = fmt.Sprintf("%v", v) + } + if err := writer.WriteField(k, s); err != nil { + return err + } + } + if err := writer.Close(); err != nil { + return err + } + return req.SetBody(exported.NopCloser(bytes.NewReader(body.Bytes())), writer.FormDataContentType()) +} + +// SkipBodyDownload will disable automatic downloading of the response body. +func SkipBodyDownload(req *policy.Request) { + req.SetOperationValue(bodyDownloadPolicyOpValues{Skip: true}) +} + +// CtxAPINameKey is used as a context key for adding/retrieving the API name. +type CtxAPINameKey = shared.CtxAPINameKey diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/response.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/response.go new file mode 100644 index 000000000000..003c875b1f56 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/response.go @@ -0,0 +1,109 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package runtime + +import ( + "bytes" + "encoding/json" + "encoding/xml" + "fmt" + "io" + "net/http" + + azexported "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported" + "github.com/Azure/azure-sdk-for-go/sdk/internal/exported" +) + +// Payload reads and returns the response body or an error. +// On a successful read, the response body is cached. +// Subsequent reads will access the cached value. +func Payload(resp *http.Response) ([]byte, error) { + return exported.Payload(resp, nil) +} + +// HasStatusCode returns true if the Response's status code is one of the specified values. +func HasStatusCode(resp *http.Response, statusCodes ...int) bool { + return exported.HasStatusCode(resp, statusCodes...) +} + +// UnmarshalAsByteArray will base-64 decode the received payload and place the result into the value pointed to by v. +func UnmarshalAsByteArray(resp *http.Response, v *[]byte, format Base64Encoding) error { + p, err := Payload(resp) + if err != nil { + return err + } + return DecodeByteArray(string(p), v, format) +} + +// UnmarshalAsJSON calls json.Unmarshal() to unmarshal the received payload into the value pointed to by v. 
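A caller-side sketch of SetMultipartFormData, pairing it with the streaming.NopCloser helper added later in this patch; the field names are illustrative:

    err := runtime.SetMultipartFormData(req, map[string]interface{}{
        "file": streaming.NopCloser(strings.NewReader("file contents")),
        "name": "example", // non-stream values are written as plain form fields
    })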
+func UnmarshalAsJSON(resp *http.Response, v interface{}) error { + payload, err := Payload(resp) + if err != nil { + return err + } + // TODO: verify early exit is correct + if len(payload) == 0 { + return nil + } + err = removeBOM(resp) + if err != nil { + return err + } + err = json.Unmarshal(payload, v) + if err != nil { + err = fmt.Errorf("unmarshalling type %T: %s", v, err) + } + return err +} + +// UnmarshalAsXML calls xml.Unmarshal() to unmarshal the received payload into the value pointed to by v. +func UnmarshalAsXML(resp *http.Response, v interface{}) error { + payload, err := Payload(resp) + if err != nil { + return err + } + // TODO: verify early exit is correct + if len(payload) == 0 { + return nil + } + err = removeBOM(resp) + if err != nil { + return err + } + err = xml.Unmarshal(payload, v) + if err != nil { + err = fmt.Errorf("unmarshalling type %T: %s", v, err) + } + return err +} + +// Drain reads the response body to completion then closes it. The bytes read are discarded. +func Drain(resp *http.Response) { + if resp != nil && resp.Body != nil { + _, _ = io.Copy(io.Discard, resp.Body) + resp.Body.Close() + } +} + +// removeBOM removes any byte-order mark prefix from the payload if present. +func removeBOM(resp *http.Response) error { + _, err := exported.Payload(resp, &exported.PayloadOptions{ + BytesModifier: func(b []byte) []byte { + // UTF8 + return bytes.TrimPrefix(b, []byte("\xef\xbb\xbf")) + }, + }) + if err != nil { + return err + } + return nil +} + +// DecodeByteArray will base-64 decode the provided string into v. +func DecodeByteArray(s string, v *[]byte, format Base64Encoding) error { + return azexported.DecodeByteArray(s, v, format) +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/transport_default_dialer_other.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/transport_default_dialer_other.go new file mode 100644 index 000000000000..1c75d771f2e4 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/transport_default_dialer_other.go @@ -0,0 +1,15 @@ +//go:build !wasm + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package runtime + +import ( + "context" + "net" +) + +func defaultTransportDialContext(dialer *net.Dialer) func(context.Context, string, string) (net.Conn, error) { + return dialer.DialContext +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/transport_default_dialer_wasm.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/transport_default_dialer_wasm.go new file mode 100644 index 000000000000..3dc9eeecddf6 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/transport_default_dialer_wasm.go @@ -0,0 +1,15 @@ +//go:build (js && wasm) || wasip1 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. 
+ +package runtime + +import ( + "context" + "net" +) + +func defaultTransportDialContext(dialer *net.Dialer) func(context.Context, string, string) (net.Conn, error) { + return nil +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/transport_default_http_client.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/transport_default_http_client.go new file mode 100644 index 000000000000..2124c1d48b9a --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime/transport_default_http_client.go @@ -0,0 +1,48 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package runtime + +import ( + "crypto/tls" + "net" + "net/http" + "time" + + "golang.org/x/net/http2" +) + +var defaultHTTPClient *http.Client + +func init() { + defaultTransport := &http.Transport{ + Proxy: http.ProxyFromEnvironment, + DialContext: defaultTransportDialContext(&net.Dialer{ + Timeout: 30 * time.Second, + KeepAlive: 30 * time.Second, + }), + ForceAttemptHTTP2: true, + MaxIdleConns: 100, + MaxIdleConnsPerHost: 10, + IdleConnTimeout: 90 * time.Second, + TLSHandshakeTimeout: 10 * time.Second, + ExpectContinueTimeout: 1 * time.Second, + TLSClientConfig: &tls.Config{ + MinVersion: tls.VersionTLS12, + Renegotiation: tls.RenegotiateFreelyAsClient, + }, + } + // TODO: evaluate removing this once https://github.com/golang/go/issues/59690 has been fixed + if http2Transport, err := http2.ConfigureTransports(defaultTransport); err == nil { + // if the connection has been idle for 10 seconds, send a ping frame for a health check + http2Transport.ReadIdleTimeout = 10 * time.Second + // if there's no response to the ping within the timeout, the connection will be closed + http2Transport.PingTimeout = 5 * time.Second + } + defaultHTTPClient = &http.Client{ + Transport: defaultTransport, + } +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming/doc.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming/doc.go new file mode 100644 index 000000000000..cadaef3d5842 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming/doc.go @@ -0,0 +1,9 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright 2017 Microsoft Corporation. All rights reserved. +// Use of this source code is governed by an MIT +// license that can be found in the LICENSE file. + +// Package streaming contains helpers for streaming IO operations and progress reporting. +package streaming diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming/progress.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming/progress.go new file mode 100644 index 000000000000..fbcd48311b85 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming/progress.go @@ -0,0 +1,75 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package streaming + +import ( + "io" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported" +) + +type progress struct { + rc io.ReadCloser + rsc io.ReadSeekCloser + pr func(bytesTransferred int64) + offset int64 +} + +// NopCloser returns a ReadSeekCloser with a no-op close method wrapping the provided io.ReadSeeker. 
+// In addition to adding a Close method to an io.ReadSeeker, this can also be used to wrap an
+// io.ReadSeekCloser with a no-op Close method to allow explicit control of when the io.ReadSeekCloser
+// has its underlying stream closed.
+func NopCloser(rs io.ReadSeeker) io.ReadSeekCloser {
+	return exported.NopCloser(rs)
+}
+
+// NewRequestProgress adds progress reporting to an HTTP request's body stream.
+func NewRequestProgress(body io.ReadSeekCloser, pr func(bytesTransferred int64)) io.ReadSeekCloser {
+	return &progress{
+		rc:     body,
+		rsc:    body,
+		pr:     pr,
+		offset: 0,
+	}
+}
+
+// NewResponseProgress adds progress reporting to an HTTP response's body stream.
+func NewResponseProgress(body io.ReadCloser, pr func(bytesTransferred int64)) io.ReadCloser {
+	return &progress{
+		rc:     body,
+		rsc:    nil,
+		pr:     pr,
+		offset: 0,
+	}
+}
+
+// Read reads a block of data from an inner stream and reports progress
+func (p *progress) Read(b []byte) (n int, err error) {
+	n, err = p.rc.Read(b)
+	if err != nil && err != io.EOF {
+		return
+	}
+	p.offset += int64(n)
+	// Invokes the user's callback method to report progress
+	p.pr(p.offset)
+	return
+}
+
+// Seek only expects an offset of zero from the beginning of the stream.
+func (p *progress) Seek(offset int64, whence int) (int64, error) {
+	// This should only ever be called with offset = 0 and whence = io.SeekStart
+	n, err := p.rsc.Seek(offset, whence)
+	if err == nil {
+		p.offset = int64(n)
+	}
+	return n, err
+}
+
+// Close closes the inner stream; for streams wrapped with NopCloser this is a no-op.
+func (p *progress) Close() error {
+	return p.rc.Close()
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/to/doc.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/to/doc.go
new file mode 100644
index 000000000000..faa98c9dc514
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/to/doc.go
@@ -0,0 +1,9 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright 2017 Microsoft Corporation. All rights reserved.
+// Use of this source code is governed by an MIT
+// license that can be found in the LICENSE file.
+
+// Package to contains various type-conversion helper functions.
+package to
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/to/to.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/to/to.go
new file mode 100644
index 000000000000..e0e4817b90d1
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/to/to.go
@@ -0,0 +1,21 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package to
+
+// Ptr returns a pointer to the provided value.
+func Ptr[T any](v T) *T {
+	return &v
+}
+
+// SliceOfPtrs returns a slice of *T from the specified values.
+func SliceOfPtrs[T any](vv ...T) []*T {
+	slc := make([]*T, len(vv))
+	for i := range vv {
+		slc[i] = Ptr(vv[i])
+	}
+	return slc
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing/constants.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing/constants.go
new file mode 100644
index 000000000000..80282d4ab0a6
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing/constants.go
@@ -0,0 +1,41 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. + +package tracing + +// SpanKind represents the role of a Span inside a Trace. Often, this defines how a Span will be processed and visualized by various backends. +type SpanKind int + +const ( + // SpanKindInternal indicates the span represents an internal operation within an application. + SpanKindInternal SpanKind = 1 + + // SpanKindServer indicates the span covers server-side handling of a request. + SpanKindServer SpanKind = 2 + + // SpanKindClient indicates the span describes a request to a remote service. + SpanKindClient SpanKind = 3 + + // SpanKindProducer indicates the span was created by a messaging producer. + SpanKindProducer SpanKind = 4 + + // SpanKindConsumer indicates the span was created by a messaging consumer. + SpanKindConsumer SpanKind = 5 +) + +// SpanStatus represents the status of a span. +type SpanStatus int + +const ( + // SpanStatusUnset is the default status code. + SpanStatusUnset SpanStatus = 0 + + // SpanStatusError indicates the operation contains an error. + SpanStatusError SpanStatus = 1 + + // SpanStatusOK indicates the operation completed successfully. + SpanStatusOK SpanStatus = 2 +) diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing/tracing.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing/tracing.go new file mode 100644 index 000000000000..1ade7c560ff1 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing/tracing.go @@ -0,0 +1,191 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +// Package tracing contains the definitions needed to support distributed tracing. +package tracing + +import ( + "context" +) + +// ProviderOptions contains the optional values when creating a Provider. +type ProviderOptions struct { + // for future expansion +} + +// NewProvider creates a new Provider with the specified values. +// - newTracerFn is the underlying implementation for creating Tracer instances +// - options contains optional values; pass nil to accept the default value +func NewProvider(newTracerFn func(name, version string) Tracer, options *ProviderOptions) Provider { + return Provider{ + newTracerFn: newTracerFn, + } +} + +// Provider is the factory that creates Tracer instances. +// It defaults to a no-op provider. +type Provider struct { + newTracerFn func(name, version string) Tracer +} + +// NewTracer creates a new Tracer for the specified module name and version. +// - module - the fully qualified name of the module +// - version - the version of the module +func (p Provider) NewTracer(module, version string) (tracer Tracer) { + if p.newTracerFn != nil { + tracer = p.newTracerFn(module, version) + } + return +} + +///////////////////////////////////////////////////////////////////////////////////////////////////////////// + +// TracerOptions contains the optional values when creating a Tracer. +type TracerOptions struct { + // SpanFromContext contains the implementation for the Tracer.SpanFromContext method. + SpanFromContext func(context.Context) Span +} + +// NewTracer creates a Tracer with the specified values. 
+// - newSpanFn is the underlying implementation for creating Span instances
+// - options contains optional values; pass nil to accept the default value
+func NewTracer(newSpanFn func(ctx context.Context, spanName string, options *SpanOptions) (context.Context, Span), options *TracerOptions) Tracer {
+	if options == nil {
+		options = &TracerOptions{}
+	}
+	return Tracer{
+		newSpanFn:         newSpanFn,
+		spanFromContextFn: options.SpanFromContext,
+	}
+}
+
+// Tracer is the factory that creates Span instances.
+type Tracer struct {
+	attrs             []Attribute
+	newSpanFn         func(ctx context.Context, spanName string, options *SpanOptions) (context.Context, Span)
+	spanFromContextFn func(ctx context.Context) Span
+}
+
+// Start creates a new span and a context.Context that contains it.
+// - ctx is the parent context for this span. If it contains a Span, the newly created span will be a child of that span, else it will be a root span
+// - spanName identifies the span within a trace; it's typically the fully qualified API name
+// - options contains optional values for the span; pass nil to accept any defaults
+func (t Tracer) Start(ctx context.Context, spanName string, options *SpanOptions) (context.Context, Span) {
+	if t.newSpanFn != nil {
+		opts := SpanOptions{}
+		if options != nil {
+			opts = *options
+		}
+		opts.Attributes = append(opts.Attributes, t.attrs...)
+		return t.newSpanFn(ctx, spanName, &opts)
+	}
+	return ctx, Span{}
+}
+
+// SetAttributes sets attrs to be applied to each Span. If a key from attrs
+// already exists for an attribute of the Span it will be overwritten with
+// the value contained in attrs.
+func (t *Tracer) SetAttributes(attrs ...Attribute) {
+	t.attrs = append(t.attrs, attrs...)
+}
+
+// Enabled returns true if this Tracer is capable of creating Spans.
+func (t Tracer) Enabled() bool {
+	return t.newSpanFn != nil
+}
+
+// SpanFromContext returns the Span associated with the current context.
+// If the provided context has no Span, a no-op Span is returned.
+func (t Tracer) SpanFromContext(ctx context.Context) Span {
+	if t.spanFromContextFn != nil {
+		return t.spanFromContextFn(ctx)
+	}
+	return Span{}
+}
+
+// SpanOptions contains optional settings for creating a span.
+type SpanOptions struct {
+	// Kind indicates the kind of Span.
+	Kind SpanKind
+
+	// Attributes contains key-value pairs of attributes for the span.
+	Attributes []Attribute
+}
+
+/////////////////////////////////////////////////////////////////////////////////////////////////////////////
+
+// SpanImpl abstracts the underlying implementation for Span,
+// allowing it to work with various tracing implementations.
+// Any zero-values will have their default, no-op behavior.
+type SpanImpl struct {
+	// End contains the implementation for the Span.End method.
+	End func()
+
+	// SetAttributes contains the implementation for the Span.SetAttributes method.
+	SetAttributes func(...Attribute)
+
+	// AddEvent contains the implementation for the Span.AddEvent method.
+	AddEvent func(string, ...Attribute)
+
+	// SetStatus contains the implementation for the Span.SetStatus method.
+	SetStatus func(SpanStatus, string)
+}
+
+// NewSpan creates a Span with the specified implementation.
+func NewSpan(impl SpanImpl) Span {
+	return Span{
+		impl: impl,
+	}
+}
+
+// Span is a single unit of a trace. A trace can contain multiple spans.
+// A zero-value Span provides a no-op implementation.
+type Span struct {
+	impl SpanImpl
+}
+
+// End terminates the span and MUST be called before the span leaves scope.
+// Any further updates to the span will be ignored after End is called. +func (s Span) End() { + if s.impl.End != nil { + s.impl.End() + } +} + +// SetAttributes sets the specified attributes on the Span. +// Any existing attributes with the same keys will have their values overwritten. +func (s Span) SetAttributes(attrs ...Attribute) { + if s.impl.SetAttributes != nil { + s.impl.SetAttributes(attrs...) + } +} + +// AddEvent adds a named event with an optional set of attributes to the span. +func (s Span) AddEvent(name string, attrs ...Attribute) { + if s.impl.AddEvent != nil { + s.impl.AddEvent(name, attrs...) + } +} + +// SetStatus sets the status on the span along with a description. +func (s Span) SetStatus(code SpanStatus, desc string) { + if s.impl.SetStatus != nil { + s.impl.SetStatus(code, desc) + } +} + +///////////////////////////////////////////////////////////////////////////////////////////////////////////// + +// Attribute is a key-value pair. +type Attribute struct { + // Key is the name of the attribute. + Key string + + // Value is the attribute's value. + // Types that are natively supported include int64, float64, int, bool, string. + // Any other type will be formatted per rules of fmt.Sprintf("%v"). + Value any +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/CHANGELOG.md b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/CHANGELOG.md new file mode 100644 index 000000000000..7ea119ab30dd --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/CHANGELOG.md @@ -0,0 +1,457 @@ +# Release History + +## 1.4.0 (2023-10-10) + +### Bugs Fixed +* `ManagedIdentityCredential` will now retry when IMDS responds 410 or 503 + +## 1.4.0-beta.5 (2023-09-12) + +### Features Added +* Service principal credentials can request CAE tokens + +### Breaking Changes +> These changes affect only code written against a beta version such as v1.4.0-beta.4 +* Whether `GetToken` requests a CAE token is now determined by `TokenRequestOptions.EnableCAE`. Azure + SDK clients which support CAE will set this option automatically. Credentials no longer request CAE + tokens by default or observe the environment variable "AZURE_IDENTITY_DISABLE_CP1". 
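+  As a sketch (assuming `policy` is `github.com/Azure/azure-sdk-for-go/sdk/azcore/policy`), an
+  application opting in per request might look like:
+  ```go
+  tk, err := cred.GetToken(ctx, policy.TokenRequestOptions{
+      Scopes:    []string{"https://management.azure.com/.default"},
+      EnableCAE: true, // request a token that supports Continuous Access Evaluation
+  })
+  ```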
+ +### Bugs Fixed +* Credential chains such as `DefaultAzureCredential` now try their next credential, if any, when + managed identity authentication fails in a Docker Desktop container + ([#21417](https://github.com/Azure/azure-sdk-for-go/issues/21417)) + +## 1.4.0-beta.4 (2023-08-16) + +### Other Changes +* Upgraded dependencies + +## 1.3.1 (2023-08-16) + +### Other Changes +* Upgraded dependencies + +## 1.4.0-beta.3 (2023-08-08) + +### Bugs Fixed +* One invocation of `AzureCLICredential.GetToken()` and `OnBehalfOfCredential.GetToken()` + can no longer make two authentication attempts + +## 1.4.0-beta.2 (2023-07-14) + +### Other Changes +* `DefaultAzureCredentialOptions.TenantID` applies to workload identity authentication +* Upgraded dependencies + +## 1.4.0-beta.1 (2023-06-06) + +### Other Changes +* Re-enabled CAE support as in v1.3.0-beta.3 + +## 1.3.0 (2023-05-09) + +### Breaking Changes +> These changes affect only code written against a beta version such as v1.3.0-beta.5 +* Renamed `NewOnBehalfOfCredentialFromCertificate` to `NewOnBehalfOfCredentialWithCertificate` +* Renamed `NewOnBehalfOfCredentialFromSecret` to `NewOnBehalfOfCredentialWithSecret` + +### Other Changes +* Upgraded to MSAL v1.0.0 + +## 1.3.0-beta.5 (2023-04-11) + +### Breaking Changes +> These changes affect only code written against a beta version such as v1.3.0-beta.4 +* Moved `NewWorkloadIdentityCredential()` parameters into `WorkloadIdentityCredentialOptions`. + The constructor now reads default configuration from environment variables set by the Azure + workload identity webhook by default. + ([#20478](https://github.com/Azure/azure-sdk-for-go/pull/20478)) +* Removed CAE support. It will return in v1.4.0-beta.1 + ([#20479](https://github.com/Azure/azure-sdk-for-go/pull/20479)) + +### Bugs Fixed +* Fixed an issue in `DefaultAzureCredential` that could cause the managed identity endpoint check to fail in rare circumstances. + +## 1.3.0-beta.4 (2023-03-08) + +### Features Added +* Added `WorkloadIdentityCredentialOptions.AdditionallyAllowedTenants` and `.DisableInstanceDiscovery` + +### Bugs Fixed +* Credentials now synchronize within `GetToken()` so a single instance can be shared among goroutines + ([#20044](https://github.com/Azure/azure-sdk-for-go/issues/20044)) + +### Other Changes +* Upgraded dependencies + +## 1.2.2 (2023-03-07) + +### Other Changes +* Upgraded dependencies + +## 1.3.0-beta.3 (2023-02-07) + +### Features Added +* By default, credentials set client capability "CP1" to enable support for + [Continuous Access Evaluation (CAE)](https://docs.microsoft.com/azure/active-directory/develop/app-resilience-continuous-access-evaluation). + This indicates to Azure Active Directory that your application can handle CAE claims challenges. + You can disable this behavior by setting the environment variable "AZURE_IDENTITY_DISABLE_CP1" to "true". +* `InteractiveBrowserCredentialOptions.LoginHint` enables pre-populating the login + prompt with a username ([#15599](https://github.com/Azure/azure-sdk-for-go/pull/15599)) +* Service principal and user credentials support ADFS authentication on Azure Stack. + Specify "adfs" as the credential's tenant. +* Applications running in private or disconnected clouds can prevent credentials from + requesting Azure AD instance metadata by setting the `DisableInstanceDiscovery` + field on credential options. +* Many credentials can now be configured to authenticate in multiple tenants. 
The + options types for these credentials have an `AdditionallyAllowedTenants` field + that specifies additional tenants in which the credential may authenticate. + +## 1.3.0-beta.2 (2023-01-10) + +### Features Added +* Added `OnBehalfOfCredential` to support the on-behalf-of flow + ([#16642](https://github.com/Azure/azure-sdk-for-go/issues/16642)) + +### Bugs Fixed +* `AzureCLICredential` reports token expiration in local time (should be UTC) + +### Other Changes +* `AzureCLICredential` imposes its default timeout only when the `Context` + passed to `GetToken()` has no deadline +* Added `NewCredentialUnavailableError()`. This function constructs an error indicating + a credential can't authenticate and an encompassing `ChainedTokenCredential` should + try its next credential, if any. + +## 1.3.0-beta.1 (2022-12-13) + +### Features Added +* `WorkloadIdentityCredential` and `DefaultAzureCredential` support + Workload Identity Federation on Kubernetes. `DefaultAzureCredential` + support requires environment variable configuration as set by the + Workload Identity webhook. + ([#15615](https://github.com/Azure/azure-sdk-for-go/issues/15615)) + +## 1.2.0 (2022-11-08) + +### Other Changes +* This version includes all fixes and features from 1.2.0-beta.* + +## 1.2.0-beta.3 (2022-10-11) + +### Features Added +* `ManagedIdentityCredential` caches tokens in memory + +### Bugs Fixed +* `ClientCertificateCredential` sends only the leaf cert for SNI authentication + +## 1.2.0-beta.2 (2022-08-10) + +### Features Added +* Added `ClientAssertionCredential` to enable applications to authenticate + with custom client assertions + +### Other Changes +* Updated AuthenticationFailedError with links to TROUBLESHOOTING.md for relevant errors +* Upgraded `microsoft-authentication-library-for-go` requirement to v0.6.0 + +## 1.2.0-beta.1 (2022-06-07) + +### Features Added +* `EnvironmentCredential` reads certificate passwords from `AZURE_CLIENT_CERTIFICATE_PASSWORD` + ([#17099](https://github.com/Azure/azure-sdk-for-go/pull/17099)) + +## 1.1.0 (2022-06-07) + +### Features Added +* `ClientCertificateCredential` and `ClientSecretCredential` support ESTS-R. First-party + applications can set environment variable `AZURE_REGIONAL_AUTHORITY_NAME` with a + region name. + ([#15605](https://github.com/Azure/azure-sdk-for-go/issues/15605)) + +## 1.0.1 (2022-06-07) + +### Other Changes +* Upgrade `microsoft-authentication-library-for-go` requirement to v0.5.1 + ([#18176](https://github.com/Azure/azure-sdk-for-go/issues/18176)) + +## 1.0.0 (2022-05-12) + +### Features Added +* `DefaultAzureCredential` reads environment variable `AZURE_CLIENT_ID` for the + client ID of a user-assigned managed identity + ([#17293](https://github.com/Azure/azure-sdk-for-go/pull/17293)) + +### Breaking Changes +* Removed `AuthorizationCodeCredential`. Use `InteractiveBrowserCredential` instead + to authenticate a user with the authorization code flow. +* Instances of `AuthenticationFailedError` are now returned by pointer. +* `GetToken()` returns `azcore.AccessToken` by value + +### Bugs Fixed +* `AzureCLICredential` panics after receiving an unexpected error type + ([#17490](https://github.com/Azure/azure-sdk-for-go/issues/17490)) + +### Other Changes +* `GetToken()` returns an error when the caller specifies no scope +* Updated to the latest versions of `golang.org/x/crypto`, `azcore` and `internal` + +## 0.14.0 (2022-04-05) + +### Breaking Changes +* This module now requires Go 1.18 +* Removed `AuthorityHost`. 
Credentials are now configured for sovereign or private + clouds with the API in `azcore/cloud`, for example: + ```go + // before + opts := azidentity.ClientSecretCredentialOptions{AuthorityHost: azidentity.AzureGovernment} + cred, err := azidentity.NewClientSecretCredential(tenantID, clientID, secret, &opts) + + // after + import "github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud" + + opts := azidentity.ClientSecretCredentialOptions{} + opts.Cloud = cloud.AzureGovernment + cred, err := azidentity.NewClientSecretCredential(tenantID, clientID, secret, &opts) + ``` + +## 0.13.2 (2022-03-08) + +### Bugs Fixed +* Prevented a data race in `DefaultAzureCredential` and `ChainedTokenCredential` + ([#17144](https://github.com/Azure/azure-sdk-for-go/issues/17144)) + +### Other Changes +* Upgraded App Service managed identity version from 2017-09-01 to 2019-08-01 + ([#17086](https://github.com/Azure/azure-sdk-for-go/pull/17086)) + +## 0.13.1 (2022-02-08) + +### Features Added +* `EnvironmentCredential` supports certificate SNI authentication when + `AZURE_CLIENT_SEND_CERTIFICATE_CHAIN` is "true". + ([#16851](https://github.com/Azure/azure-sdk-for-go/pull/16851)) + +### Bugs Fixed +* `ManagedIdentityCredential.GetToken()` now returns an error when configured for + a user assigned identity in Azure Cloud Shell (which doesn't support such identities) + ([#16946](https://github.com/Azure/azure-sdk-for-go/pull/16946)) + +### Other Changes +* `NewDefaultAzureCredential()` logs non-fatal errors. These errors are also included in the + error returned by `DefaultAzureCredential.GetToken()` when it's unable to acquire a token + from any source. ([#15923](https://github.com/Azure/azure-sdk-for-go/issues/15923)) + +## 0.13.0 (2022-01-11) + +### Breaking Changes +* Replaced `AuthenticationFailedError.RawResponse()` with a field having the same name +* Unexported `CredentialUnavailableError` +* Instances of `ChainedTokenCredential` will now skip looping through the list of source credentials and re-use the first successful credential on subsequent calls to `GetToken`. + * If `ChainedTokenCredentialOptions.RetrySources` is true, `ChainedTokenCredential` will continue to try all of the originally provided credentials each time the `GetToken` method is called. + * `ChainedTokenCredential.successfulCredential` will contain a reference to the last successful credential. + * `DefaultAzureCredenial` will also re-use the first successful credential on subsequent calls to `GetToken`. + * `DefaultAzureCredential.chain.successfulCredential` will also contain a reference to the last successful credential. + +### Other Changes +* `ManagedIdentityCredential` no longer probes IMDS before requesting a token + from it. Also, an error response from IMDS no longer disables a credential + instance. Following an error, a credential instance will continue to send + requests to IMDS as necessary. +* Adopted MSAL for user and service principal authentication +* Updated `azcore` requirement to 0.21.0 + +## 0.12.0 (2021-11-02) +### Breaking Changes +* Raised minimum go version to 1.16 +* Removed `NewAuthenticationPolicy()` from credentials. Clients should instead use azcore's + `runtime.NewBearerTokenPolicy()` to construct a bearer token authorization policy. 
+* The `AuthorityHost` field in credential options structs is now a custom type,
+  `AuthorityHost`, with underlying type `string`
+* `NewChainedTokenCredential` has a new signature to accommodate a placeholder
+  options struct:
+  ```go
+  // before
+  cred, err := NewChainedTokenCredential(credA, credB)
+
+  // after
+  cred, err := NewChainedTokenCredential([]azcore.TokenCredential{credA, credB}, nil)
+  ```
+* Removed `ExcludeAzureCLICredential`, `ExcludeEnvironmentCredential`, and `ExcludeMSICredential`
+  from `DefaultAzureCredentialOptions`
+* `NewClientCertificateCredential` requires a `[]*x509.Certificate` and `crypto.PrivateKey` instead of
+  a path to a certificate file. Added `ParseCertificates` to simplify getting these in common cases:
+  ```go
+  // before
+  cred, err := NewClientCertificateCredential("tenant", "client-id", "/cert.pem", nil)
+
+  // after
+  certData, err := os.ReadFile("/cert.pem")
+  certs, key, err := ParseCertificates(certData, password)
+  cred, err := NewClientCertificateCredential(tenantID, clientID, certs, key, nil)
+  ```
+* Removed `InteractiveBrowserCredentialOptions.ClientSecret` and `.Port`
+* Removed `AADAuthenticationFailedError`
+* Removed `id` parameter of `NewManagedIdentityCredential()`. User assigned identities are now
+  specified by `ManagedIdentityCredentialOptions.ID`:
+  ```go
+  // before
+  cred, err := NewManagedIdentityCredential("client-id", nil)
+  // or, for a resource ID
+  opts := &ManagedIdentityCredentialOptions{ID: ResourceID}
+  cred, err := NewManagedIdentityCredential("/subscriptions/...", opts)
+
+  // after
+  clientID := ClientID("7cf7db0d-...")
+  opts := &ManagedIdentityCredentialOptions{ID: clientID}
+  // or, for a resource ID
+  resID := ResourceID("/subscriptions/...")
+  opts := &ManagedIdentityCredentialOptions{ID: resID}
+  cred, err := NewManagedIdentityCredential(opts)
+  ```
+* `DeviceCodeCredentialOptions.UserPrompt` has a new type: `func(context.Context, DeviceCodeMessage) error`
+* Credential options structs now embed `azcore.ClientOptions`. In addition to changing literal initialization
+  syntax, this change renames `HTTPClient` fields to `Transport`.
+* Renamed `LogCredential` to `EventCredential`
+* `AzureCLICredential` no longer reads the environment variable `AZURE_CLI_PATH`
+* `NewManagedIdentityCredential` no longer reads environment variables `AZURE_CLIENT_ID` and
+  `AZURE_RESOURCE_ID`. Use `ManagedIdentityCredentialOptions.ID` instead.
+* Unexported `AuthenticationFailedError` and `CredentialUnavailableError` structs. In their place are two
+  interfaces having the same names.
+ +### Bugs Fixed +* `AzureCLICredential.GetToken` no longer mutates its `opts.Scopes` + +### Features Added +* Added connection configuration options to `DefaultAzureCredentialOptions` +* `AuthenticationFailedError.RawResponse()` returns the HTTP response motivating the error, + if available + +### Other Changes +* `NewDefaultAzureCredential()` returns `*DefaultAzureCredential` instead of `*ChainedTokenCredential` +* Added `TenantID` field to `DefaultAzureCredentialOptions` and `AzureCLICredentialOptions` + +## 0.11.0 (2021-09-08) +### Breaking Changes +* Unexported `AzureCLICredentialOptions.TokenProvider` and its type, + `AzureCLITokenProvider` + +### Bug Fixes +* `ManagedIdentityCredential.GetToken` returns `CredentialUnavailableError` + when IMDS has no assigned identity, signaling `DefaultAzureCredential` to + try other credentials + + +## 0.10.0 (2021-08-30) +### Breaking Changes +* Update based on `azcore` refactor [#15383](https://github.com/Azure/azure-sdk-for-go/pull/15383) + +## 0.9.3 (2021-08-20) + +### Bugs Fixed +* `ManagedIdentityCredential.GetToken` no longer mutates its `opts.Scopes` + +### Other Changes +* Bumps version of `azcore` to `v0.18.1` + + +## 0.9.2 (2021-07-23) +### Features Added +* Adding support for Service Fabric environment in `ManagedIdentityCredential` +* Adding an option for using a resource ID instead of client ID in `ManagedIdentityCredential` + + +## 0.9.1 (2021-05-24) +### Features Added +* Add LICENSE.txt and bump version information + + +## 0.9.0 (2021-05-21) +### Features Added +* Add support for authenticating in Azure Stack environments +* Enable user assigned identities for the IMDS scenario in `ManagedIdentityCredential` +* Add scope to resource conversion in `GetToken()` on `ManagedIdentityCredential` + + +## 0.8.0 (2021-01-20) +### Features Added +* Updating documentation + + +## 0.7.1 (2021-01-04) +### Features Added +* Adding port option to `InteractiveBrowserCredential` + + +## 0.7.0 (2020-12-11) +### Features Added +* Add `redirectURI` parameter back to authentication code flow + + +## 0.6.1 (2020-12-09) +### Features Added +* Updating query parameter in `ManagedIdentityCredential` and updating datetime string for parsing managed identity access tokens. + + +## 0.6.0 (2020-11-16) +### Features Added +* Remove `RedirectURL` parameter from auth code flow to align with the MSAL implementation which relies on the native client redirect URL. + + +## 0.5.0 (2020-10-30) +### Features Added +* Flattening credential options + + +## 0.4.3 (2020-10-21) +### Features Added +* Adding Azure Arc support in `ManagedIdentityCredential` + + +## 0.4.2 (2020-10-16) +### Features Added +* Typo fixes + + +## 0.4.1 (2020-10-16) +### Features Added +* Ensure authority hosts are only HTTPs + + +## 0.4.0 (2020-10-16) +### Features Added +* Adding options structs for credentials + + +## 0.3.0 (2020-10-09) +### Features Added +* Update `DeviceCodeCredential` callback + + +## 0.2.2 (2020-10-09) +### Features Added +* Add `AuthorizationCodeCredential` + + +## 0.2.1 (2020-10-06) +### Features Added +* Add `InteractiveBrowserCredential` + + +## 0.2.0 (2020-09-11) +### Features Added +* Refactor `azidentity` on top of `azcore` refactor +* Updated policies to conform to `policy.Policy` interface changes. +* Updated non-retriable errors to conform to `azcore.NonRetriableError`. +* Fixed calls to `Request.SetBody()` to include content type. +* Switched endpoints to string types and removed extra parsing code. 
+
+
+## 0.1.1 (2020-09-02)
+### Features Added
+* Add `AzureCLICredential` to `DefaultAzureCredential` chain
+
+
+## 0.1.0 (2020-07-23)
+### Features Added
+* Initial Release. Azure Identity library that provides Azure Active Directory token authentication support for the SDK.
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/LICENSE.txt b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/LICENSE.txt
new file mode 100644
index 000000000000..48ea6616b5b8
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/LICENSE.txt
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) Microsoft Corporation.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/MIGRATION.md b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/MIGRATION.md
new file mode 100644
index 000000000000..4ac53eb7b276
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/MIGRATION.md
@@ -0,0 +1,307 @@
+# Migrating from autorest/adal to azidentity
+
+`azidentity` provides Azure Active Directory (Azure AD) authentication for the newest Azure SDK modules (`github.com/Azure/azure-sdk-for-go/sdk/...`). Older Azure SDK packages (`github.com/Azure/azure-sdk-for-go/services/...`) use types from `github.com/Azure/go-autorest/autorest/adal` instead.
+
+This guide shows common authentication code using `autorest/adal` and its equivalent using `azidentity`.
+
+## Table of contents
+
+- [Acquire a token](#acquire-a-token)
+- [Client certificate authentication](#client-certificate-authentication)
+- [Client secret authentication](#client-secret-authentication)
+- [Configuration](#configuration)
+- [Device code authentication](#device-code-authentication)
+- [Managed identity](#managed-identity)
+- [Use azidentity credentials with older packages](#use-azidentity-credentials-with-older-packages)
+
+## Configuration
+
+### `autorest/adal`
+
+Token providers require a token audience (resource identifier) and an instance of `adal.OAuthConfig`, which requires an Azure AD endpoint and tenant:
+
+```go
+import "github.com/Azure/go-autorest/autorest/adal"
+
+oauthCfg, err := adal.NewOAuthConfig("https://login.chinacloudapi.cn", tenantID)
+handle(err)
+
+spt, err := adal.NewServicePrincipalTokenWithSecret(
+	*oauthCfg, clientID, "https://management.chinacloudapi.cn/", &adal.ServicePrincipalTokenSecret{ClientSecret: secret},
+)
+handle(err)
+```
+
+### `azidentity`
+
+A credential instance can acquire tokens for any audience. The audience for each token is determined by the client requesting it. Credentials require endpoint configuration only for sovereign or private clouds. The `azcore/cloud` package has predefined configuration for sovereign clouds such as Azure China:
+
+```go
+import (
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud"
+	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+)
+
+clientOpts := azcore.ClientOptions{Cloud: cloud.AzureChina}
+
+cred, err := azidentity.NewClientSecretCredential(
+	tenantID, clientID, secret, &azidentity.ClientSecretCredentialOptions{ClientOptions: clientOpts},
+)
+handle(err)
+```
+
+## Client secret authentication
+
+### `autorest/adal`
+
+```go
+import (
+	"github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-06-01/subscriptions"
+	"github.com/Azure/go-autorest/autorest"
+	"github.com/Azure/go-autorest/autorest/adal"
+)
+
+oauthCfg, err := adal.NewOAuthConfig("https://login.microsoftonline.com", tenantID)
+handle(err)
+spt, err := adal.NewServicePrincipalTokenWithSecret(
+	*oauthCfg, clientID, "https://management.azure.com/", &adal.ServicePrincipalTokenSecret{ClientSecret: secret},
+)
+handle(err)
+
+client := subscriptions.NewClient()
+client.Authorizer = autorest.NewBearerAuthorizer(spt)
+```
+
+### `azidentity`
+
+```go
+import (
+	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armsubscriptions"
+)
+
+cred, err := azidentity.NewClientSecretCredential(tenantID, clientID, secret, nil)
+handle(err)
+
+client, err := armsubscriptions.NewClient(cred, nil)
+handle(err)
+```
+
+## Client certificate authentication
+
+### `autorest/adal`
+
+```go
+import (
+	"os"
+
+	"github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-06-01/subscriptions"
+	"github.com/Azure/go-autorest/autorest"
+	"github.com/Azure/go-autorest/autorest/adal"
+)
+
+certData, err := os.ReadFile("./example.pfx")
+handle(err)
+
+certificate, rsaPrivateKey, err := decodePkcs12(certData, "")
+handle(err)
+
+oauthCfg, err := adal.NewOAuthConfig("https://login.microsoftonline.com", tenantID)
+handle(err)
+
+spt, err := adal.NewServicePrincipalTokenFromCertificate(
+	*oauthCfg, clientID, certificate, rsaPrivateKey, "https://management.azure.com/",
+)
+handle(err)
+
+client := subscriptions.NewClient()
+client.Authorizer = autorest.NewBearerAuthorizer(spt)
+```
+
+### `azidentity`
+
+```go
+import (
+	"os"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armsubscriptions" +) + +certData, err := os.ReadFile("./example.pfx") +handle(err) + +certs, key, err := azidentity.ParseCertificates(certData, nil) +handle(err) + +cred, err = azidentity.NewClientCertificateCredential(tenantID, clientID, certs, key, nil) +handle(err) + +client, err := armsubscriptions.NewClient(cred, nil) +handle(err) +``` + +## Managed identity + +### `autorest/adal` + +```go +import ( + "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-06-01/subscriptions" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/adal" +) + +spt, err := adal.NewServicePrincipalTokenFromManagedIdentity("https://management.azure.com/", nil) +handle(err) + +client := subscriptions.NewClient() +client.Authorizer = autorest.NewBearerAuthorizer(spt) +``` + +### `azidentity` + +```go +import ( + "github.com/Azure/azure-sdk-for-go/sdk/azidentity" + "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armsubscriptions" +) + +cred, err := azidentity.NewManagedIdentityCredential(nil) +handle(err) + +client, err := armsubscriptions.NewClient(cred, nil) +handle(err) +``` + +### User-assigned identities + +`autorest/adal`: + +```go +import "github.com/Azure/go-autorest/autorest/adal" + +opts := &adal.ManagedIdentityOptions{ClientID: "..."} +spt, err := adal.NewServicePrincipalTokenFromManagedIdentity("https://management.azure.com/") +handle(err) +``` + +`azidentity`: + +```go +import "github.com/Azure/azure-sdk-for-go/sdk/azidentity" + +opts := azidentity.ManagedIdentityCredentialOptions{ID: azidentity.ClientID("...")} +cred, err := azidentity.NewManagedIdentityCredential(&opts) +handle(err) +``` + +## Device code authentication + +### `autorest/adal` + +```go +import ( + "fmt" + "net/http" + + "github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-06-01/subscriptions" + "github.com/Azure/go-autorest/autorest" + "github.com/Azure/go-autorest/autorest/adal" +) + +oauthClient := &http.Client{} +oauthCfg, err := adal.NewOAuthConfig("https://login.microsoftonline.com", tenantID) +handle(err) +resource := "https://management.azure.com/" +deviceCode, err := adal.InitiateDeviceAuth(oauthClient, *oauthCfg, clientID, resource) +handle(err) + +// display instructions, wait for the user to authenticate +fmt.Println(*deviceCode.Message) +token, err := adal.WaitForUserCompletion(oauthClient, deviceCode) +handle(err) + +spt, err := adal.NewServicePrincipalTokenFromManualToken(*oauthCfg, clientID, resource, *token) +handle(err) + +client := subscriptions.NewClient() +client.Authorizer = autorest.NewBearerAuthorizer(spt) +``` + +### `azidentity` + +```go +import ( + "github.com/Azure/azure-sdk-for-go/sdk/azidentity" + "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/resources/armsubscriptions" +) + +cred, err := azidentity.NewDeviceCodeCredential(nil) +handle(err) + +client, err := armsubscriptions.NewSubscriptionsClient(cred, nil) +handle(err) +``` + +`azidentity.DeviceCodeCredential` will guide a user through authentication, printing instructions to the console by default. The user prompt is customizable. For more information, see the [package documentation](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#DeviceCodeCredential). 
+
+## Acquire a token
+
+### `autorest/adal`
+
+```go
+import "github.com/Azure/go-autorest/autorest/adal"
+
+oauthCfg, err := adal.NewOAuthConfig("https://login.microsoftonline.com", tenantID)
+handle(err)
+
+spt, err := adal.NewServicePrincipalTokenWithSecret(
+	*oauthCfg, clientID, "https://vault.azure.net", &adal.ServicePrincipalTokenSecret{ClientSecret: secret},
+)
+handle(err)
+
+err = spt.Refresh()
+if err == nil {
+	token := spt.Token
+}
+```
+
+### `azidentity`
+
+In ordinary usage, application code doesn't need to request tokens from credentials directly. Azure SDK clients handle token acquisition and refreshing internally. However, applications may call `GetToken()` to do so. All credential types have this method.
+
+```go
+import (
+	"context"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+)
+
+cred, err := azidentity.NewClientSecretCredential(tenantID, clientID, secret, nil)
+handle(err)
+
+tk, err := cred.GetToken(
+	context.TODO(), policy.TokenRequestOptions{Scopes: []string{"https://vault.azure.net/.default"}},
+)
+if err == nil {
+	token := tk.Token
+}
+```
+
+Note that `azidentity` credentials use the Azure AD v2.0 endpoint, which requires OAuth 2 scopes instead of the resource identifiers `autorest/adal` expects. For more information, see [Azure AD documentation](https://docs.microsoft.com/azure/active-directory/develop/v2-permissions-and-consent).
+
+## Use azidentity credentials with older packages
+
+The [azidext module](https://pkg.go.dev/github.com/jongio/azidext/go/azidext) provides an adapter for `azidentity` credential types. The adapter enables using the credential types with older Azure SDK clients. For example:
+
+```go
+import (
+	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+	"github.com/Azure/azure-sdk-for-go/services/resources/mgmt/2018-06-01/subscriptions"
+	"github.com/jongio/azidext/go/azidext"
+)
+
+cred, err := azidentity.NewClientSecretCredential(tenantID, clientID, secret, nil)
+handle(err)
+
+client := subscriptions.NewClient()
+client.Authorizer = azidext.NewTokenCredentialAdapter(cred, []string{"https://management.azure.com//.default"})
+```
+
+![Impressions](https://azure-sdk-impressions.azurewebsites.net/api/impressions/azure-sdk-for-go%2Fsdk%2Fazidentity%2FMIGRATION.png)
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/README.md b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/README.md
new file mode 100644
index 000000000000..da0baa9add3d
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/README.md
@@ -0,0 +1,243 @@
+# Azure Identity Client Module for Go
+
+The Azure Identity module provides Azure Active Directory (Azure AD) token authentication support across the Azure SDK. It includes a set of `TokenCredential` implementations, which can be used with Azure SDK clients supporting token authentication.
+
+[![PkgGoDev](https://pkg.go.dev/badge/github.com/Azure/azure-sdk-for-go/sdk/azidentity)](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity)
+| [Azure Active Directory documentation](https://docs.microsoft.com/azure/active-directory/)
+| [Source code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/azidentity)
+
+# Getting started
+
+## Install the module
+
+This project uses [Go modules](https://github.com/golang/go/wiki/Modules) for versioning and dependency management.
+ +Install the Azure Identity module: + +```sh +go get -u github.com/Azure/azure-sdk-for-go/sdk/azidentity +``` + +## Prerequisites + +- an [Azure subscription](https://azure.microsoft.com/free/) +- Go 1.18 + +### Authenticating during local development + +When debugging and executing code locally, developers typically use their own accounts to authenticate calls to Azure services. The `azidentity` module supports authenticating through developer tools to simplify local development. + +#### Authenticating via the Azure CLI + +`DefaultAzureCredential` and `AzureCLICredential` can authenticate as the user +signed in to the [Azure CLI](https://docs.microsoft.com/cli/azure). To sign in to the Azure CLI, run `az login`. On a system with a default web browser, the Azure CLI will launch the browser to authenticate a user. + +When no default browser is available, `az login` will use the device code +authentication flow. This can also be selected manually by running `az login --use-device-code`. + +## Key concepts + +### Credentials + +A credential is a type which contains or can obtain the data needed for a +service client to authenticate requests. Service clients across the Azure SDK +accept a credential instance when they are constructed, and use that credential +to authenticate requests. + +The `azidentity` module focuses on OAuth authentication with Azure Active +Directory (AAD). It offers a variety of credential types capable of acquiring +an Azure AD access token. See [Credential Types](#credential-types "Credential Types") for a list of this module's credential types. + +### DefaultAzureCredential + +`DefaultAzureCredential` is appropriate for most apps that will be deployed to Azure. It combines common production credentials with development credentials. It attempts to authenticate via the following mechanisms in this order, stopping when one succeeds: + +![DefaultAzureCredential authentication flow](img/mermaidjs/DefaultAzureCredentialAuthFlow.svg) + +1. **Environment** - `DefaultAzureCredential` will read account information specified via [environment variables](#environment-variables) and use it to authenticate. +1. **Workload Identity** - If the app is deployed on Kubernetes with environment variables set by the workload identity webhook, `DefaultAzureCredential` will authenticate the configured identity. +1. **Managed Identity** - If the app is deployed to an Azure host with managed identity enabled, `DefaultAzureCredential` will authenticate with it. +1. **Azure CLI** - If a user or service principal has authenticated via the Azure CLI `az login` command, `DefaultAzureCredential` will authenticate that identity. + +> Note: `DefaultAzureCredential` is intended to simplify getting started with the SDK by handling common scenarios with reasonable default behaviors. Developers who want more control or whose scenario isn't served by the default settings should use other credential types. 
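+
+As a quick sanity check of an environment, a credential can also be exercised directly. A minimal sketch (assuming imports of `context` and `github.com/Azure/azure-sdk-for-go/sdk/azcore/policy`):
+
+```go
+cred, err := azidentity.NewDefaultAzureCredential(nil)
+if err != nil {
+	// handle error
+}
+_, err = cred.GetToken(context.TODO(), policy.TokenRequestOptions{
+	Scopes: []string{"https://management.azure.com/.default"},
+})
+if err != nil {
+	// handle error: no credential in the chain could provide a token
+}
+```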
+ +## Managed Identity + +`DefaultAzureCredential` and `ManagedIdentityCredential` support +[managed identity authentication](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview) +in any hosting environment which supports managed identities, such as (this list is not exhaustive): +* [Azure App Service](https://docs.microsoft.com/azure/app-service/overview-managed-identity) +* [Azure Arc](https://docs.microsoft.com/azure/azure-arc/servers/managed-identity-authentication) +* [Azure Cloud Shell](https://docs.microsoft.com/azure/cloud-shell/msi-authorization) +* [Azure Kubernetes Service](https://docs.microsoft.com/azure/aks/use-managed-identity) +* [Azure Service Fabric](https://docs.microsoft.com/azure/service-fabric/concepts-managed-identity) +* [Azure Virtual Machines](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/how-to-use-vm-token) + +## Examples + +- [Authenticate with DefaultAzureCredential](#authenticate-with-defaultazurecredential "Authenticate with DefaultAzureCredential") +- [Define a custom authentication flow with ChainedTokenCredential](#define-a-custom-authentication-flow-with-chainedtokencredential "Define a custom authentication flow with ChainedTokenCredential") +- [Specify a user-assigned managed identity for DefaultAzureCredential](#specify-a-user-assigned-managed-identity-for-defaultazurecredential) + +### Authenticate with DefaultAzureCredential + +This example demonstrates authenticating a client from the `armresources` module with `DefaultAzureCredential`. + +```go +cred, err := azidentity.NewDefaultAzureCredential(nil) +if err != nil { + // handle error +} + +client := armresources.NewResourceGroupsClient("subscription ID", cred, nil) +``` + +### Specify a user-assigned managed identity for DefaultAzureCredential + +To configure `DefaultAzureCredential` to authenticate a user-assigned managed identity, set the environment variable `AZURE_CLIENT_ID` to the identity's client ID. + +### Define a custom authentication flow with `ChainedTokenCredential` + +`DefaultAzureCredential` is generally the quickest way to get started developing apps for Azure. For more advanced scenarios, `ChainedTokenCredential` links multiple credential instances to be tried sequentially when authenticating. It will try each chained credential in turn until one provides a token or fails to authenticate due to an error. + +The following example demonstrates creating a credential, which will attempt to authenticate using managed identity. It will fall back to authenticating via the Azure CLI when a managed identity is unavailable. 
+ +```go +managed, err := azidentity.NewManagedIdentityCredential(nil) +if err != nil { + // handle error +} +azCLI, err := azidentity.NewAzureCLICredential(nil) +if err != nil { + // handle error +} +chain, err := azidentity.NewChainedTokenCredential([]azcore.TokenCredential{managed, azCLI}, nil) +if err != nil { + // handle error +} + +client := armresources.NewResourceGroupsClient("subscription ID", chain, nil) +``` + +## Credential Types + +### Authenticating Azure Hosted Applications + +|Credential|Usage +|-|- +|[DefaultAzureCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#DefaultAzureCredential)|Simplified authentication experience for getting started developing Azure apps +|[ChainedTokenCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#ChainedTokenCredential)|Define custom authentication flows, composing multiple credentials +|[EnvironmentCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#EnvironmentCredential)|Authenticate a service principal or user configured by environment variables +|[ManagedIdentityCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#ManagedIdentityCredential)|Authenticate the managed identity of an Azure resource +|[WorkloadIdentityCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#WorkloadIdentityCredential)|Authenticate a workload identity on Kubernetes + +### Authenticating Service Principals + +|Credential|Usage +|-|- +|[ClientAssertionCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#ClientAssertionCredential)|Authenticate a service principal with a signed client assertion +|[ClientCertificateCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#ClientCertificateCredential)|Authenticate a service principal with a certificate +|[ClientSecretCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#ClientSecretCredential)|Authenticate a service principal with a secret + +### Authenticating Users + +|Credential|Usage +|-|- +|[InteractiveBrowserCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#InteractiveBrowserCredential)|Interactively authenticate a user with the default web browser +|[DeviceCodeCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#DeviceCodeCredential)|Interactively authenticate a user on a device with limited UI +|[UsernamePasswordCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#UsernamePasswordCredential)|Authenticate a user with a username and password + +### Authenticating via Development Tools + +|Credential|Usage +|-|- +|[AzureCLICredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#AzureCLICredential)|Authenticate as the user signed in to the Azure CLI + +## Environment Variables + +`DefaultAzureCredential` and `EnvironmentCredential` can be configured with environment variables. 
Each type of authentication requires values for specific variables: + +#### Service principal with secret + +|variable name|value +|-|- +|`AZURE_CLIENT_ID`|ID of an Azure Active Directory application +|`AZURE_TENANT_ID`|ID of the application's Azure Active Directory tenant +|`AZURE_CLIENT_SECRET`|one of the application's client secrets + +#### Service principal with certificate + +|variable name|value +|-|- +|`AZURE_CLIENT_ID`|ID of an Azure Active Directory application +|`AZURE_TENANT_ID`|ID of the application's Azure Active Directory tenant +|`AZURE_CLIENT_CERTIFICATE_PATH`|path to a certificate file including private key +|`AZURE_CLIENT_CERTIFICATE_PASSWORD`|password of the certificate file, if any + +#### Username and password + +|variable name|value +|-|- +|`AZURE_CLIENT_ID`|ID of an Azure Active Directory application +|`AZURE_USERNAME`|a username (usually an email address) +|`AZURE_PASSWORD`|that user's password + +Configuration is attempted in the above order. For example, if values for a +client secret and certificate are both present, the client secret will be used. + +## Troubleshooting + +### Error Handling + +Credentials return an `error` when they fail to authenticate or lack data they require to authenticate. For guidance on resolving errors from specific credential types, see the [troubleshooting guide](https://aka.ms/azsdk/go/identity/troubleshoot). + +For more details on handling specific Azure Active Directory errors please refer to the +Azure Active Directory +[error code documentation](https://docs.microsoft.com/azure/active-directory/develop/reference-aadsts-error-codes). + +### Logging + +This module uses the classification-based logging implementation in `azcore`. To enable console logging for all SDK modules, set `AZURE_SDK_GO_LOGGING` to `all`. Use the `azcore/log` package to control log event output or to enable logs for `azidentity` only. For example: +```go +import azlog "github.com/Azure/azure-sdk-for-go/sdk/azcore/log" + +// print log output to stdout +azlog.SetListener(func(event azlog.Event, s string) { + fmt.Println(s) +}) + +// include only azidentity credential logs +azlog.SetEvents(azidentity.EventAuthentication) +``` + +Credentials log basic information only, such as `GetToken` success or failure and errors. These log entries don't contain authentication secrets but may contain sensitive information. + +## Next steps + +Client and management modules listed on the [Azure SDK releases page](https://azure.github.io/azure-sdk/releases/latest/go.html) support authenticating with `azidentity` credential types. You can learn more about using these libraries in their documentation, which is linked from the release page. + +## Provide Feedback + +If you encounter bugs or have suggestions, please +[open an issue](https://github.com/Azure/azure-sdk-for-go/issues). + +## Contributing + +This project welcomes contributions and suggestions. Most contributions require +you to agree to a Contributor License Agreement (CLA) declaring that you have +the right to, and actually do, grant us the rights to use your contribution. +For details, visit [https://cla.microsoft.com](https://cla.microsoft.com). + +When you submit a pull request, a CLA-bot will automatically determine whether +you need to provide a CLA and decorate the PR appropriately (e.g., label, +comment). Simply follow the instructions provided by the bot. You will only +need to do this once across all repos using our CLA. 
+ +This project has adopted the +[Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). +For more information, see the +[Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) +or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any +additional questions or comments. + +![Impressions](https://azure-sdk-impressions.azurewebsites.net/api/impressions/azure-sdk-for-go%2Fsdk%2Fazidentity%2FREADME.png) diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/TROUBLESHOOTING.md b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/TROUBLESHOOTING.md new file mode 100644 index 000000000000..fef099813c87 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/TROUBLESHOOTING.md @@ -0,0 +1,207 @@ +# Troubleshoot Azure Identity authentication issues + +This troubleshooting guide covers failure investigation techniques, common errors for the credential types in the `azidentity` module, and mitigation steps to resolve these errors. + +## Table of contents + +- [Handle azidentity errors](#handle-azidentity-errors) + - [Permission issues](#permission-issues) +- [Find relevant information in errors](#find-relevant-information-in-errors) +- [Enable and configure logging](#enable-and-configure-logging) +- [Troubleshoot AzureCliCredential authentication issues](#troubleshoot-azureclicredential-authentication-issues) +- [Troubleshoot ClientCertificateCredential authentication issues](#troubleshoot-clientcertificatecredential-authentication-issues) +- [Troubleshoot ClientSecretCredential authentication issues](#troubleshoot-clientsecretcredential-authentication-issues) +- [Troubleshoot DefaultAzureCredential authentication issues](#troubleshoot-defaultazurecredential-authentication-issues) +- [Troubleshoot EnvironmentCredential authentication issues](#troubleshoot-environmentcredential-authentication-issues) +- [Troubleshoot ManagedIdentityCredential authentication issues](#troubleshoot-managedidentitycredential-authentication-issues) + - [Azure App Service and Azure Functions managed identity](#azure-app-service-and-azure-functions-managed-identity) + - [Azure Kubernetes Service managed identity](#azure-kubernetes-service-managed-identity) + - [Azure Virtual Machine managed identity](#azure-virtual-machine-managed-identity) +- [Troubleshoot UsernamePasswordCredential authentication issues](#troubleshoot-usernamepasswordcredential-authentication-issues) +- [Troubleshoot WorkloadIdentityCredential authentication issues](#troubleshoot-workloadidentitycredential-authentication-issues) +- [Get additional help](#get-additional-help) + +## Handle azidentity errors + +Any service client method that makes a request to the service may return an error due to authentication failure. This is because the credential authenticates on the first call to the service and on any subsequent call that needs to refresh an access token. Authentication errors include a description of the failure and possibly an error message from Azure Active Directory (Azure AD). Depending on the application, these errors may or may not be recoverable. + +### Permission issues + +Service client errors with a status code of 401 or 403 often indicate that authentication succeeded but the caller doesn't have permission to access the specified API. 
Check the service documentation to determine which RBAC roles are needed for the request, and ensure the authenticated user or service principal has the appropriate role assignments. + +## Find relevant information in errors + +Authentication errors can include responses from Azure AD and often contain information helpful in diagnosis. Consider the following error message: + +``` +ClientSecretCredential authentication failed +POST https://login.microsoftonline.com/3c631bb7-a9f7-4343-a5ba-a615913/oauth2/v2.0/token +-------------------------------------------------------------------------------- +RESPONSE 401 Unauthorized +-------------------------------------------------------------------------------- +{ + "error": "invalid_client", + "error_description": "AADSTS7000215: Invalid client secret provided. Ensure the secret being sent in the request is the client secret value, not the client secret ID, for a secret added to app '86be4c01-505b-45e9-bfc0-9b825fd84'.\r\nTrace ID: 03da4b8e-5ffe-48ca-9754-aff4276f0100\r\nCorrelation ID: 7b12f9bb-2eef-42e3-ad75-eee69ec9088d\r\nTimestamp: 2022-03-02 18:25:26Z", + "error_codes": [ + 7000215 + ], + "timestamp": "2022-03-02 18:25:26Z", + "trace_id": "03da4b8e-5ffe-48ca-9754-aff4276f0100", + "correlation_id": "7b12f9bb-2eef-42e3-ad75-eee69ec9088d", + "error_uri": "https://login.microsoftonline.com/error?code=7000215" +} +-------------------------------------------------------------------------------- +``` + +This error contains several pieces of information: + +- __Failing Credential Type__: The type of credential that failed to authenticate. This can be helpful when diagnosing issues with chained credential types such as `DefaultAzureCredential` or `ChainedTokenCredential`. + +- __Azure AD Error Code and Message__: The error code and message returned by Azure AD. This can give insight into the specific reason the request failed. For instance, in this case authentication failed because the provided client secret is incorrect. [Azure AD documentation](https://docs.microsoft.com/azure/active-directory/develop/reference-aadsts-error-codes#aadsts-error-codes) has more information on AADSTS error codes. + +- __Correlation ID and Timestamp__: The correlation ID and timestamp identify the request in server-side logs. This information can be useful to support engineers diagnosing unexpected Azure AD failures. + +### Enable and configure logging + +`azidentity` provides the same logging capabilities as the rest of the Azure SDK. The simplest way to see the logs to help debug authentication issues is to print credential logs to the console. +```go +import azlog "github.com/Azure/azure-sdk-for-go/sdk/azcore/log" + +// print log output to stdout +azlog.SetListener(func(event azlog.Event, s string) { + fmt.Println(s) +}) + +// include only azidentity credential logs +azlog.SetEvents(azidentity.EventAuthentication) +``` + + +## Troubleshoot DefaultAzureCredential authentication issues + +| Error |Description| Mitigation | +|---|---|---| +|"DefaultAzureCredential failed to acquire a token"|No credential in the `DefaultAzureCredential` chain provided a token|
  • [Enable logging](#enable-and-configure-logging) to get further diagnostic information.
  • Consult the troubleshooting guides for the underlying credential types for more information:
    • [EnvironmentCredential](#troubleshoot-environmentcredential-authentication-issues)
    • [ManagedIdentityCredential](#troubleshoot-managedidentitycredential-authentication-issues)
    • [AzureCLICredential](#troubleshoot-azureclicredential-authentication-issues)
    | +|Error from the client with a status code of 401 or 403|Authentication succeeded but the authorizing Azure service responded with a 401 (Unauthorized) or 403 (Forbidden) status code|
    • [Enable logging](#enable-and-configure-logging) to determine which credential in the chain returned the authenticating token.
    • If an unexpected credential is returning a token, check application configuration such as environment variables.
    • Ensure the correct role is assigned to the authenticated identity. For example, a service-specific role rather than the subscription Owner role.
    | +|"managed identity timed out"|`DefaultAzureCredential` sets a short timeout on its first managed identity authentication attempt to prevent very long timeouts during local development when no managed identity is available. That timeout causes this error in production when an application requests a token before the hosting environment is ready to provide one.|Use [ManagedIdentityCredential](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#ManagedIdentityCredential) directly, at least in production. It doesn't set a timeout on its authentication attempts.| + +## Troubleshoot EnvironmentCredential authentication issues + +| Error Message |Description| Mitigation | +|---|---|---| +|Missing or incomplete environment variable configuration|A valid combination of environment variables wasn't set|Ensure the appropriate environment variables are set for the intended authentication method as described in the [module documentation](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#EnvironmentCredential)| + + +## Troubleshoot ClientSecretCredential authentication issues + +| Error Code | Issue | Mitigation | +|---|---|---| +|AADSTS7000215|An invalid client secret was provided.|Ensure the secret provided to the credential constructor is valid. If unsure, create a new client secret using the Azure portal. Details on creating a new client secret are in [Azure AD documentation](https://docs.microsoft.com/azure/active-directory/develop/howto-create-service-principal-portal#option-2-create-a-new-application-secret).| +|AADSTS7000222|An expired client secret was provided.|Create a new client secret using the Azure portal. Details on creating a new client secret are in [Azure AD documentation](https://docs.microsoft.com/azure/active-directory/develop/howto-create-service-principal-portal#option-2-create-a-new-application-secret).| +|AADSTS700016|The specified application wasn't found in the specified tenant.|Ensure the client and tenant IDs provided to the credential constructor are correct for your application registration. For multi-tenant apps, ensure the application has been added to the desired tenant by a tenant admin. To add a new application in the desired tenant, follow the [Azure AD instructions](https://docs.microsoft.com/azure/active-directory/develop/howto-create-service-principal-portal).| + + +## Troubleshoot ClientCertificateCredential authentication issues + +| Error Code | Description | Mitigation | +|---|---|---| +|AADSTS700027|Client assertion contains an invalid signature.|Ensure the specified certificate has been uploaded to the application registration as described in [Azure AD documentation](https://docs.microsoft.com/azure/active-directory/develop/howto-create-service-principal-portal#option-1-upload-a-certificate).| +|AADSTS700016|The specified application wasn't found in the specified tenant.|Ensure the client and tenant IDs provided to the credential constructor are correct for your application registration. For multi-tenant apps, ensure the application has been added to the desired tenant by a tenant admin. 
+
+## Troubleshoot EnvironmentCredential authentication issues
+
+| Error Message |Description| Mitigation |
+|---|---|---|
+|Missing or incomplete environment variable configuration|A valid combination of environment variables wasn't set|Ensure the appropriate environment variables are set for the intended authentication method as described in the [module documentation](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity#EnvironmentCredential)|
+
+
+## Troubleshoot ClientSecretCredential authentication issues
+
+| Error Code | Issue | Mitigation |
+|---|---|---|
+|AADSTS7000215|An invalid client secret was provided.|Ensure the secret provided to the credential constructor is valid. If unsure, create a new client secret using the Azure portal. Details on creating a new client secret are in [Azure AD documentation](https://docs.microsoft.com/azure/active-directory/develop/howto-create-service-principal-portal#option-2-create-a-new-application-secret).|
+|AADSTS7000222|An expired client secret was provided.|Create a new client secret using the Azure portal. Details on creating a new client secret are in [Azure AD documentation](https://docs.microsoft.com/azure/active-directory/develop/howto-create-service-principal-portal#option-2-create-a-new-application-secret).|
+|AADSTS700016|The specified application wasn't found in the specified tenant.|Ensure the client and tenant IDs provided to the credential constructor are correct for your application registration. For multi-tenant apps, ensure the application has been added to the desired tenant by a tenant admin. To add a new application in the desired tenant, follow the [Azure AD instructions](https://docs.microsoft.com/azure/active-directory/develop/howto-create-service-principal-portal).|
+
+
+## Troubleshoot ClientCertificateCredential authentication issues
+
+| Error Code | Description | Mitigation |
+|---|---|---|
+|AADSTS700027|Client assertion contains an invalid signature.|Ensure the specified certificate has been uploaded to the application registration as described in [Azure AD documentation](https://docs.microsoft.com/azure/active-directory/develop/howto-create-service-principal-portal#option-1-upload-a-certificate).|
+|AADSTS700016|The specified application wasn't found in the specified tenant.|Ensure the client and tenant IDs provided to the credential constructor are correct for your application registration. For multi-tenant apps, ensure the application has been added to the desired tenant by a tenant admin. To add a new application in the desired tenant, follow the [Azure AD instructions](https://docs.microsoft.com/azure/active-directory/develop/howto-create-service-principal-portal).|
+
+
+## Troubleshoot UsernamePasswordCredential authentication issues
+
+| Error Code | Issue | Mitigation |
+|---|---|---|
+|AADSTS50126|The provided username or password is invalid.|Ensure the username and password provided to the credential constructor are valid.|
+
+
+## Troubleshoot ManagedIdentityCredential authentication issues
+
+`ManagedIdentityCredential` is designed to work on a variety of Azure hosts that support managed identity. Configuration and troubleshooting vary from host to host. The table below lists the Azure hosts that can be assigned a managed identity and are supported by `ManagedIdentityCredential`.
+
+|Host Environment|Configuration|Troubleshooting|
+|---|---|---|
+|Azure Virtual Machines and Scale Sets|[Configuration](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm)|[Troubleshooting](#azure-virtual-machine-managed-identity)|
+|Azure App Service and Azure Functions|[Configuration](https://docs.microsoft.com/azure/app-service/overview-managed-identity)|[Troubleshooting](#azure-app-service-and-azure-functions-managed-identity)|
+|Azure Kubernetes Service|[Configuration](https://azure.github.io/aad-pod-identity/docs/)|[Troubleshooting](#azure-kubernetes-service-managed-identity)|
+|Azure Arc|[Configuration](https://docs.microsoft.com/azure/azure-arc/servers/managed-identity-authentication)||
+|Azure Service Fabric|[Configuration](https://docs.microsoft.com/azure/service-fabric/concepts-managed-identity)||
+
+### Azure Virtual Machine managed identity
+
+| Error Message |Description| Mitigation |
+|---|---|---|
+|The requested identity hasn’t been assigned to this resource.|The IMDS endpoint responded with a status code of 400, indicating the requested identity isn’t assigned to the VM.|If using a user assigned identity, ensure the specified ID is correct.

If using a system assigned identity, make sure it has been enabled as described in [managed identity documentation](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm#enable-system-assigned-managed-identity-on-an-existing-vm).|
+|The request failed due to a gateway error.|The request to the IMDS endpoint failed due to a gateway error (502 or 504 status code).|IMDS doesn't support requests via proxy or gateway. Disable proxies or gateways running on the VM for requests to the IMDS endpoint `http://169.254.169.254`.|
+|No response received from the managed identity endpoint.|No response was received for the request to IMDS or the request timed out.|

    • Ensure the VM is configured for managed identity as described in [managed identity documentation](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm).
    • Verify the IMDS endpoint is reachable on the VM. See [below](#verify-imds-is-available-on-the-vm) for instructions.
    | +|Multiple attempts failed to obtain a token from the managed identity endpoint.|The credential has exhausted its retries for a token request.|
    • Refer to the error message for more details on specific failures.
    • Ensure the VM is configured for managed identity as described in [managed identity documentation](https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm).
    • Verify the IMDS endpoint is reachable on the VM. See [below](#verify-imds-is-available-on-the-vm) for instructions.
    |
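+
+When the first error in this table appears, it can help to request a token directly from `ManagedIdentityCredential` with an explicit client ID, so the failure isn't masked by a credential chain. The following is a minimal sketch, assuming a user-assigned identity; the scope is only an example.
+
+```go
+import (
+    "context"
+
+    "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+)
+
+// tokenFromUserAssignedIdentity requests a token for the identity with the given
+// client ID; omit the ID option to use the system-assigned identity instead
+func tokenFromUserAssignedIdentity(ctx context.Context, clientID string) error {
+    cred, err := azidentity.NewManagedIdentityCredential(&azidentity.ManagedIdentityCredentialOptions{
+        ID: azidentity.ClientID(clientID),
+    })
+    if err != nil {
+        return err
+    }
+    _, err = cred.GetToken(ctx, policy.TokenRequestOptions{
+        Scopes: []string{"https://management.azure.com/.default"},
+    })
+    return err
+}
+```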
+
+#### Verify IMDS is available on the VM
+
+If you have access to the VM, you can use `curl` to verify the managed identity endpoint is available.
+
+```sh
+curl 'http://169.254.169.254/metadata/identity/oauth2/token?resource=https://management.core.windows.net&api-version=2018-02-01' -H "Metadata: true"
+```
+
+> This command's output will contain an access token and SHOULD NOT BE SHARED, to avoid compromising account security.
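+
+If Go is more convenient than `curl` on the machine, the same probe can be written as a short function. This sketch mirrors the request above and prints only the status code so the token isn't exposed.
+
+```go
+import (
+    "fmt"
+    "net/http"
+)
+
+// probeIMDS issues the same request as the curl command above and reports only the status
+func probeIMDS() error {
+    req, err := http.NewRequest(http.MethodGet,
+        "http://169.254.169.254/metadata/identity/oauth2/token?resource=https://management.core.windows.net&api-version=2018-02-01", nil)
+    if err != nil {
+        return err
+    }
+    req.Header.Set("Metadata", "true")
+    resp, err := http.DefaultClient.Do(req)
+    if err != nil {
+        // no response usually means IMDS isn't reachable from this host
+        return err
+    }
+    defer resp.Body.Close()
+    // the response body contains an access token on success; don't log it
+    fmt.Println("IMDS responded:", resp.Status)
+    return nil
+}
+```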
+
+### Azure App Service and Azure Functions managed identity
+
+| Error Message |Description| Mitigation |
+|---|---|---|
+|Get "`http://169.254.169.254/...`" i/o timeout|The App Service host hasn't set environment variables for managed identity configuration.|
    • Ensure the App Service is configured for managed identity as described in [App Service documentation](https://docs.microsoft.com/azure/app-service/overview-managed-identity).
    • Verify the App Service environment is properly configured and the managed identity endpoint is available. See [below](#verify-the-app-service-managed-identity-endpoint-is-available) for instructions.
    |
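+
+Because this error usually means the platform didn't inject the managed identity environment variables, a quick in-process check is to confirm they're present before constructing a credential. A minimal sketch; the variable names match the verification steps below.
+
+```go
+import (
+    "fmt"
+    "os"
+)
+
+// managedIdentityConfigured reports whether the App Service managed identity
+// environment variables are present in this process
+func managedIdentityConfigured() bool {
+    endpoint := os.Getenv("IDENTITY_ENDPOINT")
+    header := os.Getenv("IDENTITY_HEADER")
+    fmt.Printf("IDENTITY_ENDPOINT set: %t, IDENTITY_HEADER set: %t\n", endpoint != "", header != "")
+    return endpoint != "" && header != ""
+}
+```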
+
+#### Verify the App Service managed identity endpoint is available
+
+If you can SSH into the App Service, you can verify managed identity is available in the environment. First ensure the environment variables `IDENTITY_ENDPOINT` and `IDENTITY_HEADER` are set. Then you can verify the managed identity endpoint is available using `curl`.
+
+```sh
+curl "$IDENTITY_ENDPOINT?resource=https://management.core.windows.net&api-version=2019-08-01" -H "X-IDENTITY-HEADER: $IDENTITY_HEADER"
+```
+
+> This command's output will contain an access token and SHOULD NOT BE SHARED, to avoid compromising account security.
+
+### Azure Kubernetes Service managed identity
+
+#### Pod Identity
+
+| Error Message |Description| Mitigation |
+|---|---|---|
+|"no azure identity found for request clientID"|The application attempted to authenticate before an identity was assigned to its pod|Verify the pod is labeled correctly. This also occurs when a correctly labeled pod authenticates before the identity is ready. To prevent initialization races, configure NMI to set the Retry-After header in its responses as described in [Pod Identity documentation](https://azure.github.io/aad-pod-identity/docs/configure/feature_flags/#set-retry-after-header-in-nmi-response).|
+
+
+## Troubleshoot AzureCliCredential authentication issues
+
+| Error Message |Description| Mitigation |
+|---|---|---|
+|Azure CLI not found on path|The Azure CLI isn't installed or isn't on the application's path.|
    • Ensure the Azure CLI is installed as described in [Azure CLI documentation](https://docs.microsoft.com/cli/azure/install-azure-cli).
    • Validate the installation location is in the application's `PATH` environment variable.
    | +|Please run 'az login' to set up account|No account is currently logged into the Azure CLI, or the login has expired.|
    • Run `az login` to log into the Azure CLI. More information about Azure CLI authentication is available in the [Azure CLI documentation](https://docs.microsoft.com/cli/azure/authenticate-azure-cli).
    • Verify that the Azure CLI can obtain tokens. See [below](#verify-the-azure-cli-can-obtain-tokens) for instructions.
    |
+
+#### Verify the Azure CLI can obtain tokens
+
+You can manually verify that the Azure CLI can authenticate and obtain tokens. First, use the `account` command to verify the logged-in account.
+
+```azurecli
+az account show
+```
+
+Once you've verified the Azure CLI is using the correct account, you can validate that it's able to obtain tokens for that account.
+
+```azurecli
+az account get-access-token --output json --resource https://management.core.windows.net
+```
+
+> This command's output will contain an access token and SHOULD NOT BE SHARED, to avoid compromising account security.
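+
+To confirm the same login works through this module, you can construct `AzureCLICredential` and request a token for the resource used above. A minimal sketch, not an excerpt from the module's documentation.
+
+```go
+import (
+    "context"
+
+    "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+    "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+)
+
+// tokenFromAzureCLI authenticates as the account the Azure CLI is logged in to
+func tokenFromAzureCLI(ctx context.Context) error {
+    cred, err := azidentity.NewAzureCLICredential(nil)
+    if err != nil {
+        return err
+    }
+    // equivalent to `az account get-access-token --resource https://management.core.windows.net`
+    _, err = cred.GetToken(ctx, policy.TokenRequestOptions{
+        Scopes: []string{"https://management.core.windows.net/.default"},
+    })
+    return err
+}
+```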
+
+
+## Troubleshoot `WorkloadIdentityCredential` authentication issues
+
+| Error Message |Description| Mitigation |
+|---|---|---|
+|no client ID/tenant ID/token file specified|Incomplete configuration|In most cases these values are provided via environment variables set by Azure Workload Identity.
    • If your application runs on Azure Kubernetes Service (AKS) or a cluster that has deployed the Azure Workload Identity admission webhook, check pod labels and service account configuration. See the [AKS documentation](https://learn.microsoft.com/azure/aks/workload-identity-deploy-cluster#disable-workload-identity) and [Azure Workload Identity troubleshooting guide](https://azure.github.io/azure-workload-identity/docs/troubleshooting.html) for more details.
    • If your application isn't running on AKS or your cluster hasn't deployed the Workload Identity admission webhook, set these values in `WorkloadIdentityCredentialOptions` + +## Get additional help + +Additional information on ways to reach out for support can be found in [SUPPORT.md](https://github.com/Azure/azure-sdk-for-go/blob/main/SUPPORT.md). diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/assets.json b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/assets.json new file mode 100644 index 000000000000..47e77f88e3f7 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/assets.json @@ -0,0 +1,6 @@ +{ + "AssetsRepo": "Azure/azure-sdk-assets", + "AssetsRepoPrefixPath": "go", + "TagPrefix": "go/azidentity", + "Tag": "go/azidentity_6225ab0470" +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/azidentity.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/azidentity.go new file mode 100644 index 000000000000..10b742ce1a13 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/azidentity.go @@ -0,0 +1,178 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package azidentity + +import ( + "bytes" + "context" + "errors" + "fmt" + "io" + "net/http" + "net/url" + "os" + "regexp" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/confidential" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/public" +) + +const ( + azureAdditionallyAllowedTenants = "AZURE_ADDITIONALLY_ALLOWED_TENANTS" + azureAuthorityHost = "AZURE_AUTHORITY_HOST" + azureClientCertificatePassword = "AZURE_CLIENT_CERTIFICATE_PASSWORD" + azureClientCertificatePath = "AZURE_CLIENT_CERTIFICATE_PATH" + azureClientID = "AZURE_CLIENT_ID" + azureClientSecret = "AZURE_CLIENT_SECRET" + azureFederatedTokenFile = "AZURE_FEDERATED_TOKEN_FILE" + azurePassword = "AZURE_PASSWORD" + azureRegionalAuthorityName = "AZURE_REGIONAL_AUTHORITY_NAME" + azureTenantID = "AZURE_TENANT_ID" + azureUsername = "AZURE_USERNAME" + + organizationsTenantID = "organizations" + developerSignOnClientID = "04b07795-8ddb-461a-bbee-02f9e1bf7b46" + defaultSuffix = "/.default" +) + +var ( + // capability CP1 indicates the client application is capable of handling CAE claims challenges + cp1 = []string{"CP1"} + errInvalidTenantID = errors.New("invalid tenantID. You can locate your tenantID by following the instructions listed here: https://learn.microsoft.com/partner-center/find-ids-and-domain-names") +) + +// setAuthorityHost initializes the authority host for credentials. Precedence is: +// 1. cloud.Configuration.ActiveDirectoryAuthorityHost value set by user +// 2. value of AZURE_AUTHORITY_HOST +// 3. 
default: Azure Public Cloud +func setAuthorityHost(cc cloud.Configuration) (string, error) { + host := cc.ActiveDirectoryAuthorityHost + if host == "" { + if len(cc.Services) > 0 { + return "", errors.New("missing ActiveDirectoryAuthorityHost for specified cloud") + } + host = cloud.AzurePublic.ActiveDirectoryAuthorityHost + if envAuthorityHost := os.Getenv(azureAuthorityHost); envAuthorityHost != "" { + host = envAuthorityHost + } + } + u, err := url.Parse(host) + if err != nil { + return "", err + } + if u.Scheme != "https" { + return "", errors.New("cannot use an authority host without https") + } + return host, nil +} + +// resolveAdditionalTenants returns a copy of tenants, simplified when tenants contains a wildcard +func resolveAdditionalTenants(tenants []string) []string { + if len(tenants) == 0 { + return nil + } + for _, t := range tenants { + // a wildcard makes all other values redundant + if t == "*" { + return []string{"*"} + } + } + cp := make([]string, len(tenants)) + copy(cp, tenants) + return cp +} + +// resolveTenant returns the correct tenant for a token request +func resolveTenant(defaultTenant, specified, credName string, additionalTenants []string) (string, error) { + if specified == "" || specified == defaultTenant { + return defaultTenant, nil + } + if defaultTenant == "adfs" { + return "", errors.New("ADFS doesn't support tenants") + } + if !validTenantID(specified) { + return "", errInvalidTenantID + } + for _, t := range additionalTenants { + if t == "*" || t == specified { + return specified, nil + } + } + return "", fmt.Errorf(`%s isn't configured to acquire tokens for tenant %q. To enable acquiring tokens for this tenant add it to the AdditionallyAllowedTenants on the credential options, or add "*" to allow acquiring tokens for any tenant`, credName, specified) +} + +// validTenantID return true is it receives a valid tenantID, returns false otherwise +func validTenantID(tenantID string) bool { + match, err := regexp.MatchString("^[0-9a-zA-Z-.]+$", tenantID) + if err != nil { + return false + } + return match +} + +func newPipelineAdapter(opts *azcore.ClientOptions) pipelineAdapter { + pl := runtime.NewPipeline(component, version, runtime.PipelineOptions{}, opts) + return pipelineAdapter{pl: pl} +} + +type pipelineAdapter struct { + pl runtime.Pipeline +} + +func (p pipelineAdapter) CloseIdleConnections() { + // do nothing +} + +func (p pipelineAdapter) Do(r *http.Request) (*http.Response, error) { + req, err := runtime.NewRequest(r.Context(), r.Method, r.URL.String()) + if err != nil { + return nil, err + } + if r.Body != nil && r.Body != http.NoBody { + // create a rewindable body from the existing body as required + var body io.ReadSeekCloser + if rsc, ok := r.Body.(io.ReadSeekCloser); ok { + body = rsc + } else { + b, err := io.ReadAll(r.Body) + if err != nil { + return nil, err + } + body = streaming.NopCloser(bytes.NewReader(b)) + } + err = req.SetBody(body, r.Header.Get("Content-Type")) + if err != nil { + return nil, err + } + } + resp, err := p.pl.Do(req) + if err != nil { + return nil, err + } + return resp, err +} + +// enables fakes for test scenarios +type msalConfidentialClient interface { + AcquireTokenSilent(ctx context.Context, scopes []string, options ...confidential.AcquireSilentOption) (confidential.AuthResult, error) + AcquireTokenByAuthCode(ctx context.Context, code string, redirectURI string, scopes []string, options ...confidential.AcquireByAuthCodeOption) (confidential.AuthResult, error) + AcquireTokenByCredential(ctx context.Context, 
scopes []string, options ...confidential.AcquireByCredentialOption) (confidential.AuthResult, error) + AcquireTokenOnBehalfOf(ctx context.Context, userAssertion string, scopes []string, options ...confidential.AcquireOnBehalfOfOption) (confidential.AuthResult, error) +} + +// enables fakes for test scenarios +type msalPublicClient interface { + AcquireTokenSilent(ctx context.Context, scopes []string, options ...public.AcquireSilentOption) (public.AuthResult, error) + AcquireTokenByUsernamePassword(ctx context.Context, scopes []string, username string, password string, options ...public.AcquireByUsernamePasswordOption) (public.AuthResult, error) + AcquireTokenByDeviceCode(ctx context.Context, scopes []string, options ...public.AcquireByDeviceCodeOption) (public.DeviceCode, error) + AcquireTokenByAuthCode(ctx context.Context, code string, redirectURI string, scopes []string, options ...public.AcquireByAuthCodeOption) (public.AuthResult, error) + AcquireTokenInteractive(ctx context.Context, scopes []string, options ...public.AcquireInteractiveOption) (public.AuthResult, error) +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/azure_cli_credential.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/azure_cli_credential.go new file mode 100644 index 000000000000..55a0d654347e --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/azure_cli_credential.go @@ -0,0 +1,183 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package azidentity + +import ( + "bytes" + "context" + "encoding/json" + "errors" + "fmt" + "os" + "os/exec" + "regexp" + "runtime" + "strings" + "sync" + "time" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/Azure/azure-sdk-for-go/sdk/internal/log" +) + +const ( + credNameAzureCLI = "AzureCLICredential" + timeoutCLIRequest = 10 * time.Second +) + +// used by tests to fake invoking the CLI +type azureCLITokenProvider func(ctx context.Context, resource string, tenantID string) ([]byte, error) + +// AzureCLICredentialOptions contains optional parameters for AzureCLICredential. +type AzureCLICredentialOptions struct { + // AdditionallyAllowedTenants specifies tenants for which the credential may acquire tokens, in addition + // to TenantID. Add the wildcard value "*" to allow the credential to acquire tokens for any tenant the + // logged in account can access. + AdditionallyAllowedTenants []string + // TenantID identifies the tenant the credential should authenticate in. + // Defaults to the CLI's default tenant, which is typically the home tenant of the logged in user. + TenantID string + + tokenProvider azureCLITokenProvider +} + +// init returns an instance of AzureCLICredentialOptions initialized with default values. +func (o *AzureCLICredentialOptions) init() { + if o.tokenProvider == nil { + o.tokenProvider = defaultTokenProvider + } +} + +// AzureCLICredential authenticates as the identity logged in to the Azure CLI. +type AzureCLICredential struct { + mu *sync.Mutex + opts AzureCLICredentialOptions +} + +// NewAzureCLICredential constructs an AzureCLICredential. Pass nil to accept default options. 
+func NewAzureCLICredential(options *AzureCLICredentialOptions) (*AzureCLICredential, error) { + cp := AzureCLICredentialOptions{} + if options != nil { + cp = *options + } + cp.init() + cp.AdditionallyAllowedTenants = resolveAdditionalTenants(cp.AdditionallyAllowedTenants) + return &AzureCLICredential{mu: &sync.Mutex{}, opts: cp}, nil +} + +// GetToken requests a token from the Azure CLI. This credential doesn't cache tokens, so every call invokes the CLI. +// This method is called automatically by Azure SDK clients. +func (c *AzureCLICredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) { + if len(opts.Scopes) != 1 { + return azcore.AccessToken{}, errors.New(credNameAzureCLI + ": GetToken() requires exactly one scope") + } + tenant, err := resolveTenant(c.opts.TenantID, opts.TenantID, credNameAzureCLI, c.opts.AdditionallyAllowedTenants) + if err != nil { + return azcore.AccessToken{}, err + } + // pass the CLI an AAD v1 resource because we don't know which CLI version is installed and older ones don't support v2 scopes + opts.Scopes = []string{strings.TrimSuffix(opts.Scopes[0], defaultSuffix)} + c.mu.Lock() + defer c.mu.Unlock() + b, err := c.opts.tokenProvider(ctx, opts.Scopes[0], tenant) + if err != nil { + return azcore.AccessToken{}, err + } + at, err := c.createAccessToken(b) + if err != nil { + return azcore.AccessToken{}, err + } + msg := fmt.Sprintf("%s.GetToken() acquired a token for scope %q", credNameAzureCLI, strings.Join(opts.Scopes, ", ")) + log.Write(EventAuthentication, msg) + return at, nil +} + +var defaultTokenProvider azureCLITokenProvider = func(ctx context.Context, resource string, tenantID string) ([]byte, error) { + match, err := regexp.MatchString("^[0-9a-zA-Z-.:/]+$", resource) + if err != nil { + return nil, err + } + if !match { + return nil, fmt.Errorf(`%s: unexpected scope "%s". 
Only alphanumeric characters and ".", ";", "-", and "/" are allowed`, credNameAzureCLI, resource) + } + + // set a default timeout for this authentication iff the application hasn't done so already + var cancel context.CancelFunc + if _, hasDeadline := ctx.Deadline(); !hasDeadline { + ctx, cancel = context.WithTimeout(ctx, timeoutCLIRequest) + defer cancel() + } + + commandLine := "az account get-access-token -o json --resource " + resource + if tenantID != "" { + commandLine += " --tenant " + tenantID + } + var cliCmd *exec.Cmd + if runtime.GOOS == "windows" { + dir := os.Getenv("SYSTEMROOT") + if dir == "" { + return nil, newCredentialUnavailableError(credNameAzureCLI, "environment variable 'SYSTEMROOT' has no value") + } + cliCmd = exec.CommandContext(ctx, "cmd.exe", "/c", commandLine) + cliCmd.Dir = dir + } else { + cliCmd = exec.CommandContext(ctx, "/bin/sh", "-c", commandLine) + cliCmd.Dir = "/bin" + } + cliCmd.Env = os.Environ() + var stderr bytes.Buffer + cliCmd.Stderr = &stderr + + output, err := cliCmd.Output() + if err != nil { + msg := stderr.String() + var exErr *exec.ExitError + if errors.As(err, &exErr) && exErr.ExitCode() == 127 || strings.HasPrefix(msg, "'az' is not recognized") { + msg = "Azure CLI not found on path" + } + if msg == "" { + msg = err.Error() + } + return nil, newCredentialUnavailableError(credNameAzureCLI, msg) + } + + return output, nil +} + +func (c *AzureCLICredential) createAccessToken(tk []byte) (azcore.AccessToken, error) { + t := struct { + AccessToken string `json:"accessToken"` + Authority string `json:"_authority"` + ClientID string `json:"_clientId"` + ExpiresOn string `json:"expiresOn"` + IdentityProvider string `json:"identityProvider"` + IsMRRT bool `json:"isMRRT"` + RefreshToken string `json:"refreshToken"` + Resource string `json:"resource"` + TokenType string `json:"tokenType"` + UserID string `json:"userId"` + }{} + err := json.Unmarshal(tk, &t) + if err != nil { + return azcore.AccessToken{}, err + } + + // the Azure CLI's "expiresOn" is local time + exp, err := time.ParseInLocation("2006-01-02 15:04:05.999999", t.ExpiresOn, time.Local) + if err != nil { + return azcore.AccessToken{}, fmt.Errorf("Error parsing token expiration time %q: %v", t.ExpiresOn, err) + } + + converted := azcore.AccessToken{ + Token: t.AccessToken, + ExpiresOn: exp.UTC(), + } + return converted, nil +} + +var _ azcore.TokenCredential = (*AzureCLICredential)(nil) diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/chained_token_credential.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/chained_token_credential.go new file mode 100644 index 000000000000..dc855edf7868 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/chained_token_credential.go @@ -0,0 +1,138 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package azidentity + +import ( + "context" + "errors" + "fmt" + "strings" + "sync" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/Azure/azure-sdk-for-go/sdk/internal/log" +) + +// ChainedTokenCredentialOptions contains optional parameters for ChainedTokenCredential. +type ChainedTokenCredentialOptions struct { + // RetrySources configures how the credential uses its sources. 
When true, the credential always attempts to + // authenticate through each source in turn, stopping when one succeeds. When false, the credential authenticates + // only through this first successful source--it never again tries the sources which failed. + RetrySources bool +} + +// ChainedTokenCredential links together multiple credentials and tries them sequentially when authenticating. By default, +// it tries all the credentials until one authenticates, after which it always uses that credential. +type ChainedTokenCredential struct { + cond *sync.Cond + iterating bool + name string + retrySources bool + sources []azcore.TokenCredential + successfulCredential azcore.TokenCredential +} + +// NewChainedTokenCredential creates a ChainedTokenCredential. Pass nil for options to accept defaults. +func NewChainedTokenCredential(sources []azcore.TokenCredential, options *ChainedTokenCredentialOptions) (*ChainedTokenCredential, error) { + if len(sources) == 0 { + return nil, errors.New("sources must contain at least one TokenCredential") + } + for _, source := range sources { + if source == nil { // cannot have a nil credential in the chain or else the application will panic when GetToken() is called on nil + return nil, errors.New("sources cannot contain nil") + } + } + cp := make([]azcore.TokenCredential, len(sources)) + copy(cp, sources) + if options == nil { + options = &ChainedTokenCredentialOptions{} + } + return &ChainedTokenCredential{ + cond: sync.NewCond(&sync.Mutex{}), + name: "ChainedTokenCredential", + retrySources: options.RetrySources, + sources: cp, + }, nil +} + +// GetToken calls GetToken on the chained credentials in turn, stopping when one returns a token. +// This method is called automatically by Azure SDK clients. +func (c *ChainedTokenCredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) { + if !c.retrySources { + // ensure only one goroutine at a time iterates the sources and perhaps sets c.successfulCredential + c.cond.L.Lock() + for { + if c.successfulCredential != nil { + c.cond.L.Unlock() + return c.successfulCredential.GetToken(ctx, opts) + } + if !c.iterating { + c.iterating = true + // allow other goroutines to wait while this one iterates + c.cond.L.Unlock() + break + } + c.cond.Wait() + } + } + + var ( + err error + errs []error + successfulCredential azcore.TokenCredential + token azcore.AccessToken + unavailableErr *credentialUnavailableError + ) + for _, cred := range c.sources { + token, err = cred.GetToken(ctx, opts) + if err == nil { + log.Writef(EventAuthentication, "%s authenticated with %s", c.name, extractCredentialName(cred)) + successfulCredential = cred + break + } + errs = append(errs, err) + // continue to the next source iff this one returned credentialUnavailableError + if !errors.As(err, &unavailableErr) { + break + } + } + if c.iterating { + c.cond.L.Lock() + // this is nil when all credentials returned an error + c.successfulCredential = successfulCredential + c.iterating = false + c.cond.L.Unlock() + c.cond.Broadcast() + } + // err is the error returned by the last GetToken call. 
It will be nil when that call succeeds + if err != nil { + // return credentialUnavailableError iff all sources did so; return AuthenticationFailedError otherwise + msg := createChainedErrorMessage(errs) + if errors.As(err, &unavailableErr) { + err = newCredentialUnavailableError(c.name, msg) + } else { + res := getResponseFromError(err) + err = newAuthenticationFailedError(c.name, msg, res, err) + } + } + return token, err +} + +func createChainedErrorMessage(errs []error) string { + msg := "failed to acquire a token.\nAttempted credentials:" + for _, err := range errs { + msg += fmt.Sprintf("\n\t%s", err.Error()) + } + return msg +} + +func extractCredentialName(credential azcore.TokenCredential) string { + return strings.TrimPrefix(fmt.Sprintf("%T", credential), "*azidentity.") +} + +var _ azcore.TokenCredential = (*ChainedTokenCredential)(nil) diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/ci.yml b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/ci.yml new file mode 100644 index 000000000000..9002ea0b0505 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/ci.yml @@ -0,0 +1,34 @@ +# NOTE: Please refer to https://aka.ms/azsdk/engsys/ci-yaml before editing this file. +trigger: + branches: + include: + - main + - feature/* + - hotfix/* + - release/* + paths: + include: + - sdk/azidentity/ + +pr: + branches: + include: + - main + - feature/* + - hotfix/* + - release/* + paths: + include: + - sdk/azidentity/ + +stages: +- template: /eng/pipelines/templates/jobs/archetype-sdk-client.yml + parameters: + RunLiveTests: true + ServiceDirectory: 'azidentity' + CloudConfig: + Public: + SubscriptionConfigurations: + - $(sub-config-azure-cloud-test-resources) + # Contains alternate tenant, AAD app and cert info for testing + - $(sub-config-identity-test-resources) diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/client_assertion_credential.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/client_assertion_credential.go new file mode 100644 index 000000000000..303d5fc0925c --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/client_assertion_credential.go @@ -0,0 +1,75 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package azidentity + +import ( + "context" + "errors" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/confidential" +) + +const credNameAssertion = "ClientAssertionCredential" + +// ClientAssertionCredential authenticates an application with assertions provided by a callback function. +// This credential is for advanced scenarios. [ClientCertificateCredential] has a more convenient API for +// the most common assertion scenario, authenticating a service principal with a certificate. See +// [Azure AD documentation] for details of the assertion format. +// +// [Azure AD documentation]: https://docs.microsoft.com/azure/active-directory/develop/active-directory-certificate-credentials#assertion-format +type ClientAssertionCredential struct { + client *confidentialClient +} + +// ClientAssertionCredentialOptions contains optional parameters for ClientAssertionCredential. 
+type ClientAssertionCredentialOptions struct { + azcore.ClientOptions + + // AdditionallyAllowedTenants specifies additional tenants for which the credential may acquire tokens. + // Add the wildcard value "*" to allow the credential to acquire tokens for any tenant in which the + // application is registered. + AdditionallyAllowedTenants []string + // DisableInstanceDiscovery should be set true only by applications authenticating in disconnected clouds, or + // private clouds such as Azure Stack. It determines whether the credential requests Azure AD instance metadata + // from https://login.microsoft.com before authenticating. Setting this to true will skip this request, making + // the application responsible for ensuring the configured authority is valid and trustworthy. + DisableInstanceDiscovery bool +} + +// NewClientAssertionCredential constructs a ClientAssertionCredential. The getAssertion function must be thread safe. Pass nil for options to accept defaults. +func NewClientAssertionCredential(tenantID, clientID string, getAssertion func(context.Context) (string, error), options *ClientAssertionCredentialOptions) (*ClientAssertionCredential, error) { + if getAssertion == nil { + return nil, errors.New("getAssertion must be a function that returns assertions") + } + if options == nil { + options = &ClientAssertionCredentialOptions{} + } + cred := confidential.NewCredFromAssertionCallback( + func(ctx context.Context, _ confidential.AssertionRequestOptions) (string, error) { + return getAssertion(ctx) + }, + ) + msalOpts := confidentialClientOptions{ + AdditionallyAllowedTenants: options.AdditionallyAllowedTenants, + ClientOptions: options.ClientOptions, + DisableInstanceDiscovery: options.DisableInstanceDiscovery, + } + c, err := newConfidentialClient(tenantID, clientID, credNameAssertion, cred, msalOpts) + if err != nil { + return nil, err + } + return &ClientAssertionCredential{client: c}, nil +} + +// GetToken requests an access token from Azure Active Directory. This method is called automatically by Azure SDK clients. +func (c *ClientAssertionCredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) { + return c.client.GetToken(ctx, opts) +} + +var _ azcore.TokenCredential = (*ClientAssertionCredential)(nil) diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/client_certificate_credential.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/client_certificate_credential.go new file mode 100644 index 000000000000..d3300e3053bd --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/client_certificate_credential.go @@ -0,0 +1,160 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package azidentity + +import ( + "context" + "crypto" + "crypto/x509" + "encoding/pem" + "errors" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/confidential" + "golang.org/x/crypto/pkcs12" +) + +const credNameCert = "ClientCertificateCredential" + +// ClientCertificateCredentialOptions contains optional parameters for ClientCertificateCredential. +type ClientCertificateCredentialOptions struct { + azcore.ClientOptions + + // AdditionallyAllowedTenants specifies additional tenants for which the credential may acquire tokens. 
+ // Add the wildcard value "*" to allow the credential to acquire tokens for any tenant in which the + // application is registered. + AdditionallyAllowedTenants []string + // DisableInstanceDiscovery should be set true only by applications authenticating in disconnected clouds, or + // private clouds such as Azure Stack. It determines whether the credential requests Azure AD instance metadata + // from https://login.microsoft.com before authenticating. Setting this to true will skip this request, making + // the application responsible for ensuring the configured authority is valid and trustworthy. + DisableInstanceDiscovery bool + // SendCertificateChain controls whether the credential sends the public certificate chain in the x5c + // header of each token request's JWT. This is required for Subject Name/Issuer (SNI) authentication. + // Defaults to False. + SendCertificateChain bool +} + +// ClientCertificateCredential authenticates a service principal with a certificate. +type ClientCertificateCredential struct { + client *confidentialClient +} + +// NewClientCertificateCredential constructs a ClientCertificateCredential. Pass nil for options to accept defaults. +func NewClientCertificateCredential(tenantID string, clientID string, certs []*x509.Certificate, key crypto.PrivateKey, options *ClientCertificateCredentialOptions) (*ClientCertificateCredential, error) { + if len(certs) == 0 { + return nil, errors.New("at least one certificate is required") + } + if options == nil { + options = &ClientCertificateCredentialOptions{} + } + cred, err := confidential.NewCredFromCert(certs, key) + if err != nil { + return nil, err + } + msalOpts := confidentialClientOptions{ + AdditionallyAllowedTenants: options.AdditionallyAllowedTenants, + ClientOptions: options.ClientOptions, + DisableInstanceDiscovery: options.DisableInstanceDiscovery, + SendX5C: options.SendCertificateChain, + } + c, err := newConfidentialClient(tenantID, clientID, credNameCert, cred, msalOpts) + if err != nil { + return nil, err + } + return &ClientCertificateCredential{client: c}, nil +} + +// GetToken requests an access token from Azure Active Directory. This method is called automatically by Azure SDK clients. +func (c *ClientCertificateCredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) { + return c.client.GetToken(ctx, opts) +} + +// ParseCertificates loads certificates and a private key, in PEM or PKCS12 format, for use with NewClientCertificateCredential. +// Pass nil for password if the private key isn't encrypted. This function can't decrypt keys in PEM format. 
+func ParseCertificates(certData []byte, password []byte) ([]*x509.Certificate, crypto.PrivateKey, error) { + var blocks []*pem.Block + var err error + if len(password) == 0 { + blocks, err = loadPEMCert(certData) + } + if len(blocks) == 0 || err != nil { + blocks, err = loadPKCS12Cert(certData, string(password)) + } + if err != nil { + return nil, nil, err + } + var certs []*x509.Certificate + var pk crypto.PrivateKey + for _, block := range blocks { + switch block.Type { + case "CERTIFICATE": + c, err := x509.ParseCertificate(block.Bytes) + if err != nil { + return nil, nil, err + } + certs = append(certs, c) + case "PRIVATE KEY": + if pk != nil { + return nil, nil, errors.New("certData contains multiple private keys") + } + pk, err = x509.ParsePKCS8PrivateKey(block.Bytes) + if err != nil { + pk, err = x509.ParsePKCS1PrivateKey(block.Bytes) + } + if err != nil { + return nil, nil, err + } + case "RSA PRIVATE KEY": + if pk != nil { + return nil, nil, errors.New("certData contains multiple private keys") + } + pk, err = x509.ParsePKCS1PrivateKey(block.Bytes) + if err != nil { + return nil, nil, err + } + } + } + if len(certs) == 0 { + return nil, nil, errors.New("found no certificate") + } + if pk == nil { + return nil, nil, errors.New("found no private key") + } + return certs, pk, nil +} + +func loadPEMCert(certData []byte) ([]*pem.Block, error) { + blocks := []*pem.Block{} + for { + var block *pem.Block + block, certData = pem.Decode(certData) + if block == nil { + break + } + blocks = append(blocks, block) + } + if len(blocks) == 0 { + return nil, errors.New("didn't find any PEM blocks") + } + return blocks, nil +} + +func loadPKCS12Cert(certData []byte, password string) ([]*pem.Block, error) { + blocks, err := pkcs12.ToPEM(certData, password) + if err != nil { + return nil, err + } + if len(blocks) == 0 { + // not mentioning PKCS12 in this message because we end up here when certData is garbage + return nil, errors.New("didn't find any certificate content") + } + return blocks, err +} + +var _ azcore.TokenCredential = (*ClientCertificateCredential)(nil) diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/client_secret_credential.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/client_secret_credential.go new file mode 100644 index 000000000000..d2ff7582b997 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/client_secret_credential.go @@ -0,0 +1,65 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package azidentity + +import ( + "context" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/confidential" +) + +const credNameSecret = "ClientSecretCredential" + +// ClientSecretCredentialOptions contains optional parameters for ClientSecretCredential. +type ClientSecretCredentialOptions struct { + azcore.ClientOptions + + // AdditionallyAllowedTenants specifies additional tenants for which the credential may acquire tokens. + // Add the wildcard value "*" to allow the credential to acquire tokens for any tenant in which the + // application is registered. + AdditionallyAllowedTenants []string + // DisableInstanceDiscovery should be set true only by applications authenticating in disconnected clouds, or + // private clouds such as Azure Stack. 
It determines whether the credential requests Azure AD instance metadata + // from https://login.microsoft.com before authenticating. Setting this to true will skip this request, making + // the application responsible for ensuring the configured authority is valid and trustworthy. + DisableInstanceDiscovery bool +} + +// ClientSecretCredential authenticates an application with a client secret. +type ClientSecretCredential struct { + client *confidentialClient +} + +// NewClientSecretCredential constructs a ClientSecretCredential. Pass nil for options to accept defaults. +func NewClientSecretCredential(tenantID string, clientID string, clientSecret string, options *ClientSecretCredentialOptions) (*ClientSecretCredential, error) { + if options == nil { + options = &ClientSecretCredentialOptions{} + } + cred, err := confidential.NewCredFromSecret(clientSecret) + if err != nil { + return nil, err + } + msalOpts := confidentialClientOptions{ + AdditionallyAllowedTenants: options.AdditionallyAllowedTenants, + ClientOptions: options.ClientOptions, + DisableInstanceDiscovery: options.DisableInstanceDiscovery, + } + c, err := newConfidentialClient(tenantID, clientID, credNameSecret, cred, msalOpts) + if err != nil { + return nil, err + } + return &ClientSecretCredential{c}, nil +} + +// GetToken requests an access token from Azure Active Directory. This method is called automatically by Azure SDK clients. +func (c *ClientSecretCredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) { + return c.client.GetToken(ctx, opts) +} + +var _ azcore.TokenCredential = (*ClientSecretCredential)(nil) diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/confidential_client.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/confidential_client.go new file mode 100644 index 000000000000..4853a9a0095d --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/confidential_client.go @@ -0,0 +1,156 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. 
+ +package azidentity + +import ( + "context" + "errors" + "fmt" + "os" + "strings" + "sync" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" + "github.com/Azure/azure-sdk-for-go/sdk/internal/log" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/confidential" +) + +type confidentialClientOptions struct { + azcore.ClientOptions + + AdditionallyAllowedTenants []string + // Assertion for on-behalf-of authentication + Assertion string + DisableInstanceDiscovery, SendX5C bool +} + +// confidentialClient wraps the MSAL confidential client +type confidentialClient struct { + cae, noCAE msalConfidentialClient + caeMu, noCAEMu, clientMu *sync.Mutex + clientID, tenantID string + cred confidential.Credential + host string + name string + opts confidentialClientOptions + region string +} + +func newConfidentialClient(tenantID, clientID, name string, cred confidential.Credential, opts confidentialClientOptions) (*confidentialClient, error) { + if !validTenantID(tenantID) { + return nil, errInvalidTenantID + } + host, err := setAuthorityHost(opts.Cloud) + if err != nil { + return nil, err + } + opts.AdditionallyAllowedTenants = resolveAdditionalTenants(opts.AdditionallyAllowedTenants) + return &confidentialClient{ + caeMu: &sync.Mutex{}, + clientID: clientID, + clientMu: &sync.Mutex{}, + cred: cred, + host: host, + name: name, + noCAEMu: &sync.Mutex{}, + opts: opts, + region: os.Getenv(azureRegionalAuthorityName), + tenantID: tenantID, + }, nil +} + +// GetToken requests an access token from MSAL, checking the cache first. +func (c *confidentialClient) GetToken(ctx context.Context, tro policy.TokenRequestOptions) (azcore.AccessToken, error) { + if len(tro.Scopes) < 1 { + return azcore.AccessToken{}, fmt.Errorf("%s.GetToken() requires at least one scope", c.name) + } + // we don't resolve the tenant for managed identities because they acquire tokens only from their home tenants + if c.name != credNameManagedIdentity { + tenant, err := c.resolveTenant(tro.TenantID) + if err != nil { + return azcore.AccessToken{}, err + } + tro.TenantID = tenant + } + client, mu, err := c.client(ctx, tro) + if err != nil { + return azcore.AccessToken{}, err + } + mu.Lock() + defer mu.Unlock() + var ar confidential.AuthResult + if c.opts.Assertion != "" { + ar, err = client.AcquireTokenOnBehalfOf(ctx, c.opts.Assertion, tro.Scopes, confidential.WithClaims(tro.Claims), confidential.WithTenantID(tro.TenantID)) + } else { + ar, err = client.AcquireTokenSilent(ctx, tro.Scopes, confidential.WithClaims(tro.Claims), confidential.WithTenantID(tro.TenantID)) + if err != nil { + ar, err = client.AcquireTokenByCredential(ctx, tro.Scopes, confidential.WithClaims(tro.Claims), confidential.WithTenantID(tro.TenantID)) + } + } + if err != nil { + // We could get a credentialUnavailableError from managed identity authentication because in that case the error comes from our code. + // We return it directly because it affects the behavior of credential chains. Otherwise, we return AuthenticationFailedError. 
+ var unavailableErr *credentialUnavailableError + if !errors.As(err, &unavailableErr) { + res := getResponseFromError(err) + err = newAuthenticationFailedError(c.name, err.Error(), res, err) + } + } else { + msg := fmt.Sprintf("%s.GetToken() acquired a token for scope %q", c.name, strings.Join(ar.GrantedScopes, ", ")) + log.Write(EventAuthentication, msg) + } + return azcore.AccessToken{Token: ar.AccessToken, ExpiresOn: ar.ExpiresOn.UTC()}, err +} + +func (c *confidentialClient) client(ctx context.Context, tro policy.TokenRequestOptions) (msalConfidentialClient, *sync.Mutex, error) { + c.clientMu.Lock() + defer c.clientMu.Unlock() + if tro.EnableCAE { + if c.cae == nil { + client, err := c.newMSALClient(true) + if err != nil { + return nil, nil, err + } + c.cae = client + } + return c.cae, c.caeMu, nil + } + if c.noCAE == nil { + client, err := c.newMSALClient(false) + if err != nil { + return nil, nil, err + } + c.noCAE = client + } + return c.noCAE, c.noCAEMu, nil +} + +func (c *confidentialClient) newMSALClient(enableCAE bool) (msalConfidentialClient, error) { + authority := runtime.JoinPaths(c.host, c.tenantID) + o := []confidential.Option{ + confidential.WithAzureRegion(c.region), + confidential.WithHTTPClient(newPipelineAdapter(&c.opts.ClientOptions)), + } + if enableCAE { + o = append(o, confidential.WithClientCapabilities(cp1)) + } + if c.opts.SendX5C { + o = append(o, confidential.WithX5C()) + } + if c.opts.DisableInstanceDiscovery || strings.ToLower(c.tenantID) == "adfs" { + o = append(o, confidential.WithInstanceDiscovery(false)) + } + return confidential.New(authority, c.clientID, c.cred, o...) +} + +// resolveTenant returns the correct tenant for a token request given the client's +// configuration, or an error when that configuration doesn't allow the specified tenant +func (c *confidentialClient) resolveTenant(specified string) (string, error) { + return resolveTenant(c.tenantID, specified, c.name, c.opts.AdditionallyAllowedTenants) +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/default_azure_credential.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/default_azure_credential.go new file mode 100644 index 000000000000..7647c60b1cb7 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/default_azure_credential.go @@ -0,0 +1,196 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package azidentity + +import ( + "context" + "errors" + "os" + "strings" + "time" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/Azure/azure-sdk-for-go/sdk/internal/log" +) + +// DefaultAzureCredentialOptions contains optional parameters for DefaultAzureCredential. +// These options may not apply to all credentials in the chain. +type DefaultAzureCredentialOptions struct { + // ClientOptions has additional options for credentials that use an Azure SDK HTTP pipeline. These options don't apply + // to credential types that authenticate via external tools such as the Azure CLI. + azcore.ClientOptions + + // AdditionallyAllowedTenants specifies additional tenants for which the credential may acquire tokens. Add + // the wildcard value "*" to allow the credential to acquire tokens for any tenant. 
This value can also be + // set as a semicolon delimited list of tenants in the environment variable AZURE_ADDITIONALLY_ALLOWED_TENANTS. + AdditionallyAllowedTenants []string + // DisableInstanceDiscovery should be set true only by applications authenticating in disconnected clouds, or + // private clouds such as Azure Stack. It determines whether the credential requests Azure AD instance metadata + // from https://login.microsoft.com before authenticating. Setting this to true will skip this request, making + // the application responsible for ensuring the configured authority is valid and trustworthy. + DisableInstanceDiscovery bool + // TenantID sets the default tenant for authentication via the Azure CLI and workload identity. + TenantID string +} + +// DefaultAzureCredential is a default credential chain for applications that will deploy to Azure. +// It combines credentials suitable for deployment with credentials suitable for local development. +// It attempts to authenticate with each of these credential types, in the following order, stopping +// when one provides a token: +// +// - [EnvironmentCredential] +// - [WorkloadIdentityCredential], if environment variable configuration is set by the Azure workload +// identity webhook. Use [WorkloadIdentityCredential] directly when not using the webhook or needing +// more control over its configuration. +// - [ManagedIdentityCredential] +// - [AzureCLICredential] +// +// Consult the documentation for these credential types for more information on how they authenticate. +// Once a credential has successfully authenticated, DefaultAzureCredential will use that credential for +// every subsequent authentication. +type DefaultAzureCredential struct { + chain *ChainedTokenCredential +} + +// NewDefaultAzureCredential creates a DefaultAzureCredential. Pass nil for options to accept defaults. 
+func NewDefaultAzureCredential(options *DefaultAzureCredentialOptions) (*DefaultAzureCredential, error) { + var creds []azcore.TokenCredential + var errorMessages []string + + if options == nil { + options = &DefaultAzureCredentialOptions{} + } + additionalTenants := options.AdditionallyAllowedTenants + if len(additionalTenants) == 0 { + if tenants := os.Getenv(azureAdditionallyAllowedTenants); tenants != "" { + additionalTenants = strings.Split(tenants, ";") + } + } + + envCred, err := NewEnvironmentCredential(&EnvironmentCredentialOptions{ + ClientOptions: options.ClientOptions, + DisableInstanceDiscovery: options.DisableInstanceDiscovery, + additionallyAllowedTenants: additionalTenants, + }) + if err == nil { + creds = append(creds, envCred) + } else { + errorMessages = append(errorMessages, "EnvironmentCredential: "+err.Error()) + creds = append(creds, &defaultCredentialErrorReporter{credType: "EnvironmentCredential", err: err}) + } + + wic, err := NewWorkloadIdentityCredential(&WorkloadIdentityCredentialOptions{ + AdditionallyAllowedTenants: additionalTenants, + ClientOptions: options.ClientOptions, + DisableInstanceDiscovery: options.DisableInstanceDiscovery, + TenantID: options.TenantID, + }) + if err == nil { + creds = append(creds, wic) + } else { + errorMessages = append(errorMessages, credNameWorkloadIdentity+": "+err.Error()) + creds = append(creds, &defaultCredentialErrorReporter{credType: credNameWorkloadIdentity, err: err}) + } + + o := &ManagedIdentityCredentialOptions{ClientOptions: options.ClientOptions} + if ID, ok := os.LookupEnv(azureClientID); ok { + o.ID = ClientID(ID) + } + miCred, err := NewManagedIdentityCredential(o) + if err == nil { + creds = append(creds, &timeoutWrapper{mic: miCred, timeout: time.Second}) + } else { + errorMessages = append(errorMessages, credNameManagedIdentity+": "+err.Error()) + creds = append(creds, &defaultCredentialErrorReporter{credType: credNameManagedIdentity, err: err}) + } + + cliCred, err := NewAzureCLICredential(&AzureCLICredentialOptions{AdditionallyAllowedTenants: additionalTenants, TenantID: options.TenantID}) + if err == nil { + creds = append(creds, cliCred) + } else { + errorMessages = append(errorMessages, credNameAzureCLI+": "+err.Error()) + creds = append(creds, &defaultCredentialErrorReporter{credType: credNameAzureCLI, err: err}) + } + + if len(errorMessages) > 0 { + log.Writef(EventAuthentication, "NewDefaultAzureCredential failed to initialize some credentials:\n\t%s", strings.Join(errorMessages, "\n\t")) + } + + chain, err := NewChainedTokenCredential(creds, nil) + if err != nil { + return nil, err + } + chain.name = "DefaultAzureCredential" + return &DefaultAzureCredential{chain: chain}, nil +} + +// GetToken requests an access token from Azure Active Directory. This method is called automatically by Azure SDK clients. +func (c *DefaultAzureCredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) { + return c.chain.GetToken(ctx, opts) +} + +var _ azcore.TokenCredential = (*DefaultAzureCredential)(nil) + +// defaultCredentialErrorReporter is a substitute for credentials that couldn't be constructed. +// Its GetToken method always returns a credentialUnavailableError having the same message as +// the error that prevented constructing the credential. 
This ensures the message is present +// in the error returned by ChainedTokenCredential.GetToken() +type defaultCredentialErrorReporter struct { + credType string + err error +} + +func (d *defaultCredentialErrorReporter) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) { + if _, ok := d.err.(*credentialUnavailableError); ok { + return azcore.AccessToken{}, d.err + } + return azcore.AccessToken{}, newCredentialUnavailableError(d.credType, d.err.Error()) +} + +var _ azcore.TokenCredential = (*defaultCredentialErrorReporter)(nil) + +// timeoutWrapper prevents a potentially very long timeout when managed identity isn't available +type timeoutWrapper struct { + mic *ManagedIdentityCredential + // timeout applies to all auth attempts until one doesn't time out + timeout time.Duration +} + +// GetToken wraps DefaultAzureCredential's initial managed identity auth attempt with a short timeout +// because managed identity may not be available and connecting to IMDS can take several minutes to time out. +func (w *timeoutWrapper) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) { + var tk azcore.AccessToken + var err error + // no need to synchronize around this value because it's written only within ChainedTokenCredential's critical section + if w.timeout > 0 { + c, cancel := context.WithTimeout(ctx, w.timeout) + defer cancel() + tk, err = w.mic.GetToken(c, opts) + if isAuthFailedDueToContext(err) { + err = newCredentialUnavailableError(credNameManagedIdentity, "managed identity timed out. See https://aka.ms/azsdk/go/identity/troubleshoot#dac for more information") + } else { + // some managed identity implementation is available, so don't apply the timeout to future calls + w.timeout = 0 + } + } else { + tk, err = w.mic.GetToken(ctx, opts) + } + return tk, err +} + +// unwraps nested AuthenticationFailedErrors to get the root error +func isAuthFailedDueToContext(err error) bool { + for { + var authFailedErr *AuthenticationFailedError + if !errors.As(err, &authFailedErr) { + break + } + err = authFailedErr.err + } + return errors.Is(err, context.Canceled) || errors.Is(err, context.DeadlineExceeded) +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/device_code_credential.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/device_code_credential.go new file mode 100644 index 000000000000..d245c269a760 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/device_code_credential.go @@ -0,0 +1,106 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package azidentity + +import ( + "context" + "fmt" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" +) + +const credNameDeviceCode = "DeviceCodeCredential" + +// DeviceCodeCredentialOptions contains optional parameters for DeviceCodeCredential. +type DeviceCodeCredentialOptions struct { + azcore.ClientOptions + + // AdditionallyAllowedTenants specifies additional tenants for which the credential may acquire + // tokens. Add the wildcard value "*" to allow the credential to acquire tokens for any tenant. + AdditionallyAllowedTenants []string + // ClientID is the ID of the application users will authenticate to. + // Defaults to the ID of an Azure development application. 
+ ClientID string + // DisableInstanceDiscovery should be set true only by applications authenticating in disconnected clouds, or + // private clouds such as Azure Stack. It determines whether the credential requests Azure AD instance metadata + // from https://login.microsoft.com before authenticating. Setting this to true will skip this request, making + // the application responsible for ensuring the configured authority is valid and trustworthy. + DisableInstanceDiscovery bool + // TenantID is the Azure Active Directory tenant the credential authenticates in. Defaults to the + // "organizations" tenant, which can authenticate work and school accounts. Required for single-tenant + // applications. + TenantID string + + // UserPrompt controls how the credential presents authentication instructions. The credential calls + // this function with authentication details when it receives a device code. By default, the credential + // prints these details to stdout. + UserPrompt func(context.Context, DeviceCodeMessage) error +} + +func (o *DeviceCodeCredentialOptions) init() { + if o.TenantID == "" { + o.TenantID = organizationsTenantID + } + if o.ClientID == "" { + o.ClientID = developerSignOnClientID + } + if o.UserPrompt == nil { + o.UserPrompt = func(ctx context.Context, dc DeviceCodeMessage) error { + fmt.Println(dc.Message) + return nil + } + } +} + +// DeviceCodeMessage contains the information a user needs to complete authentication. +type DeviceCodeMessage struct { + // UserCode is the user code returned by the service. + UserCode string `json:"user_code"` + // VerificationURL is the URL at which the user must authenticate. + VerificationURL string `json:"verification_uri"` + // Message is user instruction from Azure Active Directory. + Message string `json:"message"` +} + +// DeviceCodeCredential acquires tokens for a user via the device code flow, which has the +// user browse to an Azure Active Directory URL, enter a code, and authenticate. It's useful +// for authenticating a user in an environment without a web browser, such as an SSH session. +// If a web browser is available, InteractiveBrowserCredential is more convenient because it +// automatically opens a browser to the login page. +type DeviceCodeCredential struct { + client *publicClient +} + +// NewDeviceCodeCredential creates a DeviceCodeCredential. Pass nil to accept default options. +func NewDeviceCodeCredential(options *DeviceCodeCredentialOptions) (*DeviceCodeCredential, error) { + cp := DeviceCodeCredentialOptions{} + if options != nil { + cp = *options + } + cp.init() + msalOpts := publicClientOptions{ + AdditionallyAllowedTenants: cp.AdditionallyAllowedTenants, + ClientOptions: cp.ClientOptions, + DeviceCodePrompt: cp.UserPrompt, + DisableInstanceDiscovery: cp.DisableInstanceDiscovery, + } + c, err := newPublicClient(cp.TenantID, cp.ClientID, credNameDeviceCode, msalOpts) + if err != nil { + return nil, err + } + c.name = credNameDeviceCode + return &DeviceCodeCredential{client: c}, nil +} + +// GetToken requests an access token from Azure Active Directory. It will begin the device code flow and poll until the user completes authentication. +// This method is called automatically by Azure SDK clients. 
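+//
+// An illustrative sketch of driving the flow with a custom UserPrompt (the Graph
+// scope below is an assumption for the example):
+//
+//	cred, err := NewDeviceCodeCredential(&DeviceCodeCredentialOptions{
+//		UserPrompt: func(ctx context.Context, dc DeviceCodeMessage) error {
+//			// surface dc.Message to the user however the application prefers
+//			fmt.Println(dc.Message)
+//			return nil
+//		},
+//	})
+//	if err != nil {
+//		// handle error
+//	}
+//	tk, err := cred.GetToken(context.TODO(), policy.TokenRequestOptions{
+//		Scopes: []string{"https://graph.microsoft.com/.default"},
+//	})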
+func (c *DeviceCodeCredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) { + return c.client.GetToken(ctx, opts) +} + +var _ azcore.TokenCredential = (*DeviceCodeCredential)(nil) diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/environment_credential.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/environment_credential.go new file mode 100644 index 000000000000..7ecd928e0245 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/environment_credential.go @@ -0,0 +1,164 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package azidentity + +import ( + "context" + "errors" + "fmt" + "os" + "strings" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/Azure/azure-sdk-for-go/sdk/internal/log" +) + +const envVarSendCertChain = "AZURE_CLIENT_SEND_CERTIFICATE_CHAIN" + +// EnvironmentCredentialOptions contains optional parameters for EnvironmentCredential +type EnvironmentCredentialOptions struct { + azcore.ClientOptions + + // DisableInstanceDiscovery should be set true only by applications authenticating in disconnected clouds, or + // private clouds such as Azure Stack. It determines whether the credential requests Azure AD instance metadata + // from https://login.microsoft.com before authenticating. Setting this to true will skip this request, making + // the application responsible for ensuring the configured authority is valid and trustworthy. + DisableInstanceDiscovery bool + // additionallyAllowedTenants is used only by NewDefaultAzureCredential() to enable that constructor's explicit + // option to override the value of AZURE_ADDITIONALLY_ALLOWED_TENANTS. Applications using EnvironmentCredential + // directly should set that variable instead. This field should remain unexported to preserve this credential's + // unambiguous "all configuration from environment variables" design. + additionallyAllowedTenants []string +} + +// EnvironmentCredential authenticates a service principal with a secret or certificate, or a user with a password, depending +// on environment variable configuration. It reads configuration from these variables, in the following order: +// +// # Service principal with client secret +// +// AZURE_TENANT_ID: ID of the service principal's tenant. Also called its "directory" ID. +// +// AZURE_CLIENT_ID: the service principal's client ID +// +// AZURE_CLIENT_SECRET: one of the service principal's client secrets +// +// # Service principal with certificate +// +// AZURE_TENANT_ID: ID of the service principal's tenant. Also called its "directory" ID. +// +// AZURE_CLIENT_ID: the service principal's client ID +// +// AZURE_CLIENT_CERTIFICATE_PATH: path to a PEM or PKCS12 certificate file including the private key. +// +// AZURE_CLIENT_CERTIFICATE_PASSWORD: (optional) password for the certificate file. +// +// # User with username and password +// +// AZURE_TENANT_ID: (optional) tenant to authenticate in. Defaults to "organizations". 
+// +// AZURE_CLIENT_ID: client ID of the application the user will authenticate to +// +// AZURE_USERNAME: a username (usually an email address) +// +// AZURE_PASSWORD: the user's password +// +// # Configuration for multitenant applications +// +// To enable multitenant authentication, set AZURE_ADDITIONALLY_ALLOWED_TENANTS with a semicolon delimited list of tenants +// the credential may request tokens from in addition to the tenant specified by AZURE_TENANT_ID. Set +// AZURE_ADDITIONALLY_ALLOWED_TENANTS to "*" to enable the credential to request a token from any tenant. +type EnvironmentCredential struct { + cred azcore.TokenCredential +} + +// NewEnvironmentCredential creates an EnvironmentCredential. Pass nil to accept default options. +func NewEnvironmentCredential(options *EnvironmentCredentialOptions) (*EnvironmentCredential, error) { + if options == nil { + options = &EnvironmentCredentialOptions{} + } + tenantID := os.Getenv(azureTenantID) + if tenantID == "" { + return nil, errors.New("missing environment variable AZURE_TENANT_ID") + } + clientID := os.Getenv(azureClientID) + if clientID == "" { + return nil, errors.New("missing environment variable " + azureClientID) + } + // tenants set by NewDefaultAzureCredential() override the value of AZURE_ADDITIONALLY_ALLOWED_TENANTS + additionalTenants := options.additionallyAllowedTenants + if len(additionalTenants) == 0 { + if tenants := os.Getenv(azureAdditionallyAllowedTenants); tenants != "" { + additionalTenants = strings.Split(tenants, ";") + } + } + if clientSecret := os.Getenv(azureClientSecret); clientSecret != "" { + log.Write(EventAuthentication, "EnvironmentCredential will authenticate with ClientSecretCredential") + o := &ClientSecretCredentialOptions{ + AdditionallyAllowedTenants: additionalTenants, + ClientOptions: options.ClientOptions, + DisableInstanceDiscovery: options.DisableInstanceDiscovery, + } + cred, err := NewClientSecretCredential(tenantID, clientID, clientSecret, o) + if err != nil { + return nil, err + } + return &EnvironmentCredential{cred: cred}, nil + } + if certPath := os.Getenv(azureClientCertificatePath); certPath != "" { + log.Write(EventAuthentication, "EnvironmentCredential will authenticate with ClientCertificateCredential") + certData, err := os.ReadFile(certPath) + if err != nil { + return nil, fmt.Errorf(`failed to read certificate file "%s": %v`, certPath, err) + } + var password []byte + if v := os.Getenv(azureClientCertificatePassword); v != "" { + password = []byte(v) + } + certs, key, err := ParseCertificates(certData, password) + if err != nil { + return nil, fmt.Errorf(`failed to load certificate from "%s": %v`, certPath, err) + } + o := &ClientCertificateCredentialOptions{ + AdditionallyAllowedTenants: additionalTenants, + ClientOptions: options.ClientOptions, + DisableInstanceDiscovery: options.DisableInstanceDiscovery, + } + if v, ok := os.LookupEnv(envVarSendCertChain); ok { + o.SendCertificateChain = v == "1" || strings.ToLower(v) == "true" + } + cred, err := NewClientCertificateCredential(tenantID, clientID, certs, key, o) + if err != nil { + return nil, err + } + return &EnvironmentCredential{cred: cred}, nil + } + if username := os.Getenv(azureUsername); username != "" { + if password := os.Getenv(azurePassword); password != "" { + log.Write(EventAuthentication, "EnvironmentCredential will authenticate with UsernamePasswordCredential") + o := &UsernamePasswordCredentialOptions{ + AdditionallyAllowedTenants: additionalTenants, + ClientOptions: options.ClientOptions, + 
DisableInstanceDiscovery: options.DisableInstanceDiscovery, + } + cred, err := NewUsernamePasswordCredential(tenantID, clientID, username, password, o) + if err != nil { + return nil, err + } + return &EnvironmentCredential{cred: cred}, nil + } + return nil, errors.New("no value for AZURE_PASSWORD") + } + return nil, errors.New("incomplete environment variable configuration. Only AZURE_TENANT_ID and AZURE_CLIENT_ID are set") +} + +// GetToken requests an access token from Azure Active Directory. This method is called automatically by Azure SDK clients. +func (c *EnvironmentCredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) { + return c.cred.GetToken(ctx, opts) +} + +var _ azcore.TokenCredential = (*EnvironmentCredential)(nil) diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/errors.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/errors.go new file mode 100644 index 000000000000..e1a21e0030a9 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/errors.go @@ -0,0 +1,128 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package azidentity + +import ( + "bytes" + "encoding/json" + "errors" + "fmt" + "net/http" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" + "github.com/Azure/azure-sdk-for-go/sdk/internal/errorinfo" + msal "github.com/AzureAD/microsoft-authentication-library-for-go/apps/errors" +) + +// getResponseFromError retrieves the response carried by +// an AuthenticationFailedError or MSAL CallErr, if any +func getResponseFromError(err error) *http.Response { + var a *AuthenticationFailedError + var c msal.CallErr + var res *http.Response + if errors.As(err, &c) { + res = c.Resp + } else if errors.As(err, &a) { + res = a.RawResponse + } + return res +} + +// AuthenticationFailedError indicates an authentication request has failed. +type AuthenticationFailedError struct { + // RawResponse is the HTTP response motivating the error, if available. + RawResponse *http.Response + + credType string + message string + err error +} + +func newAuthenticationFailedError(credType string, message string, resp *http.Response, err error) error { + return &AuthenticationFailedError{credType: credType, message: message, RawResponse: resp, err: err} +} + +// Error implements the error interface. Note that the message contents are not contractual and can change over time. 
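+//
+// Callers can inspect the failing response via errors.As, for example (a sketch;
+// requestToken is a hypothetical helper that returns this error):
+//
+//	_, err := requestToken()
+//	var authErr *AuthenticationFailedError
+//	if errors.As(err, &authErr) && authErr.RawResponse != nil {
+//		fmt.Println("status:", authErr.RawResponse.StatusCode)
+//	}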
+func (e *AuthenticationFailedError) Error() string { + if e.RawResponse == nil { + return e.credType + ": " + e.message + } + msg := &bytes.Buffer{} + fmt.Fprintf(msg, e.credType+" authentication failed\n") + fmt.Fprintf(msg, "%s %s://%s%s\n", e.RawResponse.Request.Method, e.RawResponse.Request.URL.Scheme, e.RawResponse.Request.URL.Host, e.RawResponse.Request.URL.Path) + fmt.Fprintln(msg, "--------------------------------------------------------------------------------") + fmt.Fprintf(msg, "RESPONSE %s\n", e.RawResponse.Status) + fmt.Fprintln(msg, "--------------------------------------------------------------------------------") + body, err := runtime.Payload(e.RawResponse) + switch { + case err != nil: + fmt.Fprintf(msg, "Error reading response body: %v", err) + case len(body) > 0: + if err := json.Indent(msg, body, "", " "); err != nil { + // failed to pretty-print so just dump it verbatim + fmt.Fprint(msg, string(body)) + } + default: + fmt.Fprint(msg, "Response contained no body") + } + fmt.Fprintln(msg, "\n--------------------------------------------------------------------------------") + var anchor string + switch e.credType { + case credNameAzureCLI: + anchor = "azure-cli" + case credNameCert: + anchor = "client-cert" + case credNameSecret: + anchor = "client-secret" + case credNameManagedIdentity: + anchor = "managed-id" + case credNameUserPassword: + anchor = "username-password" + case credNameWorkloadIdentity: + anchor = "workload" + } + if anchor != "" { + fmt.Fprintf(msg, "To troubleshoot, visit https://aka.ms/azsdk/go/identity/troubleshoot#%s", anchor) + } + return msg.String() +} + +// NonRetriable indicates the request which provoked this error shouldn't be retried. +func (*AuthenticationFailedError) NonRetriable() { + // marker method +} + +var _ errorinfo.NonRetriable = (*AuthenticationFailedError)(nil) + +// credentialUnavailableError indicates a credential can't attempt authentication because it lacks required +// data or state +type credentialUnavailableError struct { + message string +} + +// newCredentialUnavailableError is an internal helper that ensures consistent error message formatting +func newCredentialUnavailableError(credType, message string) error { + msg := fmt.Sprintf("%s: %s", credType, message) + return &credentialUnavailableError{msg} +} + +// NewCredentialUnavailableError constructs an error indicating a credential can't attempt authentication +// because it lacks required data or state. When [ChainedTokenCredential] receives this error it will try +// its next credential, if any. +func NewCredentialUnavailableError(message string) error { + return &credentialUnavailableError{message} +} + +// Error implements the error interface. Note that the message contents are not contractual and can change over time. +func (e *credentialUnavailableError) Error() string { + return e.message +} + +// NonRetriable is a marker method indicating this error should not be retried. It has no implementation. 
+func (e *credentialUnavailableError) NonRetriable() {} + +var _ errorinfo.NonRetriable = (*credentialUnavailableError)(nil) diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/interactive_browser_credential.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/interactive_browser_credential.go new file mode 100644 index 000000000000..08f3efbf3ec4 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/interactive_browser_credential.go @@ -0,0 +1,86 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package azidentity + +import ( + "context" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" +) + +const credNameBrowser = "InteractiveBrowserCredential" + +// InteractiveBrowserCredentialOptions contains optional parameters for InteractiveBrowserCredential. +type InteractiveBrowserCredentialOptions struct { + azcore.ClientOptions + + // AdditionallyAllowedTenants specifies additional tenants for which the credential may acquire + // tokens. Add the wildcard value "*" to allow the credential to acquire tokens for any tenant. + AdditionallyAllowedTenants []string + // ClientID is the ID of the application users will authenticate to. + // Defaults to the ID of an Azure development application. + ClientID string + + // DisableInstanceDiscovery should be set true only by applications authenticating in disconnected clouds, or + // private clouds such as Azure Stack. It determines whether the credential requests Azure AD instance metadata + // from https://login.microsoft.com before authenticating. Setting this to true will skip this request, making + // the application responsible for ensuring the configured authority is valid and trustworthy. + DisableInstanceDiscovery bool + + // LoginHint pre-populates the account prompt with a username. Users may choose to authenticate a different account. + LoginHint string + // RedirectURL is the URL Azure Active Directory will redirect to with the access token. This is required + // only when setting ClientID, and must match a redirect URI in the application's registration. + // Applications which have registered "http://localhost" as a redirect URI need not set this option. + RedirectURL string + + // TenantID is the Azure Active Directory tenant the credential authenticates in. Defaults to the + // "organizations" tenant, which can authenticate work and school accounts. + TenantID string +} + +func (o *InteractiveBrowserCredentialOptions) init() { + if o.TenantID == "" { + o.TenantID = organizationsTenantID + } + if o.ClientID == "" { + o.ClientID = developerSignOnClientID + } +} + +// InteractiveBrowserCredential opens a browser to interactively authenticate a user. +type InteractiveBrowserCredential struct { + client *publicClient +} + +// NewInteractiveBrowserCredential constructs a new InteractiveBrowserCredential. Pass nil to accept default options. 
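+//
+// A minimal sketch (assumes the host can open a web browser; the scope is an
+// assumption for the example):
+//
+//	cred, err := NewInteractiveBrowserCredential(nil)
+//	if err != nil {
+//		// handle error
+//	}
+//	// GetToken launches the system browser when no cached token is available
+//	tk, err := cred.GetToken(context.TODO(), policy.TokenRequestOptions{
+//		Scopes: []string{"https://management.azure.com/.default"},
+//	})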
+func NewInteractiveBrowserCredential(options *InteractiveBrowserCredentialOptions) (*InteractiveBrowserCredential, error) { + cp := InteractiveBrowserCredentialOptions{} + if options != nil { + cp = *options + } + cp.init() + msalOpts := publicClientOptions{ + ClientOptions: cp.ClientOptions, + DisableInstanceDiscovery: cp.DisableInstanceDiscovery, + LoginHint: cp.LoginHint, + RedirectURL: cp.RedirectURL, + } + c, err := newPublicClient(cp.TenantID, cp.ClientID, credNameBrowser, msalOpts) + if err != nil { + return nil, err + } + return &InteractiveBrowserCredential{client: c}, nil +} + +// GetToken requests an access token from Azure Active Directory. This method is called automatically by Azure SDK clients. +func (c *InteractiveBrowserCredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) { + return c.client.GetToken(ctx, opts) +} + +var _ azcore.TokenCredential = (*InteractiveBrowserCredential)(nil) diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/logging.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/logging.go new file mode 100644 index 000000000000..1aa1e0fc7c8e --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/logging.go @@ -0,0 +1,14 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package azidentity + +import "github.com/Azure/azure-sdk-for-go/sdk/internal/log" + +// EventAuthentication entries contain information about authentication. +// This includes information like the names of environment variables +// used when obtaining credentials and the type of credential used. +const EventAuthentication log.Event = "Authentication" diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/managed_identity_client.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/managed_identity_client.go new file mode 100644 index 000000000000..fdc3c1f67760 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/managed_identity_client.go @@ -0,0 +1,404 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. 
+
+package azidentity
+
+import (
+	"context"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"net/http"
+	"net/url"
+	"os"
+	"strconv"
+	"strings"
+	"time"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming"
+	"github.com/Azure/azure-sdk-for-go/sdk/internal/log"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/confidential"
+)
+
+const (
+	arcIMDSEndpoint          = "IMDS_ENDPOINT"
+	identityEndpoint         = "IDENTITY_ENDPOINT"
+	identityHeader           = "IDENTITY_HEADER"
+	identityServerThumbprint = "IDENTITY_SERVER_THUMBPRINT"
+	headerMetadata           = "Metadata"
+	imdsEndpoint             = "http://169.254.169.254/metadata/identity/oauth2/token"
+	msiEndpoint              = "MSI_ENDPOINT"
+	imdsAPIVersion           = "2018-02-01"
+	azureArcAPIVersion       = "2019-08-15"
+	serviceFabricAPIVersion  = "2019-07-01-preview"
+
+	qpClientID = "client_id"
+	qpResID    = "mi_res_id"
+)
+
+type msiType int
+
+const (
+	msiTypeAppService msiType = iota
+	msiTypeAzureArc
+	msiTypeCloudShell
+	msiTypeIMDS
+	msiTypeServiceFabric
+)
+
+// managedIdentityClient provides the base for authenticating in managed identity environments.
+// This type includes a runtime.Pipeline and TokenCredentialOptions.
+type managedIdentityClient struct {
+	pipeline runtime.Pipeline
+	msiType  msiType
+	endpoint string
+	id       ManagedIDKind
+}
+
+type wrappedNumber json.Number
+
+func (n *wrappedNumber) UnmarshalJSON(b []byte) error {
+	c := string(b)
+	if c == "\"\"" {
+		return nil
+	}
+	return json.Unmarshal(b, (*json.Number)(n))
+}
+
+// setIMDSRetryOptionDefaults sets zero-valued fields to default values appropriate for IMDS
+func setIMDSRetryOptionDefaults(o *policy.RetryOptions) {
+	if o.MaxRetries == 0 {
+		o.MaxRetries = 5
+	}
+	if o.MaxRetryDelay == 0 {
+		o.MaxRetryDelay = 1 * time.Minute
+	}
+	if o.RetryDelay == 0 {
+		o.RetryDelay = 2 * time.Second
+	}
+	if o.StatusCodes == nil {
+		o.StatusCodes = []int{
+			// IMDS docs recommend retrying 404, 410, 429 and 5xx
+			// https://learn.microsoft.com/azure/active-directory/managed-identities-azure-resources/how-to-use-vm-token#error-handling
+			http.StatusNotFound,                      // 404
+			http.StatusGone,                          // 410
+			http.StatusTooManyRequests,               // 429
+			http.StatusInternalServerError,           // 500
+			http.StatusNotImplemented,                // 501
+			http.StatusBadGateway,                    // 502
+			http.StatusServiceUnavailable,            // 503
+			http.StatusGatewayTimeout,                // 504
+			http.StatusHTTPVersionNotSupported,       // 505
+			http.StatusVariantAlsoNegotiates,         // 506
+			http.StatusInsufficientStorage,           // 507
+			http.StatusLoopDetected,                  // 508
+			http.StatusNotExtended,                   // 510
+			http.StatusNetworkAuthenticationRequired, // 511
+		}
+	}
+	if o.TryTimeout == 0 {
+		o.TryTimeout = 1 * time.Minute
+	}
+}
+
+// newManagedIdentityClient creates a new instance of the ManagedIdentityClient with the ManagedIdentityCredentialOptions
+// that are passed into it along with a default pipeline.
+// options: ManagedIdentityCredentialOptions configure policies for the pipeline and the authority host that +// will be used to retrieve tokens and authenticate +func newManagedIdentityClient(options *ManagedIdentityCredentialOptions) (*managedIdentityClient, error) { + if options == nil { + options = &ManagedIdentityCredentialOptions{} + } + cp := options.ClientOptions + c := managedIdentityClient{id: options.ID, endpoint: imdsEndpoint, msiType: msiTypeIMDS} + env := "IMDS" + if endpoint, ok := os.LookupEnv(identityEndpoint); ok { + if _, ok := os.LookupEnv(identityHeader); ok { + if _, ok := os.LookupEnv(identityServerThumbprint); ok { + env = "Service Fabric" + c.endpoint = endpoint + c.msiType = msiTypeServiceFabric + } else { + env = "App Service" + c.endpoint = endpoint + c.msiType = msiTypeAppService + } + } else if _, ok := os.LookupEnv(arcIMDSEndpoint); ok { + env = "Azure Arc" + c.endpoint = endpoint + c.msiType = msiTypeAzureArc + } + } else if endpoint, ok := os.LookupEnv(msiEndpoint); ok { + env = "Cloud Shell" + c.endpoint = endpoint + c.msiType = msiTypeCloudShell + } else { + setIMDSRetryOptionDefaults(&cp.Retry) + } + c.pipeline = runtime.NewPipeline(component, version, runtime.PipelineOptions{}, &cp) + + if log.Should(EventAuthentication) { + log.Writef(EventAuthentication, "Managed Identity Credential will use %s managed identity", env) + } + + return &c, nil +} + +// provideToken acquires a token for MSAL's confidential.Client, which caches the token +func (c *managedIdentityClient) provideToken(ctx context.Context, params confidential.TokenProviderParameters) (confidential.TokenProviderResult, error) { + result := confidential.TokenProviderResult{} + tk, err := c.authenticate(ctx, c.id, params.Scopes) + if err == nil { + result.AccessToken = tk.Token + result.ExpiresInSeconds = int(time.Until(tk.ExpiresOn).Seconds()) + } + return result, err +} + +// authenticate acquires an access token +func (c *managedIdentityClient) authenticate(ctx context.Context, id ManagedIDKind, scopes []string) (azcore.AccessToken, error) { + msg, err := c.createAuthRequest(ctx, id, scopes) + if err != nil { + return azcore.AccessToken{}, err + } + + resp, err := c.pipeline.Do(msg) + if err != nil { + return azcore.AccessToken{}, newAuthenticationFailedError(credNameManagedIdentity, err.Error(), nil, err) + } + + if runtime.HasStatusCode(resp, http.StatusOK, http.StatusCreated) { + return c.createAccessToken(resp) + } + + if c.msiType == msiTypeIMDS { + switch resp.StatusCode { + case http.StatusBadRequest: + if id != nil { + return azcore.AccessToken{}, newAuthenticationFailedError(credNameManagedIdentity, "the requested identity isn't assigned to this resource", resp, nil) + } + msg := "failed to authenticate a system assigned identity" + if body, err := runtime.Payload(resp); err == nil && len(body) > 0 { + msg += fmt.Sprintf(". The endpoint responded with %s", body) + } + return azcore.AccessToken{}, newCredentialUnavailableError(credNameManagedIdentity, msg) + case http.StatusForbidden: + // Docker Desktop runs a proxy that responds 403 to IMDS token requests. 
+			// If we get that response, we return credentialUnavailableError so credential
+			// chains continue to their next credential
+			body, err := runtime.Payload(resp)
+			if err == nil && strings.Contains(string(body), "A socket operation was attempted to an unreachable network") {
+				return azcore.AccessToken{}, newCredentialUnavailableError(credNameManagedIdentity, fmt.Sprintf("unexpected response %q", string(body)))
+			}
+		}
+	}
+
+	return azcore.AccessToken{}, newAuthenticationFailedError(credNameManagedIdentity, "authentication failed", resp, nil)
+}
+
+func (c *managedIdentityClient) createAccessToken(res *http.Response) (azcore.AccessToken, error) {
+	value := struct {
+		// these are the only fields that we use
+		Token        string        `json:"access_token,omitempty"`
+		RefreshToken string        `json:"refresh_token,omitempty"`
+		ExpiresIn    wrappedNumber `json:"expires_in,omitempty"` // this field should always return the number of seconds for which a token is valid
+		ExpiresOn    interface{}   `json:"expires_on,omitempty"` // the value returned in this field varies between a number and a date string
+	}{}
+	if err := runtime.UnmarshalAsJSON(res, &value); err != nil {
+		return azcore.AccessToken{}, fmt.Errorf("internal AccessToken: %v", err)
+	}
+	if value.ExpiresIn != "" {
+		expiresIn, err := json.Number(value.ExpiresIn).Int64()
+		if err != nil {
+			return azcore.AccessToken{}, err
+		}
+		return azcore.AccessToken{Token: value.Token, ExpiresOn: time.Now().Add(time.Second * time.Duration(expiresIn)).UTC()}, nil
+	}
+	switch v := value.ExpiresOn.(type) {
+	case float64:
+		return azcore.AccessToken{Token: value.Token, ExpiresOn: time.Unix(int64(v), 0).UTC()}, nil
+	case string:
+		if expiresOn, err := strconv.Atoi(v); err == nil {
+			return azcore.AccessToken{Token: value.Token, ExpiresOn: time.Unix(int64(expiresOn), 0).UTC()}, nil
+		}
+		return azcore.AccessToken{}, newAuthenticationFailedError(credNameManagedIdentity, "unexpected expires_on value: "+v, res, nil)
+	default:
+		msg := fmt.Sprintf("unsupported type received in expires_on: %T, %v", v, v)
+		return azcore.AccessToken{}, newAuthenticationFailedError(credNameManagedIdentity, msg, res, nil)
+	}
+}
+
+func (c *managedIdentityClient) createAuthRequest(ctx context.Context, id ManagedIDKind, scopes []string) (*policy.Request, error) {
+	switch c.msiType {
+	case msiTypeIMDS:
+		return c.createIMDSAuthRequest(ctx, id, scopes)
+	case msiTypeAppService:
+		return c.createAppServiceAuthRequest(ctx, id, scopes)
+	case msiTypeAzureArc:
+		// need to perform a preliminary request to retrieve the secret key challenge provided by the HIMDS service
+		key, err := c.getAzureArcSecretKey(ctx, scopes)
+		if err != nil {
+			msg := fmt.Sprintf("failed to retrieve secret key from the identity endpoint: %v", err)
+			return nil, newAuthenticationFailedError(credNameManagedIdentity, msg, nil, err)
+		}
+		return c.createAzureArcAuthRequest(ctx, id, scopes, key)
+	case msiTypeServiceFabric:
+		return c.createServiceFabricAuthRequest(ctx, id, scopes)
+	case msiTypeCloudShell:
+		return c.createCloudShellAuthRequest(ctx, id, scopes)
+	default:
+		return nil, newCredentialUnavailableError(credNameManagedIdentity, "managed identity isn't supported in this environment")
+	}
+}
+
+func (c *managedIdentityClient) createIMDSAuthRequest(ctx context.Context, id ManagedIDKind, scopes []string) (*policy.Request, error) {
+	request, err := runtime.NewRequest(ctx, http.MethodGet, c.endpoint)
+	if err != nil {
+		return nil, err
+	}
+	request.Raw().Header.Set(headerMetadata, "true")
+	q := request.Raw().URL.Query()
+	q.Add("api-version", imdsAPIVersion)
+	q.Add("resource", strings.Join(scopes, " "))
+	if id != nil {
+		if id.idKind() == miResourceID {
+			q.Add(qpResID, id.String())
+		} else {
+			q.Add(qpClientID, id.String())
+		}
+	}
+	request.Raw().URL.RawQuery = q.Encode()
+	return request, nil
+}
+
+func (c *managedIdentityClient) createAppServiceAuthRequest(ctx context.Context, id ManagedIDKind, scopes []string) (*policy.Request, error) {
+	request, err := runtime.NewRequest(ctx, http.MethodGet, c.endpoint)
+	if err != nil {
+		return nil, err
+	}
+	request.Raw().Header.Set("X-IDENTITY-HEADER", os.Getenv(identityHeader))
+	q := request.Raw().URL.Query()
+	q.Add("api-version", "2019-08-01")
+	q.Add("resource", scopes[0])
+	if id != nil {
+		if id.idKind() == miResourceID {
+			q.Add(qpResID, id.String())
+		} else {
+			q.Add(qpClientID, id.String())
+		}
+	}
+	request.Raw().URL.RawQuery = q.Encode()
+	return request, nil
+}
+
+func (c *managedIdentityClient) createServiceFabricAuthRequest(ctx context.Context, id ManagedIDKind, scopes []string) (*policy.Request, error) {
+	request, err := runtime.NewRequest(ctx, http.MethodGet, c.endpoint)
+	if err != nil {
+		return nil, err
+	}
+	q := request.Raw().URL.Query()
+	request.Raw().Header.Set("Accept", "application/json")
+	request.Raw().Header.Set("Secret", os.Getenv(identityHeader))
+	q.Add("api-version", serviceFabricAPIVersion)
+	q.Add("resource", strings.Join(scopes, " "))
+	if id != nil {
+		log.Write(EventAuthentication, "WARNING: Service Fabric doesn't support selecting a user-assigned identity at runtime")
+		if id.idKind() == miResourceID {
+			q.Add(qpResID, id.String())
+		} else {
+			q.Add(qpClientID, id.String())
+		}
+	}
+	request.Raw().URL.RawQuery = q.Encode()
+	return request, nil
+}
+
+func (c *managedIdentityClient) getAzureArcSecretKey(ctx context.Context, resources []string) (string, error) {
+	// create the request to retrieve the secret key challenge provided by the HIMDS service
+	request, err := runtime.NewRequest(ctx, http.MethodGet, c.endpoint)
+	if err != nil {
+		return "", err
+	}
+	request.Raw().Header.Set(headerMetadata, "true")
+	q := request.Raw().URL.Query()
+	q.Add("api-version", azureArcAPIVersion)
+	q.Add("resource", strings.Join(resources, " "))
+	request.Raw().URL.RawQuery = q.Encode()
+	// send the initial request to get the short-lived secret key
+	response, err := c.pipeline.Do(request)
+	if err != nil {
+		return "", err
+	}
+	// the endpoint is expected to return a 401 with the WWW-Authenticate header set to the location
+	// of the secret key file. Any other status code indicates an error in the request.
+ if response.StatusCode != 401 { + msg := fmt.Sprintf("expected a 401 response, received %d", response.StatusCode) + return "", newAuthenticationFailedError(credNameManagedIdentity, msg, response, nil) + } + header := response.Header.Get("WWW-Authenticate") + if len(header) == 0 { + return "", errors.New("did not receive a value from WWW-Authenticate header") + } + // the WWW-Authenticate header is expected in the following format: Basic realm=/some/file/path.key + pos := strings.LastIndex(header, "=") + if pos == -1 { + return "", fmt.Errorf("did not receive a correct value from WWW-Authenticate header: %s", header) + } + key, err := os.ReadFile(header[pos+1:]) + if err != nil { + return "", fmt.Errorf("could not read file (%s) contents: %v", header[pos+1:], err) + } + return string(key), nil +} + +func (c *managedIdentityClient) createAzureArcAuthRequest(ctx context.Context, id ManagedIDKind, resources []string, key string) (*policy.Request, error) { + request, err := runtime.NewRequest(ctx, http.MethodGet, c.endpoint) + if err != nil { + return nil, err + } + request.Raw().Header.Set(headerMetadata, "true") + request.Raw().Header.Set("Authorization", fmt.Sprintf("Basic %s", key)) + q := request.Raw().URL.Query() + q.Add("api-version", azureArcAPIVersion) + q.Add("resource", strings.Join(resources, " ")) + if id != nil { + log.Write(EventAuthentication, "WARNING: Azure Arc doesn't support user-assigned managed identities") + if id.idKind() == miResourceID { + q.Add(qpResID, id.String()) + } else { + q.Add(qpClientID, id.String()) + } + } + request.Raw().URL.RawQuery = q.Encode() + return request, nil +} + +func (c *managedIdentityClient) createCloudShellAuthRequest(ctx context.Context, id ManagedIDKind, scopes []string) (*policy.Request, error) { + request, err := runtime.NewRequest(ctx, http.MethodPost, c.endpoint) + if err != nil { + return nil, err + } + request.Raw().Header.Set(headerMetadata, "true") + data := url.Values{} + data.Set("resource", strings.Join(scopes, " ")) + dataEncoded := data.Encode() + body := streaming.NopCloser(strings.NewReader(dataEncoded)) + if err := request.SetBody(body, "application/x-www-form-urlencoded"); err != nil { + return nil, err + } + if id != nil { + log.Write(EventAuthentication, "WARNING: Cloud Shell doesn't support user-assigned managed identities") + q := request.Raw().URL.Query() + if id.idKind() == miResourceID { + q.Add(qpResID, id.String()) + } else { + q.Add(qpClientID, id.String()) + } + } + return request, nil +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/managed_identity_credential.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/managed_identity_credential.go new file mode 100644 index 000000000000..35c5e6725cda --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/managed_identity_credential.go @@ -0,0 +1,113 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. 
+ +package azidentity + +import ( + "context" + "fmt" + "strings" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/confidential" +) + +const credNameManagedIdentity = "ManagedIdentityCredential" + +type managedIdentityIDKind int + +const ( + miClientID managedIdentityIDKind = 0 + miResourceID managedIdentityIDKind = 1 +) + +// ManagedIDKind identifies the ID of a managed identity as either a client or resource ID +type ManagedIDKind interface { + fmt.Stringer + idKind() managedIdentityIDKind +} + +// ClientID is the client ID of a user-assigned managed identity. +type ClientID string + +func (ClientID) idKind() managedIdentityIDKind { + return miClientID +} + +// String returns the string value of the ID. +func (c ClientID) String() string { + return string(c) +} + +// ResourceID is the resource ID of a user-assigned managed identity. +type ResourceID string + +func (ResourceID) idKind() managedIdentityIDKind { + return miResourceID +} + +// String returns the string value of the ID. +func (r ResourceID) String() string { + return string(r) +} + +// ManagedIdentityCredentialOptions contains optional parameters for ManagedIdentityCredential. +type ManagedIdentityCredentialOptions struct { + azcore.ClientOptions + + // ID is the ID of a managed identity the credential should authenticate. Set this field to use a specific identity + // instead of the hosting environment's default. The value may be the identity's client ID or resource ID, but note that + // some platforms don't accept resource IDs. + ID ManagedIDKind +} + +// ManagedIdentityCredential authenticates an Azure managed identity in any hosting environment supporting managed identities. +// This credential authenticates a system-assigned identity by default. Use ManagedIdentityCredentialOptions.ID to specify a +// user-assigned identity. See Azure Active Directory documentation for more information about managed identities: +// https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview +type ManagedIdentityCredential struct { + client *confidentialClient + mic *managedIdentityClient +} + +// NewManagedIdentityCredential creates a ManagedIdentityCredential. Pass nil to accept default options. +func NewManagedIdentityCredential(options *ManagedIdentityCredentialOptions) (*ManagedIdentityCredential, error) { + if options == nil { + options = &ManagedIdentityCredentialOptions{} + } + mic, err := newManagedIdentityClient(options) + if err != nil { + return nil, err + } + cred := confidential.NewCredFromTokenProvider(mic.provideToken) + + // It's okay to give MSAL an invalid client ID because MSAL will use it only as part of a cache key. + // ManagedIdentityClient handles all the details of authentication and won't receive this value from MSAL. + clientID := "SYSTEM-ASSIGNED-MANAGED-IDENTITY" + if options.ID != nil { + clientID = options.ID.String() + } + // similarly, it's okay to give MSAL an incorrect tenant because MSAL won't use the value + c, err := newConfidentialClient("common", clientID, credNameManagedIdentity, cred, confidentialClientOptions{}) + if err != nil { + return nil, err + } + return &ManagedIdentityCredential{client: c, mic: mic}, nil +} + +// GetToken requests an access token from the hosting environment. This method is called automatically by Azure SDK clients. 
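+//
+// A sketch of requesting a token for a user-assigned identity (the client ID and
+// storage scope are placeholders for the example):
+//
+//	cred, err := NewManagedIdentityCredential(&ManagedIdentityCredentialOptions{
+//		ID: ClientID("00000000-0000-0000-0000-000000000000"),
+//	})
+//	if err != nil {
+//		// handle error
+//	}
+//	// GetToken requires exactly one scope
+//	tk, err := cred.GetToken(context.TODO(), policy.TokenRequestOptions{
+//		Scopes: []string{"https://storage.azure.com/.default"},
+//	})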
+func (c *ManagedIdentityCredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) { + if len(opts.Scopes) != 1 { + err := fmt.Errorf("%s.GetToken() requires exactly one scope", credNameManagedIdentity) + return azcore.AccessToken{}, err + } + // managed identity endpoints require an AADv1 resource (i.e. token audience), not a v2 scope, so we remove "/.default" here + opts.Scopes = []string{strings.TrimSuffix(opts.Scopes[0], defaultSuffix)} + return c.client.GetToken(ctx, opts) +} + +var _ azcore.TokenCredential = (*ManagedIdentityCredential)(nil) diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/on_behalf_of_credential.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/on_behalf_of_credential.go new file mode 100644 index 000000000000..2b360b681df1 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/on_behalf_of_credential.go @@ -0,0 +1,92 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package azidentity + +import ( + "context" + "crypto" + "crypto/x509" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/confidential" +) + +const credNameOBO = "OnBehalfOfCredential" + +// OnBehalfOfCredential authenticates a service principal via the on-behalf-of flow. This is typically used by +// middle-tier services that authorize requests to other services with a delegated user identity. Because this +// is not an interactive authentication flow, an application using it must have admin consent for any delegated +// permissions before requesting tokens for them. See [Azure Active Directory documentation] for more details. +// +// [Azure Active Directory documentation]: https://docs.microsoft.com/azure/active-directory/develop/v2-oauth2-on-behalf-of-flow +type OnBehalfOfCredential struct { + client *confidentialClient +} + +// OnBehalfOfCredentialOptions contains optional parameters for OnBehalfOfCredential +type OnBehalfOfCredentialOptions struct { + azcore.ClientOptions + + // AdditionallyAllowedTenants specifies additional tenants for which the credential may acquire tokens. + // Add the wildcard value "*" to allow the credential to acquire tokens for any tenant in which the + // application is registered. + AdditionallyAllowedTenants []string + // DisableInstanceDiscovery should be set true only by applications authenticating in disconnected clouds, or + // private clouds such as Azure Stack. It determines whether the credential requests Azure AD instance metadata + // from https://login.microsoft.com before authenticating. Setting this to true will skip this request, making + // the application responsible for ensuring the configured authority is valid and trustworthy. + DisableInstanceDiscovery bool + // SendCertificateChain applies only when the credential is configured to authenticate with a certificate. + // This setting controls whether the credential sends the public certificate chain in the x5c header of each + // token request's JWT. This is required for, and only used in, Subject Name/Issuer (SNI) authentication. + SendCertificateChain bool +} + +// NewOnBehalfOfCredentialWithCertificate constructs an OnBehalfOfCredential that authenticates with a certificate. +// See [ParseCertificates] for help loading a certificate. 
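+//
+// A sketch of the certificate flow (the tenant ID, client ID, file path, and the
+// userAssertion variable are placeholders; ParseCertificates is this package's helper):
+//
+//	data, err := os.ReadFile("cert.pem")
+//	if err != nil {
+//		// handle error
+//	}
+//	certs, key, err := ParseCertificates(data, nil)
+//	if err != nil {
+//		// handle error
+//	}
+//	cred, err := NewOnBehalfOfCredentialWithCertificate("tenant-id", "client-id", userAssertion, certs, key, nil)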
+func NewOnBehalfOfCredentialWithCertificate(tenantID, clientID, userAssertion string, certs []*x509.Certificate, key crypto.PrivateKey, options *OnBehalfOfCredentialOptions) (*OnBehalfOfCredential, error) { + cred, err := confidential.NewCredFromCert(certs, key) + if err != nil { + return nil, err + } + return newOnBehalfOfCredential(tenantID, clientID, userAssertion, cred, options) +} + +// NewOnBehalfOfCredentialWithSecret constructs an OnBehalfOfCredential that authenticates with a client secret. +func NewOnBehalfOfCredentialWithSecret(tenantID, clientID, userAssertion, clientSecret string, options *OnBehalfOfCredentialOptions) (*OnBehalfOfCredential, error) { + cred, err := confidential.NewCredFromSecret(clientSecret) + if err != nil { + return nil, err + } + return newOnBehalfOfCredential(tenantID, clientID, userAssertion, cred, options) +} + +func newOnBehalfOfCredential(tenantID, clientID, userAssertion string, cred confidential.Credential, options *OnBehalfOfCredentialOptions) (*OnBehalfOfCredential, error) { + if options == nil { + options = &OnBehalfOfCredentialOptions{} + } + opts := confidentialClientOptions{ + AdditionallyAllowedTenants: options.AdditionallyAllowedTenants, + Assertion: userAssertion, + ClientOptions: options.ClientOptions, + DisableInstanceDiscovery: options.DisableInstanceDiscovery, + SendX5C: options.SendCertificateChain, + } + c, err := newConfidentialClient(tenantID, clientID, credNameOBO, cred, opts) + if err != nil { + return nil, err + } + return &OnBehalfOfCredential{c}, nil +} + +// GetToken requests an access token from Azure Active Directory. This method is called automatically by Azure SDK clients. +func (o *OnBehalfOfCredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) { + return o.client.GetToken(ctx, opts) +} + +var _ azcore.TokenCredential = (*OnBehalfOfCredential)(nil) diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/public_client.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/public_client.go new file mode 100644 index 000000000000..6512d3e25fd8 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/public_client.go @@ -0,0 +1,178 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. 
+ +package azidentity + +import ( + "context" + "fmt" + "strings" + "sync" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" + "github.com/Azure/azure-sdk-for-go/sdk/internal/log" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/public" +) + +type publicClientOptions struct { + azcore.ClientOptions + + AdditionallyAllowedTenants []string + DeviceCodePrompt func(context.Context, DeviceCodeMessage) error + DisableInstanceDiscovery bool + LoginHint, RedirectURL string + Username, Password string +} + +// publicClient wraps the MSAL public client +type publicClient struct { + account public.Account + cae, noCAE msalPublicClient + caeMu, noCAEMu, clientMu *sync.Mutex + clientID, tenantID string + host string + name string + opts publicClientOptions +} + +func newPublicClient(tenantID, clientID, name string, o publicClientOptions) (*publicClient, error) { + if !validTenantID(tenantID) { + return nil, errInvalidTenantID + } + host, err := setAuthorityHost(o.Cloud) + if err != nil { + return nil, err + } + o.AdditionallyAllowedTenants = resolveAdditionalTenants(o.AdditionallyAllowedTenants) + return &publicClient{ + caeMu: &sync.Mutex{}, + clientID: clientID, + clientMu: &sync.Mutex{}, + host: host, + name: name, + noCAEMu: &sync.Mutex{}, + opts: o, + tenantID: tenantID, + }, nil +} + +// GetToken requests an access token from MSAL, checking the cache first. +func (p *publicClient) GetToken(ctx context.Context, tro policy.TokenRequestOptions) (azcore.AccessToken, error) { + if len(tro.Scopes) < 1 { + return azcore.AccessToken{}, fmt.Errorf("%s.GetToken() requires at least one scope", p.name) + } + tenant, err := p.resolveTenant(tro.TenantID) + if err != nil { + return azcore.AccessToken{}, err + } + client, mu, err := p.client(tro) + if err != nil { + return azcore.AccessToken{}, err + } + mu.Lock() + defer mu.Unlock() + ar, err := client.AcquireTokenSilent(ctx, tro.Scopes, public.WithSilentAccount(p.account), public.WithClaims(tro.Claims), public.WithTenantID(tenant)) + if err == nil { + return p.token(ar, err) + } + at, err := p.reqToken(ctx, client, tro) + if err == nil { + msg := fmt.Sprintf("%s.GetToken() acquired a token for scope %q", p.name, strings.Join(ar.GrantedScopes, ", ")) + log.Write(EventAuthentication, msg) + } + return at, err +} + +// reqToken requests a token from the MSAL public client. It's separate from GetToken() to enable Authenticate() to bypass the cache. 
+func (p *publicClient) reqToken(ctx context.Context, c msalPublicClient, tro policy.TokenRequestOptions) (azcore.AccessToken, error) { + tenant, err := p.resolveTenant(tro.TenantID) + if err != nil { + return azcore.AccessToken{}, err + } + var ar public.AuthResult + switch p.name { + case credNameBrowser: + ar, err = c.AcquireTokenInteractive(ctx, tro.Scopes, + public.WithClaims(tro.Claims), + public.WithLoginHint(p.opts.LoginHint), + public.WithRedirectURI(p.opts.RedirectURL), + public.WithTenantID(tenant), + ) + case credNameDeviceCode: + dc, e := c.AcquireTokenByDeviceCode(ctx, tro.Scopes, public.WithClaims(tro.Claims), public.WithTenantID(tenant)) + if e != nil { + return azcore.AccessToken{}, e + } + err = p.opts.DeviceCodePrompt(ctx, DeviceCodeMessage{ + Message: dc.Result.Message, + UserCode: dc.Result.UserCode, + VerificationURL: dc.Result.VerificationURL, + }) + if err == nil { + ar, err = dc.AuthenticationResult(ctx) + } + case credNameUserPassword: + ar, err = c.AcquireTokenByUsernamePassword(ctx, tro.Scopes, p.opts.Username, p.opts.Password, public.WithClaims(tro.Claims), public.WithTenantID(tenant)) + default: + return azcore.AccessToken{}, fmt.Errorf("unknown credential %q", p.name) + } + return p.token(ar, err) +} + +func (p *publicClient) client(tro policy.TokenRequestOptions) (msalPublicClient, *sync.Mutex, error) { + p.clientMu.Lock() + defer p.clientMu.Unlock() + if tro.EnableCAE { + if p.cae == nil { + client, err := p.newMSALClient(true) + if err != nil { + return nil, nil, err + } + p.cae = client + } + return p.cae, p.caeMu, nil + } + if p.noCAE == nil { + client, err := p.newMSALClient(false) + if err != nil { + return nil, nil, err + } + p.noCAE = client + } + return p.noCAE, p.noCAEMu, nil +} + +func (p *publicClient) newMSALClient(enableCAE bool) (msalPublicClient, error) { + o := []public.Option{ + public.WithAuthority(runtime.JoinPaths(p.host, p.tenantID)), + public.WithHTTPClient(newPipelineAdapter(&p.opts.ClientOptions)), + } + if enableCAE { + o = append(o, public.WithClientCapabilities(cp1)) + } + if p.opts.DisableInstanceDiscovery || strings.ToLower(p.tenantID) == "adfs" { + o = append(o, public.WithInstanceDiscovery(false)) + } + return public.New(p.clientID, o...) +} + +func (p *publicClient) token(ar public.AuthResult, err error) (azcore.AccessToken, error) { + if err == nil { + p.account = ar.Account + } else { + res := getResponseFromError(err) + err = newAuthenticationFailedError(p.name, err.Error(), res, err) + } + return azcore.AccessToken{Token: ar.AccessToken, ExpiresOn: ar.ExpiresOn.UTC()}, err +} + +// resolveTenant returns the correct tenant for a token request given the client's +// configuration, or an error when that configuration doesn't allow the specified tenant +func (p *publicClient) resolveTenant(specified string) (string, error) { + return resolveTenant(p.tenantID, specified, p.name, p.opts.AdditionallyAllowedTenants) +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/test-resources-pre.ps1 b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/test-resources-pre.ps1 new file mode 100644 index 000000000000..fe0183addeb5 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/test-resources-pre.ps1 @@ -0,0 +1,36 @@ +[CmdletBinding(SupportsShouldProcess = $true, ConfirmImpact = 'Medium')] +param ( + # Captures any arguments from eng/New-TestResources.ps1 not declared here (no parameter errors). 
+ [Parameter(ValueFromRemainingArguments = $true)] + $RemainingArguments +) + +if (!$CI) { + # TODO: Remove this once auto-cloud config downloads are supported locally + Write-Host "Skipping cert setup in local testing mode" + return +} + +if ($EnvironmentVariables -eq $null -or $EnvironmentVariables.Count -eq 0) { + throw "EnvironmentVariables must be set in the calling script New-TestResources.ps1" +} + +$tmp = $env:TEMP ? $env:TEMP : [System.IO.Path]::GetTempPath() +$pfxPath = Join-Path $tmp "test.pfx" +$pemPath = Join-Path $tmp "test.pem" +$sniPath = Join-Path $tmp "testsni.pfx" + +Write-Host "Creating identity test files: $pfxPath $pemPath $sniPath" + +[System.Convert]::FromBase64String($EnvironmentVariables['PFX_CONTENTS']) | Set-Content -Path $pfxPath -AsByteStream +Set-Content -Path $pemPath -Value $EnvironmentVariables['PEM_CONTENTS'] +[System.Convert]::FromBase64String($EnvironmentVariables['SNI_CONTENTS']) | Set-Content -Path $sniPath -AsByteStream + +# Set for pipeline +Write-Host "##vso[task.setvariable variable=IDENTITY_SP_CERT_PFX;]$pfxPath" +Write-Host "##vso[task.setvariable variable=IDENTITY_SP_CERT_PEM;]$pemPath" +Write-Host "##vso[task.setvariable variable=IDENTITY_SP_CERT_SNI;]$sniPath" +# Set for local +$env:IDENTITY_SP_CERT_PFX = $pfxPath +$env:IDENTITY_SP_CERT_PEM = $pemPath +$env:IDENTITY_SP_CERT_SNI = $sniPath diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/test-resources.bicep b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/test-resources.bicep new file mode 100644 index 000000000000..b3490d3b50af --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/test-resources.bicep @@ -0,0 +1 @@ +param baseName string diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/username_password_credential.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/username_password_credential.go new file mode 100644 index 000000000000..f787ec0ce18f --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/username_password_credential.go @@ -0,0 +1,66 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package azidentity + +import ( + "context" + + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" +) + +const credNameUserPassword = "UsernamePasswordCredential" + +// UsernamePasswordCredentialOptions contains optional parameters for UsernamePasswordCredential. +type UsernamePasswordCredentialOptions struct { + azcore.ClientOptions + + // AdditionallyAllowedTenants specifies additional tenants for which the credential may acquire tokens. + // Add the wildcard value "*" to allow the credential to acquire tokens for any tenant in which the + // application is registered. + AdditionallyAllowedTenants []string + // DisableInstanceDiscovery should be set true only by applications authenticating in disconnected clouds, or + // private clouds such as Azure Stack. It determines whether the credential requests Azure AD instance metadata + // from https://login.microsoft.com before authenticating. Setting this to true will skip this request, making + // the application responsible for ensuring the configured authority is valid and trustworthy. + DisableInstanceDiscovery bool +} + +// UsernamePasswordCredential authenticates a user with a password. 
Microsoft doesn't recommend this kind of authentication,
+// because it's less secure than other authentication flows. This credential is not interactive, so it isn't compatible
+// with any form of multi-factor authentication, and the application must already have user or admin consent.
+// This credential can only authenticate work and school accounts; it can't authenticate Microsoft accounts.
+type UsernamePasswordCredential struct {
+    client *publicClient
+}
+
+// NewUsernamePasswordCredential creates a UsernamePasswordCredential. clientID is the ID of the application the user
+// will authenticate to. Pass nil for options to accept defaults.
+func NewUsernamePasswordCredential(tenantID string, clientID string, username string, password string, options *UsernamePasswordCredentialOptions) (*UsernamePasswordCredential, error) {
+    if options == nil {
+        options = &UsernamePasswordCredentialOptions{}
+    }
+    opts := publicClientOptions{
+        AdditionallyAllowedTenants: options.AdditionallyAllowedTenants,
+        ClientOptions:              options.ClientOptions,
+        DisableInstanceDiscovery:   options.DisableInstanceDiscovery,
+        Password:                   password,
+        Username:                   username,
+    }
+    c, err := newPublicClient(tenantID, clientID, credNameUserPassword, opts)
+    if err != nil {
+        return nil, err
+    }
+    return &UsernamePasswordCredential{client: c}, err
+}
+
+// GetToken requests an access token from Azure Active Directory. This method is called automatically by Azure SDK clients.
+func (c *UsernamePasswordCredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) {
+    return c.client.GetToken(ctx, opts)
+}
+
+var _ azcore.TokenCredential = (*UsernamePasswordCredential)(nil)
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/version.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/version.go
new file mode 100644
index 000000000000..65e74e31e3b1
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/version.go
@@ -0,0 +1,15 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+const (
+    // component is the name of this module, included in the user agent string when making requests.
+    component = "azidentity"
+
+    // version is the semantic version (see http://semver.org) of this module.
+    version = "v1.4.0"
+)
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/workload_identity.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/workload_identity.go
new file mode 100644
index 000000000000..7e016324d229
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/azidentity/workload_identity.go
@@ -0,0 +1,126 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package azidentity
+
+import (
+    "context"
+    "errors"
+    "os"
+    "sync"
+    "time"
+
+    "github.com/Azure/azure-sdk-for-go/sdk/azcore"
+    "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+)
+
+const credNameWorkloadIdentity = "WorkloadIdentityCredential"
+
+// WorkloadIdentityCredential supports Azure workload identity on Kubernetes.
+// See [Azure Kubernetes Service documentation] for more information.
+//
+// [Azure Kubernetes Service documentation]: https://learn.microsoft.com/azure/aks/workload-identity-overview
+type WorkloadIdentityCredential struct {
+    assertion, file string
+    cred            *ClientAssertionCredential
+    expires         time.Time
+    mtx             *sync.RWMutex
+}
+
+// WorkloadIdentityCredentialOptions contains optional parameters for WorkloadIdentityCredential.
+type WorkloadIdentityCredentialOptions struct {
+    azcore.ClientOptions
+
+    // AdditionallyAllowedTenants specifies additional tenants for which the credential may acquire tokens.
+    // Add the wildcard value "*" to allow the credential to acquire tokens for any tenant in which the
+    // application is registered.
+    AdditionallyAllowedTenants []string
+    // ClientID of the service principal. Defaults to the value of the environment variable AZURE_CLIENT_ID.
+    ClientID string
+    // DisableInstanceDiscovery should be set true only by applications authenticating in disconnected clouds, or
+    // private clouds such as Azure Stack. It determines whether the credential requests Azure AD instance metadata
+    // from https://login.microsoft.com before authenticating. Setting this to true will skip this request, making
+    // the application responsible for ensuring the configured authority is valid and trustworthy.
+    DisableInstanceDiscovery bool
+    // TenantID of the service principal. Defaults to the value of the environment variable AZURE_TENANT_ID.
+    TenantID string
+    // TokenFilePath is the path of a file containing a Kubernetes service account token. Defaults to the value of the
+    // environment variable AZURE_FEDERATED_TOKEN_FILE.
+    TokenFilePath string
+}
+
+// NewWorkloadIdentityCredential constructs a WorkloadIdentityCredential. Service principal configuration is read
+// from environment variables as set by the Azure workload identity webhook. Set options to override those values.
+func NewWorkloadIdentityCredential(options *WorkloadIdentityCredentialOptions) (*WorkloadIdentityCredential, error) {
+    if options == nil {
+        options = &WorkloadIdentityCredentialOptions{}
+    }
+    ok := false
+    clientID := options.ClientID
+    if clientID == "" {
+        if clientID, ok = os.LookupEnv(azureClientID); !ok {
+            return nil, errors.New("no client ID specified. Check pod configuration or set ClientID in the options")
+        }
+    }
+    file := options.TokenFilePath
+    if file == "" {
+        if file, ok = os.LookupEnv(azureFederatedTokenFile); !ok {
+            return nil, errors.New("no token file specified. Check pod configuration or set TokenFilePath in the options")
+        }
+    }
+    tenantID := options.TenantID
+    if tenantID == "" {
+        if tenantID, ok = os.LookupEnv(azureTenantID); !ok {
+            return nil, errors.New("no tenant ID specified. Check pod configuration or set TenantID in the options")
+        }
+    }
+    w := WorkloadIdentityCredential{file: file, mtx: &sync.RWMutex{}}
+    caco := ClientAssertionCredentialOptions{
+        AdditionallyAllowedTenants: options.AdditionallyAllowedTenants,
+        ClientOptions:              options.ClientOptions,
+        DisableInstanceDiscovery:   options.DisableInstanceDiscovery,
+    }
+    cred, err := NewClientAssertionCredential(tenantID, clientID, w.getAssertion, &caco)
+    if err != nil {
+        return nil, err
+    }
+    // we want "WorkloadIdentityCredential" in log messages, not "ClientAssertionCredential"
+    cred.client.name = credNameWorkloadIdentity
+    w.cred = cred
+    return &w, nil
+}
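For illustration, a minimal sketch of constructing this credential inside a pod where the workload identity webhook has injected AZURE_CLIENT_ID, AZURE_TENANT_ID, and AZURE_FEDERATED_TOKEN_FILE; the ARM scope is a placeholder:

    package main

    import (
        "context"
        "fmt"

        "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
        "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
    )

    func main() {
        // nil options: client/tenant IDs and the token file path come from the env vars
        cred, err := azidentity.NewWorkloadIdentityCredential(nil)
        if err != nil {
            panic(err)
        }
        tk, err := cred.GetToken(context.Background(), policy.TokenRequestOptions{
            Scopes: []string{"https://management.azure.com/.default"},
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("token expires:", tk.ExpiresOn)
    }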
+// GetToken requests an access token from Azure Active Directory. Azure SDK clients call this method automatically.
+func (w *WorkloadIdentityCredential) GetToken(ctx context.Context, opts policy.TokenRequestOptions) (azcore.AccessToken, error) {
+    return w.cred.GetToken(ctx, opts)
+}
+
+// getAssertion returns the specified file's content, which is expected to be a Kubernetes service account token.
+// Kubernetes is responsible for updating the file as service account tokens expire.
+func (w *WorkloadIdentityCredential) getAssertion(context.Context) (string, error) {
+    w.mtx.RLock()
+    if w.expires.Before(time.Now()) {
+        // ensure only one goroutine at a time updates the assertion
+        w.mtx.RUnlock()
+        w.mtx.Lock()
+        defer w.mtx.Unlock()
+        // double check because another goroutine may have acquired the write lock first and done the update
+        if now := time.Now(); w.expires.Before(now) {
+            content, err := os.ReadFile(w.file)
+            if err != nil {
+                return "", err
+            }
+            w.assertion = string(content)
+            // Kubernetes rotates service account tokens when they reach 80% of their total TTL. The shortest TTL
+            // is 1 hour. That implies the token we just read is valid for at least 12 minutes (20% of 1 hour),
+            // but we add some margin for safety.
+            w.expires = now.Add(10 * time.Minute)
+        }
+    } else {
+        defer w.mtx.RUnlock()
+    }
+    return w.assertion, nil
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/LICENSE.txt b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/LICENSE.txt
new file mode 100644
index 000000000000..48ea6616b5b8
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/LICENSE.txt
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) Microsoft Corporation.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/diag/diag.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/diag/diag.go
new file mode 100644
index 000000000000..245af7d2bec4
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/diag/diag.go
@@ -0,0 +1,51 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package diag
+
+import (
+    "fmt"
+    "runtime"
+    "strings"
+)
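For illustration, a sketch of how the two helpers below behave; since sdk/internal packages are only importable from inside the SDK module, imagine this as a _test.go file next to the package (output is nondeterministic, so no Output comment):

    package diag_test

    import (
        "fmt"
        "testing"

        "github.com/Azure/azure-sdk-for-go/sdk/internal/diag"
    )

    func TestCallerDemo(t *testing.T) {
        // skipFrames 0 reports the line of this call, not a frame inside diag
        fmt.Println(diag.Caller(0))
        // skip no frames, include at most five in the formatted trace
        fmt.Println(diag.StackTrace(0, 5))
    }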
+
+// Caller returns the file and line number of a frame on the caller's stack.
+// If the function fails an empty string is returned.
+// skipFrames - the number of frames to skip when determining the caller.
+// Passing a value of 0 will return the immediate caller of this function.
+func Caller(skipFrames int) string {
+    if pc, file, line, ok := runtime.Caller(skipFrames + 1); ok {
+        // the skipFrames + 1 is to skip ourselves
+        frame := runtime.FuncForPC(pc)
+        return fmt.Sprintf("%s()\n\t%s:%d", frame.Name(), file, line)
+    }
+    return ""
+}
+
+// StackTrace returns a formatted stack trace string.
+// If the function fails an empty string is returned.
+// skipFrames - the number of stack frames to skip before composing the trace string.
+// totalFrames - the maximum number of stack frames to include in the trace string.
+func StackTrace(skipFrames, totalFrames int) string {
+    pcCallers := make([]uintptr, totalFrames)
+    if frames := runtime.Callers(skipFrames, pcCallers); frames == 0 {
+        return ""
+    }
+    frames := runtime.CallersFrames(pcCallers)
+    sb := strings.Builder{}
+    for {
+        frame, more := frames.Next()
+        sb.WriteString(frame.Function)
+        sb.WriteString("()\n\t")
+        sb.WriteString(frame.File)
+        sb.WriteRune(':')
+        sb.WriteString(fmt.Sprintf("%d\n", frame.Line))
+        if !more {
+            break
+        }
+    }
+    return sb.String()
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/diag/doc.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/diag/doc.go
new file mode 100644
index 000000000000..66bf13e5f04b
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/diag/doc.go
@@ -0,0 +1,7 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package diag
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/errorinfo/doc.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/errorinfo/doc.go
new file mode 100644
index 000000000000..8c6eacb618a3
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/errorinfo/doc.go
@@ -0,0 +1,7 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package errorinfo
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/errorinfo/errorinfo.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/errorinfo/errorinfo.go
new file mode 100644
index 000000000000..8ee66b52676e
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/errorinfo/errorinfo.go
@@ -0,0 +1,46 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package errorinfo
+
+// NonRetriable represents a non-transient error. This works in
+// conjunction with the retry policy, indicating that the error condition
+// is not transient, so no retries will be attempted.
+// Use errors.As() to access this interface in the error chain.
+type NonRetriable interface {
+    error
+    NonRetriable()
+}
+
+// NonRetriableError marks the specified error as non-retriable.
+// This function takes an error as input and returns a new error that is marked as non-retriable.
+func NonRetriableError(err error) error {
+    return &nonRetriableError{err}
+}
+
+// nonRetriableError is a struct that embeds the error interface.
+// It is used to represent errors that should not be retried.
+type nonRetriableError struct {
+    error
+}
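As the NonRetriable doc comment says, callers detect the marker interface with errors.As(); a minimal in-module test sketch:

    package errorinfo_test

    import (
        "errors"
        "fmt"

        "github.com/Azure/azure-sdk-for-go/sdk/internal/errorinfo"
    )

    func Example_nonRetriable() {
        err := errorinfo.NonRetriableError(errors.New("bad request"))
        var nr errorinfo.NonRetriable
        if errors.As(err, &nr) {
            fmt.Println("will not retry:", nr.Error())
        }
        // Output: will not retry: bad request
    }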
+
+// Error method for nonRetriableError struct.
+// It returns the error message of the embedded error.
+func (p *nonRetriableError) Error() string {
+    return p.error.Error()
+}
+
+// NonRetriable is a marker method for nonRetriableError struct.
+// Non-functional and indicates that the error is non-retriable.
+func (*nonRetriableError) NonRetriable() {
+    // marker method
+}
+
+// Unwrap method for nonRetriableError struct.
+// It returns the original error that was marked as non-retriable.
+func (p *nonRetriableError) Unwrap() error {
+    return p.error
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/exported/exported.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/exported/exported.go
new file mode 100644
index 000000000000..9948f604b301
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/exported/exported.go
@@ -0,0 +1,129 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package exported
+
+import (
+    "errors"
+    "io"
+    "net/http"
+)
+
+// HasStatusCode returns true if the Response's status code is one of the specified values.
+// Exported as runtime.HasStatusCode().
+func HasStatusCode(resp *http.Response, statusCodes ...int) bool {
+    if resp == nil {
+        return false
+    }
+    for _, sc := range statusCodes {
+        if resp.StatusCode == sc {
+            return true
+        }
+    }
+    return false
+}
+
+// PayloadOptions contains the optional values for the Payload func.
+// NOT exported but used by azcore.
+type PayloadOptions struct {
+    // BytesModifier receives the downloaded byte slice and returns an updated byte slice.
+    // Use this to modify the downloaded bytes in a payload (e.g. removing a BOM).
+    BytesModifier func([]byte) []byte
+}
+
+// Payload reads and returns the response body or an error.
+// On a successful read, the response body is cached.
+// Subsequent reads will access the cached value.
+// Exported as runtime.Payload() WITHOUT the opts parameter.
+func Payload(resp *http.Response, opts *PayloadOptions) ([]byte, error) {
+    if resp.Body == nil {
+        // this shouldn't happen in real-world scenarios as a
+        // response with no body should set it to http.NoBody
+        return nil, nil
+    }
+    modifyBytes := func(b []byte) []byte { return b }
+    if opts != nil && opts.BytesModifier != nil {
+        modifyBytes = opts.BytesModifier
+    }
+
+    // resp.Body won't be a nopClosingBytesReader if downloading was skipped
+    if buf, ok := resp.Body.(*nopClosingBytesReader); ok {
+        bytesBody := modifyBytes(buf.Bytes())
+        buf.Set(bytesBody)
+        return bytesBody, nil
+    }
+
+    bytesBody, err := io.ReadAll(resp.Body)
+    resp.Body.Close()
+    if err != nil {
+        return nil, err
+    }
+
+    bytesBody = modifyBytes(bytesBody)
+    resp.Body = &nopClosingBytesReader{s: bytesBody}
+    return bytesBody, nil
+}
+
+// PayloadDownloaded returns true if the response body has already been downloaded.
+// This implies that the Payload() func above has been previously called.
+// NOT exported but used by azcore.
+func PayloadDownloaded(resp *http.Response) bool {
+    _, ok := resp.Body.(*nopClosingBytesReader)
+    return ok
+}
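For illustration, a minimal in-module test sketch of the caching behavior and the BytesModifier hook (here stripping a UTF-8 BOM); the response is synthetic:

    package exported_test

    import (
        "bytes"
        "fmt"
        "io"
        "net/http"
        "strings"

        "github.com/Azure/azure-sdk-for-go/sdk/internal/exported"
    )

    func Example_payload() {
        resp := &http.Response{Body: io.NopCloser(strings.NewReader("\ufeff{\"ok\":true}"))}
        stripBOM := func(b []byte) []byte { return bytes.TrimPrefix(b, []byte("\ufeff")) }

        // the first call drains resp.Body, applies the modifier, and caches the result
        body, _ := exported.Payload(resp, &exported.PayloadOptions{BytesModifier: stripBOM})
        fmt.Printf("%s cached=%v\n", body, exported.PayloadDownloaded(resp))

        // subsequent calls are served from the cached nopClosingBytesReader
        body, _ = exported.Payload(resp, nil)
        fmt.Printf("%s\n", body)
        // Output:
        // {"ok":true} cached=true
        // {"ok":true}
    }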
+
+// nopClosingBytesReader is an io.ReadSeekCloser around a byte slice.
+// It also provides direct access to the byte slice to avoid rereading.
+type nopClosingBytesReader struct {
+    s []byte
+    i int64
+}
+
+// Bytes returns the underlying byte slice.
+func (r *nopClosingBytesReader) Bytes() []byte {
+    return r.s
+}
+
+// Close implements the io.Closer interface.
+func (*nopClosingBytesReader) Close() error {
+    return nil
+}
+
+// Read implements the io.Reader interface.
+func (r *nopClosingBytesReader) Read(b []byte) (n int, err error) {
+    if r.i >= int64(len(r.s)) {
+        return 0, io.EOF
+    }
+    n = copy(b, r.s[r.i:])
+    r.i += int64(n)
+    return
+}
+
+// Set replaces the existing byte slice with the specified byte slice and resets the reader.
+func (r *nopClosingBytesReader) Set(b []byte) {
+    r.s = b
+    r.i = 0
+}
+
+// Seek implements the io.Seeker interface.
+func (r *nopClosingBytesReader) Seek(offset int64, whence int) (int64, error) {
+    var i int64
+    switch whence {
+    case io.SeekStart:
+        i = offset
+    case io.SeekCurrent:
+        i = r.i + offset
+    case io.SeekEnd:
+        i = int64(len(r.s)) + offset
+    default:
+        return 0, errors.New("nopClosingBytesReader: invalid whence")
+    }
+    if i < 0 {
+        return 0, errors.New("nopClosingBytesReader: negative position")
+    }
+    r.i = i
+    return i, nil
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/log/doc.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/log/doc.go
new file mode 100644
index 000000000000..d7876d297ae9
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/log/doc.go
@@ -0,0 +1,7 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package log
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/log/log.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/log/log.go
new file mode 100644
index 000000000000..4f1dcf1b78a6
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/log/log.go
@@ -0,0 +1,104 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package log
+
+import (
+    "fmt"
+    "os"
+    "time"
+)
+
+///////////////////////////////////////////////////////////////////////////////////////////////////
+// NOTE: The following are exported as public surface area from azcore. DO NOT MODIFY
+///////////////////////////////////////////////////////////////////////////////////////////////////
+
+// Event is used to group entries. Each group can be toggled on or off.
+type Event string
+
+// SetEvents is used to control which events are written to
+// the log. By default all log events are written.
+func SetEvents(cls ...Event) {
+    log.cls = cls
+}
+
+// SetListener will set the Logger to write to the specified listener.
+func SetListener(lst func(Event, string)) {
+    log.lst = lst
+}
+
+///////////////////////////////////////////////////////////////////////////////////////////////////
+// END PUBLIC SURFACE AREA
+///////////////////////////////////////////////////////////////////////////////////////////////////
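For illustration, a minimal in-module test sketch wiring a listener and an event filter through this surface; the "Authentication" and "Retry" event names are placeholders:

    package log_test

    import (
        "fmt"

        azlog "github.com/Azure/azure-sdk-for-go/sdk/internal/log"
    )

    func Example_listener() {
        // route log output to stdout and keep only one event class
        azlog.SetListener(func(e azlog.Event, msg string) {
            fmt.Printf("[%s] %s\n", e, msg)
        })
        azlog.SetEvents(azlog.Event("Authentication"))

        azlog.Write(azlog.Event("Authentication"), "this is written")
        azlog.Write(azlog.Event("Retry"), "this is filtered out")
        // Output: [Authentication] this is written
    }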
+
+// Should returns true if the specified log event should be written to the log.
+// By default all log events will be logged. Call SetEvents() to limit
+// the log events for logging.
+// If no listener has been set this will return false.
+// Calling this method is useful when the message to log is computationally expensive
+// and you want to avoid the overhead if its log event is not enabled.
+func Should(cls Event) bool {
+    if log.lst == nil {
+        return false
+    }
+    if len(log.cls) == 0 {
+        return true
+    }
+    for _, c := range log.cls {
+        if c == cls {
+            return true
+        }
+    }
+    return false
+}
+
+// Write invokes the underlying listener with the specified event and message.
+// If the event shouldn't be logged or there is no listener then Write does nothing.
+func Write(cls Event, message string) {
+    if !Should(cls) {
+        return
+    }
+    log.lst(cls, message)
+}
+
+// Writef invokes the underlying listener with the specified event and formatted message.
+// If the event shouldn't be logged or there is no listener then Writef does nothing.
+func Writef(cls Event, format string, a ...interface{}) {
+    if !Should(cls) {
+        return
+    }
+    log.lst(cls, fmt.Sprintf(format, a...))
+}
+
+// TestResetEvents is used for TESTING PURPOSES ONLY.
+func TestResetEvents() {
+    log.cls = nil
+}
+
+// logger controls which events to log and writing to the underlying log.
+type logger struct {
+    cls []Event
+    lst func(Event, string)
+}
+
+// the process-wide logger
+var log logger
+
+func init() {
+    initLogging()
+}
+
+// split out for testing purposes
+func initLogging() {
+    if cls := os.Getenv("AZURE_SDK_GO_LOGGING"); cls == "all" {
+        // cls could be enhanced to support a comma-delimited list of log events
+        log.lst = func(cls Event, msg string) {
+            // simple console logger, it writes to stderr in the following format:
+            // [time-stamp] Event: message
+            fmt.Fprintf(os.Stderr, "[%s] %s: %s\n", time.Now().Format(time.StampMicro), cls, msg)
+        }
+    }
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/poller/util.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/poller/util.go
new file mode 100644
index 000000000000..db8269627d39
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/poller/util.go
@@ -0,0 +1,155 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package poller
+
+import (
+    "encoding/json"
+    "errors"
+    "fmt"
+    "net/http"
+    "net/url"
+    "strings"
+
+    "github.com/Azure/azure-sdk-for-go/sdk/internal/exported"
+)
+
+// the well-known set of LRO status/provisioning state values.
+const (
+    StatusSucceeded  = "Succeeded"
+    StatusCanceled   = "Canceled"
+    StatusFailed     = "Failed"
+    StatusInProgress = "InProgress"
+)
+
+// these are non-conformant states that we've seen in the wild.
+// we support them for back-compat.
+const (
+    StatusCancelled = "Cancelled"
+    StatusCompleted = "Completed"
+)
+
+// IsTerminalState returns true if the LRO's state is terminal.
+func IsTerminalState(s string) bool {
+    return Failed(s) || Succeeded(s)
+}
+
+// Failed returns true if the LRO's state is terminal failure.
+func Failed(s string) bool {
+    return strings.EqualFold(s, StatusFailed) || strings.EqualFold(s, StatusCanceled) || strings.EqualFold(s, StatusCancelled)
+}
+
+// Succeeded returns true if the LRO's state is terminal success.
+func Succeeded(s string) bool {
+    return strings.EqualFold(s, StatusSucceeded) || strings.EqualFold(s, StatusCompleted)
+}
+
+// StatusCodeValid returns true if the LRO response contains a valid HTTP status code.
+func StatusCodeValid(resp *http.Response) bool {
+    return exported.HasStatusCode(resp, http.StatusOK, http.StatusAccepted, http.StatusCreated, http.StatusNoContent)
+}
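For illustration, how the state helpers above treat conformant and non-conformant values, as an in-module test sketch:

    package poller_test

    import (
        "fmt"

        "github.com/Azure/azure-sdk-for-go/sdk/internal/poller"
    )

    func Example_states() {
        // comparisons are case-insensitive and accept the non-conformant
        // "Cancelled"/"Completed" spellings noted above
        for _, s := range []string{"InProgress", "succeeded", "Cancelled"} {
            fmt.Printf("%s terminal=%v failed=%v\n", s, poller.IsTerminalState(s), poller.Failed(s))
        }
        // Output:
        // InProgress terminal=false failed=false
        // succeeded terminal=true failed=false
        // Cancelled terminal=true failed=true
    }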
+
+// IsValidURL verifies that the URL is valid and absolute.
+func IsValidURL(s string) bool {
+    u, err := url.Parse(s)
+    return err == nil && u.IsAbs()
+}
+
+// ErrNoBody is returned if the response didn't contain a body.
+var ErrNoBody = errors.New("the response did not contain a body")
+
+// GetJSON reads the response body into a raw JSON object.
+// It returns ErrNoBody if there was no content.
+func GetJSON(resp *http.Response) (map[string]any, error) {
+    body, err := exported.Payload(resp, nil)
+    if err != nil {
+        return nil, err
+    }
+    if len(body) == 0 {
+        return nil, ErrNoBody
+    }
+    // unmarshal the body to get the value
+    var jsonBody map[string]any
+    if err = json.Unmarshal(body, &jsonBody); err != nil {
+        return nil, err
+    }
+    return jsonBody, nil
+}
+
+// provisioningState returns the provisioning state from the response or the empty string.
+func provisioningState(jsonBody map[string]any) string {
+    jsonProps, ok := jsonBody["properties"]
+    if !ok {
+        return ""
+    }
+    props, ok := jsonProps.(map[string]any)
+    if !ok {
+        return ""
+    }
+    rawPs, ok := props["provisioningState"]
+    if !ok {
+        return ""
+    }
+    ps, ok := rawPs.(string)
+    if !ok {
+        return ""
+    }
+    return ps
+}
+
+// status returns the status from the response or the empty string.
+func status(jsonBody map[string]any) string {
+    rawStatus, ok := jsonBody["status"]
+    if !ok {
+        return ""
+    }
+    status, ok := rawStatus.(string)
+    if !ok {
+        return ""
+    }
+    return status
+}
+
+// GetStatus returns the LRO's status from the response body.
+// Typically used for Azure-AsyncOperation flows.
+// If there is no status in the response body the empty string is returned.
+func GetStatus(resp *http.Response) (string, error) {
+    jsonBody, err := GetJSON(resp)
+    if err != nil {
+        return "", err
+    }
+    return status(jsonBody), nil
+}
+
+// GetProvisioningState returns the LRO's state from the response body.
+// If there is no state in the response body the empty string is returned.
+func GetProvisioningState(resp *http.Response) (string, error) {
+    jsonBody, err := GetJSON(resp)
+    if err != nil {
+        return "", err
+    }
+    return provisioningState(jsonBody), nil
+}
+
+// GetResourceLocation returns the LRO's resourceLocation value from the response body.
+// Typically used for Operation-Location flows.
+// If there is no resourceLocation in the response body the empty string is returned.
+func GetResourceLocation(resp *http.Response) (string, error) {
+    jsonBody, err := GetJSON(resp)
+    if err != nil {
+        return "", err
+    }
+    v, ok := jsonBody["resourceLocation"]
+    if !ok {
+        // it might be ok if the field doesn't exist, the caller must make that determination
+        return "", nil
+    }
+    vv, ok := v.(string)
+    if !ok {
+        return "", fmt.Errorf("the resourceLocation value %v was not in string format", v)
+    }
+    return vv, nil
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/temporal/resource.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/temporal/resource.go
new file mode 100644
index 000000000000..238ef42ed03a
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/temporal/resource.go
@@ -0,0 +1,123 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License.
+
+package temporal
+
+import (
+    "sync"
+    "time"
+)
+
+// AcquireResource abstracts a method for refreshing a temporal resource.
+type AcquireResource[TResource, TState any] func(state TState) (newResource TResource, newExpiration time.Time, err error) + +// Resource is a temporal resource (usually a credential) that requires periodic refreshing. +type Resource[TResource, TState any] struct { + // cond is used to synchronize access to the shared resource embodied by the remaining fields + cond *sync.Cond + + // acquiring indicates that some thread/goroutine is in the process of acquiring/updating the resource + acquiring bool + + // resource contains the value of the shared resource + resource TResource + + // expiration indicates when the shared resource expires; it is 0 if the resource was never acquired + expiration time.Time + + // lastAttempt indicates when a thread/goroutine last attempted to acquire/update the resource + lastAttempt time.Time + + // acquireResource is the callback function that actually acquires the resource + acquireResource AcquireResource[TResource, TState] +} + +// NewResource creates a new Resource that uses the specified AcquireResource for refreshing. +func NewResource[TResource, TState any](ar AcquireResource[TResource, TState]) *Resource[TResource, TState] { + return &Resource[TResource, TState]{cond: sync.NewCond(&sync.Mutex{}), acquireResource: ar} +} + +// Get returns the underlying resource. +// If the resource is fresh, no refresh is performed. +func (er *Resource[TResource, TState]) Get(state TState) (TResource, error) { + // If the resource is expiring within this time window, update it eagerly. + // This allows other threads/goroutines to keep running by using the not-yet-expired + // resource value while one thread/goroutine updates the resource. + const window = 5 * time.Minute // This example updates the resource 5 minutes prior to expiration + const backoff = 30 * time.Second // Minimum wait time between eager update attempts + + now, acquire, expired := time.Now(), false, false + + // acquire exclusive lock + er.cond.L.Lock() + resource := er.resource + + for { + expired = er.expiration.IsZero() || er.expiration.Before(now) + if expired { + // The resource was never acquired or has expired + if !er.acquiring { + // If another thread/goroutine is not acquiring/updating the resource, this thread/goroutine will do it + er.acquiring, acquire = true, true + break + } + // Getting here means that this thread/goroutine will wait for the updated resource + } else if er.expiration.Add(-window).Before(now) { + // The resource is valid but is expiring within the time window + if !er.acquiring && er.lastAttempt.Add(backoff).Before(now) { + // If another thread/goroutine is not acquiring/renewing the resource, and none has attempted + // to do so within the last 30 seconds, this thread/goroutine will do it + er.acquiring, acquire = true, true + break + } + // This thread/goroutine will use the existing resource value while another updates it + resource = er.resource + break + } else { + // The resource is not close to expiring, this thread/goroutine should use its current value + resource = er.resource + break + } + // If we get here, wait for the new resource value to be acquired/updated + er.cond.Wait() + } + er.cond.L.Unlock() // Release the lock so no threads/goroutines are blocked + + var err error + if acquire { + // This thread/goroutine has been selected to acquire/update the resource + var expiration time.Time + var newValue TResource + er.lastAttempt = now + newValue, expiration, err = er.acquireResource(state) + + // Atomically, update the shared resource's new value & 
expiration. + er.cond.L.Lock() + if err == nil { + // Update resource & expiration, return the new value + resource = newValue + er.resource, er.expiration = resource, expiration + } else if !expired { + // An eager update failed. Discard the error and return the current--still valid--resource value + err = nil + } + er.acquiring = false // Indicate that no thread/goroutine is currently acquiring the resource + + // Wake up any waiting threads/goroutines since there is a resource they can ALL use + er.cond.L.Unlock() + er.cond.Broadcast() + } + return resource, err // Return the resource this thread/goroutine can use +} + +// Expire marks the resource as expired, ensuring it's refreshed on the next call to Get(). +func (er *Resource[TResource, TState]) Expire() { + er.cond.L.Lock() + defer er.cond.L.Unlock() + + // Reset the expiration as if we never got this resource to begin with + er.expiration = time.Time{} +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/uuid/doc.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/uuid/doc.go new file mode 100644 index 000000000000..a3824bee8b5b --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/uuid/doc.go @@ -0,0 +1,7 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package uuid diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/uuid/uuid.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/uuid/uuid.go new file mode 100644 index 000000000000..278ac9cd1c2c --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/internal/uuid/uuid.go @@ -0,0 +1,76 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. + +package uuid + +import ( + "crypto/rand" + "errors" + "fmt" + "strconv" +) + +// The UUID reserved variants. +const ( + reservedRFC4122 byte = 0x40 +) + +// A UUID representation compliant with specification in RFC4122 document. +type UUID [16]byte + +// New returns a new UUID using the RFC4122 algorithm. +func New() (UUID, error) { + u := UUID{} + // Set all bits to pseudo-random values. + // NOTE: this takes a process-wide lock + _, err := rand.Read(u[:]) + if err != nil { + return u, err + } + u[8] = (u[8] | reservedRFC4122) & 0x7F // u.setVariant(ReservedRFC4122) + + var version byte = 4 + u[6] = (u[6] & 0xF) | (version << 4) // u.setVersion(4) + return u, nil +} + +// String returns the UUID in "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" format. +func (u UUID) String() string { + return fmt.Sprintf("%x-%x-%x-%x-%x", u[0:4], u[4:6], u[6:8], u[8:10], u[10:]) +} + +// Parse parses a string formatted as "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" +// or "{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}" into a UUID. 
+func Parse(s string) (UUID, error) { + var uuid UUID + // ensure format + switch len(s) { + case 36: + // xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx + case 38: + // {xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx} + s = s[1:37] + default: + return uuid, errors.New("invalid UUID format") + } + if s[8] != '-' || s[13] != '-' || s[18] != '-' || s[23] != '-' { + return uuid, errors.New("invalid UUID format") + } + // parse chunks + for i, x := range [16]int{ + 0, 2, 4, 6, + 9, 11, + 14, 16, + 19, 21, + 24, 26, 28, 30, 32, 34} { + b, err := strconv.ParseUint(s[x:x+2], 16, 8) + if err != nil { + return uuid, fmt.Errorf("invalid UUID format: %s", err) + } + uuid[i] = byte(b) + } + return uuid, nil +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/CHANGELOG.md b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/CHANGELOG.md new file mode 100644 index 000000000000..8861dcfc4d1d --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/CHANGELOG.md @@ -0,0 +1,726 @@ +# Release History + +## 4.8.0-beta.1 (2024-02-23) +### Features Added + +- New value `AgentPoolTypeVirtualMachines` added to enum type `AgentPoolType` +- New value `NetworkPolicyNone` added to enum type `NetworkPolicy` +- New value `NodeOSUpgradeChannelSecurityPatch` added to enum type `NodeOSUpgradeChannel` +- New value `OSSKUMariner`, `OSSKUWindowsAnnual` added to enum type `OSSKU` +- New value `PublicNetworkAccessSecuredByPerimeter` added to enum type `PublicNetworkAccess` +- New value `SnapshotTypeManagedCluster` added to enum type `SnapshotType` +- New value `WorkloadRuntimeKataMshvVMIsolation` added to enum type `WorkloadRuntime` +- New enum type `AddonAutoscaling` with values `AddonAutoscalingDisabled`, `AddonAutoscalingEnabled` +- New enum type `AgentPoolSSHAccess` with values `AgentPoolSSHAccessDisabled`, `AgentPoolSSHAccessLocalUser` +- New enum type `GuardrailsSupport` with values `GuardrailsSupportPreview`, `GuardrailsSupportStable` +- New enum type `IpvsScheduler` with values `IpvsSchedulerLeastConnection`, `IpvsSchedulerRoundRobin` +- New enum type `Level` with values `LevelEnforcement`, `LevelOff`, `LevelWarning` +- New enum type `Mode` with values `ModeIPTABLES`, `ModeIPVS` +- New enum type `NodeProvisioningMode` with values `NodeProvisioningModeAuto`, `NodeProvisioningModeManual` +- New enum type `RestrictionLevel` with values `RestrictionLevelReadOnly`, `RestrictionLevelUnrestricted` +- New enum type `SafeguardsSupport` with values `SafeguardsSupportPreview`, `SafeguardsSupportStable` +- New function `*AgentPoolsClient.BeginDeleteMachines(context.Context, string, string, string, AgentPoolDeleteMachinesParameter, *AgentPoolsClientBeginDeleteMachinesOptions) (*runtime.Poller[AgentPoolsClientDeleteMachinesResponse], error)` +- New function `*ClientFactory.NewMachinesClient() *MachinesClient` +- New function `*ClientFactory.NewManagedClusterSnapshotsClient() *ManagedClusterSnapshotsClient` +- New function `*ClientFactory.NewOperationStatusResultClient() *OperationStatusResultClient` +- New function `NewMachinesClient(string, azcore.TokenCredential, *arm.ClientOptions) (*MachinesClient, error)` +- New function `*MachinesClient.Get(context.Context, string, string, string, string, *MachinesClientGetOptions) (MachinesClientGetResponse, error)` +- New function `*MachinesClient.NewListPager(string, string, 
string, *MachinesClientListOptions) *runtime.Pager[MachinesClientListResponse]` +- New function `NewManagedClusterSnapshotsClient(string, azcore.TokenCredential, *arm.ClientOptions) (*ManagedClusterSnapshotsClient, error)` +- New function `*ManagedClusterSnapshotsClient.CreateOrUpdate(context.Context, string, string, ManagedClusterSnapshot, *ManagedClusterSnapshotsClientCreateOrUpdateOptions) (ManagedClusterSnapshotsClientCreateOrUpdateResponse, error)` +- New function `*ManagedClusterSnapshotsClient.Delete(context.Context, string, string, *ManagedClusterSnapshotsClientDeleteOptions) (ManagedClusterSnapshotsClientDeleteResponse, error)` +- New function `*ManagedClusterSnapshotsClient.Get(context.Context, string, string, *ManagedClusterSnapshotsClientGetOptions) (ManagedClusterSnapshotsClientGetResponse, error)` +- New function `*ManagedClusterSnapshotsClient.NewListByResourceGroupPager(string, *ManagedClusterSnapshotsClientListByResourceGroupOptions) *runtime.Pager[ManagedClusterSnapshotsClientListByResourceGroupResponse]` +- New function `*ManagedClusterSnapshotsClient.NewListPager(*ManagedClusterSnapshotsClientListOptions) *runtime.Pager[ManagedClusterSnapshotsClientListResponse]` +- New function `*ManagedClusterSnapshotsClient.UpdateTags(context.Context, string, string, TagsObject, *ManagedClusterSnapshotsClientUpdateTagsOptions) (ManagedClusterSnapshotsClientUpdateTagsResponse, error)` +- New function `*ManagedClustersClient.GetGuardrailsVersions(context.Context, string, string, *ManagedClustersClientGetGuardrailsVersionsOptions) (ManagedClustersClientGetGuardrailsVersionsResponse, error)` +- New function `*ManagedClustersClient.GetSafeguardsVersions(context.Context, string, string, *ManagedClustersClientGetSafeguardsVersionsOptions) (ManagedClustersClientGetSafeguardsVersionsResponse, error)` +- New function `*ManagedClustersClient.NewListGuardrailsVersionsPager(string, *ManagedClustersClientListGuardrailsVersionsOptions) *runtime.Pager[ManagedClustersClientListGuardrailsVersionsResponse]` +- New function `*ManagedClustersClient.NewListSafeguardsVersionsPager(string, *ManagedClustersClientListSafeguardsVersionsOptions) *runtime.Pager[ManagedClustersClientListSafeguardsVersionsResponse]` +- New function `NewOperationStatusResultClient(string, azcore.TokenCredential, *arm.ClientOptions) (*OperationStatusResultClient, error)` +- New function `*OperationStatusResultClient.Get(context.Context, string, string, string, *OperationStatusResultClientGetOptions) (OperationStatusResultClientGetResponse, error)` +- New function `*OperationStatusResultClient.GetByAgentPool(context.Context, string, string, string, string, *OperationStatusResultClientGetByAgentPoolOptions) (OperationStatusResultClientGetByAgentPoolResponse, error)` +- New function `*OperationStatusResultClient.NewListPager(string, string, *OperationStatusResultClientListOptions) *runtime.Pager[OperationStatusResultClientListResponse]` +- New struct `AgentPoolArtifactStreamingProfile` +- New struct `AgentPoolDeleteMachinesParameter` +- New struct `AgentPoolGPUProfile` +- New struct `AgentPoolSecurityProfile` +- New struct `AgentPoolWindowsProfile` +- New struct `ErrorAdditionalInfo` +- New struct `ErrorDetail` +- New struct `GuardrailsAvailableVersion` +- New struct `GuardrailsAvailableVersionsList` +- New struct `GuardrailsAvailableVersionsProperties` +- New struct `Machine` +- New struct `MachineIPAddress` +- New struct `MachineListResult` +- New struct `MachineNetworkProperties` +- New struct `MachineProperties` +- New struct 
`ManagedClusterAIToolchainOperatorProfile` +- New struct `ManagedClusterAzureMonitorProfileAppMonitoring` +- New struct `ManagedClusterAzureMonitorProfileAppMonitoringOpenTelemetryMetrics` +- New struct `ManagedClusterAzureMonitorProfileContainerInsights` +- New struct `ManagedClusterAzureMonitorProfileLogs` +- New struct `ManagedClusterAzureMonitorProfileWindowsHostLogs` +- New struct `ManagedClusterCostAnalysis` +- New struct `ManagedClusterIngressProfile` +- New struct `ManagedClusterIngressProfileWebAppRouting` +- New struct `ManagedClusterMetricsProfile` +- New struct `ManagedClusterNodeProvisioningProfile` +- New struct `ManagedClusterNodeResourceGroupProfile` +- New struct `ManagedClusterPropertiesForSnapshot` +- New struct `ManagedClusterSecurityProfileImageIntegrity` +- New struct `ManagedClusterSecurityProfileNodeRestriction` +- New struct `ManagedClusterSnapshot` +- New struct `ManagedClusterSnapshotListResult` +- New struct `ManagedClusterSnapshotProperties` +- New struct `ManualScaleProfile` +- New struct `NetworkMonitoring` +- New struct `NetworkProfileForSnapshot` +- New struct `NetworkProfileKubeProxyConfig` +- New struct `NetworkProfileKubeProxyConfigIpvsConfig` +- New struct `OperationStatusResult` +- New struct `OperationStatusResultList` +- New struct `SafeguardsAvailableVersion` +- New struct `SafeguardsAvailableVersionsList` +- New struct `SafeguardsAvailableVersionsProperties` +- New struct `SafeguardsProfile` +- New struct `ScaleProfile` +- New struct `VirtualMachineNodes` +- New struct `VirtualMachinesProfile` +- New field `IgnorePodDisruptionBudget` in struct `AgentPoolsClientBeginDeleteOptions` +- New field `EnableVnetIntegration`, `SubnetID` in struct `ManagedClusterAPIServerAccessProfile` +- New field `ArtifactStreamingProfile`, `EnableCustomCATrust`, `GpuProfile`, `MessageOfTheDay`, `NodeInitializationTaints`, `SecurityProfile`, `VirtualMachineNodesStatus`, `VirtualMachinesProfile`, `WindowsProfile` in struct `ManagedClusterAgentPoolProfile` +- New field `ArtifactStreamingProfile`, `EnableCustomCATrust`, `GpuProfile`, `MessageOfTheDay`, `NodeInitializationTaints`, `SecurityProfile`, `VirtualMachineNodesStatus`, `VirtualMachinesProfile`, `WindowsProfile` in struct `ManagedClusterAgentPoolProfileProperties` +- New field `Logs` in struct `ManagedClusterAzureMonitorProfile` +- New field `AppMonitoringOpenTelemetryMetrics` in struct `ManagedClusterAzureMonitorProfileMetrics` +- New field `EffectiveNoProxy` in struct `ManagedClusterHTTPProxyConfig` +- New field `AiToolchainOperatorProfile`, `CreationData`, `EnableNamespaceResources`, `IngressProfile`, `MetricsProfile`, `NodeProvisioningProfile`, `NodeResourceGroupProfile`, `SafeguardsProfile` in struct `ManagedClusterProperties` +- New field `DaemonsetEvictionForEmptyNodes`, `DaemonsetEvictionForOccupiedNodes`, `IgnoreDaemonsetsUtilization` in struct `ManagedClusterPropertiesAutoScalerProfile` +- New field `CustomCATrustCertificates`, `ImageIntegrity`, `NodeRestriction` in struct `ManagedClusterSecurityProfile` +- New field `Version` in struct `ManagedClusterStorageProfileDiskCSIDriver` +- New field `AddonAutoscaling` in struct `ManagedClusterWorkloadAutoScalerProfileVerticalPodAutoscaler` +- New field `IgnorePodDisruptionBudget` in struct `ManagedClustersClientBeginDeleteOptions` +- New field `KubeProxyConfig`, `Monitoring` in struct `NetworkProfile` + + +## 4.7.0 (2024-01-26) +### Features Added + +- New field `NodeSoakDurationInMinutes` in struct `AgentPoolUpgradeSettings` + + +## 4.7.0-beta.1 (2023-12-22) +### 
Features Added + +- New value `AgentPoolTypeVirtualMachines` added to enum type `AgentPoolType` +- New value `NetworkPolicyNone` added to enum type `NetworkPolicy` +- New value `NodeOSUpgradeChannelSecurityPatch` added to enum type `NodeOSUpgradeChannel` +- New value `OSSKUMariner`, `OSSKUWindowsAnnual` added to enum type `OSSKU` +- New value `PublicNetworkAccessSecuredByPerimeter` added to enum type `PublicNetworkAccess` +- New value `SnapshotTypeManagedCluster` added to enum type `SnapshotType` +- New value `WorkloadRuntimeKataMshvVMIsolation` added to enum type `WorkloadRuntime` +- New enum type `AddonAutoscaling` with values `AddonAutoscalingDisabled`, `AddonAutoscalingEnabled` +- New enum type `AgentPoolSSHAccess` with values `AgentPoolSSHAccessDisabled`, `AgentPoolSSHAccessLocalUser` +- New enum type `GuardrailsSupport` with values `GuardrailsSupportPreview`, `GuardrailsSupportStable` +- New enum type `IpvsScheduler` with values `IpvsSchedulerLeastConnection`, `IpvsSchedulerRoundRobin` +- New enum type `Level` with values `LevelEnforcement`, `LevelOff`, `LevelWarning` +- New enum type `Mode` with values `ModeIPTABLES`, `ModeIPVS` +- New enum type `NodeProvisioningMode` with values `NodeProvisioningModeAuto`, `NodeProvisioningModeManual` +- New enum type `RestrictionLevel` with values `RestrictionLevelReadOnly`, `RestrictionLevelUnrestricted` +- New function `*AgentPoolsClient.BeginDeleteMachines(context.Context, string, string, string, AgentPoolDeleteMachinesParameter, *AgentPoolsClientBeginDeleteMachinesOptions) (*runtime.Poller[AgentPoolsClientDeleteMachinesResponse], error)` +- New function `*ClientFactory.NewMachinesClient() *MachinesClient` +- New function `*ClientFactory.NewManagedClusterSnapshotsClient() *ManagedClusterSnapshotsClient` +- New function `*ClientFactory.NewOperationStatusResultClient() *OperationStatusResultClient` +- New function `NewMachinesClient(string, azcore.TokenCredential, *arm.ClientOptions) (*MachinesClient, error)` +- New function `*MachinesClient.Get(context.Context, string, string, string, string, *MachinesClientGetOptions) (MachinesClientGetResponse, error)` +- New function `*MachinesClient.NewListPager(string, string, string, *MachinesClientListOptions) *runtime.Pager[MachinesClientListResponse]` +- New function `NewManagedClusterSnapshotsClient(string, azcore.TokenCredential, *arm.ClientOptions) (*ManagedClusterSnapshotsClient, error)` +- New function `*ManagedClusterSnapshotsClient.CreateOrUpdate(context.Context, string, string, ManagedClusterSnapshot, *ManagedClusterSnapshotsClientCreateOrUpdateOptions) (ManagedClusterSnapshotsClientCreateOrUpdateResponse, error)` +- New function `*ManagedClusterSnapshotsClient.Delete(context.Context, string, string, *ManagedClusterSnapshotsClientDeleteOptions) (ManagedClusterSnapshotsClientDeleteResponse, error)` +- New function `*ManagedClusterSnapshotsClient.Get(context.Context, string, string, *ManagedClusterSnapshotsClientGetOptions) (ManagedClusterSnapshotsClientGetResponse, error)` +- New function `*ManagedClusterSnapshotsClient.NewListByResourceGroupPager(string, *ManagedClusterSnapshotsClientListByResourceGroupOptions) *runtime.Pager[ManagedClusterSnapshotsClientListByResourceGroupResponse]` +- New function `*ManagedClusterSnapshotsClient.NewListPager(*ManagedClusterSnapshotsClientListOptions) *runtime.Pager[ManagedClusterSnapshotsClientListResponse]` +- New function `*ManagedClusterSnapshotsClient.UpdateTags(context.Context, string, string, TagsObject, *ManagedClusterSnapshotsClientUpdateTagsOptions) 
(ManagedClusterSnapshotsClientUpdateTagsResponse, error)` +- New function `*ManagedClustersClient.GetGuardrailsVersions(context.Context, string, string, *ManagedClustersClientGetGuardrailsVersionsOptions) (ManagedClustersClientGetGuardrailsVersionsResponse, error)` +- New function `*ManagedClustersClient.NewListGuardrailsVersionsPager(string, *ManagedClustersClientListGuardrailsVersionsOptions) *runtime.Pager[ManagedClustersClientListGuardrailsVersionsResponse]` +- New function `NewOperationStatusResultClient(string, azcore.TokenCredential, *arm.ClientOptions) (*OperationStatusResultClient, error)` +- New function `*OperationStatusResultClient.Get(context.Context, string, string, string, *OperationStatusResultClientGetOptions) (OperationStatusResultClientGetResponse, error)` +- New function `*OperationStatusResultClient.GetByAgentPool(context.Context, string, string, string, string, *OperationStatusResultClientGetByAgentPoolOptions) (OperationStatusResultClientGetByAgentPoolResponse, error)` +- New function `*OperationStatusResultClient.NewListPager(string, string, *OperationStatusResultClientListOptions) *runtime.Pager[OperationStatusResultClientListResponse]` +- New struct `AgentPoolArtifactStreamingProfile` +- New struct `AgentPoolDeleteMachinesParameter` +- New struct `AgentPoolGPUProfile` +- New struct `AgentPoolSecurityProfile` +- New struct `AgentPoolWindowsProfile` +- New struct `ErrorAdditionalInfo` +- New struct `ErrorDetail` +- New struct `GuardrailsAvailableVersion` +- New struct `GuardrailsAvailableVersionsList` +- New struct `GuardrailsAvailableVersionsProperties` +- New struct `GuardrailsProfile` +- New struct `Machine` +- New struct `MachineIPAddress` +- New struct `MachineListResult` +- New struct `MachineNetworkProperties` +- New struct `MachineProperties` +- New struct `ManagedClusterAIToolchainOperatorProfile` +- New struct `ManagedClusterAzureMonitorProfileAppMonitoring` +- New struct `ManagedClusterAzureMonitorProfileAppMonitoringOpenTelemetryMetrics` +- New struct `ManagedClusterAzureMonitorProfileContainerInsights` +- New struct `ManagedClusterAzureMonitorProfileLogs` +- New struct `ManagedClusterAzureMonitorProfileWindowsHostLogs` +- New struct `ManagedClusterCostAnalysis` +- New struct `ManagedClusterIngressProfile` +- New struct `ManagedClusterIngressProfileWebAppRouting` +- New struct `ManagedClusterMetricsProfile` +- New struct `ManagedClusterNodeProvisioningProfile` +- New struct `ManagedClusterNodeResourceGroupProfile` +- New struct `ManagedClusterPropertiesForSnapshot` +- New struct `ManagedClusterSecurityProfileImageIntegrity` +- New struct `ManagedClusterSecurityProfileNodeRestriction` +- New struct `ManagedClusterSnapshot` +- New struct `ManagedClusterSnapshotListResult` +- New struct `ManagedClusterSnapshotProperties` +- New struct `ManualScaleProfile` +- New struct `NetworkMonitoring` +- New struct `NetworkProfileForSnapshot` +- New struct `NetworkProfileKubeProxyConfig` +- New struct `NetworkProfileKubeProxyConfigIpvsConfig` +- New struct `OperationStatusResult` +- New struct `OperationStatusResultList` +- New struct `ScaleProfile` +- New struct `VirtualMachineNodes` +- New struct `VirtualMachinesProfile` +- New field `NodeSoakDurationInMinutes` in struct `AgentPoolUpgradeSettings` +- New field `IgnorePodDisruptionBudget` in struct `AgentPoolsClientBeginDeleteOptions` +- New field `EnableVnetIntegration`, `SubnetID` in struct `ManagedClusterAPIServerAccessProfile` +- New field `ArtifactStreamingProfile`, `EnableCustomCATrust`, `GpuProfile`, 
`MessageOfTheDay`, `NodeInitializationTaints`, `SecurityProfile`, `VirtualMachineNodesStatus`, `VirtualMachinesProfile`, `WindowsProfile` in struct `ManagedClusterAgentPoolProfile` +- New field `ArtifactStreamingProfile`, `EnableCustomCATrust`, `GpuProfile`, `MessageOfTheDay`, `NodeInitializationTaints`, `SecurityProfile`, `VirtualMachineNodesStatus`, `VirtualMachinesProfile`, `WindowsProfile` in struct `ManagedClusterAgentPoolProfileProperties` +- New field `Logs` in struct `ManagedClusterAzureMonitorProfile` +- New field `AppMonitoringOpenTelemetryMetrics` in struct `ManagedClusterAzureMonitorProfileMetrics` +- New field `EffectiveNoProxy` in struct `ManagedClusterHTTPProxyConfig` +- New field `AiToolchainOperatorProfile`, `CreationData`, `EnableNamespaceResources`, `GuardrailsProfile`, `IngressProfile`, `MetricsProfile`, `NodeProvisioningProfile`, `NodeResourceGroupProfile` in struct `ManagedClusterProperties` +- New field `DaemonsetEvictionForEmptyNodes`, `DaemonsetEvictionForOccupiedNodes`, `IgnoreDaemonsetsUtilization` in struct `ManagedClusterPropertiesAutoScalerProfile` +- New field `CustomCATrustCertificates`, `ImageIntegrity`, `NodeRestriction` in struct `ManagedClusterSecurityProfile` +- New field `Version` in struct `ManagedClusterStorageProfileDiskCSIDriver` +- New field `AddonAutoscaling` in struct `ManagedClusterWorkloadAutoScalerProfileVerticalPodAutoscaler` +- New field `IgnorePodDisruptionBudget` in struct `ManagedClustersClientBeginDeleteOptions` +- New field `KubeProxyConfig`, `Monitoring` in struct `NetworkProfile` + + +## 4.6.0 (2023-11-24) +### Features Added + +- New enum type `BackendPoolType` with values `BackendPoolTypeNodeIP`, `BackendPoolTypeNodeIPConfiguration` +- New enum type `Protocol` with values `ProtocolTCP`, `ProtocolUDP` +- New struct `AgentPoolNetworkProfile` +- New struct `IPTag` +- New struct `PortRange` +- New field `CapacityReservationGroupID`, `NetworkProfile` in struct `ManagedClusterAgentPoolProfile` +- New field `CapacityReservationGroupID`, `NetworkProfile` in struct `ManagedClusterAgentPoolProfileProperties` +- New field `BackendPoolType` in struct `ManagedClusterLoadBalancerProfile` + + +## 4.6.0-beta.1 (2023-11-24) +### Features Added + +- New value `AgentPoolTypeVirtualMachines` added to enum type `AgentPoolType` +- New value `NetworkPolicyNone` added to enum type `NetworkPolicy` +- New value `NodeOSUpgradeChannelSecurityPatch` added to enum type `NodeOSUpgradeChannel` +- New value `OSSKUMariner`, `OSSKUWindowsAnnual` added to enum type `OSSKU` +- New value `PublicNetworkAccessSecuredByPerimeter` added to enum type `PublicNetworkAccess` +- New value `SnapshotTypeManagedCluster` added to enum type `SnapshotType` +- New value `WorkloadRuntimeKataMshvVMIsolation` added to enum type `WorkloadRuntime` +- New enum type `AddonAutoscaling` with values `AddonAutoscalingDisabled`, `AddonAutoscalingEnabled` +- New enum type `AgentPoolSSHAccess` with values `AgentPoolSSHAccessDisabled`, `AgentPoolSSHAccessLocalUser` +- New enum type `BackendPoolType` with values `BackendPoolTypeNodeIP`, `BackendPoolTypeNodeIPConfiguration` +- New enum type `GuardrailsSupport` with values `GuardrailsSupportPreview`, `GuardrailsSupportStable` +- New enum type `IpvsScheduler` with values `IpvsSchedulerLeastConnection`, `IpvsSchedulerRoundRobin` +- New enum type `Level` with values `LevelEnforcement`, `LevelOff`, `LevelWarning` +- New enum type `Mode` with values `ModeIPTABLES`, `ModeIPVS` +- New enum type `NodeProvisioningMode` with values `NodeProvisioningModeAuto`, 
`NodeProvisioningModeManual` +- New enum type `Protocol` with values `ProtocolTCP`, `ProtocolUDP` +- New enum type `RestrictionLevel` with values `RestrictionLevelReadOnly`, `RestrictionLevelUnrestricted` +- New function `*ClientFactory.NewMachinesClient() *MachinesClient` +- New function `*ClientFactory.NewManagedClusterSnapshotsClient() *ManagedClusterSnapshotsClient` +- New function `NewMachinesClient(string, azcore.TokenCredential, *arm.ClientOptions) (*MachinesClient, error)` +- New function `*MachinesClient.Get(context.Context, string, string, string, string, *MachinesClientGetOptions) (MachinesClientGetResponse, error)` +- New function `*MachinesClient.NewListPager(string, string, string, *MachinesClientListOptions) *runtime.Pager[MachinesClientListResponse]` +- New function `NewManagedClusterSnapshotsClient(string, azcore.TokenCredential, *arm.ClientOptions) (*ManagedClusterSnapshotsClient, error)` +- New function `*ManagedClusterSnapshotsClient.CreateOrUpdate(context.Context, string, string, ManagedClusterSnapshot, *ManagedClusterSnapshotsClientCreateOrUpdateOptions) (ManagedClusterSnapshotsClientCreateOrUpdateResponse, error)` +- New function `*ManagedClusterSnapshotsClient.Delete(context.Context, string, string, *ManagedClusterSnapshotsClientDeleteOptions) (ManagedClusterSnapshotsClientDeleteResponse, error)` +- New function `*ManagedClusterSnapshotsClient.Get(context.Context, string, string, *ManagedClusterSnapshotsClientGetOptions) (ManagedClusterSnapshotsClientGetResponse, error)` +- New function `*ManagedClusterSnapshotsClient.NewListByResourceGroupPager(string, *ManagedClusterSnapshotsClientListByResourceGroupOptions) *runtime.Pager[ManagedClusterSnapshotsClientListByResourceGroupResponse]` +- New function `*ManagedClusterSnapshotsClient.NewListPager(*ManagedClusterSnapshotsClientListOptions) *runtime.Pager[ManagedClusterSnapshotsClientListResponse]` +- New function `*ManagedClusterSnapshotsClient.UpdateTags(context.Context, string, string, TagsObject, *ManagedClusterSnapshotsClientUpdateTagsOptions) (ManagedClusterSnapshotsClientUpdateTagsResponse, error)` +- New function `*ManagedClustersClient.GetGuardrailsVersions(context.Context, string, string, *ManagedClustersClientGetGuardrailsVersionsOptions) (ManagedClustersClientGetGuardrailsVersionsResponse, error)` +- New function `*ManagedClustersClient.NewListGuardrailsVersionsPager(string, *ManagedClustersClientListGuardrailsVersionsOptions) *runtime.Pager[ManagedClustersClientListGuardrailsVersionsResponse]` +- New struct `AgentPoolArtifactStreamingProfile` +- New struct `AgentPoolGPUProfile` +- New struct `AgentPoolNetworkProfile` +- New struct `AgentPoolSecurityProfile` +- New struct `AgentPoolWindowsProfile` +- New struct `GuardrailsAvailableVersion` +- New struct `GuardrailsAvailableVersionsList` +- New struct `GuardrailsAvailableVersionsProperties` +- New struct `GuardrailsProfile` +- New struct `IPTag` +- New struct `Machine` +- New struct `MachineIPAddress` +- New struct `MachineListResult` +- New struct `MachineNetworkProperties` +- New struct `MachineProperties` +- New struct `ManagedClusterAIToolchainOperatorProfile` +- New struct `ManagedClusterAzureMonitorProfileAppMonitoring` +- New struct `ManagedClusterAzureMonitorProfileAppMonitoringOpenTelemetryMetrics` +- New struct `ManagedClusterAzureMonitorProfileContainerInsights` +- New struct `ManagedClusterAzureMonitorProfileLogs` +- New struct `ManagedClusterAzureMonitorProfileWindowsHostLogs` +- New struct `ManagedClusterCostAnalysis` +- New struct 
`ManagedClusterIngressProfile` +- New struct `ManagedClusterIngressProfileWebAppRouting` +- New struct `ManagedClusterMetricsProfile` +- New struct `ManagedClusterNodeProvisioningProfile` +- New struct `ManagedClusterNodeResourceGroupProfile` +- New struct `ManagedClusterPropertiesForSnapshot` +- New struct `ManagedClusterSecurityProfileImageIntegrity` +- New struct `ManagedClusterSecurityProfileNodeRestriction` +- New struct `ManagedClusterSnapshot` +- New struct `ManagedClusterSnapshotListResult` +- New struct `ManagedClusterSnapshotProperties` +- New struct `NetworkMonitoring` +- New struct `NetworkProfileForSnapshot` +- New struct `NetworkProfileKubeProxyConfig` +- New struct `NetworkProfileKubeProxyConfigIpvsConfig` +- New struct `PortRange` +- New field `NodeSoakDurationInMinutes` in struct `AgentPoolUpgradeSettings` +- New field `IgnorePodDisruptionBudget` in struct `AgentPoolsClientBeginDeleteOptions` +- New field `EnableVnetIntegration`, `SubnetID` in struct `ManagedClusterAPIServerAccessProfile` +- New field `ArtifactStreamingProfile`, `CapacityReservationGroupID`, `EnableCustomCATrust`, `GpuProfile`, `MessageOfTheDay`, `NetworkProfile`, `SecurityProfile`, `WindowsProfile` in struct `ManagedClusterAgentPoolProfile` +- New field `ArtifactStreamingProfile`, `CapacityReservationGroupID`, `EnableCustomCATrust`, `GpuProfile`, `MessageOfTheDay`, `NetworkProfile`, `SecurityProfile`, `WindowsProfile` in struct `ManagedClusterAgentPoolProfileProperties` +- New field `Logs` in struct `ManagedClusterAzureMonitorProfile` +- New field `AppMonitoringOpenTelemetryMetrics` in struct `ManagedClusterAzureMonitorProfileMetrics` +- New field `EffectiveNoProxy` in struct `ManagedClusterHTTPProxyConfig` +- New field `BackendPoolType` in struct `ManagedClusterLoadBalancerProfile` +- New field `AiToolchainOperatorProfile`, `CreationData`, `EnableNamespaceResources`, `GuardrailsProfile`, `IngressProfile`, `MetricsProfile`, `NodeProvisioningProfile`, `NodeResourceGroupProfile` in struct `ManagedClusterProperties` +- New field `DaemonsetEvictionForEmptyNodes`, `DaemonsetEvictionForOccupiedNodes`, `Expanders`, `IgnoreDaemonsetsUtilization` in struct `ManagedClusterPropertiesAutoScalerProfile` +- New field `CustomCATrustCertificates`, `ImageIntegrity`, `NodeRestriction` in struct `ManagedClusterSecurityProfile` +- New field `Version` in struct `ManagedClusterStorageProfileDiskCSIDriver` +- New field `AddonAutoscaling` in struct `ManagedClusterWorkloadAutoScalerProfileVerticalPodAutoscaler` +- New field `IgnorePodDisruptionBudget` in struct `ManagedClustersClientBeginDeleteOptions` +- New field `KubeProxyConfig`, `Monitoring` in struct `NetworkProfile` + + +## 4.5.0 (2023-11-24) +### Features Added + +- Support for test fakes and OpenTelemetry trace spans. 
+- New enum type `TrustedAccessRoleBindingProvisioningState` with values `TrustedAccessRoleBindingProvisioningStateCanceled`, `TrustedAccessRoleBindingProvisioningStateDeleting`, `TrustedAccessRoleBindingProvisioningStateFailed`, `TrustedAccessRoleBindingProvisioningStateSucceeded`, `TrustedAccessRoleBindingProvisioningStateUpdating` +- New function `*ClientFactory.NewTrustedAccessRoleBindingsClient() *TrustedAccessRoleBindingsClient` +- New function `*ClientFactory.NewTrustedAccessRolesClient() *TrustedAccessRolesClient` +- New function `NewTrustedAccessRoleBindingsClient(string, azcore.TokenCredential, *arm.ClientOptions) (*TrustedAccessRoleBindingsClient, error)` +- New function `*TrustedAccessRoleBindingsClient.BeginCreateOrUpdate(context.Context, string, string, string, TrustedAccessRoleBinding, *TrustedAccessRoleBindingsClientBeginCreateOrUpdateOptions) (*runtime.Poller[TrustedAccessRoleBindingsClientCreateOrUpdateResponse], error)` +- New function `*TrustedAccessRoleBindingsClient.BeginDelete(context.Context, string, string, string, *TrustedAccessRoleBindingsClientBeginDeleteOptions) (*runtime.Poller[TrustedAccessRoleBindingsClientDeleteResponse], error)` +- New function `*TrustedAccessRoleBindingsClient.Get(context.Context, string, string, string, *TrustedAccessRoleBindingsClientGetOptions) (TrustedAccessRoleBindingsClientGetResponse, error)` +- New function `*TrustedAccessRoleBindingsClient.NewListPager(string, string, *TrustedAccessRoleBindingsClientListOptions) *runtime.Pager[TrustedAccessRoleBindingsClientListResponse]` +- New function `NewTrustedAccessRolesClient(string, azcore.TokenCredential, *arm.ClientOptions) (*TrustedAccessRolesClient, error)` +- New function `*TrustedAccessRolesClient.NewListPager(string, *TrustedAccessRolesClientListOptions) *runtime.Pager[TrustedAccessRolesClientListResponse]` +- New struct `TrustedAccessRole` +- New struct `TrustedAccessRoleBinding` +- New struct `TrustedAccessRoleBindingListResult` +- New struct `TrustedAccessRoleBindingProperties` +- New struct `TrustedAccessRoleListResult` +- New struct `TrustedAccessRoleRule` + + +## 4.5.0-beta.1 (2023-10-27) +### Features Added + +- Support for test fakes and OpenTelemetry trace spans. 
+- New value `NetworkPolicyNone` added to enum type `NetworkPolicy` +- New value `NodeOSUpgradeChannelSecurityPatch` added to enum type `NodeOSUpgradeChannel` +- New value `OSSKUMariner` added to enum type `OSSKU` +- New value `PublicNetworkAccessSecuredByPerimeter` added to enum type `PublicNetworkAccess` +- New value `SnapshotTypeManagedCluster` added to enum type `SnapshotType` +- New value `WorkloadRuntimeKataMshvVMIsolation` added to enum type `WorkloadRuntime` +- New enum type `AddonAutoscaling` with values `AddonAutoscalingDisabled`, `AddonAutoscalingEnabled` +- New enum type `AgentPoolSSHAccess` with values `AgentPoolSSHAccessDisabled`, `AgentPoolSSHAccessLocalUser` +- New enum type `BackendPoolType` with values `BackendPoolTypeNodeIP`, `BackendPoolTypeNodeIPConfiguration` +- New enum type `GuardrailsSupport` with values `GuardrailsSupportPreview`, `GuardrailsSupportStable` +- New enum type `IpvsScheduler` with values `IpvsSchedulerLeastConnection`, `IpvsSchedulerRoundRobin` +- New enum type `Level` with values `LevelEnforcement`, `LevelOff`, `LevelWarning` +- New enum type `Mode` with values `ModeIPTABLES`, `ModeIPVS` +- New enum type `Protocol` with values `ProtocolTCP`, `ProtocolUDP` +- New enum type `RestrictionLevel` with values `RestrictionLevelReadOnly`, `RestrictionLevelUnrestricted` +- New enum type `TrustedAccessRoleBindingProvisioningState` with values `TrustedAccessRoleBindingProvisioningStateCanceled`, `TrustedAccessRoleBindingProvisioningStateDeleting`, `TrustedAccessRoleBindingProvisioningStateFailed`, `TrustedAccessRoleBindingProvisioningStateSucceeded`, `TrustedAccessRoleBindingProvisioningStateUpdating` +- New function `*ClientFactory.NewMachinesClient() *MachinesClient` +- New function `*ClientFactory.NewManagedClusterSnapshotsClient() *ManagedClusterSnapshotsClient` +- New function `*ClientFactory.NewTrustedAccessRoleBindingsClient() *TrustedAccessRoleBindingsClient` +- New function `*ClientFactory.NewTrustedAccessRolesClient() *TrustedAccessRolesClient` +- New function `NewMachinesClient(string, azcore.TokenCredential, *arm.ClientOptions) (*MachinesClient, error)` +- New function `*MachinesClient.Get(context.Context, string, string, string, string, *MachinesClientGetOptions) (MachinesClientGetResponse, error)` +- New function `*MachinesClient.NewListPager(string, string, string, *MachinesClientListOptions) *runtime.Pager[MachinesClientListResponse]` +- New function `NewManagedClusterSnapshotsClient(string, azcore.TokenCredential, *arm.ClientOptions) (*ManagedClusterSnapshotsClient, error)` +- New function `*ManagedClusterSnapshotsClient.CreateOrUpdate(context.Context, string, string, ManagedClusterSnapshot, *ManagedClusterSnapshotsClientCreateOrUpdateOptions) (ManagedClusterSnapshotsClientCreateOrUpdateResponse, error)` +- New function `*ManagedClusterSnapshotsClient.Delete(context.Context, string, string, *ManagedClusterSnapshotsClientDeleteOptions) (ManagedClusterSnapshotsClientDeleteResponse, error)` +- New function `*ManagedClusterSnapshotsClient.Get(context.Context, string, string, *ManagedClusterSnapshotsClientGetOptions) (ManagedClusterSnapshotsClientGetResponse, error)` +- New function `*ManagedClusterSnapshotsClient.NewListByResourceGroupPager(string, *ManagedClusterSnapshotsClientListByResourceGroupOptions) *runtime.Pager[ManagedClusterSnapshotsClientListByResourceGroupResponse]` +- New function `*ManagedClusterSnapshotsClient.NewListPager(*ManagedClusterSnapshotsClientListOptions) *runtime.Pager[ManagedClusterSnapshotsClientListResponse]` +- New 
function `*ManagedClusterSnapshotsClient.UpdateTags(context.Context, string, string, TagsObject, *ManagedClusterSnapshotsClientUpdateTagsOptions) (ManagedClusterSnapshotsClientUpdateTagsResponse, error)` +- New function `*ManagedClustersClient.GetGuardrailsVersions(context.Context, string, string, *ManagedClustersClientGetGuardrailsVersionsOptions) (ManagedClustersClientGetGuardrailsVersionsResponse, error)` +- New function `*ManagedClustersClient.NewListGuardrailsVersionsPager(string, *ManagedClustersClientListGuardrailsVersionsOptions) *runtime.Pager[ManagedClustersClientListGuardrailsVersionsResponse]` +- New function `NewTrustedAccessRoleBindingsClient(string, azcore.TokenCredential, *arm.ClientOptions) (*TrustedAccessRoleBindingsClient, error)` +- New function `*TrustedAccessRoleBindingsClient.BeginCreateOrUpdate(context.Context, string, string, string, TrustedAccessRoleBinding, *TrustedAccessRoleBindingsClientBeginCreateOrUpdateOptions) (*runtime.Poller[TrustedAccessRoleBindingsClientCreateOrUpdateResponse], error)` +- New function `*TrustedAccessRoleBindingsClient.BeginDelete(context.Context, string, string, string, *TrustedAccessRoleBindingsClientBeginDeleteOptions) (*runtime.Poller[TrustedAccessRoleBindingsClientDeleteResponse], error)` +- New function `*TrustedAccessRoleBindingsClient.Get(context.Context, string, string, string, *TrustedAccessRoleBindingsClientGetOptions) (TrustedAccessRoleBindingsClientGetResponse, error)` +- New function `*TrustedAccessRoleBindingsClient.NewListPager(string, string, *TrustedAccessRoleBindingsClientListOptions) *runtime.Pager[TrustedAccessRoleBindingsClientListResponse]` +- New function `NewTrustedAccessRolesClient(string, azcore.TokenCredential, *arm.ClientOptions) (*TrustedAccessRolesClient, error)` +- New function `*TrustedAccessRolesClient.NewListPager(string, *TrustedAccessRolesClientListOptions) *runtime.Pager[TrustedAccessRolesClientListResponse]` +- New struct `AgentPoolNetworkProfile` +- New struct `AgentPoolSecurityProfile` +- New struct `AgentPoolWindowsProfile` +- New struct `GuardrailsAvailableVersion` +- New struct `GuardrailsAvailableVersionsList` +- New struct `GuardrailsAvailableVersionsProperties` +- New struct `GuardrailsProfile` +- New struct `IPTag` +- New struct `Machine` +- New struct `MachineIPAddress` +- New struct `MachineListResult` +- New struct `MachineNetworkProperties` +- New struct `MachineProperties` +- New struct `ManagedClusterAzureMonitorProfileAppMonitoring` +- New struct `ManagedClusterAzureMonitorProfileAppMonitoringOpenTelemetryMetrics` +- New struct `ManagedClusterAzureMonitorProfileContainerInsights` +- New struct `ManagedClusterAzureMonitorProfileLogs` +- New struct `ManagedClusterAzureMonitorProfileWindowsHostLogs` +- New struct `ManagedClusterCostAnalysis` +- New struct `ManagedClusterIngressProfile` +- New struct `ManagedClusterIngressProfileWebAppRouting` +- New struct `ManagedClusterMetricsProfile` +- New struct `ManagedClusterNodeResourceGroupProfile` +- New struct `ManagedClusterPropertiesForSnapshot` +- New struct `ManagedClusterSecurityProfileImageIntegrity` +- New struct `ManagedClusterSecurityProfileNodeRestriction` +- New struct `ManagedClusterSnapshot` +- New struct `ManagedClusterSnapshotListResult` +- New struct `ManagedClusterSnapshotProperties` +- New struct `NetworkMonitoring` +- New struct `NetworkProfileForSnapshot` +- New struct `NetworkProfileKubeProxyConfig` +- New struct `NetworkProfileKubeProxyConfigIpvsConfig` +- New struct `PortRange` +- New struct `TrustedAccessRole` +- New 
struct `TrustedAccessRoleBinding` +- New struct `TrustedAccessRoleBindingListResult` +- New struct `TrustedAccessRoleBindingProperties` +- New struct `TrustedAccessRoleListResult` +- New struct `TrustedAccessRoleRule` +- New field `IgnorePodDisruptionBudget` in struct `AgentPoolsClientBeginDeleteOptions` +- New field `EnableVnetIntegration`, `SubnetID` in struct `ManagedClusterAPIServerAccessProfile` +- New field `CapacityReservationGroupID`, `EnableCustomCATrust`, `MessageOfTheDay`, `NetworkProfile`, `SecurityProfile`, `WindowsProfile` in struct `ManagedClusterAgentPoolProfile` +- New field `CapacityReservationGroupID`, `EnableCustomCATrust`, `MessageOfTheDay`, `NetworkProfile`, `SecurityProfile`, `WindowsProfile` in struct `ManagedClusterAgentPoolProfileProperties` +- New field `Logs` in struct `ManagedClusterAzureMonitorProfile` +- New field `AppMonitoringOpenTelemetryMetrics` in struct `ManagedClusterAzureMonitorProfileMetrics` +- New field `EffectiveNoProxy` in struct `ManagedClusterHTTPProxyConfig` +- New field `BackendPoolType` in struct `ManagedClusterLoadBalancerProfile` +- New field `CreationData`, `EnableNamespaceResources`, `GuardrailsProfile`, `IngressProfile`, `MetricsProfile`, `NodeResourceGroupProfile` in struct `ManagedClusterProperties` +- New field `DaemonsetEvictionForEmptyNodes`, `DaemonsetEvictionForOccupiedNodes`, `Expanders`, `IgnoreDaemonsetsUtilization` in struct `ManagedClusterPropertiesAutoScalerProfile` +- New field `CustomCATrustCertificates`, `ImageIntegrity`, `NodeRestriction` in struct `ManagedClusterSecurityProfile` +- New field `Version` in struct `ManagedClusterStorageProfileDiskCSIDriver` +- New field `AddonAutoscaling` in struct `ManagedClusterWorkloadAutoScalerProfileVerticalPodAutoscaler` +- New field `IgnorePodDisruptionBudget` in struct `ManagedClustersClientBeginDeleteOptions` +- New field `KubeProxyConfig`, `Monitoring` in struct `NetworkProfile` + + +## 4.4.0 (2023-10-27) +### Features Added + +- New enum type `IstioIngressGatewayMode` with values `IstioIngressGatewayModeExternal`, `IstioIngressGatewayModeInternal` +- New enum type `ServiceMeshMode` with values `ServiceMeshModeDisabled`, `ServiceMeshModeIstio` +- New function `*ManagedClustersClient.GetMeshRevisionProfile(context.Context, string, string, *ManagedClustersClientGetMeshRevisionProfileOptions) (ManagedClustersClientGetMeshRevisionProfileResponse, error)` +- New function `*ManagedClustersClient.GetMeshUpgradeProfile(context.Context, string, string, string, *ManagedClustersClientGetMeshUpgradeProfileOptions) (ManagedClustersClientGetMeshUpgradeProfileResponse, error)` +- New function `*ManagedClustersClient.NewListMeshRevisionProfilesPager(string, *ManagedClustersClientListMeshRevisionProfilesOptions) *runtime.Pager[ManagedClustersClientListMeshRevisionProfilesResponse]` +- New function `*ManagedClustersClient.NewListMeshUpgradeProfilesPager(string, string, *ManagedClustersClientListMeshUpgradeProfilesOptions) *runtime.Pager[ManagedClustersClientListMeshUpgradeProfilesResponse]` +- New struct `CompatibleVersions` +- New struct `IstioCertificateAuthority` +- New struct `IstioComponents` +- New struct `IstioEgressGateway` +- New struct `IstioIngressGateway` +- New struct `IstioPluginCertificateAuthority` +- New struct `IstioServiceMesh` +- New struct `MeshRevision` +- New struct `MeshRevisionProfile` +- New struct `MeshRevisionProfileList` +- New struct `MeshRevisionProfileProperties` +- New struct `MeshUpgradeProfile` +- New struct `MeshUpgradeProfileList` +- New struct 
`MeshUpgradeProfileProperties` +- New struct `ServiceMeshProfile` +- New field `ResourceUID`, `ServiceMeshProfile` in struct `ManagedClusterProperties` + + +## 4.4.0-beta.2 (2023-10-09) +### Features Added + +- Support for test fakes and OpenTelemetry trace spans. + +## 4.4.0-beta.1 (2023-09-22) +### Features Added + +- New value `NodeOSUpgradeChannelSecurityPatch` added to enum type `NodeOSUpgradeChannel` +- New value `OSSKUMariner` added to enum type `OSSKU` +- New value `PublicNetworkAccessSecuredByPerimeter` added to enum type `PublicNetworkAccess` +- New value `SnapshotTypeManagedCluster` added to enum type `SnapshotType` +- New value `WorkloadRuntimeKataMshvVMIsolation` added to enum type `WorkloadRuntime` +- New enum type `AgentPoolSSHAccess` with values `AgentPoolSSHAccessDisabled`, `AgentPoolSSHAccessLocalUser` +- New enum type `BackendPoolType` with values `BackendPoolTypeNodeIP`, `BackendPoolTypeNodeIPConfiguration` +- New enum type `GuardrailsSupport` with values `GuardrailsSupportPreview`, `GuardrailsSupportStable` +- New enum type `IpvsScheduler` with values `IpvsSchedulerLeastConnection`, `IpvsSchedulerRoundRobin` +- New enum type `IstioIngressGatewayMode` with values `IstioIngressGatewayModeExternal`, `IstioIngressGatewayModeInternal` +- New enum type `Level` with values `LevelEnforcement`, `LevelOff`, `LevelWarning` +- New enum type `Mode` with values `ModeIPTABLES`, `ModeIPVS` +- New enum type `Protocol` with values `ProtocolTCP`, `ProtocolUDP` +- New enum type `RestrictionLevel` with values `RestrictionLevelReadOnly`, `RestrictionLevelUnrestricted` +- New enum type `ServiceMeshMode` with values `ServiceMeshModeDisabled`, `ServiceMeshModeIstio` +- New enum type `TrustedAccessRoleBindingProvisioningState` with values `TrustedAccessRoleBindingProvisioningStateCanceled`, `TrustedAccessRoleBindingProvisioningStateDeleting`, `TrustedAccessRoleBindingProvisioningStateFailed`, `TrustedAccessRoleBindingProvisioningStateSucceeded`, `TrustedAccessRoleBindingProvisioningStateUpdating` +- New function `*ClientFactory.NewMachinesClient() *MachinesClient` +- New function `*ClientFactory.NewManagedClusterSnapshotsClient() *ManagedClusterSnapshotsClient` +- New function `*ClientFactory.NewTrustedAccessRoleBindingsClient() *TrustedAccessRoleBindingsClient` +- New function `*ClientFactory.NewTrustedAccessRolesClient() *TrustedAccessRolesClient` +- New function `NewMachinesClient(string, azcore.TokenCredential, *arm.ClientOptions) (*MachinesClient, error)` +- New function `*MachinesClient.Get(context.Context, string, string, string, string, *MachinesClientGetOptions) (MachinesClientGetResponse, error)` +- New function `*MachinesClient.NewListPager(string, string, string, *MachinesClientListOptions) *runtime.Pager[MachinesClientListResponse]` +- New function `NewManagedClusterSnapshotsClient(string, azcore.TokenCredential, *arm.ClientOptions) (*ManagedClusterSnapshotsClient, error)` +- New function `*ManagedClusterSnapshotsClient.CreateOrUpdate(context.Context, string, string, ManagedClusterSnapshot, *ManagedClusterSnapshotsClientCreateOrUpdateOptions) (ManagedClusterSnapshotsClientCreateOrUpdateResponse, error)` +- New function `*ManagedClusterSnapshotsClient.Delete(context.Context, string, string, *ManagedClusterSnapshotsClientDeleteOptions) (ManagedClusterSnapshotsClientDeleteResponse, error)` +- New function `*ManagedClusterSnapshotsClient.Get(context.Context, string, string, *ManagedClusterSnapshotsClientGetOptions) (ManagedClusterSnapshotsClientGetResponse, error)` +- New function 
`*ManagedClusterSnapshotsClient.NewListByResourceGroupPager(string, *ManagedClusterSnapshotsClientListByResourceGroupOptions) *runtime.Pager[ManagedClusterSnapshotsClientListByResourceGroupResponse]` +- New function `*ManagedClusterSnapshotsClient.NewListPager(*ManagedClusterSnapshotsClientListOptions) *runtime.Pager[ManagedClusterSnapshotsClientListResponse]` +- New function `*ManagedClusterSnapshotsClient.UpdateTags(context.Context, string, string, TagsObject, *ManagedClusterSnapshotsClientUpdateTagsOptions) (ManagedClusterSnapshotsClientUpdateTagsResponse, error)` +- New function `*ManagedClustersClient.GetGuardrailsVersions(context.Context, string, string, *ManagedClustersClientGetGuardrailsVersionsOptions) (ManagedClustersClientGetGuardrailsVersionsResponse, error)` +- New function `*ManagedClustersClient.GetMeshRevisionProfile(context.Context, string, string, *ManagedClustersClientGetMeshRevisionProfileOptions) (ManagedClustersClientGetMeshRevisionProfileResponse, error)` +- New function `*ManagedClustersClient.GetMeshUpgradeProfile(context.Context, string, string, string, *ManagedClustersClientGetMeshUpgradeProfileOptions) (ManagedClustersClientGetMeshUpgradeProfileResponse, error)` +- New function `*ManagedClustersClient.NewListGuardrailsVersionsPager(string, *ManagedClustersClientListGuardrailsVersionsOptions) *runtime.Pager[ManagedClustersClientListGuardrailsVersionsResponse]` +- New function `*ManagedClustersClient.NewListMeshRevisionProfilesPager(string, *ManagedClustersClientListMeshRevisionProfilesOptions) *runtime.Pager[ManagedClustersClientListMeshRevisionProfilesResponse]` +- New function `*ManagedClustersClient.NewListMeshUpgradeProfilesPager(string, string, *ManagedClustersClientListMeshUpgradeProfilesOptions) *runtime.Pager[ManagedClustersClientListMeshUpgradeProfilesResponse]` +- New function `NewTrustedAccessRoleBindingsClient(string, azcore.TokenCredential, *arm.ClientOptions) (*TrustedAccessRoleBindingsClient, error)` +- New function `*TrustedAccessRoleBindingsClient.CreateOrUpdate(context.Context, string, string, string, TrustedAccessRoleBinding, *TrustedAccessRoleBindingsClientCreateOrUpdateOptions) (TrustedAccessRoleBindingsClientCreateOrUpdateResponse, error)` +- New function `*TrustedAccessRoleBindingsClient.Delete(context.Context, string, string, string, *TrustedAccessRoleBindingsClientDeleteOptions) (TrustedAccessRoleBindingsClientDeleteResponse, error)` +- New function `*TrustedAccessRoleBindingsClient.Get(context.Context, string, string, string, *TrustedAccessRoleBindingsClientGetOptions) (TrustedAccessRoleBindingsClientGetResponse, error)` +- New function `*TrustedAccessRoleBindingsClient.NewListPager(string, string, *TrustedAccessRoleBindingsClientListOptions) *runtime.Pager[TrustedAccessRoleBindingsClientListResponse]` +- New function `NewTrustedAccessRolesClient(string, azcore.TokenCredential, *arm.ClientOptions) (*TrustedAccessRolesClient, error)` +- New function `*TrustedAccessRolesClient.NewListPager(string, *TrustedAccessRolesClientListOptions) *runtime.Pager[TrustedAccessRolesClientListResponse]` +- New struct `AgentPoolNetworkProfile` +- New struct `AgentPoolSecurityProfile` +- New struct `AgentPoolWindowsProfile` +- New struct `CompatibleVersions` +- New struct `GuardrailsAvailableVersion` +- New struct `GuardrailsAvailableVersionsList` +- New struct `GuardrailsAvailableVersionsProperties` +- New struct `GuardrailsProfile` +- New struct `IPTag` +- New struct `IstioCertificateAuthority` +- New struct `IstioComponents` +- New struct 
`IstioIngressGateway` +- New struct `IstioPluginCertificateAuthority` +- New struct `IstioServiceMesh` +- New struct `Machine` +- New struct `MachineIPAddress` +- New struct `MachineListResult` +- New struct `MachineNetworkProperties` +- New struct `MachineProperties` +- New struct `ManagedClusterAzureMonitorProfileAppMonitoring` +- New struct `ManagedClusterAzureMonitorProfileAppMonitoringOpenTelemetryMetrics` +- New struct `ManagedClusterAzureMonitorProfileContainerInsights` +- New struct `ManagedClusterAzureMonitorProfileLogs` +- New struct `ManagedClusterAzureMonitorProfileWindowsHostLogs` +- New struct `ManagedClusterCostAnalysis` +- New struct `ManagedClusterIngressProfile` +- New struct `ManagedClusterIngressProfileWebAppRouting` +- New struct `ManagedClusterMetricsProfile` +- New struct `ManagedClusterNodeResourceGroupProfile` +- New struct `ManagedClusterPropertiesForSnapshot` +- New struct `ManagedClusterSecurityProfileImageIntegrity` +- New struct `ManagedClusterSecurityProfileNodeRestriction` +- New struct `ManagedClusterSnapshot` +- New struct `ManagedClusterSnapshotListResult` +- New struct `ManagedClusterSnapshotProperties` +- New struct `MeshRevision` +- New struct `MeshRevisionProfile` +- New struct `MeshRevisionProfileList` +- New struct `MeshRevisionProfileProperties` +- New struct `MeshUpgradeProfile` +- New struct `MeshUpgradeProfileList` +- New struct `MeshUpgradeProfileProperties` +- New struct `NetworkMonitoring` +- New struct `NetworkProfileForSnapshot` +- New struct `NetworkProfileKubeProxyConfig` +- New struct `NetworkProfileKubeProxyConfigIpvsConfig` +- New struct `PortRange` +- New struct `ServiceMeshProfile` +- New struct `TrustedAccessRole` +- New struct `TrustedAccessRoleBinding` +- New struct `TrustedAccessRoleBindingListResult` +- New struct `TrustedAccessRoleBindingProperties` +- New struct `TrustedAccessRoleListResult` +- New struct `TrustedAccessRoleRule` +- New field `IgnorePodDisruptionBudget` in struct `AgentPoolsClientBeginDeleteOptions` +- New field `EnableVnetIntegration`, `SubnetID` in struct `ManagedClusterAPIServerAccessProfile` +- New field `CapacityReservationGroupID`, `EnableCustomCATrust`, `MessageOfTheDay`, `NetworkProfile`, `SecurityProfile`, `WindowsProfile` in struct `ManagedClusterAgentPoolProfile` +- New field `CapacityReservationGroupID`, `EnableCustomCATrust`, `MessageOfTheDay`, `NetworkProfile`, `SecurityProfile`, `WindowsProfile` in struct `ManagedClusterAgentPoolProfileProperties` +- New field `Logs` in struct `ManagedClusterAzureMonitorProfile` +- New field `AppMonitoringOpenTelemetryMetrics` in struct `ManagedClusterAzureMonitorProfileMetrics` +- New field `EffectiveNoProxy` in struct `ManagedClusterHTTPProxyConfig` +- New field `BackendPoolType` in struct `ManagedClusterLoadBalancerProfile` +- New field `CreationData`, `EnableNamespaceResources`, `GuardrailsProfile`, `IngressProfile`, `MetricsProfile`, `NodeResourceGroupProfile`, `ResourceUID`, `ServiceMeshProfile` in struct `ManagedClusterProperties` +- New field `CustomCATrustCertificates`, `ImageIntegrity`, `NodeRestriction` in struct `ManagedClusterSecurityProfile` +- New field `Version` in struct `ManagedClusterStorageProfileDiskCSIDriver` +- New field `IgnorePodDisruptionBudget` in struct `ManagedClustersClientBeginDeleteOptions` +- New field `KubeProxyConfig`, `Monitoring` in struct `NetworkProfile` + + +## 4.3.0 (2023-08-25) +### Features Added + +- New struct `ClusterUpgradeSettings` +- New struct `UpgradeOverrideSettings` +- New field `UpgradeSettings` in struct 
`ManagedClusterProperties` + + +## 4.2.0 (2023-08-25) +### Features Added + +- New enum type `NodeOSUpgradeChannel` with values `NodeOSUpgradeChannelNodeImage`, `NodeOSUpgradeChannelNone`, `NodeOSUpgradeChannelUnmanaged` +- New struct `DelegatedResource` +- New struct `ManagedClusterWorkloadAutoScalerProfileVerticalPodAutoscaler` +- New field `DrainTimeoutInMinutes` in struct `AgentPoolUpgradeSettings` +- New field `NodeOSUpgradeChannel` in struct `ManagedClusterAutoUpgradeProfile` +- New field `DelegatedResources` in struct `ManagedClusterIdentity` +- New field `VerticalPodAutoscaler` in struct `ManagedClusterWorkloadAutoScalerProfile` + + +## 4.1.0 (2023-07-28) +### Features Added + +- New enum type `Type` with values `TypeFirst`, `TypeFourth`, `TypeLast`, `TypeSecond`, `TypeThird` +- New struct `AbsoluteMonthlySchedule` +- New struct `DailySchedule` +- New struct `DateSpan` +- New struct `MaintenanceWindow` +- New struct `RelativeMonthlySchedule` +- New struct `Schedule` +- New struct `WeeklySchedule` +- New field `MaintenanceWindow` in struct `MaintenanceConfigurationProperties` + + +## 4.0.0 (2023-05-26) +### Breaking Changes + +- Field `DockerBridgeCidr` of struct `NetworkProfile` has been removed + +### Features Added + +- New value `OSSKUAzureLinux` added to enum type `OSSKU` + + +## 3.0.0 (2023-04-28) +### Breaking Changes + +- Const `ManagedClusterSKUNameBasic` from type alias `ManagedClusterSKUName` has been removed +- Const `ManagedClusterSKUTierPaid` from type alias `ManagedClusterSKUTier` has been removed + +### Features Added + +- New value `ManagedClusterSKUTierPremium` added to enum type `ManagedClusterSKUTier` +- New value `NetworkPolicyCilium` added to enum type `NetworkPolicy` +- New enum type `KubernetesSupportPlan` with values `KubernetesSupportPlanAKSLongTermSupport`, `KubernetesSupportPlanKubernetesOfficial` +- New enum type `NetworkDataplane` with values `NetworkDataplaneAzure`, `NetworkDataplaneCilium` +- New enum type `NetworkPluginMode` with values `NetworkPluginModeOverlay` +- New function `*ManagedClustersClient.ListKubernetesVersions(context.Context, string, *ManagedClustersClientListKubernetesVersionsOptions) (ManagedClustersClientListKubernetesVersionsResponse, error)` +- New struct `KubernetesPatchVersion` +- New struct `KubernetesVersion` +- New struct `KubernetesVersionCapabilities` +- New struct `KubernetesVersionListResult` +- New struct `ManagedClusterSecurityProfileImageCleaner` +- New struct `ManagedClusterSecurityProfileWorkloadIdentity` +- New field `SupportPlan` in struct `ManagedClusterProperties` +- New field `ImageCleaner` in struct `ManagedClusterSecurityProfile` +- New field `WorkloadIdentity` in struct `ManagedClusterSecurityProfile` +- New field `NetworkDataplane` in struct `NetworkProfile` +- New field `NetworkPluginMode` in struct `NetworkProfile` + + +## 2.4.0 (2023-03-24) +### Features Added + +- New struct `ClientFactory` which is a client factory used to create any client in this module +- New value `ManagedClusterSKUNameBase` added to enum type `ManagedClusterSKUName` +- New value `ManagedClusterSKUTierStandard` added to enum type `ManagedClusterSKUTier` +- New function `*AgentPoolsClient.BeginAbortLatestOperation(context.Context, string, string, string, *AgentPoolsClientBeginAbortLatestOperationOptions) (*runtime.Poller[AgentPoolsClientAbortLatestOperationResponse], error)` +- New function `*ManagedClustersClient.BeginAbortLatestOperation(context.Context, string, string, *ManagedClustersClientBeginAbortLatestOperationOptions) 
(*runtime.Poller[ManagedClustersClientAbortLatestOperationResponse], error)`
+- New struct `ManagedClusterAzureMonitorProfile`
+- New struct `ManagedClusterAzureMonitorProfileKubeStateMetrics`
+- New struct `ManagedClusterAzureMonitorProfileMetrics`
+- New field `AzureMonitorProfile` in struct `ManagedClusterProperties`
+
+
+## 2.3.0 (2023-01-27)
+### Features Added
+
+- New value `ManagedClusterPodIdentityProvisioningStateCanceled`, `ManagedClusterPodIdentityProvisioningStateSucceeded` added to type alias `ManagedClusterPodIdentityProvisioningState`
+- New value `PrivateEndpointConnectionProvisioningStateCanceled` added to type alias `PrivateEndpointConnectionProvisioningState`
+- New struct `ManagedClusterWorkloadAutoScalerProfile`
+- New struct `ManagedClusterWorkloadAutoScalerProfileKeda`
+- New field `WorkloadAutoScalerProfile` in struct `ManagedClusterProperties`
+- New field `Location` in struct `ManagedClustersClientGetCommandResultResponse`
+
+
+## 2.2.0 (2022-10-26)
+### Features Added
+
+- New function `*ManagedClustersClient.BeginRotateServiceAccountSigningKeys(context.Context, string, string, *ManagedClustersClientBeginRotateServiceAccountSigningKeysOptions) (*runtime.Poller[ManagedClustersClientRotateServiceAccountSigningKeysResponse], error)`
+- New struct `ManagedClusterOIDCIssuerProfile`
+- New struct `ManagedClusterStorageProfileBlobCSIDriver`
+- New struct `ManagedClustersClientBeginRotateServiceAccountSigningKeysOptions`
+- New struct `ManagedClustersClientRotateServiceAccountSigningKeysResponse`
+- New field `BlobCSIDriver` in struct `ManagedClusterStorageProfile`
+- New field `OidcIssuerProfile` in struct `ManagedClusterProperties`
+
+
+## 2.1.0 (2022-08-25)
+### Features Added
+
+- New const `OSSKUWindows2019`
+- New const `OSSKUWindows2022`
+
+
+## 2.0.0 (2022-07-22)
+### Breaking Changes
+
+- Struct `ManagedClusterSecurityProfileAzureDefender` has been removed
+- Field `AzureDefender` of struct `ManagedClusterSecurityProfile` has been removed
+
+### Features Added
+
+- New const `KeyVaultNetworkAccessTypesPrivate`
+- New const `NetworkPluginNone`
+- New const `KeyVaultNetworkAccessTypesPublic`
+- New function `PossibleKeyVaultNetworkAccessTypesValues() []KeyVaultNetworkAccessTypes`
+- New struct `AzureKeyVaultKms`
+- New struct `ManagedClusterSecurityProfileDefender`
+- New struct `ManagedClusterSecurityProfileDefenderSecurityMonitoring`
+- New field `HostGroupID` in struct `ManagedClusterAgentPoolProfileProperties`
+- New field `HostGroupID` in struct `ManagedClusterAgentPoolProfile`
+- New field `AzureKeyVaultKms` in struct `ManagedClusterSecurityProfile`
+- New field `Defender` in struct `ManagedClusterSecurityProfile`
+
+
+## 1.0.0 (2022-05-16)
+
+The `github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice` package has followed our [next generation design principles](https://azure.github.io/azure-sdk/general_introduction.html) since version 1.0.0, which introduced breaking changes.
+
+To migrate existing applications to the latest version, please refer to the [Migration Guide](https://aka.ms/azsdk/go/mgmt/migration).
+
+To learn more, please refer to our [Quick Start](https://aka.ms/azsdk/go/mgmt) documentation.
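+
+As a quick orientation to the surface listed above, here is a minimal, illustrative sketch (not taken from the SDK documentation) of how the `MachinesClient` introduced in 4.5.0-beta.1 might be used to list the machines in an agent pool. The subscription ID, resource group, cluster, and agent pool names are placeholders, and the `page.Value`/`machine.Name` fields are assumptions based on the `MachineListResult` and `Machine` shapes above:
+
+```go
+package main
+
+import (
+	"context"
+	"fmt"
+	"log"
+
+	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
+	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4"
+)
+
+func main() {
+	cred, err := azidentity.NewDefaultAzureCredential(nil)
+	if err != nil {
+		log.Fatal(err)
+	}
+	clientFactory, err := armcontainerservice.NewClientFactory("<subscription ID>", cred, nil)
+	if err != nil {
+		log.Fatal(err)
+	}
+	// NewListPager takes the resource group, managed cluster, and agent pool names.
+	pager := clientFactory.NewMachinesClient().NewListPager("<resource group>", "<cluster name>", "<agent pool name>", nil)
+	for pager.More() {
+		page, err := pager.NextPage(context.TODO())
+		if err != nil {
+			log.Fatal(err)
+		}
+		// page.Value is assumed to hold the Machine resources of MachineListResult.
+		for _, machine := range page.Value {
+			fmt.Println(*machine.Name)
+		}
+	}
+}
+```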
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/LICENSE.txt b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/LICENSE.txt
new file mode 100644
index 000000000000..dc0c2ffb3dc1
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/LICENSE.txt
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) Microsoft Corporation. All rights reserved.
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
\ No newline at end of file
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/README.md b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/README.md
new file mode 100644
index 000000000000..44d307082574
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/README.md
@@ -0,0 +1,108 @@
+# Azure Container Service Module for Go
+
+[![PkgGoDev](https://pkg.go.dev/badge/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4)](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4)
+
+The `armcontainerservice` module provides operations for working with Azure Container Service.
+
+[Source code](https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/resourcemanager/containerservice/armcontainerservice)
+
+# Getting started
+
+## Prerequisites
+
+- an [Azure subscription](https://azure.microsoft.com/free/)
+- Go 1.18 or above (You can download and install the latest version of Go from [here](https://go.dev/doc/install); it will replace the existing Go on your machine. If you want to install multiple Go versions on the same machine, you can refer to this [doc](https://go.dev/doc/manage-install).)
+
+## Install the package
+
+This project uses [Go modules](https://github.com/golang/go/wiki/Modules) for versioning and dependency management.
+
+Install the Azure Container Service module:
+
+```sh
+go get github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4
+```
+
+## Authorization
+
+When creating a client, you will need to provide a credential for authenticating with Azure Container Service.
The `azidentity` module provides facilities for various ways of authenticating with Azure, including client/secret, certificate, managed identity, and more.
+
+```go
+cred, err := azidentity.NewDefaultAzureCredential(nil)
+```
+
+For more information on authentication, please see the documentation for `azidentity` at [pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azidentity).
+
+## Client Factory
+
+The Azure Container Service module consists of one or more clients. We provide a client factory that can be used to create any client in this module.
+
+```go
+clientFactory, err := armcontainerservice.NewClientFactory(<subscription ID>, cred, nil)
+```
+
+You can use `ClientOptions` in package `github.com/Azure/azure-sdk-for-go/sdk/azcore/arm` to set the endpoint to connect to public and sovereign clouds as well as Azure Stack. For more information, please see the documentation for `azcore` at [pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/azcore).
+
+```go
+options := arm.ClientOptions{
+	ClientOptions: azcore.ClientOptions{
+		Cloud: cloud.AzureChina,
+	},
+}
+clientFactory, err := armcontainerservice.NewClientFactory(<subscription ID>, cred, &options)
+```
+
+## Clients
+
+A client groups a set of related APIs, providing access to its functionality. Create one or more clients to access the APIs you require using the client factory.
+
+```go
+client := clientFactory.NewAgentPoolsClient()
+```
+
+## Fakes
+
+The fake package contains types used for constructing in-memory fake servers used in unit tests.
+This allows writing tests to cover various success/error conditions without needing to connect to a live service.
+
+Please see https://github.com/Azure/azure-sdk-for-go/tree/main/sdk/samples/fakes for details and examples on how to use fakes.
+
+## More sample code
+
+- [Agent Pool](https://aka.ms/azsdk/go/mgmt/samples?path=sdk/resourcemanager/containerservice/agent_pool)
+- [Creating a Fake](https://github.com/Azure/azure-sdk-for-go/blob/main/sdk/resourcemanager/containerservice/armcontainerservice/fake_example_test.go)
+- [Maintenance Configuration](https://aka.ms/azsdk/go/mgmt/samples?path=sdk/resourcemanager/containerservice/maintenance_configurations)
+- [Managed Clusters](https://aka.ms/azsdk/go/mgmt/samples?path=sdk/resourcemanager/containerservice/managed_clusters)
+
+## Major Version Upgrade
+
+Go uses [semantic import versioning](https://github.com/golang/go/wiki/Modules#semantic-import-versioning) to ensure good backward compatibility for modules. For the Azure Go management SDK, we usually upgrade a module's version according to the corresponding service's API version. Because a major version upgrade can be a complicated experience, we will try our best to keep the SDK API stable and release new versions in a backward-compatible way. However, if unavoidable breaking changes do require a new major version of an SDK module, you can use these commands under your module folder to upgrade:
+
+```sh
+go install github.com/icholy/gomajor@latest
+gomajor get github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice@latest
+```
+
+## Provide Feedback
+
+If you encounter bugs or have suggestions, please
+[open an issue](https://github.com/Azure/azure-sdk-for-go/issues) and assign the `Container Service` label.
+
+# Contributing
+
+This project welcomes contributions and suggestions.
Most contributions require
+you to agree to a Contributor License Agreement (CLA) declaring that you have
+the right to, and actually do, grant us the rights to use your contribution.
+For details, visit [https://cla.microsoft.com](https://cla.microsoft.com).
+
+When you submit a pull request, a CLA-bot will automatically determine whether
+you need to provide a CLA and decorate the PR appropriately (e.g., label,
+comment). Simply follow the instructions provided by the bot. You will only
+need to do this once across all repos using our CLA.
+
+This project has adopted the
+[Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
+For more information, see the
+[Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
+or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any
+additional questions or comments.
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/agentpools_client.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/agentpools_client.go
new file mode 100644
index 000000000000..738657364e80
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/agentpools_client.go
@@ -0,0 +1,739 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+package armcontainerservice
+
+import (
+	"context"
+	"errors"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/arm"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+	"net/http"
+	"net/url"
+	"strconv"
+	"strings"
+)
+
+// AgentPoolsClient contains the methods for the AgentPools group.
+// Don't use this type directly, use NewAgentPoolsClient() instead.
+type AgentPoolsClient struct {
+	internal *arm.Client
+	subscriptionID string
+}
+
+// NewAgentPoolsClient creates a new instance of AgentPoolsClient with the specified values.
+// - subscriptionID - The ID of the target subscription. The value must be a UUID.
+// - credential - used to authorize requests. Usually a credential from azidentity.
+// - options - pass nil to accept the default values.
+func NewAgentPoolsClient(subscriptionID string, credential azcore.TokenCredential, options *arm.ClientOptions) (*AgentPoolsClient, error) {
+	cl, err := arm.NewClient(moduleName, moduleVersion, credential, options)
+	if err != nil {
+		return nil, err
+	}
+	client := &AgentPoolsClient{
+		subscriptionID: subscriptionID,
+		internal: cl,
+	}
+	return client, nil
+}
+
+// BeginAbortLatestOperation - Aborts the currently running operation on the agent pool. The Agent Pool will be moved to a
+// Canceling state and eventually to a Canceled state when cancellation finishes. If the operation completes
+// before cancellation can take place, an error is returned.
+// If the operation fails it returns an *azcore.ResponseError type.
+//
+// Generated from API version 2023-11-02-preview
+// - resourceGroupName - The name of the resource group. The name is case insensitive.
+// - resourceName - The name of the managed cluster resource. +// - agentPoolName - The name of the agent pool. +// - options - AgentPoolsClientBeginAbortLatestOperationOptions contains the optional parameters for the AgentPoolsClient.BeginAbortLatestOperation +// method. +func (client *AgentPoolsClient) BeginAbortLatestOperation(ctx context.Context, resourceGroupName string, resourceName string, agentPoolName string, options *AgentPoolsClientBeginAbortLatestOperationOptions) (*runtime.Poller[AgentPoolsClientAbortLatestOperationResponse], error) { + if options == nil || options.ResumeToken == "" { + resp, err := client.abortLatestOperation(ctx, resourceGroupName, resourceName, agentPoolName, options) + if err != nil { + return nil, err + } + poller, err := runtime.NewPoller(resp, client.internal.Pipeline(), &runtime.NewPollerOptions[AgentPoolsClientAbortLatestOperationResponse]{ + FinalStateVia: runtime.FinalStateViaLocation, + Tracer: client.internal.Tracer(), + }) + return poller, err + } else { + return runtime.NewPollerFromResumeToken(options.ResumeToken, client.internal.Pipeline(), &runtime.NewPollerFromResumeTokenOptions[AgentPoolsClientAbortLatestOperationResponse]{ + Tracer: client.internal.Tracer(), + }) + } +} + +// AbortLatestOperation - Aborts the currently running operation on the agent pool. The Agent Pool will be moved to a Canceling +// state and eventually to a Canceled state when cancellation finishes. If the operation completes +// before cancellation can take place, an error is returned. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +func (client *AgentPoolsClient) abortLatestOperation(ctx context.Context, resourceGroupName string, resourceName string, agentPoolName string, options *AgentPoolsClientBeginAbortLatestOperationOptions) (*http.Response, error) { + var err error + const operationName = "AgentPoolsClient.BeginAbortLatestOperation" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.abortLatestOperationCreateRequest(ctx, resourceGroupName, resourceName, agentPoolName, options) + if err != nil { + return nil, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return nil, err + } + if !runtime.HasStatusCode(httpResp, http.StatusAccepted, http.StatusNoContent) { + err = runtime.NewResponseError(httpResp) + return nil, err + } + return httpResp, nil +} + +// abortLatestOperationCreateRequest creates the AbortLatestOperation request. 
+func (client *AgentPoolsClient) abortLatestOperationCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, agentPoolName string, options *AgentPoolsClientBeginAbortLatestOperationOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedclusters/{resourceName}/agentPools/{agentPoolName}/abort" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + if agentPoolName == "" { + return nil, errors.New("parameter agentPoolName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{agentPoolName}", url.PathEscape(agentPoolName)) + req, err := runtime.NewRequest(ctx, http.MethodPost, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// BeginCreateOrUpdate - Creates or updates an agent pool in the specified managed cluster. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - agentPoolName - The name of the agent pool. +// - parameters - The agent pool to create or update. +// - options - AgentPoolsClientBeginCreateOrUpdateOptions contains the optional parameters for the AgentPoolsClient.BeginCreateOrUpdate +// method. +func (client *AgentPoolsClient) BeginCreateOrUpdate(ctx context.Context, resourceGroupName string, resourceName string, agentPoolName string, parameters AgentPool, options *AgentPoolsClientBeginCreateOrUpdateOptions) (*runtime.Poller[AgentPoolsClientCreateOrUpdateResponse], error) { + if options == nil || options.ResumeToken == "" { + resp, err := client.createOrUpdate(ctx, resourceGroupName, resourceName, agentPoolName, parameters, options) + if err != nil { + return nil, err + } + poller, err := runtime.NewPoller(resp, client.internal.Pipeline(), &runtime.NewPollerOptions[AgentPoolsClientCreateOrUpdateResponse]{ + Tracer: client.internal.Tracer(), + }) + return poller, err + } else { + return runtime.NewPollerFromResumeToken(options.ResumeToken, client.internal.Pipeline(), &runtime.NewPollerFromResumeTokenOptions[AgentPoolsClientCreateOrUpdateResponse]{ + Tracer: client.internal.Tracer(), + }) + } +} + +// CreateOrUpdate - Creates or updates an agent pool in the specified managed cluster. +// If the operation fails it returns an *azcore.ResponseError type. 
+// +// Generated from API version 2023-11-02-preview +func (client *AgentPoolsClient) createOrUpdate(ctx context.Context, resourceGroupName string, resourceName string, agentPoolName string, parameters AgentPool, options *AgentPoolsClientBeginCreateOrUpdateOptions) (*http.Response, error) { + var err error + const operationName = "AgentPoolsClient.BeginCreateOrUpdate" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.createOrUpdateCreateRequest(ctx, resourceGroupName, resourceName, agentPoolName, parameters, options) + if err != nil { + return nil, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return nil, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK, http.StatusCreated) { + err = runtime.NewResponseError(httpResp) + return nil, err + } + return httpResp, nil +} + +// createOrUpdateCreateRequest creates the CreateOrUpdate request. +func (client *AgentPoolsClient) createOrUpdateCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, agentPoolName string, parameters AgentPool, options *AgentPoolsClientBeginCreateOrUpdateOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/agentPools/{agentPoolName}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + if agentPoolName == "" { + return nil, errors.New("parameter agentPoolName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{agentPoolName}", url.PathEscape(agentPoolName)) + req, err := runtime.NewRequest(ctx, http.MethodPut, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + if err := runtime.MarshalAsJSON(req, parameters); err != nil { + return nil, err + } + return req, nil +} + +// BeginDelete - Deletes an agent pool in the specified managed cluster. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - agentPoolName - The name of the agent pool. +// - options - AgentPoolsClientBeginDeleteOptions contains the optional parameters for the AgentPoolsClient.BeginDelete method. 
+func (client *AgentPoolsClient) BeginDelete(ctx context.Context, resourceGroupName string, resourceName string, agentPoolName string, options *AgentPoolsClientBeginDeleteOptions) (*runtime.Poller[AgentPoolsClientDeleteResponse], error) { + if options == nil || options.ResumeToken == "" { + resp, err := client.deleteOperation(ctx, resourceGroupName, resourceName, agentPoolName, options) + if err != nil { + return nil, err + } + poller, err := runtime.NewPoller(resp, client.internal.Pipeline(), &runtime.NewPollerOptions[AgentPoolsClientDeleteResponse]{ + Tracer: client.internal.Tracer(), + }) + return poller, err + } else { + return runtime.NewPollerFromResumeToken(options.ResumeToken, client.internal.Pipeline(), &runtime.NewPollerFromResumeTokenOptions[AgentPoolsClientDeleteResponse]{ + Tracer: client.internal.Tracer(), + }) + } +} + +// Delete - Deletes an agent pool in the specified managed cluster. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +func (client *AgentPoolsClient) deleteOperation(ctx context.Context, resourceGroupName string, resourceName string, agentPoolName string, options *AgentPoolsClientBeginDeleteOptions) (*http.Response, error) { + var err error + const operationName = "AgentPoolsClient.BeginDelete" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.deleteCreateRequest(ctx, resourceGroupName, resourceName, agentPoolName, options) + if err != nil { + return nil, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return nil, err + } + if !runtime.HasStatusCode(httpResp, http.StatusAccepted, http.StatusNoContent) { + err = runtime.NewResponseError(httpResp) + return nil, err + } + return httpResp, nil +} + +// deleteCreateRequest creates the Delete request. 
+func (client *AgentPoolsClient) deleteCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, agentPoolName string, options *AgentPoolsClientBeginDeleteOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/agentPools/{agentPoolName}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + if agentPoolName == "" { + return nil, errors.New("parameter agentPoolName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{agentPoolName}", url.PathEscape(agentPoolName)) + req, err := runtime.NewRequest(ctx, http.MethodDelete, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + if options != nil && options.IgnorePodDisruptionBudget != nil { + reqQP.Set("ignore-pod-disruption-budget", strconv.FormatBool(*options.IgnorePodDisruptionBudget)) + } + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// BeginDeleteMachines - Deletes specific machines in an agent pool. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - agentPoolName - The name of the agent pool. +// - machines - A list of machines from the agent pool to be deleted. +// - options - AgentPoolsClientBeginDeleteMachinesOptions contains the optional parameters for the AgentPoolsClient.BeginDeleteMachines +// method. +func (client *AgentPoolsClient) BeginDeleteMachines(ctx context.Context, resourceGroupName string, resourceName string, agentPoolName string, machines AgentPoolDeleteMachinesParameter, options *AgentPoolsClientBeginDeleteMachinesOptions) (*runtime.Poller[AgentPoolsClientDeleteMachinesResponse], error) { + if options == nil || options.ResumeToken == "" { + resp, err := client.deleteMachines(ctx, resourceGroupName, resourceName, agentPoolName, machines, options) + if err != nil { + return nil, err + } + poller, err := runtime.NewPoller(resp, client.internal.Pipeline(), &runtime.NewPollerOptions[AgentPoolsClientDeleteMachinesResponse]{ + Tracer: client.internal.Tracer(), + }) + return poller, err + } else { + return runtime.NewPollerFromResumeToken(options.ResumeToken, client.internal.Pipeline(), &runtime.NewPollerFromResumeTokenOptions[AgentPoolsClientDeleteMachinesResponse]{ + Tracer: client.internal.Tracer(), + }) + } +} + +// DeleteMachines - Deletes specific machines in an agent pool. +// If the operation fails it returns an *azcore.ResponseError type. 
+// +// Generated from API version 2023-11-02-preview +func (client *AgentPoolsClient) deleteMachines(ctx context.Context, resourceGroupName string, resourceName string, agentPoolName string, machines AgentPoolDeleteMachinesParameter, options *AgentPoolsClientBeginDeleteMachinesOptions) (*http.Response, error) { + var err error + const operationName = "AgentPoolsClient.BeginDeleteMachines" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.deleteMachinesCreateRequest(ctx, resourceGroupName, resourceName, agentPoolName, machines, options) + if err != nil { + return nil, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return nil, err + } + if !runtime.HasStatusCode(httpResp, http.StatusAccepted) { + err = runtime.NewResponseError(httpResp) + return nil, err + } + return httpResp, nil +} + +// deleteMachinesCreateRequest creates the DeleteMachines request. +func (client *AgentPoolsClient) deleteMachinesCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, agentPoolName string, machines AgentPoolDeleteMachinesParameter, options *AgentPoolsClientBeginDeleteMachinesOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/agentPools/{agentPoolName}/deleteMachines" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + if agentPoolName == "" { + return nil, errors.New("parameter agentPoolName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{agentPoolName}", url.PathEscape(agentPoolName)) + req, err := runtime.NewRequest(ctx, http.MethodPost, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + if err := runtime.MarshalAsJSON(req, machines); err != nil { + return nil, err + } + return req, nil +} + +// Get - Gets the specified managed cluster agent pool. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - agentPoolName - The name of the agent pool. +// - options - AgentPoolsClientGetOptions contains the optional parameters for the AgentPoolsClient.Get method. 
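BeginDeleteMachines, shown above, follows the same poller pattern but posts a machine list in the request body. A minimal sketch; the MachineNames field and the VMSS-style instance names are assumptions about the AgentPoolDeleteMachinesParameter model, not values taken from this patch:

```go
package example

import (
	"context"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4"
)

// deleteMachines removes two specific nodes from a pool without touching the
// remaining machines.
func deleteMachines(ctx context.Context, client *armcontainerservice.AgentPoolsClient) error {
	params := armcontainerservice.AgentPoolDeleteMachinesParameter{
		MachineNames: []*string{ // assumed field name; placeholder machine names
			to.Ptr("aks-example-12345678-vmss000000"),
			to.Ptr("aks-example-12345678-vmss000001"),
		},
	}
	poller, err := client.BeginDeleteMachines(ctx, "example-rg", "example-cluster", "example-pool", params, nil)
	if err != nil {
		return err
	}
	_, err = poller.PollUntilDone(ctx, nil)
	return err
}
```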
+func (client *AgentPoolsClient) Get(ctx context.Context, resourceGroupName string, resourceName string, agentPoolName string, options *AgentPoolsClientGetOptions) (AgentPoolsClientGetResponse, error) { + var err error + const operationName = "AgentPoolsClient.Get" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.getCreateRequest(ctx, resourceGroupName, resourceName, agentPoolName, options) + if err != nil { + return AgentPoolsClientGetResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return AgentPoolsClientGetResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return AgentPoolsClientGetResponse{}, err + } + resp, err := client.getHandleResponse(httpResp) + return resp, err +} + +// getCreateRequest creates the Get request. +func (client *AgentPoolsClient) getCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, agentPoolName string, options *AgentPoolsClientGetOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/agentPools/{agentPoolName}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + if agentPoolName == "" { + return nil, errors.New("parameter agentPoolName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{agentPoolName}", url.PathEscape(agentPoolName)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// getHandleResponse handles the Get response. +func (client *AgentPoolsClient) getHandleResponse(resp *http.Response) (AgentPoolsClientGetResponse, error) { + result := AgentPoolsClientGetResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.AgentPool); err != nil { + return AgentPoolsClientGetResponse{}, err + } + return result, nil +} + +// GetAvailableAgentPoolVersions - See supported Kubernetes versions [https://docs.microsoft.com/azure/aks/supported-kubernetes-versions] +// for more details about the version lifecycle. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - options - AgentPoolsClientGetAvailableAgentPoolVersionsOptions contains the optional parameters for the AgentPoolsClient.GetAvailableAgentPoolVersions +// method. 
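Get is synchronous; the typed response embeds the AgentPool model. A small sketch, assuming the generated model exposes Name and Properties.Count as pointer fields (placeholder names throughout):

```go
package example

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4"
)

// showAgentPool fetches a single pool and prints two commonly used fields.
func showAgentPool(ctx context.Context, client *armcontainerservice.AgentPoolsClient) error {
	resp, err := client.Get(ctx, "example-rg", "example-cluster", "example-pool", nil)
	if err != nil {
		return err // an *azcore.ResponseError for non-200 status codes
	}
	// Generated models use pointer fields, so guard before dereferencing.
	if resp.Name != nil && resp.Properties != nil && resp.Properties.Count != nil {
		fmt.Printf("pool %s has %d nodes\n", *resp.Name, *resp.Properties.Count)
	}
	return nil
}
```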
+func (client *AgentPoolsClient) GetAvailableAgentPoolVersions(ctx context.Context, resourceGroupName string, resourceName string, options *AgentPoolsClientGetAvailableAgentPoolVersionsOptions) (AgentPoolsClientGetAvailableAgentPoolVersionsResponse, error) { + var err error + const operationName = "AgentPoolsClient.GetAvailableAgentPoolVersions" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.getAvailableAgentPoolVersionsCreateRequest(ctx, resourceGroupName, resourceName, options) + if err != nil { + return AgentPoolsClientGetAvailableAgentPoolVersionsResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return AgentPoolsClientGetAvailableAgentPoolVersionsResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return AgentPoolsClientGetAvailableAgentPoolVersionsResponse{}, err + } + resp, err := client.getAvailableAgentPoolVersionsHandleResponse(httpResp) + return resp, err +} + +// getAvailableAgentPoolVersionsCreateRequest creates the GetAvailableAgentPoolVersions request. +func (client *AgentPoolsClient) getAvailableAgentPoolVersionsCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, options *AgentPoolsClientGetAvailableAgentPoolVersionsOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/availableAgentPoolVersions" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// getAvailableAgentPoolVersionsHandleResponse handles the GetAvailableAgentPoolVersions response. +func (client *AgentPoolsClient) getAvailableAgentPoolVersionsHandleResponse(resp *http.Response) (AgentPoolsClientGetAvailableAgentPoolVersionsResponse, error) { + result := AgentPoolsClientGetAvailableAgentPoolVersionsResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.AgentPoolAvailableVersions); err != nil { + return AgentPoolsClientGetAvailableAgentPoolVersionsResponse{}, err + } + return result, nil +} + +// GetUpgradeProfile - Gets the upgrade profile for an agent pool. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - agentPoolName - The name of the agent pool. 
+// - options - AgentPoolsClientGetUpgradeProfileOptions contains the optional parameters for the AgentPoolsClient.GetUpgradeProfile +// method. +func (client *AgentPoolsClient) GetUpgradeProfile(ctx context.Context, resourceGroupName string, resourceName string, agentPoolName string, options *AgentPoolsClientGetUpgradeProfileOptions) (AgentPoolsClientGetUpgradeProfileResponse, error) { + var err error + const operationName = "AgentPoolsClient.GetUpgradeProfile" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.getUpgradeProfileCreateRequest(ctx, resourceGroupName, resourceName, agentPoolName, options) + if err != nil { + return AgentPoolsClientGetUpgradeProfileResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return AgentPoolsClientGetUpgradeProfileResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return AgentPoolsClientGetUpgradeProfileResponse{}, err + } + resp, err := client.getUpgradeProfileHandleResponse(httpResp) + return resp, err +} + +// getUpgradeProfileCreateRequest creates the GetUpgradeProfile request. +func (client *AgentPoolsClient) getUpgradeProfileCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, agentPoolName string, options *AgentPoolsClientGetUpgradeProfileOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/agentPools/{agentPoolName}/upgradeProfiles/default" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + if agentPoolName == "" { + return nil, errors.New("parameter agentPoolName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{agentPoolName}", url.PathEscape(agentPoolName)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// getUpgradeProfileHandleResponse handles the GetUpgradeProfile response. +func (client *AgentPoolsClient) getUpgradeProfileHandleResponse(resp *http.Response) (AgentPoolsClientGetUpgradeProfileResponse, error) { + result := AgentPoolsClientGetUpgradeProfileResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.AgentPoolUpgradeProfile); err != nil { + return AgentPoolsClientGetUpgradeProfileResponse{}, err + } + return result, nil +} + +// NewListPager - Gets a list of agent pools in the specified managed cluster. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. 
The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - options - AgentPoolsClientListOptions contains the optional parameters for the AgentPoolsClient.NewListPager method. +func (client *AgentPoolsClient) NewListPager(resourceGroupName string, resourceName string, options *AgentPoolsClientListOptions) *runtime.Pager[AgentPoolsClientListResponse] { + return runtime.NewPager(runtime.PagingHandler[AgentPoolsClientListResponse]{ + More: func(page AgentPoolsClientListResponse) bool { + return page.NextLink != nil && len(*page.NextLink) > 0 + }, + Fetcher: func(ctx context.Context, page *AgentPoolsClientListResponse) (AgentPoolsClientListResponse, error) { + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, "AgentPoolsClient.NewListPager") + nextLink := "" + if page != nil { + nextLink = *page.NextLink + } + resp, err := runtime.FetcherForNextLink(ctx, client.internal.Pipeline(), nextLink, func(ctx context.Context) (*policy.Request, error) { + return client.listCreateRequest(ctx, resourceGroupName, resourceName, options) + }, nil) + if err != nil { + return AgentPoolsClientListResponse{}, err + } + return client.listHandleResponse(resp) + }, + Tracer: client.internal.Tracer(), + }) +} + +// listCreateRequest creates the List request. +func (client *AgentPoolsClient) listCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, options *AgentPoolsClientListOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/agentPools" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// listHandleResponse handles the List response. +func (client *AgentPoolsClient) listHandleResponse(resp *http.Response) (AgentPoolsClientListResponse, error) { + result := AgentPoolsClientListResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.AgentPoolListResult); err != nil { + return AgentPoolsClientListResponse{}, err + } + return result, nil +} + +// BeginUpgradeNodeImageVersion - Upgrading the node image version of an agent pool applies the newest OS and runtime updates +// to the nodes. AKS provides one new image per week with the latest updates. For more details on node image +// versions, see: https://docs.microsoft.com/azure/aks/node-image-upgrade +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. 
+// - agentPoolName - The name of the agent pool. +// - options - AgentPoolsClientBeginUpgradeNodeImageVersionOptions contains the optional parameters for the AgentPoolsClient.BeginUpgradeNodeImageVersion +// method. +func (client *AgentPoolsClient) BeginUpgradeNodeImageVersion(ctx context.Context, resourceGroupName string, resourceName string, agentPoolName string, options *AgentPoolsClientBeginUpgradeNodeImageVersionOptions) (*runtime.Poller[AgentPoolsClientUpgradeNodeImageVersionResponse], error) { + if options == nil || options.ResumeToken == "" { + resp, err := client.upgradeNodeImageVersion(ctx, resourceGroupName, resourceName, agentPoolName, options) + if err != nil { + return nil, err + } + poller, err := runtime.NewPoller(resp, client.internal.Pipeline(), &runtime.NewPollerOptions[AgentPoolsClientUpgradeNodeImageVersionResponse]{ + FinalStateVia: runtime.FinalStateViaLocation, + Tracer: client.internal.Tracer(), + }) + return poller, err + } else { + return runtime.NewPollerFromResumeToken(options.ResumeToken, client.internal.Pipeline(), &runtime.NewPollerFromResumeTokenOptions[AgentPoolsClientUpgradeNodeImageVersionResponse]{ + Tracer: client.internal.Tracer(), + }) + } +} + +// UpgradeNodeImageVersion - Upgrading the node image version of an agent pool applies the newest OS and runtime updates to +// the nodes. AKS provides one new image per week with the latest updates. For more details on node image +// versions, see: https://docs.microsoft.com/azure/aks/node-image-upgrade +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +func (client *AgentPoolsClient) upgradeNodeImageVersion(ctx context.Context, resourceGroupName string, resourceName string, agentPoolName string, options *AgentPoolsClientBeginUpgradeNodeImageVersionOptions) (*http.Response, error) { + var err error + const operationName = "AgentPoolsClient.BeginUpgradeNodeImageVersion" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.upgradeNodeImageVersionCreateRequest(ctx, resourceGroupName, resourceName, agentPoolName, options) + if err != nil { + return nil, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return nil, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK, http.StatusAccepted) { + err = runtime.NewResponseError(httpResp) + return nil, err + } + return httpResp, nil +} + +// upgradeNodeImageVersionCreateRequest creates the UpgradeNodeImageVersion request. 
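NewListPager above returns a runtime.Pager that keeps fetching while the response carries a NextLink; callers drive it with More and NextPage. A minimal sketch with placeholder names:

```go
package example

import (
	"context"
	"fmt"

	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4"
)

// listAgentPools walks every page of agent pools in a managed cluster.
func listAgentPools(ctx context.Context, client *armcontainerservice.AgentPoolsClient) error {
	pager := client.NewListPager("example-rg", "example-cluster", nil)
	for pager.More() {
		page, err := pager.NextPage(ctx) // one GET per page, following NextLink
		if err != nil {
			return err
		}
		for _, pool := range page.Value {
			if pool.Name != nil {
				fmt.Println(*pool.Name)
			}
		}
	}
	return nil
}
```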
+func (client *AgentPoolsClient) upgradeNodeImageVersionCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, agentPoolName string, options *AgentPoolsClientBeginUpgradeNodeImageVersionOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/agentPools/{agentPoolName}/upgradeNodeImageVersion" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + if agentPoolName == "" { + return nil, errors.New("parameter agentPoolName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{agentPoolName}", url.PathEscape(agentPoolName)) + req, err := runtime.NewRequest(ctx, http.MethodPost, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/assets.json b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/assets.json new file mode 100644 index 000000000000..5f0930eecb21 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/assets.json @@ -0,0 +1,6 @@ +{ + "AssetsRepo": "Azure/azure-sdk-assets", + "AssetsRepoPrefixPath": "go", + "TagPrefix": "go/resourcemanager/containerservice/armcontainerservice", + "Tag": "go/resourcemanager/containerservice/armcontainerservice_5e026610aa" +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/autorest.md b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/autorest.md new file mode 100644 index 000000000000..3eac1efff175 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/autorest.md @@ -0,0 +1,13 @@ +### AutoRest Configuration + +> see https://aka.ms/autorest + +``` yaml +azure-arm: true +require: +- https://github.com/Azure/azure-rest-api-specs/blob/d4205894880b989ede35d62d97c8e901ed14fb5a/specification/containerservice/resource-manager/Microsoft.ContainerService/aks/readme.md +- https://github.com/Azure/azure-rest-api-specs/blob/d4205894880b989ede35d62d97c8e901ed14fb5a/specification/containerservice/resource-manager/Microsoft.ContainerService/aks/readme.go.md +license-header: MICROSOFT_MIT_NO_VERSION +module-version: 4.8.0-beta.1 +tag: package-preview-2023-11 +``` diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/build.go 
b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/build.go new file mode 100644 index 000000000000..70432c3431c6 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/build.go @@ -0,0 +1,7 @@ +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. See License.txt in the project root for license information. + +// This file enables 'go generate' to regenerate this specific SDK +//go:generate pwsh ../../../../eng/scripts/build.ps1 -skipBuild -cleanGenerated -format -tidy -generate -removeUnreferencedTypes resourcemanager/containerservice/armcontainerservice + +package armcontainerservice diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/ci.yml b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/ci.yml new file mode 100644 index 000000000000..1c3615696e32 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/ci.yml @@ -0,0 +1,28 @@ +# NOTE: Please refer to https://aka.ms/azsdk/engsys/ci-yaml before editing this file. +trigger: + branches: + include: + - main + - feature/* + - hotfix/* + - release/* + paths: + include: + - sdk/resourcemanager/containerservice/armcontainerservice/ + +pr: + branches: + include: + - main + - feature/* + - hotfix/* + - release/* + paths: + include: + - sdk/resourcemanager/containerservice/armcontainerservice/ + +stages: +- template: /eng/pipelines/templates/jobs/archetype-sdk-client.yml + parameters: + IncludeRelease: true + ServiceDirectory: 'resourcemanager/containerservice/armcontainerservice' diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/client_factory.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/client_factory.go new file mode 100644 index 000000000000..a53dc8828f4f --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/client_factory.go @@ -0,0 +1,140 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. See License.txt in the project root for license information. +// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +package armcontainerservice + +import ( + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/arm" +) + +// ClientFactory is a client factory used to create any client in this module. +// Don't use this type directly, use NewClientFactory instead. +type ClientFactory struct { + subscriptionID string + internal *arm.Client +} + +// NewClientFactory creates a new instance of ClientFactory with the specified values. +// The parameter values will be propagated to any client created from this factory. +// - subscriptionID - The ID of the target subscription. The value must be a UUID. +// - credential - used to authorize requests. Usually a credential from azidentity. +// - options - pass nil to accept the default values.
+func NewClientFactory(subscriptionID string, credential azcore.TokenCredential, options *arm.ClientOptions) (*ClientFactory, error) { + internal, err := arm.NewClient(moduleName, moduleVersion, credential, options) + if err != nil { + return nil, err + } + return &ClientFactory{ + subscriptionID: subscriptionID, + internal: internal, + }, nil +} + +// NewAgentPoolsClient creates a new instance of AgentPoolsClient. +func (c *ClientFactory) NewAgentPoolsClient() *AgentPoolsClient { + return &AgentPoolsClient{ + subscriptionID: c.subscriptionID, + internal: c.internal, + } +} + +// NewMachinesClient creates a new instance of MachinesClient. +func (c *ClientFactory) NewMachinesClient() *MachinesClient { + return &MachinesClient{ + subscriptionID: c.subscriptionID, + internal: c.internal, + } +} + +// NewMaintenanceConfigurationsClient creates a new instance of MaintenanceConfigurationsClient. +func (c *ClientFactory) NewMaintenanceConfigurationsClient() *MaintenanceConfigurationsClient { + return &MaintenanceConfigurationsClient{ + subscriptionID: c.subscriptionID, + internal: c.internal, + } +} + +// NewManagedClusterSnapshotsClient creates a new instance of ManagedClusterSnapshotsClient. +func (c *ClientFactory) NewManagedClusterSnapshotsClient() *ManagedClusterSnapshotsClient { + return &ManagedClusterSnapshotsClient{ + subscriptionID: c.subscriptionID, + internal: c.internal, + } +} + +// NewManagedClustersClient creates a new instance of ManagedClustersClient. +func (c *ClientFactory) NewManagedClustersClient() *ManagedClustersClient { + return &ManagedClustersClient{ + subscriptionID: c.subscriptionID, + internal: c.internal, + } +} + +// NewOperationStatusResultClient creates a new instance of OperationStatusResultClient. +func (c *ClientFactory) NewOperationStatusResultClient() *OperationStatusResultClient { + return &OperationStatusResultClient{ + subscriptionID: c.subscriptionID, + internal: c.internal, + } +} + +// NewOperationsClient creates a new instance of OperationsClient. +func (c *ClientFactory) NewOperationsClient() *OperationsClient { + return &OperationsClient{ + internal: c.internal, + } +} + +// NewPrivateEndpointConnectionsClient creates a new instance of PrivateEndpointConnectionsClient. +func (c *ClientFactory) NewPrivateEndpointConnectionsClient() *PrivateEndpointConnectionsClient { + return &PrivateEndpointConnectionsClient{ + subscriptionID: c.subscriptionID, + internal: c.internal, + } +} + +// NewPrivateLinkResourcesClient creates a new instance of PrivateLinkResourcesClient. +func (c *ClientFactory) NewPrivateLinkResourcesClient() *PrivateLinkResourcesClient { + return &PrivateLinkResourcesClient{ + subscriptionID: c.subscriptionID, + internal: c.internal, + } +} + +// NewResolvePrivateLinkServiceIDClient creates a new instance of ResolvePrivateLinkServiceIDClient. +func (c *ClientFactory) NewResolvePrivateLinkServiceIDClient() *ResolvePrivateLinkServiceIDClient { + return &ResolvePrivateLinkServiceIDClient{ + subscriptionID: c.subscriptionID, + internal: c.internal, + } +} + +// NewSnapshotsClient creates a new instance of SnapshotsClient. +func (c *ClientFactory) NewSnapshotsClient() *SnapshotsClient { + return &SnapshotsClient{ + subscriptionID: c.subscriptionID, + internal: c.internal, + } +} + +// NewTrustedAccessRoleBindingsClient creates a new instance of TrustedAccessRoleBindingsClient. 
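The factory is constructed once per subscription, and every client it returns shares the same underlying arm.Client pipeline. An end-to-end sketch, assuming azidentity for credentials and a Count field on the generated ManagedClusterAgentPoolProfileProperties model; the subscription ID and resource names are placeholders:

```go
package main

import (
	"context"
	"log"

	"github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
	"github.com/Azure/azure-sdk-for-go/sdk/azidentity"
	"github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4"
)

func main() {
	// DefaultAzureCredential tries the usual chain (env vars, managed identity, CLI).
	cred, err := azidentity.NewDefaultAzureCredential(nil)
	if err != nil {
		log.Fatal(err)
	}
	// One factory per subscription; all derived clients reuse its pipeline.
	factory, err := armcontainerservice.NewClientFactory("00000000-0000-0000-0000-000000000000", cred, nil)
	if err != nil {
		log.Fatal(err)
	}
	client := factory.NewAgentPoolsClient()

	// Scale an existing pool to three nodes via the create-or-update LRO.
	poller, err := client.BeginCreateOrUpdate(context.Background(), "example-rg", "example-cluster", "example-pool",
		armcontainerservice.AgentPool{
			Properties: &armcontainerservice.ManagedClusterAgentPoolProfileProperties{
				Count: to.Ptr[int32](3),
			},
		}, nil)
	if err != nil {
		log.Fatal(err)
	}
	if _, err := poller.PollUntilDone(context.Background(), nil); err != nil {
		log.Fatal(err)
	}
}
```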
+func (c *ClientFactory) NewTrustedAccessRoleBindingsClient() *TrustedAccessRoleBindingsClient { + return &TrustedAccessRoleBindingsClient{ + subscriptionID: c.subscriptionID, + internal: c.internal, + } +} + +// NewTrustedAccessRolesClient creates a new instance of TrustedAccessRolesClient. +func (c *ClientFactory) NewTrustedAccessRolesClient() *TrustedAccessRolesClient { + return &TrustedAccessRolesClient{ + subscriptionID: c.subscriptionID, + internal: c.internal, + } +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/constants.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/constants.go new file mode 100644 index 000000000000..2935ce0978dc --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/constants.go @@ -0,0 +1,1143 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. See License.txt in the project root for license information. +// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +package armcontainerservice + +const ( + moduleName = "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice" + moduleVersion = "v4.8.0-beta.1" +) + +// AddonAutoscaling - Whether VPA add-on is enabled and configured to scale AKS-managed add-ons. +type AddonAutoscaling string + +const ( + // AddonAutoscalingDisabled - Feature to autoscale AKS-managed add-ons is disabled. + AddonAutoscalingDisabled AddonAutoscaling = "Disabled" + // AddonAutoscalingEnabled - Feature to autoscale AKS-managed add-ons is enabled. The default VPA update mode is Initial mode. + AddonAutoscalingEnabled AddonAutoscaling = "Enabled" +) + +// PossibleAddonAutoscalingValues returns the possible values for the AddonAutoscaling const type. +func PossibleAddonAutoscalingValues() []AddonAutoscaling { + return []AddonAutoscaling{ + AddonAutoscalingDisabled, + AddonAutoscalingEnabled, + } +} + +// AgentPoolMode - A cluster must have at least one 'System' Agent Pool at all times. For additional information on agent +// pool restrictions and best practices, see: https://docs.microsoft.com/azure/aks/use-system-pools +type AgentPoolMode string + +const ( + // AgentPoolModeSystem - System agent pools are primarily for hosting critical system pods such as CoreDNS and metrics-server. + // System agent pools osType must be Linux. System agent pools VM SKU must have at least 2vCPUs and 4GB of memory. + AgentPoolModeSystem AgentPoolMode = "System" + // AgentPoolModeUser - User agent pools are primarily for hosting your application pods. + AgentPoolModeUser AgentPoolMode = "User" +) + +// PossibleAgentPoolModeValues returns the possible values for the AgentPoolMode const type. +func PossibleAgentPoolModeValues() []AgentPoolMode { + return []AgentPoolMode{ + AgentPoolModeSystem, + AgentPoolModeUser, + } +} + +// AgentPoolSSHAccess - SSH access method of an agent pool. +type AgentPoolSSHAccess string + +const ( + // AgentPoolSSHAccessDisabled - SSH service will be turned off on the node. + AgentPoolSSHAccessDisabled AgentPoolSSHAccess = "Disabled" + // AgentPoolSSHAccessLocalUser - Can SSH onto the node as a local user using private key. 
+ AgentPoolSSHAccessLocalUser AgentPoolSSHAccess = "LocalUser" +) + +// PossibleAgentPoolSSHAccessValues returns the possible values for the AgentPoolSSHAccess const type. +func PossibleAgentPoolSSHAccessValues() []AgentPoolSSHAccess { + return []AgentPoolSSHAccess{ + AgentPoolSSHAccessDisabled, + AgentPoolSSHAccessLocalUser, + } +} + +// AgentPoolType - The type of Agent Pool. +type AgentPoolType string + +const ( + // AgentPoolTypeAvailabilitySet - Use of this is strongly discouraged. + AgentPoolTypeAvailabilitySet AgentPoolType = "AvailabilitySet" + // AgentPoolTypeVirtualMachineScaleSets - Create an Agent Pool backed by a Virtual Machine Scale Set. + AgentPoolTypeVirtualMachineScaleSets AgentPoolType = "VirtualMachineScaleSets" + // AgentPoolTypeVirtualMachines - Create an Agent Pool backed by a Single Instance VM orchestration mode. + AgentPoolTypeVirtualMachines AgentPoolType = "VirtualMachines" +) + +// PossibleAgentPoolTypeValues returns the possible values for the AgentPoolType const type. +func PossibleAgentPoolTypeValues() []AgentPoolType { + return []AgentPoolType{ + AgentPoolTypeAvailabilitySet, + AgentPoolTypeVirtualMachineScaleSets, + AgentPoolTypeVirtualMachines, + } +} + +// BackendPoolType - The type of the managed inbound Load Balancer BackendPool. +type BackendPoolType string + +const ( + // BackendPoolTypeNodeIP - The type of the managed inbound Load Balancer BackendPool. https://cloud-provider-azure.sigs.k8s.io/topics/loadbalancer/#configure-load-balancer-backend. + BackendPoolTypeNodeIP BackendPoolType = "NodeIP" + // BackendPoolTypeNodeIPConfiguration - The type of the managed inbound Load Balancer BackendPool. https://cloud-provider-azure.sigs.k8s.io/topics/loadbalancer/#configure-load-balancer-backend. + BackendPoolTypeNodeIPConfiguration BackendPoolType = "NodeIPConfiguration" +) + +// PossibleBackendPoolTypeValues returns the possible values for the BackendPoolType const type. +func PossibleBackendPoolTypeValues() []BackendPoolType { + return []BackendPoolType{ + BackendPoolTypeNodeIP, + BackendPoolTypeNodeIPConfiguration, + } +} + +// Code - Tells whether the cluster is Running or Stopped +type Code string + +const ( + // CodeRunning - The cluster is running. + CodeRunning Code = "Running" + // CodeStopped - The cluster is stopped. + CodeStopped Code = "Stopped" +) + +// PossibleCodeValues returns the possible values for the Code const type. +func PossibleCodeValues() []Code { + return []Code{ + CodeRunning, + CodeStopped, + } +} + +// ConnectionStatus - The private link service connection status. +type ConnectionStatus string + +const ( + ConnectionStatusApproved ConnectionStatus = "Approved" + ConnectionStatusDisconnected ConnectionStatus = "Disconnected" + ConnectionStatusPending ConnectionStatus = "Pending" + ConnectionStatusRejected ConnectionStatus = "Rejected" +) + +// PossibleConnectionStatusValues returns the possible values for the ConnectionStatus const type. +func PossibleConnectionStatusValues() []ConnectionStatus { + return []ConnectionStatus{ + ConnectionStatusApproved, + ConnectionStatusDisconnected, + ConnectionStatusPending, + ConnectionStatusRejected, + } +} + +// CreatedByType - The type of identity that created the resource. 
+type CreatedByType string + +const ( + CreatedByTypeApplication CreatedByType = "Application" + CreatedByTypeKey CreatedByType = "Key" + CreatedByTypeManagedIdentity CreatedByType = "ManagedIdentity" + CreatedByTypeUser CreatedByType = "User" +) + +// PossibleCreatedByTypeValues returns the possible values for the CreatedByType const type. +func PossibleCreatedByTypeValues() []CreatedByType { + return []CreatedByType{ + CreatedByTypeApplication, + CreatedByTypeKey, + CreatedByTypeManagedIdentity, + CreatedByTypeUser, + } +} + +// Expander - If not specified, the default is 'random'. See expanders [https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders] +// for more information. +type Expander string + +const ( + // ExpanderLeastWaste - Selects the node group that will have the least idle CPU (if tied, unused memory) after scale-up. + // This is useful when you have different classes of nodes, for example, high CPU or high memory nodes, and only want to expand + // those when there are pending pods that need a lot of those resources. + ExpanderLeastWaste Expander = "least-waste" + // ExpanderMostPods - Selects the node group that would be able to schedule the most pods when scaling up. This is useful + // when you are using nodeSelector to make sure certain pods land on certain nodes. Note that this won't cause the autoscaler + // to select bigger nodes vs. smaller, as it can add multiple smaller nodes at once. + ExpanderMostPods Expander = "most-pods" + // ExpanderPriority - Selects the node group that has the highest priority assigned by the user. Its configuration is described + // in more detail [here](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/expander/priority/readme.md). + ExpanderPriority Expander = "priority" + // ExpanderRandom - Used when you don't have a particular need for the node groups to scale differently. + ExpanderRandom Expander = "random" +) + +// PossibleExpanderValues returns the possible values for the Expander const type. +func PossibleExpanderValues() []Expander { + return []Expander{ + ExpanderLeastWaste, + ExpanderMostPods, + ExpanderPriority, + ExpanderRandom, + } +} + +// ExtendedLocationTypes - The type of extendedLocation. +type ExtendedLocationTypes string + +const ( + ExtendedLocationTypesEdgeZone ExtendedLocationTypes = "EdgeZone" +) + +// PossibleExtendedLocationTypesValues returns the possible values for the ExtendedLocationTypes const type. +func PossibleExtendedLocationTypesValues() []ExtendedLocationTypes { + return []ExtendedLocationTypes{ + ExtendedLocationTypesEdgeZone, + } +} + +type Format string + +const ( + // FormatAzure - Return azure auth-provider kubeconfig. This format is deprecated in v1.22 and will be fully removed in v1.26. + // See: https://aka.ms/k8s/changes-1-26. + FormatAzure Format = "azure" + // FormatExec - Return exec format kubeconfig. This format requires the kubelogin binary in the path. + FormatExec Format = "exec" +) + +// PossibleFormatValues returns the possible values for the Format const type. +func PossibleFormatValues() []Format { + return []Format{ + FormatAzure, + FormatExec, + } +} + +// GPUInstanceProfile - GPUInstanceProfile to be used to specify GPU MIG instance profile for supported GPU VM SKU.
+type GPUInstanceProfile string + +const ( + GPUInstanceProfileMIG1G GPUInstanceProfile = "MIG1g" + GPUInstanceProfileMIG2G GPUInstanceProfile = "MIG2g" + GPUInstanceProfileMIG3G GPUInstanceProfile = "MIG3g" + GPUInstanceProfileMIG4G GPUInstanceProfile = "MIG4g" + GPUInstanceProfileMIG7G GPUInstanceProfile = "MIG7g" +) + +// PossibleGPUInstanceProfileValues returns the possible values for the GPUInstanceProfile const type. +func PossibleGPUInstanceProfileValues() []GPUInstanceProfile { + return []GPUInstanceProfile{ + GPUInstanceProfileMIG1G, + GPUInstanceProfileMIG2G, + GPUInstanceProfileMIG3G, + GPUInstanceProfileMIG4G, + GPUInstanceProfileMIG7G, + } +} + +// GuardrailsSupport - Whether the version is preview or stable. +type GuardrailsSupport string + +const ( + // GuardrailsSupportPreview - The version is preview. It is not recommended to use preview versions on critical production + // clusters. The preview version may not support all use-cases. + GuardrailsSupportPreview GuardrailsSupport = "Preview" + // GuardrailsSupportStable - The version is stable and can be used on critical production clusters. + GuardrailsSupportStable GuardrailsSupport = "Stable" +) + +// PossibleGuardrailsSupportValues returns the possible values for the GuardrailsSupport const type. +func PossibleGuardrailsSupportValues() []GuardrailsSupport { + return []GuardrailsSupport{ + GuardrailsSupportPreview, + GuardrailsSupportStable, + } +} + +// IPFamily - To determine whether the address belongs to the IPv4 or IPv6 family. +type IPFamily string + +const ( + // IPFamilyIPv4 - IPv4 family + IPFamilyIPv4 IPFamily = "IPv4" + // IPFamilyIPv6 - IPv6 family + IPFamilyIPv6 IPFamily = "IPv6" +) + +// PossibleIPFamilyValues returns the possible values for the IPFamily const type. +func PossibleIPFamilyValues() []IPFamily { + return []IPFamily{ + IPFamilyIPv4, + IPFamilyIPv6, + } +} + +// IpvsScheduler - IPVS scheduler; for more information, see http://www.linuxvirtualserver.org/docs/scheduling.html. +type IpvsScheduler string + +const ( + // IpvsSchedulerLeastConnection - Least Connection + IpvsSchedulerLeastConnection IpvsScheduler = "LeastConnection" + // IpvsSchedulerRoundRobin - Round Robin + IpvsSchedulerRoundRobin IpvsScheduler = "RoundRobin" +) + +// PossibleIpvsSchedulerValues returns the possible values for the IpvsScheduler const type. +func PossibleIpvsSchedulerValues() []IpvsScheduler { + return []IpvsScheduler{ + IpvsSchedulerLeastConnection, + IpvsSchedulerRoundRobin, + } +} + +// IstioIngressGatewayMode - Mode of an ingress gateway. +type IstioIngressGatewayMode string + +const ( + // IstioIngressGatewayModeExternal - The ingress gateway is assigned a public IP address and is publicly accessible. + IstioIngressGatewayModeExternal IstioIngressGatewayMode = "External" + // IstioIngressGatewayModeInternal - The ingress gateway is assigned an internal IP address and cannot be accessed publicly. + IstioIngressGatewayModeInternal IstioIngressGatewayMode = "Internal" +) + +// PossibleIstioIngressGatewayModeValues returns the possible values for the IstioIngressGatewayMode const type. +func PossibleIstioIngressGatewayModeValues() []IstioIngressGatewayMode { + return []IstioIngressGatewayMode{ + IstioIngressGatewayModeExternal, + IstioIngressGatewayModeInternal, + } +} + +// KeyVaultNetworkAccessTypes - Network access of key vault. The possible values are Public and Private. Public means the +// key vault allows public access from all networks. Private means the key vault disables public access and +// enables private link.
The default value is Public. +type KeyVaultNetworkAccessTypes string + +const ( + KeyVaultNetworkAccessTypesPrivate KeyVaultNetworkAccessTypes = "Private" + KeyVaultNetworkAccessTypesPublic KeyVaultNetworkAccessTypes = "Public" +) + +// PossibleKeyVaultNetworkAccessTypesValues returns the possible values for the KeyVaultNetworkAccessTypes const type. +func PossibleKeyVaultNetworkAccessTypesValues() []KeyVaultNetworkAccessTypes { + return []KeyVaultNetworkAccessTypes{ + KeyVaultNetworkAccessTypesPrivate, + KeyVaultNetworkAccessTypesPublic, + } +} + +// KubeletDiskType - Determines the placement of emptyDir volumes, container runtime data root, and Kubelet ephemeral storage. +type KubeletDiskType string + +const ( + // KubeletDiskTypeOS - Kubelet will use the OS disk for its data. + KubeletDiskTypeOS KubeletDiskType = "OS" + // KubeletDiskTypeTemporary - Kubelet will use the temporary disk for its data. + KubeletDiskTypeTemporary KubeletDiskType = "Temporary" +) + +// PossibleKubeletDiskTypeValues returns the possible values for the KubeletDiskType const type. +func PossibleKubeletDiskTypeValues() []KubeletDiskType { + return []KubeletDiskType{ + KubeletDiskTypeOS, + KubeletDiskTypeTemporary, + } +} + +// KubernetesSupportPlan - Different support tiers for AKS managed clusters +type KubernetesSupportPlan string + +const ( + // KubernetesSupportPlanAKSLongTermSupport - Support for the version extended past the KubernetesOfficial support of 1 year. + // AKS continues to patch CVEs for another 1 year, for a total of 2 years of support. + KubernetesSupportPlanAKSLongTermSupport KubernetesSupportPlan = "AKSLongTermSupport" + // KubernetesSupportPlanKubernetesOfficial - Support for the version is the same as for the open source Kubernetes offering. + // Official Kubernetes open source community support versions for 1 year after release. + KubernetesSupportPlanKubernetesOfficial KubernetesSupportPlan = "KubernetesOfficial" +) + +// PossibleKubernetesSupportPlanValues returns the possible values for the KubernetesSupportPlan const type. +func PossibleKubernetesSupportPlanValues() []KubernetesSupportPlan { + return []KubernetesSupportPlan{ + KubernetesSupportPlanAKSLongTermSupport, + KubernetesSupportPlanKubernetesOfficial, + } +} + +// Level - The Safeguards level to be used. By default, Safeguards is enabled for all namespaces except those that AKS excludes +// via systemExcludedNamespaces +type Level string + +const ( + LevelEnforcement Level = "Enforcement" + LevelOff Level = "Off" + LevelWarning Level = "Warning" +) + +// PossibleLevelValues returns the possible values for the Level const type. +func PossibleLevelValues() []Level { + return []Level{ + LevelEnforcement, + LevelOff, + LevelWarning, + } +} + +// LicenseType - The license type to use for Windows VMs. See Azure Hybrid User Benefits [https://azure.microsoft.com/pricing/hybrid-benefit/faq/] +// for more details. +type LicenseType string + +const ( + // LicenseTypeNone - No additional licensing is applied. + LicenseTypeNone LicenseType = "None" + // LicenseTypeWindowsServer - Enables Azure Hybrid User Benefits for Windows VMs. + LicenseTypeWindowsServer LicenseType = "Windows_Server" +) + +// PossibleLicenseTypeValues returns the possible values for the LicenseType const type. +func PossibleLicenseTypeValues() []LicenseType { + return []LicenseType{ + LicenseTypeNone, + LicenseTypeWindowsServer, + } +} + +// LoadBalancerSKU - The default is 'standard'. 
See Azure Load Balancer SKUs [https://docs.microsoft.com/azure/load-balancer/skus] +// for more information about the differences between load balancer SKUs. +type LoadBalancerSKU string + +const ( + // LoadBalancerSKUBasic - Use a basic Load Balancer with limited functionality. + LoadBalancerSKUBasic LoadBalancerSKU = "basic" + // LoadBalancerSKUStandard - Use a standard Load Balancer. This is the recommended Load Balancer SKU. For more information + // about working with the load balancer in the managed cluster, see the [standard Load Balancer](https://docs.microsoft.com/azure/aks/load-balancer-standard) + // article. + LoadBalancerSKUStandard LoadBalancerSKU = "standard" +) + +// PossibleLoadBalancerSKUValues returns the possible values for the LoadBalancerSKU const type. +func PossibleLoadBalancerSKUValues() []LoadBalancerSKU { + return []LoadBalancerSKU{ + LoadBalancerSKUBasic, + LoadBalancerSKUStandard, + } +} + +// ManagedClusterPodIdentityProvisioningState - The current provisioning state of the pod identity. +type ManagedClusterPodIdentityProvisioningState string + +const ( + ManagedClusterPodIdentityProvisioningStateAssigned ManagedClusterPodIdentityProvisioningState = "Assigned" + ManagedClusterPodIdentityProvisioningStateCanceled ManagedClusterPodIdentityProvisioningState = "Canceled" + ManagedClusterPodIdentityProvisioningStateDeleting ManagedClusterPodIdentityProvisioningState = "Deleting" + ManagedClusterPodIdentityProvisioningStateFailed ManagedClusterPodIdentityProvisioningState = "Failed" + ManagedClusterPodIdentityProvisioningStateSucceeded ManagedClusterPodIdentityProvisioningState = "Succeeded" + ManagedClusterPodIdentityProvisioningStateUpdating ManagedClusterPodIdentityProvisioningState = "Updating" +) + +// PossibleManagedClusterPodIdentityProvisioningStateValues returns the possible values for the ManagedClusterPodIdentityProvisioningState const type. +func PossibleManagedClusterPodIdentityProvisioningStateValues() []ManagedClusterPodIdentityProvisioningState { + return []ManagedClusterPodIdentityProvisioningState{ + ManagedClusterPodIdentityProvisioningStateAssigned, + ManagedClusterPodIdentityProvisioningStateCanceled, + ManagedClusterPodIdentityProvisioningStateDeleting, + ManagedClusterPodIdentityProvisioningStateFailed, + ManagedClusterPodIdentityProvisioningStateSucceeded, + ManagedClusterPodIdentityProvisioningStateUpdating, + } +} + +// ManagedClusterSKUName - The name of a managed cluster SKU. +type ManagedClusterSKUName string + +const ( + // ManagedClusterSKUNameBase - Base option for the AKS control plane. + ManagedClusterSKUNameBase ManagedClusterSKUName = "Base" +) + +// PossibleManagedClusterSKUNameValues returns the possible values for the ManagedClusterSKUName const type. +func PossibleManagedClusterSKUNameValues() []ManagedClusterSKUName { + return []ManagedClusterSKUName{ + ManagedClusterSKUNameBase, + } +} + +// ManagedClusterSKUTier - If not specified, the default is 'Free'. See AKS Pricing Tier [https://learn.microsoft.com/azure/aks/free-standard-pricing-tiers] +// for more details. +type ManagedClusterSKUTier string + +const ( + // ManagedClusterSKUTierFree - The cluster management is free, but charged for VM, storage, and networking usage. Best for + // experimenting, learning, simple testing, or workloads with fewer than 10 nodes. Not recommended for production use cases.
+ ManagedClusterSKUTierFree ManagedClusterSKUTier = "Free" + // ManagedClusterSKUTierPremium - Cluster has premium capabilities in addition to all of the capabilities included in 'Standard'. + // Premium enables selection of LongTermSupport (aka.ms/aks/lts) for certain Kubernetes versions. + ManagedClusterSKUTierPremium ManagedClusterSKUTier = "Premium" + // ManagedClusterSKUTierStandard - Recommended for mission-critical and production workloads. Includes Kubernetes control + // plane autoscaling, workload-intensive testing, and up to 5,000 nodes per cluster. Guarantees 99.95% availability of the + // Kubernetes API server endpoint for clusters that use Availability Zones and 99.9% of availability for clusters that don't + // use Availability Zones. + ManagedClusterSKUTierStandard ManagedClusterSKUTier = "Standard" +) + +// PossibleManagedClusterSKUTierValues returns the possible values for the ManagedClusterSKUTier const type. +func PossibleManagedClusterSKUTierValues() []ManagedClusterSKUTier { + return []ManagedClusterSKUTier{ + ManagedClusterSKUTierFree, + ManagedClusterSKUTierPremium, + ManagedClusterSKUTierStandard, + } +} + +// Mode - Specify which proxy mode to use ('IPTABLES' or 'IPVS') +type Mode string + +const ( + // ModeIPTABLES - IPTables proxy mode + ModeIPTABLES Mode = "IPTABLES" + // ModeIPVS - IPVS proxy mode. Must be using Kubernetes version >= 1.22. + ModeIPVS Mode = "IPVS" +) + +// PossibleModeValues returns the possible values for the Mode const type. +func PossibleModeValues() []Mode { + return []Mode{ + ModeIPTABLES, + ModeIPVS, + } +} + +// NetworkDataplane - Network dataplane used in the Kubernetes cluster. +type NetworkDataplane string + +const ( + // NetworkDataplaneAzure - Use Azure network dataplane. + NetworkDataplaneAzure NetworkDataplane = "azure" + // NetworkDataplaneCilium - Use Cilium network dataplane. See [Azure CNI Powered by Cilium](https://learn.microsoft.com/azure/aks/azure-cni-powered-by-cilium) + // for more information. + NetworkDataplaneCilium NetworkDataplane = "cilium" +) + +// PossibleNetworkDataplaneValues returns the possible values for the NetworkDataplane const type. +func PossibleNetworkDataplaneValues() []NetworkDataplane { + return []NetworkDataplane{ + NetworkDataplaneAzure, + NetworkDataplaneCilium, + } +} + +// NetworkMode - This cannot be specified if networkPlugin is anything other than 'azure'. +type NetworkMode string + +const ( + // NetworkModeBridge - This is no longer supported + NetworkModeBridge NetworkMode = "bridge" + // NetworkModeTransparent - No bridge is created. Intra-VM Pod to Pod communication is through IP routes created by Azure + // CNI. See [Transparent Mode](https://docs.microsoft.com/azure/aks/faq#transparent-mode) for more information. + NetworkModeTransparent NetworkMode = "transparent" +) + +// PossibleNetworkModeValues returns the possible values for the NetworkMode const type. +func PossibleNetworkModeValues() []NetworkMode { + return []NetworkMode{ + NetworkModeBridge, + NetworkModeTransparent, + } +} + +// NetworkPlugin - Network plugin used for building the Kubernetes network. +type NetworkPlugin string + +const ( + // NetworkPluginAzure - Use the Azure CNI network plugin. See [Azure CNI (advanced) networking](https://docs.microsoft.com/azure/aks/concepts-network#azure-cni-advanced-networking) + // for more information. + NetworkPluginAzure NetworkPlugin = "azure" + // NetworkPluginKubenet - Use the Kubenet network plugin. 
See [Kubenet (basic) networking](https://docs.microsoft.com/azure/aks/concepts-network#kubenet-basic-networking)
+ // for more information.
+ NetworkPluginKubenet NetworkPlugin = "kubenet"
+ // NetworkPluginNone - Do not use a network plugin. A custom CNI will need to be installed after cluster creation for networking
+ // functionality.
+ NetworkPluginNone NetworkPlugin = "none"
+)
+
+// PossibleNetworkPluginValues returns the possible values for the NetworkPlugin const type.
+func PossibleNetworkPluginValues() []NetworkPlugin {
+ return []NetworkPlugin{
+ NetworkPluginAzure,
+ NetworkPluginKubenet,
+ NetworkPluginNone,
+ }
+}
+
+// NetworkPluginMode - The mode the network plugin should use.
+type NetworkPluginMode string
+
+const (
+ // NetworkPluginModeOverlay - Pods are given IPs from the PodCIDR address space but use Azure Routing Domains rather than
+ // Kubenet's reference plugins host-local and bridge.
+ NetworkPluginModeOverlay NetworkPluginMode = "overlay"
+)
+
+// PossibleNetworkPluginModeValues returns the possible values for the NetworkPluginMode const type.
+func PossibleNetworkPluginModeValues() []NetworkPluginMode {
+ return []NetworkPluginMode{
+ NetworkPluginModeOverlay,
+ }
+}
+
+// NetworkPolicy - Network policy used for building the Kubernetes network.
+type NetworkPolicy string
+
+const (
+ // NetworkPolicyAzure - Use Azure network policies. See [differences between Azure and Calico policies](https://docs.microsoft.com/azure/aks/use-network-policies#differences-between-azure-and-calico-policies-and-their-capabilities)
+ // for more information.
+ NetworkPolicyAzure NetworkPolicy = "azure"
+ // NetworkPolicyCalico - Use Calico network policies. See [differences between Azure and Calico policies](https://docs.microsoft.com/azure/aks/use-network-policies#differences-between-azure-and-calico-policies-and-their-capabilities)
+ // for more information.
+ NetworkPolicyCalico NetworkPolicy = "calico"
+ // NetworkPolicyCilium - Use Cilium to enforce network policies. This requires networkDataplane to be 'cilium'.
+ NetworkPolicyCilium NetworkPolicy = "cilium"
+ // NetworkPolicyNone - Network policies will not be enforced. This is the default value when NetworkPolicy is not specified.
+ NetworkPolicyNone NetworkPolicy = "none"
+)
+
+// PossibleNetworkPolicyValues returns the possible values for the NetworkPolicy const type.
+func PossibleNetworkPolicyValues() []NetworkPolicy {
+ return []NetworkPolicy{
+ NetworkPolicyAzure,
+ NetworkPolicyCalico,
+ NetworkPolicyCilium,
+ NetworkPolicyNone,
+ }
+}
+
+// NodeOSUpgradeChannel - The default is Unmanaged, but may change to either NodeImage or SecurityPatch at GA.
+type NodeOSUpgradeChannel string
+
+const (
+ // NodeOSUpgradeChannelNodeImage - AKS will update the nodes with a newly patched VHD containing security fixes and bugfixes
+ // on a weekly cadence. With the VHD update, machines will be rolling reimaged to that VHD following maintenance windows and
+ // surge settings. No extra VHD cost is incurred when choosing this option as AKS hosts the images.
+ NodeOSUpgradeChannelNodeImage NodeOSUpgradeChannel = "NodeImage"
+ // NodeOSUpgradeChannelNone - No attempt to update your machine's OS will be made either by the OS or by rolling VHDs. This means
+ // you are responsible for your security updates.
+ NodeOSUpgradeChannelNone NodeOSUpgradeChannel = "None"
+ // NodeOSUpgradeChannelSecurityPatch - AKS will update the nodes' VHD with patches from the image maintainer labelled "security
+ // only" on a regular basis.
Where possible, patches will also be applied without reimaging to existing nodes. Some patches,
+ // such as kernel patches, cannot be applied to existing nodes without disruption. For such patches, the VHD will be updated,
+ // and machines will be rolling reimaged to that VHD following maintenance windows and surge settings. This option incurs
+ // the extra cost of hosting the VHDs in your node resource group.
+ NodeOSUpgradeChannelSecurityPatch NodeOSUpgradeChannel = "SecurityPatch"
+ // NodeOSUpgradeChannelUnmanaged - OS updates will be applied automatically through the OS built-in patching infrastructure.
+ // Newly scaled in machines will be unpatched initially, and will be patched at some later time by the OS's infrastructure.
+ // Behavior of this option depends on the OS in question. Ubuntu and Mariner apply security patches through unattended upgrade
+ // roughly once a day around 06:00 UTC. Windows does not apply security patches automatically, and so for them this option
+ // is equivalent to None until further notice.
+ NodeOSUpgradeChannelUnmanaged NodeOSUpgradeChannel = "Unmanaged"
+)
+
+// PossibleNodeOSUpgradeChannelValues returns the possible values for the NodeOSUpgradeChannel const type.
+func PossibleNodeOSUpgradeChannelValues() []NodeOSUpgradeChannel {
+ return []NodeOSUpgradeChannel{
+ NodeOSUpgradeChannelNodeImage,
+ NodeOSUpgradeChannelNone,
+ NodeOSUpgradeChannelSecurityPatch,
+ NodeOSUpgradeChannelUnmanaged,
+ }
+}
+
+// NodeProvisioningMode - Once the mode is set to Auto, it cannot be changed back to Manual.
+type NodeProvisioningMode string
+
+const (
+ // NodeProvisioningModeAuto - Nodes are provisioned automatically by AKS using Karpenter. Fixed-size Node Pools can still
+ // be created, but autoscaling Node Pools cannot be. (See aka.ms/aks/nap for more details).
+ NodeProvisioningModeAuto NodeProvisioningMode = "Auto"
+ // NodeProvisioningModeManual - Nodes are provisioned manually by the user.
+ NodeProvisioningModeManual NodeProvisioningMode = "Manual"
+)
+
+// PossibleNodeProvisioningModeValues returns the possible values for the NodeProvisioningMode const type.
+func PossibleNodeProvisioningModeValues() []NodeProvisioningMode {
+ return []NodeProvisioningMode{
+ NodeProvisioningModeAuto,
+ NodeProvisioningModeManual,
+ }
+}
+
+// OSDiskType - The default is 'Ephemeral' if the VM supports it and has a cache disk larger than the requested OSDiskSizeGB.
+// Otherwise, defaults to 'Managed'. May not be changed after creation. For more information
+// see Ephemeral OS [https://docs.microsoft.com/azure/aks/cluster-configuration#ephemeral-os].
+type OSDiskType string
+
+const (
+ // OSDiskTypeEphemeral - Ephemeral OS disks are stored only on the host machine, just like a temporary disk. This provides
+ // lower read/write latency, along with faster node scaling and cluster upgrades.
+ OSDiskTypeEphemeral OSDiskType = "Ephemeral"
+ // OSDiskTypeManaged - Azure replicates the operating system disk for a virtual machine to Azure storage to avoid data loss
+ // should the VM need to be relocated to another host. Since containers aren't designed to have local state persisted, this
+ // behavior offers limited value while providing some drawbacks, including slower node provisioning and higher read/write
+ // latency.
+ OSDiskTypeManaged OSDiskType = "Managed"
+)
+
+// PossibleOSDiskTypeValues returns the possible values for the OSDiskType const type.
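+//
+// Editorial aside, not generated code: the Possible*Values helpers in this
+// file are convenient for validating user-supplied strings before building a
+// request. A minimal sketch (isValidOSDiskType is a hypothetical helper):
+//
+//	func isValidOSDiskType(s string) bool {
+//		// compare the input against each known enum value
+//		for _, v := range PossibleOSDiskTypeValues() {
+//			if string(v) == s {
+//				return true
+//			}
+//		}
+//		return false
+//	}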
+func PossibleOSDiskTypeValues() []OSDiskType {
+ return []OSDiskType{
+ OSDiskTypeEphemeral,
+ OSDiskTypeManaged,
+ }
+}
+
+// OSSKU - Specifies the OS SKU used by the agent pool. If not specified, the default is Ubuntu if OSType=Linux or Windows2019
+// if OSType=Windows. The default Windows OSSKU will change to Windows2022
+// after Windows2019 is deprecated.
+type OSSKU string
+
+const (
+ // OSSKUAzureLinux - Use AzureLinux as the OS for node images. Azure Linux is a container-optimized Linux distro built by
+ // Microsoft; visit https://aka.ms/azurelinux for more information.
+ OSSKUAzureLinux OSSKU = "AzureLinux"
+ // OSSKUCBLMariner - Deprecated OSSKU. Microsoft recommends that new deployments choose 'AzureLinux' instead.
+ OSSKUCBLMariner OSSKU = "CBLMariner"
+ // OSSKUMariner - Deprecated OSSKU. Microsoft recommends that new deployments choose 'AzureLinux' instead.
+ OSSKUMariner OSSKU = "Mariner"
+ // OSSKUUbuntu - Use Ubuntu as the OS for node images.
+ OSSKUUbuntu OSSKU = "Ubuntu"
+ // OSSKUWindows2019 - Use Windows2019 as the OS for node images. Unsupported for system node pools. Windows2019 only supports
+ // Windows2019 containers; it cannot run Windows2022 containers and vice versa.
+ OSSKUWindows2019 OSSKU = "Windows2019"
+ // OSSKUWindows2022 - Use Windows2022 as the OS for node images. Unsupported for system node pools. Windows2022 only supports
+ // Windows2022 containers; it cannot run Windows2019 containers and vice versa.
+ OSSKUWindows2022 OSSKU = "Windows2022"
+ // OSSKUWindowsAnnual - Use Windows Annual Channel version as the OS for node images. Unsupported for system node pools. Details
+ // about supported container images and Kubernetes versions under different AKS Annual Channel versions can be found at https://aka.ms/aks/windows-annual-channel-details.
+ OSSKUWindowsAnnual OSSKU = "WindowsAnnual"
+)
+
+// PossibleOSSKUValues returns the possible values for the OSSKU const type.
+func PossibleOSSKUValues() []OSSKU {
+ return []OSSKU{
+ OSSKUAzureLinux,
+ OSSKUCBLMariner,
+ OSSKUMariner,
+ OSSKUUbuntu,
+ OSSKUWindows2019,
+ OSSKUWindows2022,
+ OSSKUWindowsAnnual,
+ }
+}
+
+// OSType - The operating system type. The default is Linux.
+type OSType string
+
+const (
+ // OSTypeLinux - Use Linux.
+ OSTypeLinux OSType = "Linux"
+ // OSTypeWindows - Use Windows.
+ OSTypeWindows OSType = "Windows"
+)
+
+// PossibleOSTypeValues returns the possible values for the OSType const type.
+func PossibleOSTypeValues() []OSType {
+ return []OSType{
+ OSTypeLinux,
+ OSTypeWindows,
+ }
+}
+
+// OutboundType - This can only be set at cluster creation time and cannot be changed later. For more information see egress
+// outbound type [https://docs.microsoft.com/azure/aks/egress-outboundtype].
+type OutboundType string
+
+const (
+ // OutboundTypeLoadBalancer - The load balancer is used for egress through an AKS assigned public IP. This supports Kubernetes
+ // services of type 'loadBalancer'. For more information see [outbound type loadbalancer](https://docs.microsoft.com/azure/aks/egress-outboundtype#outbound-type-of-loadbalancer).
+ OutboundTypeLoadBalancer OutboundType = "loadBalancer"
+ // OutboundTypeManagedNATGateway - The AKS-managed NAT gateway is used for egress.
+ OutboundTypeManagedNATGateway OutboundType = "managedNATGateway"
+ // OutboundTypeUserAssignedNATGateway - The user-assigned NAT gateway associated with the cluster subnet is used for egress.
+ // This is an advanced scenario and requires proper network configuration.
+ OutboundTypeUserAssignedNATGateway OutboundType = "userAssignedNATGateway"
+ // OutboundTypeUserDefinedRouting - Egress paths must be defined by the user. This is an advanced scenario and requires proper
+ // network configuration. For more information see [outbound type userDefinedRouting](https://docs.microsoft.com/azure/aks/egress-outboundtype#outbound-type-of-userdefinedrouting).
+ OutboundTypeUserDefinedRouting OutboundType = "userDefinedRouting"
+)
+
+// PossibleOutboundTypeValues returns the possible values for the OutboundType const type.
+func PossibleOutboundTypeValues() []OutboundType {
+ return []OutboundType{
+ OutboundTypeLoadBalancer,
+ OutboundTypeManagedNATGateway,
+ OutboundTypeUserAssignedNATGateway,
+ OutboundTypeUserDefinedRouting,
+ }
+}
+
+// PrivateEndpointConnectionProvisioningState - The current provisioning state.
+type PrivateEndpointConnectionProvisioningState string
+
+const (
+ PrivateEndpointConnectionProvisioningStateCanceled PrivateEndpointConnectionProvisioningState = "Canceled"
+ PrivateEndpointConnectionProvisioningStateCreating PrivateEndpointConnectionProvisioningState = "Creating"
+ PrivateEndpointConnectionProvisioningStateDeleting PrivateEndpointConnectionProvisioningState = "Deleting"
+ PrivateEndpointConnectionProvisioningStateFailed PrivateEndpointConnectionProvisioningState = "Failed"
+ PrivateEndpointConnectionProvisioningStateSucceeded PrivateEndpointConnectionProvisioningState = "Succeeded"
+)
+
+// PossiblePrivateEndpointConnectionProvisioningStateValues returns the possible values for the PrivateEndpointConnectionProvisioningState const type.
+func PossiblePrivateEndpointConnectionProvisioningStateValues() []PrivateEndpointConnectionProvisioningState {
+ return []PrivateEndpointConnectionProvisioningState{
+ PrivateEndpointConnectionProvisioningStateCanceled,
+ PrivateEndpointConnectionProvisioningStateCreating,
+ PrivateEndpointConnectionProvisioningStateDeleting,
+ PrivateEndpointConnectionProvisioningStateFailed,
+ PrivateEndpointConnectionProvisioningStateSucceeded,
+ }
+}
+
+// Protocol - The network protocol of the port.
+type Protocol string
+
+const (
+ // ProtocolTCP - TCP protocol.
+ ProtocolTCP Protocol = "TCP"
+ // ProtocolUDP - UDP protocol.
+ ProtocolUDP Protocol = "UDP"
+)
+
+// PossibleProtocolValues returns the possible values for the Protocol const type.
+func PossibleProtocolValues() []Protocol {
+ return []Protocol{
+ ProtocolTCP,
+ ProtocolUDP,
+ }
+}
+
+// PublicNetworkAccess - Allow or deny public network access for AKS.
+type PublicNetworkAccess string
+
+const (
+ // PublicNetworkAccessDisabled - Inbound traffic to the managedCluster is disabled; traffic from the managedCluster is allowed.
+ PublicNetworkAccessDisabled PublicNetworkAccess = "Disabled"
+ // PublicNetworkAccessEnabled - Inbound/Outbound traffic to the managedCluster is allowed.
+ PublicNetworkAccessEnabled PublicNetworkAccess = "Enabled"
+ // PublicNetworkAccessSecuredByPerimeter - Inbound/Outbound traffic is managed by Microsoft.Network/NetworkSecurityPerimeters.
+ PublicNetworkAccessSecuredByPerimeter PublicNetworkAccess = "SecuredByPerimeter"
+)
+
+// PossiblePublicNetworkAccessValues returns the possible values for the PublicNetworkAccess const type.
+func PossiblePublicNetworkAccessValues() []PublicNetworkAccess {
+ return []PublicNetworkAccess{
+ PublicNetworkAccessDisabled,
+ PublicNetworkAccessEnabled,
+ PublicNetworkAccessSecuredByPerimeter,
+ }
+}
+
+// ResourceIdentityType - For more information see use managed identities in AKS [https://docs.microsoft.com/azure/aks/use-managed-identity].
+type ResourceIdentityType string
+
+const (
+ // ResourceIdentityTypeNone - Do not use a managed identity for the Managed Cluster; a service principal will be used instead.
+ ResourceIdentityTypeNone ResourceIdentityType = "None"
+ // ResourceIdentityTypeSystemAssigned - Use an implicitly created system assigned managed identity to manage cluster resources.
+ // Master components in the control plane such as kube-controller-manager will use the system assigned managed identity to
+ // manipulate Azure resources.
+ ResourceIdentityTypeSystemAssigned ResourceIdentityType = "SystemAssigned"
+ // ResourceIdentityTypeUserAssigned - Use a user-specified identity to manage cluster resources. Master components in the
+ // control plane such as kube-controller-manager will use the specified user assigned managed identity to manipulate Azure
+ // resources.
+ ResourceIdentityTypeUserAssigned ResourceIdentityType = "UserAssigned"
+)
+
+// PossibleResourceIdentityTypeValues returns the possible values for the ResourceIdentityType const type.
+func PossibleResourceIdentityTypeValues() []ResourceIdentityType {
+ return []ResourceIdentityType{
+ ResourceIdentityTypeNone,
+ ResourceIdentityTypeSystemAssigned,
+ ResourceIdentityTypeUserAssigned,
+ }
+}
+
+// RestrictionLevel - The restriction level applied to the cluster's node resource group.
+type RestrictionLevel string
+
+const (
+ // RestrictionLevelReadOnly - Only */read RBAC permissions are allowed on the managed node resource group.
+ RestrictionLevelReadOnly RestrictionLevel = "ReadOnly"
+ // RestrictionLevelUnrestricted - All RBAC permissions are allowed on the managed node resource group.
+ RestrictionLevelUnrestricted RestrictionLevel = "Unrestricted"
+)
+
+// PossibleRestrictionLevelValues returns the possible values for the RestrictionLevel const type.
+func PossibleRestrictionLevelValues() []RestrictionLevel {
+ return []RestrictionLevel{
+ RestrictionLevelReadOnly,
+ RestrictionLevelUnrestricted,
+ }
+}
+
+// SafeguardsSupport - Whether the version is preview or stable.
+type SafeguardsSupport string
+
+const (
+ // SafeguardsSupportPreview - The version is preview. It is not recommended to use preview versions on critical production
+ // clusters. The preview version may not support all use-cases.
+ SafeguardsSupportPreview SafeguardsSupport = "Preview"
+ // SafeguardsSupportStable - The version is stable and can be used on critical production clusters.
+ SafeguardsSupportStable SafeguardsSupport = "Stable"
+)
+
+// PossibleSafeguardsSupportValues returns the possible values for the SafeguardsSupport const type.
+func PossibleSafeguardsSupportValues() []SafeguardsSupport {
+ return []SafeguardsSupport{
+ SafeguardsSupportPreview,
+ SafeguardsSupportStable,
+ }
+}
+
+// ScaleDownMode - Describes how VMs are added to or removed from Agent Pools. See billing states [https://docs.microsoft.com/azure/virtual-machines/states-billing].
+type ScaleDownMode string
+
+const (
+ // ScaleDownModeDeallocate - Attempt to start deallocated instances (if they exist) during scale up and deallocate instances
+ // during scale down.
+ ScaleDownModeDeallocate ScaleDownMode = "Deallocate" + // ScaleDownModeDelete - Create new instances during scale up and remove instances during scale down. + ScaleDownModeDelete ScaleDownMode = "Delete" +) + +// PossibleScaleDownModeValues returns the possible values for the ScaleDownMode const type. +func PossibleScaleDownModeValues() []ScaleDownMode { + return []ScaleDownMode{ + ScaleDownModeDeallocate, + ScaleDownModeDelete, + } +} + +// ScaleSetEvictionPolicy - The eviction policy specifies what to do with the VM when it is evicted. The default is Delete. +// For more information about eviction see spot VMs +// [https://docs.microsoft.com/azure/virtual-machines/spot-vms] +type ScaleSetEvictionPolicy string + +const ( + // ScaleSetEvictionPolicyDeallocate - Nodes in the underlying Scale Set of the node pool are set to the stopped-deallocated + // state upon eviction. Nodes in the stopped-deallocated state count against your compute quota and can cause issues with + // cluster scaling or upgrading. + ScaleSetEvictionPolicyDeallocate ScaleSetEvictionPolicy = "Deallocate" + // ScaleSetEvictionPolicyDelete - Nodes in the underlying Scale Set of the node pool are deleted when they're evicted. + ScaleSetEvictionPolicyDelete ScaleSetEvictionPolicy = "Delete" +) + +// PossibleScaleSetEvictionPolicyValues returns the possible values for the ScaleSetEvictionPolicy const type. +func PossibleScaleSetEvictionPolicyValues() []ScaleSetEvictionPolicy { + return []ScaleSetEvictionPolicy{ + ScaleSetEvictionPolicyDeallocate, + ScaleSetEvictionPolicyDelete, + } +} + +// ScaleSetPriority - The Virtual Machine Scale Set priority. +type ScaleSetPriority string + +const ( + // ScaleSetPriorityRegular - Regular VMs will be used. + ScaleSetPriorityRegular ScaleSetPriority = "Regular" + // ScaleSetPrioritySpot - Spot priority VMs will be used. There is no SLA for spot nodes. See [spot on AKS](https://docs.microsoft.com/azure/aks/spot-node-pool) + // for more information. + ScaleSetPrioritySpot ScaleSetPriority = "Spot" +) + +// PossibleScaleSetPriorityValues returns the possible values for the ScaleSetPriority const type. +func PossibleScaleSetPriorityValues() []ScaleSetPriority { + return []ScaleSetPriority{ + ScaleSetPriorityRegular, + ScaleSetPrioritySpot, + } +} + +// ServiceMeshMode - Mode of the service mesh. +type ServiceMeshMode string + +const ( + // ServiceMeshModeDisabled - Mesh is disabled. + ServiceMeshModeDisabled ServiceMeshMode = "Disabled" + // ServiceMeshModeIstio - Istio deployed as an AKS addon. + ServiceMeshModeIstio ServiceMeshMode = "Istio" +) + +// PossibleServiceMeshModeValues returns the possible values for the ServiceMeshMode const type. +func PossibleServiceMeshModeValues() []ServiceMeshMode { + return []ServiceMeshMode{ + ServiceMeshModeDisabled, + ServiceMeshModeIstio, + } +} + +// SnapshotType - The type of a snapshot. The default is NodePool. +type SnapshotType string + +const ( + // SnapshotTypeManagedCluster - The snapshot is a snapshot of a managed cluster. + SnapshotTypeManagedCluster SnapshotType = "ManagedCluster" + // SnapshotTypeNodePool - The snapshot is a snapshot of a node pool. + SnapshotTypeNodePool SnapshotType = "NodePool" +) + +// PossibleSnapshotTypeValues returns the possible values for the SnapshotType const type. +func PossibleSnapshotTypeValues() []SnapshotType { + return []SnapshotType{ + SnapshotTypeManagedCluster, + SnapshotTypeNodePool, + } +} + +// TrustedAccessRoleBindingProvisioningState - The current provisioning state of trusted access role binding. 
+type TrustedAccessRoleBindingProvisioningState string
+
+const (
+ TrustedAccessRoleBindingProvisioningStateCanceled TrustedAccessRoleBindingProvisioningState = "Canceled"
+ TrustedAccessRoleBindingProvisioningStateDeleting TrustedAccessRoleBindingProvisioningState = "Deleting"
+ TrustedAccessRoleBindingProvisioningStateFailed TrustedAccessRoleBindingProvisioningState = "Failed"
+ TrustedAccessRoleBindingProvisioningStateSucceeded TrustedAccessRoleBindingProvisioningState = "Succeeded"
+ TrustedAccessRoleBindingProvisioningStateUpdating TrustedAccessRoleBindingProvisioningState = "Updating"
+)
+
+// PossibleTrustedAccessRoleBindingProvisioningStateValues returns the possible values for the TrustedAccessRoleBindingProvisioningState const type.
+func PossibleTrustedAccessRoleBindingProvisioningStateValues() []TrustedAccessRoleBindingProvisioningState {
+ return []TrustedAccessRoleBindingProvisioningState{
+ TrustedAccessRoleBindingProvisioningStateCanceled,
+ TrustedAccessRoleBindingProvisioningStateDeleting,
+ TrustedAccessRoleBindingProvisioningStateFailed,
+ TrustedAccessRoleBindingProvisioningStateSucceeded,
+ TrustedAccessRoleBindingProvisioningStateUpdating,
+ }
+}
+
+// Type - Specifies on which instance of the allowed days specified in daysOfWeek the maintenance occurs.
+type Type string
+
+const (
+ // TypeFirst - First.
+ TypeFirst Type = "First"
+ // TypeFourth - Fourth.
+ TypeFourth Type = "Fourth"
+ // TypeLast - Last.
+ TypeLast Type = "Last"
+ // TypeSecond - Second.
+ TypeSecond Type = "Second"
+ // TypeThird - Third.
+ TypeThird Type = "Third"
+)
+
+// PossibleTypeValues returns the possible values for the Type const type.
+func PossibleTypeValues() []Type {
+ return []Type{
+ TypeFirst,
+ TypeFourth,
+ TypeLast,
+ TypeSecond,
+ TypeThird,
+ }
+}
+
+// UpgradeChannel - For more information see setting the AKS cluster auto-upgrade channel [https://docs.microsoft.com/azure/aks/upgrade-cluster#set-auto-upgrade-channel].
+type UpgradeChannel string
+
+const (
+ // UpgradeChannelNodeImage - Automatically upgrade the node image to the latest version available. Consider using nodeOSUpgradeChannel
+ // instead, as it allows you to configure node OS patching separately from Kubernetes version patching.
+ UpgradeChannelNodeImage UpgradeChannel = "node-image"
+ // UpgradeChannelNone - Disables auto-upgrades and keeps the cluster at its current version of Kubernetes.
+ UpgradeChannelNone UpgradeChannel = "none"
+ // UpgradeChannelPatch - Automatically upgrade the cluster to the latest supported patch version when it becomes available
+ // while keeping the minor version the same. For example, if a cluster is running version 1.17.7 and versions 1.17.9, 1.18.4,
+ // 1.18.6, and 1.19.1 are available, your cluster is upgraded to 1.17.9.
+ UpgradeChannelPatch UpgradeChannel = "patch"
+ // UpgradeChannelRapid - Automatically upgrade the cluster to the latest supported patch release on the latest supported minor
+ // version. In cases where the cluster is at a version of Kubernetes that is at an N-2 minor version where N is the latest
+ // supported minor version, the cluster first upgrades to the latest supported patch version on N-1 minor version. For example,
+ // if a cluster is running version 1.17.7 and versions 1.17.9, 1.18.4, 1.18.6, and 1.19.1 are available, your cluster first
+ // is upgraded to 1.18.6, then is upgraded to 1.19.1.
+ UpgradeChannelRapid UpgradeChannel = "rapid"
+ // UpgradeChannelStable - Automatically upgrade the cluster to the latest supported patch release on minor version N-1, where
+ // N is the latest supported minor version. For example, if a cluster is running version 1.17.7 and versions 1.17.9, 1.18.4,
+ // 1.18.6, and 1.19.1 are available, your cluster is upgraded to 1.18.6.
+ UpgradeChannelStable UpgradeChannel = "stable"
+)
+
+// PossibleUpgradeChannelValues returns the possible values for the UpgradeChannel const type.
+func PossibleUpgradeChannelValues() []UpgradeChannel {
+ return []UpgradeChannel{
+ UpgradeChannelNodeImage,
+ UpgradeChannelNone,
+ UpgradeChannelPatch,
+ UpgradeChannelRapid,
+ UpgradeChannelStable,
+ }
+}
+
+// WeekDay - The weekday enum.
+type WeekDay string
+
+const (
+ WeekDayFriday WeekDay = "Friday"
+ WeekDayMonday WeekDay = "Monday"
+ WeekDaySaturday WeekDay = "Saturday"
+ WeekDaySunday WeekDay = "Sunday"
+ WeekDayThursday WeekDay = "Thursday"
+ WeekDayTuesday WeekDay = "Tuesday"
+ WeekDayWednesday WeekDay = "Wednesday"
+)
+
+// PossibleWeekDayValues returns the possible values for the WeekDay const type.
+func PossibleWeekDayValues() []WeekDay {
+ return []WeekDay{
+ WeekDayFriday,
+ WeekDayMonday,
+ WeekDaySaturday,
+ WeekDaySunday,
+ WeekDayThursday,
+ WeekDayTuesday,
+ WeekDayWednesday,
+ }
+}
+
+// WorkloadRuntime - Determines the type of workload a node can run.
+type WorkloadRuntime string
+
+const (
+ // WorkloadRuntimeKataMshvVMIsolation - Nodes can use (Kata + Cloud Hypervisor + Hyper-V) to enable Nested VM-based pods (Preview).
+ // Due to the use of Hyper-V, the AKS node OS itself is a nested VM (the root OS) of Hyper-V. Thus it can only be used with VM series
+ // that support Nested Virtualization such as the Dv3 series.
+ WorkloadRuntimeKataMshvVMIsolation WorkloadRuntime = "KataMshvVmIsolation"
+ // WorkloadRuntimeOCIContainer - Nodes will use Kubelet to run standard OCI container workloads.
+ WorkloadRuntimeOCIContainer WorkloadRuntime = "OCIContainer"
+ // WorkloadRuntimeWasmWasi - Nodes will use Krustlet to run WASM workloads using the WASI provider (Preview).
+ WorkloadRuntimeWasmWasi WorkloadRuntime = "WasmWasi"
+)
+
+// PossibleWorkloadRuntimeValues returns the possible values for the WorkloadRuntime const type.
+func PossibleWorkloadRuntimeValues() []WorkloadRuntime {
+ return []WorkloadRuntime{
+ WorkloadRuntimeKataMshvVMIsolation,
+ WorkloadRuntimeOCIContainer,
+ WorkloadRuntimeWasmWasi,
+ }
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/date_type.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/date_type.go
new file mode 100644
index 000000000000..1345db0d1f38
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/date_type.go
@@ -0,0 +1,58 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
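+
+// Editorial note, not generated code: this file marshals time.Time values as
+// quoted JSON full dates ("2006-01-02"). A minimal round-trip sketch under
+// that assumption:
+//
+//	var d dateType
+//	_ = d.UnmarshalJSON([]byte(`"2023-11-02"`)) // parses the quoted full date
+//	b, _ := d.MarshalJSON()                     // yields `"2023-11-02"` again
+//	_ = b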
+ +package armcontainerservice + +import ( + "encoding/json" + "fmt" + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "reflect" + "time" +) + +const ( + fullDateJSON = `"2006-01-02"` + jsonFormat = `"%04d-%02d-%02d"` +) + +type dateType time.Time + +func (t dateType) MarshalJSON() ([]byte, error) { + return []byte(fmt.Sprintf(jsonFormat, time.Time(t).Year(), time.Time(t).Month(), time.Time(t).Day())), nil +} + +func (d *dateType) UnmarshalJSON(data []byte) (err error) { + t, err := time.Parse(fullDateJSON, string(data)) + *d = (dateType)(t) + return err +} + +func populateDateType(m map[string]any, k string, t *time.Time) { + if t == nil { + return + } else if azcore.IsNullValue(t) { + m[k] = nil + return + } else if reflect.ValueOf(t).IsNil() { + return + } + m[k] = (*dateType)(t) +} + +func unpopulateDateType(data json.RawMessage, fn string, t **time.Time) error { + if data == nil || string(data) == "null" { + return nil + } + var aux dateType + if err := json.Unmarshal(data, &aux); err != nil { + return fmt.Errorf("struct field %s: %v", fn, err) + } + *t = (*time.Time)(&aux) + return nil +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/machines_client.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/machines_client.go new file mode 100644 index 000000000000..7c990cf79175 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/machines_client.go @@ -0,0 +1,187 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. See License.txt in the project root for license information. +// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +package armcontainerservice + +import ( + "context" + "errors" + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/arm" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" + "net/http" + "net/url" + "strings" +) + +// MachinesClient contains the methods for the Machines group. +// Don't use this type directly, use NewMachinesClient() instead. +type MachinesClient struct { + internal *arm.Client + subscriptionID string +} + +// NewMachinesClient creates a new instance of MachinesClient with the specified values. +// - subscriptionID - The ID of the target subscription. The value must be an UUID. +// - credential - used to authorize requests. Usually a credential from azidentity. +// - options - pass nil to accept the default values. +func NewMachinesClient(subscriptionID string, credential azcore.TokenCredential, options *arm.ClientOptions) (*MachinesClient, error) { + cl, err := arm.NewClient(moduleName, moduleVersion, credential, options) + if err != nil { + return nil, err + } + client := &MachinesClient{ + subscriptionID: subscriptionID, + internal: cl, + } + return client, nil +} + +// Get - Get a specific machine in the specified agent pool. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. 
+// - agentPoolName - The name of the agent pool. +// - machineName - host name of the machine +// - options - MachinesClientGetOptions contains the optional parameters for the MachinesClient.Get method. +func (client *MachinesClient) Get(ctx context.Context, resourceGroupName string, resourceName string, agentPoolName string, machineName string, options *MachinesClientGetOptions) (MachinesClientGetResponse, error) { + var err error + const operationName = "MachinesClient.Get" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.getCreateRequest(ctx, resourceGroupName, resourceName, agentPoolName, machineName, options) + if err != nil { + return MachinesClientGetResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return MachinesClientGetResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return MachinesClientGetResponse{}, err + } + resp, err := client.getHandleResponse(httpResp) + return resp, err +} + +// getCreateRequest creates the Get request. +func (client *MachinesClient) getCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, agentPoolName string, machineName string, options *MachinesClientGetOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/agentPools/{agentPoolName}/machines/{machineName}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + if agentPoolName == "" { + return nil, errors.New("parameter agentPoolName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{agentPoolName}", url.PathEscape(agentPoolName)) + if machineName == "" { + return nil, errors.New("parameter machineName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{machineName}", url.PathEscape(machineName)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// getHandleResponse handles the Get response. +func (client *MachinesClient) getHandleResponse(resp *http.Response) (MachinesClientGetResponse, error) { + result := MachinesClientGetResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.Machine); err != nil { + return MachinesClientGetResponse{}, err + } + return result, nil +} + +// NewListPager - Gets a list of machines in the specified agent pool. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. 
+// - resourceName - The name of the managed cluster resource. +// - agentPoolName - The name of the agent pool. +// - options - MachinesClientListOptions contains the optional parameters for the MachinesClient.NewListPager method. +func (client *MachinesClient) NewListPager(resourceGroupName string, resourceName string, agentPoolName string, options *MachinesClientListOptions) *runtime.Pager[MachinesClientListResponse] { + return runtime.NewPager(runtime.PagingHandler[MachinesClientListResponse]{ + More: func(page MachinesClientListResponse) bool { + return page.NextLink != nil && len(*page.NextLink) > 0 + }, + Fetcher: func(ctx context.Context, page *MachinesClientListResponse) (MachinesClientListResponse, error) { + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, "MachinesClient.NewListPager") + nextLink := "" + if page != nil { + nextLink = *page.NextLink + } + resp, err := runtime.FetcherForNextLink(ctx, client.internal.Pipeline(), nextLink, func(ctx context.Context) (*policy.Request, error) { + return client.listCreateRequest(ctx, resourceGroupName, resourceName, agentPoolName, options) + }, nil) + if err != nil { + return MachinesClientListResponse{}, err + } + return client.listHandleResponse(resp) + }, + Tracer: client.internal.Tracer(), + }) +} + +// listCreateRequest creates the List request. +func (client *MachinesClient) listCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, agentPoolName string, options *MachinesClientListOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/agentPools/{agentPoolName}/machines" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + if agentPoolName == "" { + return nil, errors.New("parameter agentPoolName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{agentPoolName}", url.PathEscape(agentPoolName)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// listHandleResponse handles the List response. 
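+//
+// Editorial aside, not generated code: a sketch of consuming NewListPager
+// above, assuming ctx, an azidentity credential cred, and placeholder resource
+// names; Value on the embedded MachineListResult is assumed to hold the items:
+//
+//	client, _ := NewMachinesClient("<subscription-id>", cred, nil)
+//	pager := client.NewListPager("<resource-group>", "<cluster>", "<agent-pool>", nil)
+//	for pager.More() {
+//		page, err := pager.NextPage(ctx)
+//		if err != nil {
+//			break // handle the error in real code
+//		}
+//		for _, m := range page.Value {
+//			_ = m // inspect each machine
+//		}
+//	}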
+func (client *MachinesClient) listHandleResponse(resp *http.Response) (MachinesClientListResponse, error) { + result := MachinesClientListResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.MachineListResult); err != nil { + return MachinesClientListResponse{}, err + } + return result, nil +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/maintenanceconfigurations_client.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/maintenanceconfigurations_client.go new file mode 100644 index 000000000000..e3846ef77d5e --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/maintenanceconfigurations_client.go @@ -0,0 +1,313 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. See License.txt in the project root for license information. +// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +package armcontainerservice + +import ( + "context" + "errors" + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/arm" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" + "net/http" + "net/url" + "strings" +) + +// MaintenanceConfigurationsClient contains the methods for the MaintenanceConfigurations group. +// Don't use this type directly, use NewMaintenanceConfigurationsClient() instead. +type MaintenanceConfigurationsClient struct { + internal *arm.Client + subscriptionID string +} + +// NewMaintenanceConfigurationsClient creates a new instance of MaintenanceConfigurationsClient with the specified values. +// - subscriptionID - The ID of the target subscription. The value must be an UUID. +// - credential - used to authorize requests. Usually a credential from azidentity. +// - options - pass nil to accept the default values. +func NewMaintenanceConfigurationsClient(subscriptionID string, credential azcore.TokenCredential, options *arm.ClientOptions) (*MaintenanceConfigurationsClient, error) { + cl, err := arm.NewClient(moduleName, moduleVersion, credential, options) + if err != nil { + return nil, err + } + client := &MaintenanceConfigurationsClient{ + subscriptionID: subscriptionID, + internal: cl, + } + return client, nil +} + +// CreateOrUpdate - Creates or updates a maintenance configuration in the specified managed cluster. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - configName - The name of the maintenance configuration. +// - parameters - The maintenance configuration to create or update. +// - options - MaintenanceConfigurationsClientCreateOrUpdateOptions contains the optional parameters for the MaintenanceConfigurationsClient.CreateOrUpdate +// method. 
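+//
+// Editorial call sketch, not generated code; the resource names and the empty
+// Properties value are placeholders:
+//
+//	cfg := MaintenanceConfiguration{Properties: &MaintenanceConfigurationProperties{}}
+//	resp, err := client.CreateOrUpdate(ctx, "<resource-group>", "<cluster>", "default", cfg, nil)
+//	_, _ = resp, err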
+func (client *MaintenanceConfigurationsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, resourceName string, configName string, parameters MaintenanceConfiguration, options *MaintenanceConfigurationsClientCreateOrUpdateOptions) (MaintenanceConfigurationsClientCreateOrUpdateResponse, error) { + var err error + const operationName = "MaintenanceConfigurationsClient.CreateOrUpdate" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.createOrUpdateCreateRequest(ctx, resourceGroupName, resourceName, configName, parameters, options) + if err != nil { + return MaintenanceConfigurationsClientCreateOrUpdateResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return MaintenanceConfigurationsClientCreateOrUpdateResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK, http.StatusCreated) { + err = runtime.NewResponseError(httpResp) + return MaintenanceConfigurationsClientCreateOrUpdateResponse{}, err + } + resp, err := client.createOrUpdateHandleResponse(httpResp) + return resp, err +} + +// createOrUpdateCreateRequest creates the CreateOrUpdate request. +func (client *MaintenanceConfigurationsClient) createOrUpdateCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, configName string, parameters MaintenanceConfiguration, options *MaintenanceConfigurationsClientCreateOrUpdateOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/maintenanceConfigurations/{configName}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + if configName == "" { + return nil, errors.New("parameter configName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{configName}", url.PathEscape(configName)) + req, err := runtime.NewRequest(ctx, http.MethodPut, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + if err := runtime.MarshalAsJSON(req, parameters); err != nil { + return nil, err + } + return req, nil +} + +// createOrUpdateHandleResponse handles the CreateOrUpdate response. +func (client *MaintenanceConfigurationsClient) createOrUpdateHandleResponse(resp *http.Response) (MaintenanceConfigurationsClientCreateOrUpdateResponse, error) { + result := MaintenanceConfigurationsClientCreateOrUpdateResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.MaintenanceConfiguration); err != nil { + return MaintenanceConfigurationsClientCreateOrUpdateResponse{}, err + } + return result, nil +} + +// Delete - Deletes a maintenance configuration. 
+// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - configName - The name of the maintenance configuration. +// - options - MaintenanceConfigurationsClientDeleteOptions contains the optional parameters for the MaintenanceConfigurationsClient.Delete +// method. +func (client *MaintenanceConfigurationsClient) Delete(ctx context.Context, resourceGroupName string, resourceName string, configName string, options *MaintenanceConfigurationsClientDeleteOptions) (MaintenanceConfigurationsClientDeleteResponse, error) { + var err error + const operationName = "MaintenanceConfigurationsClient.Delete" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.deleteCreateRequest(ctx, resourceGroupName, resourceName, configName, options) + if err != nil { + return MaintenanceConfigurationsClientDeleteResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return MaintenanceConfigurationsClientDeleteResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK, http.StatusNoContent) { + err = runtime.NewResponseError(httpResp) + return MaintenanceConfigurationsClientDeleteResponse{}, err + } + return MaintenanceConfigurationsClientDeleteResponse{}, nil +} + +// deleteCreateRequest creates the Delete request. +func (client *MaintenanceConfigurationsClient) deleteCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, configName string, options *MaintenanceConfigurationsClientDeleteOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/maintenanceConfigurations/{configName}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + if configName == "" { + return nil, errors.New("parameter configName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{configName}", url.PathEscape(configName)) + req, err := runtime.NewRequest(ctx, http.MethodDelete, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// Get - Gets the specified maintenance configuration of a managed cluster. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. 
+// - resourceName - The name of the managed cluster resource. +// - configName - The name of the maintenance configuration. +// - options - MaintenanceConfigurationsClientGetOptions contains the optional parameters for the MaintenanceConfigurationsClient.Get +// method. +func (client *MaintenanceConfigurationsClient) Get(ctx context.Context, resourceGroupName string, resourceName string, configName string, options *MaintenanceConfigurationsClientGetOptions) (MaintenanceConfigurationsClientGetResponse, error) { + var err error + const operationName = "MaintenanceConfigurationsClient.Get" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.getCreateRequest(ctx, resourceGroupName, resourceName, configName, options) + if err != nil { + return MaintenanceConfigurationsClientGetResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return MaintenanceConfigurationsClientGetResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return MaintenanceConfigurationsClientGetResponse{}, err + } + resp, err := client.getHandleResponse(httpResp) + return resp, err +} + +// getCreateRequest creates the Get request. +func (client *MaintenanceConfigurationsClient) getCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, configName string, options *MaintenanceConfigurationsClientGetOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/maintenanceConfigurations/{configName}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + if configName == "" { + return nil, errors.New("parameter configName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{configName}", url.PathEscape(configName)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// getHandleResponse handles the Get response. +func (client *MaintenanceConfigurationsClient) getHandleResponse(resp *http.Response) (MaintenanceConfigurationsClientGetResponse, error) { + result := MaintenanceConfigurationsClientGetResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.MaintenanceConfiguration); err != nil { + return MaintenanceConfigurationsClientGetResponse{}, err + } + return result, nil +} + +// NewListByManagedClusterPager - Gets a list of maintenance configurations in the specified managed cluster. 
+// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - options - MaintenanceConfigurationsClientListByManagedClusterOptions contains the optional parameters for the MaintenanceConfigurationsClient.NewListByManagedClusterPager +// method. +func (client *MaintenanceConfigurationsClient) NewListByManagedClusterPager(resourceGroupName string, resourceName string, options *MaintenanceConfigurationsClientListByManagedClusterOptions) *runtime.Pager[MaintenanceConfigurationsClientListByManagedClusterResponse] { + return runtime.NewPager(runtime.PagingHandler[MaintenanceConfigurationsClientListByManagedClusterResponse]{ + More: func(page MaintenanceConfigurationsClientListByManagedClusterResponse) bool { + return page.NextLink != nil && len(*page.NextLink) > 0 + }, + Fetcher: func(ctx context.Context, page *MaintenanceConfigurationsClientListByManagedClusterResponse) (MaintenanceConfigurationsClientListByManagedClusterResponse, error) { + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, "MaintenanceConfigurationsClient.NewListByManagedClusterPager") + nextLink := "" + if page != nil { + nextLink = *page.NextLink + } + resp, err := runtime.FetcherForNextLink(ctx, client.internal.Pipeline(), nextLink, func(ctx context.Context) (*policy.Request, error) { + return client.listByManagedClusterCreateRequest(ctx, resourceGroupName, resourceName, options) + }, nil) + if err != nil { + return MaintenanceConfigurationsClientListByManagedClusterResponse{}, err + } + return client.listByManagedClusterHandleResponse(resp) + }, + Tracer: client.internal.Tracer(), + }) +} + +// listByManagedClusterCreateRequest creates the ListByManagedCluster request. +func (client *MaintenanceConfigurationsClient) listByManagedClusterCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, options *MaintenanceConfigurationsClientListByManagedClusterOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/maintenanceConfigurations" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// listByManagedClusterHandleResponse handles the ListByManagedCluster response. 
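+//
+// Editorial aside, not generated code: failed calls in this package surface
+// *azcore.ResponseError, which can be unwrapped to branch on the HTTP status;
+// a minimal sketch:
+//
+//	var respErr *azcore.ResponseError
+//	if errors.As(err, &respErr) && respErr.StatusCode == http.StatusNotFound {
+//		// the maintenance configuration does not exist
+//	}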
+func (client *MaintenanceConfigurationsClient) listByManagedClusterHandleResponse(resp *http.Response) (MaintenanceConfigurationsClientListByManagedClusterResponse, error) { + result := MaintenanceConfigurationsClientListByManagedClusterResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.MaintenanceConfigurationListResult); err != nil { + return MaintenanceConfigurationsClientListByManagedClusterResponse{}, err + } + return result, nil +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/managedclusters_client.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/managedclusters_client.go new file mode 100644 index 000000000000..ecbb1143397d --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/managedclusters_client.go @@ -0,0 +1,2232 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. See License.txt in the project root for license information. +// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +package armcontainerservice + +import ( + "context" + "errors" + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/arm" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" + "net/http" + "net/url" + "strconv" + "strings" +) + +// ManagedClustersClient contains the methods for the ManagedClusters group. +// Don't use this type directly, use NewManagedClustersClient() instead. +type ManagedClustersClient struct { + internal *arm.Client + subscriptionID string +} + +// NewManagedClustersClient creates a new instance of ManagedClustersClient with the specified values. +// - subscriptionID - The ID of the target subscription. The value must be an UUID. +// - credential - used to authorize requests. Usually a credential from azidentity. +// - options - pass nil to accept the default values. +func NewManagedClustersClient(subscriptionID string, credential azcore.TokenCredential, options *arm.ClientOptions) (*ManagedClustersClient, error) { + cl, err := arm.NewClient(moduleName, moduleVersion, credential, options) + if err != nil { + return nil, err + } + client := &ManagedClustersClient{ + subscriptionID: subscriptionID, + internal: cl, + } + return client, nil +} + +// BeginAbortLatestOperation - Aborts the currently running operation on the managed cluster. The Managed Cluster will be +// moved to a Canceling state and eventually to a Canceled state when cancellation finishes. If the operation +// completes before cancellation can take place, an error is returned. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - options - ManagedClustersClientBeginAbortLatestOperationOptions contains the optional parameters for the ManagedClustersClient.BeginAbortLatestOperation +// method. 
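+//
+// Editorial poller sketch, not generated code: Begin* methods return a
+// *runtime.Poller, and PollUntilDone blocks until the long-running operation
+// reaches a terminal state:
+//
+//	poller, err := client.BeginAbortLatestOperation(ctx, "<resource-group>", "<cluster>", nil)
+//	if err == nil {
+//		_, err = poller.PollUntilDone(ctx, nil) // nil selects default polling options
+//	}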
+func (client *ManagedClustersClient) BeginAbortLatestOperation(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientBeginAbortLatestOperationOptions) (*runtime.Poller[ManagedClustersClientAbortLatestOperationResponse], error) { + if options == nil || options.ResumeToken == "" { + resp, err := client.abortLatestOperation(ctx, resourceGroupName, resourceName, options) + if err != nil { + return nil, err + } + poller, err := runtime.NewPoller(resp, client.internal.Pipeline(), &runtime.NewPollerOptions[ManagedClustersClientAbortLatestOperationResponse]{ + FinalStateVia: runtime.FinalStateViaLocation, + Tracer: client.internal.Tracer(), + }) + return poller, err + } else { + return runtime.NewPollerFromResumeToken(options.ResumeToken, client.internal.Pipeline(), &runtime.NewPollerFromResumeTokenOptions[ManagedClustersClientAbortLatestOperationResponse]{ + Tracer: client.internal.Tracer(), + }) + } +} + +// AbortLatestOperation - Aborts the currently running operation on the managed cluster. The Managed Cluster will be moved +// to a Canceling state and eventually to a Canceled state when cancellation finishes. If the operation +// completes before cancellation can take place, an error is returned. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +func (client *ManagedClustersClient) abortLatestOperation(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientBeginAbortLatestOperationOptions) (*http.Response, error) { + var err error + const operationName = "ManagedClustersClient.BeginAbortLatestOperation" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.abortLatestOperationCreateRequest(ctx, resourceGroupName, resourceName, options) + if err != nil { + return nil, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return nil, err + } + if !runtime.HasStatusCode(httpResp, http.StatusAccepted, http.StatusNoContent) { + err = runtime.NewResponseError(httpResp) + return nil, err + } + return httpResp, nil +} + +// abortLatestOperationCreateRequest creates the AbortLatestOperation request. 
+func (client *ManagedClustersClient) abortLatestOperationCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientBeginAbortLatestOperationOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedclusters/{resourceName}/abort" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodPost, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// BeginCreateOrUpdate - Creates or updates a managed cluster. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - parameters - The managed cluster to create or update. +// - options - ManagedClustersClientBeginCreateOrUpdateOptions contains the optional parameters for the ManagedClustersClient.BeginCreateOrUpdate +// method. +func (client *ManagedClustersClient) BeginCreateOrUpdate(ctx context.Context, resourceGroupName string, resourceName string, parameters ManagedCluster, options *ManagedClustersClientBeginCreateOrUpdateOptions) (*runtime.Poller[ManagedClustersClientCreateOrUpdateResponse], error) { + if options == nil || options.ResumeToken == "" { + resp, err := client.createOrUpdate(ctx, resourceGroupName, resourceName, parameters, options) + if err != nil { + return nil, err + } + poller, err := runtime.NewPoller(resp, client.internal.Pipeline(), &runtime.NewPollerOptions[ManagedClustersClientCreateOrUpdateResponse]{ + Tracer: client.internal.Tracer(), + }) + return poller, err + } else { + return runtime.NewPollerFromResumeToken(options.ResumeToken, client.internal.Pipeline(), &runtime.NewPollerFromResumeTokenOptions[ManagedClustersClientCreateOrUpdateResponse]{ + Tracer: client.internal.Tracer(), + }) + } +} + +// CreateOrUpdate - Creates or updates a managed cluster. +// If the operation fails it returns an *azcore.ResponseError type. 
+// +// Generated from API version 2023-11-02-preview +func (client *ManagedClustersClient) createOrUpdate(ctx context.Context, resourceGroupName string, resourceName string, parameters ManagedCluster, options *ManagedClustersClientBeginCreateOrUpdateOptions) (*http.Response, error) { + var err error + const operationName = "ManagedClustersClient.BeginCreateOrUpdate" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.createOrUpdateCreateRequest(ctx, resourceGroupName, resourceName, parameters, options) + if err != nil { + return nil, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return nil, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK, http.StatusCreated) { + err = runtime.NewResponseError(httpResp) + return nil, err + } + return httpResp, nil +} + +// createOrUpdateCreateRequest creates the CreateOrUpdate request. +func (client *ManagedClustersClient) createOrUpdateCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, parameters ManagedCluster, options *ManagedClustersClientBeginCreateOrUpdateOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodPut, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + if err := runtime.MarshalAsJSON(req, parameters); err != nil { + return nil, err + } + return req, nil +} + +// BeginDelete - Deletes a managed cluster. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - options - ManagedClustersClientBeginDeleteOptions contains the optional parameters for the ManagedClustersClient.BeginDelete +// method. 
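+//
+// Illustrative sketch (editorial addition, not part of the generated file): the
+// optional IgnorePodDisruptionBudget flag maps to the ignore-pod-disruption-budget
+// query parameter set in deleteCreateRequest below; names are placeholders.
+//
+//	ignorePDB := true
+//	opts := &armcontainerservice.ManagedClustersClientBeginDeleteOptions{IgnorePodDisruptionBudget: &ignorePDB}
+//	poller, err := client.BeginDelete(ctx, "<resource-group>", "<cluster-name>", opts)
+//	if err != nil { /* handle error */ }
+//	if _, err = poller.PollUntilDone(ctx, nil); err != nil { /* handle error */ }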
+func (client *ManagedClustersClient) BeginDelete(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientBeginDeleteOptions) (*runtime.Poller[ManagedClustersClientDeleteResponse], error) { + if options == nil || options.ResumeToken == "" { + resp, err := client.deleteOperation(ctx, resourceGroupName, resourceName, options) + if err != nil { + return nil, err + } + poller, err := runtime.NewPoller(resp, client.internal.Pipeline(), &runtime.NewPollerOptions[ManagedClustersClientDeleteResponse]{ + Tracer: client.internal.Tracer(), + }) + return poller, err + } else { + return runtime.NewPollerFromResumeToken(options.ResumeToken, client.internal.Pipeline(), &runtime.NewPollerFromResumeTokenOptions[ManagedClustersClientDeleteResponse]{ + Tracer: client.internal.Tracer(), + }) + } +} + +// Delete - Deletes a managed cluster. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +func (client *ManagedClustersClient) deleteOperation(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientBeginDeleteOptions) (*http.Response, error) { + var err error + const operationName = "ManagedClustersClient.BeginDelete" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.deleteCreateRequest(ctx, resourceGroupName, resourceName, options) + if err != nil { + return nil, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return nil, err + } + if !runtime.HasStatusCode(httpResp, http.StatusAccepted, http.StatusNoContent) { + err = runtime.NewResponseError(httpResp) + return nil, err + } + return httpResp, nil +} + +// deleteCreateRequest creates the Delete request. +func (client *ManagedClustersClient) deleteCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientBeginDeleteOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodDelete, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + if options != nil && options.IgnorePodDisruptionBudget != nil { + reqQP.Set("ignore-pod-disruption-budget", strconv.FormatBool(*options.IgnorePodDisruptionBudget)) + } + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// Get - Gets a managed cluster. +// If the operation fails it returns an *azcore.ResponseError type. 
+// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - options - ManagedClustersClientGetOptions contains the optional parameters for the ManagedClustersClient.Get method. +func (client *ManagedClustersClient) Get(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientGetOptions) (ManagedClustersClientGetResponse, error) { + var err error + const operationName = "ManagedClustersClient.Get" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.getCreateRequest(ctx, resourceGroupName, resourceName, options) + if err != nil { + return ManagedClustersClientGetResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return ManagedClustersClientGetResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return ManagedClustersClientGetResponse{}, err + } + resp, err := client.getHandleResponse(httpResp) + return resp, err +} + +// getCreateRequest creates the Get request. +func (client *ManagedClustersClient) getCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientGetOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// getHandleResponse handles the Get response. +func (client *ManagedClustersClient) getHandleResponse(resp *http.Response) (ManagedClustersClientGetResponse, error) { + result := ManagedClustersClientGetResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.ManagedCluster); err != nil { + return ManagedClustersClientGetResponse{}, err + } + return result, nil +} + +// GetAccessProfile - WARNING: This API will be deprecated. Instead use ListClusterUserCredentials [https://docs.microsoft.com/rest/api/aks/managedclusters/listclusterusercredentials] +// or ListClusterAdminCredentials +// [https://docs.microsoft.com/rest/api/aks/managedclusters/listclusteradmincredentials] . +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. 
+// - resourceName - The name of the managed cluster resource. +// - roleName - The name of the role for managed cluster accessProfile resource. +// - options - ManagedClustersClientGetAccessProfileOptions contains the optional parameters for the ManagedClustersClient.GetAccessProfile +// method. +func (client *ManagedClustersClient) GetAccessProfile(ctx context.Context, resourceGroupName string, resourceName string, roleName string, options *ManagedClustersClientGetAccessProfileOptions) (ManagedClustersClientGetAccessProfileResponse, error) { + var err error + const operationName = "ManagedClustersClient.GetAccessProfile" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.getAccessProfileCreateRequest(ctx, resourceGroupName, resourceName, roleName, options) + if err != nil { + return ManagedClustersClientGetAccessProfileResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return ManagedClustersClientGetAccessProfileResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return ManagedClustersClientGetAccessProfileResponse{}, err + } + resp, err := client.getAccessProfileHandleResponse(httpResp) + return resp, err +} + +// getAccessProfileCreateRequest creates the GetAccessProfile request. +func (client *ManagedClustersClient) getAccessProfileCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, roleName string, options *ManagedClustersClientGetAccessProfileOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/accessProfiles/{roleName}/listCredential" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + if roleName == "" { + return nil, errors.New("parameter roleName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{roleName}", url.PathEscape(roleName)) + req, err := runtime.NewRequest(ctx, http.MethodPost, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// getAccessProfileHandleResponse handles the GetAccessProfile response. 
+func (client *ManagedClustersClient) getAccessProfileHandleResponse(resp *http.Response) (ManagedClustersClientGetAccessProfileResponse, error) { + result := ManagedClustersClientGetAccessProfileResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.ManagedClusterAccessProfile); err != nil { + return ManagedClustersClientGetAccessProfileResponse{}, err + } + return result, nil +} + +// GetCommandResult - Gets the results of a command which has been run on the Managed Cluster. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - commandID - Id of the command. +// - options - ManagedClustersClientGetCommandResultOptions contains the optional parameters for the ManagedClustersClient.GetCommandResult +// method. +func (client *ManagedClustersClient) GetCommandResult(ctx context.Context, resourceGroupName string, resourceName string, commandID string, options *ManagedClustersClientGetCommandResultOptions) (ManagedClustersClientGetCommandResultResponse, error) { + var err error + const operationName = "ManagedClustersClient.GetCommandResult" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.getCommandResultCreateRequest(ctx, resourceGroupName, resourceName, commandID, options) + if err != nil { + return ManagedClustersClientGetCommandResultResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return ManagedClustersClientGetCommandResultResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK, http.StatusAccepted) { + err = runtime.NewResponseError(httpResp) + return ManagedClustersClientGetCommandResultResponse{}, err + } + resp, err := client.getCommandResultHandleResponse(httpResp) + return resp, err +} + +// getCommandResultCreateRequest creates the GetCommandResult request. 
+func (client *ManagedClustersClient) getCommandResultCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, commandID string, options *ManagedClustersClientGetCommandResultOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/commandResults/{commandId}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + if commandID == "" { + return nil, errors.New("parameter commandID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{commandId}", url.PathEscape(commandID)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// getCommandResultHandleResponse handles the GetCommandResult response. +func (client *ManagedClustersClient) getCommandResultHandleResponse(resp *http.Response) (ManagedClustersClientGetCommandResultResponse, error) { + result := ManagedClustersClientGetCommandResultResponse{} + if val := resp.Header.Get("Location"); val != "" { + result.Location = &val + } + if err := runtime.UnmarshalAsJSON(resp, &result.RunCommandResult); err != nil { + return ManagedClustersClientGetCommandResultResponse{}, err + } + return result, nil +} + +// GetGuardrailsVersions - Contains Guardrails version along with its support info and whether it is a default version. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - location - The name of the Azure region. +// - version - Safeguards version +// - options - ManagedClustersClientGetGuardrailsVersionsOptions contains the optional parameters for the ManagedClustersClient.GetGuardrailsVersions +// method. 
+func (client *ManagedClustersClient) GetGuardrailsVersions(ctx context.Context, location string, version string, options *ManagedClustersClientGetGuardrailsVersionsOptions) (ManagedClustersClientGetGuardrailsVersionsResponse, error) { + var err error + const operationName = "ManagedClustersClient.GetGuardrailsVersions" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.getGuardrailsVersionsCreateRequest(ctx, location, version, options) + if err != nil { + return ManagedClustersClientGetGuardrailsVersionsResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return ManagedClustersClientGetGuardrailsVersionsResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return ManagedClustersClientGetGuardrailsVersionsResponse{}, err + } + resp, err := client.getGuardrailsVersionsHandleResponse(httpResp) + return resp, err +} + +// getGuardrailsVersionsCreateRequest creates the GetGuardrailsVersions request. +func (client *ManagedClustersClient) getGuardrailsVersionsCreateRequest(ctx context.Context, location string, version string, options *ManagedClustersClientGetGuardrailsVersionsOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/providers/Microsoft.ContainerService/locations/{location}/guardrailsVersions/{version}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if location == "" { + return nil, errors.New("parameter location cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{location}", url.PathEscape(location)) + if version == "" { + return nil, errors.New("parameter version cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{version}", url.PathEscape(version)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// getGuardrailsVersionsHandleResponse handles the GetGuardrailsVersions response. +func (client *ManagedClustersClient) getGuardrailsVersionsHandleResponse(resp *http.Response) (ManagedClustersClientGetGuardrailsVersionsResponse, error) { + result := ManagedClustersClientGetGuardrailsVersionsResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.GuardrailsAvailableVersion); err != nil { + return ManagedClustersClientGetGuardrailsVersionsResponse{}, err + } + return result, nil +} + +// GetMeshRevisionProfile - Contains extra metadata on the revision, including supported revisions, cluster compatibility +// and available upgrades +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - location - The name of the Azure region. +// - mode - The mode of the mesh. +// - options - ManagedClustersClientGetMeshRevisionProfileOptions contains the optional parameters for the ManagedClustersClient.GetMeshRevisionProfile +// method. 
+func (client *ManagedClustersClient) GetMeshRevisionProfile(ctx context.Context, location string, mode string, options *ManagedClustersClientGetMeshRevisionProfileOptions) (ManagedClustersClientGetMeshRevisionProfileResponse, error) { + var err error + const operationName = "ManagedClustersClient.GetMeshRevisionProfile" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.getMeshRevisionProfileCreateRequest(ctx, location, mode, options) + if err != nil { + return ManagedClustersClientGetMeshRevisionProfileResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return ManagedClustersClientGetMeshRevisionProfileResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return ManagedClustersClientGetMeshRevisionProfileResponse{}, err + } + resp, err := client.getMeshRevisionProfileHandleResponse(httpResp) + return resp, err +} + +// getMeshRevisionProfileCreateRequest creates the GetMeshRevisionProfile request. +func (client *ManagedClustersClient) getMeshRevisionProfileCreateRequest(ctx context.Context, location string, mode string, options *ManagedClustersClientGetMeshRevisionProfileOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/providers/Microsoft.ContainerService/locations/{location}/meshRevisionProfiles/{mode}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if location == "" { + return nil, errors.New("parameter location cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{location}", url.PathEscape(location)) + if mode == "" { + return nil, errors.New("parameter mode cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{mode}", url.PathEscape(mode)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// getMeshRevisionProfileHandleResponse handles the GetMeshRevisionProfile response. +func (client *ManagedClustersClient) getMeshRevisionProfileHandleResponse(resp *http.Response) (ManagedClustersClientGetMeshRevisionProfileResponse, error) { + result := ManagedClustersClientGetMeshRevisionProfileResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.MeshRevisionProfile); err != nil { + return ManagedClustersClientGetMeshRevisionProfileResponse{}, err + } + return result, nil +} + +// GetMeshUpgradeProfile - Gets available upgrades for a service mesh in a cluster. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - mode - The mode of the mesh. +// - options - ManagedClustersClientGetMeshUpgradeProfileOptions contains the optional parameters for the ManagedClustersClient.GetMeshUpgradeProfile +// method. 
+func (client *ManagedClustersClient) GetMeshUpgradeProfile(ctx context.Context, resourceGroupName string, resourceName string, mode string, options *ManagedClustersClientGetMeshUpgradeProfileOptions) (ManagedClustersClientGetMeshUpgradeProfileResponse, error) { + var err error + const operationName = "ManagedClustersClient.GetMeshUpgradeProfile" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.getMeshUpgradeProfileCreateRequest(ctx, resourceGroupName, resourceName, mode, options) + if err != nil { + return ManagedClustersClientGetMeshUpgradeProfileResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return ManagedClustersClientGetMeshUpgradeProfileResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return ManagedClustersClientGetMeshUpgradeProfileResponse{}, err + } + resp, err := client.getMeshUpgradeProfileHandleResponse(httpResp) + return resp, err +} + +// getMeshUpgradeProfileCreateRequest creates the GetMeshUpgradeProfile request. +func (client *ManagedClustersClient) getMeshUpgradeProfileCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, mode string, options *ManagedClustersClientGetMeshUpgradeProfileOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/meshUpgradeProfiles/{mode}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + if mode == "" { + return nil, errors.New("parameter mode cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{mode}", url.PathEscape(mode)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// getMeshUpgradeProfileHandleResponse handles the GetMeshUpgradeProfile response. +func (client *ManagedClustersClient) getMeshUpgradeProfileHandleResponse(resp *http.Response) (ManagedClustersClientGetMeshUpgradeProfileResponse, error) { + result := ManagedClustersClientGetMeshUpgradeProfileResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.MeshUpgradeProfile); err != nil { + return ManagedClustersClientGetMeshUpgradeProfileResponse{}, err + } + return result, nil +} + +// GetOSOptions - Gets supported OS options in the specified subscription. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - location - The name of the Azure region. 
+// - options - ManagedClustersClientGetOSOptionsOptions contains the optional parameters for the ManagedClustersClient.GetOSOptions +// method. +func (client *ManagedClustersClient) GetOSOptions(ctx context.Context, location string, options *ManagedClustersClientGetOSOptionsOptions) (ManagedClustersClientGetOSOptionsResponse, error) { + var err error + const operationName = "ManagedClustersClient.GetOSOptions" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.getOSOptionsCreateRequest(ctx, location, options) + if err != nil { + return ManagedClustersClientGetOSOptionsResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return ManagedClustersClientGetOSOptionsResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return ManagedClustersClientGetOSOptionsResponse{}, err + } + resp, err := client.getOSOptionsHandleResponse(httpResp) + return resp, err +} + +// getOSOptionsCreateRequest creates the GetOSOptions request. +func (client *ManagedClustersClient) getOSOptionsCreateRequest(ctx context.Context, location string, options *ManagedClustersClientGetOSOptionsOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/providers/Microsoft.ContainerService/locations/{location}/osOptions/default" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if location == "" { + return nil, errors.New("parameter location cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{location}", url.PathEscape(location)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + if options != nil && options.ResourceType != nil { + reqQP.Set("resource-type", *options.ResourceType) + } + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// getOSOptionsHandleResponse handles the GetOSOptions response. +func (client *ManagedClustersClient) getOSOptionsHandleResponse(resp *http.Response) (ManagedClustersClientGetOSOptionsResponse, error) { + result := ManagedClustersClientGetOSOptionsResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.OSOptionProfile); err != nil { + return ManagedClustersClientGetOSOptionsResponse{}, err + } + return result, nil +} + +// GetSafeguardsVersions - Contains Safeguards version along with its support info and whether it is a default version. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - location - The name of the Azure region. +// - version - Safeguards version +// - options - ManagedClustersClientGetSafeguardsVersionsOptions contains the optional parameters for the ManagedClustersClient.GetSafeguardsVersions +// method. 
+func (client *ManagedClustersClient) GetSafeguardsVersions(ctx context.Context, location string, version string, options *ManagedClustersClientGetSafeguardsVersionsOptions) (ManagedClustersClientGetSafeguardsVersionsResponse, error) { + var err error + const operationName = "ManagedClustersClient.GetSafeguardsVersions" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.getSafeguardsVersionsCreateRequest(ctx, location, version, options) + if err != nil { + return ManagedClustersClientGetSafeguardsVersionsResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return ManagedClustersClientGetSafeguardsVersionsResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return ManagedClustersClientGetSafeguardsVersionsResponse{}, err + } + resp, err := client.getSafeguardsVersionsHandleResponse(httpResp) + return resp, err +} + +// getSafeguardsVersionsCreateRequest creates the GetSafeguardsVersions request. +func (client *ManagedClustersClient) getSafeguardsVersionsCreateRequest(ctx context.Context, location string, version string, options *ManagedClustersClientGetSafeguardsVersionsOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/providers/Microsoft.ContainerService/locations/{location}/safeguardsVersions/{version}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if location == "" { + return nil, errors.New("parameter location cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{location}", url.PathEscape(location)) + if version == "" { + return nil, errors.New("parameter version cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{version}", url.PathEscape(version)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// getSafeguardsVersionsHandleResponse handles the GetSafeguardsVersions response. +func (client *ManagedClustersClient) getSafeguardsVersionsHandleResponse(resp *http.Response) (ManagedClustersClientGetSafeguardsVersionsResponse, error) { + result := ManagedClustersClientGetSafeguardsVersionsResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.SafeguardsAvailableVersion); err != nil { + return ManagedClustersClientGetSafeguardsVersionsResponse{}, err + } + return result, nil +} + +// GetUpgradeProfile - Gets the upgrade profile of a managed cluster. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - options - ManagedClustersClientGetUpgradeProfileOptions contains the optional parameters for the ManagedClustersClient.GetUpgradeProfile +// method. 
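+//
+// Illustrative sketch (editorial addition, not part of the generated file): a
+// plain GET whose decoded body is exposed on the response struct; names are
+// placeholders.
+//
+//	resp, err := client.GetUpgradeProfile(ctx, "<resource-group>", "<cluster-name>", nil)
+//	if err != nil { /* handle error */ }
+//	_ = resp.ManagedClusterUpgradeProfile // inspect the available upgrades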
+func (client *ManagedClustersClient) GetUpgradeProfile(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientGetUpgradeProfileOptions) (ManagedClustersClientGetUpgradeProfileResponse, error) { + var err error + const operationName = "ManagedClustersClient.GetUpgradeProfile" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.getUpgradeProfileCreateRequest(ctx, resourceGroupName, resourceName, options) + if err != nil { + return ManagedClustersClientGetUpgradeProfileResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return ManagedClustersClientGetUpgradeProfileResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return ManagedClustersClientGetUpgradeProfileResponse{}, err + } + resp, err := client.getUpgradeProfileHandleResponse(httpResp) + return resp, err +} + +// getUpgradeProfileCreateRequest creates the GetUpgradeProfile request. +func (client *ManagedClustersClient) getUpgradeProfileCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientGetUpgradeProfileOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/upgradeProfiles/default" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// getUpgradeProfileHandleResponse handles the GetUpgradeProfile response. +func (client *ManagedClustersClient) getUpgradeProfileHandleResponse(resp *http.Response) (ManagedClustersClientGetUpgradeProfileResponse, error) { + result := ManagedClustersClientGetUpgradeProfileResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.ManagedClusterUpgradeProfile); err != nil { + return ManagedClustersClientGetUpgradeProfileResponse{}, err + } + return result, nil +} + +// NewListPager - Gets a list of managed clusters in the specified subscription. +// +// Generated from API version 2023-11-02-preview +// - options - ManagedClustersClientListOptions contains the optional parameters for the ManagedClustersClient.NewListPager +// method. 
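+//
+// Illustrative sketch (editorial addition, not part of the generated file):
+// pagers are drained with More/NextPage; each page carries the decoded
+// ManagedClusterListResult, so its Value slice is available on the response.
+//
+//	pager := client.NewListPager(nil)
+//	for pager.More() {
+//		page, err := pager.NextPage(ctx)
+//		if err != nil { /* handle error */ }
+//		for _, mc := range page.Value {
+//			_ = mc // each *armcontainerservice.ManagedCluster
+//		}
+//	}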
+func (client *ManagedClustersClient) NewListPager(options *ManagedClustersClientListOptions) *runtime.Pager[ManagedClustersClientListResponse] { + return runtime.NewPager(runtime.PagingHandler[ManagedClustersClientListResponse]{ + More: func(page ManagedClustersClientListResponse) bool { + return page.NextLink != nil && len(*page.NextLink) > 0 + }, + Fetcher: func(ctx context.Context, page *ManagedClustersClientListResponse) (ManagedClustersClientListResponse, error) { + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, "ManagedClustersClient.NewListPager") + nextLink := "" + if page != nil { + nextLink = *page.NextLink + } + resp, err := runtime.FetcherForNextLink(ctx, client.internal.Pipeline(), nextLink, func(ctx context.Context) (*policy.Request, error) { + return client.listCreateRequest(ctx, options) + }, nil) + if err != nil { + return ManagedClustersClientListResponse{}, err + } + return client.listHandleResponse(resp) + }, + Tracer: client.internal.Tracer(), + }) +} + +// listCreateRequest creates the List request. +func (client *ManagedClustersClient) listCreateRequest(ctx context.Context, options *ManagedClustersClientListOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/providers/Microsoft.ContainerService/managedClusters" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// listHandleResponse handles the List response. +func (client *ManagedClustersClient) listHandleResponse(resp *http.Response) (ManagedClustersClientListResponse, error) { + result := ManagedClustersClientListResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.ManagedClusterListResult); err != nil { + return ManagedClustersClientListResponse{}, err + } + return result, nil +} + +// NewListByResourceGroupPager - Lists managed clusters in the specified subscription and resource group. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - options - ManagedClustersClientListByResourceGroupOptions contains the optional parameters for the ManagedClustersClient.NewListByResourceGroupPager +// method. 
+func (client *ManagedClustersClient) NewListByResourceGroupPager(resourceGroupName string, options *ManagedClustersClientListByResourceGroupOptions) *runtime.Pager[ManagedClustersClientListByResourceGroupResponse] { + return runtime.NewPager(runtime.PagingHandler[ManagedClustersClientListByResourceGroupResponse]{ + More: func(page ManagedClustersClientListByResourceGroupResponse) bool { + return page.NextLink != nil && len(*page.NextLink) > 0 + }, + Fetcher: func(ctx context.Context, page *ManagedClustersClientListByResourceGroupResponse) (ManagedClustersClientListByResourceGroupResponse, error) { + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, "ManagedClustersClient.NewListByResourceGroupPager") + nextLink := "" + if page != nil { + nextLink = *page.NextLink + } + resp, err := runtime.FetcherForNextLink(ctx, client.internal.Pipeline(), nextLink, func(ctx context.Context) (*policy.Request, error) { + return client.listByResourceGroupCreateRequest(ctx, resourceGroupName, options) + }, nil) + if err != nil { + return ManagedClustersClientListByResourceGroupResponse{}, err + } + return client.listByResourceGroupHandleResponse(resp) + }, + Tracer: client.internal.Tracer(), + }) +} + +// listByResourceGroupCreateRequest creates the ListByResourceGroup request. +func (client *ManagedClustersClient) listByResourceGroupCreateRequest(ctx context.Context, resourceGroupName string, options *ManagedClustersClientListByResourceGroupOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// listByResourceGroupHandleResponse handles the ListByResourceGroup response. +func (client *ManagedClustersClient) listByResourceGroupHandleResponse(resp *http.Response) (ManagedClustersClientListByResourceGroupResponse, error) { + result := ManagedClustersClientListByResourceGroupResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.ManagedClusterListResult); err != nil { + return ManagedClustersClientListByResourceGroupResponse{}, err + } + return result, nil +} + +// ListClusterAdminCredentials - Lists the admin credentials of a managed cluster. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - options - ManagedClustersClientListClusterAdminCredentialsOptions contains the optional parameters for the ManagedClustersClient.ListClusterAdminCredentials +// method. 
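+//
+// Illustrative sketch (editorial addition, not part of the generated file): the
+// response embeds CredentialResults, whose Kubeconfigs entries carry raw
+// kubeconfig bytes; names are placeholders.
+//
+//	resp, err := client.ListClusterAdminCredentials(ctx, "<resource-group>", "<cluster-name>", nil)
+//	if err != nil { /* handle error */ }
+//	for _, kc := range resp.Kubeconfigs {
+//		_ = kc.Value // raw kubeconfig for *kc.Name
+//	}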
+func (client *ManagedClustersClient) ListClusterAdminCredentials(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientListClusterAdminCredentialsOptions) (ManagedClustersClientListClusterAdminCredentialsResponse, error) { + var err error + const operationName = "ManagedClustersClient.ListClusterAdminCredentials" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.listClusterAdminCredentialsCreateRequest(ctx, resourceGroupName, resourceName, options) + if err != nil { + return ManagedClustersClientListClusterAdminCredentialsResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return ManagedClustersClientListClusterAdminCredentialsResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return ManagedClustersClientListClusterAdminCredentialsResponse{}, err + } + resp, err := client.listClusterAdminCredentialsHandleResponse(httpResp) + return resp, err +} + +// listClusterAdminCredentialsCreateRequest creates the ListClusterAdminCredentials request. +func (client *ManagedClustersClient) listClusterAdminCredentialsCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientListClusterAdminCredentialsOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/listClusterAdminCredential" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodPost, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + if options != nil && options.ServerFqdn != nil { + reqQP.Set("server-fqdn", *options.ServerFqdn) + } + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// listClusterAdminCredentialsHandleResponse handles the ListClusterAdminCredentials response. +func (client *ManagedClustersClient) listClusterAdminCredentialsHandleResponse(resp *http.Response) (ManagedClustersClientListClusterAdminCredentialsResponse, error) { + result := ManagedClustersClientListClusterAdminCredentialsResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.CredentialResults); err != nil { + return ManagedClustersClientListClusterAdminCredentialsResponse{}, err + } + return result, nil +} + +// ListClusterMonitoringUserCredentials - Lists the cluster monitoring user credentials of a managed cluster. +// If the operation fails it returns an *azcore.ResponseError type. 
+// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - options - ManagedClustersClientListClusterMonitoringUserCredentialsOptions contains the optional parameters for the ManagedClustersClient.ListClusterMonitoringUserCredentials +// method. +func (client *ManagedClustersClient) ListClusterMonitoringUserCredentials(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientListClusterMonitoringUserCredentialsOptions) (ManagedClustersClientListClusterMonitoringUserCredentialsResponse, error) { + var err error + const operationName = "ManagedClustersClient.ListClusterMonitoringUserCredentials" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.listClusterMonitoringUserCredentialsCreateRequest(ctx, resourceGroupName, resourceName, options) + if err != nil { + return ManagedClustersClientListClusterMonitoringUserCredentialsResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return ManagedClustersClientListClusterMonitoringUserCredentialsResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return ManagedClustersClientListClusterMonitoringUserCredentialsResponse{}, err + } + resp, err := client.listClusterMonitoringUserCredentialsHandleResponse(httpResp) + return resp, err +} + +// listClusterMonitoringUserCredentialsCreateRequest creates the ListClusterMonitoringUserCredentials request. +func (client *ManagedClustersClient) listClusterMonitoringUserCredentialsCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientListClusterMonitoringUserCredentialsOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/listClusterMonitoringUserCredential" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodPost, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + if options != nil && options.ServerFqdn != nil { + reqQP.Set("server-fqdn", *options.ServerFqdn) + } + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// listClusterMonitoringUserCredentialsHandleResponse handles the ListClusterMonitoringUserCredentials response. 
+func (client *ManagedClustersClient) listClusterMonitoringUserCredentialsHandleResponse(resp *http.Response) (ManagedClustersClientListClusterMonitoringUserCredentialsResponse, error) { + result := ManagedClustersClientListClusterMonitoringUserCredentialsResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.CredentialResults); err != nil { + return ManagedClustersClientListClusterMonitoringUserCredentialsResponse{}, err + } + return result, nil +} + +// ListClusterUserCredentials - Lists the user credentials of a managed cluster. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - options - ManagedClustersClientListClusterUserCredentialsOptions contains the optional parameters for the ManagedClustersClient.ListClusterUserCredentials +// method. +func (client *ManagedClustersClient) ListClusterUserCredentials(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientListClusterUserCredentialsOptions) (ManagedClustersClientListClusterUserCredentialsResponse, error) { + var err error + const operationName = "ManagedClustersClient.ListClusterUserCredentials" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.listClusterUserCredentialsCreateRequest(ctx, resourceGroupName, resourceName, options) + if err != nil { + return ManagedClustersClientListClusterUserCredentialsResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return ManagedClustersClientListClusterUserCredentialsResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return ManagedClustersClientListClusterUserCredentialsResponse{}, err + } + resp, err := client.listClusterUserCredentialsHandleResponse(httpResp) + return resp, err +} + +// listClusterUserCredentialsCreateRequest creates the ListClusterUserCredentials request. 
+func (client *ManagedClustersClient) listClusterUserCredentialsCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientListClusterUserCredentialsOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/listClusterUserCredential" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodPost, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + if options != nil && options.Format != nil { + reqQP.Set("format", string(*options.Format)) + } + if options != nil && options.ServerFqdn != nil { + reqQP.Set("server-fqdn", *options.ServerFqdn) + } + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// listClusterUserCredentialsHandleResponse handles the ListClusterUserCredentials response. +func (client *ManagedClustersClient) listClusterUserCredentialsHandleResponse(resp *http.Response) (ManagedClustersClientListClusterUserCredentialsResponse, error) { + result := ManagedClustersClientListClusterUserCredentialsResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.CredentialResults); err != nil { + return ManagedClustersClientListClusterUserCredentialsResponse{}, err + } + return result, nil +} + +// NewListGuardrailsVersionsPager - Contains list of Guardrails version along with its support info and whether it is a default +// version. +// +// Generated from API version 2023-11-02-preview +// - location - The name of the Azure region. +// - options - ManagedClustersClientListGuardrailsVersionsOptions contains the optional parameters for the ManagedClustersClient.NewListGuardrailsVersionsPager +// method. 
+func (client *ManagedClustersClient) NewListGuardrailsVersionsPager(location string, options *ManagedClustersClientListGuardrailsVersionsOptions) *runtime.Pager[ManagedClustersClientListGuardrailsVersionsResponse] { + return runtime.NewPager(runtime.PagingHandler[ManagedClustersClientListGuardrailsVersionsResponse]{ + More: func(page ManagedClustersClientListGuardrailsVersionsResponse) bool { + return page.NextLink != nil && len(*page.NextLink) > 0 + }, + Fetcher: func(ctx context.Context, page *ManagedClustersClientListGuardrailsVersionsResponse) (ManagedClustersClientListGuardrailsVersionsResponse, error) { + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, "ManagedClustersClient.NewListGuardrailsVersionsPager") + nextLink := "" + if page != nil { + nextLink = *page.NextLink + } + resp, err := runtime.FetcherForNextLink(ctx, client.internal.Pipeline(), nextLink, func(ctx context.Context) (*policy.Request, error) { + return client.listGuardrailsVersionsCreateRequest(ctx, location, options) + }, nil) + if err != nil { + return ManagedClustersClientListGuardrailsVersionsResponse{}, err + } + return client.listGuardrailsVersionsHandleResponse(resp) + }, + Tracer: client.internal.Tracer(), + }) +} + +// listGuardrailsVersionsCreateRequest creates the ListGuardrailsVersions request. +func (client *ManagedClustersClient) listGuardrailsVersionsCreateRequest(ctx context.Context, location string, options *ManagedClustersClientListGuardrailsVersionsOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/providers/Microsoft.ContainerService/locations/{location}/guardrailsVersions" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if location == "" { + return nil, errors.New("parameter location cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{location}", url.PathEscape(location)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// listGuardrailsVersionsHandleResponse handles the ListGuardrailsVersions response. +func (client *ManagedClustersClient) listGuardrailsVersionsHandleResponse(resp *http.Response) (ManagedClustersClientListGuardrailsVersionsResponse, error) { + result := ManagedClustersClientListGuardrailsVersionsResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.GuardrailsAvailableVersionsList); err != nil { + return ManagedClustersClientListGuardrailsVersionsResponse{}, err + } + return result, nil +} + +// ListKubernetesVersions - Contains extra metadata on the version, including supported patch versions, capabilities, available +// upgrades, and details on preview status of the version +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - location - The name of the Azure region. +// - options - ManagedClustersClientListKubernetesVersionsOptions contains the optional parameters for the ManagedClustersClient.ListKubernetesVersions +// method. 
+func (client *ManagedClustersClient) ListKubernetesVersions(ctx context.Context, location string, options *ManagedClustersClientListKubernetesVersionsOptions) (ManagedClustersClientListKubernetesVersionsResponse, error) { + var err error + const operationName = "ManagedClustersClient.ListKubernetesVersions" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.listKubernetesVersionsCreateRequest(ctx, location, options) + if err != nil { + return ManagedClustersClientListKubernetesVersionsResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return ManagedClustersClientListKubernetesVersionsResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return ManagedClustersClientListKubernetesVersionsResponse{}, err + } + resp, err := client.listKubernetesVersionsHandleResponse(httpResp) + return resp, err +} + +// listKubernetesVersionsCreateRequest creates the ListKubernetesVersions request. +func (client *ManagedClustersClient) listKubernetesVersionsCreateRequest(ctx context.Context, location string, options *ManagedClustersClientListKubernetesVersionsOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/providers/Microsoft.ContainerService/locations/{location}/kubernetesVersions" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if location == "" { + return nil, errors.New("parameter location cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{location}", url.PathEscape(location)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// listKubernetesVersionsHandleResponse handles the ListKubernetesVersions response. +func (client *ManagedClustersClient) listKubernetesVersionsHandleResponse(resp *http.Response) (ManagedClustersClientListKubernetesVersionsResponse, error) { + result := ManagedClustersClientListKubernetesVersionsResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.KubernetesVersionListResult); err != nil { + return ManagedClustersClientListKubernetesVersionsResponse{}, err + } + return result, nil +} + +// NewListMeshRevisionProfilesPager - Contains extra metadata on each revision, including supported revisions, cluster compatibility +// and available upgrades +// +// Generated from API version 2023-11-02-preview +// - location - The name of the Azure region. +// - options - ManagedClustersClientListMeshRevisionProfilesOptions contains the optional parameters for the ManagedClustersClient.NewListMeshRevisionProfilesPager +// method. 
+func (client *ManagedClustersClient) NewListMeshRevisionProfilesPager(location string, options *ManagedClustersClientListMeshRevisionProfilesOptions) *runtime.Pager[ManagedClustersClientListMeshRevisionProfilesResponse] { + return runtime.NewPager(runtime.PagingHandler[ManagedClustersClientListMeshRevisionProfilesResponse]{ + More: func(page ManagedClustersClientListMeshRevisionProfilesResponse) bool { + return page.NextLink != nil && len(*page.NextLink) > 0 + }, + Fetcher: func(ctx context.Context, page *ManagedClustersClientListMeshRevisionProfilesResponse) (ManagedClustersClientListMeshRevisionProfilesResponse, error) { + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, "ManagedClustersClient.NewListMeshRevisionProfilesPager") + nextLink := "" + if page != nil { + nextLink = *page.NextLink + } + resp, err := runtime.FetcherForNextLink(ctx, client.internal.Pipeline(), nextLink, func(ctx context.Context) (*policy.Request, error) { + return client.listMeshRevisionProfilesCreateRequest(ctx, location, options) + }, nil) + if err != nil { + return ManagedClustersClientListMeshRevisionProfilesResponse{}, err + } + return client.listMeshRevisionProfilesHandleResponse(resp) + }, + Tracer: client.internal.Tracer(), + }) +} + +// listMeshRevisionProfilesCreateRequest creates the ListMeshRevisionProfiles request. +func (client *ManagedClustersClient) listMeshRevisionProfilesCreateRequest(ctx context.Context, location string, options *ManagedClustersClientListMeshRevisionProfilesOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/providers/Microsoft.ContainerService/locations/{location}/meshRevisionProfiles" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if location == "" { + return nil, errors.New("parameter location cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{location}", url.PathEscape(location)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// listMeshRevisionProfilesHandleResponse handles the ListMeshRevisionProfiles response. +func (client *ManagedClustersClient) listMeshRevisionProfilesHandleResponse(resp *http.Response) (ManagedClustersClientListMeshRevisionProfilesResponse, error) { + result := ManagedClustersClientListMeshRevisionProfilesResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.MeshRevisionProfileList); err != nil { + return ManagedClustersClientListMeshRevisionProfilesResponse{}, err + } + return result, nil +} + +// NewListMeshUpgradeProfilesPager - Lists available upgrades for all service meshes in a specific cluster. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - options - ManagedClustersClientListMeshUpgradeProfilesOptions contains the optional parameters for the ManagedClustersClient.NewListMeshUpgradeProfilesPager +// method. 
+func (client *ManagedClustersClient) NewListMeshUpgradeProfilesPager(resourceGroupName string, resourceName string, options *ManagedClustersClientListMeshUpgradeProfilesOptions) *runtime.Pager[ManagedClustersClientListMeshUpgradeProfilesResponse] { + return runtime.NewPager(runtime.PagingHandler[ManagedClustersClientListMeshUpgradeProfilesResponse]{ + More: func(page ManagedClustersClientListMeshUpgradeProfilesResponse) bool { + return page.NextLink != nil && len(*page.NextLink) > 0 + }, + Fetcher: func(ctx context.Context, page *ManagedClustersClientListMeshUpgradeProfilesResponse) (ManagedClustersClientListMeshUpgradeProfilesResponse, error) { + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, "ManagedClustersClient.NewListMeshUpgradeProfilesPager") + nextLink := "" + if page != nil { + nextLink = *page.NextLink + } + resp, err := runtime.FetcherForNextLink(ctx, client.internal.Pipeline(), nextLink, func(ctx context.Context) (*policy.Request, error) { + return client.listMeshUpgradeProfilesCreateRequest(ctx, resourceGroupName, resourceName, options) + }, nil) + if err != nil { + return ManagedClustersClientListMeshUpgradeProfilesResponse{}, err + } + return client.listMeshUpgradeProfilesHandleResponse(resp) + }, + Tracer: client.internal.Tracer(), + }) +} + +// listMeshUpgradeProfilesCreateRequest creates the ListMeshUpgradeProfiles request. +func (client *ManagedClustersClient) listMeshUpgradeProfilesCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientListMeshUpgradeProfilesOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/meshUpgradeProfiles" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// listMeshUpgradeProfilesHandleResponse handles the ListMeshUpgradeProfiles response. +func (client *ManagedClustersClient) listMeshUpgradeProfilesHandleResponse(resp *http.Response) (ManagedClustersClientListMeshUpgradeProfilesResponse, error) { + result := ManagedClustersClientListMeshUpgradeProfilesResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.MeshUpgradeProfileList); err != nil { + return ManagedClustersClientListMeshUpgradeProfilesResponse{}, err + } + return result, nil +} + +// NewListOutboundNetworkDependenciesEndpointsPager - Gets a list of egress endpoints (network endpoints of all outbound dependencies) +// in the specified managed cluster. The operation returns properties of each egress endpoint. 
+// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - options - ManagedClustersClientListOutboundNetworkDependenciesEndpointsOptions contains the optional parameters for the +// ManagedClustersClient.NewListOutboundNetworkDependenciesEndpointsPager method. +func (client *ManagedClustersClient) NewListOutboundNetworkDependenciesEndpointsPager(resourceGroupName string, resourceName string, options *ManagedClustersClientListOutboundNetworkDependenciesEndpointsOptions) *runtime.Pager[ManagedClustersClientListOutboundNetworkDependenciesEndpointsResponse] { + return runtime.NewPager(runtime.PagingHandler[ManagedClustersClientListOutboundNetworkDependenciesEndpointsResponse]{ + More: func(page ManagedClustersClientListOutboundNetworkDependenciesEndpointsResponse) bool { + return page.NextLink != nil && len(*page.NextLink) > 0 + }, + Fetcher: func(ctx context.Context, page *ManagedClustersClientListOutboundNetworkDependenciesEndpointsResponse) (ManagedClustersClientListOutboundNetworkDependenciesEndpointsResponse, error) { + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, "ManagedClustersClient.NewListOutboundNetworkDependenciesEndpointsPager") + nextLink := "" + if page != nil { + nextLink = *page.NextLink + } + resp, err := runtime.FetcherForNextLink(ctx, client.internal.Pipeline(), nextLink, func(ctx context.Context) (*policy.Request, error) { + return client.listOutboundNetworkDependenciesEndpointsCreateRequest(ctx, resourceGroupName, resourceName, options) + }, nil) + if err != nil { + return ManagedClustersClientListOutboundNetworkDependenciesEndpointsResponse{}, err + } + return client.listOutboundNetworkDependenciesEndpointsHandleResponse(resp) + }, + Tracer: client.internal.Tracer(), + }) +} + +// listOutboundNetworkDependenciesEndpointsCreateRequest creates the ListOutboundNetworkDependenciesEndpoints request. +func (client *ManagedClustersClient) listOutboundNetworkDependenciesEndpointsCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientListOutboundNetworkDependenciesEndpointsOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/outboundNetworkDependenciesEndpoints" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// listOutboundNetworkDependenciesEndpointsHandleResponse handles the ListOutboundNetworkDependenciesEndpoints response. 
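+
+// Editor's note: an illustrative sketch, not generated code, of draining the pager
+// above. It assumes a *ManagedClustersClient built as in the earlier sketch; the
+// resource names are placeholders.
+//
+//	pager := client.NewListOutboundNetworkDependenciesEndpointsPager("<resource-group>", "<cluster>", nil)
+//	for pager.More() {
+//		page, err := pager.NextPage(ctx)
+//		if err != nil {
+//			break // a real caller would surface the error
+//		}
+//		for _, ep := range page.Value { // egress endpoints grouped by category
+//			fmt.Println(*ep.Category)
+//		}
+//	}
+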
+func (client *ManagedClustersClient) listOutboundNetworkDependenciesEndpointsHandleResponse(resp *http.Response) (ManagedClustersClientListOutboundNetworkDependenciesEndpointsResponse, error) { + result := ManagedClustersClientListOutboundNetworkDependenciesEndpointsResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.OutboundEnvironmentEndpointCollection); err != nil { + return ManagedClustersClientListOutboundNetworkDependenciesEndpointsResponse{}, err + } + return result, nil +} + +// NewListSafeguardsVersionsPager - Contains list of Safeguards version along with its support info and whether it is a default +// version. +// +// Generated from API version 2023-11-02-preview +// - location - The name of the Azure region. +// - options - ManagedClustersClientListSafeguardsVersionsOptions contains the optional parameters for the ManagedClustersClient.NewListSafeguardsVersionsPager +// method. +func (client *ManagedClustersClient) NewListSafeguardsVersionsPager(location string, options *ManagedClustersClientListSafeguardsVersionsOptions) *runtime.Pager[ManagedClustersClientListSafeguardsVersionsResponse] { + return runtime.NewPager(runtime.PagingHandler[ManagedClustersClientListSafeguardsVersionsResponse]{ + More: func(page ManagedClustersClientListSafeguardsVersionsResponse) bool { + return page.NextLink != nil && len(*page.NextLink) > 0 + }, + Fetcher: func(ctx context.Context, page *ManagedClustersClientListSafeguardsVersionsResponse) (ManagedClustersClientListSafeguardsVersionsResponse, error) { + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, "ManagedClustersClient.NewListSafeguardsVersionsPager") + nextLink := "" + if page != nil { + nextLink = *page.NextLink + } + resp, err := runtime.FetcherForNextLink(ctx, client.internal.Pipeline(), nextLink, func(ctx context.Context) (*policy.Request, error) { + return client.listSafeguardsVersionsCreateRequest(ctx, location, options) + }, nil) + if err != nil { + return ManagedClustersClientListSafeguardsVersionsResponse{}, err + } + return client.listSafeguardsVersionsHandleResponse(resp) + }, + Tracer: client.internal.Tracer(), + }) +} + +// listSafeguardsVersionsCreateRequest creates the ListSafeguardsVersions request. +func (client *ManagedClustersClient) listSafeguardsVersionsCreateRequest(ctx context.Context, location string, options *ManagedClustersClientListSafeguardsVersionsOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/providers/Microsoft.ContainerService/locations/{location}/safeguardsVersions" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if location == "" { + return nil, errors.New("parameter location cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{location}", url.PathEscape(location)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// listSafeguardsVersionsHandleResponse handles the ListSafeguardsVersions response. 
+func (client *ManagedClustersClient) listSafeguardsVersionsHandleResponse(resp *http.Response) (ManagedClustersClientListSafeguardsVersionsResponse, error) { + result := ManagedClustersClientListSafeguardsVersionsResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.SafeguardsAvailableVersionsList); err != nil { + return ManagedClustersClientListSafeguardsVersionsResponse{}, err + } + return result, nil +} + +// BeginResetAADProfile - WARNING: This API will be deprecated. Please see AKS-managed Azure Active Directory integration +// [https://aka.ms/aks-managed-aad] to update your cluster with AKS-managed Azure AD. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - parameters - The AAD profile to set on the Managed Cluster +// - options - ManagedClustersClientBeginResetAADProfileOptions contains the optional parameters for the ManagedClustersClient.BeginResetAADProfile +// method. +func (client *ManagedClustersClient) BeginResetAADProfile(ctx context.Context, resourceGroupName string, resourceName string, parameters ManagedClusterAADProfile, options *ManagedClustersClientBeginResetAADProfileOptions) (*runtime.Poller[ManagedClustersClientResetAADProfileResponse], error) { + if options == nil || options.ResumeToken == "" { + resp, err := client.resetAADProfile(ctx, resourceGroupName, resourceName, parameters, options) + if err != nil { + return nil, err + } + poller, err := runtime.NewPoller(resp, client.internal.Pipeline(), &runtime.NewPollerOptions[ManagedClustersClientResetAADProfileResponse]{ + FinalStateVia: runtime.FinalStateViaLocation, + Tracer: client.internal.Tracer(), + }) + return poller, err + } else { + return runtime.NewPollerFromResumeToken(options.ResumeToken, client.internal.Pipeline(), &runtime.NewPollerFromResumeTokenOptions[ManagedClustersClientResetAADProfileResponse]{ + Tracer: client.internal.Tracer(), + }) + } +} + +// ResetAADProfile - WARNING: This API will be deprecated. Please see AKS-managed Azure Active Directory integration [https://aka.ms/aks-managed-aad] +// to update your cluster with AKS-managed Azure AD. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +func (client *ManagedClustersClient) resetAADProfile(ctx context.Context, resourceGroupName string, resourceName string, parameters ManagedClusterAADProfile, options *ManagedClustersClientBeginResetAADProfileOptions) (*http.Response, error) { + var err error + const operationName = "ManagedClustersClient.BeginResetAADProfile" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.resetAADProfileCreateRequest(ctx, resourceGroupName, resourceName, parameters, options) + if err != nil { + return nil, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return nil, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK, http.StatusAccepted) { + err = runtime.NewResponseError(httpResp) + return nil, err + } + return httpResp, nil +} + +// resetAADProfileCreateRequest creates the ResetAADProfile request. 
+func (client *ManagedClustersClient) resetAADProfileCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, parameters ManagedClusterAADProfile, options *ManagedClustersClientBeginResetAADProfileOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/resetAADProfile" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodPost, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + if err := runtime.MarshalAsJSON(req, parameters); err != nil { + return nil, err + } + return req, nil +} + +// BeginResetServicePrincipalProfile - This action cannot be performed on a cluster that is not using a service principal +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - parameters - The service principal profile to set on the managed cluster. +// - options - ManagedClustersClientBeginResetServicePrincipalProfileOptions contains the optional parameters for the ManagedClustersClient.BeginResetServicePrincipalProfile +// method. +func (client *ManagedClustersClient) BeginResetServicePrincipalProfile(ctx context.Context, resourceGroupName string, resourceName string, parameters ManagedClusterServicePrincipalProfile, options *ManagedClustersClientBeginResetServicePrincipalProfileOptions) (*runtime.Poller[ManagedClustersClientResetServicePrincipalProfileResponse], error) { + if options == nil || options.ResumeToken == "" { + resp, err := client.resetServicePrincipalProfile(ctx, resourceGroupName, resourceName, parameters, options) + if err != nil { + return nil, err + } + poller, err := runtime.NewPoller(resp, client.internal.Pipeline(), &runtime.NewPollerOptions[ManagedClustersClientResetServicePrincipalProfileResponse]{ + FinalStateVia: runtime.FinalStateViaLocation, + Tracer: client.internal.Tracer(), + }) + return poller, err + } else { + return runtime.NewPollerFromResumeToken(options.ResumeToken, client.internal.Pipeline(), &runtime.NewPollerFromResumeTokenOptions[ManagedClustersClientResetServicePrincipalProfileResponse]{ + Tracer: client.internal.Tracer(), + }) + } +} + +// ResetServicePrincipalProfile - This action cannot be performed on a cluster that is not using a service principal +// If the operation fails it returns an *azcore.ResponseError type. 
+// +// Generated from API version 2023-11-02-preview +func (client *ManagedClustersClient) resetServicePrincipalProfile(ctx context.Context, resourceGroupName string, resourceName string, parameters ManagedClusterServicePrincipalProfile, options *ManagedClustersClientBeginResetServicePrincipalProfileOptions) (*http.Response, error) { + var err error + const operationName = "ManagedClustersClient.BeginResetServicePrincipalProfile" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.resetServicePrincipalProfileCreateRequest(ctx, resourceGroupName, resourceName, parameters, options) + if err != nil { + return nil, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return nil, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK, http.StatusAccepted) { + err = runtime.NewResponseError(httpResp) + return nil, err + } + return httpResp, nil +} + +// resetServicePrincipalProfileCreateRequest creates the ResetServicePrincipalProfile request. +func (client *ManagedClustersClient) resetServicePrincipalProfileCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, parameters ManagedClusterServicePrincipalProfile, options *ManagedClustersClientBeginResetServicePrincipalProfileOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/resetServicePrincipalProfile" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodPost, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + if err := runtime.MarshalAsJSON(req, parameters); err != nil { + return nil, err + } + return req, nil +} + +// BeginRotateClusterCertificates - See Certificate rotation [https://docs.microsoft.com/azure/aks/certificate-rotation] for +// more details about rotating managed cluster certificates. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - options - ManagedClustersClientBeginRotateClusterCertificatesOptions contains the optional parameters for the ManagedClustersClient.BeginRotateClusterCertificates +// method. 
+func (client *ManagedClustersClient) BeginRotateClusterCertificates(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientBeginRotateClusterCertificatesOptions) (*runtime.Poller[ManagedClustersClientRotateClusterCertificatesResponse], error) { + if options == nil || options.ResumeToken == "" { + resp, err := client.rotateClusterCertificates(ctx, resourceGroupName, resourceName, options) + if err != nil { + return nil, err + } + poller, err := runtime.NewPoller(resp, client.internal.Pipeline(), &runtime.NewPollerOptions[ManagedClustersClientRotateClusterCertificatesResponse]{ + FinalStateVia: runtime.FinalStateViaLocation, + Tracer: client.internal.Tracer(), + }) + return poller, err + } else { + return runtime.NewPollerFromResumeToken(options.ResumeToken, client.internal.Pipeline(), &runtime.NewPollerFromResumeTokenOptions[ManagedClustersClientRotateClusterCertificatesResponse]{ + Tracer: client.internal.Tracer(), + }) + } +} + +// RotateClusterCertificates - See Certificate rotation [https://docs.microsoft.com/azure/aks/certificate-rotation] for more +// details about rotating managed cluster certificates. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +func (client *ManagedClustersClient) rotateClusterCertificates(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientBeginRotateClusterCertificatesOptions) (*http.Response, error) { + var err error + const operationName = "ManagedClustersClient.BeginRotateClusterCertificates" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.rotateClusterCertificatesCreateRequest(ctx, resourceGroupName, resourceName, options) + if err != nil { + return nil, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return nil, err + } + if !runtime.HasStatusCode(httpResp, http.StatusAccepted, http.StatusNoContent) { + err = runtime.NewResponseError(httpResp) + return nil, err + } + return httpResp, nil +} + +// rotateClusterCertificatesCreateRequest creates the RotateClusterCertificates request. 
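+
+// Editor's note: a hypothetical usage sketch, not part of the generated file. Since
+// certificate rotation is long-running (the service answers 202/204), the Begin*
+// wrapper returns a poller and callers typically block on PollUntilDone:
+//
+//	poller, err := client.BeginRotateClusterCertificates(ctx, "<resource-group>", "<cluster>", nil)
+//	if err == nil {
+//		_, err = poller.PollUntilDone(ctx, nil) // returns once rotation succeeds or fails
+//	}
+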
+func (client *ManagedClustersClient) rotateClusterCertificatesCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientBeginRotateClusterCertificatesOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/rotateClusterCertificates" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodPost, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// BeginRotateServiceAccountSigningKeys - Rotates the service account signing keys of a managed cluster. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - options - ManagedClustersClientBeginRotateServiceAccountSigningKeysOptions contains the optional parameters for the ManagedClustersClient.BeginRotateServiceAccountSigningKeys +// method. +func (client *ManagedClustersClient) BeginRotateServiceAccountSigningKeys(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientBeginRotateServiceAccountSigningKeysOptions) (*runtime.Poller[ManagedClustersClientRotateServiceAccountSigningKeysResponse], error) { + if options == nil || options.ResumeToken == "" { + resp, err := client.rotateServiceAccountSigningKeys(ctx, resourceGroupName, resourceName, options) + if err != nil { + return nil, err + } + poller, err := runtime.NewPoller(resp, client.internal.Pipeline(), &runtime.NewPollerOptions[ManagedClustersClientRotateServiceAccountSigningKeysResponse]{ + FinalStateVia: runtime.FinalStateViaLocation, + Tracer: client.internal.Tracer(), + }) + return poller, err + } else { + return runtime.NewPollerFromResumeToken(options.ResumeToken, client.internal.Pipeline(), &runtime.NewPollerFromResumeTokenOptions[ManagedClustersClientRotateServiceAccountSigningKeysResponse]{ + Tracer: client.internal.Tracer(), + }) + } +} + +// RotateServiceAccountSigningKeys - Rotates the service account signing keys of a managed cluster. +// If the operation fails it returns an *azcore.ResponseError type. 
+// +// Generated from API version 2023-11-02-preview +func (client *ManagedClustersClient) rotateServiceAccountSigningKeys(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientBeginRotateServiceAccountSigningKeysOptions) (*http.Response, error) { + var err error + const operationName = "ManagedClustersClient.BeginRotateServiceAccountSigningKeys" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.rotateServiceAccountSigningKeysCreateRequest(ctx, resourceGroupName, resourceName, options) + if err != nil { + return nil, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return nil, err + } + if !runtime.HasStatusCode(httpResp, http.StatusAccepted, http.StatusNoContent) { + err = runtime.NewResponseError(httpResp) + return nil, err + } + return httpResp, nil +} + +// rotateServiceAccountSigningKeysCreateRequest creates the RotateServiceAccountSigningKeys request. +func (client *ManagedClustersClient) rotateServiceAccountSigningKeysCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientBeginRotateServiceAccountSigningKeysOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/rotateServiceAccountSigningKeys" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodPost, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// BeginRunCommand - AKS will create a pod to run the command. This is primarily useful for private clusters. For more information +// see AKS Run Command +// [https://docs.microsoft.com/azure/aks/private-clusters#aks-run-command-preview]. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - requestPayload - The run command request +// - options - ManagedClustersClientBeginRunCommandOptions contains the optional parameters for the ManagedClustersClient.BeginRunCommand +// method. 
+func (client *ManagedClustersClient) BeginRunCommand(ctx context.Context, resourceGroupName string, resourceName string, requestPayload RunCommandRequest, options *ManagedClustersClientBeginRunCommandOptions) (*runtime.Poller[ManagedClustersClientRunCommandResponse], error) { + if options == nil || options.ResumeToken == "" { + resp, err := client.runCommand(ctx, resourceGroupName, resourceName, requestPayload, options) + if err != nil { + return nil, err + } + poller, err := runtime.NewPoller(resp, client.internal.Pipeline(), &runtime.NewPollerOptions[ManagedClustersClientRunCommandResponse]{ + FinalStateVia: runtime.FinalStateViaLocation, + Tracer: client.internal.Tracer(), + }) + return poller, err + } else { + return runtime.NewPollerFromResumeToken(options.ResumeToken, client.internal.Pipeline(), &runtime.NewPollerFromResumeTokenOptions[ManagedClustersClientRunCommandResponse]{ + Tracer: client.internal.Tracer(), + }) + } +} + +// RunCommand - AKS will create a pod to run the command. This is primarily useful for private clusters. For more information +// see AKS Run Command +// [https://docs.microsoft.com/azure/aks/private-clusters#aks-run-command-preview]. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +func (client *ManagedClustersClient) runCommand(ctx context.Context, resourceGroupName string, resourceName string, requestPayload RunCommandRequest, options *ManagedClustersClientBeginRunCommandOptions) (*http.Response, error) { + var err error + const operationName = "ManagedClustersClient.BeginRunCommand" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.runCommandCreateRequest(ctx, resourceGroupName, resourceName, requestPayload, options) + if err != nil { + return nil, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return nil, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK, http.StatusAccepted) { + err = runtime.NewResponseError(httpResp) + return nil, err + } + return httpResp, nil +} + +// runCommandCreateRequest creates the RunCommand request. 
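+
+// Editor's note: an illustrative sketch (not generated code) of BeginRunCommand. The
+// command string and resource names are placeholders, and to.Ptr is assumed to come
+// from github.com/Azure/azure-sdk-for-go/sdk/azcore/to.
+//
+//	req := armcontainerservice.RunCommandRequest{Command: to.Ptr("kubectl get pods -A")}
+//	poller, err := client.BeginRunCommand(ctx, "<resource-group>", "<cluster>", req, nil)
+//	if err == nil {
+//		resp, err := poller.PollUntilDone(ctx, nil) // waits for the command pod to finish
+//		if err == nil && resp.Properties != nil && resp.Properties.Logs != nil {
+//			fmt.Print(*resp.Properties.Logs) // captured stdout/stderr of the command
+//		}
+//	}
+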
+func (client *ManagedClustersClient) runCommandCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, requestPayload RunCommandRequest, options *ManagedClustersClientBeginRunCommandOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/runCommand" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodPost, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + if err := runtime.MarshalAsJSON(req, requestPayload); err != nil { + return nil, err + } + return req, nil +} + +// BeginStart - See starting a cluster [https://docs.microsoft.com/azure/aks/start-stop-cluster] for more details about starting +// a cluster. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - options - ManagedClustersClientBeginStartOptions contains the optional parameters for the ManagedClustersClient.BeginStart +// method. +func (client *ManagedClustersClient) BeginStart(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientBeginStartOptions) (*runtime.Poller[ManagedClustersClientStartResponse], error) { + if options == nil || options.ResumeToken == "" { + resp, err := client.start(ctx, resourceGroupName, resourceName, options) + if err != nil { + return nil, err + } + poller, err := runtime.NewPoller(resp, client.internal.Pipeline(), &runtime.NewPollerOptions[ManagedClustersClientStartResponse]{ + FinalStateVia: runtime.FinalStateViaLocation, + Tracer: client.internal.Tracer(), + }) + return poller, err + } else { + return runtime.NewPollerFromResumeToken(options.ResumeToken, client.internal.Pipeline(), &runtime.NewPollerFromResumeTokenOptions[ManagedClustersClientStartResponse]{ + Tracer: client.internal.Tracer(), + }) + } +} + +// Start - See starting a cluster [https://docs.microsoft.com/azure/aks/start-stop-cluster] for more details about starting +// a cluster. +// If the operation fails it returns an *azcore.ResponseError type. 
+// +// Generated from API version 2023-11-02-preview +func (client *ManagedClustersClient) start(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientBeginStartOptions) (*http.Response, error) { + var err error + const operationName = "ManagedClustersClient.BeginStart" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.startCreateRequest(ctx, resourceGroupName, resourceName, options) + if err != nil { + return nil, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return nil, err + } + if !runtime.HasStatusCode(httpResp, http.StatusAccepted, http.StatusNoContent) { + err = runtime.NewResponseError(httpResp) + return nil, err + } + return httpResp, nil +} + +// startCreateRequest creates the Start request. +func (client *ManagedClustersClient) startCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientBeginStartOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/start" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodPost, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// BeginStop - This can only be performed on Azure Virtual Machine Scale set backed clusters. Stopping a cluster stops the +// control plane and agent nodes entirely, while maintaining all object and cluster state. A +// cluster does not accrue charges while it is stopped. See stopping a cluster [https://docs.microsoft.com/azure/aks/start-stop-cluster] +// for more details about stopping a cluster. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - options - ManagedClustersClientBeginStopOptions contains the optional parameters for the ManagedClustersClient.BeginStop +// method. 
+func (client *ManagedClustersClient) BeginStop(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientBeginStopOptions) (*runtime.Poller[ManagedClustersClientStopResponse], error) { + if options == nil || options.ResumeToken == "" { + resp, err := client.stop(ctx, resourceGroupName, resourceName, options) + if err != nil { + return nil, err + } + poller, err := runtime.NewPoller(resp, client.internal.Pipeline(), &runtime.NewPollerOptions[ManagedClustersClientStopResponse]{ + FinalStateVia: runtime.FinalStateViaLocation, + Tracer: client.internal.Tracer(), + }) + return poller, err + } else { + return runtime.NewPollerFromResumeToken(options.ResumeToken, client.internal.Pipeline(), &runtime.NewPollerFromResumeTokenOptions[ManagedClustersClientStopResponse]{ + Tracer: client.internal.Tracer(), + }) + } +} + +// Stop - This can only be performed on Azure Virtual Machine Scale set backed clusters. Stopping a cluster stops the control +// plane and agent nodes entirely, while maintaining all object and cluster state. A +// cluster does not accrue charges while it is stopped. See stopping a cluster [https://docs.microsoft.com/azure/aks/start-stop-cluster] +// for more details about stopping a cluster. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +func (client *ManagedClustersClient) stop(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientBeginStopOptions) (*http.Response, error) { + var err error + const operationName = "ManagedClustersClient.BeginStop" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.stopCreateRequest(ctx, resourceGroupName, resourceName, options) + if err != nil { + return nil, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return nil, err + } + if !runtime.HasStatusCode(httpResp, http.StatusAccepted, http.StatusNoContent) { + err = runtime.NewResponseError(httpResp) + return nil, err + } + return httpResp, nil +} + +// stopCreateRequest creates the Stop request. 
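+
+// Editor's note: a hypothetical sketch (not generated code) pairing BeginStop with
+// BeginStart; both follow the same poller pattern. Resource names are placeholders
+// and errors are ignored for brevity.
+//
+//	stopPoller, _ := client.BeginStop(ctx, "<resource-group>", "<cluster>", nil)
+//	_, _ = stopPoller.PollUntilDone(ctx, nil) // the cluster is deallocated once this returns
+//	startPoller, _ := client.BeginStart(ctx, "<resource-group>", "<cluster>", nil)
+//	_, _ = startPoller.PollUntilDone(ctx, nil)
+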
+func (client *ManagedClustersClient) stopCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClustersClientBeginStopOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/stop" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodPost, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// BeginUpdateTags - Updates tags on a managed cluster. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - parameters - Parameters supplied to the Update Managed Cluster Tags operation. +// - options - ManagedClustersClientBeginUpdateTagsOptions contains the optional parameters for the ManagedClustersClient.BeginUpdateTags +// method. +func (client *ManagedClustersClient) BeginUpdateTags(ctx context.Context, resourceGroupName string, resourceName string, parameters TagsObject, options *ManagedClustersClientBeginUpdateTagsOptions) (*runtime.Poller[ManagedClustersClientUpdateTagsResponse], error) { + if options == nil || options.ResumeToken == "" { + resp, err := client.updateTags(ctx, resourceGroupName, resourceName, parameters, options) + if err != nil { + return nil, err + } + poller, err := runtime.NewPoller(resp, client.internal.Pipeline(), &runtime.NewPollerOptions[ManagedClustersClientUpdateTagsResponse]{ + Tracer: client.internal.Tracer(), + }) + return poller, err + } else { + return runtime.NewPollerFromResumeToken(options.ResumeToken, client.internal.Pipeline(), &runtime.NewPollerFromResumeTokenOptions[ManagedClustersClientUpdateTagsResponse]{ + Tracer: client.internal.Tracer(), + }) + } +} + +// UpdateTags - Updates tags on a managed cluster. +// If the operation fails it returns an *azcore.ResponseError type. 
+// +// Generated from API version 2023-11-02-preview +func (client *ManagedClustersClient) updateTags(ctx context.Context, resourceGroupName string, resourceName string, parameters TagsObject, options *ManagedClustersClientBeginUpdateTagsOptions) (*http.Response, error) { + var err error + const operationName = "ManagedClustersClient.BeginUpdateTags" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.updateTagsCreateRequest(ctx, resourceGroupName, resourceName, parameters, options) + if err != nil { + return nil, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return nil, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return nil, err + } + return httpResp, nil +} + +// updateTagsCreateRequest creates the UpdateTags request. +func (client *ManagedClustersClient) updateTagsCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, parameters TagsObject, options *ManagedClustersClientBeginUpdateTagsOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodPatch, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + if err := runtime.MarshalAsJSON(req, parameters); err != nil { + return nil, err + } + return req, nil +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/managedclustersnapshots_client.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/managedclustersnapshots_client.go new file mode 100644 index 000000000000..677aa9964ce1 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/managedclustersnapshots_client.go @@ -0,0 +1,417 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. See License.txt in the project root for license information. +// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. 
+
+package armcontainerservice
+
+import (
+	"context"
+	"errors"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/arm"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/policy"
+	"github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+	"net/http"
+	"net/url"
+	"strings"
+)
+
+// ManagedClusterSnapshotsClient contains the methods for the ManagedClusterSnapshots group.
+// Don't use this type directly, use NewManagedClusterSnapshotsClient() instead.
+type ManagedClusterSnapshotsClient struct {
+	internal       *arm.Client
+	subscriptionID string
+}
+
+// NewManagedClusterSnapshotsClient creates a new instance of ManagedClusterSnapshotsClient with the specified values.
+//   - subscriptionID - The ID of the target subscription. The value must be a UUID.
+//   - credential - used to authorize requests. Usually a credential from azidentity.
+//   - options - pass nil to accept the default values.
+func NewManagedClusterSnapshotsClient(subscriptionID string, credential azcore.TokenCredential, options *arm.ClientOptions) (*ManagedClusterSnapshotsClient, error) {
+	cl, err := arm.NewClient(moduleName, moduleVersion, credential, options)
+	if err != nil {
+		return nil, err
+	}
+	client := &ManagedClusterSnapshotsClient{
+		subscriptionID: subscriptionID,
+		internal:       cl,
+	}
+	return client, nil
+}
+
+// CreateOrUpdate - Creates or updates a managed cluster snapshot.
+// If the operation fails it returns an *azcore.ResponseError type.
+//
+// Generated from API version 2023-11-02-preview
+//   - resourceGroupName - The name of the resource group. The name is case insensitive.
+//   - resourceName - The name of the managed cluster resource.
+//   - parameters - The managed cluster snapshot to create or update.
+//   - options - ManagedClusterSnapshotsClientCreateOrUpdateOptions contains the optional parameters for the ManagedClusterSnapshotsClient.CreateOrUpdate
+//     method.
+func (client *ManagedClusterSnapshotsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, resourceName string, parameters ManagedClusterSnapshot, options *ManagedClusterSnapshotsClientCreateOrUpdateOptions) (ManagedClusterSnapshotsClientCreateOrUpdateResponse, error) {
+	var err error
+	const operationName = "ManagedClusterSnapshotsClient.CreateOrUpdate"
+	ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName)
+	ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil)
+	defer func() { endSpan(err) }()
+	req, err := client.createOrUpdateCreateRequest(ctx, resourceGroupName, resourceName, parameters, options)
+	if err != nil {
+		return ManagedClusterSnapshotsClientCreateOrUpdateResponse{}, err
+	}
+	httpResp, err := client.internal.Pipeline().Do(req)
+	if err != nil {
+		return ManagedClusterSnapshotsClientCreateOrUpdateResponse{}, err
+	}
+	if !runtime.HasStatusCode(httpResp, http.StatusOK, http.StatusCreated) {
+		err = runtime.NewResponseError(httpResp)
+		return ManagedClusterSnapshotsClientCreateOrUpdateResponse{}, err
+	}
+	resp, err := client.createOrUpdateHandleResponse(httpResp)
+	return resp, err
+}
+
+// createOrUpdateCreateRequest creates the CreateOrUpdate request.
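+//
+// For context, a caller-side sketch of the public CreateOrUpdate wrapper above
+// (illustrative only; names are placeholders, and a real request would also set
+// the snapshot properties, e.g. the source resource ID, omitted here):
+//
+//	snap := armcontainerservice.ManagedClusterSnapshot{Location: to.Ptr("eastus")}
+//	resp, err := client.CreateOrUpdate(ctx, "my-rg", "my-snapshot", snap, nil)
+//	if err != nil { /* handle error */ }
+//	_ = resp.ManagedClusterSnapshot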
+func (client *ManagedClusterSnapshotsClient) createOrUpdateCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, parameters ManagedClusterSnapshot, options *ManagedClusterSnapshotsClientCreateOrUpdateOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedclustersnapshots/{resourceName}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodPut, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + if err := runtime.MarshalAsJSON(req, parameters); err != nil { + return nil, err + } + return req, nil +} + +// createOrUpdateHandleResponse handles the CreateOrUpdate response. +func (client *ManagedClusterSnapshotsClient) createOrUpdateHandleResponse(resp *http.Response) (ManagedClusterSnapshotsClientCreateOrUpdateResponse, error) { + result := ManagedClusterSnapshotsClientCreateOrUpdateResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.ManagedClusterSnapshot); err != nil { + return ManagedClusterSnapshotsClientCreateOrUpdateResponse{}, err + } + return result, nil +} + +// Delete - Deletes a managed cluster snapshot. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - options - ManagedClusterSnapshotsClientDeleteOptions contains the optional parameters for the ManagedClusterSnapshotsClient.Delete +// method. +func (client *ManagedClusterSnapshotsClient) Delete(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClusterSnapshotsClientDeleteOptions) (ManagedClusterSnapshotsClientDeleteResponse, error) { + var err error + const operationName = "ManagedClusterSnapshotsClient.Delete" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.deleteCreateRequest(ctx, resourceGroupName, resourceName, options) + if err != nil { + return ManagedClusterSnapshotsClientDeleteResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return ManagedClusterSnapshotsClientDeleteResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK, http.StatusNoContent) { + err = runtime.NewResponseError(httpResp) + return ManagedClusterSnapshotsClientDeleteResponse{}, err + } + return ManagedClusterSnapshotsClientDeleteResponse{}, nil +} + +// deleteCreateRequest creates the Delete request. 
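+//
+// Illustrative sketch (not generated code): failures from Delete, like the other
+// methods here, surface as *azcore.ResponseError, which callers commonly unwrap
+// with errors.As; names are placeholders.
+//
+//	_, err := client.Delete(ctx, "my-rg", "my-snapshot", nil)
+//	var respErr *azcore.ResponseError
+//	if errors.As(err, &respErr) {
+//		// respErr.StatusCode and respErr.ErrorCode describe the failure
+//	}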
+func (client *ManagedClusterSnapshotsClient) deleteCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClusterSnapshotsClientDeleteOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedclustersnapshots/{resourceName}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodDelete, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// Get - Gets a managed cluster snapshot. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - options - ManagedClusterSnapshotsClientGetOptions contains the optional parameters for the ManagedClusterSnapshotsClient.Get +// method. +func (client *ManagedClusterSnapshotsClient) Get(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClusterSnapshotsClientGetOptions) (ManagedClusterSnapshotsClientGetResponse, error) { + var err error + const operationName = "ManagedClusterSnapshotsClient.Get" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.getCreateRequest(ctx, resourceGroupName, resourceName, options) + if err != nil { + return ManagedClusterSnapshotsClientGetResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return ManagedClusterSnapshotsClientGetResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return ManagedClusterSnapshotsClientGetResponse{}, err + } + resp, err := client.getHandleResponse(httpResp) + return resp, err +} + +// getCreateRequest creates the Get request. 
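+//
+// Illustrative sketch (assumption: the azcore/runtime context helper shown
+// below; not generated code): a caller who also needs the raw HTTP response
+// for Get can capture it on the context.
+//
+//	var raw *http.Response
+//	ctx = runtime.WithCaptureResponse(ctx, &raw)
+//	result, err := client.Get(ctx, "my-rg", "my-snapshot", nil)
+//	if err != nil { /* handle error */ }
+//	_ = result.ManagedClusterSnapshot
+//	_ = raw // raw *http.Response of the GET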
+func (client *ManagedClusterSnapshotsClient) getCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, options *ManagedClusterSnapshotsClientGetOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedclustersnapshots/{resourceName}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// getHandleResponse handles the Get response. +func (client *ManagedClusterSnapshotsClient) getHandleResponse(resp *http.Response) (ManagedClusterSnapshotsClientGetResponse, error) { + result := ManagedClusterSnapshotsClientGetResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.ManagedClusterSnapshot); err != nil { + return ManagedClusterSnapshotsClientGetResponse{}, err + } + return result, nil +} + +// NewListPager - Gets a list of managed cluster snapshots in the specified subscription. +// +// Generated from API version 2023-11-02-preview +// - options - ManagedClusterSnapshotsClientListOptions contains the optional parameters for the ManagedClusterSnapshotsClient.NewListPager +// method. +func (client *ManagedClusterSnapshotsClient) NewListPager(options *ManagedClusterSnapshotsClientListOptions) *runtime.Pager[ManagedClusterSnapshotsClientListResponse] { + return runtime.NewPager(runtime.PagingHandler[ManagedClusterSnapshotsClientListResponse]{ + More: func(page ManagedClusterSnapshotsClientListResponse) bool { + return page.NextLink != nil && len(*page.NextLink) > 0 + }, + Fetcher: func(ctx context.Context, page *ManagedClusterSnapshotsClientListResponse) (ManagedClusterSnapshotsClientListResponse, error) { + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, "ManagedClusterSnapshotsClient.NewListPager") + nextLink := "" + if page != nil { + nextLink = *page.NextLink + } + resp, err := runtime.FetcherForNextLink(ctx, client.internal.Pipeline(), nextLink, func(ctx context.Context) (*policy.Request, error) { + return client.listCreateRequest(ctx, options) + }, nil) + if err != nil { + return ManagedClusterSnapshotsClientListResponse{}, err + } + return client.listHandleResponse(resp) + }, + Tracer: client.internal.Tracer(), + }) +} + +// listCreateRequest creates the List request. 
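+//
+// For context, a caller-side sketch of the NewListPager above (illustrative
+// only): pages are drained with More/NextPage until NextLink is exhausted.
+//
+//	pager := client.NewListPager(nil)
+//	for pager.More() {
+//		page, err := pager.NextPage(ctx)
+//		if err != nil { /* handle error */ }
+//		for _, snapshot := range page.Value {
+//			_ = snapshot // *ManagedClusterSnapshot
+//		}
+//	}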
+func (client *ManagedClusterSnapshotsClient) listCreateRequest(ctx context.Context, options *ManagedClusterSnapshotsClientListOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/providers/Microsoft.ContainerService/managedclustersnapshots" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// listHandleResponse handles the List response. +func (client *ManagedClusterSnapshotsClient) listHandleResponse(resp *http.Response) (ManagedClusterSnapshotsClientListResponse, error) { + result := ManagedClusterSnapshotsClientListResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.ManagedClusterSnapshotListResult); err != nil { + return ManagedClusterSnapshotsClientListResponse{}, err + } + return result, nil +} + +// NewListByResourceGroupPager - Lists managed cluster snapshots in the specified subscription and resource group. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - options - ManagedClusterSnapshotsClientListByResourceGroupOptions contains the optional parameters for the ManagedClusterSnapshotsClient.NewListByResourceGroupPager +// method. +func (client *ManagedClusterSnapshotsClient) NewListByResourceGroupPager(resourceGroupName string, options *ManagedClusterSnapshotsClientListByResourceGroupOptions) *runtime.Pager[ManagedClusterSnapshotsClientListByResourceGroupResponse] { + return runtime.NewPager(runtime.PagingHandler[ManagedClusterSnapshotsClientListByResourceGroupResponse]{ + More: func(page ManagedClusterSnapshotsClientListByResourceGroupResponse) bool { + return page.NextLink != nil && len(*page.NextLink) > 0 + }, + Fetcher: func(ctx context.Context, page *ManagedClusterSnapshotsClientListByResourceGroupResponse) (ManagedClusterSnapshotsClientListByResourceGroupResponse, error) { + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, "ManagedClusterSnapshotsClient.NewListByResourceGroupPager") + nextLink := "" + if page != nil { + nextLink = *page.NextLink + } + resp, err := runtime.FetcherForNextLink(ctx, client.internal.Pipeline(), nextLink, func(ctx context.Context) (*policy.Request, error) { + return client.listByResourceGroupCreateRequest(ctx, resourceGroupName, options) + }, nil) + if err != nil { + return ManagedClusterSnapshotsClientListByResourceGroupResponse{}, err + } + return client.listByResourceGroupHandleResponse(resp) + }, + Tracer: client.internal.Tracer(), + }) +} + +// listByResourceGroupCreateRequest creates the ListByResourceGroup request. 
+func (client *ManagedClusterSnapshotsClient) listByResourceGroupCreateRequest(ctx context.Context, resourceGroupName string, options *ManagedClusterSnapshotsClientListByResourceGroupOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedclustersnapshots" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// listByResourceGroupHandleResponse handles the ListByResourceGroup response. +func (client *ManagedClusterSnapshotsClient) listByResourceGroupHandleResponse(resp *http.Response) (ManagedClusterSnapshotsClientListByResourceGroupResponse, error) { + result := ManagedClusterSnapshotsClientListByResourceGroupResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.ManagedClusterSnapshotListResult); err != nil { + return ManagedClusterSnapshotsClientListByResourceGroupResponse{}, err + } + return result, nil +} + +// UpdateTags - Updates tags on a managed cluster snapshot. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - parameters - Parameters supplied to the Update managed cluster snapshot Tags operation. +// - options - ManagedClusterSnapshotsClientUpdateTagsOptions contains the optional parameters for the ManagedClusterSnapshotsClient.UpdateTags +// method. +func (client *ManagedClusterSnapshotsClient) UpdateTags(ctx context.Context, resourceGroupName string, resourceName string, parameters TagsObject, options *ManagedClusterSnapshotsClientUpdateTagsOptions) (ManagedClusterSnapshotsClientUpdateTagsResponse, error) { + var err error + const operationName = "ManagedClusterSnapshotsClient.UpdateTags" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.updateTagsCreateRequest(ctx, resourceGroupName, resourceName, parameters, options) + if err != nil { + return ManagedClusterSnapshotsClientUpdateTagsResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return ManagedClusterSnapshotsClientUpdateTagsResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return ManagedClusterSnapshotsClientUpdateTagsResponse{}, err + } + resp, err := client.updateTagsHandleResponse(httpResp) + return resp, err +} + +// updateTagsCreateRequest creates the UpdateTags request. 
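+//
+// Illustrative sketch (not generated code): unlike the managed-cluster
+// BeginUpdateTags long-running operation, snapshot UpdateTags is a single
+// synchronous PATCH; tag values are placeholders.
+//
+//	tags := armcontainerservice.TagsObject{Tags: map[string]*string{"team": to.Ptr("platform")}}
+//	resp, err := client.UpdateTags(ctx, "my-rg", "my-snapshot", tags, nil)
+//	if err != nil { /* handle error */ }
+//	_ = resp.ManagedClusterSnapshot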
+func (client *ManagedClusterSnapshotsClient) updateTagsCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, parameters TagsObject, options *ManagedClusterSnapshotsClientUpdateTagsOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedclustersnapshots/{resourceName}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodPatch, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + if err := runtime.MarshalAsJSON(req, parameters); err != nil { + return nil, err + } + return req, nil +} + +// updateTagsHandleResponse handles the UpdateTags response. +func (client *ManagedClusterSnapshotsClient) updateTagsHandleResponse(resp *http.Response) (ManagedClusterSnapshotsClientUpdateTagsResponse, error) { + result := ManagedClusterSnapshotsClientUpdateTagsResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.ManagedClusterSnapshot); err != nil { + return ManagedClusterSnapshotsClientUpdateTagsResponse{}, err + } + return result, nil +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/models.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/models.go new file mode 100644 index 000000000000..9abe513571f5 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/models.go @@ -0,0 +1,3026 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. See License.txt in the project root for license information. +// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +package armcontainerservice + +import "time" + +// AbsoluteMonthlySchedule - For schedules like: 'recur every month on the 15th' or 'recur every 3 months on the 20th'. +type AbsoluteMonthlySchedule struct { + // REQUIRED; The date of the month. + DayOfMonth *int32 + + // REQUIRED; Specifies the number of months between each set of occurrences. + IntervalMonths *int32 +} + +// AccessProfile - Profile for enabling a user to access a managed cluster. +type AccessProfile struct { + // Base64-encoded Kubernetes configuration file. + KubeConfig []byte +} + +// AgentPool - Agent Pool. +type AgentPool struct { + // Properties of an agent pool. + Properties *ManagedClusterAgentPoolProfileProperties + + // READ-ONLY; Resource ID. 
+ ID *string + + // READ-ONLY; The name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string + + // READ-ONLY; Resource type + Type *string +} + +type AgentPoolArtifactStreamingProfile struct { + // Artifact streaming speeds up the cold-start of containers on a node through on-demand image loading. To use this feature, + // container images must also enable artifact streaming on ACR. If not specified, + // the default is false. + Enabled *bool +} + +// AgentPoolAvailableVersions - The list of available versions for an agent pool. +type AgentPoolAvailableVersions struct { + // REQUIRED; Properties of agent pool available versions. + Properties *AgentPoolAvailableVersionsProperties + + // READ-ONLY; The ID of the agent pool version list. + ID *string + + // READ-ONLY; The name of the agent pool version list. + Name *string + + // READ-ONLY; Type of the agent pool version list. + Type *string +} + +// AgentPoolAvailableVersionsProperties - The list of available agent pool versions. +type AgentPoolAvailableVersionsProperties struct { + // List of versions available for agent pool. + AgentPoolVersions []*AgentPoolAvailableVersionsPropertiesAgentPoolVersionsItem +} + +type AgentPoolAvailableVersionsPropertiesAgentPoolVersionsItem struct { + // Whether this version is the default agent pool version. + Default *bool + + // Whether Kubernetes version is currently in preview. + IsPreview *bool + + // The Kubernetes version (major.minor.patch). + KubernetesVersion *string +} + +// AgentPoolDeleteMachinesParameter - Specifies a list of machine names from the agent pool to be deleted. +type AgentPoolDeleteMachinesParameter struct { + // REQUIRED; The agent pool machine names. + MachineNames []*string +} + +type AgentPoolGPUProfile struct { + // The default value is true when the vmSize of the agent pool contains a GPU, false otherwise. GPU Driver Installation can + // only be set true when VM has an associated GPU resource. Setting this field to + // false prevents automatic GPU driver installation. In that case, in order for the GPU to be usable, the user must perform + // GPU driver installation themselves. + InstallGPUDriver *bool +} + +// AgentPoolListResult - The response from the List Agent Pools operation. +type AgentPoolListResult struct { + // The list of agent pools. + Value []*AgentPool + + // READ-ONLY; The URL to get the next set of agent pool results. + NextLink *string +} + +// AgentPoolNetworkProfile - Network settings of an agent pool. +type AgentPoolNetworkProfile struct { + // The port ranges that are allowed to access. The specified ranges are allowed to overlap. + AllowedHostPorts []*PortRange + + // The IDs of the application security groups which agent pool will associate when created. + ApplicationSecurityGroups []*string + + // IPTags of instance-level public IPs. + NodePublicIPTags []*IPTag +} + +// AgentPoolSecurityProfile - The security settings of an agent pool. +type AgentPoolSecurityProfile struct { + // Secure Boot is a feature of Trusted Launch which ensures that only signed operating systems and drivers can boot. For more + // details, see aka.ms/aks/trustedlaunch. If not specified, the default is + // false. + EnableSecureBoot *bool + + // vTPM is a Trusted Launch feature for configuring a dedicated secure vault for keys and measurements held locally on the + // node. For more details, see aka.ms/aks/trustedlaunch. If not specified, the + // default is false. 
+ EnableVTPM *bool + + // SSH access method of an agent pool. + SSHAccess *AgentPoolSSHAccess +} + +// AgentPoolUpgradeProfile - The list of available upgrades for an agent pool. +type AgentPoolUpgradeProfile struct { + // REQUIRED; The properties of the agent pool upgrade profile. + Properties *AgentPoolUpgradeProfileProperties + + // READ-ONLY; The ID of the agent pool upgrade profile. + ID *string + + // READ-ONLY; The name of the agent pool upgrade profile. + Name *string + + // READ-ONLY; The type of the agent pool upgrade profile. + Type *string +} + +// AgentPoolUpgradeProfileProperties - The list of available upgrade versions. +type AgentPoolUpgradeProfileProperties struct { + // REQUIRED; The Kubernetes version (major.minor.patch). + KubernetesVersion *string + + // REQUIRED; The operating system type. The default is Linux. + OSType *OSType + + // The latest AKS supported node image version. + LatestNodeImageVersion *string + + // List of orchestrator types and versions available for upgrade. + Upgrades []*AgentPoolUpgradeProfilePropertiesUpgradesItem +} + +type AgentPoolUpgradeProfilePropertiesUpgradesItem struct { + // Whether the Kubernetes version is currently in preview. + IsPreview *bool + + // The Kubernetes version (major.minor.patch). + KubernetesVersion *string +} + +// AgentPoolUpgradeSettings - Settings for upgrading an agentpool +type AgentPoolUpgradeSettings struct { + // The amount of time (in minutes) to wait on eviction of pods and graceful termination per node. This eviction wait time + // honors waiting on pod disruption budgets. If this time is exceeded, the upgrade + // fails. If not specified, the default is 30 minutes. + DrainTimeoutInMinutes *int32 + + // This can either be set to an integer (e.g. '5') or a percentage (e.g. '50%'). If a percentage is specified, it is the percentage + // of the total agent pool size at the time of the upgrade. For + // percentages, fractional nodes are rounded up. If not specified, the default is 1. For more information, including best + // practices, see: + // https://docs.microsoft.com/azure/aks/upgrade-cluster#customize-node-surge-upgrade + MaxSurge *string + + // The amount of time (in minutes) to wait after draining a node and before reimaging it and moving on to next node. If not + // specified, the default is 0 minutes. + NodeSoakDurationInMinutes *int32 +} + +// AgentPoolWindowsProfile - The Windows agent pool's specific profile. +type AgentPoolWindowsProfile struct { + // The default value is false. Outbound NAT can only be disabled if the cluster outboundType is NAT Gateway and the Windows + // agent pool does not have node public IP enabled. + DisableOutboundNat *bool +} + +// AzureKeyVaultKms - Azure Key Vault key management service settings for the security profile. +type AzureKeyVaultKms struct { + // Whether to enable Azure Key Vault key management service. The default is false. + Enabled *bool + + // Identifier of Azure Key Vault key. See key identifier format [https://docs.microsoft.com/en-us/azure/key-vault/general/about-keys-secrets-certificates#vault-name-and-object-name] + // for more details. + // When Azure Key Vault key management service is enabled, this field is required and must be a valid key identifier. When + // Azure Key Vault key management service is disabled, leave the field empty. + KeyID *string + + // Network access of key vault. The possible values are Public and Private. Public means the key vault allows public access + // from all networks. 
Private means the key vault disables public access and + // enables private link. The default value is Public. + KeyVaultNetworkAccess *KeyVaultNetworkAccessTypes + + // Resource ID of key vault. When keyVaultNetworkAccess is Private, this field is required and must be a valid resource ID. + // When keyVaultNetworkAccess is Public, leave the field empty. + KeyVaultResourceID *string +} + +// ClusterUpgradeSettings - Settings for upgrading a cluster. +type ClusterUpgradeSettings struct { + // Settings for overrides. + OverrideSettings *UpgradeOverrideSettings +} + +// CommandResultProperties - The results of a run command +type CommandResultProperties struct { + // READ-ONLY; The exit code of the command + ExitCode *int32 + + // READ-ONLY; The time when the command finished. + FinishedAt *time.Time + + // READ-ONLY; The command output. + Logs *string + + // READ-ONLY; provisioning State + ProvisioningState *string + + // READ-ONLY; An explanation of why provisioningState is set to failed (if so). + Reason *string + + // READ-ONLY; The time when the command started. + StartedAt *time.Time +} + +// CompatibleVersions - Version information about a product/service that is compatible with a service mesh revision. +type CompatibleVersions struct { + // The product/service name. + Name *string + + // Product/service versions compatible with a service mesh add-on revision. + Versions []*string +} + +// CreationData - Data used when creating a target resource from a source resource. +type CreationData struct { + // This is the ARM ID of the source object to be used to create the target object. + SourceResourceID *string +} + +// CredentialResult - The credential result response. +type CredentialResult struct { + // READ-ONLY; The name of the credential. + Name *string + + // READ-ONLY; Base64-encoded Kubernetes configuration file. + Value []byte +} + +// CredentialResults - The list credential result response. +type CredentialResults struct { + // READ-ONLY; Base64-encoded Kubernetes configuration file. + Kubeconfigs []*CredentialResult +} + +// DailySchedule - For schedules like: 'recur every day' or 'recur every 3 days'. +type DailySchedule struct { + // REQUIRED; Specifies the number of days between each set of occurrences. + IntervalDays *int32 +} + +// DateSpan - For example, between '2022-12-23' and '2023-01-05'. +type DateSpan struct { + // REQUIRED; The end date of the date span. + End *time.Time + + // REQUIRED; The start date of the date span. + Start *time.Time +} + +// DelegatedResource - Delegated resource properties - internal use only. +type DelegatedResource struct { + // The source resource location - internal use only. + Location *string + + // The delegation id of the referral delegation (optional) - internal use only. + ReferralResource *string + + // The ARM resource id of the delegated resource - internal use only. + ResourceID *string + + // The tenant id of the delegated resource - internal use only. + TenantID *string +} + +// EndpointDependency - A domain name that AKS agent nodes are reaching at. +type EndpointDependency struct { + // The domain name of the dependency. + DomainName *string + + // The Ports and Protocols used when connecting to domainName. + EndpointDetails []*EndpointDetail +} + +// EndpointDetail - connect information from the AKS agent nodes to a single endpoint. +type EndpointDetail struct { + // Description of the detail + Description *string + + // An IP Address that Domain Name currently resolves to. 
+ IPAddress *string + + // The port an endpoint is connected to. + Port *int32 + + // The protocol used for connection + Protocol *string +} + +// ErrorAdditionalInfo - The resource management error additional info. +type ErrorAdditionalInfo struct { + // READ-ONLY; The additional info. + Info any + + // READ-ONLY; The additional info type. + Type *string +} + +// ErrorDetail - The error detail. +type ErrorDetail struct { + // READ-ONLY; The error additional info. + AdditionalInfo []*ErrorAdditionalInfo + + // READ-ONLY; The error code. + Code *string + + // READ-ONLY; The error details. + Details []*ErrorDetail + + // READ-ONLY; The error message. + Message *string + + // READ-ONLY; The error target. + Target *string +} + +// ExtendedLocation - The complex type of the extended location. +type ExtendedLocation struct { + // The name of the extended location. + Name *string + + // The type of the extended location. + Type *ExtendedLocationTypes +} + +// GuardrailsAvailableVersion - Available Guardrails Version +type GuardrailsAvailableVersion struct { + // REQUIRED; Whether the version is default or not and support info. + Properties *GuardrailsAvailableVersionsProperties + + // READ-ONLY; Fully qualified resource ID for the resource. E.g. "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}" + ID *string + + // READ-ONLY; The name of the resource + Name *string + + // READ-ONLY; Azure Resource Manager metadata containing createdBy and modifiedBy information. + SystemData *SystemData + + // READ-ONLY; The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts" + Type *string +} + +// GuardrailsAvailableVersionsList - Hold values properties, which is array of GuardrailsVersions +type GuardrailsAvailableVersionsList struct { + // Array of AKS supported Guardrails versions. + Value []*GuardrailsAvailableVersion + + // READ-ONLY; The URL to get the next Guardrails available version. + NextLink *string +} + +// GuardrailsAvailableVersionsProperties - Whether the version is default or not and support info. +type GuardrailsAvailableVersionsProperties struct { + // READ-ONLY + IsDefaultVersion *bool + + // READ-ONLY; Whether the version is preview or stable. + Support *GuardrailsSupport +} + +// IPTag - Contains the IPTag associated with the object. +type IPTag struct { + // The IP tag type. Example: RoutingPreference. + IPTagType *string + + // The value of the IP tag associated with the public IP. Example: Internet. + Tag *string +} + +// IstioCertificateAuthority - Istio Service Mesh Certificate Authority (CA) configuration. For now, we only support plugin +// certificates as described here https://aka.ms/asm-plugin-ca +type IstioCertificateAuthority struct { + // Plugin certificates information for Service Mesh. + Plugin *IstioPluginCertificateAuthority +} + +// IstioComponents - Istio components configuration. +type IstioComponents struct { + // Istio egress gateways. + EgressGateways []*IstioEgressGateway + + // Istio ingress gateways. + IngressGateways []*IstioIngressGateway +} + +// IstioEgressGateway - Istio egress gateway configuration. +type IstioEgressGateway struct { + // REQUIRED; Whether to enable the egress gateway. + Enabled *bool + + // NodeSelector for scheduling the egress gateway. + NodeSelector map[string]*string +} + +// IstioIngressGateway - Istio ingress gateway configuration. 
For now, we support up to one external ingress gateway named +// aks-istio-ingressgateway-external and one internal ingress gateway named +// aks-istio-ingressgateway-internal. +type IstioIngressGateway struct { + // REQUIRED; Whether to enable the ingress gateway. + Enabled *bool + + // REQUIRED; Mode of an ingress gateway. + Mode *IstioIngressGatewayMode +} + +// IstioPluginCertificateAuthority - Plugin certificates information for Service Mesh. +type IstioPluginCertificateAuthority struct { + // Certificate chain object name in Azure Key Vault. + CertChainObjectName *string + + // Intermediate certificate object name in Azure Key Vault. + CertObjectName *string + + // Intermediate certificate private key object name in Azure Key Vault. + KeyObjectName *string + + // The resource ID of the Key Vault. + KeyVaultID *string + + // Root certificate object name in Azure Key Vault. + RootCertObjectName *string +} + +// IstioServiceMesh - Istio service mesh configuration. +type IstioServiceMesh struct { + // Istio Service Mesh Certificate Authority (CA) configuration. For now, we only support plugin certificates as described + // here https://aka.ms/asm-plugin-ca + CertificateAuthority *IstioCertificateAuthority + + // Istio components configuration. + Components *IstioComponents + + // The list of revisions of the Istio control plane. When an upgrade is not in progress, this holds one value. When canary + // upgrade is in progress, this can only hold two consecutive values. For more + // information, see: https://learn.microsoft.com/en-us/azure/aks/istio-upgrade + Revisions []*string +} + +// KubeletConfig - See AKS custom node configuration [https://docs.microsoft.com/azure/aks/custom-node-configuration] for +// more details. +type KubeletConfig struct { + // Allowed list of unsafe sysctls or unsafe sysctl patterns (ending in *). + AllowedUnsafeSysctls []*string + + // The default is true. + CPUCfsQuota *bool + + // The default is '100ms.' Valid values are a sequence of decimal numbers with an optional fraction and a unit suffix. For + // example: '300ms', '2h45m'. Supported units are 'ns', 'us', 'ms', 's', 'm', and + // 'h'. + CPUCfsQuotaPeriod *string + + // The default is 'none'. See Kubernetes CPU management policies [https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#cpu-management-policies] + // for more information. Allowed + // values are 'none' and 'static'. + CPUManagerPolicy *string + + // The maximum number of container log files that can be present for a container. The number must be ≥ 2. + ContainerLogMaxFiles *int32 + + // The maximum size (e.g. 10Mi) of container log file before it is rotated. + ContainerLogMaxSizeMB *int32 + + // If set to true it will make the Kubelet fail to start if swap is enabled on the node. + FailSwapOn *bool + + // To disable image garbage collection, set to 100. The default is 85% + ImageGcHighThreshold *int32 + + // This cannot be set higher than imageGcHighThreshold. The default is 80% + ImageGcLowThreshold *int32 + + // The maximum number of processes per pod. + PodMaxPids *int32 + + // For more information see Kubernetes Topology Manager [https://kubernetes.io/docs/tasks/administer-cluster/topology-manager]. + // The default is 'none'. Allowed values are 'none', 'best-effort', + // 'restricted', and 'single-numa-node'. 
+ TopologyManagerPolicy *string +} + +// KubernetesPatchVersion - Kubernetes patch version profile +type KubernetesPatchVersion struct { + // Possible upgrade path for given patch version + Upgrades []*string +} + +// KubernetesVersion - Kubernetes version profile for given major.minor release. +type KubernetesVersion struct { + // Capabilities on this Kubernetes version. + Capabilities *KubernetesVersionCapabilities + + // Whether this version is in preview mode. + IsPreview *bool + + // Patch versions of Kubernetes release + PatchVersions map[string]*KubernetesPatchVersion + + // major.minor version of Kubernetes release + Version *string +} + +// KubernetesVersionCapabilities - Capabilities on this Kubernetes version. +type KubernetesVersionCapabilities struct { + SupportPlan []*KubernetesSupportPlan +} + +// KubernetesVersionListResult - Hold values properties, which is array of KubernetesVersion +type KubernetesVersionListResult struct { + // Array of AKS supported Kubernetes versions. + Values []*KubernetesVersion +} + +// LinuxOSConfig - See AKS custom node configuration [https://docs.microsoft.com/azure/aks/custom-node-configuration] for +// more details. +type LinuxOSConfig struct { + // The size in MB of a swap file that will be created on each node. + SwapFileSizeMB *int32 + + // Sysctl settings for Linux agent nodes. + Sysctls *SysctlConfig + + // Valid values are 'always', 'defer', 'defer+madvise', 'madvise' and 'never'. The default is 'madvise'. For more information + // see Transparent Hugepages + // [https://www.kernel.org/doc/html/latest/admin-guide/mm/transhuge.html#admin-guide-transhuge]. + TransparentHugePageDefrag *string + + // Valid values are 'always', 'madvise', and 'never'. The default is 'always'. For more information see Transparent Hugepages + // [https://www.kernel.org/doc/html/latest/admin-guide/mm/transhuge.html#admin-guide-transhuge]. + TransparentHugePageEnabled *string +} + +// LinuxProfile - Profile for Linux VMs in the container service cluster. +type LinuxProfile struct { + // REQUIRED; The administrator username to use for Linux VMs. + AdminUsername *string + + // REQUIRED; The SSH configuration for Linux-based VMs running on Azure. + SSH *SSHConfiguration +} + +// Machine - A machine. Contains details about the underlying virtual machine. A machine may be visible here but not in kubectl +// get nodes; if so it may be because the machine has not been registered with the +// Kubernetes API Server yet. +type Machine struct { + // READ-ONLY; Resource ID. + ID *string + + // READ-ONLY; The name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string + + // READ-ONLY; The properties of the machine + Properties *MachineProperties + + // READ-ONLY; Resource type + Type *string +} + +// MachineIPAddress - The machine IP address details. +type MachineIPAddress struct { + // READ-ONLY; To determine if address belongs IPv4 or IPv6 family. + Family *IPFamily + + // READ-ONLY; IPv4 or IPv6 address of the machine + IP *string +} + +// MachineListResult - The response from the List Machines operation. +type MachineListResult struct { + // The list of Machines in cluster. + Value []*Machine + + // READ-ONLY; The URL to get the next set of machine results. 
+ NextLink *string +} + +// MachineNetworkProperties - network properties of the machine +type MachineNetworkProperties struct { + // READ-ONLY; IPv4, IPv6 addresses of the machine + IPAddresses []*MachineIPAddress +} + +// MachineProperties - The properties of the machine +type MachineProperties struct { + // READ-ONLY; network properties of the machine + Network *MachineNetworkProperties + + // READ-ONLY; Arm resource id of the machine. It can be used to GET underlying VM Instance + ResourceID *string +} + +// MaintenanceConfiguration - See planned maintenance [https://docs.microsoft.com/azure/aks/planned-maintenance] for more +// information about planned maintenance. +type MaintenanceConfiguration struct { + // Properties of a default maintenance configuration. + Properties *MaintenanceConfigurationProperties + + // READ-ONLY; Resource ID. + ID *string + + // READ-ONLY; The name of the resource that is unique within a resource group. This name can be used to access the resource. + Name *string + + // READ-ONLY; The system metadata relating to this resource. + SystemData *SystemData + + // READ-ONLY; Resource type + Type *string +} + +// MaintenanceConfigurationListResult - The response from the List maintenance configurations operation. +type MaintenanceConfigurationListResult struct { + // The list of maintenance configurations. + Value []*MaintenanceConfiguration + + // READ-ONLY; The URL to get the next set of maintenance configuration results. + NextLink *string +} + +// MaintenanceConfigurationProperties - Properties used to configure planned maintenance for a Managed Cluster. +type MaintenanceConfigurationProperties struct { + // Maintenance window for the maintenance configuration. + MaintenanceWindow *MaintenanceWindow + + // Time slots on which upgrade is not allowed. + NotAllowedTime []*TimeSpan + + // If two array entries specify the same day of the week, the applied configuration is the union of times in both entries. + TimeInWeek []*TimeInWeek +} + +// MaintenanceWindow - Maintenance window used to configure scheduled auto-upgrade for a Managed Cluster. +type MaintenanceWindow struct { + // REQUIRED; Length of maintenance window range from 4 to 24 hours. + DurationHours *int32 + + // REQUIRED; Recurrence schedule for the maintenance window. + Schedule *Schedule + + // REQUIRED; The start time of the maintenance window. Accepted values are from '00:00' to '23:59'. 'utcOffset' applies to + // this field. For example: '02:00' with 'utcOffset: +02:00' means UTC time '00:00'. + StartTime *string + + // Date ranges on which upgrade is not allowed. 'utcOffset' applies to this field. For example, with 'utcOffset: +02:00' and + // 'dateSpan' being '2022-12-23' to '2023-01-03', maintenance will be blocked + // from '2022-12-22 22:00' to '2023-01-03 22:00' in UTC time. + NotAllowedDates []*DateSpan + + // The date the maintenance window activates. If the current date is before this date, the maintenance window is inactive + // and will not be used for upgrades. If not specified, the maintenance window will + // be active right away. + StartDate *time.Time + + // The UTC offset in format +/-HH:mm. For example, '+05:30' for IST and '-07:00' for PST. If not specified, the default is + // '+00:00'. + UTCOffset *string +} + +// ManagedCluster - Managed cluster. +type ManagedCluster struct { + // REQUIRED; The geo-location where the resource lives + Location *string + + // The extended location of the Virtual Machine. 
+ ExtendedLocation *ExtendedLocation + + // The identity of the managed cluster, if configured. + Identity *ManagedClusterIdentity + + // Properties of a managed cluster. + Properties *ManagedClusterProperties + + // The managed cluster SKU. + SKU *ManagedClusterSKU + + // Resource tags. + Tags map[string]*string + + // READ-ONLY; Fully qualified resource ID for the resource. E.g. "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}" + ID *string + + // READ-ONLY; The name of the resource + Name *string + + // READ-ONLY; Azure Resource Manager metadata containing createdBy and modifiedBy information. + SystemData *SystemData + + // READ-ONLY; The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts" + Type *string +} + +// ManagedClusterAADProfile - For more details see managed AAD on AKS [https://docs.microsoft.com/azure/aks/managed-aad]. +type ManagedClusterAADProfile struct { + // The list of AAD group object IDs that will have admin role of the cluster. + AdminGroupObjectIDs []*string + + // (DEPRECATED) The client AAD application ID. Learn more at https://aka.ms/aks/aad-legacy. + ClientAppID *string + + // Whether to enable Azure RBAC for Kubernetes authorization. + EnableAzureRBAC *bool + + // Whether to enable managed AAD. + Managed *bool + + // (DEPRECATED) The server AAD application ID. Learn more at https://aka.ms/aks/aad-legacy. + ServerAppID *string + + // (DEPRECATED) The server AAD application secret. Learn more at https://aka.ms/aks/aad-legacy. + ServerAppSecret *string + + // The AAD tenant ID to use for authentication. If not specified, will use the tenant of the deployment subscription. + TenantID *string +} + +// ManagedClusterAIToolchainOperatorProfile - When enabling the operator, a set of AKS managed CRDs and controllers will be +// installed in the cluster. The operator automates the deployment of OSS models for inference and/or training purposes. It +// provides a set of preset models and enables distributed inference against them. +type ManagedClusterAIToolchainOperatorProfile struct { + // Indicates if AI toolchain operator enabled or not. + Enabled *bool +} + +// ManagedClusterAPIServerAccessProfile - Access profile for managed cluster API server. +type ManagedClusterAPIServerAccessProfile struct { + // IP ranges are specified in CIDR format, e.g. 137.117.106.88/29. This feature is not compatible with clusters that use Public + // IP Per Node, or clusters that are using a Basic Load Balancer. For more + // information see API server authorized IP ranges [https://docs.microsoft.com/azure/aks/api-server-authorized-ip-ranges]. + AuthorizedIPRanges []*string + + // Whether to disable run command for the cluster or not. + DisableRunCommand *bool + + // For more details, see Creating a private AKS cluster [https://docs.microsoft.com/azure/aks/private-clusters]. + EnablePrivateCluster *bool + + // Whether to create additional public FQDN for private cluster or not. + EnablePrivateClusterPublicFQDN *bool + + // Whether to enable apiserver vnet integration for the cluster or not. + EnableVnetIntegration *bool + + // The default is System. For more details see configure private DNS zone [https://docs.microsoft.com/azure/aks/private-clusters#configure-private-dns-zone]. + // Allowed values are 'system' and 'none'. + PrivateDNSZone *string + + // It is required when: 1. creating a new cluster with BYO Vnet; 2. 
updating an existing cluster to enable apiserver vnet + // integration. + SubnetID *string +} + +// ManagedClusterAccessProfile - Managed cluster Access Profile. +type ManagedClusterAccessProfile struct { + // REQUIRED; The geo-location where the resource lives + Location *string + + // AccessProfile of a managed cluster. + Properties *AccessProfile + + // Resource tags. + Tags map[string]*string + + // READ-ONLY; Fully qualified resource ID for the resource. E.g. "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}" + ID *string + + // READ-ONLY; The name of the resource + Name *string + + // READ-ONLY; Azure Resource Manager metadata containing createdBy and modifiedBy information. + SystemData *SystemData + + // READ-ONLY; The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts" + Type *string +} + +// ManagedClusterAddonProfile - A Kubernetes add-on profile for a managed cluster. +type ManagedClusterAddonProfile struct { + // REQUIRED; Whether the add-on is enabled or not. + Enabled *bool + + // Key-value pairs for configuring an add-on. + Config map[string]*string + + // READ-ONLY; Information of user assigned identity used by this add-on. + Identity *ManagedClusterAddonProfileIdentity +} + +// ManagedClusterAddonProfileIdentity - Information of user assigned identity used by this add-on. +type ManagedClusterAddonProfileIdentity struct { + // The client ID of the user assigned identity. + ClientID *string + + // The object ID of the user assigned identity. + ObjectID *string + + // The resource ID of the user assigned identity. + ResourceID *string +} + +// ManagedClusterAgentPoolProfile - Profile for the container service agent pool. +type ManagedClusterAgentPoolProfile struct { + // REQUIRED; Windows agent pool names must be 6 characters or less. + Name *string + + // Configuration for using artifact streaming on AKS. + ArtifactStreamingProfile *AgentPoolArtifactStreamingProfile + + // The list of Availability zones to use for nodes. This can only be specified if the AgentPoolType property is 'VirtualMachineScaleSets'. + AvailabilityZones []*string + + // AKS will associate the specified agent pool with the Capacity Reservation Group. + CapacityReservationGroupID *string + + // Number of agents (VMs) to host docker containers. Allowed values must be in the range of 0 to 1000 (inclusive) for user + // pools and in the range of 1 to 1000 (inclusive) for system pools. The default + // value is 1. + Count *int32 + + // CreationData to be used to specify the source Snapshot ID if the node pool will be created/upgraded using a snapshot. + CreationData *CreationData + + // Whether to enable auto-scaler + EnableAutoScaling *bool + + // When set to true, AKS adds a label to the node indicating that the feature is enabled and deploys a daemonset along with + // host services to sync custom certificate authorities from user-provided list of + // base64 encoded certificates into node trust stores. Defaults to false. + EnableCustomCATrust *bool + + // This is only supported on certain VM sizes and in certain Azure regions. For more information, see: https://docs.microsoft.com/azure/aks/enable-host-encryption + EnableEncryptionAtHost *bool + + // See Add a FIPS-enabled node pool [https://docs.microsoft.com/azure/aks/use-multiple-node-pools#add-a-fips-enabled-node-pool-preview] + // for more details. 
+ EnableFIPS *bool + + // Some scenarios may require nodes in a node pool to receive their own dedicated public IP addresses. A common scenario is + // for gaming workloads, where a console needs to make a direct connection to a + // cloud virtual machine to minimize hops. For more information see assigning a public IP per node + // [https://docs.microsoft.com/azure/aks/use-multiple-node-pools#assign-a-public-ip-per-node-for-your-node-pools]. The default + // is false. + EnableNodePublicIP *bool + + // Whether to enable UltraSSD + EnableUltraSSD *bool + + // GPUInstanceProfile to be used to specify GPU MIG instance profile for supported GPU VM SKU. + GpuInstanceProfile *GPUInstanceProfile + + // The GPU settings of an agent pool. + GpuProfile *AgentPoolGPUProfile + + // This is of the form: /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/hostGroups/{hostGroupName}. + // For more information see Azure dedicated hosts + // [https://docs.microsoft.com/azure/virtual-machines/dedicated-hosts]. + HostGroupID *string + + // The Kubelet configuration on the agent pool nodes. + KubeletConfig *KubeletConfig + + // Determines the placement of emptyDir volumes, container runtime data root, and Kubelet ephemeral storage. + KubeletDiskType *KubeletDiskType + + // The OS configuration of Linux agent nodes. + LinuxOSConfig *LinuxOSConfig + + // The maximum number of nodes for auto-scaling + MaxCount *int32 + + // The maximum number of pods that can run on a node. + MaxPods *int32 + + // A base64-encoded string which will be written to /etc/motd after decoding. This allows customization of the message of + // the day for Linux nodes. It must not be specified for Windows nodes. It must be a + // static string (i.e., will be printed raw and not be executed as a script). + MessageOfTheDay *string + + // The minimum number of nodes for auto-scaling + MinCount *int32 + + // A cluster must have at least one 'System' Agent Pool at all times. For additional information on agent pool restrictions + // and best practices, see: https://docs.microsoft.com/azure/aks/use-system-pools + Mode *AgentPoolMode + + // Network-related settings of an agent pool. + NetworkProfile *AgentPoolNetworkProfile + + // These taints will not be reconciled by AKS and can be removed with a kubectl call. This field can be modified after node + // pool is created, but nodes will not be recreated with new taints until another + // operation that requires recreation (e.g. node image upgrade) happens. These taints allow for required configuration to + // run before the node is ready to accept workloads, for example + // 'key1=value1:NoSchedule' that then can be removed with kubectl taint nodes node1 key1=value1:NoSchedule- + NodeInitializationTaints []*string + + // The node labels to be persisted across all nodes in agent pool. + NodeLabels map[string]*string + + // This is of the form: /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/publicIPPrefixes/{publicIPPrefixName} + NodePublicIPPrefixID *string + + // The taints added to new nodes during node pool create and scale. For example, key=value:NoSchedule. + NodeTaints []*string + + // OS Disk Size in GB to be used to specify the disk size for every machine in the master/agent pool. If you specify 0, it + // will apply the default osDisk size according to the vmSize specified. 
+ OSDiskSizeGB *int32 + + // The default is 'Ephemeral' if the VM supports it and has a cache disk larger than the requested OSDiskSizeGB. Otherwise, + // defaults to 'Managed'. May not be changed after creation. For more information + // see Ephemeral OS [https://docs.microsoft.com/azure/aks/cluster-configuration#ephemeral-os]. + OSDiskType *OSDiskType + + // Specifies the OS SKU used by the agent pool. If not specified, the default is Ubuntu if OSType=Linux or Windows2019 if + // OSType=Windows. And the default Windows OSSKU will be changed to Windows2022 + // after Windows2019 is deprecated. + OSSKU *OSSKU + + // The operating system type. The default is Linux. + OSType *OSType + + // Both patch version <major.minor.patch> and <major.minor> are supported. When <major.minor> is specified, the latest supported patch version is chosen automatically. Updating + // the agent pool with the same <major.minor> once it has been created will not trigger an + // upgrade, even if a newer patch version is available. As a best practice, you should upgrade all node pools in an AKS cluster + // to the same Kubernetes version. The node pool version must have the same + // major version as the control plane. The node pool minor version must be within two minor versions of the control plane + // version. The node pool version cannot be greater than the control plane version. + // For more information see upgrading a node pool [https://docs.microsoft.com/azure/aks/use-multiple-node-pools#upgrade-a-node-pool]. + OrchestratorVersion *string + + // If omitted, pod IPs are statically assigned on the node subnet (see vnetSubnetID for more details). This is of the form: + // /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{virtualNetworkName}/subnets/{subnetName} + PodSubnetID *string + + // When an Agent Pool is first created it is initially Running. The Agent Pool can be stopped by setting this field to Stopped. + // A stopped Agent Pool stops all of its VMs and does not accrue billing + // charges. An Agent Pool can only be stopped if it is Running and provisioning state is Succeeded + PowerState *PowerState + + // The ID for Proximity Placement Group. + ProximityPlacementGroupID *string + + // This also affects the cluster autoscaler behavior. If not specified, it defaults to Delete. + ScaleDownMode *ScaleDownMode + + // This cannot be specified unless the scaleSetPriority is 'Spot'. If not specified, the default is 'Delete'. + ScaleSetEvictionPolicy *ScaleSetEvictionPolicy + + // The Virtual Machine Scale Set priority. If not specified, the default is 'Regular'. + ScaleSetPriority *ScaleSetPriority + + // The security settings of an agent pool. + SecurityProfile *AgentPoolSecurityProfile + + // Possible values are any decimal value greater than zero or -1 which indicates the willingness to pay any on-demand price. + // For more details on spot pricing, see spot VMs pricing + // [https://docs.microsoft.com/azure/virtual-machines/spot-vms#pricing] + SpotMaxPrice *float32 + + // The tags to be persisted on the agent pool virtual machine scale set. + Tags map[string]*string + + // The type of Agent Pool. + Type *AgentPoolType + + // Settings for upgrading the agent pool + UpgradeSettings *AgentPoolUpgradeSettings + + // VM size availability varies by region. If a node contains insufficient compute resources (memory, cpu, etc) pods might + // fail to run correctly.
For more details on restricted VM sizes, see: + // https://docs.microsoft.com/azure/aks/quotas-skus-regions + VMSize *string + + // The status of nodes in a VirtualMachines agent pool. + VirtualMachineNodesStatus []*VirtualMachineNodes + + // Specifications on VirtualMachines agent pool. + VirtualMachinesProfile *VirtualMachinesProfile + + // If this is not specified, a VNET and subnet will be generated and used. If no podSubnetID is specified, this applies to + // nodes and pods, otherwise it applies to just nodes. This is of the form: + // /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{virtualNetworkName}/subnets/{subnetName} + VnetSubnetID *string + + // The Windows agent pool's specific profile. + WindowsProfile *AgentPoolWindowsProfile + + // Determines the type of workload a node can run. + WorkloadRuntime *WorkloadRuntime + + // READ-ONLY; If orchestratorVersion was a fully specified version <major.minor.patch>, this field will be exactly equal to it. If orchestratorVersion + // was <major.minor>, this field will contain the full <major.minor.patch> version being used. + CurrentOrchestratorVersion *string + + // READ-ONLY; The version of node image + NodeImageVersion *string + + // READ-ONLY; The current deployment or provisioning state. + ProvisioningState *string +} + +// ManagedClusterAgentPoolProfileProperties - Properties for the container service agent pool profile. +type ManagedClusterAgentPoolProfileProperties struct { + // Configuration for using artifact streaming on AKS. + ArtifactStreamingProfile *AgentPoolArtifactStreamingProfile + + // The list of Availability zones to use for nodes. This can only be specified if the AgentPoolType property is 'VirtualMachineScaleSets'. + AvailabilityZones []*string + + // AKS will associate the specified agent pool with the Capacity Reservation Group. + CapacityReservationGroupID *string + + // Number of agents (VMs) to host docker containers. Allowed values must be in the range of 0 to 1000 (inclusive) for user + // pools and in the range of 1 to 1000 (inclusive) for system pools. The default + // value is 1. + Count *int32 + + // CreationData to be used to specify the source Snapshot ID if the node pool will be created/upgraded using a snapshot. + CreationData *CreationData + + // Whether to enable auto-scaler + EnableAutoScaling *bool + + // When set to true, AKS adds a label to the node indicating that the feature is enabled and deploys a daemonset along with + // host services to sync custom certificate authorities from user-provided list of + // base64 encoded certificates into node trust stores. Defaults to false. + EnableCustomCATrust *bool + + // This is only supported on certain VM sizes and in certain Azure regions. For more information, see: https://docs.microsoft.com/azure/aks/enable-host-encryption + EnableEncryptionAtHost *bool + + // See Add a FIPS-enabled node pool [https://docs.microsoft.com/azure/aks/use-multiple-node-pools#add-a-fips-enabled-node-pool-preview] + // for more details. + EnableFIPS *bool + + // Some scenarios may require nodes in a node pool to receive their own dedicated public IP addresses. A common scenario is + // for gaming workloads, where a console needs to make a direct connection to a + // cloud virtual machine to minimize hops. For more information see assigning a public IP per node + // [https://docs.microsoft.com/azure/aks/use-multiple-node-pools#assign-a-public-ip-per-node-for-your-node-pools]. The default + // is false.
+ EnableNodePublicIP *bool + + // Whether to enable UltraSSD + EnableUltraSSD *bool + + // GPUInstanceProfile to be used to specify GPU MIG instance profile for supported GPU VM SKU. + GpuInstanceProfile *GPUInstanceProfile + + // The GPU settings of an agent pool. + GpuProfile *AgentPoolGPUProfile + + // This is of the form: /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/hostGroups/{hostGroupName}. + // For more information see Azure dedicated hosts + // [https://docs.microsoft.com/azure/virtual-machines/dedicated-hosts]. + HostGroupID *string + + // The Kubelet configuration on the agent pool nodes. + KubeletConfig *KubeletConfig + + // Determines the placement of emptyDir volumes, container runtime data root, and Kubelet ephemeral storage. + KubeletDiskType *KubeletDiskType + + // The OS configuration of Linux agent nodes. + LinuxOSConfig *LinuxOSConfig + + // The maximum number of nodes for auto-scaling + MaxCount *int32 + + // The maximum number of pods that can run on a node. + MaxPods *int32 + + // A base64-encoded string which will be written to /etc/motd after decoding. This allows customization of the message of + // the day for Linux nodes. It must not be specified for Windows nodes. It must be a + // static string (i.e., will be printed raw and not be executed as a script). + MessageOfTheDay *string + + // The minimum number of nodes for auto-scaling + MinCount *int32 + + // A cluster must have at least one 'System' Agent Pool at all times. For additional information on agent pool restrictions + // and best practices, see: https://docs.microsoft.com/azure/aks/use-system-pools + Mode *AgentPoolMode + + // Network-related settings of an agent pool. + NetworkProfile *AgentPoolNetworkProfile + + // These taints will not be reconciled by AKS and can be removed with a kubectl call. This field can be modified after node + // pool is created, but nodes will not be recreated with new taints until another + // operation that requires recreation (e.g. node image upgrade) happens. These taints allow for required configuration to + // run before the node is ready to accept workloads, for example + // 'key1=value1:NoSchedule' that then can be removed with kubectl taint nodes node1 key1=value1:NoSchedule- + NodeInitializationTaints []*string + + // The node labels to be persisted across all nodes in agent pool. + NodeLabels map[string]*string + + // This is of the form: /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/publicIPPrefixes/{publicIPPrefixName} + NodePublicIPPrefixID *string + + // The taints added to new nodes during node pool create and scale. For example, key=value:NoSchedule. + NodeTaints []*string + + // OS Disk Size in GB to be used to specify the disk size for every machine in the master/agent pool. If you specify 0, it + // will apply the default osDisk size according to the vmSize specified. + OSDiskSizeGB *int32 + + // The default is 'Ephemeral' if the VM supports it and has a cache disk larger than the requested OSDiskSizeGB. Otherwise, + // defaults to 'Managed'. May not be changed after creation. For more information + // see Ephemeral OS [https://docs.microsoft.com/azure/aks/cluster-configuration#ephemeral-os]. + OSDiskType *OSDiskType + + // Specifies the OS SKU used by the agent pool. If not specified, the default is Ubuntu if OSType=Linux or Windows2019 if + // OSType=Windows. 
And the default Windows OSSKU will be changed to Windows2022 + // after Windows2019 is deprecated. + OSSKU *OSSKU + + // The operating system type. The default is Linux. + OSType *OSType + + // Both patch version <major.minor.patch> and <major.minor> are supported. When <major.minor> is specified, the latest supported patch version is chosen automatically. Updating + // the agent pool with the same <major.minor> once it has been created will not trigger an + // upgrade, even if a newer patch version is available. As a best practice, you should upgrade all node pools in an AKS cluster + // to the same Kubernetes version. The node pool version must have the same + // major version as the control plane. The node pool minor version must be within two minor versions of the control plane + // version. The node pool version cannot be greater than the control plane version. + // For more information see upgrading a node pool [https://docs.microsoft.com/azure/aks/use-multiple-node-pools#upgrade-a-node-pool]. + OrchestratorVersion *string + + // If omitted, pod IPs are statically assigned on the node subnet (see vnetSubnetID for more details). This is of the form: + // /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{virtualNetworkName}/subnets/{subnetName} + PodSubnetID *string + + // When an Agent Pool is first created it is initially Running. The Agent Pool can be stopped by setting this field to Stopped. + // A stopped Agent Pool stops all of its VMs and does not accrue billing + // charges. An Agent Pool can only be stopped if it is Running and provisioning state is Succeeded + PowerState *PowerState + + // The ID for Proximity Placement Group. + ProximityPlacementGroupID *string + + // This also affects the cluster autoscaler behavior. If not specified, it defaults to Delete. + ScaleDownMode *ScaleDownMode + + // This cannot be specified unless the scaleSetPriority is 'Spot'. If not specified, the default is 'Delete'. + ScaleSetEvictionPolicy *ScaleSetEvictionPolicy + + // The Virtual Machine Scale Set priority. If not specified, the default is 'Regular'. + ScaleSetPriority *ScaleSetPriority + + // The security settings of an agent pool. + SecurityProfile *AgentPoolSecurityProfile + + // Possible values are any decimal value greater than zero or -1 which indicates the willingness to pay any on-demand price. + // For more details on spot pricing, see spot VMs pricing + // [https://docs.microsoft.com/azure/virtual-machines/spot-vms#pricing] + SpotMaxPrice *float32 + + // The tags to be persisted on the agent pool virtual machine scale set. + Tags map[string]*string + + // The type of Agent Pool. + Type *AgentPoolType + + // Settings for upgrading the agent pool + UpgradeSettings *AgentPoolUpgradeSettings + + // VM size availability varies by region. If a node contains insufficient compute resources (memory, cpu, etc) pods might + // fail to run correctly. For more details on restricted VM sizes, see: + // https://docs.microsoft.com/azure/aks/quotas-skus-regions + VMSize *string + + // The status of nodes in a VirtualMachines agent pool. + VirtualMachineNodesStatus []*VirtualMachineNodes + + // Specifications on VirtualMachines agent pool. + VirtualMachinesProfile *VirtualMachinesProfile + + // If this is not specified, a VNET and subnet will be generated and used. If no podSubnetID is specified, this applies to + // nodes and pods, otherwise it applies to just nodes.
This is of the form: + // /subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Network/virtualNetworks/{virtualNetworkName}/subnets/{subnetName} + VnetSubnetID *string + + // The Windows agent pool's specific profile. + WindowsProfile *AgentPoolWindowsProfile + + // Determines the type of workload a node can run. + WorkloadRuntime *WorkloadRuntime + + // READ-ONLY; If orchestratorVersion was a fully specified version <major.minor.patch>, this field will be exactly equal to it. If orchestratorVersion + // was <major.minor>, this field will contain the full <major.minor.patch> version being used. + CurrentOrchestratorVersion *string + + // READ-ONLY; The version of node image + NodeImageVersion *string + + // READ-ONLY; The current deployment or provisioning state. + ProvisioningState *string +} + +// ManagedClusterAutoUpgradeProfile - Auto upgrade profile for a managed cluster. +type ManagedClusterAutoUpgradeProfile struct { + // The default is Unmanaged, but may change to either NodeImage or SecurityPatch at GA. + NodeOSUpgradeChannel *NodeOSUpgradeChannel + + // For more information see setting the AKS cluster auto-upgrade channel [https://docs.microsoft.com/azure/aks/upgrade-cluster#set-auto-upgrade-channel]. + UpgradeChannel *UpgradeChannel +} + +// ManagedClusterAzureMonitorProfile - Prometheus addon profile for the container service cluster +type ManagedClusterAzureMonitorProfile struct { + // Logs profile for the Azure Monitor Infrastructure and Application Logs. Collect out-of-the-box Kubernetes infrastructure + // & application logs to send to Azure Monitor. See + // aka.ms/AzureMonitorContainerInsights for an overview. + Logs *ManagedClusterAzureMonitorProfileLogs + + // Metrics profile for the prometheus service addon + Metrics *ManagedClusterAzureMonitorProfileMetrics +} + +// ManagedClusterAzureMonitorProfileAppMonitoring - Application Monitoring Profile for Kubernetes Application Container. Collects +// application logs, metrics and traces through auto-instrumentation of the application using Azure Monitor OpenTelemetry +// based SDKs. See aka.ms/AzureMonitorApplicationMonitoring for an overview. +type ManagedClusterAzureMonitorProfileAppMonitoring struct { + // Indicates if Application Monitoring is enabled or not. + Enabled *bool +} + +// ManagedClusterAzureMonitorProfileAppMonitoringOpenTelemetryMetrics - Application Monitoring Open Telemetry Metrics Profile +// for Kubernetes Application Container Metrics. Collects OpenTelemetry metrics through auto-instrumentation of the application +// using Azure Monitor +// OpenTelemetry based SDKs. See aka.ms/AzureMonitorApplicationMonitoring for an overview. +type ManagedClusterAzureMonitorProfileAppMonitoringOpenTelemetryMetrics struct { + // Indicates if Application Monitoring Open Telemetry Metrics is enabled or not. + Enabled *bool +} +
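+// Illustrative usage sketch -- not part of the generated models -- showing how the Azure Monitor
+// profile types above compose. It assumes to.Ptr from the real helper package
+// github.com/Azure/azure-sdk-for-go/sdk/azcore/to is imported; the workspace resource ID is a placeholder.
+func exampleAzureMonitorProfile() *ManagedClusterAzureMonitorProfile {
+	return &ManagedClusterAzureMonitorProfile{
+		// Collect out-of-the-box infrastructure and application logs via Container Insights.
+		Logs: &ManagedClusterAzureMonitorProfileLogs{
+			ContainerInsights: &ManagedClusterAzureMonitorProfileContainerInsights{
+				Enabled:                         to.Ptr(true),
+				LogAnalyticsWorkspaceResourceID: to.Ptr("/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<ws>"),
+			},
+		},
+		// Enable the managed Prometheus metrics collector (REQUIRED field on the metrics profile).
+		Metrics: &ManagedClusterAzureMonitorProfileMetrics{
+			Enabled: to.Ptr(true),
+		},
+	}
+}
+
+// ManagedClusterAzureMonitorProfileContainerInsights - Azure Monitor Container Insights Profile for Kubernetes Events, Inventory
+// and Container stdout & stderr logs etc. See aka.ms/AzureMonitorContainerInsights for an overview.
+type ManagedClusterAzureMonitorProfileContainerInsights struct {
+ // Indicates if Azure Monitor Container Insights Logs Addon is enabled or not.
+ Enabled *bool
+
+ // Fully Qualified ARM Resource Id of Azure Log Analytics Workspace for storing Azure Monitor Container Insights Logs.
+ LogAnalyticsWorkspaceResourceID *string
+
+ // Windows Host Logs Profile for Kubernetes Windows Nodes Log Collection. Collects ETW, Event Logs and Text logs etc.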
See + // aka.ms/AzureMonitorContainerInsights for an overview. + WindowsHostLogs *ManagedClusterAzureMonitorProfileWindowsHostLogs +} + +// ManagedClusterAzureMonitorProfileKubeStateMetrics - Kube State Metrics for prometheus addon profile for the container service +// cluster +type ManagedClusterAzureMonitorProfileKubeStateMetrics struct { + // Comma-separated list of Kubernetes annotation keys that will be used in the resource's labels metric. + MetricAnnotationsAllowList *string + + // Comma-separated list of additional Kubernetes label keys that will be used in the resource's labels metric. + MetricLabelsAllowlist *string +} + +// ManagedClusterAzureMonitorProfileLogs - Logs profile for the Azure Monitor Infrastructure and Application Logs. Collect +// out-of-the-box Kubernetes infrastructure & application logs to send to Azure Monitor. See +// aka.ms/AzureMonitorContainerInsights for an overview. +type ManagedClusterAzureMonitorProfileLogs struct { + // Application Monitoring Profile for Kubernetes Application Container. Collects application logs, metrics and traces through + // auto-instrumentation of the application using Azure Monitor OpenTelemetry + // based SDKs. See aka.ms/AzureMonitorApplicationMonitoring for an overview. + AppMonitoring *ManagedClusterAzureMonitorProfileAppMonitoring + + // Azure Monitor Container Insights Profile for Kubernetes Events, Inventory and Container stdout & stderr logs etc. See aka.ms/AzureMonitorContainerInsights + // for an overview. + ContainerInsights *ManagedClusterAzureMonitorProfileContainerInsights +} + +// ManagedClusterAzureMonitorProfileMetrics - Metrics profile for the prometheus service addon +type ManagedClusterAzureMonitorProfileMetrics struct { + // REQUIRED; Whether to enable the Prometheus collector + Enabled *bool + + // Application Monitoring Open Telemetry Metrics Profile for Kubernetes Application Container Metrics. Collects OpenTelemetry + // metrics through auto-instrumentation of the application using Azure Monitor + // OpenTelemetry based SDKs. See aka.ms/AzureMonitorApplicationMonitoring for an overview. + AppMonitoringOpenTelemetryMetrics *ManagedClusterAzureMonitorProfileAppMonitoringOpenTelemetryMetrics + + // Kube State Metrics for prometheus addon profile for the container service cluster + KubeStateMetrics *ManagedClusterAzureMonitorProfileKubeStateMetrics +} + +// ManagedClusterAzureMonitorProfileWindowsHostLogs - Windows Host Logs Profile for Kubernetes Windows Nodes Log Collection. +// Collects ETW, Event Logs and Text logs etc. See aka.ms/AzureMonitorContainerInsights for an overview. +type ManagedClusterAzureMonitorProfileWindowsHostLogs struct { + // Indicates if Windows Host Log Collection is enabled or not for Azure Monitor Container Insights Logs Addon. + Enabled *bool +} + +// ManagedClusterCostAnalysis - The cost analysis configuration for the cluster +type ManagedClusterCostAnalysis struct { + // The Managed Cluster sku.tier must be set to 'Standard' to enable this feature. Enabling this will add Kubernetes Namespace + // and Deployment details to the Cost Analysis views in the Azure portal. If not + // specified, the default is false. For more information see aka.ms/aks/docs/cost-analysis. + Enabled *bool +} +
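+// Illustrative sketch -- not part of the generated models -- enabling the cost analysis feature
+// just defined. Per the field comment it only takes effect on a 'Standard' tier cluster;
+// to.Ptr is assumed from github.com/Azure/azure-sdk-for-go/sdk/azcore/to.
+func exampleCostAnalysis() *ManagedClusterMetricsProfile {
+	return &ManagedClusterMetricsProfile{
+		// Surfaces namespace- and deployment-level cost data in the Azure portal views.
+		CostAnalysis: &ManagedClusterCostAnalysis{Enabled: to.Ptr(true)},
+	}
+}
+
+// ManagedClusterHTTPProxyConfig - Cluster HTTP proxy configuration.
+type ManagedClusterHTTPProxyConfig struct {
+ // The HTTP proxy server endpoint to use.
+ HTTPProxy *string
+
+ // The HTTPS proxy server endpoint to use.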
+ HTTPSProxy *string + + // The endpoints that should not go through proxy. + NoProxy []*string + + // Alternative CA cert to use for connecting to proxy servers. + TrustedCa *string + + // READ-ONLY; A read-only list of all endpoints for which traffic should not be sent to the proxy. This list is a superset + // of noProxy and values injected by AKS. + EffectiveNoProxy []*string +} + +// ManagedClusterIdentity - Identity for the managed cluster. +type ManagedClusterIdentity struct { + // The delegated identity resources assigned to this managed cluster. This can only be set by another Azure Resource Provider, + // and a managed cluster only accepts one delegated identity resource. Internal + // use only. + DelegatedResources map[string]*DelegatedResource + + // For more information see use managed identities in AKS [https://docs.microsoft.com/azure/aks/use-managed-identity]. + Type *ResourceIdentityType + + // The keys must be ARM resource IDs in the form: '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{identityName}'. + UserAssignedIdentities map[string]*ManagedServiceIdentityUserAssignedIdentitiesValue + + // READ-ONLY; The principal id of the system assigned identity which is used by master components. + PrincipalID *string + + // READ-ONLY; The tenant id of the system assigned identity which is used by master components. + TenantID *string +} + +// ManagedClusterIngressProfile - Ingress profile for the container service cluster. +type ManagedClusterIngressProfile struct { + // Web App Routing settings for the ingress profile. + WebAppRouting *ManagedClusterIngressProfileWebAppRouting +} + +// ManagedClusterIngressProfileWebAppRouting - Web App Routing settings for the ingress profile. +type ManagedClusterIngressProfileWebAppRouting struct { + // Resource IDs of the DNS zones to be associated with the Web App Routing add-on. Used only when Web App Routing is enabled. + // Public and private DNS zones can be in different resource groups, but all + // public DNS zones must be in the same resource group and all private DNS zones must be in the same resource group. + DNSZoneResourceIDs []*string + + // Whether to enable Web App Routing. + Enabled *bool + + // READ-ONLY; Managed identity of the Web Application Routing add-on. This is the identity that should be granted permissions, + // for example, to manage the associated Azure DNS resource and get certificates from + // Azure Key Vault. See this overview of the add-on [https://learn.microsoft.com/en-us/azure/aks/web-app-routing?tabs=with-osm] + // for more instructions. + Identity *UserAssignedIdentity +} + +// ManagedClusterListResult - The response from the List Managed Clusters operation. +type ManagedClusterListResult struct { + // The list of managed clusters. + Value []*ManagedCluster + + // READ-ONLY; The URL to get the next set of managed cluster results. + NextLink *string +} +
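+// Illustrative sketch -- not part of the generated models -- of a user-assigned managed cluster
+// identity. The map key must be the identity's ARM resource ID, as the field comment above notes.
+// The ResourceIdentityTypeUserAssigned constant is assumed from this package's constants file, and
+// to.Ptr from github.com/Azure/azure-sdk-for-go/sdk/azcore/to; the resource ID is a placeholder.
+func exampleUserAssignedIdentity() *ManagedClusterIdentity {
+	const identityID = "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<name>"
+	return &ManagedClusterIdentity{
+		Type: to.Ptr(ResourceIdentityTypeUserAssigned),
+		UserAssignedIdentities: map[string]*ManagedServiceIdentityUserAssignedIdentitiesValue{
+			// ClientID and PrincipalID are READ-ONLY and populated by the service.
+			identityID: {},
+		},
+	}
+}
+
+// ManagedClusterLoadBalancerProfile - Profile of the managed cluster load balancer.
+type ManagedClusterLoadBalancerProfile struct {
+ // The desired number of allocated SNAT ports per VM. Allowed values are in the range of 0 to 64000 (inclusive). The default
+ // value is 0 which results in Azure dynamically allocating ports.
+ AllocatedOutboundPorts *int32
+
+ // The type of the managed inbound Load Balancer BackendPool.
+ BackendPoolType *BackendPoolType
+
+ // The effective outbound IP resources of the cluster load balancer.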
+ EffectiveOutboundIPs []*ResourceReference + + // Enable multiple standard load balancers per AKS cluster or not. + EnableMultipleStandardLoadBalancers *bool + + // Desired outbound flow idle timeout in minutes. Allowed values are in the range of 4 to 120 (inclusive). The default value + // is 30 minutes. + IdleTimeoutInMinutes *int32 + + // Desired managed outbound IPs for the cluster load balancer. + ManagedOutboundIPs *ManagedClusterLoadBalancerProfileManagedOutboundIPs + + // Desired outbound IP Prefix resources for the cluster load balancer. + OutboundIPPrefixes *ManagedClusterLoadBalancerProfileOutboundIPPrefixes + + // Desired outbound IP resources for the cluster load balancer. + OutboundIPs *ManagedClusterLoadBalancerProfileOutboundIPs +} + +// ManagedClusterLoadBalancerProfileManagedOutboundIPs - Desired managed outbound IPs for the cluster load balancer. +type ManagedClusterLoadBalancerProfileManagedOutboundIPs struct { + // The desired number of IPv4 outbound IPs created/managed by Azure for the cluster load balancer. Allowed values must be + // in the range of 1 to 100 (inclusive). The default value is 1. + Count *int32 + + // The desired number of IPv6 outbound IPs created/managed by Azure for the cluster load balancer. Allowed values must be + // in the range of 1 to 100 (inclusive). The default value is 0 for single-stack and + // 1 for dual-stack. + CountIPv6 *int32 +} + +// ManagedClusterLoadBalancerProfileOutboundIPPrefixes - Desired outbound IP Prefix resources for the cluster load balancer. +type ManagedClusterLoadBalancerProfileOutboundIPPrefixes struct { + // A list of public IP prefix resources. + PublicIPPrefixes []*ResourceReference +} + +// ManagedClusterLoadBalancerProfileOutboundIPs - Desired outbound IP resources for the cluster load balancer. +type ManagedClusterLoadBalancerProfileOutboundIPs struct { + // A list of public IP resources. + PublicIPs []*ResourceReference +} + +// ManagedClusterManagedOutboundIPProfile - Profile of the managed outbound IP resources of the managed cluster. +type ManagedClusterManagedOutboundIPProfile struct { + // The desired number of outbound IPs created/managed by Azure. Allowed values must be in the range of 1 to 16 (inclusive). + // The default value is 1. + Count *int32 +} + +// ManagedClusterMetricsProfile - The metrics profile for the ManagedCluster. +type ManagedClusterMetricsProfile struct { + // The cost analysis configuration for the cluster + CostAnalysis *ManagedClusterCostAnalysis +} + +// ManagedClusterNATGatewayProfile - Profile of the managed cluster NAT gateway. +type ManagedClusterNATGatewayProfile struct { + // The effective outbound IP resources of the cluster NAT gateway. + EffectiveOutboundIPs []*ResourceReference + + // Desired outbound flow idle timeout in minutes. Allowed values are in the range of 4 to 120 (inclusive). The default value + // is 4 minutes. + IdleTimeoutInMinutes *int32 + + // Profile of the managed outbound IP resources of the cluster NAT gateway. + ManagedOutboundIPProfile *ManagedClusterManagedOutboundIPProfile +} + +type ManagedClusterNodeProvisioningProfile struct { + // Once the mode is set to Auto, it cannot be changed back to Manual. + Mode *NodeProvisioningMode +} +
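+// Illustrative sketch -- not part of the generated models -- of a load balancer profile requesting
+// two Azure-managed IPv4 outbound IPs and a custom idle timeout, within the documented ranges;
+// to.Ptr is assumed from github.com/Azure/azure-sdk-for-go/sdk/azcore/to.
+func exampleLoadBalancerProfile() *ManagedClusterLoadBalancerProfile {
+	return &ManagedClusterLoadBalancerProfile{
+		// Allowed range is 4 to 120 minutes; the default is 30.
+		IdleTimeoutInMinutes: to.Ptr[int32](10),
+		ManagedOutboundIPs: &ManagedClusterLoadBalancerProfileManagedOutboundIPs{
+			// Allowed range is 1 to 100; the default is 1.
+			Count: to.Ptr[int32](2),
+		},
+	}
+}
+
+// ManagedClusterNodeResourceGroupProfile - Node resource group lockdown profile for a managed cluster.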
+type ManagedClusterNodeResourceGroupProfile struct { + // The restriction level applied to the cluster's node resource group + RestrictionLevel *RestrictionLevel +} + +// ManagedClusterOIDCIssuerProfile - The OIDC issuer profile of the Managed Cluster. +type ManagedClusterOIDCIssuerProfile struct { + // Whether the OIDC issuer is enabled. + Enabled *bool + + // READ-ONLY; The OIDC issuer url of the Managed Cluster. + IssuerURL *string +} + +// ManagedClusterPodIdentity - Details about the pod identity assigned to the Managed Cluster. +type ManagedClusterPodIdentity struct { + // REQUIRED; The user assigned identity details. + Identity *UserAssignedIdentity + + // REQUIRED; The name of the pod identity. + Name *string + + // REQUIRED; The namespace of the pod identity. + Namespace *string + + // The binding selector to use for the AzureIdentityBinding resource. + BindingSelector *string + + // READ-ONLY + ProvisioningInfo *ManagedClusterPodIdentityProvisioningInfo + + // READ-ONLY; The current provisioning state of the pod identity. + ProvisioningState *ManagedClusterPodIdentityProvisioningState +} + +// ManagedClusterPodIdentityException - See disable AAD Pod Identity for a specific Pod/Application [https://azure.github.io/aad-pod-identity/docs/configure/application_exception/] +// for more details. +type ManagedClusterPodIdentityException struct { + // REQUIRED; The name of the pod identity exception. + Name *string + + // REQUIRED; The namespace of the pod identity exception. + Namespace *string + + // REQUIRED; The pod labels to match. + PodLabels map[string]*string +} + +// ManagedClusterPodIdentityProfile - See use AAD pod identity [https://docs.microsoft.com/azure/aks/use-azure-ad-pod-identity] +// for more details on pod identity integration. +type ManagedClusterPodIdentityProfile struct { + // Running in Kubenet is disabled by default due to the security related nature of AAD Pod Identity and the risks of IP spoofing. + // See using Kubenet network plugin with AAD Pod Identity + // [https://docs.microsoft.com/azure/aks/use-azure-ad-pod-identity#using-kubenet-network-plugin-with-azure-active-directory-pod-managed-identities] + // for more information. + AllowNetworkPluginKubenet *bool + + // Whether the pod identity addon is enabled. + Enabled *bool + + // The pod identities to use in the cluster. + UserAssignedIdentities []*ManagedClusterPodIdentity + + // The pod identity exceptions to allow. + UserAssignedIdentityExceptions []*ManagedClusterPodIdentityException +} + +// ManagedClusterPodIdentityProvisioningError - An error response from the pod identity provisioning. +type ManagedClusterPodIdentityProvisioningError struct { + // Details about the error. + Error *ManagedClusterPodIdentityProvisioningErrorBody +} + +// ManagedClusterPodIdentityProvisioningErrorBody - An error response from the pod identity provisioning. +type ManagedClusterPodIdentityProvisioningErrorBody struct { + // An identifier for the error. Codes are invariant and are intended to be consumed programmatically. + Code *string + + // A list of additional details about the error. + Details []*ManagedClusterPodIdentityProvisioningErrorBody + + // A message describing the error, intended to be suitable for display in a user interface. + Message *string + + // The target of the particular error. For example, the name of the property in error. + Target *string +} + +type ManagedClusterPodIdentityProvisioningInfo struct { + // Pod identity assignment error (if any). 
+ Error *ManagedClusterPodIdentityProvisioningError +} + +// ManagedClusterPoolUpgradeProfile - The list of available upgrade versions. +type ManagedClusterPoolUpgradeProfile struct { + // REQUIRED; The Kubernetes version (major.minor.patch). + KubernetesVersion *string + + // REQUIRED; The operating system type. The default is Linux. + OSType *OSType + + // The Agent Pool name. + Name *string + + // List of orchestrator types and versions available for upgrade. + Upgrades []*ManagedClusterPoolUpgradeProfileUpgradesItem +} + +type ManagedClusterPoolUpgradeProfileUpgradesItem struct { + // Whether the Kubernetes version is currently in preview. + IsPreview *bool + + // The Kubernetes version (major.minor.patch). + KubernetesVersion *string +} + +// ManagedClusterProperties - Properties of the managed cluster. +type ManagedClusterProperties struct { + // The Azure Active Directory configuration. + AADProfile *ManagedClusterAADProfile + + // The access profile for managed cluster API server. + APIServerAccessProfile *ManagedClusterAPIServerAccessProfile + + // The profile of managed cluster add-on. + AddonProfiles map[string]*ManagedClusterAddonProfile + + // The agent pool properties. + AgentPoolProfiles []*ManagedClusterAgentPoolProfile + + // AI toolchain operator settings that apply to the whole cluster. + AiToolchainOperatorProfile *ManagedClusterAIToolchainOperatorProfile + + // Parameters to be applied to the cluster-autoscaler when enabled + AutoScalerProfile *ManagedClusterPropertiesAutoScalerProfile + + // The auto upgrade configuration. + AutoUpgradeProfile *ManagedClusterAutoUpgradeProfile + + // Prometheus addon profile for the container service cluster + AzureMonitorProfile *ManagedClusterAzureMonitorProfile + + // CreationData to be used to specify the source Snapshot ID if the cluster will be created/upgraded using a snapshot. + CreationData *CreationData + + // This cannot be updated once the Managed Cluster has been created. + DNSPrefix *string + + // If set to true, getting static credentials will be disabled for this cluster. This must only be used on Managed Clusters + // that are AAD enabled. For more details see disable local accounts + // [https://docs.microsoft.com/azure/aks/managed-aad#disable-local-accounts-preview]. + DisableLocalAccounts *bool + + // This is of the form: '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.Compute/diskEncryptionSets/{encryptionSetName}' + DiskEncryptionSetID *string + + // The default value is false. It can be enabled/disabled on creation and updating of the managed cluster. See https://aka.ms/NamespaceARMResource + // [https://aka.ms/NamespaceARMResource] for more details + // on Namespace as an ARM Resource. + EnableNamespaceResources *bool + + // (DEPRECATED) Whether to enable Kubernetes pod security policy (preview). PodSecurityPolicy was deprecated in Kubernetes + // v1.21, and removed from Kubernetes in v1.25. Learn more at + // https://aka.ms/k8s/psp and https://aka.ms/aks/psp. + EnablePodSecurityPolicy *bool + + // Whether to enable Kubernetes Role-Based Access Control. + EnableRBAC *bool + + // This cannot be updated once the Managed Cluster has been created. + FqdnSubdomain *string + + // Configurations for provisioning the cluster with HTTP proxy servers. + HTTPProxyConfig *ManagedClusterHTTPProxyConfig + + // Identities associated with the cluster. + IdentityProfile map[string]*UserAssignedIdentity + + // Ingress profile for the managed cluster.
+ IngressProfile *ManagedClusterIngressProfile + + // When you upgrade a supported AKS cluster, Kubernetes minor versions cannot be skipped. All upgrades must be performed sequentially + // by minor version number. For example, upgrades between 1.14.x -> + // 1.15.x or 1.15.x -> 1.16.x are allowed, however 1.14.x -> 1.16.x is not allowed. See upgrading an AKS cluster [https://docs.microsoft.com/azure/aks/upgrade-cluster] + // for more details. + KubernetesVersion *string + + // The profile for Linux VMs in the Managed Cluster. + LinuxProfile *LinuxProfile + + // Optional cluster metrics configuration. + MetricsProfile *ManagedClusterMetricsProfile + + // The network configuration profile. + NetworkProfile *NetworkProfile + + // Node provisioning settings that apply to the whole cluster. + NodeProvisioningProfile *ManagedClusterNodeProvisioningProfile + + // The name of the resource group containing agent pool nodes. + NodeResourceGroup *string + + // The node resource group configuration profile. + NodeResourceGroupProfile *ManagedClusterNodeResourceGroupProfile + + // The OIDC issuer profile of the Managed Cluster. + OidcIssuerProfile *ManagedClusterOIDCIssuerProfile + + // See use AAD pod identity [https://docs.microsoft.com/azure/aks/use-azure-ad-pod-identity] for more details on AAD pod identity + // integration. + PodIdentityProfile *ManagedClusterPodIdentityProfile + + // Private link resources associated with the cluster. + PrivateLinkResources []*PrivateLinkResource + + // Allow or deny public network access for AKS + PublicNetworkAccess *PublicNetworkAccess + + // The Safeguards profile holds all the safeguards information for a given cluster + SafeguardsProfile *SafeguardsProfile + + // Security profile for the managed cluster. + SecurityProfile *ManagedClusterSecurityProfile + + // Service mesh profile for a managed cluster. + ServiceMeshProfile *ServiceMeshProfile + + // Information about a service principal identity for the cluster to use for manipulating Azure APIs. + ServicePrincipalProfile *ManagedClusterServicePrincipalProfile + + // Storage profile for the managed cluster. + StorageProfile *ManagedClusterStorageProfile + + // The support plan for the Managed Cluster. If unspecified, the default is 'KubernetesOfficial'. + SupportPlan *KubernetesSupportPlan + + // Settings for upgrading a cluster. + UpgradeSettings *ClusterUpgradeSettings + + // The profile for Windows VMs in the Managed Cluster. + WindowsProfile *ManagedClusterWindowsProfile + + // Workload Auto-scaler profile for the managed cluster. + WorkloadAutoScalerProfile *ManagedClusterWorkloadAutoScalerProfile + + // READ-ONLY; The Azure Portal requires certain Cross-Origin Resource Sharing (CORS) headers to be sent in some responses, + // which Kubernetes APIServer doesn't handle by default. This special FQDN supports CORS, + // allowing the Azure Portal to function properly. + AzurePortalFQDN *string + + // READ-ONLY; The version of Kubernetes the Managed Cluster is running. + CurrentKubernetesVersion *string + + // READ-ONLY; The FQDN of the master pool. + Fqdn *string + + // READ-ONLY; The max number of agent pools for the managed cluster. + MaxAgentPools *int32 + + // READ-ONLY; The Power State of the cluster. + PowerState *PowerState + + // READ-ONLY; The FQDN of private cluster. + PrivateFQDN *string + + // READ-ONLY; The current provisioning state.
+ ProvisioningState *string + + // READ-ONLY; The resourceUID uniquely identifies ManagedClusters that reuse ARM ResourceIds (i.e: create, delete, create + // sequence) + ResourceUID *string +} + +// ManagedClusterPropertiesAutoScalerProfile - Parameters to be applied to the cluster-autoscaler when enabled +type ManagedClusterPropertiesAutoScalerProfile struct { + // Valid values are 'true' and 'false' + BalanceSimilarNodeGroups *string + + // If set to true, all daemonset pods on empty nodes will be evicted before deletion of the node. If the daemonset pod cannot + // be evicted, another node will be chosen for scaling. If set to false, the node + // will be deleted without ensuring that daemonset pods are deleted or evicted. + DaemonsetEvictionForEmptyNodes *bool + + // If set to true, all daemonset pods on occupied nodes will be evicted before deletion of the node. If the daemonset pod + // cannot be evicted, another node will be chosen for scaling. If set to false, the + // node will be deleted without ensuring that daemonset pods are deleted or evicted. + DaemonsetEvictionForOccupiedNodes *bool + + // Available values are: 'least-waste', 'most-pods', 'priority', 'random'. + Expander *Expander + + // If set to true, the resources used by daemonsets will be taken into account when making scale-down decisions. + IgnoreDaemonsetsUtilization *bool + + // The default is 10. + MaxEmptyBulkDelete *string + + // The default is 600. + MaxGracefulTerminationSec *string + + // The default is '15m'. Values must be an integer followed by an 'm'. No unit of time other than minutes (m) is supported. + MaxNodeProvisionTime *string + + // The default is 45. The maximum is 100 and the minimum is 0. + MaxTotalUnreadyPercentage *string + + // For scenarios like burst/batch scale where you don't want CA to act before the kubernetes scheduler could schedule all + // the pods, you can tell CA to ignore unscheduled pods before they're a certain + // age. The default is '0s'. Values must be an integer followed by a unit ('s' for seconds, 'm' for minutes, 'h' for hours, + // etc). + NewPodScaleUpDelay *string + + // This must be an integer. The default is 3. + OkTotalUnreadyCount *string + + // The default is '10m'. Values must be an integer followed by an 'm'. No unit of time other than minutes (m) is supported. + ScaleDownDelayAfterAdd *string + + // The default is the scan-interval. Values must be an integer followed by an 'm'. No unit of time other than minutes (m) + // is supported. + ScaleDownDelayAfterDelete *string + + // The default is '3m'. Values must be an integer followed by an 'm'. No unit of time other than minutes (m) is supported. + ScaleDownDelayAfterFailure *string + + // The default is '10m'. Values must be an integer followed by an 'm'. No unit of time other than minutes (m) is supported. + ScaleDownUnneededTime *string + + // The default is '20m'. Values must be an integer followed by an 'm'. No unit of time other than minutes (m) is supported. + ScaleDownUnreadyTime *string + + // The default is '0.5'. + ScaleDownUtilizationThreshold *string + + // The default is '10'. Values must be an integer number of seconds. + ScanInterval *string + + // The default is true. + SkipNodesWithLocalStorage *string + + // The default is true. + SkipNodesWithSystemPods *string +} +
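+// Illustrative sketch -- not part of the generated models -- of the autoscaler profile above. Note
+// that most knobs are *string even when the value is numeric or boolean, and the duration fields
+// accept only integer minutes ('m'). The ExpanderLeastWaste constant is assumed from this
+// package's constants file, and to.Ptr from github.com/Azure/azure-sdk-for-go/sdk/azcore/to.
+func exampleAutoScalerProfile() *ManagedClusterPropertiesAutoScalerProfile {
+	return &ManagedClusterPropertiesAutoScalerProfile{
+		BalanceSimilarNodeGroups:      to.Ptr("true"),             // boolean passed as a string
+		Expander:                      to.Ptr(ExpanderLeastWaste), // one of: least-waste, most-pods, priority, random
+		ScaleDownDelayAfterAdd:        to.Ptr("15m"),              // integer followed by 'm'; minutes only
+		ScaleDownUnneededTime:         to.Ptr("10m"),
+		ScaleDownUtilizationThreshold: to.Ptr("0.5"),
+		ScanInterval:                  to.Ptr("30"), // integer number of seconds
+	}
+}
+
+// ManagedClusterPropertiesForSnapshot - managed cluster properties for snapshot, these properties are read only.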
+type ManagedClusterPropertiesForSnapshot struct { + // Whether the cluster has enabled Kubernetes Role-Based Access Control or not. + EnableRbac *bool + + // The current kubernetes version. + KubernetesVersion *string + + // The current managed cluster sku. + SKU *ManagedClusterSKU + + // READ-ONLY; The current network profile. + NetworkProfile *NetworkProfileForSnapshot +} + +// ManagedClusterSKU - The SKU of a Managed Cluster. +type ManagedClusterSKU struct { + // The name of a managed cluster SKU. + Name *ManagedClusterSKUName + + // If not specified, the default is 'Free'. See AKS Pricing Tier [https://learn.microsoft.com/azure/aks/free-standard-pricing-tiers] + // for more details. + Tier *ManagedClusterSKUTier +} + +// ManagedClusterSecurityProfile - Security profile for the container service cluster. +type ManagedClusterSecurityProfile struct { + // Azure Key Vault key management service [https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/] settings for + // the security profile. + AzureKeyVaultKms *AzureKeyVaultKms + + // A list of up to 10 base64 encoded CAs that will be added to the trust store on nodes with the Custom CA Trust feature enabled. + // For more information see Custom CA Trust Certificates + // [https://learn.microsoft.com/en-us/azure/aks/custom-certificate-authority] + CustomCATrustCertificates [][]byte + + // Microsoft Defender settings for the security profile. + Defender *ManagedClusterSecurityProfileDefender + + // Image Cleaner settings for the security profile. + ImageCleaner *ManagedClusterSecurityProfileImageCleaner + + // Image integrity is a feature that works with Azure Policy to verify image integrity by signature. This will not have any + // effect unless Azure Policy is applied to enforce image signatures. See + // https://aka.ms/aks/image-integrity for how to use this feature via policy. + ImageIntegrity *ManagedClusterSecurityProfileImageIntegrity + + // Node Restriction [https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#noderestriction] settings + // for the security profile. + NodeRestriction *ManagedClusterSecurityProfileNodeRestriction + + // Workload identity settings for the security profile. Workload identity enables Kubernetes applications to access Azure + // cloud resources securely with Azure AD. See https://aka.ms/aks/wi for more + // details. + WorkloadIdentity *ManagedClusterSecurityProfileWorkloadIdentity +} + +// ManagedClusterSecurityProfileDefender - Microsoft Defender settings for the security profile. +type ManagedClusterSecurityProfileDefender struct { + // Resource ID of the Log Analytics workspace to be associated with Microsoft Defender. When Microsoft Defender is enabled, + // this field is required and must be a valid workspace resource ID. When + // Microsoft Defender is disabled, leave the field empty. + LogAnalyticsWorkspaceResourceID *string + + // Microsoft Defender threat detection for Cloud settings for the security profile. + SecurityMonitoring *ManagedClusterSecurityProfileDefenderSecurityMonitoring +} + +// ManagedClusterSecurityProfileDefenderSecurityMonitoring - Microsoft Defender settings for the security profile threat detection. +type ManagedClusterSecurityProfileDefenderSecurityMonitoring struct { + // Whether to enable Defender threat detection + Enabled *bool +} + +// ManagedClusterSecurityProfileImageCleaner - Image Cleaner removes unused images from nodes, freeing up disk space and helping +// to reduce attack surface area. 
Here are settings for the security profile. +type ManagedClusterSecurityProfileImageCleaner struct { + // Whether to enable Image Cleaner on AKS cluster. + Enabled *bool + + // Image Cleaner scanning interval in hours. + IntervalHours *int32 +} + +// ManagedClusterSecurityProfileImageIntegrity - Image integrity related settings for the security profile. +type ManagedClusterSecurityProfileImageIntegrity struct { + // Whether to enable image integrity. The default value is false. + Enabled *bool +} + +// ManagedClusterSecurityProfileNodeRestriction - Node Restriction settings for the security profile. +type ManagedClusterSecurityProfileNodeRestriction struct { + // Whether to enable Node Restriction + Enabled *bool +} + +// ManagedClusterSecurityProfileWorkloadIdentity - Workload identity settings for the security profile. +type ManagedClusterSecurityProfileWorkloadIdentity struct { + // Whether to enable workload identity. + Enabled *bool +} + +// ManagedClusterServicePrincipalProfile - Information about a service principal identity for the cluster to use for manipulating +// Azure APIs. +type ManagedClusterServicePrincipalProfile struct { + // REQUIRED; The ID for the service principal. + ClientID *string + + // The secret password associated with the service principal in plain text. + Secret *string +} + +// ManagedClusterSnapshot - A managed cluster snapshot resource. +type ManagedClusterSnapshot struct { + // REQUIRED; The geo-location where the resource lives + Location *string + + // Properties of a managed cluster snapshot. + Properties *ManagedClusterSnapshotProperties + + // Resource tags. + Tags map[string]*string + + // READ-ONLY; Fully qualified resource ID for the resource. E.g. "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}" + ID *string + + // READ-ONLY; The name of the resource + Name *string + + // READ-ONLY; Azure Resource Manager metadata containing createdBy and modifiedBy information. + SystemData *SystemData + + // READ-ONLY; The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts" + Type *string +} + +// ManagedClusterSnapshotListResult - The response from the List Managed Cluster Snapshots operation. +type ManagedClusterSnapshotListResult struct { + // The list of managed cluster snapshots. + Value []*ManagedClusterSnapshot + + // READ-ONLY; The URL to get the next set of managed cluster snapshot results. + NextLink *string +} + +// ManagedClusterSnapshotProperties - Properties for a managed cluster snapshot. +type ManagedClusterSnapshotProperties struct { + // CreationData to be used to specify the source resource ID to create this snapshot. + CreationData *CreationData + + // The type of a snapshot. The default is NodePool. + SnapshotType *SnapshotType + + // READ-ONLY; The properties shown when getting a managed cluster snapshot. Those properties are read-only. + ManagedClusterPropertiesReadOnly *ManagedClusterPropertiesForSnapshot +} +
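+// Illustrative sketch -- not part of the generated models -- of a managed cluster snapshot taken
+// from a source cluster. The CreationData.SourceResourceID field and the SnapshotTypeManagedCluster
+// constant are assumed from elsewhere in this package; to.Ptr from
+// github.com/Azure/azure-sdk-for-go/sdk/azcore/to. The resource ID and location are placeholders.
+func exampleManagedClusterSnapshot() *ManagedClusterSnapshot {
+	return &ManagedClusterSnapshot{
+		Location: to.Ptr("eastus"),
+		Properties: &ManagedClusterSnapshotProperties{
+			CreationData: &CreationData{
+				SourceResourceID: to.Ptr("/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.ContainerService/managedClusters/<cluster>"),
+			},
+			// The default snapshot type is NodePool; request a whole-cluster snapshot instead.
+			SnapshotType: to.Ptr(SnapshotTypeManagedCluster),
+		},
+	}
+}
+
+// ManagedClusterStorageProfile - Storage profile for the container service cluster.
+type ManagedClusterStorageProfile struct {
+ // AzureBlob CSI Driver settings for the storage profile.
+ BlobCSIDriver *ManagedClusterStorageProfileBlobCSIDriver
+
+ // AzureDisk CSI Driver settings for the storage profile.
+ DiskCSIDriver *ManagedClusterStorageProfileDiskCSIDriver
+
+ // AzureFile CSI Driver settings for the storage profile.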
+ FileCSIDriver *ManagedClusterStorageProfileFileCSIDriver + + // Snapshot Controller settings for the storage profile. + SnapshotController *ManagedClusterStorageProfileSnapshotController +} + +// ManagedClusterStorageProfileBlobCSIDriver - AzureBlob CSI Driver settings for the storage profile. +type ManagedClusterStorageProfileBlobCSIDriver struct { + // Whether to enable AzureBlob CSI Driver. The default value is false. + Enabled *bool +} + +// ManagedClusterStorageProfileDiskCSIDriver - AzureDisk CSI Driver settings for the storage profile. +type ManagedClusterStorageProfileDiskCSIDriver struct { + // Whether to enable AzureDisk CSI Driver. The default value is true. + Enabled *bool + + // The version of AzureDisk CSI Driver. The default value is v1. + Version *string +} + +// ManagedClusterStorageProfileFileCSIDriver - AzureFile CSI Driver settings for the storage profile. +type ManagedClusterStorageProfileFileCSIDriver struct { + // Whether to enable AzureFile CSI Driver. The default value is true. + Enabled *bool +} + +// ManagedClusterStorageProfileSnapshotController - Snapshot Controller settings for the storage profile. +type ManagedClusterStorageProfileSnapshotController struct { + // Whether to enable Snapshot Controller. The default value is true. + Enabled *bool +} + +// ManagedClusterUpgradeProfile - The list of available upgrades for compute pools. +type ManagedClusterUpgradeProfile struct { + // REQUIRED; The properties of the upgrade profile. + Properties *ManagedClusterUpgradeProfileProperties + + // READ-ONLY; The ID of the upgrade profile. + ID *string + + // READ-ONLY; The name of the upgrade profile. + Name *string + + // READ-ONLY; The type of the upgrade profile. + Type *string +} + +// ManagedClusterUpgradeProfileProperties - Control plane and agent pool upgrade profiles. +type ManagedClusterUpgradeProfileProperties struct { + // REQUIRED; The list of available upgrade versions for agent pools. + AgentPoolProfiles []*ManagedClusterPoolUpgradeProfile + + // REQUIRED; The list of available upgrade versions for the control plane. + ControlPlaneProfile *ManagedClusterPoolUpgradeProfile +} + +// ManagedClusterWindowsProfile - Profile for Windows VMs in the managed cluster. +type ManagedClusterWindowsProfile struct { + // REQUIRED; Specifies the name of the administrator account. + // Restriction: Cannot end in "." + // Disallowed values: "administrator", "admin", "user", "user1", "test", "user2", "test1", "user3", "admin1", "1", "123", + // "a", "actuser", "adm", "admin2", "aspnet", "backup", "console", "david", "guest", + // "john", "owner", "root", "server", "sql", "support", "support_388945a0", "sys", "test2", "test3", "user4", "user5". + // Minimum-length: 1 character + // Max-length: 20 characters + AdminUsername *string + + // Specifies the password of the administrator account. + // Minimum-length: 8 characters + // Max-length: 123 characters + // Complexity requirements: 3 out of 4 conditions below need to be fulfilled + // Has lower characters + // Has upper characters + // Has a digit + // Has a special character (Regex match [\W_]) + // Disallowed values: "abc@123", "P@$$w0rd", "P@ssw0rd", "P@ssword123", "Pa$$word", "pass@word1", "Password!", "Password1", + // "Password22", "iloveyou!" + AdminPassword *string + + // For more details on CSI proxy, see the CSI proxy GitHub repo [https://github.com/kubernetes-csi/csi-proxy]. + EnableCSIProxy *bool + + // The Windows gMSA Profile in the Managed Cluster. 
+ GmsaProfile *WindowsGmsaProfile + + // The license type to use for Windows VMs. See Azure Hybrid User Benefits [https://azure.microsoft.com/pricing/hybrid-benefit/faq/] + // for more details. + LicenseType *LicenseType +} + +// ManagedClusterWorkloadAutoScalerProfile - Workload Auto-scaler profile for the managed cluster. +type ManagedClusterWorkloadAutoScalerProfile struct { + // KEDA (Kubernetes Event-driven Autoscaling) settings for the workload auto-scaler profile. + Keda *ManagedClusterWorkloadAutoScalerProfileKeda + VerticalPodAutoscaler *ManagedClusterWorkloadAutoScalerProfileVerticalPodAutoscaler +} + +// ManagedClusterWorkloadAutoScalerProfileKeda - KEDA (Kubernetes Event-driven Autoscaling) settings for the workload auto-scaler +// profile. +type ManagedClusterWorkloadAutoScalerProfileKeda struct { + // REQUIRED; Whether to enable KEDA. + Enabled *bool +} + +type ManagedClusterWorkloadAutoScalerProfileVerticalPodAutoscaler struct { + // REQUIRED; Whether to enable VPA add-on in cluster. Default value is false. + Enabled *bool + + // Whether VPA add-on is enabled and configured to scale AKS-managed add-ons. + AddonAutoscaling *AddonAutoscaling +} + +type ManagedServiceIdentityUserAssignedIdentitiesValue struct { + // READ-ONLY; The client id of user assigned identity. + ClientID *string + + // READ-ONLY; The principal id of user assigned identity. + PrincipalID *string +} + +// ManualScaleProfile - Specifications on the number of machines. +type ManualScaleProfile struct { + // Number of nodes. + Count *int32 + + // The list of allowed vm sizes. AKS will use the first available one when scaling. If a VM size is unavailable (e.g. due + // to quota or regional capacity reasons), AKS will use the next size. + Sizes []*string +} + +// MeshRevision - Holds information on upgrades and compatibility for a given major.minor mesh release. +type MeshRevision struct { + // List of items this revision of service mesh is compatible with, and their associated versions. + CompatibleWith []*CompatibleVersions + + // The revision of the mesh release. + Revision *string + + // List of revisions available for upgrade of a specific mesh revision + Upgrades []*string +} + +// MeshRevisionProfile - Mesh revision profile for a mesh. +type MeshRevisionProfile struct { + // Mesh revision profile properties for a mesh + Properties *MeshRevisionProfileProperties + + // READ-ONLY; Fully qualified resource ID for the resource. E.g. "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}" + ID *string + + // READ-ONLY; The name of the resource + Name *string + + // READ-ONLY; Azure Resource Manager metadata containing createdBy and modifiedBy information. + SystemData *SystemData + + // READ-ONLY; The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts" + Type *string +} + +// MeshRevisionProfileList - Holds an array of MeshRevisionProfiles +type MeshRevisionProfileList struct { + // Array of service mesh add-on revision profiles for all supported mesh modes. + Value []*MeshRevisionProfile + + // READ-ONLY; The URL to get the next set of mesh revision profiles. + NextLink *string +} + +// MeshRevisionProfileProperties - Mesh revision profile properties for a mesh +type MeshRevisionProfileProperties struct { + MeshRevisions []*MeshRevision +} +
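+// Illustrative sketch -- not part of the generated models -- enabling both KEDA and the VPA add-on
+// through the workload auto-scaler profile defined above; to.Ptr is assumed from
+// github.com/Azure/azure-sdk-for-go/sdk/azcore/to.
+func exampleWorkloadAutoScalerProfile() *ManagedClusterWorkloadAutoScalerProfile {
+	return &ManagedClusterWorkloadAutoScalerProfile{
+		// Enabled is REQUIRED on both sub-profiles.
+		Keda:                  &ManagedClusterWorkloadAutoScalerProfileKeda{Enabled: to.Ptr(true)},
+		VerticalPodAutoscaler: &ManagedClusterWorkloadAutoScalerProfileVerticalPodAutoscaler{Enabled: to.Ptr(true)},
+	}
+}
+
+// MeshUpgradeProfile - Upgrade profile for a given mesh.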
+type MeshUpgradeProfile struct { + // Mesh upgrade profile properties for a major.minor release. + Properties *MeshUpgradeProfileProperties + + // READ-ONLY; Fully qualified resource ID for the resource. E.g. "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}" + ID *string + + // READ-ONLY; The name of the resource + Name *string + + // READ-ONLY; Azure Resource Manager metadata containing createdBy and modifiedBy information. + SystemData *SystemData + + // READ-ONLY; The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts" + Type *string +} + +// MeshUpgradeProfileList - Holds an array of MeshUpgradeProfiles +type MeshUpgradeProfileList struct { + // Array of supported service mesh add-on upgrade profiles. + Value []*MeshUpgradeProfile + + // READ-ONLY; The URL to get the next set of mesh upgrade profiles. + NextLink *string +} + +// MeshUpgradeProfileProperties - Mesh upgrade profile properties for a major.minor release. +type MeshUpgradeProfileProperties struct { + // List of items this revision of service mesh is compatible with, and their associated versions. + CompatibleWith []*CompatibleVersions + + // The revision of the mesh release. + Revision *string + + // List of revisions available for upgrade of a specific mesh revision + Upgrades []*string +} + +// NetworkMonitoring - This addon can be used to configure network monitoring and generate network monitoring data in Prometheus +// format +type NetworkMonitoring struct { + // Enable or disable the network monitoring plugin on the cluster + Enabled *bool +} + +// NetworkProfile - Profile of network configuration. +type NetworkProfile struct { + // An IP address assigned to the Kubernetes DNS service. It must be within the Kubernetes service address range specified + // in serviceCidr. + DNSServiceIP *string + + // IP families are used to determine single-stack or dual-stack clusters. For single-stack, the expected value is IPv4. For + // dual-stack, the expected values are IPv4 and IPv6. + IPFamilies []*IPFamily + + // Holds configuration customizations for kube-proxy. Any values not defined will use the kube-proxy defaulting behavior. + // See https://v<version>.docs.kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/ where <version> is represented + // by a <major version>-<minor version> string. Kubernetes version 1.23 would be '1-23'. + KubeProxyConfig *NetworkProfileKubeProxyConfig + + // Profile of the cluster load balancer. + LoadBalancerProfile *ManagedClusterLoadBalancerProfile + + // The default is 'standard'. See Azure Load Balancer SKUs [https://docs.microsoft.com/azure/load-balancer/skus] for more + // information about the differences between load balancer SKUs. + LoadBalancerSKU *LoadBalancerSKU + + // This addon can be used to configure network monitoring and generate network monitoring data in Prometheus format + Monitoring *NetworkMonitoring + + // Profile of the cluster NAT gateway. + NatGatewayProfile *ManagedClusterNATGatewayProfile + + // Network dataplane used in the Kubernetes cluster. + NetworkDataplane *NetworkDataplane + + // This cannot be specified if networkPlugin is anything other than 'azure'. + NetworkMode *NetworkMode + + // Network plugin used for building the Kubernetes network. + NetworkPlugin *NetworkPlugin + + // Network plugin mode used for building the Kubernetes network. + NetworkPluginMode *NetworkPluginMode + + // Network policy used for building the Kubernetes network.
+ NetworkPolicy *NetworkPolicy
+
+ // This can only be set at cluster creation time and cannot be changed later. For more information see egress outbound type
+ // [https://docs.microsoft.com/azure/aks/egress-outboundtype].
+ OutboundType *OutboundType
+
+ // A CIDR notation IP range from which to assign pod IPs when kubenet is used.
+ PodCidr *string
+
+ // One IPv4 CIDR is expected for single-stack networking. Two CIDRs, one for each IP family (IPv4/IPv6), is expected for dual-stack
+ // networking.
+ PodCidrs []*string
+
+ // A CIDR notation IP range from which to assign service cluster IPs. It must not overlap with any Subnet IP ranges.
+ ServiceCidr *string
+
+ // One IPv4 CIDR is expected for single-stack networking. Two CIDRs, one for each IP family (IPv4/IPv6), is expected for dual-stack
+ // networking. They must not overlap with any Subnet IP ranges.
+ ServiceCidrs []*string
+}
+
+// NetworkProfileForSnapshot - network profile for managed cluster snapshot, these properties are read only.
+type NetworkProfileForSnapshot struct {
+ // loadBalancerSku for managed cluster snapshot.
+ LoadBalancerSKU *LoadBalancerSKU
+
+ // networkMode for managed cluster snapshot.
+ NetworkMode *NetworkMode
+
+ // networkPlugin for managed cluster snapshot.
+ NetworkPlugin *NetworkPlugin
+
+ // NetworkPluginMode for managed cluster snapshot.
+ NetworkPluginMode *NetworkPluginMode
+
+ // networkPolicy for managed cluster snapshot.
+ NetworkPolicy *NetworkPolicy
+}
+
+// NetworkProfileKubeProxyConfig - Holds configuration customizations for kube-proxy. Any values not defined will use the
+// kube-proxy defaulting behavior. See https://v<version>.docs.kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/
+// where <version> is represented by a <major version>-<minor version> string. Kubernetes version 1.23 would be '1-23'.
+type NetworkProfileKubeProxyConfig struct {
+ // Whether to enable kube-proxy on the cluster (if no 'kubeProxyConfig' exists, kube-proxy is enabled in AKS by default
+ // without these customizations).
+ Enabled *bool
+
+ // Holds configuration customizations for IPVS. May only be specified if 'mode' is set to 'IPVS'.
+ IpvsConfig *NetworkProfileKubeProxyConfigIpvsConfig
+
+ // Specify which proxy mode to use ('IPTABLES' or 'IPVS')
+ Mode *Mode
+
+// NetworkProfileKubeProxyConfigIpvsConfig - Holds configuration customizations for IPVS. May only be specified if 'mode'
+// is set to 'IPVS'.
+type NetworkProfileKubeProxyConfigIpvsConfig struct {
+ // IPVS scheduler, for more information please see http://www.linuxvirtualserver.org/docs/scheduling.html.
+ Scheduler *IpvsScheduler
+
+ // The timeout value used for IPVS TCP sessions after receiving a FIN in seconds. Must be a positive integer value.
+ TCPFinTimeoutSeconds *int32
+
+ // The timeout value used for idle IPVS TCP sessions in seconds. Must be a positive integer value.
+ TCPTimeoutSeconds *int32
+
+ // The timeout value used for IPVS UDP packets in seconds. Must be a positive integer value.
+ UDPTimeoutSeconds *int32
+}
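As a usage sketch of the kube-proxy customization above: the IPVS settings are only honoured when Mode is 'IPVS'. The ModeIPVS and IpvsSchedulerLeastConnection constants are assumed here from the SDK's enum naming convention, not verified against the vendored constants file:

    package example

    import (
        "github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
        armcontainerservice "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4"
    )

    // kubeProxyIPVS returns a kube-proxy override: IPVS mode with
    // least-connection scheduling and shortened TCP timeouts.
    func kubeProxyIPVS() *armcontainerservice.NetworkProfileKubeProxyConfig {
        return &armcontainerservice.NetworkProfileKubeProxyConfig{
            Enabled: to.Ptr(true),
            Mode:    to.Ptr(armcontainerservice.ModeIPVS), // assumed enum value
            IpvsConfig: &armcontainerservice.NetworkProfileKubeProxyConfigIpvsConfig{
                Scheduler:            to.Ptr(armcontainerservice.IpvsSchedulerLeastConnection), // assumed enum value
                TCPTimeoutSeconds:    to.Ptr[int32](900),
                TCPFinTimeoutSeconds: to.Ptr[int32](120),
                UDPTimeoutSeconds:    to.Ptr[int32](300),
            },
        }
    }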
+
+// OSOptionProfile - The OS option profile.
+type OSOptionProfile struct {
+ // REQUIRED; The list of OS options.
+ Properties *OSOptionPropertyList
+
+ // READ-ONLY; The ID of the OS option resource.
+ ID *string
+
+ // READ-ONLY; The name of the OS option resource.
+ Name *string
+
+ // READ-ONLY; The type of the OS option resource.
+ Type *string
+}
+
+// OSOptionProperty - OS option property.
+type OSOptionProperty struct {
+ // REQUIRED; Whether the image is FIPS-enabled.
+ EnableFipsImage *bool
+
+ // REQUIRED; The OS type.
+ OSType *string
+}
+
+// OSOptionPropertyList - The list of OS option properties.
+type OSOptionPropertyList struct {
+ // REQUIRED; The list of OS options.
+ OSOptionPropertyList []*OSOptionProperty
+}
+
+// OperationListResult - The List Operation response.
+type OperationListResult struct {
+ // READ-ONLY; The list of operations
+ Value []*OperationValue
+}
+
+// OperationStatusResult - The current status of an async operation.
+type OperationStatusResult struct {
+ // REQUIRED; Operation status.
+ Status *string
+
+ // The end time of the operation.
+ EndTime *time.Time
+
+ // If present, details of the operation error.
+ Error *ErrorDetail
+
+ // Fully qualified ID for the async operation.
+ ID *string
+
+ // Name of the async operation.
+ Name *string
+
+ // The operations list.
+ Operations []*OperationStatusResult
+
+ // Percent of the operation that is complete.
+ PercentComplete *float32
+
+ // The start time of the operation.
+ StartTime *time.Time
+
+ // READ-ONLY; Fully qualified ID of the resource against which the original async operation was started.
+ ResourceID *string
+}
+
+// OperationStatusResultList - The operations list. It contains a URL link to get the next set of results.
+type OperationStatusResultList struct {
+ // READ-ONLY; URL to get the next set of operation list results (if there are any).
+ NextLink *string
+
+ // READ-ONLY; List of operations
+ Value []*OperationStatusResult
+}
+
+// OperationValue - Describes the properties of an Operation value.
+type OperationValue struct {
+ // Describes the properties of an Operation Value Display.
+ Display *OperationValueDisplay
+
+ // READ-ONLY; The name of the operation.
+ Name *string
+
+ // READ-ONLY; The origin of the operation.
+ Origin *string
+}
+
+// OperationValueDisplay - Describes the properties of an Operation Value Display.
+type OperationValueDisplay struct {
+ // READ-ONLY; The description of the operation.
+ Description *string
+
+ // READ-ONLY; The display name of the operation.
+ Operation *string
+
+ // READ-ONLY; The resource provider for the operation.
+ Provider *string
+
+ // READ-ONLY; The display name of the resource the operation applies to.
+ Resource *string
+}
+
+// OutboundEnvironmentEndpoint - Egress endpoints which AKS agent nodes connect to for a common purpose.
+type OutboundEnvironmentEndpoint struct {
+ // The category of endpoints accessed by the AKS agent node, e.g. azure-resource-management, apiserver, etc.
+ Category *string
+
+ // The endpoints that AKS agent nodes connect to.
+ Endpoints []*EndpointDependency
+}
+
+// OutboundEnvironmentEndpointCollection - Collection of OutboundEnvironmentEndpoint
+type OutboundEnvironmentEndpointCollection struct {
+ // REQUIRED; Collection of resources.
+ Value []*OutboundEnvironmentEndpoint
+
+ // READ-ONLY; Link to next page of resources.
+ NextLink *string
+}
+
+// PortRange - The port range.
+type PortRange struct {
+ // The maximum port that is included in the range. It must be between 1 and 65535, and be greater than or equal to portStart.
+ PortEnd *int32
+
+ // The minimum port that is included in the range. It must be between 1 and 65535, and be less than or equal to portEnd.
+ PortStart *int32
+
+ // The network protocol of the port.
+ Protocol *Protocol
+}
+
+// PowerState - Describes the Power State of the cluster
+type PowerState struct {
+ // Tells whether the cluster is Running or Stopped
+ Code *Code
+}
+
+// PrivateEndpoint - Private endpoint which a connection belongs to.
+type PrivateEndpoint struct { + // The resource ID of the private endpoint + ID *string +} + +// PrivateEndpointConnection - A private endpoint connection +type PrivateEndpointConnection struct { + // The properties of a private endpoint connection. + Properties *PrivateEndpointConnectionProperties + + // READ-ONLY; The ID of the private endpoint connection. + ID *string + + // READ-ONLY; The name of the private endpoint connection. + Name *string + + // READ-ONLY; The resource type. + Type *string +} + +// PrivateEndpointConnectionListResult - A list of private endpoint connections +type PrivateEndpointConnectionListResult struct { + // The collection value. + Value []*PrivateEndpointConnection +} + +// PrivateEndpointConnectionProperties - Properties of a private endpoint connection. +type PrivateEndpointConnectionProperties struct { + // REQUIRED; A collection of information about the state of the connection between service consumer and provider. + PrivateLinkServiceConnectionState *PrivateLinkServiceConnectionState + + // The resource of private endpoint. + PrivateEndpoint *PrivateEndpoint + + // READ-ONLY; The current provisioning state. + ProvisioningState *PrivateEndpointConnectionProvisioningState +} + +// PrivateLinkResource - A private link resource +type PrivateLinkResource struct { + // The group ID of the resource. + GroupID *string + + // The ID of the private link resource. + ID *string + + // The name of the private link resource. + Name *string + + // The RequiredMembers of the resource + RequiredMembers []*string + + // The resource type. + Type *string + + // READ-ONLY; The private link service ID of the resource, this field is exposed only to NRP internally. + PrivateLinkServiceID *string +} + +// PrivateLinkResourcesListResult - A list of private link resources +type PrivateLinkResourcesListResult struct { + // The collection value. + Value []*PrivateLinkResource +} + +// PrivateLinkServiceConnectionState - The state of a private link service connection. +type PrivateLinkServiceConnectionState struct { + // The private link service connection description. + Description *string + + // The private link service connection status. + Status *ConnectionStatus +} + +// RelativeMonthlySchedule - For schedules like: 'recur every month on the first Monday' or 'recur every 3 months on last +// Friday'. +type RelativeMonthlySchedule struct { + // REQUIRED; Specifies on which day of the week the maintenance occurs. + DayOfWeek *WeekDay + + // REQUIRED; Specifies the number of months between each set of occurrences. + IntervalMonths *int32 + + // REQUIRED; Specifies on which instance of the allowed days specified in daysOfWeek the maintenance occurs. + WeekIndex *Type +} + +// ResourceReference - A reference to an Azure resource. +type ResourceReference struct { + // The fully qualified Azure resource id. + ID *string +} + +// RunCommandRequest - A run command request +type RunCommandRequest struct { + // REQUIRED; The command to run. + Command *string + + // AuthToken issued for AKS AAD Server App. + ClusterToken *string + + // A base64 encoded zip file containing the files required by the command. + Context *string +} + +// RunCommandResult - run command result. +type RunCommandResult struct { + // Properties of command result. + Properties *CommandResultProperties + + // READ-ONLY; The command id. + ID *string +} + +// SSHConfiguration - SSH configuration for Linux-based VMs running on Azure. 
+type SSHConfiguration struct {
+ // REQUIRED; The list of SSH public keys used to authenticate with Linux-based VMs. A maximum of 1 key may be specified.
+ PublicKeys []*SSHPublicKey
+}
+
+// SSHPublicKey - Contains information about SSH certificate public key data.
+type SSHPublicKey struct {
+ // REQUIRED; Certificate public key used to authenticate with VMs through SSH. The certificate must be in PEM format with
+ // or without headers.
+ KeyData *string
+}
+
+// SafeguardsAvailableVersion - Available Safeguards Version
+type SafeguardsAvailableVersion struct {
+ // REQUIRED; Whether the version is default or not and support info.
+ Properties *SafeguardsAvailableVersionsProperties
+
+ // READ-ONLY; Fully qualified resource ID for the resource. E.g. "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}"
+ ID *string
+
+ // READ-ONLY; The name of the resource
+ Name *string
+
+ // READ-ONLY; Azure Resource Manager metadata containing createdBy and modifiedBy information.
+ SystemData *SystemData
+
+ // READ-ONLY; The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
+ Type *string
+}
+
+// SafeguardsAvailableVersionsList - Holds an array of SafeguardsAvailableVersions
+type SafeguardsAvailableVersionsList struct {
+ // Array of AKS supported Safeguards versions.
+ Value []*SafeguardsAvailableVersion
+
+ // READ-ONLY; The URL to get the next set of Safeguards available versions.
+ NextLink *string
+}
+
+// SafeguardsAvailableVersionsProperties - Whether the version is default or not and support info.
+type SafeguardsAvailableVersionsProperties struct {
+ // READ-ONLY
+ IsDefaultVersion *bool
+
+ // READ-ONLY; Whether the version is preview or stable.
+ Support *SafeguardsSupport
+}
+
+// SafeguardsProfile - The Safeguards profile.
+type SafeguardsProfile struct {
+ // REQUIRED; The Safeguards level to be used. By default, Safeguards is enabled for all namespaces except those that AKS excludes
+ // via systemExcludedNamespaces
+ Level *Level
+
+ // List of namespaces excluded from Safeguards checks
+ ExcludedNamespaces []*string
+
+ // The version of constraints to use
+ Version *string
+
+ // READ-ONLY; List of namespaces specified by AKS to be excluded from Safeguards
+ SystemExcludedNamespaces []*string
+}
+
+// ScaleProfile - Specifications on how to scale a VirtualMachines agent pool.
+type ScaleProfile struct {
+ // Specifications on how to scale the VirtualMachines agent pool to a fixed size.
+ Manual []*ManualScaleProfile
+}
+
+// Schedule - One and only one of the schedule types should be specified. Choose either 'daily', 'weekly', 'absoluteMonthly'
+// or 'relativeMonthly' for your maintenance schedule.
+type Schedule struct {
+ // For schedules like: 'recur every month on the 15th' or 'recur every 3 months on the 20th'.
+ AbsoluteMonthly *AbsoluteMonthlySchedule
+
+ // For schedules like: 'recur every day' or 'recur every 3 days'.
+ Daily *DailySchedule
+
+ // For schedules like: 'recur every month on the first Monday' or 'recur every 3 months on last Friday'.
+ RelativeMonthly *RelativeMonthlySchedule
+
+ // For schedules like: 'recur every Monday' or 'recur every 3 weeks on Wednesday'.
+ Weekly *WeeklySchedule
+}
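Because exactly one of Schedule's four variants may be non-nil, "recur every month on the first Monday" is expressed by setting only RelativeMonthly. A sketch; WeekDayMonday and TypeFirst follow the generated enum naming convention and are assumptions here:

    package example

    import (
        "github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
        armcontainerservice "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4"
    )

    // firstMondayMonthly encodes "recur every month on the first Monday".
    // The other three schedule fields must stay nil.
    func firstMondayMonthly() armcontainerservice.Schedule {
        return armcontainerservice.Schedule{
            RelativeMonthly: &armcontainerservice.RelativeMonthlySchedule{
                DayOfWeek:      to.Ptr(armcontainerservice.WeekDayMonday), // assumed enum value
                IntervalMonths: to.Ptr[int32](1),
                WeekIndex:      to.Ptr(armcontainerservice.TypeFirst), // assumed enum value
            },
        }
    }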
+
+// ServiceMeshProfile - Service mesh profile for a managed cluster.
+type ServiceMeshProfile struct {
+ // REQUIRED; Mode of the service mesh.
+ Mode *ServiceMeshMode
+
+ // Istio service mesh configuration.
+ Istio *IstioServiceMesh
+}
+
+// Snapshot - A node pool snapshot resource.
+type Snapshot struct {
+ // REQUIRED; The geo-location where the resource lives
+ Location *string
+
+ // Properties of a snapshot.
+ Properties *SnapshotProperties
+
+ // Resource tags.
+ Tags map[string]*string
+
+ // READ-ONLY; Fully qualified resource ID for the resource. E.g. "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}"
+ ID *string
+
+ // READ-ONLY; The name of the resource
+ Name *string
+
+ // READ-ONLY; Azure Resource Manager metadata containing createdBy and modifiedBy information.
+ SystemData *SystemData
+
+ // READ-ONLY; The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
+ Type *string
+}
+
+// SnapshotListResult - The response from the List Snapshots operation.
+type SnapshotListResult struct {
+ // The list of snapshots.
+ Value []*Snapshot
+
+ // READ-ONLY; The URL to get the next set of snapshot results.
+ NextLink *string
+}
+
+// SnapshotProperties - Properties used to configure a node pool snapshot.
+type SnapshotProperties struct {
+ // CreationData to be used to specify the source agent pool resource ID to create this snapshot.
+ CreationData *CreationData
+
+ // The type of a snapshot. The default is NodePool.
+ SnapshotType *SnapshotType
+
+ // READ-ONLY; Whether to use a FIPS-enabled OS.
+ EnableFIPS *bool
+
+ // READ-ONLY; The version of Kubernetes.
+ KubernetesVersion *string
+
+ // READ-ONLY; The version of node image.
+ NodeImageVersion *string
+
+ // READ-ONLY; Specifies the OS SKU used by the agent pool. If not specified, the default is Ubuntu if OSType=Linux or Windows2019
+ // if OSType=Windows. And the default Windows OSSKU will be changed to Windows2022
+ // after Windows2019 is deprecated.
+ OSSKU *OSSKU
+
+ // READ-ONLY; The operating system type. The default is Linux.
+ OSType *OSType
+
+ // READ-ONLY; The size of the VM.
+ VMSize *string
+}
+
+// SysctlConfig - Sysctl settings for Linux agent nodes.
+type SysctlConfig struct {
+ // Sysctl setting fs.aio-max-nr.
+ FsAioMaxNr *int32
+
+ // Sysctl setting fs.file-max.
+ FsFileMax *int32
+
+ // Sysctl setting fs.inotify.max_user_watches.
+ FsInotifyMaxUserWatches *int32
+
+ // Sysctl setting fs.nr_open.
+ FsNrOpen *int32
+
+ // Sysctl setting kernel.threads-max.
+ KernelThreadsMax *int32
+
+ // Sysctl setting net.core.netdev_max_backlog.
+ NetCoreNetdevMaxBacklog *int32
+
+ // Sysctl setting net.core.optmem_max.
+ NetCoreOptmemMax *int32
+
+ // Sysctl setting net.core.rmem_default.
+ NetCoreRmemDefault *int32
+
+ // Sysctl setting net.core.rmem_max.
+ NetCoreRmemMax *int32
+
+ // Sysctl setting net.core.somaxconn.
+ NetCoreSomaxconn *int32
+
+ // Sysctl setting net.core.wmem_default.
+ NetCoreWmemDefault *int32
+
+ // Sysctl setting net.core.wmem_max.
+ NetCoreWmemMax *int32
+
+ // Sysctl setting net.ipv4.ip_local_port_range.
+ NetIPv4IPLocalPortRange *string
+
+ // Sysctl setting net.ipv4.neigh.default.gc_thresh1.
+ NetIPv4NeighDefaultGcThresh1 *int32
+
+ // Sysctl setting net.ipv4.neigh.default.gc_thresh2.
+ NetIPv4NeighDefaultGcThresh2 *int32
+
+ // Sysctl setting net.ipv4.neigh.default.gc_thresh3.
+ NetIPv4NeighDefaultGcThresh3 *int32
+
+ // Sysctl setting net.ipv4.tcp_fin_timeout.
+ NetIPv4TCPFinTimeout *int32
+
+ // Sysctl setting net.ipv4.tcp_keepalive_probes.
+ NetIPv4TCPKeepaliveProbes *int32
+
+ // Sysctl setting net.ipv4.tcp_keepalive_time.
+ NetIPv4TCPKeepaliveTime *int32
+
+ // Sysctl setting net.ipv4.tcp_max_syn_backlog.
+ NetIPv4TCPMaxSynBacklog *int32
+
+ // Sysctl setting net.ipv4.tcp_max_tw_buckets.
+ NetIPv4TCPMaxTwBuckets *int32
+
+ // Sysctl setting net.ipv4.tcp_tw_reuse.
+ NetIPv4TCPTwReuse *bool
+
+ // Sysctl setting net.ipv4.tcp_keepalive_intvl.
+ NetIPv4TcpkeepaliveIntvl *int32
+
+ // Sysctl setting net.netfilter.nf_conntrack_buckets.
+ NetNetfilterNfConntrackBuckets *int32
+
+ // Sysctl setting net.netfilter.nf_conntrack_max.
+ NetNetfilterNfConntrackMax *int32
+
+ // Sysctl setting vm.max_map_count.
+ VMMaxMapCount *int32
+
+ // Sysctl setting vm.swappiness.
+ VMSwappiness *int32
+
+ // Sysctl setting vm.vfs_cache_pressure.
+ VMVfsCachePressure *int32
+}
+
+// SystemData - Metadata pertaining to creation and last modification of the resource.
+type SystemData struct {
+ // The timestamp of resource creation (UTC).
+ CreatedAt *time.Time
+
+ // The identity that created the resource.
+ CreatedBy *string
+
+ // The type of identity that created the resource.
+ CreatedByType *CreatedByType
+
+ // The timestamp of resource last modification (UTC)
+ LastModifiedAt *time.Time
+
+ // The identity that last modified the resource.
+ LastModifiedBy *string
+
+ // The type of identity that last modified the resource.
+ LastModifiedByType *CreatedByType
+}
+
+// TagsObject - Tags object for patch operations.
+type TagsObject struct {
+ // Resource tags.
+ Tags map[string]*string
+}
+
+// TimeInWeek - Time in a week.
+type TimeInWeek struct {
+ // The day of the week.
+ Day *WeekDay
+
+ // Each integer hour represents a time range beginning at 0m after the hour ending at the next hour (non-inclusive). 0 corresponds
+ // to 00:00 UTC, 23 corresponds to 23:00 UTC. Specifying [0, 1] means the
+ // 00:00 - 02:00 UTC time range.
+ HourSlots []*int32
+}
+
+// TimeSpan - For example, between 2021-05-25T13:00:00Z and 2021-05-25T14:00:00Z.
+type TimeSpan struct {
+ // The end of a time span
+ End *time.Time
+
+ // The start of a time span
+ Start *time.Time
+}
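A sketch of the HourSlots semantics above: each listed hour covers a one-hour range, so [0, 1] opens a window from 00:00 to 02:00 UTC (WeekDayMonday is an assumed enum constant, as before):

    package example

    import (
        "github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
        armcontainerservice "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4"
    )

    // mondayEarlyWindow allows maintenance on Mondays between 00:00 and
    // 02:00 UTC: slot 0 covers 00:00-01:00 and slot 1 covers 01:00-02:00.
    func mondayEarlyWindow() armcontainerservice.TimeInWeek {
        return armcontainerservice.TimeInWeek{
            Day:       to.Ptr(armcontainerservice.WeekDayMonday),
            HourSlots: []*int32{to.Ptr[int32](0), to.Ptr[int32](1)},
        }
    }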
+
+// TrustedAccessRole - Trusted access role definition.
+type TrustedAccessRole struct {
+ // READ-ONLY; Name of role, name is unique under a source resource type
+ Name *string
+
+ // READ-ONLY; List of rules for the role. This maps to 'rules' property of Kubernetes Cluster Role [https://kubernetes.io/docs/reference/kubernetes-api/authorization-resources/cluster-role-v1/#ClusterRole].
+ Rules []*TrustedAccessRoleRule
+
+ // READ-ONLY; Resource type of Azure resource
+ SourceResourceType *string
+}
+
+// TrustedAccessRoleBinding - Defines the binding between a resource and a role
+type TrustedAccessRoleBinding struct {
+ // REQUIRED; Properties for trusted access role binding
+ Properties *TrustedAccessRoleBindingProperties
+
+ // READ-ONLY; Fully qualified resource ID for the resource. E.g. "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/{resourceType}/{resourceName}"
+ ID *string
+
+ // READ-ONLY; The name of the resource
+ Name *string
+
+ // READ-ONLY; Azure Resource Manager metadata containing createdBy and modifiedBy information.
+ SystemData *SystemData
+
+ // READ-ONLY; The type of the resource. E.g. "Microsoft.Compute/virtualMachines" or "Microsoft.Storage/storageAccounts"
+ Type *string
+}
+
+// TrustedAccessRoleBindingListResult - List of trusted access role bindings
+type TrustedAccessRoleBindingListResult struct {
+ // Role binding list
+ Value []*TrustedAccessRoleBinding
+
+ // READ-ONLY; Link to next page of resources.
+ NextLink *string
+}
+
+// TrustedAccessRoleBindingProperties - Properties for trusted access role binding
+type TrustedAccessRoleBindingProperties struct {
+ // REQUIRED; A list of roles to bind; each item is a resource-type-qualified role name. For example: 'Microsoft.MachineLearningServices/workspaces/reader'.
+ Roles []*string
+
+ // REQUIRED; The ARM resource ID of the source resource that trusted access is configured for.
+ SourceResourceID *string
+
+ // READ-ONLY; The current provisioning state of trusted access role binding.
+ ProvisioningState *TrustedAccessRoleBindingProvisioningState
+}
+
+// TrustedAccessRoleListResult - List of trusted access roles
+type TrustedAccessRoleListResult struct {
+ // READ-ONLY; Link to next page of resources.
+ NextLink *string
+
+ // READ-ONLY; Role list
+ Value []*TrustedAccessRole
+}
+
+// TrustedAccessRoleRule - Rule for trusted access role
+type TrustedAccessRoleRule struct {
+ // READ-ONLY; List of allowed apiGroups
+ APIGroups []*string
+
+ // READ-ONLY; List of allowed nonResourceURLs
+ NonResourceURLs []*string
+
+ // READ-ONLY; List of allowed names
+ ResourceNames []*string
+
+ // READ-ONLY; List of allowed resources
+ Resources []*string
+
+ // READ-ONLY; List of allowed verbs
+ Verbs []*string
+}
+
+// UpgradeOverrideSettings - Settings for overrides when upgrading a cluster.
+type UpgradeOverrideSettings struct {
+ // Whether to force upgrade the cluster. Note that this option instructs the upgrade operation to bypass upgrade protections such
+ // as checking for deprecated API usage. Enable this option only with caution.
+ ForceUpgrade *bool
+
+ // Until when the overrides are effective. Note that this only matches the start time of an upgrade, and the effectiveness
+ // won't change once an upgrade starts even if the until value expires as the upgrade
+ // proceeds. This field is not set by default. It must be set for the overrides to take effect.
+ Until *time.Time
+}
+
+// UserAssignedIdentity - Details about a user assigned identity.
+type UserAssignedIdentity struct {
+ // The client ID of the user assigned identity.
+ ClientID *string
+
+ // The object ID of the user assigned identity.
+ ObjectID *string
+
+ // The resource ID of the user assigned identity.
+ ResourceID *string
+}
+
+// VirtualMachineNodes - Current status on a group of nodes of the same vm size.
+type VirtualMachineNodes struct {
+ // Number of nodes.
+ Count *int32
+
+ // The VM size of the agents used to host this group of nodes.
+ Size *string
+}
+
+// VirtualMachinesProfile - Specifications on VirtualMachines agent pool.
+type VirtualMachinesProfile struct {
+ // Specifications on how to scale a VirtualMachines agent pool.
+ Scale *ScaleProfile
+}
+
+// WeeklySchedule - For schedules like: 'recur every Monday' or 'recur every 3 weeks on Wednesday'.
+type WeeklySchedule struct {
+ // REQUIRED; Specifies on which day of the week the maintenance occurs.
+ DayOfWeek *WeekDay
+
+ // REQUIRED; Specifies the number of weeks between each set of occurrences.
+ IntervalWeeks *int32
+}
+
+// WindowsGmsaProfile - Windows gMSA Profile in the managed cluster.
+type WindowsGmsaProfile struct {
+ // Specifies the DNS server for Windows gMSA.
+ // Set it to empty if you have configured the DNS server in the vnet which is used to create the managed cluster.
+ DNSServer *string
+
+ // Specifies whether to enable Windows gMSA in the managed cluster.
+ Enabled *bool
+
+ // Specifies the root domain name for Windows gMSA.
+ // Set it to empty if you have configured the DNS server in the vnet which is used to create the managed cluster.
+ RootDomainName *string
+}
diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/models_serde.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/models_serde.go
new file mode 100644
index 000000000000..e9a87957a15b
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/models_serde.go
@@ -0,0 +1,7426 @@
+//go:build go1.18
+// +build go1.18
+
+// Copyright (c) Microsoft Corporation. All rights reserved.
+// Licensed under the MIT License. See License.txt in the project root for license information.
+// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT.
+// Changes may cause incorrect behavior and will be lost if the code is regenerated.
+
+package armcontainerservice
+
+import (
+ "encoding/json"
+ "fmt"
+ "github.com/Azure/azure-sdk-for-go/sdk/azcore"
+ "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime"
+ "reflect"
+)
+
+// MarshalJSON implements the json.Marshaller interface for type AbsoluteMonthlySchedule.
+func (a AbsoluteMonthlySchedule) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]any)
+ populate(objectMap, "dayOfMonth", a.DayOfMonth)
+ populate(objectMap, "intervalMonths", a.IntervalMonths)
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type AbsoluteMonthlySchedule.
+func (a *AbsoluteMonthlySchedule) UnmarshalJSON(data []byte) error {
+ var rawMsg map[string]json.RawMessage
+ if err := json.Unmarshal(data, &rawMsg); err != nil {
+ return fmt.Errorf("unmarshalling type %T: %v", a, err)
+ }
+ for key, val := range rawMsg {
+ var err error
+ switch key {
+ case "dayOfMonth":
+ err = unpopulate(val, "DayOfMonth", &a.DayOfMonth)
+ delete(rawMsg, key)
+ case "intervalMonths":
+ err = unpopulate(val, "IntervalMonths", &a.IntervalMonths)
+ delete(rawMsg, key)
+ }
+ if err != nil {
+ return fmt.Errorf("unmarshalling type %T: %v", a, err)
+ }
+ }
+ return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type AccessProfile.
+func (a AccessProfile) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]any)
+ populateByteArray(objectMap, "kubeConfig", a.KubeConfig, func() any {
+ return runtime.EncodeByteArray(a.KubeConfig, runtime.Base64StdFormat)
+ })
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type AccessProfile.
+func (a *AccessProfile) UnmarshalJSON(data []byte) error {
+ var rawMsg map[string]json.RawMessage
+ if err := json.Unmarshal(data, &rawMsg); err != nil {
+ return fmt.Errorf("unmarshalling type %T: %v", a, err)
+ }
+ for key, val := range rawMsg {
+ var err error
+ switch key {
+ case "kubeConfig":
+ if val != nil && string(val) != "null" {
+ err = runtime.DecodeByteArray(string(val), &a.KubeConfig, runtime.Base64StdFormat)
+ }
+ delete(rawMsg, key)
+ }
+ if err != nil {
+ return fmt.Errorf("unmarshalling type %T: %v", a, err)
+ }
+ }
+ return nil
+}
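All of the generated marshallers that follow share one shape: MarshalJSON writes the wire-format names into a map via populate (skipping nil fields), and UnmarshalJSON dispatches on those names, deleting each consumed key; []byte fields such as AccessProfile.KubeConfig additionally travel base64-encoded through runtime.EncodeByteArray/DecodeByteArray. Because these methods satisfy json.Marshaler and json.Unmarshaler, plain encoding/json picks them up automatically, as in this round-trip sketch:

    package main

    import (
        "encoding/json"
        "fmt"

        "github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
        armcontainerservice "github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4"
    )

    func main() {
        in := armcontainerservice.AbsoluteMonthlySchedule{
            DayOfMonth:     to.Ptr[int32](15),
            IntervalMonths: to.Ptr[int32](3),
        }
        // json.Marshal calls the generated MarshalJSON: wire names, nil fields omitted.
        b, err := json.Marshal(in)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(b)) // {"dayOfMonth":15,"intervalMonths":3}

        // json.Unmarshal dispatches to the generated UnmarshalJSON.
        var out armcontainerservice.AbsoluteMonthlySchedule
        if err := json.Unmarshal(b, &out); err != nil {
            panic(err)
        }
        fmt.Println(*out.DayOfMonth, *out.IntervalMonths) // 15 3
    }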
+// MarshalJSON implements the json.Marshaller interface for type AgentPool.
+func (a AgentPool) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]any)
+ populate(objectMap, "id", a.ID)
+ populate(objectMap, "name", a.Name)
+ populate(objectMap, "properties", a.Properties)
+ populate(objectMap, "type", a.Type)
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type AgentPool.
+func (a *AgentPool) UnmarshalJSON(data []byte) error {
+ var rawMsg map[string]json.RawMessage
+ if err := json.Unmarshal(data, &rawMsg); err != nil {
+ return fmt.Errorf("unmarshalling type %T: %v", a, err)
+ }
+ for key, val := range rawMsg {
+ var err error
+ switch key {
+ case "id":
+ err = unpopulate(val, "ID", &a.ID)
+ delete(rawMsg, key)
+ case "name":
+ err = unpopulate(val, "Name", &a.Name)
+ delete(rawMsg, key)
+ case "properties":
+ err = unpopulate(val, "Properties", &a.Properties)
+ delete(rawMsg, key)
+ case "type":
+ err = unpopulate(val, "Type", &a.Type)
+ delete(rawMsg, key)
+ }
+ if err != nil {
+ return fmt.Errorf("unmarshalling type %T: %v", a, err)
+ }
+ }
+ return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type AgentPoolArtifactStreamingProfile.
+func (a AgentPoolArtifactStreamingProfile) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]any)
+ populate(objectMap, "enabled", a.Enabled)
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type AgentPoolArtifactStreamingProfile.
+func (a *AgentPoolArtifactStreamingProfile) UnmarshalJSON(data []byte) error {
+ var rawMsg map[string]json.RawMessage
+ if err := json.Unmarshal(data, &rawMsg); err != nil {
+ return fmt.Errorf("unmarshalling type %T: %v", a, err)
+ }
+ for key, val := range rawMsg {
+ var err error
+ switch key {
+ case "enabled":
+ err = unpopulate(val, "Enabled", &a.Enabled)
+ delete(rawMsg, key)
+ }
+ if err != nil {
+ return fmt.Errorf("unmarshalling type %T: %v", a, err)
+ }
+ }
+ return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type AgentPoolAvailableVersions.
+func (a AgentPoolAvailableVersions) MarshalJSON() ([]byte, error) {
+ objectMap := make(map[string]any)
+ populate(objectMap, "id", a.ID)
+ populate(objectMap, "name", a.Name)
+ populate(objectMap, "properties", a.Properties)
+ populate(objectMap, "type", a.Type)
+ return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type AgentPoolAvailableVersions.
+func (a *AgentPoolAvailableVersions) UnmarshalJSON(data []byte) error {
+ var rawMsg map[string]json.RawMessage
+ if err := json.Unmarshal(data, &rawMsg); err != nil {
+ return fmt.Errorf("unmarshalling type %T: %v", a, err)
+ }
+ for key, val := range rawMsg {
+ var err error
+ switch key {
+ case "id":
+ err = unpopulate(val, "ID", &a.ID)
+ delete(rawMsg, key)
+ case "name":
+ err = unpopulate(val, "Name", &a.Name)
+ delete(rawMsg, key)
+ case "properties":
+ err = unpopulate(val, "Properties", &a.Properties)
+ delete(rawMsg, key)
+ case "type":
+ err = unpopulate(val, "Type", &a.Type)
+ delete(rawMsg, key)
+ }
+ if err != nil {
+ return fmt.Errorf("unmarshalling type %T: %v", a, err)
+ }
+ }
+ return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type AgentPoolAvailableVersionsProperties.
+func (a AgentPoolAvailableVersionsProperties) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "agentPoolVersions", a.AgentPoolVersions) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type AgentPoolAvailableVersionsProperties. +func (a *AgentPoolAvailableVersionsProperties) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "agentPoolVersions": + err = unpopulate(val, "AgentPoolVersions", &a.AgentPoolVersions) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type AgentPoolAvailableVersionsPropertiesAgentPoolVersionsItem. +func (a AgentPoolAvailableVersionsPropertiesAgentPoolVersionsItem) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "default", a.Default) + populate(objectMap, "isPreview", a.IsPreview) + populate(objectMap, "kubernetesVersion", a.KubernetesVersion) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type AgentPoolAvailableVersionsPropertiesAgentPoolVersionsItem. +func (a *AgentPoolAvailableVersionsPropertiesAgentPoolVersionsItem) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "default": + err = unpopulate(val, "Default", &a.Default) + delete(rawMsg, key) + case "isPreview": + err = unpopulate(val, "IsPreview", &a.IsPreview) + delete(rawMsg, key) + case "kubernetesVersion": + err = unpopulate(val, "KubernetesVersion", &a.KubernetesVersion) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type AgentPoolDeleteMachinesParameter. +func (a AgentPoolDeleteMachinesParameter) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "machineNames", a.MachineNames) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type AgentPoolDeleteMachinesParameter. +func (a *AgentPoolDeleteMachinesParameter) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "machineNames": + err = unpopulate(val, "MachineNames", &a.MachineNames) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type AgentPoolGPUProfile. +func (a AgentPoolGPUProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "installGPUDriver", a.InstallGPUDriver) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type AgentPoolGPUProfile. 
+func (a *AgentPoolGPUProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "installGPUDriver": + err = unpopulate(val, "InstallGPUDriver", &a.InstallGPUDriver) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type AgentPoolListResult. +func (a AgentPoolListResult) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "nextLink", a.NextLink) + populate(objectMap, "value", a.Value) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type AgentPoolListResult. +func (a *AgentPoolListResult) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "nextLink": + err = unpopulate(val, "NextLink", &a.NextLink) + delete(rawMsg, key) + case "value": + err = unpopulate(val, "Value", &a.Value) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type AgentPoolNetworkProfile. +func (a AgentPoolNetworkProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "allowedHostPorts", a.AllowedHostPorts) + populate(objectMap, "applicationSecurityGroups", a.ApplicationSecurityGroups) + populate(objectMap, "nodePublicIPTags", a.NodePublicIPTags) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type AgentPoolNetworkProfile. +func (a *AgentPoolNetworkProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "allowedHostPorts": + err = unpopulate(val, "AllowedHostPorts", &a.AllowedHostPorts) + delete(rawMsg, key) + case "applicationSecurityGroups": + err = unpopulate(val, "ApplicationSecurityGroups", &a.ApplicationSecurityGroups) + delete(rawMsg, key) + case "nodePublicIPTags": + err = unpopulate(val, "NodePublicIPTags", &a.NodePublicIPTags) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type AgentPoolSecurityProfile. +func (a AgentPoolSecurityProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "enableSecureBoot", a.EnableSecureBoot) + populate(objectMap, "enableVTPM", a.EnableVTPM) + populate(objectMap, "sshAccess", a.SSHAccess) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type AgentPoolSecurityProfile. 
+func (a *AgentPoolSecurityProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "enableSecureBoot": + err = unpopulate(val, "EnableSecureBoot", &a.EnableSecureBoot) + delete(rawMsg, key) + case "enableVTPM": + err = unpopulate(val, "EnableVTPM", &a.EnableVTPM) + delete(rawMsg, key) + case "sshAccess": + err = unpopulate(val, "SSHAccess", &a.SSHAccess) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type AgentPoolUpgradeProfile. +func (a AgentPoolUpgradeProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "id", a.ID) + populate(objectMap, "name", a.Name) + populate(objectMap, "properties", a.Properties) + populate(objectMap, "type", a.Type) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type AgentPoolUpgradeProfile. +func (a *AgentPoolUpgradeProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "id": + err = unpopulate(val, "ID", &a.ID) + delete(rawMsg, key) + case "name": + err = unpopulate(val, "Name", &a.Name) + delete(rawMsg, key) + case "properties": + err = unpopulate(val, "Properties", &a.Properties) + delete(rawMsg, key) + case "type": + err = unpopulate(val, "Type", &a.Type) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type AgentPoolUpgradeProfileProperties. +func (a AgentPoolUpgradeProfileProperties) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "kubernetesVersion", a.KubernetesVersion) + populate(objectMap, "latestNodeImageVersion", a.LatestNodeImageVersion) + populate(objectMap, "osType", a.OSType) + populate(objectMap, "upgrades", a.Upgrades) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type AgentPoolUpgradeProfileProperties. +func (a *AgentPoolUpgradeProfileProperties) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "kubernetesVersion": + err = unpopulate(val, "KubernetesVersion", &a.KubernetesVersion) + delete(rawMsg, key) + case "latestNodeImageVersion": + err = unpopulate(val, "LatestNodeImageVersion", &a.LatestNodeImageVersion) + delete(rawMsg, key) + case "osType": + err = unpopulate(val, "OSType", &a.OSType) + delete(rawMsg, key) + case "upgrades": + err = unpopulate(val, "Upgrades", &a.Upgrades) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type AgentPoolUpgradeProfilePropertiesUpgradesItem. 
+func (a AgentPoolUpgradeProfilePropertiesUpgradesItem) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "isPreview", a.IsPreview) + populate(objectMap, "kubernetesVersion", a.KubernetesVersion) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type AgentPoolUpgradeProfilePropertiesUpgradesItem. +func (a *AgentPoolUpgradeProfilePropertiesUpgradesItem) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "isPreview": + err = unpopulate(val, "IsPreview", &a.IsPreview) + delete(rawMsg, key) + case "kubernetesVersion": + err = unpopulate(val, "KubernetesVersion", &a.KubernetesVersion) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type AgentPoolUpgradeSettings. +func (a AgentPoolUpgradeSettings) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "drainTimeoutInMinutes", a.DrainTimeoutInMinutes) + populate(objectMap, "maxSurge", a.MaxSurge) + populate(objectMap, "nodeSoakDurationInMinutes", a.NodeSoakDurationInMinutes) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type AgentPoolUpgradeSettings. +func (a *AgentPoolUpgradeSettings) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "drainTimeoutInMinutes": + err = unpopulate(val, "DrainTimeoutInMinutes", &a.DrainTimeoutInMinutes) + delete(rawMsg, key) + case "maxSurge": + err = unpopulate(val, "MaxSurge", &a.MaxSurge) + delete(rawMsg, key) + case "nodeSoakDurationInMinutes": + err = unpopulate(val, "NodeSoakDurationInMinutes", &a.NodeSoakDurationInMinutes) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type AgentPoolWindowsProfile. +func (a AgentPoolWindowsProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "disableOutboundNat", a.DisableOutboundNat) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type AgentPoolWindowsProfile. +func (a *AgentPoolWindowsProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "disableOutboundNat": + err = unpopulate(val, "DisableOutboundNat", &a.DisableOutboundNat) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type AzureKeyVaultKms. 
+func (a AzureKeyVaultKms) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "enabled", a.Enabled) + populate(objectMap, "keyId", a.KeyID) + populate(objectMap, "keyVaultNetworkAccess", a.KeyVaultNetworkAccess) + populate(objectMap, "keyVaultResourceId", a.KeyVaultResourceID) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type AzureKeyVaultKms. +func (a *AzureKeyVaultKms) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "enabled": + err = unpopulate(val, "Enabled", &a.Enabled) + delete(rawMsg, key) + case "keyId": + err = unpopulate(val, "KeyID", &a.KeyID) + delete(rawMsg, key) + case "keyVaultNetworkAccess": + err = unpopulate(val, "KeyVaultNetworkAccess", &a.KeyVaultNetworkAccess) + delete(rawMsg, key) + case "keyVaultResourceId": + err = unpopulate(val, "KeyVaultResourceID", &a.KeyVaultResourceID) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", a, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ClusterUpgradeSettings. +func (c ClusterUpgradeSettings) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "overrideSettings", c.OverrideSettings) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ClusterUpgradeSettings. +func (c *ClusterUpgradeSettings) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", c, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "overrideSettings": + err = unpopulate(val, "OverrideSettings", &c.OverrideSettings) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", c, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type CommandResultProperties. +func (c CommandResultProperties) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "exitCode", c.ExitCode) + populateDateTimeRFC3339(objectMap, "finishedAt", c.FinishedAt) + populate(objectMap, "logs", c.Logs) + populate(objectMap, "provisioningState", c.ProvisioningState) + populate(objectMap, "reason", c.Reason) + populateDateTimeRFC3339(objectMap, "startedAt", c.StartedAt) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type CommandResultProperties. 
+func (c *CommandResultProperties) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", c, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "exitCode": + err = unpopulate(val, "ExitCode", &c.ExitCode) + delete(rawMsg, key) + case "finishedAt": + err = unpopulateDateTimeRFC3339(val, "FinishedAt", &c.FinishedAt) + delete(rawMsg, key) + case "logs": + err = unpopulate(val, "Logs", &c.Logs) + delete(rawMsg, key) + case "provisioningState": + err = unpopulate(val, "ProvisioningState", &c.ProvisioningState) + delete(rawMsg, key) + case "reason": + err = unpopulate(val, "Reason", &c.Reason) + delete(rawMsg, key) + case "startedAt": + err = unpopulateDateTimeRFC3339(val, "StartedAt", &c.StartedAt) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", c, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type CompatibleVersions. +func (c CompatibleVersions) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "name", c.Name) + populate(objectMap, "versions", c.Versions) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type CompatibleVersions. +func (c *CompatibleVersions) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", c, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "name": + err = unpopulate(val, "Name", &c.Name) + delete(rawMsg, key) + case "versions": + err = unpopulate(val, "Versions", &c.Versions) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", c, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type CreationData. +func (c CreationData) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "sourceResourceId", c.SourceResourceID) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type CreationData. +func (c *CreationData) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", c, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "sourceResourceId": + err = unpopulate(val, "SourceResourceID", &c.SourceResourceID) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", c, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type CredentialResult. +func (c CredentialResult) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "name", c.Name) + populateByteArray(objectMap, "value", c.Value, func() any { + return runtime.EncodeByteArray(c.Value, runtime.Base64StdFormat) + }) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type CredentialResult. 
+func (c *CredentialResult) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", c, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "name": + err = unpopulate(val, "Name", &c.Name) + delete(rawMsg, key) + case "value": + if val != nil && string(val) != "null" { + err = runtime.DecodeByteArray(string(val), &c.Value, runtime.Base64StdFormat) + } + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", c, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type CredentialResults. +func (c CredentialResults) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "kubeconfigs", c.Kubeconfigs) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type CredentialResults. +func (c *CredentialResults) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", c, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "kubeconfigs": + err = unpopulate(val, "Kubeconfigs", &c.Kubeconfigs) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", c, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type DailySchedule. +func (d DailySchedule) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "intervalDays", d.IntervalDays) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type DailySchedule. +func (d *DailySchedule) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", d, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "intervalDays": + err = unpopulate(val, "IntervalDays", &d.IntervalDays) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", d, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type DateSpan. +func (d DateSpan) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populateDateType(objectMap, "end", d.End) + populateDateType(objectMap, "start", d.Start) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type DateSpan. +func (d *DateSpan) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", d, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "end": + err = unpopulateDateType(val, "End", &d.End) + delete(rawMsg, key) + case "start": + err = unpopulateDateType(val, "Start", &d.Start) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", d, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type DelegatedResource. 
+func (d DelegatedResource) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "location", d.Location) + populate(objectMap, "referralResource", d.ReferralResource) + populate(objectMap, "resourceId", d.ResourceID) + populate(objectMap, "tenantId", d.TenantID) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type DelegatedResource. +func (d *DelegatedResource) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", d, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "location": + err = unpopulate(val, "Location", &d.Location) + delete(rawMsg, key) + case "referralResource": + err = unpopulate(val, "ReferralResource", &d.ReferralResource) + delete(rawMsg, key) + case "resourceId": + err = unpopulate(val, "ResourceID", &d.ResourceID) + delete(rawMsg, key) + case "tenantId": + err = unpopulate(val, "TenantID", &d.TenantID) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", d, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type EndpointDependency. +func (e EndpointDependency) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "domainName", e.DomainName) + populate(objectMap, "endpointDetails", e.EndpointDetails) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type EndpointDependency. +func (e *EndpointDependency) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", e, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "domainName": + err = unpopulate(val, "DomainName", &e.DomainName) + delete(rawMsg, key) + case "endpointDetails": + err = unpopulate(val, "EndpointDetails", &e.EndpointDetails) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", e, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type EndpointDetail. +func (e EndpointDetail) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "description", e.Description) + populate(objectMap, "ipAddress", e.IPAddress) + populate(objectMap, "port", e.Port) + populate(objectMap, "protocol", e.Protocol) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type EndpointDetail. +func (e *EndpointDetail) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", e, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "description": + err = unpopulate(val, "Description", &e.Description) + delete(rawMsg, key) + case "ipAddress": + err = unpopulate(val, "IPAddress", &e.IPAddress) + delete(rawMsg, key) + case "port": + err = unpopulate(val, "Port", &e.Port) + delete(rawMsg, key) + case "protocol": + err = unpopulate(val, "Protocol", &e.Protocol) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", e, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ErrorAdditionalInfo. 
+func (e ErrorAdditionalInfo) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populateAny(objectMap, "info", e.Info) + populate(objectMap, "type", e.Type) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ErrorAdditionalInfo. +func (e *ErrorAdditionalInfo) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", e, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "info": + err = unpopulate(val, "Info", &e.Info) + delete(rawMsg, key) + case "type": + err = unpopulate(val, "Type", &e.Type) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", e, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ErrorDetail. +func (e ErrorDetail) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "additionalInfo", e.AdditionalInfo) + populate(objectMap, "code", e.Code) + populate(objectMap, "details", e.Details) + populate(objectMap, "message", e.Message) + populate(objectMap, "target", e.Target) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ErrorDetail. +func (e *ErrorDetail) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", e, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "additionalInfo": + err = unpopulate(val, "AdditionalInfo", &e.AdditionalInfo) + delete(rawMsg, key) + case "code": + err = unpopulate(val, "Code", &e.Code) + delete(rawMsg, key) + case "details": + err = unpopulate(val, "Details", &e.Details) + delete(rawMsg, key) + case "message": + err = unpopulate(val, "Message", &e.Message) + delete(rawMsg, key) + case "target": + err = unpopulate(val, "Target", &e.Target) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", e, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ExtendedLocation. +func (e ExtendedLocation) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "name", e.Name) + populate(objectMap, "type", e.Type) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ExtendedLocation. +func (e *ExtendedLocation) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", e, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "name": + err = unpopulate(val, "Name", &e.Name) + delete(rawMsg, key) + case "type": + err = unpopulate(val, "Type", &e.Type) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", e, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type GuardrailsAvailableVersion. 
+func (g GuardrailsAvailableVersion) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "id", g.ID) + populate(objectMap, "name", g.Name) + populate(objectMap, "properties", g.Properties) + populate(objectMap, "systemData", g.SystemData) + populate(objectMap, "type", g.Type) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type GuardrailsAvailableVersion. +func (g *GuardrailsAvailableVersion) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", g, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "id": + err = unpopulate(val, "ID", &g.ID) + delete(rawMsg, key) + case "name": + err = unpopulate(val, "Name", &g.Name) + delete(rawMsg, key) + case "properties": + err = unpopulate(val, "Properties", &g.Properties) + delete(rawMsg, key) + case "systemData": + err = unpopulate(val, "SystemData", &g.SystemData) + delete(rawMsg, key) + case "type": + err = unpopulate(val, "Type", &g.Type) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", g, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type GuardrailsAvailableVersionsList. +func (g GuardrailsAvailableVersionsList) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "nextLink", g.NextLink) + populate(objectMap, "value", g.Value) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type GuardrailsAvailableVersionsList. +func (g *GuardrailsAvailableVersionsList) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", g, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "nextLink": + err = unpopulate(val, "NextLink", &g.NextLink) + delete(rawMsg, key) + case "value": + err = unpopulate(val, "Value", &g.Value) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", g, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type GuardrailsAvailableVersionsProperties. +func (g GuardrailsAvailableVersionsProperties) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "isDefaultVersion", g.IsDefaultVersion) + populate(objectMap, "support", g.Support) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type GuardrailsAvailableVersionsProperties. +func (g *GuardrailsAvailableVersionsProperties) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", g, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "isDefaultVersion": + err = unpopulate(val, "IsDefaultVersion", &g.IsDefaultVersion) + delete(rawMsg, key) + case "support": + err = unpopulate(val, "Support", &g.Support) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", g, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type IPTag. 
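// A brief usage sketch for the pattern above, using IPTag (whose methods
// follow). This assumes IPTag is declared in this package with pointer-typed
// *string fields IPTagType and Tag, as the populate calls suggest; nil
// fields are simply omitted from the marshalled output.
func exampleIPTagRoundTrip() error {
	tagType := "RoutingPreference"
	in := IPTag{IPTagType: &tagType} // Tag left nil -> "tag" key omitted
	data, err := json.Marshal(in)    // {"ipTagType":"RoutingPreference"}
	if err != nil {
		return err
	}
	var out IPTag
	return json.Unmarshal(data, &out) // out.IPTagType set, out.Tag stays nil
}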
+func (i IPTag) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "ipTagType", i.IPTagType) + populate(objectMap, "tag", i.Tag) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type IPTag. +func (i *IPTag) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", i, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "ipTagType": + err = unpopulate(val, "IPTagType", &i.IPTagType) + delete(rawMsg, key) + case "tag": + err = unpopulate(val, "Tag", &i.Tag) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", i, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type IstioCertificateAuthority. +func (i IstioCertificateAuthority) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "plugin", i.Plugin) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type IstioCertificateAuthority. +func (i *IstioCertificateAuthority) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", i, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "plugin": + err = unpopulate(val, "Plugin", &i.Plugin) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", i, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type IstioComponents. +func (i IstioComponents) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "egressGateways", i.EgressGateways) + populate(objectMap, "ingressGateways", i.IngressGateways) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type IstioComponents. +func (i *IstioComponents) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", i, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "egressGateways": + err = unpopulate(val, "EgressGateways", &i.EgressGateways) + delete(rawMsg, key) + case "ingressGateways": + err = unpopulate(val, "IngressGateways", &i.IngressGateways) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", i, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type IstioEgressGateway. +func (i IstioEgressGateway) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "enabled", i.Enabled) + populate(objectMap, "nodeSelector", i.NodeSelector) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type IstioEgressGateway. 
+func (i *IstioEgressGateway) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", i, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "enabled": + err = unpopulate(val, "Enabled", &i.Enabled) + delete(rawMsg, key) + case "nodeSelector": + err = unpopulate(val, "NodeSelector", &i.NodeSelector) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", i, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type IstioIngressGateway. +func (i IstioIngressGateway) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "enabled", i.Enabled) + populate(objectMap, "mode", i.Mode) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type IstioIngressGateway. +func (i *IstioIngressGateway) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", i, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "enabled": + err = unpopulate(val, "Enabled", &i.Enabled) + delete(rawMsg, key) + case "mode": + err = unpopulate(val, "Mode", &i.Mode) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", i, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type IstioPluginCertificateAuthority. +func (i IstioPluginCertificateAuthority) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "certChainObjectName", i.CertChainObjectName) + populate(objectMap, "certObjectName", i.CertObjectName) + populate(objectMap, "keyObjectName", i.KeyObjectName) + populate(objectMap, "keyVaultId", i.KeyVaultID) + populate(objectMap, "rootCertObjectName", i.RootCertObjectName) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type IstioPluginCertificateAuthority. +func (i *IstioPluginCertificateAuthority) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", i, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "certChainObjectName": + err = unpopulate(val, "CertChainObjectName", &i.CertChainObjectName) + delete(rawMsg, key) + case "certObjectName": + err = unpopulate(val, "CertObjectName", &i.CertObjectName) + delete(rawMsg, key) + case "keyObjectName": + err = unpopulate(val, "KeyObjectName", &i.KeyObjectName) + delete(rawMsg, key) + case "keyVaultId": + err = unpopulate(val, "KeyVaultID", &i.KeyVaultID) + delete(rawMsg, key) + case "rootCertObjectName": + err = unpopulate(val, "RootCertObjectName", &i.RootCertObjectName) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", i, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type IstioServiceMesh. 
+func (i IstioServiceMesh) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "certificateAuthority", i.CertificateAuthority) + populate(objectMap, "components", i.Components) + populate(objectMap, "revisions", i.Revisions) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type IstioServiceMesh. +func (i *IstioServiceMesh) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", i, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "certificateAuthority": + err = unpopulate(val, "CertificateAuthority", &i.CertificateAuthority) + delete(rawMsg, key) + case "components": + err = unpopulate(val, "Components", &i.Components) + delete(rawMsg, key) + case "revisions": + err = unpopulate(val, "Revisions", &i.Revisions) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", i, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type KubeletConfig. +func (k KubeletConfig) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "allowedUnsafeSysctls", k.AllowedUnsafeSysctls) + populate(objectMap, "cpuCfsQuota", k.CPUCfsQuota) + populate(objectMap, "cpuCfsQuotaPeriod", k.CPUCfsQuotaPeriod) + populate(objectMap, "cpuManagerPolicy", k.CPUManagerPolicy) + populate(objectMap, "containerLogMaxFiles", k.ContainerLogMaxFiles) + populate(objectMap, "containerLogMaxSizeMB", k.ContainerLogMaxSizeMB) + populate(objectMap, "failSwapOn", k.FailSwapOn) + populate(objectMap, "imageGcHighThreshold", k.ImageGcHighThreshold) + populate(objectMap, "imageGcLowThreshold", k.ImageGcLowThreshold) + populate(objectMap, "podMaxPids", k.PodMaxPids) + populate(objectMap, "topologyManagerPolicy", k.TopologyManagerPolicy) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type KubeletConfig. 
+func (k *KubeletConfig) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", k, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "allowedUnsafeSysctls": + err = unpopulate(val, "AllowedUnsafeSysctls", &k.AllowedUnsafeSysctls) + delete(rawMsg, key) + case "cpuCfsQuota": + err = unpopulate(val, "CPUCfsQuota", &k.CPUCfsQuota) + delete(rawMsg, key) + case "cpuCfsQuotaPeriod": + err = unpopulate(val, "CPUCfsQuotaPeriod", &k.CPUCfsQuotaPeriod) + delete(rawMsg, key) + case "cpuManagerPolicy": + err = unpopulate(val, "CPUManagerPolicy", &k.CPUManagerPolicy) + delete(rawMsg, key) + case "containerLogMaxFiles": + err = unpopulate(val, "ContainerLogMaxFiles", &k.ContainerLogMaxFiles) + delete(rawMsg, key) + case "containerLogMaxSizeMB": + err = unpopulate(val, "ContainerLogMaxSizeMB", &k.ContainerLogMaxSizeMB) + delete(rawMsg, key) + case "failSwapOn": + err = unpopulate(val, "FailSwapOn", &k.FailSwapOn) + delete(rawMsg, key) + case "imageGcHighThreshold": + err = unpopulate(val, "ImageGcHighThreshold", &k.ImageGcHighThreshold) + delete(rawMsg, key) + case "imageGcLowThreshold": + err = unpopulate(val, "ImageGcLowThreshold", &k.ImageGcLowThreshold) + delete(rawMsg, key) + case "podMaxPids": + err = unpopulate(val, "PodMaxPids", &k.PodMaxPids) + delete(rawMsg, key) + case "topologyManagerPolicy": + err = unpopulate(val, "TopologyManagerPolicy", &k.TopologyManagerPolicy) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", k, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type KubernetesPatchVersion. +func (k KubernetesPatchVersion) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "upgrades", k.Upgrades) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type KubernetesPatchVersion. +func (k *KubernetesPatchVersion) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", k, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "upgrades": + err = unpopulate(val, "Upgrades", &k.Upgrades) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", k, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type KubernetesVersion. +func (k KubernetesVersion) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "capabilities", k.Capabilities) + populate(objectMap, "isPreview", k.IsPreview) + populate(objectMap, "patchVersions", k.PatchVersions) + populate(objectMap, "version", k.Version) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type KubernetesVersion. 
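// One property of these switch-based decoders worth calling out: there is no
// default case, so keys the generated code does not recognize are skipped
// rather than rejected, which keeps older SDK builds tolerant of newer
// service payloads. A small illustration (KubernetesVersion follows below;
// this assumes its Version field is a *string):
func exampleUnknownKeysIgnored() error {
	payload := []byte(`{"version":"1.29.4","someFutureField":true}`)
	var v KubernetesVersion
	if err := json.Unmarshal(payload, &v); err != nil {
		return err // not reached: "someFutureField" matches no case and is dropped
	}
	_ = v.Version // "1.29.4"
	return nil
}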
+func (k *KubernetesVersion) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", k, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "capabilities": + err = unpopulate(val, "Capabilities", &k.Capabilities) + delete(rawMsg, key) + case "isPreview": + err = unpopulate(val, "IsPreview", &k.IsPreview) + delete(rawMsg, key) + case "patchVersions": + err = unpopulate(val, "PatchVersions", &k.PatchVersions) + delete(rawMsg, key) + case "version": + err = unpopulate(val, "Version", &k.Version) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", k, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type KubernetesVersionCapabilities. +func (k KubernetesVersionCapabilities) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "supportPlan", k.SupportPlan) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type KubernetesVersionCapabilities. +func (k *KubernetesVersionCapabilities) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", k, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "supportPlan": + err = unpopulate(val, "SupportPlan", &k.SupportPlan) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", k, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type KubernetesVersionListResult. +func (k KubernetesVersionListResult) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "values", k.Values) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type KubernetesVersionListResult. +func (k *KubernetesVersionListResult) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", k, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "values": + err = unpopulate(val, "Values", &k.Values) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", k, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type LinuxOSConfig. +func (l LinuxOSConfig) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "swapFileSizeMB", l.SwapFileSizeMB) + populate(objectMap, "sysctls", l.Sysctls) + populate(objectMap, "transparentHugePageDefrag", l.TransparentHugePageDefrag) + populate(objectMap, "transparentHugePageEnabled", l.TransparentHugePageEnabled) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type LinuxOSConfig. 
+func (l *LinuxOSConfig) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", l, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "swapFileSizeMB": + err = unpopulate(val, "SwapFileSizeMB", &l.SwapFileSizeMB) + delete(rawMsg, key) + case "sysctls": + err = unpopulate(val, "Sysctls", &l.Sysctls) + delete(rawMsg, key) + case "transparentHugePageDefrag": + err = unpopulate(val, "TransparentHugePageDefrag", &l.TransparentHugePageDefrag) + delete(rawMsg, key) + case "transparentHugePageEnabled": + err = unpopulate(val, "TransparentHugePageEnabled", &l.TransparentHugePageEnabled) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", l, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type LinuxProfile. +func (l LinuxProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "adminUsername", l.AdminUsername) + populate(objectMap, "ssh", l.SSH) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type LinuxProfile. +func (l *LinuxProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", l, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "adminUsername": + err = unpopulate(val, "AdminUsername", &l.AdminUsername) + delete(rawMsg, key) + case "ssh": + err = unpopulate(val, "SSH", &l.SSH) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", l, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type Machine. +func (m Machine) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "id", m.ID) + populate(objectMap, "name", m.Name) + populate(objectMap, "properties", m.Properties) + populate(objectMap, "type", m.Type) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type Machine. +func (m *Machine) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "id": + err = unpopulate(val, "ID", &m.ID) + delete(rawMsg, key) + case "name": + err = unpopulate(val, "Name", &m.Name) + delete(rawMsg, key) + case "properties": + err = unpopulate(val, "Properties", &m.Properties) + delete(rawMsg, key) + case "type": + err = unpopulate(val, "Type", &m.Type) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type MachineIPAddress. +func (m MachineIPAddress) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "family", m.Family) + populate(objectMap, "ip", m.IP) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type MachineIPAddress. 
+func (m *MachineIPAddress) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "family": + err = unpopulate(val, "Family", &m.Family) + delete(rawMsg, key) + case "ip": + err = unpopulate(val, "IP", &m.IP) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type MachineListResult. +func (m MachineListResult) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "nextLink", m.NextLink) + populate(objectMap, "value", m.Value) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type MachineListResult. +func (m *MachineListResult) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "nextLink": + err = unpopulate(val, "NextLink", &m.NextLink) + delete(rawMsg, key) + case "value": + err = unpopulate(val, "Value", &m.Value) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type MachineNetworkProperties. +func (m MachineNetworkProperties) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "ipAddresses", m.IPAddresses) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type MachineNetworkProperties. +func (m *MachineNetworkProperties) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "ipAddresses": + err = unpopulate(val, "IPAddresses", &m.IPAddresses) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type MachineProperties. +func (m MachineProperties) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "network", m.Network) + populate(objectMap, "resourceId", m.ResourceID) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type MachineProperties. +func (m *MachineProperties) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "network": + err = unpopulate(val, "Network", &m.Network) + delete(rawMsg, key) + case "resourceId": + err = unpopulate(val, "ResourceID", &m.ResourceID) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type MaintenanceConfiguration. 
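// MachineListResult above follows the usual ARM list shape: a Value slice of
// results plus an optional NextLink pointing at the following page. A hedged
// consumption sketch, assuming Value is []*Machine and NextLink is *string;
// fetchMachinePage is a hypothetical stand-in for whatever pager or HTTP
// call actually retrieves the next page.
func drainMachinePages(first MachineListResult) []*Machine {
	all := first.Value
	page := first
	for page.NextLink != nil && *page.NextLink != "" {
		page = fetchMachinePage(*page.NextLink) // hypothetical helper
		all = append(all, page.Value...)
	}
	return all
}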
+func (m MaintenanceConfiguration) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "id", m.ID) + populate(objectMap, "name", m.Name) + populate(objectMap, "properties", m.Properties) + populate(objectMap, "systemData", m.SystemData) + populate(objectMap, "type", m.Type) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type MaintenanceConfiguration. +func (m *MaintenanceConfiguration) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "id": + err = unpopulate(val, "ID", &m.ID) + delete(rawMsg, key) + case "name": + err = unpopulate(val, "Name", &m.Name) + delete(rawMsg, key) + case "properties": + err = unpopulate(val, "Properties", &m.Properties) + delete(rawMsg, key) + case "systemData": + err = unpopulate(val, "SystemData", &m.SystemData) + delete(rawMsg, key) + case "type": + err = unpopulate(val, "Type", &m.Type) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type MaintenanceConfigurationListResult. +func (m MaintenanceConfigurationListResult) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "nextLink", m.NextLink) + populate(objectMap, "value", m.Value) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type MaintenanceConfigurationListResult. +func (m *MaintenanceConfigurationListResult) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "nextLink": + err = unpopulate(val, "NextLink", &m.NextLink) + delete(rawMsg, key) + case "value": + err = unpopulate(val, "Value", &m.Value) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type MaintenanceConfigurationProperties. +func (m MaintenanceConfigurationProperties) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "maintenanceWindow", m.MaintenanceWindow) + populate(objectMap, "notAllowedTime", m.NotAllowedTime) + populate(objectMap, "timeInWeek", m.TimeInWeek) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type MaintenanceConfigurationProperties. 
+func (m *MaintenanceConfigurationProperties) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "maintenanceWindow": + err = unpopulate(val, "MaintenanceWindow", &m.MaintenanceWindow) + delete(rawMsg, key) + case "notAllowedTime": + err = unpopulate(val, "NotAllowedTime", &m.NotAllowedTime) + delete(rawMsg, key) + case "timeInWeek": + err = unpopulate(val, "TimeInWeek", &m.TimeInWeek) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type MaintenanceWindow. +func (m MaintenanceWindow) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "durationHours", m.DurationHours) + populate(objectMap, "notAllowedDates", m.NotAllowedDates) + populate(objectMap, "schedule", m.Schedule) + populateDateType(objectMap, "startDate", m.StartDate) + populate(objectMap, "startTime", m.StartTime) + populate(objectMap, "utcOffset", m.UTCOffset) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type MaintenanceWindow. +func (m *MaintenanceWindow) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "durationHours": + err = unpopulate(val, "DurationHours", &m.DurationHours) + delete(rawMsg, key) + case "notAllowedDates": + err = unpopulate(val, "NotAllowedDates", &m.NotAllowedDates) + delete(rawMsg, key) + case "schedule": + err = unpopulate(val, "Schedule", &m.Schedule) + delete(rawMsg, key) + case "startDate": + err = unpopulateDateType(val, "StartDate", &m.StartDate) + delete(rawMsg, key) + case "startTime": + err = unpopulate(val, "StartTime", &m.StartTime) + delete(rawMsg, key) + case "utcOffset": + err = unpopulate(val, "UTCOffset", &m.UTCOffset) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedCluster. +func (m ManagedCluster) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "extendedLocation", m.ExtendedLocation) + populate(objectMap, "id", m.ID) + populate(objectMap, "identity", m.Identity) + populate(objectMap, "location", m.Location) + populate(objectMap, "name", m.Name) + populate(objectMap, "properties", m.Properties) + populate(objectMap, "sku", m.SKU) + populate(objectMap, "systemData", m.SystemData) + populate(objectMap, "tags", m.Tags) + populate(objectMap, "type", m.Type) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedCluster. 
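// The startDate handling above goes through populateDateType and
// unpopulateDateType rather than the plain helpers because the service
// exchanges calendar dates ("2006-01-02") while the Go structs hold
// *time.Time. A minimal sketch of the assumed behaviour, presuming a time
// import is available (the generated file's real helpers wrap the value in
// a dedicated date type with matching MarshalJSON/UnmarshalJSON):
func populateDateTypeSketch(m map[string]any, key string, t *time.Time) {
	if t == nil {
		return
	}
	m[key] = t.Format("2006-01-02") // date only, no time-of-day component
}

func unpopulateDateTypeSketch(data json.RawMessage, field string, t **time.Time) error {
	if data == nil || string(data) == "null" {
		return nil
	}
	var s string
	if err := json.Unmarshal(data, &s); err != nil {
		return fmt.Errorf("struct field %s: %v", field, err)
	}
	parsed, err := time.Parse("2006-01-02", s)
	if err != nil {
		return fmt.Errorf("struct field %s: %v", field, err)
	}
	*t = &parsed
	return nil
}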
+func (m *ManagedCluster) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "extendedLocation": + err = unpopulate(val, "ExtendedLocation", &m.ExtendedLocation) + delete(rawMsg, key) + case "id": + err = unpopulate(val, "ID", &m.ID) + delete(rawMsg, key) + case "identity": + err = unpopulate(val, "Identity", &m.Identity) + delete(rawMsg, key) + case "location": + err = unpopulate(val, "Location", &m.Location) + delete(rawMsg, key) + case "name": + err = unpopulate(val, "Name", &m.Name) + delete(rawMsg, key) + case "properties": + err = unpopulate(val, "Properties", &m.Properties) + delete(rawMsg, key) + case "sku": + err = unpopulate(val, "SKU", &m.SKU) + delete(rawMsg, key) + case "systemData": + err = unpopulate(val, "SystemData", &m.SystemData) + delete(rawMsg, key) + case "tags": + err = unpopulate(val, "Tags", &m.Tags) + delete(rawMsg, key) + case "type": + err = unpopulate(val, "Type", &m.Type) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterAADProfile. +func (m ManagedClusterAADProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "adminGroupObjectIDs", m.AdminGroupObjectIDs) + populate(objectMap, "clientAppID", m.ClientAppID) + populate(objectMap, "enableAzureRBAC", m.EnableAzureRBAC) + populate(objectMap, "managed", m.Managed) + populate(objectMap, "serverAppID", m.ServerAppID) + populate(objectMap, "serverAppSecret", m.ServerAppSecret) + populate(objectMap, "tenantID", m.TenantID) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterAADProfile. +func (m *ManagedClusterAADProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "adminGroupObjectIDs": + err = unpopulate(val, "AdminGroupObjectIDs", &m.AdminGroupObjectIDs) + delete(rawMsg, key) + case "clientAppID": + err = unpopulate(val, "ClientAppID", &m.ClientAppID) + delete(rawMsg, key) + case "enableAzureRBAC": + err = unpopulate(val, "EnableAzureRBAC", &m.EnableAzureRBAC) + delete(rawMsg, key) + case "managed": + err = unpopulate(val, "Managed", &m.Managed) + delete(rawMsg, key) + case "serverAppID": + err = unpopulate(val, "ServerAppID", &m.ServerAppID) + delete(rawMsg, key) + case "serverAppSecret": + err = unpopulate(val, "ServerAppSecret", &m.ServerAppSecret) + delete(rawMsg, key) + case "tenantID": + err = unpopulate(val, "TenantID", &m.TenantID) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterAIToolchainOperatorProfile. +func (m ManagedClusterAIToolchainOperatorProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "enabled", m.Enabled) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterAIToolchainOperatorProfile. 
+func (m *ManagedClusterAIToolchainOperatorProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "enabled": + err = unpopulate(val, "Enabled", &m.Enabled) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterAPIServerAccessProfile. +func (m ManagedClusterAPIServerAccessProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "authorizedIPRanges", m.AuthorizedIPRanges) + populate(objectMap, "disableRunCommand", m.DisableRunCommand) + populate(objectMap, "enablePrivateCluster", m.EnablePrivateCluster) + populate(objectMap, "enablePrivateClusterPublicFQDN", m.EnablePrivateClusterPublicFQDN) + populate(objectMap, "enableVnetIntegration", m.EnableVnetIntegration) + populate(objectMap, "privateDNSZone", m.PrivateDNSZone) + populate(objectMap, "subnetId", m.SubnetID) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterAPIServerAccessProfile. +func (m *ManagedClusterAPIServerAccessProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "authorizedIPRanges": + err = unpopulate(val, "AuthorizedIPRanges", &m.AuthorizedIPRanges) + delete(rawMsg, key) + case "disableRunCommand": + err = unpopulate(val, "DisableRunCommand", &m.DisableRunCommand) + delete(rawMsg, key) + case "enablePrivateCluster": + err = unpopulate(val, "EnablePrivateCluster", &m.EnablePrivateCluster) + delete(rawMsg, key) + case "enablePrivateClusterPublicFQDN": + err = unpopulate(val, "EnablePrivateClusterPublicFQDN", &m.EnablePrivateClusterPublicFQDN) + delete(rawMsg, key) + case "enableVnetIntegration": + err = unpopulate(val, "EnableVnetIntegration", &m.EnableVnetIntegration) + delete(rawMsg, key) + case "privateDNSZone": + err = unpopulate(val, "PrivateDNSZone", &m.PrivateDNSZone) + delete(rawMsg, key) + case "subnetId": + err = unpopulate(val, "SubnetID", &m.SubnetID) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterAccessProfile. +func (m ManagedClusterAccessProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "id", m.ID) + populate(objectMap, "location", m.Location) + populate(objectMap, "name", m.Name) + populate(objectMap, "properties", m.Properties) + populate(objectMap, "systemData", m.SystemData) + populate(objectMap, "tags", m.Tags) + populate(objectMap, "type", m.Type) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterAccessProfile. 
+func (m *ManagedClusterAccessProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "id": + err = unpopulate(val, "ID", &m.ID) + delete(rawMsg, key) + case "location": + err = unpopulate(val, "Location", &m.Location) + delete(rawMsg, key) + case "name": + err = unpopulate(val, "Name", &m.Name) + delete(rawMsg, key) + case "properties": + err = unpopulate(val, "Properties", &m.Properties) + delete(rawMsg, key) + case "systemData": + err = unpopulate(val, "SystemData", &m.SystemData) + delete(rawMsg, key) + case "tags": + err = unpopulate(val, "Tags", &m.Tags) + delete(rawMsg, key) + case "type": + err = unpopulate(val, "Type", &m.Type) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterAddonProfile. +func (m ManagedClusterAddonProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "config", m.Config) + populate(objectMap, "enabled", m.Enabled) + populate(objectMap, "identity", m.Identity) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterAddonProfile. +func (m *ManagedClusterAddonProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "config": + err = unpopulate(val, "Config", &m.Config) + delete(rawMsg, key) + case "enabled": + err = unpopulate(val, "Enabled", &m.Enabled) + delete(rawMsg, key) + case "identity": + err = unpopulate(val, "Identity", &m.Identity) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterAddonProfileIdentity. +func (m ManagedClusterAddonProfileIdentity) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "clientId", m.ClientID) + populate(objectMap, "objectId", m.ObjectID) + populate(objectMap, "resourceId", m.ResourceID) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterAddonProfileIdentity. +func (m *ManagedClusterAddonProfileIdentity) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "clientId": + err = unpopulate(val, "ClientID", &m.ClientID) + delete(rawMsg, key) + case "objectId": + err = unpopulate(val, "ObjectID", &m.ObjectID) + delete(rawMsg, key) + case "resourceId": + err = unpopulate(val, "ResourceID", &m.ResourceID) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterAgentPoolProfile. 
+func (m ManagedClusterAgentPoolProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "artifactStreamingProfile", m.ArtifactStreamingProfile) + populate(objectMap, "availabilityZones", m.AvailabilityZones) + populate(objectMap, "capacityReservationGroupID", m.CapacityReservationGroupID) + populate(objectMap, "count", m.Count) + populate(objectMap, "creationData", m.CreationData) + populate(objectMap, "currentOrchestratorVersion", m.CurrentOrchestratorVersion) + populate(objectMap, "enableAutoScaling", m.EnableAutoScaling) + populate(objectMap, "enableCustomCATrust", m.EnableCustomCATrust) + populate(objectMap, "enableEncryptionAtHost", m.EnableEncryptionAtHost) + populate(objectMap, "enableFIPS", m.EnableFIPS) + populate(objectMap, "enableNodePublicIP", m.EnableNodePublicIP) + populate(objectMap, "enableUltraSSD", m.EnableUltraSSD) + populate(objectMap, "gpuInstanceProfile", m.GpuInstanceProfile) + populate(objectMap, "gpuProfile", m.GpuProfile) + populate(objectMap, "hostGroupID", m.HostGroupID) + populate(objectMap, "kubeletConfig", m.KubeletConfig) + populate(objectMap, "kubeletDiskType", m.KubeletDiskType) + populate(objectMap, "linuxOSConfig", m.LinuxOSConfig) + populate(objectMap, "maxCount", m.MaxCount) + populate(objectMap, "maxPods", m.MaxPods) + populate(objectMap, "messageOfTheDay", m.MessageOfTheDay) + populate(objectMap, "minCount", m.MinCount) + populate(objectMap, "mode", m.Mode) + populate(objectMap, "name", m.Name) + populate(objectMap, "networkProfile", m.NetworkProfile) + populate(objectMap, "nodeImageVersion", m.NodeImageVersion) + populate(objectMap, "nodeInitializationTaints", m.NodeInitializationTaints) + populate(objectMap, "nodeLabels", m.NodeLabels) + populate(objectMap, "nodePublicIPPrefixID", m.NodePublicIPPrefixID) + populate(objectMap, "nodeTaints", m.NodeTaints) + populate(objectMap, "osDiskSizeGB", m.OSDiskSizeGB) + populate(objectMap, "osDiskType", m.OSDiskType) + populate(objectMap, "osSKU", m.OSSKU) + populate(objectMap, "osType", m.OSType) + populate(objectMap, "orchestratorVersion", m.OrchestratorVersion) + populate(objectMap, "podSubnetID", m.PodSubnetID) + populate(objectMap, "powerState", m.PowerState) + populate(objectMap, "provisioningState", m.ProvisioningState) + populate(objectMap, "proximityPlacementGroupID", m.ProximityPlacementGroupID) + populate(objectMap, "scaleDownMode", m.ScaleDownMode) + populate(objectMap, "scaleSetEvictionPolicy", m.ScaleSetEvictionPolicy) + populate(objectMap, "scaleSetPriority", m.ScaleSetPriority) + populate(objectMap, "securityProfile", m.SecurityProfile) + populate(objectMap, "spotMaxPrice", m.SpotMaxPrice) + populate(objectMap, "tags", m.Tags) + populate(objectMap, "type", m.Type) + populate(objectMap, "upgradeSettings", m.UpgradeSettings) + populate(objectMap, "vmSize", m.VMSize) + populate(objectMap, "virtualMachineNodesStatus", m.VirtualMachineNodesStatus) + populate(objectMap, "virtualMachinesProfile", m.VirtualMachinesProfile) + populate(objectMap, "vnetSubnetID", m.VnetSubnetID) + populate(objectMap, "windowsProfile", m.WindowsProfile) + populate(objectMap, "workloadRuntime", m.WorkloadRuntime) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterAgentPoolProfile. 
+func (m *ManagedClusterAgentPoolProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "artifactStreamingProfile": + err = unpopulate(val, "ArtifactStreamingProfile", &m.ArtifactStreamingProfile) + delete(rawMsg, key) + case "availabilityZones": + err = unpopulate(val, "AvailabilityZones", &m.AvailabilityZones) + delete(rawMsg, key) + case "capacityReservationGroupID": + err = unpopulate(val, "CapacityReservationGroupID", &m.CapacityReservationGroupID) + delete(rawMsg, key) + case "count": + err = unpopulate(val, "Count", &m.Count) + delete(rawMsg, key) + case "creationData": + err = unpopulate(val, "CreationData", &m.CreationData) + delete(rawMsg, key) + case "currentOrchestratorVersion": + err = unpopulate(val, "CurrentOrchestratorVersion", &m.CurrentOrchestratorVersion) + delete(rawMsg, key) + case "enableAutoScaling": + err = unpopulate(val, "EnableAutoScaling", &m.EnableAutoScaling) + delete(rawMsg, key) + case "enableCustomCATrust": + err = unpopulate(val, "EnableCustomCATrust", &m.EnableCustomCATrust) + delete(rawMsg, key) + case "enableEncryptionAtHost": + err = unpopulate(val, "EnableEncryptionAtHost", &m.EnableEncryptionAtHost) + delete(rawMsg, key) + case "enableFIPS": + err = unpopulate(val, "EnableFIPS", &m.EnableFIPS) + delete(rawMsg, key) + case "enableNodePublicIP": + err = unpopulate(val, "EnableNodePublicIP", &m.EnableNodePublicIP) + delete(rawMsg, key) + case "enableUltraSSD": + err = unpopulate(val, "EnableUltraSSD", &m.EnableUltraSSD) + delete(rawMsg, key) + case "gpuInstanceProfile": + err = unpopulate(val, "GpuInstanceProfile", &m.GpuInstanceProfile) + delete(rawMsg, key) + case "gpuProfile": + err = unpopulate(val, "GpuProfile", &m.GpuProfile) + delete(rawMsg, key) + case "hostGroupID": + err = unpopulate(val, "HostGroupID", &m.HostGroupID) + delete(rawMsg, key) + case "kubeletConfig": + err = unpopulate(val, "KubeletConfig", &m.KubeletConfig) + delete(rawMsg, key) + case "kubeletDiskType": + err = unpopulate(val, "KubeletDiskType", &m.KubeletDiskType) + delete(rawMsg, key) + case "linuxOSConfig": + err = unpopulate(val, "LinuxOSConfig", &m.LinuxOSConfig) + delete(rawMsg, key) + case "maxCount": + err = unpopulate(val, "MaxCount", &m.MaxCount) + delete(rawMsg, key) + case "maxPods": + err = unpopulate(val, "MaxPods", &m.MaxPods) + delete(rawMsg, key) + case "messageOfTheDay": + err = unpopulate(val, "MessageOfTheDay", &m.MessageOfTheDay) + delete(rawMsg, key) + case "minCount": + err = unpopulate(val, "MinCount", &m.MinCount) + delete(rawMsg, key) + case "mode": + err = unpopulate(val, "Mode", &m.Mode) + delete(rawMsg, key) + case "name": + err = unpopulate(val, "Name", &m.Name) + delete(rawMsg, key) + case "networkProfile": + err = unpopulate(val, "NetworkProfile", &m.NetworkProfile) + delete(rawMsg, key) + case "nodeImageVersion": + err = unpopulate(val, "NodeImageVersion", &m.NodeImageVersion) + delete(rawMsg, key) + case "nodeInitializationTaints": + err = unpopulate(val, "NodeInitializationTaints", &m.NodeInitializationTaints) + delete(rawMsg, key) + case "nodeLabels": + err = unpopulate(val, "NodeLabels", &m.NodeLabels) + delete(rawMsg, key) + case "nodePublicIPPrefixID": + err = unpopulate(val, "NodePublicIPPrefixID", &m.NodePublicIPPrefixID) + delete(rawMsg, key) + case "nodeTaints": + err = unpopulate(val, "NodeTaints", &m.NodeTaints) + 
delete(rawMsg, key) + case "osDiskSizeGB": + err = unpopulate(val, "OSDiskSizeGB", &m.OSDiskSizeGB) + delete(rawMsg, key) + case "osDiskType": + err = unpopulate(val, "OSDiskType", &m.OSDiskType) + delete(rawMsg, key) + case "osSKU": + err = unpopulate(val, "OSSKU", &m.OSSKU) + delete(rawMsg, key) + case "osType": + err = unpopulate(val, "OSType", &m.OSType) + delete(rawMsg, key) + case "orchestratorVersion": + err = unpopulate(val, "OrchestratorVersion", &m.OrchestratorVersion) + delete(rawMsg, key) + case "podSubnetID": + err = unpopulate(val, "PodSubnetID", &m.PodSubnetID) + delete(rawMsg, key) + case "powerState": + err = unpopulate(val, "PowerState", &m.PowerState) + delete(rawMsg, key) + case "provisioningState": + err = unpopulate(val, "ProvisioningState", &m.ProvisioningState) + delete(rawMsg, key) + case "proximityPlacementGroupID": + err = unpopulate(val, "ProximityPlacementGroupID", &m.ProximityPlacementGroupID) + delete(rawMsg, key) + case "scaleDownMode": + err = unpopulate(val, "ScaleDownMode", &m.ScaleDownMode) + delete(rawMsg, key) + case "scaleSetEvictionPolicy": + err = unpopulate(val, "ScaleSetEvictionPolicy", &m.ScaleSetEvictionPolicy) + delete(rawMsg, key) + case "scaleSetPriority": + err = unpopulate(val, "ScaleSetPriority", &m.ScaleSetPriority) + delete(rawMsg, key) + case "securityProfile": + err = unpopulate(val, "SecurityProfile", &m.SecurityProfile) + delete(rawMsg, key) + case "spotMaxPrice": + err = unpopulate(val, "SpotMaxPrice", &m.SpotMaxPrice) + delete(rawMsg, key) + case "tags": + err = unpopulate(val, "Tags", &m.Tags) + delete(rawMsg, key) + case "type": + err = unpopulate(val, "Type", &m.Type) + delete(rawMsg, key) + case "upgradeSettings": + err = unpopulate(val, "UpgradeSettings", &m.UpgradeSettings) + delete(rawMsg, key) + case "vmSize": + err = unpopulate(val, "VMSize", &m.VMSize) + delete(rawMsg, key) + case "virtualMachineNodesStatus": + err = unpopulate(val, "VirtualMachineNodesStatus", &m.VirtualMachineNodesStatus) + delete(rawMsg, key) + case "virtualMachinesProfile": + err = unpopulate(val, "VirtualMachinesProfile", &m.VirtualMachinesProfile) + delete(rawMsg, key) + case "vnetSubnetID": + err = unpopulate(val, "VnetSubnetID", &m.VnetSubnetID) + delete(rawMsg, key) + case "windowsProfile": + err = unpopulate(val, "WindowsProfile", &m.WindowsProfile) + delete(rawMsg, key) + case "workloadRuntime": + err = unpopulate(val, "WorkloadRuntime", &m.WorkloadRuntime) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterAgentPoolProfileProperties. 
+func (m ManagedClusterAgentPoolProfileProperties) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "artifactStreamingProfile", m.ArtifactStreamingProfile) + populate(objectMap, "availabilityZones", m.AvailabilityZones) + populate(objectMap, "capacityReservationGroupID", m.CapacityReservationGroupID) + populate(objectMap, "count", m.Count) + populate(objectMap, "creationData", m.CreationData) + populate(objectMap, "currentOrchestratorVersion", m.CurrentOrchestratorVersion) + populate(objectMap, "enableAutoScaling", m.EnableAutoScaling) + populate(objectMap, "enableCustomCATrust", m.EnableCustomCATrust) + populate(objectMap, "enableEncryptionAtHost", m.EnableEncryptionAtHost) + populate(objectMap, "enableFIPS", m.EnableFIPS) + populate(objectMap, "enableNodePublicIP", m.EnableNodePublicIP) + populate(objectMap, "enableUltraSSD", m.EnableUltraSSD) + populate(objectMap, "gpuInstanceProfile", m.GpuInstanceProfile) + populate(objectMap, "gpuProfile", m.GpuProfile) + populate(objectMap, "hostGroupID", m.HostGroupID) + populate(objectMap, "kubeletConfig", m.KubeletConfig) + populate(objectMap, "kubeletDiskType", m.KubeletDiskType) + populate(objectMap, "linuxOSConfig", m.LinuxOSConfig) + populate(objectMap, "maxCount", m.MaxCount) + populate(objectMap, "maxPods", m.MaxPods) + populate(objectMap, "messageOfTheDay", m.MessageOfTheDay) + populate(objectMap, "minCount", m.MinCount) + populate(objectMap, "mode", m.Mode) + populate(objectMap, "networkProfile", m.NetworkProfile) + populate(objectMap, "nodeImageVersion", m.NodeImageVersion) + populate(objectMap, "nodeInitializationTaints", m.NodeInitializationTaints) + populate(objectMap, "nodeLabels", m.NodeLabels) + populate(objectMap, "nodePublicIPPrefixID", m.NodePublicIPPrefixID) + populate(objectMap, "nodeTaints", m.NodeTaints) + populate(objectMap, "osDiskSizeGB", m.OSDiskSizeGB) + populate(objectMap, "osDiskType", m.OSDiskType) + populate(objectMap, "osSKU", m.OSSKU) + populate(objectMap, "osType", m.OSType) + populate(objectMap, "orchestratorVersion", m.OrchestratorVersion) + populate(objectMap, "podSubnetID", m.PodSubnetID) + populate(objectMap, "powerState", m.PowerState) + populate(objectMap, "provisioningState", m.ProvisioningState) + populate(objectMap, "proximityPlacementGroupID", m.ProximityPlacementGroupID) + populate(objectMap, "scaleDownMode", m.ScaleDownMode) + populate(objectMap, "scaleSetEvictionPolicy", m.ScaleSetEvictionPolicy) + populate(objectMap, "scaleSetPriority", m.ScaleSetPriority) + populate(objectMap, "securityProfile", m.SecurityProfile) + populate(objectMap, "spotMaxPrice", m.SpotMaxPrice) + populate(objectMap, "tags", m.Tags) + populate(objectMap, "type", m.Type) + populate(objectMap, "upgradeSettings", m.UpgradeSettings) + populate(objectMap, "vmSize", m.VMSize) + populate(objectMap, "virtualMachineNodesStatus", m.VirtualMachineNodesStatus) + populate(objectMap, "virtualMachinesProfile", m.VirtualMachinesProfile) + populate(objectMap, "vnetSubnetID", m.VnetSubnetID) + populate(objectMap, "windowsProfile", m.WindowsProfile) + populate(objectMap, "workloadRuntime", m.WorkloadRuntime) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterAgentPoolProfileProperties. 
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterAgentPoolProfileProperties.
+func (m *ManagedClusterAgentPoolProfileProperties) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "artifactStreamingProfile":
+			err = unpopulate(val, "ArtifactStreamingProfile", &m.ArtifactStreamingProfile)
+			delete(rawMsg, key)
+		case "availabilityZones":
+			err = unpopulate(val, "AvailabilityZones", &m.AvailabilityZones)
+			delete(rawMsg, key)
+		case "capacityReservationGroupID":
+			err = unpopulate(val, "CapacityReservationGroupID", &m.CapacityReservationGroupID)
+			delete(rawMsg, key)
+		case "count":
+			err = unpopulate(val, "Count", &m.Count)
+			delete(rawMsg, key)
+		case "creationData":
+			err = unpopulate(val, "CreationData", &m.CreationData)
+			delete(rawMsg, key)
+		case "currentOrchestratorVersion":
+			err = unpopulate(val, "CurrentOrchestratorVersion", &m.CurrentOrchestratorVersion)
+			delete(rawMsg, key)
+		case "enableAutoScaling":
+			err = unpopulate(val, "EnableAutoScaling", &m.EnableAutoScaling)
+			delete(rawMsg, key)
+		case "enableCustomCATrust":
+			err = unpopulate(val, "EnableCustomCATrust", &m.EnableCustomCATrust)
+			delete(rawMsg, key)
+		case "enableEncryptionAtHost":
+			err = unpopulate(val, "EnableEncryptionAtHost", &m.EnableEncryptionAtHost)
+			delete(rawMsg, key)
+		case "enableFIPS":
+			err = unpopulate(val, "EnableFIPS", &m.EnableFIPS)
+			delete(rawMsg, key)
+		case "enableNodePublicIP":
+			err = unpopulate(val, "EnableNodePublicIP", &m.EnableNodePublicIP)
+			delete(rawMsg, key)
+		case "enableUltraSSD":
+			err = unpopulate(val, "EnableUltraSSD", &m.EnableUltraSSD)
+			delete(rawMsg, key)
+		case "gpuInstanceProfile":
+			err = unpopulate(val, "GpuInstanceProfile", &m.GpuInstanceProfile)
+			delete(rawMsg, key)
+		case "gpuProfile":
+			err = unpopulate(val, "GpuProfile", &m.GpuProfile)
+			delete(rawMsg, key)
+		case "hostGroupID":
+			err = unpopulate(val, "HostGroupID", &m.HostGroupID)
+			delete(rawMsg, key)
+		case "kubeletConfig":
+			err = unpopulate(val, "KubeletConfig", &m.KubeletConfig)
+			delete(rawMsg, key)
+		case "kubeletDiskType":
+			err = unpopulate(val, "KubeletDiskType", &m.KubeletDiskType)
+			delete(rawMsg, key)
+		case "linuxOSConfig":
+			err = unpopulate(val, "LinuxOSConfig", &m.LinuxOSConfig)
+			delete(rawMsg, key)
+		case "maxCount":
+			err = unpopulate(val, "MaxCount", &m.MaxCount)
+			delete(rawMsg, key)
+		case "maxPods":
+			err = unpopulate(val, "MaxPods", &m.MaxPods)
+			delete(rawMsg, key)
+		case "messageOfTheDay":
+			err = unpopulate(val, "MessageOfTheDay", &m.MessageOfTheDay)
+			delete(rawMsg, key)
+		case "minCount":
+			err = unpopulate(val, "MinCount", &m.MinCount)
+			delete(rawMsg, key)
+		case "mode":
+			err = unpopulate(val, "Mode", &m.Mode)
+			delete(rawMsg, key)
+		case "networkProfile":
+			err = unpopulate(val, "NetworkProfile", &m.NetworkProfile)
+			delete(rawMsg, key)
+		case "nodeImageVersion":
+			err = unpopulate(val, "NodeImageVersion", &m.NodeImageVersion)
+			delete(rawMsg, key)
+		case "nodeInitializationTaints":
+			err = unpopulate(val, "NodeInitializationTaints", &m.NodeInitializationTaints)
+			delete(rawMsg, key)
+		case "nodeLabels":
+			err = unpopulate(val, "NodeLabels", &m.NodeLabels)
+			delete(rawMsg, key)
+		case "nodePublicIPPrefixID":
+			err = unpopulate(val, "NodePublicIPPrefixID", &m.NodePublicIPPrefixID)
+			delete(rawMsg, key)
+		case "nodeTaints":
+			err = unpopulate(val, "NodeTaints", &m.NodeTaints)
+			delete(rawMsg, key)
+		case "osDiskSizeGB":
+			err = unpopulate(val, "OSDiskSizeGB", &m.OSDiskSizeGB)
+			delete(rawMsg, key)
+		case "osDiskType":
+			err = unpopulate(val, "OSDiskType", &m.OSDiskType)
+			delete(rawMsg, key)
+		case "osSKU":
+			err = unpopulate(val, "OSSKU", &m.OSSKU)
+			delete(rawMsg, key)
+		case "osType":
+			err = unpopulate(val, "OSType", &m.OSType)
+			delete(rawMsg, key)
+		case "orchestratorVersion":
+			err = unpopulate(val, "OrchestratorVersion", &m.OrchestratorVersion)
+			delete(rawMsg, key)
+		case "podSubnetID":
+			err = unpopulate(val, "PodSubnetID", &m.PodSubnetID)
+			delete(rawMsg, key)
+		case "powerState":
+			err = unpopulate(val, "PowerState", &m.PowerState)
+			delete(rawMsg, key)
+		case "provisioningState":
+			err = unpopulate(val, "ProvisioningState", &m.ProvisioningState)
+			delete(rawMsg, key)
+		case "proximityPlacementGroupID":
+			err = unpopulate(val, "ProximityPlacementGroupID", &m.ProximityPlacementGroupID)
+			delete(rawMsg, key)
+		case "scaleDownMode":
+			err = unpopulate(val, "ScaleDownMode", &m.ScaleDownMode)
+			delete(rawMsg, key)
+		case "scaleSetEvictionPolicy":
+			err = unpopulate(val, "ScaleSetEvictionPolicy", &m.ScaleSetEvictionPolicy)
+			delete(rawMsg, key)
+		case "scaleSetPriority":
+			err = unpopulate(val, "ScaleSetPriority", &m.ScaleSetPriority)
+			delete(rawMsg, key)
+		case "securityProfile":
+			err = unpopulate(val, "SecurityProfile", &m.SecurityProfile)
+			delete(rawMsg, key)
+		case "spotMaxPrice":
+			err = unpopulate(val, "SpotMaxPrice", &m.SpotMaxPrice)
+			delete(rawMsg, key)
+		case "tags":
+			err = unpopulate(val, "Tags", &m.Tags)
+			delete(rawMsg, key)
+		case "type":
+			err = unpopulate(val, "Type", &m.Type)
+			delete(rawMsg, key)
+		case "upgradeSettings":
+			err = unpopulate(val, "UpgradeSettings", &m.UpgradeSettings)
+			delete(rawMsg, key)
+		case "vmSize":
+			err = unpopulate(val, "VMSize", &m.VMSize)
+			delete(rawMsg, key)
+		case "virtualMachineNodesStatus":
+			err = unpopulate(val, "VirtualMachineNodesStatus", &m.VirtualMachineNodesStatus)
+			delete(rawMsg, key)
+		case "virtualMachinesProfile":
+			err = unpopulate(val, "VirtualMachinesProfile", &m.VirtualMachinesProfile)
+			delete(rawMsg, key)
+		case "vnetSubnetID":
+			err = unpopulate(val, "VnetSubnetID", &m.VnetSubnetID)
+			delete(rawMsg, key)
+		case "windowsProfile":
+			err = unpopulate(val, "WindowsProfile", &m.WindowsProfile)
+			delete(rawMsg, key)
+		case "workloadRuntime":
+			err = unpopulate(val, "WorkloadRuntime", &m.WorkloadRuntime)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterAutoUpgradeProfile.
+func (m ManagedClusterAutoUpgradeProfile) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "nodeOSUpgradeChannel", m.NodeOSUpgradeChannel)
+	populate(objectMap, "upgradeChannel", m.UpgradeChannel)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterAutoUpgradeProfile.
+func (m *ManagedClusterAutoUpgradeProfile) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "nodeOSUpgradeChannel":
+			err = unpopulate(val, "NodeOSUpgradeChannel", &m.NodeOSUpgradeChannel)
+			delete(rawMsg, key)
+		case "upgradeChannel":
+			err = unpopulate(val, "UpgradeChannel", &m.UpgradeChannel)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterAzureMonitorProfile.
+func (m ManagedClusterAzureMonitorProfile) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "logs", m.Logs)
+	populate(objectMap, "metrics", m.Metrics)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterAzureMonitorProfile.
+func (m *ManagedClusterAzureMonitorProfile) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "logs":
+			err = unpopulate(val, "Logs", &m.Logs)
+			delete(rawMsg, key)
+		case "metrics":
+			err = unpopulate(val, "Metrics", &m.Metrics)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterAzureMonitorProfileAppMonitoring.
+func (m ManagedClusterAzureMonitorProfileAppMonitoring) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "enabled", m.Enabled)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterAzureMonitorProfileAppMonitoring.
+func (m *ManagedClusterAzureMonitorProfileAppMonitoring) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "enabled":
+			err = unpopulate(val, "Enabled", &m.Enabled)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterAzureMonitorProfileAppMonitoringOpenTelemetryMetrics.
+func (m ManagedClusterAzureMonitorProfileAppMonitoringOpenTelemetryMetrics) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "enabled", m.Enabled)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterAzureMonitorProfileAppMonitoringOpenTelemetryMetrics.
+func (m *ManagedClusterAzureMonitorProfileAppMonitoringOpenTelemetryMetrics) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "enabled":
+			err = unpopulate(val, "Enabled", &m.Enabled)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterAzureMonitorProfileContainerInsights.
+func (m ManagedClusterAzureMonitorProfileContainerInsights) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "enabled", m.Enabled)
+	populate(objectMap, "logAnalyticsWorkspaceResourceId", m.LogAnalyticsWorkspaceResourceID)
+	populate(objectMap, "windowsHostLogs", m.WindowsHostLogs)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterAzureMonitorProfileContainerInsights.
+func (m *ManagedClusterAzureMonitorProfileContainerInsights) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "enabled":
+			err = unpopulate(val, "Enabled", &m.Enabled)
+			delete(rawMsg, key)
+		case "logAnalyticsWorkspaceResourceId":
+			err = unpopulate(val, "LogAnalyticsWorkspaceResourceID", &m.LogAnalyticsWorkspaceResourceID)
+			delete(rawMsg, key)
+		case "windowsHostLogs":
+			err = unpopulate(val, "WindowsHostLogs", &m.WindowsHostLogs)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterAzureMonitorProfileKubeStateMetrics.
+func (m ManagedClusterAzureMonitorProfileKubeStateMetrics) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "metricAnnotationsAllowList", m.MetricAnnotationsAllowList)
+	populate(objectMap, "metricLabelsAllowlist", m.MetricLabelsAllowlist)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterAzureMonitorProfileKubeStateMetrics.
+func (m *ManagedClusterAzureMonitorProfileKubeStateMetrics) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "metricAnnotationsAllowList":
+			err = unpopulate(val, "MetricAnnotationsAllowList", &m.MetricAnnotationsAllowList)
+			delete(rawMsg, key)
+		case "metricLabelsAllowlist":
+			err = unpopulate(val, "MetricLabelsAllowlist", &m.MetricLabelsAllowlist)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterAzureMonitorProfileLogs.
+func (m ManagedClusterAzureMonitorProfileLogs) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "appMonitoring", m.AppMonitoring)
+	populate(objectMap, "containerInsights", m.ContainerInsights)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterAzureMonitorProfileLogs.
+func (m *ManagedClusterAzureMonitorProfileLogs) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "appMonitoring":
+			err = unpopulate(val, "AppMonitoring", &m.AppMonitoring)
+			delete(rawMsg, key)
+		case "containerInsights":
+			err = unpopulate(val, "ContainerInsights", &m.ContainerInsights)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterAzureMonitorProfileMetrics.
+func (m ManagedClusterAzureMonitorProfileMetrics) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "appMonitoringOpenTelemetryMetrics", m.AppMonitoringOpenTelemetryMetrics)
+	populate(objectMap, "enabled", m.Enabled)
+	populate(objectMap, "kubeStateMetrics", m.KubeStateMetrics)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterAzureMonitorProfileMetrics.
+func (m *ManagedClusterAzureMonitorProfileMetrics) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "appMonitoringOpenTelemetryMetrics":
+			err = unpopulate(val, "AppMonitoringOpenTelemetryMetrics", &m.AppMonitoringOpenTelemetryMetrics)
+			delete(rawMsg, key)
+		case "enabled":
+			err = unpopulate(val, "Enabled", &m.Enabled)
+			delete(rawMsg, key)
+		case "kubeStateMetrics":
+			err = unpopulate(val, "KubeStateMetrics", &m.KubeStateMetrics)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterAzureMonitorProfileWindowsHostLogs.
+func (m ManagedClusterAzureMonitorProfileWindowsHostLogs) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "enabled", m.Enabled)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterAzureMonitorProfileWindowsHostLogs.
+func (m *ManagedClusterAzureMonitorProfileWindowsHostLogs) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "enabled":
+			err = unpopulate(val, "Enabled", &m.Enabled)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterCostAnalysis.
+func (m ManagedClusterCostAnalysis) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "enabled", m.Enabled)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterCostAnalysis.
+func (m *ManagedClusterCostAnalysis) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "enabled":
+			err = unpopulate(val, "Enabled", &m.Enabled)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterHTTPProxyConfig.
+func (m ManagedClusterHTTPProxyConfig) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "effectiveNoProxy", m.EffectiveNoProxy)
+	populate(objectMap, "httpProxy", m.HTTPProxy)
+	populate(objectMap, "httpsProxy", m.HTTPSProxy)
+	populate(objectMap, "noProxy", m.NoProxy)
+	populate(objectMap, "trustedCa", m.TrustedCa)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterHTTPProxyConfig.
+func (m *ManagedClusterHTTPProxyConfig) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "effectiveNoProxy":
+			err = unpopulate(val, "EffectiveNoProxy", &m.EffectiveNoProxy)
+			delete(rawMsg, key)
+		case "httpProxy":
+			err = unpopulate(val, "HTTPProxy", &m.HTTPProxy)
+			delete(rawMsg, key)
+		case "httpsProxy":
+			err = unpopulate(val, "HTTPSProxy", &m.HTTPSProxy)
+			delete(rawMsg, key)
+		case "noProxy":
+			err = unpopulate(val, "NoProxy", &m.NoProxy)
+			delete(rawMsg, key)
+		case "trustedCa":
+			err = unpopulate(val, "TrustedCa", &m.TrustedCa)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterIdentity.
+func (m ManagedClusterIdentity) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "delegatedResources", m.DelegatedResources)
+	populate(objectMap, "principalId", m.PrincipalID)
+	populate(objectMap, "tenantId", m.TenantID)
+	populate(objectMap, "type", m.Type)
+	populate(objectMap, "userAssignedIdentities", m.UserAssignedIdentities)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterIdentity.
+func (m *ManagedClusterIdentity) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "delegatedResources":
+			err = unpopulate(val, "DelegatedResources", &m.DelegatedResources)
+			delete(rawMsg, key)
+		case "principalId":
+			err = unpopulate(val, "PrincipalID", &m.PrincipalID)
+			delete(rawMsg, key)
+		case "tenantId":
+			err = unpopulate(val, "TenantID", &m.TenantID)
+			delete(rawMsg, key)
+		case "type":
+			err = unpopulate(val, "Type", &m.Type)
+			delete(rawMsg, key)
+		case "userAssignedIdentities":
+			err = unpopulate(val, "UserAssignedIdentities", &m.UserAssignedIdentities)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterIngressProfile.
+func (m ManagedClusterIngressProfile) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "webAppRouting", m.WebAppRouting)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterIngressProfile.
+func (m *ManagedClusterIngressProfile) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "webAppRouting":
+			err = unpopulate(val, "WebAppRouting", &m.WebAppRouting)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterIngressProfileWebAppRouting.
+func (m ManagedClusterIngressProfileWebAppRouting) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "dnsZoneResourceIds", m.DNSZoneResourceIDs)
+	populate(objectMap, "enabled", m.Enabled)
+	populate(objectMap, "identity", m.Identity)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterIngressProfileWebAppRouting.
+func (m *ManagedClusterIngressProfileWebAppRouting) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "dnsZoneResourceIds":
+			err = unpopulate(val, "DNSZoneResourceIDs", &m.DNSZoneResourceIDs)
+			delete(rawMsg, key)
+		case "enabled":
+			err = unpopulate(val, "Enabled", &m.Enabled)
+			delete(rawMsg, key)
+		case "identity":
+			err = unpopulate(val, "Identity", &m.Identity)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterListResult.
+func (m ManagedClusterListResult) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "nextLink", m.NextLink)
+	populate(objectMap, "value", m.Value)
+	return json.Marshal(objectMap)
+}
+
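// ManagedClusterListResult is the standard ARM page shape: "value" carries one
// page of clusters and "nextLink", when set, points at the next page. A hedged,
// uncompiled sketch of draining such a list (fetchPage is a hypothetical
// callback; real consumers would use the SDK's generated pager instead):
//
//	func drainManagedClusters(fetchPage func(link string) (ManagedClusterListResult, error)) ([]*ManagedCluster, error) {
//		var all []*ManagedCluster
//		link := "" // empty requests the first page
//		for {
//			page, err := fetchPage(link)
//			if err != nil {
//				return nil, err
//			}
//			all = append(all, page.Value...)
//			if page.NextLink == nil || *page.NextLink == "" {
//				return all, nil
//			}
//			link = *page.NextLink
//		}
//	}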
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterListResult.
+func (m *ManagedClusterListResult) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "nextLink":
+			err = unpopulate(val, "NextLink", &m.NextLink)
+			delete(rawMsg, key)
+		case "value":
+			err = unpopulate(val, "Value", &m.Value)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterLoadBalancerProfile.
+func (m ManagedClusterLoadBalancerProfile) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "allocatedOutboundPorts", m.AllocatedOutboundPorts)
+	populate(objectMap, "backendPoolType", m.BackendPoolType)
+	populate(objectMap, "effectiveOutboundIPs", m.EffectiveOutboundIPs)
+	populate(objectMap, "enableMultipleStandardLoadBalancers", m.EnableMultipleStandardLoadBalancers)
+	populate(objectMap, "idleTimeoutInMinutes", m.IdleTimeoutInMinutes)
+	populate(objectMap, "managedOutboundIPs", m.ManagedOutboundIPs)
+	populate(objectMap, "outboundIPPrefixes", m.OutboundIPPrefixes)
+	populate(objectMap, "outboundIPs", m.OutboundIPs)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterLoadBalancerProfile.
+func (m *ManagedClusterLoadBalancerProfile) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "allocatedOutboundPorts":
+			err = unpopulate(val, "AllocatedOutboundPorts", &m.AllocatedOutboundPorts)
+			delete(rawMsg, key)
+		case "backendPoolType":
+			err = unpopulate(val, "BackendPoolType", &m.BackendPoolType)
+			delete(rawMsg, key)
+		case "effectiveOutboundIPs":
+			err = unpopulate(val, "EffectiveOutboundIPs", &m.EffectiveOutboundIPs)
+			delete(rawMsg, key)
+		case "enableMultipleStandardLoadBalancers":
+			err = unpopulate(val, "EnableMultipleStandardLoadBalancers", &m.EnableMultipleStandardLoadBalancers)
+			delete(rawMsg, key)
+		case "idleTimeoutInMinutes":
+			err = unpopulate(val, "IdleTimeoutInMinutes", &m.IdleTimeoutInMinutes)
+			delete(rawMsg, key)
+		case "managedOutboundIPs":
+			err = unpopulate(val, "ManagedOutboundIPs", &m.ManagedOutboundIPs)
+			delete(rawMsg, key)
+		case "outboundIPPrefixes":
+			err = unpopulate(val, "OutboundIPPrefixes", &m.OutboundIPPrefixes)
+			delete(rawMsg, key)
+		case "outboundIPs":
+			err = unpopulate(val, "OutboundIPs", &m.OutboundIPs)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterLoadBalancerProfileManagedOutboundIPs.
+func (m ManagedClusterLoadBalancerProfileManagedOutboundIPs) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "count", m.Count)
+	populate(objectMap, "countIPv6", m.CountIPv6)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterLoadBalancerProfileManagedOutboundIPs.
+func (m *ManagedClusterLoadBalancerProfileManagedOutboundIPs) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "count":
+			err = unpopulate(val, "Count", &m.Count)
+			delete(rawMsg, key)
+		case "countIPv6":
+			err = unpopulate(val, "CountIPv6", &m.CountIPv6)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterLoadBalancerProfileOutboundIPPrefixes.
+func (m ManagedClusterLoadBalancerProfileOutboundIPPrefixes) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "publicIPPrefixes", m.PublicIPPrefixes)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterLoadBalancerProfileOutboundIPPrefixes.
+func (m *ManagedClusterLoadBalancerProfileOutboundIPPrefixes) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "publicIPPrefixes":
+			err = unpopulate(val, "PublicIPPrefixes", &m.PublicIPPrefixes)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterLoadBalancerProfileOutboundIPs.
+func (m ManagedClusterLoadBalancerProfileOutboundIPs) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "publicIPs", m.PublicIPs)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterLoadBalancerProfileOutboundIPs.
+func (m *ManagedClusterLoadBalancerProfileOutboundIPs) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "publicIPs":
+			err = unpopulate(val, "PublicIPs", &m.PublicIPs)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterManagedOutboundIPProfile.
+func (m ManagedClusterManagedOutboundIPProfile) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "count", m.Count)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterManagedOutboundIPProfile.
+func (m *ManagedClusterManagedOutboundIPProfile) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "count":
+			err = unpopulate(val, "Count", &m.Count)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterMetricsProfile.
+func (m ManagedClusterMetricsProfile) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "costAnalysis", m.CostAnalysis)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterMetricsProfile.
+func (m *ManagedClusterMetricsProfile) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "costAnalysis":
+			err = unpopulate(val, "CostAnalysis", &m.CostAnalysis)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterNATGatewayProfile.
+func (m ManagedClusterNATGatewayProfile) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "effectiveOutboundIPs", m.EffectiveOutboundIPs)
+	populate(objectMap, "idleTimeoutInMinutes", m.IdleTimeoutInMinutes)
+	populate(objectMap, "managedOutboundIPProfile", m.ManagedOutboundIPProfile)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterNATGatewayProfile.
+func (m *ManagedClusterNATGatewayProfile) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "effectiveOutboundIPs":
+			err = unpopulate(val, "EffectiveOutboundIPs", &m.EffectiveOutboundIPs)
+			delete(rawMsg, key)
+		case "idleTimeoutInMinutes":
+			err = unpopulate(val, "IdleTimeoutInMinutes", &m.IdleTimeoutInMinutes)
+			delete(rawMsg, key)
+		case "managedOutboundIPProfile":
+			err = unpopulate(val, "ManagedOutboundIPProfile", &m.ManagedOutboundIPProfile)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterNodeProvisioningProfile.
+func (m ManagedClusterNodeProvisioningProfile) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "mode", m.Mode)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterNodeProvisioningProfile.
+func (m *ManagedClusterNodeProvisioningProfile) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "mode":
+			err = unpopulate(val, "Mode", &m.Mode)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterNodeResourceGroupProfile.
+func (m ManagedClusterNodeResourceGroupProfile) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "restrictionLevel", m.RestrictionLevel)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterNodeResourceGroupProfile.
+func (m *ManagedClusterNodeResourceGroupProfile) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "restrictionLevel":
+			err = unpopulate(val, "RestrictionLevel", &m.RestrictionLevel)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterOIDCIssuerProfile.
+func (m ManagedClusterOIDCIssuerProfile) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "enabled", m.Enabled)
+	populate(objectMap, "issuerURL", m.IssuerURL)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterOIDCIssuerProfile.
+func (m *ManagedClusterOIDCIssuerProfile) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "enabled":
+			err = unpopulate(val, "Enabled", &m.Enabled)
+			delete(rawMsg, key)
+		case "issuerURL":
+			err = unpopulate(val, "IssuerURL", &m.IssuerURL)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterPodIdentity.
+func (m ManagedClusterPodIdentity) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "bindingSelector", m.BindingSelector)
+	populate(objectMap, "identity", m.Identity)
+	populate(objectMap, "name", m.Name)
+	populate(objectMap, "namespace", m.Namespace)
+	populate(objectMap, "provisioningInfo", m.ProvisioningInfo)
+	populate(objectMap, "provisioningState", m.ProvisioningState)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterPodIdentity.
+func (m *ManagedClusterPodIdentity) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "bindingSelector":
+			err = unpopulate(val, "BindingSelector", &m.BindingSelector)
+			delete(rawMsg, key)
+		case "identity":
+			err = unpopulate(val, "Identity", &m.Identity)
+			delete(rawMsg, key)
+		case "name":
+			err = unpopulate(val, "Name", &m.Name)
+			delete(rawMsg, key)
+		case "namespace":
+			err = unpopulate(val, "Namespace", &m.Namespace)
+			delete(rawMsg, key)
+		case "provisioningInfo":
+			err = unpopulate(val, "ProvisioningInfo", &m.ProvisioningInfo)
+			delete(rawMsg, key)
+		case "provisioningState":
+			err = unpopulate(val, "ProvisioningState", &m.ProvisioningState)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterPodIdentityException.
+func (m ManagedClusterPodIdentityException) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "name", m.Name)
+	populate(objectMap, "namespace", m.Namespace)
+	populate(objectMap, "podLabels", m.PodLabels)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterPodIdentityException.
+func (m *ManagedClusterPodIdentityException) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "name":
+			err = unpopulate(val, "Name", &m.Name)
+			delete(rawMsg, key)
+		case "namespace":
+			err = unpopulate(val, "Namespace", &m.Namespace)
+			delete(rawMsg, key)
+		case "podLabels":
+			err = unpopulate(val, "PodLabels", &m.PodLabels)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterPodIdentityProfile.
+func (m ManagedClusterPodIdentityProfile) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "allowNetworkPluginKubenet", m.AllowNetworkPluginKubenet)
+	populate(objectMap, "enabled", m.Enabled)
+	populate(objectMap, "userAssignedIdentities", m.UserAssignedIdentities)
+	populate(objectMap, "userAssignedIdentityExceptions", m.UserAssignedIdentityExceptions)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterPodIdentityProfile.
+func (m *ManagedClusterPodIdentityProfile) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "allowNetworkPluginKubenet":
+			err = unpopulate(val, "AllowNetworkPluginKubenet", &m.AllowNetworkPluginKubenet)
+			delete(rawMsg, key)
+		case "enabled":
+			err = unpopulate(val, "Enabled", &m.Enabled)
+			delete(rawMsg, key)
+		case "userAssignedIdentities":
+			err = unpopulate(val, "UserAssignedIdentities", &m.UserAssignedIdentities)
+			delete(rawMsg, key)
+		case "userAssignedIdentityExceptions":
+			err = unpopulate(val, "UserAssignedIdentityExceptions", &m.UserAssignedIdentityExceptions)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterPodIdentityProvisioningError.
+func (m ManagedClusterPodIdentityProvisioningError) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "error", m.Error)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterPodIdentityProvisioningError.
+func (m *ManagedClusterPodIdentityProvisioningError) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "error":
+			err = unpopulate(val, "Error", &m.Error)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterPodIdentityProvisioningErrorBody.
+func (m ManagedClusterPodIdentityProvisioningErrorBody) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "code", m.Code)
+	populate(objectMap, "details", m.Details)
+	populate(objectMap, "message", m.Message)
+	populate(objectMap, "target", m.Target)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterPodIdentityProvisioningErrorBody.
+func (m *ManagedClusterPodIdentityProvisioningErrorBody) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "code":
+			err = unpopulate(val, "Code", &m.Code)
+			delete(rawMsg, key)
+		case "details":
+			err = unpopulate(val, "Details", &m.Details)
+			delete(rawMsg, key)
+		case "message":
+			err = unpopulate(val, "Message", &m.Message)
+			delete(rawMsg, key)
+		case "target":
+			err = unpopulate(val, "Target", &m.Target)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterPodIdentityProvisioningInfo.
+func (m ManagedClusterPodIdentityProvisioningInfo) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "error", m.Error)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterPodIdentityProvisioningInfo.
+func (m *ManagedClusterPodIdentityProvisioningInfo) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "error":
+			err = unpopulate(val, "Error", &m.Error)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterPoolUpgradeProfile.
+func (m ManagedClusterPoolUpgradeProfile) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "kubernetesVersion", m.KubernetesVersion)
+	populate(objectMap, "name", m.Name)
+	populate(objectMap, "osType", m.OSType)
+	populate(objectMap, "upgrades", m.Upgrades)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterPoolUpgradeProfile.
+func (m *ManagedClusterPoolUpgradeProfile) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "kubernetesVersion":
+			err = unpopulate(val, "KubernetesVersion", &m.KubernetesVersion)
+			delete(rawMsg, key)
+		case "name":
+			err = unpopulate(val, "Name", &m.Name)
+			delete(rawMsg, key)
+		case "osType":
+			err = unpopulate(val, "OSType", &m.OSType)
+			delete(rawMsg, key)
+		case "upgrades":
+			err = unpopulate(val, "Upgrades", &m.Upgrades)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterPoolUpgradeProfileUpgradesItem.
+func (m ManagedClusterPoolUpgradeProfileUpgradesItem) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "isPreview", m.IsPreview)
+	populate(objectMap, "kubernetesVersion", m.KubernetesVersion)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterPoolUpgradeProfileUpgradesItem.
+func (m *ManagedClusterPoolUpgradeProfileUpgradesItem) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "isPreview":
+			err = unpopulate(val, "IsPreview", &m.IsPreview)
+			delete(rawMsg, key)
+		case "kubernetesVersion":
+			err = unpopulate(val, "KubernetesVersion", &m.KubernetesVersion)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterProperties.
+func (m ManagedClusterProperties) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "aadProfile", m.AADProfile)
+	populate(objectMap, "apiServerAccessProfile", m.APIServerAccessProfile)
+	populate(objectMap, "addonProfiles", m.AddonProfiles)
+	populate(objectMap, "agentPoolProfiles", m.AgentPoolProfiles)
+	populate(objectMap, "aiToolchainOperatorProfile", m.AiToolchainOperatorProfile)
+	populate(objectMap, "autoScalerProfile", m.AutoScalerProfile)
+	populate(objectMap, "autoUpgradeProfile", m.AutoUpgradeProfile)
+	populate(objectMap, "azureMonitorProfile", m.AzureMonitorProfile)
+	populate(objectMap, "azurePortalFQDN", m.AzurePortalFQDN)
+	populate(objectMap, "creationData", m.CreationData)
+	populate(objectMap, "currentKubernetesVersion", m.CurrentKubernetesVersion)
+	populate(objectMap, "dnsPrefix", m.DNSPrefix)
+	populate(objectMap, "disableLocalAccounts", m.DisableLocalAccounts)
+	populate(objectMap, "diskEncryptionSetID", m.DiskEncryptionSetID)
+	populate(objectMap, "enableNamespaceResources", m.EnableNamespaceResources)
+	populate(objectMap, "enablePodSecurityPolicy", m.EnablePodSecurityPolicy)
+	populate(objectMap, "enableRBAC", m.EnableRBAC)
+	populate(objectMap, "fqdn", m.Fqdn)
+	populate(objectMap, "fqdnSubdomain", m.FqdnSubdomain)
+	populate(objectMap, "httpProxyConfig", m.HTTPProxyConfig)
+	populate(objectMap, "identityProfile", m.IdentityProfile)
+	populate(objectMap, "ingressProfile", m.IngressProfile)
+	populate(objectMap, "kubernetesVersion", m.KubernetesVersion)
+	populate(objectMap, "linuxProfile", m.LinuxProfile)
+	populate(objectMap, "maxAgentPools", m.MaxAgentPools)
+	populate(objectMap, "metricsProfile", m.MetricsProfile)
+	populate(objectMap, "networkProfile", m.NetworkProfile)
+	populate(objectMap, "nodeProvisioningProfile", m.NodeProvisioningProfile)
+	populate(objectMap, "nodeResourceGroup", m.NodeResourceGroup)
+	populate(objectMap, "nodeResourceGroupProfile", m.NodeResourceGroupProfile)
+	populate(objectMap, "oidcIssuerProfile", m.OidcIssuerProfile)
+	populate(objectMap, "podIdentityProfile", m.PodIdentityProfile)
+	populate(objectMap, "powerState", m.PowerState)
+	populate(objectMap, "privateFQDN", m.PrivateFQDN)
+	populate(objectMap, "privateLinkResources", m.PrivateLinkResources)
+	populate(objectMap, "provisioningState", m.ProvisioningState)
+	populate(objectMap, "publicNetworkAccess", m.PublicNetworkAccess)
+	populate(objectMap, "resourceUID", m.ResourceUID)
+	populate(objectMap, "safeguardsProfile", m.SafeguardsProfile)
+	populate(objectMap, "securityProfile", m.SecurityProfile)
+	populate(objectMap, "serviceMeshProfile", m.ServiceMeshProfile)
+	populate(objectMap, "servicePrincipalProfile", m.ServicePrincipalProfile)
+	populate(objectMap, "storageProfile", m.StorageProfile)
+	populate(objectMap, "supportPlan", m.SupportPlan)
+	populate(objectMap, "upgradeSettings", m.UpgradeSettings)
+	populate(objectMap, "windowsProfile", m.WindowsProfile)
+	populate(objectMap, "workloadAutoScalerProfile", m.WorkloadAutoScalerProfile)
+	return json.Marshal(objectMap)
+}
+
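// Because this method marshals by filling a map[string]any and handing it to
// json.Marshal, the emitted keys are always alphabetically ordered
// (encoding/json sorts map keys), independent of the populate call order above.
// A self-contained demonstration of that property:
//
//	package main
//
//	import (
//		"encoding/json"
//		"fmt"
//	)
//
//	func main() {
//		objectMap := map[string]any{
//			"kubernetesVersion": "1.29.4",
//			"dnsPrefix":         "demo",
//			"enableRBAC":        true,
//		}
//		b, _ := json.Marshal(objectMap)
//		fmt.Println(string(b)) // {"dnsPrefix":"demo","enableRBAC":true,"kubernetesVersion":"1.29.4"}
//	}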
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterProperties.
+func (m *ManagedClusterProperties) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "aadProfile":
+			err = unpopulate(val, "AADProfile", &m.AADProfile)
+			delete(rawMsg, key)
+		case "apiServerAccessProfile":
+			err = unpopulate(val, "APIServerAccessProfile", &m.APIServerAccessProfile)
+			delete(rawMsg, key)
+		case "addonProfiles":
+			err = unpopulate(val, "AddonProfiles", &m.AddonProfiles)
+			delete(rawMsg, key)
+		case "agentPoolProfiles":
+			err = unpopulate(val, "AgentPoolProfiles", &m.AgentPoolProfiles)
+			delete(rawMsg, key)
+		case "aiToolchainOperatorProfile":
+			err = unpopulate(val, "AiToolchainOperatorProfile", &m.AiToolchainOperatorProfile)
+			delete(rawMsg, key)
+		case "autoScalerProfile":
+			err = unpopulate(val, "AutoScalerProfile", &m.AutoScalerProfile)
+			delete(rawMsg, key)
+		case "autoUpgradeProfile":
+			err = unpopulate(val, "AutoUpgradeProfile", &m.AutoUpgradeProfile)
+			delete(rawMsg, key)
+		case "azureMonitorProfile":
+			err = unpopulate(val, "AzureMonitorProfile", &m.AzureMonitorProfile)
+			delete(rawMsg, key)
+		case "azurePortalFQDN":
+			err = unpopulate(val, "AzurePortalFQDN", &m.AzurePortalFQDN)
+			delete(rawMsg, key)
+		case "creationData":
+			err = unpopulate(val, "CreationData", &m.CreationData)
+			delete(rawMsg, key)
+		case "currentKubernetesVersion":
+			err = unpopulate(val, "CurrentKubernetesVersion", &m.CurrentKubernetesVersion)
+			delete(rawMsg, key)
+		case "dnsPrefix":
+			err = unpopulate(val, "DNSPrefix", &m.DNSPrefix)
+			delete(rawMsg, key)
+		case "disableLocalAccounts":
+			err = unpopulate(val, "DisableLocalAccounts", &m.DisableLocalAccounts)
+			delete(rawMsg, key)
+		case "diskEncryptionSetID":
+			err = unpopulate(val, "DiskEncryptionSetID", &m.DiskEncryptionSetID)
+			delete(rawMsg, key)
+		case "enableNamespaceResources":
+			err = unpopulate(val, "EnableNamespaceResources", &m.EnableNamespaceResources)
+			delete(rawMsg, key)
+		case "enablePodSecurityPolicy":
+			err = unpopulate(val, "EnablePodSecurityPolicy", &m.EnablePodSecurityPolicy)
+			delete(rawMsg, key)
+		case "enableRBAC":
+			err = unpopulate(val, "EnableRBAC", &m.EnableRBAC)
+			delete(rawMsg, key)
+		case "fqdn":
+			err = unpopulate(val, "Fqdn", &m.Fqdn)
+			delete(rawMsg, key)
+		case "fqdnSubdomain":
+			err = unpopulate(val, "FqdnSubdomain", &m.FqdnSubdomain)
+			delete(rawMsg, key)
+		case "httpProxyConfig":
+			err = unpopulate(val, "HTTPProxyConfig", &m.HTTPProxyConfig)
+			delete(rawMsg, key)
+		case "identityProfile":
+			err = unpopulate(val, "IdentityProfile", &m.IdentityProfile)
+			delete(rawMsg, key)
+		case "ingressProfile":
+			err = unpopulate(val, "IngressProfile", &m.IngressProfile)
+			delete(rawMsg, key)
+		case "kubernetesVersion":
+			err = unpopulate(val, "KubernetesVersion", &m.KubernetesVersion)
+			delete(rawMsg, key)
+		case "linuxProfile":
+			err = unpopulate(val, "LinuxProfile", &m.LinuxProfile)
+			delete(rawMsg, key)
+		case "maxAgentPools":
+			err = unpopulate(val, "MaxAgentPools", &m.MaxAgentPools)
+			delete(rawMsg, key)
+		case "metricsProfile":
+			err = unpopulate(val, "MetricsProfile", &m.MetricsProfile)
+			delete(rawMsg, key)
+		case "networkProfile":
+			err = unpopulate(val, "NetworkProfile", &m.NetworkProfile)
+			delete(rawMsg, key)
+		case "nodeProvisioningProfile":
+			err = unpopulate(val, "NodeProvisioningProfile", &m.NodeProvisioningProfile)
+			delete(rawMsg, key)
+		case "nodeResourceGroup":
+			err = unpopulate(val, "NodeResourceGroup", &m.NodeResourceGroup)
+			delete(rawMsg, key)
+		case "nodeResourceGroupProfile":
+			err = unpopulate(val, "NodeResourceGroupProfile", &m.NodeResourceGroupProfile)
+			delete(rawMsg, key)
+		case "oidcIssuerProfile":
+			err = unpopulate(val, "OidcIssuerProfile", &m.OidcIssuerProfile)
+			delete(rawMsg, key)
+		case "podIdentityProfile":
+			err = unpopulate(val, "PodIdentityProfile", &m.PodIdentityProfile)
+			delete(rawMsg, key)
+		case "powerState":
+			err = unpopulate(val, "PowerState", &m.PowerState)
+			delete(rawMsg, key)
+		case "privateFQDN":
+			err = unpopulate(val, "PrivateFQDN", &m.PrivateFQDN)
+			delete(rawMsg, key)
+		case "privateLinkResources":
+			err = unpopulate(val, "PrivateLinkResources", &m.PrivateLinkResources)
+			delete(rawMsg, key)
+		case "provisioningState":
+			err = unpopulate(val, "ProvisioningState", &m.ProvisioningState)
+			delete(rawMsg, key)
+		case "publicNetworkAccess":
+			err = unpopulate(val, "PublicNetworkAccess", &m.PublicNetworkAccess)
+			delete(rawMsg, key)
+		case "resourceUID":
+			err = unpopulate(val, "ResourceUID", &m.ResourceUID)
+			delete(rawMsg, key)
+		case "safeguardsProfile":
+			err = unpopulate(val, "SafeguardsProfile", &m.SafeguardsProfile)
+			delete(rawMsg, key)
+		case "securityProfile":
+			err = unpopulate(val, "SecurityProfile", &m.SecurityProfile)
+			delete(rawMsg, key)
+		case "serviceMeshProfile":
+			err = unpopulate(val, "ServiceMeshProfile", &m.ServiceMeshProfile)
+			delete(rawMsg, key)
+		case "servicePrincipalProfile":
+			err = unpopulate(val, "ServicePrincipalProfile", &m.ServicePrincipalProfile)
+			delete(rawMsg, key)
+		case "storageProfile":
+			err = unpopulate(val, "StorageProfile", &m.StorageProfile)
+			delete(rawMsg, key)
+		case "supportPlan":
+			err = unpopulate(val, "SupportPlan", &m.SupportPlan)
+			delete(rawMsg, key)
+		case "upgradeSettings":
+			err = unpopulate(val, "UpgradeSettings", &m.UpgradeSettings)
+			delete(rawMsg, key)
+		case "windowsProfile":
+			err = unpopulate(val, "WindowsProfile", &m.WindowsProfile)
+			delete(rawMsg, key)
+		case "workloadAutoScalerProfile":
+			err = unpopulate(val, "WorkloadAutoScalerProfile", &m.WorkloadAutoScalerProfile)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterPropertiesAutoScalerProfile.
+func (m ManagedClusterPropertiesAutoScalerProfile) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "balance-similar-node-groups", m.BalanceSimilarNodeGroups)
+	populate(objectMap, "daemonset-eviction-for-empty-nodes", m.DaemonsetEvictionForEmptyNodes)
+	populate(objectMap, "daemonset-eviction-for-occupied-nodes", m.DaemonsetEvictionForOccupiedNodes)
+	populate(objectMap, "expander", m.Expander)
+	populate(objectMap, "ignore-daemonsets-utilization", m.IgnoreDaemonsetsUtilization)
+	populate(objectMap, "max-empty-bulk-delete", m.MaxEmptyBulkDelete)
+	populate(objectMap, "max-graceful-termination-sec", m.MaxGracefulTerminationSec)
+	populate(objectMap, "max-node-provision-time", m.MaxNodeProvisionTime)
+	populate(objectMap, "max-total-unready-percentage", m.MaxTotalUnreadyPercentage)
+	populate(objectMap, "new-pod-scale-up-delay", m.NewPodScaleUpDelay)
+	populate(objectMap, "ok-total-unready-count", m.OkTotalUnreadyCount)
+	populate(objectMap, "scale-down-delay-after-add", m.ScaleDownDelayAfterAdd)
+	populate(objectMap, "scale-down-delay-after-delete", m.ScaleDownDelayAfterDelete)
+	populate(objectMap, "scale-down-delay-after-failure", m.ScaleDownDelayAfterFailure)
+	populate(objectMap, "scale-down-unneeded-time", m.ScaleDownUnneededTime)
+	populate(objectMap, "scale-down-unready-time", m.ScaleDownUnreadyTime)
+	populate(objectMap, "scale-down-utilization-threshold", m.ScaleDownUtilizationThreshold)
+	populate(objectMap, "scan-interval", m.ScanInterval)
+	populate(objectMap, "skip-nodes-with-local-storage", m.SkipNodesWithLocalStorage)
+	populate(objectMap, "skip-nodes-with-system-pods", m.SkipNodesWithSystemPods)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterPropertiesAutoScalerProfile.
+func (m *ManagedClusterPropertiesAutoScalerProfile) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "balance-similar-node-groups":
+			err = unpopulate(val, "BalanceSimilarNodeGroups", &m.BalanceSimilarNodeGroups)
+			delete(rawMsg, key)
+		case "daemonset-eviction-for-empty-nodes":
+			err = unpopulate(val, "DaemonsetEvictionForEmptyNodes", &m.DaemonsetEvictionForEmptyNodes)
+			delete(rawMsg, key)
+		case "daemonset-eviction-for-occupied-nodes":
+			err = unpopulate(val, "DaemonsetEvictionForOccupiedNodes", &m.DaemonsetEvictionForOccupiedNodes)
+			delete(rawMsg, key)
+		case "expander":
+			err = unpopulate(val, "Expander", &m.Expander)
+			delete(rawMsg, key)
+		case "ignore-daemonsets-utilization":
+			err = unpopulate(val, "IgnoreDaemonsetsUtilization", &m.IgnoreDaemonsetsUtilization)
+			delete(rawMsg, key)
+		case "max-empty-bulk-delete":
+			err = unpopulate(val, "MaxEmptyBulkDelete", &m.MaxEmptyBulkDelete)
+			delete(rawMsg, key)
+		case "max-graceful-termination-sec":
+			err = unpopulate(val, "MaxGracefulTerminationSec", &m.MaxGracefulTerminationSec)
+			delete(rawMsg, key)
+		case "max-node-provision-time":
+			err = unpopulate(val, "MaxNodeProvisionTime", &m.MaxNodeProvisionTime)
+			delete(rawMsg, key)
+		case "max-total-unready-percentage":
+			err = unpopulate(val, "MaxTotalUnreadyPercentage", &m.MaxTotalUnreadyPercentage)
+			delete(rawMsg, key)
+		case "new-pod-scale-up-delay":
+			err = unpopulate(val, "NewPodScaleUpDelay", &m.NewPodScaleUpDelay)
+			delete(rawMsg, key)
+		case "ok-total-unready-count":
+			err = unpopulate(val, "OkTotalUnreadyCount", &m.OkTotalUnreadyCount)
+			delete(rawMsg, key)
+		case "scale-down-delay-after-add":
+			err = unpopulate(val, "ScaleDownDelayAfterAdd", &m.ScaleDownDelayAfterAdd)
+			delete(rawMsg, key)
+		case "scale-down-delay-after-delete":
+			err = unpopulate(val, "ScaleDownDelayAfterDelete", &m.ScaleDownDelayAfterDelete)
+			delete(rawMsg, key)
+		case "scale-down-delay-after-failure":
+			err = unpopulate(val, "ScaleDownDelayAfterFailure", &m.ScaleDownDelayAfterFailure)
+			delete(rawMsg, key)
+		case "scale-down-unneeded-time":
+			err = unpopulate(val, "ScaleDownUnneededTime", &m.ScaleDownUnneededTime)
+			delete(rawMsg, key)
+		case "scale-down-unready-time":
+			err = unpopulate(val, "ScaleDownUnreadyTime", &m.ScaleDownUnreadyTime)
+			delete(rawMsg, key)
+		case "scale-down-utilization-threshold":
+			err = unpopulate(val, "ScaleDownUtilizationThreshold", &m.ScaleDownUtilizationThreshold)
+			delete(rawMsg, key)
+		case "scan-interval":
+			err = unpopulate(val, "ScanInterval", &m.ScanInterval)
+			delete(rawMsg, key)
+		case "skip-nodes-with-local-storage":
+			err = unpopulate(val, "SkipNodesWithLocalStorage", &m.SkipNodesWithLocalStorage)
+			delete(rawMsg, key)
+		case "skip-nodes-with-system-pods":
+			err = unpopulate(val, "SkipNodesWithSystemPods", &m.SkipNodesWithSystemPods)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
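// The JSON keys of ManagedClusterPropertiesAutoScalerProfile are the
// cluster-autoscaler command-line flag names (for example,
// --scale-down-delay-after-add maps to "scale-down-delay-after-add"), with the
// values carried as strings. A hedged sketch of the wire shape the two methods
// above round-trip; to.Ptr comes from github.com/Azure/azure-sdk-for-go/sdk/azcore/to,
// and the exact pointer-typed fields are declared in this package's models.go:
//
//	profile := ManagedClusterPropertiesAutoScalerProfile{
//		BalanceSimilarNodeGroups: to.Ptr("true"),
//		ScaleDownDelayAfterAdd:   to.Ptr("10m"),
//		ScanInterval:             to.Ptr("30s"),
//	}
//	b, _ := json.Marshal(profile)
//	fmt.Println(string(b))
//	// {"balance-similar-node-groups":"true","scale-down-delay-after-add":"10m","scan-interval":"30s"}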
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterPropertiesForSnapshot.
+func (m ManagedClusterPropertiesForSnapshot) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "enableRbac", m.EnableRbac)
+	populate(objectMap, "kubernetesVersion", m.KubernetesVersion)
+	populate(objectMap, "networkProfile", m.NetworkProfile)
+	populate(objectMap, "sku", m.SKU)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterPropertiesForSnapshot.
+func (m *ManagedClusterPropertiesForSnapshot) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "enableRbac":
+			err = unpopulate(val, "EnableRbac", &m.EnableRbac)
+			delete(rawMsg, key)
+		case "kubernetesVersion":
+			err = unpopulate(val, "KubernetesVersion", &m.KubernetesVersion)
+			delete(rawMsg, key)
+		case "networkProfile":
+			err = unpopulate(val, "NetworkProfile", &m.NetworkProfile)
+			delete(rawMsg, key)
+		case "sku":
+			err = unpopulate(val, "SKU", &m.SKU)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterSKU.
+func (m ManagedClusterSKU) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "name", m.Name)
+	populate(objectMap, "tier", m.Tier)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterSKU.
+func (m *ManagedClusterSKU) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", m, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "name":
+			err = unpopulate(val, "Name", &m.Name)
+			delete(rawMsg, key)
+		case "tier":
+			err = unpopulate(val, "Tier", &m.Tier)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", m, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type ManagedClusterSecurityProfile.
+func (m ManagedClusterSecurityProfile) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "azureKeyVaultKms", m.AzureKeyVaultKms)
+	populateByteArray(objectMap, "customCATrustCertificates", m.CustomCATrustCertificates, func() any {
+		encodedValue := make([]string, len(m.CustomCATrustCertificates))
+		for i := 0; i < len(m.CustomCATrustCertificates); i++ {
+			encodedValue[i] = runtime.EncodeByteArray(m.CustomCATrustCertificates[i], runtime.Base64StdFormat)
+		}
+		return encodedValue
+	})
+	populate(objectMap, "defender", m.Defender)
+	populate(objectMap, "imageCleaner", m.ImageCleaner)
+	populate(objectMap, "imageIntegrity", m.ImageIntegrity)
+	populate(objectMap, "nodeRestriction", m.NodeRestriction)
+	populate(objectMap, "workloadIdentity", m.WorkloadIdentity)
+	return json.Marshal(objectMap)
+}
+
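// CustomCATrustCertificates is a [][]byte, so the MarshalJSON above encodes each
// certificate as standard base64 through runtime.EncodeByteArray, and the
// UnmarshalJSON that follows decodes element by element. A self-contained
// sketch of the equivalent round trip using only the standard library (the
// SDK's runtime helpers add format selection on top of this):
//
//	package main
//
//	import (
//		"encoding/base64"
//		"fmt"
//	)
//
//	func main() {
//		certs := [][]byte{[]byte("cert-one"), []byte("cert-two")}
//		encoded := make([]string, len(certs))
//		for i, c := range certs {
//			encoded[i] = base64.StdEncoding.EncodeToString(c) // Base64StdFormat analogue
//		}
//		decoded := make([][]byte, len(encoded))
//		for i, s := range encoded {
//			decoded[i], _ = base64.StdEncoding.DecodeString(s)
//		}
//		fmt.Println(string(decoded[0]), string(decoded[1])) // cert-one cert-two
//	}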
+func (m *ManagedClusterSecurityProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "azureKeyVaultKms": + err = unpopulate(val, "AzureKeyVaultKms", &m.AzureKeyVaultKms) + delete(rawMsg, key) + case "customCATrustCertificates": + var encodedValue []string + err = unpopulate(val, "CustomCATrustCertificates", &encodedValue) + if err == nil && len(encodedValue) > 0 { + m.CustomCATrustCertificates = make([][]byte, len(encodedValue)) + for i := 0; i < len(encodedValue) && err == nil; i++ { + err = runtime.DecodeByteArray(encodedValue[i], &m.CustomCATrustCertificates[i], runtime.Base64StdFormat) + } + } + delete(rawMsg, key) + case "defender": + err = unpopulate(val, "Defender", &m.Defender) + delete(rawMsg, key) + case "imageCleaner": + err = unpopulate(val, "ImageCleaner", &m.ImageCleaner) + delete(rawMsg, key) + case "imageIntegrity": + err = unpopulate(val, "ImageIntegrity", &m.ImageIntegrity) + delete(rawMsg, key) + case "nodeRestriction": + err = unpopulate(val, "NodeRestriction", &m.NodeRestriction) + delete(rawMsg, key) + case "workloadIdentity": + err = unpopulate(val, "WorkloadIdentity", &m.WorkloadIdentity) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterSecurityProfileDefender. +func (m ManagedClusterSecurityProfileDefender) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "logAnalyticsWorkspaceResourceId", m.LogAnalyticsWorkspaceResourceID) + populate(objectMap, "securityMonitoring", m.SecurityMonitoring) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterSecurityProfileDefender. +func (m *ManagedClusterSecurityProfileDefender) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "logAnalyticsWorkspaceResourceId": + err = unpopulate(val, "LogAnalyticsWorkspaceResourceID", &m.LogAnalyticsWorkspaceResourceID) + delete(rawMsg, key) + case "securityMonitoring": + err = unpopulate(val, "SecurityMonitoring", &m.SecurityMonitoring) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterSecurityProfileDefenderSecurityMonitoring. +func (m ManagedClusterSecurityProfileDefenderSecurityMonitoring) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "enabled", m.Enabled) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterSecurityProfileDefenderSecurityMonitoring. 
+func (m *ManagedClusterSecurityProfileDefenderSecurityMonitoring) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "enabled": + err = unpopulate(val, "Enabled", &m.Enabled) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterSecurityProfileImageCleaner. +func (m ManagedClusterSecurityProfileImageCleaner) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "enabled", m.Enabled) + populate(objectMap, "intervalHours", m.IntervalHours) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterSecurityProfileImageCleaner. +func (m *ManagedClusterSecurityProfileImageCleaner) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "enabled": + err = unpopulate(val, "Enabled", &m.Enabled) + delete(rawMsg, key) + case "intervalHours": + err = unpopulate(val, "IntervalHours", &m.IntervalHours) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterSecurityProfileImageIntegrity. +func (m ManagedClusterSecurityProfileImageIntegrity) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "enabled", m.Enabled) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterSecurityProfileImageIntegrity. +func (m *ManagedClusterSecurityProfileImageIntegrity) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "enabled": + err = unpopulate(val, "Enabled", &m.Enabled) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterSecurityProfileNodeRestriction. +func (m ManagedClusterSecurityProfileNodeRestriction) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "enabled", m.Enabled) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterSecurityProfileNodeRestriction. 
+func (m *ManagedClusterSecurityProfileNodeRestriction) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "enabled": + err = unpopulate(val, "Enabled", &m.Enabled) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterSecurityProfileWorkloadIdentity. +func (m ManagedClusterSecurityProfileWorkloadIdentity) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "enabled", m.Enabled) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterSecurityProfileWorkloadIdentity. +func (m *ManagedClusterSecurityProfileWorkloadIdentity) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "enabled": + err = unpopulate(val, "Enabled", &m.Enabled) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterServicePrincipalProfile. +func (m ManagedClusterServicePrincipalProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "clientId", m.ClientID) + populate(objectMap, "secret", m.Secret) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterServicePrincipalProfile. +func (m *ManagedClusterServicePrincipalProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "clientId": + err = unpopulate(val, "ClientID", &m.ClientID) + delete(rawMsg, key) + case "secret": + err = unpopulate(val, "Secret", &m.Secret) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterSnapshot. +func (m ManagedClusterSnapshot) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "id", m.ID) + populate(objectMap, "location", m.Location) + populate(objectMap, "name", m.Name) + populate(objectMap, "properties", m.Properties) + populate(objectMap, "systemData", m.SystemData) + populate(objectMap, "tags", m.Tags) + populate(objectMap, "type", m.Type) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterSnapshot. 
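+//
+// A sketch of decoding the ARM resource envelope (the subscription path is a
+// placeholder):
+//
+//	raw := []byte(`{"id":"/subscriptions/.../managedclustersnapshots/snap1","name":"snap1","location":"eastus","type":"Microsoft.ContainerService/managedclustersnapshots"}`)
+//	var s ManagedClusterSnapshot
+//	_ = json.Unmarshal(raw, &s) // error elided; *s.Name == "snap1"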
+func (m *ManagedClusterSnapshot) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "id": + err = unpopulate(val, "ID", &m.ID) + delete(rawMsg, key) + case "location": + err = unpopulate(val, "Location", &m.Location) + delete(rawMsg, key) + case "name": + err = unpopulate(val, "Name", &m.Name) + delete(rawMsg, key) + case "properties": + err = unpopulate(val, "Properties", &m.Properties) + delete(rawMsg, key) + case "systemData": + err = unpopulate(val, "SystemData", &m.SystemData) + delete(rawMsg, key) + case "tags": + err = unpopulate(val, "Tags", &m.Tags) + delete(rawMsg, key) + case "type": + err = unpopulate(val, "Type", &m.Type) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterSnapshotListResult. +func (m ManagedClusterSnapshotListResult) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "nextLink", m.NextLink) + populate(objectMap, "value", m.Value) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterSnapshotListResult. +func (m *ManagedClusterSnapshotListResult) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "nextLink": + err = unpopulate(val, "NextLink", &m.NextLink) + delete(rawMsg, key) + case "value": + err = unpopulate(val, "Value", &m.Value) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterSnapshotProperties. +func (m ManagedClusterSnapshotProperties) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "creationData", m.CreationData) + populate(objectMap, "managedClusterPropertiesReadOnly", m.ManagedClusterPropertiesReadOnly) + populate(objectMap, "snapshotType", m.SnapshotType) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterSnapshotProperties. +func (m *ManagedClusterSnapshotProperties) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "creationData": + err = unpopulate(val, "CreationData", &m.CreationData) + delete(rawMsg, key) + case "managedClusterPropertiesReadOnly": + err = unpopulate(val, "ManagedClusterPropertiesReadOnly", &m.ManagedClusterPropertiesReadOnly) + delete(rawMsg, key) + case "snapshotType": + err = unpopulate(val, "SnapshotType", &m.SnapshotType) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterStorageProfile. 
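+//
+// Only drivers that are actually set end up on the wire, since the populate
+// helper defined elsewhere in this file is expected to skip nil fields, per
+// the usual azcore codegen pattern. A sketch (to.Ptr is the azcore "to"
+// helper):
+//
+//	sp := ManagedClusterStorageProfile{
+//		DiskCSIDriver: &ManagedClusterStorageProfileDiskCSIDriver{Enabled: to.Ptr(true)},
+//	}
+//	data, _ := json.Marshal(sp) // {"diskCSIDriver":{"enabled":true}}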
+func (m ManagedClusterStorageProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "blobCSIDriver", m.BlobCSIDriver) + populate(objectMap, "diskCSIDriver", m.DiskCSIDriver) + populate(objectMap, "fileCSIDriver", m.FileCSIDriver) + populate(objectMap, "snapshotController", m.SnapshotController) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterStorageProfile. +func (m *ManagedClusterStorageProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "blobCSIDriver": + err = unpopulate(val, "BlobCSIDriver", &m.BlobCSIDriver) + delete(rawMsg, key) + case "diskCSIDriver": + err = unpopulate(val, "DiskCSIDriver", &m.DiskCSIDriver) + delete(rawMsg, key) + case "fileCSIDriver": + err = unpopulate(val, "FileCSIDriver", &m.FileCSIDriver) + delete(rawMsg, key) + case "snapshotController": + err = unpopulate(val, "SnapshotController", &m.SnapshotController) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterStorageProfileBlobCSIDriver. +func (m ManagedClusterStorageProfileBlobCSIDriver) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "enabled", m.Enabled) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterStorageProfileBlobCSIDriver. +func (m *ManagedClusterStorageProfileBlobCSIDriver) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "enabled": + err = unpopulate(val, "Enabled", &m.Enabled) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterStorageProfileDiskCSIDriver. +func (m ManagedClusterStorageProfileDiskCSIDriver) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "enabled", m.Enabled) + populate(objectMap, "version", m.Version) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterStorageProfileDiskCSIDriver. +func (m *ManagedClusterStorageProfileDiskCSIDriver) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "enabled": + err = unpopulate(val, "Enabled", &m.Enabled) + delete(rawMsg, key) + case "version": + err = unpopulate(val, "Version", &m.Version) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterStorageProfileFileCSIDriver. 
+func (m ManagedClusterStorageProfileFileCSIDriver) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "enabled", m.Enabled) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterStorageProfileFileCSIDriver. +func (m *ManagedClusterStorageProfileFileCSIDriver) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "enabled": + err = unpopulate(val, "Enabled", &m.Enabled) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterStorageProfileSnapshotController. +func (m ManagedClusterStorageProfileSnapshotController) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "enabled", m.Enabled) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterStorageProfileSnapshotController. +func (m *ManagedClusterStorageProfileSnapshotController) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "enabled": + err = unpopulate(val, "Enabled", &m.Enabled) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterUpgradeProfile. +func (m ManagedClusterUpgradeProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "id", m.ID) + populate(objectMap, "name", m.Name) + populate(objectMap, "properties", m.Properties) + populate(objectMap, "type", m.Type) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterUpgradeProfile. +func (m *ManagedClusterUpgradeProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "id": + err = unpopulate(val, "ID", &m.ID) + delete(rawMsg, key) + case "name": + err = unpopulate(val, "Name", &m.Name) + delete(rawMsg, key) + case "properties": + err = unpopulate(val, "Properties", &m.Properties) + delete(rawMsg, key) + case "type": + err = unpopulate(val, "Type", &m.Type) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterUpgradeProfileProperties. +func (m ManagedClusterUpgradeProfileProperties) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "agentPoolProfiles", m.AgentPoolProfiles) + populate(objectMap, "controlPlaneProfile", m.ControlPlaneProfile) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterUpgradeProfileProperties. 
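+//
+// A sketch of the expected shape (the nested pool-profile keys are assumed
+// from the ManagedClusterPoolUpgradeProfile type defined elsewhere in this
+// file):
+//
+//	raw := []byte(`{"controlPlaneProfile":{"kubernetesVersion":"1.29.4","osType":"Linux"}}`)
+//	var up ManagedClusterUpgradeProfileProperties
+//	_ = json.Unmarshal(raw, &up) // error elided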
+func (m *ManagedClusterUpgradeProfileProperties) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "agentPoolProfiles": + err = unpopulate(val, "AgentPoolProfiles", &m.AgentPoolProfiles) + delete(rawMsg, key) + case "controlPlaneProfile": + err = unpopulate(val, "ControlPlaneProfile", &m.ControlPlaneProfile) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterWindowsProfile. +func (m ManagedClusterWindowsProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "adminPassword", m.AdminPassword) + populate(objectMap, "adminUsername", m.AdminUsername) + populate(objectMap, "enableCSIProxy", m.EnableCSIProxy) + populate(objectMap, "gmsaProfile", m.GmsaProfile) + populate(objectMap, "licenseType", m.LicenseType) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterWindowsProfile. +func (m *ManagedClusterWindowsProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "adminPassword": + err = unpopulate(val, "AdminPassword", &m.AdminPassword) + delete(rawMsg, key) + case "adminUsername": + err = unpopulate(val, "AdminUsername", &m.AdminUsername) + delete(rawMsg, key) + case "enableCSIProxy": + err = unpopulate(val, "EnableCSIProxy", &m.EnableCSIProxy) + delete(rawMsg, key) + case "gmsaProfile": + err = unpopulate(val, "GmsaProfile", &m.GmsaProfile) + delete(rawMsg, key) + case "licenseType": + err = unpopulate(val, "LicenseType", &m.LicenseType) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterWorkloadAutoScalerProfile. +func (m ManagedClusterWorkloadAutoScalerProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "keda", m.Keda) + populate(objectMap, "verticalPodAutoscaler", m.VerticalPodAutoscaler) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterWorkloadAutoScalerProfile. +func (m *ManagedClusterWorkloadAutoScalerProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "keda": + err = unpopulate(val, "Keda", &m.Keda) + delete(rawMsg, key) + case "verticalPodAutoscaler": + err = unpopulate(val, "VerticalPodAutoscaler", &m.VerticalPodAutoscaler) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterWorkloadAutoScalerProfileKeda. 
+func (m ManagedClusterWorkloadAutoScalerProfileKeda) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "enabled", m.Enabled) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterWorkloadAutoScalerProfileKeda. +func (m *ManagedClusterWorkloadAutoScalerProfileKeda) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "enabled": + err = unpopulate(val, "Enabled", &m.Enabled) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedClusterWorkloadAutoScalerProfileVerticalPodAutoscaler. +func (m ManagedClusterWorkloadAutoScalerProfileVerticalPodAutoscaler) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "addonAutoscaling", m.AddonAutoscaling) + populate(objectMap, "enabled", m.Enabled) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedClusterWorkloadAutoScalerProfileVerticalPodAutoscaler. +func (m *ManagedClusterWorkloadAutoScalerProfileVerticalPodAutoscaler) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "addonAutoscaling": + err = unpopulate(val, "AddonAutoscaling", &m.AddonAutoscaling) + delete(rawMsg, key) + case "enabled": + err = unpopulate(val, "Enabled", &m.Enabled) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManagedServiceIdentityUserAssignedIdentitiesValue. +func (m ManagedServiceIdentityUserAssignedIdentitiesValue) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "clientId", m.ClientID) + populate(objectMap, "principalId", m.PrincipalID) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManagedServiceIdentityUserAssignedIdentitiesValue. +func (m *ManagedServiceIdentityUserAssignedIdentitiesValue) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "clientId": + err = unpopulate(val, "ClientID", &m.ClientID) + delete(rawMsg, key) + case "principalId": + err = unpopulate(val, "PrincipalID", &m.PrincipalID) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ManualScaleProfile. +func (m ManualScaleProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "count", m.Count) + populate(objectMap, "sizes", m.Sizes) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ManualScaleProfile. 
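+//
+// A sketch of the expected payload (the VM sizes are placeholders):
+//
+//	raw := []byte(`{"count":3,"sizes":["Standard_D4s_v5","Standard_D8s_v5"]}`)
+//	var p ManualScaleProfile
+//	_ = json.Unmarshal(raw, &p) // error elided; *p.Count == 3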
+func (m *ManualScaleProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "count": + err = unpopulate(val, "Count", &m.Count) + delete(rawMsg, key) + case "sizes": + err = unpopulate(val, "Sizes", &m.Sizes) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type MeshRevision. +func (m MeshRevision) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "compatibleWith", m.CompatibleWith) + populate(objectMap, "revision", m.Revision) + populate(objectMap, "upgrades", m.Upgrades) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type MeshRevision. +func (m *MeshRevision) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "compatibleWith": + err = unpopulate(val, "CompatibleWith", &m.CompatibleWith) + delete(rawMsg, key) + case "revision": + err = unpopulate(val, "Revision", &m.Revision) + delete(rawMsg, key) + case "upgrades": + err = unpopulate(val, "Upgrades", &m.Upgrades) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type MeshRevisionProfile. +func (m MeshRevisionProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "id", m.ID) + populate(objectMap, "name", m.Name) + populate(objectMap, "properties", m.Properties) + populate(objectMap, "systemData", m.SystemData) + populate(objectMap, "type", m.Type) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type MeshRevisionProfile. +func (m *MeshRevisionProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "id": + err = unpopulate(val, "ID", &m.ID) + delete(rawMsg, key) + case "name": + err = unpopulate(val, "Name", &m.Name) + delete(rawMsg, key) + case "properties": + err = unpopulate(val, "Properties", &m.Properties) + delete(rawMsg, key) + case "systemData": + err = unpopulate(val, "SystemData", &m.SystemData) + delete(rawMsg, key) + case "type": + err = unpopulate(val, "Type", &m.Type) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type MeshRevisionProfileList. +func (m MeshRevisionProfileList) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "nextLink", m.NextLink) + populate(objectMap, "value", m.Value) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type MeshRevisionProfileList. 
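+//
+// List results are paged; a non-nil NextLink points at the next page. A
+// sketch (the URL and revision name are placeholders):
+//
+//	raw := []byte(`{"value":[{"name":"istio"}],"nextLink":"https://management.azure.com/...nextPage"}`)
+//	var l MeshRevisionProfileList
+//	_ = json.Unmarshal(raw, &l) // error elided; fetch *l.NextLink until it is absent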
+func (m *MeshRevisionProfileList) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "nextLink": + err = unpopulate(val, "NextLink", &m.NextLink) + delete(rawMsg, key) + case "value": + err = unpopulate(val, "Value", &m.Value) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type MeshRevisionProfileProperties. +func (m MeshRevisionProfileProperties) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "meshRevisions", m.MeshRevisions) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type MeshRevisionProfileProperties. +func (m *MeshRevisionProfileProperties) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "meshRevisions": + err = unpopulate(val, "MeshRevisions", &m.MeshRevisions) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type MeshUpgradeProfile. +func (m MeshUpgradeProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "id", m.ID) + populate(objectMap, "name", m.Name) + populate(objectMap, "properties", m.Properties) + populate(objectMap, "systemData", m.SystemData) + populate(objectMap, "type", m.Type) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type MeshUpgradeProfile. +func (m *MeshUpgradeProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "id": + err = unpopulate(val, "ID", &m.ID) + delete(rawMsg, key) + case "name": + err = unpopulate(val, "Name", &m.Name) + delete(rawMsg, key) + case "properties": + err = unpopulate(val, "Properties", &m.Properties) + delete(rawMsg, key) + case "systemData": + err = unpopulate(val, "SystemData", &m.SystemData) + delete(rawMsg, key) + case "type": + err = unpopulate(val, "Type", &m.Type) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type MeshUpgradeProfileList. +func (m MeshUpgradeProfileList) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "nextLink", m.NextLink) + populate(objectMap, "value", m.Value) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type MeshUpgradeProfileList. 
+func (m *MeshUpgradeProfileList) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "nextLink": + err = unpopulate(val, "NextLink", &m.NextLink) + delete(rawMsg, key) + case "value": + err = unpopulate(val, "Value", &m.Value) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type MeshUpgradeProfileProperties. +func (m MeshUpgradeProfileProperties) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "compatibleWith", m.CompatibleWith) + populate(objectMap, "revision", m.Revision) + populate(objectMap, "upgrades", m.Upgrades) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type MeshUpgradeProfileProperties. +func (m *MeshUpgradeProfileProperties) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "compatibleWith": + err = unpopulate(val, "CompatibleWith", &m.CompatibleWith) + delete(rawMsg, key) + case "revision": + err = unpopulate(val, "Revision", &m.Revision) + delete(rawMsg, key) + case "upgrades": + err = unpopulate(val, "Upgrades", &m.Upgrades) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", m, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type NetworkMonitoring. +func (n NetworkMonitoring) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "enabled", n.Enabled) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type NetworkMonitoring. +func (n *NetworkMonitoring) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", n, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "enabled": + err = unpopulate(val, "Enabled", &n.Enabled) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", n, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type NetworkProfile. 
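+//
+// A sketch (to.Ptr is the azcore "to" helper; NetworkPluginAzure is assumed
+// from this package's constants). Unset fields are omitted:
+//
+//	np := NetworkProfile{
+//		NetworkPlugin: to.Ptr(NetworkPluginAzure),
+//		PodCidrs:      []*string{to.Ptr("10.244.0.0/16")},
+//	}
+//	data, _ := json.Marshal(np) // {"networkPlugin":"azure","podCidrs":["10.244.0.0/16"]}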
+func (n NetworkProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "dnsServiceIP", n.DNSServiceIP) + populate(objectMap, "ipFamilies", n.IPFamilies) + populate(objectMap, "kubeProxyConfig", n.KubeProxyConfig) + populate(objectMap, "loadBalancerProfile", n.LoadBalancerProfile) + populate(objectMap, "loadBalancerSku", n.LoadBalancerSKU) + populate(objectMap, "monitoring", n.Monitoring) + populate(objectMap, "natGatewayProfile", n.NatGatewayProfile) + populate(objectMap, "networkDataplane", n.NetworkDataplane) + populate(objectMap, "networkMode", n.NetworkMode) + populate(objectMap, "networkPlugin", n.NetworkPlugin) + populate(objectMap, "networkPluginMode", n.NetworkPluginMode) + populate(objectMap, "networkPolicy", n.NetworkPolicy) + populate(objectMap, "outboundType", n.OutboundType) + populate(objectMap, "podCidr", n.PodCidr) + populate(objectMap, "podCidrs", n.PodCidrs) + populate(objectMap, "serviceCidr", n.ServiceCidr) + populate(objectMap, "serviceCidrs", n.ServiceCidrs) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type NetworkProfile. +func (n *NetworkProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", n, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "dnsServiceIP": + err = unpopulate(val, "DNSServiceIP", &n.DNSServiceIP) + delete(rawMsg, key) + case "ipFamilies": + err = unpopulate(val, "IPFamilies", &n.IPFamilies) + delete(rawMsg, key) + case "kubeProxyConfig": + err = unpopulate(val, "KubeProxyConfig", &n.KubeProxyConfig) + delete(rawMsg, key) + case "loadBalancerProfile": + err = unpopulate(val, "LoadBalancerProfile", &n.LoadBalancerProfile) + delete(rawMsg, key) + case "loadBalancerSku": + err = unpopulate(val, "LoadBalancerSKU", &n.LoadBalancerSKU) + delete(rawMsg, key) + case "monitoring": + err = unpopulate(val, "Monitoring", &n.Monitoring) + delete(rawMsg, key) + case "natGatewayProfile": + err = unpopulate(val, "NatGatewayProfile", &n.NatGatewayProfile) + delete(rawMsg, key) + case "networkDataplane": + err = unpopulate(val, "NetworkDataplane", &n.NetworkDataplane) + delete(rawMsg, key) + case "networkMode": + err = unpopulate(val, "NetworkMode", &n.NetworkMode) + delete(rawMsg, key) + case "networkPlugin": + err = unpopulate(val, "NetworkPlugin", &n.NetworkPlugin) + delete(rawMsg, key) + case "networkPluginMode": + err = unpopulate(val, "NetworkPluginMode", &n.NetworkPluginMode) + delete(rawMsg, key) + case "networkPolicy": + err = unpopulate(val, "NetworkPolicy", &n.NetworkPolicy) + delete(rawMsg, key) + case "outboundType": + err = unpopulate(val, "OutboundType", &n.OutboundType) + delete(rawMsg, key) + case "podCidr": + err = unpopulate(val, "PodCidr", &n.PodCidr) + delete(rawMsg, key) + case "podCidrs": + err = unpopulate(val, "PodCidrs", &n.PodCidrs) + delete(rawMsg, key) + case "serviceCidr": + err = unpopulate(val, "ServiceCidr", &n.ServiceCidr) + delete(rawMsg, key) + case "serviceCidrs": + err = unpopulate(val, "ServiceCidrs", &n.ServiceCidrs) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", n, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type NetworkProfileForSnapshot. 
+func (n NetworkProfileForSnapshot) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "loadBalancerSku", n.LoadBalancerSKU) + populate(objectMap, "networkMode", n.NetworkMode) + populate(objectMap, "networkPlugin", n.NetworkPlugin) + populate(objectMap, "networkPluginMode", n.NetworkPluginMode) + populate(objectMap, "networkPolicy", n.NetworkPolicy) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type NetworkProfileForSnapshot. +func (n *NetworkProfileForSnapshot) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", n, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "loadBalancerSku": + err = unpopulate(val, "LoadBalancerSKU", &n.LoadBalancerSKU) + delete(rawMsg, key) + case "networkMode": + err = unpopulate(val, "NetworkMode", &n.NetworkMode) + delete(rawMsg, key) + case "networkPlugin": + err = unpopulate(val, "NetworkPlugin", &n.NetworkPlugin) + delete(rawMsg, key) + case "networkPluginMode": + err = unpopulate(val, "NetworkPluginMode", &n.NetworkPluginMode) + delete(rawMsg, key) + case "networkPolicy": + err = unpopulate(val, "NetworkPolicy", &n.NetworkPolicy) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", n, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type NetworkProfileKubeProxyConfig. +func (n NetworkProfileKubeProxyConfig) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "enabled", n.Enabled) + populate(objectMap, "ipvsConfig", n.IpvsConfig) + populate(objectMap, "mode", n.Mode) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type NetworkProfileKubeProxyConfig. +func (n *NetworkProfileKubeProxyConfig) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", n, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "enabled": + err = unpopulate(val, "Enabled", &n.Enabled) + delete(rawMsg, key) + case "ipvsConfig": + err = unpopulate(val, "IpvsConfig", &n.IpvsConfig) + delete(rawMsg, key) + case "mode": + err = unpopulate(val, "Mode", &n.Mode) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", n, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type NetworkProfileKubeProxyConfigIpvsConfig. +func (n NetworkProfileKubeProxyConfigIpvsConfig) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "scheduler", n.Scheduler) + populate(objectMap, "tcpFinTimeoutSeconds", n.TCPFinTimeoutSeconds) + populate(objectMap, "tcpTimeoutSeconds", n.TCPTimeoutSeconds) + populate(objectMap, "udpTimeoutSeconds", n.UDPTimeoutSeconds) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type NetworkProfileKubeProxyConfigIpvsConfig. 
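+//
+// A sketch of the expected payload (the scheduler value is assumed from this
+// package's IpvsScheduler constants):
+//
+//	raw := []byte(`{"scheduler":"LeastConnection","tcpTimeoutSeconds":900,"tcpFinTimeoutSeconds":120,"udpTimeoutSeconds":300}`)
+//	var c NetworkProfileKubeProxyConfigIpvsConfig
+//	_ = json.Unmarshal(raw, &c) // error elided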
+func (n *NetworkProfileKubeProxyConfigIpvsConfig) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", n, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "scheduler": + err = unpopulate(val, "Scheduler", &n.Scheduler) + delete(rawMsg, key) + case "tcpFinTimeoutSeconds": + err = unpopulate(val, "TCPFinTimeoutSeconds", &n.TCPFinTimeoutSeconds) + delete(rawMsg, key) + case "tcpTimeoutSeconds": + err = unpopulate(val, "TCPTimeoutSeconds", &n.TCPTimeoutSeconds) + delete(rawMsg, key) + case "udpTimeoutSeconds": + err = unpopulate(val, "UDPTimeoutSeconds", &n.UDPTimeoutSeconds) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", n, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type OSOptionProfile. +func (o OSOptionProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "id", o.ID) + populate(objectMap, "name", o.Name) + populate(objectMap, "properties", o.Properties) + populate(objectMap, "type", o.Type) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type OSOptionProfile. +func (o *OSOptionProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", o, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "id": + err = unpopulate(val, "ID", &o.ID) + delete(rawMsg, key) + case "name": + err = unpopulate(val, "Name", &o.Name) + delete(rawMsg, key) + case "properties": + err = unpopulate(val, "Properties", &o.Properties) + delete(rawMsg, key) + case "type": + err = unpopulate(val, "Type", &o.Type) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", o, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type OSOptionProperty. +func (o OSOptionProperty) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "enable-fips-image", o.EnableFipsImage) + populate(objectMap, "os-type", o.OSType) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type OSOptionProperty. +func (o *OSOptionProperty) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", o, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "enable-fips-image": + err = unpopulate(val, "EnableFipsImage", &o.EnableFipsImage) + delete(rawMsg, key) + case "os-type": + err = unpopulate(val, "OSType", &o.OSType) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", o, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type OSOptionPropertyList. +func (o OSOptionPropertyList) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "osOptionPropertyList", o.OSOptionPropertyList) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type OSOptionPropertyList. 
+func (o *OSOptionPropertyList) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", o, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "osOptionPropertyList": + err = unpopulate(val, "OSOptionPropertyList", &o.OSOptionPropertyList) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", o, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type OperationListResult. +func (o OperationListResult) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "value", o.Value) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type OperationListResult. +func (o *OperationListResult) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", o, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "value": + err = unpopulate(val, "Value", &o.Value) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", o, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type OperationStatusResult. +func (o OperationStatusResult) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populateDateTimeRFC3339(objectMap, "endTime", o.EndTime) + populate(objectMap, "error", o.Error) + populate(objectMap, "id", o.ID) + populate(objectMap, "name", o.Name) + populate(objectMap, "operations", o.Operations) + populate(objectMap, "percentComplete", o.PercentComplete) + populate(objectMap, "resourceId", o.ResourceID) + populateDateTimeRFC3339(objectMap, "startTime", o.StartTime) + populate(objectMap, "status", o.Status) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type OperationStatusResult. +func (o *OperationStatusResult) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", o, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "endTime": + err = unpopulateDateTimeRFC3339(val, "EndTime", &o.EndTime) + delete(rawMsg, key) + case "error": + err = unpopulate(val, "Error", &o.Error) + delete(rawMsg, key) + case "id": + err = unpopulate(val, "ID", &o.ID) + delete(rawMsg, key) + case "name": + err = unpopulate(val, "Name", &o.Name) + delete(rawMsg, key) + case "operations": + err = unpopulate(val, "Operations", &o.Operations) + delete(rawMsg, key) + case "percentComplete": + err = unpopulate(val, "PercentComplete", &o.PercentComplete) + delete(rawMsg, key) + case "resourceId": + err = unpopulate(val, "ResourceID", &o.ResourceID) + delete(rawMsg, key) + case "startTime": + err = unpopulateDateTimeRFC3339(val, "StartTime", &o.StartTime) + delete(rawMsg, key) + case "status": + err = unpopulate(val, "Status", &o.Status) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", o, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type OperationStatusResultList. 
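+//
+// Timestamps on the contained OperationStatusResult values are serialized as
+// RFC3339 via populateDateTimeRFC3339. A sketch (to.Ptr is the azcore "to"
+// helper; the time package is assumed to be imported):
+//
+//	l := OperationStatusResultList{Value: []*OperationStatusResult{{
+//		Name:      to.Ptr("op-1"),
+//		Status:    to.Ptr("Succeeded"),
+//		StartTime: to.Ptr(time.Date(2024, 10, 23, 6, 44, 18, 0, time.UTC)),
+//	}}}
+//	data, _ := json.Marshal(l) // startTime is emitted as "2024-10-23T06:44:18Z"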
+func (o OperationStatusResultList) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "nextLink", o.NextLink) + populate(objectMap, "value", o.Value) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type OperationStatusResultList. +func (o *OperationStatusResultList) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", o, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "nextLink": + err = unpopulate(val, "NextLink", &o.NextLink) + delete(rawMsg, key) + case "value": + err = unpopulate(val, "Value", &o.Value) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", o, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type OperationValue. +func (o OperationValue) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "display", o.Display) + populate(objectMap, "name", o.Name) + populate(objectMap, "origin", o.Origin) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type OperationValue. +func (o *OperationValue) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", o, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "display": + err = unpopulate(val, "Display", &o.Display) + delete(rawMsg, key) + case "name": + err = unpopulate(val, "Name", &o.Name) + delete(rawMsg, key) + case "origin": + err = unpopulate(val, "Origin", &o.Origin) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", o, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type OperationValueDisplay. +func (o OperationValueDisplay) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "description", o.Description) + populate(objectMap, "operation", o.Operation) + populate(objectMap, "provider", o.Provider) + populate(objectMap, "resource", o.Resource) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type OperationValueDisplay. +func (o *OperationValueDisplay) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", o, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "description": + err = unpopulate(val, "Description", &o.Description) + delete(rawMsg, key) + case "operation": + err = unpopulate(val, "Operation", &o.Operation) + delete(rawMsg, key) + case "provider": + err = unpopulate(val, "Provider", &o.Provider) + delete(rawMsg, key) + case "resource": + err = unpopulate(val, "Resource", &o.Resource) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", o, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type OutboundEnvironmentEndpoint. 
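+//
+// A sketch (to.Ptr is the azcore "to" helper; the category value is a
+// placeholder). Unset fields are omitted:
+//
+//	e := OutboundEnvironmentEndpoint{Category: to.Ptr("azure-resource-management")}
+//	data, _ := json.Marshal(e) // {"category":"azure-resource-management"}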
+func (o OutboundEnvironmentEndpoint) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "category", o.Category) + populate(objectMap, "endpoints", o.Endpoints) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type OutboundEnvironmentEndpoint. +func (o *OutboundEnvironmentEndpoint) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", o, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "category": + err = unpopulate(val, "Category", &o.Category) + delete(rawMsg, key) + case "endpoints": + err = unpopulate(val, "Endpoints", &o.Endpoints) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", o, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type OutboundEnvironmentEndpointCollection. +func (o OutboundEnvironmentEndpointCollection) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "nextLink", o.NextLink) + populate(objectMap, "value", o.Value) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type OutboundEnvironmentEndpointCollection. +func (o *OutboundEnvironmentEndpointCollection) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", o, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "nextLink": + err = unpopulate(val, "NextLink", &o.NextLink) + delete(rawMsg, key) + case "value": + err = unpopulate(val, "Value", &o.Value) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", o, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type PortRange. +func (p PortRange) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "portEnd", p.PortEnd) + populate(objectMap, "portStart", p.PortStart) + populate(objectMap, "protocol", p.Protocol) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type PortRange. +func (p *PortRange) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", p, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "portEnd": + err = unpopulate(val, "PortEnd", &p.PortEnd) + delete(rawMsg, key) + case "portStart": + err = unpopulate(val, "PortStart", &p.PortStart) + delete(rawMsg, key) + case "protocol": + err = unpopulate(val, "Protocol", &p.Protocol) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", p, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type PowerState. +func (p PowerState) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "code", p.Code) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type PowerState. 
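+//
+// A sketch of the expected payload (CodeRunning is assumed from this
+// package's Code constants):
+//
+//	var ps PowerState
+//	_ = json.Unmarshal([]byte(`{"code":"Running"}`), &ps) // error elided; *ps.Code == CodeRunning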
+func (p *PowerState) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", p, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "code": + err = unpopulate(val, "Code", &p.Code) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", p, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type PrivateEndpoint. +func (p PrivateEndpoint) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "id", p.ID) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type PrivateEndpoint. +func (p *PrivateEndpoint) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", p, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "id": + err = unpopulate(val, "ID", &p.ID) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", p, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type PrivateEndpointConnection. +func (p PrivateEndpointConnection) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "id", p.ID) + populate(objectMap, "name", p.Name) + populate(objectMap, "properties", p.Properties) + populate(objectMap, "type", p.Type) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type PrivateEndpointConnection. +func (p *PrivateEndpointConnection) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", p, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "id": + err = unpopulate(val, "ID", &p.ID) + delete(rawMsg, key) + case "name": + err = unpopulate(val, "Name", &p.Name) + delete(rawMsg, key) + case "properties": + err = unpopulate(val, "Properties", &p.Properties) + delete(rawMsg, key) + case "type": + err = unpopulate(val, "Type", &p.Type) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", p, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type PrivateEndpointConnectionListResult. +func (p PrivateEndpointConnectionListResult) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "value", p.Value) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type PrivateEndpointConnectionListResult. +func (p *PrivateEndpointConnectionListResult) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", p, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "value": + err = unpopulate(val, "Value", &p.Value) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", p, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type PrivateEndpointConnectionProperties. 
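+//
+// A sketch (to.Ptr is the azcore "to" helper; ConnectionStatusApproved is
+// assumed from this package's constants). Unset fields are omitted:
+//
+//	p := PrivateEndpointConnectionProperties{
+//		PrivateLinkServiceConnectionState: &PrivateLinkServiceConnectionState{Status: to.Ptr(ConnectionStatusApproved)},
+//	}
+//	data, _ := json.Marshal(p) // {"privateLinkServiceConnectionState":{"status":"Approved"}}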
+func (p PrivateEndpointConnectionProperties) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "privateEndpoint", p.PrivateEndpoint) + populate(objectMap, "privateLinkServiceConnectionState", p.PrivateLinkServiceConnectionState) + populate(objectMap, "provisioningState", p.ProvisioningState) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type PrivateEndpointConnectionProperties. +func (p *PrivateEndpointConnectionProperties) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", p, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "privateEndpoint": + err = unpopulate(val, "PrivateEndpoint", &p.PrivateEndpoint) + delete(rawMsg, key) + case "privateLinkServiceConnectionState": + err = unpopulate(val, "PrivateLinkServiceConnectionState", &p.PrivateLinkServiceConnectionState) + delete(rawMsg, key) + case "provisioningState": + err = unpopulate(val, "ProvisioningState", &p.ProvisioningState) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", p, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type PrivateLinkResource. +func (p PrivateLinkResource) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "groupId", p.GroupID) + populate(objectMap, "id", p.ID) + populate(objectMap, "name", p.Name) + populate(objectMap, "privateLinkServiceID", p.PrivateLinkServiceID) + populate(objectMap, "requiredMembers", p.RequiredMembers) + populate(objectMap, "type", p.Type) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type PrivateLinkResource. +func (p *PrivateLinkResource) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", p, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "groupId": + err = unpopulate(val, "GroupID", &p.GroupID) + delete(rawMsg, key) + case "id": + err = unpopulate(val, "ID", &p.ID) + delete(rawMsg, key) + case "name": + err = unpopulate(val, "Name", &p.Name) + delete(rawMsg, key) + case "privateLinkServiceID": + err = unpopulate(val, "PrivateLinkServiceID", &p.PrivateLinkServiceID) + delete(rawMsg, key) + case "requiredMembers": + err = unpopulate(val, "RequiredMembers", &p.RequiredMembers) + delete(rawMsg, key) + case "type": + err = unpopulate(val, "Type", &p.Type) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", p, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type PrivateLinkResourcesListResult. +func (p PrivateLinkResourcesListResult) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "value", p.Value) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type PrivateLinkResourcesListResult. 
+func (p *PrivateLinkResourcesListResult) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", p, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "value": + err = unpopulate(val, "Value", &p.Value) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", p, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type PrivateLinkServiceConnectionState. +func (p PrivateLinkServiceConnectionState) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "description", p.Description) + populate(objectMap, "status", p.Status) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type PrivateLinkServiceConnectionState. +func (p *PrivateLinkServiceConnectionState) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", p, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "description": + err = unpopulate(val, "Description", &p.Description) + delete(rawMsg, key) + case "status": + err = unpopulate(val, "Status", &p.Status) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", p, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type RelativeMonthlySchedule. +func (r RelativeMonthlySchedule) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "dayOfWeek", r.DayOfWeek) + populate(objectMap, "intervalMonths", r.IntervalMonths) + populate(objectMap, "weekIndex", r.WeekIndex) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type RelativeMonthlySchedule. +func (r *RelativeMonthlySchedule) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", r, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "dayOfWeek": + err = unpopulate(val, "DayOfWeek", &r.DayOfWeek) + delete(rawMsg, key) + case "intervalMonths": + err = unpopulate(val, "IntervalMonths", &r.IntervalMonths) + delete(rawMsg, key) + case "weekIndex": + err = unpopulate(val, "WeekIndex", &r.WeekIndex) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", r, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ResourceReference. +func (r ResourceReference) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "id", r.ID) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ResourceReference. 
+func (r *ResourceReference) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", r, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "id": + err = unpopulate(val, "ID", &r.ID) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", r, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type RunCommandRequest. +func (r RunCommandRequest) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "clusterToken", r.ClusterToken) + populate(objectMap, "command", r.Command) + populate(objectMap, "context", r.Context) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type RunCommandRequest. +func (r *RunCommandRequest) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", r, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "clusterToken": + err = unpopulate(val, "ClusterToken", &r.ClusterToken) + delete(rawMsg, key) + case "command": + err = unpopulate(val, "Command", &r.Command) + delete(rawMsg, key) + case "context": + err = unpopulate(val, "Context", &r.Context) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", r, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type RunCommandResult. +func (r RunCommandResult) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "id", r.ID) + populate(objectMap, "properties", r.Properties) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type RunCommandResult. +func (r *RunCommandResult) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", r, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "id": + err = unpopulate(val, "ID", &r.ID) + delete(rawMsg, key) + case "properties": + err = unpopulate(val, "Properties", &r.Properties) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", r, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type SSHConfiguration. +func (s SSHConfiguration) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "publicKeys", s.PublicKeys) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type SSHConfiguration. +func (s *SSHConfiguration) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", s, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "publicKeys": + err = unpopulate(val, "PublicKeys", &s.PublicKeys) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", s, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type SSHPublicKey. 
+func (s SSHPublicKey) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "keyData", s.KeyData) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type SSHPublicKey. +func (s *SSHPublicKey) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", s, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "keyData": + err = unpopulate(val, "KeyData", &s.KeyData) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", s, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type SafeguardsAvailableVersion. +func (s SafeguardsAvailableVersion) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "id", s.ID) + populate(objectMap, "name", s.Name) + populate(objectMap, "properties", s.Properties) + populate(objectMap, "systemData", s.SystemData) + populate(objectMap, "type", s.Type) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type SafeguardsAvailableVersion. +func (s *SafeguardsAvailableVersion) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", s, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "id": + err = unpopulate(val, "ID", &s.ID) + delete(rawMsg, key) + case "name": + err = unpopulate(val, "Name", &s.Name) + delete(rawMsg, key) + case "properties": + err = unpopulate(val, "Properties", &s.Properties) + delete(rawMsg, key) + case "systemData": + err = unpopulate(val, "SystemData", &s.SystemData) + delete(rawMsg, key) + case "type": + err = unpopulate(val, "Type", &s.Type) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", s, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type SafeguardsAvailableVersionsList. +func (s SafeguardsAvailableVersionsList) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "nextLink", s.NextLink) + populate(objectMap, "value", s.Value) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type SafeguardsAvailableVersionsList. +func (s *SafeguardsAvailableVersionsList) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", s, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "nextLink": + err = unpopulate(val, "NextLink", &s.NextLink) + delete(rawMsg, key) + case "value": + err = unpopulate(val, "Value", &s.Value) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", s, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type SafeguardsAvailableVersionsProperties. 
+func (s SafeguardsAvailableVersionsProperties) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "isDefaultVersion", s.IsDefaultVersion) + populate(objectMap, "support", s.Support) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type SafeguardsAvailableVersionsProperties. +func (s *SafeguardsAvailableVersionsProperties) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", s, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "isDefaultVersion": + err = unpopulate(val, "IsDefaultVersion", &s.IsDefaultVersion) + delete(rawMsg, key) + case "support": + err = unpopulate(val, "Support", &s.Support) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", s, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type SafeguardsProfile. +func (s SafeguardsProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "excludedNamespaces", s.ExcludedNamespaces) + populate(objectMap, "level", s.Level) + populate(objectMap, "systemExcludedNamespaces", s.SystemExcludedNamespaces) + populate(objectMap, "version", s.Version) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type SafeguardsProfile. +func (s *SafeguardsProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", s, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "excludedNamespaces": + err = unpopulate(val, "ExcludedNamespaces", &s.ExcludedNamespaces) + delete(rawMsg, key) + case "level": + err = unpopulate(val, "Level", &s.Level) + delete(rawMsg, key) + case "systemExcludedNamespaces": + err = unpopulate(val, "SystemExcludedNamespaces", &s.SystemExcludedNamespaces) + delete(rawMsg, key) + case "version": + err = unpopulate(val, "Version", &s.Version) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", s, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ScaleProfile. +func (s ScaleProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "manual", s.Manual) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ScaleProfile. +func (s *ScaleProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", s, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "manual": + err = unpopulate(val, "Manual", &s.Manual) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", s, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type Schedule. 
+func (s Schedule) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "absoluteMonthly", s.AbsoluteMonthly) + populate(objectMap, "daily", s.Daily) + populate(objectMap, "relativeMonthly", s.RelativeMonthly) + populate(objectMap, "weekly", s.Weekly) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type Schedule. +func (s *Schedule) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", s, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "absoluteMonthly": + err = unpopulate(val, "AbsoluteMonthly", &s.AbsoluteMonthly) + delete(rawMsg, key) + case "daily": + err = unpopulate(val, "Daily", &s.Daily) + delete(rawMsg, key) + case "relativeMonthly": + err = unpopulate(val, "RelativeMonthly", &s.RelativeMonthly) + delete(rawMsg, key) + case "weekly": + err = unpopulate(val, "Weekly", &s.Weekly) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", s, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type ServiceMeshProfile. +func (s ServiceMeshProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "istio", s.Istio) + populate(objectMap, "mode", s.Mode) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type ServiceMeshProfile. +func (s *ServiceMeshProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", s, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "istio": + err = unpopulate(val, "Istio", &s.Istio) + delete(rawMsg, key) + case "mode": + err = unpopulate(val, "Mode", &s.Mode) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", s, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type Snapshot. +func (s Snapshot) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "id", s.ID) + populate(objectMap, "location", s.Location) + populate(objectMap, "name", s.Name) + populate(objectMap, "properties", s.Properties) + populate(objectMap, "systemData", s.SystemData) + populate(objectMap, "tags", s.Tags) + populate(objectMap, "type", s.Type) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type Snapshot. 
+func (s *Snapshot) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", s, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "id": + err = unpopulate(val, "ID", &s.ID) + delete(rawMsg, key) + case "location": + err = unpopulate(val, "Location", &s.Location) + delete(rawMsg, key) + case "name": + err = unpopulate(val, "Name", &s.Name) + delete(rawMsg, key) + case "properties": + err = unpopulate(val, "Properties", &s.Properties) + delete(rawMsg, key) + case "systemData": + err = unpopulate(val, "SystemData", &s.SystemData) + delete(rawMsg, key) + case "tags": + err = unpopulate(val, "Tags", &s.Tags) + delete(rawMsg, key) + case "type": + err = unpopulate(val, "Type", &s.Type) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", s, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type SnapshotListResult. +func (s SnapshotListResult) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "nextLink", s.NextLink) + populate(objectMap, "value", s.Value) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type SnapshotListResult. +func (s *SnapshotListResult) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", s, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "nextLink": + err = unpopulate(val, "NextLink", &s.NextLink) + delete(rawMsg, key) + case "value": + err = unpopulate(val, "Value", &s.Value) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", s, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type SnapshotProperties. +func (s SnapshotProperties) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "creationData", s.CreationData) + populate(objectMap, "enableFIPS", s.EnableFIPS) + populate(objectMap, "kubernetesVersion", s.KubernetesVersion) + populate(objectMap, "nodeImageVersion", s.NodeImageVersion) + populate(objectMap, "osSku", s.OSSKU) + populate(objectMap, "osType", s.OSType) + populate(objectMap, "snapshotType", s.SnapshotType) + populate(objectMap, "vmSize", s.VMSize) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type SnapshotProperties. 
+func (s *SnapshotProperties) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", s, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "creationData": + err = unpopulate(val, "CreationData", &s.CreationData) + delete(rawMsg, key) + case "enableFIPS": + err = unpopulate(val, "EnableFIPS", &s.EnableFIPS) + delete(rawMsg, key) + case "kubernetesVersion": + err = unpopulate(val, "KubernetesVersion", &s.KubernetesVersion) + delete(rawMsg, key) + case "nodeImageVersion": + err = unpopulate(val, "NodeImageVersion", &s.NodeImageVersion) + delete(rawMsg, key) + case "osSku": + err = unpopulate(val, "OSSKU", &s.OSSKU) + delete(rawMsg, key) + case "osType": + err = unpopulate(val, "OSType", &s.OSType) + delete(rawMsg, key) + case "snapshotType": + err = unpopulate(val, "SnapshotType", &s.SnapshotType) + delete(rawMsg, key) + case "vmSize": + err = unpopulate(val, "VMSize", &s.VMSize) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", s, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type SysctlConfig. +func (s SysctlConfig) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "fsAioMaxNr", s.FsAioMaxNr) + populate(objectMap, "fsFileMax", s.FsFileMax) + populate(objectMap, "fsInotifyMaxUserWatches", s.FsInotifyMaxUserWatches) + populate(objectMap, "fsNrOpen", s.FsNrOpen) + populate(objectMap, "kernelThreadsMax", s.KernelThreadsMax) + populate(objectMap, "netCoreNetdevMaxBacklog", s.NetCoreNetdevMaxBacklog) + populate(objectMap, "netCoreOptmemMax", s.NetCoreOptmemMax) + populate(objectMap, "netCoreRmemDefault", s.NetCoreRmemDefault) + populate(objectMap, "netCoreRmemMax", s.NetCoreRmemMax) + populate(objectMap, "netCoreSomaxconn", s.NetCoreSomaxconn) + populate(objectMap, "netCoreWmemDefault", s.NetCoreWmemDefault) + populate(objectMap, "netCoreWmemMax", s.NetCoreWmemMax) + populate(objectMap, "netIpv4IpLocalPortRange", s.NetIPv4IPLocalPortRange) + populate(objectMap, "netIpv4NeighDefaultGcThresh1", s.NetIPv4NeighDefaultGcThresh1) + populate(objectMap, "netIpv4NeighDefaultGcThresh2", s.NetIPv4NeighDefaultGcThresh2) + populate(objectMap, "netIpv4NeighDefaultGcThresh3", s.NetIPv4NeighDefaultGcThresh3) + populate(objectMap, "netIpv4TcpFinTimeout", s.NetIPv4TCPFinTimeout) + populate(objectMap, "netIpv4TcpKeepaliveProbes", s.NetIPv4TCPKeepaliveProbes) + populate(objectMap, "netIpv4TcpKeepaliveTime", s.NetIPv4TCPKeepaliveTime) + populate(objectMap, "netIpv4TcpMaxSynBacklog", s.NetIPv4TCPMaxSynBacklog) + populate(objectMap, "netIpv4TcpMaxTwBuckets", s.NetIPv4TCPMaxTwBuckets) + populate(objectMap, "netIpv4TcpTwReuse", s.NetIPv4TCPTwReuse) + populate(objectMap, "netIpv4TcpkeepaliveIntvl", s.NetIPv4TcpkeepaliveIntvl) + populate(objectMap, "netNetfilterNfConntrackBuckets", s.NetNetfilterNfConntrackBuckets) + populate(objectMap, "netNetfilterNfConntrackMax", s.NetNetfilterNfConntrackMax) + populate(objectMap, "vmMaxMapCount", s.VMMaxMapCount) + populate(objectMap, "vmSwappiness", s.VMSwappiness) + populate(objectMap, "vmVfsCachePressure", s.VMVfsCachePressure) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type SysctlConfig. 
+func (s *SysctlConfig) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", s, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "fsAioMaxNr":
+			err = unpopulate(val, "FsAioMaxNr", &s.FsAioMaxNr)
+			delete(rawMsg, key)
+		case "fsFileMax":
+			err = unpopulate(val, "FsFileMax", &s.FsFileMax)
+			delete(rawMsg, key)
+		case "fsInotifyMaxUserWatches":
+			err = unpopulate(val, "FsInotifyMaxUserWatches", &s.FsInotifyMaxUserWatches)
+			delete(rawMsg, key)
+		case "fsNrOpen":
+			err = unpopulate(val, "FsNrOpen", &s.FsNrOpen)
+			delete(rawMsg, key)
+		case "kernelThreadsMax":
+			err = unpopulate(val, "KernelThreadsMax", &s.KernelThreadsMax)
+			delete(rawMsg, key)
+		case "netCoreNetdevMaxBacklog":
+			err = unpopulate(val, "NetCoreNetdevMaxBacklog", &s.NetCoreNetdevMaxBacklog)
+			delete(rawMsg, key)
+		case "netCoreOptmemMax":
+			err = unpopulate(val, "NetCoreOptmemMax", &s.NetCoreOptmemMax)
+			delete(rawMsg, key)
+		case "netCoreRmemDefault":
+			err = unpopulate(val, "NetCoreRmemDefault", &s.NetCoreRmemDefault)
+			delete(rawMsg, key)
+		case "netCoreRmemMax":
+			err = unpopulate(val, "NetCoreRmemMax", &s.NetCoreRmemMax)
+			delete(rawMsg, key)
+		case "netCoreSomaxconn":
+			err = unpopulate(val, "NetCoreSomaxconn", &s.NetCoreSomaxconn)
+			delete(rawMsg, key)
+		case "netCoreWmemDefault":
+			err = unpopulate(val, "NetCoreWmemDefault", &s.NetCoreWmemDefault)
+			delete(rawMsg, key)
+		case "netCoreWmemMax":
+			err = unpopulate(val, "NetCoreWmemMax", &s.NetCoreWmemMax)
+			delete(rawMsg, key)
+		case "netIpv4IpLocalPortRange":
+			err = unpopulate(val, "NetIPv4IPLocalPortRange", &s.NetIPv4IPLocalPortRange)
+			delete(rawMsg, key)
+		case "netIpv4NeighDefaultGcThresh1":
+			err = unpopulate(val, "NetIPv4NeighDefaultGcThresh1", &s.NetIPv4NeighDefaultGcThresh1)
+			delete(rawMsg, key)
+		case "netIpv4NeighDefaultGcThresh2":
+			err = unpopulate(val, "NetIPv4NeighDefaultGcThresh2", &s.NetIPv4NeighDefaultGcThresh2)
+			delete(rawMsg, key)
+		case "netIpv4NeighDefaultGcThresh3":
+			err = unpopulate(val, "NetIPv4NeighDefaultGcThresh3", &s.NetIPv4NeighDefaultGcThresh3)
+			delete(rawMsg, key)
+		case "netIpv4TcpFinTimeout":
+			err = unpopulate(val, "NetIPv4TCPFinTimeout", &s.NetIPv4TCPFinTimeout)
+			delete(rawMsg, key)
+		case "netIpv4TcpKeepaliveProbes":
+			err = unpopulate(val, "NetIPv4TCPKeepaliveProbes", &s.NetIPv4TCPKeepaliveProbes)
+			delete(rawMsg, key)
+		case "netIpv4TcpKeepaliveTime":
+			err = unpopulate(val, "NetIPv4TCPKeepaliveTime", &s.NetIPv4TCPKeepaliveTime)
+			delete(rawMsg, key)
+		case "netIpv4TcpMaxSynBacklog":
+			err = unpopulate(val, "NetIPv4TCPMaxSynBacklog", &s.NetIPv4TCPMaxSynBacklog)
+			delete(rawMsg, key)
+		case "netIpv4TcpMaxTwBuckets":
+			err = unpopulate(val, "NetIPv4TCPMaxTwBuckets", &s.NetIPv4TCPMaxTwBuckets)
+			delete(rawMsg, key)
+		case "netIpv4TcpTwReuse":
+			err = unpopulate(val, "NetIPv4TCPTwReuse", &s.NetIPv4TCPTwReuse)
+			delete(rawMsg, key)
+		case "netIpv4TcpkeepaliveIntvl":
+			err = unpopulate(val, "NetIPv4TcpkeepaliveIntvl", &s.NetIPv4TcpkeepaliveIntvl)
+			delete(rawMsg, key)
+		case "netNetfilterNfConntrackBuckets":
+			err = unpopulate(val, "NetNetfilterNfConntrackBuckets", &s.NetNetfilterNfConntrackBuckets)
+			delete(rawMsg, key)
+		case "netNetfilterNfConntrackMax":
+			err = unpopulate(val, "NetNetfilterNfConntrackMax", &s.NetNetfilterNfConntrackMax)
+			delete(rawMsg, key)
+		case "vmMaxMapCount":
+			err = unpopulate(val, "VMMaxMapCount", &s.VMMaxMapCount)
+			delete(rawMsg, key)
+		case "vmSwappiness":
+			err = unpopulate(val, "VMSwappiness", &s.VMSwappiness)
+			delete(rawMsg, key)
+		case "vmVfsCachePressure":
+			err = unpopulate(val, "VMVfsCachePressure", &s.VMVfsCachePressure)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", s, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type SystemData.
+func (s SystemData) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populateDateTimeRFC3339(objectMap, "createdAt", s.CreatedAt)
+	populate(objectMap, "createdBy", s.CreatedBy)
+	populate(objectMap, "createdByType", s.CreatedByType)
+	populateDateTimeRFC3339(objectMap, "lastModifiedAt", s.LastModifiedAt)
+	populate(objectMap, "lastModifiedBy", s.LastModifiedBy)
+	populate(objectMap, "lastModifiedByType", s.LastModifiedByType)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type SystemData.
+func (s *SystemData) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", s, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "createdAt":
+			err = unpopulateDateTimeRFC3339(val, "CreatedAt", &s.CreatedAt)
+			delete(rawMsg, key)
+		case "createdBy":
+			err = unpopulate(val, "CreatedBy", &s.CreatedBy)
+			delete(rawMsg, key)
+		case "createdByType":
+			err = unpopulate(val, "CreatedByType", &s.CreatedByType)
+			delete(rawMsg, key)
+		case "lastModifiedAt":
+			err = unpopulateDateTimeRFC3339(val, "LastModifiedAt", &s.LastModifiedAt)
+			delete(rawMsg, key)
+		case "lastModifiedBy":
+			err = unpopulate(val, "LastModifiedBy", &s.LastModifiedBy)
+			delete(rawMsg, key)
+		case "lastModifiedByType":
+			err = unpopulate(val, "LastModifiedByType", &s.LastModifiedByType)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", s, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type TagsObject.
+func (t TagsObject) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "tags", t.Tags)
+	return json.Marshal(objectMap)
+}
+
+// UnmarshalJSON implements the json.Unmarshaller interface for type TagsObject.
+func (t *TagsObject) UnmarshalJSON(data []byte) error {
+	var rawMsg map[string]json.RawMessage
+	if err := json.Unmarshal(data, &rawMsg); err != nil {
+		return fmt.Errorf("unmarshalling type %T: %v", t, err)
+	}
+	for key, val := range rawMsg {
+		var err error
+		switch key {
+		case "tags":
+			err = unpopulate(val, "Tags", &t.Tags)
+			delete(rawMsg, key)
+		}
+		if err != nil {
+			return fmt.Errorf("unmarshalling type %T: %v", t, err)
+		}
+	}
+	return nil
+}
+
+// MarshalJSON implements the json.Marshaller interface for type TimeInWeek.
+func (t TimeInWeek) MarshalJSON() ([]byte, error) {
+	objectMap := make(map[string]any)
+	populate(objectMap, "day", t.Day)
+	populate(objectMap, "hourSlots", t.HourSlots)
+	return json.Marshal(objectMap)
+}
+
+func (t *TimeInWeek) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", t, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "day": + err = unpopulate(val, "Day", &t.Day) + delete(rawMsg, key) + case "hourSlots": + err = unpopulate(val, "HourSlots", &t.HourSlots) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", t, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type TimeSpan. +func (t TimeSpan) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populateDateTimeRFC3339(objectMap, "end", t.End) + populateDateTimeRFC3339(objectMap, "start", t.Start) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type TimeSpan. +func (t *TimeSpan) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", t, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "end": + err = unpopulateDateTimeRFC3339(val, "End", &t.End) + delete(rawMsg, key) + case "start": + err = unpopulateDateTimeRFC3339(val, "Start", &t.Start) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", t, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type TrustedAccessRole. +func (t TrustedAccessRole) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "name", t.Name) + populate(objectMap, "rules", t.Rules) + populate(objectMap, "sourceResourceType", t.SourceResourceType) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type TrustedAccessRole. +func (t *TrustedAccessRole) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", t, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "name": + err = unpopulate(val, "Name", &t.Name) + delete(rawMsg, key) + case "rules": + err = unpopulate(val, "Rules", &t.Rules) + delete(rawMsg, key) + case "sourceResourceType": + err = unpopulate(val, "SourceResourceType", &t.SourceResourceType) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", t, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type TrustedAccessRoleBinding. +func (t TrustedAccessRoleBinding) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "id", t.ID) + populate(objectMap, "name", t.Name) + populate(objectMap, "properties", t.Properties) + populate(objectMap, "systemData", t.SystemData) + populate(objectMap, "type", t.Type) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type TrustedAccessRoleBinding. 
+func (t *TrustedAccessRoleBinding) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", t, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "id": + err = unpopulate(val, "ID", &t.ID) + delete(rawMsg, key) + case "name": + err = unpopulate(val, "Name", &t.Name) + delete(rawMsg, key) + case "properties": + err = unpopulate(val, "Properties", &t.Properties) + delete(rawMsg, key) + case "systemData": + err = unpopulate(val, "SystemData", &t.SystemData) + delete(rawMsg, key) + case "type": + err = unpopulate(val, "Type", &t.Type) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", t, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type TrustedAccessRoleBindingListResult. +func (t TrustedAccessRoleBindingListResult) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "nextLink", t.NextLink) + populate(objectMap, "value", t.Value) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type TrustedAccessRoleBindingListResult. +func (t *TrustedAccessRoleBindingListResult) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", t, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "nextLink": + err = unpopulate(val, "NextLink", &t.NextLink) + delete(rawMsg, key) + case "value": + err = unpopulate(val, "Value", &t.Value) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", t, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type TrustedAccessRoleBindingProperties. +func (t TrustedAccessRoleBindingProperties) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "provisioningState", t.ProvisioningState) + populate(objectMap, "roles", t.Roles) + populate(objectMap, "sourceResourceId", t.SourceResourceID) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type TrustedAccessRoleBindingProperties. +func (t *TrustedAccessRoleBindingProperties) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", t, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "provisioningState": + err = unpopulate(val, "ProvisioningState", &t.ProvisioningState) + delete(rawMsg, key) + case "roles": + err = unpopulate(val, "Roles", &t.Roles) + delete(rawMsg, key) + case "sourceResourceId": + err = unpopulate(val, "SourceResourceID", &t.SourceResourceID) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", t, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type TrustedAccessRoleListResult. +func (t TrustedAccessRoleListResult) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "nextLink", t.NextLink) + populate(objectMap, "value", t.Value) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type TrustedAccessRoleListResult. 
+func (t *TrustedAccessRoleListResult) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", t, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "nextLink": + err = unpopulate(val, "NextLink", &t.NextLink) + delete(rawMsg, key) + case "value": + err = unpopulate(val, "Value", &t.Value) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", t, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type TrustedAccessRoleRule. +func (t TrustedAccessRoleRule) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "apiGroups", t.APIGroups) + populate(objectMap, "nonResourceURLs", t.NonResourceURLs) + populate(objectMap, "resourceNames", t.ResourceNames) + populate(objectMap, "resources", t.Resources) + populate(objectMap, "verbs", t.Verbs) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type TrustedAccessRoleRule. +func (t *TrustedAccessRoleRule) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", t, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "apiGroups": + err = unpopulate(val, "APIGroups", &t.APIGroups) + delete(rawMsg, key) + case "nonResourceURLs": + err = unpopulate(val, "NonResourceURLs", &t.NonResourceURLs) + delete(rawMsg, key) + case "resourceNames": + err = unpopulate(val, "ResourceNames", &t.ResourceNames) + delete(rawMsg, key) + case "resources": + err = unpopulate(val, "Resources", &t.Resources) + delete(rawMsg, key) + case "verbs": + err = unpopulate(val, "Verbs", &t.Verbs) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", t, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type UpgradeOverrideSettings. +func (u UpgradeOverrideSettings) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "forceUpgrade", u.ForceUpgrade) + populateDateTimeRFC3339(objectMap, "until", u.Until) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type UpgradeOverrideSettings. +func (u *UpgradeOverrideSettings) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", u, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "forceUpgrade": + err = unpopulate(val, "ForceUpgrade", &u.ForceUpgrade) + delete(rawMsg, key) + case "until": + err = unpopulateDateTimeRFC3339(val, "Until", &u.Until) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", u, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type UserAssignedIdentity. +func (u UserAssignedIdentity) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "clientId", u.ClientID) + populate(objectMap, "objectId", u.ObjectID) + populate(objectMap, "resourceId", u.ResourceID) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type UserAssignedIdentity. 
+func (u *UserAssignedIdentity) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", u, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "clientId": + err = unpopulate(val, "ClientID", &u.ClientID) + delete(rawMsg, key) + case "objectId": + err = unpopulate(val, "ObjectID", &u.ObjectID) + delete(rawMsg, key) + case "resourceId": + err = unpopulate(val, "ResourceID", &u.ResourceID) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", u, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type VirtualMachineNodes. +func (v VirtualMachineNodes) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "count", v.Count) + populate(objectMap, "size", v.Size) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type VirtualMachineNodes. +func (v *VirtualMachineNodes) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", v, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "count": + err = unpopulate(val, "Count", &v.Count) + delete(rawMsg, key) + case "size": + err = unpopulate(val, "Size", &v.Size) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", v, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type VirtualMachinesProfile. +func (v VirtualMachinesProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "scale", v.Scale) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type VirtualMachinesProfile. +func (v *VirtualMachinesProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", v, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "scale": + err = unpopulate(val, "Scale", &v.Scale) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", v, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type WeeklySchedule. +func (w WeeklySchedule) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "dayOfWeek", w.DayOfWeek) + populate(objectMap, "intervalWeeks", w.IntervalWeeks) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type WeeklySchedule. +func (w *WeeklySchedule) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", w, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "dayOfWeek": + err = unpopulate(val, "DayOfWeek", &w.DayOfWeek) + delete(rawMsg, key) + case "intervalWeeks": + err = unpopulate(val, "IntervalWeeks", &w.IntervalWeeks) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", w, err) + } + } + return nil +} + +// MarshalJSON implements the json.Marshaller interface for type WindowsGmsaProfile. 
+func (w WindowsGmsaProfile) MarshalJSON() ([]byte, error) { + objectMap := make(map[string]any) + populate(objectMap, "dnsServer", w.DNSServer) + populate(objectMap, "enabled", w.Enabled) + populate(objectMap, "rootDomainName", w.RootDomainName) + return json.Marshal(objectMap) +} + +// UnmarshalJSON implements the json.Unmarshaller interface for type WindowsGmsaProfile. +func (w *WindowsGmsaProfile) UnmarshalJSON(data []byte) error { + var rawMsg map[string]json.RawMessage + if err := json.Unmarshal(data, &rawMsg); err != nil { + return fmt.Errorf("unmarshalling type %T: %v", w, err) + } + for key, val := range rawMsg { + var err error + switch key { + case "dnsServer": + err = unpopulate(val, "DNSServer", &w.DNSServer) + delete(rawMsg, key) + case "enabled": + err = unpopulate(val, "Enabled", &w.Enabled) + delete(rawMsg, key) + case "rootDomainName": + err = unpopulate(val, "RootDomainName", &w.RootDomainName) + delete(rawMsg, key) + } + if err != nil { + return fmt.Errorf("unmarshalling type %T: %v", w, err) + } + } + return nil +} + +func populate(m map[string]any, k string, v any) { + if v == nil { + return + } else if azcore.IsNullValue(v) { + m[k] = nil + } else if !reflect.ValueOf(v).IsNil() { + m[k] = v + } +} + +func populateAny(m map[string]any, k string, v any) { + if v == nil { + return + } else if azcore.IsNullValue(v) { + m[k] = nil + } else { + m[k] = v + } +} + +func populateByteArray[T any](m map[string]any, k string, b []T, convert func() any) { + if azcore.IsNullValue(b) { + m[k] = nil + } else if len(b) == 0 { + return + } else { + m[k] = convert() + } +} + +func unpopulate(data json.RawMessage, fn string, v any) error { + if data == nil || string(data) == "null" { + return nil + } + if err := json.Unmarshal(data, v); err != nil { + return fmt.Errorf("struct field %s: %v", fn, err) + } + return nil +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/operations_client.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/operations_client.go new file mode 100644 index 000000000000..bfc700c76a31 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/operations_client.go @@ -0,0 +1,89 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. See License.txt in the project root for license information. +// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +package armcontainerservice + +import ( + "context" + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/arm" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" + "net/http" +) + +// OperationsClient contains the methods for the Operations group. +// Don't use this type directly, use NewOperationsClient() instead. +type OperationsClient struct { + internal *arm.Client +} + +// NewOperationsClient creates a new instance of OperationsClient with the specified values. +// - credential - used to authorize requests. Usually a credential from azidentity. +// - options - pass nil to accept the default values. 
+func NewOperationsClient(credential azcore.TokenCredential, options *arm.ClientOptions) (*OperationsClient, error) { + cl, err := arm.NewClient(moduleName, moduleVersion, credential, options) + if err != nil { + return nil, err + } + client := &OperationsClient{ + internal: cl, + } + return client, nil +} + +// NewListPager - Gets a list of operations. +// +// Generated from API version 2023-11-02-preview +// - options - OperationsClientListOptions contains the optional parameters for the OperationsClient.NewListPager method. +func (client *OperationsClient) NewListPager(options *OperationsClientListOptions) *runtime.Pager[OperationsClientListResponse] { + return runtime.NewPager(runtime.PagingHandler[OperationsClientListResponse]{ + More: func(page OperationsClientListResponse) bool { + return false + }, + Fetcher: func(ctx context.Context, page *OperationsClientListResponse) (OperationsClientListResponse, error) { + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, "OperationsClient.NewListPager") + req, err := client.listCreateRequest(ctx, options) + if err != nil { + return OperationsClientListResponse{}, err + } + resp, err := client.internal.Pipeline().Do(req) + if err != nil { + return OperationsClientListResponse{}, err + } + if !runtime.HasStatusCode(resp, http.StatusOK) { + return OperationsClientListResponse{}, runtime.NewResponseError(resp) + } + return client.listHandleResponse(resp) + }, + Tracer: client.internal.Tracer(), + }) +} + +// listCreateRequest creates the List request. +func (client *OperationsClient) listCreateRequest(ctx context.Context, options *OperationsClientListOptions) (*policy.Request, error) { + urlPath := "/providers/Microsoft.ContainerService/operations" + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// listHandleResponse handles the List response. +func (client *OperationsClient) listHandleResponse(resp *http.Response) (OperationsClientListResponse, error) { + result := OperationsClientListResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.OperationListResult); err != nil { + return OperationsClientListResponse{}, err + } + return result, nil +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/operationstatusresult_client.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/operationstatusresult_client.go new file mode 100644 index 000000000000..bf79b9b6dd18 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/operationstatusresult_client.go @@ -0,0 +1,254 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. See License.txt in the project root for license information. +// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. 
+ +package armcontainerservice + +import ( + "context" + "errors" + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/arm" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" + "net/http" + "net/url" + "strings" +) + +// OperationStatusResultClient contains the methods for the OperationStatusResult group. +// Don't use this type directly, use NewOperationStatusResultClient() instead. +type OperationStatusResultClient struct { + internal *arm.Client + subscriptionID string +} + +// NewOperationStatusResultClient creates a new instance of OperationStatusResultClient with the specified values. +// - subscriptionID - The ID of the target subscription. The value must be an UUID. +// - credential - used to authorize requests. Usually a credential from azidentity. +// - options - pass nil to accept the default values. +func NewOperationStatusResultClient(subscriptionID string, credential azcore.TokenCredential, options *arm.ClientOptions) (*OperationStatusResultClient, error) { + cl, err := arm.NewClient(moduleName, moduleVersion, credential, options) + if err != nil { + return nil, err + } + client := &OperationStatusResultClient{ + subscriptionID: subscriptionID, + internal: cl, + } + return client, nil +} + +// Get - Get the status of a specific operation in the specified managed cluster. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - operationID - The ID of an ongoing async operation. +// - options - OperationStatusResultClientGetOptions contains the optional parameters for the OperationStatusResultClient.Get +// method. +func (client *OperationStatusResultClient) Get(ctx context.Context, resourceGroupName string, resourceName string, operationID string, options *OperationStatusResultClientGetOptions) (OperationStatusResultClientGetResponse, error) { + var err error + const operationName = "OperationStatusResultClient.Get" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.getCreateRequest(ctx, resourceGroupName, resourceName, operationID, options) + if err != nil { + return OperationStatusResultClientGetResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return OperationStatusResultClientGetResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return OperationStatusResultClientGetResponse{}, err + } + resp, err := client.getHandleResponse(httpResp) + return resp, err +} + +// getCreateRequest creates the Get request. 
+func (client *OperationStatusResultClient) getCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, operationID string, options *OperationStatusResultClientGetOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/operations/{operationId}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + if operationID == "" { + return nil, errors.New("parameter operationID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{operationId}", url.PathEscape(operationID)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// getHandleResponse handles the Get response. +func (client *OperationStatusResultClient) getHandleResponse(resp *http.Response) (OperationStatusResultClientGetResponse, error) { + result := OperationStatusResultClientGetResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.OperationStatusResult); err != nil { + return OperationStatusResultClientGetResponse{}, err + } + return result, nil +} + +// GetByAgentPool - Get the status of a specific operation in the specified agent pool. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - agentPoolName - The name of the agent pool. +// - operationID - The ID of an ongoing async operation. +// - options - OperationStatusResultClientGetByAgentPoolOptions contains the optional parameters for the OperationStatusResultClient.GetByAgentPool +// method. 
+func (client *OperationStatusResultClient) GetByAgentPool(ctx context.Context, resourceGroupName string, resourceName string, agentPoolName string, operationID string, options *OperationStatusResultClientGetByAgentPoolOptions) (OperationStatusResultClientGetByAgentPoolResponse, error) { + var err error + const operationName = "OperationStatusResultClient.GetByAgentPool" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.getByAgentPoolCreateRequest(ctx, resourceGroupName, resourceName, agentPoolName, operationID, options) + if err != nil { + return OperationStatusResultClientGetByAgentPoolResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return OperationStatusResultClientGetByAgentPoolResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return OperationStatusResultClientGetByAgentPoolResponse{}, err + } + resp, err := client.getByAgentPoolHandleResponse(httpResp) + return resp, err +} + +// getByAgentPoolCreateRequest creates the GetByAgentPool request. +func (client *OperationStatusResultClient) getByAgentPoolCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, agentPoolName string, operationID string, options *OperationStatusResultClientGetByAgentPoolOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/agentPools/{agentPoolName}/operations/{operationId}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + if agentPoolName == "" { + return nil, errors.New("parameter agentPoolName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{agentPoolName}", url.PathEscape(agentPoolName)) + if operationID == "" { + return nil, errors.New("parameter operationID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{operationId}", url.PathEscape(operationID)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// getByAgentPoolHandleResponse handles the GetByAgentPool response. 
+func (client *OperationStatusResultClient) getByAgentPoolHandleResponse(resp *http.Response) (OperationStatusResultClientGetByAgentPoolResponse, error) { + result := OperationStatusResultClientGetByAgentPoolResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.OperationStatusResult); err != nil { + return OperationStatusResultClientGetByAgentPoolResponse{}, err + } + return result, nil +} + +// NewListPager - Gets a list of operations in the specified managedCluster +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - options - OperationStatusResultClientListOptions contains the optional parameters for the OperationStatusResultClient.NewListPager +// method. +func (client *OperationStatusResultClient) NewListPager(resourceGroupName string, resourceName string, options *OperationStatusResultClientListOptions) *runtime.Pager[OperationStatusResultClientListResponse] { + return runtime.NewPager(runtime.PagingHandler[OperationStatusResultClientListResponse]{ + More: func(page OperationStatusResultClientListResponse) bool { + return page.NextLink != nil && len(*page.NextLink) > 0 + }, + Fetcher: func(ctx context.Context, page *OperationStatusResultClientListResponse) (OperationStatusResultClientListResponse, error) { + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, "OperationStatusResultClient.NewListPager") + nextLink := "" + if page != nil { + nextLink = *page.NextLink + } + resp, err := runtime.FetcherForNextLink(ctx, client.internal.Pipeline(), nextLink, func(ctx context.Context) (*policy.Request, error) { + return client.listCreateRequest(ctx, resourceGroupName, resourceName, options) + }, nil) + if err != nil { + return OperationStatusResultClientListResponse{}, err + } + return client.listHandleResponse(resp) + }, + Tracer: client.internal.Tracer(), + }) +} + +// listCreateRequest creates the List request. +func (client *OperationStatusResultClient) listCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, options *OperationStatusResultClientListOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/operations" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// listHandleResponse handles the List response. 
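The pager returned by NewListPager follows the standard azcore runtime.Pager contract (More/NextPage). A hedged consumption sketch, assuming context, fmt, and this package are imported, and that the embedded OperationStatusResultList exposes a Value slice, as ARM list results conventionally do:

func printOperations(ctx context.Context, client *armcontainerservice.OperationStatusResultClient) error {
	pager := client.NewListPager("<resource-group>", "<cluster-name>", nil)
	for pager.More() {
		page, err := pager.NextPage(ctx)
		if err != nil {
			return err
		}
		for _, op := range page.Value {
			// Name and Status are pointers; guard against nil before dereferencing.
			if op.Name != nil && op.Status != nil {
				fmt.Printf("%s: %s\n", *op.Name, *op.Status)
			}
		}
	}
	return nil
}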
+func (client *OperationStatusResultClient) listHandleResponse(resp *http.Response) (OperationStatusResultClientListResponse, error) { + result := OperationStatusResultClientListResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.OperationStatusResultList); err != nil { + return OperationStatusResultClientListResponse{}, err + } + return result, nil +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/options.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/options.go new file mode 100644 index 000000000000..f209cbc7f0ed --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/options.go @@ -0,0 +1,459 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. See License.txt in the project root for license information. +// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +package armcontainerservice + +// AgentPoolsClientBeginAbortLatestOperationOptions contains the optional parameters for the AgentPoolsClient.BeginAbortLatestOperation +// method. +type AgentPoolsClientBeginAbortLatestOperationOptions struct { + // Resumes the LRO from the provided token. + ResumeToken string +} + +// AgentPoolsClientBeginCreateOrUpdateOptions contains the optional parameters for the AgentPoolsClient.BeginCreateOrUpdate +// method. +type AgentPoolsClientBeginCreateOrUpdateOptions struct { + // Resumes the LRO from the provided token. + ResumeToken string +} + +// AgentPoolsClientBeginDeleteMachinesOptions contains the optional parameters for the AgentPoolsClient.BeginDeleteMachines +// method. +type AgentPoolsClientBeginDeleteMachinesOptions struct { + // Resumes the LRO from the provided token. + ResumeToken string +} + +// AgentPoolsClientBeginDeleteOptions contains the optional parameters for the AgentPoolsClient.BeginDelete method. +type AgentPoolsClientBeginDeleteOptions struct { + // ignore-pod-disruption-budget=true to delete those pods on a node without considering Pod Disruption Budget + IgnorePodDisruptionBudget *bool + + // Resumes the LRO from the provided token. + ResumeToken string +} + +// AgentPoolsClientBeginUpgradeNodeImageVersionOptions contains the optional parameters for the AgentPoolsClient.BeginUpgradeNodeImageVersion +// method. +type AgentPoolsClientBeginUpgradeNodeImageVersionOptions struct { + // Resumes the LRO from the provided token. + ResumeToken string +} + +// AgentPoolsClientGetAvailableAgentPoolVersionsOptions contains the optional parameters for the AgentPoolsClient.GetAvailableAgentPoolVersions +// method. +type AgentPoolsClientGetAvailableAgentPoolVersionsOptions struct { + // placeholder for future optional parameters +} + +// AgentPoolsClientGetOptions contains the optional parameters for the AgentPoolsClient.Get method. +type AgentPoolsClientGetOptions struct { + // placeholder for future optional parameters +} + +// AgentPoolsClientGetUpgradeProfileOptions contains the optional parameters for the AgentPoolsClient.GetUpgradeProfile method. 
+type AgentPoolsClientGetUpgradeProfileOptions struct { + // placeholder for future optional parameters +} + +// AgentPoolsClientListOptions contains the optional parameters for the AgentPoolsClient.NewListPager method. +type AgentPoolsClientListOptions struct { + // placeholder for future optional parameters +} + +// MachinesClientGetOptions contains the optional parameters for the MachinesClient.Get method. +type MachinesClientGetOptions struct { + // placeholder for future optional parameters +} + +// MachinesClientListOptions contains the optional parameters for the MachinesClient.NewListPager method. +type MachinesClientListOptions struct { + // placeholder for future optional parameters +} + +// MaintenanceConfigurationsClientCreateOrUpdateOptions contains the optional parameters for the MaintenanceConfigurationsClient.CreateOrUpdate +// method. +type MaintenanceConfigurationsClientCreateOrUpdateOptions struct { + // placeholder for future optional parameters +} + +// MaintenanceConfigurationsClientDeleteOptions contains the optional parameters for the MaintenanceConfigurationsClient.Delete +// method. +type MaintenanceConfigurationsClientDeleteOptions struct { + // placeholder for future optional parameters +} + +// MaintenanceConfigurationsClientGetOptions contains the optional parameters for the MaintenanceConfigurationsClient.Get +// method. +type MaintenanceConfigurationsClientGetOptions struct { + // placeholder for future optional parameters +} + +// MaintenanceConfigurationsClientListByManagedClusterOptions contains the optional parameters for the MaintenanceConfigurationsClient.NewListByManagedClusterPager +// method. +type MaintenanceConfigurationsClientListByManagedClusterOptions struct { + // placeholder for future optional parameters +} + +// ManagedClusterSnapshotsClientCreateOrUpdateOptions contains the optional parameters for the ManagedClusterSnapshotsClient.CreateOrUpdate +// method. +type ManagedClusterSnapshotsClientCreateOrUpdateOptions struct { + // placeholder for future optional parameters +} + +// ManagedClusterSnapshotsClientDeleteOptions contains the optional parameters for the ManagedClusterSnapshotsClient.Delete +// method. +type ManagedClusterSnapshotsClientDeleteOptions struct { + // placeholder for future optional parameters +} + +// ManagedClusterSnapshotsClientGetOptions contains the optional parameters for the ManagedClusterSnapshotsClient.Get method. +type ManagedClusterSnapshotsClientGetOptions struct { + // placeholder for future optional parameters +} + +// ManagedClusterSnapshotsClientListByResourceGroupOptions contains the optional parameters for the ManagedClusterSnapshotsClient.NewListByResourceGroupPager +// method. +type ManagedClusterSnapshotsClientListByResourceGroupOptions struct { + // placeholder for future optional parameters +} + +// ManagedClusterSnapshotsClientListOptions contains the optional parameters for the ManagedClusterSnapshotsClient.NewListPager +// method. +type ManagedClusterSnapshotsClientListOptions struct { + // placeholder for future optional parameters +} + +// ManagedClusterSnapshotsClientUpdateTagsOptions contains the optional parameters for the ManagedClusterSnapshotsClient.UpdateTags +// method. +type ManagedClusterSnapshotsClientUpdateTagsOptions struct { + // placeholder for future optional parameters +} + +// ManagedClustersClientBeginAbortLatestOperationOptions contains the optional parameters for the ManagedClustersClient.BeginAbortLatestOperation +// method. 
+type ManagedClustersClientBeginAbortLatestOperationOptions struct { + // Resumes the LRO from the provided token. + ResumeToken string +} + +// ManagedClustersClientBeginCreateOrUpdateOptions contains the optional parameters for the ManagedClustersClient.BeginCreateOrUpdate +// method. +type ManagedClustersClientBeginCreateOrUpdateOptions struct { + // Resumes the LRO from the provided token. + ResumeToken string +} + +// ManagedClustersClientBeginDeleteOptions contains the optional parameters for the ManagedClustersClient.BeginDelete method. +type ManagedClustersClientBeginDeleteOptions struct { + // ignore-pod-disruption-budget=true to delete those pods on a node without considering Pod Disruption Budget + IgnorePodDisruptionBudget *bool + + // Resumes the LRO from the provided token. + ResumeToken string +} + +// ManagedClustersClientBeginResetAADProfileOptions contains the optional parameters for the ManagedClustersClient.BeginResetAADProfile +// method. +type ManagedClustersClientBeginResetAADProfileOptions struct { + // Resumes the LRO from the provided token. + ResumeToken string +} + +// ManagedClustersClientBeginResetServicePrincipalProfileOptions contains the optional parameters for the ManagedClustersClient.BeginResetServicePrincipalProfile +// method. +type ManagedClustersClientBeginResetServicePrincipalProfileOptions struct { + // Resumes the LRO from the provided token. + ResumeToken string +} + +// ManagedClustersClientBeginRotateClusterCertificatesOptions contains the optional parameters for the ManagedClustersClient.BeginRotateClusterCertificates +// method. +type ManagedClustersClientBeginRotateClusterCertificatesOptions struct { + // Resumes the LRO from the provided token. + ResumeToken string +} + +// ManagedClustersClientBeginRotateServiceAccountSigningKeysOptions contains the optional parameters for the ManagedClustersClient.BeginRotateServiceAccountSigningKeys +// method. +type ManagedClustersClientBeginRotateServiceAccountSigningKeysOptions struct { + // Resumes the LRO from the provided token. + ResumeToken string +} + +// ManagedClustersClientBeginRunCommandOptions contains the optional parameters for the ManagedClustersClient.BeginRunCommand +// method. +type ManagedClustersClientBeginRunCommandOptions struct { + // Resumes the LRO from the provided token. + ResumeToken string +} + +// ManagedClustersClientBeginStartOptions contains the optional parameters for the ManagedClustersClient.BeginStart method. +type ManagedClustersClientBeginStartOptions struct { + // Resumes the LRO from the provided token. + ResumeToken string +} + +// ManagedClustersClientBeginStopOptions contains the optional parameters for the ManagedClustersClient.BeginStop method. +type ManagedClustersClientBeginStopOptions struct { + // Resumes the LRO from the provided token. + ResumeToken string +} + +// ManagedClustersClientBeginUpdateTagsOptions contains the optional parameters for the ManagedClustersClient.BeginUpdateTags +// method. +type ManagedClustersClientBeginUpdateTagsOptions struct { + // Resumes the LRO from the provided token. + ResumeToken string +} + +// ManagedClustersClientGetAccessProfileOptions contains the optional parameters for the ManagedClustersClient.GetAccessProfile +// method. +type ManagedClustersClientGetAccessProfileOptions struct { + // placeholder for future optional parameters +} + +// ManagedClustersClientGetCommandResultOptions contains the optional parameters for the ManagedClustersClient.GetCommandResult +// method. 
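Every Begin* options type here carries a ResumeToken, which lets a caller rehydrate a long-running operation across process restarts. A sketch of the pattern, assuming an *armcontainerservice.ManagedClustersClient (added elsewhere in this patch) and placeholder names:

func deleteWithResume(ctx context.Context, mc *armcontainerservice.ManagedClustersClient) error {
	poller, err := mc.BeginDelete(ctx, "<resource-group>", "<cluster-name>", nil)
	if err != nil {
		return err
	}
	// Persist the token somewhere durable before the process exits.
	token, err := poller.ResumeToken()
	if err != nil {
		return err
	}
	// Later, resume polling from the saved token instead of re-issuing the DELETE;
	// when a resume token is supplied, the generated code skips the initial request.
	poller, err = mc.BeginDelete(ctx, "<resource-group>", "<cluster-name>",
		&armcontainerservice.ManagedClustersClientBeginDeleteOptions{ResumeToken: token})
	if err != nil {
		return err
	}
	_, err = poller.PollUntilDone(ctx, nil)
	return err
}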
+type ManagedClustersClientGetCommandResultOptions struct { + // placeholder for future optional parameters +} + +// ManagedClustersClientGetGuardrailsVersionsOptions contains the optional parameters for the ManagedClustersClient.GetGuardrailsVersions +// method. +type ManagedClustersClientGetGuardrailsVersionsOptions struct { + // placeholder for future optional parameters +} + +// ManagedClustersClientGetMeshRevisionProfileOptions contains the optional parameters for the ManagedClustersClient.GetMeshRevisionProfile +// method. +type ManagedClustersClientGetMeshRevisionProfileOptions struct { + // placeholder for future optional parameters +} + +// ManagedClustersClientGetMeshUpgradeProfileOptions contains the optional parameters for the ManagedClustersClient.GetMeshUpgradeProfile +// method. +type ManagedClustersClientGetMeshUpgradeProfileOptions struct { + // placeholder for future optional parameters +} + +// ManagedClustersClientGetOSOptionsOptions contains the optional parameters for the ManagedClustersClient.GetOSOptions method. +type ManagedClustersClientGetOSOptionsOptions struct { + // The resource type for which the OS options needs to be returned + ResourceType *string +} + +// ManagedClustersClientGetOptions contains the optional parameters for the ManagedClustersClient.Get method. +type ManagedClustersClientGetOptions struct { + // placeholder for future optional parameters +} + +// ManagedClustersClientGetSafeguardsVersionsOptions contains the optional parameters for the ManagedClustersClient.GetSafeguardsVersions +// method. +type ManagedClustersClientGetSafeguardsVersionsOptions struct { + // placeholder for future optional parameters +} + +// ManagedClustersClientGetUpgradeProfileOptions contains the optional parameters for the ManagedClustersClient.GetUpgradeProfile +// method. +type ManagedClustersClientGetUpgradeProfileOptions struct { + // placeholder for future optional parameters +} + +// ManagedClustersClientListByResourceGroupOptions contains the optional parameters for the ManagedClustersClient.NewListByResourceGroupPager +// method. +type ManagedClustersClientListByResourceGroupOptions struct { + // placeholder for future optional parameters +} + +// ManagedClustersClientListClusterAdminCredentialsOptions contains the optional parameters for the ManagedClustersClient.ListClusterAdminCredentials +// method. +type ManagedClustersClientListClusterAdminCredentialsOptions struct { + // server fqdn type for credentials to be returned + ServerFqdn *string +} + +// ManagedClustersClientListClusterMonitoringUserCredentialsOptions contains the optional parameters for the ManagedClustersClient.ListClusterMonitoringUserCredentials +// method. +type ManagedClustersClientListClusterMonitoringUserCredentialsOptions struct { + // server fqdn type for credentials to be returned + ServerFqdn *string +} + +// ManagedClustersClientListClusterUserCredentialsOptions contains the optional parameters for the ManagedClustersClient.ListClusterUserCredentials +// method. +type ManagedClustersClientListClusterUserCredentialsOptions struct { + // Only apply to AAD clusters, specifies the format of returned kubeconfig. Format 'azure' will return azure auth-provider + // kubeconfig; format 'exec' will return exec format kubeconfig, which requires + // kubelogin binary in the path. 
+ Format *Format + + // server fqdn type for credentials to be returned + ServerFqdn *string +} + +// ManagedClustersClientListGuardrailsVersionsOptions contains the optional parameters for the ManagedClustersClient.NewListGuardrailsVersionsPager +// method. +type ManagedClustersClientListGuardrailsVersionsOptions struct { + // placeholder for future optional parameters +} + +// ManagedClustersClientListKubernetesVersionsOptions contains the optional parameters for the ManagedClustersClient.ListKubernetesVersions +// method. +type ManagedClustersClientListKubernetesVersionsOptions struct { + // placeholder for future optional parameters +} + +// ManagedClustersClientListMeshRevisionProfilesOptions contains the optional parameters for the ManagedClustersClient.NewListMeshRevisionProfilesPager +// method. +type ManagedClustersClientListMeshRevisionProfilesOptions struct { + // placeholder for future optional parameters +} + +// ManagedClustersClientListMeshUpgradeProfilesOptions contains the optional parameters for the ManagedClustersClient.NewListMeshUpgradeProfilesPager +// method. +type ManagedClustersClientListMeshUpgradeProfilesOptions struct { + // placeholder for future optional parameters +} + +// ManagedClustersClientListOptions contains the optional parameters for the ManagedClustersClient.NewListPager method. +type ManagedClustersClientListOptions struct { + // placeholder for future optional parameters +} + +// ManagedClustersClientListOutboundNetworkDependenciesEndpointsOptions contains the optional parameters for the ManagedClustersClient.NewListOutboundNetworkDependenciesEndpointsPager +// method. +type ManagedClustersClientListOutboundNetworkDependenciesEndpointsOptions struct { + // placeholder for future optional parameters +} + +// ManagedClustersClientListSafeguardsVersionsOptions contains the optional parameters for the ManagedClustersClient.NewListSafeguardsVersionsPager +// method. +type ManagedClustersClientListSafeguardsVersionsOptions struct { + // placeholder for future optional parameters +} + +// OperationStatusResultClientGetByAgentPoolOptions contains the optional parameters for the OperationStatusResultClient.GetByAgentPool +// method. +type OperationStatusResultClientGetByAgentPoolOptions struct { + // placeholder for future optional parameters +} + +// OperationStatusResultClientGetOptions contains the optional parameters for the OperationStatusResultClient.Get method. +type OperationStatusResultClientGetOptions struct { + // placeholder for future optional parameters +} + +// OperationStatusResultClientListOptions contains the optional parameters for the OperationStatusResultClient.NewListPager +// method. +type OperationStatusResultClientListOptions struct { + // placeholder for future optional parameters +} + +// OperationsClientListOptions contains the optional parameters for the OperationsClient.NewListPager method. +type OperationsClientListOptions struct { + // placeholder for future optional parameters +} + +// PrivateEndpointConnectionsClientBeginDeleteOptions contains the optional parameters for the PrivateEndpointConnectionsClient.BeginDelete +// method. +type PrivateEndpointConnectionsClientBeginDeleteOptions struct { + // Resumes the LRO from the provided token. + ResumeToken string +} + +// PrivateEndpointConnectionsClientGetOptions contains the optional parameters for the PrivateEndpointConnectionsClient.Get +// method. 
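The Format and ServerFqdn options above shape what ListClusterUserCredentials returns. A hedged sketch, assuming the ManagedClustersClient and Format constants (e.g. FormatExec) defined elsewhere in this patch, plus the azcore/to helper package for pointer construction:

func userKubeconfigs(ctx context.Context, mc *armcontainerservice.ManagedClustersClient) error {
	creds, err := mc.ListClusterUserCredentials(ctx, "<resource-group>", "<cluster-name>",
		&armcontainerservice.ManagedClustersClientListClusterUserCredentialsOptions{
			Format: to.Ptr(armcontainerservice.FormatExec), // exec-format kubeconfig; requires kubelogin
		})
	if err != nil {
		return err
	}
	for _, kc := range creds.Kubeconfigs {
		if kc.Name != nil {
			fmt.Println(*kc.Name)
		}
	}
	return nil
}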
+type PrivateEndpointConnectionsClientGetOptions struct { + // placeholder for future optional parameters +} + +// PrivateEndpointConnectionsClientListOptions contains the optional parameters for the PrivateEndpointConnectionsClient.List +// method. +type PrivateEndpointConnectionsClientListOptions struct { + // placeholder for future optional parameters +} + +// PrivateEndpointConnectionsClientUpdateOptions contains the optional parameters for the PrivateEndpointConnectionsClient.Update +// method. +type PrivateEndpointConnectionsClientUpdateOptions struct { + // placeholder for future optional parameters +} + +// PrivateLinkResourcesClientListOptions contains the optional parameters for the PrivateLinkResourcesClient.List method. +type PrivateLinkResourcesClientListOptions struct { + // placeholder for future optional parameters +} + +// ResolvePrivateLinkServiceIDClientPOSTOptions contains the optional parameters for the ResolvePrivateLinkServiceIDClient.POST +// method. +type ResolvePrivateLinkServiceIDClientPOSTOptions struct { + // placeholder for future optional parameters +} + +// SnapshotsClientCreateOrUpdateOptions contains the optional parameters for the SnapshotsClient.CreateOrUpdate method. +type SnapshotsClientCreateOrUpdateOptions struct { + // placeholder for future optional parameters +} + +// SnapshotsClientDeleteOptions contains the optional parameters for the SnapshotsClient.Delete method. +type SnapshotsClientDeleteOptions struct { + // placeholder for future optional parameters +} + +// SnapshotsClientGetOptions contains the optional parameters for the SnapshotsClient.Get method. +type SnapshotsClientGetOptions struct { + // placeholder for future optional parameters +} + +// SnapshotsClientListByResourceGroupOptions contains the optional parameters for the SnapshotsClient.NewListByResourceGroupPager +// method. +type SnapshotsClientListByResourceGroupOptions struct { + // placeholder for future optional parameters +} + +// SnapshotsClientListOptions contains the optional parameters for the SnapshotsClient.NewListPager method. +type SnapshotsClientListOptions struct { + // placeholder for future optional parameters +} + +// SnapshotsClientUpdateTagsOptions contains the optional parameters for the SnapshotsClient.UpdateTags method. +type SnapshotsClientUpdateTagsOptions struct { + // placeholder for future optional parameters +} + +// TrustedAccessRoleBindingsClientBeginCreateOrUpdateOptions contains the optional parameters for the TrustedAccessRoleBindingsClient.BeginCreateOrUpdate +// method. +type TrustedAccessRoleBindingsClientBeginCreateOrUpdateOptions struct { + // Resumes the LRO from the provided token. + ResumeToken string +} + +// TrustedAccessRoleBindingsClientBeginDeleteOptions contains the optional parameters for the TrustedAccessRoleBindingsClient.BeginDelete +// method. +type TrustedAccessRoleBindingsClientBeginDeleteOptions struct { + // Resumes the LRO from the provided token. + ResumeToken string +} + +// TrustedAccessRoleBindingsClientGetOptions contains the optional parameters for the TrustedAccessRoleBindingsClient.Get +// method. +type TrustedAccessRoleBindingsClientGetOptions struct { + // placeholder for future optional parameters +} + +// TrustedAccessRoleBindingsClientListOptions contains the optional parameters for the TrustedAccessRoleBindingsClient.NewListPager +// method. 
+type TrustedAccessRoleBindingsClientListOptions struct { + // placeholder for future optional parameters +} + +// TrustedAccessRolesClientListOptions contains the optional parameters for the TrustedAccessRolesClient.NewListPager method. +type TrustedAccessRolesClientListOptions struct { + // placeholder for future optional parameters +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/privateendpointconnections_client.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/privateendpointconnections_client.go new file mode 100644 index 000000000000..d20f4f69db3d --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/privateendpointconnections_client.go @@ -0,0 +1,334 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. See License.txt in the project root for license information. +// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +package armcontainerservice + +import ( + "context" + "errors" + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/arm" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" + "net/http" + "net/url" + "strings" +) + +// PrivateEndpointConnectionsClient contains the methods for the PrivateEndpointConnections group. +// Don't use this type directly, use NewPrivateEndpointConnectionsClient() instead. +type PrivateEndpointConnectionsClient struct { + internal *arm.Client + subscriptionID string +} + +// NewPrivateEndpointConnectionsClient creates a new instance of PrivateEndpointConnectionsClient with the specified values. +// - subscriptionID - The ID of the target subscription. The value must be an UUID. +// - credential - used to authorize requests. Usually a credential from azidentity. +// - options - pass nil to accept the default values. +func NewPrivateEndpointConnectionsClient(subscriptionID string, credential azcore.TokenCredential, options *arm.ClientOptions) (*PrivateEndpointConnectionsClient, error) { + cl, err := arm.NewClient(moduleName, moduleVersion, credential, options) + if err != nil { + return nil, err + } + client := &PrivateEndpointConnectionsClient{ + subscriptionID: subscriptionID, + internal: cl, + } + return client, nil +} + +// BeginDelete - Deletes a private endpoint connection. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - privateEndpointConnectionName - The name of the private endpoint connection. +// - options - PrivateEndpointConnectionsClientBeginDeleteOptions contains the optional parameters for the PrivateEndpointConnectionsClient.BeginDelete +// method. 
+func (client *PrivateEndpointConnectionsClient) BeginDelete(ctx context.Context, resourceGroupName string, resourceName string, privateEndpointConnectionName string, options *PrivateEndpointConnectionsClientBeginDeleteOptions) (*runtime.Poller[PrivateEndpointConnectionsClientDeleteResponse], error) { + if options == nil || options.ResumeToken == "" { + resp, err := client.deleteOperation(ctx, resourceGroupName, resourceName, privateEndpointConnectionName, options) + if err != nil { + return nil, err + } + poller, err := runtime.NewPoller(resp, client.internal.Pipeline(), &runtime.NewPollerOptions[PrivateEndpointConnectionsClientDeleteResponse]{ + Tracer: client.internal.Tracer(), + }) + return poller, err + } else { + return runtime.NewPollerFromResumeToken(options.ResumeToken, client.internal.Pipeline(), &runtime.NewPollerFromResumeTokenOptions[PrivateEndpointConnectionsClientDeleteResponse]{ + Tracer: client.internal.Tracer(), + }) + } +} + +// Delete - Deletes a private endpoint connection. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +func (client *PrivateEndpointConnectionsClient) deleteOperation(ctx context.Context, resourceGroupName string, resourceName string, privateEndpointConnectionName string, options *PrivateEndpointConnectionsClientBeginDeleteOptions) (*http.Response, error) { + var err error + const operationName = "PrivateEndpointConnectionsClient.BeginDelete" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.deleteCreateRequest(ctx, resourceGroupName, resourceName, privateEndpointConnectionName, options) + if err != nil { + return nil, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return nil, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK, http.StatusNoContent) { + err = runtime.NewResponseError(httpResp) + return nil, err + } + return httpResp, nil +} + +// deleteCreateRequest creates the Delete request. 
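BeginDelete returns a poller rather than blocking; PollUntilDone is the usual way to wait for the 200/204 terminal response checked above. A minimal sketch with placeholder names:

func deleteConnection(ctx context.Context, client *armcontainerservice.PrivateEndpointConnectionsClient) error {
	poller, err := client.BeginDelete(ctx, "<resource-group>", "<cluster-name>", "<pe-connection>", nil)
	if err != nil {
		return err
	}
	// Polls at the service-suggested interval until a terminal state is reached.
	_, err = poller.PollUntilDone(ctx, nil)
	return err
}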
+func (client *PrivateEndpointConnectionsClient) deleteCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, privateEndpointConnectionName string, options *PrivateEndpointConnectionsClientBeginDeleteOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/privateEndpointConnections/{privateEndpointConnectionName}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + if privateEndpointConnectionName == "" { + return nil, errors.New("parameter privateEndpointConnectionName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{privateEndpointConnectionName}", url.PathEscape(privateEndpointConnectionName)) + req, err := runtime.NewRequest(ctx, http.MethodDelete, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// Get - To learn more about private clusters, see: https://docs.microsoft.com/azure/aks/private-clusters +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - privateEndpointConnectionName - The name of the private endpoint connection. +// - options - PrivateEndpointConnectionsClientGetOptions contains the optional parameters for the PrivateEndpointConnectionsClient.Get +// method. +func (client *PrivateEndpointConnectionsClient) Get(ctx context.Context, resourceGroupName string, resourceName string, privateEndpointConnectionName string, options *PrivateEndpointConnectionsClientGetOptions) (PrivateEndpointConnectionsClientGetResponse, error) { + var err error + const operationName = "PrivateEndpointConnectionsClient.Get" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.getCreateRequest(ctx, resourceGroupName, resourceName, privateEndpointConnectionName, options) + if err != nil { + return PrivateEndpointConnectionsClientGetResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return PrivateEndpointConnectionsClientGetResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return PrivateEndpointConnectionsClientGetResponse{}, err + } + resp, err := client.getHandleResponse(httpResp) + return resp, err +} + +// getCreateRequest creates the Get request. 
+func (client *PrivateEndpointConnectionsClient) getCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, privateEndpointConnectionName string, options *PrivateEndpointConnectionsClientGetOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/privateEndpointConnections/{privateEndpointConnectionName}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + if privateEndpointConnectionName == "" { + return nil, errors.New("parameter privateEndpointConnectionName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{privateEndpointConnectionName}", url.PathEscape(privateEndpointConnectionName)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// getHandleResponse handles the Get response. +func (client *PrivateEndpointConnectionsClient) getHandleResponse(resp *http.Response) (PrivateEndpointConnectionsClientGetResponse, error) { + result := PrivateEndpointConnectionsClientGetResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.PrivateEndpointConnection); err != nil { + return PrivateEndpointConnectionsClientGetResponse{}, err + } + return result, nil +} + +// List - To learn more about private clusters, see: https://docs.microsoft.com/azure/aks/private-clusters +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - options - PrivateEndpointConnectionsClientListOptions contains the optional parameters for the PrivateEndpointConnectionsClient.List +// method. 
+func (client *PrivateEndpointConnectionsClient) List(ctx context.Context, resourceGroupName string, resourceName string, options *PrivateEndpointConnectionsClientListOptions) (PrivateEndpointConnectionsClientListResponse, error) { + var err error + const operationName = "PrivateEndpointConnectionsClient.List" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.listCreateRequest(ctx, resourceGroupName, resourceName, options) + if err != nil { + return PrivateEndpointConnectionsClientListResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return PrivateEndpointConnectionsClientListResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return PrivateEndpointConnectionsClientListResponse{}, err + } + resp, err := client.listHandleResponse(httpResp) + return resp, err +} + +// listCreateRequest creates the List request. +func (client *PrivateEndpointConnectionsClient) listCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, options *PrivateEndpointConnectionsClientListOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/privateEndpointConnections" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// listHandleResponse handles the List response. +func (client *PrivateEndpointConnectionsClient) listHandleResponse(resp *http.Response) (PrivateEndpointConnectionsClientListResponse, error) { + result := PrivateEndpointConnectionsClientListResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.PrivateEndpointConnectionListResult); err != nil { + return PrivateEndpointConnectionsClientListResponse{}, err + } + return result, nil +} + +// Update - Updates a private endpoint connection. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - privateEndpointConnectionName - The name of the private endpoint connection. +// - parameters - The updated private endpoint connection. +// - options - PrivateEndpointConnectionsClientUpdateOptions contains the optional parameters for the PrivateEndpointConnectionsClient.Update +// method. 
+func (client *PrivateEndpointConnectionsClient) Update(ctx context.Context, resourceGroupName string, resourceName string, privateEndpointConnectionName string, parameters PrivateEndpointConnection, options *PrivateEndpointConnectionsClientUpdateOptions) (PrivateEndpointConnectionsClientUpdateResponse, error) { + var err error + const operationName = "PrivateEndpointConnectionsClient.Update" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.updateCreateRequest(ctx, resourceGroupName, resourceName, privateEndpointConnectionName, parameters, options) + if err != nil { + return PrivateEndpointConnectionsClientUpdateResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return PrivateEndpointConnectionsClientUpdateResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK, http.StatusCreated) { + err = runtime.NewResponseError(httpResp) + return PrivateEndpointConnectionsClientUpdateResponse{}, err + } + resp, err := client.updateHandleResponse(httpResp) + return resp, err +} + +// updateCreateRequest creates the Update request. +func (client *PrivateEndpointConnectionsClient) updateCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, privateEndpointConnectionName string, parameters PrivateEndpointConnection, options *PrivateEndpointConnectionsClientUpdateOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/privateEndpointConnections/{privateEndpointConnectionName}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + if privateEndpointConnectionName == "" { + return nil, errors.New("parameter privateEndpointConnectionName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{privateEndpointConnectionName}", url.PathEscape(privateEndpointConnectionName)) + req, err := runtime.NewRequest(ctx, http.MethodPut, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + if err := runtime.MarshalAsJSON(req, parameters); err != nil { + return nil, err + } + return req, nil +} + +// updateHandleResponse handles the Update response. 
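Update PUTs the full connection object. A hedged sketch of approving a pending connection: the Properties/PrivateLinkServiceConnectionState field names and the ConnectionStatusApproved constant come from the models file added elsewhere in this patch, so treat them as assumptions here, along with the azcore/to helper:

func approveConnection(ctx context.Context, client *armcontainerservice.PrivateEndpointConnectionsClient) error {
	params := armcontainerservice.PrivateEndpointConnection{
		Properties: &armcontainerservice.PrivateEndpointConnectionProperties{
			PrivateLinkServiceConnectionState: &armcontainerservice.PrivateLinkServiceConnectionState{
				Status:      to.Ptr(armcontainerservice.ConnectionStatusApproved),
				Description: to.Ptr("approved by cluster admin"),
			},
		},
	}
	_, err := client.Update(ctx, "<resource-group>", "<cluster-name>", "<pe-connection>", params, nil)
	return err
}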
+func (client *PrivateEndpointConnectionsClient) updateHandleResponse(resp *http.Response) (PrivateEndpointConnectionsClientUpdateResponse, error) { + result := PrivateEndpointConnectionsClientUpdateResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.PrivateEndpointConnection); err != nil { + return PrivateEndpointConnectionsClientUpdateResponse{}, err + } + return result, nil +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/privatelinkresources_client.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/privatelinkresources_client.go new file mode 100644 index 000000000000..19222b1656a5 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/privatelinkresources_client.go @@ -0,0 +1,109 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. See License.txt in the project root for license information. +// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +package armcontainerservice + +import ( + "context" + "errors" + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/arm" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" + "net/http" + "net/url" + "strings" +) + +// PrivateLinkResourcesClient contains the methods for the PrivateLinkResources group. +// Don't use this type directly, use NewPrivateLinkResourcesClient() instead. +type PrivateLinkResourcesClient struct { + internal *arm.Client + subscriptionID string +} + +// NewPrivateLinkResourcesClient creates a new instance of PrivateLinkResourcesClient with the specified values. +// - subscriptionID - The ID of the target subscription. The value must be an UUID. +// - credential - used to authorize requests. Usually a credential from azidentity. +// - options - pass nil to accept the default values. +func NewPrivateLinkResourcesClient(subscriptionID string, credential azcore.TokenCredential, options *arm.ClientOptions) (*PrivateLinkResourcesClient, error) { + cl, err := arm.NewClient(moduleName, moduleVersion, credential, options) + if err != nil { + return nil, err + } + client := &PrivateLinkResourcesClient{ + subscriptionID: subscriptionID, + internal: cl, + } + return client, nil +} + +// List - To learn more about private clusters, see: https://docs.microsoft.com/azure/aks/private-clusters +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - options - PrivateLinkResourcesClientListOptions contains the optional parameters for the PrivateLinkResourcesClient.List +// method. 
+func (client *PrivateLinkResourcesClient) List(ctx context.Context, resourceGroupName string, resourceName string, options *PrivateLinkResourcesClientListOptions) (PrivateLinkResourcesClientListResponse, error) { + var err error + const operationName = "PrivateLinkResourcesClient.List" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.listCreateRequest(ctx, resourceGroupName, resourceName, options) + if err != nil { + return PrivateLinkResourcesClientListResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return PrivateLinkResourcesClientListResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return PrivateLinkResourcesClientListResponse{}, err + } + resp, err := client.listHandleResponse(httpResp) + return resp, err +} + +// listCreateRequest creates the List request. +func (client *PrivateLinkResourcesClient) listCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, options *PrivateLinkResourcesClientListOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/privateLinkResources" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// listHandleResponse handles the List response. +func (client *PrivateLinkResourcesClient) listHandleResponse(resp *http.Response) (PrivateLinkResourcesClientListResponse, error) { + result := PrivateLinkResourcesClientListResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.PrivateLinkResourcesListResult); err != nil { + return PrivateLinkResourcesClientListResponse{}, err + } + return result, nil +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/resolveprivatelinkserviceid_client.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/resolveprivatelinkserviceid_client.go new file mode 100644 index 000000000000..d30ebf05b25c --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/resolveprivatelinkserviceid_client.go @@ -0,0 +1,113 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. 
See License.txt in the project root for license information. +// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +package armcontainerservice + +import ( + "context" + "errors" + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/arm" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" + "net/http" + "net/url" + "strings" +) + +// ResolvePrivateLinkServiceIDClient contains the methods for the ResolvePrivateLinkServiceID group. +// Don't use this type directly, use NewResolvePrivateLinkServiceIDClient() instead. +type ResolvePrivateLinkServiceIDClient struct { + internal *arm.Client + subscriptionID string +} + +// NewResolvePrivateLinkServiceIDClient creates a new instance of ResolvePrivateLinkServiceIDClient with the specified values. +// - subscriptionID - The ID of the target subscription. The value must be an UUID. +// - credential - used to authorize requests. Usually a credential from azidentity. +// - options - pass nil to accept the default values. +func NewResolvePrivateLinkServiceIDClient(subscriptionID string, credential azcore.TokenCredential, options *arm.ClientOptions) (*ResolvePrivateLinkServiceIDClient, error) { + cl, err := arm.NewClient(moduleName, moduleVersion, credential, options) + if err != nil { + return nil, err + } + client := &ResolvePrivateLinkServiceIDClient{ + subscriptionID: subscriptionID, + internal: cl, + } + return client, nil +} + +// POST - Gets the private link service ID for the specified managed cluster. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - parameters - Parameters required in order to resolve a private link service ID. +// - options - ResolvePrivateLinkServiceIDClientPOSTOptions contains the optional parameters for the ResolvePrivateLinkServiceIDClient.POST +// method. +func (client *ResolvePrivateLinkServiceIDClient) POST(ctx context.Context, resourceGroupName string, resourceName string, parameters PrivateLinkResource, options *ResolvePrivateLinkServiceIDClientPOSTOptions) (ResolvePrivateLinkServiceIDClientPOSTResponse, error) { + var err error + const operationName = "ResolvePrivateLinkServiceIDClient.POST" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.postCreateRequest(ctx, resourceGroupName, resourceName, parameters, options) + if err != nil { + return ResolvePrivateLinkServiceIDClientPOSTResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return ResolvePrivateLinkServiceIDClientPOSTResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return ResolvePrivateLinkServiceIDClientPOSTResponse{}, err + } + resp, err := client.postHandleResponse(httpResp) + return resp, err +} + +// postCreateRequest creates the POST request. 
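POST resolves the private link service ID for a cluster. A hedged sketch: the GroupID field and the "management" group value are assumptions based on how AKS private link resources are commonly addressed, and to.Ptr is the azcore/to helper:

func resolveServiceID(ctx context.Context, client *armcontainerservice.ResolvePrivateLinkServiceIDClient) error {
	res, err := client.POST(ctx, "<resource-group>", "<cluster-name>",
		armcontainerservice.PrivateLinkResource{GroupID: to.Ptr("management")}, nil)
	if err != nil {
		return err
	}
	if res.PrivateLinkServiceID != nil {
		fmt.Println("private link service ID:", *res.PrivateLinkServiceID)
	}
	return nil
}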
+func (client *ResolvePrivateLinkServiceIDClient) postCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, parameters PrivateLinkResource, options *ResolvePrivateLinkServiceIDClientPOSTOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/resolvePrivateLinkServiceId" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodPost, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + if err := runtime.MarshalAsJSON(req, parameters); err != nil { + return nil, err + } + return req, nil +} + +// postHandleResponse handles the POST response. +func (client *ResolvePrivateLinkServiceIDClient) postHandleResponse(resp *http.Response) (ResolvePrivateLinkServiceIDClientPOSTResponse, error) { + result := ResolvePrivateLinkServiceIDClientPOSTResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.PrivateLinkResource); err != nil { + return ResolvePrivateLinkServiceIDClientPOSTResponse{}, err + } + return result, nil +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/responses.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/responses.go new file mode 100644 index 000000000000..8acdd298df60 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/responses.go @@ -0,0 +1,437 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. See License.txt in the project root for license information. +// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +package armcontainerservice + +// AgentPoolsClientAbortLatestOperationResponse contains the response from method AgentPoolsClient.BeginAbortLatestOperation. +type AgentPoolsClientAbortLatestOperationResponse struct { + // placeholder for future response values +} + +// AgentPoolsClientCreateOrUpdateResponse contains the response from method AgentPoolsClient.BeginCreateOrUpdate. +type AgentPoolsClientCreateOrUpdateResponse struct { + // Agent Pool. + AgentPool +} + +// AgentPoolsClientDeleteMachinesResponse contains the response from method AgentPoolsClient.BeginDeleteMachines. 
+type AgentPoolsClientDeleteMachinesResponse struct { + // placeholder for future response values +} + +// AgentPoolsClientDeleteResponse contains the response from method AgentPoolsClient.BeginDelete. +type AgentPoolsClientDeleteResponse struct { + // placeholder for future response values +} + +// AgentPoolsClientGetAvailableAgentPoolVersionsResponse contains the response from method AgentPoolsClient.GetAvailableAgentPoolVersions. +type AgentPoolsClientGetAvailableAgentPoolVersionsResponse struct { + // The list of available versions for an agent pool. + AgentPoolAvailableVersions +} + +// AgentPoolsClientGetResponse contains the response from method AgentPoolsClient.Get. +type AgentPoolsClientGetResponse struct { + // Agent Pool. + AgentPool +} + +// AgentPoolsClientGetUpgradeProfileResponse contains the response from method AgentPoolsClient.GetUpgradeProfile. +type AgentPoolsClientGetUpgradeProfileResponse struct { + // The list of available upgrades for an agent pool. + AgentPoolUpgradeProfile +} + +// AgentPoolsClientListResponse contains the response from method AgentPoolsClient.NewListPager. +type AgentPoolsClientListResponse struct { + // The response from the List Agent Pools operation. + AgentPoolListResult +} + +// AgentPoolsClientUpgradeNodeImageVersionResponse contains the response from method AgentPoolsClient.BeginUpgradeNodeImageVersion. +type AgentPoolsClientUpgradeNodeImageVersionResponse struct { + // Agent Pool. + AgentPool +} + +// MachinesClientGetResponse contains the response from method MachinesClient.Get. +type MachinesClientGetResponse struct { + // A machine. Contains details about the underlying virtual machine. A machine may be visible here but not in kubectl get + // nodes; if so it may be because the machine has not been registered with the Kubernetes API Server yet. + Machine +} + +// MachinesClientListResponse contains the response from method MachinesClient.NewListPager. +type MachinesClientListResponse struct { + // The response from the List Machines operation. + MachineListResult +} + +// MaintenanceConfigurationsClientCreateOrUpdateResponse contains the response from method MaintenanceConfigurationsClient.CreateOrUpdate. +type MaintenanceConfigurationsClientCreateOrUpdateResponse struct { + // See [planned maintenance](https://docs.microsoft.com/azure/aks/planned-maintenance) for more information about planned + // maintenance. + MaintenanceConfiguration +} + +// MaintenanceConfigurationsClientDeleteResponse contains the response from method MaintenanceConfigurationsClient.Delete. +type MaintenanceConfigurationsClientDeleteResponse struct { + // placeholder for future response values +} + +// MaintenanceConfigurationsClientGetResponse contains the response from method MaintenanceConfigurationsClient.Get. +type MaintenanceConfigurationsClientGetResponse struct { + // See [planned maintenance](https://docs.microsoft.com/azure/aks/planned-maintenance) for more information about planned + // maintenance. + MaintenanceConfiguration +} + +// MaintenanceConfigurationsClientListByManagedClusterResponse contains the response from method MaintenanceConfigurationsClient.NewListByManagedClusterPager. +type MaintenanceConfigurationsClientListByManagedClusterResponse struct { + // The response from the List maintenance configurations operation. + MaintenanceConfigurationListResult +} + +// ManagedClusterSnapshotsClientCreateOrUpdateResponse contains the response from method ManagedClusterSnapshotsClient.CreateOrUpdate. 
+type ManagedClusterSnapshotsClientCreateOrUpdateResponse struct { + // A managed cluster snapshot resource. + ManagedClusterSnapshot +} + +// ManagedClusterSnapshotsClientDeleteResponse contains the response from method ManagedClusterSnapshotsClient.Delete. +type ManagedClusterSnapshotsClientDeleteResponse struct { + // placeholder for future response values +} + +// ManagedClusterSnapshotsClientGetResponse contains the response from method ManagedClusterSnapshotsClient.Get. +type ManagedClusterSnapshotsClientGetResponse struct { + // A managed cluster snapshot resource. + ManagedClusterSnapshot +} + +// ManagedClusterSnapshotsClientListByResourceGroupResponse contains the response from method ManagedClusterSnapshotsClient.NewListByResourceGroupPager. +type ManagedClusterSnapshotsClientListByResourceGroupResponse struct { + // The response from the List Managed Cluster Snapshots operation. + ManagedClusterSnapshotListResult +} + +// ManagedClusterSnapshotsClientListResponse contains the response from method ManagedClusterSnapshotsClient.NewListPager. +type ManagedClusterSnapshotsClientListResponse struct { + // The response from the List Managed Cluster Snapshots operation. + ManagedClusterSnapshotListResult +} + +// ManagedClusterSnapshotsClientUpdateTagsResponse contains the response from method ManagedClusterSnapshotsClient.UpdateTags. +type ManagedClusterSnapshotsClientUpdateTagsResponse struct { + // A managed cluster snapshot resource. + ManagedClusterSnapshot +} + +// ManagedClustersClientAbortLatestOperationResponse contains the response from method ManagedClustersClient.BeginAbortLatestOperation. +type ManagedClustersClientAbortLatestOperationResponse struct { + // placeholder for future response values +} + +// ManagedClustersClientCreateOrUpdateResponse contains the response from method ManagedClustersClient.BeginCreateOrUpdate. +type ManagedClustersClientCreateOrUpdateResponse struct { + // Managed cluster. + ManagedCluster +} + +// ManagedClustersClientDeleteResponse contains the response from method ManagedClustersClient.BeginDelete. +type ManagedClustersClientDeleteResponse struct { + // placeholder for future response values +} + +// ManagedClustersClientGetAccessProfileResponse contains the response from method ManagedClustersClient.GetAccessProfile. +type ManagedClustersClientGetAccessProfileResponse struct { + // Managed cluster Access Profile. + ManagedClusterAccessProfile +} + +// ManagedClustersClientGetCommandResultResponse contains the response from method ManagedClustersClient.GetCommandResult. +type ManagedClustersClientGetCommandResultResponse struct { + // run command result. + RunCommandResult + + // Location contains the information returned from the Location header response. + Location *string +} + +// ManagedClustersClientGetGuardrailsVersionsResponse contains the response from method ManagedClustersClient.GetGuardrailsVersions. +type ManagedClustersClientGetGuardrailsVersionsResponse struct { + // Available Guardrails Version + GuardrailsAvailableVersion +} + +// ManagedClustersClientGetMeshRevisionProfileResponse contains the response from method ManagedClustersClient.GetMeshRevisionProfile. +type ManagedClustersClientGetMeshRevisionProfileResponse struct { + // Mesh revision profile for a mesh. + MeshRevisionProfile +} + +// ManagedClustersClientGetMeshUpgradeProfileResponse contains the response from method ManagedClustersClient.GetMeshUpgradeProfile. +type ManagedClustersClientGetMeshUpgradeProfileResponse struct { + // Upgrade profile for given mesh. 
+ MeshUpgradeProfile +} + +// ManagedClustersClientGetOSOptionsResponse contains the response from method ManagedClustersClient.GetOSOptions. +type ManagedClustersClientGetOSOptionsResponse struct { + // The OS option profile. + OSOptionProfile +} + +// ManagedClustersClientGetResponse contains the response from method ManagedClustersClient.Get. +type ManagedClustersClientGetResponse struct { + // Managed cluster. + ManagedCluster +} + +// ManagedClustersClientGetSafeguardsVersionsResponse contains the response from method ManagedClustersClient.GetSafeguardsVersions. +type ManagedClustersClientGetSafeguardsVersionsResponse struct { + // Available Safeguards Version + SafeguardsAvailableVersion +} + +// ManagedClustersClientGetUpgradeProfileResponse contains the response from method ManagedClustersClient.GetUpgradeProfile. +type ManagedClustersClientGetUpgradeProfileResponse struct { + // The list of available upgrades for compute pools. + ManagedClusterUpgradeProfile +} + +// ManagedClustersClientListByResourceGroupResponse contains the response from method ManagedClustersClient.NewListByResourceGroupPager. +type ManagedClustersClientListByResourceGroupResponse struct { + // The response from the List Managed Clusters operation. + ManagedClusterListResult +} + +// ManagedClustersClientListClusterAdminCredentialsResponse contains the response from method ManagedClustersClient.ListClusterAdminCredentials. +type ManagedClustersClientListClusterAdminCredentialsResponse struct { + // The list credential result response. + CredentialResults +} + +// ManagedClustersClientListClusterMonitoringUserCredentialsResponse contains the response from method ManagedClustersClient.ListClusterMonitoringUserCredentials. +type ManagedClustersClientListClusterMonitoringUserCredentialsResponse struct { + // The list credential result response. + CredentialResults +} + +// ManagedClustersClientListClusterUserCredentialsResponse contains the response from method ManagedClustersClient.ListClusterUserCredentials. +type ManagedClustersClientListClusterUserCredentialsResponse struct { + // The list credential result response. + CredentialResults +} + +// ManagedClustersClientListGuardrailsVersionsResponse contains the response from method ManagedClustersClient.NewListGuardrailsVersionsPager. +type ManagedClustersClientListGuardrailsVersionsResponse struct { + // Hold values properties, which is array of GuardrailsVersions + GuardrailsAvailableVersionsList +} + +// ManagedClustersClientListKubernetesVersionsResponse contains the response from method ManagedClustersClient.ListKubernetesVersions. +type ManagedClustersClientListKubernetesVersionsResponse struct { + // Hold values properties, which is array of KubernetesVersion + KubernetesVersionListResult +} + +// ManagedClustersClientListMeshRevisionProfilesResponse contains the response from method ManagedClustersClient.NewListMeshRevisionProfilesPager. +type ManagedClustersClientListMeshRevisionProfilesResponse struct { + // Holds an array of MeshRevisionsProfiles + MeshRevisionProfileList +} + +// ManagedClustersClientListMeshUpgradeProfilesResponse contains the response from method ManagedClustersClient.NewListMeshUpgradeProfilesPager. +type ManagedClustersClientListMeshUpgradeProfilesResponse struct { + // Holds an array of MeshUpgradeProfiles + MeshUpgradeProfileList +} + +// ManagedClustersClientListOutboundNetworkDependenciesEndpointsResponse contains the response from method ManagedClustersClient.NewListOutboundNetworkDependenciesEndpointsPager. 
+type ManagedClustersClientListOutboundNetworkDependenciesEndpointsResponse struct { + // Collection of OutboundEnvironmentEndpoint + OutboundEnvironmentEndpointCollection +} + +// ManagedClustersClientListResponse contains the response from method ManagedClustersClient.NewListPager. +type ManagedClustersClientListResponse struct { + // The response from the List Managed Clusters operation. + ManagedClusterListResult +} + +// ManagedClustersClientListSafeguardsVersionsResponse contains the response from method ManagedClustersClient.NewListSafeguardsVersionsPager. +type ManagedClustersClientListSafeguardsVersionsResponse struct { + // Hold values properties, which is array of SafeguardsVersions + SafeguardsAvailableVersionsList +} + +// ManagedClustersClientResetAADProfileResponse contains the response from method ManagedClustersClient.BeginResetAADProfile. +type ManagedClustersClientResetAADProfileResponse struct { + // placeholder for future response values +} + +// ManagedClustersClientResetServicePrincipalProfileResponse contains the response from method ManagedClustersClient.BeginResetServicePrincipalProfile. +type ManagedClustersClientResetServicePrincipalProfileResponse struct { + // placeholder for future response values +} + +// ManagedClustersClientRotateClusterCertificatesResponse contains the response from method ManagedClustersClient.BeginRotateClusterCertificates. +type ManagedClustersClientRotateClusterCertificatesResponse struct { + // placeholder for future response values +} + +// ManagedClustersClientRotateServiceAccountSigningKeysResponse contains the response from method ManagedClustersClient.BeginRotateServiceAccountSigningKeys. +type ManagedClustersClientRotateServiceAccountSigningKeysResponse struct { + // placeholder for future response values +} + +// ManagedClustersClientRunCommandResponse contains the response from method ManagedClustersClient.BeginRunCommand. +type ManagedClustersClientRunCommandResponse struct { + // run command result. + RunCommandResult +} + +// ManagedClustersClientStartResponse contains the response from method ManagedClustersClient.BeginStart. +type ManagedClustersClientStartResponse struct { + // placeholder for future response values +} + +// ManagedClustersClientStopResponse contains the response from method ManagedClustersClient.BeginStop. +type ManagedClustersClientStopResponse struct { + // placeholder for future response values +} + +// ManagedClustersClientUpdateTagsResponse contains the response from method ManagedClustersClient.BeginUpdateTags. +type ManagedClustersClientUpdateTagsResponse struct { + // Managed cluster. + ManagedCluster +} + +// OperationStatusResultClientGetByAgentPoolResponse contains the response from method OperationStatusResultClient.GetByAgentPool. +type OperationStatusResultClientGetByAgentPoolResponse struct { + // The current status of an async operation. + OperationStatusResult +} + +// OperationStatusResultClientGetResponse contains the response from method OperationStatusResultClient.Get. +type OperationStatusResultClientGetResponse struct { + // The current status of an async operation. + OperationStatusResult +} + +// OperationStatusResultClientListResponse contains the response from method OperationStatusResultClient.NewListPager. +type OperationStatusResultClientListResponse struct { + // The operations list. It contains an URL link to get the next set of results. + OperationStatusResultList +} + +// OperationsClientListResponse contains the response from method OperationsClient.NewListPager. 
+type OperationsClientListResponse struct { + // The List Operation response. + OperationListResult +} + +// PrivateEndpointConnectionsClientDeleteResponse contains the response from method PrivateEndpointConnectionsClient.BeginDelete. +type PrivateEndpointConnectionsClientDeleteResponse struct { + // placeholder for future response values +} + +// PrivateEndpointConnectionsClientGetResponse contains the response from method PrivateEndpointConnectionsClient.Get. +type PrivateEndpointConnectionsClientGetResponse struct { + // A private endpoint connection + PrivateEndpointConnection +} + +// PrivateEndpointConnectionsClientListResponse contains the response from method PrivateEndpointConnectionsClient.List. +type PrivateEndpointConnectionsClientListResponse struct { + // A list of private endpoint connections + PrivateEndpointConnectionListResult +} + +// PrivateEndpointConnectionsClientUpdateResponse contains the response from method PrivateEndpointConnectionsClient.Update. +type PrivateEndpointConnectionsClientUpdateResponse struct { + // A private endpoint connection + PrivateEndpointConnection +} + +// PrivateLinkResourcesClientListResponse contains the response from method PrivateLinkResourcesClient.List. +type PrivateLinkResourcesClientListResponse struct { + // A list of private link resources + PrivateLinkResourcesListResult +} + +// ResolvePrivateLinkServiceIDClientPOSTResponse contains the response from method ResolvePrivateLinkServiceIDClient.POST. +type ResolvePrivateLinkServiceIDClientPOSTResponse struct { + // A private link resource + PrivateLinkResource +} + +// SnapshotsClientCreateOrUpdateResponse contains the response from method SnapshotsClient.CreateOrUpdate. +type SnapshotsClientCreateOrUpdateResponse struct { + // A node pool snapshot resource. + Snapshot +} + +// SnapshotsClientDeleteResponse contains the response from method SnapshotsClient.Delete. +type SnapshotsClientDeleteResponse struct { + // placeholder for future response values +} + +// SnapshotsClientGetResponse contains the response from method SnapshotsClient.Get. +type SnapshotsClientGetResponse struct { + // A node pool snapshot resource. + Snapshot +} + +// SnapshotsClientListByResourceGroupResponse contains the response from method SnapshotsClient.NewListByResourceGroupPager. +type SnapshotsClientListByResourceGroupResponse struct { + // The response from the List Snapshots operation. + SnapshotListResult +} + +// SnapshotsClientListResponse contains the response from method SnapshotsClient.NewListPager. +type SnapshotsClientListResponse struct { + // The response from the List Snapshots operation. + SnapshotListResult +} + +// SnapshotsClientUpdateTagsResponse contains the response from method SnapshotsClient.UpdateTags. +type SnapshotsClientUpdateTagsResponse struct { + // A node pool snapshot resource. + Snapshot +} + +// TrustedAccessRoleBindingsClientCreateOrUpdateResponse contains the response from method TrustedAccessRoleBindingsClient.BeginCreateOrUpdate. +type TrustedAccessRoleBindingsClientCreateOrUpdateResponse struct { + // Defines binding between a resource and role + TrustedAccessRoleBinding +} + +// TrustedAccessRoleBindingsClientDeleteResponse contains the response from method TrustedAccessRoleBindingsClient.BeginDelete. +type TrustedAccessRoleBindingsClientDeleteResponse struct { + // placeholder for future response values +} + +// TrustedAccessRoleBindingsClientGetResponse contains the response from method TrustedAccessRoleBindingsClient.Get. 
+type TrustedAccessRoleBindingsClientGetResponse struct { + // Defines binding between a resource and role + TrustedAccessRoleBinding +} + +// TrustedAccessRoleBindingsClientListResponse contains the response from method TrustedAccessRoleBindingsClient.NewListPager. +type TrustedAccessRoleBindingsClientListResponse struct { + // List of trusted access role bindings + TrustedAccessRoleBindingListResult +} + +// TrustedAccessRolesClientListResponse contains the response from method TrustedAccessRolesClient.NewListPager. +type TrustedAccessRolesClientListResponse struct { + // List of trusted access roles + TrustedAccessRoleListResult +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/snapshots_client.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/snapshots_client.go new file mode 100644 index 000000000000..c0cb5e2f3514 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/snapshots_client.go @@ -0,0 +1,413 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. See License.txt in the project root for license information. +// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +package armcontainerservice + +import ( + "context" + "errors" + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/arm" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" + "net/http" + "net/url" + "strings" +) + +// SnapshotsClient contains the methods for the Snapshots group. +// Don't use this type directly, use NewSnapshotsClient() instead. +type SnapshotsClient struct { + internal *arm.Client + subscriptionID string +} + +// NewSnapshotsClient creates a new instance of SnapshotsClient with the specified values. +// - subscriptionID - The ID of the target subscription. The value must be an UUID. +// - credential - used to authorize requests. Usually a credential from azidentity. +// - options - pass nil to accept the default values. +func NewSnapshotsClient(subscriptionID string, credential azcore.TokenCredential, options *arm.ClientOptions) (*SnapshotsClient, error) { + cl, err := arm.NewClient(moduleName, moduleVersion, credential, options) + if err != nil { + return nil, err + } + client := &SnapshotsClient{ + subscriptionID: subscriptionID, + internal: cl, + } + return client, nil +} + +// CreateOrUpdate - Creates or updates a snapshot. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - parameters - The snapshot to create or update. +// - options - SnapshotsClientCreateOrUpdateOptions contains the optional parameters for the SnapshotsClient.CreateOrUpdate +// method. 
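+//
+// A minimal caller sketch, not part of the generated surface: it assumes a
+// context.Background() ctx, a credential from azidentity, the azcore/to
+// pointer helper, and placeholder resource names; Snapshot's Location field
+// is an assumption from the broader SDK model, not shown in this file.
+//
+//	cred, err := azidentity.NewDefaultAzureCredential(nil)
+//	if err != nil {
+//		// handle credential error
+//	}
+//	client, err := armcontainerservice.NewSnapshotsClient("<subscription-id>", cred, nil)
+//	if err != nil {
+//		// handle client construction error
+//	}
+//	resp, err := client.CreateOrUpdate(ctx, "<resource-group>", "<snapshot-name>",
+//		armcontainerservice.Snapshot{Location: to.Ptr("<region>")}, nil)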
+func (client *SnapshotsClient) CreateOrUpdate(ctx context.Context, resourceGroupName string, resourceName string, parameters Snapshot, options *SnapshotsClientCreateOrUpdateOptions) (SnapshotsClientCreateOrUpdateResponse, error) { + var err error + const operationName = "SnapshotsClient.CreateOrUpdate" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.createOrUpdateCreateRequest(ctx, resourceGroupName, resourceName, parameters, options) + if err != nil { + return SnapshotsClientCreateOrUpdateResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return SnapshotsClientCreateOrUpdateResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK, http.StatusCreated) { + err = runtime.NewResponseError(httpResp) + return SnapshotsClientCreateOrUpdateResponse{}, err + } + resp, err := client.createOrUpdateHandleResponse(httpResp) + return resp, err +} + +// createOrUpdateCreateRequest creates the CreateOrUpdate request. +func (client *SnapshotsClient) createOrUpdateCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, parameters Snapshot, options *SnapshotsClientCreateOrUpdateOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/snapshots/{resourceName}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodPut, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + if err := runtime.MarshalAsJSON(req, parameters); err != nil { + return nil, err + } + return req, nil +} + +// createOrUpdateHandleResponse handles the CreateOrUpdate response. +func (client *SnapshotsClient) createOrUpdateHandleResponse(resp *http.Response) (SnapshotsClientCreateOrUpdateResponse, error) { + result := SnapshotsClientCreateOrUpdateResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.Snapshot); err != nil { + return SnapshotsClientCreateOrUpdateResponse{}, err + } + return result, nil +} + +// Delete - Deletes a snapshot. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - options - SnapshotsClientDeleteOptions contains the optional parameters for the SnapshotsClient.Delete method. 
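+//
+// A minimal caller sketch (client and ctx as in the CreateOrUpdate example
+// above; names are placeholders). A 200 or 204 status is treated as success:
+//
+//	if _, err := client.Delete(ctx, "<resource-group>", "<snapshot-name>", nil); err != nil {
+//		// a failed call surfaces an *azcore.ResponseError
+//	}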
+func (client *SnapshotsClient) Delete(ctx context.Context, resourceGroupName string, resourceName string, options *SnapshotsClientDeleteOptions) (SnapshotsClientDeleteResponse, error) { + var err error + const operationName = "SnapshotsClient.Delete" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.deleteCreateRequest(ctx, resourceGroupName, resourceName, options) + if err != nil { + return SnapshotsClientDeleteResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return SnapshotsClientDeleteResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK, http.StatusNoContent) { + err = runtime.NewResponseError(httpResp) + return SnapshotsClientDeleteResponse{}, err + } + return SnapshotsClientDeleteResponse{}, nil +} + +// deleteCreateRequest creates the Delete request. +func (client *SnapshotsClient) deleteCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, options *SnapshotsClientDeleteOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/snapshots/{resourceName}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodDelete, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// Get - Gets a snapshot. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - options - SnapshotsClientGetOptions contains the optional parameters for the SnapshotsClient.Get method. 
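+//
+// A minimal caller sketch (client and ctx as above; names are placeholders).
+// The response embeds the Snapshot model directly:
+//
+//	resp, err := client.Get(ctx, "<resource-group>", "<snapshot-name>", nil)
+//	if err != nil {
+//		// a failed call surfaces an *azcore.ResponseError
+//	}
+//	snapshot := resp.Snapshot
+//	_ = snapshot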
+func (client *SnapshotsClient) Get(ctx context.Context, resourceGroupName string, resourceName string, options *SnapshotsClientGetOptions) (SnapshotsClientGetResponse, error) { + var err error + const operationName = "SnapshotsClient.Get" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.getCreateRequest(ctx, resourceGroupName, resourceName, options) + if err != nil { + return SnapshotsClientGetResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return SnapshotsClientGetResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return SnapshotsClientGetResponse{}, err + } + resp, err := client.getHandleResponse(httpResp) + return resp, err +} + +// getCreateRequest creates the Get request. +func (client *SnapshotsClient) getCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, options *SnapshotsClientGetOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/snapshots/{resourceName}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// getHandleResponse handles the Get response. +func (client *SnapshotsClient) getHandleResponse(resp *http.Response) (SnapshotsClientGetResponse, error) { + result := SnapshotsClientGetResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.Snapshot); err != nil { + return SnapshotsClientGetResponse{}, err + } + return result, nil +} + +// NewListPager - Gets a list of snapshots in the specified subscription. +// +// Generated from API version 2023-11-02-preview +// - options - SnapshotsClientListOptions contains the optional parameters for the SnapshotsClient.NewListPager method. 
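+//
+// A minimal paging sketch (client and ctx as above). The fetcher runs on each
+// NextPage call; More reports whether a further page exists:
+//
+//	pager := client.NewListPager(nil)
+//	for pager.More() {
+//		page, err := pager.NextPage(ctx)
+//		if err != nil {
+//			break // or handle the error
+//		}
+//		for _, snap := range page.Value {
+//			_ = snap // each entry is assumed to be a *Snapshot from SnapshotListResult
+//		}
+//	}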
+func (client *SnapshotsClient) NewListPager(options *SnapshotsClientListOptions) *runtime.Pager[SnapshotsClientListResponse] { + return runtime.NewPager(runtime.PagingHandler[SnapshotsClientListResponse]{ + More: func(page SnapshotsClientListResponse) bool { + return page.NextLink != nil && len(*page.NextLink) > 0 + }, + Fetcher: func(ctx context.Context, page *SnapshotsClientListResponse) (SnapshotsClientListResponse, error) { + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, "SnapshotsClient.NewListPager") + nextLink := "" + if page != nil { + nextLink = *page.NextLink + } + resp, err := runtime.FetcherForNextLink(ctx, client.internal.Pipeline(), nextLink, func(ctx context.Context) (*policy.Request, error) { + return client.listCreateRequest(ctx, options) + }, nil) + if err != nil { + return SnapshotsClientListResponse{}, err + } + return client.listHandleResponse(resp) + }, + Tracer: client.internal.Tracer(), + }) +} + +// listCreateRequest creates the List request. +func (client *SnapshotsClient) listCreateRequest(ctx context.Context, options *SnapshotsClientListOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/providers/Microsoft.ContainerService/snapshots" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// listHandleResponse handles the List response. +func (client *SnapshotsClient) listHandleResponse(resp *http.Response) (SnapshotsClientListResponse, error) { + result := SnapshotsClientListResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.SnapshotListResult); err != nil { + return SnapshotsClientListResponse{}, err + } + return result, nil +} + +// NewListByResourceGroupPager - Lists snapshots in the specified subscription and resource group. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - options - SnapshotsClientListByResourceGroupOptions contains the optional parameters for the SnapshotsClient.NewListByResourceGroupPager +// method. 
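+//
+// Usage mirrors the NewListPager sketch above, scoped to a single resource
+// group (placeholder name):
+//
+//	pager := client.NewListByResourceGroupPager("<resource-group>", nil)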
+func (client *SnapshotsClient) NewListByResourceGroupPager(resourceGroupName string, options *SnapshotsClientListByResourceGroupOptions) *runtime.Pager[SnapshotsClientListByResourceGroupResponse] { + return runtime.NewPager(runtime.PagingHandler[SnapshotsClientListByResourceGroupResponse]{ + More: func(page SnapshotsClientListByResourceGroupResponse) bool { + return page.NextLink != nil && len(*page.NextLink) > 0 + }, + Fetcher: func(ctx context.Context, page *SnapshotsClientListByResourceGroupResponse) (SnapshotsClientListByResourceGroupResponse, error) { + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, "SnapshotsClient.NewListByResourceGroupPager") + nextLink := "" + if page != nil { + nextLink = *page.NextLink + } + resp, err := runtime.FetcherForNextLink(ctx, client.internal.Pipeline(), nextLink, func(ctx context.Context) (*policy.Request, error) { + return client.listByResourceGroupCreateRequest(ctx, resourceGroupName, options) + }, nil) + if err != nil { + return SnapshotsClientListByResourceGroupResponse{}, err + } + return client.listByResourceGroupHandleResponse(resp) + }, + Tracer: client.internal.Tracer(), + }) +} + +// listByResourceGroupCreateRequest creates the ListByResourceGroup request. +func (client *SnapshotsClient) listByResourceGroupCreateRequest(ctx context.Context, resourceGroupName string, options *SnapshotsClientListByResourceGroupOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/snapshots" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// listByResourceGroupHandleResponse handles the ListByResourceGroup response. +func (client *SnapshotsClient) listByResourceGroupHandleResponse(resp *http.Response) (SnapshotsClientListByResourceGroupResponse, error) { + result := SnapshotsClientListByResourceGroupResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.SnapshotListResult); err != nil { + return SnapshotsClientListByResourceGroupResponse{}, err + } + return result, nil +} + +// UpdateTags - Updates tags on a snapshot. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - parameters - Parameters supplied to the Update snapshot Tags operation. +// - options - SnapshotsClientUpdateTagsOptions contains the optional parameters for the SnapshotsClient.UpdateTags method. 
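+//
+// A minimal caller sketch (client and ctx as above; TagsObject's Tags map and
+// the azcore/to pointer helper are assumptions from the broader SDK):
+//
+//	_, err := client.UpdateTags(ctx, "<resource-group>", "<snapshot-name>",
+//		armcontainerservice.TagsObject{Tags: map[string]*string{"env": to.Ptr("dev")}}, nil)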
+func (client *SnapshotsClient) UpdateTags(ctx context.Context, resourceGroupName string, resourceName string, parameters TagsObject, options *SnapshotsClientUpdateTagsOptions) (SnapshotsClientUpdateTagsResponse, error) { + var err error + const operationName = "SnapshotsClient.UpdateTags" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.updateTagsCreateRequest(ctx, resourceGroupName, resourceName, parameters, options) + if err != nil { + return SnapshotsClientUpdateTagsResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return SnapshotsClientUpdateTagsResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return SnapshotsClientUpdateTagsResponse{}, err + } + resp, err := client.updateTagsHandleResponse(httpResp) + return resp, err +} + +// updateTagsCreateRequest creates the UpdateTags request. +func (client *SnapshotsClient) updateTagsCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, parameters TagsObject, options *SnapshotsClientUpdateTagsOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/snapshots/{resourceName}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodPatch, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + if err := runtime.MarshalAsJSON(req, parameters); err != nil { + return nil, err + } + return req, nil +} + +// updateTagsHandleResponse handles the UpdateTags response. +func (client *SnapshotsClient) updateTagsHandleResponse(resp *http.Response) (SnapshotsClientUpdateTagsResponse, error) { + result := SnapshotsClientUpdateTagsResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.Snapshot); err != nil { + return SnapshotsClientUpdateTagsResponse{}, err + } + return result, nil +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/time_rfc3339.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/time_rfc3339.go new file mode 100644 index 000000000000..c671b41ed1c2 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/time_rfc3339.go @@ -0,0 +1,110 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. 
See License.txt in the project root for license information. +// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +package armcontainerservice + +import ( + "encoding/json" + "fmt" + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "reflect" + "regexp" + "strings" + "time" +) + +// Azure reports time in UTC but it doesn't include the 'Z' time zone suffix in some cases. +var tzOffsetRegex = regexp.MustCompile(`(?:Z|z|\+|-)(?:\d+:\d+)*"*$`) + +const ( + utcDateTime = "2006-01-02T15:04:05.999999999" + utcDateTimeJSON = `"` + utcDateTime + `"` + utcDateTimeNoT = "2006-01-02 15:04:05.999999999" + utcDateTimeJSONNoT = `"` + utcDateTimeNoT + `"` + dateTimeNoT = `2006-01-02 15:04:05.999999999Z07:00` + dateTimeJSON = `"` + time.RFC3339Nano + `"` + dateTimeJSONNoT = `"` + dateTimeNoT + `"` +) + +type dateTimeRFC3339 time.Time + +func (t dateTimeRFC3339) MarshalJSON() ([]byte, error) { + tt := time.Time(t) + return tt.MarshalJSON() +} + +func (t dateTimeRFC3339) MarshalText() ([]byte, error) { + tt := time.Time(t) + return tt.MarshalText() +} + +func (t *dateTimeRFC3339) UnmarshalJSON(data []byte) error { + tzOffset := tzOffsetRegex.Match(data) + hasT := strings.Contains(string(data), "T") || strings.Contains(string(data), "t") + var layout string + if tzOffset && hasT { + layout = dateTimeJSON + } else if tzOffset { + layout = dateTimeJSONNoT + } else if hasT { + layout = utcDateTimeJSON + } else { + layout = utcDateTimeJSONNoT + } + return t.Parse(layout, string(data)) +} + +func (t *dateTimeRFC3339) UnmarshalText(data []byte) error { + tzOffset := tzOffsetRegex.Match(data) + hasT := strings.Contains(string(data), "T") || strings.Contains(string(data), "t") + var layout string + if tzOffset && hasT { + layout = time.RFC3339Nano + } else if tzOffset { + layout = dateTimeNoT + } else if hasT { + layout = utcDateTime + } else { + layout = utcDateTimeNoT + } + return t.Parse(layout, string(data)) +} + +func (t *dateTimeRFC3339) Parse(layout, value string) error { + p, err := time.Parse(layout, strings.ToUpper(value)) + *t = dateTimeRFC3339(p) + return err +} + +func (t dateTimeRFC3339) String() string { + return time.Time(t).Format(time.RFC3339Nano) +} + +func populateDateTimeRFC3339(m map[string]any, k string, t *time.Time) { + if t == nil { + return + } else if azcore.IsNullValue(t) { + m[k] = nil + return + } else if reflect.ValueOf(t).IsNil() { + return + } + m[k] = (*dateTimeRFC3339)(t) +} + +func unpopulateDateTimeRFC3339(data json.RawMessage, fn string, t **time.Time) error { + if data == nil || string(data) == "null" { + return nil + } + var aux dateTimeRFC3339 + if err := json.Unmarshal(data, &aux); err != nil { + return fmt.Errorf("struct field %s: %v", fn, err) + } + *t = (*time.Time)(&aux) + return nil +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/trustedaccessrolebindings_client.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/trustedaccessrolebindings_client.go new file mode 100644 index 000000000000..13e18a3767ba --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/trustedaccessrolebindings_client.go @@ -0,0 +1,345 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. 
+// Licensed under the MIT License. See License.txt in the project root for license information. +// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +package armcontainerservice + +import ( + "context" + "errors" + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/arm" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" + "net/http" + "net/url" + "strings" +) + +// TrustedAccessRoleBindingsClient contains the methods for the TrustedAccessRoleBindings group. +// Don't use this type directly, use NewTrustedAccessRoleBindingsClient() instead. +type TrustedAccessRoleBindingsClient struct { + internal *arm.Client + subscriptionID string +} + +// NewTrustedAccessRoleBindingsClient creates a new instance of TrustedAccessRoleBindingsClient with the specified values. +// - subscriptionID - The ID of the target subscription. The value must be an UUID. +// - credential - used to authorize requests. Usually a credential from azidentity. +// - options - pass nil to accept the default values. +func NewTrustedAccessRoleBindingsClient(subscriptionID string, credential azcore.TokenCredential, options *arm.ClientOptions) (*TrustedAccessRoleBindingsClient, error) { + cl, err := arm.NewClient(moduleName, moduleVersion, credential, options) + if err != nil { + return nil, err + } + client := &TrustedAccessRoleBindingsClient{ + subscriptionID: subscriptionID, + internal: cl, + } + return client, nil +} + +// BeginCreateOrUpdate - Create or update a trusted access role binding +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - trustedAccessRoleBindingName - The name of trusted access role binding. +// - trustedAccessRoleBinding - A trusted access role binding +// - options - TrustedAccessRoleBindingsClientBeginCreateOrUpdateOptions contains the optional parameters for the TrustedAccessRoleBindingsClient.BeginCreateOrUpdate +// method. +func (client *TrustedAccessRoleBindingsClient) BeginCreateOrUpdate(ctx context.Context, resourceGroupName string, resourceName string, trustedAccessRoleBindingName string, trustedAccessRoleBinding TrustedAccessRoleBinding, options *TrustedAccessRoleBindingsClientBeginCreateOrUpdateOptions) (*runtime.Poller[TrustedAccessRoleBindingsClientCreateOrUpdateResponse], error) { + if options == nil || options.ResumeToken == "" { + resp, err := client.createOrUpdate(ctx, resourceGroupName, resourceName, trustedAccessRoleBindingName, trustedAccessRoleBinding, options) + if err != nil { + return nil, err + } + poller, err := runtime.NewPoller(resp, client.internal.Pipeline(), &runtime.NewPollerOptions[TrustedAccessRoleBindingsClientCreateOrUpdateResponse]{ + Tracer: client.internal.Tracer(), + }) + return poller, err + } else { + return runtime.NewPollerFromResumeToken(options.ResumeToken, client.internal.Pipeline(), &runtime.NewPollerFromResumeTokenOptions[TrustedAccessRoleBindingsClientCreateOrUpdateResponse]{ + Tracer: client.internal.Tracer(), + }) + } +} + +// CreateOrUpdate - Create or update a trusted access role binding +// If the operation fails it returns an *azcore.ResponseError type. 
+// +// Generated from API version 2023-11-02-preview +func (client *TrustedAccessRoleBindingsClient) createOrUpdate(ctx context.Context, resourceGroupName string, resourceName string, trustedAccessRoleBindingName string, trustedAccessRoleBinding TrustedAccessRoleBinding, options *TrustedAccessRoleBindingsClientBeginCreateOrUpdateOptions) (*http.Response, error) { + var err error + const operationName = "TrustedAccessRoleBindingsClient.BeginCreateOrUpdate" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.createOrUpdateCreateRequest(ctx, resourceGroupName, resourceName, trustedAccessRoleBindingName, trustedAccessRoleBinding, options) + if err != nil { + return nil, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return nil, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK, http.StatusCreated) { + err = runtime.NewResponseError(httpResp) + return nil, err + } + return httpResp, nil +} + +// createOrUpdateCreateRequest creates the CreateOrUpdate request. +func (client *TrustedAccessRoleBindingsClient) createOrUpdateCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, trustedAccessRoleBindingName string, trustedAccessRoleBinding TrustedAccessRoleBinding, options *TrustedAccessRoleBindingsClientBeginCreateOrUpdateOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/trustedAccessRoleBindings/{trustedAccessRoleBindingName}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + if trustedAccessRoleBindingName == "" { + return nil, errors.New("parameter trustedAccessRoleBindingName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{trustedAccessRoleBindingName}", url.PathEscape(trustedAccessRoleBindingName)) + req, err := runtime.NewRequest(ctx, http.MethodPut, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + if err := runtime.MarshalAsJSON(req, trustedAccessRoleBinding); err != nil { + return nil, err + } + return req, nil +} + +// BeginDelete - Delete a trusted access role binding. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - trustedAccessRoleBindingName - The name of trusted access role binding. 
+// - options - TrustedAccessRoleBindingsClientBeginDeleteOptions contains the optional parameters for the TrustedAccessRoleBindingsClient.BeginDelete +// method. +func (client *TrustedAccessRoleBindingsClient) BeginDelete(ctx context.Context, resourceGroupName string, resourceName string, trustedAccessRoleBindingName string, options *TrustedAccessRoleBindingsClientBeginDeleteOptions) (*runtime.Poller[TrustedAccessRoleBindingsClientDeleteResponse], error) { + if options == nil || options.ResumeToken == "" { + resp, err := client.deleteOperation(ctx, resourceGroupName, resourceName, trustedAccessRoleBindingName, options) + if err != nil { + return nil, err + } + poller, err := runtime.NewPoller(resp, client.internal.Pipeline(), &runtime.NewPollerOptions[TrustedAccessRoleBindingsClientDeleteResponse]{ + Tracer: client.internal.Tracer(), + }) + return poller, err + } else { + return runtime.NewPollerFromResumeToken(options.ResumeToken, client.internal.Pipeline(), &runtime.NewPollerFromResumeTokenOptions[TrustedAccessRoleBindingsClientDeleteResponse]{ + Tracer: client.internal.Tracer(), + }) + } +} + +// Delete - Delete a trusted access role binding. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +func (client *TrustedAccessRoleBindingsClient) deleteOperation(ctx context.Context, resourceGroupName string, resourceName string, trustedAccessRoleBindingName string, options *TrustedAccessRoleBindingsClientBeginDeleteOptions) (*http.Response, error) { + var err error + const operationName = "TrustedAccessRoleBindingsClient.BeginDelete" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.deleteCreateRequest(ctx, resourceGroupName, resourceName, trustedAccessRoleBindingName, options) + if err != nil { + return nil, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return nil, err + } + if !runtime.HasStatusCode(httpResp, http.StatusAccepted, http.StatusNoContent) { + err = runtime.NewResponseError(httpResp) + return nil, err + } + return httpResp, nil +} + +// deleteCreateRequest creates the Delete request. 
+func (client *TrustedAccessRoleBindingsClient) deleteCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, trustedAccessRoleBindingName string, options *TrustedAccessRoleBindingsClientBeginDeleteOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/trustedAccessRoleBindings/{trustedAccessRoleBindingName}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + if trustedAccessRoleBindingName == "" { + return nil, errors.New("parameter trustedAccessRoleBindingName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{trustedAccessRoleBindingName}", url.PathEscape(trustedAccessRoleBindingName)) + req, err := runtime.NewRequest(ctx, http.MethodDelete, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// Get - Get a trusted access role binding. +// If the operation fails it returns an *azcore.ResponseError type. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - trustedAccessRoleBindingName - The name of trusted access role binding. +// - options - TrustedAccessRoleBindingsClientGetOptions contains the optional parameters for the TrustedAccessRoleBindingsClient.Get +// method. +func (client *TrustedAccessRoleBindingsClient) Get(ctx context.Context, resourceGroupName string, resourceName string, trustedAccessRoleBindingName string, options *TrustedAccessRoleBindingsClientGetOptions) (TrustedAccessRoleBindingsClientGetResponse, error) { + var err error + const operationName = "TrustedAccessRoleBindingsClient.Get" + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, operationName) + ctx, endSpan := runtime.StartSpan(ctx, operationName, client.internal.Tracer(), nil) + defer func() { endSpan(err) }() + req, err := client.getCreateRequest(ctx, resourceGroupName, resourceName, trustedAccessRoleBindingName, options) + if err != nil { + return TrustedAccessRoleBindingsClientGetResponse{}, err + } + httpResp, err := client.internal.Pipeline().Do(req) + if err != nil { + return TrustedAccessRoleBindingsClientGetResponse{}, err + } + if !runtime.HasStatusCode(httpResp, http.StatusOK) { + err = runtime.NewResponseError(httpResp) + return TrustedAccessRoleBindingsClientGetResponse{}, err + } + resp, err := client.getHandleResponse(httpResp) + return resp, err +} + +// getCreateRequest creates the Get request. 
+func (client *TrustedAccessRoleBindingsClient) getCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, trustedAccessRoleBindingName string, options *TrustedAccessRoleBindingsClientGetOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/trustedAccessRoleBindings/{trustedAccessRoleBindingName}" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + if trustedAccessRoleBindingName == "" { + return nil, errors.New("parameter trustedAccessRoleBindingName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{trustedAccessRoleBindingName}", url.PathEscape(trustedAccessRoleBindingName)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// getHandleResponse handles the Get response. +func (client *TrustedAccessRoleBindingsClient) getHandleResponse(resp *http.Response) (TrustedAccessRoleBindingsClientGetResponse, error) { + result := TrustedAccessRoleBindingsClientGetResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.TrustedAccessRoleBinding); err != nil { + return TrustedAccessRoleBindingsClientGetResponse{}, err + } + return result, nil +} + +// NewListPager - List trusted access role bindings. +// +// Generated from API version 2023-11-02-preview +// - resourceGroupName - The name of the resource group. The name is case insensitive. +// - resourceName - The name of the managed cluster resource. +// - options - TrustedAccessRoleBindingsClientListOptions contains the optional parameters for the TrustedAccessRoleBindingsClient.NewListPager +// method. 
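+//
+// A minimal paging sketch, assuming a client from NewTrustedAccessRoleBindingsClient,
+// a ctx, and placeholder names:
+//
+//	pager := client.NewListPager("<resource-group>", "<managed-cluster>", nil)
+//	for pager.More() {
+//		page, err := pager.NextPage(ctx)
+//		if err != nil {
+//			break // or handle the error
+//		}
+//		for _, binding := range page.Value {
+//			_ = binding // each entry is assumed to be a *TrustedAccessRoleBinding
+//		}
+//	}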
+func (client *TrustedAccessRoleBindingsClient) NewListPager(resourceGroupName string, resourceName string, options *TrustedAccessRoleBindingsClientListOptions) *runtime.Pager[TrustedAccessRoleBindingsClientListResponse] { + return runtime.NewPager(runtime.PagingHandler[TrustedAccessRoleBindingsClientListResponse]{ + More: func(page TrustedAccessRoleBindingsClientListResponse) bool { + return page.NextLink != nil && len(*page.NextLink) > 0 + }, + Fetcher: func(ctx context.Context, page *TrustedAccessRoleBindingsClientListResponse) (TrustedAccessRoleBindingsClientListResponse, error) { + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, "TrustedAccessRoleBindingsClient.NewListPager") + nextLink := "" + if page != nil { + nextLink = *page.NextLink + } + resp, err := runtime.FetcherForNextLink(ctx, client.internal.Pipeline(), nextLink, func(ctx context.Context) (*policy.Request, error) { + return client.listCreateRequest(ctx, resourceGroupName, resourceName, options) + }, nil) + if err != nil { + return TrustedAccessRoleBindingsClientListResponse{}, err + } + return client.listHandleResponse(resp) + }, + Tracer: client.internal.Tracer(), + }) +} + +// listCreateRequest creates the List request. +func (client *TrustedAccessRoleBindingsClient) listCreateRequest(ctx context.Context, resourceGroupName string, resourceName string, options *TrustedAccessRoleBindingsClientListOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.ContainerService/managedClusters/{resourceName}/trustedAccessRoleBindings" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if resourceGroupName == "" { + return nil, errors.New("parameter resourceGroupName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceGroupName}", url.PathEscape(resourceGroupName)) + if resourceName == "" { + return nil, errors.New("parameter resourceName cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{resourceName}", url.PathEscape(resourceName)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// listHandleResponse handles the List response. 
+func (client *TrustedAccessRoleBindingsClient) listHandleResponse(resp *http.Response) (TrustedAccessRoleBindingsClientListResponse, error) { + result := TrustedAccessRoleBindingsClientListResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.TrustedAccessRoleBindingListResult); err != nil { + return TrustedAccessRoleBindingsClientListResponse{}, err + } + return result, nil +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/trustedaccessroles_client.go b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/trustedaccessroles_client.go new file mode 100644 index 000000000000..b50cc3b2d22c --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4/trustedaccessroles_client.go @@ -0,0 +1,104 @@ +//go:build go1.18 +// +build go1.18 + +// Copyright (c) Microsoft Corporation. All rights reserved. +// Licensed under the MIT License. See License.txt in the project root for license information. +// Code generated by Microsoft (R) AutoRest Code Generator. DO NOT EDIT. +// Changes may cause incorrect behavior and will be lost if the code is regenerated. + +package armcontainerservice + +import ( + "context" + "errors" + "github.com/Azure/azure-sdk-for-go/sdk/azcore" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/arm" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/policy" + "github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime" + "net/http" + "net/url" + "strings" +) + +// TrustedAccessRolesClient contains the methods for the TrustedAccessRoles group. +// Don't use this type directly, use NewTrustedAccessRolesClient() instead. +type TrustedAccessRolesClient struct { + internal *arm.Client + subscriptionID string +} + +// NewTrustedAccessRolesClient creates a new instance of TrustedAccessRolesClient with the specified values. +// - subscriptionID - The ID of the target subscription. The value must be an UUID. +// - credential - used to authorize requests. Usually a credential from azidentity. +// - options - pass nil to accept the default values. +func NewTrustedAccessRolesClient(subscriptionID string, credential azcore.TokenCredential, options *arm.ClientOptions) (*TrustedAccessRolesClient, error) { + cl, err := arm.NewClient(moduleName, moduleVersion, credential, options) + if err != nil { + return nil, err + } + client := &TrustedAccessRolesClient{ + subscriptionID: subscriptionID, + internal: cl, + } + return client, nil +} + +// NewListPager - List supported trusted access roles. +// +// Generated from API version 2023-11-02-preview +// - location - The name of the Azure region. +// - options - TrustedAccessRolesClientListOptions contains the optional parameters for the TrustedAccessRolesClient.NewListPager +// method. 
+func (client *TrustedAccessRolesClient) NewListPager(location string, options *TrustedAccessRolesClientListOptions) *runtime.Pager[TrustedAccessRolesClientListResponse] { + return runtime.NewPager(runtime.PagingHandler[TrustedAccessRolesClientListResponse]{ + More: func(page TrustedAccessRolesClientListResponse) bool { + return page.NextLink != nil && len(*page.NextLink) > 0 + }, + Fetcher: func(ctx context.Context, page *TrustedAccessRolesClientListResponse) (TrustedAccessRolesClientListResponse, error) { + ctx = context.WithValue(ctx, runtime.CtxAPINameKey{}, "TrustedAccessRolesClient.NewListPager") + nextLink := "" + if page != nil { + nextLink = *page.NextLink + } + resp, err := runtime.FetcherForNextLink(ctx, client.internal.Pipeline(), nextLink, func(ctx context.Context) (*policy.Request, error) { + return client.listCreateRequest(ctx, location, options) + }, nil) + if err != nil { + return TrustedAccessRolesClientListResponse{}, err + } + return client.listHandleResponse(resp) + }, + Tracer: client.internal.Tracer(), + }) +} + +// listCreateRequest creates the List request. +func (client *TrustedAccessRolesClient) listCreateRequest(ctx context.Context, location string, options *TrustedAccessRolesClientListOptions) (*policy.Request, error) { + urlPath := "/subscriptions/{subscriptionId}/providers/Microsoft.ContainerService/locations/{location}/trustedAccessRoles" + if client.subscriptionID == "" { + return nil, errors.New("parameter client.subscriptionID cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{subscriptionId}", url.PathEscape(client.subscriptionID)) + if location == "" { + return nil, errors.New("parameter location cannot be empty") + } + urlPath = strings.ReplaceAll(urlPath, "{location}", url.PathEscape(location)) + req, err := runtime.NewRequest(ctx, http.MethodGet, runtime.JoinPaths(client.internal.Endpoint(), urlPath)) + if err != nil { + return nil, err + } + reqQP := req.Raw().URL.Query() + reqQP.Set("api-version", "2023-11-02-preview") + req.Raw().URL.RawQuery = reqQP.Encode() + req.Raw().Header["Accept"] = []string{"application/json"} + return req, nil +} + +// listHandleResponse handles the List response. +func (client *TrustedAccessRolesClient) listHandleResponse(resp *http.Response) (TrustedAccessRolesClientListResponse, error) { + result := TrustedAccessRolesClientListResponse{} + if err := runtime.UnmarshalAsJSON(resp, &result.TrustedAccessRoleListResult); err != nil { + return TrustedAccessRolesClientListResponse{}, err + } + return result, nil +} diff --git a/cluster-autoscaler/vendor/github.com/Azure/go-armbalancer/CODE_OF_CONDUCT.md b/cluster-autoscaler/vendor/github.com/Azure/go-armbalancer/CODE_OF_CONDUCT.md new file mode 100644 index 000000000000..f9ba8cf65f3e --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/go-armbalancer/CODE_OF_CONDUCT.md @@ -0,0 +1,9 @@ +# Microsoft Open Source Code of Conduct + +This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). 
+ +Resources: + +- [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/) +- [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) +- Contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with questions or concerns diff --git a/cluster-autoscaler/vendor/github.com/Azure/go-armbalancer/LICENSE b/cluster-autoscaler/vendor/github.com/Azure/go-armbalancer/LICENSE new file mode 100644 index 000000000000..9e841e7a26e4 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/go-armbalancer/LICENSE @@ -0,0 +1,21 @@ + MIT License + + Copyright (c) Microsoft Corporation. + + Permission is hereby granted, free of charge, to any person obtaining a copy + of this software and associated documentation files (the "Software"), to deal + in the Software without restriction, including without limitation the rights + to use, copy, modify, merge, publish, distribute, sublicense, and/or sell + copies of the Software, and to permit persons to whom the Software is + furnished to do so, subject to the following conditions: + + The above copyright notice and this permission notice shall be included in all + copies or substantial portions of the Software. + + THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR + IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, + FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE + AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER + LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, + OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE + SOFTWARE diff --git a/cluster-autoscaler/vendor/github.com/Azure/go-armbalancer/README.md b/cluster-autoscaler/vendor/github.com/Azure/go-armbalancer/README.md new file mode 100644 index 000000000000..c7d534b24266 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/go-armbalancer/README.md @@ -0,0 +1,56 @@ +# ARM Balancer + +A client-side connection manager for Azure Resource Manager. + +## Why? + +ARM request throttling is scoped to the specific instance of ARM that a connection lands on. +This serves to reduce the risk of a noisy client impacting the performance of other requests handled concurrently by that particular instance without requiring coordination between instances. + +HTTP1.1 clients commonly use pooled TCP connections to provide concurrency. +But HTTP2 allows a single connection to handle many concurrent requests. +Conforming client implementations will only open a second connection when the concurrency limit advertised by the server would be exceeded. + +This poses a problem for ARM consumers using HTTP2: requests that were previously distributed across several ARM instances will now be sent to only one. + +## Design + +- Multiple connections are established with ARM, forming a simple client-side load balancer +- Connections are re-established when they receive a "ratelimit-remaining" header below a certain threshold + +This scheme avoids throttling by proactively redistributing load across ARM instances. +Performance under high concurrency may also improve relative to HTTP1.1 since the pool of connections can easily be made larger than common HTTP client defaults. 
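+
+For illustration, the pool can be tuned through the `Options` fields defined in `armbalancer.go`
+(`PoolSize`, `RecycleThreshold`, and so on); the values below are arbitrary, not recommendations:
+
+```go
+// A sketch: a larger pool than the default 8 connections, recycled earlier
+// than the default remaining-quota threshold of 100.
+transport := armbalancer.New(armbalancer.Options{
+	PoolSize:         16,
+	RecycleThreshold: 500,
+})
+```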
+ +## Usage + +```go +armresources.NewClient("{{subscriptionID}}", cred, &arm.ClientOptions{ + ClientOptions: policy.ClientOptions{ + Transport: &http.Client{ + Transport: armbalancer.New(armbalancer.Options{}), + }, + }, +}) +``` + +## Contributing + +This project welcomes contributions and suggestions. Most contributions require you to agree to a +Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us +the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com. + +When you submit a pull request, a CLA bot will automatically determine whether you need to provide +a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions +provided by the bot. You will only need to do this once across all repos using our CLA. + +This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). +For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or +contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments. + +## Trademarks + +This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft +trademarks or logos is subject to and must follow +[Microsoft's Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks/usage/general). +Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. +Any use of third-party trademarks or logos are subject to those third-party's policies. diff --git a/cluster-autoscaler/vendor/github.com/Azure/go-armbalancer/SECURITY.md b/cluster-autoscaler/vendor/github.com/Azure/go-armbalancer/SECURITY.md new file mode 100644 index 000000000000..e138ec5d6a77 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/go-armbalancer/SECURITY.md @@ -0,0 +1,41 @@ + + +## Security + +Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/). + +If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below. + +## Reporting Security Issues + +**Please do not report security vulnerabilities through public GitHub issues.** + +Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report). + +If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey). + +You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. 
Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc). + +Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue: + + * Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.) + * Full paths of source file(s) related to the manifestation of the issue + * The location of the affected source code (tag/branch/commit or direct URL) + * Any special configuration required to reproduce the issue + * Step-by-step instructions to reproduce the issue + * Proof-of-concept or exploit code (if possible) + * Impact of the issue, including how an attacker might exploit the issue + +This information will help us triage your report more quickly. + +If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs. + +## Preferred Languages + +We prefer all communications to be in English. + +## Policy + +Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd). + + diff --git a/cluster-autoscaler/vendor/github.com/Azure/go-armbalancer/SUPPORT.md b/cluster-autoscaler/vendor/github.com/Azure/go-armbalancer/SUPPORT.md new file mode 100644 index 000000000000..291d4d43733f --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/go-armbalancer/SUPPORT.md @@ -0,0 +1,25 @@ +# TODO: The maintainer of this repo has not yet edited this file + +**REPO OWNER**: Do you want Customer Service & Support (CSS) support for this product/project? + +- **No CSS support:** Fill out this template with information about how to file issues and get help. +- **Yes CSS support:** Fill out an intake form at [aka.ms/onboardsupport](https://aka.ms/onboardsupport). CSS will work with/help you to determine next steps. +- **Not sure?** Fill out an intake as though the answer were "Yes". CSS will help you decide. + +*Then remove this first heading from this SUPPORT.MD file before publishing your repo.* + +# Support + +## How to file issues and get help + +This project uses GitHub Issues to track bugs and feature requests. Please search the existing +issues before filing new issues to avoid duplicates. For new issues, file your bug or +feature request as a new Issue. + +For help and questions about using this project, please **REPO MAINTAINER: INSERT INSTRUCTIONS HERE +FOR HOW TO ENGAGE REPO OWNERS OR COMMUNITY FOR HELP. COULD BE A STACK OVERFLOW TAG OR OTHER +CHANNEL. WHERE WILL YOU HELP PEOPLE?**. + +## Microsoft Support Policy + +Support for this **PROJECT or PRODUCT** is limited to the resources listed above. diff --git a/cluster-autoscaler/vendor/github.com/Azure/go-armbalancer/armbalancer.go b/cluster-autoscaler/vendor/github.com/Azure/go-armbalancer/armbalancer.go new file mode 100644 index 000000000000..36e0961be7ef --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/Azure/go-armbalancer/armbalancer.go @@ -0,0 +1,200 @@ +package armbalancer + +import ( + "fmt" + "math" + "net/http" + "strconv" + "strings" + "sync" + "sync/atomic" +) + +const rateLimitHeaderPrefix = "X-Ms-Ratelimit-Remaining-" + +type Options struct { + Transport *http.Transport + + // Host is the only host that can be reached through the round tripper. 
+	// Default: management.azure.com
+	Host string
+
+	// PoolSize is the max number of connections that will be created by the connection pool.
+	// Default: 8
+	PoolSize int
+
+	// RecycleThreshold is the lowest value of any X-Ms-Ratelimit-Remaining-* header that
+	// can be seen before the associated connection will be re-established.
+	// Default: 100
+	RecycleThreshold int64
+
+	// MinReqsBeforeRecycle is a safeguard to prevent frequent connection churn in the unlikely event
+	// that a connection lands on an ARM instance that already has a depleted rate limiting quota.
+	// Default: 10
+	MinReqsBeforeRecycle int64
+}
+
+// New wraps a transport to provide smart connection pooling and client-side load balancing.
+func New(opts Options) http.RoundTripper {
+	if opts.Transport == nil {
+		opts.Transport = http.DefaultTransport.(*http.Transport)
+	}
+	if opts.Host == "" {
+		opts.Host = "management.azure.com"
+	}
+	if opts.PoolSize == 0 {
+		opts.PoolSize = 8
+	}
+	if opts.RecycleThreshold == 0 {
+		opts.RecycleThreshold = 100
+	}
+	if opts.MinReqsBeforeRecycle == 0 {
+		opts.MinReqsBeforeRecycle = 10
+	}
+
+	t := &transportPool{pool: make([]http.RoundTripper, opts.PoolSize)}
+	for i := range t.pool {
+		t.pool[i] = newRecyclableTransport(i, opts.Transport, opts.Host, opts.RecycleThreshold, opts.MinReqsBeforeRecycle)
+	}
+	return t
+}
+
+type transportPool struct {
+	pool   []http.RoundTripper
+	cursor int64
+}
+
+func (t *transportPool) RoundTrip(req *http.Request) (*http.Response, error) {
+	i := int(atomic.AddInt64(&t.cursor, 1)) % len(t.pool)
+	return t.pool[i].RoundTrip(req)
+}
+
+type recyclableTransport struct {
+	lock        sync.Mutex // only hold while copying pointer - not calling RoundTrip
+	host        string
+	current     *http.Transport
+	counter     int64 // atomic
+	activeCount *sync.WaitGroup
+	state       *connState
+	signal      chan struct{}
+}
+
+func newRecyclableTransport(id int, parent *http.Transport, host string, recycleThreshold, minReqsBeforeRecycle int64) *recyclableTransport {
+	tx := parent.Clone()
+	tx.MaxConnsPerHost = 1
+	r := &recyclableTransport{
+		host:        host,
+		current:     tx.Clone(),
+		activeCount: &sync.WaitGroup{},
+		state:       newConnState(),
+		signal:      make(chan struct{}, 1),
+	}
+	go func() {
+		for range r.signal {
+			if r.state.Min() > recycleThreshold || atomic.LoadInt64(&r.counter) < minReqsBeforeRecycle {
+				continue
+			}
+
+			// Swap a new transport in place while holding a pointer to the previous
+			r.lock.Lock()
+			previous := r.current
+			previousActiveCount := r.activeCount
+			r.current = tx.Clone()
+			atomic.StoreInt64(&r.counter, 0)
+			r.activeCount = &sync.WaitGroup{}
+			r.lock.Unlock()
+
+			// Wait for all active requests against the previous transport to complete before closing its idle connections
+			previousActiveCount.Wait()
+			previous.CloseIdleConnections()
+		}
+	}()
+	return r
+}
+
+// compareHost returns true if the transport host matches the request host.
+func (t *recyclableTransport) compareHost(reqHost string) bool {
+	idx := strings.Index(reqHost, ":")
+	idx1 := strings.Index(t.host, ":")
+
+	// ":" appears at the same position in both hosts (or in neither): compare them directly
+	if idx == idx1 {
+		return reqHost == t.host
+	}
+
+	// reqHost has ":" but the transport host doesn't: compare reqHost with the port-appended transport host
+	if idx != -1 {
+		return reqHost == t.host+reqHost[idx:]
+	}
+
+	// reqHost doesn't have ":" but the transport host does: compare reqHost with the transport host minus its port
+	return reqHost == t.host[:idx1]
+}
+
+func (t *recyclableTransport) RoundTrip(req *http.Request) (*http.Response,
error) {
+	matched := t.compareHost(req.URL.Host)
+	if !matched {
+		return nil, fmt.Errorf("host %q is not supported by the configured ARM balancer, supported host name is %q", req.URL.Host, t.host)
+	}
+
+	t.lock.Lock()
+	tx := t.current
+	wg := t.activeCount
+	wg.Add(1)
+	t.lock.Unlock()
+
+	defer func() {
+		t.lock.Lock()
+		wg.Add(-1)
+		t.lock.Unlock()
+	}()
+
+	resp, err := tx.RoundTrip(req)
+	atomic.AddInt64(&t.counter, 1)
+
+	if resp != nil {
+		t.state.ApplyHeader(resp.Header)
+	}
+
+	select {
+	case t.signal <- struct{}{}:
+	default:
+	}
+	return resp, err
+}
+
+type connState struct {
+	lock  sync.Mutex
+	types map[string]int64
+}
+
+func newConnState() *connState {
+	return &connState{types: make(map[string]int64)}
+}
+
+func (c *connState) ApplyHeader(h http.Header) {
+	c.lock.Lock()
+	for key, vals := range h {
+		if !strings.HasPrefix(key, rateLimitHeaderPrefix) {
+			continue
+		}
+		n, err := strconv.ParseInt(vals[0], 10, 0)
+		if err != nil {
+			continue
+		}
+		c.types[key[len(rateLimitHeaderPrefix):]] = n
+	}
+	c.lock.Unlock()
+}
+
+func (c *connState) Min() int64 {
+	c.lock.Lock()
+	var min int64 = math.MaxInt64
+	for _, val := range c.types {
+		if val < min {
+			min = val
+		}
+	}
+	c.lock.Unlock()
+	return min
+}
diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/LICENSE b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/LICENSE
new file mode 100644
index 000000000000..3d8b93bc7987
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/LICENSE
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) Microsoft Corporation.
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE
diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/cache/cache.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/cache/cache.go
new file mode 100644
index 000000000000..19210883bac2
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/cache/cache.go
@@ -0,0 +1,54 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+/*
+Package cache allows third parties to implement external storage for caching token data
+for distributed systems or for access by multiple local applications.
+
+The data stored and extracted will represent the entire cache. Therefore it is recommended
+to use one MSAL instance per user.
This data is considered opaque and there are no guarantees to
+implementers on the format being passed.
+*/
+package cache
+
+import "context"
+
+// Marshaler marshals data from an internal cache to bytes that can be stored.
+type Marshaler interface {
+	Marshal() ([]byte, error)
+}
+
+// Unmarshaler unmarshals data from a storage medium into the internal cache, overwriting it.
+type Unmarshaler interface {
+	Unmarshal([]byte) error
+}
+
+// Serializer can serialize the cache to binary or from binary into the cache.
+type Serializer interface {
+	Marshaler
+	Unmarshaler
+}
+
+// ExportHints are suggestions for storing data.
+type ExportHints struct {
+	// PartitionKey is a suggested key for partitioning the cache
+	PartitionKey string
+}
+
+// ReplaceHints are suggestions for loading data.
+type ReplaceHints struct {
+	// PartitionKey is a suggested key for partitioning the cache
+	PartitionKey string
+}
+
+// ExportReplace exports and replaces in-memory cache data. It doesn't support nil Context or
+// define the outcome of passing one. A Context without a timeout must receive a default timeout
+// specified by the implementor. Retries must be implemented inside the implementation.
+type ExportReplace interface {
+	// Replace replaces the cache with what is in external storage. Implementors should honor
+	// Context cancellations and return context.Canceled or context.DeadlineExceeded in those cases.
+	Replace(ctx context.Context, cache Unmarshaler, hints ReplaceHints) error
+	// Export writes the binary representation of the cache (cache.Marshal()) to external storage.
+	// This is considered opaque. Context cancellations should be honored as in Replace.
+	Export(ctx context.Context, cache Marshaler, hints ExportHints) error
+}
diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/confidential/confidential.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/confidential/confidential.go
new file mode 100644
index 000000000000..6612feb4bf88
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/confidential/confidential.go
@@ -0,0 +1,685 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+/*
+Package confidential provides a client for authentication of "confidential" applications.
+A "confidential" application is defined as an app that runs on servers. Such apps are considered
+difficult to access and for that reason capable of keeping an application secret.
+Confidential clients can hold configuration-time secrets.
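+
+A hedged usage sketch (tenantID, clientID, and secret are placeholders; the client-credentials
+call shown is AcquireTokenByCredential, defined later in this file):
+
+	cred, err := confidential.NewCredFromSecret(secret)
+	if err != nil {
+		// handle the error
+	}
+	client, err := confidential.New("https://login.microsoftonline.com/"+tenantID, clientID, cred)
+	if err != nil {
+		// handle the error
+	}
+	result, err := client.AcquireTokenByCredential(ctx, []string{"https://graph.microsoft.com/.default"})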
+*/
+package confidential
+
+import (
+	"context"
+	"crypto"
+	"crypto/rsa"
+	"crypto/x509"
+	"encoding/base64"
+	"encoding/pem"
+	"errors"
+	"fmt"
+
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/cache"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/exported"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/options"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/shared"
+)
+
+/*
+Design note:
+
+confidential.Client uses base.Client as an embedded type. base.Client statically assigns its attributes
+during creation. As it doesn't have any pointers in it, anything borrowed from it, such as
+Base.AuthParams, is a copy that is free to be manipulated here.
+
+Duplicate Calls shared between public.Client and this package:
+There are some duplicate call options provided here that are the same as in public.Client. This
+is a design choice. Go proverb (https://www.youtube.com/watch?v=PAAkCSZUG1c&t=9m28s):
+"a little copying is better than a little dependency". Yes, we could have another package with
+shared options (fail). That would divide a couple of options from all the others, which makes the user look
+through more docs. We can have all clients in one package, but I think separate packages
+here make for better naming (public.Client vs client.PublicClient). So I chose a little
+duplication.
+
+.Net People, Take note on X509:
+This uses x509.Certificates and private keys. x509 does not store private keys. .Net
+has an x509.Certificate2 type that carries private keys, but that is a .Net-specific addition;
+it doesn't exist in the underlying X.509 standard. As such, I've put a PEM decoder in here.
+*/
+
+// TODO(msal): This should have example code for each method on client using Go's example doc framework.
+// base usage details should be included in the package documentation.
+
+// AuthResult contains the results of one token acquisition operation.
+// For details see https://aka.ms/msal-net-authenticationresult
+type AuthResult = base.AuthResult
+
+type Account = shared.Account
+
+// CertFromPEM converts a PEM file (.pem or .key) for use with [NewCredFromCert]. The file
+// must contain the public certificate and the private key. If a PEM block is encrypted and
+// password is not an empty string, it attempts to decrypt the PEM blocks using the password.
+// Multiple certs are allowed because of certificate chaining for use cases like TLS, which sign from root to leaf.
+func CertFromPEM(pemData []byte, password string) ([]*x509.Certificate, crypto.PrivateKey, error) {
+	var certs []*x509.Certificate
+	var priv crypto.PrivateKey
+	for {
+		block, rest := pem.Decode(pemData)
+		if block == nil {
+			break
+		}
+
+		//nolint:staticcheck // x509.IsEncryptedPEMBlock and x509.DecryptPEMBlock are deprecated. They are used here only to support a use case.
+		if x509.IsEncryptedPEMBlock(block) {
+			b, err := x509.DecryptPEMBlock(block, []byte(password))
+			if err != nil {
+				return nil, nil, fmt.Errorf("could not decrypt encrypted PEM block: %v", err)
+			}
+			block, _ = pem.Decode(b)
+			if block == nil {
+				return nil, nil, fmt.Errorf("encountered an encrypted PEM block that did not decode")
+			}
+		}
+
+		switch block.Type {
+		case "CERTIFICATE":
+			cert, err := x509.ParseCertificate(block.Bytes)
+			if err != nil {
+				return nil, nil, fmt.Errorf("block labelled 'CERTIFICATE' could not be parsed by x509: %v", err)
+			}
+			certs = append(certs, cert)
+		case "PRIVATE KEY":
+			if priv != nil {
+				return nil, nil, errors.New("found multiple private key blocks")
+			}
+
+			var err error
+			priv, err = x509.ParsePKCS8PrivateKey(block.Bytes)
+			if err != nil {
+				return nil, nil, fmt.Errorf("could not decode private key: %v", err)
+			}
+		case "RSA PRIVATE KEY":
+			if priv != nil {
+				return nil, nil, errors.New("found multiple private key blocks")
+			}
+			var err error
+			priv, err = x509.ParsePKCS1PrivateKey(block.Bytes)
+			if err != nil {
+				return nil, nil, fmt.Errorf("could not decode private key: %v", err)
+			}
+		}
+		pemData = rest
+	}
+
+	if len(certs) == 0 {
+		return nil, nil, fmt.Errorf("no certificates found")
+	}
+
+	if priv == nil {
+		return nil, nil, fmt.Errorf("no private key found")
+	}
+
+	return certs, priv, nil
+}
+
+// AssertionRequestOptions has the required information for client assertion claims
+type AssertionRequestOptions = exported.AssertionRequestOptions
+
+// Credential represents the credential used in confidential client flows.
+type Credential struct {
+	secret string
+
+	cert *x509.Certificate
+	key  crypto.PrivateKey
+	x5c  []string
+
+	assertionCallback func(context.Context, AssertionRequestOptions) (string, error)
+
+	tokenProvider func(context.Context, TokenProviderParameters) (TokenProviderResult, error)
+}
+
+// toInternal returns the accesstokens.Credential that is used internally. The current structure of the
+// code requires that client.go, requests.go and confidential.go share a credential type without
+// having import recursion. That requires the shared type to live in a shared package. Therefore
+// we have this.
+func (c Credential) toInternal() (*accesstokens.Credential, error) {
+	if c.secret != "" {
+		return &accesstokens.Credential{Secret: c.secret}, nil
+	}
+	if c.cert != nil {
+		if c.key == nil {
+			return nil, errors.New("missing private key for certificate")
+		}
+		return &accesstokens.Credential{Cert: c.cert, Key: c.key, X5c: c.x5c}, nil
+	}
+	if c.key != nil {
+		return nil, errors.New("missing certificate for private key")
+	}
+	if c.assertionCallback != nil {
+		return &accesstokens.Credential{AssertionCallback: c.assertionCallback}, nil
+	}
+	if c.tokenProvider != nil {
+		return &accesstokens.Credential{TokenProvider: c.tokenProvider}, nil
+	}
+	return nil, errors.New("invalid credential")
+}
+
+// NewCredFromSecret creates a Credential from a secret.
+func NewCredFromSecret(secret string) (Credential, error) {
+	if secret == "" {
+		return Credential{}, errors.New("secret can't be empty string")
+	}
+	return Credential{secret: secret}, nil
+}
+
+// NewCredFromAssertionCallback creates a Credential that invokes a callback to get assertions
+// authenticating the application. The callback must be thread safe.
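+//
+// A minimal sketch (getSignedAssertion is a hypothetical helper returning a signed client assertion):
+//
+//	cred := confidential.NewCredFromAssertionCallback(func(ctx context.Context, o confidential.AssertionRequestOptions) (string, error) {
+//		return getSignedAssertion(ctx, o)
+//	})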
+func NewCredFromAssertionCallback(callback func(context.Context, AssertionRequestOptions) (string, error)) Credential { + return Credential{assertionCallback: callback} +} + +// NewCredFromCert creates a Credential from a certificate or chain of certificates and an RSA private key +// as returned by [CertFromPEM]. +func NewCredFromCert(certs []*x509.Certificate, key crypto.PrivateKey) (Credential, error) { + cred := Credential{key: key} + k, ok := key.(*rsa.PrivateKey) + if !ok { + return cred, errors.New("key must be an RSA key") + } + for _, cert := range certs { + if cert == nil { + // not returning an error here because certs may still contain a sufficient cert/key pair + continue + } + certKey, ok := cert.PublicKey.(*rsa.PublicKey) + if ok && k.E == certKey.E && k.N.Cmp(certKey.N) == 0 { + // We know this is the signing cert because its public key matches the given private key. + // This cert must be first in x5c. + cred.cert = cert + cred.x5c = append([]string{base64.StdEncoding.EncodeToString(cert.Raw)}, cred.x5c...) + } else { + cred.x5c = append(cred.x5c, base64.StdEncoding.EncodeToString(cert.Raw)) + } + } + if cred.cert == nil { + return cred, errors.New("key doesn't match any certificate") + } + return cred, nil +} + +// TokenProviderParameters is the authentication parameters passed to token providers +type TokenProviderParameters = exported.TokenProviderParameters + +// TokenProviderResult is the authentication result returned by custom token providers +type TokenProviderResult = exported.TokenProviderResult + +// NewCredFromTokenProvider creates a Credential from a function that provides access tokens. The function +// must be concurrency safe. This is intended only to allow the Azure SDK to cache MSI tokens. It isn't +// useful to applications in general because the token provider must implement all authentication logic. +func NewCredFromTokenProvider(provider func(context.Context, TokenProviderParameters) (TokenProviderResult, error)) Credential { + return Credential{tokenProvider: provider} +} + +// AutoDetectRegion instructs MSAL Go to auto detect region for Azure regional token service. +func AutoDetectRegion() string { + return "TryAutoDetect" +} + +// Client is a representation of authentication client for confidential applications as defined in the +// package doc. A new Client should be created PER SERVICE USER. +// For more information, visit https://docs.microsoft.com/azure/active-directory/develop/msal-client-applications +type Client struct { + base base.Client + cred *accesstokens.Credential +} + +// clientOptions are optional settings for New(). These options are set using various functions +// returning Option calls. +type clientOptions struct { + accessor cache.ExportReplace + authority, azureRegion string + capabilities []string + disableInstanceDiscovery, sendX5C bool + httpClient ops.HTTPClient +} + +// Option is an optional argument to New(). +type Option func(o *clientOptions) + +// WithCache provides an accessor that will read and write authentication data to an externally managed cache. 
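+//
+// A minimal sketch (myAccessor is a hypothetical cache.ExportReplace implementation):
+//
+//	client, err := confidential.New(authority, clientID, cred, confidential.WithCache(myAccessor))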
+func WithCache(accessor cache.ExportReplace) Option { + return func(o *clientOptions) { + o.accessor = accessor + } +} + +// WithClientCapabilities allows configuring one or more client capabilities such as "CP1" +func WithClientCapabilities(capabilities []string) Option { + return func(o *clientOptions) { + // there's no danger of sharing the slice's underlying memory with the application because + // this slice is simply passed to base.WithClientCapabilities, which copies its data + o.capabilities = capabilities + } +} + +// WithHTTPClient allows for a custom HTTP client to be set. +func WithHTTPClient(httpClient ops.HTTPClient) Option { + return func(o *clientOptions) { + o.httpClient = httpClient + } +} + +// WithX5C specifies if x5c claim(public key of the certificate) should be sent to STS to enable Subject Name Issuer Authentication. +func WithX5C() Option { + return func(o *clientOptions) { + o.sendX5C = true + } +} + +// WithInstanceDiscovery set to false to disable authority validation (to support private cloud scenarios) +func WithInstanceDiscovery(enabled bool) Option { + return func(o *clientOptions) { + o.disableInstanceDiscovery = !enabled + } +} + +// WithAzureRegion sets the region(preferred) or Confidential.AutoDetectRegion() for auto detecting region. +// Region names as per https://azure.microsoft.com/en-ca/global-infrastructure/geographies/. +// See https://aka.ms/region-map for more details on region names. +// The region value should be short region name for the region where the service is deployed. +// For example "centralus" is short name for region Central US. +// Not all auth flows can use the regional token service. +// Service To Service (client credential flow) tokens can be obtained from the regional service. +// Requires configuration at the tenant level. +// Auto-detection works on a limited number of Azure artifacts (VMs, Azure functions). +// If auto-detection fails, the non-regional endpoint will be used. +// If an invalid region name is provided, the non-regional endpoint MIGHT be used or the token request MIGHT fail. +func WithAzureRegion(val string) Option { + return func(o *clientOptions) { + o.azureRegion = val + } +} + +// New is the constructor for Client. authority is the URL of a token authority such as "https://login.microsoftonline.com/". +// If the Client will connect directly to AD FS, use "adfs" for the tenant. clientID is the application's client ID (also called its +// "application ID"). +func New(authority, clientID string, cred Credential, options ...Option) (Client, error) { + internalCred, err := cred.toInternal() + if err != nil { + return Client{}, err + } + + opts := clientOptions{ + authority: authority, + // if the caller specified a token provider, it will handle all details of authentication, using Client only as a token cache + disableInstanceDiscovery: cred.tokenProvider != nil, + httpClient: shared.DefaultClient, + } + for _, o := range options { + o(&opts) + } + baseOpts := []base.Option{ + base.WithCacheAccessor(opts.accessor), + base.WithClientCapabilities(opts.capabilities), + base.WithInstanceDiscovery(!opts.disableInstanceDiscovery), + base.WithRegionDetection(opts.azureRegion), + base.WithX5C(opts.sendX5C), + } + base, err := base.New(clientID, opts.authority, oauth.New(opts.httpClient), baseOpts...) 
+ if err != nil { + return Client{}, err + } + base.AuthParams.IsConfidentialClient = true + + return Client{base: base, cred: internalCred}, nil +} + +// authCodeURLOptions contains options for AuthCodeURL +type authCodeURLOptions struct { + claims, loginHint, tenantID, domainHint string +} + +// AuthCodeURLOption is implemented by options for AuthCodeURL +type AuthCodeURLOption interface { + authCodeURLOption() +} + +// AuthCodeURL creates a URL used to acquire an authorization code. Users need to call CreateAuthorizationCodeURLParameters and pass it in. +// +// Options: [WithClaims], [WithDomainHint], [WithLoginHint], [WithTenantID] +func (cca Client) AuthCodeURL(ctx context.Context, clientID, redirectURI string, scopes []string, opts ...AuthCodeURLOption) (string, error) { + o := authCodeURLOptions{} + if err := options.ApplyOptions(&o, opts); err != nil { + return "", err + } + ap, err := cca.base.AuthParams.WithTenant(o.tenantID) + if err != nil { + return "", err + } + ap.Claims = o.claims + ap.LoginHint = o.loginHint + ap.DomainHint = o.domainHint + return cca.base.AuthCodeURL(ctx, clientID, redirectURI, scopes, ap) +} + +// WithLoginHint pre-populates the login prompt with a username. +func WithLoginHint(username string) interface { + AuthCodeURLOption + options.CallOption +} { + return struct { + AuthCodeURLOption + options.CallOption + }{ + CallOption: options.NewCallOption( + func(a any) error { + switch t := a.(type) { + case *authCodeURLOptions: + t.loginHint = username + default: + return fmt.Errorf("unexpected options type %T", a) + } + return nil + }, + ), + } +} + +// WithDomainHint adds the IdP domain as domain_hint query parameter in the auth url. +func WithDomainHint(domain string) interface { + AuthCodeURLOption + options.CallOption +} { + return struct { + AuthCodeURLOption + options.CallOption + }{ + CallOption: options.NewCallOption( + func(a any) error { + switch t := a.(type) { + case *authCodeURLOptions: + t.domainHint = domain + default: + return fmt.Errorf("unexpected options type %T", a) + } + return nil + }, + ), + } +} + +// WithClaims sets additional claims to request for the token, such as those required by conditional access policies. +// Use this option when Azure AD returned a claims challenge for a prior request. The argument must be decoded. +// This option is valid for any token acquisition method. +func WithClaims(claims string) interface { + AcquireByAuthCodeOption + AcquireByCredentialOption + AcquireOnBehalfOfOption + AcquireSilentOption + AuthCodeURLOption + options.CallOption +} { + return struct { + AcquireByAuthCodeOption + AcquireByCredentialOption + AcquireOnBehalfOfOption + AcquireSilentOption + AuthCodeURLOption + options.CallOption + }{ + CallOption: options.NewCallOption( + func(a any) error { + switch t := a.(type) { + case *acquireTokenByAuthCodeOptions: + t.claims = claims + case *acquireTokenByCredentialOptions: + t.claims = claims + case *acquireTokenOnBehalfOfOptions: + t.claims = claims + case *acquireTokenSilentOptions: + t.claims = claims + case *authCodeURLOptions: + t.claims = claims + default: + return fmt.Errorf("unexpected options type %T", a) + } + return nil + }, + ), + } +} + +// WithTenantID specifies a tenant for a single authentication. It may be different than the tenant set in [New]. +// This option is valid for any token acquisition method. 
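+//
+// For example (a sketch; the tenant ID is a placeholder):
+//
+//	result, err := client.AcquireTokenByCredential(ctx, scopes, confidential.WithTenantID("11111111-2222-3333-4444-555555555555"))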
+func WithTenantID(tenantID string) interface { + AcquireByAuthCodeOption + AcquireByCredentialOption + AcquireOnBehalfOfOption + AcquireSilentOption + AuthCodeURLOption + options.CallOption +} { + return struct { + AcquireByAuthCodeOption + AcquireByCredentialOption + AcquireOnBehalfOfOption + AcquireSilentOption + AuthCodeURLOption + options.CallOption + }{ + CallOption: options.NewCallOption( + func(a any) error { + switch t := a.(type) { + case *acquireTokenByAuthCodeOptions: + t.tenantID = tenantID + case *acquireTokenByCredentialOptions: + t.tenantID = tenantID + case *acquireTokenOnBehalfOfOptions: + t.tenantID = tenantID + case *acquireTokenSilentOptions: + t.tenantID = tenantID + case *authCodeURLOptions: + t.tenantID = tenantID + default: + return fmt.Errorf("unexpected options type %T", a) + } + return nil + }, + ), + } +} + +// acquireTokenSilentOptions are all the optional settings to an AcquireTokenSilent() call. +// These are set by using various AcquireTokenSilentOption functions. +type acquireTokenSilentOptions struct { + account Account + claims, tenantID string +} + +// AcquireSilentOption is implemented by options for AcquireTokenSilent +type AcquireSilentOption interface { + acquireSilentOption() +} + +// WithSilentAccount uses the passed account during an AcquireTokenSilent() call. +func WithSilentAccount(account Account) interface { + AcquireSilentOption + options.CallOption +} { + return struct { + AcquireSilentOption + options.CallOption + }{ + CallOption: options.NewCallOption( + func(a any) error { + switch t := a.(type) { + case *acquireTokenSilentOptions: + t.account = account + default: + return fmt.Errorf("unexpected options type %T", a) + } + return nil + }, + ), + } +} + +// AcquireTokenSilent acquires a token from either the cache or using a refresh token. +// +// Options: [WithClaims], [WithSilentAccount], [WithTenantID] +func (cca Client) AcquireTokenSilent(ctx context.Context, scopes []string, opts ...AcquireSilentOption) (AuthResult, error) { + o := acquireTokenSilentOptions{} + if err := options.ApplyOptions(&o, opts); err != nil { + return AuthResult{}, err + } + + if o.claims != "" { + return AuthResult{}, errors.New("call another AcquireToken method to request a new token having these claims") + } + + silentParameters := base.AcquireTokenSilentParameters{ + Scopes: scopes, + Account: o.account, + RequestType: accesstokens.ATConfidential, + Credential: cca.cred, + IsAppCache: o.account.IsZero(), + TenantID: o.tenantID, + } + + return cca.base.AcquireTokenSilent(ctx, silentParameters) +} + +// acquireTokenByAuthCodeOptions contains the optional parameters used to acquire an access token using the authorization code flow. +type acquireTokenByAuthCodeOptions struct { + challenge, claims, tenantID string +} + +// AcquireByAuthCodeOption is implemented by options for AcquireTokenByAuthCode +type AcquireByAuthCodeOption interface { + acquireByAuthCodeOption() +} + +// WithChallenge allows you to provide a challenge for the .AcquireTokenByAuthCode() call. 
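+//
+// A minimal sketch (code and redirectURI come from the authorization response; verifier is the
+// PKCE value used when the authorization code was requested):
+//
+//	result, err := client.AcquireTokenByAuthCode(ctx, code, redirectURI, scopes, confidential.WithChallenge(verifier))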
+func WithChallenge(challenge string) interface {
+	AcquireByAuthCodeOption
+	options.CallOption
+} {
+	return struct {
+		AcquireByAuthCodeOption
+		options.CallOption
+	}{
+		CallOption: options.NewCallOption(
+			func(a any) error {
+				switch t := a.(type) {
+				case *acquireTokenByAuthCodeOptions:
+					t.challenge = challenge
+				default:
+					return fmt.Errorf("unexpected options type %T", a)
+				}
+				return nil
+			},
+		),
+	}
+}
+
+// AcquireTokenByAuthCode is a request to acquire a security token from the authority, using an authorization code.
+// The specified redirect URI must be the same URI that was used when the authorization code was requested.
+//
+// Options: [WithChallenge], [WithClaims], [WithTenantID]
+func (cca Client) AcquireTokenByAuthCode(ctx context.Context, code string, redirectURI string, scopes []string, opts ...AcquireByAuthCodeOption) (AuthResult, error) {
+	o := acquireTokenByAuthCodeOptions{}
+	if err := options.ApplyOptions(&o, opts); err != nil {
+		return AuthResult{}, err
+	}
+
+	params := base.AcquireTokenAuthCodeParameters{
+		Scopes:      scopes,
+		Code:        code,
+		Challenge:   o.challenge,
+		Claims:      o.claims,
+		AppType:     accesstokens.ATConfidential,
+		Credential:  cca.cred, // This setting differs from public.Client.AcquireTokenByAuthCode
+		RedirectURI: redirectURI,
+		TenantID:    o.tenantID,
+	}
+
+	return cca.base.AcquireTokenByAuthCode(ctx, params)
+}
+
+// acquireTokenByCredentialOptions contains optional configuration for AcquireTokenByCredential
+type acquireTokenByCredentialOptions struct {
+	claims, tenantID string
+}
+
+// AcquireByCredentialOption is implemented by options for AcquireTokenByCredential
+type AcquireByCredentialOption interface {
+	acquireByCredOption()
+}
+
+// AcquireTokenByCredential acquires a security token from the authority, using the client credentials grant.
+//
+// Options: [WithClaims], [WithTenantID]
+func (cca Client) AcquireTokenByCredential(ctx context.Context, scopes []string, opts ...AcquireByCredentialOption) (AuthResult, error) {
+	o := acquireTokenByCredentialOptions{}
+	err := options.ApplyOptions(&o, opts)
+	if err != nil {
+		return AuthResult{}, err
+	}
+	authParams, err := cca.base.AuthParams.WithTenant(o.tenantID)
+	if err != nil {
+		return AuthResult{}, err
+	}
+	authParams.Scopes = scopes
+	authParams.AuthorizationType = authority.ATClientCredentials
+	authParams.Claims = o.claims
+
+	token, err := cca.base.Token.Credential(ctx, authParams, cca.cred)
+	if err != nil {
+		return AuthResult{}, err
+	}
+	return cca.base.AuthResultFromToken(ctx, authParams, token, true)
+}
+
+// acquireTokenOnBehalfOfOptions contains optional configuration for AcquireTokenOnBehalfOf
+type acquireTokenOnBehalfOfOptions struct {
+	claims, tenantID string
+}
+
+// AcquireOnBehalfOfOption is implemented by options for AcquireTokenOnBehalfOf
+type AcquireOnBehalfOfOption interface {
+	acquireOBOOption()
+}
+
+// AcquireTokenOnBehalfOf acquires a security token for an app using a middle-tier app's access token.
+// Refer to https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-on-behalf-of-flow.
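+//
+// A minimal sketch (userAssertion is the access token this app received from its caller):
+//
+//	result, err := client.AcquireTokenOnBehalfOf(ctx, userAssertion, scopes)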
+//
+// Options: [WithClaims], [WithTenantID]
+func (cca Client) AcquireTokenOnBehalfOf(ctx context.Context, userAssertion string, scopes []string, opts ...AcquireOnBehalfOfOption) (AuthResult, error) {
+	o := acquireTokenOnBehalfOfOptions{}
+	if err := options.ApplyOptions(&o, opts); err != nil {
+		return AuthResult{}, err
+	}
+	params := base.AcquireTokenOnBehalfOfParameters{
+		Scopes:        scopes,
+		UserAssertion: userAssertion,
+		Claims:        o.claims,
+		Credential:    cca.cred,
+		TenantID:      o.tenantID,
+	}
+	return cca.base.AcquireTokenOnBehalfOf(ctx, params)
+}
+
+// Account gets the account in the token cache with the specified homeAccountID.
+func (cca Client) Account(ctx context.Context, accountID string) (Account, error) {
+	return cca.base.Account(ctx, accountID)
+}
+
+// RemoveAccount signs the account out and forgets the account from the token cache.
+func (cca Client) RemoveAccount(ctx context.Context, account Account) error {
+	return cca.base.RemoveAccount(ctx, account)
+}
diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/errors/error_design.md b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/errors/error_design.md
new file mode 100644
index 000000000000..7ef7862fe53c
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/errors/error_design.md
@@ -0,0 +1,111 @@
+# MSAL Error Design
+
+Author: Abhidnya Patil (abhidnya.patil@microsoft.com)
+
+Contributors:
+
+- John Doak (jdoak@microsoft.com)
+- Keegan Caruso (Keegan.Caruso@microsoft.com)
+- Joel Hendrix (jhendrix@microsoft.com)
+
+## Background
+
+Errors in MSAL are intended for app developers to troubleshoot and not for displaying to end-users.
+
+### Go error handling vs other MSAL languages
+
+Most modern languages use exception based errors. Simply put, you "throw" an exception and it must be caught at some routine in the upper stack or it will eventually crash the program.
+
+Go doesn't use exceptions; instead it relies on multiple return values, one of which can be the builtin error interface type. It is up to the user to decide what to do.
+
+### Go custom error types
+
+Errors can be created in Go by simply using errors.New() or fmt.Errorf() to create an "error".
+
+Custom errors can be created in multiple ways. One of the more robust ways is simply to satisfy the error interface:
+
+```go
+type MyCustomErr struct {
+	Msg string
+}
+func (m MyCustomErr) Error() string { // This implements "error"
+	return m.Msg
+}
+```
+
+### MSAL Error Goals
+
+- Provide diagnostics to the user and for tickets that can be used to track down bugs or client misconfigurations
+- Detect errors that are transitory and can be retried
+- Allow the user to identify certain errors that the program can respond to, such as informing the user of the need to do an enrollment
+
+## Implementing Client Side Errors
+
+Client side errors indicate a misconfiguration or passing of bad arguments that is non-recoverable. Retrying isn't possible.
+
+These errors can simply be standard Go errors created by errors.New() or fmt.Errorf(). If down the line we need a custom error, we can introduce it, but for now the error messages just need to be clear on what the issue was.
+
+## Implementing Service Side Errors
+
+Service side errors occur when an external RPC responds either with an HTTP error code or returns a message that includes an error.
+
+These errors can be transitory (please slow down) or permanent (HTTP 404).
To provide our diagnostic goals, we require the ability to differentiate these errors from other errors.
+
+The current implementation includes a specialized type that captures any error from the server:
+
+```go
+// CallErr represents an HTTP call error. Has a Verbose() method that allows getting the
+// http.Request and Response objects. Implements error.
+type CallErr struct {
+	Req  *http.Request
+	Resp *http.Response
+	Err  error
+}
+
+// Error implements error.Error().
+func (e CallErr) Error() string {
+	return e.Err.Error()
+}
+
+// Verbose prints a verbose error message with the request or response.
+func (e CallErr) Verbose() string {
+	e.Resp.Request = nil // This brings in a bunch of TLS stuff we don't need
+	e.Resp.TLS = nil     // Same
+	return fmt.Sprintf("%s:\nRequest:\n%s\nResponse:\n%s", e.Err, prettyConf.Sprint(e.Req), prettyConf.Sprint(e.Resp))
+}
+```
+
+A user will always receive the most concise error we provide. They can tell if it is a server side error using Go's errors package:
+
+```go
+var callErr CallErr
+if errors.As(err, &callErr) {
+	...
+}
+```
+
+We provide a Verbose() function that can retrieve the most verbose message from any error we provide:
+
+```go
+fmt.Println(errors.Verbose(err))
+```
+
+If further differentiation is required, we can add custom errors that use Go error wrapping on top of CallErr to achieve our diagnostic goals (such as detecting when to retry a call due to transient errors).
+
+CallErr is always returned from the comm package (which handles all HTTP requests) and looks similar to:
+
+```go
+return nil, errors.CallErr{
+	Req:  req,
+	Resp: reply,
+	Err:  fmt.Errorf("http call(%s)(%s) error: reply status code was %d:\n%s", req.URL.String(), req.Method, reply.StatusCode, ErrorResponse), // ErrorResponse is the json body extracted from the http response
+}
+```
+
+## Future Decisions
+
+The ability to retry calls needs to have centralized responsibility. Either the user is doing it or the client is doing it.
+
+If the user should be responsible, our errors package will include a CanRetry() function that will inform the user if the error provided to them is retryable. This is based on the http error code and possibly the type of error that was returned. It would also include a sleep time if the server returned an amount of time to wait.
+
+Otherwise we will do this internally and retries will be left to us.
diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/errors/errors.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/errors/errors.go
new file mode 100644
index 000000000000..c9b8dbed088d
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/errors/errors.go
@@ -0,0 +1,89 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+package errors
+
+import (
+	"errors"
+	"fmt"
+	"io"
+	"net/http"
+	"reflect"
+	"strings"
+
+	"github.com/kylelemons/godebug/pretty"
+)
+
+var prettyConf = &pretty.Config{
+	IncludeUnexported: false,
+	SkipZeroFields:    true,
+	TrackCycles:       true,
+	Formatter: map[reflect.Type]interface{}{
+		reflect.TypeOf((*io.Reader)(nil)).Elem(): func(r io.Reader) string {
+			b, err := io.ReadAll(r)
+			if err != nil {
+				return "could not read io.Reader content"
+			}
+			return string(b)
+		},
+	},
+}
+
+type verboser interface {
+	Verbose() string
+}
+
+// Verbose prints the most verbose error that the error message has.
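+//
+// For example (a sketch; err is any error produced by this module):
+//
+//	fmt.Println(errors.Verbose(err))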
+func Verbose(err error) string {
+	build := strings.Builder{}
+	for {
+		if err == nil {
+			break
+		}
+		if v, ok := err.(verboser); ok {
+			build.WriteString(v.Verbose())
+		} else {
+			build.WriteString(err.Error())
+		}
+		err = errors.Unwrap(err)
+	}
+	return build.String()
+}
+
+// New is equivalent to errors.New().
+func New(text string) error {
+	return errors.New(text)
+}
+
+// CallErr represents an HTTP call error. Has a Verbose() method that allows getting the
+// http.Request and Response objects. Implements error.
+type CallErr struct {
+	Req *http.Request
+	// Resp contains response body
+	Resp *http.Response
+	Err  error
+}
+
+// Error implements error.Error().
+func (e CallErr) Error() string {
+	return e.Err.Error()
+}
+
+// Verbose prints a verbose error message with the request or response.
+func (e CallErr) Verbose() string {
+	e.Resp.Request = nil // This brings in a bunch of TLS crap we don't need
+	e.Resp.TLS = nil     // Same
+	return fmt.Sprintf("%s:\nRequest:\n%s\nResponse:\n%s", e.Err, prettyConf.Sprint(e.Req), prettyConf.Sprint(e.Resp))
+}
+
+// Is reports whether any error in err's chain matches target.
+func Is(err, target error) bool {
+	return errors.Is(err, target)
+}
+
+// As finds the first error in err's chain that matches target,
+// and if so, sets target to that error value and returns true.
+// Otherwise, it returns false.
+func As(err error, target interface{}) bool {
+	return errors.As(err, target)
+}
diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/base.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/base.go
new file mode 100644
index 000000000000..5f68384f68bc
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/base.go
@@ -0,0 +1,467 @@
+// Package base contains a "Base" client that is used by the external public.Client and confidential.Client.
+// Base holds shared attributes that must be available to both clients and methods that act as
+// shared calls.
+package base
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"net/url"
+	"reflect"
+	"strings"
+	"sync"
+	"time"
+
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/cache"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/internal/storage"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/shared"
+)
+
+const (
+	// AuthorityPublicCloud is the default AAD authority host
+	AuthorityPublicCloud = "https://login.microsoftonline.com/common"
+	scopeSeparator       = " "
+)
+
+// manager provides an internal cache. It is defined to allow faking the cache in tests.
+// In production it's a *storage.Manager or *storage.PartitionedManager.
+type manager interface {
+	cache.Serializer
+	Read(context.Context, authority.AuthParams) (storage.TokenResponse, error)
+	Write(authority.AuthParams, accesstokens.TokenResponse) (shared.Account, error)
+}
+
+// accountManager is a manager that also caches accounts. In production it's a *storage.Manager.
+type accountManager interface {
+	manager
+	AllAccounts() []shared.Account
+	Account(homeAccountID string) shared.Account
+	RemoveAccount(account shared.Account, clientID string)
+}
+
+// AcquireTokenSilentParameters contains the parameters to acquire a token silently (from cache).
+type AcquireTokenSilentParameters struct {
+	Scopes            []string
+	Account           shared.Account
+	RequestType       accesstokens.AppType
+	Credential        *accesstokens.Credential
+	IsAppCache        bool
+	TenantID          string
+	UserAssertion     string
+	AuthorizationType authority.AuthorizeType
+	Claims            string
+}
+
+// AcquireTokenAuthCodeParameters contains the parameters required to acquire an access token using the auth code flow.
+// To use PKCE, set the CodeChallengeParameter.
+// Code challenges are used to secure authorization code grants; for more information, visit
+// https://tools.ietf.org/html/rfc7636.
+type AcquireTokenAuthCodeParameters struct {
+	Scopes      []string
+	Code        string
+	Challenge   string
+	Claims      string
+	RedirectURI string
+	AppType     accesstokens.AppType
+	Credential  *accesstokens.Credential
+	TenantID    string
+}
+
+type AcquireTokenOnBehalfOfParameters struct {
+	Scopes        []string
+	Claims        string
+	Credential    *accesstokens.Credential
+	TenantID      string
+	UserAssertion string
+}
+
+// AuthResult contains the results of one token acquisition operation in PublicClientApplication
+// or ConfidentialClientApplication. For details see https://aka.ms/msal-net-authenticationresult
+type AuthResult struct {
+	Account        shared.Account
+	IDToken        accesstokens.IDToken
+	AccessToken    string
+	ExpiresOn      time.Time
+	GrantedScopes  []string
+	DeclinedScopes []string
+}
+
+// AuthResultFromStorage creates an AuthResult from a storage token response (which is generated from the cache).
+func AuthResultFromStorage(storageTokenResponse storage.TokenResponse) (AuthResult, error) {
+	if err := storageTokenResponse.AccessToken.Validate(); err != nil {
+		return AuthResult{}, fmt.Errorf("problem with access token in StorageTokenResponse: %w", err)
+	}
+
+	account := storageTokenResponse.Account
+	accessToken := storageTokenResponse.AccessToken.Secret
+	grantedScopes := strings.Split(storageTokenResponse.AccessToken.Scopes, scopeSeparator)
+
+	// Checking if there was an ID token in the cache; this will return an error in the case of confidential client applications.
+	var idToken accesstokens.IDToken
+	if !storageTokenResponse.IDToken.IsZero() {
+		err := idToken.UnmarshalJSON([]byte(storageTokenResponse.IDToken.Secret))
+		if err != nil {
+			return AuthResult{}, fmt.Errorf("problem decoding JWT token: %w", err)
+		}
+	}
+	return AuthResult{account, idToken, accessToken, storageTokenResponse.AccessToken.ExpiresOn.T, grantedScopes, nil}, nil
+}
+
+// NewAuthResult creates an AuthResult.
+func NewAuthResult(tokenResponse accesstokens.TokenResponse, account shared.Account) (AuthResult, error) {
+	if len(tokenResponse.DeclinedScopes) > 0 {
+		return AuthResult{}, fmt.Errorf("token response failed because declined scopes are present: %s", strings.Join(tokenResponse.DeclinedScopes, ","))
+	}
+	return AuthResult{
+		Account:       account,
+		IDToken:       tokenResponse.IDToken,
+		AccessToken:   tokenResponse.AccessToken,
+		ExpiresOn:     tokenResponse.ExpiresOn.T,
+		GrantedScopes: tokenResponse.GrantedScopes.Slice,
+	}, nil
+}
+
+// Client is a base client that provides access to common methods and primitives that
+// can be used by multiple clients.
+type Client struct {
+	Token   *oauth.Client
+	manager accountManager // *storage.Manager or fakeManager in tests
+	// pmanager is a partitioned cache for OBO authentication. *storage.PartitionedManager or fakeManager in tests
+	pmanager manager
+
+	AuthParams      authority.AuthParams // DO NOT EVER MAKE THIS A POINTER! See "Note" in New().
+	cacheAccessor   cache.ExportReplace
+	cacheAccessorMu *sync.RWMutex
+}
+
+// Option is an optional argument to the New constructor.
+type Option func(c *Client) error
+
+// WithCacheAccessor allows you to set some type of cache for storing authentication tokens.
+func WithCacheAccessor(ca cache.ExportReplace) Option {
+	return func(c *Client) error {
+		if ca != nil {
+			c.cacheAccessor = ca
+		}
+		return nil
+	}
+}
+
+// WithClientCapabilities allows configuring one or more client capabilities such as "CP1".
+func WithClientCapabilities(capabilities []string) Option {
+	return func(c *Client) error {
+		if len(capabilities) > 0 {
+			cc, err := authority.NewClientCapabilities(capabilities)
+			if err != nil {
+				return err
+			}
+			c.AuthParams.Capabilities = cc
+		}
+		return nil
+	}
+}
+
+// WithKnownAuthorityHosts specifies hosts Client shouldn't validate or request metadata for because they're known to the user
+func WithKnownAuthorityHosts(hosts []string) Option {
+	return func(c *Client) error {
+		cp := make([]string, len(hosts))
+		copy(cp, hosts)
+		c.AuthParams.KnownAuthorityHosts = cp
+		return nil
+	}
+}
+
+// WithX5C specifies if the x5c claim (public key of the certificate) should be sent to the STS to enable Subject Name Issuer Authentication.
+func WithX5C(sendX5C bool) Option {
+	return func(c *Client) error {
+		c.AuthParams.SendX5C = sendX5C
+		return nil
+	}
+}
+
+func WithRegionDetection(region string) Option {
+	return func(c *Client) error {
+		c.AuthParams.AuthorityInfo.Region = region
+		return nil
+	}
+}
+
+func WithInstanceDiscovery(instanceDiscoveryEnabled bool) Option {
+	return func(c *Client) error {
+		c.AuthParams.AuthorityInfo.ValidateAuthority = instanceDiscoveryEnabled
+		c.AuthParams.AuthorityInfo.InstanceDiscoveryDisabled = !instanceDiscoveryEnabled
+		return nil
+	}
+}
+
+// New is the constructor for Base.
+func New(clientID string, authorityURI string, token *oauth.Client, options ...Option) (Client, error) {
+	// By default, validateAuthority is set to true and instanceDiscoveryDisabled is set to false
+	authInfo, err := authority.NewInfoFromAuthorityURI(authorityURI, true, false)
+	if err != nil {
+		return Client{}, err
+	}
+	authParams := authority.NewAuthParams(clientID, authInfo)
+	client := Client{ // Note: Hey, don't even THINK about making Base into *Base. See "design notes" in public.go and confidential.go
+		Token:           token,
+		AuthParams:      authParams,
+		cacheAccessorMu: &sync.RWMutex{},
+		manager:         storage.New(token),
+		pmanager:        storage.NewPartitionedManager(token),
+	}
+	for _, o := range options {
+		if err = o(&client); err != nil {
+			break
+		}
+	}
+	return client, err
+}
+
+// AuthCodeURL creates a URL used to acquire an authorization code.
+func (b Client) AuthCodeURL(ctx context.Context, clientID, redirectURI string, scopes []string, authParams authority.AuthParams) (string, error) {
+	endpoints, err := b.Token.ResolveEndpoints(ctx, authParams.AuthorityInfo, "")
+	if err != nil {
+		return "", err
+	}
+
+	baseURL, err := url.Parse(endpoints.AuthorizationEndpoint)
+	if err != nil {
+		return "", err
+	}
+
+	claims, err := authParams.MergeCapabilitiesAndClaims()
+	if err != nil {
+		return "", err
+	}
+
+	v := url.Values{}
+	v.Add("client_id", clientID)
+	v.Add("response_type", "code")
+	v.Add("redirect_uri", redirectURI)
+	v.Add("scope", strings.Join(scopes, scopeSeparator))
+	if authParams.State != "" {
+		v.Add("state", authParams.State)
+	}
+	if claims != "" {
+		v.Add("claims", claims)
+	}
+	if authParams.CodeChallenge != "" {
+		v.Add("code_challenge", authParams.CodeChallenge)
+	}
+	if authParams.CodeChallengeMethod != "" {
+		v.Add("code_challenge_method", authParams.CodeChallengeMethod)
+	}
+	if authParams.LoginHint != "" {
+		v.Add("login_hint", authParams.LoginHint)
+	}
+	if authParams.Prompt != "" {
+		v.Add("prompt", authParams.Prompt)
+	}
+	if authParams.DomainHint != "" {
+		v.Add("domain_hint", authParams.DomainHint)
+	}
+	// These were left over from an implementation that didn't use any of them. We may
+	// need to add them later, but as of now they aren't needed.
+	/*
+		if p.ResponseMode != "" {
+			urlParams.Add("response_mode", p.ResponseMode)
+		}
+	*/
+	baseURL.RawQuery = v.Encode()
+	return baseURL.String(), nil
+}
+
+func (b Client) AcquireTokenSilent(ctx context.Context, silent AcquireTokenSilentParameters) (AuthResult, error) {
+	ar := AuthResult{}
+	// when tenant == "", the caller didn't specify a tenant and WithTenant will choose the client's configured tenant
+	tenant := silent.TenantID
+	authParams, err := b.AuthParams.WithTenant(tenant)
+	if err != nil {
+		return ar, err
+	}
+	authParams.Scopes = silent.Scopes
+	authParams.HomeAccountID = silent.Account.HomeAccountID
+	authParams.AuthorizationType = silent.AuthorizationType
+	authParams.Claims = silent.Claims
+	authParams.UserAssertion = silent.UserAssertion
+
+	m := b.pmanager
+	if authParams.AuthorizationType != authority.ATOnBehalfOf {
+		authParams.AuthorizationType = authority.ATRefreshToken
+		m = b.manager
+	}
+	if b.cacheAccessor != nil {
+		key := authParams.CacheKey(silent.IsAppCache)
+		b.cacheAccessorMu.RLock()
+		err = b.cacheAccessor.Replace(ctx, m, cache.ReplaceHints{PartitionKey: key})
+		b.cacheAccessorMu.RUnlock()
+	}
+	if err != nil {
+		return ar, err
+	}
+	storageTokenResponse, err := m.Read(ctx, authParams)
+	if err != nil {
+		return ar, err
+	}
+
+	// ignore cached access tokens when given claims
+	if silent.Claims == "" {
+		ar, err = AuthResultFromStorage(storageTokenResponse)
+		if err == nil {
+			return ar, err
+		}
+	}
+
+	// redeem a cached refresh token, if available
+	if reflect.ValueOf(storageTokenResponse.RefreshToken).IsZero() {
+		return ar, errors.New("no token found")
+	}
+	var cc *accesstokens.Credential
+	if silent.RequestType == accesstokens.ATConfidential {
+		cc = silent.Credential
+	}
+	token, err := b.Token.Refresh(ctx, silent.RequestType, authParams, cc, storageTokenResponse.RefreshToken)
+	if err != nil {
+		return ar, err
+	}
+	return b.AuthResultFromToken(ctx, authParams, token, true)
+}
+
+func (b Client) AcquireTokenByAuthCode(ctx context.Context, authCodeParams AcquireTokenAuthCodeParameters) (AuthResult, error) {
+	authParams, err := b.AuthParams.WithTenant(authCodeParams.TenantID)
+	if err != nil {
+		return AuthResult{}, err
+	}
+	authParams.Claims = authCodeParams.Claims
+	authParams.Scopes = authCodeParams.Scopes
+	authParams.Redirecturi = authCodeParams.RedirectURI
+	authParams.AuthorizationType = authority.ATAuthCode
+
+	var cc *accesstokens.Credential
+	if authCodeParams.AppType == accesstokens.ATConfidential {
+		cc = authCodeParams.Credential
+		authParams.IsConfidentialClient = true
+	}
+
+	req, err := accesstokens.NewCodeChallengeRequest(authParams, authCodeParams.AppType, cc, authCodeParams.Code, authCodeParams.Challenge)
+	if err != nil {
+		return AuthResult{}, err
+	}
+
+	token, err := b.Token.AuthCode(ctx, req)
+	if err != nil {
+		return AuthResult{}, err
+	}
+
+	return b.AuthResultFromToken(ctx, authParams, token, true)
+}
+
+// AcquireTokenOnBehalfOf acquires a security token for an app using a middle-tier app's access token.
+func (b Client) AcquireTokenOnBehalfOf(ctx context.Context, onBehalfOfParams AcquireTokenOnBehalfOfParameters) (AuthResult, error) {
+	var ar AuthResult
+	silentParameters := AcquireTokenSilentParameters{
+		Scopes:            onBehalfOfParams.Scopes,
+		RequestType:       accesstokens.ATConfidential,
+		Credential:        onBehalfOfParams.Credential,
+		UserAssertion:     onBehalfOfParams.UserAssertion,
+		AuthorizationType: authority.ATOnBehalfOf,
+		TenantID:          onBehalfOfParams.TenantID,
+		Claims:            onBehalfOfParams.Claims,
+	}
+	ar, err := b.AcquireTokenSilent(ctx, silentParameters)
+	if err == nil {
+		return ar, err
+	}
+	authParams, err := b.AuthParams.WithTenant(onBehalfOfParams.TenantID)
+	if err != nil {
+		return AuthResult{}, err
+	}
+	authParams.AuthorizationType = authority.ATOnBehalfOf
+	authParams.Claims = onBehalfOfParams.Claims
+	authParams.Scopes = onBehalfOfParams.Scopes
+	authParams.UserAssertion = onBehalfOfParams.UserAssertion
+	token, err := b.Token.OnBehalfOf(ctx, authParams, onBehalfOfParams.Credential)
+	if err == nil {
+		ar, err = b.AuthResultFromToken(ctx, authParams, token, true)
+	}
+	return ar, err
+}
+
+func (b Client) AuthResultFromToken(ctx context.Context, authParams authority.AuthParams, token accesstokens.TokenResponse, cacheWrite bool) (AuthResult, error) {
+	if !cacheWrite {
+		return NewAuthResult(token, shared.Account{})
+	}
+	var m manager = b.manager
+	if authParams.AuthorizationType == authority.ATOnBehalfOf {
+		m = b.pmanager
+	}
+	key := token.CacheKey(authParams)
+	if b.cacheAccessor != nil {
+		b.cacheAccessorMu.Lock()
+		defer b.cacheAccessorMu.Unlock()
+		err := b.cacheAccessor.Replace(ctx, m, cache.ReplaceHints{PartitionKey: key})
+		if err != nil {
+			return AuthResult{}, err
+		}
+	}
+	account, err := m.Write(authParams, token)
+	if err != nil {
+		return AuthResult{}, err
+	}
+	ar, err := NewAuthResult(token, account)
+	if err == nil && b.cacheAccessor != nil {
+		err = b.cacheAccessor.Export(ctx, b.manager, cache.ExportHints{PartitionKey: key})
+	}
+	return ar, err
+}
+
+func (b Client) AllAccounts(ctx context.Context) ([]shared.Account, error) {
+	if b.cacheAccessor != nil {
+		b.cacheAccessorMu.RLock()
+		defer b.cacheAccessorMu.RUnlock()
+		key := b.AuthParams.CacheKey(false)
+		err := b.cacheAccessor.Replace(ctx, b.manager, cache.ReplaceHints{PartitionKey: key})
+		if err != nil {
+			return nil, err
+		}
+	}
+	return b.manager.AllAccounts(), nil
+}
+
+func (b Client) Account(ctx context.Context, homeAccountID string) (shared.Account, error) {
+	if b.cacheAccessor != nil {
+		b.cacheAccessorMu.RLock()
+		defer b.cacheAccessorMu.RUnlock()
+		authParams := b.AuthParams // This is a copy, as we don't have a pointer receiver and .AuthParams is not a pointer.
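+		// Mutating this copy below affects only the current lookup; the Client's shared AuthParams are untouched.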
+		authParams.AuthorizationType = authority.AccountByID
+		authParams.HomeAccountID = homeAccountID
+		key := b.AuthParams.CacheKey(false)
+		err := b.cacheAccessor.Replace(ctx, b.manager, cache.ReplaceHints{PartitionKey: key})
+		if err != nil {
+			return shared.Account{}, err
+		}
+	}
+	return b.manager.Account(homeAccountID), nil
+}
+
+// RemoveAccount removes all the ATs, RTs and IDTs from the cache associated with this account.
+func (b Client) RemoveAccount(ctx context.Context, account shared.Account) error {
+	if b.cacheAccessor == nil {
+		b.manager.RemoveAccount(account, b.AuthParams.ClientID)
+		return nil
+	}
+	b.cacheAccessorMu.Lock()
+	defer b.cacheAccessorMu.Unlock()
+	key := b.AuthParams.CacheKey(false)
+	err := b.cacheAccessor.Replace(ctx, b.manager, cache.ReplaceHints{PartitionKey: key})
+	if err != nil {
+		return err
+	}
+	b.manager.RemoveAccount(account, b.AuthParams.ClientID)
+	return b.cacheAccessor.Export(ctx, b.manager, cache.ExportHints{PartitionKey: key})
+}
diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/internal/storage/items.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/internal/storage/items.go
new file mode 100644
index 000000000000..5d4c9f1d1f3a
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/internal/storage/items.go
@@ -0,0 +1,203 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+package storage
+
+import (
+	"errors"
+	"fmt"
+	"reflect"
+	"strings"
+	"time"
+
+	internalTime "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/types/time"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/shared"
+)
+
+// Contract is the JSON structure that is written to any storage medium when serializing
+// the internal cache. This design is shared between MSAL versions in many languages.
+// This cannot be changed without design that includes other SDKs.
+type Contract struct {
+	AccessTokens  map[string]AccessToken               `json:"AccessToken,omitempty"`
+	RefreshTokens map[string]accesstokens.RefreshToken `json:"RefreshToken,omitempty"`
+	IDTokens      map[string]IDToken                   `json:"IdToken,omitempty"`
+	Accounts      map[string]shared.Account            `json:"Account,omitempty"`
+	AppMetaData   map[string]AppMetaData               `json:"AppMetadata,omitempty"`
+
+	AdditionalFields map[string]interface{}
+}
+
+// InMemoryContract is the in-memory, partitioned representation of the cache used by
+// PartitionedManager. Entries are grouped by a partition key (for example, a user
+// assertion hash for on-behalf-of flows, or a home account ID).
+type InMemoryContract struct {
+	AccessTokensPartition  map[string]map[string]AccessToken
+	RefreshTokensPartition map[string]map[string]accesstokens.RefreshToken
+	IDTokensPartition      map[string]map[string]IDToken
+	AccountsPartition      map[string]map[string]shared.Account
+	AppMetaData            map[string]AppMetaData
+}
+
+// NewInMemoryContract is the constructor for InMemoryContract.
+func NewInMemoryContract() *InMemoryContract { + return &InMemoryContract{ + AccessTokensPartition: map[string]map[string]AccessToken{}, + RefreshTokensPartition: map[string]map[string]accesstokens.RefreshToken{}, + IDTokensPartition: map[string]map[string]IDToken{}, + AccountsPartition: map[string]map[string]shared.Account{}, + AppMetaData: map[string]AppMetaData{}, + } +} + +// NewContract is the constructor for Contract. +func NewContract() *Contract { + return &Contract{ + AccessTokens: map[string]AccessToken{}, + RefreshTokens: map[string]accesstokens.RefreshToken{}, + IDTokens: map[string]IDToken{}, + Accounts: map[string]shared.Account{}, + AppMetaData: map[string]AppMetaData{}, + AdditionalFields: map[string]interface{}{}, + } +} + +// AccessToken is the JSON representation of a MSAL access token for encoding to storage. +type AccessToken struct { + HomeAccountID string `json:"home_account_id,omitempty"` + Environment string `json:"environment,omitempty"` + Realm string `json:"realm,omitempty"` + CredentialType string `json:"credential_type,omitempty"` + ClientID string `json:"client_id,omitempty"` + Secret string `json:"secret,omitempty"` + Scopes string `json:"target,omitempty"` + ExpiresOn internalTime.Unix `json:"expires_on,omitempty"` + ExtendedExpiresOn internalTime.Unix `json:"extended_expires_on,omitempty"` + CachedAt internalTime.Unix `json:"cached_at,omitempty"` + UserAssertionHash string `json:"user_assertion_hash,omitempty"` + + AdditionalFields map[string]interface{} +} + +// NewAccessToken is the constructor for AccessToken. +func NewAccessToken(homeID, env, realm, clientID string, cachedAt, expiresOn, extendedExpiresOn time.Time, scopes, token string) AccessToken { + return AccessToken{ + HomeAccountID: homeID, + Environment: env, + Realm: realm, + CredentialType: "AccessToken", + ClientID: clientID, + Secret: token, + Scopes: scopes, + CachedAt: internalTime.Unix{T: cachedAt.UTC()}, + ExpiresOn: internalTime.Unix{T: expiresOn.UTC()}, + ExtendedExpiresOn: internalTime.Unix{T: extendedExpiresOn.UTC()}, + } +} + +// Key outputs the key that can be used to uniquely look up this entry in a map. +func (a AccessToken) Key() string { + key := strings.Join( + []string{a.HomeAccountID, a.Environment, a.CredentialType, a.ClientID, a.Realm, a.Scopes}, + shared.CacheKeySeparator, + ) + return strings.ToLower(key) +} + +// FakeValidate enables tests to fake access token validation +var FakeValidate func(AccessToken) error + +// Validate validates that this AccessToken can be used. +func (a AccessToken) Validate() error { + if FakeValidate != nil { + return FakeValidate(a) + } + if a.CachedAt.T.After(time.Now()) { + return errors.New("access token isn't valid, it was cached at a future time") + } + if a.ExpiresOn.T.Before(time.Now().Add(5 * time.Minute)) { + return fmt.Errorf("access token is expired") + } + if a.CachedAt.T.IsZero() { + return fmt.Errorf("access token does not have CachedAt set") + } + return nil +} + +// IDToken is the JSON representation of an MSAL id token for encoding to storage. +type IDToken struct { + HomeAccountID string `json:"home_account_id,omitempty"` + Environment string `json:"environment,omitempty"` + Realm string `json:"realm,omitempty"` + CredentialType string `json:"credential_type,omitempty"` + ClientID string `json:"client_id,omitempty"` + Secret string `json:"secret,omitempty"` + UserAssertionHash string `json:"user_assertion_hash,omitempty"` + AdditionalFields map[string]interface{} +} + +// IsZero determines if IDToken is the zero value. 
+func (i IDToken) IsZero() bool { + v := reflect.ValueOf(i) + for i := 0; i < v.NumField(); i++ { + field := v.Field(i) + if !field.IsZero() { + switch field.Kind() { + case reflect.Map, reflect.Slice: + if field.Len() == 0 { + continue + } + } + return false + } + } + return true +} + +// NewIDToken is the constructor for IDToken. +func NewIDToken(homeID, env, realm, clientID, idToken string) IDToken { + return IDToken{ + HomeAccountID: homeID, + Environment: env, + Realm: realm, + CredentialType: "IDToken", + ClientID: clientID, + Secret: idToken, + } +} + +// Key outputs the key that can be used to uniquely look up this entry in a map. +func (id IDToken) Key() string { + key := strings.Join( + []string{id.HomeAccountID, id.Environment, id.CredentialType, id.ClientID, id.Realm}, + shared.CacheKeySeparator, + ) + return strings.ToLower(key) +} + +// AppMetaData is the JSON representation of application metadata for encoding to storage. +type AppMetaData struct { + FamilyID string `json:"family_id,omitempty"` + ClientID string `json:"client_id,omitempty"` + Environment string `json:"environment,omitempty"` + + AdditionalFields map[string]interface{} +} + +// NewAppMetaData is the constructor for AppMetaData. +func NewAppMetaData(familyID, clientID, environment string) AppMetaData { + return AppMetaData{ + FamilyID: familyID, + ClientID: clientID, + Environment: environment, + } +} + +// Key outputs the key that can be used to uniquely look up this entry in a map. +func (a AppMetaData) Key() string { + key := strings.Join( + []string{"AppMetaData", a.Environment, a.ClientID}, + shared.CacheKeySeparator, + ) + return strings.ToLower(key) +} diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/internal/storage/partitioned_storage.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/internal/storage/partitioned_storage.go new file mode 100644 index 000000000000..5e1cae0b8a34 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/internal/storage/partitioned_storage.go @@ -0,0 +1,436 @@ +// Copyright (c) Microsoft Corporation. +// Licensed under the MIT license. + +package storage + +import ( + "context" + "errors" + "fmt" + "strings" + "sync" + "time" + + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/shared" +) + +// PartitionedManager is a partitioned in-memory cache of access tokens, accounts and meta data. +type PartitionedManager struct { + contract *InMemoryContract + contractMu sync.RWMutex + requests aadInstanceDiscoveryer // *oauth.Token + + aadCacheMu sync.RWMutex + aadCache map[string]authority.InstanceDiscoveryMetadata +} + +// NewPartitionedManager is the constructor for PartitionedManager. +func NewPartitionedManager(requests *oauth.Client) *PartitionedManager { + m := &PartitionedManager{requests: requests, aadCache: make(map[string]authority.InstanceDiscoveryMetadata)} + m.contract = NewInMemoryContract() + return m +} + +// Read reads a storage token from the cache if it exists. 
+func (m *PartitionedManager) Read(ctx context.Context, authParameters authority.AuthParams) (TokenResponse, error) { + tr := TokenResponse{} + realm := authParameters.AuthorityInfo.Tenant + clientID := authParameters.ClientID + scopes := authParameters.Scopes + + // fetch metadata if instanceDiscovery is enabled + aliases := []string{authParameters.AuthorityInfo.Host} + if !authParameters.AuthorityInfo.InstanceDiscoveryDisabled { + metadata, err := m.getMetadataEntry(ctx, authParameters.AuthorityInfo) + if err != nil { + return TokenResponse{}, err + } + aliases = metadata.Aliases + } + + userAssertionHash := authParameters.AssertionHash() + partitionKeyFromRequest := userAssertionHash + + // errors returned by read* methods indicate a cache miss and are therefore non-fatal. We continue populating + // TokenResponse fields so that e.g. lack of an ID token doesn't prevent the caller from receiving a refresh token. + accessToken, err := m.readAccessToken(aliases, realm, clientID, userAssertionHash, scopes, partitionKeyFromRequest) + if err == nil { + tr.AccessToken = accessToken + } + idToken, err := m.readIDToken(aliases, realm, clientID, userAssertionHash, getPartitionKeyIDTokenRead(accessToken)) + if err == nil { + tr.IDToken = idToken + } + + if appMetadata, err := m.readAppMetaData(aliases, clientID); err == nil { + // we need the family ID to identify the correct refresh token, if any + familyID := appMetadata.FamilyID + refreshToken, err := m.readRefreshToken(aliases, familyID, clientID, userAssertionHash, partitionKeyFromRequest) + if err == nil { + tr.RefreshToken = refreshToken + } + } + + account, err := m.readAccount(aliases, realm, userAssertionHash, idToken.HomeAccountID) + if err == nil { + tr.Account = account + } + return tr, nil +} + +// Write writes a token response to the cache and returns the account information the token is stored with. +func (m *PartitionedManager) Write(authParameters authority.AuthParams, tokenResponse accesstokens.TokenResponse) (shared.Account, error) { + authParameters.HomeAccountID = tokenResponse.HomeAccountID() + homeAccountID := authParameters.HomeAccountID + environment := authParameters.AuthorityInfo.Host + realm := authParameters.AuthorityInfo.Tenant + clientID := authParameters.ClientID + target := strings.Join(tokenResponse.GrantedScopes.Slice, scopeSeparator) + userAssertionHash := authParameters.AssertionHash() + cachedAt := time.Now() + + var account shared.Account + + if len(tokenResponse.RefreshToken) > 0 { + refreshToken := accesstokens.NewRefreshToken(homeAccountID, environment, clientID, tokenResponse.RefreshToken, tokenResponse.FamilyID) + if authParameters.AuthorizationType == authority.ATOnBehalfOf { + refreshToken.UserAssertionHash = userAssertionHash + } + if err := m.writeRefreshToken(refreshToken, getPartitionKeyRefreshToken(refreshToken)); err != nil { + return account, err + } + } + + if len(tokenResponse.AccessToken) > 0 { + accessToken := NewAccessToken( + homeAccountID, + environment, + realm, + clientID, + cachedAt, + tokenResponse.ExpiresOn.T, + tokenResponse.ExtExpiresOn.T, + target, + tokenResponse.AccessToken, + ) + if authParameters.AuthorizationType == authority.ATOnBehalfOf { + accessToken.UserAssertionHash = userAssertionHash // get Hash method on this + } + + // Since we have a valid access token, cache it before moving on. 
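+		// Validation failures abort the write here: an invalid (e.g. expired) token is returned as an error rather than cached.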
+ if err := accessToken.Validate(); err == nil { + if err := m.writeAccessToken(accessToken, getPartitionKeyAccessToken(accessToken)); err != nil { + return account, err + } + } else { + return shared.Account{}, err + } + } + + idTokenJwt := tokenResponse.IDToken + if !idTokenJwt.IsZero() { + idToken := NewIDToken(homeAccountID, environment, realm, clientID, idTokenJwt.RawToken) + if authParameters.AuthorizationType == authority.ATOnBehalfOf { + idToken.UserAssertionHash = userAssertionHash + } + if err := m.writeIDToken(idToken, getPartitionKeyIDToken(idToken)); err != nil { + return shared.Account{}, err + } + + localAccountID := idTokenJwt.LocalAccountID() + authorityType := authParameters.AuthorityInfo.AuthorityType + + preferredUsername := idTokenJwt.UPN + if idTokenJwt.PreferredUsername != "" { + preferredUsername = idTokenJwt.PreferredUsername + } + + account = shared.NewAccount( + homeAccountID, + environment, + realm, + localAccountID, + authorityType, + preferredUsername, + ) + if authParameters.AuthorizationType == authority.ATOnBehalfOf { + account.UserAssertionHash = userAssertionHash + } + if err := m.writeAccount(account, getPartitionKeyAccount(account)); err != nil { + return shared.Account{}, err + } + } + + AppMetaData := NewAppMetaData(tokenResponse.FamilyID, clientID, environment) + + if err := m.writeAppMetaData(AppMetaData); err != nil { + return shared.Account{}, err + } + return account, nil +} + +func (m *PartitionedManager) getMetadataEntry(ctx context.Context, authorityInfo authority.Info) (authority.InstanceDiscoveryMetadata, error) { + md, err := m.aadMetadataFromCache(ctx, authorityInfo) + if err != nil { + // not in the cache, retrieve it + md, err = m.aadMetadata(ctx, authorityInfo) + } + return md, err +} + +func (m *PartitionedManager) aadMetadataFromCache(ctx context.Context, authorityInfo authority.Info) (authority.InstanceDiscoveryMetadata, error) { + m.aadCacheMu.RLock() + defer m.aadCacheMu.RUnlock() + metadata, ok := m.aadCache[authorityInfo.Host] + if ok { + return metadata, nil + } + return metadata, errors.New("not found") +} + +func (m *PartitionedManager) aadMetadata(ctx context.Context, authorityInfo authority.Info) (authority.InstanceDiscoveryMetadata, error) { + discoveryResponse, err := m.requests.AADInstanceDiscovery(ctx, authorityInfo) + if err != nil { + return authority.InstanceDiscoveryMetadata{}, err + } + + m.aadCacheMu.Lock() + defer m.aadCacheMu.Unlock() + + for _, metadataEntry := range discoveryResponse.Metadata { + for _, aliasedAuthority := range metadataEntry.Aliases { + m.aadCache[aliasedAuthority] = metadataEntry + } + } + if _, ok := m.aadCache[authorityInfo.Host]; !ok { + m.aadCache[authorityInfo.Host] = authority.InstanceDiscoveryMetadata{ + PreferredNetwork: authorityInfo.Host, + PreferredCache: authorityInfo.Host, + } + } + return m.aadCache[authorityInfo.Host], nil +} + +func (m *PartitionedManager) readAccessToken(envAliases []string, realm, clientID, userAssertionHash string, scopes []string, partitionKey string) (AccessToken, error) { + m.contractMu.RLock() + defer m.contractMu.RUnlock() + if accessTokens, ok := m.contract.AccessTokensPartition[partitionKey]; ok { + // TODO: linear search (over a map no less) is slow for a large number (thousands) of tokens. + // this shows up as the dominating node in a profile. for real-world scenarios this likely isn't + // an issue, however if it does become a problem then we know where to look. 
+		for _, at := range accessTokens {
+			if at.Realm == realm && at.ClientID == clientID && at.UserAssertionHash == userAssertionHash {
+				if checkAlias(at.Environment, envAliases) {
+					if isMatchingScopes(scopes, at.Scopes) {
+						return at, nil
+					}
+				}
+			}
+		}
+	}
+	return AccessToken{}, fmt.Errorf("access token not found")
+}
+
+func (m *PartitionedManager) writeAccessToken(accessToken AccessToken, partitionKey string) error {
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	key := accessToken.Key()
+	if m.contract.AccessTokensPartition[partitionKey] == nil {
+		m.contract.AccessTokensPartition[partitionKey] = make(map[string]AccessToken)
+	}
+	m.contract.AccessTokensPartition[partitionKey][key] = accessToken
+	return nil
+}
+
+func matchFamilyRefreshTokenObo(rt accesstokens.RefreshToken, userAssertionHash string, envAliases []string) bool {
+	return rt.UserAssertionHash == userAssertionHash && checkAlias(rt.Environment, envAliases) && rt.FamilyID != ""
+}
+
+func matchClientIDRefreshTokenObo(rt accesstokens.RefreshToken, userAssertionHash string, envAliases []string, clientID string) bool {
+	return rt.UserAssertionHash == userAssertionHash && checkAlias(rt.Environment, envAliases) && rt.ClientID == clientID
+}
+
+func (m *PartitionedManager) readRefreshToken(envAliases []string, familyID, clientID, userAssertionHash, partitionKey string) (accesstokens.RefreshToken, error) {
+	byFamily := func(rt accesstokens.RefreshToken) bool {
+		return matchFamilyRefreshTokenObo(rt, userAssertionHash, envAliases)
+	}
+	byClient := func(rt accesstokens.RefreshToken) bool {
+		return matchClientIDRefreshTokenObo(rt, userAssertionHash, envAliases, clientID)
+	}
+
+	var matchers []func(rt accesstokens.RefreshToken) bool
+	if familyID == "" {
+		matchers = []func(rt accesstokens.RefreshToken) bool{
+			byClient, byFamily,
+		}
+	} else {
+		matchers = []func(rt accesstokens.RefreshToken) bool{
+			byFamily, byClient,
+		}
+	}
+
+	// TODO(keegan): All the tests here pass, but Bogdan says this is
+	// more complicated. I'm opening an issue for this to have him
+	// review the tests and suggest tests that would break this so
+	// we can re-write against good tests. His comments follow:
+	// The algorithm is a bit more complex than this, I assume there are some tests covering everything. I would keep the order as is.
+	// The algorithm is:
+	// If the application is NOT part of the family, search by client_ID
+	// If the app is part of the family, or if we DO NOT KNOW whether it's part of the family, search by family ID, then by client_id (we will know if an app is part of the family after the first token response).
+	// https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/blob/311fe8b16e7c293462806f397e189a6aa1159769/src/client/Microsoft.Identity.Client/Internal/Requests/Silent/CacheSilentStrategy.cs#L95
+	m.contractMu.RLock()
+	defer m.contractMu.RUnlock()
+	for _, matcher := range matchers {
+		for _, rt := range m.contract.RefreshTokensPartition[partitionKey] {
+			if matcher(rt) {
+				return rt, nil
+			}
+		}
+	}
+
+	return accesstokens.RefreshToken{}, fmt.Errorf("refresh token not found")
+}
+
+func (m *PartitionedManager) writeRefreshToken(refreshToken accesstokens.RefreshToken, partitionKey string) error {
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	key := refreshToken.Key()
+	// Initialize the refresh-token partition before writing to it.
+	if m.contract.RefreshTokensPartition[partitionKey] == nil {
+		m.contract.RefreshTokensPartition[partitionKey] = make(map[string]accesstokens.RefreshToken)
+	}
+	m.contract.RefreshTokensPartition[partitionKey][key] = refreshToken
+	return nil
+}
+
+func (m *PartitionedManager) readIDToken(envAliases []string, realm, clientID, userAssertionHash, partitionKey string) (IDToken, error) {
+	m.contractMu.RLock()
+	defer m.contractMu.RUnlock()
+	for _, idt := range m.contract.IDTokensPartition[partitionKey] {
+		if idt.Realm == realm && idt.ClientID == clientID && idt.UserAssertionHash == userAssertionHash {
+			if checkAlias(idt.Environment, envAliases) {
+				return idt, nil
+			}
+		}
+	}
+	return IDToken{}, fmt.Errorf("token not found")
+}
+
+func (m *PartitionedManager) writeIDToken(idToken IDToken, partitionKey string) error {
+	key := idToken.Key()
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	if m.contract.IDTokensPartition[partitionKey] == nil {
+		m.contract.IDTokensPartition[partitionKey] = make(map[string]IDToken)
+	}
+	m.contract.IDTokensPartition[partitionKey][key] = idToken
+	return nil
+}
+
+func (m *PartitionedManager) readAccount(envAliases []string, realm, UserAssertionHash, partitionKey string) (shared.Account, error) {
+	m.contractMu.RLock()
+	defer m.contractMu.RUnlock()
+
+	// You might ask why, if cache.Accounts is a map, we would loop through all of these instead of using a key.
+	// We only use a map because the storage contract shared between all language implementations says use a map.
+	// We can't change that. The other reason is that the keys are made using a specific "env", but here we are allowing
+	// a match in multiple envs (envAlias). That means we either need to hash each possible key and do the lookup
+	// or just statically check. Since the design is to have a storage.Manager per user, the amount of keys stored
+	// is really low (say 2). Each hash is more expensive than the entire iteration.
+	for _, acc := range m.contract.AccountsPartition[partitionKey] {
+		if checkAlias(acc.Environment, envAliases) && acc.UserAssertionHash == UserAssertionHash && acc.Realm == realm {
+			return acc, nil
+		}
+	}
+	return shared.Account{}, fmt.Errorf("account not found")
+}
+
+func (m *PartitionedManager) writeAccount(account shared.Account, partitionKey string) error {
+	key := account.Key()
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	if m.contract.AccountsPartition[partitionKey] == nil {
+		m.contract.AccountsPartition[partitionKey] = make(map[string]shared.Account)
+	}
+	m.contract.AccountsPartition[partitionKey][key] = account
+	return nil
+}
+
+func (m *PartitionedManager) readAppMetaData(envAliases []string, clientID string) (AppMetaData, error) {
+	m.contractMu.RLock()
+	defer m.contractMu.RUnlock()
+
+	for _, app := range m.contract.AppMetaData {
+		if checkAlias(app.Environment, envAliases) && app.ClientID == clientID {
+			return app, nil
+		}
+	}
+	return AppMetaData{}, fmt.Errorf("not found")
+}
+
+func (m *PartitionedManager) writeAppMetaData(AppMetaData AppMetaData) error {
+	key := AppMetaData.Key()
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	m.contract.AppMetaData[key] = AppMetaData
+	return nil
+}
+
+// update updates the internal cache object. This is for use in tests, other uses are not
+// supported.
+func (m *PartitionedManager) update(cache *InMemoryContract) {
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	m.contract = cache
+}
+
+// Marshal implements cache.Marshaler.
+func (m *PartitionedManager) Marshal() ([]byte, error) {
+	// Hold the read lock so a concurrent write can't race the serialization.
+	m.contractMu.RLock()
+	defer m.contractMu.RUnlock()
+	return json.Marshal(m.contract)
+}
+
+// Unmarshal implements cache.Unmarshaler.
+func (m *PartitionedManager) Unmarshal(b []byte) error {
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+
+	contract := NewInMemoryContract()
+
+	err := json.Unmarshal(b, contract)
+	if err != nil {
+		return err
+	}
+
+	m.contract = contract
+
+	return nil
+}
+
+func getPartitionKeyAccessToken(item AccessToken) string {
+	if item.UserAssertionHash != "" {
+		return item.UserAssertionHash
+	}
+	return item.HomeAccountID
+}
+
+func getPartitionKeyRefreshToken(item accesstokens.RefreshToken) string {
+	if item.UserAssertionHash != "" {
+		return item.UserAssertionHash
+	}
+	return item.HomeAccountID
+}
+
+func getPartitionKeyIDToken(item IDToken) string {
+	return item.HomeAccountID
+}
+
+func getPartitionKeyAccount(item shared.Account) string {
+	return item.HomeAccountID
+}
+
+func getPartitionKeyIDTokenRead(item AccessToken) string {
+	return item.HomeAccountID
+}
diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/internal/storage/storage.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/internal/storage/storage.go
new file mode 100644
index 000000000000..d3a39e005ca3
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/internal/storage/storage.go
@@ -0,0 +1,516 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+// Package storage holds all cached token information for MSAL. This storage can be
+// augmented with third-party extensions to provide persistent storage. In that case,
+// reads and writes in upper packages will call Marshal() to take the entire in-memory
+// representation and write it to storage and Unmarshal() to update the entire in-memory
+// storage with what was in the persistent storage.
The persistent storage can only be +// accessed in this way because multiple MSAL clients written in multiple languages can +// access the same storage and must adhere to the same method that was defined +// previously. +package storage + +import ( + "context" + "errors" + "fmt" + "strings" + "sync" + "time" + + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/shared" +) + +// aadInstanceDiscoveryer allows faking in tests. +// It is implemented in production by ops/authority.Client +type aadInstanceDiscoveryer interface { + AADInstanceDiscovery(ctx context.Context, authorityInfo authority.Info) (authority.InstanceDiscoveryResponse, error) +} + +// TokenResponse mimics a token response that was pulled from the cache. +type TokenResponse struct { + RefreshToken accesstokens.RefreshToken + IDToken IDToken // *Credential + AccessToken AccessToken + Account shared.Account +} + +// Manager is an in-memory cache of access tokens, accounts and meta data. This data is +// updated on read/write calls. Unmarshal() replaces all data stored here with whatever +// was given to it on each call. +type Manager struct { + contract *Contract + contractMu sync.RWMutex + requests aadInstanceDiscoveryer // *oauth.Token + + aadCacheMu sync.RWMutex + aadCache map[string]authority.InstanceDiscoveryMetadata +} + +// New is the constructor for Manager. +func New(requests *oauth.Client) *Manager { + m := &Manager{requests: requests, aadCache: make(map[string]authority.InstanceDiscoveryMetadata)} + m.contract = NewContract() + return m +} + +func checkAlias(alias string, aliases []string) bool { + for _, v := range aliases { + if alias == v { + return true + } + } + return false +} + +func isMatchingScopes(scopesOne []string, scopesTwo string) bool { + newScopesTwo := strings.Split(scopesTwo, scopeSeparator) + scopeCounter := 0 + for _, scope := range scopesOne { + for _, otherScope := range newScopesTwo { + if strings.EqualFold(scope, otherScope) { + scopeCounter++ + continue + } + } + } + return scopeCounter == len(scopesOne) +} + +// Read reads a storage token from the cache if it exists. +func (m *Manager) Read(ctx context.Context, authParameters authority.AuthParams) (TokenResponse, error) { + tr := TokenResponse{} + homeAccountID := authParameters.HomeAccountID + realm := authParameters.AuthorityInfo.Tenant + clientID := authParameters.ClientID + scopes := authParameters.Scopes + + // fetch metadata if instanceDiscovery is enabled + aliases := []string{authParameters.AuthorityInfo.Host} + if !authParameters.AuthorityInfo.InstanceDiscoveryDisabled { + metadata, err := m.getMetadataEntry(ctx, authParameters.AuthorityInfo) + if err != nil { + return TokenResponse{}, err + } + aliases = metadata.Aliases + } + + accessToken := m.readAccessToken(homeAccountID, aliases, realm, clientID, scopes) + tr.AccessToken = accessToken + + if homeAccountID == "" { + // caller didn't specify a user, so there's no reason to search for an ID or refresh token + return tr, nil + } + // errors returned by read* methods indicate a cache miss and are therefore non-fatal. We continue populating + // TokenResponse fields so that e.g. 
lack of an ID token doesn't prevent the caller from receiving a refresh token. + idToken, err := m.readIDToken(homeAccountID, aliases, realm, clientID) + if err == nil { + tr.IDToken = idToken + } + + if appMetadata, err := m.readAppMetaData(aliases, clientID); err == nil { + // we need the family ID to identify the correct refresh token, if any + familyID := appMetadata.FamilyID + refreshToken, err := m.readRefreshToken(homeAccountID, aliases, familyID, clientID) + if err == nil { + tr.RefreshToken = refreshToken + } + } + + account, err := m.readAccount(homeAccountID, aliases, realm) + if err == nil { + tr.Account = account + } + return tr, nil +} + +const scopeSeparator = " " + +// Write writes a token response to the cache and returns the account information the token is stored with. +func (m *Manager) Write(authParameters authority.AuthParams, tokenResponse accesstokens.TokenResponse) (shared.Account, error) { + homeAccountID := tokenResponse.HomeAccountID() + environment := authParameters.AuthorityInfo.Host + realm := authParameters.AuthorityInfo.Tenant + clientID := authParameters.ClientID + target := strings.Join(tokenResponse.GrantedScopes.Slice, scopeSeparator) + cachedAt := time.Now() + + var account shared.Account + + if len(tokenResponse.RefreshToken) > 0 { + refreshToken := accesstokens.NewRefreshToken(homeAccountID, environment, clientID, tokenResponse.RefreshToken, tokenResponse.FamilyID) + if err := m.writeRefreshToken(refreshToken); err != nil { + return account, err + } + } + + if len(tokenResponse.AccessToken) > 0 { + accessToken := NewAccessToken( + homeAccountID, + environment, + realm, + clientID, + cachedAt, + tokenResponse.ExpiresOn.T, + tokenResponse.ExtExpiresOn.T, + target, + tokenResponse.AccessToken, + ) + + // Since we have a valid access token, cache it before moving on. 
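+		// Note: unlike PartitionedManager.Write, a token that fails validation here is silently skipped rather than returned as an error.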
+ if err := accessToken.Validate(); err == nil { + if err := m.writeAccessToken(accessToken); err != nil { + return account, err + } + } + } + + idTokenJwt := tokenResponse.IDToken + if !idTokenJwt.IsZero() { + idToken := NewIDToken(homeAccountID, environment, realm, clientID, idTokenJwt.RawToken) + if err := m.writeIDToken(idToken); err != nil { + return shared.Account{}, err + } + + localAccountID := idTokenJwt.LocalAccountID() + authorityType := authParameters.AuthorityInfo.AuthorityType + + preferredUsername := idTokenJwt.UPN + if idTokenJwt.PreferredUsername != "" { + preferredUsername = idTokenJwt.PreferredUsername + } + + account = shared.NewAccount( + homeAccountID, + environment, + realm, + localAccountID, + authorityType, + preferredUsername, + ) + if err := m.writeAccount(account); err != nil { + return shared.Account{}, err + } + } + + AppMetaData := NewAppMetaData(tokenResponse.FamilyID, clientID, environment) + + if err := m.writeAppMetaData(AppMetaData); err != nil { + return shared.Account{}, err + } + return account, nil +} + +func (m *Manager) getMetadataEntry(ctx context.Context, authorityInfo authority.Info) (authority.InstanceDiscoveryMetadata, error) { + md, err := m.aadMetadataFromCache(ctx, authorityInfo) + if err != nil { + // not in the cache, retrieve it + md, err = m.aadMetadata(ctx, authorityInfo) + } + return md, err +} + +func (m *Manager) aadMetadataFromCache(ctx context.Context, authorityInfo authority.Info) (authority.InstanceDiscoveryMetadata, error) { + m.aadCacheMu.RLock() + defer m.aadCacheMu.RUnlock() + metadata, ok := m.aadCache[authorityInfo.Host] + if ok { + return metadata, nil + } + return metadata, errors.New("not found") +} + +func (m *Manager) aadMetadata(ctx context.Context, authorityInfo authority.Info) (authority.InstanceDiscoveryMetadata, error) { + m.aadCacheMu.Lock() + defer m.aadCacheMu.Unlock() + discoveryResponse, err := m.requests.AADInstanceDiscovery(ctx, authorityInfo) + if err != nil { + return authority.InstanceDiscoveryMetadata{}, err + } + + for _, metadataEntry := range discoveryResponse.Metadata { + for _, aliasedAuthority := range metadataEntry.Aliases { + m.aadCache[aliasedAuthority] = metadataEntry + } + } + if _, ok := m.aadCache[authorityInfo.Host]; !ok { + m.aadCache[authorityInfo.Host] = authority.InstanceDiscoveryMetadata{ + PreferredNetwork: authorityInfo.Host, + PreferredCache: authorityInfo.Host, + } + } + return m.aadCache[authorityInfo.Host], nil +} + +func (m *Manager) readAccessToken(homeID string, envAliases []string, realm, clientID string, scopes []string) AccessToken { + m.contractMu.RLock() + defer m.contractMu.RUnlock() + // TODO: linear search (over a map no less) is slow for a large number (thousands) of tokens. + // this shows up as the dominating node in a profile. for real-world scenarios this likely isn't + // an issue, however if it does become a problem then we know where to look. 
+	for _, at := range m.contract.AccessTokens {
+		if at.HomeAccountID == homeID && at.Realm == realm && at.ClientID == clientID {
+			if checkAlias(at.Environment, envAliases) {
+				if isMatchingScopes(scopes, at.Scopes) {
+					return at
+				}
+			}
+		}
+	}
+	return AccessToken{}
+}
+
+func (m *Manager) writeAccessToken(accessToken AccessToken) error {
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	key := accessToken.Key()
+	m.contract.AccessTokens[key] = accessToken
+	return nil
+}
+
+func (m *Manager) readRefreshToken(homeID string, envAliases []string, familyID, clientID string) (accesstokens.RefreshToken, error) {
+	byFamily := func(rt accesstokens.RefreshToken) bool {
+		return matchFamilyRefreshToken(rt, homeID, envAliases)
+	}
+	byClient := func(rt accesstokens.RefreshToken) bool {
+		return matchClientIDRefreshToken(rt, homeID, envAliases, clientID)
+	}
+
+	var matchers []func(rt accesstokens.RefreshToken) bool
+	if familyID == "" {
+		matchers = []func(rt accesstokens.RefreshToken) bool{
+			byClient, byFamily,
+		}
+	} else {
+		matchers = []func(rt accesstokens.RefreshToken) bool{
+			byFamily, byClient,
+		}
+	}
+
+	// TODO(keegan): All the tests here pass, but Bogdan says this is
+	// more complicated. I'm opening an issue for this to have him
+	// review the tests and suggest tests that would break this so
+	// we can re-write against good tests. His comments follow:
+	// The algorithm is a bit more complex than this, I assume there are some tests covering everything. I would keep the order as is.
+	// The algorithm is:
+	// If the application is NOT part of the family, search by client_ID
+	// If the app is part of the family, or if we DO NOT KNOW whether it's part of the family, search by family ID, then by client_id (we will know if an app is part of the family after the first token response).
+	// https://github.com/AzureAD/microsoft-authentication-library-for-dotnet/blob/311fe8b16e7c293462806f397e189a6aa1159769/src/client/Microsoft.Identity.Client/Internal/Requests/Silent/CacheSilentStrategy.cs#L95
+	m.contractMu.RLock()
+	defer m.contractMu.RUnlock()
+	for _, matcher := range matchers {
+		for _, rt := range m.contract.RefreshTokens {
+			if matcher(rt) {
+				return rt, nil
+			}
+		}
+	}
+
+	return accesstokens.RefreshToken{}, fmt.Errorf("refresh token not found")
+}
+
+func matchFamilyRefreshToken(rt accesstokens.RefreshToken, homeID string, envAliases []string) bool {
+	return rt.HomeAccountID == homeID && checkAlias(rt.Environment, envAliases) && rt.FamilyID != ""
+}
+
+func matchClientIDRefreshToken(rt accesstokens.RefreshToken, homeID string, envAliases []string, clientID string) bool {
+	return rt.HomeAccountID == homeID && checkAlias(rt.Environment, envAliases) && rt.ClientID == clientID
+}
+
+func (m *Manager) writeRefreshToken(refreshToken accesstokens.RefreshToken) error {
+	key := refreshToken.Key()
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	m.contract.RefreshTokens[key] = refreshToken
+	return nil
+}
+
+func (m *Manager) readIDToken(homeID string, envAliases []string, realm, clientID string) (IDToken, error) {
+	m.contractMu.RLock()
+	defer m.contractMu.RUnlock()
+	for _, idt := range m.contract.IDTokens {
+		if idt.HomeAccountID == homeID && idt.Realm == realm && idt.ClientID == clientID {
+			if checkAlias(idt.Environment, envAliases) {
+				return idt, nil
+			}
+		}
+	}
+	return IDToken{}, fmt.Errorf("token not found")
+}
+
+func (m *Manager) writeIDToken(idToken IDToken) error {
+	key := idToken.Key()
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	m.contract.IDTokens[key] = idToken
+	return nil
+}
+
+func (m *Manager) AllAccounts() []shared.Account {
+	m.contractMu.RLock()
+	defer m.contractMu.RUnlock()
+
+	var accounts []shared.Account
+	for _, v := range m.contract.Accounts {
+		accounts = append(accounts, v)
+	}
+
+	return accounts
+}
+
+func (m *Manager) Account(homeAccountID string) shared.Account {
+	m.contractMu.RLock()
+	defer m.contractMu.RUnlock()
+
+	for _, v := range m.contract.Accounts {
+		if v.HomeAccountID == homeAccountID {
+			return v
+		}
+	}
+
+	return shared.Account{}
+}
+
+func (m *Manager) readAccount(homeAccountID string, envAliases []string, realm string) (shared.Account, error) {
+	m.contractMu.RLock()
+	defer m.contractMu.RUnlock()
+
+	// You might ask why, if cache.Accounts is a map, we would loop through all of these instead of using a key.
+	// We only use a map because the storage contract shared between all language implementations says use a map.
+	// We can't change that. The other reason is that the keys are made using a specific "env", but here we are allowing
+	// a match in multiple envs (envAlias). That means we either need to hash each possible key and do the lookup
+	// or just statically check. Since the design is to have a storage.Manager per user, the amount of keys stored
+	// is really low (say 2). Each hash is more expensive than the entire iteration.
+	for _, acc := range m.contract.Accounts {
+		if acc.HomeAccountID == homeAccountID && checkAlias(acc.Environment, envAliases) && acc.Realm == realm {
+			return acc, nil
+		}
+	}
+	return shared.Account{}, fmt.Errorf("account not found")
+}
+
+func (m *Manager) writeAccount(account shared.Account) error {
+	key := account.Key()
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	m.contract.Accounts[key] = account
+	return nil
+}
+
+func (m *Manager) readAppMetaData(envAliases []string, clientID string) (AppMetaData, error) {
+	m.contractMu.RLock()
+	defer m.contractMu.RUnlock()
+
+	for _, app := range m.contract.AppMetaData {
+		if checkAlias(app.Environment, envAliases) && app.ClientID == clientID {
+			return app, nil
+		}
+	}
+	return AppMetaData{}, fmt.Errorf("not found")
+}
+
+func (m *Manager) writeAppMetaData(AppMetaData AppMetaData) error {
+	key := AppMetaData.Key()
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	m.contract.AppMetaData[key] = AppMetaData
+	return nil
+}
+
+// RemoveAccount removes all the associated ATs, RTs and IDTs from the cache associated with this account.
+func (m *Manager) RemoveAccount(account shared.Account, clientID string) {
+	m.removeRefreshTokens(account.HomeAccountID, account.Environment, clientID)
+	m.removeAccessTokens(account.HomeAccountID, account.Environment)
+	m.removeIDTokens(account.HomeAccountID, account.Environment)
+	m.removeAccounts(account.HomeAccountID, account.Environment)
+}
+
+func (m *Manager) removeRefreshTokens(homeID string, env string, clientID string) {
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	for key, rt := range m.contract.RefreshTokens {
+		// Check for RTs associated with the account.
+		if rt.HomeAccountID == homeID && rt.Environment == env {
+			// Do the RT's app ownership check as a precaution, in case family apps
+			// and 3rd-party apps share the same token cache, although they should not.
+			if rt.ClientID == clientID || rt.FamilyID != "" {
+				delete(m.contract.RefreshTokens, key)
+			}
+		}
+	}
+}
+
+func (m *Manager) removeAccessTokens(homeID string, env string) {
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	for key, at := range m.contract.AccessTokens {
+		// Remove ATs associated with the account.
+		if at.HomeAccountID == homeID && at.Environment == env {
+			// To avoid the complexity of locating a sibling family app's AT, we skip the AT's app ownership check.
+			// It means ATs for other apps will also be removed; this is OK because:
+			// non-family apps are not supposed to share a token cache to begin with;
+			// even if it happens, we keep the other app's RT already, so SSO still works.
+			delete(m.contract.AccessTokens, key)
+		}
+	}
+}
+
+func (m *Manager) removeIDTokens(homeID string, env string) {
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	for key, idt := range m.contract.IDTokens {
+		// Remove ID tokens associated with the account.
+		if idt.HomeAccountID == homeID && idt.Environment == env {
+			delete(m.contract.IDTokens, key)
+		}
+	}
+}
+
+func (m *Manager) removeAccounts(homeID string, env string) {
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	for key, acc := range m.contract.Accounts {
+		// Remove the specified account.
+		if acc.HomeAccountID == homeID && acc.Environment == env {
+			delete(m.contract.Accounts, key)
+		}
+	}
+}
+
+// update updates the internal cache object. This is for use in tests, other uses are not
+// supported.
+func (m *Manager) update(cache *Contract) {
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+	m.contract = cache
+}
+
+// Marshal implements cache.Marshaler.
+func (m *Manager) Marshal() ([]byte, error) {
+	m.contractMu.RLock()
+	defer m.contractMu.RUnlock()
+	return json.Marshal(m.contract)
+}
+
+// Unmarshal implements cache.Unmarshaler.
+func (m *Manager) Unmarshal(b []byte) error {
+	m.contractMu.Lock()
+	defer m.contractMu.Unlock()
+
+	contract := NewContract()
+
+	err := json.Unmarshal(b, contract)
+	if err != nil {
+		return err
+	}
+
+	m.contract = contract
+
+	return nil
+}
diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/exported/exported.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/exported/exported.go
new file mode 100644
index 000000000000..7b673e3fe126
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/exported/exported.go
@@ -0,0 +1,34 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+// Package exported contains internal types that are re-exported from a public package.
+package exported
+
+// AssertionRequestOptions has the information required to generate a client assertion.
+type AssertionRequestOptions struct {
+	// ClientID identifies the application for which an assertion is requested. Used as the assertion's "iss" and "sub" claims.
+	ClientID string
+
+	// TokenEndpoint is the intended token endpoint. Used as the assertion's "aud" claim.
+	TokenEndpoint string
+}
+
+// TokenProviderParameters contains the authentication parameters passed to token providers.
+type TokenProviderParameters struct {
+	// Claims contains any additional claims requested for the token.
+	Claims string
+	// CorrelationID of the authentication request.
+	CorrelationID string
+	// Scopes requested for the token.
+	Scopes []string
+	// TenantID identifies the tenant in which to authenticate.
+	TenantID string
+}
+
+// TokenProviderResult is the authentication result returned by custom token providers.
+type TokenProviderResult struct {
+	// AccessToken is the requested token.
+	AccessToken string
+	// ExpiresInSeconds is the lifetime of the token in seconds.
+	ExpiresInSeconds int
+}
diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/design.md b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/design.md
new file mode 100644
index 000000000000..09edb01b7e43
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/design.md
@@ -0,0 +1,140 @@
+# JSON Package Design
+Author: John Doak (jdoak@microsoft.com)
+
+## Why?
+
+This project needs a special type of marshal/unmarshal not directly supported
+by the encoding/json package.
+
+The need revolves around a few key wants/needs:
+- unmarshal and marshal structs representing JSON messages
+- fields in the message not in the struct must be maintained when unmarshalled
+- those same fields must be marshalled back when encoded again
+
+The initial version used map[string]interface{} to put in the keys that
+were known and then any other keys were put into a field called AdditionalFields.
+
+This has a few negatives:
+- Dual marshalling/unmarshalling is required
+- Adding a struct field requires manually adding a key by name to be encoded/decoded from the map (which is a loosely coupled construct), which can lead to bugs that aren't detected or have bad side effects
+- Tests can quickly become disconnected if those keys aren't put
+in tests as well. So you think you have support working, but you
+don't. Existing tests were found that didn't test the marshalling output.
+- There is no enforcement that if AdditionalFields is required on one struct, it should be on all containers
+that don't have custom marshal/unmarshal.
+
+This package aims to support our needs by providing custom Marshal()/Unmarshal() functions.
+
+This prevents all the negatives in the initial solution listed above. However, it does add its own negative:
+- Custom encoding/decoding via reflection is messy (as can be seen in encoding/json itself)
+
+Go proverb: Reflection is never clear
+Suggested reading: https://blog.golang.org/laws-of-reflection
+
+## Important design decisions
+
+- We don't want to understand all JSON decoding rules
+- We don't want to deal with all the quoting, commas, etc. on decode
+- Need support for json.Marshaler/Unmarshaler, so we can support types like time.Time
+- If a struct does not implement json.Unmarshaler, it must have AdditionalFields defined
+- We only support root level objects that are \*struct or struct
+
+To facilitate these goals, we will utilize the json.Encoder and json.Decoder.
+They provide streaming processing (efficient) and return errors on bad JSON.
+
+Support for json.Marshaler/Unmarshaler allows us to use non-basic types
+that must be specially encoded/decoded (like time.Time objects).
+
+We don't support types that can't custom unmarshal and don't have AdditionalFields,
+in order to prevent future devs from forgetting that important field and
+generating bad return values.
+
+Support for root level objects of \*struct or struct simply acknowledges the
+fact that this is designed only for the purposes listed in the Introduction.
+Anything outside that (like encoding a lone number) should be done with the
+regular json package (as it will not have additional fields).
+
+We don't support a few things on json supported reference types and structs:
+- \*map: no need for pointers to maps
+- \*slice: no need for pointers to slices
+- any further pointers on struct after \*struct
+
+There should never be a need for this in Go.
+
+## Design
+
+## State Machines
+
+This uses state machine designs based upon the Rob Pike talk on
+lexers and parsers: https://www.youtube.com/watch?v=HxaD_trXwRE
+
+This is the most common pattern for state machines in Go and
+the model to follow closely when dealing with streaming
+processing of textual data.
+
+Our state machines are based on the type:
+```go
+type stateFn func() (stateFn, error)
+```
+
+The state machine itself is simply a struct that has methods that
+satisfy stateFn.
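+
+As an illustration, here is a minimal, self-contained sketch of the pattern
+(the `counter` type is hypothetical and not part of this package):
+
+```go
+// counter runs a trivial two-state machine.
+type counter struct{ n int }
+
+// run drives the machine until a state returns nil or an error.
+func (c *counter) run() error {
+	state := c.start
+	for state != nil {
+		next, err := state()
+		if err != nil {
+			return err
+		}
+		state = next
+	}
+	return nil
+}
+
+// start is always the first stateFn to be called.
+func (c *counter) start() (stateFn, error) { c.n++; return c.step, nil }
+
+// step returns nil, ending the machine.
+func (c *counter) step() (stateFn, error) { c.n++; return nil, nil }
+```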
+
+Our state machines have a few standard calls:
+- run(): runs the state machine
+- start(): always the first stateFn to be called
+
+All state machines have the following logic:
+* run() is called
+* start() is called and returns the next stateFn or an error
+* stateFn is called
+	- If the returned stateFn (next state) is non-nil, call it
+	- If the error is non-nil, run() returns the error
+	- If stateFn == nil and err == nil, run() returns with err == nil
+
+## Supporting types
+
+Marshalling/Unmarshalling must support (within a top level struct):
+- struct
+- \*struct
+- []struct
+- []\*struct
+- []map[string]structContainer
+- [][]structContainer
+
+**Term note:** structContainer == type that has a struct or \*struct inside it
+
+We specifically do not support []interface or map[string]interface
+where the interface value would hold some value with a struct in it.
+
+Those will still marshal/unmarshal, but without support for
+AdditionalFields.
+
+## Marshalling
+
+The marshalling design is based around a state machine.
+
+The basic logic is as follows:
+
+* If the struct has a custom marshaller, call it and return
+* If the struct has a field "AdditionalFields", it must be a map[string]interface{}
+* If the struct does not have "AdditionalFields", give an error
+* Get the struct tags detailing json names to go names, and create the mapping
+* For each public field name
+	- Write the field name out
+	- If the field value is a struct, recursively call our state machine
+	- Otherwise, use the json.Encoder to write out the value
+
+## Unmarshalling
+
+The unmarshalling design is also based around a state machine. The
+basic logic is as follows:
+
+* If the struct has a custom unmarshaller, call it
+* If the struct has a field "AdditionalFields", it must be a map[string]interface{}
+* Get the struct tags detailing json names to go names, and create the mapping
+* For each key found
+	- If the key exists,
+		- If the value is a basic type, extract the value into the struct field using the Decoder
+		- If the value is a struct type, recursively call the state machine
+	- If the key doesn't exist, add it to AdditionalFields (if it exists) using the Decoder
diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/json.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/json.go
new file mode 100644
index 000000000000..2238521f5f91
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/json.go
@@ -0,0 +1,184 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+// Package json provides functions for marshalling and unmarshalling types to JSON. These functions are meant to
+// be utilized inside of structs that implement the json.Unmarshaler and json.Marshaler interfaces.
+// This package provides the additional functionality of writing fields that are not in the struct when marshalling
+// to a field called AdditionalFields, if that field exists and is a map[string]interface{}.
+// When marshalling, if the struct has all the same prerequisites, it will use the keys in AdditionalFields as
+// extra fields. This package uses encoding/json underneath.
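+//
+// For illustration, given a hypothetical type (not part of this package):
+//
+//	type person struct {
+//		Name             string                 `json:"name"`
+//		AdditionalFields map[string]interface{} `json:"-"`
+//	}
+//
+// unmarshalling {"name":"Ann","age":3} stores "age" in AdditionalFields, and
+// marshalling writes it back out alongside "name".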
+package json
+
+import (
+	"bytes"
+	"encoding/json"
+	"fmt"
+	"reflect"
+	"strings"
+)
+
+const addField = "AdditionalFields"
+const (
+	marshalJSON   = "MarshalJSON"
+	unmarshalJSON = "UnmarshalJSON"
+)
+
+var (
+	leftBrace  = []byte("{")[0]
+	rightBrace = []byte("}")[0]
+	comma      = []byte(",")[0]
+	leftParen  = []byte("[")[0]
+	rightParen = []byte("]")[0]
+)
+
+var mapStrInterType = reflect.TypeOf(map[string]interface{}{})
+
+// stateFn defines a state machine function. This will be used in all state
+// machines in this package.
+type stateFn func() (stateFn, error)
+
+// Marshal is used to marshal a type into its JSON representation. It
+// wraps the stdlib calls in order to marshal a struct or *struct so
+// that a field called "AdditionalFields" of type map[string]interface{}
+// with "-" used inside the struct tag `json:"-"` can have its keys marshalled
+// as if they were fields within the struct.
+func Marshal(i interface{}) ([]byte, error) {
+	buff := bytes.Buffer{}
+	enc := json.NewEncoder(&buff)
+	enc.SetEscapeHTML(false)
+	enc.SetIndent("", "")
+
+	v := reflect.ValueOf(i)
+	if v.Kind() != reflect.Ptr && v.CanAddr() {
+		v = v.Addr()
+	}
+	err := marshalStruct(v, &buff, enc)
+	if err != nil {
+		return nil, err
+	}
+	return buff.Bytes(), nil
+}
+
+// Unmarshal unmarshals a []byte representing JSON into i, which must be a *struct. In addition, if the struct has
+// a field called AdditionalFields of type map[string]interface{}, JSON data representing fields not in the struct
+// will be written as key/value pairs to AdditionalFields.
+func Unmarshal(b []byte, i interface{}) error {
+	if len(b) == 0 {
+		return nil
+	}
+
+	jdec := json.NewDecoder(bytes.NewBuffer(b))
+	jdec.UseNumber()
+	return unmarshalStruct(jdec, i)
+}
+
+// MarshalRaw marshals i into a json.RawMessage. If i cannot be marshalled,
+// this will panic. This is exposed to help test AdditionalField values,
+// which are stored as json.RawMessage.
+func MarshalRaw(i interface{}) json.RawMessage {
+	b, err := json.Marshal(i)
+	if err != nil {
+		panic(err)
+	}
+	return json.RawMessage(b)
+}
+
+// isDelim simply tests to see if a json.Token is a delimiter.
+func isDelim(got json.Token) bool {
+	switch got.(type) {
+	case json.Delim:
+		return true
+	}
+	return false
+}
+
+// delimIs tests got to see if it is want.
+func delimIs(got json.Token, want rune) bool {
+	switch v := got.(type) {
+	case json.Delim:
+		if v == json.Delim(want) {
+			return true
+		}
+	}
+	return false
+}
+
+// hasMarshalJSON will determine if the value or a pointer to this value has
+// the MarshalJSON method.
+func hasMarshalJSON(v reflect.Value) bool {
+	if method := v.MethodByName(marshalJSON); method.Kind() != reflect.Invalid {
+		_, ok := v.Interface().(json.Marshaler)
+		return ok
+	}
+
+	if v.Kind() == reflect.Ptr {
+		v = v.Elem()
+	} else {
+		if !v.CanAddr() {
+			return false
+		}
+		v = v.Addr()
+	}
+
+	if method := v.MethodByName(marshalJSON); method.Kind() != reflect.Invalid {
+		_, ok := v.Interface().(json.Marshaler)
+		return ok
+	}
+	return false
+}
+
+// callMarshalJSON will call the MarshalJSON() method on the value or a pointer to this value.
+// This will panic if the method is not defined.
+func callMarshalJSON(v reflect.Value) ([]byte, error) {
+	if method := v.MethodByName(marshalJSON); method.Kind() != reflect.Invalid {
+		marsh := v.Interface().(json.Marshaler)
+		return marsh.MarshalJSON()
+	}
+
+	if v.Kind() == reflect.Ptr {
+		v = v.Elem()
+	} else {
+		if v.CanAddr() {
+			v = v.Addr()
+		}
+	}
+
+	if method := v.MethodByName(marshalJSON); method.Kind() != reflect.Invalid {
+		marsh := v.Interface().(json.Marshaler)
+		return marsh.MarshalJSON()
+	}
+
+	panic(fmt.Sprintf("callMarshalJSON called on type %T that does not have MarshalJSON defined", v.Interface()))
+}
+
+// hasUnmarshalJSON will determine if the value or a pointer to this value has
+// the UnmarshalJSON method.
+func hasUnmarshalJSON(v reflect.Value) bool {
+	// You can't unmarshal on a non-pointer type.
+	if v.Kind() != reflect.Ptr {
+		if !v.CanAddr() {
+			return false
+		}
+		v = v.Addr()
+	}
+
+	if method := v.MethodByName(unmarshalJSON); method.Kind() != reflect.Invalid {
+		_, ok := v.Interface().(json.Unmarshaler)
+		return ok
+	}
+
+	return false
+}
+
+// hasOmitEmpty indicates if the field should be skipped when it is the zero value,
+// i.e. if omitempty is set on the tag. tag is the string
+// returned by reflect.StructField.Tag.Get("json").
+func hasOmitEmpty(tag string) bool {
+	sl := strings.Split(tag, ",")
+	for _, str := range sl {
+		if str == "omitempty" {
+			return true
+		}
+	}
+	return false
+}
diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/mapslice.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/mapslice.go
new file mode 100644
index 000000000000..cef442f25c86
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/mapslice.go
@@ -0,0 +1,333 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+package json
+
+import (
+	"encoding/json"
+	"fmt"
+	"reflect"
+)
+
+// unmarshalMap unmarshals a map.
+func unmarshalMap(dec *json.Decoder, m reflect.Value) error {
+	if m.Kind() != reflect.Ptr || m.Elem().Kind() != reflect.Map {
+		panic("unmarshalMap called on non-*map value")
+	}
+	mapValueType := m.Elem().Type().Elem()
+	walk := mapWalk{dec: dec, m: m, valueType: mapValueType}
+	if err := walk.run(); err != nil {
+		return err
+	}
+	return nil
+}
+
+type mapWalk struct {
+	dec       *json.Decoder
+	key       string
+	m         reflect.Value
+	valueType reflect.Type
+}
+
+// run runs our decoder state machine.
+func (m *mapWalk) run() error {
+	var state = m.start
+	var err error
+	for {
+		state, err = state()
+		if err != nil {
+			return err
+		}
+		if state == nil {
+			return nil
+		}
+	}
+}
+
+func (m *mapWalk) start() (stateFn, error) {
+	// Maps can have custom unmarshalers.
+	if hasUnmarshalJSON(m.m) {
+		err := m.dec.Decode(m.m.Interface())
+		if err != nil {
+			return nil, err
+		}
+		return nil, nil
+	}
+
+	// We only want to use this if the map value is:
+	// *struct/struct/map/slice
+	// otherwise use the standard decode
+	t, _ := m.valueBaseType()
+	switch t.Kind() {
+	case reflect.Struct, reflect.Map, reflect.Slice:
+		delim, err := m.dec.Token()
+		if err != nil {
+			return nil, err
+		}
+		// This indicates the value was set to JSON null.
+		if delim == nil {
+			return nil, nil
+		}
+		if !delimIs(delim, '{') {
+			return nil, fmt.Errorf("Unmarshal expected opening {, received %v", delim)
+		}
+		return m.next, nil
+	case reflect.Ptr:
+		return nil, fmt.Errorf("do not support maps with values of '**type' or '*reference'")
+	}
+
+	// This is a basic map type, so just use Decode().
+	if err := m.dec.Decode(m.m.Interface()); err != nil {
+		return nil, err
+	}
+
+	return nil, nil
+}
+
+func (m *mapWalk) next() (stateFn, error) {
+	if m.dec.More() {
+		key, err := m.dec.Token()
+		if err != nil {
+			return nil, err
+		}
+		m.key = key.(string)
+		return m.storeValue, nil
+	}
+	// No more entries, so remove the final }.
+	_, err := m.dec.Token()
+	if err != nil {
+		return nil, err
+	}
+	return nil, nil
+}
+
+func (m *mapWalk) storeValue() (stateFn, error) {
+	v := m.valueType
+	for {
+		switch v.Kind() {
+		case reflect.Ptr:
+			v = v.Elem()
+			continue
+		case reflect.Struct:
+			return m.storeStruct, nil
+		case reflect.Map:
+			return m.storeMap, nil
+		case reflect.Slice:
+			return m.storeSlice, nil
+		}
+		return nil, fmt.Errorf("bug: mapWalk.storeValue() called on unsupported type: %v", v.Kind())
+	}
+}
+
+func (m *mapWalk) storeStruct() (stateFn, error) {
+	v := newValue(m.valueType)
+	if err := unmarshalStruct(m.dec, v.Interface()); err != nil {
+		return nil, err
+	}
+
+	if m.valueType.Kind() == reflect.Ptr {
+		m.m.Elem().SetMapIndex(reflect.ValueOf(m.key), v)
+		return m.next, nil
+	}
+	m.m.Elem().SetMapIndex(reflect.ValueOf(m.key), v.Elem())
+
+	return m.next, nil
+}
+
+func (m *mapWalk) storeMap() (stateFn, error) {
+	v := reflect.MakeMap(m.valueType)
+	ptr := newValue(v.Type())
+	ptr.Elem().Set(v)
+	if err := unmarshalMap(m.dec, ptr); err != nil {
+		return nil, err
+	}
+
+	m.m.Elem().SetMapIndex(reflect.ValueOf(m.key), v)
+
+	return m.next, nil
+}
+
+func (m *mapWalk) storeSlice() (stateFn, error) {
+	v := newValue(m.valueType)
+	if err := unmarshalSlice(m.dec, v); err != nil {
+		return nil, err
+	}
+
+	m.m.Elem().SetMapIndex(reflect.ValueOf(m.key), v.Elem())
+
+	return m.next, nil
+}
+
+// valueBaseType returns the underlying Type. So a *struct would yield
+// struct, etc...
+func (m *mapWalk) valueBaseType() (reflect.Type, bool) {
+	ptr := false
+	v := m.valueType
+	if v.Kind() == reflect.Ptr {
+		ptr = true
+		v = v.Elem()
+	}
+	return v, ptr
+}
+
+// unmarshalSlice unmarshals the next value, which must be a slice, into
+// ptrSlice, which must be a pointer to a slice. newValue() can be used to
+// create the slice.
+func unmarshalSlice(dec *json.Decoder, ptrSlice reflect.Value) error {
+	if ptrSlice.Kind() != reflect.Ptr || ptrSlice.Elem().Kind() != reflect.Slice {
+		panic("unmarshalSlice called on non-*[]slice value")
+	}
+	sliceValueType := ptrSlice.Elem().Type().Elem()
+	walk := sliceWalk{
+		dec:       dec,
+		s:         ptrSlice,
+		valueType: sliceValueType,
+	}
+	if err := walk.run(); err != nil {
+		return err
+	}
+
+	return nil
+}
+
+type sliceWalk struct {
+	dec       *json.Decoder
+	s         reflect.Value // *[]slice
+	valueType reflect.Type
+}
+
+// run runs our decoder state machine.
+func (s *sliceWalk) run() error {
+	var state = s.start
+	var err error
+	for {
+		state, err = state()
+		if err != nil {
+			return err
+		}
+		if state == nil {
+			return nil
+		}
+	}
+}
+
+func (s *sliceWalk) start() (stateFn, error) {
+	// Slices can have custom unmarshalers.
+	if hasUnmarshalJSON(s.s) {
+		err := s.dec.Decode(s.s.Interface())
+		if err != nil {
+			return nil, err
+		}
+		return nil, nil
+	}
+
+	// We only want to use this if the slice value is:
+	// []*struct/[]struct/[]map/[]slice
+	// otherwise use the standard decode
+	t := s.valueBaseType()
+
+	switch t.Kind() {
+	case reflect.Ptr:
+		return nil, fmt.Errorf("cannot unmarshal into a ** or *")
+	case reflect.Struct, reflect.Map, reflect.Slice:
+		delim, err := s.dec.Token()
+		if err != nil {
+			return nil, err
+		}
+		// This indicates the value was set to JSON null.
+		if delim == nil {
+			return nil, nil
+		}
+		if !delimIs(delim, '[') {
+			return nil, fmt.Errorf("Unmarshal expected opening [, received %v", delim)
+		}
+		return s.next, nil
+	}
+
+	if err := s.dec.Decode(s.s.Interface()); err != nil {
+		return nil, err
+	}
+	return nil, nil
+}
+
+func (s *sliceWalk) next() (stateFn, error) {
+	if s.dec.More() {
+		return s.storeValue, nil
+	}
+	// Nothing left in the slice, remove the closing ].
+	_, err := s.dec.Token()
+	return nil, err
+}
+
+func (s *sliceWalk) storeValue() (stateFn, error) {
+	t := s.valueBaseType()
+	switch t.Kind() {
+	case reflect.Ptr:
+		return nil, fmt.Errorf("do not support 'pointer to pointer' or 'pointer to reference' types")
+	case reflect.Struct:
+		return s.storeStruct, nil
+	case reflect.Map:
+		return s.storeMap, nil
+	case reflect.Slice:
+		return s.storeSlice, nil
+	}
+	return nil, fmt.Errorf("bug: sliceWalk.storeValue() called on unsupported type: %v", t.Kind())
+}
+
+func (s *sliceWalk) storeStruct() (stateFn, error) {
+	v := newValue(s.valueType)
+	if err := unmarshalStruct(s.dec, v.Interface()); err != nil {
+		return nil, err
+	}
+
+	if s.valueType.Kind() == reflect.Ptr {
+		s.s.Elem().Set(reflect.Append(s.s.Elem(), v))
+		return s.next, nil
+	}
+
+	s.s.Elem().Set(reflect.Append(s.s.Elem(), v.Elem()))
+	return s.next, nil
+}
+
+func (s *sliceWalk) storeMap() (stateFn, error) {
+	v := reflect.MakeMap(s.valueType)
+	ptr := newValue(v.Type())
+	ptr.Elem().Set(v)
+
+	if err := unmarshalMap(s.dec, ptr); err != nil {
+		return nil, err
+	}
+
+	s.s.Elem().Set(reflect.Append(s.s.Elem(), v))
+
+	return s.next, nil
+}
+
+func (s *sliceWalk) storeSlice() (stateFn, error) {
+	v := newValue(s.valueType)
+	if err := unmarshalSlice(s.dec, v); err != nil {
+		return nil, err
+	}
+
+	s.s.Elem().Set(reflect.Append(s.s.Elem(), v.Elem()))
+
+	return s.next, nil
+}
+
+// valueBaseType returns the underlying Type. So a *struct would yield
+// struct, etc...
+func (s *sliceWalk) valueBaseType() reflect.Type {
+	v := s.valueType
+	if v.Kind() == reflect.Ptr {
+		v = v.Elem()
+	}
+	return v
+}
+
+// newValue() returns a new *type that represents the type passed.
+func newValue(valueType reflect.Type) reflect.Value {
+	if valueType.Kind() == reflect.Ptr {
+		return reflect.New(valueType.Elem())
+	}
+	return reflect.New(valueType)
+}
diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/marshal.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/marshal.go
new file mode 100644
index 000000000000..df5dc6e11b50
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/marshal.go
@@ -0,0 +1,346 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+package json
+
+import (
+	"bytes"
+	"encoding/json"
+	"fmt"
+	"reflect"
+	"unicode"
+)
+
+// marshalStruct takes in v, which must be a *struct or struct, and marshals its content
+// as JSON into buff (sometimes with writes to buff directly, sometimes via enc).
+// This call is recursive for all fields of *struct or struct type.
+func marshalStruct(v reflect.Value, buff *bytes.Buffer, enc *json.Encoder) error {
+	if v.Kind() == reflect.Ptr {
+		v = v.Elem()
+	}
+	// We only care about custom marshalling of a struct.
+	if v.Kind() != reflect.Struct {
+		return fmt.Errorf("bug: marshal() received a non *struct or struct, received type %T", v.Interface())
+	}
+
+	if hasMarshalJSON(v) {
+		b, err := callMarshalJSON(v)
+		if err != nil {
+			return err
+		}
+		buff.Write(b)
+		return nil
+	}
+
+	t := v.Type()
+
+	// If it has an AdditionalFields field, make sure it's the right type.
+	f := v.FieldByName(addField)
+	if f.Kind() != reflect.Invalid {
+		if f.Kind() != reflect.Map {
+			return fmt.Errorf("type %T has field 'AdditionalFields' that is not a map[string]interface{}", v.Interface())
+		}
+		if !f.Type().AssignableTo(mapStrInterType) {
+			return fmt.Errorf("type %T has field 'AdditionalFields' that is not a map[string]interface{}", v.Interface())
+		}
+	}
+
+	translator, err := findFields(v)
+	if err != nil {
+		return err
+	}
+
+	buff.WriteByte(leftBrace)
+	for x := 0; x < v.NumField(); x++ {
+		field := v.Field(x)
+
+		// We don't access private fields.
+		if unicode.IsLower(rune(t.Field(x).Name[0])) {
+			continue
+		}
+
+		if t.Field(x).Name == addField {
+			if v.Field(x).Len() > 0 {
+				if err := writeAddFields(field.Interface(), buff, enc); err != nil {
+					return err
+				}
+				buff.WriteByte(comma)
+			}
+			continue
+		}
+
+		// If they have omitempty set, we don't write out the field if
+		// it is the zero value.
+		if hasOmitEmpty(t.Field(x).Tag.Get("json")) {
+			if v.Field(x).IsZero() {
+				continue
+			}
+		}
+
+		// Write out the field name part.
+		jsonName := translator.jsonName(t.Field(x).Name)
+		buff.WriteString(fmt.Sprintf("%q:", jsonName))
+
+		if field.Kind() == reflect.Ptr {
+			field = field.Elem()
+		}
+
+		if err := marshalStructField(field, buff, enc); err != nil {
+			return err
+		}
+	}
+
+	buff.Truncate(buff.Len() - 1) // Remove final comma
+	buff.WriteByte(rightBrace)
+
+	return nil
+}
+
+func marshalStructField(field reflect.Value, buff *bytes.Buffer, enc *json.Encoder) error {
+	// Always write a trailing comma; marshalStruct removes the final one.
+	defer buff.WriteByte(comma)
+
+	switch field.Kind() {
+	// If it was a *struct or struct, we need to recursively call marshal().
+	case reflect.Struct:
+		if field.CanAddr() {
+			field = field.Addr()
+		}
+		return marshalStruct(field, buff, enc)
+	case reflect.Map:
+		return marshalMap(field, buff, enc)
+	case reflect.Slice:
+		return marshalSlice(field, buff, enc)
+	}
+
+	// It is just a basic type, so encode it.
+	if err := enc.Encode(field.Interface()); err != nil {
+		return err
+	}
+	buff.Truncate(buff.Len() - 1) // Remove Encode() added \n
+
+	return nil
+}
+
+func marshalMap(v reflect.Value, buff *bytes.Buffer, enc *json.Encoder) error {
+	if v.Kind() != reflect.Map {
+		return fmt.Errorf("bug: marshalMap() called on %T", v.Interface())
+	}
+	if v.Len() == 0 {
+		buff.WriteByte(leftBrace)
+		buff.WriteByte(rightBrace)
+		return nil
+	}
+	encoder := mapEncode{m: v, buff: buff, enc: enc}
+	return encoder.run()
+}
+
+type mapEncode struct {
+	m    reflect.Value
+	buff *bytes.Buffer
+	enc  *json.Encoder
+
+	valueBaseType reflect.Type
+}
+
+// run runs our encoder state machine.
+func (m *mapEncode) run() error {
+	var state = m.start
+	var err error
+	for {
+		state, err = state()
+		if err != nil {
+			return err
+		}
+		if state == nil {
+			return nil
+		}
+	}
+}
+
+func (m *mapEncode) start() (stateFn, error) {
+	if hasMarshalJSON(m.m) {
+		b, err := callMarshalJSON(m.m)
+		if err != nil {
+			return nil, err
+		}
+		m.buff.Write(b)
+		return nil, nil
+	}
+
+	valueBaseType := m.m.Type().Elem()
+	if valueBaseType.Kind() == reflect.Ptr {
+		valueBaseType = valueBaseType.Elem()
+	}
+	m.valueBaseType = valueBaseType
+
+	switch valueBaseType.Kind() {
+	case reflect.Ptr:
+		return nil, fmt.Errorf("Marshal does not support ** or *")
+	case reflect.Struct, reflect.Map, reflect.Slice:
+		return m.encode, nil
+	}
+
+	// If the map value doesn't have a struct/map/slice, just Encode() it.
+	if err := m.enc.Encode(m.m.Interface()); err != nil {
+		return nil, err
+	}
+	m.buff.Truncate(m.buff.Len() - 1) // Remove Encode() added \n
+	return nil, nil
+}
+
+func (m *mapEncode) encode() (stateFn, error) {
+	m.buff.WriteByte(leftBrace)
+
+	iter := m.m.MapRange()
+	for iter.Next() {
+		// Write the key.
+		k := iter.Key()
+		m.buff.WriteString(fmt.Sprintf("%q:", k.String()))
+
+		v := iter.Value()
+		switch m.valueBaseType.Kind() {
+		case reflect.Struct:
+			if v.CanAddr() {
+				v = v.Addr()
+			}
+			if err := marshalStruct(v, m.buff, m.enc); err != nil {
+				return nil, err
+			}
+		case reflect.Map:
+			if err := marshalMap(v, m.buff, m.enc); err != nil {
+				return nil, err
+			}
+		case reflect.Slice:
+			if err := marshalSlice(v, m.buff, m.enc); err != nil {
+				return nil, err
+			}
+		default:
+			panic(fmt.Sprintf("critical bug: mapEncode.encode() called with value base type: %v", m.valueBaseType.Kind()))
+		}
+		m.buff.WriteByte(comma)
+	}
+	m.buff.Truncate(m.buff.Len() - 1) // Remove final comma
+	m.buff.WriteByte(rightBrace)
+
+	return nil, nil
+}
+
+func marshalSlice(v reflect.Value, buff *bytes.Buffer, enc *json.Encoder) error {
+	if v.Kind() != reflect.Slice {
+		return fmt.Errorf("bug: marshalSlice() called on %T", v.Interface())
+	}
+	if v.Len() == 0 {
+		buff.WriteByte(leftParen)
+		buff.WriteByte(rightParen)
+		return nil
+	}
+	encoder := sliceEncode{s: v, buff: buff, enc: enc}
+	return encoder.run()
+}
+
+type sliceEncode struct {
+	s    reflect.Value
+	buff *bytes.Buffer
+	enc  *json.Encoder
+
+	valueBaseType reflect.Type
+}
+
+// run runs our encoder state machine.
+func (s *sliceEncode) run() error {
+	var state = s.start
+	var err error
+	for {
+		state, err = state()
+		if err != nil {
+			return err
+		}
+		if state == nil {
+			return nil
+		}
+	}
+}
+
+func (s *sliceEncode) start() (stateFn, error) {
+	if hasMarshalJSON(s.s) {
+		b, err := callMarshalJSON(s.s)
+		if err != nil {
+			return nil, err
+		}
+		s.buff.Write(b)
+		return nil, nil
+	}
+
+	valueBaseType := s.s.Type().Elem()
+	if valueBaseType.Kind() == reflect.Ptr {
+		valueBaseType = valueBaseType.Elem()
+	}
+	s.valueBaseType = valueBaseType
+
+	switch valueBaseType.Kind() {
+	case reflect.Ptr:
+		return nil, fmt.Errorf("Marshal does not support ** or *")
+	case reflect.Struct, reflect.Map, reflect.Slice:
+		return s.encode, nil
+	}
+
+	// If the slice value doesn't have a struct/map/slice, just Encode() it.
+	if err := s.enc.Encode(s.s.Interface()); err != nil {
+		return nil, err
+	}
+	s.buff.Truncate(s.buff.Len() - 1) // Remove Encode() added \n
+
+	return nil, nil
+}
+
+func (s *sliceEncode) encode() (stateFn, error) {
+	s.buff.WriteByte(leftParen)
+	for i := 0; i < s.s.Len(); i++ {
+		v := s.s.Index(i)
+		switch s.valueBaseType.Kind() {
+		case reflect.Struct:
+			if v.CanAddr() {
+				v = v.Addr()
+			}
+			if err := marshalStruct(v, s.buff, s.enc); err != nil {
+				return nil, err
+			}
+		case reflect.Map:
+			if err := marshalMap(v, s.buff, s.enc); err != nil {
+				return nil, err
+			}
+		case reflect.Slice:
+			if err := marshalSlice(v, s.buff, s.enc); err != nil {
+				return nil, err
+			}
+		default:
+			panic(fmt.Sprintf("critical bug: sliceEncode.encode() called with value base type: %v", s.valueBaseType.Kind()))
+		}
+		s.buff.WriteByte(comma)
+	}
+	s.buff.Truncate(s.buff.Len() - 1) // Remove final comma
+	s.buff.WriteByte(rightParen)
+	return nil, nil
+}
+
+// writeAddFields writes the AdditionalFields struct field out to JSON as field
+// values. i must be a map[string]interface{} or this will panic.
+func writeAddFields(i interface{}, buff *bytes.Buffer, enc *json.Encoder) error {
+	m := i.(map[string]interface{})
+
+	x := 0
+	for k, v := range m {
+		buff.WriteString(fmt.Sprintf("%q:", k))
+		if err := enc.Encode(v); err != nil {
+			return err
+		}
+		buff.Truncate(buff.Len() - 1) // Remove Encode() added \n
+
+		if x+1 != len(m) {
+			buff.WriteByte(comma)
+		}
+		x++
+	}
+	return nil
+}
diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/struct.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/struct.go
new file mode 100644
index 000000000000..07751544a282
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/struct.go
@@ -0,0 +1,290 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+package json
+
+import (
+	"encoding/json"
+	"fmt"
+	"reflect"
+	"strings"
+)
+
+func unmarshalStruct(jdec *json.Decoder, i interface{}) error {
+	v := reflect.ValueOf(i)
+	if v.Kind() != reflect.Ptr {
+		return fmt.Errorf("Unmarshal() received type %T, which is not a *struct", i)
+	}
+	v = v.Elem()
+	if v.Kind() != reflect.Struct {
+		return fmt.Errorf("Unmarshal() received type %T, which is not a *struct", i)
+	}
+
+	if hasUnmarshalJSON(v) {
+		// Indicates that this type has a custom Unmarshaler.
+		return jdec.Decode(v.Addr().Interface())
+	}
+
+	f := v.FieldByName(addField)
+	if f.Kind() == reflect.Invalid {
+		return fmt.Errorf("Unmarshal(%T) only supports structs that have the field AdditionalFields or implement json.Unmarshaler", i)
+	}
+
+	if f.Kind() != reflect.Map || !f.Type().AssignableTo(mapStrInterType) {
+		return fmt.Errorf("type %T has field 'AdditionalFields' that is not a map[string]interface{}", i)
+	}
+
+	dec := newDecoder(jdec, v)
+	return dec.run()
+}
+
+type decoder struct {
+	dec        *json.Decoder
+	value      reflect.Value // This will be a reflect.Struct
+	translator translateFields
+	key        string
+}
+
+func newDecoder(dec *json.Decoder, value reflect.Value) *decoder {
+	return &decoder{value: value, dec: dec}
+}
+
+// run runs our decoder state machine.
+func (d *decoder) run() error {
+	var state = d.start
+	var err error
+	for {
+		state, err = state()
+		if err != nil {
+			return err
+		}
+		if state == nil {
+			return nil
+		}
+	}
+}
+
+// start looks for our opening delimiter '{' and then transitions to looping through our fields.
+func (d *decoder) start() (stateFn, error) {
+	var err error
+	d.translator, err = findFields(d.value)
+	if err != nil {
+		return nil, err
+	}
+
+	delim, err := d.dec.Token()
+	if err != nil {
+		return nil, err
+	}
+	if !delimIs(delim, '{') {
+		return nil, fmt.Errorf("Unmarshal expected opening {, received %v", delim)
+	}
+
+	return d.next, nil
+}
+
+// next gets the next struct field name from the raw json or stops the machine if we get our closing }.
+func (d *decoder) next() (stateFn, error) {
+	if !d.dec.More() {
+		// Remove the closing }.
+		if _, err := d.dec.Token(); err != nil {
+			return nil, err
+		}
+		return nil, nil
+	}
+
+	key, err := d.dec.Token()
+	if err != nil {
+		return nil, err
+	}
+
+	d.key = key.(string)
+	return d.storeValue, nil
+}
+
+// storeValue takes the next value and stores it in our struct. If the field can't be found
+// in the struct, it pushes the operation to storeAdditional().
+func (d *decoder) storeValue() (stateFn, error) {
+	goName := d.translator.goName(d.key)
+	if goName == "" {
+		goName = d.key
+	}
+
+	// If we don't have the field in the struct, it goes in AdditionalFields.
+	f := d.value.FieldByName(goName)
+	if f.Kind() == reflect.Invalid {
+		return d.storeAdditional, nil
+	}
+
+	// Indicates that this type has a custom Unmarshaler.
+	if hasUnmarshalJSON(f) {
+		err := d.dec.Decode(f.Addr().Interface())
+		if err != nil {
+			return nil, err
+		}
+		return d.next, nil
+	}
+
+	t, isPtr, err := fieldBaseType(d.value, goName)
+	if err != nil {
+		return nil, fmt.Errorf("type(%s) had field(%s) %w", d.value.Type().Name(), goName, err)
+	}
+
+	switch t.Kind() {
+	// We need to recursively call ourselves on any *struct or struct.
+	case reflect.Struct:
+		if isPtr {
+			if f.IsNil() {
+				f.Set(reflect.New(t))
+			}
+		} else {
+			f = f.Addr()
+		}
+		if err := unmarshalStruct(d.dec, f.Interface()); err != nil {
+			return nil, err
+		}
+		return d.next, nil
+	case reflect.Map:
+		v := reflect.MakeMap(f.Type())
+		ptr := newValue(f.Type())
+		ptr.Elem().Set(v)
+		if err := unmarshalMap(d.dec, ptr); err != nil {
+			return nil, err
+		}
+		f.Set(ptr.Elem())
+		return d.next, nil
+	case reflect.Slice:
+		v := reflect.MakeSlice(f.Type(), 0, 0)
+		ptr := newValue(f.Type())
+		ptr.Elem().Set(v)
+		if err := unmarshalSlice(d.dec, ptr); err != nil {
+			return nil, err
+		}
+		f.Set(ptr.Elem())
+		return d.next, nil
+	}
+
+	if !isPtr {
+		f = f.Addr()
+	}
+
+	// For values that are pointers, we need them to be non-nil in order
+	// to decode into them.
+	if f.IsNil() {
+		f.Set(reflect.New(t))
+	}
+
+	if err := d.dec.Decode(f.Interface()); err != nil {
+		return nil, err
+	}
+
+	return d.next, nil
+}
+
+// storeAdditional pushes the key/value into our .AdditionalFields map.
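+// The value is captured as a json.RawMessage rather than decoded, so unknown
+// fields round-trip through Marshal() and can be decoded later if needed.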
+func (d *decoder) storeAdditional() (stateFn, error) {
+	rw := json.RawMessage{}
+	if err := d.dec.Decode(&rw); err != nil {
+		return nil, err
+	}
+	field := d.value.FieldByName(addField)
+	if field.IsNil() {
+		field.Set(reflect.MakeMap(field.Type()))
+	}
+	field.SetMapIndex(reflect.ValueOf(d.key), reflect.ValueOf(rw))
+	return d.next, nil
+}
+
+func fieldBaseType(v reflect.Value, fieldName string) (t reflect.Type, isPtr bool, err error) {
+	sf, ok := v.Type().FieldByName(fieldName)
+	if !ok {
+		return nil, false, fmt.Errorf("bug: fieldBaseType() lookup of field(%s) on type(%s): do not have field", fieldName, v.Type().Name())
+	}
+	t = sf.Type
+	if t.Kind() == reflect.Ptr {
+		t = t.Elem()
+		isPtr = true
+	}
+	if t.Kind() == reflect.Ptr {
+		return nil, isPtr, fmt.Errorf("received pointer to pointer type, not supported")
+	}
+	return t, isPtr, nil
+}
+
+type translateField struct {
+	jsonName string
+	goName   string
+}
+
+// translateFields is a list of translateField with a handy lookup method.
+type translateFields []translateField
+
+// goName loops through a list of fields looking for one containing the jsonName and
+// returning the goName. If not found, returns the empty string.
+// Note: not a map because at this size slices are faster even in tight loops.
+func (t translateFields) goName(jsonName string) string {
+	for _, entry := range t {
+		if entry.jsonName == jsonName {
+			return entry.goName
+		}
+	}
+	return ""
+}
+
+// jsonName loops through a list of fields looking for one containing the goName and
+// returning the jsonName. If not found, returns the empty string.
+// Note: not a map because at this size slices are faster even in tight loops.
+func (t translateFields) jsonName(goName string) string {
+	for _, entry := range t {
+		if entry.goName == goName {
+			return entry.jsonName
+		}
+	}
+	return ""
+}
+
+var unmarshalerType = reflect.TypeOf((*json.Unmarshaler)(nil)).Elem()
+
+// findFields parses a struct and writes the field tags for lookup. It will return an error
+// if any field has a type of *struct or struct that does not implement json.Marshaler.
+func findFields(v reflect.Value) (translateFields, error) {
+	if v.Kind() == reflect.Ptr {
+		v = v.Elem()
+	}
+	if v.Kind() != reflect.Struct {
+		return nil, fmt.Errorf("findFields received a %s type, expected *struct or struct", v.Type().Name())
+	}
+	tfs := make([]translateField, 0, v.NumField())
+	for i := 0; i < v.NumField(); i++ {
+		tf := translateField{
+			goName:   v.Type().Field(i).Name,
+			jsonName: parseTag(v.Type().Field(i).Tag.Get("json")),
+		}
+		switch tf.jsonName {
+		case "", "-":
+			tf.jsonName = tf.goName
+		}
+		tfs = append(tfs, tf)
+
+		f := v.Field(i)
+		if f.Kind() == reflect.Ptr {
+			f = f.Elem()
+		}
+		if f.Kind() == reflect.Struct {
+			if f.Type().Implements(unmarshalerType) {
+				return nil, fmt.Errorf("struct type %q has field %q which "+
+					"doesn't implement json.Unmarshaler", v.Type().Name(), v.Type().Field(i).Name)
+			}
+		}
+	}
+	return tfs, nil
+}
+
+// parseTag returns the first entry in the tag, i.e. the JSON field name. tag is the string
+// returned by reflect.StructField.Tag.Get("json").
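+// For example, parseTag("name,omitempty") returns "name".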
+func parseTag(tag string) string {
+	if idx := strings.Index(tag, ","); idx != -1 {
+		return tag[:idx]
+	}
+	return tag
+}
diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/types/time/time.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/types/time/time.go
new file mode 100644
index 000000000000..a1c99621e9fc
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/types/time/time.go
@@ -0,0 +1,70 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+// Package time provides custom types to translate time from JSON and other formats
+// into time.Time objects.
+package time
+
+import (
+	"fmt"
+	"strconv"
+	"strings"
+	"time"
+)
+
+// Unix provides a type that can marshal and unmarshal a string representation
+// of the unix epoch into a time.Time object.
+type Unix struct {
+	T time.Time
+}
+
+// MarshalJSON implements encoding/json.MarshalJSON().
+func (u Unix) MarshalJSON() ([]byte, error) {
+	if u.T.IsZero() {
+		return []byte(""), nil
+	}
+	return []byte(fmt.Sprintf("%q", strconv.FormatInt(u.T.Unix(), 10))), nil
+}
+
+// UnmarshalJSON implements encoding/json.UnmarshalJSON().
+func (u *Unix) UnmarshalJSON(b []byte) error {
+	i, err := strconv.Atoi(strings.Trim(string(b), `"`))
+	if err != nil {
+		return fmt.Errorf("unix time(%s) could not be converted from string to int: %w", string(b), err)
+	}
+	u.T = time.Unix(int64(i), 0)
+	return nil
+}
+
+// DurationTime provides a type that can marshal and unmarshal a string representation
+// of a duration from now into a time.Time object.
+// Note: I'm not sure this is the best way to do this. What happens is we get a field
+// called "expires_in" that represents the seconds from now that this expires. We
+// turn that into a time we call .ExpiresOn. But maybe we should be recording
+// when the token was received at .TokenReceived and .ExpiresIn should remain as a duration.
+// Then we could have a method called ExpiresOn(). Honestly, the whole thing is
+// bad because the server doesn't return a concrete time. I think this is
+// cleaner, but it's not great either.
+type DurationTime struct {
+	T time.Time
+}
+
+// MarshalJSON implements encoding/json.MarshalJSON().
+func (d DurationTime) MarshalJSON() ([]byte, error) {
+	if d.T.IsZero() {
+		return []byte(""), nil
+	}
+
+	dt := time.Until(d.T)
+	return []byte(fmt.Sprintf("%d", int64(dt/time.Second))), nil
+}
+
+// UnmarshalJSON implements encoding/json.UnmarshalJSON().
+func (d *DurationTime) UnmarshalJSON(b []byte) error {
+	i, err := strconv.Atoi(strings.Trim(string(b), `"`))
+	if err != nil {
+		return fmt.Errorf("duration(%s) could not be converted from string to int: %w", string(b), err)
+	}
+	d.T = time.Now().Add(time.Duration(i) * time.Second)
+	return nil
+}
diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/local/server.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/local/server.go
new file mode 100644
index 000000000000..04236ff3127a
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/local/server.go
@@ -0,0 +1,177 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+// Package local contains a local HTTP server used with interactive authentication.
+package local
+
+import (
+	"context"
+	"fmt"
+	"net"
+	"net/http"
+	"strconv"
+	"strings"
+	"time"
+)
+
+var okPage = []byte(`
+<!DOCTYPE html>
+<html>
+<head>
+    <meta charset="utf-8" />
+    <title>Authentication Complete</title>
+</head>
+<body>
+    <p>Authentication complete. You can return to the application. Feel free to close this browser tab.</p>
+</body>
+</html>
+`)

+const failPage = `
+<!DOCTYPE html>
+<html>
+<head>
+    <meta charset="utf-8" />
+    <title>Authentication Failed</title>
+</head>
+<body>
+    <p>Authentication failed. You can return to the application. Feel free to close this browser tab.</p>
+    <p>Error details: error %s error_description: %s</p>
+</body>
+</html>
+`
+
+// Result is the result from the redirect.
+type Result struct {
+	// Code is the code sent by the authority server.
+	Code string
+	// Err is set if there was an error.
+	Err error
+}
+
+// Server is an HTTP server.
+type Server struct {
+	// Addr is the address the server is listening on.
+	Addr     string
+	resultCh chan Result
+	s        *http.Server
+	reqState string
+}
+
+// New creates a local HTTP server and starts it.
+func New(reqState string, port int) (*Server, error) {
+	var l net.Listener
+	var err error
+	var portStr string
+	if port > 0 {
+		// use the port provided by the caller
+		l, err = net.Listen("tcp", fmt.Sprintf("localhost:%d", port))
+		portStr = strconv.FormatInt(int64(port), 10)
+	} else {
+		// find a free port
+		for i := 0; i < 10; i++ {
+			l, err = net.Listen("tcp", "localhost:0")
+			if err != nil {
+				continue
+			}
+			addr := l.Addr().String()
+			portStr = addr[strings.LastIndex(addr, ":")+1:]
+			break
+		}
+	}
+	if err != nil {
+		return nil, err
+	}
+
+	serv := &Server{
+		Addr:     fmt.Sprintf("http://localhost:%s", portStr),
+		s:        &http.Server{Addr: "localhost:0", ReadHeaderTimeout: time.Second},
+		reqState: reqState,
+		resultCh: make(chan Result, 1),
+	}
+	serv.s.Handler = http.HandlerFunc(serv.handler)
+
+	if err := serv.start(l); err != nil {
+		return nil, err
+	}
+
+	return serv, nil
+}
+
+func (s *Server) start(l net.Listener) error {
+	go func() {
+		err := s.s.Serve(l)
+		if err != nil {
+			select {
+			case s.resultCh <- Result{Err: err}:
+			default:
+			}
+		}
+	}()
+
+	return nil
+}
+
+// Result gets the result of the redirect operation. Once a single result is returned, the server
+// is shut down. The ctx deadline will be honored.
+func (s *Server) Result(ctx context.Context) Result {
+	select {
+	case <-ctx.Done():
+		return Result{Err: ctx.Err()}
+	case r := <-s.resultCh:
+		return r
+	}
+}
+
+// Shutdown shuts down the server.
+func (s *Server) Shutdown() {
+	// Note: You might get clever and think you can do this in handler() as a defer, you can't.
+	_ = s.s.Shutdown(context.Background())
+}
+
+func (s *Server) putResult(r Result) {
+	select {
+	case s.resultCh <- r:
+	default:
+	}
+}
+
+func (s *Server) handler(w http.ResponseWriter, r *http.Request) {
+	q := r.URL.Query()
+
+	headerErr := q.Get("error")
+	if headerErr != "" {
+		desc := q.Get("error_description")
+		// Note: It is a little weird that we handle some errors by not going to the failPage. If they all should,
+		// change this to s.error() and make s.error() write the failPage instead of an error code.
+		_, _ = w.Write([]byte(fmt.Sprintf(failPage, headerErr, desc)))
+		s.putResult(Result{Err: fmt.Errorf(desc)})
+		return
+	}
+
+	respState := q.Get("state")
+	switch respState {
+	case s.reqState:
+	case "":
+		s.error(w, http.StatusInternalServerError, "server didn't send OAuth state")
+		return
+	default:
+		s.error(w, http.StatusInternalServerError, "mismatched OAuth state, req(%s), resp(%s)", s.reqState, respState)
+		return
+	}
+
+	code := q.Get("code")
+	if code == "" {
+		s.error(w, http.StatusInternalServerError, "authorization code missing in query string")
+		return
+	}
+
+	_, _ = w.Write(okPage)
+	s.putResult(Result{Code: code})
+}
+
+func (s *Server) error(w http.ResponseWriter, code int, str string, i ...interface{}) {
+	err := fmt.Errorf(str, i...)
+ http.Error(w, err.Error(), code) + s.putResult(Result{Err: err}) +} diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/oauth.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/oauth.go new file mode 100644 index 000000000000..ebd86e2baf9c --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/oauth.go @@ -0,0 +1,353 @@ +// Copyright (c) Microsoft Corporation. +// Licensed under the MIT license. + +package oauth + +import ( + "context" + "encoding/json" + "fmt" + "io" + "time" + + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/errors" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/exported" + internalTime "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/types/time" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs" + "github.com/google/uuid" +) + +// ResolveEndpointer contains the methods for resolving authority endpoints. +type ResolveEndpointer interface { + ResolveEndpoints(ctx context.Context, authorityInfo authority.Info, userPrincipalName string) (authority.Endpoints, error) +} + +// AccessTokens contains the methods for fetching tokens from different sources. +type AccessTokens interface { + DeviceCodeResult(ctx context.Context, authParameters authority.AuthParams) (accesstokens.DeviceCodeResult, error) + FromUsernamePassword(ctx context.Context, authParameters authority.AuthParams) (accesstokens.TokenResponse, error) + FromAuthCode(ctx context.Context, req accesstokens.AuthCodeRequest) (accesstokens.TokenResponse, error) + FromRefreshToken(ctx context.Context, appType accesstokens.AppType, authParams authority.AuthParams, cc *accesstokens.Credential, refreshToken string) (accesstokens.TokenResponse, error) + FromClientSecret(ctx context.Context, authParameters authority.AuthParams, clientSecret string) (accesstokens.TokenResponse, error) + FromAssertion(ctx context.Context, authParameters authority.AuthParams, assertion string) (accesstokens.TokenResponse, error) + FromUserAssertionClientSecret(ctx context.Context, authParameters authority.AuthParams, userAssertion string, clientSecret string) (accesstokens.TokenResponse, error) + FromUserAssertionClientCertificate(ctx context.Context, authParameters authority.AuthParams, userAssertion string, assertion string) (accesstokens.TokenResponse, error) + FromDeviceCodeResult(ctx context.Context, authParameters authority.AuthParams, deviceCodeResult accesstokens.DeviceCodeResult) (accesstokens.TokenResponse, error) + FromSamlGrant(ctx context.Context, authParameters authority.AuthParams, samlGrant wstrust.SamlTokenInfo) (accesstokens.TokenResponse, error) +} + +// FetchAuthority will be implemented by authority.Authority. 
+type FetchAuthority interface {
+	UserRealm(context.Context, authority.AuthParams) (authority.UserRealm, error)
+	AADInstanceDiscovery(context.Context, authority.Info) (authority.InstanceDiscoveryResponse, error)
+}
+
+// FetchWSTrust contains the methods for interacting with WSTrust endpoints.
+type FetchWSTrust interface {
+	Mex(ctx context.Context, federationMetadataURL string) (defs.MexDocument, error)
+	SAMLTokenInfo(ctx context.Context, authParameters authority.AuthParams, cloudAudienceURN string, endpoint defs.Endpoint) (wstrust.SamlTokenInfo, error)
+}
+
+// Client provides tokens for various types of token requests.
+type Client struct {
+	Resolver     ResolveEndpointer
+	AccessTokens AccessTokens
+	Authority    FetchAuthority
+	WSTrust      FetchWSTrust
+}
+
+// New is the constructor for Client.
+func New(httpClient ops.HTTPClient) *Client {
+	r := ops.New(httpClient)
+	return &Client{
+		Resolver:     newAuthorityEndpoint(r),
+		AccessTokens: r.AccessTokens(),
+		Authority:    r.Authority(),
+		WSTrust:      r.WSTrust(),
+	}
+}
+
+// ResolveEndpoints gets the authorization and token endpoints and creates an AuthorityEndpoints instance.
+func (t *Client) ResolveEndpoints(ctx context.Context, authorityInfo authority.Info, userPrincipalName string) (authority.Endpoints, error) {
+	return t.Resolver.ResolveEndpoints(ctx, authorityInfo, userPrincipalName)
+}
+
+// AADInstanceDiscovery attempts to discover a tenant endpoint (used in OIDC auth with an authorization endpoint).
+// This is done by AAD, which allows for aliasing of tenants (windows.sts.net is the same as login.windows.com).
+func (t *Client) AADInstanceDiscovery(ctx context.Context, authorityInfo authority.Info) (authority.InstanceDiscoveryResponse, error) {
+	return t.Authority.AADInstanceDiscovery(ctx, authorityInfo)
+}
+
+// AuthCode returns a token based on an authorization code.
+func (t *Client) AuthCode(ctx context.Context, req accesstokens.AuthCodeRequest) (accesstokens.TokenResponse, error) {
+	if err := scopeError(req.AuthParams); err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+	if err := t.resolveEndpoint(ctx, &req.AuthParams, ""); err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+
+	tResp, err := t.AccessTokens.FromAuthCode(ctx, req)
+	if err != nil {
+		return accesstokens.TokenResponse{}, fmt.Errorf("could not retrieve token from auth code: %w", err)
+	}
+	return tResp, nil
+}
+
+// Credential acquires a token from the authority using a client credentials grant.
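+// When the credential wraps a TokenProvider callback, the token comes directly from that
+// callback; otherwise the client secret or a signed JWT assertion is exchanged at the
+// resolved token endpoint.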
+func (t *Client) Credential(ctx context.Context, authParams authority.AuthParams, cred *accesstokens.Credential) (accesstokens.TokenResponse, error) {
+	if cred.TokenProvider != nil {
+		now := time.Now()
+		scopes := make([]string, len(authParams.Scopes))
+		copy(scopes, authParams.Scopes)
+		params := exported.TokenProviderParameters{
+			Claims:        authParams.Claims,
+			CorrelationID: uuid.New().String(),
+			Scopes:        scopes,
+			TenantID:      authParams.AuthorityInfo.Tenant,
+		}
+		tr, err := cred.TokenProvider(ctx, params)
+		if err != nil {
+			if len(scopes) == 0 {
+				err = fmt.Errorf("token request had an empty authority.AuthParams.Scopes, which may cause the following error: %w", err)
+				return accesstokens.TokenResponse{}, err
+			}
+			return accesstokens.TokenResponse{}, err
+		}
+		return accesstokens.TokenResponse{
+			AccessToken: tr.AccessToken,
+			ExpiresOn: internalTime.DurationTime{
+				T: now.Add(time.Duration(tr.ExpiresInSeconds) * time.Second),
+			},
+			GrantedScopes: accesstokens.Scopes{Slice: authParams.Scopes},
+		}, nil
+	}
+
+	if err := t.resolveEndpoint(ctx, &authParams, ""); err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+
+	if cred.Secret != "" {
+		return t.AccessTokens.FromClientSecret(ctx, authParams, cred.Secret)
+	}
+	jwt, err := cred.JWT(ctx, authParams)
+	if err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+	return t.AccessTokens.FromAssertion(ctx, authParams, jwt)
+}
+
+// OnBehalfOf acquires a token from the authority on behalf of a user, using the user
+// assertion carried in authParams (the on-behalf-of flow).
+func (t *Client) OnBehalfOf(ctx context.Context, authParams authority.AuthParams, cred *accesstokens.Credential) (accesstokens.TokenResponse, error) {
+	if err := scopeError(authParams); err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+	if err := t.resolveEndpoint(ctx, &authParams, ""); err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+
+	if cred.Secret != "" {
+		return t.AccessTokens.FromUserAssertionClientSecret(ctx, authParams, authParams.UserAssertion, cred.Secret)
+	}
+	jwt, err := cred.JWT(ctx, authParams)
+	if err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+	tr, err := t.AccessTokens.FromUserAssertionClientCertificate(ctx, authParams, authParams.UserAssertion, jwt)
+	if err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+	return tr, nil
+}
+
+// Refresh redeems a refresh token for a new token response.
+func (t *Client) Refresh(ctx context.Context, reqType accesstokens.AppType, authParams authority.AuthParams, cc *accesstokens.Credential, refreshToken accesstokens.RefreshToken) (accesstokens.TokenResponse, error) {
+	if err := scopeError(authParams); err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+	if err := t.resolveEndpoint(ctx, &authParams, ""); err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+
+	tr, err := t.AccessTokens.FromRefreshToken(ctx, reqType, authParams, cc, refreshToken.Secret)
+	if err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+	return tr, nil
+}
+
+// UsernamePassword retrieves a token where a username and password are used. However, if this is
+// a user realm of "Federated", this uses SAML tokens. If "Managed", it uses normal username/password.
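+// Federated realms are resolved through the WS-Trust MEX document, and the resulting
+// SAML assertion is exchanged for a token.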
+func (t *Client) UsernamePassword(ctx context.Context, authParams authority.AuthParams) (accesstokens.TokenResponse, error) {
+	if err := scopeError(authParams); err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+
+	if authParams.AuthorityInfo.AuthorityType == authority.ADFS {
+		if err := t.resolveEndpoint(ctx, &authParams, authParams.Username); err != nil {
+			return accesstokens.TokenResponse{}, err
+		}
+		return t.AccessTokens.FromUsernamePassword(ctx, authParams)
+	}
+	if err := t.resolveEndpoint(ctx, &authParams, ""); err != nil {
+		return accesstokens.TokenResponse{}, err
+	}
+
+	userRealm, err := t.Authority.UserRealm(ctx, authParams)
+	if err != nil {
+		return accesstokens.TokenResponse{}, fmt.Errorf("problem getting user realm from authority: %w", err)
+	}
+
+	switch userRealm.AccountType {
+	case authority.Federated:
+		mexDoc, err := t.WSTrust.Mex(ctx, userRealm.FederationMetadataURL)
+		if err != nil {
+			err = fmt.Errorf("problem getting mex doc from federated url(%s): %w", userRealm.FederationMetadataURL, err)
+			return accesstokens.TokenResponse{}, err
+		}
+
+		saml, err := t.WSTrust.SAMLTokenInfo(ctx, authParams, userRealm.CloudAudienceURN, mexDoc.UsernamePasswordEndpoint)
+		if err != nil {
+			err = fmt.Errorf("problem getting SAML token info: %w", err)
+			return accesstokens.TokenResponse{}, err
+		}
+		tr, err := t.AccessTokens.FromSamlGrant(ctx, authParams, saml)
+		if err != nil {
+			return accesstokens.TokenResponse{}, err
+		}
+		return tr, nil
+	case authority.Managed:
+		if len(authParams.Scopes) == 0 {
+			err = fmt.Errorf("token request had an empty authority.AuthParams.Scopes, which is invalid")
+			return accesstokens.TokenResponse{}, err
+		}
+		return t.AccessTokens.FromUsernamePassword(ctx, authParams)
+	}
+	return accesstokens.TokenResponse{}, errors.New("unknown account type")
+}
+
+// DeviceCode is the result of a call to Client.DeviceCode().
+type DeviceCode struct {
+	// Result is the device code result from the first call in the device code flow. This allows
+	// the caller to retrieve the displayed code that is used to authorize on the second device.
+	Result     accesstokens.DeviceCodeResult
+	authParams authority.AuthParams
+
+	accessTokens AccessTokens
+}
+
+// Token returns a token AFTER the user uses the user code on the second device. This will block
+// until either: (1) the code is input by the user and the service releases a token, (2) the token
+// expires, (3) the Context passed to .DeviceCode() is cancelled or expires, (4) some other service
+// error occurs.
+func (d DeviceCode) Token(ctx context.Context) (accesstokens.TokenResponse, error) {
+	if d.accessTokens == nil {
+		return accesstokens.TokenResponse{}, fmt.Errorf("DeviceCode was either created outside its package or the creating method had an error. DeviceCode is not valid")
+	}
+
+	var cancel context.CancelFunc
+	if deadline, ok := ctx.Deadline(); !ok || d.Result.ExpiresOn.Before(deadline) {
+		ctx, cancel = context.WithDeadline(ctx, d.Result.ExpiresOn)
+	} else {
+		ctx, cancel = context.WithCancel(ctx)
+	}
+	defer cancel()
+
+	var interval = 50 * time.Millisecond
+	timer := time.NewTimer(interval)
+	defer timer.Stop()
+
+	for {
+		timer.Reset(interval)
+		select {
+		case <-ctx.Done():
+			return accesstokens.TokenResponse{}, ctx.Err()
+		case <-timer.C:
+			interval += interval * 2
+			if interval > 5*time.Second {
+				interval = 5 * time.Second
+			}
+		}
+
+		token, err := d.accessTokens.FromDeviceCodeResult(ctx, d.authParams, d.Result)
+		if err != nil && isWaitDeviceCodeErr(err) {
+			continue
+		}
+		return token, err // This handles if it was a non-wait error or success
+	}
+}
+
+type deviceCodeError struct {
+	Error string `json:"error"`
+}
+
+func isWaitDeviceCodeErr(err error) bool {
+	var c errors.CallErr
+	if !errors.As(err, &c) {
+		return false
+	}
+	if c.Resp.StatusCode != 400 {
+		return false
+	}
+	var dCErr deviceCodeError
+	defer c.Resp.Body.Close()
+	body, err := io.ReadAll(c.Resp.Body)
+	if err != nil {
+		return false
+	}
+	err = json.Unmarshal(body, &dCErr)
+	if err != nil {
+		return false
+	}
+	if dCErr.Error == "authorization_pending" || dCErr.Error == "slow_down" {
+		return true
+	}
+	return false
+}
+
+// DeviceCode returns a DeviceCode object that can be used to get the code that must be entered on the second
+// device and, optionally, the token once the code has been entered on the second device.
+func (t *Client) DeviceCode(ctx context.Context, authParams authority.AuthParams) (DeviceCode, error) {
+	if err := scopeError(authParams); err != nil {
+		return DeviceCode{}, err
+	}
+
+	if err := t.resolveEndpoint(ctx, &authParams, ""); err != nil {
+		return DeviceCode{}, err
+	}
+
+	dcr, err := t.AccessTokens.DeviceCodeResult(ctx, authParams)
+	if err != nil {
+		return DeviceCode{}, err
+	}
+
+	return DeviceCode{Result: dcr, authParams: authParams, accessTokens: t.AccessTokens}, nil
+}
+
+func (t *Client) resolveEndpoint(ctx context.Context, authParams *authority.AuthParams, userPrincipalName string) error {
+	endpoints, err := t.Resolver.ResolveEndpoints(ctx, authParams.AuthorityInfo, userPrincipalName)
+	if err != nil {
+		return fmt.Errorf("unable to resolve an endpoint: %s", err)
+	}
+	authParams.Endpoints = endpoints
+	return nil
+}
+
+// scopeError takes an authority.AuthParams and returns an error
+// if len(AuthParams.Scopes) == 0.
+func scopeError(a authority.AuthParams) error {
+	// TODO(someone): we could look deeper at the message to determine if
+	// it's a scope error, but this is a good start.
+	/*
+		{"error":"invalid_scope","error_description":"AADSTS1002012: The provided value for scope
+		openid offline_access profile is not valid.
Client credential flows must have a scope value
+		with /.default suffixed to the resource identifier (application ID URI)...}
+	*/
+	if len(a.Scopes) == 0 {
+		return fmt.Errorf("token request had an empty authority.AuthParams.Scopes, which is invalid")
+	}
+	return nil
+}
diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens/accesstokens.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens/accesstokens.go
new file mode 100644
index 000000000000..003d38648a6a
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens/accesstokens.go
@@ -0,0 +1,451 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+/*
+Package accesstokens exposes a REST client for querying backend systems to get various types of
+access tokens (oauth) for use in authentication.
+
+These calls are of type "application/x-www-form-urlencoded". This means we use url.Values to
+represent arguments and then encode them into the POST body message. We receive JSON in
+return for the requests. The request definition is defined in https://tools.ietf.org/html/rfc7521#section-4.2 .
+*/
+package accesstokens
+
+import (
+	"context"
+	"crypto"
+
+	/* #nosec */
+	"crypto/sha1"
+	"crypto/x509"
+	"encoding/base64"
+	"encoding/json"
+	"fmt"
+	"net/url"
+	"strconv"
+	"strings"
+	"time"
+
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/exported"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/grant"
+	"github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust"
+	"github.com/golang-jwt/jwt/v5"
+	"github.com/google/uuid"
+)
+
+const (
+	grantType     = "grant_type"
+	deviceCode    = "device_code"
+	clientID      = "client_id"
+	clientInfo    = "client_info"
+	clientInfoVal = "1"
+	username      = "username"
+	password      = "password"
+)
+
+//go:generate stringer -type=AppType
+
+// AppType is whether the authorization code flow is for a public or confidential client.
+type AppType int8
+
+const (
+	// ATUnknown is the zero value when the type hasn't been set.
+	ATUnknown AppType = iota
+	// ATPublic indicates this is for the Public.Client.
+	ATPublic
+	// ATConfidential indicates this is for the Confidential.Client.
+ ATConfidential +) + +type urlFormCaller interface { + URLFormCall(ctx context.Context, endpoint string, qv url.Values, resp interface{}) error +} + +// DeviceCodeResponse represents the HTTP response received from the device code endpoint +type DeviceCodeResponse struct { + authority.OAuthResponseBase + + UserCode string `json:"user_code"` + DeviceCode string `json:"device_code"` + VerificationURL string `json:"verification_url"` + ExpiresIn int `json:"expires_in"` + Interval int `json:"interval"` + Message string `json:"message"` + + AdditionalFields map[string]interface{} +} + +// Convert converts the DeviceCodeResponse to a DeviceCodeResult +func (dcr DeviceCodeResponse) Convert(clientID string, scopes []string) DeviceCodeResult { + expiresOn := time.Now().UTC().Add(time.Duration(dcr.ExpiresIn) * time.Second) + return NewDeviceCodeResult(dcr.UserCode, dcr.DeviceCode, dcr.VerificationURL, expiresOn, dcr.Interval, dcr.Message, clientID, scopes) +} + +// Credential represents the credential used in confidential client flows. This can be either +// a Secret or Cert/Key. +type Credential struct { + // Secret contains the credential secret if we are doing auth by secret. + Secret string + + // Cert is the public certificate, if we're authenticating by certificate. + Cert *x509.Certificate + // Key is the private key for signing, if we're authenticating by certificate. + Key crypto.PrivateKey + // X5c is the JWT assertion's x5c header value, required for SN/I authentication. + X5c []string + + // AssertionCallback is a function provided by the application, if we're authenticating by assertion. + AssertionCallback func(context.Context, exported.AssertionRequestOptions) (string, error) + + // TokenProvider is a function provided by the application that implements custom authentication + // logic for a confidential client + TokenProvider func(context.Context, exported.TokenProviderParameters) (exported.TokenProviderResult, error) +} + +// JWT gets the jwt assertion when the credential is not using a secret. +func (c *Credential) JWT(ctx context.Context, authParams authority.AuthParams) (string, error) { + if c.AssertionCallback != nil { + options := exported.AssertionRequestOptions{ + ClientID: authParams.ClientID, + TokenEndpoint: authParams.Endpoints.TokenEndpoint, + } + return c.AssertionCallback(ctx, options) + } + + token := jwt.NewWithClaims(jwt.SigningMethodRS256, jwt.MapClaims{ + "aud": authParams.Endpoints.TokenEndpoint, + "exp": json.Number(strconv.FormatInt(time.Now().Add(10*time.Minute).Unix(), 10)), + "iss": authParams.ClientID, + "jti": uuid.New().String(), + "nbf": json.Number(strconv.FormatInt(time.Now().Unix(), 10)), + "sub": authParams.ClientID, + }) + token.Header = map[string]interface{}{ + "alg": "RS256", + "typ": "JWT", + "x5t": base64.StdEncoding.EncodeToString(thumbprint(c.Cert)), + } + + if authParams.SendX5C { + token.Header["x5c"] = c.X5c + } + + assertion, err := token.SignedString(c.Key) + if err != nil { + return "", fmt.Errorf("unable to sign a JWT token using private key: %w", err) + } + return assertion, nil +} + +// thumbprint runs the asn1.Der bytes through sha1 for use in the x5t parameter of JWT. +// https://tools.ietf.org/html/rfc7517#section-4.8 +func thumbprint(cert *x509.Certificate) []byte { + /* #nosec */ + a := sha1.Sum(cert.Raw) + return a[:] +} + +// Client represents the REST calls to get tokens from token generator backends. +type Client struct { + // Comm provides the HTTP transport client. 
+	Comm urlFormCaller
+
+	testing bool
+}
+
+// FromUsernamePassword uses a username and password to get an access token.
+func (c Client) FromUsernamePassword(ctx context.Context, authParameters authority.AuthParams) (TokenResponse, error) {
+	qv := url.Values{}
+	if err := addClaims(qv, authParameters); err != nil {
+		return TokenResponse{}, err
+	}
+	qv.Set(grantType, grant.Password)
+	qv.Set(username, authParameters.Username)
+	qv.Set(password, authParameters.Password)
+	qv.Set(clientID, authParameters.ClientID)
+	qv.Set(clientInfo, clientInfoVal)
+	addScopeQueryParam(qv, authParameters)
+
+	return c.doTokenResp(ctx, authParameters, qv)
+}
+
+// AuthCodeRequest stores the values required to request a token from the authority using an authorization code.
+type AuthCodeRequest struct {
+	AuthParams    authority.AuthParams
+	Code          string
+	CodeChallenge string
+	Credential    *Credential
+	AppType       AppType
+}
+
+// NewCodeChallengeRequest returns an AuthCodeRequest that uses a code challenge.
+func NewCodeChallengeRequest(params authority.AuthParams, appType AppType, cc *Credential, code, challenge string) (AuthCodeRequest, error) {
+	if appType == ATUnknown {
+		return AuthCodeRequest{}, fmt.Errorf("bug: NewCodeChallengeRequest() called with AppType == ATUnknown")
+	}
+	return AuthCodeRequest{
+		AuthParams:    params,
+		AppType:       appType,
+		Code:          code,
+		CodeChallenge: challenge,
+		Credential:    cc,
+	}, nil
+}
+
+// FromAuthCode uses an authorization code to retrieve an access token.
+func (c Client) FromAuthCode(ctx context.Context, req AuthCodeRequest) (TokenResponse, error) {
+	var qv url.Values
+
+	switch req.AppType {
+	case ATUnknown:
+		return TokenResponse{}, fmt.Errorf("bug: Token.AuthCode() received request with AppType == ATUnknown")
+	case ATConfidential:
+		var err error
+		if req.Credential == nil {
+			return TokenResponse{}, fmt.Errorf("AuthCodeRequest had nil Credential for Confidential app")
+		}
+		qv, err = prepURLVals(ctx, req.Credential, req.AuthParams)
+		if err != nil {
+			return TokenResponse{}, err
+		}
+	case ATPublic:
+		qv = url.Values{}
+	default:
+		return TokenResponse{}, fmt.Errorf("bug: Token.AuthCode() received request with AppType == %v, which we do not recognize", req.AppType)
+	}
+
+	qv.Set(grantType, grant.AuthCode)
+	qv.Set("code", req.Code)
+	qv.Set("code_verifier", req.CodeChallenge)
+	qv.Set("redirect_uri", req.AuthParams.Redirecturi)
+	qv.Set(clientID, req.AuthParams.ClientID)
+	qv.Set(clientInfo, clientInfoVal)
+	addScopeQueryParam(qv, req.AuthParams)
+	if err := addClaims(qv, req.AuthParams); err != nil {
+		return TokenResponse{}, err
+	}
+
+	return c.doTokenResp(ctx, req.AuthParams, qv)
+}
+
+// FromRefreshToken uses a refresh token (for refreshing credentials) to get a new access token.
+func (c Client) FromRefreshToken(ctx context.Context, appType AppType, authParams authority.AuthParams, cc *Credential, refreshToken string) (TokenResponse, error) {
+	qv := url.Values{}
+	if appType == ATConfidential {
+		var err error
+		qv, err = prepURLVals(ctx, cc, authParams)
+		if err != nil {
+			return TokenResponse{}, err
+		}
+	}
+	if err := addClaims(qv, authParams); err != nil {
+		return TokenResponse{}, err
+	}
+	qv.Set(grantType, grant.RefreshToken)
+	qv.Set(clientID, authParams.ClientID)
+	qv.Set(clientInfo, clientInfoVal)
+	qv.Set("refresh_token", refreshToken)
+	addScopeQueryParam(qv, authParams)
+
+	return c.doTokenResp(ctx, authParams, qv)
+}
+
+// FromClientSecret uses a client's secret (aka password) to get a new token.
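FromAuthCode's handling of AppType above is the crux: confidential clients get their secret or assertion merged in via prepURLVals, while public clients rely on the PKCE verifier alone. A hedged sketch of the public-client path, where authCode and pkceVerifier are placeholders captured from the earlier authorize redirect:

	req, err := NewCodeChallengeRequest(authParams, ATPublic, nil, authCode, pkceVerifier)
	if err != nil {
		return err
	}
	tr, err := c.FromAuthCode(ctx, req)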
+func (c Client) FromClientSecret(ctx context.Context, authParameters authority.AuthParams, clientSecret string) (TokenResponse, error) { + qv := url.Values{} + if err := addClaims(qv, authParameters); err != nil { + return TokenResponse{}, err + } + qv.Set(grantType, grant.ClientCredential) + qv.Set("client_secret", clientSecret) + qv.Set(clientID, authParameters.ClientID) + addScopeQueryParam(qv, authParameters) + + token, err := c.doTokenResp(ctx, authParameters, qv) + if err != nil { + return token, fmt.Errorf("FromClientSecret(): %w", err) + } + return token, nil +} + +func (c Client) FromAssertion(ctx context.Context, authParameters authority.AuthParams, assertion string) (TokenResponse, error) { + qv := url.Values{} + if err := addClaims(qv, authParameters); err != nil { + return TokenResponse{}, err + } + qv.Set(grantType, grant.ClientCredential) + qv.Set("client_assertion_type", grant.ClientAssertion) + qv.Set("client_assertion", assertion) + qv.Set(clientID, authParameters.ClientID) + qv.Set(clientInfo, clientInfoVal) + addScopeQueryParam(qv, authParameters) + + token, err := c.doTokenResp(ctx, authParameters, qv) + if err != nil { + return token, fmt.Errorf("FromAssertion(): %w", err) + } + return token, nil +} + +func (c Client) FromUserAssertionClientSecret(ctx context.Context, authParameters authority.AuthParams, userAssertion string, clientSecret string) (TokenResponse, error) { + qv := url.Values{} + if err := addClaims(qv, authParameters); err != nil { + return TokenResponse{}, err + } + qv.Set(grantType, grant.JWT) + qv.Set(clientID, authParameters.ClientID) + qv.Set("client_secret", clientSecret) + qv.Set("assertion", userAssertion) + qv.Set(clientInfo, clientInfoVal) + qv.Set("requested_token_use", "on_behalf_of") + addScopeQueryParam(qv, authParameters) + + return c.doTokenResp(ctx, authParameters, qv) +} + +func (c Client) FromUserAssertionClientCertificate(ctx context.Context, authParameters authority.AuthParams, userAssertion string, assertion string) (TokenResponse, error) { + qv := url.Values{} + if err := addClaims(qv, authParameters); err != nil { + return TokenResponse{}, err + } + qv.Set(grantType, grant.JWT) + qv.Set("client_assertion_type", grant.ClientAssertion) + qv.Set("client_assertion", assertion) + qv.Set(clientID, authParameters.ClientID) + qv.Set("assertion", userAssertion) + qv.Set(clientInfo, clientInfoVal) + qv.Set("requested_token_use", "on_behalf_of") + addScopeQueryParam(qv, authParameters) + + return c.doTokenResp(ctx, authParameters, qv) +} + +func (c Client) DeviceCodeResult(ctx context.Context, authParameters authority.AuthParams) (DeviceCodeResult, error) { + qv := url.Values{} + if err := addClaims(qv, authParameters); err != nil { + return DeviceCodeResult{}, err + } + qv.Set(clientID, authParameters.ClientID) + addScopeQueryParam(qv, authParameters) + + endpoint := strings.Replace(authParameters.Endpoints.TokenEndpoint, "token", "devicecode", -1) + + resp := DeviceCodeResponse{} + err := c.Comm.URLFormCall(ctx, endpoint, qv, &resp) + if err != nil { + return DeviceCodeResult{}, err + } + + return resp.Convert(authParameters.ClientID, authParameters.Scopes), nil +} + +func (c Client) FromDeviceCodeResult(ctx context.Context, authParameters authority.AuthParams, deviceCodeResult DeviceCodeResult) (TokenResponse, error) { + qv := url.Values{} + if err := addClaims(qv, authParameters); err != nil { + return TokenResponse{}, err + } + qv.Set(grantType, grant.DeviceCode) + qv.Set(deviceCode, deviceCodeResult.DeviceCode) + qv.Set(clientID, 
authParameters.ClientID) + qv.Set(clientInfo, clientInfoVal) + addScopeQueryParam(qv, authParameters) + + return c.doTokenResp(ctx, authParameters, qv) +} + +func (c Client) FromSamlGrant(ctx context.Context, authParameters authority.AuthParams, samlGrant wstrust.SamlTokenInfo) (TokenResponse, error) { + qv := url.Values{} + if err := addClaims(qv, authParameters); err != nil { + return TokenResponse{}, err + } + qv.Set(username, authParameters.Username) + qv.Set(password, authParameters.Password) + qv.Set(clientID, authParameters.ClientID) + qv.Set(clientInfo, clientInfoVal) + qv.Set("assertion", base64.StdEncoding.WithPadding(base64.StdPadding).EncodeToString([]byte(samlGrant.Assertion))) + addScopeQueryParam(qv, authParameters) + + switch samlGrant.AssertionType { + case grant.SAMLV1: + qv.Set(grantType, grant.SAMLV1) + case grant.SAMLV2: + qv.Set(grantType, grant.SAMLV2) + default: + return TokenResponse{}, fmt.Errorf("GetAccessTokenFromSamlGrant returned unknown SAML assertion type: %q", samlGrant.AssertionType) + } + + return c.doTokenResp(ctx, authParameters, qv) +} + +func (c Client) doTokenResp(ctx context.Context, authParams authority.AuthParams, qv url.Values) (TokenResponse, error) { + resp := TokenResponse{} + err := c.Comm.URLFormCall(ctx, authParams.Endpoints.TokenEndpoint, qv, &resp) + if err != nil { + return resp, err + } + resp.ComputeScope(authParams) + if c.testing { + return resp, nil + } + return resp, resp.Validate() +} + +// prepURLVals returns an url.Values that sets various key/values if we are doing secrets +// or JWT assertions. +func prepURLVals(ctx context.Context, cc *Credential, authParams authority.AuthParams) (url.Values, error) { + params := url.Values{} + if cc.Secret != "" { + params.Set("client_secret", cc.Secret) + return params, nil + } + + jwt, err := cc.JWT(ctx, authParams) + if err != nil { + return nil, err + } + params.Set("client_assertion", jwt) + params.Set("client_assertion_type", grant.ClientAssertion) + return params, nil +} + +// openid required to get an id token +// offline_access required to get a refresh token +// profile required to get the client_info field back +var detectDefaultScopes = map[string]bool{ + "openid": true, + "offline_access": true, + "profile": true, +} + +var defaultScopes = []string{"openid", "offline_access", "profile"} + +func AppendDefaultScopes(authParameters authority.AuthParams) []string { + scopes := make([]string, 0, len(authParameters.Scopes)+len(defaultScopes)) + for _, scope := range authParameters.Scopes { + s := strings.TrimSpace(scope) + if s == "" { + continue + } + if detectDefaultScopes[scope] { + continue + } + scopes = append(scopes, scope) + } + scopes = append(scopes, defaultScopes...) 
+ return scopes +} + +// addClaims adds client capabilities and claims from AuthParams to the given url.Values +func addClaims(v url.Values, ap authority.AuthParams) error { + claims, err := ap.MergeCapabilitiesAndClaims() + if err == nil && claims != "" { + v.Set("claims", claims) + } + return err +} + +func addScopeQueryParam(queryParams url.Values, authParameters authority.AuthParams) { + scopes := AppendDefaultScopes(authParameters) + queryParams.Set("scope", strings.Join(scopes, " ")) +} diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens/apptype_string.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens/apptype_string.go new file mode 100644 index 000000000000..3bec4a67cf10 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens/apptype_string.go @@ -0,0 +1,25 @@ +// Code generated by "stringer -type=AppType"; DO NOT EDIT. + +package accesstokens + +import "strconv" + +func _() { + // An "invalid array index" compiler error signifies that the constant values have changed. + // Re-run the stringer command to generate them again. + var x [1]struct{} + _ = x[ATUnknown-0] + _ = x[ATPublic-1] + _ = x[ATConfidential-2] +} + +const _AppType_name = "ATUnknownATPublicATConfidential" + +var _AppType_index = [...]uint8{0, 9, 17, 31} + +func (i AppType) String() string { + if i < 0 || i >= AppType(len(_AppType_index)-1) { + return "AppType(" + strconv.FormatInt(int64(i), 10) + ")" + } + return _AppType_name[_AppType_index[i]:_AppType_index[i+1]] +} diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens/tokens.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens/tokens.go new file mode 100644 index 000000000000..3dd61d5b5f08 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens/tokens.go @@ -0,0 +1,338 @@ +// Copyright (c) Microsoft Corporation. +// Licensed under the MIT license. + +package accesstokens + +import ( + "bytes" + "encoding/base64" + "encoding/json" + "errors" + "fmt" + "reflect" + "strings" + "time" + + internalTime "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/types/time" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/shared" +) + +// IDToken consists of all the information used to validate a user. +// https://docs.microsoft.com/azure/active-directory/develop/id-tokens . 
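The IDToken type below is populated by a custom unmarshaler that splits the raw JWT on "." and base64url-decodes only the payload segment; no signature verification happens here. Roughly, and only as a sketch of what UnmarshalJSON and decodeJWT do further down:

	parts := strings.Split(rawJWT, ".")
	if len(parts) < 2 {
		return errors.New("IDToken returned from server is invalid")
	}
	payload, err := base64.RawURLEncoding.DecodeString(parts[1]) // header and signature are ignored
	if err != nil {
		return err
	}
	// the real code unmarshals into a type alias to avoid recursing into the custom UnmarshalJSON
	err = json.Unmarshal(payload, &token)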
+type IDToken struct { + PreferredUsername string `json:"preferred_username,omitempty"` + GivenName string `json:"given_name,omitempty"` + FamilyName string `json:"family_name,omitempty"` + MiddleName string `json:"middle_name,omitempty"` + Name string `json:"name,omitempty"` + Oid string `json:"oid,omitempty"` + TenantID string `json:"tid,omitempty"` + Subject string `json:"sub,omitempty"` + UPN string `json:"upn,omitempty"` + Email string `json:"email,omitempty"` + AlternativeID string `json:"alternative_id,omitempty"` + Issuer string `json:"iss,omitempty"` + Audience string `json:"aud,omitempty"` + ExpirationTime int64 `json:"exp,omitempty"` + IssuedAt int64 `json:"iat,omitempty"` + NotBefore int64 `json:"nbf,omitempty"` + RawToken string + + AdditionalFields map[string]interface{} +} + +var null = []byte("null") + +// UnmarshalJSON implements json.Unmarshaler. +func (i *IDToken) UnmarshalJSON(b []byte) error { + if bytes.Equal(null, b) { + return nil + } + + // Because we have a custom unmarshaler, you + // cannot directly call json.Unmarshal here. If you do, it will call this function + // recursively until reach our recursion limit. We have to create a new type + // that doesn't have this method in order to use json.Unmarshal. + type idToken2 IDToken + + jwt := strings.Trim(string(b), `"`) + jwtArr := strings.Split(jwt, ".") + if len(jwtArr) < 2 { + return errors.New("IDToken returned from server is invalid") + } + + jwtPart := jwtArr[1] + jwtDecoded, err := decodeJWT(jwtPart) + if err != nil { + return fmt.Errorf("unable to unmarshal IDToken, problem decoding JWT: %w", err) + } + + token := idToken2{} + err = json.Unmarshal(jwtDecoded, &token) + if err != nil { + return fmt.Errorf("unable to unmarshal IDToken: %w", err) + } + token.RawToken = jwt + + *i = IDToken(token) + return nil +} + +// IsZero indicates if the IDToken is the zero value. +func (i IDToken) IsZero() bool { + v := reflect.ValueOf(i) + for i := 0; i < v.NumField(); i++ { + field := v.Field(i) + if !field.IsZero() { + switch field.Kind() { + case reflect.Map, reflect.Slice: + if field.Len() == 0 { + continue + } + } + return false + } + } + return true +} + +// LocalAccountID extracts an account's local account ID from an ID token. +func (i IDToken) LocalAccountID() string { + if i.Oid != "" { + return i.Oid + } + return i.Subject +} + +// jwtDecoder is provided to allow tests to provide their own. +var jwtDecoder = decodeJWT + +// ClientInfo is used to create a Home Account ID for an account. +type ClientInfo struct { + UID string `json:"uid"` + UTID string `json:"utid"` + + AdditionalFields map[string]interface{} +} + +// UnmarshalJSON implements json.Unmarshaler.s +func (c *ClientInfo) UnmarshalJSON(b []byte) error { + s := strings.Trim(string(b), `"`) + // Client info may be empty in some flows, e.g. certificate exchange. + if len(s) == 0 { + return nil + } + + // Because we have a custom unmarshaler, you + // cannot directly call json.Unmarshal here. If you do, it will call this function + // recursively until reach our recursion limit. We have to create a new type + // that doesn't have this method in order to use json.Unmarshal. 
+	type clientInfo2 ClientInfo
+
+	raw, err := jwtDecoder(s)
+	if err != nil {
+		return fmt.Errorf("TokenResponse client_info field had JWT decode error: %w", err)
+	}
+
+	var c2 clientInfo2
+
+	err = json.Unmarshal(raw, &c2)
+	if err != nil {
+		return fmt.Errorf("was unable to unmarshal decoded JWT in TokenResponse to ClientInfo: %w", err)
+	}
+
+	*c = ClientInfo(c2)
+	return nil
+}
+
+// Scopes represents scopes in a TokenResponse.
+type Scopes struct {
+	Slice []string
+}
+
+// UnmarshalJSON implements json.Unmarshaler.
+func (s *Scopes) UnmarshalJSON(b []byte) error {
+	str := strings.Trim(string(b), `"`)
+	if len(str) == 0 {
+		return nil
+	}
+	sl := strings.Split(str, " ")
+	s.Slice = sl
+	return nil
+}
+
+// TokenResponse is the information that is returned from a token endpoint during a token acquisition flow.
+type TokenResponse struct {
+	authority.OAuthResponseBase
+
+	AccessToken  string `json:"access_token"`
+	RefreshToken string `json:"refresh_token"`
+
+	FamilyID       string                    `json:"foci"`
+	IDToken        IDToken                   `json:"id_token"`
+	ClientInfo     ClientInfo                `json:"client_info"`
+	ExpiresOn      internalTime.DurationTime `json:"expires_in"`
+	ExtExpiresOn   internalTime.DurationTime `json:"ext_expires_in"`
+	GrantedScopes  Scopes                    `json:"scope"`
+	DeclinedScopes []string // This is derived
+
+	AdditionalFields map[string]interface{}
+
+	scopesComputed bool
+}
+
+// ComputeScope computes the final scopes based on what was granted by the server and
+// what our AuthParams were from the authority server. Per the OAuth spec, if no scopes are
+// returned, the response should be treated as if all scopes were granted. This behavior can be
+// observed in client assertion flows, but can happen at any time; this check ensures we treat
+// those special responses properly. Link to spec: https://tools.ietf.org/html/rfc6749#section-3.3
+func (tr *TokenResponse) ComputeScope(authParams authority.AuthParams) {
+	if len(tr.GrantedScopes.Slice) == 0 {
+		tr.GrantedScopes = Scopes{Slice: authParams.Scopes}
+	} else {
+		tr.DeclinedScopes = findDeclinedScopes(authParams.Scopes, tr.GrantedScopes.Slice)
+	}
+	tr.scopesComputed = true
+}
+
+// HomeAccountID uniquely identifies the authenticated account, if any. It's "" when the token is an app token.
+func (tr *TokenResponse) HomeAccountID() string {
+	id := tr.IDToken.Subject
+	if uid := tr.ClientInfo.UID; uid != "" {
+		utid := tr.ClientInfo.UTID
+		if utid == "" {
+			utid = uid
+		}
+		id = fmt.Sprintf("%s.%s", uid, utid)
+	}
+	return id
+}
+
+// Validate validates that the TokenResponse has basic valid values. It must be called
+// after ComputeScope() is called.
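Validate below is deliberately coupled to ComputeScope: a TokenResponse fresh off the wire fails Validate until its scopes have been computed. doTokenResp in this file already enforces the ordering; a condensed sketch of that contract:

	resp := TokenResponse{}
	if err := c.Comm.URLFormCall(ctx, endpoint, qv, &resp); err != nil {
		return resp, err
	}
	resp.ComputeScope(authParams) // sets scopesComputed
	return resp, resp.Validate()  // safe to call only after ComputeScope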
+func (tr *TokenResponse) Validate() error {
+	if tr.Error != "" {
+		return fmt.Errorf("%s: %s", tr.Error, tr.ErrorDescription)
+	}
+
+	if tr.AccessToken == "" {
+		return errors.New("response is missing access_token")
+	}
+
+	if !tr.scopesComputed {
+		return fmt.Errorf("TokenResponse hasn't had ComputeScope() called")
+	}
+	return nil
+}
+
+func (tr *TokenResponse) CacheKey(authParams authority.AuthParams) string {
+	if authParams.AuthorizationType == authority.ATOnBehalfOf {
+		return authParams.AssertionHash()
+	}
+	if authParams.AuthorizationType == authority.ATClientCredentials {
+		return authParams.AppKey()
+	}
+	if authParams.IsConfidentialClient || authParams.AuthorizationType == authority.ATRefreshToken {
+		return tr.HomeAccountID()
+	}
+	return ""
+}
+
+func findDeclinedScopes(requestedScopes []string, grantedScopes []string) []string {
+	declined := []string{}
+	grantedMap := map[string]bool{}
+	for _, s := range grantedScopes {
+		grantedMap[strings.ToLower(s)] = true
+	}
+	// Comparing the requested scopes with the granted scopes to see if there are any scopes that have been declined.
+	for _, r := range requestedScopes {
+		if !grantedMap[strings.ToLower(r)] {
+			declined = append(declined, r)
+		}
+	}
+	return declined
+}
+
+// decodeJWT decodes a JWT and converts it to a byte array representing a JSON object.
+// JWT has headers and payload base64url encoded without padding.
+// https://tools.ietf.org/html/rfc7519#section-3 and
+// https://tools.ietf.org/html/rfc7515#section-2
+func decodeJWT(data string) ([]byte, error) {
+	// https://tools.ietf.org/html/rfc7515#appendix-C
+	return base64.RawURLEncoding.DecodeString(data)
+}
+
+// RefreshToken is the JSON representation of a MSAL refresh token for encoding to storage.
+type RefreshToken struct {
+	HomeAccountID     string `json:"home_account_id,omitempty"`
+	Environment       string `json:"environment,omitempty"`
+	CredentialType    string `json:"credential_type,omitempty"`
+	ClientID          string `json:"client_id,omitempty"`
+	FamilyID          string `json:"family_id,omitempty"`
+	Secret            string `json:"secret,omitempty"`
+	Realm             string `json:"realm,omitempty"`
+	Target            string `json:"target,omitempty"`
+	UserAssertionHash string `json:"user_assertion_hash,omitempty"`
+
+	AdditionalFields map[string]interface{}
+}
+
+// NewRefreshToken is the constructor for RefreshToken.
+func NewRefreshToken(homeID, env, clientID, refreshToken, familyID string) RefreshToken {
+	return RefreshToken{
+		HomeAccountID:  homeID,
+		Environment:    env,
+		CredentialType: "RefreshToken",
+		ClientID:       clientID,
+		FamilyID:       familyID,
+		Secret:         refreshToken,
+	}
+}
+
+// Key outputs the key that can be used to uniquely look up this entry in a map.
+func (rt RefreshToken) Key() string {
+	var fourth = rt.FamilyID
+	if fourth == "" {
+		fourth = rt.ClientID
+	}
+
+	key := strings.Join(
+		[]string{rt.HomeAccountID, rt.Environment, rt.CredentialType, fourth},
+		shared.CacheKeySeparator,
+	)
+	return strings.ToLower(key)
+}
+
+func (rt RefreshToken) GetSecret() string {
+	return rt.Secret
+}
+
+// DeviceCodeResult stores the response from the STS device code endpoint.
+type DeviceCodeResult struct {
+	// UserCode is the code the user needs to provide when authenticating at the verification URI.
+	UserCode string
+	// DeviceCode is the code used in the access token request.
+	DeviceCode string
+	// VerificationURL is the URL where the user can authenticate.
+	VerificationURL string
+	// ExpiresOn is the time at which the device code expires.
+	ExpiresOn time.Time
+	// Interval is the interval at which the STS should be polled.
+	Interval int
+	// Message is the message which should be displayed to the user.
+	Message string
+	// ClientID is the UUID issued by the authorization server for your application.
+	ClientID string
+	// Scopes is the OpenID scopes used to request access to a protected API.
+	Scopes []string
+}
+
+// NewDeviceCodeResult creates a DeviceCodeResult instance.
+func NewDeviceCodeResult(userCode, deviceCode, verificationURL string, expiresOn time.Time, interval int, message, clientID string, scopes []string) DeviceCodeResult {
+	return DeviceCodeResult{userCode, deviceCode, verificationURL, expiresOn, interval, message, clientID, scopes}
+}
+
+func (dcr DeviceCodeResult) String() string {
+	return fmt.Sprintf("UserCode: (%v)\nDeviceCode: (%v)\nURL: (%v)\nMessage: (%v)\n", dcr.UserCode, dcr.DeviceCode, dcr.VerificationURL, dcr.Message)
+
+}
diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority/authority.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority/authority.go
new file mode 100644
index 000000000000..7b2ccb4f5d20
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority/authority.go
@@ -0,0 +1,552 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+package authority
+
+import (
+	"context"
+	"crypto/sha256"
+	"encoding/base64"
+	"encoding/json"
+	"errors"
+	"fmt"
+	"io"
+	"net/http"
+	"net/url"
+	"os"
+	"path"
+	"strings"
+	"time"
+
+	"github.com/google/uuid"
+)
+
+const (
+	authorizationEndpoint             = "https://%v/%v/oauth2/v2.0/authorize"
+	instanceDiscoveryEndpoint         = "https://%v/common/discovery/instance"
+	tenantDiscoveryEndpointWithRegion = "https://%s.%s/%s/v2.0/.well-known/openid-configuration"
+	regionName                        = "REGION_NAME"
+	defaultAPIVersion                 = "2021-10-01"
+	imdsEndpoint                      = "http://169.254.169.254/metadata/instance/compute/location?format=text&api-version=" + defaultAPIVersion
+	autoDetectRegion                  = "TryAutoDetect"
+)
+
+// These are various hosts that host AAD Instance discovery endpoints.
+const (
+	defaultHost          = "login.microsoftonline.com"
+	loginMicrosoft       = "login.microsoft.com"
+	loginWindows         = "login.windows.net"
+	loginSTSWindows      = "sts.windows.net"
+	loginMicrosoftOnline = defaultHost
+)
+
+// jsonCaller is an interface that allows us to mock the JSONCall method.
+type jsonCaller interface {
+	JSONCall(ctx context.Context, endpoint string, headers http.Header, qv url.Values, body, resp interface{}) error
+}
+
+var aadTrustedHostList = map[string]bool{
+	"login.windows.net":            true, // Microsoft Azure Worldwide - Used in validation scenarios where the host is not in this list
+	"login.chinacloudapi.cn":       true, // Microsoft Azure China
+	"login.microsoftonline.de":     true, // Microsoft Azure Blackforest
+	"login-us.microsoftonline.com": true, // Microsoft Azure US Government - Legacy
+	"login.microsoftonline.us":     true, // Microsoft Azure US Government
+	"login.microsoftonline.com":    true, // Microsoft Azure Worldwide
+	"login.cloudgovapi.us":         true, // Microsoft Azure US Government
+}
+
+// TrustedHost checks if an AAD host is trusted/valid.
+func TrustedHost(host string) bool {
+	if _, ok := aadTrustedHostList[host]; ok {
+		return true
+	}
+	return false
+}
+
+// OAuthResponseBase is the base JSON return message for an OAuth call.
+// This is embedded in other calls to get the base fields from every response. +type OAuthResponseBase struct { + Error string `json:"error"` + SubError string `json:"suberror"` + ErrorDescription string `json:"error_description"` + ErrorCodes []int `json:"error_codes"` + CorrelationID string `json:"correlation_id"` + Claims string `json:"claims"` +} + +// TenantDiscoveryResponse is the tenant endpoints from the OpenID configuration endpoint. +type TenantDiscoveryResponse struct { + OAuthResponseBase + + AuthorizationEndpoint string `json:"authorization_endpoint"` + TokenEndpoint string `json:"token_endpoint"` + Issuer string `json:"issuer"` + + AdditionalFields map[string]interface{} +} + +// Validate validates that the response had the correct values required. +func (r *TenantDiscoveryResponse) Validate() error { + switch "" { + case r.AuthorizationEndpoint: + return errors.New("TenantDiscoveryResponse: authorize endpoint was not found in the openid configuration") + case r.TokenEndpoint: + return errors.New("TenantDiscoveryResponse: token endpoint was not found in the openid configuration") + case r.Issuer: + return errors.New("TenantDiscoveryResponse: issuer was not found in the openid configuration") + } + return nil +} + +type InstanceDiscoveryMetadata struct { + PreferredNetwork string `json:"preferred_network"` + PreferredCache string `json:"preferred_cache"` + Aliases []string `json:"aliases"` + + AdditionalFields map[string]interface{} +} + +type InstanceDiscoveryResponse struct { + TenantDiscoveryEndpoint string `json:"tenant_discovery_endpoint"` + Metadata []InstanceDiscoveryMetadata `json:"metadata"` + + AdditionalFields map[string]interface{} +} + +//go:generate stringer -type=AuthorizeType + +// AuthorizeType represents the type of token flow. +type AuthorizeType int + +// These are all the types of token flows. +const ( + ATUnknown AuthorizeType = iota + ATUsernamePassword + ATWindowsIntegrated + ATAuthCode + ATInteractive + ATClientCredentials + ATDeviceCode + ATRefreshToken + AccountByID + ATOnBehalfOf +) + +// These are all authority types +const ( + AAD = "MSSTS" + ADFS = "ADFS" +) + +// AuthParams represents the parameters used for authorization for token acquisition. +type AuthParams struct { + AuthorityInfo Info + CorrelationID string + Endpoints Endpoints + ClientID string + // Redirecturi is used for auth flows that specify a redirect URI (e.g. local server for interactive auth flow). + Redirecturi string + HomeAccountID string + // Username is the user-name portion for username/password auth flow. + Username string + // Password is the password portion for username/password auth flow. + Password string + // Scopes is the list of scopes the user consents to. + Scopes []string + // AuthorizationType specifies the auth flow being used. + AuthorizationType AuthorizeType + // State is a random value used to prevent cross-site request forgery attacks. + State string + // CodeChallenge is derived from a code verifier and is sent in the auth request. + CodeChallenge string + // CodeChallengeMethod describes the method used to create the CodeChallenge. + CodeChallengeMethod string + // Prompt specifies the user prompt type during interactive auth. + Prompt string + // IsConfidentialClient specifies if it is a confidential client. + IsConfidentialClient bool + // SendX5C specifies if x5c claim(public key of the certificate) should be sent to STS. 
+ SendX5C bool + // UserAssertion is the access token used to acquire token on behalf of user + UserAssertion string + // Capabilities the client will include with each token request, for example "CP1". + // Call [NewClientCapabilities] to construct a value for this field. + Capabilities ClientCapabilities + // Claims required for an access token to satisfy a conditional access policy + Claims string + // KnownAuthorityHosts don't require metadata discovery because they're known to the user + KnownAuthorityHosts []string + // LoginHint is a username with which to pre-populate account selection during interactive auth + LoginHint string + // DomainHint is a directive that can be used to accelerate the user to their federated IdP sign-in page + DomainHint string +} + +// NewAuthParams creates an authorization parameters object. +func NewAuthParams(clientID string, authorityInfo Info) AuthParams { + return AuthParams{ + ClientID: clientID, + AuthorityInfo: authorityInfo, + CorrelationID: uuid.New().String(), + } +} + +// WithTenant returns a copy of the AuthParams having the specified tenant ID. If the given +// ID is empty, the copy is identical to the original. This function returns an error in +// several cases: +// - ID isn't specific (for example, it's "common") +// - ID is non-empty and the authority doesn't support tenants (for example, it's an ADFS authority) +// - the client is configured to authenticate only Microsoft accounts via the "consumers" endpoint +// - the resulting authority URL is invalid +func (p AuthParams) WithTenant(ID string) (AuthParams, error) { + switch ID { + case "", p.AuthorityInfo.Tenant: + // keep the default tenant because the caller didn't override it + return p, nil + case "common", "consumers", "organizations": + if p.AuthorityInfo.AuthorityType == AAD { + return p, fmt.Errorf(`tenant ID must be a specific tenant, not "%s"`, ID) + } + // else we'll return a better error below + } + if p.AuthorityInfo.AuthorityType != AAD { + return p, errors.New("the authority doesn't support tenants") + } + if p.AuthorityInfo.Tenant == "consumers" { + return p, errors.New(`client is configured to authenticate only personal Microsoft accounts, via the "consumers" endpoint`) + } + authority := "https://" + path.Join(p.AuthorityInfo.Host, ID) + info, err := NewInfoFromAuthorityURI(authority, p.AuthorityInfo.ValidateAuthority, p.AuthorityInfo.InstanceDiscoveryDisabled) + if err == nil { + info.Region = p.AuthorityInfo.Region + p.AuthorityInfo = info + } + return p, err +} + +// MergeCapabilitiesAndClaims combines client capabilities and challenge claims into a value suitable for an authentication request's "claims" parameter. +func (p AuthParams) MergeCapabilitiesAndClaims() (string, error) { + claims := p.Claims + if len(p.Capabilities.asMap) > 0 { + if claims == "" { + // without claims the result is simply the capabilities + return p.Capabilities.asJSON, nil + } + // Otherwise, merge claims and capabilties into a single JSON object. + // We handle the claims challenge as a map because we don't know its structure. + var challenge map[string]any + if err := json.Unmarshal([]byte(claims), &challenge); err != nil { + return "", fmt.Errorf(`claims must be JSON. Are they base64 encoded? 
json.Unmarshal returned "%v"`, err) + } + if err := merge(p.Capabilities.asMap, challenge); err != nil { + return "", err + } + b, err := json.Marshal(challenge) + if err != nil { + return "", err + } + claims = string(b) + } + return claims, nil +} + +// merges a into b without overwriting b's values. Returns an error when a and b share a key for which either has a non-object value. +func merge(a, b map[string]any) error { + for k, av := range a { + if bv, ok := b[k]; !ok { + // b doesn't contain this key => simply set it to a's value + b[k] = av + } else { + // b does contain this key => recursively merge a[k] into b[k], provided both are maps. If a[k] or b[k] isn't + // a map, return an error because merging would overwrite some value in b. Errors shouldn't occur in practice + // because the challenge will be from AAD, which knows the capabilities format. + if A, ok := av.(map[string]any); ok { + if B, ok := bv.(map[string]any); ok { + return merge(A, B) + } else { + // b[k] isn't a map + return errors.New("challenge claims conflict with client capabilities") + } + } else { + // a[k] isn't a map + return errors.New("challenge claims conflict with client capabilities") + } + } + } + return nil +} + +// ClientCapabilities stores capabilities in the formats used by AuthParams.MergeCapabilitiesAndClaims. +// [NewClientCapabilities] precomputes these representations because capabilities are static for the +// lifetime of a client and are included with every authentication request i.e., these computations +// always have the same result and would otherwise have to be repeated for every request. +type ClientCapabilities struct { + // asJSON is for the common case: adding the capabilities to an auth request with no challenge claims + asJSON string + // asMap is for merging the capabilities with challenge claims + asMap map[string]any +} + +func NewClientCapabilities(capabilities []string) (ClientCapabilities, error) { + c := ClientCapabilities{} + var err error + if len(capabilities) > 0 { + cpbs := make([]string, len(capabilities)) + for i := 0; i < len(cpbs); i++ { + cpbs[i] = fmt.Sprintf(`"%s"`, capabilities[i]) + } + c.asJSON = fmt.Sprintf(`{"access_token":{"xms_cc":{"values":[%s]}}}`, strings.Join(cpbs, ",")) + // note our JSON is valid but we can't stop users breaking it with garbage like "}" + err = json.Unmarshal([]byte(c.asJSON), &c.asMap) + } + return c, err +} + +// Info consists of information about the authority. +type Info struct { + Host string + CanonicalAuthorityURI string + AuthorityType string + UserRealmURIPrefix string + ValidateAuthority bool + Tenant string + Region string + InstanceDiscoveryDisabled bool +} + +func firstPathSegment(u *url.URL) (string, error) { + pathParts := strings.Split(u.EscapedPath(), "/") + if len(pathParts) >= 2 { + return pathParts[1], nil + } + + return "", errors.New(`authority must be an https URL such as "https://login.microsoftonline.com/"`) +} + +// NewInfoFromAuthorityURI creates an AuthorityInfo instance from the authority URL provided. 
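The capability/claims merge above never overwrites: challenge claims and capabilities may only coexist, and any non-object collision is an error. A small worked example, with illustrative values:

	caps, _ := NewClientCapabilities([]string{"CP1"})
	p := AuthParams{Capabilities: caps, Claims: `{"access_token":{"nbf":{"essential":true}}}`}
	merged, err := p.MergeCapabilitiesAndClaims()
	// on success, merged carries both keys under "access_token":
	// {"access_token":{"nbf":{"essential":true},"xms_cc":{"values":["CP1"]}}}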
+func NewInfoFromAuthorityURI(authority string, validateAuthority bool, instanceDiscoveryDisabled bool) (Info, error) { + u, err := url.Parse(strings.ToLower(authority)) + if err != nil || u.Scheme != "https" { + return Info{}, errors.New(`authority must be an https URL such as "https://login.microsoftonline.com/"`) + } + + tenant, err := firstPathSegment(u) + if err != nil { + return Info{}, err + } + authorityType := AAD + if tenant == "adfs" { + authorityType = ADFS + } + + // u.Host includes the port, if any, which is required for private cloud deployments + return Info{ + Host: u.Host, + CanonicalAuthorityURI: fmt.Sprintf("https://%v/%v/", u.Host, tenant), + AuthorityType: authorityType, + UserRealmURIPrefix: fmt.Sprintf("https://%v/common/userrealm/", u.Hostname()), + ValidateAuthority: validateAuthority, + Tenant: tenant, + InstanceDiscoveryDisabled: instanceDiscoveryDisabled, + }, nil +} + +// Endpoints consists of the endpoints from the tenant discovery response. +type Endpoints struct { + AuthorizationEndpoint string + TokenEndpoint string + selfSignedJwtAudience string + authorityHost string +} + +// NewEndpoints creates an Endpoints object. +func NewEndpoints(authorizationEndpoint string, tokenEndpoint string, selfSignedJwtAudience string, authorityHost string) Endpoints { + return Endpoints{authorizationEndpoint, tokenEndpoint, selfSignedJwtAudience, authorityHost} +} + +// UserRealmAccountType refers to the type of user realm. +type UserRealmAccountType string + +// These are the different types of user realms. +const ( + Unknown UserRealmAccountType = "" + Federated UserRealmAccountType = "Federated" + Managed UserRealmAccountType = "Managed" +) + +// UserRealm is used for the username password request to determine user type +type UserRealm struct { + AccountType UserRealmAccountType `json:"account_type"` + DomainName string `json:"domain_name"` + CloudInstanceName string `json:"cloud_instance_name"` + CloudAudienceURN string `json:"cloud_audience_urn"` + + // required if accountType is Federated + FederationProtocol string `json:"federation_protocol"` + FederationMetadataURL string `json:"federation_metadata_url"` + + AdditionalFields map[string]interface{} +} + +func (u UserRealm) validate() error { + switch "" { + case string(u.AccountType): + return errors.New("the account type (Federated or Managed) is missing") + case u.DomainName: + return errors.New("domain name of user realm is missing") + case u.CloudInstanceName: + return errors.New("cloud instance name of user realm is missing") + case u.CloudAudienceURN: + return errors.New("cloud Instance URN is missing") + } + + if u.AccountType == Federated { + switch "" { + case u.FederationProtocol: + return errors.New("federation protocol of user realm is missing") + case u.FederationMetadataURL: + return errors.New("federation metadata URL of user realm is missing") + } + } + return nil +} + +// Client represents the REST calls to authority backends. +type Client struct { + // Comm provides the HTTP transport client. 
+ Comm jsonCaller // *comm.Client +} + +func (c Client) UserRealm(ctx context.Context, authParams AuthParams) (UserRealm, error) { + endpoint := fmt.Sprintf("https://%s/common/UserRealm/%s", authParams.Endpoints.authorityHost, url.PathEscape(authParams.Username)) + qv := url.Values{ + "api-version": []string{"1.0"}, + } + + resp := UserRealm{} + err := c.Comm.JSONCall( + ctx, + endpoint, + http.Header{"client-request-id": []string{authParams.CorrelationID}}, + qv, + nil, + &resp, + ) + if err != nil { + return resp, err + } + + return resp, resp.validate() +} + +func (c Client) GetTenantDiscoveryResponse(ctx context.Context, openIDConfigurationEndpoint string) (TenantDiscoveryResponse, error) { + resp := TenantDiscoveryResponse{} + err := c.Comm.JSONCall( + ctx, + openIDConfigurationEndpoint, + http.Header{}, + nil, + nil, + &resp, + ) + + return resp, err +} + +// AADInstanceDiscovery attempts to discover a tenant endpoint (used in OIDC auth with an authorization endpoint). +// This is done by AAD which allows for aliasing of tenants (windows.sts.net is the same as login.windows.com). +func (c Client) AADInstanceDiscovery(ctx context.Context, authorityInfo Info) (InstanceDiscoveryResponse, error) { + region := "" + var err error + resp := InstanceDiscoveryResponse{} + if authorityInfo.Region != "" && authorityInfo.Region != autoDetectRegion { + region = authorityInfo.Region + } else if authorityInfo.Region == autoDetectRegion { + region = detectRegion(ctx) + } + if region != "" { + environment := authorityInfo.Host + switch environment { + case loginMicrosoft, loginWindows, loginSTSWindows, defaultHost: + environment = loginMicrosoft + } + + resp.TenantDiscoveryEndpoint = fmt.Sprintf(tenantDiscoveryEndpointWithRegion, region, environment, authorityInfo.Tenant) + metadata := InstanceDiscoveryMetadata{ + PreferredNetwork: fmt.Sprintf("%v.%v", region, authorityInfo.Host), + PreferredCache: authorityInfo.Host, + Aliases: []string{fmt.Sprintf("%v.%v", region, authorityInfo.Host), authorityInfo.Host}, + } + resp.Metadata = []InstanceDiscoveryMetadata{metadata} + } else { + qv := url.Values{} + qv.Set("api-version", "1.1") + qv.Set("authorization_endpoint", fmt.Sprintf(authorizationEndpoint, authorityInfo.Host, authorityInfo.Tenant)) + + discoveryHost := defaultHost + if TrustedHost(authorityInfo.Host) { + discoveryHost = authorityInfo.Host + } + + endpoint := fmt.Sprintf(instanceDiscoveryEndpoint, discoveryHost) + err = c.Comm.JSONCall(ctx, endpoint, http.Header{}, qv, nil, &resp) + } + return resp, err +} + +func detectRegion(ctx context.Context) string { + region := os.Getenv(regionName) + if region != "" { + region = strings.ReplaceAll(region, " ", "") + return strings.ToLower(region) + } + // HTTP call to IMDS endpoint to get region + // Refer : https://identitydivision.visualstudio.com/DevEx/_git/AuthLibrariesApiReview?path=%2FPinAuthToRegion%2FAAD%20SDK%20Proposal%20to%20Pin%20Auth%20to%20region.md&_a=preview&version=GBdev + // Set a 2 second timeout for this http client which only does calls to IMDS endpoint + client := http.Client{ + Timeout: time.Duration(2 * time.Second), + } + req, _ := http.NewRequest("GET", imdsEndpoint, nil) + req.Header.Set("Metadata", "true") + resp, err := client.Do(req) + // If the request times out or there is an error, it is retried once + if err != nil || resp.StatusCode != 200 { + resp, err = client.Do(req) + if err != nil || resp.StatusCode != 200 { + return "" + } + } + defer resp.Body.Close() + response, err := io.ReadAll(resp.Body) + if err != nil { + 
return "" + } + return string(response) +} + +func (a *AuthParams) CacheKey(isAppCache bool) string { + if a.AuthorizationType == ATOnBehalfOf { + return a.AssertionHash() + } + if a.AuthorizationType == ATClientCredentials || isAppCache { + return a.AppKey() + } + if a.AuthorizationType == ATRefreshToken || a.AuthorizationType == AccountByID { + return a.HomeAccountID + } + return "" +} +func (a *AuthParams) AssertionHash() string { + hasher := sha256.New() + // Per documentation this never returns an error : https://pkg.go.dev/hash#pkg-types + _, _ = hasher.Write([]byte(a.UserAssertion)) + sha := base64.URLEncoding.EncodeToString(hasher.Sum(nil)) + return sha +} + +func (a *AuthParams) AppKey() string { + if a.AuthorityInfo.Tenant != "" { + return fmt.Sprintf("%s_%s_AppTokenCache", a.ClientID, a.AuthorityInfo.Tenant) + } + return fmt.Sprintf("%s__AppTokenCache", a.ClientID) +} diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority/authorizetype_string.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority/authorizetype_string.go new file mode 100644 index 000000000000..10039773b067 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority/authorizetype_string.go @@ -0,0 +1,30 @@ +// Code generated by "stringer -type=AuthorizeType"; DO NOT EDIT. + +package authority + +import "strconv" + +func _() { + // An "invalid array index" compiler error signifies that the constant values have changed. + // Re-run the stringer command to generate them again. + var x [1]struct{} + _ = x[ATUnknown-0] + _ = x[ATUsernamePassword-1] + _ = x[ATWindowsIntegrated-2] + _ = x[ATAuthCode-3] + _ = x[ATInteractive-4] + _ = x[ATClientCredentials-5] + _ = x[ATDeviceCode-6] + _ = x[ATRefreshToken-7] +} + +const _AuthorizeType_name = "ATUnknownATUsernamePasswordATWindowsIntegratedATAuthCodeATInteractiveATClientCredentialsATDeviceCodeATRefreshToken" + +var _AuthorizeType_index = [...]uint8{0, 9, 27, 46, 56, 69, 88, 100, 114} + +func (i AuthorizeType) String() string { + if i < 0 || i >= AuthorizeType(len(_AuthorizeType_index)-1) { + return "AuthorizeType(" + strconv.FormatInt(int64(i), 10) + ")" + } + return _AuthorizeType_name[_AuthorizeType_index[i]:_AuthorizeType_index[i+1]] +} diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/comm/comm.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/comm/comm.go new file mode 100644 index 000000000000..7d9ec7cd3742 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/comm/comm.go @@ -0,0 +1,320 @@ +// Copyright (c) Microsoft Corporation. +// Licensed under the MIT license. + +// Package comm provides helpers for communicating with HTTP backends. 
+package comm + +import ( + "bytes" + "context" + "encoding/json" + "encoding/xml" + "fmt" + "io" + "net/http" + "net/url" + "reflect" + "runtime" + "strings" + "time" + + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/errors" + customJSON "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/version" + "github.com/google/uuid" +) + +// HTTPClient represents an HTTP client. +// It's usually an *http.Client from the standard library. +type HTTPClient interface { + // Do sends an HTTP request and returns an HTTP response. + Do(req *http.Request) (*http.Response, error) + + // CloseIdleConnections closes any idle connections in a "keep-alive" state. + CloseIdleConnections() +} + +// Client provides a wrapper to our *http.Client that handles compression and serialization needs. +type Client struct { + client HTTPClient +} + +// New returns a new Client object. +func New(httpClient HTTPClient) *Client { + if httpClient == nil { + panic("http.Client cannot == nil") + } + + return &Client{client: httpClient} +} + +// JSONCall connects to the REST endpoint passing the HTTP query values, headers and JSON conversion +// of body in the HTTP body. It automatically handles compression and decompression with gzip. The response is JSON +// unmarshalled into resp. resp must be a pointer to a struct. If the body struct contains a field called +// "AdditionalFields" we use a custom marshal/unmarshal engine. +func (c *Client) JSONCall(ctx context.Context, endpoint string, headers http.Header, qv url.Values, body, resp interface{}) error { + if qv == nil { + qv = url.Values{} + } + + v := reflect.ValueOf(resp) + if err := c.checkResp(v); err != nil { + return err + } + + // Choose a JSON marshal/unmarshal depending on if we have AdditionalFields attribute. + var marshal = json.Marshal + var unmarshal = json.Unmarshal + if _, ok := v.Elem().Type().FieldByName("AdditionalFields"); ok { + marshal = customJSON.Marshal + unmarshal = customJSON.Unmarshal + } + + u, err := url.Parse(endpoint) + if err != nil { + return fmt.Errorf("could not parse path URL(%s): %w", endpoint, err) + } + u.RawQuery = qv.Encode() + + addStdHeaders(headers) + + req := &http.Request{Method: http.MethodGet, URL: u, Header: headers} + + if body != nil { + // Note: In case your wondering why we are not gzip encoding.... + // I'm not sure if these various services support gzip on send. + headers.Add("Content-Type", "application/json; charset=utf-8") + data, err := marshal(body) + if err != nil { + return fmt.Errorf("bug: conn.Call(): could not marshal the body object: %w", err) + } + req.Body = io.NopCloser(bytes.NewBuffer(data)) + req.Method = http.MethodPost + } + + data, err := c.do(ctx, req) + if err != nil { + return err + } + + if resp != nil { + if err := unmarshal(data, resp); err != nil { + return fmt.Errorf("json decode error: %w\njson message bytes were: %s", err, string(data)) + } + } + return nil +} + +// XMLCall connects to an endpoint and decodes the XML response into resp. This is used when +// sending application/xml . If sending XML via SOAP, use SOAPCall(). 
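JSONCall's marshal/unmarshal selection above pivots on one reflection probe: a response struct exposing an AdditionalFields member is routed through the library's custom JSON engine so unknown keys survive the round trip. A caller-side sketch, with a placeholder endpoint:

	var resp struct {
		Name             string `json:"name"`
		AdditionalFields map[string]interface{} // switches to the custom marshal/unmarshal engine
	}
	client := New(&http.Client{})
	err := client.JSONCall(ctx, "https://login.example.test/config", http.Header{}, nil, nil, &resp)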
+func (c *Client) XMLCall(ctx context.Context, endpoint string, headers http.Header, qv url.Values, resp interface{}) error {
+	if err := c.checkResp(reflect.ValueOf(resp)); err != nil {
+		return err
+	}
+
+	if qv == nil {
+		qv = url.Values{}
+	}
+
+	u, err := url.Parse(endpoint)
+	if err != nil {
+		return fmt.Errorf("could not parse path URL(%s): %w", endpoint, err)
+	}
+	u.RawQuery = qv.Encode()
+
+	headers.Set("Content-Type", "application/xml; charset=utf-8") // This was not set in the original Mex(), but...
+	addStdHeaders(headers)
+
+	return c.xmlCall(ctx, u, headers, "", resp)
+}
+
+// SOAPCall returns the SOAP message given an endpoint, action, body of the request and the response object to marshal into.
+func (c *Client) SOAPCall(ctx context.Context, endpoint, action string, headers http.Header, qv url.Values, body string, resp interface{}) error {
+	if body == "" {
+		return fmt.Errorf("cannot make a SOAP call with body set to empty string")
+	}
+
+	if err := c.checkResp(reflect.ValueOf(resp)); err != nil {
+		return err
+	}
+
+	if qv == nil {
+		qv = url.Values{}
+	}
+
+	u, err := url.Parse(endpoint)
+	if err != nil {
+		return fmt.Errorf("could not parse path URL(%s): %w", endpoint, err)
+	}
+	u.RawQuery = qv.Encode()
+
+	headers.Set("Content-Type", "application/soap+xml; charset=utf-8")
+	headers.Set("SOAPAction", action)
+	addStdHeaders(headers)
+
+	return c.xmlCall(ctx, u, headers, body, resp)
+}
+
+// xmlCall sends an XML in body and decodes into resp. This simply does the transport and relies on
+// an upper level call to set things such as SOAP parameters and Content-Type, if required.
+func (c *Client) xmlCall(ctx context.Context, u *url.URL, headers http.Header, body string, resp interface{}) error {
+	req := &http.Request{Method: http.MethodGet, URL: u, Header: headers}
+
+	if len(body) > 0 {
+		req.Method = http.MethodPost
+		req.Body = io.NopCloser(strings.NewReader(body))
+	}
+
+	data, err := c.do(ctx, req)
+	if err != nil {
+		return err
+	}
+
+	return xml.Unmarshal(data, resp)
+}
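Editor's sketch of driving the client above. The endpoint URLs and response types are placeholders; `defs.Definitions` is the WSDL schema vendored later in this patch, and `comm` is an internal package, so this compiles only within the module.

```go
// fetchRealm shows the JSON path: JSONCall issues a GET when body == nil and
// a POST otherwise; a struct with AdditionalFields selects the custom JSON engine.
func fetchRealm(ctx context.Context, c *comm.Client) (string, error) {
	var realm struct {
		AccountType      string `json:"account_type"`
		AdditionalFields map[string]interface{}
	}
	qv := url.Values{"api-version": []string{"1.0"}}
	err := c.JSONCall(ctx, "https://login.example.com/common/userrealm/user@contoso.com", http.Header{}, qv, nil, &realm)
	return realm.AccountType, err
}

// fetchMex shows the XML path used for federation (MEX) metadata.
func fetchMex(ctx context.Context, c *comm.Client) (defs.Definitions, error) {
	var doc defs.Definitions
	err := c.XMLCall(ctx, "https://adfs.contoso.com/adfs/services/trust/mex", http.Header{}, nil, &doc)
	return doc, err
}
```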
+
+// URLFormCall is used to make a call where we need to send application/x-www-form-urlencoded data
+// to the backend and receive JSON back. qv will be encoded into the request body.
+func (c *Client) URLFormCall(ctx context.Context, endpoint string, qv url.Values, resp interface{}) error {
+	if len(qv) == 0 {
+		return fmt.Errorf("URLFormCall() requires qv to have non-zero length")
+	}
+
+	if err := c.checkResp(reflect.ValueOf(resp)); err != nil {
+		return err
+	}
+
+	u, err := url.Parse(endpoint)
+	if err != nil {
+		return fmt.Errorf("could not parse path URL(%s): %w", endpoint, err)
+	}
+
+	headers := http.Header{}
+	headers.Set("Content-Type", "application/x-www-form-urlencoded; charset=utf-8")
+	addStdHeaders(headers)
+
+	enc := qv.Encode()
+
+	req := &http.Request{
+		Method:        http.MethodPost,
+		URL:           u,
+		Header:        headers,
+		ContentLength: int64(len(enc)),
+		Body:          io.NopCloser(strings.NewReader(enc)),
+		GetBody: func() (io.ReadCloser, error) {
+			return io.NopCloser(strings.NewReader(enc)), nil
+		},
+	}
+
+	data, err := c.do(ctx, req)
+	if err != nil {
+		return err
+	}
+
+	// resp was already validated by the checkResp call above, so it is not
+	// re-checked here; we reflect on it again only to decide which JSON
+	// engine to use (the custom one when AdditionalFields is present).
+	v := reflect.ValueOf(resp)
+
+	var unmarshal = json.Unmarshal
+	if _, ok := v.Elem().Type().FieldByName("AdditionalFields"); ok {
+		unmarshal = customJSON.Unmarshal
+	}
+	if resp != nil {
+		if err := unmarshal(data, resp); err != nil {
+			return fmt.Errorf("json decode error: %w\nraw message was: %s", err, string(data))
+		}
+	}
+	return nil
+}
+
+// do makes the HTTP call to the server and returns the contents of the body.
+func (c *Client) do(ctx context.Context, req *http.Request) ([]byte, error) {
+	if _, ok := ctx.Deadline(); !ok {
+		var cancel context.CancelFunc
+		ctx, cancel = context.WithTimeout(ctx, 30*time.Second)
+		defer cancel()
+	}
+	req = req.WithContext(ctx)
+
+	reply, err := c.client.Do(req)
+	if err != nil {
+		return nil, fmt.Errorf("server response error:\n %w", err)
+	}
+	defer reply.Body.Close()
+
+	data, err := c.readBody(reply)
+	if err != nil {
+		return nil, fmt.Errorf("could not read the body of an HTTP Response: %w", err)
+	}
+	reply.Body = io.NopCloser(bytes.NewBuffer(data))
+
+	// NOTE: This doesn't happen immediately after the call so that we can get an error message
+	// from the server and include it in our error.
+	switch reply.StatusCode {
+	case 200, 201:
+	default:
+		sd := strings.TrimSpace(string(data))
+		if sd != "" {
+			// We probably have the error in the body.
+			return nil, errors.CallErr{
+				Req:  req,
+				Resp: reply,
+				Err:  fmt.Errorf("http call(%s)(%s) error: reply status code was %d:\n%s", req.URL.String(), req.Method, reply.StatusCode, sd),
+			}
+		}
+		return nil, errors.CallErr{
+			Req:  req,
+			Resp: reply,
+			Err:  fmt.Errorf("http call(%s)(%s) error: reply status code was %d", req.URL.String(), req.Method, reply.StatusCode),
+		}
+	}
+
+	return data, nil
+}
+
+// checkResp checks a response object to make sure it is a pointer to a struct.
+func (c *Client) checkResp(v reflect.Value) error {
+	if v.Kind() != reflect.Ptr {
+		return fmt.Errorf("bug: resp argument must be a *struct, was %T", v.Interface())
+	}
+	v = v.Elem()
+	if v.Kind() != reflect.Struct {
+		return fmt.Errorf("bug: resp argument must be a *struct, was %T", v.Interface())
+	}
+	return nil
+}
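Editor's sketch of a form-encoded token request through `URLFormCall`. The endpoint and field values are illustrative placeholders; note that `do()` above imposes a 30-second timeout whenever the caller's context carries no deadline.

```go
// redeemRefreshToken posts application/x-www-form-urlencoded data and decodes
// the JSON reply into an anonymous struct.
func redeemRefreshToken(ctx context.Context, c *comm.Client, rt string) (string, error) {
	qv := url.Values{
		"client_id":     []string{"client-id"},
		"grant_type":    []string{"refresh_token"},
		"refresh_token": []string{rt},
	}
	var tok struct {
		AccessToken      string `json:"access_token"`
		AdditionalFields map[string]interface{}
	}
	if err := c.URLFormCall(ctx, "https://login.example.com/tenant/oauth2/v2.0/token", qv, &tok); err != nil {
		return "", err
	}
	return tok.AccessToken, nil
}
```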
+
+// readBody reads the body out of an *http.Response. It supports gzip encoded responses.
+func (c *Client) readBody(resp *http.Response) ([]byte, error) {
+	var reader io.Reader = resp.Body
+	switch resp.Header.Get("Content-Encoding") {
+	case "":
+		// Do nothing
+	case "gzip":
+		reader = gzipDecompress(resp.Body)
+	default:
+		return nil, fmt.Errorf("bug: comm.Client.JSONCall(): content was sent with unsupported content-encoding %s", resp.Header.Get("Content-Encoding"))
+	}
+	return io.ReadAll(reader)
+}
+
+var testID string
+
+// addStdHeaders adds the standard headers we use on all calls.
+func addStdHeaders(headers http.Header) http.Header {
+	headers.Set("Accept-Encoding", "gzip")
+	// So that I can have a static id for tests.
+	if testID != "" {
+		headers.Set("client-request-id", testID)
+		headers.Set("Return-Client-Request-Id", "false")
+	} else {
+		headers.Set("client-request-id", uuid.New().String())
+		headers.Set("Return-Client-Request-Id", "false")
+	}
+	headers.Set("x-client-sku", "MSAL.Go")
+	headers.Set("x-client-os", runtime.GOOS)
+	headers.Set("x-client-cpu", runtime.GOARCH)
+	headers.Set("x-client-ver", version.Version)
+	return headers
+}
diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/comm/compress.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/comm/compress.go
new file mode 100644
index 000000000000..4d3dbfcf0a6b
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/comm/compress.go
@@ -0,0 +1,41 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+package comm
+
+import (
+	"compress/gzip"
+	"io"
+)
+
+// gzipDecompress streams a gzip-decompressed copy of r. Decompression runs in
+// a goroutine so the caller can consume the pipe lazily.
+func gzipDecompress(r io.Reader) io.Reader {
+	gzipReader, err := gzip.NewReader(r)
+
+	pipeOut, pipeIn := io.Pipe()
+	if err != nil {
+		// The gzip header could not be read; surface the error to the
+		// reader of the pipe instead of dropping it.
+		pipeIn.CloseWithError(err) //nolint
+		return pipeOut
+	}
+	go func() {
+		// decompression bomb would have to come from Azure services.
+		// If we want to limit, we should do that in comm.do().
+		_, err := io.Copy(pipeIn, gzipReader) //nolint
+		if err != nil {
+			// don't need the error.
+			pipeIn.CloseWithError(err) //nolint
+			gzipReader.Close()
+			return
+		}
+		if err := gzipReader.Close(); err != nil {
+			// don't need the error.
+			pipeIn.CloseWithError(err) //nolint
+			return
+		}
+		pipeIn.Close()
+	}()
+	return pipeOut
+}
diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/grant/grant.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/grant/grant.go
new file mode 100644
index 000000000000..b628f61ac081
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/grant/grant.go
@@ -0,0 +1,17 @@
+// Copyright (c) Microsoft Corporation.
+// Licensed under the MIT license.
+
+// Package grant holds types of grants issued by authorization services.
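Editor's note: the constants defined below are the values MSAL sends as the OAuth2 `grant_type` form field when redeeming credentials or assertions for tokens. An illustrative fragment (`encodedAssertion` is a hypothetical base64-encoded SAML assertion):

```go
qv := url.Values{
	"grant_type": []string{grant.SAMLV1},
	"assertion":  []string{encodedAssertion},
	"scope":      []string{"openid offline_access"},
}
```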
+package grant + +const ( + Password = "password" + JWT = "urn:ietf:params:oauth:grant-type:jwt-bearer" + SAMLV1 = "urn:ietf:params:oauth:grant-type:saml1_1-bearer" + SAMLV2 = "urn:ietf:params:oauth:grant-type:saml2-bearer" + DeviceCode = "device_code" + AuthCode = "authorization_code" + RefreshToken = "refresh_token" + ClientCredential = "client_credentials" + ClientAssertion = "urn:ietf:params:oauth:client-assertion-type:jwt-bearer" +) diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/ops.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/ops.go new file mode 100644 index 000000000000..1f9c543fa3b2 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/ops.go @@ -0,0 +1,56 @@ +// Copyright (c) Microsoft Corporation. +// Licensed under the MIT license. + +/* +Package ops provides operations to various backend services using REST clients. + +The REST type provides several clients that can be used to communicate to backends. +Usage is simple: + + rest := ops.New() + + // Creates an authority client and calls the UserRealm() method. + userRealm, err := rest.Authority().UserRealm(ctx, authParameters) + if err != nil { + // Do something + } +*/ +package ops + +import ( + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/comm" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust" +) + +// HTTPClient represents an HTTP client. +// It's usually an *http.Client from the standard library. +type HTTPClient = comm.HTTPClient + +// REST provides REST clients for communicating with various backends used by MSAL. +type REST struct { + client *comm.Client +} + +// New is the constructor for REST. +func New(httpClient HTTPClient) *REST { + return &REST{client: comm.New(httpClient)} +} + +// Authority returns a client for querying information about various authorities. +func (r *REST) Authority() authority.Client { + return authority.Client{Comm: r.client} +} + +// AccessTokens returns a client that can be used to get various access tokens for +// authorization purposes. +func (r *REST) AccessTokens() accesstokens.Client { + return accesstokens.Client{Comm: r.client} +} + +// WSTrust provides access to various metadata in a WSTrust service. This data can +// be used to gain tokens based on SAML data using the client provided by AccessTokens(). +func (r *REST) WSTrust() wstrust.Client { + return wstrust.Client{Comm: r.client} +} diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/endpointtype_string.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/endpointtype_string.go new file mode 100644 index 000000000000..a2bb6278ae5f --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/endpointtype_string.go @@ -0,0 +1,25 @@ +// Code generated by "stringer -type=endpointType"; DO NOT EDIT. 
+ +package defs + +import "strconv" + +func _() { + // An "invalid array index" compiler error signifies that the constant values have changed. + // Re-run the stringer command to generate them again. + var x [1]struct{} + _ = x[etUnknown-0] + _ = x[etUsernamePassword-1] + _ = x[etWindowsTransport-2] +} + +const _endpointType_name = "etUnknownetUsernamePasswordetWindowsTransport" + +var _endpointType_index = [...]uint8{0, 9, 27, 45} + +func (i endpointType) String() string { + if i < 0 || i >= endpointType(len(_endpointType_index)-1) { + return "endpointType(" + strconv.FormatInt(int64(i), 10) + ")" + } + return _endpointType_name[_endpointType_index[i]:_endpointType_index[i+1]] +} diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/mex_document_definitions.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/mex_document_definitions.go new file mode 100644 index 000000000000..6497270028d8 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/mex_document_definitions.go @@ -0,0 +1,394 @@ +// Copyright (c) Microsoft Corporation. +// Licensed under the MIT license. + +package defs + +import "encoding/xml" + +type Definitions struct { + XMLName xml.Name `xml:"definitions"` + Text string `xml:",chardata"` + Name string `xml:"name,attr"` + TargetNamespace string `xml:"targetNamespace,attr"` + WSDL string `xml:"wsdl,attr"` + XSD string `xml:"xsd,attr"` + T string `xml:"t,attr"` + SOAPENC string `xml:"soapenc,attr"` + SOAP string `xml:"soap,attr"` + TNS string `xml:"tns,attr"` + MSC string `xml:"msc,attr"` + WSAM string `xml:"wsam,attr"` + SOAP12 string `xml:"soap12,attr"` + WSA10 string `xml:"wsa10,attr"` + WSA string `xml:"wsa,attr"` + WSAW string `xml:"wsaw,attr"` + WSX string `xml:"wsx,attr"` + WSAP string `xml:"wsap,attr"` + WSU string `xml:"wsu,attr"` + Trust string `xml:"trust,attr"` + WSP string `xml:"wsp,attr"` + Policy []Policy `xml:"Policy"` + Types Types `xml:"types"` + Message []Message `xml:"message"` + PortType []PortType `xml:"portType"` + Binding []Binding `xml:"binding"` + Service Service `xml:"service"` +} + +type Policy struct { + Text string `xml:",chardata"` + ID string `xml:"Id,attr"` + ExactlyOne ExactlyOne `xml:"ExactlyOne"` +} + +type ExactlyOne struct { + Text string `xml:",chardata"` + All All `xml:"All"` +} + +type All struct { + Text string `xml:",chardata"` + NegotiateAuthentication NegotiateAuthentication `xml:"NegotiateAuthentication"` + TransportBinding TransportBinding `xml:"TransportBinding"` + UsingAddressing Text `xml:"UsingAddressing"` + EndorsingSupportingTokens EndorsingSupportingTokens `xml:"EndorsingSupportingTokens"` + WSS11 WSS11 `xml:"Wss11"` + Trust10 Trust10 `xml:"Trust10"` + SignedSupportingTokens SignedSupportingTokens `xml:"SignedSupportingTokens"` + Trust13 WSTrust13 `xml:"Trust13"` + SignedEncryptedSupportingTokens SignedEncryptedSupportingTokens `xml:"SignedEncryptedSupportingTokens"` +} + +type NegotiateAuthentication struct { + Text string `xml:",chardata"` + HTTP string `xml:"http,attr"` + XMLName xml.Name +} + +type TransportBinding struct { + Text string `xml:",chardata"` + SP string `xml:"sp,attr"` + Policy TransportBindingPolicy `xml:"Policy"` +} + +type TransportBindingPolicy struct { + Text string `xml:",chardata"` + TransportToken TransportToken `xml:"TransportToken"` + AlgorithmSuite 
AlgorithmSuite `xml:"AlgorithmSuite"` + Layout Layout `xml:"Layout"` + IncludeTimestamp Text `xml:"IncludeTimestamp"` +} + +type TransportToken struct { + Text string `xml:",chardata"` + Policy TransportTokenPolicy `xml:"Policy"` +} + +type TransportTokenPolicy struct { + Text string `xml:",chardata"` + HTTPSToken HTTPSToken `xml:"HttpsToken"` +} + +type HTTPSToken struct { + Text string `xml:",chardata"` + RequireClientCertificate string `xml:"RequireClientCertificate,attr"` +} + +type AlgorithmSuite struct { + Text string `xml:",chardata"` + Policy AlgorithmSuitePolicy `xml:"Policy"` +} + +type AlgorithmSuitePolicy struct { + Text string `xml:",chardata"` + Basic256 Text `xml:"Basic256"` + Basic128 Text `xml:"Basic128"` +} + +type Layout struct { + Text string `xml:",chardata"` + Policy LayoutPolicy `xml:"Policy"` +} + +type LayoutPolicy struct { + Text string `xml:",chardata"` + Strict Text `xml:"Strict"` +} + +type EndorsingSupportingTokens struct { + Text string `xml:",chardata"` + SP string `xml:"sp,attr"` + Policy EndorsingSupportingTokensPolicy `xml:"Policy"` +} + +type EndorsingSupportingTokensPolicy struct { + Text string `xml:",chardata"` + X509Token X509Token `xml:"X509Token"` + RSAToken RSAToken `xml:"RsaToken"` + SignedParts SignedParts `xml:"SignedParts"` + KerberosToken KerberosToken `xml:"KerberosToken"` + IssuedToken IssuedToken `xml:"IssuedToken"` + KeyValueToken KeyValueToken `xml:"KeyValueToken"` +} + +type X509Token struct { + Text string `xml:",chardata"` + IncludeToken string `xml:"IncludeToken,attr"` + Policy X509TokenPolicy `xml:"Policy"` +} + +type X509TokenPolicy struct { + Text string `xml:",chardata"` + RequireThumbprintReference Text `xml:"RequireThumbprintReference"` + WSSX509V3Token10 Text `xml:"WssX509V3Token10"` +} + +type RSAToken struct { + Text string `xml:",chardata"` + IncludeToken string `xml:"IncludeToken,attr"` + Optional string `xml:"Optional,attr"` + MSSP string `xml:"mssp,attr"` +} + +type SignedParts struct { + Text string `xml:",chardata"` + Header SignedPartsHeader `xml:"Header"` +} + +type SignedPartsHeader struct { + Text string `xml:",chardata"` + Name string `xml:"Name,attr"` + Namespace string `xml:"Namespace,attr"` +} + +type KerberosToken struct { + Text string `xml:",chardata"` + IncludeToken string `xml:"IncludeToken,attr"` + Policy KerberosTokenPolicy `xml:"Policy"` +} + +type KerberosTokenPolicy struct { + Text string `xml:",chardata"` + WSSGSSKerberosV5ApReqToken11 Text `xml:"WssGssKerberosV5ApReqToken11"` +} + +type IssuedToken struct { + Text string `xml:",chardata"` + IncludeToken string `xml:"IncludeToken,attr"` + RequestSecurityTokenTemplate RequestSecurityTokenTemplate `xml:"RequestSecurityTokenTemplate"` + Policy IssuedTokenPolicy `xml:"Policy"` +} + +type RequestSecurityTokenTemplate struct { + Text string `xml:",chardata"` + KeyType Text `xml:"KeyType"` + EncryptWith Text `xml:"EncryptWith"` + SignatureAlgorithm Text `xml:"SignatureAlgorithm"` + CanonicalizationAlgorithm Text `xml:"CanonicalizationAlgorithm"` + EncryptionAlgorithm Text `xml:"EncryptionAlgorithm"` + KeySize Text `xml:"KeySize"` + KeyWrapAlgorithm Text `xml:"KeyWrapAlgorithm"` +} + +type IssuedTokenPolicy struct { + Text string `xml:",chardata"` + RequireInternalReference Text `xml:"RequireInternalReference"` +} + +type KeyValueToken struct { + Text string `xml:",chardata"` + IncludeToken string `xml:"IncludeToken,attr"` + Optional string `xml:"Optional,attr"` +} + +type WSS11 struct { + Text string `xml:",chardata"` + SP string `xml:"sp,attr"` + Policy 
Wss11Policy `xml:"Policy"` +} + +type Wss11Policy struct { + Text string `xml:",chardata"` + MustSupportRefThumbprint Text `xml:"MustSupportRefThumbprint"` +} + +type Trust10 struct { + Text string `xml:",chardata"` + SP string `xml:"sp,attr"` + Policy Trust10Policy `xml:"Policy"` +} + +type Trust10Policy struct { + Text string `xml:",chardata"` + MustSupportIssuedTokens Text `xml:"MustSupportIssuedTokens"` + RequireClientEntropy Text `xml:"RequireClientEntropy"` + RequireServerEntropy Text `xml:"RequireServerEntropy"` +} + +type SignedSupportingTokens struct { + Text string `xml:",chardata"` + SP string `xml:"sp,attr"` + Policy SupportingTokensPolicy `xml:"Policy"` +} + +type SupportingTokensPolicy struct { + Text string `xml:",chardata"` + UsernameToken UsernameToken `xml:"UsernameToken"` +} +type UsernameToken struct { + Text string `xml:",chardata"` + IncludeToken string `xml:"IncludeToken,attr"` + Policy UsernameTokenPolicy `xml:"Policy"` +} + +type UsernameTokenPolicy struct { + Text string `xml:",chardata"` + WSSUsernameToken10 WSSUsernameToken10 `xml:"WssUsernameToken10"` +} + +type WSSUsernameToken10 struct { + Text string `xml:",chardata"` + XMLName xml.Name +} + +type WSTrust13 struct { + Text string `xml:",chardata"` + SP string `xml:"sp,attr"` + Policy WSTrust13Policy `xml:"Policy"` +} + +type WSTrust13Policy struct { + Text string `xml:",chardata"` + MustSupportIssuedTokens Text `xml:"MustSupportIssuedTokens"` + RequireClientEntropy Text `xml:"RequireClientEntropy"` + RequireServerEntropy Text `xml:"RequireServerEntropy"` +} + +type SignedEncryptedSupportingTokens struct { + Text string `xml:",chardata"` + SP string `xml:"sp,attr"` + Policy SupportingTokensPolicy `xml:"Policy"` +} + +type Types struct { + Text string `xml:",chardata"` + Schema Schema `xml:"schema"` +} + +type Schema struct { + Text string `xml:",chardata"` + TargetNamespace string `xml:"targetNamespace,attr"` + Import []Import `xml:"import"` +} + +type Import struct { + Text string `xml:",chardata"` + SchemaLocation string `xml:"schemaLocation,attr"` + Namespace string `xml:"namespace,attr"` +} + +type Message struct { + Text string `xml:",chardata"` + Name string `xml:"name,attr"` + Part Part `xml:"part"` +} + +type Part struct { + Text string `xml:",chardata"` + Name string `xml:"name,attr"` + Element string `xml:"element,attr"` +} + +type PortType struct { + Text string `xml:",chardata"` + Name string `xml:"name,attr"` + Operation Operation `xml:"operation"` +} + +type Operation struct { + Text string `xml:",chardata"` + Name string `xml:"name,attr"` + Input OperationIO `xml:"input"` + Output OperationIO `xml:"output"` +} + +type OperationIO struct { + Text string `xml:",chardata"` + Action string `xml:"Action,attr"` + Message string `xml:"message,attr"` + Body OperationIOBody `xml:"body"` +} + +type OperationIOBody struct { + Text string `xml:",chardata"` + Use string `xml:"use,attr"` +} + +type Binding struct { + Text string `xml:",chardata"` + Name string `xml:"name,attr"` + Type string `xml:"type,attr"` + PolicyReference PolicyReference `xml:"PolicyReference"` + Binding DefinitionsBinding `xml:"binding"` + Operation BindingOperation `xml:"operation"` +} + +type PolicyReference struct { + Text string `xml:",chardata"` + URI string `xml:"URI,attr"` +} + +type DefinitionsBinding struct { + Text string `xml:",chardata"` + Transport string `xml:"transport,attr"` +} + +type BindingOperation struct { + Text string `xml:",chardata"` + Name string `xml:"name,attr"` + Operation BindingOperationOperation 
`xml:"operation"` + Input BindingOperationIO `xml:"input"` + Output BindingOperationIO `xml:"output"` +} + +type BindingOperationOperation struct { + Text string `xml:",chardata"` + SoapAction string `xml:"soapAction,attr"` + Style string `xml:"style,attr"` +} + +type BindingOperationIO struct { + Text string `xml:",chardata"` + Body OperationIOBody `xml:"body"` +} + +type Service struct { + Text string `xml:",chardata"` + Name string `xml:"name,attr"` + Port []Port `xml:"port"` +} + +type Port struct { + Text string `xml:",chardata"` + Name string `xml:"name,attr"` + Binding string `xml:"binding,attr"` + Address Address `xml:"address"` + EndpointReference PortEndpointReference `xml:"EndpointReference"` +} + +type Address struct { + Text string `xml:",chardata"` + Location string `xml:"location,attr"` +} + +type PortEndpointReference struct { + Text string `xml:",chardata"` + Address Text `xml:"Address"` + Identity Identity `xml:"Identity"` +} + +type Identity struct { + Text string `xml:",chardata"` + XMLNS string `xml:"xmlns,attr"` + SPN Text `xml:"Spn"` +} diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/saml_assertion_definitions.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/saml_assertion_definitions.go new file mode 100644 index 000000000000..7d0725565777 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/saml_assertion_definitions.go @@ -0,0 +1,230 @@ +// Copyright (c) Microsoft Corporation. +// Licensed under the MIT license. + +package defs + +import "encoding/xml" + +// TODO(msal): Someone (and it ain't gonna be me) needs to document these attributes or +// at the least put a link to RFC. 
+ +type SAMLDefinitions struct { + XMLName xml.Name `xml:"Envelope"` + Text string `xml:",chardata"` + S string `xml:"s,attr"` + A string `xml:"a,attr"` + U string `xml:"u,attr"` + Header Header `xml:"Header"` + Body Body `xml:"Body"` +} + +type Header struct { + Text string `xml:",chardata"` + Action Action `xml:"Action"` + Security Security `xml:"Security"` +} + +type Action struct { + Text string `xml:",chardata"` + MustUnderstand string `xml:"mustUnderstand,attr"` +} + +type Security struct { + Text string `xml:",chardata"` + MustUnderstand string `xml:"mustUnderstand,attr"` + O string `xml:"o,attr"` + Timestamp Timestamp `xml:"Timestamp"` +} + +type Timestamp struct { + Text string `xml:",chardata"` + ID string `xml:"Id,attr"` + Created Text `xml:"Created"` + Expires Text `xml:"Expires"` +} + +type Text struct { + Text string `xml:",chardata"` +} + +type Body struct { + Text string `xml:",chardata"` + RequestSecurityTokenResponseCollection RequestSecurityTokenResponseCollection `xml:"RequestSecurityTokenResponseCollection"` +} + +type RequestSecurityTokenResponseCollection struct { + Text string `xml:",chardata"` + Trust string `xml:"trust,attr"` + RequestSecurityTokenResponse []RequestSecurityTokenResponse `xml:"RequestSecurityTokenResponse"` +} + +type RequestSecurityTokenResponse struct { + Text string `xml:",chardata"` + Lifetime Lifetime `xml:"Lifetime"` + AppliesTo AppliesTo `xml:"AppliesTo"` + RequestedSecurityToken RequestedSecurityToken `xml:"RequestedSecurityToken"` + RequestedAttachedReference RequestedAttachedReference `xml:"RequestedAttachedReference"` + RequestedUnattachedReference RequestedUnattachedReference `xml:"RequestedUnattachedReference"` + TokenType Text `xml:"TokenType"` + RequestType Text `xml:"RequestType"` + KeyType Text `xml:"KeyType"` +} + +type Lifetime struct { + Text string `xml:",chardata"` + Created WSUTimestamp `xml:"Created"` + Expires WSUTimestamp `xml:"Expires"` +} + +type WSUTimestamp struct { + Text string `xml:",chardata"` + Wsu string `xml:"wsu,attr"` +} + +type AppliesTo struct { + Text string `xml:",chardata"` + Wsp string `xml:"wsp,attr"` + EndpointReference EndpointReference `xml:"EndpointReference"` +} + +type EndpointReference struct { + Text string `xml:",chardata"` + Wsa string `xml:"wsa,attr"` + Address Text `xml:"Address"` +} + +type RequestedSecurityToken struct { + Text string `xml:",chardata"` + AssertionRawXML string `xml:",innerxml"` + Assertion Assertion `xml:"Assertion"` +} + +type Assertion struct { + XMLName xml.Name // Normally its `xml:"Assertion"`, but I think they want to capture the xmlns + Text string `xml:",chardata"` + MajorVersion string `xml:"MajorVersion,attr"` + MinorVersion string `xml:"MinorVersion,attr"` + AssertionID string `xml:"AssertionID,attr"` + Issuer string `xml:"Issuer,attr"` + IssueInstant string `xml:"IssueInstant,attr"` + Saml string `xml:"saml,attr"` + Conditions Conditions `xml:"Conditions"` + AttributeStatement AttributeStatement `xml:"AttributeStatement"` + AuthenticationStatement AuthenticationStatement `xml:"AuthenticationStatement"` + Signature Signature `xml:"Signature"` +} + +type Conditions struct { + Text string `xml:",chardata"` + NotBefore string `xml:"NotBefore,attr"` + NotOnOrAfter string `xml:"NotOnOrAfter,attr"` + AudienceRestrictionCondition AudienceRestrictionCondition `xml:"AudienceRestrictionCondition"` +} + +type AudienceRestrictionCondition struct { + Text string `xml:",chardata"` + Audience Text `xml:"Audience"` +} + +type AttributeStatement struct { + Text string 
`xml:",chardata"` + Subject Subject `xml:"Subject"` + Attribute []Attribute `xml:"Attribute"` +} + +type Subject struct { + Text string `xml:",chardata"` + NameIdentifier NameIdentifier `xml:"NameIdentifier"` + SubjectConfirmation SubjectConfirmation `xml:"SubjectConfirmation"` +} + +type NameIdentifier struct { + Text string `xml:",chardata"` + Format string `xml:"Format,attr"` +} + +type SubjectConfirmation struct { + Text string `xml:",chardata"` + ConfirmationMethod Text `xml:"ConfirmationMethod"` +} + +type Attribute struct { + Text string `xml:",chardata"` + AttributeName string `xml:"AttributeName,attr"` + AttributeNamespace string `xml:"AttributeNamespace,attr"` + AttributeValue Text `xml:"AttributeValue"` +} + +type AuthenticationStatement struct { + Text string `xml:",chardata"` + AuthenticationMethod string `xml:"AuthenticationMethod,attr"` + AuthenticationInstant string `xml:"AuthenticationInstant,attr"` + Subject Subject `xml:"Subject"` +} + +type Signature struct { + Text string `xml:",chardata"` + Ds string `xml:"ds,attr"` + SignedInfo SignedInfo `xml:"SignedInfo"` + SignatureValue Text `xml:"SignatureValue"` + KeyInfo KeyInfo `xml:"KeyInfo"` +} + +type SignedInfo struct { + Text string `xml:",chardata"` + CanonicalizationMethod Method `xml:"CanonicalizationMethod"` + SignatureMethod Method `xml:"SignatureMethod"` + Reference Reference `xml:"Reference"` +} + +type Method struct { + Text string `xml:",chardata"` + Algorithm string `xml:"Algorithm,attr"` +} + +type Reference struct { + Text string `xml:",chardata"` + URI string `xml:"URI,attr"` + Transforms Transforms `xml:"Transforms"` + DigestMethod Method `xml:"DigestMethod"` + DigestValue Text `xml:"DigestValue"` +} + +type Transforms struct { + Text string `xml:",chardata"` + Transform []Method `xml:"Transform"` +} + +type KeyInfo struct { + Text string `xml:",chardata"` + Xmlns string `xml:"xmlns,attr"` + X509Data X509Data `xml:"X509Data"` +} + +type X509Data struct { + Text string `xml:",chardata"` + X509Certificate Text `xml:"X509Certificate"` +} + +type RequestedAttachedReference struct { + Text string `xml:",chardata"` + SecurityTokenReference SecurityTokenReference `xml:"SecurityTokenReference"` +} + +type SecurityTokenReference struct { + Text string `xml:",chardata"` + TokenType string `xml:"TokenType,attr"` + O string `xml:"o,attr"` + K string `xml:"k,attr"` + KeyIdentifier KeyIdentifier `xml:"KeyIdentifier"` +} + +type KeyIdentifier struct { + Text string `xml:",chardata"` + ValueType string `xml:"ValueType,attr"` +} + +type RequestedUnattachedReference struct { + Text string `xml:",chardata"` + SecurityTokenReference SecurityTokenReference `xml:"SecurityTokenReference"` +} diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/version_string.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/version_string.go new file mode 100644 index 000000000000..6fe5efa8a9ab --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/version_string.go @@ -0,0 +1,25 @@ +// Code generated by "stringer -type=Version"; DO NOT EDIT. + +package defs + +import "strconv" + +func _() { + // An "invalid array index" compiler error signifies that the constant values have changed. + // Re-run the stringer command to generate them again. 
+ var x [1]struct{} + _ = x[TrustUnknown-0] + _ = x[Trust2005-1] + _ = x[Trust13-2] +} + +const _Version_name = "TrustUnknownTrust2005Trust13" + +var _Version_index = [...]uint8{0, 12, 21, 28} + +func (i Version) String() string { + if i < 0 || i >= Version(len(_Version_index)-1) { + return "Version(" + strconv.FormatInt(int64(i), 10) + ")" + } + return _Version_name[_Version_index[i]:_Version_index[i+1]] +} diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/wstrust_endpoint.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/wstrust_endpoint.go new file mode 100644 index 000000000000..8fad5efb5de5 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/wstrust_endpoint.go @@ -0,0 +1,199 @@ +// Copyright (c) Microsoft Corporation. +// Licensed under the MIT license. + +package defs + +import ( + "encoding/xml" + "fmt" + "time" + + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority" + uuid "github.com/google/uuid" +) + +//go:generate stringer -type=Version + +type Version int + +const ( + TrustUnknown Version = iota + Trust2005 + Trust13 +) + +// Endpoint represents a WSTrust endpoint. +type Endpoint struct { + // Version is the version of the endpoint. + Version Version + // URL is the URL of the endpoint. + URL string +} + +type wsTrustTokenRequestEnvelope struct { + XMLName xml.Name `xml:"s:Envelope"` + Text string `xml:",chardata"` + S string `xml:"xmlns:s,attr"` + Wsa string `xml:"xmlns:wsa,attr"` + Wsu string `xml:"xmlns:wsu,attr"` + Header struct { + Text string `xml:",chardata"` + Action struct { + Text string `xml:",chardata"` + MustUnderstand string `xml:"s:mustUnderstand,attr"` + } `xml:"wsa:Action"` + MessageID struct { + Text string `xml:",chardata"` + } `xml:"wsa:messageID"` + ReplyTo struct { + Text string `xml:",chardata"` + Address struct { + Text string `xml:",chardata"` + } `xml:"wsa:Address"` + } `xml:"wsa:ReplyTo"` + To struct { + Text string `xml:",chardata"` + MustUnderstand string `xml:"s:mustUnderstand,attr"` + } `xml:"wsa:To"` + Security struct { + Text string `xml:",chardata"` + MustUnderstand string `xml:"s:mustUnderstand,attr"` + Wsse string `xml:"xmlns:wsse,attr"` + Timestamp struct { + Text string `xml:",chardata"` + ID string `xml:"wsu:Id,attr"` + Created struct { + Text string `xml:",chardata"` + } `xml:"wsu:Created"` + Expires struct { + Text string `xml:",chardata"` + } `xml:"wsu:Expires"` + } `xml:"wsu:Timestamp"` + UsernameToken struct { + Text string `xml:",chardata"` + ID string `xml:"wsu:Id,attr"` + Username struct { + Text string `xml:",chardata"` + } `xml:"wsse:Username"` + Password struct { + Text string `xml:",chardata"` + } `xml:"wsse:Password"` + } `xml:"wsse:UsernameToken"` + } `xml:"wsse:Security"` + } `xml:"s:Header"` + Body struct { + Text string `xml:",chardata"` + RequestSecurityToken struct { + Text string `xml:",chardata"` + Wst string `xml:"xmlns:wst,attr"` + AppliesTo struct { + Text string `xml:",chardata"` + Wsp string `xml:"xmlns:wsp,attr"` + EndpointReference struct { + Text string `xml:",chardata"` + Address struct { + Text string `xml:",chardata"` + } `xml:"wsa:Address"` + } `xml:"wsa:EndpointReference"` + } `xml:"wsp:AppliesTo"` + KeyType struct { + Text string `xml:",chardata"` + } `xml:"wst:KeyType"` + RequestType struct { + Text string 
`xml:",chardata"` + } `xml:"wst:RequestType"` + } `xml:"wst:RequestSecurityToken"` + } `xml:"s:Body"` +} + +func buildTimeString(t time.Time) string { + // Golang time formats are weird: https://stackoverflow.com/questions/20234104/how-to-format-current-time-using-a-yyyymmddhhmmss-format + return t.Format("2006-01-02T15:04:05.000Z") +} + +func (wte *Endpoint) buildTokenRequestMessage(authType authority.AuthorizeType, cloudAudienceURN string, username string, password string) (string, error) { + var soapAction string + var trustNamespace string + var keyType string + var requestType string + + createdTime := time.Now().UTC() + expiresTime := createdTime.Add(10 * time.Minute) + + switch wte.Version { + case Trust2005: + soapAction = trust2005Spec + trustNamespace = "http://schemas.xmlsoap.org/ws/2005/02/trust" + keyType = "http://schemas.xmlsoap.org/ws/2005/05/identity/NoProofKey" + requestType = "http://schemas.xmlsoap.org/ws/2005/02/trust/Issue" + case Trust13: + soapAction = trust13Spec + trustNamespace = "http://docs.oasis-open.org/ws-sx/ws-trust/200512" + keyType = "http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer" + requestType = "http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue" + default: + return "", fmt.Errorf("buildTokenRequestMessage had Version == %q, which is not recognized", wte.Version) + } + + var envelope wsTrustTokenRequestEnvelope + + messageUUID := uuid.New() + + envelope.S = "http://www.w3.org/2003/05/soap-envelope" + envelope.Wsa = "http://www.w3.org/2005/08/addressing" + envelope.Wsu = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" + + envelope.Header.Action.MustUnderstand = "1" + envelope.Header.Action.Text = soapAction + envelope.Header.MessageID.Text = "urn:uuid:" + messageUUID.String() + envelope.Header.ReplyTo.Address.Text = "http://www.w3.org/2005/08/addressing/anonymous" + envelope.Header.To.MustUnderstand = "1" + envelope.Header.To.Text = wte.URL + + switch authType { + case authority.ATUnknown: + return "", fmt.Errorf("buildTokenRequestMessage had no authority type(%v)", authType) + case authority.ATUsernamePassword: + endpointUUID := uuid.New() + + var trustID string + if wte.Version == Trust2005 { + trustID = "UnPwSecTok2005-" + endpointUUID.String() + } else { + trustID = "UnPwSecTok13-" + endpointUUID.String() + } + + envelope.Header.Security.MustUnderstand = "1" + envelope.Header.Security.Wsse = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" + envelope.Header.Security.Timestamp.ID = "MSATimeStamp" + envelope.Header.Security.Timestamp.Created.Text = buildTimeString(createdTime) + envelope.Header.Security.Timestamp.Expires.Text = buildTimeString(expiresTime) + envelope.Header.Security.UsernameToken.ID = trustID + envelope.Header.Security.UsernameToken.Username.Text = username + envelope.Header.Security.UsernameToken.Password.Text = password + default: + // This is just to note that we don't do anything for other cases. + // We aren't missing anything I know of. 
+ } + + envelope.Body.RequestSecurityToken.Wst = trustNamespace + envelope.Body.RequestSecurityToken.AppliesTo.Wsp = "http://schemas.xmlsoap.org/ws/2004/09/policy" + envelope.Body.RequestSecurityToken.AppliesTo.EndpointReference.Address.Text = cloudAudienceURN + envelope.Body.RequestSecurityToken.KeyType.Text = keyType + envelope.Body.RequestSecurityToken.RequestType.Text = requestType + + output, err := xml.Marshal(envelope) + if err != nil { + return "", err + } + + return string(output), nil +} + +func (wte *Endpoint) BuildTokenRequestMessageWIA(cloudAudienceURN string) (string, error) { + return wte.buildTokenRequestMessage(authority.ATWindowsIntegrated, cloudAudienceURN, "", "") +} + +func (wte *Endpoint) BuildTokenRequestMessageUsernamePassword(cloudAudienceURN string, username string, password string) (string, error) { + return wte.buildTokenRequestMessage(authority.ATUsernamePassword, cloudAudienceURN, username, password) +} diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/wstrust_mex_document.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/wstrust_mex_document.go new file mode 100644 index 000000000000..e3d19886ebc5 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs/wstrust_mex_document.go @@ -0,0 +1,159 @@ +// Copyright (c) Microsoft Corporation. +// Licensed under the MIT license. + +package defs + +import ( + "errors" + "fmt" + "strings" +) + +//go:generate stringer -type=endpointType + +type endpointType int + +const ( + etUnknown endpointType = iota + etUsernamePassword + etWindowsTransport +) + +type wsEndpointData struct { + Version Version + EndpointType endpointType +} + +const trust13Spec string = "http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue" +const trust2005Spec string = "http://schemas.xmlsoap.org/ws/2005/02/trust/RST/Issue" + +type MexDocument struct { + UsernamePasswordEndpoint Endpoint + WindowsTransportEndpoint Endpoint + policies map[string]endpointType + bindings map[string]wsEndpointData +} + +func updateEndpoint(cached *Endpoint, found Endpoint) { + if cached == nil || cached.Version == TrustUnknown { + *cached = found + return + } + if (*cached).Version == Trust2005 && found.Version == Trust13 { + *cached = found + return + } +} + +// TODO(msal): Someone needs to write tests for everything below. + +// NewFromDef creates a new MexDocument. 
+func NewFromDef(defs Definitions) (MexDocument, error) { + policies, err := policies(defs) + if err != nil { + return MexDocument{}, err + } + + bindings, err := bindings(defs, policies) + if err != nil { + return MexDocument{}, err + } + + userPass, windows, err := endpoints(defs, bindings) + if err != nil { + return MexDocument{}, err + } + + return MexDocument{ + UsernamePasswordEndpoint: userPass, + WindowsTransportEndpoint: windows, + policies: policies, + bindings: bindings, + }, nil +} + +func policies(defs Definitions) (map[string]endpointType, error) { + policies := make(map[string]endpointType, len(defs.Policy)) + + for _, policy := range defs.Policy { + if policy.ExactlyOne.All.NegotiateAuthentication.XMLName.Local != "" { + if policy.ExactlyOne.All.TransportBinding.SP != "" && policy.ID != "" { + policies["#"+policy.ID] = etWindowsTransport + } + } + + if policy.ExactlyOne.All.SignedEncryptedSupportingTokens.Policy.UsernameToken.Policy.WSSUsernameToken10.XMLName.Local != "" { + if policy.ExactlyOne.All.TransportBinding.SP != "" && policy.ID != "" { + policies["#"+policy.ID] = etUsernamePassword + } + } + if policy.ExactlyOne.All.SignedSupportingTokens.Policy.UsernameToken.Policy.WSSUsernameToken10.XMLName.Local != "" { + if policy.ExactlyOne.All.TransportBinding.SP != "" && policy.ID != "" { + policies["#"+policy.ID] = etUsernamePassword + } + } + } + + if len(policies) == 0 { + return policies, errors.New("no policies for mex document") + } + + return policies, nil +} + +func bindings(defs Definitions, policies map[string]endpointType) (map[string]wsEndpointData, error) { + bindings := make(map[string]wsEndpointData, len(defs.Binding)) + + for _, binding := range defs.Binding { + policyName := binding.PolicyReference.URI + transport := binding.Binding.Transport + + if transport == "http://schemas.xmlsoap.org/soap/http" { + if policy, ok := policies[policyName]; ok { + bindingName := binding.Name + specVersion := binding.Operation.Operation.SoapAction + + if specVersion == trust13Spec { + bindings[bindingName] = wsEndpointData{Trust13, policy} + } else if specVersion == trust2005Spec { + bindings[bindingName] = wsEndpointData{Trust2005, policy} + } else { + return nil, errors.New("found unknown spec version in mex document") + } + } + } + } + return bindings, nil +} + +func endpoints(defs Definitions, bindings map[string]wsEndpointData) (userPass, windows Endpoint, err error) { + for _, port := range defs.Service.Port { + bindingName := port.Binding + + index := strings.Index(bindingName, ":") + if index != -1 { + bindingName = bindingName[index+1:] + } + + if binding, ok := bindings[bindingName]; ok { + url := strings.TrimSpace(port.EndpointReference.Address.Text) + if url == "" { + return Endpoint{}, Endpoint{}, fmt.Errorf("MexDocument cannot have blank URL endpoint") + } + if binding.Version == TrustUnknown { + return Endpoint{}, Endpoint{}, fmt.Errorf("endpoint version unknown") + } + endpoint := Endpoint{Version: binding.Version, URL: url} + + switch binding.EndpointType { + case etUsernamePassword: + updateEndpoint(&userPass, endpoint) + case etWindowsTransport: + updateEndpoint(&windows, endpoint) + default: + return Endpoint{}, Endpoint{}, errors.New("found unknown port type in MEX document") + } + } + } + return userPass, windows, nil +} diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/wstrust.go 
b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/wstrust.go new file mode 100644 index 000000000000..47cd4c692d62 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/wstrust.go @@ -0,0 +1,136 @@ +// Copyright (c) Microsoft Corporation. +// Licensed under the MIT license. + +/* +Package wstrust provides a client for communicating with a WSTrust (https://en.wikipedia.org/wiki/WS-Trust#:~:text=WS%2DTrust%20is%20a%20WS,in%20a%20secure%20message%20exchange.) +for the purposes of extracting metadata from the service. This data can be used to acquire +tokens using the accesstokens.Client.GetAccessTokenFromSamlGrant() call. +*/ +package wstrust + +import ( + "context" + "errors" + "fmt" + "net/http" + "net/url" + + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/grant" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs" +) + +type xmlCaller interface { + XMLCall(ctx context.Context, endpoint string, headers http.Header, qv url.Values, resp interface{}) error + SOAPCall(ctx context.Context, endpoint, action string, headers http.Header, qv url.Values, body string, resp interface{}) error +} + +type SamlTokenInfo struct { + AssertionType string // Should be either constants SAMLV1Grant or SAMLV2Grant. + Assertion string +} + +// Client represents the REST calls to get tokens from token generator backends. +type Client struct { + // Comm provides the HTTP transport client. + Comm xmlCaller +} + +// TODO(msal): This allows me to call Mex without having a real Def file on line 45. +// This would fail because policies() would not find a policy. This is easy enough to +// fix in test data, but.... Definitions is defined with built in structs. That needs +// to be pulled apart and until then I have this hack in. +var newFromDef = defs.NewFromDef + +// Mex provides metadata about a wstrust service. +func (c Client) Mex(ctx context.Context, federationMetadataURL string) (defs.MexDocument, error) { + resp := defs.Definitions{} + err := c.Comm.XMLCall( + ctx, + federationMetadataURL, + http.Header{}, + nil, + &resp, + ) + if err != nil { + return defs.MexDocument{}, err + } + + return newFromDef(resp) +} + +const ( + SoapActionDefault = "http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue" + + // Note: Commented out because this action is not supported. It was in the original code + // but only used in a switch where it errored. Since there was only one value, a default + // worked better. However, buildTokenRequestMessage() had 2005 support. I'm not actually + // sure what's going on here. It like we have half support. For now this is here just + // for documentation purposes in case we are going to add support. + // + // SoapActionWSTrust2005 = "http://schemas.xmlsoap.org/ws/2005/02/trust/RST/Issue" +) + +// SAMLTokenInfo provides SAML information that is used to generate a SAML token. 
+func (c Client) SAMLTokenInfo(ctx context.Context, authParameters authority.AuthParams, cloudAudienceURN string, endpoint defs.Endpoint) (SamlTokenInfo, error) { + var wsTrustRequestMessage string + var err error + + switch authParameters.AuthorizationType { + case authority.ATWindowsIntegrated: + wsTrustRequestMessage, err = endpoint.BuildTokenRequestMessageWIA(cloudAudienceURN) + if err != nil { + return SamlTokenInfo{}, err + } + case authority.ATUsernamePassword: + wsTrustRequestMessage, err = endpoint.BuildTokenRequestMessageUsernamePassword( + cloudAudienceURN, authParameters.Username, authParameters.Password) + if err != nil { + return SamlTokenInfo{}, err + } + default: + return SamlTokenInfo{}, fmt.Errorf("unknown auth type %v", authParameters.AuthorizationType) + } + + var soapAction string + switch endpoint.Version { + case defs.Trust13: + soapAction = SoapActionDefault + case defs.Trust2005: + return SamlTokenInfo{}, errors.New("WS Trust 2005 support is not implemented") + default: + return SamlTokenInfo{}, fmt.Errorf("the SOAP endpoint for a wstrust call had an invalid version: %v", endpoint.Version) + } + + resp := defs.SAMLDefinitions{} + err = c.Comm.SOAPCall(ctx, endpoint.URL, soapAction, http.Header{}, nil, wsTrustRequestMessage, &resp) + if err != nil { + return SamlTokenInfo{}, err + } + + return c.samlAssertion(resp) +} + +const ( + samlv1Assertion = "urn:oasis:names:tc:SAML:1.0:assertion" + samlv2Assertion = "urn:oasis:names:tc:SAML:2.0:assertion" +) + +func (c Client) samlAssertion(def defs.SAMLDefinitions) (SamlTokenInfo, error) { + for _, tokenResponse := range def.Body.RequestSecurityTokenResponseCollection.RequestSecurityTokenResponse { + token := tokenResponse.RequestedSecurityToken + if token.Assertion.XMLName.Local != "" { + assertion := token.AssertionRawXML + + samlVersion := token.Assertion.Saml + switch samlVersion { + case samlv1Assertion: + return SamlTokenInfo{AssertionType: grant.SAMLV1, Assertion: assertion}, nil + case samlv2Assertion: + return SamlTokenInfo{AssertionType: grant.SAMLV2, Assertion: assertion}, nil + } + return SamlTokenInfo{}, fmt.Errorf("couldn't parse SAML assertion, version unknown: %q", samlVersion) + } + } + return SamlTokenInfo{}, errors.New("unknown WS-Trust version") +} diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/resolvers.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/resolvers.go new file mode 100644 index 000000000000..0ade411797ac --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/resolvers.go @@ -0,0 +1,149 @@ +// Copyright (c) Microsoft Corporation. +// Licensed under the MIT license. + +// TODO(msal): Write some tests. The original code this came from didn't have tests and I'm too +// tired at this point to do it. It, like many other *Manager code I found was broken because +// they didn't have mutex protection. + +package oauth + +import ( + "context" + "errors" + "fmt" + "strings" + "sync" + + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority" +) + +// ADFS is an active directory federation service authority type. 
+const ADFS = "ADFS"
+
+type cacheEntry struct {
+	Endpoints             authority.Endpoints
+	ValidForDomainsInList map[string]bool
+}
+
+func createcacheEntry(endpoints authority.Endpoints) cacheEntry {
+	return cacheEntry{endpoints, map[string]bool{}}
+}
+
+// AuthorityEndpoint retrieves endpoints from an authority for auth and token acquisition.
+type authorityEndpoint struct {
+	rest *ops.REST
+
+	mu    sync.Mutex
+	cache map[string]cacheEntry
+}
+
+// newAuthorityEndpoint is the constructor for AuthorityEndpoint.
+func newAuthorityEndpoint(rest *ops.REST) *authorityEndpoint {
+	m := &authorityEndpoint{rest: rest, cache: map[string]cacheEntry{}}
+	return m
+}
+
+// ResolveEndpoints gets the authorization and token endpoints and creates an AuthorityEndpoints instance.
+func (m *authorityEndpoint) ResolveEndpoints(ctx context.Context, authorityInfo authority.Info, userPrincipalName string) (authority.Endpoints, error) {
+
+	if endpoints, found := m.cachedEndpoints(authorityInfo, userPrincipalName); found {
+		return endpoints, nil
+	}
+
+	endpoint, err := m.openIDConfigurationEndpoint(ctx, authorityInfo, userPrincipalName)
+	if err != nil {
+		return authority.Endpoints{}, err
+	}
+
+	resp, err := m.rest.Authority().GetTenantDiscoveryResponse(ctx, endpoint)
+	if err != nil {
+		return authority.Endpoints{}, err
+	}
+	if err := resp.Validate(); err != nil {
+		return authority.Endpoints{}, fmt.Errorf("ResolveEndpoints(): %w", err)
+	}
+
+	tenant := authorityInfo.Tenant
+
+	endpoints := authority.NewEndpoints(
+		strings.Replace(resp.AuthorizationEndpoint, "{tenant}", tenant, -1),
+		strings.Replace(resp.TokenEndpoint, "{tenant}", tenant, -1),
+		strings.Replace(resp.Issuer, "{tenant}", tenant, -1),
+		authorityInfo.Host)
+
+	m.addCachedEndpoints(authorityInfo, userPrincipalName, endpoints)
+
+	return endpoints, nil
+}
+
+// cachedEndpoints returns the cached endpoints if they exist. If not, it returns false.
+func (m *authorityEndpoint) cachedEndpoints(authorityInfo authority.Info, userPrincipalName string) (authority.Endpoints, bool) {
+	m.mu.Lock()
+	defer m.mu.Unlock()
+
+	if cacheEntry, ok := m.cache[authorityInfo.CanonicalAuthorityURI]; ok {
+		if authorityInfo.AuthorityType == ADFS {
+			domain, err := adfsDomainFromUpn(userPrincipalName)
+			if err == nil {
+				if _, ok := cacheEntry.ValidForDomainsInList[domain]; ok {
+					return cacheEntry.Endpoints, true
+				}
+			}
+		}
+		return cacheEntry.Endpoints, true
+	}
+	return authority.Endpoints{}, false
+}
+
+func (m *authorityEndpoint) addCachedEndpoints(authorityInfo authority.Info, userPrincipalName string, endpoints authority.Endpoints) {
+	m.mu.Lock()
+	defer m.mu.Unlock()
+
+	updatedCacheEntry := createcacheEntry(endpoints)
+
+	if authorityInfo.AuthorityType == ADFS {
+		// Since we're here, we've made a call to the backend. We want to ensure we're caching
+		// the latest values from the server.
+ if cacheEntry, ok := m.cache[authorityInfo.CanonicalAuthorityURI]; ok { + for k := range cacheEntry.ValidForDomainsInList { + updatedCacheEntry.ValidForDomainsInList[k] = true + } + } + domain, err := adfsDomainFromUpn(userPrincipalName) + if err == nil { + updatedCacheEntry.ValidForDomainsInList[domain] = true + } + } + + m.cache[authorityInfo.CanonicalAuthorityURI] = updatedCacheEntry +} + +func (m *authorityEndpoint) openIDConfigurationEndpoint(ctx context.Context, authorityInfo authority.Info, userPrincipalName string) (string, error) { + if authorityInfo.Tenant == "adfs" { + return fmt.Sprintf("https://%s/adfs/.well-known/openid-configuration", authorityInfo.Host), nil + } else if authorityInfo.ValidateAuthority && !authority.TrustedHost(authorityInfo.Host) { + resp, err := m.rest.Authority().AADInstanceDiscovery(ctx, authorityInfo) + if err != nil { + return "", err + } + return resp.TenantDiscoveryEndpoint, nil + } else if authorityInfo.Region != "" { + resp, err := m.rest.Authority().AADInstanceDiscovery(ctx, authorityInfo) + if err != nil { + return "", err + } + return resp.TenantDiscoveryEndpoint, nil + + } + + return authorityInfo.CanonicalAuthorityURI + "v2.0/.well-known/openid-configuration", nil +} + +func adfsDomainFromUpn(userPrincipalName string) (string, error) { + parts := strings.Split(userPrincipalName, "@") + if len(parts) < 2 { + return "", errors.New("no @ present in user principal name") + } + return parts[1], nil +} diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/options/options.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/options/options.go new file mode 100644 index 000000000000..4561d72db4d7 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/options/options.go @@ -0,0 +1,52 @@ +// Copyright (c) Microsoft Corporation. +// Licensed under the MIT license. + +package options + +import ( + "errors" + "fmt" +) + +// CallOption implements an optional argument to a method call. See +// https://blog.devgenius.io/go-call-option-that-can-be-used-with-multiple-methods-6c81734f3dbe +// for an explanation of the usage pattern. +type CallOption interface { + Do(any) error + callOption() +} + +// ApplyOptions applies all the callOptions to options. options must be a pointer to a struct and +// callOptions must be a list of objects that implement CallOption. +func ApplyOptions[O, C any](options O, callOptions []C) error { + for _, o := range callOptions { + if t, ok := any(o).(CallOption); !ok { + return fmt.Errorf("unexpected option type %T", o) + } else if err := t.Do(options); err != nil { + return err + } + } + return nil +} + +// NewCallOption returns a new CallOption whose Do() method calls function "f". +func NewCallOption(f func(any) error) CallOption { + if f == nil { + // This isn't a practical concern because only an MSAL maintainer can get + // us here, by implementing a do-nothing option. But if someone does that, + // the below ensures the method invoked with the option returns an error. 
+ return callOption(func(any) error { + return errors.New("invalid option: missing implementation") + }) + } + return callOption(f) +} + +// callOption is an adapter for a function to a CallOption +type callOption func(any) error + +func (c callOption) Do(a any) error { + return c(a) +} + +func (callOption) callOption() {} diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/shared/shared.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/shared/shared.go new file mode 100644 index 000000000000..d8ab713560c9 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/shared/shared.go @@ -0,0 +1,72 @@ +// Copyright (c) Microsoft Corporation. +// Licensed under the MIT license. + +package shared + +import ( + "net/http" + "reflect" + "strings" +) + +const ( + // CacheKeySeparator is used in creating the keys of the cache. + CacheKeySeparator = "-" +) + +type Account struct { + HomeAccountID string `json:"home_account_id,omitempty"` + Environment string `json:"environment,omitempty"` + Realm string `json:"realm,omitempty"` + LocalAccountID string `json:"local_account_id,omitempty"` + AuthorityType string `json:"authority_type,omitempty"` + PreferredUsername string `json:"username,omitempty"` + GivenName string `json:"given_name,omitempty"` + FamilyName string `json:"family_name,omitempty"` + MiddleName string `json:"middle_name,omitempty"` + Name string `json:"name,omitempty"` + AlternativeID string `json:"alternative_account_id,omitempty"` + RawClientInfo string `json:"client_info,omitempty"` + UserAssertionHash string `json:"user_assertion_hash,omitempty"` + + AdditionalFields map[string]interface{} +} + +// NewAccount creates an account. +func NewAccount(homeAccountID, env, realm, localAccountID, authorityType, username string) Account { + return Account{ + HomeAccountID: homeAccountID, + Environment: env, + Realm: realm, + LocalAccountID: localAccountID, + AuthorityType: authorityType, + PreferredUsername: username, + } +} + +// Key creates the key for storing accounts in the cache. +func (acc Account) Key() string { + key := strings.Join([]string{acc.HomeAccountID, acc.Environment, acc.Realm}, CacheKeySeparator) + return strings.ToLower(key) +} + +// IsZero checks the zero value of account. +func (acc Account) IsZero() bool { + v := reflect.ValueOf(acc) + for i := 0; i < v.NumField(); i++ { + field := v.Field(i) + if !field.IsZero() { + switch field.Kind() { + case reflect.Map, reflect.Slice: + if field.Len() == 0 { + continue + } + } + return false + } + } + return true +} + +// DefaultClient is our default shared HTTP client. +var DefaultClient = &http.Client{} diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/version/version.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/version/version.go new file mode 100644 index 000000000000..2ac2d09e4fa0 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/version/version.go @@ -0,0 +1,8 @@ +// Copyright (c) Microsoft Corporation. +// Licensed under the MIT license. + +// Package version keeps the version number of the client package. +package version + +// Version is the version of this client package that is communicated to the server. 
+const Version = "1.1.1" diff --git a/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/public/public.go b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/public/public.go new file mode 100644 index 000000000000..88b217deddaa --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/AzureAD/microsoft-authentication-library-for-go/apps/public/public.go @@ -0,0 +1,713 @@ +// Copyright (c) Microsoft Corporation. +// Licensed under the MIT license. + +/* +Package public provides a client for authentication of "public" applications. A "public" +application is defined as an app that runs on client devices (android, ios, windows, linux, ...). +These devices are "untrusted" and access resources via web APIs that must authenticate. +*/ +package public + +/* +Design note: + +public.Client uses client.Base as an embedded type. client.Base statically assigns its attributes +during creation. As it doesn't have any pointers in it, anything borrowed from it, such as +Base.AuthParams is a copy that is free to be manipulated here. +*/ + +// TODO(msal): This should have example code for each method on client using Go's example doc framework. +// base usage details should be includee in the package documentation. + +import ( + "context" + "crypto/rand" + "crypto/sha256" + "encoding/base64" + "errors" + "fmt" + "net/url" + "reflect" + "strconv" + + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/cache" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/local" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/options" + "github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/shared" + "github.com/google/uuid" + "github.com/pkg/browser" +) + +// AuthResult contains the results of one token acquisition operation. +// For details see https://aka.ms/msal-net-authenticationresult +type AuthResult = base.AuthResult + +type Account = shared.Account + +var errNoAccount = errors.New("no account was specified with public.WithAccount(), or the specified account is invalid") + +// clientOptions configures the Client's behavior. +type clientOptions struct { + accessor cache.ExportReplace + authority string + capabilities []string + disableInstanceDiscovery bool + httpClient ops.HTTPClient +} + +func (p *clientOptions) validate() error { + u, err := url.Parse(p.authority) + if err != nil { + return fmt.Errorf("Authority options cannot be URL parsed: %w", err) + } + if u.Scheme != "https" { + return fmt.Errorf("Authority(%s) did not start with https://", u.String()) + } + return nil +} + +// Option is an optional argument to the New constructor. +type Option func(o *clientOptions) + +// WithAuthority allows for a custom authority to be set. This must be a valid https url. +func WithAuthority(authority string) Option { + return func(o *clientOptions) { + o.authority = authority + } +} + +// WithCache provides an accessor that will read and write authentication data to an externally managed cache. 
+func WithCache(accessor cache.ExportReplace) Option { + return func(o *clientOptions) { + o.accessor = accessor + } +} + +// WithClientCapabilities allows configuring one or more client capabilities such as "CP1" +func WithClientCapabilities(capabilities []string) Option { + return func(o *clientOptions) { + // there's no danger of sharing the slice's underlying memory with the application because + // this slice is simply passed to base.WithClientCapabilities, which copies its data + o.capabilities = capabilities + } +} + +// WithHTTPClient allows for a custom HTTP client to be set. +func WithHTTPClient(httpClient ops.HTTPClient) Option { + return func(o *clientOptions) { + o.httpClient = httpClient + } +} + +// WithInstanceDiscovery set to false to disable authority validation (to support private cloud scenarios) +func WithInstanceDiscovery(enabled bool) Option { + return func(o *clientOptions) { + o.disableInstanceDiscovery = !enabled + } +} + +// Client is a representation of authentication client for public applications as defined in the +// package doc. For more information, visit https://docs.microsoft.com/azure/active-directory/develop/msal-client-applications. +type Client struct { + base base.Client +} + +// New is the constructor for Client. +func New(clientID string, options ...Option) (Client, error) { + opts := clientOptions{ + authority: base.AuthorityPublicCloud, + httpClient: shared.DefaultClient, + } + + for _, o := range options { + o(&opts) + } + if err := opts.validate(); err != nil { + return Client{}, err + } + + base, err := base.New(clientID, opts.authority, oauth.New(opts.httpClient), base.WithCacheAccessor(opts.accessor), base.WithClientCapabilities(opts.capabilities), base.WithInstanceDiscovery(!opts.disableInstanceDiscovery)) + if err != nil { + return Client{}, err + } + return Client{base}, nil +} + +// authCodeURLOptions contains options for AuthCodeURL +type authCodeURLOptions struct { + claims, loginHint, tenantID, domainHint string +} + +// AuthCodeURLOption is implemented by options for AuthCodeURL +type AuthCodeURLOption interface { + authCodeURLOption() +} + +// AuthCodeURL creates a URL used to acquire an authorization code. +// +// Options: [WithClaims], [WithDomainHint], [WithLoginHint], [WithTenantID] +func (pca Client) AuthCodeURL(ctx context.Context, clientID, redirectURI string, scopes []string, opts ...AuthCodeURLOption) (string, error) { + o := authCodeURLOptions{} + if err := options.ApplyOptions(&o, opts); err != nil { + return "", err + } + ap, err := pca.base.AuthParams.WithTenant(o.tenantID) + if err != nil { + return "", err + } + ap.Claims = o.claims + ap.LoginHint = o.loginHint + ap.DomainHint = o.domainHint + return pca.base.AuthCodeURL(ctx, clientID, redirectURI, scopes, ap) +} + +// WithClaims sets additional claims to request for the token, such as those required by conditional access policies. +// Use this option when Azure AD returned a claims challenge for a prior request. The argument must be decoded. +// This option is valid for any token acquisition method. 
+func WithClaims(claims string) interface { + AcquireByAuthCodeOption + AcquireByDeviceCodeOption + AcquireByUsernamePasswordOption + AcquireInteractiveOption + AcquireSilentOption + AuthCodeURLOption + options.CallOption +} { + return struct { + AcquireByAuthCodeOption + AcquireByDeviceCodeOption + AcquireByUsernamePasswordOption + AcquireInteractiveOption + AcquireSilentOption + AuthCodeURLOption + options.CallOption + }{ + CallOption: options.NewCallOption( + func(a any) error { + switch t := a.(type) { + case *acquireTokenByAuthCodeOptions: + t.claims = claims + case *acquireTokenByDeviceCodeOptions: + t.claims = claims + case *acquireTokenByUsernamePasswordOptions: + t.claims = claims + case *acquireTokenSilentOptions: + t.claims = claims + case *authCodeURLOptions: + t.claims = claims + case *interactiveAuthOptions: + t.claims = claims + default: + return fmt.Errorf("unexpected options type %T", a) + } + return nil + }, + ), + } +} + +// WithTenantID specifies a tenant for a single authentication. It may be different than the tenant set in [New] by [WithAuthority]. +// This option is valid for any token acquisition method. +func WithTenantID(tenantID string) interface { + AcquireByAuthCodeOption + AcquireByDeviceCodeOption + AcquireByUsernamePasswordOption + AcquireInteractiveOption + AcquireSilentOption + AuthCodeURLOption + options.CallOption +} { + return struct { + AcquireByAuthCodeOption + AcquireByDeviceCodeOption + AcquireByUsernamePasswordOption + AcquireInteractiveOption + AcquireSilentOption + AuthCodeURLOption + options.CallOption + }{ + CallOption: options.NewCallOption( + func(a any) error { + switch t := a.(type) { + case *acquireTokenByAuthCodeOptions: + t.tenantID = tenantID + case *acquireTokenByDeviceCodeOptions: + t.tenantID = tenantID + case *acquireTokenByUsernamePasswordOptions: + t.tenantID = tenantID + case *acquireTokenSilentOptions: + t.tenantID = tenantID + case *authCodeURLOptions: + t.tenantID = tenantID + case *interactiveAuthOptions: + t.tenantID = tenantID + default: + return fmt.Errorf("unexpected options type %T", a) + } + return nil + }, + ), + } +} + +// acquireTokenSilentOptions are all the optional settings to an AcquireTokenSilent() call. +// These are set by using various AcquireTokenSilentOption functions. +type acquireTokenSilentOptions struct { + account Account + claims, tenantID string +} + +// AcquireSilentOption is implemented by options for AcquireTokenSilent +type AcquireSilentOption interface { + acquireSilentOption() +} + +// WithSilentAccount uses the passed account during an AcquireTokenSilent() call. +func WithSilentAccount(account Account) interface { + AcquireSilentOption + options.CallOption +} { + return struct { + AcquireSilentOption + options.CallOption + }{ + CallOption: options.NewCallOption( + func(a any) error { + switch t := a.(type) { + case *acquireTokenSilentOptions: + t.account = account + default: + return fmt.Errorf("unexpected options type %T", a) + } + return nil + }, + ), + } +} + +// AcquireTokenSilent acquires a token from either the cache or using a refresh token. 
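+// The options below all follow the CallOption pattern from the internal options
+// package and can be combined freely in one call. An illustrative sketch (the
+// tenant value is an assumption, not part of the library):
+//
+//	result, err := client.AcquireTokenSilent(ctx, scopes,
+//		WithSilentAccount(account),
+//		WithTenantID("contoso.onmicrosoft.com"))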
+//
+// Options: [WithClaims], [WithSilentAccount], [WithTenantID]
+func (pca Client) AcquireTokenSilent(ctx context.Context, scopes []string, opts ...AcquireSilentOption) (AuthResult, error) {
+	o := acquireTokenSilentOptions{}
+	if err := options.ApplyOptions(&o, opts); err != nil {
+		return AuthResult{}, err
+	}
+	// an account is required to find user tokens in the cache
+	if reflect.ValueOf(o.account).IsZero() {
+		return AuthResult{}, errNoAccount
+	}
+
+	silentParameters := base.AcquireTokenSilentParameters{
+		Scopes:      scopes,
+		Account:     o.account,
+		Claims:      o.claims,
+		RequestType: accesstokens.ATPublic,
+		IsAppCache:  false,
+		TenantID:    o.tenantID,
+	}
+
+	return pca.base.AcquireTokenSilent(ctx, silentParameters)
+}
+
+// acquireTokenByUsernamePasswordOptions contains optional configuration for AcquireTokenByUsernamePassword
+type acquireTokenByUsernamePasswordOptions struct {
+	claims, tenantID string
+}
+
+// AcquireByUsernamePasswordOption is implemented by options for AcquireTokenByUsernamePassword
+type AcquireByUsernamePasswordOption interface {
+	acquireByUsernamePasswordOption()
+}
+
+// AcquireTokenByUsernamePassword acquires a security token from the authority, via Username/Password Authentication.
+// NOTE: this flow is NOT recommended.
+//
+// Options: [WithClaims], [WithTenantID]
+func (pca Client) AcquireTokenByUsernamePassword(ctx context.Context, scopes []string, username, password string, opts ...AcquireByUsernamePasswordOption) (AuthResult, error) {
+	o := acquireTokenByUsernamePasswordOptions{}
+	if err := options.ApplyOptions(&o, opts); err != nil {
+		return AuthResult{}, err
+	}
+	authParams, err := pca.base.AuthParams.WithTenant(o.tenantID)
+	if err != nil {
+		return AuthResult{}, err
+	}
+	authParams.Scopes = scopes
+	authParams.AuthorizationType = authority.ATUsernamePassword
+	authParams.Claims = o.claims
+	authParams.Username = username
+	authParams.Password = password
+
+	token, err := pca.base.Token.UsernamePassword(ctx, authParams)
+	if err != nil {
+		return AuthResult{}, err
+	}
+	return pca.base.AuthResultFromToken(ctx, authParams, token, true)
+}
+
+type DeviceCodeResult = accesstokens.DeviceCodeResult
+
+// DeviceCode provides the results of the device code flow's first stage (containing the code)
+// that must be entered on the second device, and provides a method to retrieve the AuthenticationResult
+// once that code has been entered and verified.
+type DeviceCode struct {
+	// Result holds the information about the device code (such as the code).
+	Result DeviceCodeResult
+
+	authParams authority.AuthParams
+	client     Client
+	dc         oauth.DeviceCode
+}
+
+// AuthenticationResult retrieves the AuthenticationResult once the user enters the code
+// on the second device. Until then, it blocks until the .AcquireTokenByDeviceCode() context
+// is cancelled or the token expires.
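+//
+// An illustrative sketch of the two stages (error handling elided; we assume
+// Result carries a user-facing Message field as in MSAL's DeviceCodeResult):
+//
+//	dc, _ := client.AcquireTokenByDeviceCode(ctx, scopes)
+//	fmt.Println(dc.Result.Message) // instruct the user to enter the code
+//	result, _ := dc.AuthenticationResult(ctx)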
+func (d DeviceCode) AuthenticationResult(ctx context.Context) (AuthResult, error) { + token, err := d.dc.Token(ctx) + if err != nil { + return AuthResult{}, err + } + return d.client.base.AuthResultFromToken(ctx, d.authParams, token, true) +} + +// acquireTokenByDeviceCodeOptions contains optional configuration for AcquireTokenByDeviceCode +type acquireTokenByDeviceCodeOptions struct { + claims, tenantID string +} + +// AcquireByDeviceCodeOption is implemented by options for AcquireTokenByDeviceCode +type AcquireByDeviceCodeOption interface { + acquireByDeviceCodeOptions() +} + +// AcquireTokenByDeviceCode acquires a security token from the authority, by acquiring a device code and using that to acquire the token. +// Users need to create an AcquireTokenDeviceCodeParameters instance and pass it in. +// +// Options: [WithClaims], [WithTenantID] +func (pca Client) AcquireTokenByDeviceCode(ctx context.Context, scopes []string, opts ...AcquireByDeviceCodeOption) (DeviceCode, error) { + o := acquireTokenByDeviceCodeOptions{} + if err := options.ApplyOptions(&o, opts); err != nil { + return DeviceCode{}, err + } + authParams, err := pca.base.AuthParams.WithTenant(o.tenantID) + if err != nil { + return DeviceCode{}, err + } + authParams.Scopes = scopes + authParams.AuthorizationType = authority.ATDeviceCode + authParams.Claims = o.claims + + dc, err := pca.base.Token.DeviceCode(ctx, authParams) + if err != nil { + return DeviceCode{}, err + } + + return DeviceCode{Result: dc.Result, authParams: authParams, client: pca, dc: dc}, nil +} + +// acquireTokenByAuthCodeOptions contains the optional parameters used to acquire an access token using the authorization code flow. +type acquireTokenByAuthCodeOptions struct { + challenge, claims, tenantID string +} + +// AcquireByAuthCodeOption is implemented by options for AcquireTokenByAuthCode +type AcquireByAuthCodeOption interface { + acquireByAuthCodeOption() +} + +// WithChallenge allows you to provide a code for the .AcquireTokenByAuthCode() call. +func WithChallenge(challenge string) interface { + AcquireByAuthCodeOption + options.CallOption +} { + return struct { + AcquireByAuthCodeOption + options.CallOption + }{ + CallOption: options.NewCallOption( + func(a any) error { + switch t := a.(type) { + case *acquireTokenByAuthCodeOptions: + t.challenge = challenge + default: + return fmt.Errorf("unexpected options type %T", a) + } + return nil + }, + ), + } +} + +// AcquireTokenByAuthCode is a request to acquire a security token from the authority, using an authorization code. +// The specified redirect URI must be the same URI that was used when the authorization code was requested. +// +// Options: [WithChallenge], [WithClaims], [WithTenantID] +func (pca Client) AcquireTokenByAuthCode(ctx context.Context, code string, redirectURI string, scopes []string, opts ...AcquireByAuthCodeOption) (AuthResult, error) { + o := acquireTokenByAuthCodeOptions{} + if err := options.ApplyOptions(&o, opts); err != nil { + return AuthResult{}, err + } + + params := base.AcquireTokenAuthCodeParameters{ + Scopes: scopes, + Code: code, + Challenge: o.challenge, + Claims: o.claims, + AppType: accesstokens.ATPublic, + RedirectURI: redirectURI, + TenantID: o.tenantID, + } + + return pca.base.AcquireTokenByAuthCode(ctx, params) +} + +// Accounts gets all the accounts in the token cache. +// If there are no accounts in the cache the returned slice is empty. 
+func (pca Client) Accounts(ctx context.Context) ([]Account, error) { + return pca.base.AllAccounts(ctx) +} + +// RemoveAccount signs the account out and forgets account from token cache. +func (pca Client) RemoveAccount(ctx context.Context, account Account) error { + return pca.base.RemoveAccount(ctx, account) +} + +// interactiveAuthOptions contains the optional parameters used to acquire an access token for interactive auth code flow. +type interactiveAuthOptions struct { + claims, domainHint, loginHint, redirectURI, tenantID string + openURL func(url string) error +} + +// AcquireInteractiveOption is implemented by options for AcquireTokenInteractive +type AcquireInteractiveOption interface { + acquireInteractiveOption() +} + +// WithLoginHint pre-populates the login prompt with a username. +func WithLoginHint(username string) interface { + AcquireInteractiveOption + AuthCodeURLOption + options.CallOption +} { + return struct { + AcquireInteractiveOption + AuthCodeURLOption + options.CallOption + }{ + CallOption: options.NewCallOption( + func(a any) error { + switch t := a.(type) { + case *authCodeURLOptions: + t.loginHint = username + case *interactiveAuthOptions: + t.loginHint = username + default: + return fmt.Errorf("unexpected options type %T", a) + } + return nil + }, + ), + } +} + +// WithDomainHint adds the IdP domain as domain_hint query parameter in the auth url. +func WithDomainHint(domain string) interface { + AcquireInteractiveOption + AuthCodeURLOption + options.CallOption +} { + return struct { + AcquireInteractiveOption + AuthCodeURLOption + options.CallOption + }{ + CallOption: options.NewCallOption( + func(a any) error { + switch t := a.(type) { + case *authCodeURLOptions: + t.domainHint = domain + case *interactiveAuthOptions: + t.domainHint = domain + default: + return fmt.Errorf("unexpected options type %T", a) + } + return nil + }, + ), + } +} + +// WithRedirectURI sets a port for the local server used in interactive authentication, for +// example http://localhost:port. All URI components other than the port are ignored. +func WithRedirectURI(redirectURI string) interface { + AcquireInteractiveOption + options.CallOption +} { + return struct { + AcquireInteractiveOption + options.CallOption + }{ + CallOption: options.NewCallOption( + func(a any) error { + switch t := a.(type) { + case *interactiveAuthOptions: + t.redirectURI = redirectURI + default: + return fmt.Errorf("unexpected options type %T", a) + } + return nil + }, + ), + } +} + +// WithOpenURL allows you to provide a function to open the browser to complete the interactive login, instead of launching the system default browser. +func WithOpenURL(openURL func(url string) error) interface { + AcquireInteractiveOption + options.CallOption +} { + return struct { + AcquireInteractiveOption + options.CallOption + }{ + CallOption: options.NewCallOption( + func(a any) error { + switch t := a.(type) { + case *interactiveAuthOptions: + t.openURL = openURL + default: + return fmt.Errorf("unexpected options type %T", a) + } + return nil + }, + ), + } +} + +// AcquireTokenInteractive acquires a security token from the authority using the default web browser to select the account. 
+// https://docs.microsoft.com/en-us/azure/active-directory/develop/msal-authentication-flows#interactive-and-non-interactive-authentication +// +// Options: [WithDomainHint], [WithLoginHint], [WithOpenURL], [WithRedirectURI], [WithTenantID] +func (pca Client) AcquireTokenInteractive(ctx context.Context, scopes []string, opts ...AcquireInteractiveOption) (AuthResult, error) { + o := interactiveAuthOptions{} + if err := options.ApplyOptions(&o, opts); err != nil { + return AuthResult{}, err + } + // the code verifier is a random 32-byte sequence that's been base-64 encoded without padding. + // it's used to prevent MitM attacks during auth code flow, see https://tools.ietf.org/html/rfc7636 + cv, challenge, err := codeVerifier() + if err != nil { + return AuthResult{}, err + } + var redirectURL *url.URL + if o.redirectURI != "" { + redirectURL, err = url.Parse(o.redirectURI) + if err != nil { + return AuthResult{}, err + } + } + if o.openURL == nil { + o.openURL = browser.OpenURL + } + authParams, err := pca.base.AuthParams.WithTenant(o.tenantID) + if err != nil { + return AuthResult{}, err + } + authParams.Scopes = scopes + authParams.AuthorizationType = authority.ATInteractive + authParams.Claims = o.claims + authParams.CodeChallenge = challenge + authParams.CodeChallengeMethod = "S256" + authParams.LoginHint = o.loginHint + authParams.DomainHint = o.domainHint + authParams.State = uuid.New().String() + authParams.Prompt = "select_account" + res, err := pca.browserLogin(ctx, redirectURL, authParams, o.openURL) + if err != nil { + return AuthResult{}, err + } + authParams.Redirecturi = res.redirectURI + + req, err := accesstokens.NewCodeChallengeRequest(authParams, accesstokens.ATPublic, nil, res.authCode, cv) + if err != nil { + return AuthResult{}, err + } + + token, err := pca.base.Token.AuthCode(ctx, req) + if err != nil { + return AuthResult{}, err + } + + return pca.base.AuthResultFromToken(ctx, authParams, token, true) +} + +type interactiveAuthResult struct { + authCode string + redirectURI string +} + +// parses the port number from the provided URL. +// returns 0 if nil or no port is specified. +func parsePort(u *url.URL) (int, error) { + if u == nil { + return 0, nil + } + p := u.Port() + if p == "" { + return 0, nil + } + return strconv.Atoi(p) +} + +// browserLogin calls openURL and waits for a user to log in +func (pca Client) browserLogin(ctx context.Context, redirectURI *url.URL, params authority.AuthParams, openURL func(string) error) (interactiveAuthResult, error) { + // start local redirect server so login can call us back + port, err := parsePort(redirectURI) + if err != nil { + return interactiveAuthResult{}, err + } + srv, err := local.New(params.State, port) + if err != nil { + return interactiveAuthResult{}, err + } + defer srv.Shutdown() + params.Scopes = accesstokens.AppendDefaultScopes(params) + authURL, err := pca.base.AuthCodeURL(ctx, params.ClientID, srv.Addr, params.Scopes, params) + if err != nil { + return interactiveAuthResult{}, err + } + // open browser window so user can select credentials + if err := openURL(authURL); err != nil { + return interactiveAuthResult{}, err + } + // now wait until the logic calls us back + res := srv.Result(ctx) + if res.Err != nil { + return interactiveAuthResult{}, res.Err + } + return interactiveAuthResult{ + authCode: res.Code, + redirectURI: srv.Addr, + }, nil +} + +// creates a code verifier string along with its SHA256 hash which +// is used as the challenge when requesting an auth code. 
+// used in interactive auth flow for PKCE. +func codeVerifier() (codeVerifier string, challenge string, err error) { + cvBytes := make([]byte, 32) + if _, err = rand.Read(cvBytes); err != nil { + return + } + codeVerifier = base64.RawURLEncoding.EncodeToString(cvBytes) + // for PKCE, create a hash of the code verifier + cvh := sha256.Sum256([]byte(codeVerifier)) + challenge = base64.RawURLEncoding.EncodeToString(cvh[:]) + return +} diff --git a/cluster-autoscaler/vendor/github.com/felixge/httpsnoop/.travis.yml b/cluster-autoscaler/vendor/github.com/felixge/httpsnoop/.travis.yml deleted file mode 100644 index bfc421200d0e..000000000000 --- a/cluster-autoscaler/vendor/github.com/felixge/httpsnoop/.travis.yml +++ /dev/null @@ -1,6 +0,0 @@ -language: go - -go: - - 1.6 - - 1.7 - - 1.8 diff --git a/cluster-autoscaler/vendor/github.com/felixge/httpsnoop/Makefile b/cluster-autoscaler/vendor/github.com/felixge/httpsnoop/Makefile index 2d84889aed79..4e12afdd90d3 100644 --- a/cluster-autoscaler/vendor/github.com/felixge/httpsnoop/Makefile +++ b/cluster-autoscaler/vendor/github.com/felixge/httpsnoop/Makefile @@ -1,7 +1,7 @@ .PHONY: ci generate clean ci: clean generate - go test -v ./... + go test -race -v ./... generate: go generate . diff --git a/cluster-autoscaler/vendor/github.com/felixge/httpsnoop/README.md b/cluster-autoscaler/vendor/github.com/felixge/httpsnoop/README.md index ddcecd13e731..cf6b42f3d77a 100644 --- a/cluster-autoscaler/vendor/github.com/felixge/httpsnoop/README.md +++ b/cluster-autoscaler/vendor/github.com/felixge/httpsnoop/README.md @@ -7,8 +7,8 @@ http.Handlers. Doing this requires non-trivial wrapping of the http.ResponseWriter interface, which is also exposed for users interested in a more low-level API. -[![GoDoc](https://godoc.org/github.com/felixge/httpsnoop?status.svg)](https://godoc.org/github.com/felixge/httpsnoop) -[![Build Status](https://travis-ci.org/felixge/httpsnoop.svg?branch=master)](https://travis-ci.org/felixge/httpsnoop) +[![Go Reference](https://pkg.go.dev/badge/github.com/felixge/httpsnoop.svg)](https://pkg.go.dev/github.com/felixge/httpsnoop) +[![Build Status](https://github.com/felixge/httpsnoop/actions/workflows/main.yaml/badge.svg)](https://github.com/felixge/httpsnoop/actions/workflows/main.yaml) ## Usage Example diff --git a/cluster-autoscaler/vendor/github.com/felixge/httpsnoop/capture_metrics.go b/cluster-autoscaler/vendor/github.com/felixge/httpsnoop/capture_metrics.go index b77cc7c00957..bec7b71b39cb 100644 --- a/cluster-autoscaler/vendor/github.com/felixge/httpsnoop/capture_metrics.go +++ b/cluster-autoscaler/vendor/github.com/felixge/httpsnoop/capture_metrics.go @@ -52,7 +52,7 @@ func (m *Metrics) CaptureMetrics(w http.ResponseWriter, fn func(http.ResponseWri return func(code int) { next(code) - if !headerWritten { + if !(code >= 100 && code <= 199) && !headerWritten { m.Code = code headerWritten = true } diff --git a/cluster-autoscaler/vendor/github.com/felixge/httpsnoop/wrap_generated_gteq_1.8.go b/cluster-autoscaler/vendor/github.com/felixge/httpsnoop/wrap_generated_gteq_1.8.go index 31cbdfb8ef05..101cedde6744 100644 --- a/cluster-autoscaler/vendor/github.com/felixge/httpsnoop/wrap_generated_gteq_1.8.go +++ b/cluster-autoscaler/vendor/github.com/felixge/httpsnoop/wrap_generated_gteq_1.8.go @@ -1,5 +1,5 @@ // +build go1.8 -// Code generated by "httpsnoop/codegen"; DO NOT EDIT +// Code generated by "httpsnoop/codegen"; DO NOT EDIT. 
package httpsnoop diff --git a/cluster-autoscaler/vendor/github.com/felixge/httpsnoop/wrap_generated_lt_1.8.go b/cluster-autoscaler/vendor/github.com/felixge/httpsnoop/wrap_generated_lt_1.8.go index ab99c07c7a19..e0951df15278 100644 --- a/cluster-autoscaler/vendor/github.com/felixge/httpsnoop/wrap_generated_lt_1.8.go +++ b/cluster-autoscaler/vendor/github.com/felixge/httpsnoop/wrap_generated_lt_1.8.go @@ -1,5 +1,5 @@ // +build !go1.8 -// Code generated by "httpsnoop/codegen"; DO NOT EDIT +// Code generated by "httpsnoop/codegen"; DO NOT EDIT. package httpsnoop diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/.gitignore b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/.gitignore new file mode 100644 index 000000000000..09573e0169c2 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/.gitignore @@ -0,0 +1,4 @@ +.DS_Store +bin +.idea/ + diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/LICENSE b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/LICENSE new file mode 100644 index 000000000000..35dbc252041e --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/LICENSE @@ -0,0 +1,9 @@ +Copyright (c) 2012 Dave Grijalva +Copyright (c) 2021 golang-jwt maintainers + +Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/MIGRATION_GUIDE.md b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/MIGRATION_GUIDE.md new file mode 100644 index 000000000000..6ad1c22bbe39 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/MIGRATION_GUIDE.md @@ -0,0 +1,185 @@ +# Migration Guide (v5.0.0) + +Version `v5` contains a major rework of core functionalities in the `jwt-go` +library. This includes support for several validation options as well as a +re-design of the `Claims` interface. Lastly, we reworked how errors work under +the hood, which should provide a better overall developer experience. + +Starting from [v5.0.0](https://github.com/golang-jwt/jwt/releases/tag/v5.0.0), +the import path will be: + + "github.com/golang-jwt/jwt/v5" + +For most users, changing the import path *should* suffice. However, since we +intentionally changed and cleaned some of the public API, existing programs +might need to be updated. The following sections describe significant changes +and corresponding updates for existing programs. + +## Parsing and Validation Options + +Under the hood, a new `validator` struct takes care of validating the claims. 
A
+long-awaited feature has been the option to fine-tune the validation of tokens.
+This is now possible with several `ParserOption` functions that can be appended
+to most `Parse` functions, such as `ParseWithClaims`. The most important options
+and changes are:
+ * Added `WithLeeway` to support specifying the leeway that is allowed when
+   validating time-based claims, such as `exp` or `nbf`.
+ * Changed default behavior to not check the `iat` claim. Usage of this claim
+   is OPTIONAL according to the JWT RFC. The claim itself is also purely
+   informational according to the RFC, so a strict validation failure is not
+   recommended. If you want to check for sensible values in these claims,
+   please use the `WithIssuedAt` parser option.
+ * Added `WithAudience`, `WithSubject` and `WithIssuer` to support checking for
+   expected `aud`, `sub` and `iss`.
+ * Added `WithStrictDecoding` and `WithPaddingAllowed` options to allow
+   previously global settings to enable base64 strict encoding and the parsing
+   of base64 strings with padding. The latter is strictly speaking against the
+   standard, but unfortunately some of the major identity providers issue some
+   of these incorrect tokens. Both options are disabled by default.
+
+## Changes to the `Claims` interface
+
+### Complete Restructuring
+
+Previously, the claims interface was satisfied with an implementation of a
+`Valid() error` function. This had several issues:
+ * The different claim types (struct claims, map claims, etc.) then contained
+   similar (but not 100% identical) code for how this validation was done. This
+   led to a lot of (almost) duplicate code and was hard to maintain.
+ * It was not really semantically close to what a "claim" (or a set of claims)
+   really is: a list of defined key/value pairs with a certain semantic
+   meaning.
+
+Since all the validation functionality is now extracted into the validator, all
+`VerifyXXX` and `Valid` functions have been removed from the `Claims` interface.
+Instead, the interface now represents a list of getters to retrieve values with
+a specific meaning. This allows us to completely decouple the validation logic
+from the underlying storage representation of the claim, which could be a
+struct, a map or even something stored in a database.
+
+```go
+type Claims interface {
+	GetExpirationTime() (*NumericDate, error)
+	GetIssuedAt() (*NumericDate, error)
+	GetNotBefore() (*NumericDate, error)
+	GetIssuer() (string, error)
+	GetSubject() (string, error)
+	GetAudience() (ClaimStrings, error)
+}
+```
+
+### Supported Claim Types and Removal of `StandardClaims`
+
+The two standard claim types supported by this library, `MapClaims` and
+`RegisteredClaims`, both implement the necessary functions of this interface. The
+old `StandardClaims` struct, which had already been deprecated in `v4`, is now
+removed.
+
+Users using custom claims will, in most cases, not experience any changes in
+behavior as long as they embedded `RegisteredClaims`. If they created a new
+claim type from scratch, they now need to implement the proper getter
+functions.
+
+### Migrating Application Specific Logic of the old `Valid`
+
+Previously, users could override the `Valid` method in a custom claim, for
+example to extend the validation with application-specific claims. However, this
+was always very dangerous, since one could easily disable the standard
+validation and signature checking.
+
+In order to avoid that, while still supporting the use case, a new
+`ClaimsValidator` interface has been introduced.
This interface consists of the
+`Validate() error` function. If the validator sees that a `Claims` struct
+implements this interface, the errors returned by its `Validate` function will
+be *appended* to the regular standard validation. It is not possible to disable
+the standard validation anymore (even by accident).
+
+Usage examples can be found in [example_test.go](./example_test.go); they build
+claims structs like the following.
+
+```go
+// MyCustomClaims includes all registered claims, plus Foo.
+type MyCustomClaims struct {
+	Foo string `json:"foo"`
+	jwt.RegisteredClaims
+}
+
+// Validate can be used to execute additional application-specific claims
+// validation.
+func (m MyCustomClaims) Validate() error {
+	if m.Foo != "bar" {
+		return errors.New("must be foobar")
+	}
+
+	return nil
+}
+```
+
+## Changes to the `Token` and `Parser` structs
+
+The previously global functions `DecodeSegment` and `EncodeSegment` were moved
+to the `Parser` and `Token` struct respectively. This will allow us in the
+future to configure the behavior of these two based on options supplied on the
+parser or the token (creation). This also removes two previously global
+variables and moves them to parser options `WithStrictDecoding` and
+`WithPaddingAllowed`.
+
+In order to do that, we had to adjust the way signing methods work. Previously
+they were given a base64 encoded signature in `Verify` and were expected to
+return a base64 encoded version of the signature in `Sign`, both as a `string`.
+However, this made it necessary to have `DecodeSegment` and `EncodeSegment`
+global and was a less than perfect design because we were repeating
+encoding/decoding steps for all signing methods. Now, `Sign` and `Verify`
+operate on a decoded signature as a `[]byte`, which feels more natural for a
+cryptographic operation anyway. Lastly, `Parse` and `SignedString` take care of
+the final encoding/decoding part.
+
+In addition to that, we also changed the `Signature` field on `Token` from a
+`string` to `[]byte`, and it is now populated with the decoded form. This
+is also more consistent, because the other parts of the JWT, mainly `Header` and
+`Claims`, were already stored in decoded form in `Token`. Only the signature was
+stored in base64 encoded form, which was redundant with the information in the
+`Raw` field, which contains the complete token as base64.
+
+```go
+type Token struct {
+	Raw       string                 // Raw contains the raw token
+	Method    SigningMethod          // Method is the signing method used or to be used
+	Header    map[string]interface{} // Header is the first segment of the token in decoded form
+	Claims    Claims                 // Claims is the second segment of the token in decoded form
+	Signature []byte                 // Signature is the third segment of the token in decoded form
+	Valid     bool                   // Valid specifies if the token is valid
+}
+```
+
+Most (if not all) of these changes should not impact the normal usage of this
+library. Only users directly accessing the `Signature` field, as well as
+developers of custom signing methods, should be affected.
+
+# Migration Guide (v4.0.0)
+
+Starting from [v4.0.0](https://github.com/golang-jwt/jwt/releases/tag/v4.0.0),
+the import path will be:
+
+    "github.com/golang-jwt/jwt/v4"
+
+The `/v4` version will be backwards compatible with existing `v3.x.y` tags in
+this repo, as well as `github.com/dgrijalva/jwt-go`. For most users this should
+be a drop-in replacement; if you're having trouble migrating, please open an
+issue.
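+
+For most code bases the switch is mechanical and can be scripted. A sketch
+assuming GNU sed (BSD sed needs `-i ''`); the replacement paths mirror the
+paragraph below:
+
+```sh
+find . -name '*.go' -exec sed -i 's|github.com/dgrijalva/jwt-go|github.com/golang-jwt/jwt/v4|g' {} +
+```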
+
+You can replace all occurrences of `github.com/dgrijalva/jwt-go` or
+`github.com/golang-jwt/jwt` with `github.com/golang-jwt/jwt/v4`, either manually
+or by using tools such as `sed` or `gofmt`.
+
+And then you'd typically run:
+
+```
+go get github.com/golang-jwt/jwt/v4
+go mod tidy
+```
+
+# Older releases (before v3.2.0)
+
+The original migration guide for older releases can be found at
+https://github.com/dgrijalva/jwt-go/blob/master/MIGRATION_GUIDE.md.
diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/README.md b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/README.md
new file mode 100644
index 000000000000..964598a3173e
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/README.md
@@ -0,0 +1,167 @@
+# jwt-go
+
+[![build](https://github.com/golang-jwt/jwt/actions/workflows/build.yml/badge.svg)](https://github.com/golang-jwt/jwt/actions/workflows/build.yml)
+[![Go
+Reference](https://pkg.go.dev/badge/github.com/golang-jwt/jwt/v5.svg)](https://pkg.go.dev/github.com/golang-jwt/jwt/v5)
+[![Coverage Status](https://coveralls.io/repos/github/golang-jwt/jwt/badge.svg?branch=main)](https://coveralls.io/github/golang-jwt/jwt?branch=main)
+
+A [go](http://www.golang.org) (or 'golang' for search engine friendliness)
+implementation of [JSON Web
+Tokens](https://datatracker.ietf.org/doc/html/rfc7519).
+
+Starting with [v4.0.0](https://github.com/golang-jwt/jwt/releases/tag/v4.0.0)
+this project adds Go module support, but maintains backwards compatibility with
+older `v3.x.y` tags and upstream `github.com/dgrijalva/jwt-go`. See the
+[`MIGRATION_GUIDE.md`](./MIGRATION_GUIDE.md) for more information. Version
+v5.0.0 introduces major improvements to the validation of tokens, but is not
+entirely backwards compatible.
+
+> After the original author of the library suggested migrating the maintenance
+> of `jwt-go`, a dedicated team of open source maintainers decided to clone the
+> existing library into this repository. See
+> [dgrijalva/jwt-go#462](https://github.com/dgrijalva/jwt-go/issues/462) for a
+> detailed discussion on this topic.
+
+**SECURITY NOTICE:** Some older versions of Go have a security issue in
+crypto/elliptic. The recommendation is to upgrade to at least Go 1.15. See issue
+[dgrijalva/jwt-go#216](https://github.com/dgrijalva/jwt-go/issues/216) for more
+detail.
+
+**SECURITY NOTICE:** It's important that you [validate that the `alg` presented
+is what you
+expect](https://auth0.com/blog/critical-vulnerabilities-in-json-web-token-libraries/).
+This library attempts to make it easy to do the right thing by requiring key
+types to match the expected alg, but you should take the extra step to verify it
+in your usage. See the examples provided.
+
+### Supported Go versions
+
+Our support of Go versions is aligned with Go's [version release
+policy](https://golang.org/doc/devel/release#policy). So we will support a major
+version of Go until there are two newer major releases. We no longer support
+building jwt-go with unsupported Go versions, as these contain security
+vulnerabilities which will not be fixed.
+
+## What the heck is a JWT?
+
+JWT.io has [a great introduction](https://jwt.io/introduction) to JSON Web
+Tokens.
+
+In short, it's a signed JSON object that does something useful (for example,
+authentication). It's commonly used for `Bearer` tokens in OAuth 2. A token is
+made of three parts, separated by `.`'s. The first two parts are JSON objects
+that have been [base64url](https://datatracker.ietf.org/doc/html/rfc4648)
+encoded.
The last part is the signature, encoded the same way.
+
+The first part is called the header. It contains the necessary information for
+verifying the last part, the signature. For example, which signing method was
+used and what key was used.
+
+The part in the middle is the interesting bit. It's called the Claims and
+contains the actual stuff you care about. Refer to [RFC
+7519](https://datatracker.ietf.org/doc/html/rfc7519) for information about
+reserved keys and the proper way to add your own.
+
+## What's in the box?
+
+This library supports the parsing and verification as well as the generation and
+signing of JWTs. Currently supported signing algorithms are HMAC SHA, RSA,
+RSA-PSS, and ECDSA, though hooks are present for adding your own.
+
+## Installation Guidelines
+
+1. To install the jwt package, you first need to have
+   [Go](https://go.dev/doc/install) installed, then you can use the command
+   below to add `jwt-go` as a dependency in your Go program.
+
+```sh
+go get -u github.com/golang-jwt/jwt/v5
+```
+
+2. Import it in your code:
+
+```go
+import "github.com/golang-jwt/jwt/v5"
+```
+
+## Usage
+
+A detailed usage guide, including how to sign and verify tokens, can be found on
+our [documentation website](https://golang-jwt.github.io/jwt/usage/create/).
+
+## Examples
+
+See [the project documentation](https://pkg.go.dev/github.com/golang-jwt/jwt/v5)
+for examples of usage:
+
+* [Simple example of parsing and validating a
+  token](https://pkg.go.dev/github.com/golang-jwt/jwt/v5#example-Parse-Hmac)
+* [Simple example of building and signing a
+  token](https://pkg.go.dev/github.com/golang-jwt/jwt/v5#example-New-Hmac)
+* [Directory of
+  Examples](https://pkg.go.dev/github.com/golang-jwt/jwt/v5#pkg-examples)
+
+## Compliance
+
+This library was last reviewed to comply with [RFC
+7519](https://datatracker.ietf.org/doc/html/rfc7519) dated May 2015 with a few
+notable differences:
+
+* In order to protect against accidental use of [Unsecured
+  JWTs](https://datatracker.ietf.org/doc/html/rfc7519#section-6), tokens using
+  `alg=none` will only be accepted if the constant
+  `jwt.UnsafeAllowNoneSignatureType` is provided as the key.
+
+## Project Status & Versioning
+
+This library is considered production ready. Feedback and feature requests are
+appreciated. The API should be considered stable. There should be very few
+backwards-incompatible changes outside of major version updates (and only with
+good reason).
+
+This project uses [Semantic Versioning 2.0.0](http://semver.org). Accepted pull
+requests will land on `main`. Periodically, versions will be tagged from
+`main`. You can find all the releases on [the project releases
+page](https://github.com/golang-jwt/jwt/releases).
+
+**BREAKING CHANGES:** A full list of breaking changes is available in
+`VERSION_HISTORY.md`. See `MIGRATION_GUIDE.md` for more information on updating
+your code.
+
+## Extensions
+
+This library publishes all the necessary components for adding your own signing
+methods or key functions. Simply implement the `SigningMethod` interface and
+register a factory method using `RegisterSigningMethod`, or provide a
+`jwt.Keyfunc`.
+
+A common use case would be integrating with different 3rd party signature
+providers, like key management services from various cloud providers or Hardware
+Security Modules (HSMs), or to implement additional standards.
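+
+To give a sense of the moving parts, here is a minimal, deliberately insecure
+sketch of a custom method (the "NOOP" name and package are illustrative, not
+part of the library); the extensions in the table below implement this same
+interface against real key stores:
+
+```go
+package noopmethod
+
+import "github.com/golang-jwt/jwt/v5"
+
+// SigningMethodNoop satisfies jwt.SigningMethod but performs no cryptography.
+// Never use something like this in production; it only shows the shape.
+type SigningMethodNoop struct{}
+
+func (m *SigningMethodNoop) Alg() string { return "NOOP" }
+
+// Sign would normally compute a signature over signingString with key.
+func (m *SigningMethodNoop) Sign(signingString string, key interface{}) ([]byte, error) {
+	return []byte{}, nil
+}
+
+// Verify would normally check sig against signingString with key.
+func (m *SigningMethodNoop) Verify(signingString string, sig []byte, key interface{}) error {
+	return nil
+}
+
+func init() {
+	// Makes the method resolvable when a token header says "alg": "NOOP".
+	jwt.RegisterSigningMethod("NOOP", func() jwt.SigningMethod { return &SigningMethodNoop{} })
+}
+```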
+
+| Extension | Purpose                                                                                                  | Repo                                       |
+| --------- | -------------------------------------------------------------------------------------------------------- | ------------------------------------------ |
+| GCP       | Integrates with multiple Google Cloud Platform signing tools (AppEngine, IAM API, Cloud KMS)               | https://github.com/someone1/gcp-jwt-go     |
+| AWS       | Integrates with AWS Key Management Service (KMS)                                                           | https://github.com/matelang/jwt-go-aws-kms |
+| JWKS      | Provides support for JWKS ([RFC 7517](https://datatracker.ietf.org/doc/html/rfc7517)) as a `jwt.Keyfunc`   | https://github.com/MicahParks/keyfunc      |
+
+*Disclaimer*: Unless otherwise specified, these integrations are maintained by
+third parties and should not be considered a primary offering by any of the
+mentioned cloud providers.
+
+## More
+
+Go package documentation can be found [on
+pkg.go.dev](https://pkg.go.dev/github.com/golang-jwt/jwt/v5). Additional
+documentation can be found on [our project
+page](https://golang-jwt.github.io/jwt/).
+
+The command line utility included in this project (cmd/jwt) provides a
+straightforward example of token creation and parsing as well as a useful tool
+for debugging your own integration. You'll also find several implementation
+examples in the documentation.
+
+[golang-jwt](https://github.com/orgs/golang-jwt) incorporates a modified version
+of the JWT logo, which is distributed under the terms of the [MIT
+License](https://github.com/jsonwebtoken/jsonwebtoken.github.io/blob/master/LICENSE.txt).
diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/SECURITY.md b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/SECURITY.md
new file mode 100644
index 000000000000..b08402c3427f
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/SECURITY.md
@@ -0,0 +1,19 @@
+# Security Policy
+
+## Supported Versions
+
+As of February 2022 (and until this document is updated), the latest version `v4` is supported.
+
+## Reporting a Vulnerability
+
+If you think you have found a vulnerability, and even if you are not sure, please report it to jwt-go-security@googlegroups.com or one of the other [golang-jwt maintainers](https://github.com/orgs/golang-jwt/people). Please try to be explicit, and describe the steps to reproduce the security issue with code example(s).
+
+You will receive a response in a timely manner. If the issue is confirmed, we will do our best to release a patch as soon as possible given the complexity of the problem.
+
+## Public Discussions
+
+Please avoid publicly discussing a potential security vulnerability.
+
+Let's take this offline and find a solution first; this limits the potential impact as much as possible.
+
+We appreciate your help!
diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/VERSION_HISTORY.md b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/VERSION_HISTORY.md
new file mode 100644
index 000000000000..b5039e49c102
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/VERSION_HISTORY.md
@@ -0,0 +1,137 @@
+# `jwt-go` Version History
+
+The following version history is kept for historic purposes. To retrieve the current changes of each version, please refer to the changelog of the specific release versions on https://github.com/golang-jwt/jwt/releases.
+
+## 4.0.0
+
+* Introduces support for Go modules. The `v4` version will be backwards compatible with `v3.x.y`.
+
+## 3.2.2
+
+* Starting from this release, we are adopting the policy to support the 2 most recent versions of Go currently available.
At the time of this release, these are Go 1.15 and 1.16 ([#28](https://github.com/golang-jwt/jwt/pull/28)).
+* Fixed a potential issue that could occur when the verification of `exp`, `iat` or `nbf` was not required and contained invalid contents, i.e. non-numeric/date. Thanks to @thaJeztah for making us aware of that and @giorgos-f3 for originally reporting it to the formtech fork ([#40](https://github.com/golang-jwt/jwt/pull/40)).
+* Added support for EdDSA / ED25519 ([#36](https://github.com/golang-jwt/jwt/pull/36)).
+* Optimized allocations ([#33](https://github.com/golang-jwt/jwt/pull/33)).
+
+## 3.2.1
+
+* **Import Path Change**: See MIGRATION_GUIDE.md for tips on updating your code
+  * Changed the import path from `github.com/dgrijalva/jwt-go` to `github.com/golang-jwt/jwt`
+* Fixed a type confusion issue between `string` and `[]string` in `VerifyAudience` ([#12](https://github.com/golang-jwt/jwt/pull/12)). This fixes CVE-2020-26160
+
+#### 3.2.0
+
+* Added method `ParseUnverified` to allow users to split up the tasks of parsing and validation
+* HMAC signing method returns `ErrInvalidKeyType` instead of `ErrInvalidKey` where appropriate
+* Added options to `request.ParseFromRequest`, which allows for an arbitrary list of modifiers to parsing behavior. The initial set includes `WithClaims` and `WithParser`. Existing usage of this function will continue to work as before.
+* Deprecated `ParseFromRequestWithClaims` to simplify API in the future.
+
+#### 3.1.0
+
+* Improvements to `jwt` command line tool
+* Added `SkipClaimsValidation` option to `Parser`
+* Documentation updates
+
+#### 3.0.0
+
+* **Compatibility Breaking Changes**: See MIGRATION_GUIDE.md for tips on updating your code
+  * Dropped support for `[]byte` keys when using RSA signing methods. This convenience feature could contribute to security vulnerabilities involving mismatched key types with signing methods.
+  * `ParseFromRequest` has been moved to `request` subpackage and usage has changed
+  * The `Claims` property on `Token` is now type `Claims` instead of `map[string]interface{}`. The default value is type `MapClaims`, which is an alias to `map[string]interface{}`. This makes it possible to use a custom type when decoding claims.
+* Other Additions and Changes
+  * Added `Claims` interface type to allow users to decode the claims into a custom type
+  * Added `ParseWithClaims`, which takes a third argument of type `Claims`. Use this function instead of `Parse` if you have a custom type you'd like to decode into.
+  * Dramatically improved the functionality and flexibility of `ParseFromRequest`, which is now in the `request` subpackage
+  * Added `ParseFromRequestWithClaims` which is the `FromRequest` equivalent of `ParseWithClaims`
+  * Added new interface type `Extractor`, which is used for extracting JWT strings from http requests. Used with `ParseFromRequest` and `ParseFromRequestWithClaims`.
+  * Added several new, more specific, validation errors to error type bitmask
+  * Moved examples from README to executable example files
+  * Signing method registry is now thread safe
+  * Added new property to `ValidationError`, which contains the raw error returned by calls made by parse/verify (such as those returned by keyfunc or json parser)
+
+#### 2.7.0
+
+This will likely be the last backwards compatible release before 3.0.0, excluding essential bug fixes.
+
+* Added new option `-show` to the `jwt` command that will just output the decoded token without verifying
+* Error text for expired tokens includes how long it's been expired
+* Fixed incorrect error returned from `ParseRSAPublicKeyFromPEM`
+* Documentation updates
+
+#### 2.6.0
+
+* Exposed inner error within ValidationError
+* Fixed validation errors when using UseJSONNumber flag
+* Added several unit tests
+
+#### 2.5.0
+
+* Added support for signing method none. You shouldn't use this. The API tries to make this clear.
+* Updated/fixed some documentation
+* Added more helpful error message when trying to parse tokens that begin with `BEARER `
+
+#### 2.4.0
+
+* Added new type, Parser, to allow for configuration of various parsing parameters
+  * You can now specify a list of valid signing methods. Anything outside this set will be rejected.
+  * You can now opt to use the `json.Number` type instead of `float64` when parsing token JSON
+* Added support for [Travis CI](https://travis-ci.org/dgrijalva/jwt-go)
+* Fixed some bugs with ECDSA parsing
+
+#### 2.3.0
+
+* Added support for ECDSA signing methods
+* Added support for RSA PSS signing methods (requires go v1.4)
+
+#### 2.2.0
+
+* Gracefully handle a `nil` `Keyfunc` being passed to `Parse`. Result will now be the parsed token and an error, instead of a panic.
+
+#### 2.1.0
+
+Backwards compatible API change that was missed in 2.0.0.
+
+* The `SignedString` method on `Token` now takes `interface{}` instead of `[]byte`
+
+#### 2.0.0
+
+There were two major reasons for breaking backwards compatibility with this update. The first was a refactor required to expand the width of the RSA and HMAC-SHA signing implementations. There will likely be no required code changes to support this change.
+
+The second update, while unfortunately requiring a small change in integration, is required to open up this library to other signing methods. Not all keys used for all signing methods have a single standard on-disk representation. Requiring `[]byte` as the type for all keys proved too limiting. Additionally, this implementation allows for pre-parsed tokens to be reused, which might matter in an application that parses a high volume of tokens with a small set of keys. Backwards compatibility has been maintained for passing `[]byte` to the RSA signing methods, but they will also accept `*rsa.PublicKey` and `*rsa.PrivateKey`.
+
+It is likely the only integration change required here will be to change `func(t *jwt.Token) ([]byte, error)` to `func(t *jwt.Token) (interface{}, error)` when calling `Parse`.
+
+* **Compatibility Breaking Changes**
+  * `SigningMethodHS256` is now `*SigningMethodHMAC` instead of `type struct`
+  * `SigningMethodRS256` is now `*SigningMethodRSA` instead of `type struct`
+  * `KeyFunc` now returns `interface{}` instead of `[]byte`
+  * `SigningMethod.Sign` now takes `interface{}` instead of `[]byte` for the key
+  * `SigningMethod.Verify` now takes `interface{}` instead of `[]byte` for the key
+* Renamed type `SigningMethodHS256` to `SigningMethodHMAC`. Specific sizes are now just instances of this type.
+  * Added public package global `SigningMethodHS256`
+  * Added public package global `SigningMethodHS384`
+  * Added public package global `SigningMethodHS512`
+* Renamed type `SigningMethodRS256` to `SigningMethodRSA`. Specific sizes are now just instances of this type.
+ * Added public package global `SigningMethodRS256` + * Added public package global `SigningMethodRS384` + * Added public package global `SigningMethodRS512` +* Moved sample private key for HMAC tests from an inline value to a file on disk. Value is unchanged. +* Refactored the RSA implementation to be easier to read +* Exposed helper methods `ParseRSAPrivateKeyFromPEM` and `ParseRSAPublicKeyFromPEM` + +## 1.0.2 + +* Fixed bug in parsing public keys from certificates +* Added more tests around the parsing of keys for RS256 +* Code refactoring in RS256 implementation. No functional changes + +## 1.0.1 + +* Fixed panic if RS256 signing method was passed an invalid key + +## 1.0.0 + +* First versioned release +* API stabilized +* Supports creating, signing, parsing, and validating JWT tokens +* Supports RS256 and HS256 signing methods diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/claims.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/claims.go new file mode 100644 index 000000000000..d50ff3dad829 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/claims.go @@ -0,0 +1,16 @@ +package jwt + +// Claims represent any form of a JWT Claims Set according to +// https://datatracker.ietf.org/doc/html/rfc7519#section-4. In order to have a +// common basis for validation, it is required that an implementation is able to +// supply at least the claim names provided in +// https://datatracker.ietf.org/doc/html/rfc7519#section-4.1 namely `exp`, +// `iat`, `nbf`, `iss`, `sub` and `aud`. +type Claims interface { + GetExpirationTime() (*NumericDate, error) + GetIssuedAt() (*NumericDate, error) + GetNotBefore() (*NumericDate, error) + GetIssuer() (string, error) + GetSubject() (string, error) + GetAudience() (ClaimStrings, error) +} diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/doc.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/doc.go new file mode 100644 index 000000000000..a86dc1a3b348 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/doc.go @@ -0,0 +1,4 @@ +// Package jwt is a Go implementation of JSON Web Tokens: http://self-issued.info/docs/draft-jones-json-web-token.html +// +// See README.md for more info. +package jwt diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/ecdsa.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/ecdsa.go new file mode 100644 index 000000000000..4ccae2a857de --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/ecdsa.go @@ -0,0 +1,134 @@ +package jwt + +import ( + "crypto" + "crypto/ecdsa" + "crypto/rand" + "errors" + "math/big" +) + +var ( + // Sadly this is missing from crypto/ecdsa compared to crypto/rsa + ErrECDSAVerification = errors.New("crypto/ecdsa: verification error") +) + +// SigningMethodECDSA implements the ECDSA family of signing methods. 
+// Expects *ecdsa.PrivateKey for signing and *ecdsa.PublicKey for verification +type SigningMethodECDSA struct { + Name string + Hash crypto.Hash + KeySize int + CurveBits int +} + +// Specific instances for EC256 and company +var ( + SigningMethodES256 *SigningMethodECDSA + SigningMethodES384 *SigningMethodECDSA + SigningMethodES512 *SigningMethodECDSA +) + +func init() { + // ES256 + SigningMethodES256 = &SigningMethodECDSA{"ES256", crypto.SHA256, 32, 256} + RegisterSigningMethod(SigningMethodES256.Alg(), func() SigningMethod { + return SigningMethodES256 + }) + + // ES384 + SigningMethodES384 = &SigningMethodECDSA{"ES384", crypto.SHA384, 48, 384} + RegisterSigningMethod(SigningMethodES384.Alg(), func() SigningMethod { + return SigningMethodES384 + }) + + // ES512 + SigningMethodES512 = &SigningMethodECDSA{"ES512", crypto.SHA512, 66, 521} + RegisterSigningMethod(SigningMethodES512.Alg(), func() SigningMethod { + return SigningMethodES512 + }) +} + +func (m *SigningMethodECDSA) Alg() string { + return m.Name +} + +// Verify implements token verification for the SigningMethod. +// For this verify method, key must be an ecdsa.PublicKey struct +func (m *SigningMethodECDSA) Verify(signingString string, sig []byte, key interface{}) error { + // Get the key + var ecdsaKey *ecdsa.PublicKey + switch k := key.(type) { + case *ecdsa.PublicKey: + ecdsaKey = k + default: + return ErrInvalidKeyType + } + + if len(sig) != 2*m.KeySize { + return ErrECDSAVerification + } + + r := big.NewInt(0).SetBytes(sig[:m.KeySize]) + s := big.NewInt(0).SetBytes(sig[m.KeySize:]) + + // Create hasher + if !m.Hash.Available() { + return ErrHashUnavailable + } + hasher := m.Hash.New() + hasher.Write([]byte(signingString)) + + // Verify the signature + if verifystatus := ecdsa.Verify(ecdsaKey, hasher.Sum(nil), r, s); verifystatus { + return nil + } + + return ErrECDSAVerification +} + +// Sign implements token signing for the SigningMethod. +// For this signing method, key must be an ecdsa.PrivateKey struct +func (m *SigningMethodECDSA) Sign(signingString string, key interface{}) ([]byte, error) { + // Get the key + var ecdsaKey *ecdsa.PrivateKey + switch k := key.(type) { + case *ecdsa.PrivateKey: + ecdsaKey = k + default: + return nil, ErrInvalidKeyType + } + + // Create the hasher + if !m.Hash.Available() { + return nil, ErrHashUnavailable + } + + hasher := m.Hash.New() + hasher.Write([]byte(signingString)) + + // Sign the string and return r, s + if r, s, err := ecdsa.Sign(rand.Reader, ecdsaKey, hasher.Sum(nil)); err == nil { + curveBits := ecdsaKey.Curve.Params().BitSize + + if m.CurveBits != curveBits { + return nil, ErrInvalidKey + } + + keyBytes := curveBits / 8 + if curveBits%8 > 0 { + keyBytes += 1 + } + + // We serialize the outputs (r and s) into big-endian byte arrays + // padded with zeros on the left to make sure the sizes work out. + // Output must be 2*keyBytes long. + out := make([]byte, 2*keyBytes) + r.FillBytes(out[0:keyBytes]) // r is assigned to the first half of output. + s.FillBytes(out[keyBytes:]) // s is assigned to the second half of output. 
+ + return out, nil + } else { + return nil, err + } +} diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/ecdsa_utils.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/ecdsa_utils.go new file mode 100644 index 000000000000..5700636d35b6 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/ecdsa_utils.go @@ -0,0 +1,69 @@ +package jwt + +import ( + "crypto/ecdsa" + "crypto/x509" + "encoding/pem" + "errors" +) + +var ( + ErrNotECPublicKey = errors.New("key is not a valid ECDSA public key") + ErrNotECPrivateKey = errors.New("key is not a valid ECDSA private key") +) + +// ParseECPrivateKeyFromPEM parses a PEM encoded Elliptic Curve Private Key Structure +func ParseECPrivateKeyFromPEM(key []byte) (*ecdsa.PrivateKey, error) { + var err error + + // Parse PEM block + var block *pem.Block + if block, _ = pem.Decode(key); block == nil { + return nil, ErrKeyMustBePEMEncoded + } + + // Parse the key + var parsedKey interface{} + if parsedKey, err = x509.ParseECPrivateKey(block.Bytes); err != nil { + if parsedKey, err = x509.ParsePKCS8PrivateKey(block.Bytes); err != nil { + return nil, err + } + } + + var pkey *ecdsa.PrivateKey + var ok bool + if pkey, ok = parsedKey.(*ecdsa.PrivateKey); !ok { + return nil, ErrNotECPrivateKey + } + + return pkey, nil +} + +// ParseECPublicKeyFromPEM parses a PEM encoded PKCS1 or PKCS8 public key +func ParseECPublicKeyFromPEM(key []byte) (*ecdsa.PublicKey, error) { + var err error + + // Parse PEM block + var block *pem.Block + if block, _ = pem.Decode(key); block == nil { + return nil, ErrKeyMustBePEMEncoded + } + + // Parse the key + var parsedKey interface{} + if parsedKey, err = x509.ParsePKIXPublicKey(block.Bytes); err != nil { + if cert, err := x509.ParseCertificate(block.Bytes); err == nil { + parsedKey = cert.PublicKey + } else { + return nil, err + } + } + + var pkey *ecdsa.PublicKey + var ok bool + if pkey, ok = parsedKey.(*ecdsa.PublicKey); !ok { + return nil, ErrNotECPublicKey + } + + return pkey, nil +} diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/ed25519.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/ed25519.go new file mode 100644 index 000000000000..3db00e4a233b --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/ed25519.go @@ -0,0 +1,80 @@ +package jwt + +import ( + "errors" + + "crypto" + "crypto/ed25519" + "crypto/rand" +) + +var ( + ErrEd25519Verification = errors.New("ed25519: verification error") +) + +// SigningMethodEd25519 implements the EdDSA family. +// Expects ed25519.PrivateKey for signing and ed25519.PublicKey for verification +type SigningMethodEd25519 struct{} + +// Specific instance for EdDSA +var ( + SigningMethodEdDSA *SigningMethodEd25519 +) + +func init() { + SigningMethodEdDSA = &SigningMethodEd25519{} + RegisterSigningMethod(SigningMethodEdDSA.Alg(), func() SigningMethod { + return SigningMethodEdDSA + }) +} + +func (m *SigningMethodEd25519) Alg() string { + return "EdDSA" +} + +// Verify implements token verification for the SigningMethod. 
+// For this verify method, key must be an ed25519.PublicKey +func (m *SigningMethodEd25519) Verify(signingString string, sig []byte, key interface{}) error { + var ed25519Key ed25519.PublicKey + var ok bool + + if ed25519Key, ok = key.(ed25519.PublicKey); !ok { + return ErrInvalidKeyType + } + + if len(ed25519Key) != ed25519.PublicKeySize { + return ErrInvalidKey + } + + // Verify the signature + if !ed25519.Verify(ed25519Key, []byte(signingString), sig) { + return ErrEd25519Verification + } + + return nil +} + +// Sign implements token signing for the SigningMethod. +// For this signing method, key must be an ed25519.PrivateKey +func (m *SigningMethodEd25519) Sign(signingString string, key interface{}) ([]byte, error) { + var ed25519Key crypto.Signer + var ok bool + + if ed25519Key, ok = key.(crypto.Signer); !ok { + return nil, ErrInvalidKeyType + } + + if _, ok := ed25519Key.Public().(ed25519.PublicKey); !ok { + return nil, ErrInvalidKey + } + + // Sign the string and return the result. ed25519 performs a two-pass hash + // as part of its algorithm. Therefore, we need to pass a non-prehashed + // message into the Sign function, as indicated by crypto.Hash(0) + sig, err := ed25519Key.Sign(rand.Reader, []byte(signingString), crypto.Hash(0)) + if err != nil { + return nil, err + } + + return sig, nil +} diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/ed25519_utils.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/ed25519_utils.go new file mode 100644 index 000000000000..cdb5e68e8767 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/ed25519_utils.go @@ -0,0 +1,64 @@ +package jwt + +import ( + "crypto" + "crypto/ed25519" + "crypto/x509" + "encoding/pem" + "errors" +) + +var ( + ErrNotEdPrivateKey = errors.New("key is not a valid Ed25519 private key") + ErrNotEdPublicKey = errors.New("key is not a valid Ed25519 public key") +) + +// ParseEdPrivateKeyFromPEM parses a PEM-encoded Edwards curve private key +func ParseEdPrivateKeyFromPEM(key []byte) (crypto.PrivateKey, error) { + var err error + + // Parse PEM block + var block *pem.Block + if block, _ = pem.Decode(key); block == nil { + return nil, ErrKeyMustBePEMEncoded + } + + // Parse the key + var parsedKey interface{} + if parsedKey, err = x509.ParsePKCS8PrivateKey(block.Bytes); err != nil { + return nil, err + } + + var pkey ed25519.PrivateKey + var ok bool + if pkey, ok = parsedKey.(ed25519.PrivateKey); !ok { + return nil, ErrNotEdPrivateKey + } + + return pkey, nil +} + +// ParseEdPublicKeyFromPEM parses a PEM-encoded Edwards curve public key +func ParseEdPublicKeyFromPEM(key []byte) (crypto.PublicKey, error) { + var err error + + // Parse PEM block + var block *pem.Block + if block, _ = pem.Decode(key); block == nil { + return nil, ErrKeyMustBePEMEncoded + } + + // Parse the key + var parsedKey interface{} + if parsedKey, err = x509.ParsePKIXPublicKey(block.Bytes); err != nil { + return nil, err + } + + var pkey ed25519.PublicKey + var ok bool + if pkey, ok = parsedKey.(ed25519.PublicKey); !ok { + return nil, ErrNotEdPublicKey + } + + return pkey, nil +} diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/errors.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/errors.go new file mode 100644 index 000000000000..23bb616ddde3 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/errors.go @@ -0,0 +1,49 @@ +package jwt + +import ( + "errors" + "strings" +) + +var ( + ErrInvalidKey = errors.New("key is invalid") + ErrInvalidKeyType = 
errors.New("key is of invalid type")
+	ErrHashUnavailable           = errors.New("the requested hash function is unavailable")
+	ErrTokenMalformed            = errors.New("token is malformed")
+	ErrTokenUnverifiable         = errors.New("token is unverifiable")
+	ErrTokenSignatureInvalid     = errors.New("token signature is invalid")
+	ErrTokenRequiredClaimMissing = errors.New("token is missing required claim")
+	ErrTokenInvalidAudience      = errors.New("token has invalid audience")
+	ErrTokenExpired              = errors.New("token is expired")
+	ErrTokenUsedBeforeIssued     = errors.New("token used before issued")
+	ErrTokenInvalidIssuer        = errors.New("token has invalid issuer")
+	ErrTokenInvalidSubject       = errors.New("token has invalid subject")
+	ErrTokenNotValidYet          = errors.New("token is not valid yet")
+	ErrTokenInvalidId            = errors.New("token has invalid id")
+	ErrTokenInvalidClaims        = errors.New("token has invalid claims")
+	ErrInvalidType               = errors.New("invalid type for claim")
+)
+
+// joinedError is an error type that works similarly to what [errors.Join]
+// produces, with the exception that it has a nicer error string: its error
+// messages are concatenated using a comma rather than a newline.
+type joinedError struct {
+	errs []error
+}
+
+func (je joinedError) Error() string {
+	msg := []string{}
+	for _, err := range je.errs {
+		msg = append(msg, err.Error())
+	}
+
+	return strings.Join(msg, ", ")
+}
+
+// joinErrors joins together multiple errors. Useful for scenarios where
+// several errors occur next to each other, e.g., in claims validation.
+func joinErrors(errs ...error) error {
+	return &joinedError{
+		errs: errs,
+	}
+}
diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/errors_go1_20.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/errors_go1_20.go
new file mode 100644
index 000000000000..a893d355e1ab
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/errors_go1_20.go
@@ -0,0 +1,47 @@
+//go:build go1.20
+// +build go1.20
+
+package jwt
+
+import (
+	"fmt"
+)
+
+// Unwrap implements the multiple error unwrapping for this error type, which is
+// possible in Go 1.20.
+func (je joinedError) Unwrap() []error {
+	return je.errs
+}
+
+// newError creates a new error with a detailed error message. The message
+// will be prefixed with the contents of the supplied error type. Additionally,
+// more errors that provide further context can be supplied; they will be
+// appended to the message. This makes use of Go 1.20's ability to include
+// more than one %w formatting directive in [fmt.Errorf].
+//
+// For example,
+//
+//	newError("no keyfunc was provided", ErrTokenUnverifiable)
+//
+// will produce the error string
+//
+//	"token is unverifiable: no keyfunc was provided"
+func newError(message string, err error, more ...error) error {
+	var format string
+	var args []any
+	if message != "" {
+		format = "%w: %s"
+		args = []any{err, message}
+	} else {
+		format = "%w"
+		args = []any{err}
+	}
+
+	for _, e := range more {
+		format += ": %w"
+		args = append(args, e)
+	}
+
+	err = fmt.Errorf(format, args...)
+
+	return err
+}
diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/errors_go_other.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/errors_go_other.go
new file mode 100644
index 000000000000..3afb04e648fb
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/errors_go_other.go
@@ -0,0 +1,78 @@
+//go:build !go1.20
+// +build !go1.20
+
+package jwt
+
+import (
+	"errors"
+	"fmt"
+)
+
+// Is implements checking for multiple errors using [errors.Is], since multiple
+// error unwrapping is not possible in versions less than Go 1.20.
+func (je joinedError) Is(err error) bool {
+	for _, e := range je.errs {
+		if errors.Is(e, err) {
+			return true
+		}
+	}
+
+	return false
+}
+
+// wrappedErrors is a workaround for wrapping multiple errors in environments
+// where Go 1.20 is not available. It basically uses the already implemented
+// functionality of joinedError to handle multiple errors but supplies a
+// custom error message that is identical to the one we produce in Go 1.20 using
+// multiple %w directives.
+type wrappedErrors struct {
+	msg string
+	joinedError
+}
+
+// Error returns the stored error string
+func (we wrappedErrors) Error() string {
+	return we.msg
+}
+
+// newError creates a new error with a detailed error message. The
+// message will be prefixed with the contents of the supplied error type.
+// Additionally, more errors that provide further context can be supplied; they
+// will be appended to the message. Since we cannot make use of Go 1.20's
+// ability to include more than one %w formatting directive in [fmt.Errorf], we
+// have to emulate that.
+//
+// For example,
+//
+//	newError("no keyfunc was provided", ErrTokenUnverifiable)
+//
+// will produce the error string
+//
+//	"token is unverifiable: no keyfunc was provided"
+func newError(message string, err error, more ...error) error {
+	// We cannot wrap multiple errors here with %w, so we have to be a little
+	// bit creative. Basically, we are using %s instead of %w to produce the
+	// same error message and then throw the result into a custom error struct.
+	var format string
+	var args []any
+	if message != "" {
+		format = "%s: %s"
+		args = []any{err, message}
+	} else {
+		format = "%s"
+		args = []any{err}
+	}
+	errs := []error{err}
+
+	for _, e := range more {
+		format += ": %s"
+		args = append(args, e)
+		errs = append(errs, e)
+	}
+
+	err = &wrappedErrors{
+		msg:         fmt.Sprintf(format, args...),
+		joinedError: joinedError{errs: errs},
+	}
+	return err
+}
diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/hmac.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/hmac.go
new file mode 100644
index 000000000000..91b688ba9f11
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/hmac.go
@@ -0,0 +1,104 @@
+package jwt
+
+import (
+	"crypto"
+	"crypto/hmac"
+	"errors"
+)
+
+// SigningMethodHMAC implements the HMAC-SHA family of signing methods.
+// Expects key type of []byte for both signing and validation +type SigningMethodHMAC struct { + Name string + Hash crypto.Hash +} + +// Specific instances for HS256 and company +var ( + SigningMethodHS256 *SigningMethodHMAC + SigningMethodHS384 *SigningMethodHMAC + SigningMethodHS512 *SigningMethodHMAC + ErrSignatureInvalid = errors.New("signature is invalid") +) + +func init() { + // HS256 + SigningMethodHS256 = &SigningMethodHMAC{"HS256", crypto.SHA256} + RegisterSigningMethod(SigningMethodHS256.Alg(), func() SigningMethod { + return SigningMethodHS256 + }) + + // HS384 + SigningMethodHS384 = &SigningMethodHMAC{"HS384", crypto.SHA384} + RegisterSigningMethod(SigningMethodHS384.Alg(), func() SigningMethod { + return SigningMethodHS384 + }) + + // HS512 + SigningMethodHS512 = &SigningMethodHMAC{"HS512", crypto.SHA512} + RegisterSigningMethod(SigningMethodHS512.Alg(), func() SigningMethod { + return SigningMethodHS512 + }) +} + +func (m *SigningMethodHMAC) Alg() string { + return m.Name +} + +// Verify implements token verification for the SigningMethod. Returns nil if +// the signature is valid. Key must be []byte. +// +// Note it is not advised to provide a []byte which was converted from a 'human +// readable' string using a subset of ASCII characters. To maximize entropy, you +// should ideally be providing a []byte key which was produced from a +// cryptographically random source, e.g. crypto/rand. Additional information +// about this, and why we intentionally are not supporting string as a key can +// be found on our usage guide +// https://golang-jwt.github.io/jwt/usage/signing_methods/#signing-methods-and-key-types. +func (m *SigningMethodHMAC) Verify(signingString string, sig []byte, key interface{}) error { + // Verify the key is the right type + keyBytes, ok := key.([]byte) + if !ok { + return ErrInvalidKeyType + } + + // Can we use the specified hashing method? + if !m.Hash.Available() { + return ErrHashUnavailable + } + + // This signing method is symmetric, so we validate the signature + // by reproducing the signature from the signing string and key, then + // comparing that against the provided signature. + hasher := hmac.New(m.Hash.New, keyBytes) + hasher.Write([]byte(signingString)) + if !hmac.Equal(sig, hasher.Sum(nil)) { + return ErrSignatureInvalid + } + + // No validation errors. Signature is good. + return nil +} + +// Sign implements token signing for the SigningMethod. Key must be []byte. +// +// Note it is not advised to provide a []byte which was converted from a 'human +// readable' string using a subset of ASCII characters. To maximize entropy, you +// should ideally be providing a []byte key which was produced from a +// cryptographically random source, e.g. crypto/rand. Additional information +// about this, and why we intentionally are not supporting string as a key can +// be found on our usage guide https://golang-jwt.github.io/jwt/usage/signing_methods/. 
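+//
+// A minimal usage sketch (an editorial illustration, not part of the upstream
+// file; hmacKey stands for a hypothetical []byte produced by crypto/rand):
+//
+//	token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.MapClaims{"sub": "1234"})
+//	signed, err := token.SignedString(hmacKey)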
+func (m *SigningMethodHMAC) Sign(signingString string, key interface{}) ([]byte, error) {
+	if keyBytes, ok := key.([]byte); ok {
+		if !m.Hash.Available() {
+			return nil, ErrHashUnavailable
+		}
+
+		hasher := hmac.New(m.Hash.New, keyBytes)
+		hasher.Write([]byte(signingString))
+
+		return hasher.Sum(nil), nil
+	}
+
+	return nil, ErrInvalidKeyType
+}
diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/map_claims.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/map_claims.go
new file mode 100644
index 000000000000..b2b51a1f8065
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/map_claims.go
@@ -0,0 +1,109 @@
+package jwt
+
+import (
+	"encoding/json"
+	"fmt"
+)
+
+// MapClaims is a claims type that uses a map[string]interface{} for JSON
+// decoding. This is the default claims type if you don't supply one.
+type MapClaims map[string]interface{}
+
+// GetExpirationTime implements the Claims interface.
+func (m MapClaims) GetExpirationTime() (*NumericDate, error) {
+	return m.parseNumericDate("exp")
+}
+
+// GetNotBefore implements the Claims interface.
+func (m MapClaims) GetNotBefore() (*NumericDate, error) {
+	return m.parseNumericDate("nbf")
+}
+
+// GetIssuedAt implements the Claims interface.
+func (m MapClaims) GetIssuedAt() (*NumericDate, error) {
+	return m.parseNumericDate("iat")
+}
+
+// GetAudience implements the Claims interface.
+func (m MapClaims) GetAudience() (ClaimStrings, error) {
+	return m.parseClaimsString("aud")
+}
+
+// GetIssuer implements the Claims interface.
+func (m MapClaims) GetIssuer() (string, error) {
+	return m.parseString("iss")
+}
+
+// GetSubject implements the Claims interface.
+func (m MapClaims) GetSubject() (string, error) {
+	return m.parseString("sub")
+}
+
+// parseNumericDate tries to parse a key in the map claims type as a numeric
+// date. This will succeed if the underlying type is either a [float64] or a
+// [json.Number]. A missing key yields nil; any other type yields an error.
+func (m MapClaims) parseNumericDate(key string) (*NumericDate, error) {
+	v, ok := m[key]
+	if !ok {
+		return nil, nil
+	}
+
+	switch exp := v.(type) {
+	case float64:
+		if exp == 0 {
+			return nil, nil
+		}
+
+		return newNumericDateFromSeconds(exp), nil
+	case json.Number:
+		v, _ := exp.Float64()
+
+		return newNumericDateFromSeconds(v), nil
+	}
+
+	return nil, newError(fmt.Sprintf("%s is invalid", key), ErrInvalidType)
+}
+
+// parseClaimsString tries to parse a key in the map claims type as a
+// [ClaimStrings] type, which can either be a string or an array of strings.
+func (m MapClaims) parseClaimsString(key string) (ClaimStrings, error) {
+	var cs []string
+	switch v := m[key].(type) {
+	case string:
+		cs = append(cs, v)
+	case []string:
+		cs = v
+	case []interface{}:
+		for _, a := range v {
+			vs, ok := a.(string)
+			if !ok {
+				return nil, newError(fmt.Sprintf("%s is invalid", key), ErrInvalidType)
+			}
+			cs = append(cs, vs)
+		}
+	}
+
+	return cs, nil
+}
+
+// parseString tries to parse a key in the map claims type as a [string] type.
+// If the key does not exist, an empty string is returned. If the key has the
+// wrong type, an error is returned.
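+//
+// For illustration (an editorial sketch, not from the upstream file): with
+// MapClaims{"iss": "example"}, parseString("iss") returns ("example", nil),
+// while MapClaims{"iss": 42} yields an error wrapping ErrInvalidType.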
+func (m MapClaims) parseString(key string) (string, error) { + var ( + ok bool + raw interface{} + iss string + ) + raw, ok = m[key] + if !ok { + return "", nil + } + + iss, ok = raw.(string) + if !ok { + return "", newError(fmt.Sprintf("%s is invalid", key), ErrInvalidType) + } + + return iss, nil +} diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/none.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/none.go new file mode 100644 index 000000000000..c93daa584958 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/none.go @@ -0,0 +1,50 @@ +package jwt + +// SigningMethodNone implements the none signing method. This is required by the spec +// but you probably should never use it. +var SigningMethodNone *signingMethodNone + +const UnsafeAllowNoneSignatureType unsafeNoneMagicConstant = "none signing method allowed" + +var NoneSignatureTypeDisallowedError error + +type signingMethodNone struct{} +type unsafeNoneMagicConstant string + +func init() { + SigningMethodNone = &signingMethodNone{} + NoneSignatureTypeDisallowedError = newError("'none' signature type is not allowed", ErrTokenUnverifiable) + + RegisterSigningMethod(SigningMethodNone.Alg(), func() SigningMethod { + return SigningMethodNone + }) +} + +func (m *signingMethodNone) Alg() string { + return "none" +} + +// Only allow 'none' alg type if UnsafeAllowNoneSignatureType is specified as the key +func (m *signingMethodNone) Verify(signingString string, sig []byte, key interface{}) (err error) { + // Key must be UnsafeAllowNoneSignatureType to prevent accidentally + // accepting 'none' signing method + if _, ok := key.(unsafeNoneMagicConstant); !ok { + return NoneSignatureTypeDisallowedError + } + // If signing method is none, signature must be an empty string + if string(sig) != "" { + return newError("'none' signing method with non-empty signature", ErrTokenUnverifiable) + } + + // Accept 'none' signing method. + return nil +} + +// Only allow 'none' signing if UnsafeAllowNoneSignatureType is specified as the key +func (m *signingMethodNone) Sign(signingString string, key interface{}) ([]byte, error) { + if _, ok := key.(unsafeNoneMagicConstant); ok { + return []byte{}, nil + } + + return nil, NoneSignatureTypeDisallowedError +} diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/parser.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/parser.go new file mode 100644 index 000000000000..f4386fbaace9 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/parser.go @@ -0,0 +1,215 @@ +package jwt + +import ( + "bytes" + "encoding/base64" + "encoding/json" + "fmt" + "strings" +) + +type Parser struct { + // If populated, only these methods will be considered valid. + validMethods []string + + // Use JSON Number format in JSON decoder. + useJSONNumber bool + + // Skip claims validation during token parsing. + skipClaimsValidation bool + + validator *validator + + decodeStrict bool + + decodePaddingAllowed bool +} + +// NewParser creates a new Parser with the specified options +func NewParser(options ...ParserOption) *Parser { + p := &Parser{ + validator: &validator{}, + } + + // Loop through our parsing options and apply them + for _, option := range options { + option(p) + } + + return p +} + +// Parse parses, validates, verifies the signature and returns the parsed token. +// keyFunc will receive the parsed token and should return the key for validating. 
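+//
+// A minimal usage sketch (an editorial illustration; tokenString and hmacKey
+// are hypothetical values):
+//
+//	p := jwt.NewParser(jwt.WithValidMethods([]string{"HS256"}))
+//	tok, err := p.Parse(tokenString, func(t *jwt.Token) (interface{}, error) {
+//		return hmacKey, nil
+//	})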
+func (p *Parser) Parse(tokenString string, keyFunc Keyfunc) (*Token, error) { + return p.ParseWithClaims(tokenString, MapClaims{}, keyFunc) +} + +// ParseWithClaims parses, validates, and verifies like Parse, but supplies a default object implementing the Claims +// interface. This provides default values which can be overridden and allows a caller to use their own type, rather +// than the default MapClaims implementation of Claims. +// +// Note: If you provide a custom claim implementation that embeds one of the standard claims (such as RegisteredClaims), +// make sure that a) you either embed a non-pointer version of the claims or b) if you are using a pointer, allocate the +// proper memory for it before passing in the overall claims, otherwise you might run into a panic. +func (p *Parser) ParseWithClaims(tokenString string, claims Claims, keyFunc Keyfunc) (*Token, error) { + token, parts, err := p.ParseUnverified(tokenString, claims) + if err != nil { + return token, err + } + + // Verify signing method is in the required set + if p.validMethods != nil { + var signingMethodValid = false + var alg = token.Method.Alg() + for _, m := range p.validMethods { + if m == alg { + signingMethodValid = true + break + } + } + if !signingMethodValid { + // signing method is not in the listed set + return token, newError(fmt.Sprintf("signing method %v is invalid", alg), ErrTokenSignatureInvalid) + } + } + + // Lookup key + var key interface{} + if keyFunc == nil { + // keyFunc was not provided. short circuiting validation + return token, newError("no keyfunc was provided", ErrTokenUnverifiable) + } + if key, err = keyFunc(token); err != nil { + return token, newError("error while executing keyfunc", ErrTokenUnverifiable, err) + } + + // Decode signature + token.Signature, err = p.DecodeSegment(parts[2]) + if err != nil { + return token, newError("could not base64 decode signature", ErrTokenMalformed, err) + } + + // Perform signature validation + if err = token.Method.Verify(strings.Join(parts[0:2], "."), token.Signature, key); err != nil { + return token, newError("", ErrTokenSignatureInvalid, err) + } + + // Validate Claims + if !p.skipClaimsValidation { + // Make sure we have at least a default validator + if p.validator == nil { + p.validator = newValidator() + } + + if err := p.validator.Validate(claims); err != nil { + return token, newError("", ErrTokenInvalidClaims, err) + } + } + + // No errors so far, token is valid. + token.Valid = true + + return token, nil +} + +// ParseUnverified parses the token but doesn't validate the signature. +// +// WARNING: Don't use this method unless you know what you're doing. +// +// It's only ever useful in cases where you know the signature is valid (because it has +// been checked previously in the stack) and you want to extract values from it. 
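+//
+// A usage sketch (an editorial illustration; tokenString is hypothetical):
+//
+//	tok, parts, err := jwt.NewParser().ParseUnverified(tokenString, jwt.MapClaims{})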
+func (p *Parser) ParseUnverified(tokenString string, claims Claims) (token *Token, parts []string, err error) { + parts = strings.Split(tokenString, ".") + if len(parts) != 3 { + return nil, parts, newError("token contains an invalid number of segments", ErrTokenMalformed) + } + + token = &Token{Raw: tokenString} + + // parse Header + var headerBytes []byte + if headerBytes, err = p.DecodeSegment(parts[0]); err != nil { + if strings.HasPrefix(strings.ToLower(tokenString), "bearer ") { + return token, parts, newError("tokenstring should not contain 'bearer '", ErrTokenMalformed) + } + return token, parts, newError("could not base64 decode header", ErrTokenMalformed, err) + } + if err = json.Unmarshal(headerBytes, &token.Header); err != nil { + return token, parts, newError("could not JSON decode header", ErrTokenMalformed, err) + } + + // parse Claims + var claimBytes []byte + token.Claims = claims + + if claimBytes, err = p.DecodeSegment(parts[1]); err != nil { + return token, parts, newError("could not base64 decode claim", ErrTokenMalformed, err) + } + dec := json.NewDecoder(bytes.NewBuffer(claimBytes)) + if p.useJSONNumber { + dec.UseNumber() + } + // JSON Decode. Special case for map type to avoid weird pointer behavior + if c, ok := token.Claims.(MapClaims); ok { + err = dec.Decode(&c) + } else { + err = dec.Decode(&claims) + } + // Handle decode error + if err != nil { + return token, parts, newError("could not JSON decode claim", ErrTokenMalformed, err) + } + + // Lookup signature method + if method, ok := token.Header["alg"].(string); ok { + if token.Method = GetSigningMethod(method); token.Method == nil { + return token, parts, newError("signing method (alg) is unavailable", ErrTokenUnverifiable) + } + } else { + return token, parts, newError("signing method (alg) is unspecified", ErrTokenUnverifiable) + } + + return token, parts, nil +} + +// DecodeSegment decodes a JWT specific base64url encoding. This function will +// take into account whether the [Parser] is configured with additional options, +// such as [WithStrictDecoding] or [WithPaddingAllowed]. +func (p *Parser) DecodeSegment(seg string) ([]byte, error) { + encoding := base64.RawURLEncoding + + if p.decodePaddingAllowed { + if l := len(seg) % 4; l > 0 { + seg += strings.Repeat("=", 4-l) + } + encoding = base64.URLEncoding + } + + if p.decodeStrict { + encoding = encoding.Strict() + } + return encoding.DecodeString(seg) +} + +// Parse parses, validates, verifies the signature and returns the parsed token. +// keyFunc will receive the parsed token and should return the cryptographic key +// for verifying the signature. The caller is strongly encouraged to set the +// WithValidMethods option to validate the 'alg' claim in the token matches the +// expected algorithm. For more details about the importance of validating the +// 'alg' claim, see +// https://auth0.com/blog/critical-vulnerabilities-in-json-web-token-libraries/ +func Parse(tokenString string, keyFunc Keyfunc, options ...ParserOption) (*Token, error) { + return NewParser(options...).Parse(tokenString, keyFunc) +} + +// ParseWithClaims is a shortcut for NewParser().ParseWithClaims(). +// +// Note: If you provide a custom claim implementation that embeds one of the +// standard claims (such as RegisteredClaims), make sure that a) you either +// embed a non-pointer version of the claims or b) if you are using a pointer, +// allocate the proper memory for it before passing in the overall claims, +// otherwise you might run into a panic. 
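+//
+// A usage sketch (an editorial illustration; MyClaims, tokenString, and
+// keyFunc are hypothetical):
+//
+//	type MyClaims struct {
+//		Foo string `json:"foo"`
+//		jwt.RegisteredClaims
+//	}
+//
+//	tok, err := jwt.ParseWithClaims(tokenString, &MyClaims{}, keyFunc)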
+func ParseWithClaims(tokenString string, claims Claims, keyFunc Keyfunc, options ...ParserOption) (*Token, error) {
+	return NewParser(options...).ParseWithClaims(tokenString, claims, keyFunc)
+}
diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/parser_option.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/parser_option.go
new file mode 100644
index 000000000000..1b5af970f66a
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/parser_option.go
@@ -0,0 +1,120 @@
+package jwt
+
+import "time"
+
+// ParserOption is used to implement functional-style options that modify the
+// behavior of the parser. To add new options, just create a function (ideally
+// beginning with With or Without) that returns an anonymous function that takes
+// a *Parser type as input and manipulates its configuration accordingly.
+type ParserOption func(*Parser)
+
+// WithValidMethods is an option to supply algorithm methods that the parser
+// will check. Only those methods will be considered valid. It is heavily
+// encouraged to use this option in order to prevent attacks such as
+// https://auth0.com/blog/critical-vulnerabilities-in-json-web-token-libraries/.
+func WithValidMethods(methods []string) ParserOption {
+	return func(p *Parser) {
+		p.validMethods = methods
+	}
+}
+
+// WithJSONNumber is an option to configure the underlying JSON parser with
+// UseNumber.
+func WithJSONNumber() ParserOption {
+	return func(p *Parser) {
+		p.useJSONNumber = true
+	}
+}
+
+// WithoutClaimsValidation is an option to disable claims validation. This
+// option should only be used if you know exactly what you are doing.
+func WithoutClaimsValidation() ParserOption {
+	return func(p *Parser) {
+		p.skipClaimsValidation = true
+	}
+}
+
+// WithLeeway returns the ParserOption for specifying the leeway window.
+func WithLeeway(leeway time.Duration) ParserOption {
+	return func(p *Parser) {
+		p.validator.leeway = leeway
+	}
+}
+
+// WithTimeFunc returns the ParserOption for specifying the time func. The
+// primary use-case for this is testing. If you are looking for a way to account
+// for clock-skew, WithLeeway should be used instead.
+func WithTimeFunc(f func() time.Time) ParserOption {
+	return func(p *Parser) {
+		p.validator.timeFunc = f
+	}
+}
+
+// WithIssuedAt returns the ParserOption to enable verification
+// of issued-at.
+func WithIssuedAt() ParserOption {
+	return func(p *Parser) {
+		p.validator.verifyIat = true
+	}
+}
+
+// WithAudience configures the validator to require the specified audience in
+// the `aud` claim. Validation will fail if the audience is not listed in the
+// token or the `aud` claim is missing.
+//
+// NOTE: While the `aud` claim is OPTIONAL in a JWT, the handling of it is
+// application-specific. Since this validation API is helping developers in
+// writing secure applications, we decided to REQUIRE the existence of the claim,
+// if an audience is expected.
+func WithAudience(aud string) ParserOption {
+	return func(p *Parser) {
+		p.validator.expectedAud = aud
+	}
+}
+
+// WithIssuer configures the validator to require the specified issuer in the
+// `iss` claim. Validation will fail if a different issuer is specified in the
+// token or the `iss` claim is missing.
+//
+// NOTE: While the `iss` claim is OPTIONAL in a JWT, the handling of it is
+// application-specific. Since this validation API is helping developers in
+// writing secure applications, we decided to REQUIRE the existence of the claim,
+// if an issuer is expected.
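+//
+// A usage sketch (an editorial illustration; the issuer value is hypothetical):
+//
+//	p := jwt.NewParser(jwt.WithIssuer("https://issuer.example"))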
+func WithIssuer(iss string) ParserOption {
+	return func(p *Parser) {
+		p.validator.expectedIss = iss
+	}
+}
+
+// WithSubject configures the validator to require the specified subject in the
+// `sub` claim. Validation will fail if a different subject is specified in the
+// token or the `sub` claim is missing.
+//
+// NOTE: While the `sub` claim is OPTIONAL in a JWT, the handling of it is
+// application-specific. Since this validation API is helping developers in
+// writing secure applications, we decided to REQUIRE the existence of the claim,
+// if a subject is expected.
+func WithSubject(sub string) ParserOption {
+	return func(p *Parser) {
+		p.validator.expectedSub = sub
+	}
+}
+
+// WithPaddingAllowed will enable the codec used for decoding JWTs to allow
+// padding. Note that the JWS RFC7515 states that the tokens will utilize a
+// Base64url encoding with no padding. Unfortunately, some implementations of
+// JWT are producing non-standard tokens, and thus require support for decoding.
+func WithPaddingAllowed() ParserOption {
+	return func(p *Parser) {
+		p.decodePaddingAllowed = true
+	}
+}
+
+// WithStrictDecoding will switch the codec used for decoding JWTs into strict
+// mode. In this mode, the decoder requires that trailing padding bits are zero,
+// as described in RFC 4648 section 3.5.
+func WithStrictDecoding() ParserOption {
+	return func(p *Parser) {
+		p.decodeStrict = true
+	}
+}
diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/registered_claims.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/registered_claims.go
new file mode 100644
index 000000000000..77951a531d91
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/registered_claims.go
@@ -0,0 +1,63 @@
+package jwt
+
+// RegisteredClaims are a structured version of the JWT Claims Set,
+// restricted to Registered Claim Names, as referenced at
+// https://datatracker.ietf.org/doc/html/rfc7519#section-4.1
+//
+// This type can be used on its own, but then additional private and
+// public claims embedded in the JWT will not be parsed. The typical use-case
+// therefore is to embed this in a user-defined claim type.
+//
+// See examples for how to use this with your own claim types.
+type RegisteredClaims struct {
+	// the `iss` (Issuer) claim. See https://datatracker.ietf.org/doc/html/rfc7519#section-4.1.1
+	Issuer string `json:"iss,omitempty"`
+
+	// the `sub` (Subject) claim. See https://datatracker.ietf.org/doc/html/rfc7519#section-4.1.2
+	Subject string `json:"sub,omitempty"`
+
+	// the `aud` (Audience) claim. See https://datatracker.ietf.org/doc/html/rfc7519#section-4.1.3
+	Audience ClaimStrings `json:"aud,omitempty"`
+
+	// the `exp` (Expiration Time) claim. See https://datatracker.ietf.org/doc/html/rfc7519#section-4.1.4
+	ExpiresAt *NumericDate `json:"exp,omitempty"`
+
+	// the `nbf` (Not Before) claim. See https://datatracker.ietf.org/doc/html/rfc7519#section-4.1.5
+	NotBefore *NumericDate `json:"nbf,omitempty"`
+
+	// the `iat` (Issued At) claim. See https://datatracker.ietf.org/doc/html/rfc7519#section-4.1.6
+	IssuedAt *NumericDate `json:"iat,omitempty"`
+
+	// the `jti` (JWT ID) claim. See https://datatracker.ietf.org/doc/html/rfc7519#section-4.1.7
+	ID string `json:"jti,omitempty"`
+}
+
+// GetExpirationTime implements the Claims interface.
+func (c RegisteredClaims) GetExpirationTime() (*NumericDate, error) {
+	return c.ExpiresAt, nil
+}
+
+// GetNotBefore implements the Claims interface.
+func (c RegisteredClaims) GetNotBefore() (*NumericDate, error) {
+	return c.NotBefore, nil
+}
+
+// GetIssuedAt implements the Claims interface.
+func (c RegisteredClaims) GetIssuedAt() (*NumericDate, error) {
+	return c.IssuedAt, nil
+}
+
+// GetAudience implements the Claims interface.
+func (c RegisteredClaims) GetAudience() (ClaimStrings, error) {
+	return c.Audience, nil
+}
+
+// GetIssuer implements the Claims interface.
+func (c RegisteredClaims) GetIssuer() (string, error) {
+	return c.Issuer, nil
+}
+
+// GetSubject implements the Claims interface.
+func (c RegisteredClaims) GetSubject() (string, error) {
+	return c.Subject, nil
+}
diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/rsa.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/rsa.go
new file mode 100644
index 000000000000..daff094313d8
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/rsa.go
@@ -0,0 +1,93 @@
+package jwt
+
+import (
+	"crypto"
+	"crypto/rand"
+	"crypto/rsa"
+)
+
+// SigningMethodRSA implements the RSA family of signing methods.
+// Expects *rsa.PrivateKey for signing and *rsa.PublicKey for validation
+type SigningMethodRSA struct {
+	Name string
+	Hash crypto.Hash
+}
+
+// Specific instances for RS256 and company
+var (
+	SigningMethodRS256 *SigningMethodRSA
+	SigningMethodRS384 *SigningMethodRSA
+	SigningMethodRS512 *SigningMethodRSA
+)
+
+func init() {
+	// RS256
+	SigningMethodRS256 = &SigningMethodRSA{"RS256", crypto.SHA256}
+	RegisterSigningMethod(SigningMethodRS256.Alg(), func() SigningMethod {
+		return SigningMethodRS256
+	})
+
+	// RS384
+	SigningMethodRS384 = &SigningMethodRSA{"RS384", crypto.SHA384}
+	RegisterSigningMethod(SigningMethodRS384.Alg(), func() SigningMethod {
+		return SigningMethodRS384
+	})
+
+	// RS512
+	SigningMethodRS512 = &SigningMethodRSA{"RS512", crypto.SHA512}
+	RegisterSigningMethod(SigningMethodRS512.Alg(), func() SigningMethod {
+		return SigningMethodRS512
+	})
+}
+
+func (m *SigningMethodRSA) Alg() string {
+	return m.Name
+}
+
+// Verify implements token verification for the SigningMethod.
+// For this signing method, key must be an *rsa.PublicKey structure.
+func (m *SigningMethodRSA) Verify(signingString string, sig []byte, key interface{}) error {
+	var rsaKey *rsa.PublicKey
+	var ok bool
+
+	if rsaKey, ok = key.(*rsa.PublicKey); !ok {
+		return ErrInvalidKeyType
+	}
+
+	// Create hasher
+	if !m.Hash.Available() {
+		return ErrHashUnavailable
+	}
+	hasher := m.Hash.New()
+	hasher.Write([]byte(signingString))
+
+	// Verify the signature
+	return rsa.VerifyPKCS1v15(rsaKey, m.Hash, hasher.Sum(nil), sig)
+}
+
+// Sign implements token signing for the SigningMethod.
+// For this signing method, key must be an *rsa.PrivateKey structure.
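+//
+// A usage sketch (an editorial illustration; pemBytes stands for a
+// hypothetical PEM-encoded RSA private key):
+//
+//	key, _ := jwt.ParseRSAPrivateKeyFromPEM(pemBytes)
+//	signed, err := jwt.NewWithClaims(jwt.SigningMethodRS256, jwt.MapClaims{}).SignedString(key)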
+func (m *SigningMethodRSA) Sign(signingString string, key interface{}) ([]byte, error) {
+	var rsaKey *rsa.PrivateKey
+	var ok bool
+
+	// Validate type of key
+	if rsaKey, ok = key.(*rsa.PrivateKey); !ok {
+		return nil, ErrInvalidKey
+	}
+
+	// Create the hasher
+	if !m.Hash.Available() {
+		return nil, ErrHashUnavailable
+	}
+
+	hasher := m.Hash.New()
+	hasher.Write([]byte(signingString))
+
+	// Sign the string and return the encoded bytes
+	if sigBytes, err := rsa.SignPKCS1v15(rand.Reader, rsaKey, m.Hash, hasher.Sum(nil)); err == nil {
+		return sigBytes, nil
+	} else {
+		return nil, err
+	}
+}
diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/rsa_pss.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/rsa_pss.go
new file mode 100644
index 000000000000..9599f0a46c00
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/rsa_pss.go
@@ -0,0 +1,135 @@
+//go:build go1.4
+// +build go1.4
+
+package jwt
+
+import (
+	"crypto"
+	"crypto/rand"
+	"crypto/rsa"
+)
+
+// SigningMethodRSAPSS implements the RSA-PSS family of signing methods.
+type SigningMethodRSAPSS struct {
+	*SigningMethodRSA
+	Options *rsa.PSSOptions
+	// VerifyOptions is optional. If set, it overrides Options for rsa.VerifyPSS.
+	// Used to accept tokens signed with rsa.PSSSaltLengthAuto, which doesn't follow
+	// https://tools.ietf.org/html/rfc7518#section-3.5 but was used previously.
+	// See https://github.com/dgrijalva/jwt-go/issues/285#issuecomment-437451244 for details.
+	VerifyOptions *rsa.PSSOptions
+}
+
+// Specific instances for RS/PS and company.
+var (
+	SigningMethodPS256 *SigningMethodRSAPSS
+	SigningMethodPS384 *SigningMethodRSAPSS
+	SigningMethodPS512 *SigningMethodRSAPSS
+)
+
+func init() {
+	// PS256
+	SigningMethodPS256 = &SigningMethodRSAPSS{
+		SigningMethodRSA: &SigningMethodRSA{
+			Name: "PS256",
+			Hash: crypto.SHA256,
+		},
+		Options: &rsa.PSSOptions{
+			SaltLength: rsa.PSSSaltLengthEqualsHash,
+		},
+		VerifyOptions: &rsa.PSSOptions{
+			SaltLength: rsa.PSSSaltLengthAuto,
+		},
+	}
+	RegisterSigningMethod(SigningMethodPS256.Alg(), func() SigningMethod {
+		return SigningMethodPS256
+	})
+
+	// PS384
+	SigningMethodPS384 = &SigningMethodRSAPSS{
+		SigningMethodRSA: &SigningMethodRSA{
+			Name: "PS384",
+			Hash: crypto.SHA384,
+		},
+		Options: &rsa.PSSOptions{
+			SaltLength: rsa.PSSSaltLengthEqualsHash,
+		},
+		VerifyOptions: &rsa.PSSOptions{
+			SaltLength: rsa.PSSSaltLengthAuto,
+		},
+	}
+	RegisterSigningMethod(SigningMethodPS384.Alg(), func() SigningMethod {
+		return SigningMethodPS384
+	})
+
+	// PS512
+	SigningMethodPS512 = &SigningMethodRSAPSS{
+		SigningMethodRSA: &SigningMethodRSA{
+			Name: "PS512",
+			Hash: crypto.SHA512,
+		},
+		Options: &rsa.PSSOptions{
+			SaltLength: rsa.PSSSaltLengthEqualsHash,
+		},
+		VerifyOptions: &rsa.PSSOptions{
+			SaltLength: rsa.PSSSaltLengthAuto,
+		},
+	}
+	RegisterSigningMethod(SigningMethodPS512.Alg(), func() SigningMethod {
+		return SigningMethodPS512
+	})
+}
+
+// Verify implements token verification for the SigningMethod.
+// For this verify method, key must be an *rsa.PublicKey struct
+func (m *SigningMethodRSAPSS) Verify(signingString string, sig []byte, key interface{}) error {
+	var rsaKey *rsa.PublicKey
+	switch k := key.(type) {
+	case *rsa.PublicKey:
+		rsaKey = k
+	default:
+		return ErrInvalidKey
+	}
+
+	// Create hasher
+	if !m.Hash.Available() {
+		return ErrHashUnavailable
+	}
+	hasher := m.Hash.New()
+	hasher.Write([]byte(signingString))
+
+	opts := m.Options
+	if m.VerifyOptions != nil {
+		opts = m.VerifyOptions
+	}
+
+	return rsa.VerifyPSS(rsaKey, m.Hash, hasher.Sum(nil), sig, opts)
+}
+
+// Sign implements token signing for the SigningMethod.
+// For this signing method, key must be an *rsa.PrivateKey struct
+func (m *SigningMethodRSAPSS) Sign(signingString string, key interface{}) ([]byte, error) {
+	var rsaKey *rsa.PrivateKey
+
+	switch k := key.(type) {
+	case *rsa.PrivateKey:
+		rsaKey = k
+	default:
+		return nil, ErrInvalidKeyType
+	}
+
+	// Create the hasher
+	if !m.Hash.Available() {
+		return nil, ErrHashUnavailable
+	}
+
+	hasher := m.Hash.New()
+	hasher.Write([]byte(signingString))
+
+	// Sign the string and return the encoded bytes
+	if sigBytes, err := rsa.SignPSS(rand.Reader, rsaKey, m.Hash, hasher.Sum(nil), m.Options); err == nil {
+		return sigBytes, nil
+	} else {
+		return nil, err
+	}
+}
diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/rsa_utils.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/rsa_utils.go
new file mode 100644
index 000000000000..b3aeebbe110b
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/rsa_utils.go
@@ -0,0 +1,107 @@
+package jwt
+
+import (
+	"crypto/rsa"
+	"crypto/x509"
+	"encoding/pem"
+	"errors"
+)
+
+var (
+	ErrKeyMustBePEMEncoded = errors.New("invalid key: Key must be a PEM encoded PKCS1 or PKCS8 key")
+	ErrNotRSAPrivateKey    = errors.New("key is not a valid RSA private key")
+	ErrNotRSAPublicKey     = errors.New("key is not a valid RSA public key")
+)
+
+// ParseRSAPrivateKeyFromPEM parses a PEM encoded PKCS1 or PKCS8 private key
+func ParseRSAPrivateKeyFromPEM(key []byte) (*rsa.PrivateKey, error) {
+	var err error
+
+	// Parse PEM block
+	var block *pem.Block
+	if block, _ = pem.Decode(key); block == nil {
+		return nil, ErrKeyMustBePEMEncoded
+	}
+
+	var parsedKey interface{}
+	if parsedKey, err = x509.ParsePKCS1PrivateKey(block.Bytes); err != nil {
+		if parsedKey, err = x509.ParsePKCS8PrivateKey(block.Bytes); err != nil {
+			return nil, err
+		}
+	}
+
+	var pkey *rsa.PrivateKey
+	var ok bool
+	if pkey, ok = parsedKey.(*rsa.PrivateKey); !ok {
+		return nil, ErrNotRSAPrivateKey
+	}
+
+	return pkey, nil
+}
+
+// ParseRSAPrivateKeyFromPEMWithPassword parses a PEM encoded PKCS1 or PKCS8 private key protected with a password
+//
+// Deprecated: This function is deprecated and should not be used anymore. It uses the deprecated x509.DecryptPEMBlock
+// function, which has been deprecated since RFC 1423 is regarded as insecure by design. Unfortunately, there is no
+// alternative in the Go standard library for now. See https://github.com/golang/go/issues/8860.
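+//
+// A sketch of the discouraged call, shown only for legacy encrypted PEM blocks
+// (an editorial illustration; pemBytes and password are hypothetical):
+//
+//	key, err := jwt.ParseRSAPrivateKeyFromPEMWithPassword(pemBytes, password)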
+func ParseRSAPrivateKeyFromPEMWithPassword(key []byte, password string) (*rsa.PrivateKey, error) {
+	var err error
+
+	// Parse PEM block
+	var block *pem.Block
+	if block, _ = pem.Decode(key); block == nil {
+		return nil, ErrKeyMustBePEMEncoded
+	}
+
+	var parsedKey interface{}
+
+	var blockDecrypted []byte
+	if blockDecrypted, err = x509.DecryptPEMBlock(block, []byte(password)); err != nil {
+		return nil, err
+	}
+
+	if parsedKey, err = x509.ParsePKCS1PrivateKey(blockDecrypted); err != nil {
+		if parsedKey, err = x509.ParsePKCS8PrivateKey(blockDecrypted); err != nil {
+			return nil, err
+		}
+	}
+
+	var pkey *rsa.PrivateKey
+	var ok bool
+	if pkey, ok = parsedKey.(*rsa.PrivateKey); !ok {
+		return nil, ErrNotRSAPrivateKey
+	}
+
+	return pkey, nil
+}
+
+// ParseRSAPublicKeyFromPEM parses a certificate or a PEM encoded PKCS1 or PKIX public key
+func ParseRSAPublicKeyFromPEM(key []byte) (*rsa.PublicKey, error) {
+	var err error
+
+	// Parse PEM block
+	var block *pem.Block
+	if block, _ = pem.Decode(key); block == nil {
+		return nil, ErrKeyMustBePEMEncoded
+	}
+
+	// Parse the key
+	var parsedKey interface{}
+	if parsedKey, err = x509.ParsePKIXPublicKey(block.Bytes); err != nil {
+		if cert, err := x509.ParseCertificate(block.Bytes); err == nil {
+			parsedKey = cert.PublicKey
+		} else {
+			if parsedKey, err = x509.ParsePKCS1PublicKey(block.Bytes); err != nil {
+				return nil, err
+			}
+		}
+	}
+
+	var pkey *rsa.PublicKey
+	var ok bool
+	if pkey, ok = parsedKey.(*rsa.PublicKey); !ok {
+		return nil, ErrNotRSAPublicKey
+	}
+
+	return pkey, nil
+}
diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/signing_method.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/signing_method.go
new file mode 100644
index 000000000000..0d73631c1bf3
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/signing_method.go
@@ -0,0 +1,49 @@
+package jwt
+
+import (
+	"sync"
+)
+
+var signingMethods = map[string]func() SigningMethod{}
+var signingMethodLock = new(sync.RWMutex)
+
+// SigningMethod can be used to add new methods for signing or verifying tokens. It
+// takes a decoded signature as an input in the Verify function and produces a
+// signature in Sign. The signature is then usually base64 encoded as part of a
+// JWT.
+type SigningMethod interface {
+	Verify(signingString string, sig []byte, key interface{}) error // Returns nil if signature is valid
+	Sign(signingString string, key interface{}) ([]byte, error)     // Returns signature or error
+	Alg() string                                                    // returns the alg identifier for this method (example: 'HS256')
+}
+
+// RegisterSigningMethod registers the "alg" name and a factory function for a signing method.
+// This is typically done during init() in the method's implementation +func RegisterSigningMethod(alg string, f func() SigningMethod) { + signingMethodLock.Lock() + defer signingMethodLock.Unlock() + + signingMethods[alg] = f +} + +// GetSigningMethod retrieves a signing method from an "alg" string +func GetSigningMethod(alg string) (method SigningMethod) { + signingMethodLock.RLock() + defer signingMethodLock.RUnlock() + + if methodF, ok := signingMethods[alg]; ok { + method = methodF() + } + return +} + +// GetAlgorithms returns a list of registered "alg" names +func GetAlgorithms() (algs []string) { + signingMethodLock.RLock() + defer signingMethodLock.RUnlock() + + for alg := range signingMethods { + algs = append(algs, alg) + } + return +} diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/staticcheck.conf b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/staticcheck.conf new file mode 100644 index 000000000000..53745d51d7c7 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/staticcheck.conf @@ -0,0 +1 @@ +checks = ["all", "-ST1000", "-ST1003", "-ST1016", "-ST1023"] diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/token.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/token.go new file mode 100644 index 000000000000..c8ad7c7834de --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/token.go @@ -0,0 +1,86 @@ +package jwt + +import ( + "encoding/base64" + "encoding/json" +) + +// Keyfunc will be used by the Parse methods as a callback function to supply +// the key for verification. The function receives the parsed, but unverified +// Token. This allows you to use properties in the Header of the token (such as +// `kid`) to identify which key to use. +type Keyfunc func(*Token) (interface{}, error) + +// Token represents a JWT Token. Different fields will be used depending on +// whether you're creating or parsing/verifying a token. +type Token struct { + Raw string // Raw contains the raw token. Populated when you [Parse] a token + Method SigningMethod // Method is the signing method used or to be used + Header map[string]interface{} // Header is the first segment of the token in decoded form + Claims Claims // Claims is the second segment of the token in decoded form + Signature []byte // Signature is the third segment of the token in decoded form. Populated when you Parse a token + Valid bool // Valid specifies if the token is valid. Populated when you Parse/Verify a token +} + +// New creates a new [Token] with the specified signing method and an empty map +// of claims. Additional options can be specified, but are currently unused. +func New(method SigningMethod, opts ...TokenOption) *Token { + return NewWithClaims(method, MapClaims{}, opts...) +} + +// NewWithClaims creates a new [Token] with the specified signing method and +// claims. Additional options can be specified, but are currently unused. +func NewWithClaims(method SigningMethod, claims Claims, opts ...TokenOption) *Token { + return &Token{ + Header: map[string]interface{}{ + "typ": "JWT", + "alg": method.Alg(), + }, + Claims: claims, + Method: method, + } +} + +// SignedString creates and returns a complete, signed JWT. The token is signed +// using the SigningMethod specified in the token. Please refer to +// https://golang-jwt.github.io/jwt/usage/signing_methods/#signing-methods-and-key-types +// for an overview of the different signing methods and their respective key +// types. 
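+//
+// A usage sketch (an editorial illustration; secretKey stands for a
+// hypothetical []byte key suitable for the HMAC method chosen here):
+//
+//	token := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.RegisteredClaims{Subject: "1234"})
+//	s, err := token.SignedString(secretKey)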
+func (t *Token) SignedString(key interface{}) (string, error) { + sstr, err := t.SigningString() + if err != nil { + return "", err + } + + sig, err := t.Method.Sign(sstr, key) + if err != nil { + return "", err + } + + return sstr + "." + t.EncodeSegment(sig), nil +} + +// SigningString generates the signing string. This is the most expensive part +// of the whole deal. Unless you need this for something special, just go +// straight for the SignedString. +func (t *Token) SigningString() (string, error) { + h, err := json.Marshal(t.Header) + if err != nil { + return "", err + } + + c, err := json.Marshal(t.Claims) + if err != nil { + return "", err + } + + return t.EncodeSegment(h) + "." + t.EncodeSegment(c), nil +} + +// EncodeSegment encodes a JWT specific base64url encoding with padding +// stripped. In the future, this function might take into account a +// [TokenOption]. Therefore, this function exists as a method of [Token], rather +// than a global function. +func (*Token) EncodeSegment(seg []byte) string { + return base64.RawURLEncoding.EncodeToString(seg) +} diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/token_option.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/token_option.go new file mode 100644 index 000000000000..b4ae3badf8e4 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/token_option.go @@ -0,0 +1,5 @@ +package jwt + +// TokenOption is a reserved type, which provides some forward compatibility, +// if we ever want to introduce token creation-related options. +type TokenOption func(*Token) diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/types.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/types.go new file mode 100644 index 000000000000..b82b38867d0d --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/types.go @@ -0,0 +1,150 @@ +package jwt + +import ( + "encoding/json" + "fmt" + "math" + "reflect" + "strconv" + "time" +) + +// TimePrecision sets the precision of times and dates within this library. This +// has an influence on the precision of times when comparing expiry or other +// related time fields. Furthermore, it is also the precision of times when +// serializing. +// +// For backwards compatibility the default precision is set to seconds, so that +// no fractional timestamps are generated. +var TimePrecision = time.Second + +// MarshalSingleStringAsArray modifies the behavior of the ClaimStrings type, +// especially its MarshalJSON function. +// +// If it is set to true (the default), it will always serialize the type as an +// array of strings, even if it just contains one element, defaulting to the +// behavior of the underlying []string. If it is set to false, it will serialize +// to a single string, if it contains one element. Otherwise, it will serialize +// to an array of strings. +var MarshalSingleStringAsArray = true + +// NumericDate represents a JSON numeric date value, as referenced at +// https://datatracker.ietf.org/doc/html/rfc7519#section-2. +type NumericDate struct { + time.Time +} + +// NewNumericDate constructs a new *NumericDate from a standard library time.Time struct. +// It will truncate the timestamp according to the precision specified in TimePrecision. +func NewNumericDate(t time.Time) *NumericDate { + return &NumericDate{t.Truncate(TimePrecision)} +} + +// newNumericDateFromSeconds creates a new *NumericDate out of a float64 representing a +// UNIX epoch with the float fraction representing non-integer seconds. 
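+//
+// For example (an editorial illustration): f = 1700000000.25 splits via
+// math.Modf into round = 1700000000 seconds and frac*1e9 = 250000000
+// nanoseconds, which NewNumericDate then truncates according to TimePrecision.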
+func newNumericDateFromSeconds(f float64) *NumericDate {
+	round, frac := math.Modf(f)
+	return NewNumericDate(time.Unix(int64(round), int64(frac*1e9)))
+}
+
+// MarshalJSON is an implementation of the json.Marshaler interface and serializes the UNIX epoch
+// represented in NumericDate to a byte array, using the precision specified in TimePrecision.
+func (date NumericDate) MarshalJSON() (b []byte, err error) {
+	var prec int
+	if TimePrecision < time.Second {
+		prec = int(math.Log10(float64(time.Second) / float64(TimePrecision)))
+	}
+	truncatedDate := date.Truncate(TimePrecision)
+
+	// For very large timestamps, UnixNano would overflow an int64, but this
+	// function requires nanosecond level precision, so we have to use the
+	// following technique to get round the issue:
+	//
+	// 1. Take the normal unix timestamp to form the whole number part of the
+	// output,
+	// 2. Take the result of the Nanosecond function, which returns the offset
+	// within the second of the particular unix time instance, to form the
+	// decimal part of the output
+	// 3. Concatenate them to produce the final result
+	seconds := strconv.FormatInt(truncatedDate.Unix(), 10)
+	nanosecondsOffset := strconv.FormatFloat(float64(truncatedDate.Nanosecond())/float64(time.Second), 'f', prec, 64)
+
+	output := append([]byte(seconds), []byte(nanosecondsOffset)[1:]...)
+
+	return output, nil
+}
+
+// UnmarshalJSON is an implementation of the json.Unmarshaler interface and
+// deserializes a [NumericDate] from a JSON representation, i.e. a
+// [json.Number]. This number represents a UNIX epoch with either integer or
+// non-integer seconds.
+func (date *NumericDate) UnmarshalJSON(b []byte) (err error) {
+	var (
+		number json.Number
+		f      float64
+	)
+
+	if err = json.Unmarshal(b, &number); err != nil {
+		return fmt.Errorf("could not parse NumericDate: %w", err)
+	}
+
+	if f, err = number.Float64(); err != nil {
+		return fmt.Errorf("could not convert json number value to float: %w", err)
+	}
+
+	n := newNumericDateFromSeconds(f)
+	*date = *n
+
+	return nil
+}
+
+// ClaimStrings is basically just a slice of strings, but it can be serialized
+// from either a string array or a single string. This type is necessary,
+// since the "aud" claim can either be a single string or an array.
+type ClaimStrings []string
+
+func (s *ClaimStrings) UnmarshalJSON(data []byte) (err error) {
+	var value interface{}
+
+	if err = json.Unmarshal(data, &value); err != nil {
+		return err
+	}
+
+	var aud []string
+
+	switch v := value.(type) {
+	case string:
+		aud = append(aud, v)
+	case []string:
+		aud = ClaimStrings(v)
+	case []interface{}:
+		for _, vv := range v {
+			vs, ok := vv.(string)
+			if !ok {
+				return &json.UnsupportedTypeError{Type: reflect.TypeOf(vv)}
+			}
+			aud = append(aud, vs)
+		}
+	case nil:
+		return nil
+	default:
+		return &json.UnsupportedTypeError{Type: reflect.TypeOf(v)}
+	}
+
+	*s = aud
+
+	return
+}
+
+func (s ClaimStrings) MarshalJSON() (b []byte, err error) {
+	// This handles a special case in the JWT RFC. If the string array, e.g.
+	// used by the "aud" field, only contains one element, it MAY be serialized
+	// as a single string. This may or may not be desired based on the ecosystem
+	// of other JWT libraries used, so we make it configurable by the variable
+	// MarshalSingleStringAsArray.
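+	// For illustration (editor's note): with MarshalSingleStringAsArray set to
+	// false, ClaimStrings{"api"} marshals to "api"; with the default of true it
+	// marshals to ["api"].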
+	if len(s) == 1 && !MarshalSingleStringAsArray {
+		return json.Marshal(s[0])
+	}
+
+	return json.Marshal([]string(s))
+}
diff --git a/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/validator.go b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/validator.go
new file mode 100644
index 000000000000..3850438939d0
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/golang-jwt/jwt/v5/validator.go
@@ -0,0 +1,301 @@
+package jwt
+
+import (
+	"crypto/subtle"
+	"fmt"
+	"time"
+)
+
+// ClaimsValidator is an interface that can be implemented by custom claims
+// types that wish to execute any additional claims validation based on
+// application-specific logic. The Validate function is then executed in
+// addition to the regular claims validation and any error returned is appended
+// to the final validation result.
+//
+//	type MyCustomClaims struct {
+//		Foo string `json:"foo"`
+//		jwt.RegisteredClaims
+//	}
+//
+//	func (m MyCustomClaims) Validate() error {
+//		if m.Foo != "bar" {
+//			return errors.New("must be foobar")
+//		}
+//		return nil
+//	}
+type ClaimsValidator interface {
+	Claims
+	Validate() error
+}
+
+// validator is the core of the new Validation API. It is automatically used by
+// a [Parser] during parsing and can be modified with various parser options.
+//
+// Note: This struct is intentionally not exported (yet) as we want to
+// internally finalize its API. In the future, we might make it publicly
+// available.
+type validator struct {
+	// leeway is an optional leeway that can be provided to account for clock skew.
+	leeway time.Duration
+
+	// timeFunc is used to supply the current time that is needed for
+	// validation. If unspecified, this defaults to time.Now.
+	timeFunc func() time.Time
+
+	// verifyIat specifies whether the iat (Issued At) claim will be verified.
+	// According to https://www.rfc-editor.org/rfc/rfc7519#section-4.1.6 this
+	// only specifies the age of the token, but no validation check is
+	// necessary. However, if desired, it can be checked whether the iat is
+	// unrealistic, i.e., in the future.
+	verifyIat bool
+
+	// expectedAud contains the audience this token expects. Supplying an empty
+	// string will disable aud checking.
+	expectedAud string
+
+	// expectedIss contains the issuer this token expects. Supplying an empty
+	// string will disable iss checking.
+	expectedIss string
+
+	// expectedSub contains the subject this token expects. Supplying an empty
+	// string will disable sub checking.
+	expectedSub string
+}
+
+// newValidator can be used to create a stand-alone validator with the supplied
+// options. This validator can then be used to validate already parsed claims.
+func newValidator(opts ...ParserOption) *validator {
+	p := NewParser(opts...)
+	return p.validator
+}
+
+// Validate validates the given claims. It will also perform any custom
+// validation if claims implements the [ClaimsValidator] interface.
+func (v *validator) Validate(claims Claims) error {
+	var (
+		now  time.Time
+		errs []error = make([]error, 0, 6)
+		err  error
+	)
+
+	// Check if we have a time func
+	if v.timeFunc != nil {
+		now = v.timeFunc()
+	} else {
+		now = time.Now()
+	}
+
+	// We always need to check the expiration time, but usage of the claim
+	// itself is OPTIONAL.
+	if err = v.verifyExpiresAt(claims, now, false); err != nil {
+		errs = append(errs, err)
+	}
+
+	// We always need to check not-before, but usage of the claim itself is
+	// OPTIONAL.
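+	// (Editor's note: v.leeway is applied inside each verify helper below, so
+	// no additional leeway handling is needed at this call site.)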
+	if err = v.verifyNotBefore(claims, now, false); err != nil {
+		errs = append(errs, err)
+	}
+
+	// Check issued-at if the option is enabled
+	if v.verifyIat {
+		if err = v.verifyIssuedAt(claims, now, false); err != nil {
+			errs = append(errs, err)
+		}
+	}
+
+	// If we have an expected audience, we also require the audience claim
+	if v.expectedAud != "" {
+		if err = v.verifyAudience(claims, v.expectedAud, true); err != nil {
+			errs = append(errs, err)
+		}
+	}
+
+	// If we have an expected issuer, we also require the issuer claim
+	if v.expectedIss != "" {
+		if err = v.verifyIssuer(claims, v.expectedIss, true); err != nil {
+			errs = append(errs, err)
+		}
+	}
+
+	// If we have an expected subject, we also require the subject claim
+	if v.expectedSub != "" {
+		if err = v.verifySubject(claims, v.expectedSub, true); err != nil {
+			errs = append(errs, err)
+		}
+	}
+
+	// Finally, we want to give the claim itself some possibility to do some
+	// additional custom validation based on a custom Validate function.
+	cvt, ok := claims.(ClaimsValidator)
+	if ok {
+		if err := cvt.Validate(); err != nil {
+			errs = append(errs, err)
+		}
+	}
+
+	if len(errs) == 0 {
+		return nil
+	}
+
+	return joinErrors(errs...)
+}
+
+// verifyExpiresAt compares the exp claim in claims against cmp. This function
+// will succeed if cmp < exp. Additional leeway is taken into account.
+//
+// If exp is not set, it will succeed if the claim is not required,
+// otherwise ErrTokenRequiredClaimMissing will be returned.
+//
+// Additionally, if any error occurs while retrieving the claim, e.g., when it's
+// of the wrong type, an ErrTokenUnverifiable error will be returned.
+func (v *validator) verifyExpiresAt(claims Claims, cmp time.Time, required bool) error {
+	exp, err := claims.GetExpirationTime()
+	if err != nil {
+		return err
+	}
+
+	if exp == nil {
+		return errorIfRequired(required, "exp")
+	}
+
+	return errorIfFalse(cmp.Before((exp.Time).Add(+v.leeway)), ErrTokenExpired)
+}
+
+// verifyIssuedAt compares the iat claim in claims against cmp. This function
+// will succeed if cmp >= iat. Additional leeway is taken into account.
+//
+// If iat is not set, it will succeed if the claim is not required,
+// otherwise ErrTokenRequiredClaimMissing will be returned.
+//
+// Additionally, if any error occurs while retrieving the claim, e.g., when it's
+// of the wrong type, an ErrTokenUnverifiable error will be returned.
+func (v *validator) verifyIssuedAt(claims Claims, cmp time.Time, required bool) error {
+	iat, err := claims.GetIssuedAt()
+	if err != nil {
+		return err
+	}
+
+	if iat == nil {
+		return errorIfRequired(required, "iat")
+	}
+
+	return errorIfFalse(!cmp.Before(iat.Add(-v.leeway)), ErrTokenUsedBeforeIssued)
+}
+
+// verifyNotBefore compares the nbf claim in claims against cmp. This function
+// will succeed if cmp >= nbf. Additional leeway is taken into account.
+//
+// If nbf is not set, it will succeed if the claim is not required,
+// otherwise ErrTokenRequiredClaimMissing will be returned.
+//
+// Additionally, if any error occurs while retrieving the claim, e.g., when it's
+// of the wrong type, an ErrTokenUnverifiable error will be returned.
+func (v *validator) verifyNotBefore(claims Claims, cmp time.Time, required bool) error {
+	nbf, err := claims.GetNotBefore()
+	if err != nil {
+		return err
+	}
+
+	if nbf == nil {
+		return errorIfRequired(required, "nbf")
+	}
+
+	return errorIfFalse(!cmp.Before(nbf.Add(-v.leeway)), ErrTokenNotValidYet)
+}
+
+// verifyAudience compares the aud claim against cmp.
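+// Each candidate audience string is compared with a constant-time comparison
+// (crypto/subtle) to avoid leaking timing information.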
+//
+// If aud is not set or an empty list, it will succeed if the claim is not required,
+// otherwise ErrTokenRequiredClaimMissing will be returned.
+//
+// Additionally, if any error occurs while retrieving the claim, e.g., when it's
+// of the wrong type, an ErrTokenUnverifiable error will be returned.
+func (v *validator) verifyAudience(claims Claims, cmp string, required bool) error {
+	aud, err := claims.GetAudience()
+	if err != nil {
+		return err
+	}
+
+	if len(aud) == 0 {
+		return errorIfRequired(required, "aud")
+	}
+
+	// use a var here to keep constant time compare when looping over a number of claims
+	result := false
+
+	var stringClaims string
+	for _, a := range aud {
+		if subtle.ConstantTimeCompare([]byte(a), []byte(cmp)) != 0 {
+			result = true
+		}
+		stringClaims = stringClaims + a
+	}
+
+	// case where "" is sent in one or many aud claims
+	if stringClaims == "" {
+		return errorIfRequired(required, "aud")
+	}
+
+	return errorIfFalse(result, ErrTokenInvalidAudience)
+}
+
+// verifyIssuer compares the iss claim in claims against cmp.
+//
+// If iss is not set, it will succeed if the claim is not required,
+// otherwise ErrTokenRequiredClaimMissing will be returned.
+//
+// Additionally, if any error occurs while retrieving the claim, e.g., when it's
+// of the wrong type, an ErrTokenUnverifiable error will be returned.
+func (v *validator) verifyIssuer(claims Claims, cmp string, required bool) error {
+	iss, err := claims.GetIssuer()
+	if err != nil {
+		return err
+	}
+
+	if iss == "" {
+		return errorIfRequired(required, "iss")
+	}
+
+	return errorIfFalse(iss == cmp, ErrTokenInvalidIssuer)
+}
+
+// verifySubject compares the sub claim against cmp.
+//
+// If sub is not set, it will succeed if the claim is not required,
+// otherwise ErrTokenRequiredClaimMissing will be returned.
+//
+// Additionally, if any error occurs while retrieving the claim, e.g., when it's
+// of the wrong type, an ErrTokenUnverifiable error will be returned.
+func (v *validator) verifySubject(claims Claims, cmp string, required bool) error {
+	sub, err := claims.GetSubject()
+	if err != nil {
+		return err
+	}
+
+	if sub == "" {
+		return errorIfRequired(required, "sub")
+	}
+
+	return errorIfFalse(sub == cmp, ErrTokenInvalidSubject)
+}
+
+// errorIfFalse returns the error specified in err, if the value is false.
+// Otherwise, nil is returned.
+func errorIfFalse(value bool, err error) error {
+	if value {
+		return nil
+	} else {
+		return err
+	}
+}
+
+// errorIfRequired returns an ErrTokenRequiredClaimMissing error if required is
+// true. Otherwise, nil is returned.
+func errorIfRequired(required bool, claim string) error {
+	if required {
+		return newError(fmt.Sprintf("%s claim is required", claim), ErrTokenRequiredClaimMissing)
+	} else {
+		return nil
+	}
+}
diff --git a/cluster-autoscaler/vendor/github.com/kylelemons/godebug/LICENSE b/cluster-autoscaler/vendor/github.com/kylelemons/godebug/LICENSE
new file mode 100644
index 000000000000..d64569567334
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/kylelemons/godebug/LICENSE
@@ -0,0 +1,202 @@
+
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+ + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. 
Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. 
This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. 
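Editor's note: for orientation, a hedged sketch of how the vendored godebug/diff package introduced below is typically used (the input strings are invented for illustration; only the exported diff.Diff function from the vendored file is assumed):

	package main

	import (
		"fmt"

		"github.com/kylelemons/godebug/diff"
	)

	func main() {
		before := "one\ntwo\nthree"
		after := "one\n2\nthree"
		// Diff prefixes added lines with '+', deleted lines with '-',
		// and unchanged lines with ' '.
		fmt.Println(diff.Diff(before, after))
	}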
diff --git a/cluster-autoscaler/vendor/github.com/kylelemons/godebug/diff/diff.go b/cluster-autoscaler/vendor/github.com/kylelemons/godebug/diff/diff.go new file mode 100644 index 000000000000..200e596c6259 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/kylelemons/godebug/diff/diff.go @@ -0,0 +1,186 @@ +// Copyright 2013 Google Inc. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Package diff implements a linewise diff algorithm. +package diff + +import ( + "bytes" + "fmt" + "strings" +) + +// Chunk represents a piece of the diff. A chunk will not have both added and +// deleted lines. Equal lines are always after any added or deleted lines. +// A Chunk may or may not have any lines in it, especially for the first or last +// chunk in a computation. +type Chunk struct { + Added []string + Deleted []string + Equal []string +} + +func (c *Chunk) empty() bool { + return len(c.Added) == 0 && len(c.Deleted) == 0 && len(c.Equal) == 0 +} + +// Diff returns a string containing a line-by-line unified diff of the linewise +// changes required to make A into B. Each line is prefixed with '+', '-', or +// ' ' to indicate if it should be added, removed, or is correct respectively. +func Diff(A, B string) string { + aLines := strings.Split(A, "\n") + bLines := strings.Split(B, "\n") + + chunks := DiffChunks(aLines, bLines) + + buf := new(bytes.Buffer) + for _, c := range chunks { + for _, line := range c.Added { + fmt.Fprintf(buf, "+%s\n", line) + } + for _, line := range c.Deleted { + fmt.Fprintf(buf, "-%s\n", line) + } + for _, line := range c.Equal { + fmt.Fprintf(buf, " %s\n", line) + } + } + return strings.TrimRight(buf.String(), "\n") +} + +// DiffChunks uses an O(D(N+M)) shortest-edit-script algorithm +// to compute the edits required from A to B and returns the +// edit chunks. +func DiffChunks(a, b []string) []Chunk { + // algorithm: http://www.xmailserver.org/diff2.pdf + + // We'll need these quantities a lot. + alen, blen := len(a), len(b) // M, N + + // At most, it will require len(a) deletions and len(b) additions + // to transform a into b. + maxPath := alen + blen // MAX + if maxPath == 0 { + // degenerate case: two empty lists are the same + return nil + } + + // Store the endpoint of the path for diagonals. + // We store only the a index, because the b index on any diagonal + // (which we know during the loop below) is aidx-diag. + // endpoint[maxPath] represents the 0 diagonal. + // + // Stated differently: + // endpoint[d] contains the aidx of a furthest reaching path in diagonal d + endpoint := make([]int, 2*maxPath+1) // V + + saved := make([][]int, 0, 8) // Vs + save := func() { + dup := make([]int, len(endpoint)) + copy(dup, endpoint) + saved = append(saved, dup) + } + + var editDistance int // D +dLoop: + for editDistance = 0; editDistance <= maxPath; editDistance++ { + // The 0 diag(onal) represents equality of a and b. Each diagonal to + // the left is numbered one lower, to the right is one higher, from + // -alen to +blen. 
Negative diagonals favor differences from a,
+		// positive diagonals favor differences from b. The edit distance to a
+		// diagonal d cannot be shorter than d itself.
+		//
+		// The iterations of this loop cover either odds or evens, but not both.
+		// If odd indices are inputs, even indices are outputs and vice versa.
+		for diag := -editDistance; diag <= editDistance; diag += 2 { // k
+			var aidx int // x
+			switch {
+			case diag == -editDistance:
+				// This is a new diagonal; copy from previous iter
+				aidx = endpoint[maxPath-editDistance+1] + 0
+			case diag == editDistance:
+				// This is a new diagonal; copy from previous iter
+				aidx = endpoint[maxPath+editDistance-1] + 1
+			case endpoint[maxPath+diag+1] > endpoint[maxPath+diag-1]:
+				// diagonal d+1 was farther along, so use that
+				aidx = endpoint[maxPath+diag+1] + 0
+			default:
+				// diagonal d-1 was farther (or the same), so use that
+				aidx = endpoint[maxPath+diag-1] + 1
+			}
+			// On diagonal d, we can compute bidx from aidx.
+			bidx := aidx - diag // y
+			// See how far we can go on this diagonal before we find a difference.
+			for aidx < alen && bidx < blen && a[aidx] == b[bidx] {
+				aidx++
+				bidx++
+			}
+			// Store the end of the current edit chain.
+			endpoint[maxPath+diag] = aidx
+			// If we've found the end of both inputs, we're done!
+			if aidx >= alen && bidx >= blen {
+				save() // save the final path
+				break dLoop
+			}
+		}
+		save() // save the current path
+	}
+	if editDistance == 0 {
+		return nil
+	}
+	chunks := make([]Chunk, editDistance+1)
+
+	x, y := alen, blen
+	for d := editDistance; d > 0; d-- {
+		endpoint := saved[d]
+		diag := x - y
+		insert := diag == -d || (diag != d && endpoint[maxPath+diag-1] < endpoint[maxPath+diag+1])
+
+		x1 := endpoint[maxPath+diag]
+		var x0, xM, kk int
+		if insert {
+			kk = diag + 1
+			x0 = endpoint[maxPath+kk]
+			xM = x0
+		} else {
+			kk = diag - 1
+			x0 = endpoint[maxPath+kk]
+			xM = x0 + 1
+		}
+		y0 := x0 - kk
+
+		var c Chunk
+		if insert {
+			c.Added = b[y0:][:1]
+		} else {
+			c.Deleted = a[x0:][:1]
+		}
+		if xM < x1 {
+			c.Equal = a[xM:][:x1-xM]
+		}
+
+		x, y = x0, y0
+		chunks[d] = c
+	}
+	if x > 0 {
+		chunks[0].Equal = a[:x]
+	}
+	if chunks[0].empty() {
+		chunks = chunks[1:]
+	}
+	if len(chunks) == 0 {
+		return nil
+	}
+	return chunks
+}
diff --git a/cluster-autoscaler/vendor/github.com/kylelemons/godebug/pretty/.gitignore b/cluster-autoscaler/vendor/github.com/kylelemons/godebug/pretty/.gitignore
new file mode 100644
index 000000000000..fa9a735da3c1
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/kylelemons/godebug/pretty/.gitignore
@@ -0,0 +1,5 @@
+*.test
+*.bench
+*.golden
+*.txt
+*.prof
diff --git a/cluster-autoscaler/vendor/github.com/kylelemons/godebug/pretty/doc.go b/cluster-autoscaler/vendor/github.com/kylelemons/godebug/pretty/doc.go
new file mode 100644
index 000000000000..03b5718a70db
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/kylelemons/godebug/pretty/doc.go
@@ -0,0 +1,25 @@
+// Copyright 2013 Google Inc. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+// Package pretty pretty-prints Go structures.
+//
+// This package uses reflection to examine a Go value and can print it out in a
+// nice, aligned fashion. It supports three modes (normal, compact, and
+// extended) for advanced use.
+//
+// See the Reflect and Print examples for what the output looks like.
+package pretty
+
+// TODO:
+//  - Catch cycles
diff --git a/cluster-autoscaler/vendor/github.com/kylelemons/godebug/pretty/public.go b/cluster-autoscaler/vendor/github.com/kylelemons/godebug/pretty/public.go
new file mode 100644
index 000000000000..fbc5d7abbf87
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/kylelemons/godebug/pretty/public.go
@@ -0,0 +1,188 @@
+// Copyright 2013 Google Inc. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package pretty
+
+import (
+	"bytes"
+	"fmt"
+	"io"
+	"net"
+	"reflect"
+	"time"
+
+	"github.com/kylelemons/godebug/diff"
+)
+
+// A Config represents optional configuration parameters for formatting.
+//
+// Some options, notably ShortList, dramatically increase the overhead
+// of pretty-printing a value.
+type Config struct {
+	// Verbosity options
+	Compact  bool // One-line output. Overrides Diffable.
+	Diffable bool // Adds extra newlines for more easily diffable output.
+
+	// Field and value options
+	IncludeUnexported   bool // Include unexported fields in output
+	PrintStringers      bool // Call String on a fmt.Stringer
+	PrintTextMarshalers bool // Call MarshalText on an encoding.TextMarshaler
+	SkipZeroFields      bool // Skip struct fields that have a zero value.
+
+	// Output transforms
+	ShortList int // Maximum character length for short lists if nonzero.
+
+	// Type-specific overrides
+	//
+	// Formatter maps a type to a function that will provide a one-line string
+	// representation of the input value. Conceptually:
+	//   Formatter[reflect.TypeOf(v)](v) = "v as a string"
+	//
+	// Note that the first argument need not explicitly match the type; it must
+	// merely be callable with it.
+	//
+	// When processing an input value, if its type exists as a key in Formatter:
+	//   1) If the value is nil, no stringification is performed.
+	//      This allows overriding of PrintStringers and PrintTextMarshalers.
+	//   2) The value will be called with the input as its only argument.
+	//      The function must return a string as its first return value.
+	//
+	// In addition to func literals, two common values for this will be:
+	//   fmt.Sprint        (function) func Sprint(...interface{}) string
+	//   Type.String       (method)   func (Type) String() string
+	//
+	// Note that neither of these work if the String method is a pointer
+	// method and the input will be provided as a value. In that case,
+	// use a function that calls .String on the formal value parameter.
+	Formatter map[reflect.Type]interface{}
+
+	// If TrackCycles is enabled, pretty will detect and track
+	// self-referential structures. If a self-referential structure (aka a
+	// "recursive" value) is detected, numbered placeholders will be emitted.
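+	// A value that refers back to one of its ancestors prints as a <see #n>
+	// reference, and the ancestor itself is prefixed with a <#n> tag.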
+	//
+	// Pointer tracking is disabled by default for performance reasons.
+	TrackCycles bool
+}
+
+// Default Config objects
+var (
+	// DefaultFormatter is the default set of overrides for stringification.
+	DefaultFormatter = map[reflect.Type]interface{}{
+		reflect.TypeOf(time.Time{}):          fmt.Sprint,
+		reflect.TypeOf(net.IP{}):             fmt.Sprint,
+		reflect.TypeOf((*error)(nil)).Elem(): fmt.Sprint,
+	}
+
+	// CompareConfig is the default configuration used for Compare.
+	CompareConfig = &Config{
+		Diffable:          true,
+		IncludeUnexported: true,
+		Formatter:         DefaultFormatter,
+	}
+
+	// DefaultConfig is the default configuration used for all other top-level functions.
+	DefaultConfig = &Config{
+		Formatter: DefaultFormatter,
+	}
+
+	// CycleTracker is a convenience config for formatting and comparing recursive structures.
+	CycleTracker = &Config{
+		Diffable:    true,
+		Formatter:   DefaultFormatter,
+		TrackCycles: true,
+	}
+)
+
+func (cfg *Config) fprint(buf *bytes.Buffer, vals ...interface{}) {
+	ref := &reflector{
+		Config: cfg,
+	}
+	if cfg.TrackCycles {
+		ref.pointerTracker = new(pointerTracker)
+	}
+	for i, val := range vals {
+		if i > 0 {
+			buf.WriteByte('\n')
+		}
+		newFormatter(cfg, buf).write(ref.val2node(reflect.ValueOf(val)))
+	}
+}
+
+// Print writes the DefaultConfig representation of the given values to standard output.
+func Print(vals ...interface{}) {
+	DefaultConfig.Print(vals...)
+}
+
+// Print writes the configured presentation of the given values to standard output.
+func (cfg *Config) Print(vals ...interface{}) {
+	fmt.Println(cfg.Sprint(vals...))
+}
+
+// Sprint returns a string representation of the given value according to the DefaultConfig.
+func Sprint(vals ...interface{}) string {
+	return DefaultConfig.Sprint(vals...)
+}
+
+// Sprint returns a string representation of the given value according to cfg.
+func (cfg *Config) Sprint(vals ...interface{}) string {
+	buf := new(bytes.Buffer)
+	cfg.fprint(buf, vals...)
+	return buf.String()
+}
+
+// Fprint writes the representation of the given value to the writer according to the DefaultConfig.
+func Fprint(w io.Writer, vals ...interface{}) (n int64, err error) {
+	return DefaultConfig.Fprint(w, vals...)
+}
+
+// Fprint writes the representation of the given value to the writer according to the cfg.
+func (cfg *Config) Fprint(w io.Writer, vals ...interface{}) (n int64, err error) {
+	buf := new(bytes.Buffer)
+	cfg.fprint(buf, vals...)
+	return buf.WriteTo(w)
+}
+
+// Compare returns a string containing a line-by-line unified diff of the
+// values in a and b, using the CompareConfig.
+//
+// Each line in the output is prefixed with '+', '-', or ' ' to indicate which
+// side it's from. Lines from the a side are marked with '-', lines from the
+// b side are marked with '+' and lines that are the same on both sides are
+// marked with ' '.
+//
+// The comparison is based on the intentionally-untyped output of Print, and as
+// such this comparison is pretty forgiving. In particular, if the types of, or
+// the types within, a and b are different but have the same representation,
+// Compare will not indicate any differences between them.
+func Compare(a, b interface{}) string {
+	return CompareConfig.Compare(a, b)
+}
+
+// Compare returns a string containing a line-by-line unified diff of the
+// values in a and b according to the cfg.
+//
+// Each line in the output is prefixed with '+', '-', or ' ' to indicate which
+// side it's from.
Lines from the a side are marked with '-', lines from the
+// b side are marked with '+' and lines that are the same on both sides are
+// marked with ' '.
+//
+// The comparison is based on the intentionally-untyped output of Print, and as
+// such this comparison is pretty forgiving. In particular, if the types of, or
+// the types within, a and b are different but have the same representation,
+// Compare will not indicate any differences between them.
+func (cfg *Config) Compare(a, b interface{}) string {
+	diffCfg := *cfg
+	diffCfg.Diffable = true
+	return diff.Diff(diffCfg.Sprint(a), diffCfg.Sprint(b))
+}
diff --git a/cluster-autoscaler/vendor/github.com/kylelemons/godebug/pretty/reflect.go b/cluster-autoscaler/vendor/github.com/kylelemons/godebug/pretty/reflect.go
new file mode 100644
index 000000000000..5cd30b7f0360
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/kylelemons/godebug/pretty/reflect.go
@@ -0,0 +1,241 @@
+// Copyright 2013 Google Inc. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package pretty
+
+import (
+	"encoding"
+	"fmt"
+	"reflect"
+	"sort"
+)
+
+func isZeroVal(val reflect.Value) bool {
+	if !val.CanInterface() {
+		return false
+	}
+	z := reflect.Zero(val.Type()).Interface()
+	return reflect.DeepEqual(val.Interface(), z)
+}
+
+// pointerTracker is a helper for tracking pointer chasing to detect cycles.
+type pointerTracker struct {
+	addrs map[uintptr]int // addr[address] = seen count
+
+	lastID int
+	ids    map[uintptr]int // ids[address] = id
+}
+
+// track tracks following a reference (pointer, slice, map, etc). Every call to
+// track should be paired with a call to untrack.
+func (p *pointerTracker) track(ptr uintptr) {
+	if p.addrs == nil {
+		p.addrs = make(map[uintptr]int)
+	}
+	p.addrs[ptr]++
+}
+
+// untrack registers that we have backtracked over the reference to the pointer.
+func (p *pointerTracker) untrack(ptr uintptr) {
+	p.addrs[ptr]--
+	if p.addrs[ptr] == 0 {
+		delete(p.addrs, ptr)
+	}
+}
+
+// seen returns whether the pointer was previously seen along this path.
+func (p *pointerTracker) seen(ptr uintptr) bool {
+	_, ok := p.addrs[ptr]
+	return ok
+}
+
+// keep allocates an ID for the given address and returns it.
+func (p *pointerTracker) keep(ptr uintptr) int {
+	if p.ids == nil {
+		p.ids = make(map[uintptr]int)
+	}
+	if _, ok := p.ids[ptr]; !ok {
+		p.lastID++
+		p.ids[ptr] = p.lastID
+	}
+	return p.ids[ptr]
+}
+
+// id returns the ID for the given address.
+func (p *pointerTracker) id(ptr uintptr) (int, bool) {
+	if p.ids == nil {
+		p.ids = make(map[uintptr]int)
+	}
+	id, ok := p.ids[ptr]
+	return id, ok
+}
+
+// reflector adds local state to the recursive reflection logic.
+type reflector struct {
+	*Config
+	*pointerTracker
+}
+
+// follow handles following a possibly-recursive reference to the given value
+// from the given ptr address.
+func (r *reflector) follow(ptr uintptr, val reflect.Value) node { + if r.pointerTracker == nil { + // Tracking disabled + return r.val2node(val) + } + + // If a parent already followed this, emit a reference marker + if r.seen(ptr) { + id := r.keep(ptr) + return ref{id} + } + + // Track the pointer we're following while on this recursive branch + r.track(ptr) + defer r.untrack(ptr) + n := r.val2node(val) + + // If the recursion used this ptr, wrap it with a target marker + if id, ok := r.id(ptr); ok { + return target{id, n} + } + + // Otherwise, return the node unadulterated + return n +} + +func (r *reflector) val2node(val reflect.Value) node { + if !val.IsValid() { + return rawVal("nil") + } + + if val.CanInterface() { + v := val.Interface() + if formatter, ok := r.Formatter[val.Type()]; ok { + if formatter != nil { + res := reflect.ValueOf(formatter).Call([]reflect.Value{val}) + return rawVal(res[0].Interface().(string)) + } + } else { + if s, ok := v.(fmt.Stringer); ok && r.PrintStringers { + return stringVal(s.String()) + } + if t, ok := v.(encoding.TextMarshaler); ok && r.PrintTextMarshalers { + if raw, err := t.MarshalText(); err == nil { // if NOT an error + return stringVal(string(raw)) + } + } + } + } + + switch kind := val.Kind(); kind { + case reflect.Ptr: + if val.IsNil() { + return rawVal("nil") + } + return r.follow(val.Pointer(), val.Elem()) + case reflect.Interface: + if val.IsNil() { + return rawVal("nil") + } + return r.val2node(val.Elem()) + case reflect.String: + return stringVal(val.String()) + case reflect.Slice: + n := list{} + length := val.Len() + ptr := val.Pointer() + for i := 0; i < length; i++ { + n = append(n, r.follow(ptr, val.Index(i))) + } + return n + case reflect.Array: + n := list{} + length := val.Len() + for i := 0; i < length; i++ { + n = append(n, r.val2node(val.Index(i))) + } + return n + case reflect.Map: + // Extract the keys and sort them for stable iteration + keys := val.MapKeys() + pairs := make([]mapPair, 0, len(keys)) + for _, key := range keys { + pairs = append(pairs, mapPair{ + key: new(formatter).compactString(r.val2node(key)), // can't be cyclic + value: val.MapIndex(key), + }) + } + sort.Sort(byKey(pairs)) + + // Process the keys into the final representation + ptr, n := val.Pointer(), keyvals{} + for _, pair := range pairs { + n = append(n, keyval{ + key: pair.key, + val: r.follow(ptr, pair.value), + }) + } + return n + case reflect.Struct: + n := keyvals{} + typ := val.Type() + fields := typ.NumField() + for i := 0; i < fields; i++ { + sf := typ.Field(i) + if !r.IncludeUnexported && sf.PkgPath != "" { + continue + } + field := val.Field(i) + if r.SkipZeroFields && isZeroVal(field) { + continue + } + n = append(n, keyval{sf.Name, r.val2node(field)}) + } + return n + case reflect.Bool: + if val.Bool() { + return rawVal("true") + } + return rawVal("false") + case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64: + return rawVal(fmt.Sprintf("%d", val.Int())) + case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64: + return rawVal(fmt.Sprintf("%d", val.Uint())) + case reflect.Uintptr: + return rawVal(fmt.Sprintf("0x%X", val.Uint())) + case reflect.Float32, reflect.Float64: + return rawVal(fmt.Sprintf("%v", val.Float())) + case reflect.Complex64, reflect.Complex128: + return rawVal(fmt.Sprintf("%v", val.Complex())) + } + + // Fall back to the default %#v if we can + if val.CanInterface() { + return rawVal(fmt.Sprintf("%#v", val.Interface())) + } + + return rawVal(val.String()) +} + +type 
mapPair struct { + key string + value reflect.Value +} + +type byKey []mapPair + +func (v byKey) Len() int { return len(v) } +func (v byKey) Swap(i, j int) { v[i], v[j] = v[j], v[i] } +func (v byKey) Less(i, j int) bool { return v[i].key < v[j].key } diff --git a/cluster-autoscaler/vendor/github.com/kylelemons/godebug/pretty/structure.go b/cluster-autoscaler/vendor/github.com/kylelemons/godebug/pretty/structure.go new file mode 100644 index 000000000000..d876f60cad21 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/kylelemons/godebug/pretty/structure.go @@ -0,0 +1,223 @@ +// Copyright 2013 Google Inc. All rights reserved. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package pretty + +import ( + "bufio" + "bytes" + "fmt" + "io" + "strconv" + "strings" +) + +// a formatter stores stateful formatting information as well as being +// an io.Writer for simplicity. +type formatter struct { + *bufio.Writer + *Config + + // Self-referential structure tracking + tagNumbers map[int]int // tagNumbers[id] = <#n> +} + +// newFormatter creates a new buffered formatter. For the output to be written +// to the given writer, this must be accompanied by a call to write (or Flush). +func newFormatter(cfg *Config, w io.Writer) *formatter { + return &formatter{ + Writer: bufio.NewWriter(w), + Config: cfg, + tagNumbers: make(map[int]int), + } +} + +func (f *formatter) write(n node) { + defer f.Flush() + n.format(f, "") +} + +func (f *formatter) tagFor(id int) int { + if tag, ok := f.tagNumbers[id]; ok { + return tag + } + if f.tagNumbers == nil { + return 0 + } + tag := len(f.tagNumbers) + 1 + f.tagNumbers[id] = tag + return tag +} + +type node interface { + format(f *formatter, indent string) +} + +func (f *formatter) compactString(n node) string { + switch k := n.(type) { + case stringVal: + return string(k) + case rawVal: + return string(k) + } + + buf := new(bytes.Buffer) + f2 := newFormatter(&Config{Compact: true}, buf) + f2.tagNumbers = f.tagNumbers // reuse tagNumbers just in case + f2.write(n) + return buf.String() +} + +type stringVal string + +func (str stringVal) format(f *formatter, indent string) { + f.WriteString(strconv.Quote(string(str))) +} + +type rawVal string + +func (r rawVal) format(f *formatter, indent string) { + f.WriteString(string(r)) +} + +type keyval struct { + key string + val node +} + +type keyvals []keyval + +func (l keyvals) format(f *formatter, indent string) { + f.WriteByte('{') + + switch { + case f.Compact: + // All on one line: + for i, kv := range l { + if i > 0 { + f.WriteByte(',') + } + f.WriteString(kv.key) + f.WriteByte(':') + kv.val.format(f, indent) + } + case f.Diffable: + f.WriteByte('\n') + inner := indent + " " + // Each value gets its own line: + for _, kv := range l { + f.WriteString(inner) + f.WriteString(kv.key) + f.WriteString(": ") + kv.val.format(f, inner) + f.WriteString(",\n") + } + f.WriteString(indent) + default: + keyWidth := 0 + for _, kv := range l { + if kw := len(kv.key); kw > keyWidth { + keyWidth = kw + } + } + alignKey := 
indent + " " + alignValue := strings.Repeat(" ", keyWidth) + inner := alignKey + alignValue + " " + // First and last line shared with bracket: + for i, kv := range l { + if i > 0 { + f.WriteString(",\n") + f.WriteString(alignKey) + } + f.WriteString(kv.key) + f.WriteString(": ") + f.WriteString(alignValue[len(kv.key):]) + kv.val.format(f, inner) + } + } + + f.WriteByte('}') +} + +type list []node + +func (l list) format(f *formatter, indent string) { + if max := f.ShortList; max > 0 { + short := f.compactString(l) + if len(short) <= max { + f.WriteString(short) + return + } + } + + f.WriteByte('[') + + switch { + case f.Compact: + // All on one line: + for i, v := range l { + if i > 0 { + f.WriteByte(',') + } + v.format(f, indent) + } + case f.Diffable: + f.WriteByte('\n') + inner := indent + " " + // Each value gets its own line: + for _, v := range l { + f.WriteString(inner) + v.format(f, inner) + f.WriteString(",\n") + } + f.WriteString(indent) + default: + inner := indent + " " + // First and last line shared with bracket: + for i, v := range l { + if i > 0 { + f.WriteString(",\n") + f.WriteString(inner) + } + v.format(f, inner) + } + } + + f.WriteByte(']') +} + +type ref struct { + id int +} + +func (r ref) format(f *formatter, indent string) { + fmt.Fprintf(f, "", f.tagFor(r.id)) +} + +type target struct { + id int + value node +} + +func (t target) format(f *formatter, indent string) { + tag := fmt.Sprintf("<#%d> ", f.tagFor(t.id)) + switch { + case f.Diffable, f.Compact: + // no indent changes + default: + indent += strings.Repeat(" ", len(tag)) + } + f.WriteString(tag) + t.value.format(f, indent) +} diff --git a/cluster-autoscaler/vendor/github.com/pkg/browser/LICENSE b/cluster-autoscaler/vendor/github.com/pkg/browser/LICENSE new file mode 100644 index 000000000000..65f78fb62910 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/pkg/browser/LICENSE @@ -0,0 +1,23 @@ +Copyright (c) 2014, Dave Cheney +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are met: + +* Redistributions of source code must retain the above copyright notice, this + list of conditions and the following disclaimer. + +* Redistributions in binary form must reproduce the above copyright notice, + this list of conditions and the following disclaimer in the documentation + and/or other materials provided with the distribution. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" +AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE +IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE +DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE +FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL +DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR +SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER +CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, +OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
diff --git a/cluster-autoscaler/vendor/github.com/pkg/browser/README.md b/cluster-autoscaler/vendor/github.com/pkg/browser/README.md
new file mode 100644
index 000000000000..72b1976e3035
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/pkg/browser/README.md
@@ -0,0 +1,55 @@
+
+# browser
+    import "github.com/pkg/browser"
+
+Package browser provides helpers to open files, readers, and urls in a browser window.
+
+The choice of which browser is started is entirely client dependent.
+
+
+
+
+
+## Variables
+``` go
+var Stderr io.Writer = os.Stderr
+```
+Stderr is the io.Writer to which executed commands write standard error.
+
+``` go
+var Stdout io.Writer = os.Stdout
+```
+Stdout is the io.Writer to which executed commands write standard output.
+
+
+## func OpenFile
+``` go
+func OpenFile(path string) error
+```
+OpenFile opens a new browser window for the file path.
+
+
+## func OpenReader
+``` go
+func OpenReader(r io.Reader) error
+```
+OpenReader consumes the contents of r and presents the
+results in a new browser window.
+
+
+## func OpenURL
+``` go
+func OpenURL(url string) error
+```
+OpenURL opens a new browser window pointing to url.
+
+
+
+
+
+
+
+
+
+- - -
+Generated by [godoc2md](http://godoc.org/github.com/davecheney/godoc2md)
diff --git a/cluster-autoscaler/vendor/github.com/pkg/browser/browser.go b/cluster-autoscaler/vendor/github.com/pkg/browser/browser.go
new file mode 100644
index 000000000000..d7969d74d80d
--- /dev/null
+++ b/cluster-autoscaler/vendor/github.com/pkg/browser/browser.go
@@ -0,0 +1,57 @@
+// Package browser provides helpers to open files, readers, and urls in a browser window.
+//
+// The choice of which browser is started is entirely client dependent.
+package browser
+
+import (
+	"fmt"
+	"io"
+	"io/ioutil"
+	"os"
+	"os/exec"
+	"path/filepath"
+)
+
+// Stdout is the io.Writer to which executed commands write standard output.
+var Stdout io.Writer = os.Stdout
+
+// Stderr is the io.Writer to which executed commands write standard error.
+var Stderr io.Writer = os.Stderr
+
+// OpenFile opens a new browser window for the file path.
+func OpenFile(path string) error {
+	path, err := filepath.Abs(path)
+	if err != nil {
+		return err
+	}
+	return OpenURL("file://" + path)
+}
+
+// OpenReader consumes the contents of r and presents the
+// results in a new browser window.
+func OpenReader(r io.Reader) error {
+	f, err := ioutil.TempFile("", "browser.*.html")
+	if err != nil {
+		return fmt.Errorf("browser: could not create temporary file: %v", err)
+	}
+	if _, err := io.Copy(f, r); err != nil {
+		f.Close()
+		return fmt.Errorf("browser: caching temporary file failed: %v", err)
+	}
+	if err := f.Close(); err != nil {
+		return fmt.Errorf("browser: caching temporary file failed: %v", err)
+	}
+	return OpenFile(f.Name())
+}
+
+// OpenURL opens a new browser window pointing to url.
+func OpenURL(url string) error {
+	return openBrowser(url)
+}
+
+func runCmd(prog string, args ...string) error {
+	cmd := exec.Command(prog, args...)
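+	// Route the child process's output to the package-level writers so
+	// callers can capture or redirect it (editor's note).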
+ cmd.Stdout = Stdout + cmd.Stderr = Stderr + return cmd.Run() +} diff --git a/cluster-autoscaler/vendor/github.com/pkg/browser/browser_darwin.go b/cluster-autoscaler/vendor/github.com/pkg/browser/browser_darwin.go new file mode 100644 index 000000000000..8507cf7c2b45 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/pkg/browser/browser_darwin.go @@ -0,0 +1,5 @@ +package browser + +func openBrowser(url string) error { + return runCmd("open", url) +} diff --git a/cluster-autoscaler/vendor/github.com/pkg/browser/browser_freebsd.go b/cluster-autoscaler/vendor/github.com/pkg/browser/browser_freebsd.go new file mode 100644 index 000000000000..4fc7ff0761b4 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/pkg/browser/browser_freebsd.go @@ -0,0 +1,14 @@ +package browser + +import ( + "errors" + "os/exec" +) + +func openBrowser(url string) error { + err := runCmd("xdg-open", url) + if e, ok := err.(*exec.Error); ok && e.Err == exec.ErrNotFound { + return errors.New("xdg-open: command not found - install xdg-utils from ports(8)") + } + return err +} diff --git a/cluster-autoscaler/vendor/github.com/pkg/browser/browser_linux.go b/cluster-autoscaler/vendor/github.com/pkg/browser/browser_linux.go new file mode 100644 index 000000000000..d26cdddf9c15 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/pkg/browser/browser_linux.go @@ -0,0 +1,21 @@ +package browser + +import ( + "os/exec" + "strings" +) + +func openBrowser(url string) error { + providers := []string{"xdg-open", "x-www-browser", "www-browser"} + + // There are multiple possible providers to open a browser on linux + // One of them is xdg-open, another is x-www-browser, then there's www-browser, etc. + // Look for one that exists and run it + for _, provider := range providers { + if _, err := exec.LookPath(provider); err == nil { + return runCmd(provider, url) + } + } + + return &exec.Error{Name: strings.Join(providers, ","), Err: exec.ErrNotFound} +} diff --git a/cluster-autoscaler/vendor/github.com/pkg/browser/browser_netbsd.go b/cluster-autoscaler/vendor/github.com/pkg/browser/browser_netbsd.go new file mode 100644 index 000000000000..65a5e5a29342 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/pkg/browser/browser_netbsd.go @@ -0,0 +1,14 @@ +package browser + +import ( + "errors" + "os/exec" +) + +func openBrowser(url string) error { + err := runCmd("xdg-open", url) + if e, ok := err.(*exec.Error); ok && e.Err == exec.ErrNotFound { + return errors.New("xdg-open: command not found - install xdg-utils from pkgsrc(7)") + } + return err +} diff --git a/cluster-autoscaler/vendor/github.com/pkg/browser/browser_openbsd.go b/cluster-autoscaler/vendor/github.com/pkg/browser/browser_openbsd.go new file mode 100644 index 000000000000..4fc7ff0761b4 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/pkg/browser/browser_openbsd.go @@ -0,0 +1,14 @@ +package browser + +import ( + "errors" + "os/exec" +) + +func openBrowser(url string) error { + err := runCmd("xdg-open", url) + if e, ok := err.(*exec.Error); ok && e.Err == exec.ErrNotFound { + return errors.New("xdg-open: command not found - install xdg-utils from ports(8)") + } + return err +} diff --git a/cluster-autoscaler/vendor/github.com/pkg/browser/browser_unsupported.go b/cluster-autoscaler/vendor/github.com/pkg/browser/browser_unsupported.go new file mode 100644 index 000000000000..7c5c17d34d26 --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/pkg/browser/browser_unsupported.go @@ -0,0 +1,12 @@ +// +build 
!linux,!windows,!darwin,!openbsd,!freebsd,!netbsd + +package browser + +import ( + "fmt" + "runtime" +) + +func openBrowser(url string) error { + return fmt.Errorf("openBrowser: unsupported operating system: %v", runtime.GOOS) +} diff --git a/cluster-autoscaler/vendor/github.com/pkg/browser/browser_windows.go b/cluster-autoscaler/vendor/github.com/pkg/browser/browser_windows.go new file mode 100644 index 000000000000..63e192959a5e --- /dev/null +++ b/cluster-autoscaler/vendor/github.com/pkg/browser/browser_windows.go @@ -0,0 +1,7 @@ +package browser + +import "golang.org/x/sys/windows" + +func openBrowser(url string) error { + return windows.ShellExecute(0, nil, windows.StringToUTF16Ptr(url), nil, nil, windows.SW_SHOWNORMAL) +} diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/common.go b/cluster-autoscaler/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/common.go index 303e5505e411..9509014e87c0 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/common.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/common.go @@ -34,7 +34,7 @@ const ( RequestCount = "http.server.request_count" // Incoming request count total RequestContentLength = "http.server.request_content_length" // Incoming request bytes total ResponseContentLength = "http.server.response_content_length" // Incoming response bytes total - ServerLatency = "http.server.duration" // Incoming end to end duration, microseconds + ServerLatency = "http.server.duration" // Incoming end to end duration, milliseconds ) // Filter is a predicate used to determine whether a given http.request should @@ -42,5 +42,5 @@ const ( type Filter func(*http.Request) bool func newTracer(tp trace.TracerProvider) trace.Tracer { - return tp.Tracer(instrumentationName, trace.WithInstrumentationVersion(Version())) + return tp.Tracer(ScopeName, trace.WithInstrumentationVersion(Version())) } diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/config.go b/cluster-autoscaler/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/config.go index e4fa1b8d9d61..a1b5b5e5aa8e 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/config.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/config.go @@ -25,9 +25,8 @@ import ( "go.opentelemetry.io/otel/trace" ) -const ( - instrumentationName = "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp" -) +// ScopeName is the instrumentation scope name. +const ScopeName = "go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp" // config represents the configuration options available for the http.Handler // and http.Transport types. 
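For orientation, a minimal usage sketch of the pkg/browser package vendored above; the URL and HTML snippet are illustrative only:

```go
package main

import (
	"log"
	"strings"

	"github.com/pkg/browser"
)

func main() {
	// OpenURL shells out to the platform opener shown above: "open" on
	// darwin, ShellExecute on windows, and on linux the first of
	// xdg-open, x-www-browser, www-browser found on PATH.
	if err := browser.OpenURL("https://example.com"); err != nil {
		log.Fatal(err)
	}

	// OpenReader copies the reader into a temporary *.html file and then
	// opens that file via a file:// URL.
	if err := browser.OpenReader(strings.NewReader("<h1>hello</h1>")); err != nil {
		log.Fatal(err)
	}
}
```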
@@ -76,7 +75,7 @@ func newConfig(opts ...Option) *config { } c.Meter = c.MeterProvider.Meter( - instrumentationName, + ScopeName, metric.WithInstrumentationVersion(Version()), ) diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/handler.go b/cluster-autoscaler/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/handler.go index b2fbe07841ca..9a8260059d99 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/handler.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/handler.go @@ -107,13 +107,25 @@ func (h *middleware) createMeasures() { h.counters = make(map[string]metric.Int64Counter) h.valueRecorders = make(map[string]metric.Float64Histogram) - requestBytesCounter, err := h.meter.Int64Counter(RequestContentLength) + requestBytesCounter, err := h.meter.Int64Counter( + RequestContentLength, + metric.WithUnit("By"), + metric.WithDescription("Measures the size of HTTP request content length (uncompressed)"), + ) handleErr(err) - responseBytesCounter, err := h.meter.Int64Counter(ResponseContentLength) + responseBytesCounter, err := h.meter.Int64Counter( + ResponseContentLength, + metric.WithUnit("By"), + metric.WithDescription("Measures the size of HTTP response content length (uncompressed)"), + ) handleErr(err) - serverLatencyMeasure, err := h.meter.Float64Histogram(ServerLatency) + serverLatencyMeasure, err := h.meter.Float64Histogram( + ServerLatency, + metric.WithUnit("ms"), + metric.WithDescription("Measures the duration of HTTP request handling"), + ) handleErr(err) h.counters[RequestContentLength] = requestBytesCounter diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/version.go b/cluster-autoscaler/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/version.go index 8f3f53a9588e..bd41c1804210 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/version.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/version.go @@ -16,7 +16,7 @@ package otelhttp // import "go.opentelemetry.io/contrib/instrumentation/net/http // Version is the current release version of the otelhttp instrumentation. 
 func Version() string {
-	return "0.44.0"
+	return "0.46.1"
 	// This string is updated by the pre_release.sh script during release
 }
diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/.gitignore b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/.gitignore
index f3355c852be8..895c7664beb5 100644
--- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/.gitignore
+++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/.gitignore
@@ -14,12 +14,9 @@ go.work.sum
 gen/
 /example/dice/dice
-/example/fib/fib
-/example/fib/traces.txt
-/example/jaeger/jaeger
 /example/namedtracer/namedtracer
+/example/otel-collector/otel-collector
 /example/opencensus/opencensus
 /example/passthrough/passthrough
 /example/prometheus/prometheus
 /example/zipkin/zipkin
-/example/otel-collector/otel-collector
diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/.golangci.yml b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/.golangci.yml
index 6e8eeec00faf..a62511f382e2 100644
--- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/.golangci.yml
+++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/.golangci.yml
@@ -12,8 +12,9 @@ linters:
     - depguard
     - errcheck
     - godot
-    - gofmt
+    - gofumpt
     - goimports
+    - gosec
     - gosimple
     - govet
     - ineffassign
@@ -53,6 +54,20 @@ issues:
       text: "calls to (.+) only in main[(][)] or init[(][)] functions"
       linters:
         - revive
+    # It's okay to not run gosec in a test.
+    - path: _test\.go
+      linters:
+        - gosec
+    # Ignoring gosec G404: Use of weak random number generator (math/rand instead of crypto/rand)
+    # as we commonly use it in tests and examples.
+    - text: "G404:"
+      linters:
+        - gosec
+    # Ignoring gosec G402: TLS MinVersion too low
+    # as the https://pkg.go.dev/crypto/tls#Config handles MinVersion default well.
+    - text: "G402: TLS MinVersion too low."
+      linters:
+        - gosec
   include:
     # revive exported should have comment or be unexported.
     - EXC0012
diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/CHANGELOG.md b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/CHANGELOG.md
index 3e5c35b5dcc6..24874f856e35 100644
--- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/CHANGELOG.md
+++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/CHANGELOG.md
@@ -8,6 +8,85 @@ This project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
 ## [Unreleased]
 
+## [1.21.0/0.44.0] 2023-11-16
+
+### Removed
+
+- Remove the deprecated `go.opentelemetry.io/otel/bridge/opencensus.NewTracer`. (#4706)
+- Remove the deprecated `go.opentelemetry.io/otel/exporters/otlp/otlpmetric` module. (#4707)
+- Remove the deprecated `go.opentelemetry.io/otel/example/view` module. (#4708)
+- Remove the deprecated `go.opentelemetry.io/otel/example/fib` module. (#4723)
+
+### Fixed
+
+- Do not parse non-protobuf responses in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp`. (#4719)
+- Do not parse non-protobuf responses in `go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp`. (#4719)
+
+## [1.20.0/0.43.0] 2023-11-10
+
+This release brings a breaking change for custom trace API implementations.
+Some interfaces (`TracerProvider`, `Tracer`, `Span`) now embed the `go.opentelemetry.io/otel/trace/embedded` types.
+Implementors need to update their implementations based on what they want the default behavior to be.
+See the "API Implementations" section of the [trace API] package documentation for more information about how to accomplish this.
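To illustrate the migration described above, here is a minimal sketch of a custom provider that compiles against the new API by embedding the `embedded.TracerProvider` type; the `customProvider` name is hypothetical and the no-op delegate is just a placeholder for a real implementation:

```go
package example

import (
	"go.opentelemetry.io/otel/trace"
	"go.opentelemetry.io/otel/trace/embedded"
	"go.opentelemetry.io/otel/trace/noop"
)

// customProvider satisfies trace.TracerProvider only because it embeds
// embedded.TracerProvider; without the embedded type, external
// implementations no longer compile against the v1.20.0+ trace API.
type customProvider struct {
	embedded.TracerProvider
}

// Tracer delegates to the new no-op implementation; a real provider
// would return its own tracer here.
func (p customProvider) Tracer(name string, opts ...trace.TracerOption) trace.Tracer {
	return noop.NewTracerProvider().Tracer(name, opts...)
}

var _ trace.TracerProvider = customProvider{}
```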
+
+### Added
+
+- Add `go.opentelemetry.io/otel/bridge/opencensus.InstallTraceBridge`, which installs the OpenCensus trace bridge, and replaces `opencensus.NewTracer`. (#4567)
+- Add scope version to trace and metric bridges in `go.opentelemetry.io/otel/bridge/opencensus`. (#4584)
+- Add the `go.opentelemetry.io/otel/trace/embedded` package to be embedded in the exported trace API interfaces. (#4620)
+- Add the `go.opentelemetry.io/otel/trace/noop` package as a default no-op implementation of the trace API. (#4620)
+- Add context propagation in `go.opentelemetry.io/otel/example/dice`. (#4644)
+- Add view configuration to `go.opentelemetry.io/otel/example/prometheus`. (#4649)
+- Add `go.opentelemetry.io/otel/metric.WithExplicitBucketBoundaries`, which allows defining default explicit bucket boundaries when creating histogram instruments. (#4603)
+- Add `Version` function in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc`. (#4660)
+- Add `Version` function in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp`. (#4660)
+- Add Summary, SummaryDataPoint, and QuantileValue to `go.opentelemetry.io/otel/sdk/metric/metricdata`. (#4622)
+- `go.opentelemetry.io/otel/bridge/opencensus.NewMetricProducer` now supports exemplars from OpenCensus. (#4585)
+- Add support for `WithExplicitBucketBoundaries` in `go.opentelemetry.io/otel/sdk/metric`. (#4605)
+- Add support for Summary metrics in `go.opentelemetry.io/otel/bridge/opencensus`. (#4668)
+
+### Deprecated
+
+- Deprecate `go.opentelemetry.io/otel/bridge/opencensus.NewTracer` in favor of `opencensus.InstallTraceBridge`. (#4567)
+- Deprecate the `go.opentelemetry.io/otel/example/fib` package in favor of `go.opentelemetry.io/otel/example/dice`. (#4618)
+- Deprecate `go.opentelemetry.io/otel/trace.NewNoopTracerProvider`.
+  Use the added `NewTracerProvider` function in `go.opentelemetry.io/otel/trace/noop` instead. (#4620)
+- Deprecate the `go.opentelemetry.io/otel/example/view` package in favor of `go.opentelemetry.io/otel/example/prometheus`. (#4649)
+- Deprecate `go.opentelemetry.io/otel/exporters/otlp/otlpmetric`. (#4693)
+
+### Changed
+
+- `go.opentelemetry.io/otel/bridge/opencensus.NewMetricProducer` returns a `*MetricProducer` struct instead of the metric.Producer interface. (#4583)
+- The `TracerProvider` in `go.opentelemetry.io/otel/trace` now embeds the `go.opentelemetry.io/otel/trace/embedded.TracerProvider` type.
+  This extends the `TracerProvider` interface and is a breaking change for any existing implementation.
+  Implementors need to update their implementations based on what they want the default behavior of the interface to be.
+  See the "API Implementations" section of the `go.opentelemetry.io/otel/trace` package documentation for more information about how to accomplish this. (#4620)
+- The `Tracer` in `go.opentelemetry.io/otel/trace` now embeds the `go.opentelemetry.io/otel/trace/embedded.Tracer` type.
+  This extends the `Tracer` interface and is a breaking change for any existing implementation.
+  Implementors need to update their implementations based on what they want the default behavior of the interface to be.
+  See the "API Implementations" section of the `go.opentelemetry.io/otel/trace` package documentation for more information about how to accomplish this. (#4620)
+- The `Span` in `go.opentelemetry.io/otel/trace` now embeds the `go.opentelemetry.io/otel/trace/embedded.Span` type.
+  This extends the `Span` interface and is a breaking change for any existing implementation.
+  Implementors need to update their implementations based on what they want the default behavior of the interface to be.
+  See the "API Implementations" section of the `go.opentelemetry.io/otel/trace` package documentation for more information about how to accomplish this. (#4620)
+- `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc` no longer depends on `go.opentelemetry.io/otel/exporters/otlp/otlpmetric`. (#4660)
+- `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp` no longer depends on `go.opentelemetry.io/otel/exporters/otlp/otlpmetric`. (#4660)
+- Retry for `502 Bad Gateway` and `504 Gateway Timeout` HTTP statuses in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp`. (#4670)
+- Retry for `502 Bad Gateway` and `504 Gateway Timeout` HTTP statuses in `go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp`. (#4670)
+- Retry for `RESOURCE_EXHAUSTED` only if RetryInfo is returned in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc`. (#4669)
+- Retry for `RESOURCE_EXHAUSTED` only if RetryInfo is returned in `go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc`. (#4669)
+- Retry temporary HTTP request failures in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp`. (#4679)
+- Retry temporary HTTP request failures in `go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp`. (#4679)
+
+### Fixed
+
+- Fix improper parsing of characters such as `+`, `/` by `Parse` in `go.opentelemetry.io/otel/baggage` as they were rendered as whitespace. (#4667)
+- Fix improper parsing of characters such as `+`, `/` passed via `OTEL_RESOURCE_ATTRIBUTES` in `go.opentelemetry.io/otel/sdk/resource` as they were rendered as whitespace. (#4699)
+- Fix improper parsing of characters such as `+`, `/` passed via `OTEL_EXPORTER_OTLP_HEADERS` and `OTEL_EXPORTER_OTLP_METRICS_HEADERS` in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc` as they were rendered as whitespace. (#4699)
+- Fix improper parsing of characters such as `+`, `/` passed via `OTEL_EXPORTER_OTLP_HEADERS` and `OTEL_EXPORTER_OTLP_METRICS_HEADERS` in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp` as they were rendered as whitespace. (#4699)
+- Fix improper parsing of characters such as `+`, `/` passed via `OTEL_EXPORTER_OTLP_HEADERS` and `OTEL_EXPORTER_OTLP_TRACES_HEADERS` in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlptracegrpc` as they were rendered as whitespace. (#4699)
+- Fix improper parsing of characters such as `+`, `/` passed via `OTEL_EXPORTER_OTLP_HEADERS` and `OTEL_EXPORTER_OTLP_TRACES_HEADERS` in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlptracehttp` as they were rendered as whitespace. (#4699)
+- In `go.opentelemetry.io/otel/exporters/prometheus`, the exporter no longer `Collect`s metrics after `Shutdown` is invoked. (#4648)
+- Fix documentation for `WithCompressor` in `go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc`. (#4695)
+- Fix documentation for `WithCompressor` in `go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc`. (#4695)
+
 ## [1.19.0/0.42.0/0.0.7] 2023-09-28
 
 This release contains the first stable release of the OpenTelemetry Go [metric SDK].
@@ -2656,7 +2735,9 @@ It contains api and sdk for trace and meter.
 
 - CircleCI build CI manifest files.
 - CODEOWNERS file to track owners of this project.
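The `RESOURCE_EXHAUSTED` entries above correspond to the exporter client change further down in this patch: that status is now retried only when the server attaches a RetryInfo detail. A minimal sketch of the check, using the same status and errdetails packages the vendored client imports; the function names here are illustrative:

```go
package retry

import (
	"time"

	"google.golang.org/genproto/googleapis/rpc/errdetails"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// retryInfoDelay reports whether s carries a RetryInfo detail and, if so,
// the server-requested backoff before the next attempt.
func retryInfoDelay(s *status.Status) (bool, time.Duration) {
	for _, detail := range s.Details() {
		if t, ok := detail.(*errdetails.RetryInfo); ok {
			return true, t.RetryDelay.AsDuration()
		}
	}
	return false, 0
}

// shouldRetry treats RESOURCE_EXHAUSTED as retryable only when the server
// signals that recovery is possible by including RetryInfo.
func shouldRetry(err error) (bool, time.Duration) {
	s := status.Convert(err)
	if s.Code() == codes.ResourceExhausted {
		return retryInfoDelay(s)
	}
	return false, 0
}
```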
-[Unreleased]: https://github.com/open-telemetry/opentelemetry-go/compare/v1.19.0...HEAD
+[Unreleased]: https://github.com/open-telemetry/opentelemetry-go/compare/v1.21.0...HEAD
+[1.21.0/0.44.0]: https://github.com/open-telemetry/opentelemetry-go/releases/tag/v1.21.0
+[1.20.0/0.43.0]: https://github.com/open-telemetry/opentelemetry-go/releases/tag/v1.20.0
 [1.19.0/0.42.0/0.0.7]: https://github.com/open-telemetry/opentelemetry-go/releases/tag/v1.19.0
 [1.19.0-rc.1/0.42.0-rc.1]: https://github.com/open-telemetry/opentelemetry-go/releases/tag/v1.19.0-rc.1
 [1.18.0/0.41.0/0.0.6]: https://github.com/open-telemetry/opentelemetry-go/releases/tag/v1.18.0
@@ -2731,7 +2812,7 @@ It contains api and sdk for trace and meter.
 [Go 1.20]: https://go.dev/doc/go1.20
 [Go 1.19]: https://go.dev/doc/go1.19
 [Go 1.18]: https://go.dev/doc/go1.18
-[Go 1.19]: https://go.dev/doc/go1.19
 [metric API]:https://pkg.go.dev/go.opentelemetry.io/otel/metric
 [metric SDK]:https://pkg.go.dev/go.opentelemetry.io/otel/sdk/metric
+[trace API]:https://pkg.go.dev/go.opentelemetry.io/otel/trace
diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/CONTRIBUTING.md b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/CONTRIBUTING.md
index a00dbca7b083..850606ae6924 100644
--- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/CONTRIBUTING.md
+++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/CONTRIBUTING.md
@@ -90,6 +90,10 @@ git push
 Open a pull request against the main `opentelemetry-go` repo. Be sure to add the pull
 request ID to the entry you added to `CHANGELOG.md`.
 
+Avoid rebasing and force-pushing to your branch to facilitate reviewing the pull request.
+Rewriting Git history makes it difficult to keep track of iterations during code review.
+All pull requests are squashed to a single commit upon merge to `main`.
+
 ### How to Receive Comments
 
 * If the PR is not ready for review, please put `[WIP]` in the title,
diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/Makefile b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/Makefile
index 5c311706b0c3..35fc189961b6 100644
--- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/Makefile
+++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/Makefile
@@ -77,6 +77,9 @@ $(GOTMPL): PACKAGE=go.opentelemetry.io/build-tools/gotmpl
 GORELEASE = $(TOOLS)/gorelease
 $(GORELEASE): PACKAGE=golang.org/x/exp/cmd/gorelease
 
+GOVULNCHECK = $(TOOLS)/govulncheck
+$(TOOLS)/govulncheck: PACKAGE=golang.org/x/vuln/cmd/govulncheck
+
 .PHONY: tools
 tools: $(CROSSLINK) $(DBOTCONF) $(GOLANGCI_LINT) $(MISSPELL) $(GOCOVMERGE) $(STRINGER) $(PORTO) $(GOJQ) $(SEMCONVGEN) $(MULTIMOD) $(SEMCONVKIT) $(GOTMPL) $(GORELEASE)
@@ -189,6 +192,18 @@ test-coverage: | $(GOCOVMERGE)
 	done; \
 	$(GOCOVMERGE) $$(find . -name coverage.out) > coverage.txt
 
+# Adding a directory will include all benchmarks in that directory if a filter is not specified.
+BENCHMARK_TARGETS := sdk/trace
+.PHONY: benchmark
+benchmark: $(BENCHMARK_TARGETS:%=benchmark/%)
+BENCHMARK_FILTER = .
+# You can override the filter for a particular directory by adding a rule here.
+benchmark/sdk/trace: BENCHMARK_FILTER = SpanWithAttributes_8/AlwaysSample
+benchmark/%:
+	@echo "$(GO) test -timeout $(TIMEOUT)s -run=xxxxxMatchNothingxxxxx -bench=$(BENCHMARK_FILTER) $*..."
\ + && cd $* \ + $(foreach filter, $(BENCHMARK_FILTER), && $(GO) test -timeout $(TIMEOUT)s -run=xxxxxMatchNothingxxxxx -bench=$(filter)) + .PHONY: golangci-lint golangci-lint-fix golangci-lint-fix: ARGS=--fix golangci-lint-fix: golangci-lint @@ -216,7 +231,7 @@ go-mod-tidy/%: | crosslink lint-modules: go-mod-tidy .PHONY: lint -lint: misspell lint-modules golangci-lint +lint: misspell lint-modules golangci-lint govulncheck .PHONY: vanity-import-check vanity-import-check: | $(PORTO) @@ -226,6 +241,14 @@ vanity-import-check: | $(PORTO) misspell: | $(MISSPELL) @$(MISSPELL) -w $(ALL_DOCS) +.PHONY: govulncheck +govulncheck: $(OTEL_GO_MOD_DIRS:%=govulncheck/%) +govulncheck/%: DIR=$* +govulncheck/%: | $(GOVULNCHECK) + @echo "govulncheck ./... in $(DIR)" \ + && cd $(DIR) \ + && $(GOVULNCHECK) ./... + .PHONY: codespell codespell: | $(CODESPELL) @$(DOCKERPY) $(CODESPELL) @@ -289,3 +312,7 @@ COMMIT ?= "HEAD" add-tags: | $(MULTIMOD) @[ "${MODSET}" ] || ( echo ">> env var MODSET is not set"; exit 1 ) $(MULTIMOD) verify && $(MULTIMOD) tag -m ${MODSET} -c ${COMMIT} + +.PHONY: lint-markdown +lint-markdown: + docker run -v "$(CURDIR):$(WORKDIR)" docker://avtodev/markdown-lint:v1 -c $(WORKDIR)/.markdownlint.yaml $(WORKDIR)/**/*.md diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/README.md b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/README.md index 634326ef833f..2c5b0cc28ab1 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/README.md +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/README.md @@ -11,16 +11,13 @@ It provides a set of APIs to directly measure performance and behavior of your s ## Project Status -| Signal | Status | Project | -|---------|------------|-----------------------| -| Traces | Stable | N/A | -| Metrics | Mixed [1] | [Go: Metric SDK (GA)] | -| Logs | Frozen [2] | N/A | +| Signal | Status | +|---------|------------| +| Traces | Stable | +| Metrics | Stable | +| Logs | Design [1] | -[Go: Metric SDK (GA)]: https://github.com/orgs/open-telemetry/projects/34 - -- [1]: [Metrics API](https://pkg.go.dev/go.opentelemetry.io/otel/metric) is Stable. [Metrics SDK](https://pkg.go.dev/go.opentelemetry.io/otel/sdk/metric) is Beta. -- [2]: The Logs signal development is halted for this project while we stabilize the Metrics SDK. +- [1]: Currently the logs signal development is in a design phase ([#4696](https://github.com/open-telemetry/opentelemetry-go/issues/4696)). No Logs Pull Requests are currently being accepted. Progress and status specific to this repository is tracked in our diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/baggage/baggage.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/baggage/baggage.go index 9e6b3b7b52af..84532cb1da34 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/baggage/baggage.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/baggage/baggage.go @@ -254,7 +254,7 @@ func NewMember(key, value string, props ...Property) (Member, error) { if err := m.validate(); err != nil { return newInvalidMember(), err } - decodedValue, err := url.QueryUnescape(value) + decodedValue, err := url.PathUnescape(value) if err != nil { return newInvalidMember(), fmt.Errorf("%w: %q", errInvalidValue, value) } @@ -301,7 +301,7 @@ func parseMember(member string) (Member, error) { // when converting the header into a data structure." 
key = strings.TrimSpace(k) var err error - value, err = url.QueryUnescape(strings.TrimSpace(v)) + value, err = url.PathUnescape(strings.TrimSpace(v)) if err != nil { return newInvalidMember(), fmt.Errorf("%w: %q", err, value) } diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/README.md b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/README.md deleted file mode 100644 index 50295223182c..000000000000 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/README.md +++ /dev/null @@ -1,51 +0,0 @@ -# OpenTelemetry-Go OTLP Span Exporter - -[![Go Reference](https://pkg.go.dev/badge/go.opentelemetry.io/otel/exporters/otlp/otlptrace.svg)](https://pkg.go.dev/go.opentelemetry.io/otel/exporters/otlp/otlptrace) - -[OpenTelemetry Protocol Exporter](https://github.com/open-telemetry/opentelemetry-specification/blob/v1.20.0/specification/protocol/exporter.md) implementation. - -## Installation - -``` -go get -u go.opentelemetry.io/otel/exporters/otlp/otlptrace -``` - -## Examples - -- [HTTP Exporter setup and examples](./otlptracehttp/example_test.go) -- [Full example of gRPC Exporter sending telemetry to a local collector](../../../example/otel-collector) - -## [`otlptrace`](https://pkg.go.dev/go.opentelemetry.io/otel/exporters/otlp/otlptrace) - -The `otlptrace` package provides an exporter implementing the OTel span exporter interface. -This exporter is configured using a client satisfying the `otlptrace.Client` interface. -This client handles the transformation of data into wire format and the transmission of that data to the collector. - -## [`otlptracegrpc`](https://pkg.go.dev/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc) - -The `otlptracegrpc` package implements a client for the span exporter that sends trace telemetry data to the collector using gRPC with protobuf-encoded payloads. - -## [`otlptracehttp`](https://pkg.go.dev/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp) - -The `otlptracehttp` package implements a client for the span exporter that sends trace telemetry data to the collector using HTTP with protobuf-encoded payloads. - -## Configuration - -### Environment Variables - -The following environment variables can be used (instead of options objects) to -override the default configuration. For more information about how each of -these environment variables is interpreted, see [the OpenTelemetry -specification](https://github.com/open-telemetry/opentelemetry-specification/blob/v1.20.0/specification/protocol/exporter.md). - -| Environment variable | Option | Default value | -| ------------------------------------------------------------------------ |------------------------------ | -------------------------------------------------------- | -| `OTEL_EXPORTER_OTLP_ENDPOINT` `OTEL_EXPORTER_OTLP_TRACES_ENDPOINT` | `WithEndpoint` `WithInsecure` | `https://localhost:4317` or `https://localhost:4318`[^1] | -| `OTEL_EXPORTER_OTLP_CERTIFICATE` `OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE` | `WithTLSClientConfig` | | -| `OTEL_EXPORTER_OTLP_HEADERS` `OTEL_EXPORTER_OTLP_TRACES_HEADERS` | `WithHeaders` | | -| `OTEL_EXPORTER_OTLP_COMPRESSION` `OTEL_EXPORTER_OTLP_TRACES_COMPRESSION` | `WithCompression` | | -| `OTEL_EXPORTER_OTLP_TIMEOUT` `OTEL_EXPORTER_OTLP_TRACES_TIMEOUT` | `WithTimeout` | `10s` | - -[^1]: The gRPC client defaults to `https://localhost:4317` and the HTTP client `https://localhost:4318`. 
- -Configuration using options have precedence over the environment variables. diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/doc.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/doc.go new file mode 100644 index 000000000000..9e642235ade4 --- /dev/null +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/doc.go @@ -0,0 +1,21 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +/* +Package otlptrace contains abstractions for OTLP span exporters. +See the official OTLP span exporter implementations: + - [go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc], + - [go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp]. +*/ +package otlptrace // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace" diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/exporter.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/exporter.go index 0dbe15555b34..b46a38d60ab0 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/exporter.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/exporter.go @@ -24,9 +24,7 @@ import ( tracesdk "go.opentelemetry.io/otel/sdk/trace" ) -var ( - errAlreadyStarted = errors.New("already started") -) +var errAlreadyStarted = errors.New("already started") // Exporter exports trace data in the OTLP wire format. type Exporter struct { @@ -55,7 +53,7 @@ func (e *Exporter) ExportSpans(ctx context.Context, ss []tracesdk.ReadOnlySpan) // Start establishes a connection to the receiving endpoint. func (e *Exporter) Start(ctx context.Context) error { - var err = errAlreadyStarted + err := errAlreadyStarted e.startOnce.Do(func() { e.mu.Lock() e.started = true diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/client.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/client.go index 86fb61a0dece..b4cc21d7a3c6 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/client.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/client.go @@ -260,30 +260,38 @@ func (c *client) exportContext(parent context.Context) (context.Context, context // duration to wait for if an explicit throttle time is included in err. func retryable(err error) (bool, time.Duration) { s := status.Convert(err) + return retryableGRPCStatus(s) +} + +func retryableGRPCStatus(s *status.Status) (bool, time.Duration) { switch s.Code() { case codes.Canceled, codes.DeadlineExceeded, - codes.ResourceExhausted, codes.Aborted, codes.OutOfRange, codes.Unavailable, codes.DataLoss: - return true, throttleDelay(s) + // Additionally handle RetryInfo. 
+		_, d := throttleDelay(s)
+		return true, d
+	case codes.ResourceExhausted:
+		// Retry only if the server signals that recovery from resource exhaustion is possible.
+		return throttleDelay(s)
 	}
 
 	// Not a retry-able error.
 	return false, 0
 }
 
-// throttleDelay returns a duration to wait for if an explicit throttle time
-// is included in the response status.
-func throttleDelay(s *status.Status) time.Duration {
+// throttleDelay reports whether the status carries a RetryInfo detail and,
+// if so, the explicit throttle duration to wait before retrying.
+func throttleDelay(s *status.Status) (bool, time.Duration) {
 	for _, detail := range s.Details() {
 		if t, ok := detail.(*errdetails.RetryInfo); ok {
-			return t.RetryDelay.AsDuration()
+			return true, t.RetryDelay.AsDuration()
 		}
 	}
-	return 0
+	return false, 0
 }
 
 // MarshalLog is the marshaling function used by the logging system to represent this Client.
diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/doc.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/doc.go
new file mode 100644
index 000000000000..1f514ef9ea38
--- /dev/null
+++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/doc.go
@@ -0,0 +1,77 @@
+// Copyright The OpenTelemetry Authors
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+/*
+Package otlptracegrpc provides an OTLP span exporter using gRPC.
+By default the telemetry is sent to https://localhost:4317.
+
+The Exporter should be created using [New].
+
+The environment variables described below can be used for configuration.
+
+OTEL_EXPORTER_OTLP_ENDPOINT, OTEL_EXPORTER_OTLP_TRACES_ENDPOINT (default: "https://localhost:4317") -
+target to which the exporter sends telemetry.
+The target syntax is defined in https://github.com/grpc/grpc/blob/master/doc/naming.md.
+The value must contain a host.
+The value may additionally contain a port, a scheme, and a path.
+The value accepts "http" and "https" schemes.
+The value should not contain a query string or fragment.
+OTEL_EXPORTER_OTLP_TRACES_ENDPOINT takes precedence over OTEL_EXPORTER_OTLP_ENDPOINT.
+The configuration can be overridden by [WithEndpoint], [WithInsecure], [WithGRPCConn] options.
+
+OTEL_EXPORTER_OTLP_INSECURE, OTEL_EXPORTER_OTLP_TRACES_INSECURE (default: "false") -
+setting "true" disables client transport security for the exporter's gRPC connection.
+You can use this only when an endpoint is provided without the http or https scheme.
+OTEL_EXPORTER_OTLP_ENDPOINT, OTEL_EXPORTER_OTLP_TRACES_ENDPOINT setting overrides
+the scheme defined via OTEL_EXPORTER_OTLP_ENDPOINT, OTEL_EXPORTER_OTLP_TRACES_ENDPOINT.
+OTEL_EXPORTER_OTLP_TRACES_INSECURE takes precedence over OTEL_EXPORTER_OTLP_INSECURE.
+The configuration can be overridden by [WithInsecure], [WithGRPCConn] options.
+
+OTEL_EXPORTER_OTLP_HEADERS, OTEL_EXPORTER_OTLP_TRACES_HEADERS (default: none) -
+key-value pairs used as gRPC metadata associated with gRPC requests.
+The value is expected to be represented in a format matching the [W3C Baggage HTTP Header Content Format],
+except that additional semi-colon delimited metadata is not supported.
+Example value: "key1=value1,key2=value2".
+OTEL_EXPORTER_OTLP_TRACES_HEADERS takes precedence over OTEL_EXPORTER_OTLP_HEADERS.
+The configuration can be overridden by the [WithHeaders] option.
+
+OTEL_EXPORTER_OTLP_TIMEOUT, OTEL_EXPORTER_OTLP_TRACES_TIMEOUT (default: "10000") -
+maximum time in milliseconds the OTLP exporter waits for each batch export.
+OTEL_EXPORTER_OTLP_TRACES_TIMEOUT takes precedence over OTEL_EXPORTER_OTLP_TIMEOUT.
+The configuration can be overridden by the [WithTimeout] option.
+
+OTEL_EXPORTER_OTLP_COMPRESSION, OTEL_EXPORTER_OTLP_TRACES_COMPRESSION (default: none) -
+the gRPC compressor the exporter uses.
+Supported value: "gzip".
+OTEL_EXPORTER_OTLP_TRACES_COMPRESSION takes precedence over OTEL_EXPORTER_OTLP_COMPRESSION.
+The configuration can be overridden by [WithCompressor], [WithGRPCConn] options.
+
+OTEL_EXPORTER_OTLP_CERTIFICATE, OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE (default: none) -
+the filepath to the trusted certificate to use when verifying a server's TLS credentials.
+OTEL_EXPORTER_OTLP_TRACES_CERTIFICATE takes precedence over OTEL_EXPORTER_OTLP_CERTIFICATE.
+The configuration can be overridden by [WithTLSCredentials], [WithGRPCConn] options.
+
+OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE, OTEL_EXPORTER_OTLP_TRACES_CLIENT_CERTIFICATE (default: none) -
+the filepath to the client certificate/chain trust for the client's private key to use in mTLS communication in PEM format.
+OTEL_EXPORTER_OTLP_TRACES_CLIENT_CERTIFICATE takes precedence over OTEL_EXPORTER_OTLP_CLIENT_CERTIFICATE.
+The configuration can be overridden by [WithTLSCredentials], [WithGRPCConn] options.
+
+OTEL_EXPORTER_OTLP_CLIENT_KEY, OTEL_EXPORTER_OTLP_TRACES_CLIENT_KEY (default: none) -
+the filepath to the client's private key to use in mTLS communication in PEM format.
+OTEL_EXPORTER_OTLP_TRACES_CLIENT_KEY takes precedence over OTEL_EXPORTER_OTLP_CLIENT_KEY.
+The configuration can be overridden by [WithTLSCredentials], [WithGRPCConn] options.
+ +[W3C Baggage HTTP Header Content Format]: https://www.w3.org/TR/baggage/#header-content +*/ +package otlptracegrpc // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc" diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/envconfig/envconfig.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/envconfig/envconfig.go index becb1f0fbbe6..5530119e4cf8 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/envconfig/envconfig.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/envconfig/envconfig.go @@ -174,13 +174,13 @@ func stringToHeader(value string) map[string]string { global.Error(errors.New("missing '="), "parse headers", "input", header) continue } - name, err := url.QueryUnescape(n) + name, err := url.PathUnescape(n) if err != nil { global.Error(err, "escape header key", "key", n) continue } trimmedName := strings.TrimSpace(name) - value, err := url.QueryUnescape(v) + value, err := url.PathUnescape(v) if err != nil { global.Error(err, "escape header value", "value", v) continue diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/otlpconfig/options.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/otlpconfig/options.go index 19b8434d4d2c..dddb1f334dec 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/otlpconfig/options.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/otlpconfig/options.go @@ -141,9 +141,6 @@ func NewGRPCConfig(opts ...GRPCOption) Config { if cfg.Traces.Compression == GzipCompression { cfg.DialOptions = append(cfg.DialOptions, grpc.WithDefaultCallOptions(grpc.UseCompressor(gzip.Name))) } - if len(cfg.DialOptions) != 0 { - cfg.DialOptions = append(cfg.DialOptions, cfg.DialOptions...) - } if cfg.ReconnectionPeriod != 0 { p := grpc.ConnectParams{ Backoff: backoff.DefaultConfig, diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/options.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/options.go index 78ce9ad8f0bc..17ffeaf6ef0e 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/options.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/options.go @@ -93,13 +93,7 @@ func compressorToCompression(compressor string) otlpconfig.Compression { } // WithCompressor sets the compressor for the gRPC client to use when sending -// requests. It is the responsibility of the caller to ensure that the -// compressor set has been registered with google.golang.org/grpc/encoding. -// This can be done by encoding.RegisterCompressor. Some compressors -// auto-register on import, such as gzip, which can be registered by calling -// `import _ "google.golang.org/grpc/encoding/gzip"`. -// -// This option has no effect if WithGRPCConn is used. +// requests. Supported compressor values: "gzip". 
func WithCompressor(compressor string) Option { return wrappedOption{otlpconfig.WithCompression(compressorToCompression(compressor))} } diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/version.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/version.go index 10ac73ee3b8e..8ee285b8d52a 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/version.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/exporters/otlp/otlptrace/version.go @@ -16,5 +16,5 @@ package otlptrace // import "go.opentelemetry.io/otel/exporters/otlp/otlptrace" // Version is the current release version of the OpenTelemetry OTLP trace exporter in use. func Version() string { - return "1.19.0" + return "1.21.0" } diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/internal/global/instruments.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/internal/global/instruments.go index a33eded872a3..ebb13c20678e 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/internal/global/instruments.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/internal/global/instruments.go @@ -34,11 +34,13 @@ type afCounter struct { name string opts []metric.Float64ObservableCounterOption - delegate atomic.Value //metric.Float64ObservableCounter + delegate atomic.Value // metric.Float64ObservableCounter } -var _ unwrapper = (*afCounter)(nil) -var _ metric.Float64ObservableCounter = (*afCounter)(nil) +var ( + _ unwrapper = (*afCounter)(nil) + _ metric.Float64ObservableCounter = (*afCounter)(nil) +) func (i *afCounter) setDelegate(m metric.Meter) { ctr, err := m.Float64ObservableCounter(i.name, i.opts...) @@ -63,11 +65,13 @@ type afUpDownCounter struct { name string opts []metric.Float64ObservableUpDownCounterOption - delegate atomic.Value //metric.Float64ObservableUpDownCounter + delegate atomic.Value // metric.Float64ObservableUpDownCounter } -var _ unwrapper = (*afUpDownCounter)(nil) -var _ metric.Float64ObservableUpDownCounter = (*afUpDownCounter)(nil) +var ( + _ unwrapper = (*afUpDownCounter)(nil) + _ metric.Float64ObservableUpDownCounter = (*afUpDownCounter)(nil) +) func (i *afUpDownCounter) setDelegate(m metric.Meter) { ctr, err := m.Float64ObservableUpDownCounter(i.name, i.opts...) @@ -92,11 +96,13 @@ type afGauge struct { name string opts []metric.Float64ObservableGaugeOption - delegate atomic.Value //metric.Float64ObservableGauge + delegate atomic.Value // metric.Float64ObservableGauge } -var _ unwrapper = (*afGauge)(nil) -var _ metric.Float64ObservableGauge = (*afGauge)(nil) +var ( + _ unwrapper = (*afGauge)(nil) + _ metric.Float64ObservableGauge = (*afGauge)(nil) +) func (i *afGauge) setDelegate(m metric.Meter) { ctr, err := m.Float64ObservableGauge(i.name, i.opts...) @@ -121,11 +127,13 @@ type aiCounter struct { name string opts []metric.Int64ObservableCounterOption - delegate atomic.Value //metric.Int64ObservableCounter + delegate atomic.Value // metric.Int64ObservableCounter } -var _ unwrapper = (*aiCounter)(nil) -var _ metric.Int64ObservableCounter = (*aiCounter)(nil) +var ( + _ unwrapper = (*aiCounter)(nil) + _ metric.Int64ObservableCounter = (*aiCounter)(nil) +) func (i *aiCounter) setDelegate(m metric.Meter) { ctr, err := m.Int64ObservableCounter(i.name, i.opts...) 
@@ -150,11 +158,13 @@ type aiUpDownCounter struct { name string opts []metric.Int64ObservableUpDownCounterOption - delegate atomic.Value //metric.Int64ObservableUpDownCounter + delegate atomic.Value // metric.Int64ObservableUpDownCounter } -var _ unwrapper = (*aiUpDownCounter)(nil) -var _ metric.Int64ObservableUpDownCounter = (*aiUpDownCounter)(nil) +var ( + _ unwrapper = (*aiUpDownCounter)(nil) + _ metric.Int64ObservableUpDownCounter = (*aiUpDownCounter)(nil) +) func (i *aiUpDownCounter) setDelegate(m metric.Meter) { ctr, err := m.Int64ObservableUpDownCounter(i.name, i.opts...) @@ -179,11 +189,13 @@ type aiGauge struct { name string opts []metric.Int64ObservableGaugeOption - delegate atomic.Value //metric.Int64ObservableGauge + delegate atomic.Value // metric.Int64ObservableGauge } -var _ unwrapper = (*aiGauge)(nil) -var _ metric.Int64ObservableGauge = (*aiGauge)(nil) +var ( + _ unwrapper = (*aiGauge)(nil) + _ metric.Int64ObservableGauge = (*aiGauge)(nil) +) func (i *aiGauge) setDelegate(m metric.Meter) { ctr, err := m.Int64ObservableGauge(i.name, i.opts...) @@ -208,7 +220,7 @@ type sfCounter struct { name string opts []metric.Float64CounterOption - delegate atomic.Value //metric.Float64Counter + delegate atomic.Value // metric.Float64Counter } var _ metric.Float64Counter = (*sfCounter)(nil) @@ -234,7 +246,7 @@ type sfUpDownCounter struct { name string opts []metric.Float64UpDownCounterOption - delegate atomic.Value //metric.Float64UpDownCounter + delegate atomic.Value // metric.Float64UpDownCounter } var _ metric.Float64UpDownCounter = (*sfUpDownCounter)(nil) @@ -260,7 +272,7 @@ type sfHistogram struct { name string opts []metric.Float64HistogramOption - delegate atomic.Value //metric.Float64Histogram + delegate atomic.Value // metric.Float64Histogram } var _ metric.Float64Histogram = (*sfHistogram)(nil) @@ -286,7 +298,7 @@ type siCounter struct { name string opts []metric.Int64CounterOption - delegate atomic.Value //metric.Int64Counter + delegate atomic.Value // metric.Int64Counter } var _ metric.Int64Counter = (*siCounter)(nil) @@ -312,7 +324,7 @@ type siUpDownCounter struct { name string opts []metric.Int64UpDownCounterOption - delegate atomic.Value //metric.Int64UpDownCounter + delegate atomic.Value // metric.Int64UpDownCounter } var _ metric.Int64UpDownCounter = (*siUpDownCounter)(nil) @@ -338,7 +350,7 @@ type siHistogram struct { name string opts []metric.Int64HistogramOption - delegate atomic.Value //metric.Int64Histogram + delegate atomic.Value // metric.Int64Histogram } var _ metric.Int64Histogram = (*siHistogram)(nil) diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/internal/global/trace.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/internal/global/trace.go index 5f008d0982be..3f61ec12a34f 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/internal/global/trace.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/internal/global/trace.go @@ -39,6 +39,7 @@ import ( "go.opentelemetry.io/otel/attribute" "go.opentelemetry.io/otel/codes" "go.opentelemetry.io/otel/trace" + "go.opentelemetry.io/otel/trace/embedded" ) // tracerProvider is a placeholder for a configured SDK TracerProvider. @@ -46,6 +47,8 @@ import ( // All TracerProvider functionality is forwarded to a delegate once // configured. type tracerProvider struct { + embedded.TracerProvider + mtx sync.Mutex tracers map[il]*tracer delegate trace.TracerProvider @@ -119,6 +122,8 @@ type il struct { // All Tracer functionality is forwarded to a delegate once configured. 
// Otherwise, all functionality is forwarded to a NoopTracer. type tracer struct { + embedded.Tracer + name string opts []trace.TracerOption provider *tracerProvider @@ -156,6 +161,8 @@ func (t *tracer) Start(ctx context.Context, name string, opts ...trace.SpanStart // SpanContext. It performs no operations other than to return the wrapped // SpanContext. type nonRecordingSpan struct { + embedded.Span + sc trace.SpanContext tracer *tracer } diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/metric/doc.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/metric/doc.go index ae24e448d91d..54716e13b355 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/metric/doc.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/metric/doc.go @@ -149,7 +149,7 @@ of [go.opentelemetry.io/otel/metric]. Finally, an author can embed another implementation in theirs. The embedded implementation will be used for methods not defined by the author. For example, -an author who want to default to silently dropping the call can use +an author who wants to default to silently dropping the call can use [go.opentelemetry.io/otel/metric/noop]: import "go.opentelemetry.io/otel/metric/noop" diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/metric/instrument.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/metric/instrument.go index cdca00058c68..be89cd533417 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/metric/instrument.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/metric/instrument.go @@ -39,6 +39,12 @@ type InstrumentOption interface { Float64ObservableGaugeOption } +// HistogramOption applies options to histogram instruments. +type HistogramOption interface { + Int64HistogramOption + Float64HistogramOption +} + type descOpt string func (o descOpt) applyFloat64Counter(c Float64CounterConfig) Float64CounterConfig { @@ -171,6 +177,23 @@ func (o unitOpt) applyInt64ObservableGauge(c Int64ObservableGaugeConfig) Int64Ob // The unit u should be defined using the appropriate [UCUM](https://ucum.org) case-sensitive code. func WithUnit(u string) InstrumentOption { return unitOpt(u) } +// WithExplicitBucketBoundaries sets the instrument explicit bucket boundaries. +// +// This option is considered "advisory", and may be ignored by API implementations. +func WithExplicitBucketBoundaries(bounds ...float64) HistogramOption { return bucketOpt(bounds) } + +type bucketOpt []float64 + +func (o bucketOpt) applyFloat64Histogram(c Float64HistogramConfig) Float64HistogramConfig { + c.explicitBucketBoundaries = o + return c +} + +func (o bucketOpt) applyInt64Histogram(c Int64HistogramConfig) Int64HistogramConfig { + c.explicitBucketBoundaries = o + return c +} + // AddOption applies options to an addition measurement. See // [MeasurementOption] for other options that can be used as an AddOption. type AddOption interface { diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/metric/syncfloat64.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/metric/syncfloat64.go index f0b063721d81..0a4825ae6a79 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/metric/syncfloat64.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/metric/syncfloat64.go @@ -147,8 +147,9 @@ type Float64Histogram interface { // Float64HistogramConfig contains options for synchronous counter instruments // that record int64 values. 
type Float64HistogramConfig struct { - description string - unit string + description string + unit string + explicitBucketBoundaries []float64 } // NewFloat64HistogramConfig returns a new [Float64HistogramConfig] with all @@ -171,6 +172,11 @@ func (c Float64HistogramConfig) Unit() string { return c.unit } +// ExplicitBucketBoundaries returns the configured explicit bucket boundaries. +func (c Float64HistogramConfig) ExplicitBucketBoundaries() []float64 { + return c.explicitBucketBoundaries +} + // Float64HistogramOption applies options to a [Float64HistogramConfig]. See // [InstrumentOption] for other options that can be used as a // Float64HistogramOption. diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/metric/syncint64.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/metric/syncint64.go index 6f508eb66d40..56667d32fc01 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/metric/syncint64.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/metric/syncint64.go @@ -147,8 +147,9 @@ type Int64Histogram interface { // Int64HistogramConfig contains options for synchronous counter instruments // that record int64 values. type Int64HistogramConfig struct { - description string - unit string + description string + unit string + explicitBucketBoundaries []float64 } // NewInt64HistogramConfig returns a new [Int64HistogramConfig] with all opts @@ -171,6 +172,11 @@ func (c Int64HistogramConfig) Unit() string { return c.unit } +// ExplicitBucketBoundaries returns the configured explicit bucket boundaries. +func (c Int64HistogramConfig) ExplicitBucketBoundaries() []float64 { + return c.explicitBucketBoundaries +} + // Int64HistogramOption applies options to a [Int64HistogramConfig]. See // [InstrumentOption] for other options that can be used as an // Int64HistogramOption. diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/propagation/trace_context.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/propagation/trace_context.go index 902692da082e..75a8f3435a52 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/propagation/trace_context.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/propagation/trace_context.go @@ -40,8 +40,10 @@ const ( // their proprietary information. type TraceContext struct{} -var _ TextMapPropagator = TraceContext{} -var traceCtxRegExp = regexp.MustCompile("^(?P[0-9a-f]{2})-(?P[a-f0-9]{32})-(?P[a-f0-9]{16})-(?P[a-f0-9]{2})(?:-.*)?$") +var ( + _ TextMapPropagator = TraceContext{} + traceCtxRegExp = regexp.MustCompile("^(?P[0-9a-f]{2})-(?P[a-f0-9]{32})-(?P[a-f0-9]{16})-(?P[a-f0-9]{2})(?:-.*)?$") +) // Inject set tracecontext from the Context into the carrier. 
func (tc TraceContext) Inject(ctx context.Context, carrier TextMapCarrier) { diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/requirements.txt b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/requirements.txt index ddff454685c8..e0a43e13840e 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/requirements.txt +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/requirements.txt @@ -1 +1 @@ -codespell==2.2.5 +codespell==2.2.6 diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/resource/auto.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/resource/auto.go index 324dd4baf249..4279013be880 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/resource/auto.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/resource/auto.go @@ -21,12 +21,10 @@ import ( "strings" ) -var ( - // ErrPartialResource is returned by a detector when complete source - // information for a Resource is unavailable or the source information - // contains invalid values that are omitted from the returned Resource. - ErrPartialResource = errors.New("partial resource") -) +// ErrPartialResource is returned by a detector when complete source +// information for a Resource is unavailable or the source information +// contains invalid values that are omitted from the returned Resource. +var ErrPartialResource = errors.New("partial resource") // Detector detects OpenTelemetry resource information. type Detector interface { diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/resource/env.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/resource/env.go index a847c50622e4..e29ae563a694 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/resource/env.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/resource/env.go @@ -28,16 +28,14 @@ import ( const ( // resourceAttrKey is the environment variable name OpenTelemetry Resource information will be read from. - resourceAttrKey = "OTEL_RESOURCE_ATTRIBUTES" + resourceAttrKey = "OTEL_RESOURCE_ATTRIBUTES" //nolint:gosec // False positive G101: Potential hardcoded credentials // svcNameKey is the environment variable name that Service Name information will be read from. svcNameKey = "OTEL_SERVICE_NAME" ) -var ( - // errMissingValue is returned when a resource value is missing. - errMissingValue = fmt.Errorf("%w: missing value", ErrPartialResource) -) +// errMissingValue is returned when a resource value is missing. +var errMissingValue = fmt.Errorf("%w: missing value", ErrPartialResource) // fromEnv is a Detector that implements the Detector and collects // resources from environment. This Detector is included as a @@ -91,7 +89,7 @@ func constructOTResources(s string) (*Resource, error) { continue } key := strings.TrimSpace(k) - val, err := url.QueryUnescape(strings.TrimSpace(v)) + val, err := url.PathUnescape(strings.TrimSpace(v)) if err != nil { // Retain original value if decoding fails, otherwise it will be // an empty string. 
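The QueryUnescape to PathUnescape swap above, and the matching changes in the baggage and envconfig parsers earlier in this patch, matter because the two functions treat `+` differently; a small sketch:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	// QueryUnescape decodes '+' as a space, which corrupted values such as
	// "a+b" passed via OTEL_RESOURCE_ATTRIBUTES or baggage headers.
	q, _ := url.QueryUnescape("tag=a+b")
	fmt.Println(q) // tag=a b

	// PathUnescape leaves '+' intact while still decoding %XX escapes.
	p, _ := url.PathUnescape("tag=a+b")
	fmt.Println(p) // tag=a+b
}
```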
diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/resource/os.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/resource/os.go index 84e1c5856058..0cbd559739c9 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/resource/os.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/resource/os.go @@ -36,8 +36,10 @@ func setOSDescriptionProvider(osDescriptionProvider osDescriptionProvider) { osDescription = osDescriptionProvider } -type osTypeDetector struct{} -type osDescriptionDetector struct{} +type ( + osTypeDetector struct{} + osDescriptionDetector struct{} +) // Detect returns a *Resource that describes the operating system type the // service is running on. @@ -56,7 +58,6 @@ func (osTypeDetector) Detect(ctx context.Context) (*Resource, error) { // service is running on. func (osDescriptionDetector) Detect(ctx context.Context) (*Resource, error) { description, err := osDescription() - if err != nil { return nil, err } diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/resource/process.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/resource/process.go index e67ff29e26d5..ecdd11dd7622 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/resource/process.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/resource/process.go @@ -25,14 +25,16 @@ import ( semconv "go.opentelemetry.io/otel/semconv/v1.21.0" ) -type pidProvider func() int -type executablePathProvider func() (string, error) -type commandArgsProvider func() []string -type ownerProvider func() (*user.User, error) -type runtimeNameProvider func() string -type runtimeVersionProvider func() string -type runtimeOSProvider func() string -type runtimeArchProvider func() string +type ( + pidProvider func() int + executablePathProvider func() (string, error) + commandArgsProvider func() []string + ownerProvider func() (*user.User, error) + runtimeNameProvider func() string + runtimeVersionProvider func() string + runtimeOSProvider func() string + runtimeArchProvider func() string +) var ( defaultPidProvider pidProvider = os.Getpid @@ -108,14 +110,16 @@ func setUserProviders(ownerProvider ownerProvider) { owner = ownerProvider } -type processPIDDetector struct{} -type processExecutableNameDetector struct{} -type processExecutablePathDetector struct{} -type processCommandArgsDetector struct{} -type processOwnerDetector struct{} -type processRuntimeNameDetector struct{} -type processRuntimeVersionDetector struct{} -type processRuntimeDescriptionDetector struct{} +type ( + processPIDDetector struct{} + processExecutableNameDetector struct{} + processExecutablePathDetector struct{} + processCommandArgsDetector struct{} + processOwnerDetector struct{} + processRuntimeNameDetector struct{} + processRuntimeVersionDetector struct{} + processRuntimeDescriptionDetector struct{} +) // Detect returns a *Resource that describes the process identifier (PID) of the // executing process. 
diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/trace/provider.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/trace/provider.go index 0a018c14ded3..7d46c4b48e52 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/trace/provider.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/trace/provider.go @@ -25,6 +25,8 @@ import ( "go.opentelemetry.io/otel/sdk/instrumentation" "go.opentelemetry.io/otel/sdk/resource" "go.opentelemetry.io/otel/trace" + "go.opentelemetry.io/otel/trace/embedded" + "go.opentelemetry.io/otel/trace/noop" ) const ( @@ -73,6 +75,8 @@ func (cfg tracerProviderConfig) MarshalLog() interface{} { // TracerProvider is an OpenTelemetry TracerProvider. It provides Tracers to // instrumentation so it can trace operational flow through a system. type TracerProvider struct { + embedded.TracerProvider + mu sync.Mutex namedTracer map[instrumentation.Scope]*tracer spanProcessors atomic.Pointer[spanProcessorStates] @@ -139,7 +143,7 @@ func NewTracerProvider(opts ...TracerProviderOption) *TracerProvider { func (p *TracerProvider) Tracer(name string, opts ...trace.TracerOption) trace.Tracer { // This check happens before the mutex is acquired to avoid deadlocking if Tracer() is called from within Shutdown(). if p.isShutdown.Load() { - return trace.NewNoopTracerProvider().Tracer(name, opts...) + return noop.NewTracerProvider().Tracer(name, opts...) } c := trace.NewTracerConfig(opts...) if name == "" { @@ -157,7 +161,7 @@ func (p *TracerProvider) Tracer(name string, opts ...trace.TracerOption) trace.T // Must check the flag after acquiring the mutex to avoid returning a valid tracer if Shutdown() ran // after the first check above but before we acquired the mutex. if p.isShutdown.Load() { - return trace.NewNoopTracerProvider().Tracer(name, opts...), true + return noop.NewTracerProvider().Tracer(name, opts...), true } t, ok := p.namedTracer[is] if !ok { diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/trace/sampling.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/trace/sampling.go index 5ee9715d27bb..a7bc125b9e8a 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/trace/sampling.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/trace/sampling.go @@ -158,9 +158,9 @@ func NeverSample() Sampler { return alwaysOffSampler{} } -// ParentBased returns a composite sampler which behaves differently, +// ParentBased returns a sampler decorator which behaves differently, // based on the parent of the span. If the span has no parent, -// the root(Sampler) is used to make sampling decision. If the span has +// the decorated sampler is used to make sampling decision. 
If the span has // a parent, depending on whether the parent is remote and whether it // is sampled, one of the following samplers will apply: // - remoteParentSampled(Sampler) (default: AlwaysOn) diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/trace/span.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/trace/span.go index 37cdd4a694a6..36dbf67764b9 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/trace/span.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/trace/span.go @@ -32,6 +32,7 @@ import ( "go.opentelemetry.io/otel/sdk/resource" semconv "go.opentelemetry.io/otel/semconv/v1.21.0" "go.opentelemetry.io/otel/trace" + "go.opentelemetry.io/otel/trace/embedded" ) // ReadOnlySpan allows reading information from the data structure underlying a @@ -108,6 +109,8 @@ type ReadWriteSpan interface { // recordingSpan is an implementation of the OpenTelemetry Span API // representing the individual component of a trace that is sampled. type recordingSpan struct { + embedded.Span + // mu protects the contents of this span. mu sync.Mutex @@ -158,8 +161,10 @@ type recordingSpan struct { tracer *tracer } -var _ ReadWriteSpan = (*recordingSpan)(nil) -var _ runtimeTracer = (*recordingSpan)(nil) +var ( + _ ReadWriteSpan = (*recordingSpan)(nil) + _ runtimeTracer = (*recordingSpan)(nil) +) // SpanContext returns the SpanContext of this span. func (s *recordingSpan) SpanContext() trace.SpanContext { @@ -772,6 +777,8 @@ func (s *recordingSpan) runtimeTrace(ctx context.Context) context.Context { // that wraps a SpanContext. It performs no operations other than to return // the wrapped SpanContext or TracerProvider that created it. type nonRecordingSpan struct { + embedded.Span + // tracer is the SDK tracer that created this span. tracer *tracer sc trace.SpanContext diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/trace/tracer.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/trace/tracer.go index 85a71227f3fb..301e1a7abccf 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/trace/tracer.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/trace/tracer.go @@ -20,9 +20,12 @@ import ( "go.opentelemetry.io/otel/sdk/instrumentation" "go.opentelemetry.io/otel/trace" + "go.opentelemetry.io/otel/trace/embedded" ) type tracer struct { + embedded.Tracer + provider *TracerProvider instrumentationScope instrumentation.Scope } diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/version.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/version.go index 72d2cb09f7bf..422d4c964b3f 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/version.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/sdk/version.go @@ -16,5 +16,5 @@ package sdk // import "go.opentelemetry.io/otel/sdk" // Version is the current release version of the OpenTelemetry SDK in use. 
func Version() string { - return "1.19.0" + return "1.21.0" } diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/config.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/config.go index cb3efbb9ad89..3aadc66cf7a7 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/config.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/config.go @@ -268,6 +268,7 @@ func (o stackTraceOption) applyEvent(c EventConfig) EventConfig { c.stackTrace = bool(o) return c } + func (o stackTraceOption) applySpan(c SpanConfig) SpanConfig { c.stackTrace = bool(o) return c diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/doc.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/doc.go index ab0346f9664a..440f3d7565a1 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/doc.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/doc.go @@ -62,5 +62,69 @@ a default. defer span.End() // ... } + +# API Implementations + +This package does not conform to the standard Go versioning policy; all of its +interfaces may have methods added to them without a package major version bump. +This non-standard API evolution could surprise an uninformed implementation +author. They could unknowingly build their implementation in a way that would +result in a runtime panic for their users that update to the new API. + +The API is designed to help inform an instrumentation author about this +non-standard API evolution. It requires them to choose a default behavior for +unimplemented interface methods. There are three behavior choices they can +make: + + - Compilation failure + - Panic + - Default to another implementation + +All interfaces in this API embed a corresponding interface from +[go.opentelemetry.io/otel/trace/embedded]. If an author wants the default +behavior of their implementations to be a compilation failure, signaling to +their users they need to update to the latest version of that implementation, +they need to embed the corresponding interface from +[go.opentelemetry.io/otel/trace/embedded] in their implementation. For +example, + + import "go.opentelemetry.io/otel/trace/embedded" + + type TracerProvider struct { + embedded.TracerProvider + // ... + } + +If an author wants the default behavior of their implementations to panic, they +can embed the API interface directly. + + import "go.opentelemetry.io/otel/trace" + + type TracerProvider struct { + trace.TracerProvider + // ... + } + +This option is not recommended. It will lead to publishing packages that +contain runtime panics when users update to newer versions of +[go.opentelemetry.io/otel/trace], which may be done with a transitive +dependency. + +Finally, an author can embed another implementation in theirs. The embedded +implementation will be used for methods not defined by the author. For example, +an author who wants to default to silently dropping the call can use +[go.opentelemetry.io/otel/trace/noop]: + + import "go.opentelemetry.io/otel/trace/noop" + + type TracerProvider struct { + noop.TracerProvider + // ... + } + +It is strongly recommended that authors only embed +[go.opentelemetry.io/otel/trace/noop] if they choose this default behavior. +That implementation is the only one OpenTelemetry authors can guarantee will +fully implement all the API interfaces when a user updates their API.
*/ package trace // import "go.opentelemetry.io/otel/trace" diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/embedded/embedded.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/embedded/embedded.go new file mode 100644 index 000000000000..898db5a7546e --- /dev/null +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/embedded/embedded.go @@ -0,0 +1,56 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Package embedded provides interfaces embedded within the [OpenTelemetry +// trace API]. +// +// Implementers of the [OpenTelemetry trace API] can embed the relevant type +// from this package into their implementation directly. Doing so will result +// in a compilation error for users when the [OpenTelemetry trace API] is +// extended (which is something that can happen without a major version bump of +// the API package). +// +// [OpenTelemetry trace API]: https://pkg.go.dev/go.opentelemetry.io/otel/trace +package embedded // import "go.opentelemetry.io/otel/trace/embedded" + +// TracerProvider is embedded in +// [go.opentelemetry.io/otel/trace.TracerProvider]. +// +// Embed this interface in your implementation of the +// [go.opentelemetry.io/otel/trace.TracerProvider] if you want users to +// experience a compilation error, signaling they need to update to your latest +// implementation, when the [go.opentelemetry.io/otel/trace.TracerProvider] +// interface is extended (which is something that can happen without a major +// version bump of the API package). +type TracerProvider interface{ tracerProvider() } + +// Tracer is embedded in [go.opentelemetry.io/otel/trace.Tracer]. +// +// Embed this interface in your implementation of the +// [go.opentelemetry.io/otel/trace.Tracer] if you want users to experience a +// compilation error, signaling they need to update to your latest +// implementation, when the [go.opentelemetry.io/otel/trace.Tracer] interface +// is extended (which is something that can happen without a major version bump +// of the API package). +type Tracer interface{ tracer() } + +// Span is embedded in [go.opentelemetry.io/otel/trace.Span]. +// +// Embed this interface in your implementation of the +// [go.opentelemetry.io/otel/trace.Span] if you want users to experience a +// compilation error, signaling they need to update to your latest +// implementation, when the [go.opentelemetry.io/otel/trace.Span] interface is +// extended (which is something that can happen without a major version bump of +// the API package). 
+type Span interface{ span() } diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/noop.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/noop.go index 7cf6c7f3ef9e..c125491caebf 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/noop.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/noop.go @@ -19,16 +19,20 @@ import ( "go.opentelemetry.io/otel/attribute" "go.opentelemetry.io/otel/codes" + "go.opentelemetry.io/otel/trace/embedded" ) // NewNoopTracerProvider returns an implementation of TracerProvider that // performs no operations. The Tracer and Spans created from the returned // TracerProvider also perform no operations. +// +// Deprecated: Use [go.opentelemetry.io/otel/trace/noop.NewTracerProvider] +// instead. func NewNoopTracerProvider() TracerProvider { return noopTracerProvider{} } -type noopTracerProvider struct{} +type noopTracerProvider struct{ embedded.TracerProvider } var _ TracerProvider = noopTracerProvider{} @@ -38,7 +42,7 @@ func (p noopTracerProvider) Tracer(string, ...TracerOption) Tracer { } // noopTracer is an implementation of Tracer that performs no operations. -type noopTracer struct{} +type noopTracer struct{ embedded.Tracer } var _ Tracer = noopTracer{} @@ -54,7 +58,7 @@ func (t noopTracer) Start(ctx context.Context, name string, _ ...SpanStartOption } // noopSpan is an implementation of Span that performs no operations. -type noopSpan struct{} +type noopSpan struct{ embedded.Span } var _ Span = noopSpan{} diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/noop/noop.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/noop/noop.go new file mode 100644 index 000000000000..7f485543c47d --- /dev/null +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/noop/noop.go @@ -0,0 +1,118 @@ +// Copyright The OpenTelemetry Authors +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Package noop provides an implementation of the OpenTelemetry trace API that +// produces no telemetry and minimizes used computation resources. +// +// Using this package to implement the OpenTelemetry trace API will effectively +// disable OpenTelemetry. +// +// This implementation can be embedded in other implementations of the +// OpenTelemetry trace API. Doing so will mean the implementation defaults to +// no operation for methods it does not implement. +package noop // import "go.opentelemetry.io/otel/trace/noop" + +import ( + "context" + + "go.opentelemetry.io/otel/attribute" + "go.opentelemetry.io/otel/codes" + "go.opentelemetry.io/otel/trace" + "go.opentelemetry.io/otel/trace/embedded" +) + +var ( + // Compile-time check this implements the OpenTelemetry API. + + _ trace.TracerProvider = TracerProvider{} + _ trace.Tracer = Tracer{} + _ trace.Span = Span{} +) + +// TracerProvider is an OpenTelemetry No-Op TracerProvider. +type TracerProvider struct{ embedded.TracerProvider } + +// NewTracerProvider returns a TracerProvider that does not record any telemetry. 
+func NewTracerProvider() TracerProvider { + return TracerProvider{} +} + +// Tracer returns an OpenTelemetry Tracer that does not record any telemetry. +func (TracerProvider) Tracer(string, ...trace.TracerOption) trace.Tracer { + return Tracer{} +} + +// Tracer is an OpenTelemetry No-Op Tracer. +type Tracer struct{ embedded.Tracer } + +// Start creates a span. The created span will be set in a child context of ctx +// and returned with the span. +// +// If ctx contains a span context, the returned span will also contain that +// span context. If the span context in ctx is for a non-recording span, that +// span instance will be returned directly. +func (t Tracer) Start(ctx context.Context, _ string, _ ...trace.SpanStartOption) (context.Context, trace.Span) { + span := trace.SpanFromContext(ctx) + + // If the parent context contains a non-zero span context, that span + // context needs to be returned as a non-recording span + // (https://github.com/open-telemetry/opentelemetry-specification/blob/3a1dde966a4ce87cce5adf464359fe369741bbea/specification/trace/api.md#behavior-of-the-api-in-the-absence-of-an-installed-sdk). + var zeroSC trace.SpanContext + if sc := span.SpanContext(); !sc.Equal(zeroSC) { + if !span.IsRecording() { + // If the span is not recording return it directly. + return ctx, span + } + // Otherwise, return the span context needs in a non-recording span. + span = Span{sc: sc} + } else { + // No parent, return a No-Op span with an empty span context. + span = Span{} + } + return trace.ContextWithSpan(ctx, span), span +} + +// Span is an OpenTelemetry No-Op Span. +type Span struct { + embedded.Span + + sc trace.SpanContext +} + +// SpanContext returns an empty span context. +func (s Span) SpanContext() trace.SpanContext { return s.sc } + +// IsRecording always returns false. +func (Span) IsRecording() bool { return false } + +// SetStatus does nothing. +func (Span) SetStatus(codes.Code, string) {} + +// SetAttributes does nothing. +func (Span) SetAttributes(...attribute.KeyValue) {} + +// End does nothing. +func (Span) End(...trace.SpanEndOption) {} + +// RecordError does nothing. +func (Span) RecordError(error, ...trace.EventOption) {} + +// AddEvent does nothing. +func (Span) AddEvent(string, ...trace.EventOption) {} + +// SetName does nothing. +func (Span) SetName(string) {} + +// TracerProvider returns a No-Op TracerProvider. +func (Span) TracerProvider() trace.TracerProvider { return TracerProvider{} } diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/trace.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/trace.go index 4aa94f79f46a..26a4b2260ec6 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/trace.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/trace.go @@ -22,6 +22,7 @@ import ( "go.opentelemetry.io/otel/attribute" "go.opentelemetry.io/otel/codes" + "go.opentelemetry.io/otel/trace/embedded" ) const ( @@ -48,8 +49,10 @@ func (e errorConst) Error() string { // nolint:revive // revive complains about stutter of `trace.TraceID`. type TraceID [16]byte -var nilTraceID TraceID -var _ json.Marshaler = nilTraceID +var ( + nilTraceID TraceID + _ json.Marshaler = nilTraceID +) // IsValid checks whether the trace TraceID is valid. A valid trace ID does // not consist of zeros only. @@ -71,8 +74,10 @@ func (t TraceID) String() string { // SpanID is a unique identity of a span in a trace. 
type SpanID [8]byte -var nilSpanID SpanID -var _ json.Marshaler = nilSpanID +var ( + nilSpanID SpanID + _ json.Marshaler = nilSpanID +) // IsValid checks whether the SpanID is valid. A valid SpanID does not consist // of zeros only. @@ -338,8 +343,15 @@ func (sc SpanContext) MarshalJSON() ([]byte, error) { // create a Span and it is then up to the operation the Span represents to // properly end the Span when the operation itself ends. // -// Warning: methods may be added to this interface in minor releases. +// Warning: Methods may be added to this interface in minor releases. See +// package documentation on API implementation for information on how to set +// default behavior for unimplemented methods. type Span interface { + // Users of the interface can ignore this. This embedded type is only used + // by implementations of this interface. See the "API Implementations" + // section of the package documentation for more information. + embedded.Span + // End completes the Span. The Span is considered complete and ready to be // delivered through the rest of the telemetry pipeline after this method // is called. Therefore, updates to the Span are not allowed after this @@ -486,8 +498,15 @@ func (sk SpanKind) String() string { // Tracer is the creator of Spans. // -// Warning: methods may be added to this interface in minor releases. +// Warning: Methods may be added to this interface in minor releases. See +// package documentation on API implementation for information on how to set +// default behavior for unimplemented methods. type Tracer interface { + // Users of the interface can ignore this. This embedded type is only used + // by implementations of this interface. See the "API Implementations" + // section of the package documentation for more information. + embedded.Tracer + // Start creates a span and a context.Context containing the newly-created span. // // If the context.Context provided in `ctx` contains a Span then the newly-created @@ -518,8 +537,15 @@ type Tracer interface { // at runtime from its users or it can simply use the globally registered one // (see https://pkg.go.dev/go.opentelemetry.io/otel#GetTracerProvider). // -// Warning: methods may be added to this interface in minor releases. +// Warning: Methods may be added to this interface in minor releases. See +// package documentation on API implementation for information on how to set +// default behavior for unimplemented methods. type TracerProvider interface { + // Users of the interface can ignore this. This embedded type is only used + // by implementations of this interface. See the "API Implementations" + // section of the package documentation for more information. + embedded.TracerProvider + // Tracer returns a unique Tracer scoped to be used by instrumentation code // to trace computational workflows. The scope and identity of that // instrumentation code is uniquely defined by the name and options passed. 
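Taken together, the doc.go, embedded, and noop changes above define an opt-in pattern for evolving the trace API without major version bumps. A minimal sketch of a third-party implementation choosing the recommended "compilation failure" behavior while delegating to the new noop package (the loggingProvider name is hypothetical, not part of the patch):

	package main

	import (
		"context"
		"fmt"

		"go.opentelemetry.io/otel/trace"
		"go.opentelemetry.io/otel/trace/embedded"
		"go.opentelemetry.io/otel/trace/noop"
	)

	// loggingProvider embeds embedded.TracerProvider, so if the trace API
	// grows a new TracerProvider method this type stops compiling instead
	// of panicking at runtime for its users.
	type loggingProvider struct {
		embedded.TracerProvider
	}

	func (loggingProvider) Tracer(name string, opts ...trace.TracerOption) trace.Tracer {
		fmt.Println("tracer requested:", name)
		// Delegate the actual tracing to the no-op implementation.
		return noop.NewTracerProvider().Tracer(name, opts...)
	}

	// Compile-time check that loggingProvider satisfies the API interface.
	var _ trace.TracerProvider = loggingProvider{}

	func main() {
		tracer := loggingProvider{}.Tracer("example")
		_, span := tracer.Start(context.Background(), "operation")
		span.End()
	}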
diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/tracestate.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/tracestate.go index ca68a82e5f73..d1e47ca2faac 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/tracestate.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/trace/tracestate.go @@ -28,9 +28,9 @@ const ( // based on the W3C Trace Context specification, see // https://www.w3.org/TR/trace-context-1/#tracestate-header - noTenantKeyFormat = `[a-z][_0-9a-z\-\*\/]{0,255}` - withTenantKeyFormat = `[a-z0-9][_0-9a-z\-\*\/]{0,240}@[a-z][_0-9a-z\-\*\/]{0,13}` - valueFormat = `[\x20-\x2b\x2d-\x3c\x3e-\x7e]{0,255}[\x21-\x2b\x2d-\x3c\x3e-\x7e]` + noTenantKeyFormat = `[a-z][_0-9a-z\-\*\/]*` + withTenantKeyFormat = `[a-z0-9][_0-9a-z\-\*\/]*@[a-z][_0-9a-z\-\*\/]*` + valueFormat = `[\x20-\x2b\x2d-\x3c\x3e-\x7e]*[\x21-\x2b\x2d-\x3c\x3e-\x7e]` errInvalidKey errorConst = "invalid tracestate key" errInvalidValue errorConst = "invalid tracestate value" @@ -40,9 +40,10 @@ const ( ) var ( - keyRe = regexp.MustCompile(`^((` + noTenantKeyFormat + `)|(` + withTenantKeyFormat + `))$`) - valueRe = regexp.MustCompile(`^(` + valueFormat + `)$`) - memberRe = regexp.MustCompile(`^\s*((` + noTenantKeyFormat + `)|(` + withTenantKeyFormat + `))=(` + valueFormat + `)\s*$`) + noTenantKeyRe = regexp.MustCompile(`^` + noTenantKeyFormat + `$`) + withTenantKeyRe = regexp.MustCompile(`^` + withTenantKeyFormat + `$`) + valueRe = regexp.MustCompile(`^` + valueFormat + `$`) + memberRe = regexp.MustCompile(`^\s*((?:` + noTenantKeyFormat + `)|(?:` + withTenantKeyFormat + `))=(` + valueFormat + `)\s*$`) ) type member struct { @@ -51,10 +52,19 @@ type member struct { } func newMember(key, value string) (member, error) { - if !keyRe.MatchString(key) { + if len(key) > 256 { return member{}, fmt.Errorf("%w: %s", errInvalidKey, key) } - if !valueRe.MatchString(value) { + if !noTenantKeyRe.MatchString(key) { + if !withTenantKeyRe.MatchString(key) { + return member{}, fmt.Errorf("%w: %s", errInvalidKey, key) + } + atIndex := strings.LastIndex(key, "@") + if atIndex > 241 || len(key)-1-atIndex > 14 { + return member{}, fmt.Errorf("%w: %s", errInvalidKey, key) + } + } + if len(value) > 256 || !valueRe.MatchString(value) { return member{}, fmt.Errorf("%w: %s", errInvalidValue, value) } return member{Key: key, Value: value}, nil @@ -62,14 +72,14 @@ func newMember(key, value string) (member, error) { func parseMember(m string) (member, error) { matches := memberRe.FindStringSubmatch(m) - if len(matches) != 5 { + if len(matches) != 3 { return member{}, fmt.Errorf("%w: %s", errInvalidMember, m) } - - return member{ - Key: matches[1], - Value: matches[4], - }, nil + result, e := newMember(matches[1], matches[2]) + if e != nil { + return member{}, fmt.Errorf("%w: %s", errInvalidMember, m) + } + return result, nil } // String encodes member into a string compliant with the W3C Trace Context diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/version.go b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/version.go index ad64e199672f..e2f743585d1d 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/version.go +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/version.go @@ -16,5 +16,5 @@ package otel // import "go.opentelemetry.io/otel" // Version is the current release version of OpenTelemetry in use. 
func Version() string { - return "1.19.0" + return "1.21.0" } diff --git a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/versions.yaml b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/versions.yaml index 7d2127692403..3c153c9d6fc6 100644 --- a/cluster-autoscaler/vendor/go.opentelemetry.io/otel/versions.yaml +++ b/cluster-autoscaler/vendor/go.opentelemetry.io/otel/versions.yaml @@ -14,13 +14,12 @@ module-sets: stable-v1: - version: v1.19.0 + version: v1.21.0 modules: - go.opentelemetry.io/otel - go.opentelemetry.io/otel/bridge/opentracing - go.opentelemetry.io/otel/bridge/opentracing/test - go.opentelemetry.io/otel/example/dice - - go.opentelemetry.io/otel/example/fib - go.opentelemetry.io/otel/example/namedtracer - go.opentelemetry.io/otel/example/otel-collector - go.opentelemetry.io/otel/example/passthrough @@ -35,14 +34,12 @@ module-sets: - go.opentelemetry.io/otel/sdk/metric - go.opentelemetry.io/otel/trace experimental-metrics: - version: v0.42.0 + version: v0.44.0 modules: - go.opentelemetry.io/otel/bridge/opencensus - go.opentelemetry.io/otel/bridge/opencensus/test - go.opentelemetry.io/otel/example/opencensus - go.opentelemetry.io/otel/example/prometheus - - go.opentelemetry.io/otel/example/view - - go.opentelemetry.io/otel/exporters/otlp/otlpmetric - go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetricgrpc - go.opentelemetry.io/otel/exporters/otlp/otlpmetric/otlpmetrichttp - go.opentelemetry.io/otel/exporters/prometheus diff --git a/cluster-autoscaler/vendor/go.uber.org/mock/AUTHORS b/cluster-autoscaler/vendor/go.uber.org/mock/AUTHORS new file mode 100644 index 000000000000..660b8ccc8ae0 --- /dev/null +++ b/cluster-autoscaler/vendor/go.uber.org/mock/AUTHORS @@ -0,0 +1,12 @@ +# This is the official list of GoMock authors for copyright purposes. +# This file is distinct from the CONTRIBUTORS files. +# See the latter for an explanation. + +# Names should be added to this file as +# Name or Organization +# The email address is not required for organizations. + +# Please keep the list sorted. + +Alex Reece +Google Inc. diff --git a/cluster-autoscaler/vendor/go.uber.org/mock/LICENSE b/cluster-autoscaler/vendor/go.uber.org/mock/LICENSE new file mode 100644 index 000000000000..d64569567334 --- /dev/null +++ b/cluster-autoscaler/vendor/go.uber.org/mock/LICENSE @@ -0,0 +1,202 @@ + + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. 
+ + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." + + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. 
You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. + + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. 
In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "[]" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright [yyyy] [name of copyright owner] + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. diff --git a/cluster-autoscaler/vendor/go.uber.org/mock/gomock/call.go b/cluster-autoscaler/vendor/go.uber.org/mock/gomock/call.go new file mode 100644 index 000000000000..e1ea82637718 --- /dev/null +++ b/cluster-autoscaler/vendor/go.uber.org/mock/gomock/call.go @@ -0,0 +1,508 @@ +// Copyright 2010 Google Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package gomock + +import ( + "fmt" + "reflect" + "strconv" + "strings" +) + +// Call represents an expected call to a mock. 
+type Call struct { + t TestHelper // for triggering test failures on invalid call setup + + receiver any // the receiver of the method call + method string // the name of the method + methodType reflect.Type // the type of the method + args []Matcher // the args + origin string // file and line number of call setup + + preReqs []*Call // prerequisite calls + + // Expectations + minCalls, maxCalls int + + numCalls int // actual number made + + // actions are called when this Call is called. Each action gets the args and + // can set the return values by returning a non-nil slice. Actions run in the + // order they are created. + actions []func([]any) []any +} + +// newCall creates a *Call. It requires the method type in order to support +// unexported methods. +func newCall(t TestHelper, receiver any, method string, methodType reflect.Type, args ...any) *Call { + t.Helper() + + // TODO: check arity, types. + mArgs := make([]Matcher, len(args)) + for i, arg := range args { + if m, ok := arg.(Matcher); ok { + mArgs[i] = m + } else if arg == nil { + // Handle nil specially so that passing a nil interface value + // will match the typed nils of concrete args. + mArgs[i] = Nil() + } else { + mArgs[i] = Eq(arg) + } + } + + // callerInfo's skip should be updated if the number of calls between the user's test + // and this line changes, i.e. this code is wrapped in another anonymous function. + // 0 is us, 1 is RecordCallWithMethodType(), 2 is the generated recorder, and 3 is the user's test. + origin := callerInfo(3) + actions := []func([]any) []any{func([]any) []any { + // Synthesize the zero value for each of the return args' types. + rets := make([]any, methodType.NumOut()) + for i := 0; i < methodType.NumOut(); i++ { + rets[i] = reflect.Zero(methodType.Out(i)).Interface() + } + return rets + }} + return &Call{t: t, receiver: receiver, method: method, methodType: methodType, + args: mArgs, origin: origin, minCalls: 1, maxCalls: 1, actions: actions} +} + +// AnyTimes allows the expectation to be called 0 or more times +func (c *Call) AnyTimes() *Call { + c.minCalls, c.maxCalls = 0, 1e8 // close enough to infinity + return c +} + +// MinTimes requires the call to occur at least n times. If AnyTimes or MaxTimes have not been called or if MaxTimes +// was previously called with 1, MinTimes also sets the maximum number of calls to infinity. +func (c *Call) MinTimes(n int) *Call { + c.minCalls = n + if c.maxCalls == 1 { + c.maxCalls = 1e8 + } + return c +} + +// MaxTimes limits the number of calls to n times. If AnyTimes or MinTimes have not been called or if MinTimes was +// previously called with 1, MaxTimes also sets the minimum number of calls to 0. +func (c *Call) MaxTimes(n int) *Call { + c.maxCalls = n + if c.minCalls == 1 { + c.minCalls = 0 + } + return c +} + +// DoAndReturn declares the action to run when the call is matched. +// The return values from this function are returned by the mocked function. +// It takes an any argument to support n-arity functions. +// The anonymous function must match the function signature mocked method. +func (c *Call) DoAndReturn(f any) *Call { + // TODO: Check arity and types here, rather than dying badly elsewhere. 
+ v := reflect.ValueOf(f) + + c.addAction(func(args []any) []any { + c.t.Helper() + ft := v.Type() + if c.methodType.NumIn() != ft.NumIn() { + if ft.IsVariadic() { + c.t.Fatalf("wrong number of arguments in DoAndReturn func for %T.%v The function signature must match the mocked method, a variadic function cannot be used.", + c.receiver, c.method) + } else { + c.t.Fatalf("wrong number of arguments in DoAndReturn func for %T.%v: got %d, want %d [%s]", + c.receiver, c.method, ft.NumIn(), c.methodType.NumIn(), c.origin) + } + return nil + } + vArgs := make([]reflect.Value, len(args)) + for i := 0; i < len(args); i++ { + if args[i] != nil { + vArgs[i] = reflect.ValueOf(args[i]) + } else { + // Use the zero value for the arg. + vArgs[i] = reflect.Zero(ft.In(i)) + } + } + vRets := v.Call(vArgs) + rets := make([]any, len(vRets)) + for i, ret := range vRets { + rets[i] = ret.Interface() + } + return rets + }) + return c +} + +// Do declares the action to run when the call is matched. The function's +// return values are ignored to retain backward compatibility. To use the +// return values call DoAndReturn. +// It takes an any argument to support n-arity functions. +// The anonymous function must match the function signature mocked method. +func (c *Call) Do(f any) *Call { + // TODO: Check arity and types here, rather than dying badly elsewhere. + v := reflect.ValueOf(f) + + c.addAction(func(args []any) []any { + c.t.Helper() + ft := v.Type() + if c.methodType.NumIn() != ft.NumIn() { + if ft.IsVariadic() { + c.t.Fatalf("wrong number of arguments in Do func for %T.%v The function signature must match the mocked method, a variadic function cannot be used.", + c.receiver, c.method) + } else { + c.t.Fatalf("wrong number of arguments in Do func for %T.%v: got %d, want %d [%s]", + c.receiver, c.method, ft.NumIn(), c.methodType.NumIn(), c.origin) + } + return nil + } + vArgs := make([]reflect.Value, len(args)) + for i := 0; i < len(args); i++ { + if args[i] != nil { + vArgs[i] = reflect.ValueOf(args[i]) + } else { + // Use the zero value for the arg. + vArgs[i] = reflect.Zero(ft.In(i)) + } + } + v.Call(vArgs) + return nil + }) + return c +} + +// Return declares the values to be returned by the mocked function call. +func (c *Call) Return(rets ...any) *Call { + c.t.Helper() + + mt := c.methodType + if len(rets) != mt.NumOut() { + c.t.Fatalf("wrong number of arguments to Return for %T.%v: got %d, want %d [%s]", + c.receiver, c.method, len(rets), mt.NumOut(), c.origin) + } + for i, ret := range rets { + if got, want := reflect.TypeOf(ret), mt.Out(i); got == want { + // Identical types; nothing to do. + } else if got == nil { + // Nil needs special handling. + switch want.Kind() { + case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice: + // ok + default: + c.t.Fatalf("argument %d to Return for %T.%v is nil, but %v is not nillable [%s]", + i, c.receiver, c.method, want, c.origin) + } + } else if got.AssignableTo(want) { + // Assignable type relation. Make the assignment now so that the generated code + // can return the values with a type assertion. + v := reflect.New(want).Elem() + v.Set(reflect.ValueOf(ret)) + rets[i] = v.Interface() + } else { + c.t.Fatalf("wrong type of argument %d to Return for %T.%v: %v is not assignable to %v [%s]", + i, c.receiver, c.method, got, want, c.origin) + } + } + + c.addAction(func([]any) []any { + return rets + }) + + return c +} + +// Times declares the exact number of times a function call is expected to be executed. 
+func (c *Call) Times(n int) *Call { + c.minCalls, c.maxCalls = n, n + return c +} + +// SetArg declares an action that will set the nth argument's value, +// indirected through a pointer. Or, in the case of a slice and map, SetArg +// will copy value's elements/key-value pairs into the nth argument. +func (c *Call) SetArg(n int, value any) *Call { + c.t.Helper() + + mt := c.methodType + // TODO: This will break on variadic methods. + // We will need to check those at invocation time. + if n < 0 || n >= mt.NumIn() { + c.t.Fatalf("SetArg(%d, ...) called for a method with %d args [%s]", + n, mt.NumIn(), c.origin) + } + // Permit setting argument through an interface. + // In the interface case, we don't (nay, can't) check the type here. + at := mt.In(n) + switch at.Kind() { + case reflect.Ptr: + dt := at.Elem() + if vt := reflect.TypeOf(value); !vt.AssignableTo(dt) { + c.t.Fatalf("SetArg(%d, ...) argument is a %v, not assignable to %v [%s]", + n, vt, dt, c.origin) + } + case reflect.Interface: + // nothing to do + case reflect.Slice: + // nothing to do + case reflect.Map: + // nothing to do + default: + c.t.Fatalf("SetArg(%d, ...) referring to argument of non-pointer non-interface non-slice non-map type %v [%s]", + n, at, c.origin) + } + + c.addAction(func(args []any) []any { + v := reflect.ValueOf(value) + switch reflect.TypeOf(args[n]).Kind() { + case reflect.Slice: + setSlice(args[n], v) + case reflect.Map: + setMap(args[n], v) + default: + reflect.ValueOf(args[n]).Elem().Set(v) + } + return nil + }) + return c +} + +// isPreReq returns true if other is a direct or indirect prerequisite to c. +func (c *Call) isPreReq(other *Call) bool { + for _, preReq := range c.preReqs { + if other == preReq || preReq.isPreReq(other) { + return true + } + } + return false +} + +// After declares that the call may only match after preReq has been exhausted. +func (c *Call) After(preReq *Call) *Call { + c.t.Helper() + + if c == preReq { + c.t.Fatalf("A call isn't allowed to be its own prerequisite") + } + if preReq.isPreReq(c) { + c.t.Fatalf("Loop in call order: %v is a prerequisite to %v (possibly indirectly).", c, preReq) + } + + c.preReqs = append(c.preReqs, preReq) + return c +} + +// Returns true if the minimum number of calls have been made. +func (c *Call) satisfied() bool { + return c.numCalls >= c.minCalls +} + +// Returns true if the maximum number of calls have been made. +func (c *Call) exhausted() bool { + return c.numCalls >= c.maxCalls +} + +func (c *Call) String() string { + args := make([]string, len(c.args)) + for i, arg := range c.args { + args[i] = arg.String() + } + arguments := strings.Join(args, ", ") + return fmt.Sprintf("%T.%v(%s) %s", c.receiver, c.method, arguments, c.origin) +} + +// Tests if the given call matches the expected call. +// If yes, returns nil. If no, returns error with message explaining why it does not match. +func (c *Call) matches(args []any) error { + if !c.methodType.IsVariadic() { + if len(args) != len(c.args) { + return fmt.Errorf("expected call at %s has the wrong number of arguments. Got: %d, want: %d", + c.origin, len(args), len(c.args)) + } + + for i, m := range c.args { + if !m.Matches(args[i]) { + return fmt.Errorf( + "expected call at %s doesn't match the argument at index %d.\nGot: %v\nWant: %v", + c.origin, i, formatGottenArg(m, args[i]), m, + ) + } + } + } else { + if len(c.args) < c.methodType.NumIn()-1 { + return fmt.Errorf("expected call at %s has the wrong number of matchers. 
Got: %d, want: %d", + c.origin, len(c.args), c.methodType.NumIn()-1) + } + if len(c.args) != c.methodType.NumIn() && len(args) != len(c.args) { + return fmt.Errorf("expected call at %s has the wrong number of arguments. Got: %d, want: %d", + c.origin, len(args), len(c.args)) + } + if len(args) < len(c.args)-1 { + return fmt.Errorf("expected call at %s has the wrong number of arguments. Got: %d, want: greater than or equal to %d", + c.origin, len(args), len(c.args)-1) + } + + for i, m := range c.args { + if i < c.methodType.NumIn()-1 { + // Non-variadic args + if !m.Matches(args[i]) { + return fmt.Errorf("expected call at %s doesn't match the argument at index %s.\nGot: %v\nWant: %v", + c.origin, strconv.Itoa(i), formatGottenArg(m, args[i]), m) + } + continue + } + // The last arg has a possibility of a variadic argument, so let it branch + + // sample: Foo(a int, b int, c ...int) + if i < len(c.args) && i < len(args) { + if m.Matches(args[i]) { + // Got Foo(a, b, c) want Foo(matcherA, matcherB, gomock.Any()) + // Got Foo(a, b, c) want Foo(matcherA, matcherB, someSliceMatcher) + // Got Foo(a, b, c) want Foo(matcherA, matcherB, matcherC) + // Got Foo(a, b) want Foo(matcherA, matcherB) + // Got Foo(a, b, c, d) want Foo(matcherA, matcherB, matcherC, matcherD) + continue + } + } + + // The number of actual args don't match the number of matchers, + // or the last matcher is a slice and the last arg is not. + // If this function still matches it is because the last matcher + // matches all the remaining arguments or the lack of any. + // Convert the remaining arguments, if any, into a slice of the + // expected type. + vArgsType := c.methodType.In(c.methodType.NumIn() - 1) + vArgs := reflect.MakeSlice(vArgsType, 0, len(args)-i) + for _, arg := range args[i:] { + vArgs = reflect.Append(vArgs, reflect.ValueOf(arg)) + } + if m.Matches(vArgs.Interface()) { + // Got Foo(a, b, c, d, e) want Foo(matcherA, matcherB, gomock.Any()) + // Got Foo(a, b, c, d, e) want Foo(matcherA, matcherB, someSliceMatcher) + // Got Foo(a, b) want Foo(matcherA, matcherB, gomock.Any()) + // Got Foo(a, b) want Foo(matcherA, matcherB, someEmptySliceMatcher) + break + } + // Wrong number of matchers or not match. Fail. + // Got Foo(a, b) want Foo(matcherA, matcherB, matcherC, matcherD) + // Got Foo(a, b, c) want Foo(matcherA, matcherB, matcherC, matcherD) + // Got Foo(a, b, c, d) want Foo(matcherA, matcherB, matcherC, matcherD, matcherE) + // Got Foo(a, b, c, d, e) want Foo(matcherA, matcherB, matcherC, matcherD) + // Got Foo(a, b, c) want Foo(matcherA, matcherB) + + return fmt.Errorf("expected call at %s doesn't match the argument at index %s.\nGot: %v\nWant: %v", + c.origin, strconv.Itoa(i), formatGottenArg(m, args[i:]), c.args[i]) + } + } + + // Check that all prerequisite calls have been satisfied. + for _, preReqCall := range c.preReqs { + if !preReqCall.satisfied() { + return fmt.Errorf("expected call at %s doesn't have a prerequisite call satisfied:\n%v\nshould be called before:\n%v", + c.origin, preReqCall, c) + } + } + + // Check that the call is not exhausted. + if c.exhausted() { + return fmt.Errorf("expected call at %s has already been called the max number of times", c.origin) + } + + return nil +} + +// dropPrereqs tells the expected Call to not re-check prerequisite calls any +// longer, and to return its current set. 
+func (c *Call) dropPrereqs() (preReqs []*Call) { + preReqs = c.preReqs + c.preReqs = nil + return +} + +func (c *Call) call() []func([]any) []any { + c.numCalls++ + return c.actions +} + +// InOrder declares that the given calls should occur in order. +// It panics if the type of any of the arguments isn't *Call or a generated +// mock with an embedded *Call. +func InOrder(args ...any) { + calls := make([]*Call, 0, len(args)) + for i := 0; i < len(args); i++ { + if call := getCall(args[i]); call != nil { + calls = append(calls, call) + continue + } + panic(fmt.Sprintf( + "invalid argument at position %d of type %T, InOrder expects *gomock.Call or generated mock types with an embedded *gomock.Call", + i, + args[i], + )) + } + for i := 1; i < len(calls); i++ { + calls[i].After(calls[i-1]) + } +} + +// getCall checks if the parameter is a *Call or a generated struct +// that wraps a *Call and returns the *Call pointer - if neither, it returns nil. +func getCall(arg any) *Call { + if call, ok := arg.(*Call); ok { + return call + } + t := reflect.ValueOf(arg) + if t.Kind() != reflect.Ptr && t.Kind() != reflect.Interface { + return nil + } + t = t.Elem() + for i := 0; i < t.NumField(); i++ { + f := t.Field(i) + if !f.CanInterface() { + continue + } + if call, ok := f.Interface().(*Call); ok { + return call + } + } + return nil +} + +func setSlice(arg any, v reflect.Value) { + va := reflect.ValueOf(arg) + for i := 0; i < v.Len(); i++ { + va.Index(i).Set(v.Index(i)) + } +} + +func setMap(arg any, v reflect.Value) { + va := reflect.ValueOf(arg) + for _, e := range va.MapKeys() { + va.SetMapIndex(e, reflect.Value{}) + } + for _, e := range v.MapKeys() { + va.SetMapIndex(e, v.MapIndex(e)) + } +} + +func (c *Call) addAction(action func([]any) []any) { + c.actions = append(c.actions, action) +} + +func formatGottenArg(m Matcher, arg any) string { + got := fmt.Sprintf("%v (%T)", arg, arg) + if gs, ok := m.(GotFormatter); ok { + got = gs.Got(arg) + } + return got +} diff --git a/cluster-autoscaler/vendor/go.uber.org/mock/gomock/callset.go b/cluster-autoscaler/vendor/go.uber.org/mock/gomock/callset.go new file mode 100644 index 000000000000..f5cc592d6f40 --- /dev/null +++ b/cluster-autoscaler/vendor/go.uber.org/mock/gomock/callset.go @@ -0,0 +1,164 @@ +// Copyright 2011 Google Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package gomock + +import ( + "bytes" + "errors" + "fmt" + "sync" +) + +// callSet represents a set of expected calls, indexed by receiver and method +// name. +type callSet struct { + // Calls that are still expected. + expected map[callSetKey][]*Call + expectedMu *sync.Mutex + // Calls that have been exhausted. 
+ exhausted map[callSetKey][]*Call + // when set to true, existing call expectations are overridden when new call expectations are made + allowOverride bool +} + +// callSetKey is the key in the maps in callSet +type callSetKey struct { + receiver any + fname string +} + +func newCallSet() *callSet { + return &callSet{ + expected: make(map[callSetKey][]*Call), + expectedMu: &sync.Mutex{}, + exhausted: make(map[callSetKey][]*Call), + } +} + +func newOverridableCallSet() *callSet { + return &callSet{ + expected: make(map[callSetKey][]*Call), + expectedMu: &sync.Mutex{}, + exhausted: make(map[callSetKey][]*Call), + allowOverride: true, + } +} + +// Add adds a new expected call. +func (cs callSet) Add(call *Call) { + key := callSetKey{call.receiver, call.method} + + cs.expectedMu.Lock() + defer cs.expectedMu.Unlock() + + m := cs.expected + if call.exhausted() { + m = cs.exhausted + } + if cs.allowOverride { + m[key] = make([]*Call, 0) + } + + m[key] = append(m[key], call) +} + +// Remove removes an expected call. +func (cs callSet) Remove(call *Call) { + key := callSetKey{call.receiver, call.method} + + cs.expectedMu.Lock() + defer cs.expectedMu.Unlock() + + calls := cs.expected[key] + for i, c := range calls { + if c == call { + // maintain order for remaining calls + cs.expected[key] = append(calls[:i], calls[i+1:]...) + cs.exhausted[key] = append(cs.exhausted[key], call) + break + } + } +} + +// FindMatch searches for a matching call. Returns error with explanation message if no call matched. +func (cs callSet) FindMatch(receiver any, method string, args []any) (*Call, error) { + key := callSetKey{receiver, method} + + cs.expectedMu.Lock() + defer cs.expectedMu.Unlock() + + // Search through the expected calls. + expected := cs.expected[key] + var callsErrors bytes.Buffer + for _, call := range expected { + err := call.matches(args) + if err != nil { + _, _ = fmt.Fprintf(&callsErrors, "\n%v", err) + } else { + return call, nil + } + } + + // If we haven't found a match then search through the exhausted calls so we + // get useful error messages. + exhausted := cs.exhausted[key] + for _, call := range exhausted { + if err := call.matches(args); err != nil { + _, _ = fmt.Fprintf(&callsErrors, "\n%v", err) + continue + } + _, _ = fmt.Fprintf( + &callsErrors, "all expected calls for method %q have been exhausted", method, + ) + } + + if len(expected)+len(exhausted) == 0 { + _, _ = fmt.Fprintf(&callsErrors, "there are no expected calls of the method %q for that receiver", method) + } + + return nil, errors.New(callsErrors.String()) +} + +// Failures returns the calls that are not satisfied. +func (cs callSet) Failures() []*Call { + cs.expectedMu.Lock() + defer cs.expectedMu.Unlock() + + failures := make([]*Call, 0, len(cs.expected)) + for _, calls := range cs.expected { + for _, call := range calls { + if !call.satisfied() { + failures = append(failures, call) + } + } + } + return failures +} + +// Satisfied returns true in case all expected calls in this callSet are satisfied. 
+func (cs callSet) Satisfied() bool {
+	cs.expectedMu.Lock()
+	defer cs.expectedMu.Unlock()
+
+	for _, calls := range cs.expected {
+		for _, call := range calls {
+			if !call.satisfied() {
+				return false
+			}
+		}
+	}
+
+	return true
+}
diff --git a/cluster-autoscaler/vendor/go.uber.org/mock/gomock/controller.go b/cluster-autoscaler/vendor/go.uber.org/mock/gomock/controller.go
new file mode 100644
index 000000000000..9d17a2f0c79f
--- /dev/null
+++ b/cluster-autoscaler/vendor/go.uber.org/mock/gomock/controller.go
@@ -0,0 +1,318 @@
+// Copyright 2010 Google Inc.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//	http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package gomock
+
+import (
+	"context"
+	"fmt"
+	"reflect"
+	"runtime"
+	"sync"
+)
+
+// A TestReporter is something that can be used to report test failures. It
+// is satisfied by the standard library's *testing.T.
+type TestReporter interface {
+	Errorf(format string, args ...any)
+	Fatalf(format string, args ...any)
+}
+
+// TestHelper is a TestReporter that has the Helper method. It is satisfied
+// by the standard library's *testing.T.
+type TestHelper interface {
+	TestReporter
+	Helper()
+}
+
+// cleanuper is used to check if TestHelper also has the `Cleanup` method. A
+// common pattern is to pass in a `*testing.T` to
+// `NewController(t TestReporter)`. In Go 1.14+, `*testing.T` has a cleanup
+// method. This can be utilized to call `Finish()` so the caller of this library
+// does not have to.
+type cleanuper interface {
+	Cleanup(func())
+}
+
+// A Controller represents the top-level control of a mock ecosystem. It
+// defines the scope and lifetime of mock objects, as well as their
+// expectations. It is safe to call Controller's methods from multiple
+// goroutines. Each test should create a new Controller and invoke Finish via
+// defer.
+//
+//	func TestFoo(t *testing.T) {
+//		ctrl := gomock.NewController(t)
+//		// ..
+//	}
+//
+//	func TestBar(t *testing.T) {
+//		t.Run("Sub-Test-1", func(st *testing.T) {
+//			ctrl := gomock.NewController(st)
+//			// ..
+//		})
+//		t.Run("Sub-Test-2", func(st *testing.T) {
+//			ctrl := gomock.NewController(st)
+//			// ..
+//		})
+//	}
+type Controller struct {
+	// T should only be called within a generated mock. It is not intended to
+	// be used in user code and may be changed in future versions. T is the
+	// TestReporter passed in when creating the Controller via NewController.
+	// If the TestReporter does not implement a TestHelper it will be wrapped
+	// with a nopTestHelper.
+	T             TestHelper
+	mu            sync.Mutex
+	expectedCalls *callSet
+	finished      bool
+}
+
+// NewController returns a new Controller. It is the preferred way to create a Controller.
+//
+// Passing [*testing.T] registers a cleanup function to automatically call [Controller.Finish]
+// when the test and all its subtests complete.
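+//
+// A minimal usage sketch (NewMockFoo stands in for any mockgen-generated
+// constructor; the name is illustrative only):
+//
+//	func TestFoo(t *testing.T) {
+//		ctrl := gomock.NewController(t)
+//		mockObj := NewMockFoo(ctrl)
+//		mockObj.EXPECT().Bar(gomock.Eq(99)).Return(101)
+//		// No defer ctrl.Finish() is needed: Finish runs via t.Cleanup.
+//	}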
+func NewController(t TestReporter, opts ...ControllerOption) *Controller { + h, ok := t.(TestHelper) + if !ok { + h = &nopTestHelper{t} + } + ctrl := &Controller{ + T: h, + expectedCalls: newCallSet(), + } + for _, opt := range opts { + opt.apply(ctrl) + } + if c, ok := isCleanuper(ctrl.T); ok { + c.Cleanup(func() { + ctrl.T.Helper() + ctrl.finish(true, nil) + }) + } + + return ctrl +} + +// ControllerOption configures how a Controller should behave. +type ControllerOption interface { + apply(*Controller) +} + +type overridableExpectationsOption struct{} + +// WithOverridableExpectations allows for overridable call expectations +// i.e., subsequent call expectations override existing call expectations +func WithOverridableExpectations() overridableExpectationsOption { + return overridableExpectationsOption{} +} + +func (o overridableExpectationsOption) apply(ctrl *Controller) { + ctrl.expectedCalls = newOverridableCallSet() +} + +type cancelReporter struct { + t TestHelper + cancel func() +} + +func (r *cancelReporter) Errorf(format string, args ...any) { + r.t.Errorf(format, args...) +} +func (r *cancelReporter) Fatalf(format string, args ...any) { + defer r.cancel() + r.t.Fatalf(format, args...) +} + +func (r *cancelReporter) Helper() { + r.t.Helper() +} + +// WithContext returns a new Controller and a Context, which is cancelled on any +// fatal failure. +func WithContext(ctx context.Context, t TestReporter) (*Controller, context.Context) { + h, ok := t.(TestHelper) + if !ok { + h = &nopTestHelper{t: t} + } + + ctx, cancel := context.WithCancel(ctx) + return NewController(&cancelReporter{t: h, cancel: cancel}), ctx +} + +type nopTestHelper struct { + t TestReporter +} + +func (h *nopTestHelper) Errorf(format string, args ...any) { + h.t.Errorf(format, args...) +} +func (h *nopTestHelper) Fatalf(format string, args ...any) { + h.t.Fatalf(format, args...) +} + +func (h nopTestHelper) Helper() {} + +// RecordCall is called by a mock. It should not be called by user code. +func (ctrl *Controller) RecordCall(receiver any, method string, args ...any) *Call { + ctrl.T.Helper() + + recv := reflect.ValueOf(receiver) + for i := 0; i < recv.Type().NumMethod(); i++ { + if recv.Type().Method(i).Name == method { + return ctrl.RecordCallWithMethodType(receiver, method, recv.Method(i).Type(), args...) + } + } + ctrl.T.Fatalf("gomock: failed finding method %s on %T", method, receiver) + panic("unreachable") +} + +// RecordCallWithMethodType is called by a mock. It should not be called by user code. +func (ctrl *Controller) RecordCallWithMethodType(receiver any, method string, methodType reflect.Type, args ...any) *Call { + ctrl.T.Helper() + + call := newCall(ctrl.T, receiver, method, methodType, args...) + + ctrl.mu.Lock() + defer ctrl.mu.Unlock() + ctrl.expectedCalls.Add(call) + + return call +} + +// Call is called by a mock. It should not be called by user code. +func (ctrl *Controller) Call(receiver any, method string, args ...any) []any { + ctrl.T.Helper() + + // Nest this code so we can use defer to make sure the lock is released. + actions := func() []func([]any) []any { + ctrl.T.Helper() + ctrl.mu.Lock() + defer ctrl.mu.Unlock() + + expected, err := ctrl.expectedCalls.FindMatch(receiver, method, args) + if err != nil { + // callerInfo's skip should be updated if the number of calls between the user's test + // and this line changes, i.e. this code is wrapped in another anonymous function. + // 0 is us, 1 is controller.Call(), 2 is the generated mock, and 3 is the user's test. 
+			origin := callerInfo(3)
+			ctrl.T.Fatalf("Unexpected call to %T.%v(%v) at %s because: %s", receiver, method, args, origin, err)
+		}
+
+		// Two things happen here:
+		// * the matching call no longer needs to check prerequisite calls,
+		// * and the prerequisite calls are no longer expected, so remove them.
+		preReqCalls := expected.dropPrereqs()
+		for _, preReqCall := range preReqCalls {
+			ctrl.expectedCalls.Remove(preReqCall)
+		}
+
+		actions := expected.call()
+		if expected.exhausted() {
+			ctrl.expectedCalls.Remove(expected)
+		}
+		return actions
+	}()
+
+	var rets []any
+	for _, action := range actions {
+		if r := action(args); r != nil {
+			rets = r
+		}
+	}
+
+	return rets
+}
+
+// Finish checks to see if all the methods that were expected to be called were called.
+// It is not idempotent and therefore can only be invoked once.
+func (ctrl *Controller) Finish() {
+	// If we're currently panicking, probably because this is a deferred call.
+	// This must be recovered in the deferred function.
+	err := recover()
+	ctrl.finish(false, err)
+}
+
+// Satisfied returns whether all expected calls bound to this Controller have been satisfied.
+// Calling Finish is then guaranteed to not fail due to missing calls.
+func (ctrl *Controller) Satisfied() bool {
+	ctrl.mu.Lock()
+	defer ctrl.mu.Unlock()
+	return ctrl.expectedCalls.Satisfied()
+}
+
+func (ctrl *Controller) finish(cleanup bool, panicErr any) {
+	ctrl.T.Helper()
+
+	ctrl.mu.Lock()
+	defer ctrl.mu.Unlock()
+
+	if ctrl.finished {
+		if _, ok := isCleanuper(ctrl.T); !ok {
+			ctrl.T.Fatalf("Controller.Finish was called more than once. It has to be called exactly once.")
+		}
+		return
+	}
+	ctrl.finished = true
+
+	// Short-circuit, pass through the panic.
+	if panicErr != nil {
+		panic(panicErr)
+	}
+
+	// Check that all remaining expected calls are satisfied.
+	failures := ctrl.expectedCalls.Failures()
+	for _, call := range failures {
+		ctrl.T.Errorf("missing call(s) to %v", call)
+	}
+	if len(failures) != 0 {
+		if !cleanup {
+			ctrl.T.Fatalf("aborting test due to missing call(s)")
+			return
+		}
+		ctrl.T.Errorf("aborting test due to missing call(s)")
+	}
+}
+
+// callerInfo returns the file:line of the call site. skip is the number
+// of stack frames to skip when reporting. 0 is callerInfo's call site.
+func callerInfo(skip int) string {
+	if _, file, line, ok := runtime.Caller(skip + 1); ok {
+		return fmt.Sprintf("%s:%d", file, line)
+	}
+	return "unknown file"
+}
+
+// isCleanuper checks if t's base TestReporter has a Cleanup method.
+func isCleanuper(t TestReporter) (cleanuper, bool) {
+	tr := unwrapTestReporter(t)
+	c, ok := tr.(cleanuper)
+	return c, ok
+}
+
+// unwrapTestReporter unwraps TestReporter to the base implementation.
+func unwrapTestReporter(t TestReporter) TestReporter {
+	tr := t
+	switch nt := t.(type) {
+	case *cancelReporter:
+		tr = nt.t
+		if h, check := tr.(*nopTestHelper); check {
+			tr = h.t
+		}
+	case *nopTestHelper:
+		tr = nt.t
+	default:
+		// not wrapped
+	}
+	return tr
+}
diff --git a/cluster-autoscaler/vendor/go.uber.org/mock/gomock/doc.go b/cluster-autoscaler/vendor/go.uber.org/mock/gomock/doc.go
new file mode 100644
index 000000000000..696dda388209
--- /dev/null
+++ b/cluster-autoscaler/vendor/go.uber.org/mock/gomock/doc.go
@@ -0,0 +1,60 @@
+// Copyright 2022 Google LLC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Package gomock is a mock framework for Go. +// +// Standard usage: +// +// (1) Define an interface that you wish to mock. +// type MyInterface interface { +// SomeMethod(x int64, y string) +// } +// (2) Use mockgen to generate a mock from the interface. +// (3) Use the mock in a test: +// func TestMyThing(t *testing.T) { +// mockCtrl := gomock.NewController(t) +// mockObj := something.NewMockMyInterface(mockCtrl) +// mockObj.EXPECT().SomeMethod(4, "blah") +// // pass mockObj to a real object and play with it. +// } +// +// By default, expected calls are not enforced to run in any particular order. +// Call order dependency can be enforced by use of InOrder and/or Call.After. +// Call.After can create more varied call order dependencies, but InOrder is +// often more convenient. +// +// The following examples create equivalent call order dependencies. +// +// Example of using Call.After to chain expected call order: +// +// firstCall := mockObj.EXPECT().SomeMethod(1, "first") +// secondCall := mockObj.EXPECT().SomeMethod(2, "second").After(firstCall) +// mockObj.EXPECT().SomeMethod(3, "third").After(secondCall) +// +// Example of using InOrder to declare expected call order: +// +// gomock.InOrder( +// mockObj.EXPECT().SomeMethod(1, "first"), +// mockObj.EXPECT().SomeMethod(2, "second"), +// mockObj.EXPECT().SomeMethod(3, "third"), +// ) +// +// The standard TestReporter most users will pass to `NewController` is a +// `*testing.T` from the context of the test. Note that this will use the +// standard `t.Error` and `t.Fatal` methods to report what happened in the test. +// In some cases this can leave your testing package in a weird state if global +// state is used since `t.Fatal` is like calling panic in the middle of a +// function. In these cases it is recommended that you pass in your own +// `TestReporter`. +package gomock diff --git a/cluster-autoscaler/vendor/go.uber.org/mock/gomock/matchers.go b/cluster-autoscaler/vendor/go.uber.org/mock/gomock/matchers.go new file mode 100644 index 000000000000..c17255096ad5 --- /dev/null +++ b/cluster-autoscaler/vendor/go.uber.org/mock/gomock/matchers.go @@ -0,0 +1,443 @@ +// Copyright 2010 Google Inc. +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package gomock + +import ( + "fmt" + "reflect" + "regexp" + "strings" +) + +// A Matcher is a representation of a class of values. +// It is used to represent the valid or expected arguments to a mocked method. +type Matcher interface { + // Matches returns whether x is a match. + Matches(x any) bool + + // String describes what the matcher matches. 
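+	// For example, Eq(5) describes itself as "is equal to 5 (int)".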
+ String() string +} + +// WantFormatter modifies the given Matcher's String() method to the given +// Stringer. This allows for control on how the "Want" is formatted when +// printing . +func WantFormatter(s fmt.Stringer, m Matcher) Matcher { + type matcher interface { + Matches(x any) bool + } + + return struct { + matcher + fmt.Stringer + }{ + matcher: m, + Stringer: s, + } +} + +// StringerFunc type is an adapter to allow the use of ordinary functions as +// a Stringer. If f is a function with the appropriate signature, +// StringerFunc(f) is a Stringer that calls f. +type StringerFunc func() string + +// String implements fmt.Stringer. +func (f StringerFunc) String() string { + return f() +} + +// GotFormatter is used to better print failure messages. If a matcher +// implements GotFormatter, it will use the result from Got when printing +// the failure message. +type GotFormatter interface { + // Got is invoked with the received value. The result is used when + // printing the failure message. + Got(got any) string +} + +// GotFormatterFunc type is an adapter to allow the use of ordinary +// functions as a GotFormatter. If f is a function with the appropriate +// signature, GotFormatterFunc(f) is a GotFormatter that calls f. +type GotFormatterFunc func(got any) string + +// Got implements GotFormatter. +func (f GotFormatterFunc) Got(got any) string { + return f(got) +} + +// GotFormatterAdapter attaches a GotFormatter to a Matcher. +func GotFormatterAdapter(s GotFormatter, m Matcher) Matcher { + return struct { + GotFormatter + Matcher + }{ + GotFormatter: s, + Matcher: m, + } +} + +type anyMatcher struct{} + +func (anyMatcher) Matches(any) bool { + return true +} + +func (anyMatcher) String() string { + return "is anything" +} + +type condMatcher struct { + fn func(x any) bool +} + +func (c condMatcher) Matches(x any) bool { + return c.fn(x) +} + +func (condMatcher) String() string { + return "adheres to a custom condition" +} + +type eqMatcher struct { + x any +} + +func (e eqMatcher) Matches(x any) bool { + // In case, some value is nil + if e.x == nil || x == nil { + return reflect.DeepEqual(e.x, x) + } + + // Check if types assignable and convert them to common type + x1Val := reflect.ValueOf(e.x) + x2Val := reflect.ValueOf(x) + + if x1Val.Type().AssignableTo(x2Val.Type()) { + x1ValConverted := x1Val.Convert(x2Val.Type()) + return reflect.DeepEqual(x1ValConverted.Interface(), x2Val.Interface()) + } + + return false +} + +func (e eqMatcher) String() string { + return fmt.Sprintf("is equal to %v (%T)", e.x, e.x) +} + +type nilMatcher struct{} + +func (nilMatcher) Matches(x any) bool { + if x == nil { + return true + } + + v := reflect.ValueOf(x) + switch v.Kind() { + case reflect.Chan, reflect.Func, reflect.Interface, reflect.Map, + reflect.Ptr, reflect.Slice: + return v.IsNil() + } + + return false +} + +func (nilMatcher) String() string { + return "is nil" +} + +type notMatcher struct { + m Matcher +} + +func (n notMatcher) Matches(x any) bool { + return !n.m.Matches(x) +} + +func (n notMatcher) String() string { + return "not(" + n.m.String() + ")" +} + +type regexMatcher struct { + regex *regexp.Regexp +} + +func (m regexMatcher) Matches(x any) bool { + switch t := x.(type) { + case string: + return m.regex.MatchString(t) + case []byte: + return m.regex.Match(t) + default: + return false + } +} + +func (m regexMatcher) String() string { + return "matches regex " + m.regex.String() +} + +type assignableToTypeOfMatcher struct { + targetType reflect.Type +} + +func (m 
assignableToTypeOfMatcher) Matches(x any) bool { + return reflect.TypeOf(x).AssignableTo(m.targetType) +} + +func (m assignableToTypeOfMatcher) String() string { + return "is assignable to " + m.targetType.Name() +} + +type anyOfMatcher struct { + matchers []Matcher +} + +func (am anyOfMatcher) Matches(x any) bool { + for _, m := range am.matchers { + if m.Matches(x) { + return true + } + } + return false +} + +func (am anyOfMatcher) String() string { + ss := make([]string, 0, len(am.matchers)) + for _, matcher := range am.matchers { + ss = append(ss, matcher.String()) + } + return strings.Join(ss, " | ") +} + +type allMatcher struct { + matchers []Matcher +} + +func (am allMatcher) Matches(x any) bool { + for _, m := range am.matchers { + if !m.Matches(x) { + return false + } + } + return true +} + +func (am allMatcher) String() string { + ss := make([]string, 0, len(am.matchers)) + for _, matcher := range am.matchers { + ss = append(ss, matcher.String()) + } + return strings.Join(ss, "; ") +} + +type lenMatcher struct { + i int +} + +func (m lenMatcher) Matches(x any) bool { + v := reflect.ValueOf(x) + switch v.Kind() { + case reflect.Array, reflect.Chan, reflect.Map, reflect.Slice, reflect.String: + return v.Len() == m.i + default: + return false + } +} + +func (m lenMatcher) String() string { + return fmt.Sprintf("has length %d", m.i) +} + +type inAnyOrderMatcher struct { + x any +} + +func (m inAnyOrderMatcher) Matches(x any) bool { + given, ok := m.prepareValue(x) + if !ok { + return false + } + wanted, ok := m.prepareValue(m.x) + if !ok { + return false + } + + if given.Len() != wanted.Len() { + return false + } + + usedFromGiven := make([]bool, given.Len()) + foundFromWanted := make([]bool, wanted.Len()) + for i := 0; i < wanted.Len(); i++ { + wantedMatcher := Eq(wanted.Index(i).Interface()) + for j := 0; j < given.Len(); j++ { + if usedFromGiven[j] { + continue + } + if wantedMatcher.Matches(given.Index(j).Interface()) { + foundFromWanted[i] = true + usedFromGiven[j] = true + break + } + } + } + + missingFromWanted := 0 + for _, found := range foundFromWanted { + if !found { + missingFromWanted++ + } + } + extraInGiven := 0 + for _, used := range usedFromGiven { + if !used { + extraInGiven++ + } + } + + return extraInGiven == 0 && missingFromWanted == 0 +} + +func (m inAnyOrderMatcher) prepareValue(x any) (reflect.Value, bool) { + xValue := reflect.ValueOf(x) + switch xValue.Kind() { + case reflect.Slice, reflect.Array: + return xValue, true + default: + return reflect.Value{}, false + } +} + +func (m inAnyOrderMatcher) String() string { + return fmt.Sprintf("has the same elements as %v", m.x) +} + +// Constructors + +// All returns a composite Matcher that returns true if and only all of the +// matchers return true. +func All(ms ...Matcher) Matcher { return allMatcher{ms} } + +// Any returns a matcher that always matches. +func Any() Matcher { return anyMatcher{} } + +// Cond returns a matcher that matches when the given function returns true +// after passing it the parameter to the mock function. +// This is particularly useful in case you want to match over a field of a custom struct, or dynamic logic. +// +// Example usage: +// +// Cond(func(x any){return x.(int) == 1}).Matches(1) // returns true +// Cond(func(x any){return x.(int) == 2}).Matches(1) // returns false +func Cond(fn func(x any) bool) Matcher { return condMatcher{fn} } + +// AnyOf returns a composite Matcher that returns true if at least one of the +// matchers returns true. 
+//
+// Example usage:
+//
+//	AnyOf(1, 2, 3).Matches(2) // returns true
+//	AnyOf(1, 2, 3).Matches(10) // returns false
+//	AnyOf(Nil(), Len(2)).Matches(nil) // returns true
+//	AnyOf(Nil(), Len(2)).Matches("hi") // returns true
+//	AnyOf(Nil(), Len(2)).Matches("hello") // returns false
+func AnyOf(xs ...any) Matcher {
+	ms := make([]Matcher, 0, len(xs))
+	for _, x := range xs {
+		if m, ok := x.(Matcher); ok {
+			ms = append(ms, m)
+		} else {
+			ms = append(ms, Eq(x))
+		}
+	}
+	return anyOfMatcher{ms}
+}
+
+// Eq returns a matcher that matches on equality.
+//
+// Example usage:
+//
+//	Eq(5).Matches(5) // returns true
+//	Eq(5).Matches(4) // returns false
+func Eq(x any) Matcher { return eqMatcher{x} }
+
+// Len returns a matcher that matches on length. This matcher returns false if
+// it is compared to a type that is not an array, chan, map, slice, or string.
+func Len(i int) Matcher {
+	return lenMatcher{i}
+}
+
+// Nil returns a matcher that matches if the received value is nil.
+//
+// Example usage:
+//
+//	var x *bytes.Buffer
+//	Nil().Matches(x) // returns true
+//	x = &bytes.Buffer{}
+//	Nil().Matches(x) // returns false
+func Nil() Matcher { return nilMatcher{} }
+
+// Not reverses the results of its given child matcher.
+//
+// Example usage:
+//
+//	Not(Eq(5)).Matches(4) // returns true
+//	Not(Eq(5)).Matches(5) // returns false
+func Not(x any) Matcher {
+	if m, ok := x.(Matcher); ok {
+		return notMatcher{m}
+	}
+	return notMatcher{Eq(x)}
+}
+
+// Regex checks whether the parameter matches the associated regex.
+//
+// Example usage:
+//
+//	Regex("[0-9]{2}:[0-9]{2}").Matches("23:02") // returns true
+//	Regex("[0-9]{2}:[0-9]{2}").Matches([]byte{'2', '3', ':', '0', '2'}) // returns true
+//	Regex("[0-9]{2}:[0-9]{2}").Matches("hello world") // returns false
+//	Regex("[0-9]{2}").Matches(21) // returns false as it's not a valid type
+func Regex(regexStr string) Matcher {
+	return regexMatcher{regex: regexp.MustCompile(regexStr)}
+}
+
+// AssignableToTypeOf is a Matcher that matches if the parameter to the mock
+// function is assignable to the type of the parameter to this function.
+//
+// Example usage:
+//
+//	var s fmt.Stringer = &bytes.Buffer{}
+//	AssignableToTypeOf(s).Matches(time.Second) // returns true
+//	AssignableToTypeOf(s).Matches(99) // returns false
+//
+//	var ctx = reflect.TypeOf((*context.Context)(nil)).Elem()
+//	AssignableToTypeOf(ctx).Matches(context.Background()) // returns true
func AssignableToTypeOf(x any) Matcher {
+	if xt, ok := x.(reflect.Type); ok {
+		return assignableToTypeOfMatcher{xt}
+	}
+	return assignableToTypeOfMatcher{reflect.TypeOf(x)}
+}
+
+// InAnyOrder is a Matcher that returns true for collections of the same elements ignoring the order.
+//
+// Example usage:
+//
+//	InAnyOrder([]int{1, 2, 3}).Matches([]int{1, 3, 2}) // returns true
+//	InAnyOrder([]int{1, 2, 3}).Matches([]int{1, 2}) // returns false
+func InAnyOrder(x any) Matcher {
+	return inAnyOrderMatcher{x}
+}
diff --git a/cluster-autoscaler/vendor/go.uber.org/mock/mockgen/model/model.go b/cluster-autoscaler/vendor/go.uber.org/mock/mockgen/model/model.go
new file mode 100644
index 000000000000..853dbf2d6b54
--- /dev/null
+++ b/cluster-autoscaler/vendor/go.uber.org/mock/mockgen/model/model.go
@@ -0,0 +1,533 @@
+// Copyright 2012 Google Inc.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +// Package model contains the data model necessary for generating mock implementations. +package model + +import ( + "encoding/gob" + "fmt" + "io" + "reflect" + "strings" +) + +// pkgPath is the importable path for package model +const pkgPath = "go.uber.org/mock/mockgen/model" + +// Package is a Go package. It may be a subset. +type Package struct { + Name string + PkgPath string + Interfaces []*Interface + DotImports []string +} + +// Print writes the package name and its exported interfaces. +func (pkg *Package) Print(w io.Writer) { + _, _ = fmt.Fprintf(w, "package %s\n", pkg.Name) + for _, intf := range pkg.Interfaces { + intf.Print(w) + } +} + +// Imports returns the imports needed by the Package as a set of import paths. +func (pkg *Package) Imports() map[string]bool { + im := make(map[string]bool) + for _, intf := range pkg.Interfaces { + intf.addImports(im) + for _, tp := range intf.TypeParams { + tp.Type.addImports(im) + } + } + return im +} + +// Interface is a Go interface. +type Interface struct { + Name string + Methods []*Method + TypeParams []*Parameter +} + +// Print writes the interface name and its methods. +func (intf *Interface) Print(w io.Writer) { + _, _ = fmt.Fprintf(w, "interface %s\n", intf.Name) + for _, m := range intf.Methods { + m.Print(w) + } +} + +func (intf *Interface) addImports(im map[string]bool) { + for _, m := range intf.Methods { + m.addImports(im) + } +} + +// AddMethod adds a new method, de-duplicating by method name. +func (intf *Interface) AddMethod(m *Method) { + for _, me := range intf.Methods { + if me.Name == m.Name { + return + } + } + intf.Methods = append(intf.Methods, m) +} + +// Method is a single method of an interface. +type Method struct { + Name string + In, Out []*Parameter + Variadic *Parameter // may be nil +} + +// Print writes the method name and its signature. +func (m *Method) Print(w io.Writer) { + _, _ = fmt.Fprintf(w, " - method %s\n", m.Name) + if len(m.In) > 0 { + _, _ = fmt.Fprintf(w, " in:\n") + for _, p := range m.In { + p.Print(w) + } + } + if m.Variadic != nil { + _, _ = fmt.Fprintf(w, " ...:\n") + m.Variadic.Print(w) + } + if len(m.Out) > 0 { + _, _ = fmt.Fprintf(w, " out:\n") + for _, p := range m.Out { + p.Print(w) + } + } +} + +func (m *Method) addImports(im map[string]bool) { + for _, p := range m.In { + p.Type.addImports(im) + } + if m.Variadic != nil { + m.Variadic.Type.addImports(im) + } + for _, p := range m.Out { + p.Type.addImports(im) + } +} + +// Parameter is an argument or return parameter of a method. +type Parameter struct { + Name string // may be empty + Type Type +} + +// Print writes a method parameter. +func (p *Parameter) Print(w io.Writer) { + n := p.Name + if n == "" { + n = `""` + } + _, _ = fmt.Fprintf(w, " - %v: %v\n", n, p.Type.String(nil, "")) +} + +// Type is a Go type. +type Type interface { + String(pm map[string]string, pkgOverride string) string + addImports(im map[string]bool) +} + +func init() { + // Call gob.RegisterName with pkgPath as prefix to avoid conflicting with + // github.com/golang/mock/mockgen/model 's registration. 
+ gob.RegisterName(pkgPath+".ArrayType", &ArrayType{}) + gob.RegisterName(pkgPath+".ChanType", &ChanType{}) + gob.RegisterName(pkgPath+".FuncType", &FuncType{}) + gob.RegisterName(pkgPath+".MapType", &MapType{}) + gob.RegisterName(pkgPath+".NamedType", &NamedType{}) + gob.RegisterName(pkgPath+".PointerType", &PointerType{}) + + // Call gob.RegisterName to make sure it has the consistent name registered + // for both gob decoder and encoder. + // + // For a non-pointer type, gob.Register will try to get package full path by + // calling rt.PkgPath() for a name to register. If your project has vendor + // directory, it is possible that PkgPath will get a path like this: + // ../../../vendor/go.uber.org/mock/mockgen/model + gob.RegisterName(pkgPath+".PredeclaredType", PredeclaredType("")) +} + +// ArrayType is an array or slice type. +type ArrayType struct { + Len int // -1 for slices, >= 0 for arrays + Type Type +} + +func (at *ArrayType) String(pm map[string]string, pkgOverride string) string { + s := "[]" + if at.Len > -1 { + s = fmt.Sprintf("[%d]", at.Len) + } + return s + at.Type.String(pm, pkgOverride) +} + +func (at *ArrayType) addImports(im map[string]bool) { at.Type.addImports(im) } + +// ChanType is a channel type. +type ChanType struct { + Dir ChanDir // 0, 1 or 2 + Type Type +} + +func (ct *ChanType) String(pm map[string]string, pkgOverride string) string { + s := ct.Type.String(pm, pkgOverride) + if ct.Dir == RecvDir { + return "<-chan " + s + } + if ct.Dir == SendDir { + return "chan<- " + s + } + return "chan " + s +} + +func (ct *ChanType) addImports(im map[string]bool) { ct.Type.addImports(im) } + +// ChanDir is a channel direction. +type ChanDir int + +// Constants for channel directions. +const ( + RecvDir ChanDir = 1 + SendDir ChanDir = 2 +) + +// FuncType is a function type. +type FuncType struct { + In, Out []*Parameter + Variadic *Parameter // may be nil +} + +func (ft *FuncType) String(pm map[string]string, pkgOverride string) string { + args := make([]string, len(ft.In)) + for i, p := range ft.In { + args[i] = p.Type.String(pm, pkgOverride) + } + if ft.Variadic != nil { + args = append(args, "..."+ft.Variadic.Type.String(pm, pkgOverride)) + } + rets := make([]string, len(ft.Out)) + for i, p := range ft.Out { + rets[i] = p.Type.String(pm, pkgOverride) + } + retString := strings.Join(rets, ", ") + if nOut := len(ft.Out); nOut == 1 { + retString = " " + retString + } else if nOut > 1 { + retString = " (" + retString + ")" + } + return "func(" + strings.Join(args, ", ") + ")" + retString +} + +func (ft *FuncType) addImports(im map[string]bool) { + for _, p := range ft.In { + p.Type.addImports(im) + } + if ft.Variadic != nil { + ft.Variadic.Type.addImports(im) + } + for _, p := range ft.Out { + p.Type.addImports(im) + } +} + +// MapType is a map type. +type MapType struct { + Key, Value Type +} + +func (mt *MapType) String(pm map[string]string, pkgOverride string) string { + return "map[" + mt.Key.String(pm, pkgOverride) + "]" + mt.Value.String(pm, pkgOverride) +} + +func (mt *MapType) addImports(im map[string]bool) { + mt.Key.addImports(im) + mt.Value.addImports(im) +} + +// NamedType is an exported type in a package. +type NamedType struct { + Package string // may be empty + Type string + TypeParams *TypeParametersType +} + +func (nt *NamedType) String(pm map[string]string, pkgOverride string) string { + if pkgOverride == nt.Package { + return nt.Type + nt.TypeParams.String(pm, pkgOverride) + } + prefix := pm[nt.Package] + if prefix != "" { + return prefix + "." 
+ nt.Type + nt.TypeParams.String(pm, pkgOverride) + } + + return nt.Type + nt.TypeParams.String(pm, pkgOverride) +} + +func (nt *NamedType) addImports(im map[string]bool) { + if nt.Package != "" { + im[nt.Package] = true + } + nt.TypeParams.addImports(im) +} + +// PointerType is a pointer to another type. +type PointerType struct { + Type Type +} + +func (pt *PointerType) String(pm map[string]string, pkgOverride string) string { + return "*" + pt.Type.String(pm, pkgOverride) +} +func (pt *PointerType) addImports(im map[string]bool) { pt.Type.addImports(im) } + +// PredeclaredType is a predeclared type such as "int". +type PredeclaredType string + +func (pt PredeclaredType) String(map[string]string, string) string { return string(pt) } +func (pt PredeclaredType) addImports(map[string]bool) {} + +// TypeParametersType contains type parameters for a NamedType. +type TypeParametersType struct { + TypeParameters []Type +} + +func (tp *TypeParametersType) String(pm map[string]string, pkgOverride string) string { + if tp == nil || len(tp.TypeParameters) == 0 { + return "" + } + var sb strings.Builder + sb.WriteString("[") + for i, v := range tp.TypeParameters { + if i != 0 { + sb.WriteString(", ") + } + sb.WriteString(v.String(pm, pkgOverride)) + } + sb.WriteString("]") + return sb.String() +} + +func (tp *TypeParametersType) addImports(im map[string]bool) { + if tp == nil { + return + } + for _, v := range tp.TypeParameters { + v.addImports(im) + } +} + +// The following code is intended to be called by the program generated by ../reflect.go. + +// InterfaceFromInterfaceType returns a pointer to an interface for the +// given reflection interface type. +func InterfaceFromInterfaceType(it reflect.Type) (*Interface, error) { + if it.Kind() != reflect.Interface { + return nil, fmt.Errorf("%v is not an interface", it) + } + intf := &Interface{} + + for i := 0; i < it.NumMethod(); i++ { + mt := it.Method(i) + // TODO: need to skip unexported methods? or just raise an error? + m := &Method{ + Name: mt.Name, + } + + var err error + m.In, m.Variadic, m.Out, err = funcArgsFromType(mt.Type) + if err != nil { + return nil, err + } + + intf.AddMethod(m) + } + + return intf, nil +} + +// t's Kind must be a reflect.Func. +func funcArgsFromType(t reflect.Type) (in []*Parameter, variadic *Parameter, out []*Parameter, err error) { + nin := t.NumIn() + if t.IsVariadic() { + nin-- + } + var p *Parameter + for i := 0; i < nin; i++ { + p, err = parameterFromType(t.In(i)) + if err != nil { + return + } + in = append(in, p) + } + if t.IsVariadic() { + p, err = parameterFromType(t.In(nin).Elem()) + if err != nil { + return + } + variadic = p + } + for i := 0; i < t.NumOut(); i++ { + p, err = parameterFromType(t.Out(i)) + if err != nil { + return + } + out = append(out, p) + } + return +} + +func parameterFromType(t reflect.Type) (*Parameter, error) { + tt, err := typeFromType(t) + if err != nil { + return nil, err + } + return &Parameter{Type: tt}, nil +} + +var errorType = reflect.TypeOf((*error)(nil)).Elem() + +var byteType = reflect.TypeOf(byte(0)) + +func typeFromType(t reflect.Type) (Type, error) { + // Hack workaround for https://golang.org/issue/3853. + // This explicit check should not be necessary. + if t == byteType { + return PredeclaredType("byte"), nil + } + + if imp := t.PkgPath(); imp != "" { + return &NamedType{ + Package: impPath(imp), + Type: t.Name(), + }, nil + } + + // only unnamed or predeclared types after here + + // Lots of types have element types. 
Let's do the parsing and error checking for all of them.
+	var elemType Type
+	switch t.Kind() {
+	case reflect.Array, reflect.Chan, reflect.Map, reflect.Ptr, reflect.Slice:
+		var err error
+		elemType, err = typeFromType(t.Elem())
+		if err != nil {
+			return nil, err
+		}
+	}
+
+	switch t.Kind() {
+	case reflect.Array:
+		return &ArrayType{
+			Len:  t.Len(),
+			Type: elemType,
+		}, nil
+	case reflect.Bool, reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64,
+		reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr,
+		reflect.Float32, reflect.Float64, reflect.Complex64, reflect.Complex128, reflect.String:
+		return PredeclaredType(t.Kind().String()), nil
+	case reflect.Chan:
+		var dir ChanDir
+		switch t.ChanDir() {
+		case reflect.RecvDir:
+			dir = RecvDir
+		case reflect.SendDir:
+			dir = SendDir
+		}
+		return &ChanType{
+			Dir:  dir,
+			Type: elemType,
+		}, nil
+	case reflect.Func:
+		in, variadic, out, err := funcArgsFromType(t)
+		if err != nil {
+			return nil, err
+		}
+		return &FuncType{
+			In:       in,
+			Out:      out,
+			Variadic: variadic,
+		}, nil
+	case reflect.Interface:
+		// Two special interfaces.
+		if t.NumMethod() == 0 {
+			return PredeclaredType("any"), nil
+		}
+		if t == errorType {
+			return PredeclaredType("error"), nil
+		}
+	case reflect.Map:
+		kt, err := typeFromType(t.Key())
+		if err != nil {
+			return nil, err
+		}
+		return &MapType{
+			Key:   kt,
+			Value: elemType,
+		}, nil
+	case reflect.Ptr:
+		return &PointerType{
+			Type: elemType,
+		}, nil
+	case reflect.Slice:
+		return &ArrayType{
+			Len:  -1,
+			Type: elemType,
+		}, nil
+	case reflect.Struct:
+		if t.NumField() == 0 {
+			return PredeclaredType("struct{}"), nil
+		}
+	}
+
+	// TODO: Struct, UnsafePointer
+	return nil, fmt.Errorf("can't yet turn %v (%v) into a model.Type", t, t.Kind())
+}
+
+// impPath sanitizes the package path returned by `PkgPath` method of a reflect Type so that
+// it is importable. PkgPath might return a path that includes "vendor". These paths do not
+// compile, so we need to remove everything up to and including "/vendor/".
+// See https://github.com/golang/go/issues/12019.
+func impPath(imp string) string {
+	if strings.HasPrefix(imp, "vendor/") {
+		imp = "/" + imp
+	}
+	if i := strings.LastIndex(imp, "/vendor/"); i != -1 {
+		imp = imp[i+len("/vendor/"):]
+	}
+	return imp
+}
+
+// ErrorInterface represents the built-in error interface.
+var ErrorInterface = Interface{
+	Name: "error",
+	Methods: []*Method{
+		{
+			Name: "Error",
+			Out: []*Parameter{
+				{
+					Name: "",
+					Type: PredeclaredType("string"),
+				},
+			},
+		},
+	},
+}
diff --git a/cluster-autoscaler/vendor/golang.org/x/crypto/internal/poly1305/bits_compat.go b/cluster-autoscaler/vendor/golang.org/x/crypto/internal/poly1305/bits_compat.go
deleted file mode 100644
index d33c8890fc53..000000000000
--- a/cluster-autoscaler/vendor/golang.org/x/crypto/internal/poly1305/bits_compat.go
+++ /dev/null
@@ -1,39 +0,0 @@
-// Copyright 2019 The Go Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style
-// license that can be found in the LICENSE file.
-
-//go:build !go1.13
-
-package poly1305
-
-// Generic fallbacks for the math/bits intrinsics, copied from
-// src/math/bits/bits.go. They were added in Go 1.12, but Add64 and Sum64 had
-// variable time fallbacks until Go 1.13.
- -func bitsAdd64(x, y, carry uint64) (sum, carryOut uint64) { - sum = x + y + carry - carryOut = ((x & y) | ((x | y) &^ sum)) >> 63 - return -} - -func bitsSub64(x, y, borrow uint64) (diff, borrowOut uint64) { - diff = x - y - borrow - borrowOut = ((^x & y) | (^(x ^ y) & diff)) >> 63 - return -} - -func bitsMul64(x, y uint64) (hi, lo uint64) { - const mask32 = 1<<32 - 1 - x0 := x & mask32 - x1 := x >> 32 - y0 := y & mask32 - y1 := y >> 32 - w0 := x0 * y0 - t := x1*y0 + w0>>32 - w1 := t & mask32 - w2 := t >> 32 - w1 += x0 * y1 - hi = x1*y1 + w2 + w1>>32 - lo = x * y - return -} diff --git a/cluster-autoscaler/vendor/golang.org/x/crypto/internal/poly1305/bits_go1.13.go b/cluster-autoscaler/vendor/golang.org/x/crypto/internal/poly1305/bits_go1.13.go deleted file mode 100644 index 495c1fa69725..000000000000 --- a/cluster-autoscaler/vendor/golang.org/x/crypto/internal/poly1305/bits_go1.13.go +++ /dev/null @@ -1,21 +0,0 @@ -// Copyright 2019 The Go Authors. All rights reserved. -// Use of this source code is governed by a BSD-style -// license that can be found in the LICENSE file. - -//go:build go1.13 - -package poly1305 - -import "math/bits" - -func bitsAdd64(x, y, carry uint64) (sum, carryOut uint64) { - return bits.Add64(x, y, carry) -} - -func bitsSub64(x, y, borrow uint64) (diff, borrowOut uint64) { - return bits.Sub64(x, y, borrow) -} - -func bitsMul64(x, y uint64) (hi, lo uint64) { - return bits.Mul64(x, y) -} diff --git a/cluster-autoscaler/vendor/golang.org/x/crypto/internal/poly1305/sum_generic.go b/cluster-autoscaler/vendor/golang.org/x/crypto/internal/poly1305/sum_generic.go index e041da5ea3e7..ec2202bd7d5f 100644 --- a/cluster-autoscaler/vendor/golang.org/x/crypto/internal/poly1305/sum_generic.go +++ b/cluster-autoscaler/vendor/golang.org/x/crypto/internal/poly1305/sum_generic.go @@ -7,7 +7,10 @@ package poly1305 -import "encoding/binary" +import ( + "encoding/binary" + "math/bits" +) // Poly1305 [RFC 7539] is a relatively simple algorithm: the authentication tag // for a 64 bytes message is approximately @@ -114,13 +117,13 @@ type uint128 struct { } func mul64(a, b uint64) uint128 { - hi, lo := bitsMul64(a, b) + hi, lo := bits.Mul64(a, b) return uint128{lo, hi} } func add128(a, b uint128) uint128 { - lo, c := bitsAdd64(a.lo, b.lo, 0) - hi, c := bitsAdd64(a.hi, b.hi, c) + lo, c := bits.Add64(a.lo, b.lo, 0) + hi, c := bits.Add64(a.hi, b.hi, c) if c != 0 { panic("poly1305: unexpected overflow") } @@ -155,8 +158,8 @@ func updateGeneric(state *macState, msg []byte) { // hide leading zeroes. For full chunks, that's 1 << 128, so we can just // add 1 to the most significant (2¹²⁸) limb, h2. 
if len(msg) >= TagSize { - h0, c = bitsAdd64(h0, binary.LittleEndian.Uint64(msg[0:8]), 0) - h1, c = bitsAdd64(h1, binary.LittleEndian.Uint64(msg[8:16]), c) + h0, c = bits.Add64(h0, binary.LittleEndian.Uint64(msg[0:8]), 0) + h1, c = bits.Add64(h1, binary.LittleEndian.Uint64(msg[8:16]), c) h2 += c + 1 msg = msg[TagSize:] @@ -165,8 +168,8 @@ func updateGeneric(state *macState, msg []byte) { copy(buf[:], msg) buf[len(msg)] = 1 - h0, c = bitsAdd64(h0, binary.LittleEndian.Uint64(buf[0:8]), 0) - h1, c = bitsAdd64(h1, binary.LittleEndian.Uint64(buf[8:16]), c) + h0, c = bits.Add64(h0, binary.LittleEndian.Uint64(buf[0:8]), 0) + h1, c = bits.Add64(h1, binary.LittleEndian.Uint64(buf[8:16]), c) h2 += c msg = nil @@ -219,9 +222,9 @@ func updateGeneric(state *macState, msg []byte) { m3 := h2r1 t0 := m0.lo - t1, c := bitsAdd64(m1.lo, m0.hi, 0) - t2, c := bitsAdd64(m2.lo, m1.hi, c) - t3, _ := bitsAdd64(m3.lo, m2.hi, c) + t1, c := bits.Add64(m1.lo, m0.hi, 0) + t2, c := bits.Add64(m2.lo, m1.hi, c) + t3, _ := bits.Add64(m3.lo, m2.hi, c) // Now we have the result as 4 64-bit limbs, and we need to reduce it // modulo 2¹³⁰ - 5. The special shape of this Crandall prime lets us do @@ -243,14 +246,14 @@ func updateGeneric(state *macState, msg []byte) { // To add c * 5 to h, we first add cc = c * 4, and then add (cc >> 2) = c. - h0, c = bitsAdd64(h0, cc.lo, 0) - h1, c = bitsAdd64(h1, cc.hi, c) + h0, c = bits.Add64(h0, cc.lo, 0) + h1, c = bits.Add64(h1, cc.hi, c) h2 += c cc = shiftRightBy2(cc) - h0, c = bitsAdd64(h0, cc.lo, 0) - h1, c = bitsAdd64(h1, cc.hi, c) + h0, c = bits.Add64(h0, cc.lo, 0) + h1, c = bits.Add64(h1, cc.hi, c) h2 += c // h2 is at most 3 + 1 + 1 = 5, making the whole of h at most @@ -287,9 +290,9 @@ func finalize(out *[TagSize]byte, h *[3]uint64, s *[2]uint64) { // in constant time, we compute t = h - (2¹³⁰ - 5), and select h as the // result if the subtraction underflows, and t otherwise. - hMinusP0, b := bitsSub64(h0, p0, 0) - hMinusP1, b := bitsSub64(h1, p1, b) - _, b = bitsSub64(h2, p2, b) + hMinusP0, b := bits.Sub64(h0, p0, 0) + hMinusP1, b := bits.Sub64(h1, p1, b) + _, b = bits.Sub64(h2, p2, b) // h = h if h < p else h - p h0 = select64(b, h0, hMinusP0) @@ -301,8 +304,8 @@ func finalize(out *[TagSize]byte, h *[3]uint64, s *[2]uint64) { // // by just doing a wide addition with the 128 low bits of h and discarding // the overflow. 
- h0, c := bitsAdd64(h0, s[0], 0) - h1, _ = bitsAdd64(h1, s[1], c) + h0, c := bits.Add64(h0, s[0], 0) + h1, _ = bits.Add64(h1, s[1], c) binary.LittleEndian.PutUint64(out[0:8], h0) binary.LittleEndian.PutUint64(out[8:16], h1) diff --git a/cluster-autoscaler/vendor/golang.org/x/crypto/internal/poly1305/sum_ppc64le.s b/cluster-autoscaler/vendor/golang.org/x/crypto/internal/poly1305/sum_ppc64le.s index d2ca5deeb9f5..b3c1699bff51 100644 --- a/cluster-autoscaler/vendor/golang.org/x/crypto/internal/poly1305/sum_ppc64le.s +++ b/cluster-autoscaler/vendor/golang.org/x/crypto/internal/poly1305/sum_ppc64le.s @@ -19,15 +19,14 @@ #define POLY1305_MUL(h0, h1, h2, r0, r1, t0, t1, t2, t3, t4, t5) \ MULLD r0, h0, t0; \ - MULLD r0, h1, t4; \ MULHDU r0, h0, t1; \ + MULLD r0, h1, t4; \ MULHDU r0, h1, t5; \ ADDC t4, t1, t1; \ MULLD r0, h2, t2; \ - ADDZE t5; \ MULHDU r1, h0, t4; \ MULLD r1, h0, h0; \ - ADD t5, t2, t2; \ + ADDE t5, t2, t2; \ ADDC h0, t1, t1; \ MULLD h2, r1, t3; \ ADDZE t4, h0; \ @@ -37,13 +36,11 @@ ADDE t5, t3, t3; \ ADDC h0, t2, t2; \ MOVD $-4, t4; \ - MOVD t0, h0; \ - MOVD t1, h1; \ ADDZE t3; \ - ANDCC $3, t2, h2; \ - AND t2, t4, t0; \ + RLDICL $0, t2, $62, h2; \ + AND t2, t4, h0; \ ADDC t0, h0, h0; \ - ADDE t3, h1, h1; \ + ADDE t3, t1, h1; \ SLD $62, t3, t4; \ SRD $2, t2; \ ADDZE h2; \ @@ -75,6 +72,7 @@ TEXT ·update(SB), $0-32 loop: POLY1305_ADD(R4, R8, R9, R10, R20, R21, R22) + PCALIGN $16 multiply: POLY1305_MUL(R8, R9, R10, R11, R12, R16, R17, R18, R14, R20, R21) ADD $-16, R5 diff --git a/cluster-autoscaler/vendor/golang.org/x/net/html/token.go b/cluster-autoscaler/vendor/golang.org/x/net/html/token.go index de67f938a14b..3c57880d6979 100644 --- a/cluster-autoscaler/vendor/golang.org/x/net/html/token.go +++ b/cluster-autoscaler/vendor/golang.org/x/net/html/token.go @@ -910,9 +910,6 @@ func (z *Tokenizer) readTagAttrKey() { return } switch c { - case ' ', '\n', '\r', '\t', '\f', '/': - z.pendingAttr[0].end = z.raw.end - 1 - return case '=': if z.pendingAttr[0].start+1 == z.raw.end { // WHATWG 13.2.5.32, if we see an equals sign before the attribute name @@ -920,7 +917,9 @@ func (z *Tokenizer) readTagAttrKey() { continue } fallthrough - case '>': + case ' ', '\n', '\r', '\t', '\f', '/', '>': + // WHATWG 13.2.5.33 Attribute name state + // We need to reconsume the char in the after attribute name state to support the / character z.raw.end-- z.pendingAttr[0].end = z.raw.end return @@ -939,6 +938,11 @@ func (z *Tokenizer) readTagAttrVal() { if z.err != nil { return } + if c == '/' { + // WHATWG 13.2.5.34 After attribute name state + // U+002F SOLIDUS (/) - Switch to the self-closing start tag state. + return + } if c != '=' { z.raw.end-- return diff --git a/cluster-autoscaler/vendor/golang.org/x/net/http2/frame.go b/cluster-autoscaler/vendor/golang.org/x/net/http2/frame.go index c1f6b90dc32f..43557ab7e977 100644 --- a/cluster-autoscaler/vendor/golang.org/x/net/http2/frame.go +++ b/cluster-autoscaler/vendor/golang.org/x/net/http2/frame.go @@ -1510,13 +1510,12 @@ func (mh *MetaHeadersFrame) checkPseudos() error { } func (fr *Framer) maxHeaderStringLen() int { - v := fr.maxHeaderListSize() - if uint32(int(v)) == v { - return int(v) + v := int(fr.maxHeaderListSize()) + if v < 0 { + // If maxHeaderListSize overflows an int, use no limit (0). 
+ return 0 } - // They had a crazy big number for MaxHeaderBytes anyway, - // so give them unlimited header lengths: - return 0 + return v } // readMetaFrame returns 0 or more CONTINUATION frames from fr and @@ -1565,6 +1564,7 @@ func (fr *Framer) readMetaFrame(hf *HeadersFrame) (*MetaHeadersFrame, error) { if size > remainSize { hdec.SetEmitEnabled(false) mh.Truncated = true + remainSize = 0 return } remainSize -= size @@ -1577,6 +1577,36 @@ func (fr *Framer) readMetaFrame(hf *HeadersFrame) (*MetaHeadersFrame, error) { var hc headersOrContinuation = hf for { frag := hc.HeaderBlockFragment() + + // Avoid parsing large amounts of headers that we will then discard. + // If the sender exceeds the max header list size by too much, + // skip parsing the fragment and close the connection. + // + // "Too much" is either any CONTINUATION frame after we've already + // exceeded the max header list size (in which case remainSize is 0), + // or a frame whose encoded size is more than twice the remaining + // header list bytes we're willing to accept. + if int64(len(frag)) > int64(2*remainSize) { + if VerboseLogs { + log.Printf("http2: header list too large") + } + // It would be nice to send a RST_STREAM before sending the GOAWAY, + // but the structure of the server's frame writer makes this difficult. + return nil, ConnectionError(ErrCodeProtocol) + } + + // Also close the connection after any CONTINUATION frame following an + // invalid header, since we stop tracking the size of the headers after + // an invalid one. + if invalid != nil { + if VerboseLogs { + log.Printf("http2: invalid header: %v", invalid) + } + // It would be nice to send a RST_STREAM before sending the GOAWAY, + // but the structure of the server's frame writer makes this difficult. + return nil, ConnectionError(ErrCodeProtocol) + } + if _, err := hdec.Write(frag); err != nil { return nil, ConnectionError(ErrCodeCompression) } diff --git a/cluster-autoscaler/vendor/golang.org/x/net/http2/pipe.go b/cluster-autoscaler/vendor/golang.org/x/net/http2/pipe.go index 684d984fd96a..3b9f06b96244 100644 --- a/cluster-autoscaler/vendor/golang.org/x/net/http2/pipe.go +++ b/cluster-autoscaler/vendor/golang.org/x/net/http2/pipe.go @@ -77,7 +77,10 @@ func (p *pipe) Read(d []byte) (n int, err error) { } } -var errClosedPipeWrite = errors.New("write on closed buffer") +var ( + errClosedPipeWrite = errors.New("write on closed buffer") + errUninitializedPipeWrite = errors.New("write on uninitialized buffer") +) // Write copies bytes from p into the buffer and wakes a reader. // It is an error to write more data than the buffer can hold. @@ -91,6 +94,12 @@ func (p *pipe) Write(d []byte) (n int, err error) { if p.err != nil || p.breakErr != nil { return 0, errClosedPipeWrite } + // pipe.setBuffer is never invoked, leaving the buffer uninitialized. + // We shouldn't try to write to an uninitialized pipe, + // but returning an error is better than panicking. + if p.b == nil { + return 0, errUninitializedPipeWrite + } return p.b.Write(d) } diff --git a/cluster-autoscaler/vendor/golang.org/x/net/http2/server.go b/cluster-autoscaler/vendor/golang.org/x/net/http2/server.go index ae94c6408d5d..ce2e8b40eee6 100644 --- a/cluster-autoscaler/vendor/golang.org/x/net/http2/server.go +++ b/cluster-autoscaler/vendor/golang.org/x/net/http2/server.go @@ -124,6 +124,7 @@ type Server struct { // IdleTimeout specifies how long until idle clients should be // closed with a GOAWAY frame. PING frames are not considered // activity for the purposes of IdleTimeout. 
+ // If zero or negative, there is no timeout. IdleTimeout time.Duration // MaxUploadBufferPerConnection is the size of the initial flow @@ -434,7 +435,7 @@ func (s *Server) ServeConn(c net.Conn, opts *ServeConnOpts) { // passes the connection off to us with the deadline already set. // Write deadlines are set per stream in serverConn.newStream. // Disarm the net.Conn write deadline here. - if sc.hs.WriteTimeout != 0 { + if sc.hs.WriteTimeout > 0 { sc.conn.SetWriteDeadline(time.Time{}) } @@ -924,7 +925,7 @@ func (sc *serverConn) serve() { sc.setConnState(http.StateActive) sc.setConnState(http.StateIdle) - if sc.srv.IdleTimeout != 0 { + if sc.srv.IdleTimeout > 0 { sc.idleTimer = time.AfterFunc(sc.srv.IdleTimeout, sc.onIdleTimer) defer sc.idleTimer.Stop() } @@ -1637,7 +1638,7 @@ func (sc *serverConn) closeStream(st *stream, err error) { delete(sc.streams, st.id) if len(sc.streams) == 0 { sc.setConnState(http.StateIdle) - if sc.srv.IdleTimeout != 0 { + if sc.srv.IdleTimeout > 0 { sc.idleTimer.Reset(sc.srv.IdleTimeout) } if h1ServerKeepAlivesDisabled(sc.hs) { @@ -2017,7 +2018,7 @@ func (sc *serverConn) processHeaders(f *MetaHeadersFrame) error { // similar to how the http1 server works. Here it's // technically more like the http1 Server's ReadHeaderTimeout // (in Go 1.8), though. That's a more sane option anyway. - if sc.hs.ReadTimeout != 0 { + if sc.hs.ReadTimeout > 0 { sc.conn.SetReadDeadline(time.Time{}) st.readDeadline = time.AfterFunc(sc.hs.ReadTimeout, st.onReadTimeout) } @@ -2038,7 +2039,7 @@ func (sc *serverConn) upgradeRequest(req *http.Request) { // Disable any read deadline set by the net/http package // prior to the upgrade. - if sc.hs.ReadTimeout != 0 { + if sc.hs.ReadTimeout > 0 { sc.conn.SetReadDeadline(time.Time{}) } @@ -2116,7 +2117,7 @@ func (sc *serverConn) newStream(id, pusherID uint32, state streamState) *stream st.flow.conn = &sc.flow // link to conn-level counter st.flow.add(sc.initialStreamSendWindowSize) st.inflow.init(sc.srv.initialStreamRecvWindowSize()) - if sc.hs.WriteTimeout != 0 { + if sc.hs.WriteTimeout > 0 { st.writeDeadline = time.AfterFunc(sc.hs.WriteTimeout, st.onWriteTimeout) } diff --git a/cluster-autoscaler/vendor/golang.org/x/net/http2/testsync.go b/cluster-autoscaler/vendor/golang.org/x/net/http2/testsync.go new file mode 100644 index 000000000000..61075bd16d31 --- /dev/null +++ b/cluster-autoscaler/vendor/golang.org/x/net/http2/testsync.go @@ -0,0 +1,331 @@ +// Copyright 2024 The Go Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. +package http2 + +import ( + "context" + "sync" + "time" +) + +// testSyncHooks coordinates goroutines in tests. +// +// For example, a call to ClientConn.RoundTrip involves several goroutines, including: +// - the goroutine running RoundTrip; +// - the clientStream.doRequest goroutine, which writes the request; and +// - the clientStream.readLoop goroutine, which reads the response. +// +// Using testSyncHooks, a test can start a RoundTrip and identify when all these goroutines +// are blocked waiting for some condition such as reading the Request.Body or waiting for +// flow control to become available. +// +// The testSyncHooks also manage timers and synthetic time in tests. +// This permits us to, for example, start a request and cause it to time out waiting for +// response headers without resorting to time.Sleep calls. +type testSyncHooks struct { + // active/inactive act as a mutex and condition variable. 
+	//
+	//   - neither chan contains a value: testSyncHooks is locked.
+	//   - active contains a value: unlocked, and at least one goroutine is not blocked
+	//   - inactive contains a value: unlocked, and all goroutines are blocked
+	active   chan struct{}
+	inactive chan struct{}
+
+	// goroutine counts
+	total    int                     // total goroutines
+	condwait map[*sync.Cond]int      // blocked in sync.Cond.Wait
+	blocked  []*testBlockedGoroutine // otherwise blocked
+
+	// fake time
+	now    time.Time
+	timers []*fakeTimer
+
+	// Transport testing: Report various events.
+	newclientconn func(*ClientConn)
+	newstream     func(*clientStream)
+}
+
+// testBlockedGoroutine is a blocked goroutine.
+type testBlockedGoroutine struct {
+	f  func() bool   // blocked until f returns true
+	ch chan struct{} // closed when unblocked
+}
+
+func newTestSyncHooks() *testSyncHooks {
+	h := &testSyncHooks{
+		active:   make(chan struct{}, 1),
+		inactive: make(chan struct{}, 1),
+		condwait: map[*sync.Cond]int{},
+	}
+	h.inactive <- struct{}{}
+	h.now = time.Date(2000, 1, 1, 0, 0, 0, 0, time.UTC)
+	return h
+}
+
+// lock acquires the testSyncHooks mutex.
+func (h *testSyncHooks) lock() {
+	select {
+	case <-h.active:
+	case <-h.inactive:
+	}
+}
+
+// waitInactive waits for all goroutines to become inactive.
+func (h *testSyncHooks) waitInactive() {
+	for {
+		<-h.inactive
+		if !h.unlock() {
+			break
+		}
+	}
+}
+
+// unlock releases the testSyncHooks mutex.
+// It reports whether any goroutines are active.
+func (h *testSyncHooks) unlock() (active bool) {
+	// Look for a blocked goroutine which can be unblocked.
+	blocked := h.blocked[:0]
+	unblocked := false
+	for _, b := range h.blocked {
+		if !unblocked && b.f() {
+			unblocked = true
+			close(b.ch)
+		} else {
+			blocked = append(blocked, b)
+		}
+	}
+	h.blocked = blocked
+
+	// Count goroutines blocked on condition variables.
+	condwait := 0
+	for _, count := range h.condwait {
+		condwait += count
+	}
+
+	if h.total > condwait+len(blocked) {
+		h.active <- struct{}{}
+		return true
+	} else {
+		h.inactive <- struct{}{}
+		return false
+	}
+}
+
+// goRun starts a new goroutine.
+func (h *testSyncHooks) goRun(f func()) {
+	h.lock()
+	h.total++
+	h.unlock()
+	go func() {
+		defer func() {
+			h.lock()
+			h.total--
+			h.unlock()
+		}()
+		f()
+	}()
+}
+
+// blockUntil indicates that a goroutine is blocked waiting for some condition to become true.
+// It waits until f returns true before proceeding.
+//
+// Example usage:
+//
+//	h.blockUntil(func() bool {
+//		// Is the context done yet?
+//		select {
+//		case <-ctx.Done():
+//		default:
+//			return false
+//		}
+//		return true
+//	})
+//	// Wait for the context to become done.
+//	<-ctx.Done()
+//
+// The function f passed to blockUntil must be non-blocking and idempotent.
+func (h *testSyncHooks) blockUntil(f func() bool) {
+	if f() {
+		return
+	}
+	ch := make(chan struct{})
+	h.lock()
+	h.blocked = append(h.blocked, &testBlockedGoroutine{
+		f:  f,
+		ch: ch,
+	})
+	h.unlock()
+	<-ch
+}
+
+// condBroadcast is sync.Cond.Broadcast.
+func (h *testSyncHooks) condBroadcast(cond *sync.Cond) {
+	h.lock()
+	delete(h.condwait, cond)
+	h.unlock()
+	cond.Broadcast()
+}
+
+// condWait is sync.Cond.Wait.
+func (h *testSyncHooks) condWait(cond *sync.Cond) {
+	h.lock()
+	h.condwait[cond]++
+	h.unlock()
+}
+
+// newTimer creates a new fake timer.
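+// A fake timer fires only when the test advances synthetic time past the
+// timer's deadline; a minimal sketch:
+//
+//	h := newTestSyncHooks()
+//	tm := h.newTimer(time.Second)
+//	h.advance(2 * time.Second) // fires tm by closing its C() channel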
+func (h *testSyncHooks) newTimer(d time.Duration) timer { + h.lock() + defer h.unlock() + t := &fakeTimer{ + hooks: h, + when: h.now.Add(d), + c: make(chan time.Time), + } + h.timers = append(h.timers, t) + return t +} + +// afterFunc creates a new fake AfterFunc timer. +func (h *testSyncHooks) afterFunc(d time.Duration, f func()) timer { + h.lock() + defer h.unlock() + t := &fakeTimer{ + hooks: h, + when: h.now.Add(d), + f: f, + } + h.timers = append(h.timers, t) + return t +} + +func (h *testSyncHooks) contextWithTimeout(ctx context.Context, d time.Duration) (context.Context, context.CancelFunc) { + ctx, cancel := context.WithCancel(ctx) + t := h.afterFunc(d, cancel) + return ctx, func() { + t.Stop() + cancel() + } +} + +func (h *testSyncHooks) timeUntilEvent() time.Duration { + h.lock() + defer h.unlock() + var next time.Time + for _, t := range h.timers { + if next.IsZero() || t.when.Before(next) { + next = t.when + } + } + if d := next.Sub(h.now); d > 0 { + return d + } + return 0 +} + +// advance advances time and causes synthetic timers to fire. +func (h *testSyncHooks) advance(d time.Duration) { + h.lock() + defer h.unlock() + h.now = h.now.Add(d) + timers := h.timers[:0] + for _, t := range h.timers { + t := t // remove after go.mod depends on go1.22 + t.mu.Lock() + switch { + case t.when.After(h.now): + timers = append(timers, t) + case t.when.IsZero(): + // stopped timer + default: + t.when = time.Time{} + if t.c != nil { + close(t.c) + } + if t.f != nil { + h.total++ + go func() { + defer func() { + h.lock() + h.total-- + h.unlock() + }() + t.f() + }() + } + } + t.mu.Unlock() + } + h.timers = timers +} + +// A timer wraps a time.Timer, or a synthetic equivalent in tests. +// Unlike time.Timer, timer is single-use: The timer channel is closed when the timer expires. +type timer interface { + C() <-chan time.Time + Stop() bool + Reset(d time.Duration) bool +} + +// timeTimer implements timer using real time. +type timeTimer struct { + t *time.Timer + c chan time.Time +} + +// newTimeTimer creates a new timer using real time. +func newTimeTimer(d time.Duration) timer { + ch := make(chan time.Time) + t := time.AfterFunc(d, func() { + close(ch) + }) + return &timeTimer{t, ch} +} + +// newTimeAfterFunc creates an AfterFunc timer using real time. +func newTimeAfterFunc(d time.Duration, f func()) timer { + return &timeTimer{ + t: time.AfterFunc(d, f), + } +} + +func (t timeTimer) C() <-chan time.Time { return t.c } +func (t timeTimer) Stop() bool { return t.t.Stop() } +func (t timeTimer) Reset(d time.Duration) bool { return t.t.Reset(d) } + +// fakeTimer implements timer using fake time. 
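+// It never consults the real clock: a fakeTimer fires only when
+// testSyncHooks.advance moves the synthetic clock past its deadline.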
+type fakeTimer struct { + hooks *testSyncHooks + + mu sync.Mutex + when time.Time // when the timer will fire + c chan time.Time // closed when the timer fires; mutually exclusive with f + f func() // called when the timer fires; mutually exclusive with c +} + +func (t *fakeTimer) C() <-chan time.Time { return t.c } + +func (t *fakeTimer) Stop() bool { + t.mu.Lock() + defer t.mu.Unlock() + stopped := t.when.IsZero() + t.when = time.Time{} + return stopped +} + +func (t *fakeTimer) Reset(d time.Duration) bool { + if t.c != nil || t.f == nil { + panic("fakeTimer only supports Reset on AfterFunc timers") + } + t.mu.Lock() + defer t.mu.Unlock() + t.hooks.lock() + defer t.hooks.unlock() + active := !t.when.IsZero() + t.when = t.hooks.now.Add(d) + if !active { + t.hooks.timers = append(t.hooks.timers, t) + } + return active +} diff --git a/cluster-autoscaler/vendor/golang.org/x/net/http2/transport.go b/cluster-autoscaler/vendor/golang.org/x/net/http2/transport.go index df578b86c650..ce375c8c7535 100644 --- a/cluster-autoscaler/vendor/golang.org/x/net/http2/transport.go +++ b/cluster-autoscaler/vendor/golang.org/x/net/http2/transport.go @@ -147,6 +147,12 @@ type Transport struct { // waiting for their turn. StrictMaxConcurrentStreams bool + // IdleConnTimeout is the maximum amount of time an idle + // (keep-alive) connection will remain idle before closing + // itself. + // Zero means no limit. + IdleConnTimeout time.Duration + // ReadIdleTimeout is the timeout after which a health check using ping // frame will be carried out if no frame is received on the connection. // Note that a ping response will is considered a received frame, so if @@ -178,6 +184,8 @@ type Transport struct { connPoolOnce sync.Once connPoolOrDef ClientConnPool // non-nil version of ConnPool + + syncHooks *testSyncHooks } func (t *Transport) maxHeaderListSize() uint32 { @@ -302,7 +310,7 @@ type ClientConn struct { readerErr error // set before readerDone is closed idleTimeout time.Duration // or 0 for never - idleTimer *time.Timer + idleTimer timer mu sync.Mutex // guards following cond *sync.Cond // hold mu; broadcast on flow/closed changes @@ -344,6 +352,60 @@ type ClientConn struct { werr error // first write error that has occurred hbuf bytes.Buffer // HPACK encoder writes into this henc *hpack.Encoder + + syncHooks *testSyncHooks // can be nil +} + +// Hook points used for testing. +// Outside of tests, cc.syncHooks is nil and these all have minimal implementations. +// Inside tests, see the testSyncHooks function docs. + +// goRun starts a new goroutine. +func (cc *ClientConn) goRun(f func()) { + if cc.syncHooks != nil { + cc.syncHooks.goRun(f) + return + } + go f() +} + +// condBroadcast is cc.cond.Broadcast. +func (cc *ClientConn) condBroadcast() { + if cc.syncHooks != nil { + cc.syncHooks.condBroadcast(cc.cond) + } + cc.cond.Broadcast() +} + +// condWait is cc.cond.Wait. +func (cc *ClientConn) condWait() { + if cc.syncHooks != nil { + cc.syncHooks.condWait(cc.cond) + } + cc.cond.Wait() +} + +// newTimer creates a new time.Timer, or a synthetic timer in tests. +func (cc *ClientConn) newTimer(d time.Duration) timer { + if cc.syncHooks != nil { + return cc.syncHooks.newTimer(d) + } + return newTimeTimer(d) +} + +// afterFunc creates a new time.AfterFunc timer, or a synthetic timer in tests. 
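Each hook point on ClientConn above has the same shape: route through the test scheduler when syncHooks is installed, otherwise fall back to the ordinary runtime primitive, so production code pays only a nil check. A generic sketch of that indirection (hypothetical types, not the x/net code):

package main

import "fmt"

// testHooks stands in for an installed test scheduler.
type testHooks struct {
	goRun func(func())
}

type conn struct {
	hooks *testHooks // nil outside of tests
}

// start launches f on a goroutine, letting installed hooks observe and
// account for it; without hooks it is just a plain go statement.
func (c *conn) start(f func()) {
	if c.hooks != nil {
		c.hooks.goRun(f)
		return
	}
	go f()
}

func main() {
	done := make(chan struct{})

	prod := &conn{} // no hooks: a plain goroutine
	prod.start(func() { close(done) })
	<-done

	counted := 0
	test := &conn{hooks: &testHooks{goRun: func(f func()) { counted++; f() }}}
	test.start(func() {}) // the hook runs f synchronously and counts it
	fmt.Println("goroutines tracked by hooks:", counted)
}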
+func (cc *ClientConn) afterFunc(d time.Duration, f func()) timer { + if cc.syncHooks != nil { + return cc.syncHooks.afterFunc(d, f) + } + return newTimeAfterFunc(d, f) +} + +func (cc *ClientConn) contextWithTimeout(ctx context.Context, d time.Duration) (context.Context, context.CancelFunc) { + if cc.syncHooks != nil { + return cc.syncHooks.contextWithTimeout(ctx, d) + } + return context.WithTimeout(ctx, d) } // clientStream is the state for a single HTTP/2 stream. One of these @@ -425,7 +487,7 @@ func (cs *clientStream) abortStreamLocked(err error) { // TODO(dneil): Clean up tests where cs.cc.cond is nil. if cs.cc.cond != nil { // Wake up writeRequestBody if it is waiting on flow control. - cs.cc.cond.Broadcast() + cs.cc.condBroadcast() } } @@ -435,7 +497,7 @@ func (cs *clientStream) abortRequestBodyWrite() { defer cc.mu.Unlock() if cs.reqBody != nil && cs.reqBodyClosed == nil { cs.closeReqBodyLocked() - cc.cond.Broadcast() + cc.condBroadcast() } } @@ -445,10 +507,10 @@ func (cs *clientStream) closeReqBodyLocked() { } cs.reqBodyClosed = make(chan struct{}) reqBodyClosed := cs.reqBodyClosed - go func() { + cs.cc.goRun(func() { cs.reqBody.Close() close(reqBodyClosed) - }() + }) } type stickyErrWriter struct { @@ -537,15 +599,6 @@ func authorityAddr(scheme string, authority string) (addr string) { return net.JoinHostPort(host, port) } -var retryBackoffHook func(time.Duration) *time.Timer - -func backoffNewTimer(d time.Duration) *time.Timer { - if retryBackoffHook != nil { - return retryBackoffHook(d) - } - return time.NewTimer(d) -} - // RoundTripOpt is like RoundTrip, but takes options. func (t *Transport) RoundTripOpt(req *http.Request, opt RoundTripOpt) (*http.Response, error) { if !(req.URL.Scheme == "https" || (req.URL.Scheme == "http" && t.AllowHTTP)) { @@ -573,13 +626,27 @@ func (t *Transport) RoundTripOpt(req *http.Request, opt RoundTripOpt) (*http.Res backoff := float64(uint(1) << (uint(retry) - 1)) backoff += backoff * (0.1 * mathrand.Float64()) d := time.Second * time.Duration(backoff) - timer := backoffNewTimer(d) + var tm timer + if t.syncHooks != nil { + tm = t.syncHooks.newTimer(d) + t.syncHooks.blockUntil(func() bool { + select { + case <-tm.C(): + case <-req.Context().Done(): + default: + return false + } + return true + }) + } else { + tm = newTimeTimer(d) + } select { - case <-timer.C: + case <-tm.C(): t.vlogf("RoundTrip retrying after failure: %v", roundTripErr) continue case <-req.Context().Done(): - timer.Stop() + tm.Stop() err = req.Context().Err() } } @@ -658,6 +725,9 @@ func canRetryError(err error) bool { } func (t *Transport) dialClientConn(ctx context.Context, addr string, singleUse bool) (*ClientConn, error) { + if t.syncHooks != nil { + return t.newClientConn(nil, singleUse, t.syncHooks) + } host, _, err := net.SplitHostPort(addr) if err != nil { return nil, err @@ -666,7 +736,7 @@ func (t *Transport) dialClientConn(ctx context.Context, addr string, singleUse b if err != nil { return nil, err } - return t.newClientConn(tconn, singleUse) + return t.newClientConn(tconn, singleUse, nil) } func (t *Transport) newTLSConfig(host string) *tls.Config { @@ -732,10 +802,10 @@ func (t *Transport) maxEncoderHeaderTableSize() uint32 { } func (t *Transport) NewClientConn(c net.Conn) (*ClientConn, error) { - return t.newClientConn(c, t.disableKeepAlives()) + return t.newClientConn(c, t.disableKeepAlives(), nil) } -func (t *Transport) newClientConn(c net.Conn, singleUse bool) (*ClientConn, error) { +func (t *Transport) newClientConn(c net.Conn, singleUse bool, hooks 
*testSyncHooks) (*ClientConn, error) { cc := &ClientConn{ t: t, tconn: c, @@ -750,10 +820,15 @@ func (t *Transport) newClientConn(c net.Conn, singleUse bool) (*ClientConn, erro wantSettingsAck: true, pings: make(map[[8]byte]chan struct{}), reqHeaderMu: make(chan struct{}, 1), + syncHooks: hooks, + } + if hooks != nil { + hooks.newclientconn(cc) + c = cc.tconn } if d := t.idleConnTimeout(); d != 0 { cc.idleTimeout = d - cc.idleTimer = time.AfterFunc(d, cc.onIdleTimeout) + cc.idleTimer = cc.afterFunc(d, cc.onIdleTimeout) } if VerboseLogs { t.vlogf("http2: Transport creating client conn %p to %v", cc, c.RemoteAddr()) @@ -818,7 +893,7 @@ func (t *Transport) newClientConn(c net.Conn, singleUse bool) (*ClientConn, erro return nil, cc.werr } - go cc.readLoop() + cc.goRun(cc.readLoop) return cc, nil } @@ -826,7 +901,7 @@ func (cc *ClientConn) healthCheck() { pingTimeout := cc.t.pingTimeout() // We don't need to periodically ping in the health check, because the readLoop of ClientConn will // trigger the healthCheck again if there is no frame received. - ctx, cancel := context.WithTimeout(context.Background(), pingTimeout) + ctx, cancel := cc.contextWithTimeout(context.Background(), pingTimeout) defer cancel() cc.vlogf("http2: Transport sending health check") err := cc.Ping(ctx) @@ -1056,7 +1131,7 @@ func (cc *ClientConn) Shutdown(ctx context.Context) error { // Wait for all in-flight streams to complete or connection to close done := make(chan struct{}) cancelled := false // guarded by cc.mu - go func() { + cc.goRun(func() { cc.mu.Lock() defer cc.mu.Unlock() for { @@ -1068,9 +1143,9 @@ func (cc *ClientConn) Shutdown(ctx context.Context) error { if cancelled { break } - cc.cond.Wait() + cc.condWait() } - }() + }) shutdownEnterWaitStateHook() select { case <-done: @@ -1080,7 +1155,7 @@ func (cc *ClientConn) Shutdown(ctx context.Context) error { cc.mu.Lock() // Free the goroutine above cancelled = true - cc.cond.Broadcast() + cc.condBroadcast() cc.mu.Unlock() return ctx.Err() } @@ -1118,7 +1193,7 @@ func (cc *ClientConn) closeForError(err error) { for _, cs := range cc.streams { cs.abortStreamLocked(err) } - cc.cond.Broadcast() + cc.condBroadcast() cc.mu.Unlock() cc.closeConn() } @@ -1215,6 +1290,10 @@ func (cc *ClientConn) decrStreamReservationsLocked() { } func (cc *ClientConn) RoundTrip(req *http.Request) (*http.Response, error) { + return cc.roundTrip(req, nil) +} + +func (cc *ClientConn) roundTrip(req *http.Request, streamf func(*clientStream)) (*http.Response, error) { ctx := req.Context() cs := &clientStream{ cc: cc, @@ -1229,9 +1308,23 @@ func (cc *ClientConn) RoundTrip(req *http.Request) (*http.Response, error) { respHeaderRecv: make(chan struct{}), donec: make(chan struct{}), } - go cs.doRequest(req) + cc.goRun(func() { + cs.doRequest(req) + }) waitDone := func() error { + if cc.syncHooks != nil { + cc.syncHooks.blockUntil(func() bool { + select { + case <-cs.donec: + case <-ctx.Done(): + case <-cs.reqCancel: + default: + return false + } + return true + }) + } select { case <-cs.donec: return nil @@ -1292,7 +1385,24 @@ func (cc *ClientConn) RoundTrip(req *http.Request) (*http.Response, error) { return err } + if streamf != nil { + streamf(cs) + } + for { + if cc.syncHooks != nil { + cc.syncHooks.blockUntil(func() bool { + select { + case <-cs.respHeaderRecv: + case <-cs.abort: + case <-ctx.Done(): + case <-cs.reqCancel: + default: + return false + } + return true + }) + } select { case <-cs.respHeaderRecv: return handleResponseHeaders() @@ -1348,6 +1458,21 @@ func (cs *clientStream) 
writeRequest(req *http.Request) (err error) { if cc.reqHeaderMu == nil { panic("RoundTrip on uninitialized ClientConn") // for tests } + var newStreamHook func(*clientStream) + if cc.syncHooks != nil { + newStreamHook = cc.syncHooks.newstream + cc.syncHooks.blockUntil(func() bool { + select { + case cc.reqHeaderMu <- struct{}{}: + <-cc.reqHeaderMu + case <-cs.reqCancel: + case <-ctx.Done(): + default: + return false + } + return true + }) + } select { case cc.reqHeaderMu <- struct{}{}: case <-cs.reqCancel: @@ -1372,6 +1497,10 @@ func (cs *clientStream) writeRequest(req *http.Request) (err error) { } cc.mu.Unlock() + if newStreamHook != nil { + newStreamHook(cs) + } + // TODO(bradfitz): this is a copy of the logic in net/http. Unify somewhere? if !cc.t.disableCompression() && req.Header.Get("Accept-Encoding") == "" && @@ -1452,15 +1581,30 @@ func (cs *clientStream) writeRequest(req *http.Request) (err error) { var respHeaderTimer <-chan time.Time var respHeaderRecv chan struct{} if d := cc.responseHeaderTimeout(); d != 0 { - timer := time.NewTimer(d) + timer := cc.newTimer(d) defer timer.Stop() - respHeaderTimer = timer.C + respHeaderTimer = timer.C() respHeaderRecv = cs.respHeaderRecv } // Wait until the peer half-closes its end of the stream, // or until the request is aborted (via context, error, or otherwise), // whichever comes first. for { + if cc.syncHooks != nil { + cc.syncHooks.blockUntil(func() bool { + select { + case <-cs.peerClosed: + case <-respHeaderTimer: + case <-respHeaderRecv: + case <-cs.abort: + case <-ctx.Done(): + case <-cs.reqCancel: + default: + return false + } + return true + }) + } select { case <-cs.peerClosed: return nil @@ -1609,7 +1753,7 @@ func (cc *ClientConn) awaitOpenSlotForStreamLocked(cs *clientStream) error { return nil } cc.pendingRequests++ - cc.cond.Wait() + cc.condWait() cc.pendingRequests-- select { case <-cs.abort: @@ -1871,8 +2015,24 @@ func (cs *clientStream) awaitFlowControl(maxBytes int) (taken int32, err error) cs.flow.take(take) return take, nil } - cc.cond.Wait() + cc.condWait() + } +} + +func validateHeaders(hdrs http.Header) string { + for k, vv := range hdrs { + if !httpguts.ValidHeaderFieldName(k) { + return fmt.Sprintf("name %q", k) + } + for _, v := range vv { + if !httpguts.ValidHeaderFieldValue(v) { + // Don't include the value in the error, + // because it may be sensitive. + return fmt.Sprintf("value for header %q", k) + } + } } + return "" } var errNilRequestURL = errors.New("http2: Request.URI is nil") @@ -1912,19 +2072,14 @@ func (cc *ClientConn) encodeHeaders(req *http.Request, addGzipHeader bool, trail } } - // Check for any invalid headers and return an error before we + // Check for any invalid headers+trailers and return an error before we // potentially pollute our hpack state. (We want to be able to // continue to reuse the hpack encoder for future requests) - for k, vv := range req.Header { - if !httpguts.ValidHeaderFieldName(k) { - return nil, fmt.Errorf("invalid HTTP header name %q", k) - } - for _, v := range vv { - if !httpguts.ValidHeaderFieldValue(v) { - // Don't include the value in the error, because it may be sensitive. 
- return nil, fmt.Errorf("invalid HTTP header value for header %q", k) - } - } + if err := validateHeaders(req.Header); err != "" { + return nil, fmt.Errorf("invalid HTTP header %s", err) + } + if err := validateHeaders(req.Trailer); err != "" { + return nil, fmt.Errorf("invalid HTTP trailer %s", err) } enumerateHeaders := func(f func(name, value string)) { @@ -2143,7 +2298,7 @@ func (cc *ClientConn) forgetStreamID(id uint32) { } // Wake up writeRequestBody via clientStream.awaitFlowControl and // wake up RoundTrip if there is a pending request. - cc.cond.Broadcast() + cc.condBroadcast() closeOnIdle := cc.singleUse || cc.doNotReuse || cc.t.disableKeepAlives() || cc.goAway != nil if closeOnIdle && cc.streamsReserved == 0 && len(cc.streams) == 0 { @@ -2231,7 +2386,7 @@ func (rl *clientConnReadLoop) cleanup() { cs.abortStreamLocked(err) } } - cc.cond.Broadcast() + cc.condBroadcast() cc.mu.Unlock() } @@ -2266,10 +2421,9 @@ func (rl *clientConnReadLoop) run() error { cc := rl.cc gotSettings := false readIdleTimeout := cc.t.ReadIdleTimeout - var t *time.Timer + var t timer if readIdleTimeout != 0 { - t = time.AfterFunc(readIdleTimeout, cc.healthCheck) - defer t.Stop() + t = cc.afterFunc(readIdleTimeout, cc.healthCheck) } for { f, err := cc.fr.ReadFrame() @@ -2684,7 +2838,7 @@ func (rl *clientConnReadLoop) processData(f *DataFrame) error { }) return nil } - if !cs.firstByte { + if !cs.pastHeaders { cc.logf("protocol error: received DATA before a HEADERS frame") rl.endStreamError(cs, StreamError{ StreamID: f.StreamID, @@ -2867,7 +3021,7 @@ func (rl *clientConnReadLoop) processSettingsNoWrite(f *SettingsFrame) error { for _, cs := range cc.streams { cs.flow.add(delta) } - cc.cond.Broadcast() + cc.condBroadcast() cc.initialWindowSize = s.Val case SettingHeaderTableSize: @@ -2911,9 +3065,18 @@ func (rl *clientConnReadLoop) processWindowUpdate(f *WindowUpdateFrame) error { fl = &cs.flow } if !fl.add(int32(f.Increment)) { + // For stream, the sender sends RST_STREAM with an error code of FLOW_CONTROL_ERROR + if cs != nil { + rl.endStreamError(cs, StreamError{ + StreamID: f.StreamID, + Code: ErrCodeFlowControl, + }) + return nil + } + return ConnectionError(ErrCodeFlowControl) } - cc.cond.Broadcast() + cc.condBroadcast() return nil } @@ -2955,24 +3118,38 @@ func (cc *ClientConn) Ping(ctx context.Context) error { } cc.mu.Unlock() } - errc := make(chan error, 1) - go func() { + var pingError error + errc := make(chan struct{}) + cc.goRun(func() { cc.wmu.Lock() defer cc.wmu.Unlock() - if err := cc.fr.WritePing(false, p); err != nil { - errc <- err + if pingError = cc.fr.WritePing(false, p); pingError != nil { + close(errc) return } - if err := cc.bw.Flush(); err != nil { - errc <- err + if pingError = cc.bw.Flush(); pingError != nil { + close(errc) return } - }() + }) + if cc.syncHooks != nil { + cc.syncHooks.blockUntil(func() bool { + select { + case <-c: + case <-errc: + case <-ctx.Done(): + case <-cc.readerDone: + default: + return false + } + return true + }) + } select { case <-c: return nil - case err := <-errc: - return err + case <-errc: + return pingError case <-ctx.Done(): return ctx.Err() case <-cc.readerDone: @@ -3141,9 +3318,17 @@ func (rt noDialH2RoundTripper) RoundTrip(req *http.Request) (*http.Response, err } func (t *Transport) idleConnTimeout() time.Duration { + // to keep things backwards compatible, we use non-zero values of + // IdleConnTimeout, followed by using the IdleConnTimeout on the underlying + // http1 transport, followed by 0 + if t.IdleConnTimeout != 0 { + return 
t.IdleConnTimeout + } + if t.t1 != nil { return t.t1.IdleConnTimeout } + return 0 } diff --git a/cluster-autoscaler/vendor/golang.org/x/net/websocket/client.go b/cluster-autoscaler/vendor/golang.org/x/net/websocket/client.go index 69a4ac7eefec..1e64157f3ebd 100644 --- a/cluster-autoscaler/vendor/golang.org/x/net/websocket/client.go +++ b/cluster-autoscaler/vendor/golang.org/x/net/websocket/client.go @@ -6,10 +6,12 @@ package websocket import ( "bufio" + "context" "io" "net" "net/http" "net/url" + "time" ) // DialError is an error that occurs while dialling a websocket server. @@ -79,28 +81,59 @@ func parseAuthority(location *url.URL) string { // DialConfig opens a new client connection to a WebSocket with a config. func DialConfig(config *Config) (ws *Conn, err error) { - var client net.Conn + return config.DialContext(context.Background()) +} + +// DialContext opens a new client connection to a WebSocket, with context support for timeouts/cancellation. +func (config *Config) DialContext(ctx context.Context) (*Conn, error) { if config.Location == nil { return nil, &DialError{config, ErrBadWebSocketLocation} } if config.Origin == nil { return nil, &DialError{config, ErrBadWebSocketOrigin} } + dialer := config.Dialer if dialer == nil { dialer = &net.Dialer{} } - client, err = dialWithDialer(dialer, config) - if err != nil { - goto Error - } - ws, err = NewClient(config, client) + + client, err := dialWithDialer(ctx, dialer, config) if err != nil { - client.Close() - goto Error + return nil, &DialError{config, err} } - return -Error: - return nil, &DialError{config, err} + // Clean up the connection if we fail to create the websocket successfully + success := false + defer func() { + if !success { + _ = client.Close() + } + }() + + var ws *Conn + var wsErr error + doneConnecting := make(chan struct{}) + go func() { + defer close(doneConnecting) + ws, err = NewClient(config, client) + if err != nil { + wsErr = &DialError{config, err} + } + }() + + // The websocket.NewClient() function can block indefinitely, so make sure that we + // respect the deadlines specified by the context. 
+ select { + case <-ctx.Done(): + // Force the pending operations to fail, terminating the pending connection attempt + _ = client.SetDeadline(time.Now()) + <-doneConnecting // Wait for the goroutine that tries to establish the connection to finish + return nil, &DialError{config, ctx.Err()} + case <-doneConnecting: + if wsErr == nil { + success = true // Disarm the deferred connection cleanup + } + return ws, wsErr + } } diff --git a/cluster-autoscaler/vendor/golang.org/x/net/websocket/dial.go b/cluster-autoscaler/vendor/golang.org/x/net/websocket/dial.go index 2dab943a489a..8a2d83c473b8 100644 --- a/cluster-autoscaler/vendor/golang.org/x/net/websocket/dial.go +++ b/cluster-autoscaler/vendor/golang.org/x/net/websocket/dial.go @@ -5,18 +5,23 @@ package websocket import ( + "context" "crypto/tls" "net" ) -func dialWithDialer(dialer *net.Dialer, config *Config) (conn net.Conn, err error) { +func dialWithDialer(ctx context.Context, dialer *net.Dialer, config *Config) (conn net.Conn, err error) { switch config.Location.Scheme { case "ws": - conn, err = dialer.Dial("tcp", parseAuthority(config.Location)) + conn, err = dialer.DialContext(ctx, "tcp", parseAuthority(config.Location)) case "wss": - conn, err = tls.DialWithDialer(dialer, "tcp", parseAuthority(config.Location), config.TlsConfig) + tlsDialer := &tls.Dialer{ + NetDialer: dialer, + Config: config.TlsConfig, + } + conn, err = tlsDialer.DialContext(ctx, "tcp", parseAuthority(config.Location)) default: err = ErrBadScheme } diff --git a/cluster-autoscaler/vendor/golang.org/x/oauth2/google/appengine_gen1.go b/cluster-autoscaler/vendor/golang.org/x/oauth2/google/appengine_gen1.go index 16c6c6b90ce5..e61587945b08 100644 --- a/cluster-autoscaler/vendor/golang.org/x/oauth2/google/appengine_gen1.go +++ b/cluster-autoscaler/vendor/golang.org/x/oauth2/google/appengine_gen1.go @@ -3,7 +3,6 @@ // license that can be found in the LICENSE file. //go:build appengine -// +build appengine // This file applies to App Engine first generation runtimes (<= Go 1.9). diff --git a/cluster-autoscaler/vendor/golang.org/x/oauth2/google/appengine_gen2_flex.go b/cluster-autoscaler/vendor/golang.org/x/oauth2/google/appengine_gen2_flex.go index a7e27b3d2991..9c79aa0a0cc5 100644 --- a/cluster-autoscaler/vendor/golang.org/x/oauth2/google/appengine_gen2_flex.go +++ b/cluster-autoscaler/vendor/golang.org/x/oauth2/google/appengine_gen2_flex.go @@ -3,7 +3,6 @@ // license that can be found in the LICENSE file. //go:build !appengine -// +build !appengine // This file applies to App Engine second generation runtimes (>= Go 1.11) and App Engine flexible. diff --git a/cluster-autoscaler/vendor/golang.org/x/oauth2/internal/client_appengine.go b/cluster-autoscaler/vendor/golang.org/x/oauth2/internal/client_appengine.go index e1755d1d9acf..d28140f789ec 100644 --- a/cluster-autoscaler/vendor/golang.org/x/oauth2/internal/client_appengine.go +++ b/cluster-autoscaler/vendor/golang.org/x/oauth2/internal/client_appengine.go @@ -3,7 +3,6 @@ // license that can be found in the LICENSE file. //go:build appengine -// +build appengine package internal diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/aliases.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/aliases.go index e7d3df4bd360..b0e419857502 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/aliases.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/aliases.go @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. 
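The SetDeadline(time.Now()) trick above is the usual way to cancel an API that takes no context: force the blocked I/O to fail, then join the worker goroutine before returning. A self-contained sketch of the idiom, with a hypothetical blocking operation standing in for websocket.NewClient:

package main

import (
	"context"
	"errors"
	"fmt"
	"net"
	"time"
)

// runWithContext runs do(conn) on a goroutine; if ctx expires first, an
// immediate deadline forces the blocked I/O to fail, and we wait for the
// goroutine to observe that before returning.
func runWithContext(ctx context.Context, conn net.Conn, do func(net.Conn) error) error {
	done := make(chan error, 1)
	go func() { done <- do(conn) }()
	select {
	case <-ctx.Done():
		_ = conn.SetDeadline(time.Now()) // unblock pending reads/writes
		<-done                           // join the worker goroutine
		return ctx.Err()
	case err := <-done:
		return err
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()
	c1, c2 := net.Pipe() // in-memory conn pair; supports deadlines
	defer c1.Close()
	defer c2.Close()
	err := runWithContext(ctx, c1, func(c net.Conn) error {
		_, err := c.Read(make([]byte, 1)) // blocks: nothing is ever written
		return err
	})
	fmt.Println(errors.Is(err, context.DeadlineExceeded)) // true
}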
-//go:build (aix || darwin || dragonfly || freebsd || linux || netbsd || openbsd || solaris || zos) && go1.9 +//go:build aix || darwin || dragonfly || freebsd || linux || netbsd || openbsd || solaris || zos package unix diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/mkerrors.sh b/cluster-autoscaler/vendor/golang.org/x/sys/unix/mkerrors.sh index 6202638bae86..fdcaa974d23b 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/mkerrors.sh +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/mkerrors.sh @@ -248,6 +248,7 @@ struct ltchars { #include #include #include +#include #include #include #include @@ -283,10 +284,6 @@ struct ltchars { #include #endif -#ifndef MSG_FASTOPEN -#define MSG_FASTOPEN 0x20000000 -#endif - #ifndef PTRACE_GETREGS #define PTRACE_GETREGS 0xc #endif @@ -295,14 +292,6 @@ struct ltchars { #define PTRACE_SETREGS 0xd #endif -#ifndef SOL_NETLINK -#define SOL_NETLINK 270 -#endif - -#ifndef SOL_SMC -#define SOL_SMC 286 -#endif - #ifdef SOL_BLUETOOTH // SPARC includes this in /usr/include/sparc64-linux-gnu/bits/socket.h // but it is already in bluetooth_linux.go @@ -319,10 +308,23 @@ struct ltchars { #undef TIPC_WAIT_FOREVER #define TIPC_WAIT_FOREVER 0xffffffff -// Copied from linux/l2tp.h -// Including linux/l2tp.h here causes conflicts between linux/in.h -// and netinet/in.h included via net/route.h above. -#define IPPROTO_L2TP 115 +// Copied from linux/netfilter/nf_nat.h +// Including linux/netfilter/nf_nat.h here causes conflicts between linux/in.h +// and netinet/in.h. +#define NF_NAT_RANGE_MAP_IPS (1 << 0) +#define NF_NAT_RANGE_PROTO_SPECIFIED (1 << 1) +#define NF_NAT_RANGE_PROTO_RANDOM (1 << 2) +#define NF_NAT_RANGE_PERSISTENT (1 << 3) +#define NF_NAT_RANGE_PROTO_RANDOM_FULLY (1 << 4) +#define NF_NAT_RANGE_PROTO_OFFSET (1 << 5) +#define NF_NAT_RANGE_NETMAP (1 << 6) +#define NF_NAT_RANGE_PROTO_RANDOM_ALL \ + (NF_NAT_RANGE_PROTO_RANDOM | NF_NAT_RANGE_PROTO_RANDOM_FULLY) +#define NF_NAT_RANGE_MASK \ + (NF_NAT_RANGE_MAP_IPS | NF_NAT_RANGE_PROTO_SPECIFIED | \ + NF_NAT_RANGE_PROTO_RANDOM | NF_NAT_RANGE_PERSISTENT | \ + NF_NAT_RANGE_PROTO_RANDOM_FULLY | NF_NAT_RANGE_PROTO_OFFSET | \ + NF_NAT_RANGE_NETMAP) // Copied from linux/hid.h. // Keep in sync with the size of the referenced fields. @@ -582,7 +584,7 @@ ccflags="$@" $2 ~ /^KEY_(SPEC|REQKEY_DEFL)_/ || $2 ~ /^KEYCTL_/ || $2 ~ /^PERF_/ || - $2 ~ /^SECCOMP_MODE_/ || + $2 ~ /^SECCOMP_/ || $2 ~ /^SEEK_/ || $2 ~ /^SCHED_/ || $2 ~ /^SPLICE_/ || @@ -603,6 +605,9 @@ ccflags="$@" $2 ~ /^FSOPT_/ || $2 ~ /^WDIO[CFS]_/ || $2 ~ /^NFN/ || + $2 !~ /^NFT_META_IIFTYPE/ && + $2 ~ /^NFT_/ || + $2 ~ /^NF_NAT_/ || $2 ~ /^XDP_/ || $2 ~ /^RWF_/ || $2 ~ /^(HDIO|WIN|SMART)_/ || diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/syscall_darwin_libSystem.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/syscall_darwin_libSystem.go index 16dc6993799f..2f0fa76e4f65 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/syscall_darwin_libSystem.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/syscall_darwin_libSystem.go @@ -2,7 +2,7 @@ // Use of this source code is governed by a BSD-style // license that can be found in the LICENSE file. 
-//go:build darwin && go1.12 +//go:build darwin package unix diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd.go index 64d1bb4dba58..2b57e0f73bb8 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/syscall_freebsd.go @@ -13,6 +13,7 @@ package unix import ( + "errors" "sync" "unsafe" ) @@ -169,25 +170,26 @@ func Getfsstat(buf []Statfs_t, flags int) (n int, err error) { func Uname(uname *Utsname) error { mib := []_C_int{CTL_KERN, KERN_OSTYPE} n := unsafe.Sizeof(uname.Sysname) - if err := sysctl(mib, &uname.Sysname[0], &n, nil, 0); err != nil { + // Suppress ENOMEM errors to be compatible with the C library __xuname() implementation. + if err := sysctl(mib, &uname.Sysname[0], &n, nil, 0); err != nil && !errors.Is(err, ENOMEM) { return err } mib = []_C_int{CTL_KERN, KERN_HOSTNAME} n = unsafe.Sizeof(uname.Nodename) - if err := sysctl(mib, &uname.Nodename[0], &n, nil, 0); err != nil { + if err := sysctl(mib, &uname.Nodename[0], &n, nil, 0); err != nil && !errors.Is(err, ENOMEM) { return err } mib = []_C_int{CTL_KERN, KERN_OSRELEASE} n = unsafe.Sizeof(uname.Release) - if err := sysctl(mib, &uname.Release[0], &n, nil, 0); err != nil { + if err := sysctl(mib, &uname.Release[0], &n, nil, 0); err != nil && !errors.Is(err, ENOMEM) { return err } mib = []_C_int{CTL_KERN, KERN_VERSION} n = unsafe.Sizeof(uname.Version) - if err := sysctl(mib, &uname.Version[0], &n, nil, 0); err != nil { + if err := sysctl(mib, &uname.Version[0], &n, nil, 0); err != nil && !errors.Is(err, ENOMEM) { return err } @@ -205,7 +207,7 @@ func Uname(uname *Utsname) error { mib = []_C_int{CTL_HW, HW_MACHINE} n = unsafe.Sizeof(uname.Machine) - if err := sysctl(mib, &uname.Machine[0], &n, nil, 0); err != nil { + if err := sysctl(mib, &uname.Machine[0], &n, nil, 0); err != nil && !errors.Is(err, ENOMEM) { return err } diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/syscall_linux.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/syscall_linux.go index 0f85e29e621c..5682e2628ad0 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/syscall_linux.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/syscall_linux.go @@ -1849,6 +1849,105 @@ func Dup2(oldfd, newfd int) error { //sys Fsmount(fd int, flags int, mountAttrs int) (fsfd int, err error) //sys Fsopen(fsName string, flags int) (fd int, err error) //sys Fspick(dirfd int, pathName string, flags int) (fd int, err error) + +//sys fsconfig(fd int, cmd uint, key *byte, value *byte, aux int) (err error) + +func fsconfigCommon(fd int, cmd uint, key string, value *byte, aux int) (err error) { + var keyp *byte + if keyp, err = BytePtrFromString(key); err != nil { + return + } + return fsconfig(fd, cmd, keyp, value, aux) +} + +// FsconfigSetFlag is equivalent to fsconfig(2) called +// with cmd == FSCONFIG_SET_FLAG. +// +// fd is the filesystem context to act upon. +// key is the parameter key to set. +func FsconfigSetFlag(fd int, key string) (err error) { + return fsconfigCommon(fd, FSCONFIG_SET_FLAG, key, nil, 0) +} + +// FsconfigSetString is equivalent to fsconfig(2) called +// with cmd == FSCONFIG_SET_STRING. +// +// fd is the filesystem context to act upon. +// key is the parameter key to set. +// value is the parameter value to set. +func FsconfigSetString(fd int, key string, value string) (err error) { + var valuep *byte + if valuep, err = BytePtrFromString(value); err != nil { + return + } + return fsconfigCommon(fd, FSCONFIG_SET_STRING, key, valuep, 0) +} + +// FsconfigSetBinary is equivalent to fsconfig(2) called +// with cmd == FSCONFIG_SET_BINARY. +// +// fd is the filesystem context to act upon. +// key is the parameter key to set. +// value is the parameter value to set. +func FsconfigSetBinary(fd int, key string, value []byte) (err error) { + if len(value) == 0 { + return EINVAL + } + return fsconfigCommon(fd, FSCONFIG_SET_BINARY, key, &value[0], len(value)) +} + +// FsconfigSetPath is equivalent to fsconfig(2) called +// with cmd == FSCONFIG_SET_PATH. +// +// fd is the filesystem context to act upon. +// key is the parameter key to set. +// path is a non-empty path for the specified key. +// atfd is a file descriptor at which to start the lookup, or AT_FDCWD. +func FsconfigSetPath(fd int, key string, path string, atfd int) (err error) { + var valuep *byte + if valuep, err = BytePtrFromString(path); err != nil { + return + } + return fsconfigCommon(fd, FSCONFIG_SET_PATH, key, valuep, atfd) +} + +// FsconfigSetPathEmpty is equivalent to fsconfig(2) called +// with cmd == FSCONFIG_SET_PATH_EMPTY. The same as +// FsconfigSetPath, but with AT_PATH_EMPTY implied. +func FsconfigSetPathEmpty(fd int, key string, path string, atfd int) (err error) { + var valuep *byte + if valuep, err = BytePtrFromString(path); err != nil { + return + } + return fsconfigCommon(fd, FSCONFIG_SET_PATH_EMPTY, key, valuep, atfd) +} + +// FsconfigSetFd is equivalent to fsconfig(2) called +// with cmd == FSCONFIG_SET_FD. +// +// fd is the filesystem context to act upon. +// key is the parameter key to set. +// value is a file descriptor to be assigned to the specified key. +func FsconfigSetFd(fd int, key string, value int) (err error) { + return fsconfigCommon(fd, FSCONFIG_SET_FD, key, nil, value) +} + +// FsconfigCreate is equivalent to fsconfig(2) called +// with cmd == FSCONFIG_CMD_CREATE. +// +// fd is the filesystem context to act upon. +func FsconfigCreate(fd int) (err error) { + return fsconfig(fd, FSCONFIG_CMD_CREATE, nil, nil, 0) +} + +// FsconfigReconfigure is equivalent to fsconfig(2) called +// with cmd == FSCONFIG_CMD_RECONFIGURE. +// +// fd is the filesystem context to act upon. 
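For context, here is a hypothetical end-to-end use of the fsconfig helpers above on Linux (assumes a kernel with the new mount API and sufficient privileges; the tmpfs parameters are illustrative only):

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

func main() {
	// Open a filesystem context for tmpfs.
	fd, err := unix.Fsopen("tmpfs", unix.FSOPEN_CLOEXEC)
	if err != nil {
		fmt.Println("fsopen:", err)
		return
	}
	defer unix.Close(fd)

	// Set a string parameter on the context, then create the superblock.
	if err := unix.FsconfigSetString(fd, "size", "1M"); err != nil {
		fmt.Println("fsconfig:", err)
		return
	}
	if err := unix.FsconfigCreate(fd); err != nil {
		fmt.Println("create:", err)
		return
	}

	// Turn the configured context into a detached mount fd; it could then
	// be attached somewhere with move_mount(2).
	mfd, err := unix.Fsmount(fd, unix.FSMOUNT_CLOEXEC, 0)
	if err != nil {
		fmt.Println("fsmount:", err)
		return
	}
	unix.Close(mfd)
}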
+func FsconfigReconfigure(fd int) (err error) { + return fsconfig(fd, FSCONFIG_CMD_RECONFIGURE, nil, nil, 0) +} + //sys Getdents(fd int, buf []byte) (n int, err error) = SYS_GETDENTS64 //sysnb Getpgid(pid int) (pgid int, err error) diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux.go index c73cfe2f10b7..36bf8399f4fa 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux.go @@ -1785,6 +1785,8 @@ const ( LANDLOCK_ACCESS_FS_REMOVE_FILE = 0x20 LANDLOCK_ACCESS_FS_TRUNCATE = 0x4000 LANDLOCK_ACCESS_FS_WRITE_FILE = 0x2 + LANDLOCK_ACCESS_NET_BIND_TCP = 0x1 + LANDLOCK_ACCESS_NET_CONNECT_TCP = 0x2 LANDLOCK_CREATE_RULESET_VERSION = 0x1 LINUX_REBOOT_CMD_CAD_OFF = 0x0 LINUX_REBOOT_CMD_CAD_ON = 0x89abcdef @@ -2127,6 +2129,60 @@ const ( NFNL_SUBSYS_QUEUE = 0x3 NFNL_SUBSYS_ULOG = 0x4 NFS_SUPER_MAGIC = 0x6969 + NFT_CHAIN_FLAGS = 0x7 + NFT_CHAIN_MAXNAMELEN = 0x100 + NFT_CT_MAX = 0x17 + NFT_DATA_RESERVED_MASK = 0xffffff00 + NFT_DATA_VALUE_MAXLEN = 0x40 + NFT_EXTHDR_OP_MAX = 0x4 + NFT_FIB_RESULT_MAX = 0x3 + NFT_INNER_MASK = 0xf + NFT_LOGLEVEL_MAX = 0x8 + NFT_NAME_MAXLEN = 0x100 + NFT_NG_MAX = 0x1 + NFT_OBJECT_CONNLIMIT = 0x5 + NFT_OBJECT_COUNTER = 0x1 + NFT_OBJECT_CT_EXPECT = 0x9 + NFT_OBJECT_CT_HELPER = 0x3 + NFT_OBJECT_CT_TIMEOUT = 0x7 + NFT_OBJECT_LIMIT = 0x4 + NFT_OBJECT_MAX = 0xa + NFT_OBJECT_QUOTA = 0x2 + NFT_OBJECT_SECMARK = 0x8 + NFT_OBJECT_SYNPROXY = 0xa + NFT_OBJECT_TUNNEL = 0x6 + NFT_OBJECT_UNSPEC = 0x0 + NFT_OBJ_MAXNAMELEN = 0x100 + NFT_OSF_MAXGENRELEN = 0x10 + NFT_QUEUE_FLAG_BYPASS = 0x1 + NFT_QUEUE_FLAG_CPU_FANOUT = 0x2 + NFT_QUEUE_FLAG_MASK = 0x3 + NFT_REG32_COUNT = 0x10 + NFT_REG32_SIZE = 0x4 + NFT_REG_MAX = 0x4 + NFT_REG_SIZE = 0x10 + NFT_REJECT_ICMPX_MAX = 0x3 + NFT_RT_MAX = 0x4 + NFT_SECMARK_CTX_MAXLEN = 0x100 + NFT_SET_MAXNAMELEN = 0x100 + NFT_SOCKET_MAX = 0x3 + NFT_TABLE_F_MASK = 0x3 + NFT_TABLE_MAXNAMELEN = 0x100 + NFT_TRACETYPE_MAX = 0x3 + NFT_TUNNEL_F_MASK = 0x7 + NFT_TUNNEL_MAX = 0x1 + NFT_TUNNEL_MODE_MAX = 0x2 + NFT_USERDATA_MAXLEN = 0x100 + NFT_XFRM_KEY_MAX = 0x6 + NF_NAT_RANGE_MAP_IPS = 0x1 + NF_NAT_RANGE_MASK = 0x7f + NF_NAT_RANGE_NETMAP = 0x40 + NF_NAT_RANGE_PERSISTENT = 0x8 + NF_NAT_RANGE_PROTO_OFFSET = 0x20 + NF_NAT_RANGE_PROTO_RANDOM = 0x4 + NF_NAT_RANGE_PROTO_RANDOM_ALL = 0x14 + NF_NAT_RANGE_PROTO_RANDOM_FULLY = 0x10 + NF_NAT_RANGE_PROTO_SPECIFIED = 0x2 NILFS_SUPER_MAGIC = 0x3434 NL0 = 0x0 NL1 = 0x100 @@ -2411,6 +2467,7 @@ const ( PR_MCE_KILL_GET = 0x22 PR_MCE_KILL_LATE = 0x0 PR_MCE_KILL_SET = 0x1 + PR_MDWE_NO_INHERIT = 0x2 PR_MDWE_REFUSE_EXEC_GAIN = 0x1 PR_MPX_DISABLE_MANAGEMENT = 0x2c PR_MPX_ENABLE_MANAGEMENT = 0x2b @@ -2615,8 +2672,9 @@ const ( RTAX_FEATURES = 0xc RTAX_FEATURE_ALLFRAG = 0x8 RTAX_FEATURE_ECN = 0x1 - RTAX_FEATURE_MASK = 0xf + RTAX_FEATURE_MASK = 0x1f RTAX_FEATURE_SACK = 0x2 + RTAX_FEATURE_TCP_USEC_TS = 0x10 RTAX_FEATURE_TIMESTAMP = 0x4 RTAX_HOPLIMIT = 0xa RTAX_INITCWND = 0xb @@ -2859,9 +2917,38 @@ const ( SCM_RIGHTS = 0x1 SCM_TIMESTAMP = 0x1d SC_LOG_FLUSH = 0x100000 + SECCOMP_ADDFD_FLAG_SEND = 0x2 + SECCOMP_ADDFD_FLAG_SETFD = 0x1 + SECCOMP_FILTER_FLAG_LOG = 0x2 + SECCOMP_FILTER_FLAG_NEW_LISTENER = 0x8 + SECCOMP_FILTER_FLAG_SPEC_ALLOW = 0x4 + SECCOMP_FILTER_FLAG_TSYNC = 0x1 + SECCOMP_FILTER_FLAG_TSYNC_ESRCH = 0x10 + SECCOMP_FILTER_FLAG_WAIT_KILLABLE_RECV = 0x20 + SECCOMP_GET_ACTION_AVAIL = 0x2 + SECCOMP_GET_NOTIF_SIZES = 0x3 + SECCOMP_IOCTL_NOTIF_RECV = 0xc0502100 + 
SECCOMP_IOCTL_NOTIF_SEND = 0xc0182101 + SECCOMP_IOC_MAGIC = '!' SECCOMP_MODE_DISABLED = 0x0 SECCOMP_MODE_FILTER = 0x2 SECCOMP_MODE_STRICT = 0x1 + SECCOMP_RET_ACTION = 0x7fff0000 + SECCOMP_RET_ACTION_FULL = 0xffff0000 + SECCOMP_RET_ALLOW = 0x7fff0000 + SECCOMP_RET_DATA = 0xffff + SECCOMP_RET_ERRNO = 0x50000 + SECCOMP_RET_KILL = 0x0 + SECCOMP_RET_KILL_PROCESS = 0x80000000 + SECCOMP_RET_KILL_THREAD = 0x0 + SECCOMP_RET_LOG = 0x7ffc0000 + SECCOMP_RET_TRACE = 0x7ff00000 + SECCOMP_RET_TRAP = 0x30000 + SECCOMP_RET_USER_NOTIF = 0x7fc00000 + SECCOMP_SET_MODE_FILTER = 0x1 + SECCOMP_SET_MODE_STRICT = 0x0 + SECCOMP_USER_NOTIF_FD_SYNC_WAKE_UP = 0x1 + SECCOMP_USER_NOTIF_FLAG_CONTINUE = 0x1 SECRETMEM_MAGIC = 0x5345434d SECURITYFS_MAGIC = 0x73636673 SEEK_CUR = 0x1 @@ -3021,6 +3108,7 @@ const ( SOL_TIPC = 0x10f SOL_TLS = 0x11a SOL_UDP = 0x11 + SOL_VSOCK = 0x11f SOL_X25 = 0x106 SOL_XDP = 0x11b SOMAXCONN = 0x1000 diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_386.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_386.go index 4920821cf3b2..42ff8c3c1b06 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_386.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_386.go @@ -281,6 +281,9 @@ const ( SCM_TIMESTAMPNS = 0x23 SCM_TXTIME = 0x3d SCM_WIFI_STATUS = 0x29 + SECCOMP_IOCTL_NOTIF_ADDFD = 0x40182103 + SECCOMP_IOCTL_NOTIF_ID_VALID = 0x40082102 + SECCOMP_IOCTL_NOTIF_SET_FLAGS = 0x40082104 SFD_CLOEXEC = 0x80000 SFD_NONBLOCK = 0x800 SIOCATMARK = 0x8905 diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_amd64.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_amd64.go index a0c1e411275c..dca436004fa4 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_amd64.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_amd64.go @@ -282,6 +282,9 @@ const ( SCM_TIMESTAMPNS = 0x23 SCM_TXTIME = 0x3d SCM_WIFI_STATUS = 0x29 + SECCOMP_IOCTL_NOTIF_ADDFD = 0x40182103 + SECCOMP_IOCTL_NOTIF_ID_VALID = 0x40082102 + SECCOMP_IOCTL_NOTIF_SET_FLAGS = 0x40082104 SFD_CLOEXEC = 0x80000 SFD_NONBLOCK = 0x800 SIOCATMARK = 0x8905 diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_arm.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_arm.go index c63985560f61..5cca668ac302 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_arm.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_arm.go @@ -288,6 +288,9 @@ const ( SCM_TIMESTAMPNS = 0x23 SCM_TXTIME = 0x3d SCM_WIFI_STATUS = 0x29 + SECCOMP_IOCTL_NOTIF_ADDFD = 0x40182103 + SECCOMP_IOCTL_NOTIF_ID_VALID = 0x40082102 + SECCOMP_IOCTL_NOTIF_SET_FLAGS = 0x40082104 SFD_CLOEXEC = 0x80000 SFD_NONBLOCK = 0x800 SIOCATMARK = 0x8905 diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_arm64.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_arm64.go index 47cc62e25c14..d8cae6d15340 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_arm64.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_arm64.go @@ -278,6 +278,9 @@ const ( SCM_TIMESTAMPNS = 0x23 SCM_TXTIME = 0x3d SCM_WIFI_STATUS = 0x29 + SECCOMP_IOCTL_NOTIF_ADDFD = 0x40182103 + SECCOMP_IOCTL_NOTIF_ID_VALID = 0x40082102 + SECCOMP_IOCTL_NOTIF_SET_FLAGS = 0x40082104 SFD_CLOEXEC = 0x80000 SFD_NONBLOCK = 0x800 SIOCATMARK = 0x8905 diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_loong64.go 
b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_loong64.go index 27ac4a09e22a..28e39afdcb4a 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_loong64.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_loong64.go @@ -275,6 +275,9 @@ const ( SCM_TIMESTAMPNS = 0x23 SCM_TXTIME = 0x3d SCM_WIFI_STATUS = 0x29 + SECCOMP_IOCTL_NOTIF_ADDFD = 0x40182103 + SECCOMP_IOCTL_NOTIF_ID_VALID = 0x40082102 + SECCOMP_IOCTL_NOTIF_SET_FLAGS = 0x40082104 SFD_CLOEXEC = 0x80000 SFD_NONBLOCK = 0x800 SIOCATMARK = 0x8905 diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mips.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mips.go index 54694642a5de..cd66e92cb426 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mips.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mips.go @@ -281,6 +281,9 @@ const ( SCM_TIMESTAMPNS = 0x23 SCM_TXTIME = 0x3d SCM_WIFI_STATUS = 0x29 + SECCOMP_IOCTL_NOTIF_ADDFD = 0x80182103 + SECCOMP_IOCTL_NOTIF_ID_VALID = 0x80082102 + SECCOMP_IOCTL_NOTIF_SET_FLAGS = 0x80082104 SFD_CLOEXEC = 0x80000 SFD_NONBLOCK = 0x80 SIOCATMARK = 0x40047307 diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mips64.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mips64.go index 3adb81d75822..c1595eba78e3 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mips64.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mips64.go @@ -281,6 +281,9 @@ const ( SCM_TIMESTAMPNS = 0x23 SCM_TXTIME = 0x3d SCM_WIFI_STATUS = 0x29 + SECCOMP_IOCTL_NOTIF_ADDFD = 0x80182103 + SECCOMP_IOCTL_NOTIF_ID_VALID = 0x80082102 + SECCOMP_IOCTL_NOTIF_SET_FLAGS = 0x80082104 SFD_CLOEXEC = 0x80000 SFD_NONBLOCK = 0x80 SIOCATMARK = 0x40047307 diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mips64le.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mips64le.go index 2dfe98f0d1b1..ee9456b0da74 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mips64le.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mips64le.go @@ -281,6 +281,9 @@ const ( SCM_TIMESTAMPNS = 0x23 SCM_TXTIME = 0x3d SCM_WIFI_STATUS = 0x29 + SECCOMP_IOCTL_NOTIF_ADDFD = 0x80182103 + SECCOMP_IOCTL_NOTIF_ID_VALID = 0x80082102 + SECCOMP_IOCTL_NOTIF_SET_FLAGS = 0x80082104 SFD_CLOEXEC = 0x80000 SFD_NONBLOCK = 0x80 SIOCATMARK = 0x40047307 diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mipsle.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mipsle.go index f5398f84f041..8cfca81e1b56 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mipsle.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_mipsle.go @@ -281,6 +281,9 @@ const ( SCM_TIMESTAMPNS = 0x23 SCM_TXTIME = 0x3d SCM_WIFI_STATUS = 0x29 + SECCOMP_IOCTL_NOTIF_ADDFD = 0x80182103 + SECCOMP_IOCTL_NOTIF_ID_VALID = 0x80082102 + SECCOMP_IOCTL_NOTIF_SET_FLAGS = 0x80082104 SFD_CLOEXEC = 0x80000 SFD_NONBLOCK = 0x80 SIOCATMARK = 0x40047307 diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_ppc.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_ppc.go index c54f152d68fd..60b0deb3af77 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_ppc.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_ppc.go @@ -336,6 +336,9 @@ const ( SCM_TIMESTAMPNS = 0x23 SCM_TXTIME = 0x3d 
SCM_WIFI_STATUS = 0x29 + SECCOMP_IOCTL_NOTIF_ADDFD = 0x80182103 + SECCOMP_IOCTL_NOTIF_ID_VALID = 0x80082102 + SECCOMP_IOCTL_NOTIF_SET_FLAGS = 0x80082104 SFD_CLOEXEC = 0x80000 SFD_NONBLOCK = 0x800 SIOCATMARK = 0x8905 diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64.go index 76057dc72fb5..f90aa7281bfb 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64.go @@ -340,6 +340,9 @@ const ( SCM_TIMESTAMPNS = 0x23 SCM_TXTIME = 0x3d SCM_WIFI_STATUS = 0x29 + SECCOMP_IOCTL_NOTIF_ADDFD = 0x80182103 + SECCOMP_IOCTL_NOTIF_ID_VALID = 0x80082102 + SECCOMP_IOCTL_NOTIF_SET_FLAGS = 0x80082104 SFD_CLOEXEC = 0x80000 SFD_NONBLOCK = 0x800 SIOCATMARK = 0x8905 diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64le.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64le.go index e0c3725e2b89..ba9e01503383 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64le.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_ppc64le.go @@ -340,6 +340,9 @@ const ( SCM_TIMESTAMPNS = 0x23 SCM_TXTIME = 0x3d SCM_WIFI_STATUS = 0x29 + SECCOMP_IOCTL_NOTIF_ADDFD = 0x80182103 + SECCOMP_IOCTL_NOTIF_ID_VALID = 0x80082102 + SECCOMP_IOCTL_NOTIF_SET_FLAGS = 0x80082104 SFD_CLOEXEC = 0x80000 SFD_NONBLOCK = 0x800 SIOCATMARK = 0x8905 diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_riscv64.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_riscv64.go index 18f2813ed54b..07cdfd6e9fd3 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_riscv64.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_riscv64.go @@ -272,6 +272,9 @@ const ( SCM_TIMESTAMPNS = 0x23 SCM_TXTIME = 0x3d SCM_WIFI_STATUS = 0x29 + SECCOMP_IOCTL_NOTIF_ADDFD = 0x40182103 + SECCOMP_IOCTL_NOTIF_ID_VALID = 0x40082102 + SECCOMP_IOCTL_NOTIF_SET_FLAGS = 0x40082104 SFD_CLOEXEC = 0x80000 SFD_NONBLOCK = 0x800 SIOCATMARK = 0x8905 diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_s390x.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_s390x.go index 11619d4ec88f..2f1dd214a74e 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_s390x.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_s390x.go @@ -344,6 +344,9 @@ const ( SCM_TIMESTAMPNS = 0x23 SCM_TXTIME = 0x3d SCM_WIFI_STATUS = 0x29 + SECCOMP_IOCTL_NOTIF_ADDFD = 0x40182103 + SECCOMP_IOCTL_NOTIF_ID_VALID = 0x40082102 + SECCOMP_IOCTL_NOTIF_SET_FLAGS = 0x40082104 SFD_CLOEXEC = 0x80000 SFD_NONBLOCK = 0x800 SIOCATMARK = 0x8905 diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_sparc64.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_sparc64.go index 396d994da79c..f40519d90180 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_sparc64.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zerrors_linux_sparc64.go @@ -335,6 +335,9 @@ const ( SCM_TIMESTAMPNS = 0x21 SCM_TXTIME = 0x3f SCM_WIFI_STATUS = 0x25 + SECCOMP_IOCTL_NOTIF_ADDFD = 0x80182103 + SECCOMP_IOCTL_NOTIF_ID_VALID = 0x80082102 + SECCOMP_IOCTL_NOTIF_SET_FLAGS = 0x80082104 SFD_CLOEXEC = 0x400000 SFD_NONBLOCK = 0x4000 SF_FP = 0x38 diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_linux.go 
b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_linux.go index 1488d27128cd..87d8612a1dc7 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_linux.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_linux.go @@ -906,6 +906,16 @@ func Fspick(dirfd int, pathName string, flags int) (fd int, err error) { // THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT +func fsconfig(fd int, cmd uint, key *byte, value *byte, aux int) (err error) { + _, _, e1 := Syscall6(SYS_FSCONFIG, uintptr(fd), uintptr(cmd), uintptr(unsafe.Pointer(key)), uintptr(unsafe.Pointer(value)), uintptr(aux), 0) + if e1 != 0 { + err = errnoErr(e1) + } + return +} + +// THIS FILE IS GENERATED BY THE COMMAND AT THE TOP; DO NOT EDIT + func Getdents(fd int, buf []byte) (n int, err error) { var _p0 unsafe.Pointer if len(buf) > 0 { diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.go index a1d061597ccc..9dc42410b78b 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_386.go @@ -2297,5 +2297,3 @@ func unveil(path *byte, flags *byte) (err error) { var libc_unveil_trampoline_addr uintptr //go:cgo_import_dynamic libc_unveil unveil "libc.so" - - diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.go index 5b2a74097786..0d3a0751cd43 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_amd64.go @@ -2297,5 +2297,3 @@ func unveil(path *byte, flags *byte) (err error) { var libc_unveil_trampoline_addr uintptr //go:cgo_import_dynamic libc_unveil unveil "libc.so" - - diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.go index f6eda1344a83..c39f7776db33 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm.go @@ -2297,5 +2297,3 @@ func unveil(path *byte, flags *byte) (err error) { var libc_unveil_trampoline_addr uintptr //go:cgo_import_dynamic libc_unveil unveil "libc.so" - - diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm64.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm64.go index 55df20ae9d8d..57571d072fe6 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm64.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_arm64.go @@ -2297,5 +2297,3 @@ func unveil(path *byte, flags *byte) (err error) { var libc_unveil_trampoline_addr uintptr //go:cgo_import_dynamic libc_unveil unveil "libc.so" - - diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_mips64.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_mips64.go index 8c1155cbc087..e62963e67e20 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_mips64.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_mips64.go @@ -2297,5 +2297,3 @@ func unveil(path *byte, flags *byte) (err error) { var libc_unveil_trampoline_addr uintptr //go:cgo_import_dynamic libc_unveil unveil "libc.so" - - diff --git 
a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_ppc64.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_ppc64.go index 7cc80c58d985..00831354c82f 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_ppc64.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_ppc64.go @@ -2297,5 +2297,3 @@ func unveil(path *byte, flags *byte) (err error) { var libc_unveil_trampoline_addr uintptr //go:cgo_import_dynamic libc_unveil unveil "libc.so" - - diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_riscv64.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_riscv64.go index 0688737f4944..79029ed58482 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_riscv64.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsyscall_openbsd_riscv64.go @@ -2297,5 +2297,3 @@ func unveil(path *byte, flags *byte) (err error) { var libc_unveil_trampoline_addr uintptr //go:cgo_import_dynamic libc_unveil unveil "libc.so" - - diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_386.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_386.go index fcf3ecbddee1..0cc3ce496e22 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_386.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_386.go @@ -448,4 +448,8 @@ const ( SYS_SET_MEMPOLICY_HOME_NODE = 450 SYS_CACHESTAT = 451 SYS_FCHMODAT2 = 452 + SYS_MAP_SHADOW_STACK = 453 + SYS_FUTEX_WAKE = 454 + SYS_FUTEX_WAIT = 455 + SYS_FUTEX_REQUEUE = 456 ) diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_amd64.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_amd64.go index f56dc2504ae1..856d92d69ef9 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_amd64.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_amd64.go @@ -371,4 +371,7 @@ const ( SYS_CACHESTAT = 451 SYS_FCHMODAT2 = 452 SYS_MAP_SHADOW_STACK = 453 + SYS_FUTEX_WAKE = 454 + SYS_FUTEX_WAIT = 455 + SYS_FUTEX_REQUEUE = 456 ) diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_arm.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_arm.go index 974bf246767e..8d467094cf57 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_arm.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_arm.go @@ -412,4 +412,8 @@ const ( SYS_SET_MEMPOLICY_HOME_NODE = 450 SYS_CACHESTAT = 451 SYS_FCHMODAT2 = 452 + SYS_MAP_SHADOW_STACK = 453 + SYS_FUTEX_WAKE = 454 + SYS_FUTEX_WAIT = 455 + SYS_FUTEX_REQUEUE = 456 ) diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_arm64.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_arm64.go index 39a2739e2310..edc173244d0d 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_arm64.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_arm64.go @@ -315,4 +315,8 @@ const ( SYS_SET_MEMPOLICY_HOME_NODE = 450 SYS_CACHESTAT = 451 SYS_FCHMODAT2 = 452 + SYS_MAP_SHADOW_STACK = 453 + SYS_FUTEX_WAKE = 454 + SYS_FUTEX_WAIT = 455 + SYS_FUTEX_REQUEUE = 456 ) diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_loong64.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_loong64.go index cf9c9d77e10f..445eba206155 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_loong64.go +++ 
b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_loong64.go @@ -309,4 +309,8 @@ const ( SYS_SET_MEMPOLICY_HOME_NODE = 450 SYS_CACHESTAT = 451 SYS_FCHMODAT2 = 452 + SYS_MAP_SHADOW_STACK = 453 + SYS_FUTEX_WAKE = 454 + SYS_FUTEX_WAIT = 455 + SYS_FUTEX_REQUEUE = 456 ) diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_mips.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_mips.go index 10b7362ef442..adba01bca701 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_mips.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_mips.go @@ -432,4 +432,8 @@ const ( SYS_SET_MEMPOLICY_HOME_NODE = 4450 SYS_CACHESTAT = 4451 SYS_FCHMODAT2 = 4452 + SYS_MAP_SHADOW_STACK = 4453 + SYS_FUTEX_WAKE = 4454 + SYS_FUTEX_WAIT = 4455 + SYS_FUTEX_REQUEUE = 4456 ) diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_mips64.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_mips64.go index cd4d8b4fd35e..014c4e9c7a75 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_mips64.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_mips64.go @@ -362,4 +362,8 @@ const ( SYS_SET_MEMPOLICY_HOME_NODE = 5450 SYS_CACHESTAT = 5451 SYS_FCHMODAT2 = 5452 + SYS_MAP_SHADOW_STACK = 5453 + SYS_FUTEX_WAKE = 5454 + SYS_FUTEX_WAIT = 5455 + SYS_FUTEX_REQUEUE = 5456 ) diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_mips64le.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_mips64le.go index 2c0efca818b3..ccc97d74d05d 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_mips64le.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_mips64le.go @@ -362,4 +362,8 @@ const ( SYS_SET_MEMPOLICY_HOME_NODE = 5450 SYS_CACHESTAT = 5451 SYS_FCHMODAT2 = 5452 + SYS_MAP_SHADOW_STACK = 5453 + SYS_FUTEX_WAKE = 5454 + SYS_FUTEX_WAIT = 5455 + SYS_FUTEX_REQUEUE = 5456 ) diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_mipsle.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_mipsle.go index a72e31d391d5..ec2b64a95d74 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_mipsle.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_mipsle.go @@ -432,4 +432,8 @@ const ( SYS_SET_MEMPOLICY_HOME_NODE = 4450 SYS_CACHESTAT = 4451 SYS_FCHMODAT2 = 4452 + SYS_MAP_SHADOW_STACK = 4453 + SYS_FUTEX_WAKE = 4454 + SYS_FUTEX_WAIT = 4455 + SYS_FUTEX_REQUEUE = 4456 ) diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_ppc.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_ppc.go index c7d1e374713c..21a839e338b3 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_ppc.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_ppc.go @@ -439,4 +439,8 @@ const ( SYS_SET_MEMPOLICY_HOME_NODE = 450 SYS_CACHESTAT = 451 SYS_FCHMODAT2 = 452 + SYS_MAP_SHADOW_STACK = 453 + SYS_FUTEX_WAKE = 454 + SYS_FUTEX_WAIT = 455 + SYS_FUTEX_REQUEUE = 456 ) diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_ppc64.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_ppc64.go index f4d4838c870d..c11121ec3b4d 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_ppc64.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_ppc64.go @@ -411,4 +411,8 @@ const ( SYS_SET_MEMPOLICY_HOME_NODE = 450 SYS_CACHESTAT = 451 SYS_FCHMODAT2 = 452 + 
SYS_MAP_SHADOW_STACK = 453 + SYS_FUTEX_WAKE = 454 + SYS_FUTEX_WAIT = 455 + SYS_FUTEX_REQUEUE = 456 ) diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_ppc64le.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_ppc64le.go index b64f0e59114d..909b631fcb45 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_ppc64le.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_ppc64le.go @@ -411,4 +411,8 @@ const ( SYS_SET_MEMPOLICY_HOME_NODE = 450 SYS_CACHESTAT = 451 SYS_FCHMODAT2 = 452 + SYS_MAP_SHADOW_STACK = 453 + SYS_FUTEX_WAKE = 454 + SYS_FUTEX_WAIT = 455 + SYS_FUTEX_REQUEUE = 456 ) diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_riscv64.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_riscv64.go index 95711195a064..e49bed16ea6b 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_riscv64.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_riscv64.go @@ -316,4 +316,8 @@ const ( SYS_SET_MEMPOLICY_HOME_NODE = 450 SYS_CACHESTAT = 451 SYS_FCHMODAT2 = 452 + SYS_MAP_SHADOW_STACK = 453 + SYS_FUTEX_WAKE = 454 + SYS_FUTEX_WAIT = 455 + SYS_FUTEX_REQUEUE = 456 ) diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_s390x.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_s390x.go index f94e943bc4f5..66017d2d32b3 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_s390x.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_s390x.go @@ -377,4 +377,8 @@ const ( SYS_SET_MEMPOLICY_HOME_NODE = 450 SYS_CACHESTAT = 451 SYS_FCHMODAT2 = 452 + SYS_MAP_SHADOW_STACK = 453 + SYS_FUTEX_WAKE = 454 + SYS_FUTEX_WAIT = 455 + SYS_FUTEX_REQUEUE = 456 ) diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_sparc64.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_sparc64.go index ba0c2bc5154a..47bab18dcedb 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_sparc64.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/zsysnum_linux_sparc64.go @@ -390,4 +390,8 @@ const ( SYS_SET_MEMPOLICY_HOME_NODE = 450 SYS_CACHESTAT = 451 SYS_FCHMODAT2 = 452 + SYS_MAP_SHADOW_STACK = 453 + SYS_FUTEX_WAKE = 454 + SYS_FUTEX_WAIT = 455 + SYS_FUTEX_REQUEUE = 456 ) diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux.go b/cluster-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux.go index bbf8399ff586..eff6bcdef814 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/unix/ztypes_linux.go @@ -174,7 +174,8 @@ type FscryptPolicyV2 struct { Contents_encryption_mode uint8 Filenames_encryption_mode uint8 Flags uint8 - _ [4]uint8 + Log2_data_unit_size uint8 + _ [3]uint8 Master_key_identifier [16]uint8 } @@ -455,60 +456,63 @@ type Ucred struct { } type TCPInfo struct { - State uint8 - Ca_state uint8 - Retransmits uint8 - Probes uint8 - Backoff uint8 - Options uint8 - Rto uint32 - Ato uint32 - Snd_mss uint32 - Rcv_mss uint32 - Unacked uint32 - Sacked uint32 - Lost uint32 - Retrans uint32 - Fackets uint32 - Last_data_sent uint32 - Last_ack_sent uint32 - Last_data_recv uint32 - Last_ack_recv uint32 - Pmtu uint32 - Rcv_ssthresh uint32 - Rtt uint32 - Rttvar uint32 - Snd_ssthresh uint32 - Snd_cwnd uint32 - Advmss uint32 - Reordering uint32 - Rcv_rtt uint32 - Rcv_space uint32 - Total_retrans uint32 - Pacing_rate uint64 - Max_pacing_rate uint64 - Bytes_acked uint64 - 
Bytes_received uint64 - Segs_out uint32 - Segs_in uint32 - Notsent_bytes uint32 - Min_rtt uint32 - Data_segs_in uint32 - Data_segs_out uint32 - Delivery_rate uint64 - Busy_time uint64 - Rwnd_limited uint64 - Sndbuf_limited uint64 - Delivered uint32 - Delivered_ce uint32 - Bytes_sent uint64 - Bytes_retrans uint64 - Dsack_dups uint32 - Reord_seen uint32 - Rcv_ooopack uint32 - Snd_wnd uint32 - Rcv_wnd uint32 - Rehash uint32 + State uint8 + Ca_state uint8 + Retransmits uint8 + Probes uint8 + Backoff uint8 + Options uint8 + Rto uint32 + Ato uint32 + Snd_mss uint32 + Rcv_mss uint32 + Unacked uint32 + Sacked uint32 + Lost uint32 + Retrans uint32 + Fackets uint32 + Last_data_sent uint32 + Last_ack_sent uint32 + Last_data_recv uint32 + Last_ack_recv uint32 + Pmtu uint32 + Rcv_ssthresh uint32 + Rtt uint32 + Rttvar uint32 + Snd_ssthresh uint32 + Snd_cwnd uint32 + Advmss uint32 + Reordering uint32 + Rcv_rtt uint32 + Rcv_space uint32 + Total_retrans uint32 + Pacing_rate uint64 + Max_pacing_rate uint64 + Bytes_acked uint64 + Bytes_received uint64 + Segs_out uint32 + Segs_in uint32 + Notsent_bytes uint32 + Min_rtt uint32 + Data_segs_in uint32 + Data_segs_out uint32 + Delivery_rate uint64 + Busy_time uint64 + Rwnd_limited uint64 + Sndbuf_limited uint64 + Delivered uint32 + Delivered_ce uint32 + Bytes_sent uint64 + Bytes_retrans uint64 + Dsack_dups uint32 + Reord_seen uint32 + Rcv_ooopack uint32 + Snd_wnd uint32 + Rcv_wnd uint32 + Rehash uint32 + Total_rto uint16 + Total_rto_recoveries uint16 + Total_rto_time uint32 } type CanFilter struct { @@ -551,7 +555,7 @@ const ( SizeofIPv6MTUInfo = 0x20 SizeofICMPv6Filter = 0x20 SizeofUcred = 0xc - SizeofTCPInfo = 0xf0 + SizeofTCPInfo = 0xf8 SizeofCanFilter = 0x8 SizeofTCPRepairOpt = 0x8 ) @@ -832,6 +836,15 @@ const ( FSPICK_EMPTY_PATH = 0x8 FSMOUNT_CLOEXEC = 0x1 + + FSCONFIG_SET_FLAG = 0x0 + FSCONFIG_SET_STRING = 0x1 + FSCONFIG_SET_BINARY = 0x2 + FSCONFIG_SET_PATH = 0x3 + FSCONFIG_SET_PATH_EMPTY = 0x4 + FSCONFIG_SET_FD = 0x5 + FSCONFIG_CMD_CREATE = 0x6 + FSCONFIG_CMD_RECONFIGURE = 0x7 ) type OpenHow struct { @@ -1546,6 +1559,7 @@ const ( IFLA_DEVLINK_PORT = 0x3e IFLA_GSO_IPV4_MAX_SIZE = 0x3f IFLA_GRO_IPV4_MAX_SIZE = 0x40 + IFLA_DPLL_PIN = 0x41 IFLA_PROTO_DOWN_REASON_UNSPEC = 0x0 IFLA_PROTO_DOWN_REASON_MASK = 0x1 IFLA_PROTO_DOWN_REASON_VALUE = 0x2 @@ -1561,6 +1575,7 @@ const ( IFLA_INET6_ICMP6STATS = 0x6 IFLA_INET6_TOKEN = 0x7 IFLA_INET6_ADDR_GEN_MODE = 0x8 + IFLA_INET6_RA_MTU = 0x9 IFLA_BR_UNSPEC = 0x0 IFLA_BR_FORWARD_DELAY = 0x1 IFLA_BR_HELLO_TIME = 0x2 @@ -1608,6 +1623,9 @@ const ( IFLA_BR_MCAST_MLD_VERSION = 0x2c IFLA_BR_VLAN_STATS_PER_PORT = 0x2d IFLA_BR_MULTI_BOOLOPT = 0x2e + IFLA_BR_MCAST_QUERIER_STATE = 0x2f + IFLA_BR_FDB_N_LEARNED = 0x30 + IFLA_BR_FDB_MAX_LEARNED = 0x31 IFLA_BRPORT_UNSPEC = 0x0 IFLA_BRPORT_STATE = 0x1 IFLA_BRPORT_PRIORITY = 0x2 @@ -1645,6 +1663,14 @@ const ( IFLA_BRPORT_BACKUP_PORT = 0x22 IFLA_BRPORT_MRP_RING_OPEN = 0x23 IFLA_BRPORT_MRP_IN_OPEN = 0x24 + IFLA_BRPORT_MCAST_EHT_HOSTS_LIMIT = 0x25 + IFLA_BRPORT_MCAST_EHT_HOSTS_CNT = 0x26 + IFLA_BRPORT_LOCKED = 0x27 + IFLA_BRPORT_MAB = 0x28 + IFLA_BRPORT_MCAST_N_GROUPS = 0x29 + IFLA_BRPORT_MCAST_MAX_GROUPS = 0x2a + IFLA_BRPORT_NEIGH_VLAN_SUPPRESS = 0x2b + IFLA_BRPORT_BACKUP_NHID = 0x2c IFLA_INFO_UNSPEC = 0x0 IFLA_INFO_KIND = 0x1 IFLA_INFO_DATA = 0x2 @@ -1666,6 +1692,9 @@ const ( IFLA_MACVLAN_MACADDR = 0x4 IFLA_MACVLAN_MACADDR_DATA = 0x5 IFLA_MACVLAN_MACADDR_COUNT = 0x6 + IFLA_MACVLAN_BC_QUEUE_LEN = 0x7 + IFLA_MACVLAN_BC_QUEUE_LEN_USED = 0x8 + IFLA_MACVLAN_BC_CUTOFF = 0x9 IFLA_VRF_UNSPEC = 0x0 
IFLA_VRF_TABLE = 0x1 IFLA_VRF_PORT_UNSPEC = 0x0 @@ -1689,9 +1718,22 @@ const ( IFLA_XFRM_UNSPEC = 0x0 IFLA_XFRM_LINK = 0x1 IFLA_XFRM_IF_ID = 0x2 + IFLA_XFRM_COLLECT_METADATA = 0x3 IFLA_IPVLAN_UNSPEC = 0x0 IFLA_IPVLAN_MODE = 0x1 IFLA_IPVLAN_FLAGS = 0x2 + NETKIT_NEXT = -0x1 + NETKIT_PASS = 0x0 + NETKIT_DROP = 0x2 + NETKIT_REDIRECT = 0x7 + NETKIT_L2 = 0x0 + NETKIT_L3 = 0x1 + IFLA_NETKIT_UNSPEC = 0x0 + IFLA_NETKIT_PEER_INFO = 0x1 + IFLA_NETKIT_PRIMARY = 0x2 + IFLA_NETKIT_POLICY = 0x3 + IFLA_NETKIT_PEER_POLICY = 0x4 + IFLA_NETKIT_MODE = 0x5 IFLA_VXLAN_UNSPEC = 0x0 IFLA_VXLAN_ID = 0x1 IFLA_VXLAN_GROUP = 0x2 @@ -1722,6 +1764,8 @@ const ( IFLA_VXLAN_GPE = 0x1b IFLA_VXLAN_TTL_INHERIT = 0x1c IFLA_VXLAN_DF = 0x1d + IFLA_VXLAN_VNIFILTER = 0x1e + IFLA_VXLAN_LOCALBYPASS = 0x1f IFLA_GENEVE_UNSPEC = 0x0 IFLA_GENEVE_ID = 0x1 IFLA_GENEVE_REMOTE = 0x2 @@ -1736,6 +1780,7 @@ const ( IFLA_GENEVE_LABEL = 0xb IFLA_GENEVE_TTL_INHERIT = 0xc IFLA_GENEVE_DF = 0xd + IFLA_GENEVE_INNER_PROTO_INHERIT = 0xe IFLA_BAREUDP_UNSPEC = 0x0 IFLA_BAREUDP_PORT = 0x1 IFLA_BAREUDP_ETHERTYPE = 0x2 @@ -1748,6 +1793,8 @@ const ( IFLA_GTP_FD1 = 0x2 IFLA_GTP_PDP_HASHSIZE = 0x3 IFLA_GTP_ROLE = 0x4 + IFLA_GTP_CREATE_SOCKETS = 0x5 + IFLA_GTP_RESTART_COUNT = 0x6 IFLA_BOND_UNSPEC = 0x0 IFLA_BOND_MODE = 0x1 IFLA_BOND_ACTIVE_SLAVE = 0x2 @@ -1777,6 +1824,9 @@ const ( IFLA_BOND_AD_ACTOR_SYSTEM = 0x1a IFLA_BOND_TLB_DYNAMIC_LB = 0x1b IFLA_BOND_PEER_NOTIF_DELAY = 0x1c + IFLA_BOND_AD_LACP_ACTIVE = 0x1d + IFLA_BOND_MISSED_MAX = 0x1e + IFLA_BOND_NS_IP6_TARGET = 0x1f IFLA_BOND_AD_INFO_UNSPEC = 0x0 IFLA_BOND_AD_INFO_AGGREGATOR = 0x1 IFLA_BOND_AD_INFO_NUM_PORTS = 0x2 @@ -1792,6 +1842,7 @@ const ( IFLA_BOND_SLAVE_AD_AGGREGATOR_ID = 0x6 IFLA_BOND_SLAVE_AD_ACTOR_OPER_PORT_STATE = 0x7 IFLA_BOND_SLAVE_AD_PARTNER_OPER_PORT_STATE = 0x8 + IFLA_BOND_SLAVE_PRIO = 0x9 IFLA_VF_INFO_UNSPEC = 0x0 IFLA_VF_INFO = 0x1 IFLA_VF_UNSPEC = 0x0 @@ -1850,8 +1901,16 @@ const ( IFLA_STATS_LINK_XSTATS_SLAVE = 0x3 IFLA_STATS_LINK_OFFLOAD_XSTATS = 0x4 IFLA_STATS_AF_SPEC = 0x5 + IFLA_STATS_GETSET_UNSPEC = 0x0 + IFLA_STATS_GET_FILTERS = 0x1 + IFLA_STATS_SET_OFFLOAD_XSTATS_L3_STATS = 0x2 IFLA_OFFLOAD_XSTATS_UNSPEC = 0x0 IFLA_OFFLOAD_XSTATS_CPU_HIT = 0x1 + IFLA_OFFLOAD_XSTATS_HW_S_INFO = 0x2 + IFLA_OFFLOAD_XSTATS_L3_STATS = 0x3 + IFLA_OFFLOAD_XSTATS_HW_S_INFO_UNSPEC = 0x0 + IFLA_OFFLOAD_XSTATS_HW_S_INFO_REQUEST = 0x1 + IFLA_OFFLOAD_XSTATS_HW_S_INFO_USED = 0x2 IFLA_XDP_UNSPEC = 0x0 IFLA_XDP_FD = 0x1 IFLA_XDP_ATTACHED = 0x2 @@ -1881,6 +1940,11 @@ const ( IFLA_RMNET_UNSPEC = 0x0 IFLA_RMNET_MUX_ID = 0x1 IFLA_RMNET_FLAGS = 0x2 + IFLA_MCTP_UNSPEC = 0x0 + IFLA_MCTP_NET = 0x1 + IFLA_DSA_UNSPEC = 0x0 + IFLA_DSA_CONDUIT = 0x1 + IFLA_DSA_MASTER = 0x1 ) const ( @@ -3399,7 +3463,7 @@ const ( DEVLINK_PORT_FN_ATTR_STATE = 0x2 DEVLINK_PORT_FN_ATTR_OPSTATE = 0x3 DEVLINK_PORT_FN_ATTR_CAPS = 0x4 - DEVLINK_PORT_FUNCTION_ATTR_MAX = 0x4 + DEVLINK_PORT_FUNCTION_ATTR_MAX = 0x5 ) type FsverityDigest struct { @@ -4183,7 +4247,8 @@ const ( ) type LandlockRulesetAttr struct { - Access_fs uint64 + Access_fs uint64 + Access_net uint64 } type LandlockPathBeneathAttr struct { @@ -5134,7 +5199,7 @@ const ( NL80211_FREQUENCY_ATTR_GO_CONCURRENT = 0xf NL80211_FREQUENCY_ATTR_INDOOR_ONLY = 0xe NL80211_FREQUENCY_ATTR_IR_CONCURRENT = 0xf - NL80211_FREQUENCY_ATTR_MAX = 0x1b + NL80211_FREQUENCY_ATTR_MAX = 0x1c NL80211_FREQUENCY_ATTR_MAX_TX_POWER = 0x6 NL80211_FREQUENCY_ATTR_NO_10MHZ = 0x11 NL80211_FREQUENCY_ATTR_NO_160MHZ = 0xc @@ -5547,7 +5612,7 @@ const ( NL80211_REGDOM_TYPE_CUSTOM_WORLD = 0x2 NL80211_REGDOM_TYPE_INTERSECTION = 
0x3 NL80211_REGDOM_TYPE_WORLD = 0x1 - NL80211_REG_RULE_ATTR_MAX = 0x7 + NL80211_REG_RULE_ATTR_MAX = 0x8 NL80211_REKEY_DATA_AKM = 0x4 NL80211_REKEY_DATA_KCK = 0x2 NL80211_REKEY_DATA_KEK = 0x1 diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/windows/env_windows.go b/cluster-autoscaler/vendor/golang.org/x/sys/windows/env_windows.go index b8ad19250689..d4577a423887 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/windows/env_windows.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/windows/env_windows.go @@ -37,14 +37,17 @@ func (token Token) Environ(inheritExisting bool) (env []string, err error) { return nil, err } defer DestroyEnvironmentBlock(block) - blockp := unsafe.Pointer(block) - for { - entry := UTF16PtrToString((*uint16)(blockp)) - if len(entry) == 0 { - break + size := unsafe.Sizeof(*block) + for *block != 0 { + // find NUL terminator + end := unsafe.Pointer(block) + for *(*uint16)(end) != 0 { + end = unsafe.Add(end, size) } - env = append(env, entry) - blockp = unsafe.Add(blockp, 2*(len(entry)+1)) + + entry := unsafe.Slice(block, (uintptr(end)-uintptr(unsafe.Pointer(block)))/size) + env = append(env, UTF16ToString(entry)) + block = (*uint16)(unsafe.Add(end, size)) } return env, nil } diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/windows/syscall_windows.go b/cluster-autoscaler/vendor/golang.org/x/sys/windows/syscall_windows.go index 47dc57967690..6395a031d45d 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/windows/syscall_windows.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/windows/syscall_windows.go @@ -125,8 +125,7 @@ func UTF16PtrToString(p *uint16) string { for ptr := unsafe.Pointer(p); *(*uint16)(ptr) != 0; n++ { ptr = unsafe.Pointer(uintptr(ptr) + unsafe.Sizeof(*p)) } - - return string(utf16.Decode(unsafe.Slice(p, n))) + return UTF16ToString(unsafe.Slice(p, n)) } func Getpagesize() int { return 4096 } @@ -194,6 +193,7 @@ func NewCallbackCDecl(fn interface{}) uintptr { //sys GetComputerName(buf *uint16, n *uint32) (err error) = GetComputerNameW //sys GetComputerNameEx(nametype uint32, buf *uint16, n *uint32) (err error) = GetComputerNameExW //sys SetEndOfFile(handle Handle) (err error) +//sys SetFileValidData(handle Handle, validDataLength int64) (err error) //sys GetSystemTimeAsFileTime(time *Filetime) //sys GetSystemTimePreciseAsFileTime(time *Filetime) //sys GetTimeZoneInformation(tzi *Timezoneinformation) (rc uint32, err error) [failretval==0xffffffff] diff --git a/cluster-autoscaler/vendor/golang.org/x/sys/windows/zsyscall_windows.go b/cluster-autoscaler/vendor/golang.org/x/sys/windows/zsyscall_windows.go index 146a1f0196f9..e8791c82c30f 100644 --- a/cluster-autoscaler/vendor/golang.org/x/sys/windows/zsyscall_windows.go +++ b/cluster-autoscaler/vendor/golang.org/x/sys/windows/zsyscall_windows.go @@ -342,6 +342,7 @@ var ( procSetDefaultDllDirectories = modkernel32.NewProc("SetDefaultDllDirectories") procSetDllDirectoryW = modkernel32.NewProc("SetDllDirectoryW") procSetEndOfFile = modkernel32.NewProc("SetEndOfFile") + procSetFileValidData = modkernel32.NewProc("SetFileValidData") procSetEnvironmentVariableW = modkernel32.NewProc("SetEnvironmentVariableW") procSetErrorMode = modkernel32.NewProc("SetErrorMode") procSetEvent = modkernel32.NewProc("SetEvent") @@ -2988,6 +2989,14 @@ func SetEndOfFile(handle Handle) (err error) { return } +func SetFileValidData(handle Handle, validDataLength int64) (err error) { + r1, _, e1 := syscall.Syscall(procSetFileValidData.Addr(), 2, uintptr(handle), uintptr(validDataLength), 0) + if r1 == 0 { + err 
= errnoErr(e1) + } + return +} + func SetEnvironmentVariable(name *uint16, value *uint16) (err error) { r1, _, e1 := syscall.Syscall(procSetEnvironmentVariableW.Addr(), 2, uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(value)), 0) if r1 == 0 { diff --git a/cluster-autoscaler/vendor/google.golang.org/genproto/googleapis/api/annotations/field_behavior.pb.go b/cluster-autoscaler/vendor/google.golang.org/genproto/googleapis/api/annotations/field_behavior.pb.go index dbe2e2d0c657..6ce01ac9a69c 100644 --- a/cluster-autoscaler/vendor/google.golang.org/genproto/googleapis/api/annotations/field_behavior.pb.go +++ b/cluster-autoscaler/vendor/google.golang.org/genproto/googleapis/api/annotations/field_behavior.pb.go @@ -15,7 +15,7 @@ // Code generated by protoc-gen-go. DO NOT EDIT. // versions: // protoc-gen-go v1.26.0 -// protoc v3.21.9 +// protoc v3.21.12 // source: google/api/field_behavior.proto package annotations @@ -78,6 +78,19 @@ const ( // a non-empty value will be returned. The user will not be aware of what // non-empty value to expect. FieldBehavior_NON_EMPTY_DEFAULT FieldBehavior = 7 + // Denotes that the field in a resource (a message annotated with + // google.api.resource) is used in the resource name to uniquely identify the + // resource. For AIP-compliant APIs, this should only be applied to the + // `name` field on the resource. + // + // This behavior should not be applied to references to other resources within + // the message. + // + // The identifier field of resources often have different field behavior + // depending on the request it is embedded in (e.g. for Create methods name + // is optional and unused, while for Update methods it is required). Instead + // of method-specific annotations, only `IDENTIFIER` is required. + FieldBehavior_IDENTIFIER FieldBehavior = 8 ) // Enum value maps for FieldBehavior. 
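The Token.Environ rewrite above walks the environment block as uint16 code units rather than raw byte offsets: a Windows environment block is a sequence of NUL-terminated UTF-16 strings, terminated by an empty string (a double NUL). A minimal, portable sketch of the same walk over a plain []uint16 slice, without the unsafe pointer arithmetic (parseEnvBlock and the sample data are illustrative only, not part of x/sys):

package main

import (
	"fmt"
	"unicode/utf16"
)

// parseEnvBlock walks a Windows-style environment block: NUL-terminated
// UTF-16 strings, with an empty string (double NUL) marking the end.
func parseEnvBlock(block []uint16) []string {
	var env []string
	for i := 0; i < len(block) && block[i] != 0; {
		end := i
		for end < len(block) && block[end] != 0 {
			end++ // scan to the NUL terminating this entry
		}
		env = append(env, string(utf16.Decode(block[i:end])))
		i = end + 1 // step over the NUL to the next entry
	}
	return env
}

func main() {
	// "A=1", "B=2", then the terminating empty entry.
	block := utf16.Encode([]rune("A=1\x00B=2\x00\x00"))
	fmt.Println(parseEnvBlock(block)) // [A=1 B=2]
}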
@@ -91,6 +104,7 @@ var ( 5: "IMMUTABLE", 6: "UNORDERED_LIST", 7: "NON_EMPTY_DEFAULT", + 8: "IDENTIFIER", } FieldBehavior_value = map[string]int32{ "FIELD_BEHAVIOR_UNSPECIFIED": 0, @@ -101,6 +115,7 @@ var ( "IMMUTABLE": 5, "UNORDERED_LIST": 6, "NON_EMPTY_DEFAULT": 7, + "IDENTIFIER": 8, } ) @@ -169,7 +184,7 @@ var file_google_api_field_behavior_proto_rawDesc = []byte{ 0x6f, 0x12, 0x0a, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x61, 0x70, 0x69, 0x1a, 0x20, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2f, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2f, 0x64, 0x65, 0x73, 0x63, 0x72, 0x69, 0x70, 0x74, 0x6f, 0x72, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x2a, - 0xa6, 0x01, 0x0a, 0x0d, 0x46, 0x69, 0x65, 0x6c, 0x64, 0x42, 0x65, 0x68, 0x61, 0x76, 0x69, 0x6f, + 0xb6, 0x01, 0x0a, 0x0d, 0x46, 0x69, 0x65, 0x6c, 0x64, 0x42, 0x65, 0x68, 0x61, 0x76, 0x69, 0x6f, 0x72, 0x12, 0x1e, 0x0a, 0x1a, 0x46, 0x49, 0x45, 0x4c, 0x44, 0x5f, 0x42, 0x45, 0x48, 0x41, 0x56, 0x49, 0x4f, 0x52, 0x5f, 0x55, 0x4e, 0x53, 0x50, 0x45, 0x43, 0x49, 0x46, 0x49, 0x45, 0x44, 0x10, 0x00, 0x12, 0x0c, 0x0a, 0x08, 0x4f, 0x50, 0x54, 0x49, 0x4f, 0x4e, 0x41, 0x4c, 0x10, 0x01, 0x12, @@ -179,7 +194,8 @@ var file_google_api_field_behavior_proto_rawDesc = []byte{ 0x0a, 0x09, 0x49, 0x4d, 0x4d, 0x55, 0x54, 0x41, 0x42, 0x4c, 0x45, 0x10, 0x05, 0x12, 0x12, 0x0a, 0x0e, 0x55, 0x4e, 0x4f, 0x52, 0x44, 0x45, 0x52, 0x45, 0x44, 0x5f, 0x4c, 0x49, 0x53, 0x54, 0x10, 0x06, 0x12, 0x15, 0x0a, 0x11, 0x4e, 0x4f, 0x4e, 0x5f, 0x45, 0x4d, 0x50, 0x54, 0x59, 0x5f, 0x44, - 0x45, 0x46, 0x41, 0x55, 0x4c, 0x54, 0x10, 0x07, 0x3a, 0x60, 0x0a, 0x0e, 0x66, 0x69, 0x65, 0x6c, + 0x45, 0x46, 0x41, 0x55, 0x4c, 0x54, 0x10, 0x07, 0x12, 0x0e, 0x0a, 0x0a, 0x49, 0x44, 0x45, 0x4e, + 0x54, 0x49, 0x46, 0x49, 0x45, 0x52, 0x10, 0x08, 0x3a, 0x60, 0x0a, 0x0e, 0x66, 0x69, 0x65, 0x6c, 0x64, 0x5f, 0x62, 0x65, 0x68, 0x61, 0x76, 0x69, 0x6f, 0x72, 0x12, 0x1d, 0x2e, 0x67, 0x6f, 0x6f, 0x67, 0x6c, 0x65, 0x2e, 0x70, 0x72, 0x6f, 0x74, 0x6f, 0x62, 0x75, 0x66, 0x2e, 0x46, 0x69, 0x65, 0x6c, 0x64, 0x4f, 0x70, 0x74, 0x69, 0x6f, 0x6e, 0x73, 0x18, 0x9c, 0x08, 0x20, 0x03, 0x28, 0x0e, diff --git a/cluster-autoscaler/vendor/google.golang.org/grpc/README.md b/cluster-autoscaler/vendor/google.golang.org/grpc/README.md index 1bc92248cb47..ab0fbb79b863 100644 --- a/cluster-autoscaler/vendor/google.golang.org/grpc/README.md +++ b/cluster-autoscaler/vendor/google.golang.org/grpc/README.md @@ -1,8 +1,8 @@ # gRPC-Go -[![Build Status](https://travis-ci.org/grpc/grpc-go.svg)](https://travis-ci.org/grpc/grpc-go) [![GoDoc](https://pkg.go.dev/badge/google.golang.org/grpc)][API] [![GoReportCard](https://goreportcard.com/badge/grpc/grpc-go)](https://goreportcard.com/report/github.com/grpc/grpc-go) +[![codecov](https://codecov.io/gh/grpc/grpc-go/graph/badge.svg)](https://codecov.io/gh/grpc/grpc-go) The [Go][] implementation of [gRPC][]: A high performance, open source, general RPC framework that puts mobile and HTTP/2 first. 
For more information see the diff --git a/cluster-autoscaler/vendor/google.golang.org/grpc/attributes/attributes.go b/cluster-autoscaler/vendor/google.golang.org/grpc/attributes/attributes.go index 712fef4d0fb9..52d530d7ad01 100644 --- a/cluster-autoscaler/vendor/google.golang.org/grpc/attributes/attributes.go +++ b/cluster-autoscaler/vendor/google.golang.org/grpc/attributes/attributes.go @@ -121,9 +121,9 @@ func (a *Attributes) String() string { return sb.String() } -func str(x any) string { +func str(x any) (s string) { if v, ok := x.(fmt.Stringer); ok { - return v.String() + return fmt.Sprint(v) } else if v, ok := x.(string); ok { return v } diff --git a/cluster-autoscaler/vendor/google.golang.org/grpc/balancer/balancer.go b/cluster-autoscaler/vendor/google.golang.org/grpc/balancer/balancer.go index b6377f445ad2..d79560a2e268 100644 --- a/cluster-autoscaler/vendor/google.golang.org/grpc/balancer/balancer.go +++ b/cluster-autoscaler/vendor/google.golang.org/grpc/balancer/balancer.go @@ -30,6 +30,7 @@ import ( "google.golang.org/grpc/channelz" "google.golang.org/grpc/connectivity" "google.golang.org/grpc/credentials" + "google.golang.org/grpc/grpclog" "google.golang.org/grpc/internal" "google.golang.org/grpc/metadata" "google.golang.org/grpc/resolver" @@ -39,6 +40,8 @@ import ( var ( // m is a map from name to balancer builder. m = make(map[string]Builder) + + logger = grpclog.Component("balancer") ) // Register registers the balancer builder to the balancer map. b.Name @@ -51,6 +54,12 @@ var ( // an init() function), and is not thread-safe. If multiple Balancers are // registered with the same name, the one registered last will take effect. func Register(b Builder) { + if strings.ToLower(b.Name()) != b.Name() { + // TODO: Skip the use of strings.ToLower() to index the map after v1.59 + // is released to switch to case sensitive balancer registry. Also, + // remove this warning and update the docstrings for Register and Get. + logger.Warningf("Balancer registered with name %q. grpc-go will be switching to case sensitive balancer registries soon", b.Name()) + } m[strings.ToLower(b.Name())] = b } @@ -70,6 +79,12 @@ func init() { // Note that the compare is done in a case-insensitive fashion. // If no builder is register with the name, nil will be returned. func Get(name string) Builder { + if strings.ToLower(name) != name { + // TODO: Skip the use of strings.ToLower() to index the map after v1.59 + // is released to switch to case sensitive balancer registry. Also, + // remove this warning and update the docstrings for Register and Get. + logger.Warningf("Balancer retrieved for name %q. grpc-go will be switching to case sensitive balancer registries soon", name) + } if b, ok := m[strings.ToLower(name)]; ok { return b } diff --git a/cluster-autoscaler/vendor/google.golang.org/grpc/clientconn.go b/cluster-autoscaler/vendor/google.golang.org/grpc/clientconn.go index ff7fea102288..429c389e4730 100644 --- a/cluster-autoscaler/vendor/google.golang.org/grpc/clientconn.go +++ b/cluster-autoscaler/vendor/google.golang.org/grpc/clientconn.go @@ -337,8 +337,8 @@ func (cc *ClientConn) exitIdleMode() error { return errConnClosing } if cc.idlenessState != ccIdlenessStateIdle { - cc.mu.Unlock() channelz.Infof(logger, cc.channelzID, "ClientConn asked to exit idle mode, current mode is %v", cc.idlenessState) + cc.mu.Unlock() return nil } @@ -404,13 +404,13 @@ func (cc *ClientConn) exitIdleMode() error { // name resolver, load balancer and any subchannels. 
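The balancer.Register and balancer.Get changes above keep indexing builders by strings.ToLower(name) but now warn whenever a registered or requested name is not already lowercase, ahead of the planned switch to a case-sensitive registry. A self-contained sketch of that pattern (Builder here is a stand-in interface, not grpc's):

package main

import (
	"fmt"
	"strings"
)

// Builder is a stand-in for a registerable component with a declared name.
type Builder interface{ Name() string }

var registry = make(map[string]Builder)

// Register indexes b under its lowercased name and warns when the declared
// name is not already lowercase.
func Register(b Builder) {
	if strings.ToLower(b.Name()) != b.Name() {
		fmt.Printf("warning: %q is not lowercase; the registry will become case sensitive\n", b.Name())
	}
	registry[strings.ToLower(b.Name())] = b
}

// Get performs the matching case-insensitive lookup, with the same warning.
func Get(name string) Builder {
	if strings.ToLower(name) != name {
		fmt.Printf("warning: %q is not lowercase; the registry will become case sensitive\n", name)
	}
	return registry[strings.ToLower(name)]
}

type rr struct{}

func (rr) Name() string { return "Round_Robin" } // deliberately mixed case

func main() {
	Register(rr{})                         // warns at registration time
	fmt.Println(Get("round_robin") != nil) // true: lookups stay case-insensitive for now
}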
func (cc *ClientConn) enterIdleMode() error { cc.mu.Lock() + defer cc.mu.Unlock() + if cc.conns == nil { - cc.mu.Unlock() return ErrClientConnClosing } if cc.idlenessState != ccIdlenessStateActive { - channelz.Errorf(logger, cc.channelzID, "ClientConn asked to enter idle mode, current mode is %v", cc.idlenessState) - cc.mu.Unlock() + channelz.Warningf(logger, cc.channelzID, "ClientConn asked to enter idle mode, current mode is %v", cc.idlenessState) return nil } @@ -431,14 +431,14 @@ func (cc *ClientConn) enterIdleMode() error { cc.balancerWrapper.enterIdleMode() cc.csMgr.updateState(connectivity.Idle) cc.idlenessState = ccIdlenessStateIdle - cc.mu.Unlock() + cc.addTraceEvent("entering idle mode") go func() { - cc.addTraceEvent("entering idle mode") for ac := range conns { ac.tearDown(errConnIdling) } }() + return nil } @@ -804,6 +804,12 @@ func init() { internal.SubscribeToConnectivityStateChanges = func(cc *ClientConn, s grpcsync.Subscriber) func() { return cc.csMgr.pubSub.Subscribe(s) } + internal.EnterIdleModeForTesting = func(cc *ClientConn) error { + return cc.enterIdleMode() + } + internal.ExitIdleModeForTesting = func(cc *ClientConn) error { + return cc.exitIdleMode() + } } func (cc *ClientConn) maybeApplyDefaultServiceConfig(addrs []resolver.Address) { diff --git a/cluster-autoscaler/vendor/google.golang.org/grpc/dialoptions.go b/cluster-autoscaler/vendor/google.golang.org/grpc/dialoptions.go index 1fd0d5c127f4..cfc9fd85e8dd 100644 --- a/cluster-autoscaler/vendor/google.golang.org/grpc/dialoptions.go +++ b/cluster-autoscaler/vendor/google.golang.org/grpc/dialoptions.go @@ -644,6 +644,7 @@ func defaultDialOptions() dialOptions { UseProxy: true, }, recvBufferPool: nopBufferPool{}, + idleTimeout: 30 * time.Minute, } } @@ -680,8 +681,8 @@ func WithResolvers(rs ...resolver.Builder) DialOption { // channel will exit idle mode when the Connect() method is called or when an // RPC is initiated. // -// By default this feature is disabled, which can also be explicitly configured -// by passing zero to this function. +// A default timeout of 30 minutes will be used if this dial option is not set +// at dial time and idleness can be disabled by passing a timeout of zero. // // # Experimental // diff --git a/cluster-autoscaler/vendor/google.golang.org/grpc/encoding/encoding.go b/cluster-autoscaler/vendor/google.golang.org/grpc/encoding/encoding.go index 69d5580b6adf..5ebf88d7147f 100644 --- a/cluster-autoscaler/vendor/google.golang.org/grpc/encoding/encoding.go +++ b/cluster-autoscaler/vendor/google.golang.org/grpc/encoding/encoding.go @@ -38,6 +38,10 @@ const Identity = "identity" // Compressor is used for compressing and decompressing when sending or // receiving messages. +// +// If a Compressor implements `DecompressedSize(compressedBytes []byte) int`, +// gRPC will invoke it to determine the size of the buffer allocated for the +// result of decompression. A return value of -1 indicates unknown size. type Compressor interface { // Compress writes the data written to wc to w after compressing it. If an // error occurs while initializing the compressor, that error is returned @@ -51,15 +55,6 @@ type Compressor interface { // coding header. The result must be static; the result cannot change // between calls. Name() string - // If a Compressor implements - // DecompressedSize(compressedBytes []byte) int, gRPC will call it - // to determine the size of the buffer allocated for the result of decompression. - // Return -1 to indicate unknown size. 
- // - // Experimental - // - // Notice: This API is EXPERIMENTAL and may be changed or removed in a - // later release. } var registeredCompressor = make(map[string]Compressor) diff --git a/cluster-autoscaler/vendor/google.golang.org/grpc/health/grpc_health_v1/health_grpc.pb.go b/cluster-autoscaler/vendor/google.golang.org/grpc/health/grpc_health_v1/health_grpc.pb.go index a01a1b4d54bd..4439cda0f3cb 100644 --- a/cluster-autoscaler/vendor/google.golang.org/grpc/health/grpc_health_v1/health_grpc.pb.go +++ b/cluster-autoscaler/vendor/google.golang.org/grpc/health/grpc_health_v1/health_grpc.pb.go @@ -44,8 +44,15 @@ const ( // // For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream. type HealthClient interface { - // If the requested service is unknown, the call will fail with status - // NOT_FOUND. + // Check gets the health of the specified service. If the requested service + // is unknown, the call will fail with status NOT_FOUND. If the caller does + // not specify a service name, the server should respond with its overall + // health status. + // + // Clients should set a deadline when calling Check, and can declare the + // server unhealthy if they do not receive a timely response. + // + // Check implementations should be idempotent and side effect free. Check(ctx context.Context, in *HealthCheckRequest, opts ...grpc.CallOption) (*HealthCheckResponse, error) // Performs a watch for the serving status of the requested service. // The server will immediately send back a message indicating the current @@ -118,8 +125,15 @@ func (x *healthWatchClient) Recv() (*HealthCheckResponse, error) { // All implementations should embed UnimplementedHealthServer // for forward compatibility type HealthServer interface { - // If the requested service is unknown, the call will fail with status - // NOT_FOUND. + // Check gets the health of the specified service. If the requested service + // is unknown, the call will fail with status NOT_FOUND. If the caller does + // not specify a service name, the server should respond with its overall + // health status. + // + // Clients should set a deadline when calling Check, and can declare the + // server unhealthy if they do not receive a timely response. + // + // Check implementations should be idempotent and side effect free. Check(context.Context, *HealthCheckRequest) (*HealthCheckResponse, error) // Performs a watch for the serving status of the requested service. // The server will immediately send back a message indicating the current diff --git a/cluster-autoscaler/vendor/google.golang.org/grpc/internal/backoff/backoff.go b/cluster-autoscaler/vendor/google.golang.org/grpc/internal/backoff/backoff.go index 5fc0ee3da53b..fed1c011a325 100644 --- a/cluster-autoscaler/vendor/google.golang.org/grpc/internal/backoff/backoff.go +++ b/cluster-autoscaler/vendor/google.golang.org/grpc/internal/backoff/backoff.go @@ -23,6 +23,8 @@ package backoff import ( + "context" + "errors" "time" grpcbackoff "google.golang.org/grpc/backoff" @@ -71,3 +73,37 @@ func (bc Exponential) Backoff(retries int) time.Duration { } return time.Duration(backoff) } + +// ErrResetBackoff is the error to be returned by the function executed by RunF, +// to instruct the latter to reset its backoff state. 
+var ErrResetBackoff = errors.New("reset backoff state") + +// RunF provides a convenient way to run a function f repeatedly until the +// context expires or f returns a non-nil error that is not ErrResetBackoff. +// When f returns ErrResetBackoff, RunF continues to run f, but resets its +// backoff state before doing so. backoff accepts an integer representing the +// number of retries, and returns the amount of time to backoff. +func RunF(ctx context.Context, f func() error, backoff func(int) time.Duration) { + attempt := 0 + timer := time.NewTimer(0) + for ctx.Err() == nil { + select { + case <-timer.C: + case <-ctx.Done(): + timer.Stop() + return + } + + err := f() + if errors.Is(err, ErrResetBackoff) { + timer.Reset(0) + attempt = 0 + continue + } + if err != nil { + return + } + timer.Reset(backoff(attempt)) + attempt++ + } +} diff --git a/cluster-autoscaler/vendor/google.golang.org/grpc/internal/internal.go b/cluster-autoscaler/vendor/google.golang.org/grpc/internal/internal.go index c8a8c76d628c..0d94c63e06e2 100644 --- a/cluster-autoscaler/vendor/google.golang.org/grpc/internal/internal.go +++ b/cluster-autoscaler/vendor/google.golang.org/grpc/internal/internal.go @@ -175,6 +175,12 @@ var ( // GRPCResolverSchemeExtraMetadata determines when gRPC will add extra // metadata to RPCs. GRPCResolverSchemeExtraMetadata string = "xds" + + // EnterIdleModeForTesting gets the ClientConn to enter IDLE mode. + EnterIdleModeForTesting any // func(*grpc.ClientConn) error + + // ExitIdleModeForTesting gets the ClientConn to exit IDLE mode. + ExitIdleModeForTesting any // func(*grpc.ClientConn) error ) // HealthChecker defines the signature of the client-side LB channel health checking function. diff --git a/cluster-autoscaler/vendor/google.golang.org/grpc/internal/status/status.go b/cluster-autoscaler/vendor/google.golang.org/grpc/internal/status/status.go index 4cf85cad9f81..03ef2fedd5cb 100644 --- a/cluster-autoscaler/vendor/google.golang.org/grpc/internal/status/status.go +++ b/cluster-autoscaler/vendor/google.golang.org/grpc/internal/status/status.go @@ -43,6 +43,34 @@ type Status struct { s *spb.Status } +// NewWithProto returns a new status including details from statusProto. This +// is meant to be used by the gRPC library only. +func NewWithProto(code codes.Code, message string, statusProto []string) *Status { + if len(statusProto) != 1 { + // No grpc-status-details bin header, or multiple; just ignore. + return &Status{s: &spb.Status{Code: int32(code), Message: message}} + } + st := &spb.Status{} + if err := proto.Unmarshal([]byte(statusProto[0]), st); err != nil { + // Probably not a google.rpc.Status proto; do not provide details. + return &Status{s: &spb.Status{Code: int32(code), Message: message}} + } + if st.Code == int32(code) { + // The codes match between the grpc-status header and the + // grpc-status-details-bin header; use the full details proto. + return &Status{s: st} + } + return &Status{ + s: &spb.Status{ + Code: int32(codes.Internal), + Message: fmt.Sprintf( + "grpc-status-details-bin mismatch: grpc-status=%v, grpc-message=%q, grpc-status-details-bin=%+v", + code, message, st, + ), + }, + } +} + // New returns a Status representing c and msg. 
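The new backoff.RunF above gives retry loops a uniform shape: run f on a timer, reset the backoff state when f returns ErrResetBackoff, and stop when f returns any other non-nil error or the context ends. A runnable sketch of how a caller drives it; runF is a local copy of the loop (the real one lives in gRPC's internal package), and the retry policy and workload are made up:

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

var errResetBackoff = errors.New("reset backoff state")

// runF mirrors backoff.RunF: run f until ctx ends or f fails; the sentinel
// error resets the attempt counter instead of stopping the loop.
func runF(ctx context.Context, f func() error, backoff func(int) time.Duration) {
	attempt := 0
	timer := time.NewTimer(0)
	for ctx.Err() == nil {
		select {
		case <-timer.C:
		case <-ctx.Done():
			timer.Stop()
			return
		}
		err := f()
		if errors.Is(err, errResetBackoff) {
			timer.Reset(0) // retry immediately with fresh backoff state
			attempt = 0
			continue
		}
		if err != nil {
			return // any other error stops the loop
		}
		timer.Reset(backoff(attempt))
		attempt++
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 50*time.Millisecond)
	defer cancel()
	calls := 0
	runF(ctx, func() error {
		calls++
		if calls == 2 {
			return errResetBackoff // second call asks for a reset, not a stop
		}
		return nil // nil keeps the loop running on the backoff schedule
	}, func(retries int) time.Duration {
		return time.Duration(retries+1) * 5 * time.Millisecond // toy linear policy
	})
	fmt.Println("calls:", calls)
}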
func New(c codes.Code, msg string) *Status { return &Status{s: &spb.Status{Code: int32(c), Message: msg}} diff --git a/cluster-autoscaler/vendor/google.golang.org/grpc/internal/transport/handler_server.go b/cluster-autoscaler/vendor/google.golang.org/grpc/internal/transport/handler_server.go index 98f80e3fa00a..17f7a21b5a9f 100644 --- a/cluster-autoscaler/vendor/google.golang.org/grpc/internal/transport/handler_server.go +++ b/cluster-autoscaler/vendor/google.golang.org/grpc/internal/transport/handler_server.go @@ -220,18 +220,20 @@ func (ht *serverHandlerTransport) WriteStatus(s *Stream, st *status.Status) erro h.Set("Grpc-Message", encodeGrpcMessage(m)) } + s.hdrMu.Lock() if p := st.Proto(); p != nil && len(p.Details) > 0 { + delete(s.trailer, grpcStatusDetailsBinHeader) stBytes, err := proto.Marshal(p) if err != nil { // TODO: return error instead, when callers are able to handle it. panic(err) } - h.Set("Grpc-Status-Details-Bin", encodeBinHeader(stBytes)) + h.Set(grpcStatusDetailsBinHeader, encodeBinHeader(stBytes)) } - if md := s.Trailer(); len(md) > 0 { - for k, vv := range md { + if len(s.trailer) > 0 { + for k, vv := range s.trailer { // Clients don't tolerate reading restricted headers after some non restricted ones were sent. if isReservedHeader(k) { continue @@ -243,6 +245,7 @@ func (ht *serverHandlerTransport) WriteStatus(s *Stream, st *status.Status) erro } } } + s.hdrMu.Unlock() }) if err == nil { // transport has not been closed @@ -287,7 +290,7 @@ func (ht *serverHandlerTransport) writeCommonHeaders(s *Stream) { } // writeCustomHeaders sets custom headers set on the stream via SetHeader -// on the first write call (Write, WriteHeader, or WriteStatus). +// on the first write call (Write, WriteHeader, or WriteStatus) func (ht *serverHandlerTransport) writeCustomHeaders(s *Stream) { h := ht.rw.Header() @@ -344,7 +347,7 @@ func (ht *serverHandlerTransport) WriteHeader(s *Stream, md metadata.MD) error { return err } -func (ht *serverHandlerTransport) HandleStreams(startStream func(*Stream), traceCtx func(context.Context, string) context.Context) { +func (ht *serverHandlerTransport) HandleStreams(startStream func(*Stream)) { // With this transport type there will be exactly 1 stream: this HTTP request. 
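NewWithProto above reconciles two sources of truth for an RPC's status: the plain grpc-status and grpc-message headers, and the optional grpc-status-details-bin header carrying a serialized google.rpc.Status. The branch logic is: ignore the details when the header is absent, repeated, or undecodable; trust the full details proto when its code matches the header; and report codes.Internal on a mismatch. A toy model of that three-way branch, using a struct and JSON as a stand-in for the real proto (all names and values here are illustrative):

package main

import (
	"encoding/json"
	"fmt"
)

type rpcStatus struct {
	Code    int32    `json:"code"`
	Message string   `json:"message"`
	Details []string `json:"details,omitempty"`
}

const codeInternal = 13 // stand-in for codes.Internal

func newWithDetails(code int32, message string, detailsHeaders []string) rpcStatus {
	if len(detailsHeaders) != 1 {
		// No details header, or an ambiguous repeated one: use plain headers.
		return rpcStatus{Code: code, Message: message}
	}
	var st rpcStatus
	if err := json.Unmarshal([]byte(detailsHeaders[0]), &st); err != nil {
		// Undecodable details: fall back to the plain headers.
		return rpcStatus{Code: code, Message: message}
	}
	if st.Code == code {
		return st // codes agree: the richer details win
	}
	// Codes disagree: surface the inconsistency as an internal error.
	return rpcStatus{
		Code:    codeInternal,
		Message: fmt.Sprintf("details mismatch: header code=%d, details code=%d", code, st.Code),
	}
}

func main() {
	fmt.Println(newWithDetails(5, "not found", nil))
	fmt.Println(newWithDetails(5, "not found", []string{`{"code":5,"message":"not found","details":["x"]}`}))
	fmt.Println(newWithDetails(5, "not found", []string{`{"code":3,"message":"oops"}`}))
}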
ctx := ht.req.Context() diff --git a/cluster-autoscaler/vendor/google.golang.org/grpc/internal/transport/http2_client.go b/cluster-autoscaler/vendor/google.golang.org/grpc/internal/transport/http2_client.go index badab8acf3b1..d6f5c49358b5 100644 --- a/cluster-autoscaler/vendor/google.golang.org/grpc/internal/transport/http2_client.go +++ b/cluster-autoscaler/vendor/google.golang.org/grpc/internal/transport/http2_client.go @@ -1399,7 +1399,6 @@ func (t *http2Client) operateHeaders(frame *http2.MetaHeadersFrame) { mdata = make(map[string][]string) contentTypeErr = "malformed header: missing HTTP content-type" grpcMessage string - statusGen *status.Status recvCompress string httpStatusCode *int httpStatusErr string @@ -1434,12 +1433,6 @@ func (t *http2Client) operateHeaders(frame *http2.MetaHeadersFrame) { rawStatusCode = codes.Code(uint32(code)) case "grpc-message": grpcMessage = decodeGrpcMessage(hf.Value) - case "grpc-status-details-bin": - var err error - statusGen, err = decodeGRPCStatusDetails(hf.Value) - if err != nil { - headerError = fmt.Sprintf("transport: malformed grpc-status-details-bin: %v", err) - } case ":status": if hf.Value == "200" { httpStatusErr = "" @@ -1548,14 +1541,12 @@ func (t *http2Client) operateHeaders(frame *http2.MetaHeadersFrame) { return } - if statusGen == nil { - statusGen = status.New(rawStatusCode, grpcMessage) - } + status := istatus.NewWithProto(rawStatusCode, grpcMessage, mdata[grpcStatusDetailsBinHeader]) // If client received END_STREAM from server while stream was still active, // send RST_STREAM. rstStream := s.getState() == streamActive - t.closeStream(s, io.EOF, rstStream, http2.ErrCodeNo, statusGen, mdata, true) + t.closeStream(s, io.EOF, rstStream, http2.ErrCodeNo, status, mdata, true) } // readServerPreface reads and handles the initial settings frame from the diff --git a/cluster-autoscaler/vendor/google.golang.org/grpc/internal/transport/http2_server.go b/cluster-autoscaler/vendor/google.golang.org/grpc/internal/transport/http2_server.go index c06db679d89c..6fa1eb41992a 100644 --- a/cluster-autoscaler/vendor/google.golang.org/grpc/internal/transport/http2_server.go +++ b/cluster-autoscaler/vendor/google.golang.org/grpc/internal/transport/http2_server.go @@ -342,7 +342,7 @@ func NewServerTransport(conn net.Conn, config *ServerConfig) (_ ServerTransport, // operateHeaders takes action on the decoded headers. Returns an error if fatal // error encountered and transport needs to close, otherwise returns nil. 
-func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(*Stream), traceCtx func(context.Context, string) context.Context) error { +func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func(*Stream)) error { // Acquire max stream ID lock for entire duration t.maxStreamMu.Lock() defer t.maxStreamMu.Unlock() @@ -561,7 +561,7 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func( } if t.inTapHandle != nil { var err error - if s.ctx, err = t.inTapHandle(s.ctx, &tap.Info{FullMethodName: s.method}); err != nil { + if s.ctx, err = t.inTapHandle(s.ctx, &tap.Info{FullMethodName: s.method, Header: mdata}); err != nil { t.mu.Unlock() if t.logger.V(logLevel) { t.logger.Infof("Aborting the stream early due to InTapHandle failure: %v", err) @@ -592,7 +592,6 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func( s.requestRead = func(n int) { t.adjustWindow(s, uint32(n)) } - s.ctx = traceCtx(s.ctx, s.method) for _, sh := range t.stats { s.ctx = sh.TagRPC(s.ctx, &stats.RPCTagInfo{FullMethodName: s.method}) inHeader := &stats.InHeader{ @@ -630,7 +629,7 @@ func (t *http2Server) operateHeaders(frame *http2.MetaHeadersFrame, handle func( // HandleStreams receives incoming streams using the given handler. This is // typically run in a separate goroutine. // traceCtx attaches trace to ctx and returns the new context. -func (t *http2Server) HandleStreams(handle func(*Stream), traceCtx func(context.Context, string) context.Context) { +func (t *http2Server) HandleStreams(handle func(*Stream)) { defer close(t.readerDone) for { t.controlBuf.throttle() @@ -665,7 +664,7 @@ func (t *http2Server) HandleStreams(handle func(*Stream), traceCtx func(context. } switch frame := frame.(type) { case *http2.MetaHeadersFrame: - if err := t.operateHeaders(frame, handle, traceCtx); err != nil { + if err := t.operateHeaders(frame, handle); err != nil { t.Close(err) break } @@ -1053,12 +1052,15 @@ func (t *http2Server) WriteStatus(s *Stream, st *status.Status) error { headerFields = append(headerFields, hpack.HeaderField{Name: "grpc-message", Value: encodeGrpcMessage(st.Message())}) if p := st.Proto(); p != nil && len(p.Details) > 0 { + // Do not use the user's grpc-status-details-bin (if present) if we are + // even attempting to set our own. + delete(s.trailer, grpcStatusDetailsBinHeader) stBytes, err := proto.Marshal(p) if err != nil { // TODO: return error instead, when callers are able to handle it. 
t.logger.Errorf("Failed to marshal rpc status: %s, error: %v", pretty.ToJSON(p), err) } else { - headerFields = append(headerFields, hpack.HeaderField{Name: "grpc-status-details-bin", Value: encodeBinHeader(stBytes)}) + headerFields = append(headerFields, hpack.HeaderField{Name: grpcStatusDetailsBinHeader, Value: encodeBinHeader(stBytes)}) } } diff --git a/cluster-autoscaler/vendor/google.golang.org/grpc/internal/transport/http_util.go b/cluster-autoscaler/vendor/google.golang.org/grpc/internal/transport/http_util.go index 1958140082b3..dc29d590e91f 100644 --- a/cluster-autoscaler/vendor/google.golang.org/grpc/internal/transport/http_util.go +++ b/cluster-autoscaler/vendor/google.golang.org/grpc/internal/transport/http_util.go @@ -34,12 +34,9 @@ import ( "time" "unicode/utf8" - "github.com/golang/protobuf/proto" "golang.org/x/net/http2" "golang.org/x/net/http2/hpack" - spb "google.golang.org/genproto/googleapis/rpc/status" "google.golang.org/grpc/codes" - "google.golang.org/grpc/status" ) const ( @@ -88,6 +85,8 @@ var ( } ) +var grpcStatusDetailsBinHeader = "grpc-status-details-bin" + // isReservedHeader checks whether hdr belongs to HTTP2 headers // reserved by gRPC protocol. Any other headers are classified as the // user-specified metadata. @@ -103,7 +102,6 @@ func isReservedHeader(hdr string) bool { "grpc-message", "grpc-status", "grpc-timeout", - "grpc-status-details-bin", // Intentionally exclude grpc-previous-rpc-attempts and // grpc-retry-pushback-ms, which are "reserved", but their API // intentionally works via metadata. @@ -154,18 +152,6 @@ func decodeMetadataHeader(k, v string) (string, error) { return v, nil } -func decodeGRPCStatusDetails(rawDetails string) (*status.Status, error) { - v, err := decodeBinHeader(rawDetails) - if err != nil { - return nil, err - } - st := &spb.Status{} - if err = proto.Unmarshal(v, st); err != nil { - return nil, err - } - return status.FromProto(st), nil -} - type timeoutUnit uint8 const ( diff --git a/cluster-autoscaler/vendor/google.golang.org/grpc/internal/transport/transport.go b/cluster-autoscaler/vendor/google.golang.org/grpc/internal/transport/transport.go index 74a811fc0590..aac056e723bb 100644 --- a/cluster-autoscaler/vendor/google.golang.org/grpc/internal/transport/transport.go +++ b/cluster-autoscaler/vendor/google.golang.org/grpc/internal/transport/transport.go @@ -698,7 +698,7 @@ type ClientTransport interface { // Write methods for a given Stream will be called serially. type ServerTransport interface { // HandleStreams receives incoming streams using the given handler. - HandleStreams(func(*Stream), func(context.Context, string) context.Context) + HandleStreams(func(*Stream)) // WriteHeader sends the header metadata for the given stream. // WriteHeader may not be called on all streams. diff --git a/cluster-autoscaler/vendor/google.golang.org/grpc/resolver/manual/manual.go b/cluster-autoscaler/vendor/google.golang.org/grpc/resolver/manual/manual.go index e6b0f14cd941..0a4262342f35 100644 --- a/cluster-autoscaler/vendor/google.golang.org/grpc/resolver/manual/manual.go +++ b/cluster-autoscaler/vendor/google.golang.org/grpc/resolver/manual/manual.go @@ -26,7 +26,9 @@ import ( "google.golang.org/grpc/resolver" ) -// NewBuilderWithScheme creates a new test resolver builder with the given scheme. +// NewBuilderWithScheme creates a new manual resolver builder with the given +// scheme. Every instance of the manual resolver may only ever be used with a +// single grpc.ClientConn. Otherwise, bad things will happen. 
func NewBuilderWithScheme(scheme string) *Resolver { return &Resolver{ BuildCallback: func(resolver.Target, resolver.ClientConn, resolver.BuildOptions) {}, @@ -58,30 +60,34 @@ type Resolver struct { scheme string // Fields actually belong to the resolver. - mu sync.Mutex // Guards access to CC. - CC resolver.ClientConn - bootstrapState *resolver.State + // Guards access to below fields. + mu sync.Mutex + CC resolver.ClientConn + // Storing the most recent state update makes this resolver resilient to + // restarts, which is possible with channel idleness. + lastSeenState *resolver.State } // InitialState adds initial state to the resolver so that UpdateState doesn't // need to be explicitly called after Dial. func (r *Resolver) InitialState(s resolver.State) { - r.bootstrapState = &s + r.lastSeenState = &s } // Build returns itself for Resolver, because it's both a builder and a resolver. func (r *Resolver) Build(target resolver.Target, cc resolver.ClientConn, opts resolver.BuildOptions) (resolver.Resolver, error) { + r.BuildCallback(target, cc, opts) r.mu.Lock() r.CC = cc - r.mu.Unlock() - r.BuildCallback(target, cc, opts) - if r.bootstrapState != nil { - r.UpdateState(*r.bootstrapState) + if r.lastSeenState != nil { + err := r.CC.UpdateState(*r.lastSeenState) + go r.UpdateStateCallback(err) } + r.mu.Unlock() return r, nil } -// Scheme returns the test scheme. +// Scheme returns the manual resolver's scheme. func (r *Resolver) Scheme() string { return r.scheme } @@ -100,6 +106,7 @@ func (r *Resolver) Close() { func (r *Resolver) UpdateState(s resolver.State) { r.mu.Lock() err := r.CC.UpdateState(s) + r.lastSeenState = &s r.mu.Unlock() r.UpdateStateCallback(err) } diff --git a/cluster-autoscaler/vendor/google.golang.org/grpc/server.go b/cluster-autoscaler/vendor/google.golang.org/grpc/server.go index eeae92fbe020..8f60d421437d 100644 --- a/cluster-autoscaler/vendor/google.golang.org/grpc/server.go +++ b/cluster-autoscaler/vendor/google.golang.org/grpc/server.go @@ -983,7 +983,7 @@ func (s *Server) serveStreams(st transport.ServerTransport) { f := func() { defer streamQuota.release() defer wg.Done() - s.handleStream(st, stream, s.traceInfo(st, stream)) + s.handleStream(st, stream) } if s.opts.numServerWorkers > 0 { @@ -995,12 +995,6 @@ func (s *Server) serveStreams(st transport.ServerTransport) { } } go f() - }, func(ctx context.Context, method string) context.Context { - if !EnableTracing { - return ctx - } - tr := trace.New("grpc.Recv."+methodFamily(method), method) - return trace.NewContext(ctx, tr) }) wg.Wait() } @@ -1049,30 +1043,6 @@ func (s *Server) ServeHTTP(w http.ResponseWriter, r *http.Request) { s.serveStreams(st) } -// traceInfo returns a traceInfo and associates it with stream, if tracing is enabled. -// If tracing is not enabled, it returns nil. 
-func (s *Server) traceInfo(st transport.ServerTransport, stream *transport.Stream) (trInfo *traceInfo) { - if !EnableTracing { - return nil - } - tr, ok := trace.FromContext(stream.Context()) - if !ok { - return nil - } - - trInfo = &traceInfo{ - tr: tr, - firstLine: firstLine{ - client: false, - remoteAddr: st.RemoteAddr(), - }, - } - if dl, ok := stream.Context().Deadline(); ok { - trInfo.firstLine.deadline = time.Until(dl) - } - return trInfo -} - func (s *Server) addConn(addr string, st transport.ServerTransport) bool { s.mu.Lock() defer s.mu.Unlock() @@ -1133,7 +1103,7 @@ func (s *Server) incrCallsFailed() { atomic.AddInt64(&s.czData.callsFailed, 1) } -func (s *Server) sendResponse(t transport.ServerTransport, stream *transport.Stream, msg any, cp Compressor, opts *transport.Options, comp encoding.Compressor) error { +func (s *Server) sendResponse(ctx context.Context, t transport.ServerTransport, stream *transport.Stream, msg any, cp Compressor, opts *transport.Options, comp encoding.Compressor) error { data, err := encode(s.getCodec(stream.ContentSubtype()), msg) if err != nil { channelz.Error(logger, s.channelzID, "grpc: server failed to encode response: ", err) @@ -1152,7 +1122,7 @@ func (s *Server) sendResponse(t transport.ServerTransport, stream *transport.Str err = t.Write(stream, hdr, payload, opts) if err == nil { for _, sh := range s.opts.statsHandlers { - sh.HandleRPC(stream.Context(), outPayload(false, msg, data, payload, time.Now())) + sh.HandleRPC(ctx, outPayload(false, msg, data, payload, time.Now())) } } return err @@ -1194,7 +1164,7 @@ func getChainUnaryHandler(interceptors []UnaryServerInterceptor, curr int, info } } -func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport.Stream, info *serviceInfo, md *MethodDesc, trInfo *traceInfo) (err error) { +func (s *Server) processUnaryRPC(ctx context.Context, t transport.ServerTransport, stream *transport.Stream, info *serviceInfo, md *MethodDesc, trInfo *traceInfo) (err error) { shs := s.opts.statsHandlers if len(shs) != 0 || trInfo != nil || channelz.IsOn() { if channelz.IsOn() { @@ -1208,7 +1178,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport. IsClientStream: false, IsServerStream: false, } - sh.HandleRPC(stream.Context(), statsBegin) + sh.HandleRPC(ctx, statsBegin) } if trInfo != nil { trInfo.tr.LazyLog(&trInfo.firstLine, false) @@ -1240,7 +1210,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport. if err != nil && err != io.EOF { end.Error = toRPCErr(err) } - sh.HandleRPC(stream.Context(), end) + sh.HandleRPC(ctx, end) } if channelz.IsOn() { @@ -1262,7 +1232,6 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport. } } if len(binlogs) != 0 { - ctx := stream.Context() md, _ := metadata.FromIncomingContext(ctx) logEntry := &binarylog.ClientHeader{ Header: md, @@ -1348,7 +1317,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport. return status.Errorf(codes.Internal, "grpc: error unmarshalling request: %v", err) } for _, sh := range shs { - sh.HandleRPC(stream.Context(), &stats.InPayload{ + sh.HandleRPC(ctx, &stats.InPayload{ RecvTime: time.Now(), Payload: v, Length: len(d), @@ -1362,7 +1331,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport. 
Message: d, } for _, binlog := range binlogs { - binlog.Log(stream.Context(), cm) + binlog.Log(ctx, cm) } } if trInfo != nil { @@ -1370,7 +1339,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport. } return nil } - ctx := NewContextWithServerTransportStream(stream.Context(), stream) + ctx = NewContextWithServerTransportStream(ctx, stream) reply, appErr := md.Handler(info.serviceImpl, ctx, df, s.opts.unaryInt) if appErr != nil { appStatus, ok := status.FromError(appErr) @@ -1395,7 +1364,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport. Header: h, } for _, binlog := range binlogs { - binlog.Log(stream.Context(), sh) + binlog.Log(ctx, sh) } } st := &binarylog.ServerTrailer{ @@ -1403,7 +1372,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport. Err: appErr, } for _, binlog := range binlogs { - binlog.Log(stream.Context(), st) + binlog.Log(ctx, st) } } return appErr @@ -1418,7 +1387,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport. if stream.SendCompress() != sendCompressorName { comp = encoding.GetCompressor(stream.SendCompress()) } - if err := s.sendResponse(t, stream, reply, cp, opts, comp); err != nil { + if err := s.sendResponse(ctx, t, stream, reply, cp, opts, comp); err != nil { if err == io.EOF { // The entire stream is done (for unary RPC only). return err @@ -1445,8 +1414,8 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport. Err: appErr, } for _, binlog := range binlogs { - binlog.Log(stream.Context(), sh) - binlog.Log(stream.Context(), st) + binlog.Log(ctx, sh) + binlog.Log(ctx, st) } } return err @@ -1460,8 +1429,8 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport. Message: reply, } for _, binlog := range binlogs { - binlog.Log(stream.Context(), sh) - binlog.Log(stream.Context(), sm) + binlog.Log(ctx, sh) + binlog.Log(ctx, sm) } } if channelz.IsOn() { @@ -1479,7 +1448,7 @@ func (s *Server) processUnaryRPC(t transport.ServerTransport, stream *transport. 
Err: appErr, } for _, binlog := range binlogs { - binlog.Log(stream.Context(), st) + binlog.Log(ctx, st) } } return t.WriteStatus(stream, statusOK) @@ -1521,7 +1490,7 @@ func getChainStreamHandler(interceptors []StreamServerInterceptor, curr int, inf } } -func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transport.Stream, info *serviceInfo, sd *StreamDesc, trInfo *traceInfo) (err error) { +func (s *Server) processStreamingRPC(ctx context.Context, t transport.ServerTransport, stream *transport.Stream, info *serviceInfo, sd *StreamDesc, trInfo *traceInfo) (err error) { if channelz.IsOn() { s.incrCallsStarted() } @@ -1535,10 +1504,10 @@ func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transp IsServerStream: sd.ServerStreams, } for _, sh := range shs { - sh.HandleRPC(stream.Context(), statsBegin) + sh.HandleRPC(ctx, statsBegin) } } - ctx := NewContextWithServerTransportStream(stream.Context(), stream) + ctx = NewContextWithServerTransportStream(ctx, stream) ss := &serverStream{ ctx: ctx, t: t, @@ -1574,7 +1543,7 @@ func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transp end.Error = toRPCErr(err) } for _, sh := range shs { - sh.HandleRPC(stream.Context(), end) + sh.HandleRPC(ctx, end) } } @@ -1616,7 +1585,7 @@ func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transp logEntry.PeerAddr = peer.Addr } for _, binlog := range ss.binlogs { - binlog.Log(stream.Context(), logEntry) + binlog.Log(ctx, logEntry) } } @@ -1694,7 +1663,7 @@ func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transp Err: appErr, } for _, binlog := range ss.binlogs { - binlog.Log(stream.Context(), st) + binlog.Log(ctx, st) } } t.WriteStatus(ss.s, appStatus) @@ -1712,33 +1681,50 @@ func (s *Server) processStreamingRPC(t transport.ServerTransport, stream *transp Err: appErr, } for _, binlog := range ss.binlogs { - binlog.Log(stream.Context(), st) + binlog.Log(ctx, st) } } return t.WriteStatus(ss.s, statusOK) } -func (s *Server) handleStream(t transport.ServerTransport, stream *transport.Stream, trInfo *traceInfo) { +func (s *Server) handleStream(t transport.ServerTransport, stream *transport.Stream) { + ctx := stream.Context() + var ti *traceInfo + if EnableTracing { + tr := trace.New("grpc.Recv."+methodFamily(stream.Method()), stream.Method()) + ctx = trace.NewContext(ctx, tr) + ti = &traceInfo{ + tr: tr, + firstLine: firstLine{ + client: false, + remoteAddr: t.RemoteAddr(), + }, + } + if dl, ok := ctx.Deadline(); ok { + ti.firstLine.deadline = time.Until(dl) + } + } + sm := stream.Method() if sm != "" && sm[0] == '/' { sm = sm[1:] } pos := strings.LastIndex(sm, "/") if pos == -1 { - if trInfo != nil { - trInfo.tr.LazyLog(&fmtStringer{"Malformed method name %q", []any{sm}}, true) - trInfo.tr.SetError() + if ti != nil { + ti.tr.LazyLog(&fmtStringer{"Malformed method name %q", []any{sm}}, true) + ti.tr.SetError() } errDesc := fmt.Sprintf("malformed method name: %q", stream.Method()) if err := t.WriteStatus(stream, status.New(codes.Unimplemented, errDesc)); err != nil { - if trInfo != nil { - trInfo.tr.LazyLog(&fmtStringer{"%v", []any{err}}, true) - trInfo.tr.SetError() + if ti != nil { + ti.tr.LazyLog(&fmtStringer{"%v", []any{err}}, true) + ti.tr.SetError() } channelz.Warningf(logger, s.channelzID, "grpc: Server.handleStream failed to write status: %v", err) } - if trInfo != nil { - trInfo.tr.Finish() + if ti != nil { + ti.tr.Finish() } return } @@ -1748,17 +1734,17 @@ func (s *Server) handleStream(t 
transport.ServerTransport, stream *transport.Str srv, knownService := s.services[service] if knownService { if md, ok := srv.methods[method]; ok { - s.processUnaryRPC(t, stream, srv, md, trInfo) + s.processUnaryRPC(ctx, t, stream, srv, md, ti) return } if sd, ok := srv.streams[method]; ok { - s.processStreamingRPC(t, stream, srv, sd, trInfo) + s.processStreamingRPC(ctx, t, stream, srv, sd, ti) return } } // Unknown service, or known server unknown method. if unknownDesc := s.opts.unknownStreamDesc; unknownDesc != nil { - s.processStreamingRPC(t, stream, nil, unknownDesc, trInfo) + s.processStreamingRPC(ctx, t, stream, nil, unknownDesc, ti) return } var errDesc string @@ -1767,19 +1753,19 @@ func (s *Server) handleStream(t transport.ServerTransport, stream *transport.Str } else { errDesc = fmt.Sprintf("unknown method %v for service %v", method, service) } - if trInfo != nil { - trInfo.tr.LazyPrintf("%s", errDesc) - trInfo.tr.SetError() + if ti != nil { + ti.tr.LazyPrintf("%s", errDesc) + ti.tr.SetError() } if err := t.WriteStatus(stream, status.New(codes.Unimplemented, errDesc)); err != nil { - if trInfo != nil { - trInfo.tr.LazyLog(&fmtStringer{"%v", []any{err}}, true) - trInfo.tr.SetError() + if ti != nil { + ti.tr.LazyLog(&fmtStringer{"%v", []any{err}}, true) + ti.tr.SetError() } channelz.Warningf(logger, s.channelzID, "grpc: Server.handleStream failed to write status: %v", err) } - if trInfo != nil { - trInfo.tr.Finish() + if ti != nil { + ti.tr.Finish() } } diff --git a/cluster-autoscaler/vendor/google.golang.org/grpc/tap/tap.go b/cluster-autoscaler/vendor/google.golang.org/grpc/tap/tap.go index bfa5dfa40e4d..07f012576880 100644 --- a/cluster-autoscaler/vendor/google.golang.org/grpc/tap/tap.go +++ b/cluster-autoscaler/vendor/google.golang.org/grpc/tap/tap.go @@ -27,6 +27,8 @@ package tap import ( "context" + + "google.golang.org/grpc/metadata" ) // Info defines the relevant information needed by the handles. @@ -34,6 +36,10 @@ type Info struct { // FullMethodName is the string of grpc method (in the format of // /package.service/method). FullMethodName string + + // Header contains the header metadata received. + Header metadata.MD + // TODO: More to be added. } diff --git a/cluster-autoscaler/vendor/google.golang.org/grpc/version.go b/cluster-autoscaler/vendor/google.golang.org/grpc/version.go index 724ad2102130..6d2cadd79a9b 100644 --- a/cluster-autoscaler/vendor/google.golang.org/grpc/version.go +++ b/cluster-autoscaler/vendor/google.golang.org/grpc/version.go @@ -19,4 +19,4 @@ package grpc // Version is the current grpc version. -const Version = "1.58.3" +const Version = "1.59.0" diff --git a/cluster-autoscaler/vendor/google.golang.org/grpc/vet.sh b/cluster-autoscaler/vendor/google.golang.org/grpc/vet.sh index bbc9e2e3c8e3..bb480f1f9cca 100644 --- a/cluster-autoscaler/vendor/google.golang.org/grpc/vet.sh +++ b/cluster-autoscaler/vendor/google.golang.org/grpc/vet.sh @@ -93,6 +93,9 @@ git grep -l -e 'grpclog.I' --or -e 'grpclog.W' --or -e 'grpclog.E' --or -e 'grpc # - Ensure all ptypes proto packages are renamed when importing. not git grep "\(import \|^\s*\)\"github.com/golang/protobuf/ptypes/" -- "*.go" +# - Ensure all usages of grpc_testing package are renamed when importing. +not git grep "\(import \|^\s*\)\"google.golang.org/grpc/interop/grpc_testing" -- "*.go" + # - Ensure all xds proto imports are renamed to *pb or *grpc. 
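The tap.Info change above exposes the received header metadata to in-tap handles, letting a server reject an RPC before the stream or application handler is set up. A minimal sketch of such a handle (the x-api-token header name is made up):

package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	"google.golang.org/grpc/tap"
)

// requireToken rejects RPCs whose header metadata lacks a token, using the
// Header field newly available on tap.Info.
func requireToken(ctx context.Context, info *tap.Info) (context.Context, error) {
	if len(info.Header.Get("x-api-token")) == 0 {
		return nil, status.Error(codes.Unauthenticated, "missing x-api-token")
	}
	return ctx, nil
}

func main() {
	_ = grpc.NewServer(grpc.InTapHandle(requireToken))
}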
git grep '"github.com/envoyproxy/go-control-plane/envoy' -- '*.go' ':(exclude)*.pb.go' | not grep -v 'pb "\|grpc "' diff --git a/cluster-autoscaler/vendor/k8s.io/api/core/v1/generated.proto b/cluster-autoscaler/vendor/k8s.io/api/core/v1/generated.proto index cf9b6e6ebc07..d099238cdf60 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/core/v1/generated.proto +++ b/cluster-autoscaler/vendor/k8s.io/api/core/v1/generated.proto @@ -3286,7 +3286,7 @@ message PersistentVolumeStatus { // lastPhaseTransitionTime is the time the phase transitioned from one to another // and automatically resets to current time everytime a volume phase transitions. - // This is an alpha field and requires enabling PersistentVolumeLastPhaseTransitionTime feature. + // This is a beta field and requires the PersistentVolumeLastPhaseTransitionTime feature to be enabled (enabled by default). // +featureGate=PersistentVolumeLastPhaseTransitionTime // +optional optional k8s.io.apimachinery.pkg.apis.meta.v1.Time lastPhaseTransitionTime = 4; diff --git a/cluster-autoscaler/vendor/k8s.io/api/core/v1/types.go b/cluster-autoscaler/vendor/k8s.io/api/core/v1/types.go index 1aade3806fa0..61ba21bcad4e 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/core/v1/types.go +++ b/cluster-autoscaler/vendor/k8s.io/api/core/v1/types.go @@ -423,7 +423,7 @@ type PersistentVolumeStatus struct { Reason string `json:"reason,omitempty" protobuf:"bytes,3,opt,name=reason"` // lastPhaseTransitionTime is the time the phase transitioned from one to another // and automatically resets to current time everytime a volume phase transitions. - // This is an alpha field and requires enabling PersistentVolumeLastPhaseTransitionTime feature. + // This is a beta field and requires the PersistentVolumeLastPhaseTransitionTime feature to be enabled (enabled by default). // +featureGate=PersistentVolumeLastPhaseTransitionTime // +optional LastPhaseTransitionTime *metav1.Time `json:"lastPhaseTransitionTime,omitempty" protobuf:"bytes,4,opt,name=lastPhaseTransitionTime"` diff --git a/cluster-autoscaler/vendor/k8s.io/api/core/v1/types_swagger_doc_generated.go b/cluster-autoscaler/vendor/k8s.io/api/core/v1/types_swagger_doc_generated.go index 01152a0964cd..fd6f7dc61b90 100644 --- a/cluster-autoscaler/vendor/k8s.io/api/core/v1/types_swagger_doc_generated.go +++ b/cluster-autoscaler/vendor/k8s.io/api/core/v1/types_swagger_doc_generated.go @@ -1478,7 +1478,7 @@ var map_PersistentVolumeStatus = map[string]string{ "phase": "phase indicates if a volume is available, bound to a claim, or released by a claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#phase", "message": "message is a human-readable message indicating details about why the volume is in this state.", "reason": "reason is a brief CamelCase string that describes any failure and is meant for machine parsing and tidy display in the CLI.", - "lastPhaseTransitionTime": "lastPhaseTransitionTime is the time the phase transitioned from one to another and automatically resets to current time everytime a volume phase transitions. This is an alpha field and requires enabling PersistentVolumeLastPhaseTransitionTime feature.", + "lastPhaseTransitionTime": "lastPhaseTransitionTime is the time the phase transitioned from one to another and automatically resets to current time everytime a volume phase transitions. 
This is a beta field and requires the PersistentVolumeLastPhaseTransitionTime feature to be enabled (enabled by default).", } func (PersistentVolumeStatus) SwaggerDoc() map[string]string { diff --git a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/httpstream/wsstream/conn.go b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/httpstream/wsstream/conn.go index 7cfdd0632170..8a741936a3d5 100644 --- a/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/httpstream/wsstream/conn.go +++ b/cluster-autoscaler/vendor/k8s.io/apimachinery/pkg/util/httpstream/wsstream/conn.go @@ -344,7 +344,7 @@ func (conn *Conn) handle(ws *websocket.Conn) { continue } if _, err := conn.channels[channel].DataFromSocket(data); err != nil { - klog.Errorf("Unable to write frame to %d: %v\n%s", channel, err, string(data)) + klog.Errorf("Unable to write frame (%d bytes) to %d: %v", len(data), channel, err) continue } } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/cel/composition.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/cel/composition.go index 646c640fcaea..2dbfa0991646 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/cel/composition.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/cel/composition.go @@ -178,7 +178,7 @@ func (a *variableAccessor) Callback(_ *lazy.MapValue) ref.Val { return types.NewErr("composited variable %q fails to compile: %v", a.name, a.result.Error) } - v, details, err := a.result.Program.Eval(a.activation) + v, details, err := a.result.Program.ContextEval(a.context, a.activation) if details == nil { return types.NewErr("unable to get evaluation details of variable %q", a.name) } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/controller_reconcile.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/controller_reconcile.go index b2624694c846..9cd3c01aed34 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/controller_reconcile.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/admission/plugin/validatingadmissionpolicy/controller_reconcile.go @@ -180,8 +180,9 @@ func (c *policyController) reconcilePolicyDefinitionSpec(namespace, name string, celmetrics.Metrics.ObserveDefinition(context.TODO(), "active", "deny") } - // Skip reconcile if the spec of the definition is unchanged - if info.lastReconciledValue != nil && definition != nil && + // Skip reconcile if the spec of the definition is unchanged and the + // previous sync was successful + if info.configurationError == nil && info.lastReconciledValue != nil && definition != nil && apiequality.Semantic.DeepEqual(info.lastReconciledValue.Spec, definition.Spec) { return nil } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/environment/base.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/environment/base.go index 76a0bccee8a8..0c1dee82dc5e 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/environment/base.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/cel/environment/base.go @@ -62,10 +62,18 @@ var baseOpts = []VersionedOptions{ library.URLs(), library.Regex(), library.Lists(), + + // cel-go v0.17.7 changed the cost of has() from 0 to 1, but also provided the CostEstimatorOptions option to preserve the old behavior, so we enabled it at the same time we bumped our cel version to v0.17.7. + // Since it is a regression fix, we apply it uniformly to all code using v0.17.7. + cel.CostEstimatorOptions(checker.PresenceTestHasCost(false)), }, ProgramOptions: []cel.ProgramOption{ cel.EvalOptions(cel.OptOptimize, cel.OptTrackCost), cel.CostLimit(celconfig.PerCallLimit), + + // cel-go v0.17.7 changed the cost of has() from 0 to 1, but also provided the CostEstimatorOptions option to preserve the old behavior, so we enabled it at the same time we bumped our cel version to v0.17.7. + // Since it is a regression fix, we apply it uniformly to all code using v0.17.7. + cel.CostTrackerOptions(interpreter.PresenceTestHasCost(false)), }, }, { @@ -113,14 +121,6 @@ var baseOpts = []VersionedOptions{ IntroducedVersion: version.MajorMinor(1, 29), EnvOptions: []cel.EnvOption{ ext.Sets(), - // cel-go v0.17.7 introduced CostEstimatorOptions. - // Previous the presence has a cost of 0 but cel fixed it to 1. We still set to 0 here to avoid breaking changes. - cel.CostEstimatorOptions(checker.PresenceTestHasCost(false)), - }, - ProgramOptions: []cel.ProgramOption{ - // cel-go v0.17.7 introduced CostTrackerOptions. - // Previous the presence has a cost of 0 but cel fixed it to 1. We still set to 0 here to avoid breaking changes. - cel.CostTrackerOptions(interpreter.PresenceTestHasCost(false)), - }, }, } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/features/kube_features.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/features/kube_features.go index f059cef9bc8d..e524e0c6474c 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/features/kube_features.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/features/kube_features.go @@ -163,6 +163,13 @@ const ( // Deprecates and removes SelfLink from ObjectMeta and ListMeta. RemoveSelfLink featuregate.Feature = "RemoveSelfLink" + // owner: @serathius + // beta: v1.30 + // + // Allow watch cache to create a watch on a dedicated RPC. + // This prevents watch cache from being starved by other watches. + SeparateCacheWatchRPC featuregate.Feature = "SeparateCacheWatchRPC" + // owner: @apelisse, @lavalamp // alpha: v1.14 // beta: v1.16 @@ -233,6 +240,12 @@ const ( // Enables support for watch bookmark events. WatchBookmark featuregate.Feature = "WatchBookmark" + // owner: @serathius + // beta: v1.30 + // Enables watches without resourceVersion to be served from storage. + // Used to prevent https://github.com/kubernetes/kubernetes/issues/123072 until etcd fixes the issue.
+ WatchFromStorageWithoutResourceVersion featuregate.Feature = "WatchFromStorageWithoutResourceVersion" + // owner: @vinaykul // kep: http://kep.k8s.io/1287 // alpha: v1.27 @@ -303,6 +316,8 @@ var defaultKubernetesFeatureGates = map[featuregate.Feature]featuregate.FeatureS RemoveSelfLink: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, + SeparateCacheWatchRPC: {Default: true, PreRelease: featuregate.Beta}, + ServerSideApply: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.29 ServerSideFieldValidation: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.29 @@ -319,6 +334,8 @@ var defaultKubernetesFeatureGates = map[featuregate.Feature]featuregate.FeatureS WatchBookmark: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, + WatchFromStorageWithoutResourceVersion: {Default: false, PreRelease: featuregate.Beta}, + InPlacePodVerticalScaling: {Default: false, PreRelease: featuregate.Alpha}, WatchList: {Default: false, PreRelease: featuregate.Alpha}, diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/cacher.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/cacher.go index 4f408044197c..900f300cd5f9 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/cacher.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/cacher.go @@ -25,6 +25,7 @@ import ( "time" "go.opentelemetry.io/otel/attribute" + "google.golang.org/grpc/metadata" "k8s.io/apimachinery/pkg/api/errors" "k8s.io/apimachinery/pkg/api/meta" @@ -397,10 +398,18 @@ func NewCacherFromConfig(config Config) (*Cacher, error) { // so that future reuse does not get a spurious timeout. <-cacher.timer.C } - progressRequester := newConditionalProgressRequester(config.Storage.RequestWatchProgress, config.Clock) + var contextMetadata metadata.MD + if utilfeature.DefaultFeatureGate.Enabled(features.SeparateCacheWatchRPC) { + // Add grpc context metadata to watch and progress notify requests done by cacher to: + // * Prevent starvation of the watch opened by the cacher, by moving it to a separate Watch RPC from watch requests that bypass the cacher. + // * Ensure that progress notification requests are executed on the same Watch RPC as their watch, which is required for it to work.
+ contextMetadata = metadata.New(map[string]string{"source": "cache"}) + } + + progressRequester := newConditionalProgressRequester(config.Storage.RequestWatchProgress, config.Clock, contextMetadata) watchCache := newWatchCache( config.KeyFunc, cacher.processEvent, config.GetAttrsFunc, config.Versioner, config.Indexers, config.Clock, config.GroupResource, progressRequester) - listerWatcher := NewListerWatcher(config.Storage, config.ResourcePrefix, config.NewListFunc) + listerWatcher := NewListerWatcher(config.Storage, config.ResourcePrefix, config.NewListFunc, contextMetadata) reflectorName := "storage/cacher.go:" + config.ResourcePrefix reflector := cache.NewNamedReflector(reflectorName, listerWatcher, obj, watchCache, 0) @@ -513,7 +522,7 @@ func (c *Cacher) Watch(ctx context.Context, key string, opts storage.ListOptions if !utilfeature.DefaultFeatureGate.Enabled(features.WatchList) && opts.SendInitialEvents != nil { opts.SendInitialEvents = nil } - if opts.SendInitialEvents == nil && opts.ResourceVersion == "" { + if utilfeature.DefaultFeatureGate.Enabled(features.WatchFromStorageWithoutResourceVersion) && opts.SendInitialEvents == nil && opts.ResourceVersion == "" { return c.storage.Watch(ctx, key, opts) } requestedWatchRV, err := c.versioner.ParseResourceVersion(opts.ResourceVersion) diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/lister_watcher.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/lister_watcher.go index 1252e5e34959..2817a93dd0cc 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/lister_watcher.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/lister_watcher.go @@ -19,6 +19,8 @@ package cacher import ( "context" + "google.golang.org/grpc/metadata" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/fields" "k8s.io/apimachinery/pkg/labels" @@ -30,17 +32,19 @@ import ( // listerWatcher opaques storage.Interface to expose cache.ListerWatcher. type listerWatcher struct { - storage storage.Interface - resourcePrefix string - newListFunc func() runtime.Object + storage storage.Interface + resourcePrefix string + newListFunc func() runtime.Object + contextMetadata metadata.MD } // NewListerWatcher returns a storage.Interface backed ListerWatcher. 
-func NewListerWatcher(storage storage.Interface, resourcePrefix string, newListFunc func() runtime.Object) cache.ListerWatcher { +func NewListerWatcher(storage storage.Interface, resourcePrefix string, newListFunc func() runtime.Object, contextMetadata metadata.MD) cache.ListerWatcher { return &listerWatcher{ - storage: storage, - resourcePrefix: resourcePrefix, - newListFunc: newListFunc, + storage: storage, + resourcePrefix: resourcePrefix, + newListFunc: newListFunc, + contextMetadata: contextMetadata, } } @@ -59,7 +63,11 @@ func (lw *listerWatcher) List(options metav1.ListOptions) (runtime.Object, error Predicate: pred, Recursive: true, } - if err := lw.storage.GetList(context.TODO(), lw.resourcePrefix, storageOpts, list); err != nil { + ctx := context.Background() + if lw.contextMetadata != nil { + ctx = metadata.NewOutgoingContext(ctx, lw.contextMetadata) + } + if err := lw.storage.GetList(ctx, lw.resourcePrefix, storageOpts, list); err != nil { return nil, err } return list, nil @@ -73,5 +81,9 @@ func (lw *listerWatcher) Watch(options metav1.ListOptions) (watch.Interface, err Recursive: true, ProgressNotify: true, } - return lw.storage.Watch(context.TODO(), lw.resourcePrefix, opts) + ctx := context.Background() + if lw.contextMetadata != nil { + ctx = metadata.NewOutgoingContext(ctx, lw.contextMetadata) + } + return lw.storage.Watch(ctx, lw.resourcePrefix, opts) } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/watch_cache.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/watch_cache.go index c26eb55dac44..c27ca053b78e 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/watch_cache.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/watch_cache.go @@ -492,8 +492,7 @@ func (s sortableStoreElements) Swap(i, j int) { // WaitUntilFreshAndList returns list of pointers to `storeElement` objects along // with their ResourceVersion and the name of the index, if any, that was used. -func (w *watchCache) WaitUntilFreshAndList(ctx context.Context, resourceVersion uint64, matchValues []storage.MatchValue) ([]interface{}, uint64, string, error) { - var err error +func (w *watchCache) WaitUntilFreshAndList(ctx context.Context, resourceVersion uint64, matchValues []storage.MatchValue) (result []interface{}, rv uint64, index string, err error) { if utilfeature.DefaultFeatureGate.Enabled(features.ConsistentListFromCache) && w.notFresh(resourceVersion) { w.waitingUntilFresh.Add() err = w.waitUntilFreshAndBlock(ctx, resourceVersion) @@ -501,12 +500,14 @@ func (w *watchCache) WaitUntilFreshAndList(ctx context.Context, resourceVersion } else { err = w.waitUntilFreshAndBlock(ctx, resourceVersion) } + + defer func() { sort.Sort(sortableStoreElements(result)) }() defer w.RUnlock() if err != nil { - return nil, 0, "", err + return result, rv, index, err } - result, rv, index, err := func() ([]interface{}, uint64, string, error) { + result, rv, index, err = func() ([]interface{}, uint64, string, error) { // This isn't the place where we do "final filtering" - only some "prefiltering" is happening here. So the only // requirement here is to NOT miss anything that should be returned. We can return as many non-matching items as we // want - they will be filtered out later. The fact that we return less things is only further performance improvement. 
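For context on the mechanism threaded through the cacher and listerWatcher changes above (and through the conditionalProgressRequester further below): metadata.NewOutgoingContext attaches key/value pairs to a context so they travel with every gRPC call issued on it, which is what lets the storage layer route the cacher's requests onto a dedicated Watch RPC. A minimal, self-contained sketch of that pattern follows; the "source": "cache" key is the one this patch uses, while the attachSourceMetadata helper and the printout are illustrative only.

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc/metadata"
)

// attachSourceMetadata mirrors the guarded pattern above: only derive a new
// context when metadata is actually configured.
func attachSourceMetadata(parent context.Context, md metadata.MD) context.Context {
	if md == nil {
		return parent
	}
	return metadata.NewOutgoingContext(parent, md)
}

func main() {
	md := metadata.New(map[string]string{"source": "cache"})
	ctx := attachSourceMetadata(context.Background(), md)

	// Every RPC issued on ctx now carries this metadata; a server or proxy
	// can read it back and segregate the streams accordingly.
	if out, ok := metadata.FromOutgoingContext(ctx); ok {
		fmt.Println(out.Get("source")) // [cache]
	}
}

Because the metadata rides on the context rather than on any one call site, the same tagging applies uniformly to List, Watch, and progress-notify requests without changing their signatures.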
@@ -519,7 +520,6 @@ func (w *watchCache) WaitUntilFreshAndList(ctx context.Context, resourceVersion return w.store.List(), w.resourceVersion, "", nil }() - sort.Sort(sortableStoreElements(result)) return result, rv, index, err } diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/watch_cache_interval.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/watch_cache_interval.go index c455357e04de..2b57dd165098 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/watch_cache_interval.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/watch_cache_interval.go @@ -18,6 +18,7 @@ package cacher import ( "fmt" + "sort" "sync" "k8s.io/apimachinery/pkg/fields" @@ -114,9 +115,24 @@ func newCacheInterval(startIndex, endIndex int, indexer indexerFunc, indexValida } } +type sortableWatchCacheEvents []*watchCacheEvent + +func (s sortableWatchCacheEvents) Len() int { + return len(s) +} + +func (s sortableWatchCacheEvents) Less(i, j int) bool { + return s[i].Key < s[j].Key +} + +func (s sortableWatchCacheEvents) Swap(i, j int) { + s[i], s[j] = s[j], s[i] +} + // newCacheIntervalFromStore is meant to handle the case of rv=0, such that the events // returned by Next() need to be events from a List() done on the underlying store of // the watch cache. +// The items returned in the interval will be sorted by Key. func newCacheIntervalFromStore(resourceVersion uint64, store cache.Indexer, getAttrsFunc attrFunc) (*watchCacheInterval, error) { buffer := &watchCacheIntervalBuffer{} allItems := store.List() @@ -140,6 +156,7 @@ func newCacheIntervalFromStore(resourceVersion uint64, store cache.Indexer, getA } buffer.endIndex++ } + sort.Sort(sortableWatchCacheEvents(buffer.buffer)) ci := &watchCacheInterval{ startIndex: 0, // Simulate that we already have all the events we're looking for. 
diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/watch_progress.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/watch_progress.go index f44ca9325b88..13f50bc187d8 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/watch_progress.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/cacher/watch_progress.go @@ -21,6 +21,8 @@ import ( "sync" "time" + "google.golang.org/grpc/metadata" + utilruntime "k8s.io/apimachinery/pkg/util/runtime" "k8s.io/apimachinery/pkg/util/wait" @@ -34,10 +36,11 @@ const ( progressRequestPeriod = 100 * time.Millisecond ) -func newConditionalProgressRequester(requestWatchProgress WatchProgressRequester, clock TickerFactory) *conditionalProgressRequester { +func newConditionalProgressRequester(requestWatchProgress WatchProgressRequester, clock TickerFactory, contextMetadata metadata.MD) *conditionalProgressRequester { pr := &conditionalProgressRequester{ clock: clock, requestWatchProgress: requestWatchProgress, + contextMetadata: contextMetadata, } pr.cond = sync.NewCond(pr.mux.RLocker()) return pr @@ -54,6 +57,7 @@ type TickerFactory interface { type conditionalProgressRequester struct { clock TickerFactory requestWatchProgress WatchProgressRequester + contextMetadata metadata.MD mux sync.RWMutex cond *sync.Cond @@ -63,6 +67,9 @@ type conditionalProgressRequester struct { func (pr *conditionalProgressRequester) Run(stopCh <-chan struct{}) { ctx := wait.ContextForChannel(stopCh) + if pr.contextMetadata != nil { + ctx = metadata.NewOutgoingContext(ctx, pr.contextMetadata) + } go func() { defer utilruntime.HandleCrash() <-stopCh diff --git a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/etcd3/metrics/metrics.go b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/etcd3/metrics/metrics.go index fadc87d53de2..1c0307bd913d 100644 --- a/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/etcd3/metrics/metrics.go +++ b/cluster-autoscaler/vendor/k8s.io/apiserver/pkg/storage/etcd3/metrics/metrics.go @@ -84,7 +84,7 @@ var ( }, []string{"endpoint"}, ) - storageSizeDescription = compbasemetrics.NewDesc("apiserver_storage_size_bytes", "Size of the storage database file physically allocated in bytes.", []string{"cluster"}, nil, compbasemetrics.ALPHA, "") + storageSizeDescription = compbasemetrics.NewDesc("apiserver_storage_size_bytes", "Size of the storage database file physically allocated in bytes.", []string{"storage_cluster_id"}, nil, compbasemetrics.ALPHA, "") storageMonitor = &monitorCollector{monitorGetter: func() ([]Monitor, error) { return nil, nil }} etcdEventsReceivedCounts = compbasemetrics.NewCounterVec( &compbasemetrics.CounterOpts{ @@ -287,21 +287,21 @@ func (c *monitorCollector) CollectWithStability(ch chan<- compbasemetrics.Metric } for i, m := range monitors { - cluster := fmt.Sprintf("etcd-%d", i) + storageClusterID := fmt.Sprintf("etcd-%d", i) - klog.V(4).InfoS("Start collecting storage metrics", "cluster", cluster) + klog.V(4).InfoS("Start collecting storage metrics", "storage_cluster_id", storageClusterID) ctx, cancel := context.WithTimeout(context.Background(), time.Second) metrics, err := m.Monitor(ctx) cancel() m.Close() if err != nil { - klog.InfoS("Failed to get storage metrics", "cluster", cluster, "err", err) + klog.InfoS("Failed to get storage metrics", "storage_cluster_id", storageClusterID, "err", err) continue } - metric, err := compbasemetrics.NewConstMetric(storageSizeDescription, compbasemetrics.GaugeValue, float64(metrics.Size), cluster) + metric, err 
:= compbasemetrics.NewConstMetric(storageSizeDescription, compbasemetrics.GaugeValue, float64(metrics.Size), storageClusterID) if err != nil { - klog.ErrorS(err, "Failed to create metric", "cluster", cluster) + klog.ErrorS(err, "Failed to create metric", "storage_cluster_id", storageClusterID) } ch <- metric } diff --git a/cluster-autoscaler/vendor/k8s.io/client-go/tools/remotecommand/websocket.go b/cluster-autoscaler/vendor/k8s.io/client-go/tools/remotecommand/websocket.go index a60986decca8..49ef4717cd94 100644 --- a/cluster-autoscaler/vendor/k8s.io/client-go/tools/remotecommand/websocket.go +++ b/cluster-autoscaler/vendor/k8s.io/client-go/tools/remotecommand/websocket.go @@ -187,6 +187,9 @@ type wsStreamCreator struct { // map of stream id to stream; multiple streams read/write the connection streams map[byte]*stream streamsMu sync.Mutex + // setStreamErr holds the error to return to anyone calling setStream. + // This is populated in closeAllStreamReaders. + setStreamErr error } func newWSStreamCreator(conn *gwebsocket.Conn) *wsStreamCreator { @@ -202,10 +205,14 @@ func (c *wsStreamCreator) getStream(id byte) *stream { return c.streams[id] } -func (c *wsStreamCreator) setStream(id byte, s *stream) { +func (c *wsStreamCreator) setStream(id byte, s *stream) error { c.streamsMu.Lock() defer c.streamsMu.Unlock() + if c.setStreamErr != nil { + return c.setStreamErr + } c.streams[id] = s + return nil } // CreateStream uses id from passed headers to create a stream over "c.conn" connection. @@ -228,7 +235,11 @@ func (c *wsStreamCreator) CreateStream(headers http.Header) (httpstream.Stream, connWriteLock: &c.connWriteLock, id: id, } - c.setStream(id, s) + if err := c.setStream(id, s); err != nil { + _ = s.writePipe.Close() + _ = s.readPipe.Close() + return nil, err + } return s, nil } @@ -312,7 +323,7 @@ func (c *wsStreamCreator) readDemuxLoop(bufferSize int, period time.Duration, de } // closeAllStreamReaders closes readers in all streams. -// This unblocks all stream.Read() calls. +// This unblocks all stream.Read() calls, and keeps any future streams from being created. func (c *wsStreamCreator) closeAllStreamReaders(err error) { c.streamsMu.Lock() defer c.streamsMu.Unlock() @@ -320,6 +331,12 @@ func (c *wsStreamCreator) closeAllStreamReaders(err error) { // Closing writePipe unblocks all readPipe.Read() callers and prevents any future writes. _ = s.writePipe.CloseWithError(err) } + // ensure callers to setStream receive an error after this point + if err != nil { + c.setStreamErr = err + } else { + c.setStreamErr = fmt.Errorf("closed all streams") + } } type stream struct { diff --git a/cluster-autoscaler/vendor/k8s.io/code-generator/generate-internal-groups.sh b/cluster-autoscaler/vendor/k8s.io/code-generator/generate-internal-groups.sh index 9676fac313b6..415b0b67c648 100644 --- a/cluster-autoscaler/vendor/k8s.io/code-generator/generate-internal-groups.sh +++ b/cluster-autoscaler/vendor/k8s.io/code-generator/generate-internal-groups.sh @@ -55,6 +55,12 @@ echo "WARNING: $(basename "$0") is deprecated." echo "WARNING: Please use k8s.io/code-generator/kube_codegen.sh instead." echo +# If verification only is requested, avoid deleting files +verify_only="" +for ((i = 1; i <= $#; i++)); do + if [ "${!i}" = --verify-only ]; then verify_only=1; fi +done + if [ "${GENS}" = "all" ] || grep -qw "all" <<<"${GENS}"; then ALL="client,conversion,deepcopy,defaulter,informer,lister,openapi" echo "WARNING: Specifying \"all\" as a generator is deprecated."
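The wsStreamCreator changes above close a race between CreateStream and connection teardown: registration and shutdown take the same mutex, and once a terminal error is recorded, later registrations fail instead of silently leaking a stream. A minimal sketch of that register-or-fail pattern; the registry type and string payloads are invented for illustration and stand in for the real stream machinery.

package main

import (
	"errors"
	"fmt"
	"sync"
)

// registry sketches the pattern: one mutex guards both the map and the
// terminal error, so a registration can never slip in after close.
type registry struct {
	mu        sync.Mutex
	items     map[byte]string
	closedErr error
}

func (r *registry) register(id byte, v string) error {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.closedErr != nil {
		return r.closedErr // reject late registrations after close
	}
	r.items[id] = v
	return nil
}

func (r *registry) closeAll(err error) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if err == nil {
		err = errors.New("closed all streams")
	}
	r.closedErr = err // future register() calls observe the terminal error
}

func main() {
	r := &registry{items: map[byte]string{}}
	fmt.Println(r.register(1, "stdout")) // <nil>
	r.closeAll(nil)
	fmt.Println(r.register(2, "stderr")) // closed all streams
}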
@@ -124,12 +130,14 @@ CLIENTSET_PKG="${CLIENTSET_PKG_NAME:-clientset}" CLIENTSET_NAME="${CLIENTSET_NAME_VERSIONED:-versioned}" if grep -qw "deepcopy" <<<"${GENS}"; then - # Nuke existing files - for dir in $(GO111MODULE=on go list -f '{{.Dir}}' "${ALL_FQ_APIS[@]}"); do - pushd "${dir}" >/dev/null - git_find -z ':(glob)**'/zz_generated.deepcopy.go | xargs -0 rm -f - popd >/dev/null - done + if [ ! "$verify_only" ]; then + # Nuke existing files + for dir in $(GO111MODULE=on go list -f '{{.Dir}}' "${ALL_FQ_APIS[@]}"); do + pushd "${dir}" >/dev/null + git_find -z ':(glob)**'/zz_generated.deepcopy.go | xargs -0 rm -f + popd >/dev/null + done + fi echo "Generating deepcopy funcs" "${gobin}/deepcopy-gen" \ @@ -139,12 +147,14 @@ if grep -qw "deepcopy" <<<"${GENS}"; then fi if grep -qw "defaulter" <<<"${GENS}"; then - # Nuke existing files - for dir in $(GO111MODULE=on go list -f '{{.Dir}}' "${ALL_FQ_APIS[@]}"); do - pushd "${dir}" >/dev/null - git_find -z ':(glob)**'/zz_generated.defaults.go | xargs -0 rm -f - popd >/dev/null - done + if [ ! "$verify_only" ]; then + # Nuke existing files + for dir in $(GO111MODULE=on go list -f '{{.Dir}}' "${ALL_FQ_APIS[@]}"); do + pushd "${dir}" >/dev/null + git_find -z ':(glob)**'/zz_generated.defaults.go | xargs -0 rm -f + popd >/dev/null + done + fi echo "Generating defaulters" "${gobin}/defaulter-gen" \ @@ -154,12 +164,14 @@ if grep -qw "defaulter" <<<"${GENS}"; then fi if grep -qw "conversion" <<<"${GENS}"; then - # Nuke existing files - for dir in $(GO111MODULE=on go list -f '{{.Dir}}' "${ALL_FQ_APIS[@]}"); do - pushd "${dir}" >/dev/null - git_find -z ':(glob)**'/zz_generated.conversion.go | xargs -0 rm -f - popd >/dev/null - done + if [ ! "$verify_only" ]; then + # Nuke existing files + for dir in $(GO111MODULE=on go list -f '{{.Dir}}' "${ALL_FQ_APIS[@]}"); do + pushd "${dir}" >/dev/null + git_find -z ':(glob)**'/zz_generated.conversion.go | xargs -0 rm -f + popd >/dev/null + done + fi echo "Generating conversions" "${gobin}/conversion-gen" \ @@ -171,15 +183,17 @@ fi if grep -qw "applyconfiguration" <<<"${GENS}"; then APPLY_CONFIGURATION_PACKAGE="${OUTPUT_PKG}/${APPLYCONFIGURATION_PKG_NAME:-applyconfiguration}" - # Nuke existing files - root="$(GO111MODULE=on go list -f '{{.Dir}}' "${APPLY_CONFIGURATION_PACKAGE}" 2>/dev/null || true)" - if [ -n "${root}" ]; then - pushd "${root}" >/dev/null - git_grep -l --null \ - -e '^// Code generated by applyconfiguration-gen. DO NOT EDIT.$' \ - ':(glob)**/*.go' \ - | xargs -0 rm -f - popd >/dev/null + if [ ! "$verify_only" ]; then + # Nuke existing files + root="$(GO111MODULE=on go list -f '{{.Dir}}' "${APPLY_CONFIGURATION_PACKAGE}" 2>/dev/null || true)" + if [ -n "${root}" ]; then + pushd "${root}" >/dev/null + git_grep -l --null \ + -e '^// Code generated by applyconfiguration-gen. DO NOT EDIT.$' \ + ':(glob)**/*.go' \ + | xargs -0 rm -f + popd >/dev/null + fi fi echo "Generating apply configuration for ${GROUPS_WITH_VERSIONS} at ${APPLY_CONFIGURATION_PACKAGE}" @@ -190,15 +204,17 @@ if grep -qw "applyconfiguration" <<<"${GENS}"; then fi if grep -qw "client" <<<"${GENS}"; then - # Nuke existing files - root="$(GO111MODULE=on go list -f '{{.Dir}}' "${OUTPUT_PKG}/${CLIENTSET_PKG}/${CLIENTSET_NAME}" 2>/dev/null || true)" - if [ -n "${root}" ]; then - pushd "${root}" >/dev/null - git_grep -l --null \ - -e '^// Code generated by client-gen. DO NOT EDIT.$' \ - ':(glob)**/*.go' \ - | xargs -0 rm -f - popd >/dev/null + if [ ! 
"$verify_only" ]; then + # Nuke existing files + root="$(GO111MODULE=on go list -f '{{.Dir}}' "${OUTPUT_PKG}/${CLIENTSET_PKG}/${CLIENTSET_NAME}" 2>/dev/null || true)" + if [ -n "${root}" ]; then + pushd "${root}" >/dev/null + git_grep -l --null \ + -e '^// Code generated by client-gen. DO NOT EDIT.$' \ + ':(glob)**/*.go' \ + | xargs -0 rm -f + popd >/dev/null + fi fi echo "Generating clientset for ${GROUPS_WITH_VERSIONS} at ${OUTPUT_PKG}/${CLIENTSET_PKG}" @@ -212,18 +228,20 @@ if grep -qw "client" <<<"${GENS}"; then fi if grep -qw "lister" <<<"${GENS}"; then - # Nuke existing files - for gv in "${GROUP_VERSIONS[@]}"; do - root="$(GO111MODULE=on go list -f '{{.Dir}}' "${OUTPUT_PKG}/listers/${gv}" 2>/dev/null || true)" - if [ -n "${root}" ]; then - pushd "${root}" >/dev/null - git_grep -l --null \ - -e '^// Code generated by lister-gen. DO NOT EDIT.$' \ - ':(glob)**/*.go' \ - | xargs -0 rm -f - popd >/dev/null - fi - done + if [ ! "$verify_only" ]; then + # Nuke existing files + for gv in "${GROUP_VERSIONS[@]}"; do + root="$(GO111MODULE=on go list -f '{{.Dir}}' "${OUTPUT_PKG}/listers/${gv}" 2>/dev/null || true)" + if [ -n "${root}" ]; then + pushd "${root}" >/dev/null + git_grep -l --null \ + -e '^// Code generated by lister-gen. DO NOT EDIT.$' \ + ':(glob)**/*.go' \ + | xargs -0 rm -f + popd >/dev/null + fi + done + fi echo "Generating listers for ${GROUPS_WITH_VERSIONS} at ${OUTPUT_PKG}/listers" "${gobin}/lister-gen" \ @@ -233,15 +251,17 @@ if grep -qw "lister" <<<"${GENS}"; then fi if grep -qw "informer" <<<"${GENS}"; then - # Nuke existing files - root="$(GO111MODULE=on go list -f '{{.Dir}}' "${OUTPUT_PKG}/informers/externalversions" 2>/dev/null || true)" - if [ -n "${root}" ]; then - pushd "${root}" >/dev/null - git_grep -l --null \ - -e '^// Code generated by informer-gen. DO NOT EDIT.$' \ - ':(glob)**/*.go' \ - | xargs -0 rm -f - popd >/dev/null + if [ ! "$verify_only" ]; then + # Nuke existing files + root="$(GO111MODULE=on go list -f '{{.Dir}}' "${OUTPUT_PKG}/informers/externalversions" 2>/dev/null || true)" + if [ -n "${root}" ]; then + pushd "${root}" >/dev/null + git_grep -l --null \ + -e '^// Code generated by informer-gen. DO NOT EDIT.$' \ + ':(glob)**/*.go' \ + | xargs -0 rm -f + popd >/dev/null + fi fi echo "Generating informers for ${GROUPS_WITH_VERSIONS} at ${OUTPUT_PKG}/informers" @@ -254,12 +274,14 @@ if grep -qw "informer" <<<"${GENS}"; then fi if grep -qw "openapi" <<<"${GENS}"; then - # Nuke existing files - for dir in $(GO111MODULE=on go list -f '{{.Dir}}' "${FQ_APIS[@]}"); do - pushd "${dir}" >/dev/null - git_find -z ':(glob)**'/zz_generated.openapi.go | xargs -0 rm -f - popd >/dev/null - done + if [ ! 
"$verify_only" ]; then + # Nuke existing files + for dir in $(GO111MODULE=on go list -f '{{.Dir}}' "${FQ_APIS[@]}"); do + pushd "${dir}" >/dev/null + git_find -z ':(glob)**'/zz_generated.openapi.go | xargs -0 rm -f + popd >/dev/null + done + fi echo "Generating OpenAPI definitions for ${GROUPS_WITH_VERSIONS} at ${OUTPUT_PKG}/openapi" declare -a OPENAPI_EXTRA_PACKAGES diff --git a/cluster-autoscaler/vendor/k8s.io/component-base/metrics/prometheus/slis/metrics.go b/cluster-autoscaler/vendor/k8s.io/component-base/metrics/prometheus/slis/metrics.go index 3d464d12d75e..39cd2ba28858 100644 --- a/cluster-autoscaler/vendor/k8s.io/component-base/metrics/prometheus/slis/metrics.go +++ b/cluster-autoscaler/vendor/k8s.io/component-base/metrics/prometheus/slis/metrics.go @@ -57,6 +57,7 @@ var ( func Register(registry k8smetrics.KubeRegistry) { registry.Register(healthcheck) registry.Register(healthchecksTotal) + _ = k8smetrics.RegisterProcessStartTime(registry.Register) } func ResetHealthMetrics() { diff --git a/cluster-autoscaler/vendor/k8s.io/component-helpers/storage/volume/helpers.go b/cluster-autoscaler/vendor/k8s.io/component-helpers/storage/volume/helpers.go index 7ec376f34a07..5068c654624d 100644 --- a/cluster-autoscaler/vendor/k8s.io/component-helpers/storage/volume/helpers.go +++ b/cluster-autoscaler/vendor/k8s.io/component-helpers/storage/volume/helpers.go @@ -24,6 +24,20 @@ import ( "k8s.io/component-helpers/scheduling/corev1" ) +// PersistentVolumeClaimHasClass returns true if given claim has set StorageClassName field. +func PersistentVolumeClaimHasClass(claim *v1.PersistentVolumeClaim) bool { + // Use beta annotation first + if _, found := claim.Annotations[v1.BetaStorageClassAnnotation]; found { + return true + } + + if claim.Spec.StorageClassName != nil { + return true + } + + return false +} + // GetPersistentVolumeClaimClass returns StorageClassName. If no storage class was // requested, it returns "". func GetPersistentVolumeClaimClass(claim *v1.PersistentVolumeClaim) string { diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/types.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/types.go index fa1242b8a1d3..6a3f888ab3df 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/types.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/types.go @@ -392,7 +392,7 @@ type PersistentVolumeStatus struct { Reason string // LastPhaseTransitionTime is the time the phase transitioned from one to another // and automatically resets to current time everytime a volume phase transitions. - // This is an alpha field and requires enabling PersistentVolumeLastPhaseTransitionTime feature. + // This is a beta field and requires the PersistentVolumeLastPhaseTransitionTime feature to be enabled (enabled by default). 
// +featureGate=PersistentVolumeLastPhaseTransitionTime // +optional LastPhaseTransitionTime *metav1.Time diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/validation/validation.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/validation/validation.go index a6f7fef30123..1885531f873c 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/validation/validation.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/apis/core/validation/validation.go @@ -5141,6 +5141,46 @@ func ValidateContainerStateTransition(newStatuses, oldStatuses []core.ContainerS return allErrs } +// ValidateInitContainerStateTransition tests whether any illegal init container state transitions are being attempted +func ValidateInitContainerStateTransition(newStatuses, oldStatuses []core.ContainerStatus, fldpath *field.Path, podSpec *core.PodSpec) field.ErrorList { + allErrs := field.ErrorList{} + // If we should always restart, containers are allowed to leave the terminated state + if podSpec.RestartPolicy == core.RestartPolicyAlways { + return allErrs + } + for i, oldStatus := range oldStatuses { + // Skip any container that is not terminated + if oldStatus.State.Terminated == nil { + continue + } + // Skip any container that failed but is allowed to restart + if oldStatus.State.Terminated.ExitCode != 0 && podSpec.RestartPolicy == core.RestartPolicyOnFailure { + continue + } + + // Skip any restartable init container that is allowed to restart + isRestartableInitContainer := false + for _, c := range podSpec.InitContainers { + if oldStatus.Name == c.Name { + if c.RestartPolicy != nil && *c.RestartPolicy == core.ContainerRestartPolicyAlways { + isRestartableInitContainer = true + } + break + } + } + if isRestartableInitContainer { + continue + } + + for _, newStatus := range newStatuses { + if oldStatus.Name == newStatus.Name && newStatus.State.Terminated == nil { + allErrs = append(allErrs, field.Forbidden(fldpath.Index(i).Child("state"), "may not be transitioned to non-terminated state")) + } + } + } + return allErrs +} + // ValidatePodStatusUpdate checks for changes to status that shouldn't occur in normal operation. func ValidatePodStatusUpdate(newPod, oldPod *core.Pod, opts PodValidationOptions) field.ErrorList { fldPath := field.NewPath("metadata") @@ -5162,7 +5202,7 @@ func ValidatePodStatusUpdate(newPod, oldPod *core.Pod, opts PodValidationOptions // If pod should not restart, make sure the status update does not transition // any terminated containers to a non-terminated state. allErrs = append(allErrs, ValidateContainerStateTransition(newPod.Status.ContainerStatuses, oldPod.Status.ContainerStatuses, fldPath.Child("containerStatuses"), oldPod.Spec.RestartPolicy)...) - allErrs = append(allErrs, ValidateContainerStateTransition(newPod.Status.InitContainerStatuses, oldPod.Status.InitContainerStatuses, fldPath.Child("initContainerStatuses"), oldPod.Spec.RestartPolicy)...) + allErrs = append(allErrs, ValidateInitContainerStateTransition(newPod.Status.InitContainerStatuses, oldPod.Status.InitContainerStatuses, fldPath.Child("initContainerStatuses"), &oldPod.Spec)...) // The kubelet will never restart ephemeral containers, so treat them like they have an implicit RestartPolicyNever. allErrs = append(allErrs, ValidateContainerStateTransition(newPod.Status.EphemeralContainerStatuses, oldPod.Status.EphemeralContainerStatuses, fldPath.Child("ephemeralContainerStatuses"), core.RestartPolicyNever)...)
allErrs = append(allErrs, validatePodResourceClaimStatuses(newPod.Status.ResourceClaimStatuses, newPod.Spec.ResourceClaims, fldPath.Child("resourceClaimStatuses"))...) diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/features/kube_features.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/features/kube_features.go index edd33feb74d2..b0b12d6ebf77 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/features/kube_features.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/features/kube_features.go @@ -259,7 +259,6 @@ const ( // owner: @harche // kep: http://kep.k8s.io/3386 // alpha: v1.25 - // beta: v1.27 // // Allows using event-driven PLEG (pod lifecycle event generator) through kubelet // which avoids frequent relisting of containers which helps optimize performance. @@ -617,6 +616,7 @@ const ( // owner: @RomanBednar // kep: https://kep.k8s.io/3762 // alpha: v1.28 + // beta: v1.29 // // Adds a new field to persistent volumes which holds a timestamp of when the volume last transitioned its phase. PersistentVolumeLastPhaseTransitionTime featuregate.Feature = "PersistentVolumeLastPhaseTransitionTime" @@ -1048,7 +1048,7 @@ var defaultKubernetesFeatureGates = map[featuregate.Feature]featuregate.FeatureS DynamicResourceAllocation: {Default: false, PreRelease: featuregate.Alpha}, - EventedPLEG: {Default: false, PreRelease: featuregate.Beta}, // off by default, requires CRI Runtime support + EventedPLEG: {Default: false, PreRelease: featuregate.Alpha}, ExecProbeTimeout: {Default: true, PreRelease: featuregate.GA}, // lock to default and remove after v1.22 based on KEP #1972 update @@ -1263,6 +1263,8 @@ var defaultKubernetesFeatureGates = map[featuregate.Feature]featuregate.FeatureS genericfeatures.OpenAPIEnums: {Default: true, PreRelease: featuregate.Beta}, + genericfeatures.SeparateCacheWatchRPC: {Default: true, PreRelease: featuregate.Beta}, + genericfeatures.ServerSideApply: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.29 genericfeatures.ServerSideFieldValidation: {Default: true, PreRelease: featuregate.GA, LockToDefault: true}, // remove in 1.29 @@ -1273,6 +1275,8 @@ var defaultKubernetesFeatureGates = map[featuregate.Feature]featuregate.FeatureS genericfeatures.ZeroLimitedNominalConcurrencyShares: {Default: false, PreRelease: featuregate.Beta}, + genericfeatures.WatchFromStorageWithoutResourceVersion: {Default: false, PreRelease: featuregate.Beta}, + // inherited features from apiextensions-apiserver, relisted here to get a conflict if it is changed // unintentionally on either side: diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet_volumes.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet_volumes.go index 2eb91338e24b..27c82ae430e5 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet_volumes.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/kubelet_volumes.go @@ -116,7 +116,8 @@ func (kl *Kubelet) newVolumeMounterFromPlugins(spec *volume.Spec, pod *v1.Pod, o // removeOrphanedPodVolumeDirs attempts to remove the pod volumes directory and // its subdirectories. There should be no files left under normal conditions // when this is called, so it effectively does a recursive rmdir instead of -// RemoveAll to ensure it only removes directories and not regular files. +// RemoveAll to ensure it only removes empty directories and files that were +// used as mount points, but not content of the mount points. 
func (kl *Kubelet) removeOrphanedPodVolumeDirs(uid types.UID) []error { orphanVolumeErrors := []error{} @@ -136,7 +137,7 @@ func (kl *Kubelet) removeOrphanedPodVolumeDirs(uid types.UID) []error { } } - // If there are any volume-subpaths, attempt to rmdir them + // If there are any volume-subpaths, attempt to remove them subpathVolumePaths, err := kl.getPodVolumeSubpathListFromDisk(uid) if err != nil { orphanVolumeErrors = append(orphanVolumeErrors, fmt.Errorf("orphaned pod %q found, but error occurred during reading of volume-subpaths dir from disk: %v", uid, err)) @@ -144,7 +145,8 @@ func (kl *Kubelet) removeOrphanedPodVolumeDirs(uid types.UID) []error { } if len(subpathVolumePaths) > 0 { for _, subpathVolumePath := range subpathVolumePaths { - if err := syscall.Rmdir(subpathVolumePath); err != nil { + // Remove both files and empty directories here, as the subpath may have been a bind-mount of a file or a directory. + if err := os.Remove(subpathVolumePath); err != nil { orphanVolumeErrors = append(orphanVolumeErrors, fmt.Errorf("orphaned pod %q found, but failed to rmdir() subpath at path %v: %v", uid, subpathVolumePath, err)) } else { klog.InfoS("Cleaned up orphaned volume subpath from pod", "podUID", uid, "path", subpathVolumePath) diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/cache/actual_state_of_world.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/cache/actual_state_of_world.go index ada8c4415e44..2741e459f32c 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/cache/actual_state_of_world.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/cache/actual_state_of_world.go @@ -418,6 +418,13 @@ func (asw *actualStateOfWorld) IsVolumeReconstructed(volumeName v1.UniqueVolumeN return foundPod } +func (asw *actualStateOfWorld) IsVolumeDeviceReconstructed(volumeName v1.UniqueVolumeName) bool { + asw.RLock() + defer asw.RUnlock() + _, ok := asw.foundDuringReconstruction[volumeName] + return ok +} + func (asw *actualStateOfWorld) CheckAndMarkVolumeAsUncertainViaReconstruction(opts operationexecutor.MarkVolumeOpts) (bool, error) { asw.Lock() defer asw.Unlock() @@ -710,7 +717,7 @@ func (asw *actualStateOfWorld) AddPodToVolume(markVolumeOpts operationexecutor.M // Update uncertain volumes - the new markVolumeOpts may have updated information. // Especially reconstructed volumes (marked as uncertain during reconstruction) need // an update. - updateUncertainVolume = utilfeature.DefaultFeatureGate.Enabled(features.SELinuxMountReadWriteOncePod) && podObj.volumeMountStateForPod == operationexecutor.VolumeMountUncertain + updateUncertainVolume = utilfeature.DefaultFeatureGate.Enabled(features.NewVolumeManagerReconstruction) && podObj.volumeMountStateForPod == operationexecutor.VolumeMountUncertain } if !podExists || updateUncertainVolume { // Add new mountedPod or update existing one. 
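Regarding the syscall.Rmdir to os.Remove switch in kubelet_volumes.go above: a volume subpath may have been a bind mount of either a file or a directory, and os.Remove deletes a plain file or an empty directory while still refusing a non-empty one, so mounted content stays safe. A small standalone sketch under those assumptions; the cleanupSubpath helper and the temp-dir layout are illustrative only, not the kubelet's actual paths.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// cleanupSubpath mirrors the change above: os.Remove handles both plain files
// and empty directories, whereas syscall.Rmdir only handles directories.
// Non-empty directories still return an error, so real content is never lost.
func cleanupSubpath(path string) error {
	return os.Remove(path)
}

func main() {
	dir, err := os.MkdirTemp("", "subpath-demo")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	file := filepath.Join(dir, "file-subpath") // stand-in for a file bind-mount point
	if err := os.WriteFile(file, []byte("x"), 0o600); err != nil {
		panic(err)
	}
	emptyDir := filepath.Join(dir, "dir-subpath") // stand-in for a directory mount point
	if err := os.Mkdir(emptyDir, 0o755); err != nil {
		panic(err)
	}

	fmt.Println(cleanupSubpath(file))     // <nil>: plain files are removed
	fmt.Println(cleanupSubpath(emptyDir)) // <nil>: empty directories are removed
}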
@@ -810,6 +817,7 @@ func (asw *actualStateOfWorld) SetDeviceMountState( volumeObj.seLinuxMountContext = &seLinuxMountContext } } + asw.attachedVolumes[volumeName] = volumeObj return nil } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconstruct.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconstruct.go index 71bc69e8f0b0..ee00537da90e 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconstruct.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconstruct.go @@ -80,10 +80,6 @@ func (rc *reconciler) syncStates(kubeletPodDir string) { blockVolumeMapper: reconstructedVolume.blockVolumeMapper, mounter: reconstructedVolume.mounter, } - if cachedInfo, ok := volumesNeedUpdate[reconstructedVolume.volumeName]; ok { - gvl = cachedInfo - } - gvl.addPodVolume(reconstructedVolume) if volumeInDSW { // Some pod needs the volume. And it exists on disk. Some previous // kubelet must have created the directory, therefore it must have @@ -91,6 +87,10 @@ func (rc *reconciler) syncStates(kubeletPodDir string) { // this new kubelet so reconcile() calls SetUp and re-mounts the // volume if it's necessary. volumeNeedReport = append(volumeNeedReport, reconstructedVolume.volumeName) + if cachedInfo, ok := rc.skippedDuringReconstruction[reconstructedVolume.volumeName]; ok { + gvl = cachedInfo + } + gvl.addPodVolume(reconstructedVolume) rc.skippedDuringReconstruction[reconstructedVolume.volumeName] = gvl klog.V(4).InfoS("Volume exists in desired state, marking as InUse", "podName", volume.podName, "volumeSpecName", volume.volumeSpecName) continue @@ -100,6 +100,10 @@ func (rc *reconciler) syncStates(kubeletPodDir string) { klog.InfoS("Volume is in pending operation, skip cleaning up mounts") } klog.V(2).InfoS("Reconciler sync states: could not find pod information in desired state, update it in actual state", "reconstructedVolume", reconstructedVolume) + if cachedInfo, ok := volumesNeedUpdate[reconstructedVolume.volumeName]; ok { + gvl = cachedInfo + } + gvl.addPodVolume(reconstructedVolume) volumesNeedUpdate[reconstructedVolume.volumeName] = gvl } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/winstats/perfcounter_nodestats.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/winstats/perfcounter_nodestats.go index 41bdd21168b1..992f96d7fbef 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/winstats/perfcounter_nodestats.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/kubelet/winstats/perfcounter_nodestats.go @@ -250,7 +250,8 @@ func (p *perfCounterNodeStatsClient) convertCPUValue(cpuCores int, cpuValue uint func (p *perfCounterNodeStatsClient) getCPUUsageNanoCores() uint64 { cachePeriodSeconds := uint64(defaultCachePeriod / time.Second) - cpuUsageNanoCores := (p.cpuUsageCoreNanoSecondsCache.latestValue - p.cpuUsageCoreNanoSecondsCache.previousValue) / cachePeriodSeconds + perfCounterUpdatePeriodSeconds := uint64(perfCounterUpdatePeriod / time.Second) + cpuUsageNanoCores := ((p.cpuUsageCoreNanoSecondsCache.latestValue - p.cpuUsageCoreNanoSecondsCache.previousValue) * perfCounterUpdatePeriodSeconds) / cachePeriodSeconds return cpuUsageNanoCores } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/interface.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/interface.go index d8f0f7d0e224..7d0610822d98 100644 --- 
a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/interface.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/interface.go @@ -49,7 +49,8 @@ type NodeScore struct { Score int64 } -// NodeToStatusMap declares map from node name to its status. +// NodeToStatusMap contains the statuses of the Nodes where the incoming Pod was not schedulable. +// A PostFilter plugin that uses this map should interpret absent Nodes as UnschedulableAndUnresolvable. type NodeToStatusMap map[string]*Status // NodePluginScores is a struct with node name and scores for that node. @@ -435,6 +436,8 @@ type FilterPlugin interface { type PostFilterPlugin interface { Plugin // PostFilter is called by the scheduling framework. + // If there is no entry in the NodeToStatus map, its implicit status is UnschedulableAndUnresolvable. + // // A PostFilter plugin should return one of the following statuses: // - Unschedulable: the plugin gets executed successfully but the pod cannot be made schedulable. // - Success: the plugin gets executed successfully and the pod can be made schedulable. diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodeaffinity/node_affinity.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodeaffinity/node_affinity.go index 9dcc65683c9f..ca8263d0685f 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodeaffinity/node_affinity.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodeaffinity/node_affinity.go @@ -25,13 +25,11 @@ import ( "k8s.io/apimachinery/pkg/runtime" "k8s.io/apimachinery/pkg/util/sets" "k8s.io/component-helpers/scheduling/corev1/nodeaffinity" - "k8s.io/klog/v2" "k8s.io/kubernetes/pkg/scheduler/apis/config" "k8s.io/kubernetes/pkg/scheduler/apis/config/validation" "k8s.io/kubernetes/pkg/scheduler/framework" "k8s.io/kubernetes/pkg/scheduler/framework/plugins/helper" "k8s.io/kubernetes/pkg/scheduler/framework/plugins/names" - "k8s.io/kubernetes/pkg/scheduler/util" ) // NodeAffinity is a plugin that checks if a pod node selector matches the node label. @@ -85,51 +83,10 @@ func (s *preFilterState) Clone() framework.StateData { // failed by this plugin schedulable. func (pl *NodeAffinity) EventsToRegister() []framework.ClusterEventWithHint { return []framework.ClusterEventWithHint{ - {Event: framework.ClusterEvent{Resource: framework.Node, ActionType: framework.Add | framework.Update}, QueueingHintFn: pl.isSchedulableAfterNodeChange}, + {Event: framework.ClusterEvent{Resource: framework.Node, ActionType: framework.Add | framework.Update}}, } } -// isSchedulableAfterNodeChange is invoked whenever a node changed. It checks whether -// that change made a previously unschedulable pod schedulable. 
-func (pl *NodeAffinity) isSchedulableAfterNodeChange(logger klog.Logger, pod *v1.Pod, oldObj, newObj interface{}) (framework.QueueingHint, error) { - originalNode, modifiedNode, err := util.As[*v1.Node](oldObj, newObj) - if err != nil { - return framework.Queue, err - } - - if pl.addedNodeSelector != nil && !pl.addedNodeSelector.Match(modifiedNode) { - logger.V(4).Info("added or modified node didn't match scheduler-enforced node affinity and this event won't make the Pod schedulable", "pod", klog.KObj(pod), "node", klog.KObj(modifiedNode)) - return framework.QueueSkip, nil - } - - requiredNodeAffinity := nodeaffinity.GetRequiredNodeAffinity(pod) - isMatched, err := requiredNodeAffinity.Match(modifiedNode) - if err != nil { - return framework.Queue, err - } - if !isMatched { - logger.V(4).Info("node was created or updated, but doesn't matches with the pod's NodeAffinity", "pod", klog.KObj(pod), "node", klog.KObj(modifiedNode)) - return framework.QueueSkip, nil - } - - wasMatched := false - if originalNode != nil { - wasMatched, err = requiredNodeAffinity.Match(originalNode) - if err != nil { - return framework.Queue, err - } - } - - if !wasMatched { - // This modification makes this Node match with Pod's NodeAffinity. - logger.V(4).Info("node was created or updated, and matches with the pod's NodeAffinity", "pod", klog.KObj(pod), "node", klog.KObj(modifiedNode)) - return framework.Queue, nil - } - - logger.V(4).Info("node was created or updated, but it doesn't make this pod schedulable", "pod", klog.KObj(pod), "node", klog.KObj(modifiedNode)) - return framework.QueueSkip, nil -} - // PreFilter builds and writes cycle state used by Filter. func (pl *NodeAffinity) PreFilter(ctx context.Context, cycleState *framework.CycleState, pod *v1.Pod) (*framework.PreFilterResult, *framework.Status) { affinity := pod.Spec.Affinity diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodeunschedulable/node_unschedulable.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodeunschedulable/node_unschedulable.go index 674c9390b488..125c0a691d34 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodeunschedulable/node_unschedulable.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/plugins/nodeunschedulable/node_unschedulable.go @@ -22,10 +22,8 @@ import ( v1 "k8s.io/api/core/v1" "k8s.io/apimachinery/pkg/runtime" v1helper "k8s.io/component-helpers/scheduling/corev1" - "k8s.io/klog/v2" "k8s.io/kubernetes/pkg/scheduler/framework" "k8s.io/kubernetes/pkg/scheduler/framework/plugins/names" - "k8s.io/kubernetes/pkg/scheduler/util" ) // NodeUnschedulable plugin filters nodes that set node.Spec.Unschedulable=true unless @@ -50,34 +48,10 @@ const ( // failed by this plugin schedulable. func (pl *NodeUnschedulable) EventsToRegister() []framework.ClusterEventWithHint { return []framework.ClusterEventWithHint{ - {Event: framework.ClusterEvent{Resource: framework.Node, ActionType: framework.Add | framework.UpdateNodeTaint}, QueueingHintFn: pl.isSchedulableAfterNodeChange}, + {Event: framework.ClusterEvent{Resource: framework.Node, ActionType: framework.Add | framework.UpdateNodeTaint}}, } } -// isSchedulableAfterNodeChange is invoked for all node events reported by -// an informer. It checks whether that change made a previously unschedulable -// pod schedulable. 
-func (pl *NodeUnschedulable) isSchedulableAfterNodeChange(logger klog.Logger, pod *v1.Pod, oldObj, newObj interface{}) (framework.QueueingHint, error) { - originalNode, modifiedNode, err := util.As[*v1.Node](oldObj, newObj) - if err != nil { - logger.Error(err, "unexpected objects in isSchedulableAfterNodeChange", "oldObj", oldObj, "newObj", newObj) - return framework.Queue, err - } - - originalNodeSchedulable, modifiedNodeSchedulable := false, !modifiedNode.Spec.Unschedulable - if originalNode != nil { - originalNodeSchedulable = !originalNode.Spec.Unschedulable - } - - if !originalNodeSchedulable && modifiedNodeSchedulable { - logger.V(4).Info("node was created or updated, pod may be schedulable now", "pod", klog.KObj(pod), "node", klog.KObj(modifiedNode)) - return framework.Queue, nil - } - - logger.V(4).Info("node was created or updated, but it doesn't make this pod schedulable", "pod", klog.KObj(pod), "node", klog.KObj(modifiedNode)) - return framework.QueueSkip, nil -} - // Name returns name of the plugin. It is used in logs, etc. func (pl *NodeUnschedulable) Name() string { return Name diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/preemption/preemption.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/preemption/preemption.go index 376b6337e99f..29864adb52f7 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/preemption/preemption.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/preemption/preemption.go @@ -415,15 +415,18 @@ func (ev *Evaluator) prepareCandidate(ctx context.Context, c Candidate, pod *v1. func nodesWherePreemptionMightHelp(nodes []*framework.NodeInfo, m framework.NodeToStatusMap) ([]*framework.NodeInfo, framework.NodeToStatusMap) { var potentialNodes []*framework.NodeInfo nodeStatuses := make(framework.NodeToStatusMap) + unresolvableStatus := framework.NewStatus(framework.UnschedulableAndUnresolvable, "Preemption is not helpful for scheduling") for _, node := range nodes { - name := node.Node().Name - // We rely on the status by each plugin - 'Unschedulable' or 'UnschedulableAndUnresolvable' - // to determine whether preemption may help or not on the node. - if m[name].Code() == framework.UnschedulableAndUnresolvable { - nodeStatuses[node.Node().Name] = framework.NewStatus(framework.UnschedulableAndUnresolvable, "Preemption is not helpful for scheduling") - continue + nodeName := node.Node().Name + // We only attempt preemption on nodes with status 'Unschedulable'. For + // diagnostic purposes, we propagate UnschedulableAndUnresolvable if either + // implied by absence in map or explicitly set. 
+ status, ok := m[nodeName] + if status.Code() == framework.Unschedulable { + potentialNodes = append(potentialNodes, node) + } else if !ok || status.Code() == framework.UnschedulableAndUnresolvable { + nodeStatuses[nodeName] = unresolvableStatus } - potentialNodes = append(potentialNodes, node) } return potentialNodes, nodeStatuses } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/runtime/framework.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/runtime/framework.go index 8caa8ec1cb49..3a2ba6a005ee 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/runtime/framework.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/runtime/framework.go @@ -37,6 +37,7 @@ import ( "k8s.io/kubernetes/pkg/scheduler/framework" "k8s.io/kubernetes/pkg/scheduler/framework/parallelize" "k8s.io/kubernetes/pkg/scheduler/metrics" + "k8s.io/kubernetes/pkg/util/slice" ) const ( @@ -526,7 +527,7 @@ func (f *frameworkImpl) expandMultiPointPlugins(logger klog.Logger, profile *con // - part 3: other plugins (excluded by part 1 & 2) in regular extension point. newPlugins := reflect.New(reflect.TypeOf(e.slicePtr).Elem()).Elem() // part 1 - for _, name := range enabledSet.list { + for _, name := range slice.CopyStrings(enabledSet.list) { if overridePlugins.has(name) { newPlugins = reflect.Append(newPlugins, reflect.ValueOf(pluginsMap[name])) enabledSet.delete(name) diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/types.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/types.go index 696ad9b41ac0..d85c4541aa66 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/types.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/framework/types.go @@ -281,6 +281,11 @@ type WeightedAffinityTerm struct { // Diagnosis records the details to diagnose a scheduling failure. type Diagnosis struct { + // NodeToStatusMap records the status of each retriable node (status Unschedulable) + // if they're rejected in PreFilter (via PreFilterResult) or Filter plugins. + // Nodes that pass PreFilter/Filter plugins are not included in this map. + // While this map may contain UnschedulableAndUnresolvable statuses, the absence of + // a node should be interpreted as UnschedulableAndUnresolvable. NodeToStatusMap NodeToStatusMap // UnschedulablePlugins are plugins that returns Unschedulable or UnschedulableAndUnresolvable. UnschedulablePlugins sets.Set[string] diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/queue/scheduling_queue.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/queue/scheduling_queue.go index c50d853ba6ca..83c7aeee99c7 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/queue/scheduling_queue.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/internal/queue/scheduling_queue.go @@ -1143,6 +1143,11 @@ func (p *PriorityQueue) requeuePodViaQueueingHint(logger klog.Logger, pInfo *fra func (p *PriorityQueue) movePodsToActiveOrBackoffQueue(logger klog.Logger, podInfoList []*framework.QueuedPodInfo, event framework.ClusterEvent, oldObj, newObj interface{}) { activated := false for _, pInfo := range podInfoList { + // Since there may be many gated pods and they will not move from the + // unschedulable pool, we skip calling the expensive isPodWorthRequeueing. 
+ if pInfo.Gated { + continue + } schedulingHint := p.isPodWorthRequeuing(logger, pInfo, event, oldObj, newObj) if schedulingHint == queueSkip { // QueueingHintFn determined that this Pod isn't worth putting to activeQ or backoffQ by this event. diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/metrics/metrics.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/metrics/metrics.go index d4871e70d7f8..220acad24e8f 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/metrics/metrics.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/metrics/metrics.go @@ -124,7 +124,7 @@ var ( // Start with 10ms with the last bucket being [~88m, Inf). Buckets: metrics.ExponentialBuckets(0.01, 2, 20), StabilityLevel: metrics.STABLE, - DeprecatedVersion: "1.28.0", + DeprecatedVersion: "1.29.0", }, []string{"attempts"}) diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/schedule_one.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/schedule_one.go index b9ded88732cd..397c0d2134b0 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/schedule_one.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/schedule_one.go @@ -483,19 +483,19 @@ func (sched *Scheduler) findNodesThatFitPod(ctx context.Context, fwk framework.F nodes := allNodes if !preRes.AllNodes() { nodes = make([]*framework.NodeInfo, 0, len(preRes.NodeNames)) - for n := range preRes.NodeNames { - nInfo, err := sched.nodeInfoSnapshot.NodeInfos().Get(n) - if err != nil { - return nil, diagnosis, err + for nodeName := range preRes.NodeNames { + // PreRes may return nodeName(s) which do not exist; we verify + // node exists in the Snapshot. + if nodeInfo, err := sched.nodeInfoSnapshot.Get(nodeName); err == nil { + nodes = append(nodes, nodeInfo) } - nodes = append(nodes, nInfo) } } feasibleNodes, err := sched.findNodesThatPassFilters(ctx, fwk, state, pod, &diagnosis, nodes) // always try to update the sched.nextStartNodeIndex regardless of whether an error has occurred // this is helpful to make sure that all the nodes have a chance to be searched processedNodes := len(feasibleNodes) + len(diagnosis.NodeToStatusMap) - sched.nextStartNodeIndex = (sched.nextStartNodeIndex + processedNodes) % len(nodes) + sched.nextStartNodeIndex = (sched.nextStartNodeIndex + processedNodes) % len(allNodes) if err != nil { return nil, diagnosis, err } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/scheduler.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/scheduler.go index 84b73c44954f..640f0342b897 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/scheduler.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/scheduler/scheduler.go @@ -512,7 +512,9 @@ func newPodInformer(cs clientset.Interface, resyncPeriod time.Duration) cache.Sh // The Extract workflow (i.e. `ExtractPod`) should be unused. trim := func(obj interface{}) (interface{}, error) { if accessor, err := meta.Accessor(obj); err == nil { - accessor.SetManagedFields(nil) + if accessor.GetManagedFields() != nil { + accessor.SetManagedFields(nil) + } } return obj, nil } diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/slice/slice.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/slice/slice.go new file mode 100644 index 000000000000..872fbdcad650 --- /dev/null +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/util/slice/slice.go @@ -0,0 +1,75 @@ +/* +Copyright 2015 The Kubernetes Authors. 
+ +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +// Package slice provides utility methods for common operations on slices. +package slice + +import ( + "sort" +) + +// CopyStrings copies the contents of the specified string slice +// into a new slice. +func CopyStrings(s []string) []string { + if s == nil { + return nil + } + c := make([]string, len(s)) + copy(c, s) + return c +} + +// SortStrings sorts the specified string slice in place. It returns the same +// slice that was provided in order to facilitate method chaining. +func SortStrings(s []string) []string { + sort.Strings(s) + return s +} + +// ContainsString checks if a given slice of strings contains the provided string. +// If a modifier func is provided, it is called with the slice item before the comparison. +func ContainsString(slice []string, s string, modifier func(s string) string) bool { + for _, item := range slice { + if item == s { + return true + } + if modifier != nil && modifier(item) == s { + return true + } + } + return false +} + +// RemoveString returns a newly created []string that contains all items from slice that +// are not equal to s; when a modifier func is provided, items whose modifier(item) equals s are removed as well. +func RemoveString(slice []string, s string, modifier func(s string) string) []string { + newSlice := make([]string, 0) + for _, item := range slice { + if item == s { + continue + } + if modifier != nil && modifier(item) == s { + continue + } + newSlice = append(newSlice, item) + } + if len(newSlice) == 0 { + // Sanitize for unit tests so we don't need to distinguish empty array + // and nil.
+ newSlice = nil + } + return newSlice +} diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/csi_plugin.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/csi_plugin.go index 7176c826c15a..8cfa3186d532 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/csi_plugin.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/csi_plugin.go @@ -236,6 +236,9 @@ func (p *csiPlugin) Init(host volume.VolumeHost) error { csitranslationplugins.AzureFileInTreePluginName: func() bool { return utilfeature.DefaultFeatureGate.Enabled(features.CSIMigrationAzureFile) }, + csitranslationplugins.VSphereInTreePluginName: func() bool { + return true + }, csitranslationplugins.PortworxVolumePluginName: func() bool { return utilfeature.DefaultFeatureGate.Enabled(features.CSIMigrationPortworx) }, diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/expander.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/expander.go index 408780582f11..d6aae0601013 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/expander.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/csi/expander.go @@ -72,7 +72,7 @@ func (c *csiPlugin) nodeExpandWithClient( } if !nodeExpandSet { - return false, fmt.Errorf("Expander.NodeExpand found CSI plugin %s/%s to not support node expansion", c.GetPluginName(), driverName) + return false, volumetypes.NewOperationNotSupportedError(fmt.Sprintf("NodeExpand is not supported by the CSI driver %s", driverName)) } pv := resizeOptions.VolumeSpec.PersistentVolume diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/plugins.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/plugins.go index 94c2330afc9f..dcccb56f102e 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/plugins.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/plugins.go @@ -1064,7 +1064,7 @@ func NewPersistentVolumeRecyclerPodTemplate() *v1.Pod { Containers: []v1.Container{ { Name: "pv-recycler", - Image: "registry.k8s.io/build-image/debian-base:bookworm-v1.0.0", + Image: "registry.k8s.io/build-image/debian-base:bookworm-v1.0.3", Command: []string{"/bin/sh"}, Args: []string{"-c", "test -e /scrub && find /scrub -mindepth 1 -delete && test -z \"$(ls -A /scrub)\" || exit 1"}, VolumeMounts: []v1.VolumeMount{ diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor/operation_executor.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor/operation_executor.go index f4d257b005ac..0fa04e427e40 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor/operation_executor.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor/operation_executor.go @@ -229,6 +229,10 @@ type ActualStateOfWorldMounterUpdater interface { // IsVolumeReconstructed returns true if volume currently added to actual state of the world // was found during reconstruction. IsVolumeReconstructed(volumeName v1.UniqueVolumeName, podName volumetypes.UniquePodName) bool + + // IsVolumeDeviceReconstructed returns true if volume device identified by volumeName has been + // found during reconstruction. 
+ IsVolumeDeviceReconstructed(volumeName v1.UniqueVolumeName) bool } // ActualStateOfWorldAttacherUpdater defines a set of operations updating the diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor/operation_generator.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor/operation_generator.go index a81835725642..5bfab8bb5086 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor/operation_generator.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/operationexecutor/operation_generator.go @@ -780,6 +780,12 @@ func (og *operationGenerator) checkForFailedMount(volumeToMount VolumeToMount, m func (og *operationGenerator) markDeviceErrorState(volumeToMount VolumeToMount, devicePath, deviceMountPath string, mountError error, actualStateOfWorld ActualStateOfWorldMounterUpdater) { if volumetypes.IsOperationFinishedError(mountError) && actualStateOfWorld.GetDeviceMountState(volumeToMount.VolumeName) == DeviceMountUncertain { + + if actualStateOfWorld.IsVolumeDeviceReconstructed(volumeToMount.VolumeName) { + klog.V(2).InfoS("MountVolume.markDeviceErrorState leaving volume uncertain", "volumeName", volumeToMount.VolumeName) + return + } + // Only devices which were uncertain can be marked as unmounted markDeviceUnmountError := actualStateOfWorld.MarkDeviceAsUnmounted(volumeToMount.VolumeName) if markDeviceUnmountError != nil { @@ -2213,6 +2219,14 @@ func (og *operationGenerator) legacyCallNodeExpandOnPlugin(resizeOp nodeResizeOp _, resizeErr := expandableVolumePlugin.NodeExpand(rsOpts) if resizeErr != nil { + // This is a workaround for now, until RecoverFromVolumeExpansionFailure feature goes GA. + // If RecoverFromVolumeExpansionFailure feature is enabled, we will not ever hit this state, because + // we will wait for VolumeExpansionPendingOnNode before trying to expand volume in kubelet. + if volumetypes.IsOperationNotSupportedError(resizeErr) { + klog.V(4).InfoS(volumeToMount.GenerateMsgDetailed("MountVolume.NodeExpandVolume failed", "NodeExpandVolume not supported"), "pod", klog.KObj(volumeToMount.Pod)) + return true, nil + } + // if driver returned FailedPrecondition error that means // volume expansion should not be retried on this node but // expansion operation should not block mounting diff --git a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/types/types.go b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/types/types.go index af309353ba75..238e919b331c 100644 --- a/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/types/types.go +++ b/cluster-autoscaler/vendor/k8s.io/kubernetes/pkg/volume/util/types/types.go @@ -102,6 +102,23 @@ func IsFailedPreconditionError(err error) bool { return errors.As(err, &failedPreconditionError) } +type OperationNotSupported struct { + msg string +} + +func (err *OperationNotSupported) Error() string { + return err.msg +} + +func NewOperationNotSupportedError(msg string) *OperationNotSupported { + return &OperationNotSupported{msg: msg} +} + +func IsOperationNotSupportedError(err error) bool { + var operationNotSupportedError *OperationNotSupported + return errors.As(err, &operationNotSupportedError) +} + // TransientOperationFailure indicates operation failed with a transient error // and may fix itself when retried. 
type TransientOperationFailure struct { diff --git a/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/azure/azure_managedDiskController.go b/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/azure/azure_managedDiskController.go index b31f73203356..76ea737e6946 100644 --- a/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/azure/azure_managedDiskController.go +++ b/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/azure/azure_managedDiskController.go @@ -364,6 +364,11 @@ func (c *Cloud) GetAzureDiskLabels(diskURI string) (map[string]string, error) { return nil, err } + if c.Location == "" { + // The cloud provider is not initialized and cannot get the topology labels. + return nil, nil + } + labels := map[string]string{ v1.LabelTopologyRegion: c.Location, } diff --git a/cluster-autoscaler/vendor/modules.txt b/cluster-autoscaler/vendor/modules.txt index f1306e68d164..d3793465c5a9 100644 --- a/cluster-autoscaler/vendor/modules.txt +++ b/cluster-autoscaler/vendor/modules.txt @@ -21,6 +21,50 @@ github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2021-02-01/storage github.com/Azure/azure-sdk-for-go/services/storage/mgmt/2021-09-01/storage github.com/Azure/azure-sdk-for-go/storage github.com/Azure/azure-sdk-for-go/version +# github.com/Azure/azure-sdk-for-go-extensions v0.1.6 +## explicit; go 1.19 +github.com/Azure/azure-sdk-for-go-extensions/pkg/middleware +# github.com/Azure/azure-sdk-for-go/sdk/azcore v1.9.2 +## explicit; go 1.18 +github.com/Azure/azure-sdk-for-go/sdk/azcore +github.com/Azure/azure-sdk-for-go/sdk/azcore/arm +github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/internal/resource +github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/policy +github.com/Azure/azure-sdk-for-go/sdk/azcore/arm/runtime +github.com/Azure/azure-sdk-for-go/sdk/azcore/cloud +github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/exported +github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/log +github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers +github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/async +github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/body +github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/fake +github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/loc +github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/pollers/op +github.com/Azure/azure-sdk-for-go/sdk/azcore/internal/shared +github.com/Azure/azure-sdk-for-go/sdk/azcore/log +github.com/Azure/azure-sdk-for-go/sdk/azcore/policy +github.com/Azure/azure-sdk-for-go/sdk/azcore/runtime +github.com/Azure/azure-sdk-for-go/sdk/azcore/streaming +github.com/Azure/azure-sdk-for-go/sdk/azcore/to +github.com/Azure/azure-sdk-for-go/sdk/azcore/tracing +# github.com/Azure/azure-sdk-for-go/sdk/azidentity v1.4.0 +## explicit; go 1.18 +github.com/Azure/azure-sdk-for-go/sdk/azidentity +# github.com/Azure/azure-sdk-for-go/sdk/internal v1.5.2 +## explicit; go 1.18 +github.com/Azure/azure-sdk-for-go/sdk/internal/diag +github.com/Azure/azure-sdk-for-go/sdk/internal/errorinfo +github.com/Azure/azure-sdk-for-go/sdk/internal/exported +github.com/Azure/azure-sdk-for-go/sdk/internal/log +github.com/Azure/azure-sdk-for-go/sdk/internal/poller +github.com/Azure/azure-sdk-for-go/sdk/internal/temporal +github.com/Azure/azure-sdk-for-go/sdk/internal/uuid +# github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4 v4.8.0-beta.1 +## explicit; go 1.18 +github.com/Azure/azure-sdk-for-go/sdk/resourcemanager/containerservice/armcontainerservice/v4 +# 
github.com/Azure/go-armbalancer v0.0.2 +## explicit; go 1.18 +github.com/Azure/go-armbalancer # github.com/Azure/go-autorest v14.2.0+incompatible ## explicit github.com/Azure/go-autorest @@ -58,6 +102,29 @@ github.com/Azure/go-autorest/tracing # github.com/Azure/skewer v0.0.14 ## explicit; go 1.13 github.com/Azure/skewer +# github.com/AzureAD/microsoft-authentication-library-for-go v1.1.1 +## explicit; go 1.18 +github.com/AzureAD/microsoft-authentication-library-for-go/apps/cache +github.com/AzureAD/microsoft-authentication-library-for-go/apps/confidential +github.com/AzureAD/microsoft-authentication-library-for-go/apps/errors +github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base +github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/base/internal/storage +github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/exported +github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json +github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/json/types/time +github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/local +github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth +github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops +github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/accesstokens +github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/authority +github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/comm +github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/internal/grant +github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust +github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/oauth/ops/wstrust/defs +github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/options +github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/shared +github.com/AzureAD/microsoft-authentication-library-for-go/apps/internal/version +github.com/AzureAD/microsoft-authentication-library-for-go/apps/public # github.com/GoogleCloudPlatform/k8s-cloud-provider v1.18.1-0.20220218231025-f11817397a1b ## explicit; go 1.13 github.com/GoogleCloudPlatform/k8s-cloud-provider/pkg/cloud @@ -231,7 +298,7 @@ github.com/euank/go-kmsg-parser/kmsgparser # github.com/evanphx/json-patch v5.6.0+incompatible ## explicit github.com/evanphx/json-patch -# github.com/felixge/httpsnoop v1.0.3 +# github.com/felixge/httpsnoop v1.0.4 ## explicit; go 1.13 github.com/felixge/httpsnoop # github.com/fsnotify/fsnotify v1.7.0 @@ -299,6 +366,9 @@ github.com/gogo/protobuf/types # github.com/golang-jwt/jwt/v4 v4.5.0 ## explicit; go 1.16 github.com/golang-jwt/jwt/v4 +# github.com/golang-jwt/jwt/v5 v5.0.0 +## explicit; go 1.18 +github.com/golang-jwt/jwt/v5 # github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da ## explicit github.com/golang/groupcache/lru @@ -479,6 +549,10 @@ github.com/json-iterator/go # github.com/karrick/godirwalk v1.17.0 ## explicit; go 1.13 github.com/karrick/godirwalk +# github.com/kylelemons/godebug v1.1.0 +## explicit; go 1.11 +github.com/kylelemons/godebug/diff +github.com/kylelemons/godebug/pretty # github.com/libopenstorage/openstorage v1.0.0 ## explicit github.com/libopenstorage/openstorage/api @@ -601,6 +675,9 @@ github.com/opencontainers/runtime-spec/specs-go github.com/opencontainers/selinux/go-selinux 
github.com/opencontainers/selinux/go-selinux/label github.com/opencontainers/selinux/pkg/pwalkdir +# github.com/pkg/browser v0.0.0-20210911075715-681adbf594b8 +## explicit; go 1.14 +github.com/pkg/browser # github.com/pkg/errors v0.9.1 ## explicit github.com/pkg/errors @@ -748,11 +825,11 @@ go.opentelemetry.io/contrib/instrumentation/github.com/emicklei/go-restful/otelr ## explicit; go 1.19 go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc/internal -# go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.44.0 -## explicit; go 1.19 +# go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.46.1 +## explicit; go 1.20 go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp/internal/semconvutil -# go.opentelemetry.io/otel v1.19.0 +# go.opentelemetry.io/otel v1.21.0 ## explicit; go 1.20 go.opentelemetry.io/otel go.opentelemetry.io/otel/attribute @@ -769,22 +846,22 @@ go.opentelemetry.io/otel/semconv/v1.12.0 go.opentelemetry.io/otel/semconv/v1.17.0 go.opentelemetry.io/otel/semconv/v1.17.0/httpconv go.opentelemetry.io/otel/semconv/v1.21.0 -# go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.19.0 +# go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.21.0 ## explicit; go 1.20 go.opentelemetry.io/otel/exporters/otlp/otlptrace go.opentelemetry.io/otel/exporters/otlp/otlptrace/internal/tracetransform -# go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.19.0 +# go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc v1.21.0 ## explicit; go 1.20 go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/envconfig go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/otlpconfig go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc/internal/retry -# go.opentelemetry.io/otel/metric v1.19.0 +# go.opentelemetry.io/otel/metric v1.21.0 ## explicit; go 1.20 go.opentelemetry.io/otel/metric go.opentelemetry.io/otel/metric/embedded -# go.opentelemetry.io/otel/sdk v1.19.0 +# go.opentelemetry.io/otel/sdk v1.21.0 ## explicit; go 1.20 go.opentelemetry.io/otel/sdk go.opentelemetry.io/otel/sdk/instrumentation @@ -792,9 +869,11 @@ go.opentelemetry.io/otel/sdk/internal go.opentelemetry.io/otel/sdk/internal/env go.opentelemetry.io/otel/sdk/resource go.opentelemetry.io/otel/sdk/trace -# go.opentelemetry.io/otel/trace v1.19.0 +# go.opentelemetry.io/otel/trace v1.21.0 ## explicit; go 1.20 go.opentelemetry.io/otel/trace +go.opentelemetry.io/otel/trace/embedded +go.opentelemetry.io/otel/trace/noop # go.opentelemetry.io/proto/otlp v1.0.0 ## explicit; go 1.17 go.opentelemetry.io/proto/otlp/collector/trace/v1 @@ -804,6 +883,10 @@ go.opentelemetry.io/proto/otlp/trace/v1 # go.uber.org/atomic v1.10.0 ## explicit; go 1.18 go.uber.org/atomic +# go.uber.org/mock v0.4.0 +## explicit; go 1.20 +go.uber.org/mock/gomock +go.uber.org/mock/mockgen/model # go.uber.org/multierr v1.11.0 ## explicit; go 1.19 go.uber.org/multierr @@ -817,7 +900,7 @@ go.uber.org/zap/internal/color go.uber.org/zap/internal/exit go.uber.org/zap/zapcore go.uber.org/zap/zapgrpc -# golang.org/x/crypto v0.16.0 +# golang.org/x/crypto v0.21.0 ## explicit; go 1.18 golang.org/x/crypto/chacha20 golang.org/x/crypto/chacha20poly1305 @@ -839,7 +922,7 @@ golang.org/x/exp/slices 
golang.org/x/mod/internal/lazyregexp golang.org/x/mod/module golang.org/x/mod/semver -# golang.org/x/net v0.19.0 +# golang.org/x/net v0.23.0 ## explicit; go 1.18 golang.org/x/net/bpf golang.org/x/net/context @@ -855,8 +938,8 @@ golang.org/x/net/internal/timeseries golang.org/x/net/proxy golang.org/x/net/trace golang.org/x/net/websocket -# golang.org/x/oauth2 v0.10.0 -## explicit; go 1.17 +# golang.org/x/oauth2 v0.11.0 +## explicit; go 1.18 golang.org/x/oauth2 golang.org/x/oauth2/authhandler golang.org/x/oauth2/clientcredentials @@ -868,7 +951,7 @@ golang.org/x/oauth2/jwt # golang.org/x/sync v0.5.0 ## explicit; go 1.18 golang.org/x/sync/singleflight -# golang.org/x/sys v0.15.0 +# golang.org/x/sys v0.18.0 ## explicit; go 1.18 golang.org/x/sys/cpu golang.org/x/sys/plan9 @@ -876,7 +959,7 @@ golang.org/x/sys/unix golang.org/x/sys/windows golang.org/x/sys/windows/registry golang.org/x/sys/windows/svc -# golang.org/x/term v0.15.0 +# golang.org/x/term v0.18.0 ## explicit; go 1.18 golang.org/x/term # golang.org/x/text v0.14.0 @@ -968,10 +1051,10 @@ google.golang.org/appengine/internal/modules google.golang.org/appengine/internal/remote_api google.golang.org/appengine/internal/urlfetch google.golang.org/appengine/urlfetch -# google.golang.org/genproto v0.0.0-20230803162519-f966b187b2e5 +# google.golang.org/genproto v0.0.0-20230822172742-b8732ec3820d ## explicit; go 1.19 google.golang.org/genproto/internal -# google.golang.org/genproto/googleapis/api v0.0.0-20230726155614-23370e0ffb3e +# google.golang.org/genproto/googleapis/api v0.0.0-20230822172742-b8732ec3820d ## explicit; go 1.19 google.golang.org/genproto/googleapis/api google.golang.org/genproto/googleapis/api/annotations @@ -982,7 +1065,7 @@ google.golang.org/genproto/googleapis/api/httpbody google.golang.org/genproto/googleapis/rpc/code google.golang.org/genproto/googleapis/rpc/errdetails google.golang.org/genproto/googleapis/rpc/status -# google.golang.org/grpc v1.58.3 +# google.golang.org/grpc v1.59.0 ## explicit; go 1.19 google.golang.org/grpc google.golang.org/grpc/attributes @@ -1098,7 +1181,7 @@ gopkg.in/yaml.v2 # gopkg.in/yaml.v3 v3.0.1 ## explicit gopkg.in/yaml.v3 -# k8s.io/api v0.29.3 => k8s.io/api v0.29.0 +# k8s.io/api v0.29.6 => k8s.io/api v0.29.6 ## explicit; go 1.21 k8s.io/api/admission/v1 k8s.io/api/admission/v1beta1 @@ -1154,10 +1237,10 @@ k8s.io/api/scheduling/v1beta1 k8s.io/api/storage/v1 k8s.io/api/storage/v1alpha1 k8s.io/api/storage/v1beta1 -# k8s.io/apiextensions-apiserver v0.29.3 => k8s.io/apiextensions-apiserver v0.29.0 +# k8s.io/apiextensions-apiserver v0.29.3 => k8s.io/apiextensions-apiserver v0.29.6 ## explicit; go 1.21 k8s.io/apiextensions-apiserver/pkg/features -# k8s.io/apimachinery v0.29.3 => k8s.io/apimachinery v0.29.0 +# k8s.io/apimachinery v0.29.6 => k8s.io/apimachinery v0.29.6 ## explicit; go 1.21 k8s.io/apimachinery/pkg/api/equality k8s.io/apimachinery/pkg/api/errors @@ -1220,7 +1303,7 @@ k8s.io/apimachinery/pkg/watch k8s.io/apimachinery/third_party/forked/golang/json k8s.io/apimachinery/third_party/forked/golang/netutil k8s.io/apimachinery/third_party/forked/golang/reflect -# k8s.io/apiserver v0.29.3 => k8s.io/apiserver v0.29.0 +# k8s.io/apiserver v0.29.6 => k8s.io/apiserver v0.29.6 ## explicit; go 1.21 k8s.io/apiserver/pkg/admission k8s.io/apiserver/pkg/admission/cel @@ -1368,7 +1451,7 @@ k8s.io/apiserver/plugin/pkg/audit/truncate k8s.io/apiserver/plugin/pkg/audit/webhook k8s.io/apiserver/plugin/pkg/authenticator/token/webhook k8s.io/apiserver/plugin/pkg/authorizer/webhook -# k8s.io/client-go v0.29.3 => 
k8s.io/client-go v0.29.0 +# k8s.io/client-go v0.29.6 => k8s.io/client-go v0.29.6 ## explicit; go 1.21 k8s.io/client-go/applyconfigurations/admissionregistration/v1 k8s.io/client-go/applyconfigurations/admissionregistration/v1alpha1 @@ -1696,7 +1779,7 @@ k8s.io/client-go/util/homedir k8s.io/client-go/util/keyutil k8s.io/client-go/util/retry k8s.io/client-go/util/workqueue -# k8s.io/cloud-provider v0.29.3 => k8s.io/cloud-provider v0.29.0 +# k8s.io/cloud-provider v0.29.6 => k8s.io/cloud-provider v0.29.6 ## explicit; go 1.21 k8s.io/cloud-provider k8s.io/cloud-provider/api @@ -1719,7 +1802,7 @@ k8s.io/cloud-provider/volume/helpers # k8s.io/cloud-provider-aws v1.27.0 ## explicit; go 1.20 k8s.io/cloud-provider-aws/pkg/providers/v1 -# k8s.io/code-generator v0.29.3 => k8s.io/code-generator v0.29.0 +# k8s.io/code-generator v0.29.6 => k8s.io/code-generator v0.29.6 ## explicit; go 1.21 k8s.io/code-generator k8s.io/code-generator/cmd/applyconfiguration-gen @@ -1757,7 +1840,7 @@ k8s.io/code-generator/cmd/set-gen k8s.io/code-generator/pkg/namer k8s.io/code-generator/pkg/util k8s.io/code-generator/third_party/forked/golang/reflect -# k8s.io/component-base v0.29.3 => k8s.io/component-base v0.29.0 +# k8s.io/component-base v0.29.6 => k8s.io/component-base v0.29.6 ## explicit; go 1.21 k8s.io/component-base/cli/flag k8s.io/component-base/codec @@ -1787,7 +1870,7 @@ k8s.io/component-base/tracing k8s.io/component-base/tracing/api/v1 k8s.io/component-base/version k8s.io/component-base/version/verflag -# k8s.io/component-helpers v0.29.3 => k8s.io/component-helpers v0.29.0 +# k8s.io/component-helpers v0.29.6 => k8s.io/component-helpers v0.29.6 ## explicit; go 1.21 k8s.io/component-helpers/apimachinery/lease k8s.io/component-helpers/node/topology @@ -1797,7 +1880,7 @@ k8s.io/component-helpers/scheduling/corev1 k8s.io/component-helpers/scheduling/corev1/nodeaffinity k8s.io/component-helpers/storage/ephemeral k8s.io/component-helpers/storage/volume -# k8s.io/controller-manager v0.29.3 => k8s.io/controller-manager v0.29.0 +# k8s.io/controller-manager v0.29.6 => k8s.io/controller-manager v0.29.6 ## explicit; go 1.21 k8s.io/controller-manager/config k8s.io/controller-manager/config/v1 @@ -1809,16 +1892,16 @@ k8s.io/controller-manager/pkg/features k8s.io/controller-manager/pkg/features/register k8s.io/controller-manager/pkg/leadermigration/config k8s.io/controller-manager/pkg/leadermigration/options -# k8s.io/cri-api v0.29.3 => k8s.io/cri-api v0.29.0 +# k8s.io/cri-api v0.29.6 => k8s.io/cri-api v0.29.6 ## explicit; go 1.21 k8s.io/cri-api/pkg/apis k8s.io/cri-api/pkg/apis/runtime/v1 k8s.io/cri-api/pkg/errors -# k8s.io/csi-translation-lib v0.27.0 => k8s.io/csi-translation-lib v0.29.0 +# k8s.io/csi-translation-lib v0.27.0 => k8s.io/csi-translation-lib v0.29.6 ## explicit; go 1.21 k8s.io/csi-translation-lib k8s.io/csi-translation-lib/plugins -# k8s.io/dynamic-resource-allocation v0.0.0 => k8s.io/dynamic-resource-allocation v0.29.0 +# k8s.io/dynamic-resource-allocation v0.0.0 => k8s.io/dynamic-resource-allocation v0.29.6 ## explicit; go 1.21 k8s.io/dynamic-resource-allocation/resourceclaim # k8s.io/gengo v0.0.0-20230829151522-9cce18d56c01 @@ -1842,7 +1925,7 @@ k8s.io/klog/v2/internal/dbg k8s.io/klog/v2/internal/serialize k8s.io/klog/v2/internal/severity k8s.io/klog/v2/internal/sloghandler -# k8s.io/kms v0.29.3 => k8s.io/kms v0.29.0 +# k8s.io/kms v0.29.6 => k8s.io/kms v0.29.6 ## explicit; go 1.21 k8s.io/kms/apis/v1beta1 k8s.io/kms/apis/v2 @@ -1873,14 +1956,14 @@ k8s.io/kube-openapi/pkg/validation/errors 
k8s.io/kube-openapi/pkg/validation/spec k8s.io/kube-openapi/pkg/validation/strfmt k8s.io/kube-openapi/pkg/validation/strfmt/bson -# k8s.io/kube-scheduler v0.0.0 => k8s.io/kube-scheduler v0.29.0 +# k8s.io/kube-scheduler v0.0.0 => k8s.io/kube-scheduler v0.29.6 ## explicit; go 1.21 k8s.io/kube-scheduler/config/v1 k8s.io/kube-scheduler/extender/v1 -# k8s.io/kubectl v0.28.0 => k8s.io/kubectl v0.29.0 +# k8s.io/kubectl v0.28.0 => k8s.io/kubectl v0.29.6 ## explicit; go 1.21 k8s.io/kubectl/pkg/scale -# k8s.io/kubelet v0.29.3 => k8s.io/kubelet v0.29.0 +# k8s.io/kubelet v0.29.6 => k8s.io/kubelet v0.29.6 ## explicit; go 1.21 k8s.io/kubelet/config/v1 k8s.io/kubelet/config/v1alpha1 @@ -1902,7 +1985,7 @@ k8s.io/kubelet/pkg/cri/streaming k8s.io/kubelet/pkg/cri/streaming/portforward k8s.io/kubelet/pkg/cri/streaming/remotecommand k8s.io/kubelet/pkg/types -# k8s.io/kubernetes v1.29.0 +# k8s.io/kubernetes v1.29.6 ## explicit; go 1.21 k8s.io/kubernetes/cmd/kubelet/app k8s.io/kubernetes/cmd/kubelet/app/options @@ -2111,6 +2194,7 @@ k8s.io/kubernetes/pkg/util/parsers k8s.io/kubernetes/pkg/util/pod k8s.io/kubernetes/pkg/util/removeall k8s.io/kubernetes/pkg/util/rlimit +k8s.io/kubernetes/pkg/util/slice k8s.io/kubernetes/pkg/util/tail k8s.io/kubernetes/pkg/util/taints k8s.io/kubernetes/pkg/volume @@ -2148,7 +2232,7 @@ k8s.io/kubernetes/pkg/volume/validation k8s.io/kubernetes/pkg/windows/service k8s.io/kubernetes/test/utils k8s.io/kubernetes/third_party/forked/golang/expansion -# k8s.io/legacy-cloud-providers v0.0.0 => k8s.io/legacy-cloud-providers v0.29.0 +# k8s.io/legacy-cloud-providers v0.0.0 => k8s.io/legacy-cloud-providers v0.29.6 ## explicit; go 1.21 k8s.io/legacy-cloud-providers/azure k8s.io/legacy-cloud-providers/azure/auth @@ -2190,7 +2274,7 @@ k8s.io/legacy-cloud-providers/gce/gcpcredential k8s.io/legacy-cloud-providers/vsphere k8s.io/legacy-cloud-providers/vsphere/vclib k8s.io/legacy-cloud-providers/vsphere/vclib/diskmanagers -# k8s.io/mount-utils v0.26.0-alpha.0 => k8s.io/mount-utils v0.30.0-alpha.0 +# k8s.io/mount-utils v0.26.0-alpha.0 => k8s.io/mount-utils v0.29.6 ## explicit; go 1.21 k8s.io/mount-utils # k8s.io/utils v0.0.0-20230726121419-3b25d923346b @@ -2290,34 +2374,33 @@ sigs.k8s.io/yaml # github.com/aws/aws-sdk-go/service/eks => github.com/aws/aws-sdk-go/service/eks v1.38.49 # github.com/digitalocean/godo => github.com/digitalocean/godo v1.27.0 # github.com/rancher/go-rancher => github.com/rancher/go-rancher v0.1.0 -# k8s.io/api => k8s.io/api v0.29.0 -# k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.29.0 -# k8s.io/apimachinery => k8s.io/apimachinery v0.29.0 -# k8s.io/apiserver => k8s.io/apiserver v0.29.0 -# k8s.io/cli-runtime => k8s.io/cli-runtime v0.29.0 -# k8s.io/client-go => k8s.io/client-go v0.29.0 -# k8s.io/cloud-provider => k8s.io/cloud-provider v0.29.0 -# k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.29.0 -# k8s.io/code-generator => k8s.io/code-generator v0.29.0 -# k8s.io/component-base => k8s.io/component-base v0.29.0 -# k8s.io/component-helpers => k8s.io/component-helpers v0.29.0 -# k8s.io/controller-manager => k8s.io/controller-manager v0.29.0 -# k8s.io/cri-api => k8s.io/cri-api v0.29.0 -# k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.29.0 -# k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.29.0 -# k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.29.0 -# k8s.io/kube-proxy => k8s.io/kube-proxy v0.29.0 -# k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.29.0 -# k8s.io/kubectl => k8s.io/kubectl v0.29.0 -# k8s.io/kubelet => 
k8s.io/kubelet v0.29.0 -# k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.29.0 -# k8s.io/metrics => k8s.io/metrics v0.29.0 -# k8s.io/mount-utils => k8s.io/mount-utils v0.30.0-alpha.0 -# k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.29.0 -# k8s.io/sample-cli-plugin => k8s.io/sample-cli-plugin v0.29.0 -# k8s.io/sample-controller => k8s.io/sample-controller v0.29.0 -# k8s.io/pod-security-admission => k8s.io/pod-security-admission v0.29.0 -# k8s.io/dynamic-resource-allocation => k8s.io/dynamic-resource-allocation v0.29.0 -# k8s.io/kms => k8s.io/kms v0.29.0 -# k8s.io/noderesourcetopology-api => k8s.io/noderesourcetopology-api v0.27.0 -# k8s.io/endpointslice => k8s.io/endpointslice v0.29.0 +# k8s.io/api => k8s.io/api v0.29.6 +# k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.29.6 +# k8s.io/apimachinery => k8s.io/apimachinery v0.29.6 +# k8s.io/apiserver => k8s.io/apiserver v0.29.6 +# k8s.io/cli-runtime => k8s.io/cli-runtime v0.29.6 +# k8s.io/client-go => k8s.io/client-go v0.29.6 +# k8s.io/cloud-provider => k8s.io/cloud-provider v0.29.6 +# k8s.io/cluster-bootstrap => k8s.io/cluster-bootstrap v0.29.6 +# k8s.io/code-generator => k8s.io/code-generator v0.29.6 +# k8s.io/component-base => k8s.io/component-base v0.29.6 +# k8s.io/component-helpers => k8s.io/component-helpers v0.29.6 +# k8s.io/controller-manager => k8s.io/controller-manager v0.29.6 +# k8s.io/cri-api => k8s.io/cri-api v0.29.6 +# k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.29.6 +# k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.29.6 +# k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.29.6 +# k8s.io/kube-proxy => k8s.io/kube-proxy v0.29.6 +# k8s.io/kube-scheduler => k8s.io/kube-scheduler v0.29.6 +# k8s.io/kubectl => k8s.io/kubectl v0.29.6 +# k8s.io/kubelet => k8s.io/kubelet v0.29.6 +# k8s.io/legacy-cloud-providers => k8s.io/legacy-cloud-providers v0.29.6 +# k8s.io/metrics => k8s.io/metrics v0.29.6 +# k8s.io/mount-utils => k8s.io/mount-utils v0.29.6 +# k8s.io/sample-apiserver => k8s.io/sample-apiserver v0.29.6 +# k8s.io/sample-cli-plugin => k8s.io/sample-cli-plugin v0.29.6 +# k8s.io/sample-controller => k8s.io/sample-controller v0.29.6 +# k8s.io/pod-security-admission => k8s.io/pod-security-admission v0.29.6 +# k8s.io/dynamic-resource-allocation => k8s.io/dynamic-resource-allocation v0.29.6 +# k8s.io/kms => k8s.io/kms v0.29.6 +# k8s.io/endpointslice => k8s.io/endpointslice v0.29.6 diff --git a/cluster-autoscaler/version/version.go b/cluster-autoscaler/version/version.go index 03e7668318f4..3329ac75d566 100644 --- a/cluster-autoscaler/version/version.go +++ b/cluster-autoscaler/version/version.go @@ -17,4 +17,4 @@ limitations under the License. package version // ClusterAutoscalerVersion contains version of CA. -const ClusterAutoscalerVersion = "1.29.2" +const ClusterAutoscalerVersion = "1.29.4" diff --git a/vertical-pod-autoscaler/e2e/vendor/k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go b/vertical-pod-autoscaler/e2e/vendor/k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go index 5e9b6adb739a..cd3eba2ea125 100644 --- a/vertical-pod-autoscaler/e2e/vendor/k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go +++ b/vertical-pod-autoscaler/e2e/vendor/k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1/types.go @@ -126,7 +126,7 @@ type EvictionRequirement struct { // Resources is a list of one or more resources that the condition applies // to. 
If more than one resource is given, the EvictionRequirement is fulfilled // if at least one resource meets `changeRequirement`. - Resources []v1.ResourceName `json:"resource" protobuf:"bytes,1,name=resources"` + Resources []v1.ResourceName `json:"resources" protobuf:"bytes,1,name=resources"` ChangeRequirement EvictionChangeRequirement `json:"changeRequirement" protobuf:"bytes,2,name=changeRequirement"` }
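For reference, a minimal standalone sketch of why the tag in the hunk above must be plural: the generated CRD YAML and the docs both use the key `evictionRequirements[].resources`, so the Go JSON tag has to match for objects to round-trip. The types below are simplified stand-ins for the vendored autoscaling.k8s.io/v1 definitions, not the real ones.

package main

import (
	"encoding/json"
	"fmt"
)

// EvictionChangeRequirement and EvictionRequirement are reduced stand-ins
// for the vendored autoscaling.k8s.io/v1 types, kept only to show the tag.
type EvictionChangeRequirement string

type EvictionRequirement struct {
	// Plural tag, matching the documented YAML key.
	Resources         []string                  `json:"resources"`
	ChangeRequirement EvictionChangeRequirement `json:"changeRequirement"`
}

func main() {
	er := EvictionRequirement{
		Resources:         []string{"cpu", "memory"},
		ChangeRequirement: "TargetHigherThanRequests",
	}
	out, err := json.Marshal(er)
	if err != nil {
		panic(err)
	}
	// Prints: {"resources":["cpu","memory"],"changeRequirement":"TargetHigherThanRequests"}
	fmt.Println(string(out))
}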
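Similarly, the OperationNotSupported error added to pkg/volume/util/types earlier in this patch is detected through the standard errors.As chain-walking idiom. A minimal sketch of that pattern, using a simplified stand-in type rather than the vendored one:

package main

import (
	"errors"
	"fmt"
)

// OperationNotSupported mirrors the shape of the error type added in this
// patch; it is a stand-in for illustration, not the vendored definition.
type OperationNotSupported struct{ msg string }

func (e *OperationNotSupported) Error() string { return e.msg }

// IsOperationNotSupportedError reports whether err, or any error it wraps,
// is an *OperationNotSupported.
func IsOperationNotSupportedError(err error) bool {
	var target *OperationNotSupported
	return errors.As(err, &target)
}

func main() {
	// Wrapping with %w preserves detectability: errors.As walks the chain,
	// which is how the node-expansion path can classify this failure and
	// return early instead of retrying.
	err := fmt.Errorf("MountVolume.NodeExpandVolume failed: %w",
		&OperationNotSupported{msg: "NodeExpand is not supported by the CSI driver"})
	fmt.Println(IsOperationNotSupportedError(err))             // true
	fmt.Println(IsOperationNotSupportedError(errors.New("x"))) // false
}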