diff --git a/.github/workflows/e2e.yml b/.github/workflows/e2e.yml
new file mode 100644
index 000000000..5fb84acd6
--- /dev/null
+++ b/.github/workflows/e2e.yml
@@ -0,0 +1,22 @@
+name: E2E Workflow
+on:
+  push:
+  pull_request:
+jobs:
+  e2e:
+    name: E2E test
+    needs: build
+    runs-on: ubuntu-22.04
+    steps:
+      - name: Checkout code
+        uses: actions/checkout@v3
+        with:
+          fetch-depth: 0
+      - name: Run e2e test
+        run: ./hack/rune2e.sh
+      - name: Upload logs
+        uses: actions/upload-artifact@v3
+        if: failure()
+        with:
+          name: kosmos-e2e-logs-${{ github.run_id }}
+          path: ${{ github.workspace }}/e2e-test/logs-*
diff --git a/README.md b/README.md
index b8d9a954f..81095d400 100644
--- a/README.md
+++ b/README.md
@@ -23,38 +23,17 @@ The Kosmos ClusterLink module currently includes the following key components:
 - `Multi-Cluster-Coredns`: Implements multi-cluster service discovery.
 - `Elector`: Elects the gateway node.
 
-### Quick Start
-
-#### Start Locally
-
-The following command allows you to quickly run an experimental environment locally. This command will use kind (so Docker needs to be installed firstly) to create two Kubernetes clusters and deploy ClusterLink.
-```Shell
-./hack/local-up-clusterlink.sh
-```
-
-Verify if the service is running smoothly.
-```Shell
-kubectl --context=kind-cluster-host-local get pods -nclusterlink-system
-kubectl --context=kind-cluster-member1-local get pods -nclusterlink-system
-```
-
-Verify if the cross-cluster network is connected.
-```Shell
-kubectl --context=kind-cluster-host-local exec -it -- ping
-```
-
-
 ## ClusterTree
 
 The Kosmos clustertree module realizes the tree-like scaling of Kubernetes and achieves cross-cluster orchestration of applications.
 
-
+
 Currently, it primarily supports the following abilities:
 
 1. **Full Compatibility with k8s API**: Users can interact with the host cluster's `kube-apiserver` using tools like `kubectl`, `client-go`, and others just like they normally would. However, the `Pods` are actually distributed across the entire multi-cloud, multi-cluster environment.
 2. **Support for Stateful and k8s-native Applications**: In addition to stateless applications, Kosmos also facilitates the orchestration of stateful applications and k8s-native applications (interacting with `kube-apiserver`). Kosmos will automatically detect the storage and permission resources that `Pods` depend on, such as pv/pvc, sa, etc., and perform automatic bidirectional synchronization.
 3. **Diverse Pod Topology Constraints**: Users can easily control the distribution of Pods within the global clusters, such as by region, availability zone, cluster, or node. This helps achieve high availability and improve resource utilization.
 
-## Scheduler (Under Construction)
+## Scheduler
 
 The Kosmos scheduling module is an extension developed on top of the Kubernetes scheduling framework, aiming to meet the container management needs in mixed-node and sub-cluster environments. It provides the following core features to enhance the flexibility and efficiency of container management:
 
@@ -65,8 +44,21 @@ The Kosmos scheduling module is an extension developed on top of the Kubernetes
 3. **Fine-grained Fragmented Resource Handling**: The Kosmos scheduling module intelligently detects fragmented resources within sub-clusters, effectively avoiding situations where pod deployment encounters insufficient resources in the sub-cluster. This helps ensure a more balanced allocation of resources across different nodes, enhancing system stability and performance.
 
 Whether building a hybrid cloud environment or requiring flexible deployment of workloads across different clusters, the Kosmos scheduling module serves as a reliable solution, assisting users in managing containerized applications more efficiently.
 
-## Contact
+## Quick Start
+The following commands let you quickly run an experimental environment with three clusters.
+Install the control plane in the host cluster.
+```Shell
+kosmosctl install --cni calico --default-nic eth0 # We build the network tunnel based on the network interface passed via --default-nic
+```
+Join the two member clusters.
+```Shell
+kosmosctl join cluster --name cluster1 --kubeconfig ~/kubeconfig/cluster1-kubeconfig --cni calico --default-nic eth0 --enable-all
+kosmosctl join cluster --name cluster2 --kubeconfig ~/kubeconfig/cluster2-kubeconfig --cni calico --default-nic eth0 --enable-all
+```
+Then you can use the Kosmos clusters just like a single cluster.
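As a sanity check after `kosmosctl join`, the member clusters should surface in the host cluster as virtual nodes; elsewhere in this PR those node names are built from `utils.KosmosNodePrefix` plus the cluster name. A minimal smoke-test sketch follows — the `kosmos-` prefix and the `kosmos.io/node=true:NoSchedule` taint/toleration are assumptions based on Kosmos defaults, not shown in this hunk:

```Shell
# Expect virtual nodes for the joined members, assuming the kosmos- prefix.
kubectl get nodes

# Hypothetical workload spread across the member clusters; the toleration
# (assumed taint kosmos.io/node=true:NoSchedule) lets Pods land on virtual nodes.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-mc
spec:
  replicas: 4
  selector:
    matchLabels:
      app: nginx-mc
  template:
    metadata:
      labels:
        app: nginx-mc
    spec:
      tolerations:
        - key: kosmos.io/node
          operator: Equal
          value: "true"
          effect: NoSchedule
      containers:
        - name: nginx
          image: nginx
EOF

# The NODE column shows which member cluster each Pod was delegated to.
kubectl get pods -o wide
```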
+
+## Contact
 If you have questions, feel free to reach out to us in the following ways:
 - [Email](mailto:wuyingjun@cmss.chinamobile.com)
 - [WeChat](./docs/images/kosmos-WechatIMG.jpg)
diff --git a/README_zh.md b/README_zh.md
index 3998bb2cd..f4df99343 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -23,33 +23,17 @@ The Kosmos multi-cluster networking module currently includes the following key components:
 - `Multi-Cluster-Coredns`: Implements multi-cluster service discovery;
 - `Elector`: Elects the gateway node;
 
-### Quick Start
-
-#### Start Locally
-The following command quickly sets up a local experimental environment. Based on `kind` (so Docker must be installed first), it creates two k8s clusters and deploys ClusterLink.
-```bash
-./hack/local-up-clusterlink.sh
-```
-Check that the services are running properly.
-```bash
-kubectl --context=kind-cluster-host-local get pods -nclusterlink-system
-kubectl --context=kind-cluster-member1-local get pods -nclusterlink-system
-```
-Confirm that the cross-cluster network is connected.
-```bash
-kubectl --context=kind-cluster-host-local exec -it -- ping
-```
 
 ## Multi-Cluster Management and Orchestration
 
 The Kosmos multi-cluster management and orchestration module implements tree-like scaling of Kubernetes and cross-cluster orchestration of applications.
 
-
+
 Currently, it mainly supports the following capabilities:
 
 1. **Full compatibility with the k8s API**: Users can interact with the host cluster's `kube-apiserver` with tools such as `kubectl` and `client-go`, just as they normally would, while the `Pods` are actually distributed across the entire multi-cloud, multi-cluster environment.
 2. **Support for stateful and k8s-native applications**: Besides stateless applications, Kosmos also supports orchestrating stateful applications and k8s-native applications (those interacting with `kube-apiserver`). Kosmos automatically detects the storage and permission resources that `Pods` depend on, such as pv/pvc, sa, etc., and synchronizes them bidirectionally.
 3. **Diverse Pod topology spread constraints**: Users can easily control the distribution of Pods across the federated clusters, e.g. by region, availability zone, cluster, or node, which helps achieve high availability and improve resource utilization.
 
-## Multi-Cluster Scheduling (Under Construction)
+## Multi-Cluster Scheduling
 
 The Kosmos scheduling module is an extension built on the Kubernetes scheduling framework, aiming to meet container management needs in mixed-node and sub-cluster environments. Carefully designed and customized, the scheduler provides the following core features to enhance the flexibility and efficiency of container management:
 
 1. **Flexible mixed scheduling of nodes and clusters**: The Kosmos scheduling module allows users, based on custom configuration, to intelligently schedule workloads between real nodes and sub-clusters. This enables users to make full use of the resources of different nodes to ensure the best performance and availability of workloads. Based on this capability, Kosmos enables flexible cross-cloud, cross-cluster deployment of workloads.
@@ -58,8 +42,21 @@
 Whether building a hybrid cloud environment or flexibly deploying workloads across different clusters, the Kosmos scheduling module serves as a reliable solution that helps users manage containerized applications more efficiently.
 
-## Contributors
+## Quick Start
+The following commands quickly set up a local experimental environment with three clusters:
+Deploy the management components in the host cluster.
+```bash
+kosmosctl install --cni calico --default-nic eth0 # The --default-nic flag specifies which network interface the network tunnel is built on
+```
+Join the two member clusters.
+```bash
+kosmosctl join cluster --name cluster1 --kubeconfig ~/kubeconfig/cluster1-kubeconfig --cni calico --default-nic eth0 --enable-all
+kosmosctl join cluster --name cluster2 --kubeconfig ~/kubeconfig/cluster2-kubeconfig --cni calico --default-nic eth0 --enable-all
+```
+
+Then you can use the multi-cluster just as you would a single cluster.
+## Contributors
diff --git a/cmd/clustertree/cluster-manager/app/manager.go b/cmd/clustertree/cluster-manager/app/manager.go
index 85e0693de..aba786d69 100644
--- a/cmd/clustertree/cluster-manager/app/manager.go
+++ b/cmd/clustertree/cluster-manager/app/manager.go
@@ -24,6 +24,7 @@ import (
 	podcontrollers "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/controllers/pod"
 	"github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/controllers/pv"
 	"github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/controllers/pvc"
+	nodeserver "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/node-server"
 	leafUtils "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/utils"
 	"github.com/kosmos.io/kosmos/pkg/scheme"
 	"github.com/kosmos.io/kosmos/pkg/sharedcli/klogflag"
@@ -164,7 +165,7 @@ func run(ctx context.Context, opts *options.Options) error {
 	clusterController := clusterManager.ClusterController{
 		Root:                mgr.GetClient(),
 		RootDynamic:         dynamicClient,
-		RootClient:          rootClient,
+		RootClientset:       rootClient,
 		EventRecorder:       mgr.GetEventRecorderFor(clusterManager.ControllerName),
 		Options:             opts,
 		RootResourceManager: rootResourceManager,
@@ -187,11 +188,12 @@ func run(ctx context.Context, opts *options.Options) error {
 
 	// add auto create mcs resources controller
 	autoCreateMCSController := mcs.AutoCreateMCSController{
-		RootClient:        mgr.GetClient(),
-		EventRecorder:     mgr.GetEventRecorderFor(mcs.AutoCreateMCSControllerName),
-		Logger:            mgr.GetLogger(),
-		RootKosmosClient:  rootKosmosClient,
-		GlobalLeafManager: globalleafManager,
+		RootClient:          mgr.GetClient(),
+		EventRecorder:       mgr.GetEventRecorderFor(mcs.AutoCreateMCSControllerName),
+		Logger:              mgr.GetLogger(),
+		AutoCreateMCSPrefix: opts.AutoCreateMCSPrefix,
+		RootKosmosClient:    rootKosmosClient,
+		GlobalLeafManager:   globalleafManager,
 	}
 	if err = autoCreateMCSController.SetupWithManager(mgr); err != nil {
 		return fmt.Errorf("error starting %s: %v", mcs.AutoCreateMCSControllerName, err)
 	}
@@ -220,20 +222,40 @@ func run(ctx context.Context, opts *options.Options) error {
 		return fmt.Errorf("error starting rootPodReconciler %s: %v", podcontrollers.RootPodControllerName, err)
 	}
 
-	rootPVCController := pvc.RootPVCController{
-		RootClient:        mgr.GetClient(),
-		GlobalLeafManager: globalleafManager,
-	}
-	if err := rootPVCController.SetupWithManager(mgr); err != nil {
-		return fmt.Errorf("error starting root pvc controller %v", err)
-	}
+	if !opts.OnewayStorageControllers {
+		rootPVCController := pvc.RootPVCController{
+			RootClient:        mgr.GetClient(),
+			GlobalLeafManager: globalleafManager,
+		}
+		if err := rootPVCController.SetupWithManager(mgr); err != nil {
+			return fmt.Errorf("error starting root pvc controller %v", err)
+		}
 
-	rootPVController := pv.RootPVController{
-		RootClient:        mgr.GetClient(),
-		GlobalLeafManager: globalleafManager,
-	}
-	if err := rootPVController.SetupWithManager(mgr); err != nil {
-		return fmt.Errorf("error starting root pv controller %v", err)
+		rootPVController := pv.RootPVController{
+			RootClient:        mgr.GetClient(),
+			GlobalLeafManager: globalleafManager,
+		}
+		if err := rootPVController.SetupWithManager(mgr); err != nil {
+			return fmt.Errorf("error starting root pv controller %v", err)
+		}
+	} else {
+		onewayPVController := pv.OnewayPVController{
+			Root:              mgr.GetClient(),
+			RootDynamic:       dynamicClient,
+			GlobalLeafManager: globalleafManager,
+		}
+		if err := onewayPVController.SetupWithManager(mgr); err != nil {
+			return fmt.Errorf("error starting oneway pv controller %v", err)
+		}
+
+		onewayPVCController := pvc.OnewayPVCController{
+			Root:              mgr.GetClient(),
+			RootDynamic:       dynamicClient,
+			GlobalLeafManager: globalleafManager,
+		}
+		if err := onewayPVCController.SetupWithManager(mgr); err != nil {
+			return fmt.Errorf("error starting oneway pvc controller %v", err)
+		}
 	}
 
 	// init commonController
@@ -259,6 +281,16 @@ func run(ctx context.Context, opts *options.Options) error {
 		}
 	}()
 
+	nodeServer := nodeserver.NodeServer{
+		RootClient:        mgr.GetClient(),
+		GlobalLeafManager: globalleafManager,
+	}
+	go func() {
+		if err := nodeServer.Start(ctx, opts); err != nil {
+			klog.Errorf("failed to start node server: %v", err)
+		}
+	}()
+
 	rootResourceManager.InformerFactory.Start(ctx.Done())
 	rootResourceManager.KosmosInformerFactory.Start(ctx.Done())
 	if !cache.WaitForCacheSync(ctx.Done(), rootResourceManager.EndpointSliceInformer.HasSynced, rootResourceManager.ServiceInformer.HasSynced) {
diff --git a/cmd/clustertree/cluster-manager/app/options/options.go b/cmd/clustertree/cluster-manager/app/options/options.go
index cf891e496..cd9a3974d 100644
--- a/cmd/clustertree/cluster-manager/app/options/options.go
+++ b/cmd/clustertree/cluster-manager/app/options/options.go
@@ -31,6 +31,12 @@ type Options struct {
 	// clusters.
RootCoreDNSServiceNamespace string RootCoreDNSServiceName string + + // Enable oneway storage controllers + OnewayStorageControllers bool + + // AutoCreateMCSPrefix is the prefix of the namespace for service to auto create in leaf cluster + AutoCreateMCSPrefix []string } type KubernetesOptions struct { @@ -70,6 +76,7 @@ func (o *Options) AddFlags(flags *pflag.FlagSet) { flags.BoolVar(&o.MultiClusterService, "multi-cluster-service", false, "Turn on or off mcs support.") flags.StringVar(&o.RootCoreDNSServiceNamespace, "root-coredns-service-namespace", CoreDNSServiceNamespace, "The namespace of the CoreDNS service in the root cluster, used to locate the CoreDNS service when MultiClusterService is disabled.") flags.StringVar(&o.RootCoreDNSServiceName, "root-coredns-service-name", CoreDNSServiceName, "The name of the CoreDNS service in the root cluster, used to locate the CoreDNS service when MultiClusterService is disabled.") - + flags.BoolVar(&o.OnewayStorageControllers, "oneway-storage-controllers", false, "Turn on or off oneway storage controllers.") + flags.StringSliceVar(&o.AutoCreateMCSPrefix, "auto-mcs-prefix", []string{}, "The prefix of namespace for service to auto create mcs resources") options.BindLeaderElectionFlags(&o.LeaderElection, flags) } diff --git a/deploy/clustertree-cluster-manager.yml b/deploy/clustertree-cluster-manager.yml index 137eead7b..7345694fd 100644 --- a/deploy/clustertree-cluster-manager.yml +++ b/deploy/clustertree-cluster-manager.yml @@ -28,6 +28,17 @@ subjects: name: clustertree namespace: kosmos-system --- +apiVersion: v1 +kind: Secret +metadata: + name: clustertree-cluster-manager + namespace: kosmos-system +type: Opaque +data: + cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUQzakNDQXNhZ0F3SUJBZ0lJVWE0NWVxZmI0c0V3RFFZSktvWklodmNOQVFFTEJRQXdmekVMTUFrR0ExVUUKQmhNQ1ZWTXhEekFOQmdOVkJBZ1RCazl5WldkdmJqRVJNQThHQTFVRUJ4TUlVRzl5ZEd4aGJtUXhHREFXQmdOVgpCQW9URDNacmRXSmxiR1YwTFcxdlkyc3RNREVZTUJZR0ExVUVDeE1QZG10MVltVnNaWFF0Ylc5amF5MHdNUmd3CkZnWURWUVFERXc5MmEzVmlaV3hsZEMxdGIyTnJMVEF3SGhjTk1UZ3hNVEkyTVRJd016SXpXaGNOTVRrd01qSTEKTVRnd09ESXpXakIvTVFzd0NRWURWUVFHRXdKVlV6RVBNQTBHQTFVRUNCTUdUM0psWjI5dU1SRXdEd1lEVlFRSApFd2hRYjNKMGJHRnVaREVZTUJZR0ExVUVDaE1QZG10MVltVnNaWFF0Ylc5amF5MHdNUmd3RmdZRFZRUUxFdzkyCmEzVmlaV3hsZEMxdGIyTnJMVEF4R0RBV0JnTlZCQU1URDNacmRXSmxiR1YwTFcxdlkyc3RNRENDQVNJd0RRWUoKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTHJ5SHZLM1VCQkJxR1YyRnB3eW1mMHAvWUtHUUE5cgpOdTBONmYyK1JrVVhMdVFYRytXZEZRbDNaUXliUExmQ0UyaHdGY2wzSUYrM2hDelkzLzJVSXlHQmxvQklmdDdLCllGTE0zWVdKRHk1RWxLRGcxYk5EU0x6RjZ0a3BOTERuVmxna1BQSVR6cEVISUF1K0JUNURaR1doWUFXTy9EaXIKWGR4b0pCT2hQWlpDY0JDVitrd1FRUGJzWHpaeStxN1FoeDI3MENSTUlYc285QzVMSmhHWUw5ZndzeG11a0FPUgo1NlNtZnNBYW1sN1VPbHpISVRSRHdENUFRMUJrVFNFRnkwOGRrNkpBWUw4TERMaGdhTG9Xb1YwR2UyZ09JZXBSCmpwbDg3ZEdiU1ZHeUJIbVRYdjRvNnV0cVQ2UzZuVTc2TG45TlNpN1loTXFqOHVXdjBwVERsWWNDQXdFQUFhTmUKTUZ3d0RnWURWUjBQQVFIL0JBUURBZ1dnTUIwR0ExVWRKUVFXTUJRR0NDc0dBUVVGQndNQkJnZ3JCZ0VGQlFjRApBakFNQmdOVkhSTUJBZjhFQWpBQU1CMEdBMVVkRGdRV0JCUVZId1Uxc3k3UW53MVd2VnZGTGNacmhvVDQwREFOCkJna3Foa2lHOXcwQkFRc0ZBQU9DQVFFQXNOR05LejFKd2Z3ZzdyWWFPN1ZGL3phbjAxWFhaRlAxYm5GWW5YSnUKMTVSemhPQk1zcDNLdldDVmh3VWZ4TmU4R2hVRFN4MnRtUzVFQS84b2FFbmdMRmwzanRSM3BuVU5Pd0RWbHpseQpRT0NOM3JsT2k0K3AyNkx2TWlBRnA1aHhYQXYzTE9SczZEenI2aDMvUVR0bFY1akRTaFVPWFpkRmRPUEpkWjJuCmc0Ymlyckc3TU82dnd2UjhDaU5jUTI2YitiOHA5QkdYYkU4YnNKb0htY3NxeWE4ZmJWczJuNkNkRUplSSs0aEQKTjZ4bG81U3Zoakg1dEZJSTdlQ1ZlZHlaR2wwQkt2a29jT2lnTGdxOFgrSnpGeGoxd3RkbXRYdjdzamRLY0I5cgo2VFdHSlJyWlZ4b3hVT3paaHB4VWozai9wTGFSY0RtdHRTSkN1RHUzTkF0a2dRPT0KLS0tLS1FTkQgQ0VSV
ElGSUNBVEUtLS0tLQo= + key.pem: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBdXZJZThyZFFFRUdvWlhZV25ES1ovU245Z29aQUQyczI3UTNwL2I1R1JSY3U1QmNiCjVaMFZDWGRsREpzOHQ4SVRhSEFWeVhjZ1g3ZUVMTmpmL1pRaklZR1dnRWgrM3NwZ1VzemRoWWtQTGtTVW9PRFYKczBOSXZNWHEyU2swc09kV1dDUTg4aFBPa1FjZ0M3NEZQa05rWmFGZ0JZNzhPS3RkM0dna0U2RTlsa0p3RUpYNgpUQkJBOXV4Zk5uTDZydENISGJ2UUpFd2hleWowTGtzbUVaZ3YxL0N6R2E2UUE1SG5wS1ord0JxYVh0UTZYTWNoCk5FUEFQa0JEVUdSTklRWExUeDJUb2tCZ3Z3c011R0JvdWhhaFhRWjdhQTRoNmxHT21YenQwWnRKVWJJRWVaTmUKL2lqcTYycFBwTHFkVHZvdWYwMUtMdGlFeXFQeTVhL1NsTU9WaHdJREFRQUJBb0lCQUVOODR0VkdmaDNRUmlXUwpzdWppajVySVROK1E3WkZqYUNtOTZ5b1NSYlh0ZjUwU0JwMG16eEJpek5UM09iMHd6K2JWQjloNksvTENBbkphClBNcURid2RLaS9WMXRtOWhhZEthYUtJcmI1S0phWXFHZ0Q4OTNBVmlBYjB4MWZiREhQV201MldRNXZLT092QmkKUWV4UFVmQXFpTXFZNnM3ZWRuejZENFFTb25RYW14Q1VQQlBZdnVkbWF5SHRQbGM4UWI2ZVkwVitwY2RGblcwOApTRFpYWU94ZXkzL0lBalp5ZGNBN1hndk5TYys2WE93bWhLc0dBVzcxdUZUVGFnSnZ6WDNlUENZMTRya0dKbURHCm0vMTBob1c2Tk1LR2VWL1J5WDNkWDBqSm1EazFWZnhBUVczeHBPaXBaZmdmdmdhdkNPcUhuS0E2SThkSzN6aGcKdkU5QmxlRUNnWUVBODdYL3p0UVpESTRxT1RBOUNXL25NWGZ3QXk5UU8xSzZiR2hCSFV1N0pzNHBxZ3h1SDhGawpoUWdRSzdWOGlhc255L2RDeWo2Q3UzUUpOb2Z4dWRBdkxMUUtrcXV5UU9hK3pxRkNVcFZpZDdKVlJNY1JMSmx0CjNIbHlDTnZWbGhmakRUMGNJMlJkVTQ1cThNblpveTFmM0RQWkIxNmNIYjNITDl6MWdRWlRpWEVDZ1lFQXhGOWEKNjhTYnhtV0ZCSzdQYW9iSTh3VmZEb1Rpckhtb0F2bnlwWUswb1FrQVg4Vm1FbXRFRXMyK04xbW9LalNUUHIrdAp1czRKS2d1QTh6MnR1TGs1aitlRit6RGwvMlUrN2RqVEY4RkNOcHJ3ejNzWHI0MjdHQ0lHTDVZdnBJQlorVEw4CkJqaTJ1eW9vOGs5U0FXTWI0T2JPemZHbTR0ZUN2Y2lTOTlndzBuY0NnWUF0NUdiQVZ0WkVzL3lsZWp6MEt2dFoKS0dHczU5cnU0TncwRDhtN0w0aVZmUnNCWjRmUk9RU3B2R1AzSnh6RmU5SnBxUzBOa29uaHJLOFRjclFGTG52RApxaitYY1BlSEd5eHhFcEsvcEZ1L2VIaHdGQ0JheXFXU2I5Z1diUGNpWldzZkVoUGJZa25rc3h2V0xkeHF5dCtUClFyd3FsQmxIekhYV3dJQUdoTjkwTVFLQmdRQzVDWWtwQkZnc3VGaUJNeCtySjFxTzlJNi9wYVBhRmNDbEhWVHgKZEpvejY4RjRmUTlUWjlQN1MvZGpQSTVqUnF0QXcyazJ6eEovbGR0cVdNSXJnQTJuZGVnZjY5R3R1SDkxcTR3dApwQ042Uk1HSklGb1BTQ1AxOTRtUXFabzNEZUs2R0xxMk9oYWxnbktXOFBzNjUyTExwM0ZUU2RPUmlMVmZrM0k1CkxIUEV2UUtCZ0RDeGEvM3ZuZUc4dmdzOEFyRWpOODlCL1l4TzFxSVU1bXhKZTZaYWZiODFOZGhZVWpmUkFWcm8KQUxUb2ZpQXBNc25EYkpESE1pd3Z3Y0RVSGJQTHBydUs4MFIvL3ptWDdYZW4rRis1b2JmU1E4ajBHU21tZVdGUQpTVkc2QXBOdGt0TFBJMG5LMm5FSUgvUXg0b3VHQzlOMHBBRFJDbFFRUFN4RVBtRHZmNHhmCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCgo= + +--- apiVersion: apps/v1 kind: Deployment metadata: @@ -49,8 +60,25 @@ spec: containers: - name: clustertree-cluster-manager image: ghcr.io/kosmos-io/clustertree-cluster-manager:__VERSION__ - imagePullPolicy: Always + imagePullPolicy: IfNotPresent + env: + - name: APISERVER_CERT_LOCATION + value: /etc/cluster-tree/cert/cert.pem + - name: APISERVER_KEY_LOCATION + value: /etc/cluster-tree/cert/key.pem + - name: KNODE_POD_IP + valueFrom: + fieldRef: + fieldPath: status.podIP + volumeMounts: + - name: credentials + mountPath: "/etc/cluster-tree/cert" + readOnly: true command: - clustertree-cluster-manager - --multi-cluster-service=true - --v=4 + volumes: + - name: credentials + secret: + secretName: clustertree-cluster-manager diff --git a/docs/images/clustertree-arch.png b/docs/images/clustertree-arch.png new file mode 100644 index 000000000..add83396a Binary files /dev/null and b/docs/images/clustertree-arch.png differ diff --git a/docs/images/knode-arch.png b/docs/images/knode-arch.png deleted file mode 100644 index 19095bac7..000000000 Binary files a/docs/images/knode-arch.png and /dev/null differ diff --git a/docs/images/link-arch.png b/docs/images/link-arch.png index 9be6a2dec..cf6180e3a 100644 Binary files a/docs/images/link-arch.png and 
b/docs/images/link-arch.png differ diff --git a/go.mod b/go.mod index 157137a03..d7053b463 100644 --- a/go.mod +++ b/go.mod @@ -9,6 +9,7 @@ require ( github.com/go-logr/logr v1.2.3 github.com/gogo/protobuf v1.3.2 github.com/google/go-cmp v0.5.9 + github.com/gorilla/mux v1.8.1 github.com/olekukonko/tablewriter v0.0.4 github.com/onsi/ginkgo/v2 v2.9.2 github.com/onsi/gomega v1.27.4 @@ -62,6 +63,7 @@ require ( github.com/emicklei/go-restful/v3 v3.9.0 // indirect github.com/evanphx/json-patch/v5 v5.6.0 // indirect github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d // indirect + github.com/fatih/camelcase v1.0.0 // indirect github.com/felixge/httpsnoop v1.0.3 // indirect github.com/fsnotify/fsnotify v1.6.0 // indirect github.com/fvbommel/sortorder v1.0.1 // indirect diff --git a/go.sum b/go.sum index 1f59f8a54..61a3b595d 100644 --- a/go.sum +++ b/go.sum @@ -186,6 +186,8 @@ github.com/evanphx/json-patch/v5 v5.6.0 h1:b91NhWfaz02IuVxO9faSllyAtNXHMPkC5J8sJ github.com/evanphx/json-patch/v5 v5.6.0/go.mod h1:G79N1coSVB93tBe7j6PhzjmR3/2VvlbKOFpnXhI9Bw4= github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d h1:105gxyaGwCFad8crR9dcMQWvV9Hvulu6hwUh4tWPJnM= github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d/go.mod h1:ZZMPRZwes7CROmyNKgQzC3XPs6L/G2EJLHddWejkmf4= +github.com/fatih/camelcase v1.0.0 h1:hxNvNX/xYBp0ovncs8WyWZrOrpBNub/JfaMvbURyft8= +github.com/fatih/camelcase v1.0.0/go.mod h1:yN2Sb0lFhZJUdVvtELVWefmrXpuZESvPmqwoZc+/fpc= github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4= github.com/felixge/httpsnoop v1.0.3 h1:s/nj+GCswXYzN5v2DpNMuMQYe+0DDwt5WVCU6CWBdXk= github.com/felixge/httpsnoop v1.0.3/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U= @@ -354,6 +356,8 @@ github.com/googleapis/gax-go/v2 v2.4.0/go.mod h1:XOTVJ59hdnfJLIP/dh8n5CGryZR2LxK github.com/googleapis/gnostic v0.3.1/go.mod h1:on+2t9HRStVgn95RSsFWFz+6Q0Snyqv1awfrALZdbtU= github.com/googleapis/go-type-adapters v1.0.0/go.mod h1:zHW75FOG2aur7gAO2B+MLby+cLsWGBF62rFAi7WjWO4= github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY= +github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY= +github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ= github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ= github.com/gorilla/websocket v1.4.2 h1:+/TMaTYc4QFitKJxsQ7Yye35DkWvkdLcvGKqM+x0Ufc= github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE= diff --git a/hack/cluster.sh b/hack/cluster.sh index d2725d7cc..14be32112 100755 --- a/hack/cluster.sh +++ b/hack/cluster.sh @@ -117,10 +117,18 @@ function join_cluster() { local kubeconfig_path="${ROOT}/environments/${member_cluster}/kubeconfig" local base64_kubeconfig=$(base64 < "$kubeconfig_path") echo " base64 kubeconfig successfully converted: $base64_kubeconfig " + + local common_metadata="" + if [ "$host_cluster" == "$member_cluster" ]; then + common_metadata="annotations: + kosmos.io/cluster-role: root" + fi + cat < /dev/null; then + install_go +fi + +# Verify the Go version +if ! go version | grep -q "go1.20"; then + echo "Installed Go version does not match the required version (1.20)." + install_go +fi + +echo "Go is installed and the version is correct." 
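A note on the `join_cluster` change above: the heredoc that consumes `common_metadata` is cut off in this diff (along with the header of `hack/install-go.sh` that follows it), so the exact manifest is not visible here. The sketch below is a hypothetical reconstruction of the pattern; the `spec` fields and the apply target are assumptions, while the `kosmos.io/cluster-role: root` annotation and the `Cluster` kind do appear elsewhere in the PR:

```Shell
# Hypothetical reconstruction of the tail of join_cluster (the real heredoc
# body is truncated above): tag the host cluster's own Cluster CR as the root.
common_metadata=""
if [ "$host_cluster" == "$member_cluster" ]; then
  common_metadata="annotations:
    kosmos.io/cluster-role: root"
fi

cat <<EOF | kubectl --kubeconfig "${ROOT}/environments/${host_cluster}/kubeconfig" apply -f -
apiVersion: kosmos.io/v1alpha1
kind: Cluster
metadata:
  name: ${member_cluster}
  ${common_metadata}
spec:
  kubeconfig: ${base64_kubeconfig}   # assumed field name; not shown in this diff
EOF
```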
diff --git a/hack/install_kind_kubectl.sh b/hack/install_kind_kubectl.sh
new file mode 100644
index 000000000..fbfe1f0c9
--- /dev/null
+++ b/hack/install_kind_kubectl.sh
@@ -0,0 +1,40 @@
+#!/usr/bin/env bash
+
+set -o errexit
+set -o nounset
+set -o pipefail
+
+ROOT="$(dirname "${BASH_SOURCE[0]}")"
+source "${ROOT}/util.sh"
+
+# Make sure go exists and the go version is a viable version.
+if command -v go &> /dev/null; then
+  util::verify_go_version
+else
+  source "$(dirname "${BASH_SOURCE[0]}")/install-go.sh"
+fi
+
+# Make sure docker exists
+util::cmd_must_exist "docker"
+
+# install kind and kubectl
+kind_version=v0.20.0
+echo -n "Preparing: 'kind' existence check - "
+if util::cmd_exist kind; then
+  echo "passed"
+else
+  echo "not pass"
+  util::install_tools "sigs.k8s.io/kind" $kind_version
+fi
+# get arch name and os name in bootstrap
+BS_ARCH=$(go env GOARCH)
+BS_OS=$(go env GOOS)
+# check arch and os name before installing
+util::install_environment_check "${BS_ARCH}" "${BS_OS}"
+echo -n "Preparing: 'kubectl' existence check - "
+if util::cmd_exist kubectl; then
+  echo "passed"
+else
+  echo "not pass"
+  util::install_kubectl "" "${BS_ARCH}" "${BS_OS}"
+fi
diff --git a/hack/local-up-clusterlink.sh b/hack/local-up-clusterlink.sh
index 6f747ff9a..a53b1f111 100755
--- a/hack/local-up-clusterlink.sh
+++ b/hack/local-up-clusterlink.sh
@@ -18,6 +18,7 @@ MEMBER2_CLUSTER_SERVICE_CIDR="10.235.0.0/18"
 export VERSION="latest"
 ROOT="$(dirname "${BASH_SOURCE[0]}")"
+source "$(dirname "${BASH_SOURCE[0]}")/install_kind_kubectl.sh"
 source "$(dirname "${BASH_SOURCE[0]}")/cluster.sh"
 make images GOOS="linux" --directory="${ROOT}"
diff --git a/hack/rune2e.sh b/hack/rune2e.sh
index 52a3899a7..3ad37c2b2 100755
--- a/hack/rune2e.sh
+++ b/hack/rune2e.sh
@@ -21,19 +21,23 @@ MEMBER2_CLUSTER_SERVICE_CIDR="10.235.0.0/18"
 ROOT="$(dirname "${BASH_SOURCE[0]}")"
 export VERSION="latest"
+source "$(dirname "${BASH_SOURCE[0]}")/install_kind_kubectl.sh"
 source "$(dirname "${BASH_SOURCE[0]}")/cluster.sh"
 make images GOOS="linux" --directory="${ROOT}"
 
 #create cluster
 create_cluster $HOST_CLUSTER_NAME $HOST_CLUSTER_POD_CIDR $HOST_CLUSTER_SERVICE_CIDR
 create_cluster $MEMBER1_CLUSTER_NAME $MEMBER1_CLUSTER_POD_CIDR $MEMBER1_CLUSTER_SERVICE_CIDR true
+create_cluster $MEMBER2_CLUSTER_NAME $MEMBER2_CLUSTER_POD_CIDR $MEMBER2_CLUSTER_SERVICE_CIDR true
-#deploy clusterlink
-deploy_clusterlink $HOST_CLUSTER_NAME
-load_clusterlink_images $MEMBER1_CLUSTER_NAME
+#deploy cluster
+deploy_cluster $HOST_CLUSTER_NAME
+load_cluster_images $MEMBER1_CLUSTER_NAME
+load_cluster_images $MEMBER2_CLUSTER_NAME
 
 #join cluster
 join_cluster $HOST_CLUSTER_NAME $HOST_CLUSTER_NAME
 join_cluster $HOST_CLUSTER_NAME $MEMBER1_CLUSTER_NAME
+join_cluster $HOST_CLUSTER_NAME $MEMBER2_CLUSTER_NAME
 
 echo "e2e test environment init success"
@@ -56,6 +60,10 @@ echo "Collecting $MEMBER1_CLUSTER_NAME logs..."
 mkdir -p "$MEMBER1_CLUSTER_NAME/$MEMBER1_CLUSTER_NAME"
 kind export logs --name="$MEMBER1_CLUSTER_NAME" "$LOG_PATH/$MEMBER1_CLUSTER_NAME"
 
+echo "Collecting $MEMBER2_CLUSTER_NAME logs..."
+mkdir -p "$MEMBER2_CLUSTER_NAME/$MEMBER2_CLUSTER_NAME" +kind export logs --name="$MEMBER2_CLUSTER_NAME" "$LOG_PATH/$MEMBER2_CLUSTER_NAME" + #TODO delete cluster -exit $TESTING_RESULT \ No newline at end of file +exit $TESTING_RESULT diff --git a/pkg/clusterlink/agent/controller.go b/pkg/clusterlink/agent/controller.go index 75120ad6c..86e22dbb4 100644 --- a/pkg/clusterlink/agent/controller.go +++ b/pkg/clusterlink/agent/controller.go @@ -2,7 +2,6 @@ package agent import ( "context" - "fmt" "time" apierrors "k8s.io/apimachinery/pkg/api/errors" @@ -19,6 +18,7 @@ import ( kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1" networkmanager "github.com/kosmos.io/kosmos/pkg/clusterlink/agent/network-manager" + "github.com/kosmos.io/kosmos/pkg/clusterlink/controllers/node" "github.com/kosmos.io/kosmos/pkg/clusterlink/network" kosmosv1alpha1lister "github.com/kosmos.io/kosmos/pkg/generated/listers/kosmos/v1alpha1" ) @@ -44,30 +44,42 @@ func NetworkManager() *networkmanager.NetworkManager { return networkmanager.NewNetworkManager(net) } -var predicatesFunc = predicate.Funcs{ - CreateFunc: func(createEvent event.CreateEvent) bool { - return true - }, - UpdateFunc: func(updateEvent event.UpdateEvent) bool { - return true - }, - DeleteFunc: func(deleteEvent event.DeleteEvent) bool { - return true - }, - GenericFunc: func(genericEvent event.GenericEvent) bool { - return true - }, -} - func (r *Reconciler) SetupWithManager(mgr manager.Manager) error { if r.Client == nil { r.Client = mgr.GetClient() } + skipEvent := func(obj client.Object) bool { + eventObj, ok := obj.(*kosmosv1alpha1.NodeConfig) + if !ok { + return false + } + + if eventObj.Name != node.ClusterNodeName(r.ClusterName, r.NodeName) { + klog.Infof("reconcile node name: %s, current node name: %s-%s", eventObj.Name, r.ClusterName, r.NodeName) + return false + } + + return true + } + return ctrl.NewControllerManagedBy(mgr). Named(controllerName). WithOptions(controller.Options{}). - For(&kosmosv1alpha1.NodeConfig{}, builder.WithPredicates(predicatesFunc)). + For(&kosmosv1alpha1.NodeConfig{}, builder.WithPredicates(predicate.Funcs{ + CreateFunc: func(createEvent event.CreateEvent) bool { + return skipEvent(createEvent.Object) + }, + UpdateFunc: func(updateEvent event.UpdateEvent) bool { + return skipEvent(updateEvent.ObjectNew) + }, + DeleteFunc: func(deleteEvent event.DeleteEvent) bool { + return skipEvent(deleteEvent.Object) + }, + GenericFunc: func(genericEvent event.GenericEvent) bool { + return skipEvent(genericEvent.Object) + }, + })). 
Complete(r) } @@ -97,12 +109,6 @@ func (r *Reconciler) Reconcile(ctx context.Context, request reconcile.Request) ( return reconcile.Result{RequeueAfter: RequeueTime}, nil } - klog.Infof("reconcile node name: %s, current node name: %s-%s", reconcileNode.Name, r.ClusterName, r.NodeName) - if reconcileNode.Name != fmt.Sprintf("%s-%s", r.ClusterName, r.NodeName) { - klog.Infof("not match, drop this event.") - return reconcile.Result{}, nil - } - localCluster, err := r.ClusterLister.Get(r.ClusterName) if err != nil { klog.Errorf("could not get local cluster, clusterNode: %s, err: %v", r.NodeName, err) diff --git a/pkg/clustertree/cluster-manager/cluster_controller.go b/pkg/clustertree/cluster-manager/cluster_controller.go index f43004c8d..94a99faa6 100644 --- a/pkg/clustertree/cluster-manager/cluster_controller.go +++ b/pkg/clustertree/cluster-manager/cluster_controller.go @@ -48,9 +48,9 @@ const ( ) type ClusterController struct { - Root client.Client - RootDynamic dynamic.Interface - RootClient kubernetes.Interface + Root client.Client + RootDynamic dynamic.Interface + RootClientset kubernetes.Interface EventRecorder record.EventRecorder Logger logr.Logger @@ -63,6 +63,8 @@ type ClusterController struct { RootResourceManager *utils.ResourceManager GlobalLeafManager leafUtils.LeafResourceManager + + LeafModelHandler leafUtils.LeafModelHandler } var predicatesFunc = predicate.Funcs{ @@ -172,15 +174,6 @@ func (c *ClusterController) Reconcile(ctx context.Context, request reconcile.Req return reconcile.Result{}, nil } - nodes, err := c.createNode(ctx, cluster, leafClient) - if err != nil { - return reconcile.Result{}, fmt.Errorf("create node with err %v, cluster %s", err, cluster.Name) - } - // TODO @wyz - for _, node := range nodes { - node.ResourceVersion = "" - } - // build mgr for cluster // TODO bug, the v4 log is lost mgr, err := controllerruntime.NewManager(config, controllerruntime.Options{ @@ -194,6 +187,18 @@ func (c *ClusterController) Reconcile(ctx context.Context, request reconcile.Req return reconcile.Result{}, fmt.Errorf("new manager with err %v, cluster %s", err, cluster.Name) } + leafModelHandler := leafUtils.NewLeafModelHandler(cluster, c.Root, mgr.GetClient(), c.RootClientset, leafClient) + c.LeafModelHandler = leafModelHandler + + nodes, err := c.createNode(ctx, cluster, leafClient) + if err != nil { + return reconcile.Result{RequeueAfter: RequeueTime}, fmt.Errorf("create node with err %v, cluster %s", err, cluster.Name) + } + // TODO @wyz + for _, node := range nodes { + node.ResourceVersion = "" + } + subContext, cancel := context.WithCancel(ctx) c.ControllerManagersLock.Lock() @@ -201,7 +206,7 @@ func (c *ClusterController) Reconcile(ctx context.Context, request reconcile.Req c.ManagerCancelFuncs[cluster.Name] = &cancel c.ControllerManagersLock.Unlock() - if err = c.setupControllers(mgr, cluster, nodes, leafDynamic, leafClient, kosmosClient); err != nil { + if err = c.setupControllers(mgr, cluster, nodes, leafDynamic, leafClient, kosmosClient, config); err != nil { return reconcile.Result{}, fmt.Errorf("failed to setup cluster %s controllers: %v", cluster.Name, err) } @@ -230,53 +235,54 @@ func (c *ClusterController) clearClusterControllers(cluster *kosmosv1alpha1.Clus c.GlobalLeafManager.RemoveLeafResource(cluster.Name) } -func (c *ClusterController) setupControllers(mgr manager.Manager, cluster *kosmosv1alpha1.Cluster, nodes []*corev1.Node, clientDynamic *dynamic.DynamicClient, leafClient kubernetes.Interface, kosmosClient kosmosversioned.Interface) error { - 
isNode2NodeFunc := func(cluster *kosmosv1alpha1.Cluster) bool { - return cluster.Spec.ClusterTreeOptions.LeafModels != nil - } - - clusterName := fmt.Sprintf("%s%s", utils.KosmosNodePrefix, cluster.Name) - if isNode2NodeFunc(cluster) { - clusterName = cluster.Name - } - - c.GlobalLeafManager.AddLeafResource(clusterName, &leafUtils.LeafResource{ +func (c *ClusterController) setupControllers( + mgr manager.Manager, + cluster *kosmosv1alpha1.Cluster, + nodes []*corev1.Node, + clientDynamic *dynamic.DynamicClient, + leafClientset kubernetes.Interface, + kosmosClient kosmosversioned.Interface, + leafRestConfig *rest.Config) error { + c.GlobalLeafManager.AddLeafResource(&leafUtils.LeafResource{ Client: mgr.GetClient(), DynamicClient: clientDynamic, - Clientset: leafClient, + Clientset: leafClientset, KosmosClient: kosmosClient, - ClusterName: clusterName, + ClusterName: cluster.Name, // TODO: define node options Namespace: "", IgnoreLabels: strings.Split("", ","), EnableServiceAccount: true, - }, cluster.Spec.ClusterTreeOptions.LeafModels, nodes) + RestConfig: leafRestConfig, + }, cluster, nodes) nodeResourcesController := controllers.NodeResourcesController{ Leaf: mgr.GetClient(), GlobalLeafManager: c.GlobalLeafManager, Root: c.Root, - RootClientset: c.RootClient, + RootClientset: c.RootClientset, Nodes: nodes, - Node2Node: isNode2NodeFunc(cluster), + LeafModelHandler: c.LeafModelHandler, Cluster: cluster, } if err := nodeResourcesController.SetupWithManager(mgr); err != nil { return fmt.Errorf("error starting %s: %v", controllers.NodeResourcesControllerName, err) } - nodeLeaseController := controllers.NewNodeLeaseController(leafClient, c.Root, nodes, c.RootClient, isNode2NodeFunc(cluster)) + nodeLeaseController := controllers.NewNodeLeaseController(leafClientset, c.Root, nodes, c.RootClientset, c.LeafModelHandler) if err := mgr.Add(nodeLeaseController); err != nil { return fmt.Errorf("error starting %s: %v", controllers.NodeLeaseControllerName, err) } if c.Options.MultiClusterService { serviceImportController := &mcs.ServiceImportController{ - LeafClient: mgr.GetClient(), - RootKosmosClient: kosmosClient, - EventRecorder: mgr.GetEventRecorderFor(mcs.LeafServiceImportControllerName), - Logger: mgr.GetLogger(), - LeafNodeName: clusterName, + LeafClient: mgr.GetClient(), + RootKosmosClient: kosmosClient, + EventRecorder: mgr.GetEventRecorderFor(mcs.LeafServiceImportControllerName), + Logger: mgr.GetLogger(), + LeafNodeName: cluster.Name, + // todo @wyz + IPFamilyType: cluster.Spec.ClusterLinkOptions.IPFamily, RootResourceManager: c.RootResourceManager, } if err := serviceImportController.AddController(mgr); err != nil { @@ -293,19 +299,21 @@ func (c *ClusterController) setupControllers(mgr manager.Manager, cluster *kosmo return fmt.Errorf("error starting podUpstreamReconciler %s: %v", podcontrollers.LeafPodControllerName, err) } - err := c.setupStorageControllers(mgr, nodes, leafClient, cluster.Name) - if err != nil { - return err + if !c.Options.OnewayStorageControllers { + err := c.setupStorageControllers(mgr, utils.IsOne2OneMode(cluster), cluster.Name) + if err != nil { + return err + } } return nil } -func (c *ClusterController) setupStorageControllers(mgr manager.Manager, nodes []*corev1.Node, leafClient kubernetes.Interface, clustername string) error { +func (c *ClusterController) setupStorageControllers(mgr manager.Manager, isOne2OneMode bool, clustername string) error { leafPVCController := pvc.LeafPVCController{ LeafClient: mgr.GetClient(), RootClient: c.Root, - RootClientSet: 
c.RootClient, + RootClientSet: c.RootClientset, ClusterName: clustername, } if err := leafPVCController.SetupWithManager(mgr); err != nil { @@ -315,8 +323,9 @@ func (c *ClusterController) setupStorageControllers(mgr manager.Manager, nodes [ leafPVController := pv.LeafPVController{ LeafClient: mgr.GetClient(), RootClient: c.Root, - RootClientSet: c.RootClient, + RootClientSet: c.RootClientset, ClusterName: clustername, + IsOne2OneMode: isOne2OneMode, } if err := leafPVController.SetupWithManager(mgr); err != nil { return fmt.Errorf("error starting leaf pv controller %v", err) @@ -324,131 +333,23 @@ func (c *ClusterController) setupStorageControllers(mgr manager.Manager, nodes [ return nil } -func (c *ClusterController) setNodeStatus(ctx context.Context, nodeName string, leafClient kubernetes.Interface, node *corev1.Node, isNode2Node bool) error { - if isNode2Node { - if leafnode, err := leafClient.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{}); err != nil { - klog.Errorf("create node %s failed, cannot get node from leaf cluster, err: %v", nodeName, err) - return err - } else { - node.Status = leafnode.Status - address, err := leafUtils.SortAddress(ctx, c.RootClient, nodeName, leafClient, node.Status.Addresses) - if err != nil { - return err - } - node.Status.Addresses = address - return nil - } - } - - leafnodes, err := leafClient.CoreV1().Nodes().List(ctx, metav1.ListOptions{ - // TODO: LabelSelector - }) - if err != nil { - klog.Errorf("create node %s failed, cannot get node from leaf cluster, err: %v", nodeName, err) - return err - } - - if len(leafnodes.Items) == 0 { - klog.Errorf("create node %s failed, cannot get node from leaf cluster, len of leafnodes is 0", nodeName) - return err - } - - address, err := leafUtils.SortAddress(ctx, c.RootClient, nodeName, leafClient, leafnodes.Items[0].Status.Addresses) - - if err != nil { - return err - } - - node.Status.Addresses = address - - return nil -} - func (c *ClusterController) createNode(ctx context.Context, cluster *kosmosv1alpha1.Cluster, leafClient kubernetes.Interface) ([]*corev1.Node, error) { - getNodeLen := func(cluster *kosmosv1alpha1.Cluster) int32 { - if cluster.Spec.ClusterTreeOptions.Enable { - return int32(len(cluster.Spec.ClusterTreeOptions.LeafModels)) - } - return 0 - } - serverVersion, err := leafClient.Discovery().ServerVersion() if err != nil { klog.Errorf("create node failed, can not connect to leaf %s", cluster.Name) return nil, err } - createNode := func(ctx context.Context, nodeName, clusterName string, isNode2Node bool) (*corev1.Node, error) { - node, err := c.RootClient.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{}) - if err != nil { - if errors.IsNotFound(err) { - node = utils.BuildNodeTemplate(nodeName) - if isNode2Node { - nodeAnnotations := node.GetAnnotations() - if nodeAnnotations == nil { - nodeAnnotations = make(map[string]string, 1) - } - nodeAnnotations[utils.KosmosNodeOwnedByClusterAnnotations] = clusterName - node.SetAnnotations(nodeAnnotations) - } - - if err := c.setNodeStatus(ctx, nodeName, leafClient, node, isNode2Node); err != nil { - return nil, err - } - - node.Status.NodeInfo.KubeletVersion = serverVersion.GitVersion - node.Status.DaemonEndpoints = corev1.NodeDaemonEndpoints{ - KubeletEndpoint: corev1.DaemonEndpoint{ - Port: c.Options.ListenPort, - }, - } - - node, err = c.RootClient.CoreV1().Nodes().Create(ctx, node, metav1.CreateOptions{}) - if err != nil { - if !errors.IsAlreadyExists(err) { - klog.Errorf("create node %s failed, err: %v", nodeName, err) - return nil, err 
- } else { - return node, nil - } - } - } else { - klog.Errorf("create node failed, can not get node %s", nodeName) - return nil, err - } - } - return node, nil - } - - nodes := make([]*corev1.Node, 0) - - if getNodeLen(cluster) > 0 { - for _, leafModel := range cluster.Spec.ClusterTreeOptions.LeafModels { - // todo only support nodeName now - if leafModel.NodeSelector.NodeName != "" { - nodeName := leafModel.NodeSelector.NodeName - - node, err := createNode(ctx, nodeName, cluster.Name, true) - if err != nil { - return nil, err - } - nodes = append(nodes, node) - } - } - } else { - nodeName := fmt.Sprintf("%s%s", utils.KosmosNodePrefix, cluster.Name) - node, err := createNode(ctx, nodeName, cluster.Name, false) - if err != nil { - return nil, err - } - nodes = append(nodes, node) + nodes, err := c.LeafModelHandler.CreateNodeInRoot(ctx, cluster, c.Options.ListenPort, serverVersion.GitVersion) + if err != nil { + klog.Errorf("create node for cluster %s failed, err: %v", cluster.Name, err) + return nil, err } - return nodes, nil } func (c *ClusterController) deleteNode(ctx context.Context, cluster *kosmosv1alpha1.Cluster) error { - err := c.RootClient.CoreV1().Nodes().Delete(ctx, cluster.Name, metav1.DeleteOptions{}) + err := c.RootClientset.CoreV1().Nodes().Delete(ctx, cluster.Name, metav1.DeleteOptions{}) if err != nil && !errors.IsNotFound(err) { return err } diff --git a/pkg/clustertree/cluster-manager/controllers/common_controller.go b/pkg/clustertree/cluster-manager/controllers/common_controller.go index 9c461469b..b86def352 100644 --- a/pkg/clustertree/cluster-manager/controllers/common_controller.go +++ b/pkg/clustertree/cluster-manager/controllers/common_controller.go @@ -30,7 +30,7 @@ var SYNC_GVRS = []schema.GroupVersionResource{utils.GVR_CONFIGMAP, utils.GVR_SEC var SYNC_OBJS = []client.Object{&corev1.ConfigMap{}, &corev1.Secret{}} const SYNC_KIND_CONFIGMAP = "ConfigMap" -const SYNC_KIND_SECRET = "SECRET" +const SYNC_KIND_SECRET = "Secret" type SyncResourcesReconciler struct { GroupVersionResource schema.GroupVersionResource @@ -44,7 +44,7 @@ type SyncResourcesReconciler struct { } func (r *SyncResourcesReconciler) Reconcile(ctx context.Context, request reconcile.Request) (reconcile.Result, error) { - var owners []string + var clusters []string rootobj, err := r.DynamicRootClient.Resource(r.GroupVersionResource).Namespace(request.Namespace).Get(ctx, request.Name, metav1.GetOptions{}) if err != nil && !errors.IsNotFound(err) { klog.Errorf("get %s error: %v", request.NamespacedName, err) @@ -53,16 +53,16 @@ func (r *SyncResourcesReconciler) Reconcile(ctx context.Context, request reconci if err != nil && errors.IsNotFound(err) { // delete all - owners = r.GlobalLeafManager.ListNodeNames() + clusters = r.GlobalLeafManager.ListClusters() } else { - owners = utils.ListResourceOwnersAnnotations(rootobj.GetAnnotations()) + clusters = utils.ListResourceClusters(rootobj.GetAnnotations()) } - for _, owner := range owners { - if r.GlobalLeafManager.Has(owner) { - lr, err := r.GlobalLeafManager.GetLeafResource(owner) + for _, cluster := range clusters { + if r.GlobalLeafManager.HasCluster(cluster) { + lr, err := r.GlobalLeafManager.GetLeafResource(cluster) if err != nil { - klog.Errorf("get lr(owner: %s) err: %v", owner, err) + klog.Errorf("get lr(cluster: %s) err: %v", cluster, err) return reconcile.Result{RequeueAfter: SyncResourcesRequeueTime}, nil } if err = r.SyncResource(ctx, request, lr); err != nil { @@ -115,7 +115,7 @@ func (r *SyncResourcesReconciler) SetupWithManager(mgr 
manager.Manager, gvr sche } func (r *SyncResourcesReconciler) SyncResource(ctx context.Context, request reconcile.Request, lr *leafUtils.LeafResource) error { - klog.V(5).Infof("Started sync resource processing, ns: %s, name: %s", request.Namespace, request.Name) + klog.V(4).Infof("Started sync resource processing, ns: %s, name: %s", request.Namespace, request.Name) deleteSecretInClient := false @@ -146,7 +146,7 @@ func (r *SyncResourcesReconciler) SyncResource(ctx context.Context, request reco } return err } - klog.V(5).Infof("%s %q deleted", r.GroupVersionResource.Resource, request.Name) + klog.V(4).Infof("%s %q deleted", r.GroupVersionResource.Resource, request.Name) return nil } diff --git a/pkg/clustertree/cluster-manager/controllers/mcs/auto_mcs_controller.go b/pkg/clustertree/cluster-manager/controllers/mcs/auto_mcs_controller.go index 9a501a0b0..ba6fdd184 100644 --- a/pkg/clustertree/cluster-manager/controllers/mcs/auto_mcs_controller.go +++ b/pkg/clustertree/cluster-manager/controllers/mcs/auto_mcs_controller.go @@ -2,7 +2,7 @@ package mcs import ( "context" - "fmt" + "strings" "time" "github.com/go-logr/logr" @@ -24,7 +24,7 @@ import ( mcsv1alpha1 "sigs.k8s.io/mcs-api/pkg/apis/v1alpha1" kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1" - leafUtils "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/utils" + clustertreeutils "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/utils" kosmosversioned "github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned" "github.com/kosmos.io/kosmos/pkg/utils" ) @@ -37,7 +37,9 @@ type AutoCreateMCSController struct { RootKosmosClient kosmosversioned.Interface EventRecorder record.EventRecorder Logger logr.Logger - GlobalLeafManager leafUtils.LeafResourceManager + GlobalLeafManager clustertreeutils.LeafResourceManager + // AutoCreateMCSPrefix is the prefix of the namespace for service to auto create in leaf cluster + AutoCreateMCSPrefix []string } func (c *AutoCreateMCSController) Reconcile(ctx context.Context, request reconcile.Request) (reconcile.Result, error) { @@ -56,11 +58,7 @@ func (c *AutoCreateMCSController) Reconcile(ctx context.Context, request reconci shouldDelete = true } - annotations := service.GetAnnotations() - if annotations == nil { - shouldDelete = true - } - if _, exists := annotations[utils.AutoCreateMCSAnnotation]; !exists { + if !matchNamespace(service.Namespace, c.AutoCreateMCSPrefix) && !hasAutoMCSAnnotation(service) { shouldDelete = true } @@ -85,6 +83,41 @@ func (c *AutoCreateMCSController) Reconcile(ctx context.Context, request reconci return controllerruntime.Result{}, nil } +func matchNamespace(namespace string, prefix []string) bool { + for _, p := range prefix { + if strings.HasPrefix(namespace, p) { + return true + } + } + return false +} + +func hasAutoMCSAnnotation(service *corev1.Service) bool { + annotations := service.GetAnnotations() + if annotations == nil { + return false + } + if _, exists := annotations[utils.AutoCreateMCSAnnotation]; exists { + return true + } + return false +} + +func (c *AutoCreateMCSController) shouldEnqueue(service *corev1.Service) bool { + if len(c.AutoCreateMCSPrefix) > 0 { + for _, prefix := range c.AutoCreateMCSPrefix { + if strings.HasPrefix(service.GetNamespace(), prefix) { + return true + } + } + } + + if hasAutoMCSAnnotation(service) { + return true + } + return false +} + func (c *AutoCreateMCSController) SetupWithManager(mgr manager.Manager) error { clusterFn := handler.MapFunc( func(object client.Object) []reconcile.Request { @@ 
-124,27 +157,35 @@ func (c *AutoCreateMCSController) SetupWithManager(mgr manager.Manager) error { }, ) - shouldEnqueue := func(obj client.Object) bool { - annotations := obj.GetAnnotations() - if annotations == nil { - return false - } - if _, exists := annotations[utils.AutoCreateMCSAnnotation]; exists { - return true - } else { - return false - } - } - servicePredicate := builder.WithPredicates(predicate.Funcs{ CreateFunc: func(event event.CreateEvent) bool { - return shouldEnqueue(event.Object) + service, ok := event.Object.(*corev1.Service) + if !ok { + return false + } + + return c.shouldEnqueue(service) }, DeleteFunc: func(deleteEvent event.DeleteEvent) bool { - return shouldEnqueue(deleteEvent.Object) + service, ok := deleteEvent.Object.(*corev1.Service) + if !ok { + return false + } + + return c.shouldEnqueue(service) }, UpdateFunc: func(updateEvent event.UpdateEvent) bool { - return shouldEnqueue(updateEvent.ObjectOld) != shouldEnqueue(updateEvent.ObjectNew) + newService, ok := updateEvent.ObjectNew.(*corev1.Service) + if !ok { + return false + } + + oldService, ok := updateEvent.ObjectOld.(*corev1.Service) + if !ok { + return false + } + + return c.shouldEnqueue(newService) != c.shouldEnqueue(oldService) }, GenericFunc: func(genericEvent event.GenericEvent) bool { return false @@ -172,12 +213,11 @@ func (c *AutoCreateMCSController) cleanUpMcsResources(ctx context.Context, names // delete serviceImport in all leaf cluster for _, cluster := range clusterList.Items { newCluster := cluster.DeepCopy() - if leafUtils.IsRootCluster(newCluster) { + if clustertreeutils.IsRootCluster(newCluster) { continue } - leafNodeName := fmt.Sprintf("%s%s", utils.KosmosNodePrefix, cluster.Name) - // TODO: @duanmengkk - leafManager, err := c.GlobalLeafManager.GetLeafResource(leafNodeName) + + leafManager, err := c.GlobalLeafManager.GetLeafResource(cluster.Name) if err != nil { klog.Errorf("get leafManager for cluster %s failed,Error: %v", cluster.Name, err) return err @@ -210,12 +250,11 @@ func (c *AutoCreateMCSController) autoCreateMcsResources(ctx context.Context, se // create serviceImport in leaf cluster for _, cluster := range clusterList.Items { newCluster := cluster.DeepCopy() - if leafUtils.IsRootCluster(newCluster) { + if clustertreeutils.IsRootCluster(newCluster) { continue } - leafNodeName := fmt.Sprintf("%s%s", utils.KosmosNodePrefix, cluster.Name) - // TODO: @duanmengkk - leafManager, err := c.GlobalLeafManager.GetLeafResource(leafNodeName) + + leafManager, err := c.GlobalLeafManager.GetLeafResource(cluster.Name) if err != nil { klog.Errorf("get leafManager for cluster %s failed,Error: %v", cluster.Name, err) return err diff --git a/pkg/clustertree/cluster-manager/controllers/mcs/serviceexport_controller.go b/pkg/clustertree/cluster-manager/controllers/mcs/serviceexport_controller.go index 48678d803..da00a9a5f 100644 --- a/pkg/clustertree/cluster-manager/controllers/mcs/serviceexport_controller.go +++ b/pkg/clustertree/cluster-manager/controllers/mcs/serviceexport_controller.go @@ -137,7 +137,6 @@ func (c *ServiceExportController) removeAnnotation(ctx context.Context, namespac return err } } - klog.Infof("ServiceImport (%s/%s) deleted", namespace, name) return nil } diff --git a/pkg/clustertree/cluster-manager/controllers/mcs/serviceimport_controller.go b/pkg/clustertree/cluster-manager/controllers/mcs/serviceimport_controller.go index 207990a33..55027106a 100644 --- a/pkg/clustertree/cluster-manager/controllers/mcs/serviceimport_controller.go +++ 
b/pkg/clustertree/cluster-manager/controllers/mcs/serviceimport_controller.go
@@ -21,6 +21,7 @@ import (
 	"sigs.k8s.io/controller-runtime/pkg/manager"
 	mcsv1alpha1 "sigs.k8s.io/mcs-api/pkg/apis/v1alpha1"
 
+	kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
 	kosmosversioned "github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned"
 	"github.com/kosmos.io/kosmos/pkg/generated/informers/externalversions"
 	"github.com/kosmos.io/kosmos/pkg/utils"
@@ -35,6 +36,7 @@ type ServiceImportController struct {
 	LeafClient       client.Client
 	RootKosmosClient kosmosversioned.Interface
 	LeafNodeName     string
+	IPFamilyType     kosmosv1alpha1.IPFamilyType
 	EventRecorder    record.EventRecorder
 	Logger           logr.Logger
 	processor        utils.AsyncWorker
@@ -230,6 +232,12 @@ func (c *ServiceImportController) importEndpointSliceHandler(ctx context.Context
 		clearEndpointSlice(endpointSlice, disConnectedAddress)
 	}
 
+	if endpointSlice.AddressType == discoveryv1.AddressTypeIPv4 && c.IPFamilyType == kosmosv1alpha1.IPFamilyTypeIPV6 ||
+		endpointSlice.AddressType == discoveryv1.AddressTypeIPv6 && c.IPFamilyType == kosmosv1alpha1.IPFamilyTypeIPV4 {
+		klog.Warningf("The endpointSlice's AddressType does not match leaf cluster %s IPFamilyType, so ignore it", c.LeafNodeName)
+		return nil
+	}
+
 	return c.createOrUpdateEndpointSliceInClient(ctx, endpointSlice, serviceImport.Name)
 }
@@ -308,8 +316,14 @@ func clearEndpointSlice(slice *discoveryv1.EndpointSlice, disconnectedAddress []
 }
 
 func (c *ServiceImportController) importServiceHandler(ctx context.Context, rootService *corev1.Service, serviceImport *mcsv1alpha1.ServiceImport) error {
-	clientService := generateService(rootService, serviceImport)
-	err := c.createOrUpdateServiceInClient(ctx, clientService)
+	err := c.checkServiceType(rootService)
+	if err != nil {
+		klog.Warningf("Could not create service in leaf cluster %s, Error: %v", c.LeafNodeName, err)
+		// return nil will not requeue
+		return nil
+	}
+	clientService := c.generateService(rootService, serviceImport)
+	err = c.createOrUpdateServiceInClient(ctx, clientService)
 	if err != nil {
 		return err
 	}
@@ -426,12 +440,28 @@ func retainServiceFields(oldSvc, newSvc *corev1.Service) {
 	newSvc.ResourceVersion = oldSvc.ResourceVersion
 }
 
-func generateService(service *corev1.Service, serviceImport *mcsv1alpha1.ServiceImport) *corev1.Service {
+func (c *ServiceImportController) generateService(service *corev1.Service, serviceImport *mcsv1alpha1.ServiceImport) *corev1.Service {
 	clusterIP := corev1.ClusterIPNone
 	if isServiceIPSet(service) {
 		clusterIP = ""
 	}
 
+	iPFamilies := make([]corev1.IPFamily, 0)
+	if c.IPFamilyType == kosmosv1alpha1.IPFamilyTypeALL {
+		iPFamilies = service.Spec.IPFamilies
+	} else if c.IPFamilyType == kosmosv1alpha1.IPFamilyTypeIPV4 {
+		iPFamilies = append(iPFamilies, corev1.IPv4Protocol)
+	} else {
+		iPFamilies = append(iPFamilies, corev1.IPv6Protocol)
+	}
+
+	var iPFamilyPolicy corev1.IPFamilyPolicy
+	if c.IPFamilyType == kosmosv1alpha1.IPFamilyTypeALL {
+		iPFamilyPolicy = *service.Spec.IPFamilyPolicy
+	} else {
+		iPFamilyPolicy = corev1.IPFamilyPolicySingleStack
+	}
+
 	return &corev1.Service{
 		ObjectMeta: metav1.ObjectMeta{
 			Namespace: serviceImport.Namespace,
@@ -444,12 +474,22 @@ func generateService(service *corev1.Service, serviceImport *mcsv1alpha1.Service
 			Type:      service.Spec.Type,
 			ClusterIP: clusterIP,
 			Ports:     servicePorts(service),
-			IPFamilies:     service.Spec.IPFamilies,
-			IPFamilyPolicy: service.Spec.IPFamilyPolicy,
+			IPFamilies:     iPFamilies,
+			IPFamilyPolicy: &iPFamilyPolicy,
 		},
 	}
 }
 
+func (c *ServiceImportController) 
checkServiceType(service *corev1.Service) error {
+	if *service.Spec.IPFamilyPolicy == corev1.IPFamilyPolicySingleStack {
+		if service.Spec.IPFamilies[0] == corev1.IPv6Protocol && c.IPFamilyType == kosmosv1alpha1.IPFamilyTypeIPV4 ||
+			service.Spec.IPFamilies[0] == corev1.IPv4Protocol && c.IPFamilyType == kosmosv1alpha1.IPFamilyTypeIPV6 {
+			return fmt.Errorf("service's IPFamilyPolicy %s does not match the leaf cluster %s", *service.Spec.IPFamilyPolicy, c.LeafNodeName)
+		}
+	}
+	return nil
+}
+
 func isServiceIPSet(service *corev1.Service) bool {
 	return service.Spec.ClusterIP != corev1.ClusterIPNone && service.Spec.ClusterIP != ""
 }
diff --git a/pkg/clustertree/cluster-manager/controllers/node_lease_controller.go b/pkg/clustertree/cluster-manager/controllers/node_lease_controller.go
index de81cb1c4..4dceb03a2 100644
--- a/pkg/clustertree/cluster-manager/controllers/node_lease_controller.go
+++ b/pkg/clustertree/cluster-manager/controllers/node_lease_controller.go
@@ -2,7 +2,6 @@ package controllers
 
 import (
 	"context"
-	"fmt"
 	"sync"
 	"time"
@@ -18,7 +17,7 @@ import (
 	"k8s.io/utils/pointer"
 	"sigs.k8s.io/controller-runtime/pkg/client"
 
-	"github.com/kosmos.io/kosmos/pkg/utils"
+	leafUtils "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/utils"
 )
 
 const (
@@ -31,10 +30,10 @@ const (
 )
 
 type NodeLeaseController struct {
-	leafClient kubernetes.Interface
-	rootClient kubernetes.Interface
-	root       client.Client
-	Node2Node  bool
+	leafClient       kubernetes.Interface
+	rootClient       kubernetes.Interface
+	root             client.Client
+	LeafModelHandler leafUtils.LeafModelHandler
 
 	leaseInterval  time.Duration
 	statusInterval time.Duration
@@ -43,15 +42,15 @@ type NodeLeaseController struct {
 	nodeLock sync.Mutex
 }
 
-func NewNodeLeaseController(leafClient kubernetes.Interface, root client.Client, nodes []*corev1.Node, rootClient kubernetes.Interface, node2Node bool) *NodeLeaseController {
+func NewNodeLeaseController(leafClient kubernetes.Interface, root client.Client, nodes []*corev1.Node, rootClient kubernetes.Interface, LeafModelHandler leafUtils.LeafModelHandler) *NodeLeaseController {
 	c := &NodeLeaseController{
-		leafClient:     leafClient,
-		rootClient:     rootClient,
-		root:           root,
-		nodes:          nodes,
-		Node2Node:      node2Node,
-		leaseInterval:  getRenewInterval(),
-		statusInterval: DefaultNodeStatusUpdateInterval,
+		leafClient:       leafClient,
+		rootClient:       rootClient,
+		root:             root,
+		nodes:            nodes,
+		LeafModelHandler: LeafModelHandler,
+		leaseInterval:    getRenewInterval(),
+		statusInterval:   DefaultNodeStatusUpdateInterval,
 	}
 	return c
 }
@@ -80,37 +79,9 @@ func (c *NodeLeaseController) syncNodeStatus(ctx context.Context) {
 
 // nolint
 func (c *NodeLeaseController) updateNodeStatus(ctx context.Context, n []*corev1.Node) error {
-	if !c.Node2Node {
-		var name string
-		if len(n) > 0 {
-			name = n[0].Name
-		}
-
-		node := &corev1.Node{}
-		namespacedName := types.NamespacedName{
-			Name: name,
-		}
-		err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
-			err := c.root.Get(ctx, namespacedName, node)
-			if err != nil {
-				// TODO: If a node is accidentally deleted, recreate it
-				return fmt.Errorf("cannot get node while update node status %s, err: %v", name, err)
-			}
-
-			clone := node.DeepCopy()
-			clone.Status.Conditions = utils.NodeConditions()
-
-			patch, err := utils.CreateMergePatch(node, clone)
-			if err != nil {
-				return fmt.Errorf("cannot get node while update node status %s, err: %v", node.Name, err)
-			}
-
-			if node, err = c.rootClient.CoreV1().Nodes().PatchStatus(ctx, node.Name, patch); err != nil {
-				return err
-			}
-			return nil
-		})
- return err + err := c.LeafModelHandler.UpdateNodeStatus(ctx, n) + if err != nil { + klog.Errorf("Could not update node status in root cluster,Error: %v", err) } return nil } diff --git a/pkg/clustertree/cluster-manager/controllers/node_resources_controller.go b/pkg/clustertree/cluster-manager/controllers/node_resources_controller.go index 4172ecc86..ec3aee9dc 100644 --- a/pkg/clustertree/cluster-manager/controllers/node_resources_controller.go +++ b/pkg/clustertree/cluster-manager/controllers/node_resources_controller.go @@ -39,10 +39,10 @@ type NodeResourcesController struct { GlobalLeafManager leafUtils.LeafResourceManager RootClientset kubernetes.Interface - Nodes []*corev1.Node - Node2Node bool - Cluster *kosmosv1alpha1.Cluster - EventRecorder record.EventRecorder + Nodes []*corev1.Node + LeafModelHandler leafUtils.LeafModelHandler + Cluster *kosmosv1alpha1.Cluster + EventRecorder record.EventRecorder } var predicatesFunc = predicate.Funcs{ @@ -110,57 +110,27 @@ func (c *NodeResourcesController) Reconcile(ctx context.Context, request reconci }, fmt.Errorf("cannot get node while update nodeInRoot resources %s, err: %v", rootNode.Name, err) } - nodesInLeaf := &corev1.NodeList{} - pods := &corev1.PodList{} - - if !c.Node2Node { - if err = c.Leaf.List(ctx, nodesInLeaf); err != nil { - klog.Errorf("Could not list node in leaf cluster,Error: %v", err) - return controllerruntime.Result{ - RequeueAfter: RequeueTime, - }, err - } - - if err = c.Leaf.List(ctx, pods); err != nil { - klog.Errorf("Could not list pod in leaf cluster,Error: %v", err) - return controllerruntime.Result{ - RequeueAfter: RequeueTime, - }, err - } - } else { - leafNodeName := c.Cluster.Name - leafResource, err := c.GlobalLeafManager.GetLeafResource(leafNodeName) - if err != nil { - klog.Errorf("Could not get leafResource,Error: %v", err) - return controllerruntime.Result{ - RequeueAfter: RequeueTime, - }, err - } - nodesInLeaf, err = leafResource.Clientset.CoreV1().Nodes().List(ctx, metav1.ListOptions{FieldSelector: fmt.Sprintf("metadata.name=%s", rootNode.Name)}) - if err != nil { - klog.Errorf("Could not get node in leaf cluster %s,Error: %v", c.Cluster.Name, err) - return controllerruntime.Result{ - RequeueAfter: RequeueTime, - }, err - } + nodesInLeaf, err := c.LeafModelHandler.GetLeafNodes(ctx, rootNode) + if err != nil { + klog.Errorf("Could not get node in leaf cluster %s,Error: %v", c.Cluster.Name, err) + return controllerruntime.Result{ + RequeueAfter: RequeueTime, + }, err + } - pods, err = leafResource.Clientset.CoreV1().Pods("").List(ctx, metav1.ListOptions{FieldSelector: fmt.Sprintf("spec.nodeName=%s", rootNode.Name)}) - if err != nil { - klog.Errorf("Could not list pod in leaf cluster %s,Error: %v", c.Cluster.Name, err) - return controllerruntime.Result{ - RequeueAfter: RequeueTime, - }, err - } + pods, err := c.LeafModelHandler.GetLeafPods(ctx, rootNode) + if err != nil { + klog.Errorf("Could not list pod in leaf cluster %s,Error: %v", c.Cluster.Name, err) + return controllerruntime.Result{ + RequeueAfter: RequeueTime, + }, err } - clusterResources := utils.CalculateClusterResources(nodesInLeaf, pods) clone := nodeInRoot.DeepCopy() - clone.Status.Allocatable = clusterResources - clone.Status.Capacity = clusterResources clone.Status.Conditions = utils.NodeConditions() // Node2Node mode should sync leaf node's labels and annotations to root nodeInRoot - if c.Node2Node { + if c.LeafModelHandler.GetLeafModelType() == leafUtils.DispersionModel { getNode := func(nodes *corev1.NodeList) *corev1.Node { for _, 
nodeInLeaf := range nodes.Items { if nodeInLeaf.Name == rootNode.Name { @@ -171,29 +141,22 @@ func (c *NodeResourcesController) Reconcile(ctx context.Context, request reconci } node := getNode(nodesInLeaf) if node != nil { - clone.Labels = mergeMap(rootNode.GetLabels(), node.GetLabels()) - clone.Annotations = mergeMap(rootNode.GetAnnotations(), node.GetAnnotations()) - clone.Status = node.Status - // TODO: @duanmengkk - leafNodeName := c.Cluster.Name - leafResource, err := c.GlobalLeafManager.GetLeafResource(leafNodeName) - if err != nil { - klog.Errorf("Could not get leafResource,Error: %v", err) - return controllerruntime.Result{ - RequeueAfter: RequeueTime, - }, err - } - address, err := leafUtils.SortAddress(ctx, c.RootClientset, rootNode.Name, leafResource.Clientset, node.Status.Addresses) - if err != nil { - return controllerruntime.Result{ - RequeueAfter: RequeueTime, - }, err + clone.Labels = mergeMap(node.GetLabels(), clone.GetLabels()) + clone.Annotations = mergeMap(node.GetAnnotations(), clone.GetAnnotations()) + spec := corev1.NodeSpec{ + Taints: rootNode.Spec.Taints, } - node.Status.Addresses = address + clone.Spec = spec + clone.Status = node.Status + clone.Status.Addresses = leafUtils.GetAddress() } } + clusterResources := utils.CalculateClusterResources(nodesInLeaf, pods) + clone.Status.Allocatable = clusterResources + clone.Status.Capacity = clusterResources + patch, err := utils.CreateMergePatch(nodeInRoot, clone) if err != nil { klog.Errorf("Could not CreateMergePatch,Error: %v", err) diff --git a/pkg/clustertree/cluster-manager/controllers/pod/leaf_pod_controller.go b/pkg/clustertree/cluster-manager/controllers/pod/leaf_pod_controller.go index 1e0bd4a75..7320e2747 100644 --- a/pkg/clustertree/cluster-manager/controllers/pod/leaf_pod_controller.go +++ b/pkg/clustertree/cluster-manager/controllers/pod/leaf_pod_controller.go @@ -57,7 +57,7 @@ func (r *LeafPodReconciler) Reconcile(ctx context.Context, request reconcile.Req podutils.FitObjectMeta(&podCopy.ObjectMeta) podCopy.ResourceVersion = "0" if err := r.RootClient.Status().Update(ctx, podCopy); err != nil && !apierrors.IsNotFound(err) { - klog.V(5).Info(errors.Wrap(err, "error while updating pod status in kubernetes")) + klog.V(4).Info(errors.Wrap(err, "error while updating pod status in kubernetes")) return reconcile.Result{RequeueAfter: LeafPodRequeueTime}, nil } } diff --git a/pkg/clustertree/cluster-manager/controllers/pod/root_pod_controller.go b/pkg/clustertree/cluster-manager/controllers/pod/root_pod_controller.go index 5bf30e303..f527f28cf 100644 --- a/pkg/clustertree/cluster-manager/controllers/pod/root_pod_controller.go +++ b/pkg/clustertree/cluster-manager/controllers/pod/root_pod_controller.go @@ -121,9 +121,9 @@ func (r *RootPodReconciler) Reconcile(ctx context.Context, request reconcile.Req if err := r.Get(ctx, request.NamespacedName, &cachepod); err != nil { if errors.IsNotFound(err) { // TODO: we cannot get leaf pod when we donnot known the node name of pod, so delete all ... 
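+			// Broadcasting the delete to every registered leaf is safe here:
+			// DeletePodInLeafCluster treats NotFound as success, so leaf
+			// clusters that never ran the pod simply no-op.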
- owners := r.GlobalLeafManager.ListNodeNames() - for _, owner := range owners { - lr, err := r.GlobalLeafManager.GetLeafResourceByNodeName(owner) + nodeNames := r.GlobalLeafManager.ListNodes() + for _, nodeName := range nodeNames { + lr, err := r.GlobalLeafManager.GetLeafResourceByNodeName(nodeName) if err != nil { // wait for leaf resource init return reconcile.Result{RequeueAfter: RootPodRequeueTime}, nil @@ -168,7 +168,7 @@ func (r *RootPodReconciler) Reconcile(ctx context.Context, request reconcile.Req // TODO: GlobalLeafResourceManager may not inited.... // belongs to the current node - if !r.GlobalLeafManager.HasNodeName(rootpod.Spec.NodeName) { + if !r.GlobalLeafManager.HasNode(rootpod.Spec.NodeName) { return reconcile.Result{RequeueAfter: RootPodRequeueTime}, nil } @@ -291,7 +291,7 @@ func (r *RootPodReconciler) createStorageInLeafCluster(ctx context.Context, lr * return fmt.Errorf("could not get resource gvr(%v) %s from root cluster: %v", gvr, rname, err) } rootannotations := rootobj.GetAnnotations() - rootannotations = utils.AddResourceOwnersAnnotations(rootannotations, lr.ClusterName) + rootannotations = utils.AddResourceClusters(rootannotations, lr.ClusterName) rootobj.SetAnnotations(rootannotations) @@ -325,7 +325,7 @@ func (r *RootPodReconciler) createStorageInLeafCluster(ctx context.Context, lr * klog.Errorf("Failed to create gvr(%v) %v err: %v", gvr, rname, err) return err } - klog.V(5).Infof("Create gvr(%v) %v in %v success", gvr, rname, ns) + klog.V(4).Infof("Create gvr(%v) %v in %v success", gvr, rname, ns) continue } return fmt.Errorf("could not check gvr(%v) %s in external cluster: %v", gvr, rname, err) @@ -445,7 +445,7 @@ func (r *RootPodReconciler) createCAInLeafCluster(ctx context.Context, lr *leafU Name: utils.RooTCAConfigMapName, } - err = lr.Client.Get(ctx, rootCAConfigmapKey, ca) + err = r.Client.Get(ctx, rootCAConfigmapKey, ca) if err != nil { return nil, fmt.Errorf("could not find configmap %s in master cluster: %v", ca, err) } @@ -556,7 +556,7 @@ func (r *RootPodReconciler) createServiceAccountInLeafCluster(ctx context.Contex if secret.Annotations == nil { return fmt.Errorf("parse secret service account error") } - klog.V(5).Infof("secret service-account info: [%v]", secret.Annotations) + klog.V(4).Infof("secret service-account info: [%v]", secret.Annotations) accountName := secret.Annotations[corev1.ServiceAccountNameKey] if accountName == "" { err := fmt.Errorf("get secret of serviceAccount not exits: [%s] [%v]", @@ -573,7 +573,7 @@ func (r *RootPodReconciler) createServiceAccountInLeafCluster(ctx context.Contex err := lr.Client.Get(ctx, saKey, sa) if err != nil || sa == nil { - klog.V(5).Infof("get serviceAccount [%v] err: [%v]]", sa, err) + klog.V(4).Infof("get serviceAccount [%v] err: [%v]]", sa, err) sa = &corev1.ServiceAccount{ ObjectMeta: metav1.ObjectMeta{ Name: accountName, @@ -589,7 +589,7 @@ func (r *RootPodReconciler) createServiceAccountInLeafCluster(ctx context.Contex return err } } else { - klog.V(5).Infof("get secret serviceAccount info: [%s] [%v] [%v] [%v]", + klog.V(4).Infof("get secret serviceAccount info: [%s] [%v] [%v] [%v]", sa.Name, sa.CreationTimestamp, sa.Annotations, sa.UID) } secret.UID = sa.UID @@ -611,7 +611,7 @@ func (r *RootPodReconciler) createServiceAccountInLeafCluster(ctx context.Contex err = lr.Client.Update(ctx, sa) if err != nil { - klog.V(5).Infof( + klog.V(4).Infof( "update serviceAccount [%v] err: [%v]]", sa, err) return err @@ -619,6 +619,87 @@ func (r *RootPodReconciler) createServiceAccountInLeafCluster(ctx 
context.Contex
 	return nil
 }
 
+func (r *RootPodReconciler) createVolumes(ctx context.Context, lr *leafUtils.LeafResource, basicPod *corev1.Pod, clusterNodeInfo *leafUtils.ClusterNode) error {
+	// create secret configmap pvc
+	secretNames, imagePullSecrets := podutils.GetSecrets(basicPod)
+	configMaps := podutils.GetConfigmaps(basicPod)
+	pvcs := podutils.GetPVCs(basicPod)
+
+	ch := make(chan string, 3)
+
+	// configmap
+	go func() {
+		if err := wait.PollImmediate(500*time.Millisecond, 30*time.Second, func() (bool, error) {
+			klog.V(4).Info("Trying to create dependent configmaps")
+			if err := r.createStorageInLeafCluster(ctx, lr, utils.GVR_CONFIGMAP, configMaps, basicPod, clusterNodeInfo); err != nil {
+				klog.Error(err)
+				return false, nil
+			}
+			klog.V(4).Infof("Create configmaps %v of %v/%v success", configMaps, basicPod.Namespace, basicPod.Name)
+			return true, nil
+		}); err != nil {
+			ch <- fmt.Sprintf("create configmap failed: %v", err)
+			return
+		}
+		ch <- ""
+	}()
+
+	// pvc
+	go func() {
+		if err := wait.PollImmediate(500*time.Millisecond, 30*time.Second, func() (bool, error) {
+			if !r.Options.OnewayStorageControllers {
+				klog.V(4).Info("Trying to create dependent pvc")
+				if err := r.createStorageInLeafCluster(ctx, lr, utils.GVR_PVC, pvcs, basicPod, clusterNodeInfo); err != nil {
+					klog.Error(err)
+					return false, nil
+				}
+				klog.V(4).Infof("Create pvc %v of %v/%v success", pvcs, basicPod.Namespace, basicPod.Name)
+			}
+			return true, nil
+		}); err != nil {
+			ch <- fmt.Sprintf("create pvc failed: %v", err)
+			return
+		}
+		ch <- ""
+	}()
+
+	// secret
+	go func() {
+		if err := wait.PollImmediate(500*time.Millisecond, 10*time.Second, func() (bool, error) {
+			klog.V(4).Info("Trying to create secret")
+			if err := r.createStorageInLeafCluster(ctx, lr, utils.GVR_SECRET, secretNames, basicPod, clusterNodeInfo); err != nil {
+				klog.Error(err)
+				return false, nil
+			}
+
+			// try to create image pull secrets, ignore err
+			if errignore := r.createStorageInLeafCluster(ctx, lr, utils.GVR_SECRET, imagePullSecrets, basicPod, clusterNodeInfo); errignore != nil {
+				klog.Warning(errignore)
+			}
+			return true, nil
+		}); err != nil {
+			ch <- fmt.Sprintf("create secrets failed: %v", err)
+			return
+		}
+		ch <- ""
+	}()
+
+	t1 := <-ch
+	t2 := <-ch
+	t3 := <-ch
+
+	errString := ""
+	errs := []string{t1, t2, t3}
+	for i := range errs {
+		if len(errs[i]) > 0 {
+			errString = errString + errs[i]
+		}
+	}
+
+	if len(errString) > 0 {
+		return fmt.Errorf("%s", errString)
+	}
+
+	return nil
+}
+
 func (r *RootPodReconciler) CreatePodInLeafCluster(ctx context.Context, lr *leafUtils.LeafResource, pod *corev1.Pod) error {
 	if err := podutils.PopulateEnvironmentVariables(ctx, pod, r.envResourceManager); err != nil {
 		// span.SetStatus(err)
@@ -631,7 +712,7 @@ func (r *RootPodReconciler) CreatePodInLeafCluster(ctx context.Context, lr *leaf
 	}
 	basicPod := podutils.FitPod(pod, lr.IgnoreLabels, clusterNodeInfo.LeafMode == leafUtils.ALL)
-	klog.V(5).Infof("Creating pod %v/%+v", pod.Namespace, pod.Name)
+	klog.V(4).Infof("Creating pod %v/%+v", pod.Namespace, pod.Name)
 
 	// create ns
 	ns := &corev1.Namespace{}
@@ -643,7 +724,7 @@ func (r *RootPodReconciler) CreatePodInLeafCluster(ctx context.Context, lr *leaf
 			// cannot get ns in root cluster, retry
 			return err
 		}
-		klog.V(5).Infof("Namespace %s does not exist for pod %s, creating it", basicPod.Namespace, basicPod.Name)
+		klog.V(4).Infof("Namespace %s does not exist for pod %s, creating it", basicPod.Namespace, basicPod.Name)
 		ns := &corev1.Namespace{
 			ObjectMeta: metav1.ObjectMeta{
 				Name: basicPod.Namespace,
@@ -652,53 +733,20 @@ func (r *RootPodReconciler) CreatePodInLeafCluster(ctx context.Context, lr *leaf
 		if createErr := lr.Client.Create(ctx, ns); createErr != nil {
 			if !errors.IsAlreadyExists(createErr) {
-				klog.V(5).Infof("Namespace %s create failed error: %v", basicPod.Namespace, createErr)
+				klog.V(4).Infof("Namespace %s create failed error: %v", basicPod.Namespace, createErr)
 				return err
 			} else {
 				// namespace already existed, skip create
-				klog.V(5).Info("Namespace %s already existed: %v", basicPod.Namespace, createErr)
+				klog.V(4).Infof("Namespace %s already existed: %v", basicPod.Namespace, createErr)
 			}
 		}
 	}
 
-	// create secret configmap pvc
-	secretNames, imagePullSecrets := podutils.GetSecrets(basicPod)
-	configMaps := podutils.GetConfigmaps(basicPod)
-	pvcs := podutils.GetPVCs(basicPod)
-	// nolint:errcheck
-	go wait.PollImmediate(500*time.Millisecond, 10*time.Minute, func() (bool, error) {
-		klog.V(5).Info("Trying to creating base dependent")
-		if err := r.createStorageInLeafCluster(ctx, lr, utils.GVR_CONFIGMAP, configMaps, basicPod, clusterNodeInfo); err != nil {
-			klog.Error(err)
-			return false, nil
-		}
-
-		klog.V(5).Infof("Create configmaps %v of %v/%v success", configMaps, basicPod.Namespace, basicPod.Name)
-		if err := r.createStorageInLeafCluster(ctx, lr, utils.GVR_PVC, pvcs, basicPod, clusterNodeInfo); err != nil {
-			klog.Error(err)
-			return false, nil
-		}
-		klog.V(5).Infof("Create pvc %v of %v/%v success", pvcs, basicPod.Namespace, basicPod.Name)
-		return true, nil
-	})
-	var err error
-	// nolint:errcheck
-	wait.PollImmediate(100*time.Millisecond, 1*time.Second, func() (bool, error) {
-		klog.V(5).Info("Trying to creating secret and service account")
-
-		if err = r.createStorageInLeafCluster(ctx, lr, utils.GVR_SECRET, secretNames, basicPod, clusterNodeInfo); err != nil {
-			klog.Error(err)
-			return false, nil
-		}
-
-		// try to create image pull secrets, ignore err
-		if errignore := r.createStorageInLeafCluster(ctx, lr, utils.GVR_SECRET, imagePullSecrets, basicPod, clusterNodeInfo); errignore != nil {
-			klog.Warning(errignore)
-		}
-		return true, nil
-	})
-	if err != nil {
-		return fmt.Errorf("create secrets failed: %v", err)
+	if err := r.createVolumes(ctx, lr, basicPod, clusterNodeInfo); err != nil {
+		klog.Errorf("Creating Volumes error %+v", basicPod)
+		return err
+	} else {
+		klog.V(4).Infof("Creating Volumes succeeded %+v", basicPod)
 	}
 
 	r.convertAuth(ctx, lr, basicPod)
@@ -707,23 +755,23 @@ func (r *RootPodReconciler) CreatePodInLeafCluster(ctx context.Context, lr *leaf
 		r.changeToMasterCoreDNS(ctx, basicPod, r.Options)
 	}
 
-	klog.V(5).Infof("Creating pod %+v", basicPod)
+	klog.V(4).Infof("Creating pod %+v", basicPod)
 
-	err = lr.Client.Create(ctx, basicPod)
+	err := lr.Client.Create(ctx, basicPod)
 	if err != nil {
 		return fmt.Errorf("could not create pod: %v", err)
 	}
-	klog.V(5).Infof("Create pod %v/%+v success", basicPod.Namespace, basicPod.Name)
+	klog.V(4).Infof("Create pod %v/%+v success", basicPod.Namespace, basicPod.Name)
 	return nil
 }
 
 func (r *RootPodReconciler) UpdatePodInLeafCluster(ctx context.Context, lr *leafUtils.LeafResource, rootpod *corev1.Pod, leafpod *corev1.Pod) error {
 	// TODO: update env
 	// TODO: update config secret pv pvc ...
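+	// For now only the pod object itself is re-synced on update; the dependent
+	// resources listed above are created once by CreatePodInLeafCluster and are
+	// not refreshed here.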
-	klog.V(5).Infof("Updating pod %v/%+v", rootpod.Namespace, rootpod.Name)
+	klog.V(4).Infof("Updating pod %v/%+v", rootpod.Namespace, rootpod.Name)
 
 	if !podutils.IsKosmosPod(leafpod) {
-		klog.V(5).Info("Pod is not created by kosmos tree, ignore")
+		klog.V(4).Info("Pod is not created by kosmos tree, ignore")
 		return nil
 	}
 	// not used
@@ -744,18 +792,18 @@ func (r *RootPodReconciler) UpdatePodInLeafCluster(ctx context.Context, lr *leaf
 		r.changeToMasterCoreDNS(ctx, podCopy, r.Options)
 	}
 
-	klog.V(5).Infof("Updating pod %+v", podCopy)
+	klog.V(4).Infof("Updating pod %+v", podCopy)
 
 	err := lr.Client.Update(ctx, podCopy)
 	if err != nil {
 		return fmt.Errorf("could not update pod: %v", err)
 	}
-	klog.V(5).Infof("Update pod %v/%+v success ", rootpod.Namespace, rootpod.Name)
+	klog.V(4).Infof("Update pod %v/%+v success", rootpod.Namespace, rootpod.Name)
 	return nil
 }
 
 func (r *RootPodReconciler) DeletePodInLeafCluster(ctx context.Context, lr *leafUtils.LeafResource, rootnamespacedname types.NamespacedName) error {
-	klog.V(5).Infof("Deleting pod %v/%+v", rootnamespacedname.Namespace, rootnamespacedname.Name)
+	klog.V(4).Infof("Deleting pod %v/%+v", rootnamespacedname.Namespace, rootnamespacedname.Name)
 	leafPod := &corev1.Pod{}
 	err := lr.Client.Get(ctx, rootnamespacedname, leafPod)
@@ -768,7 +816,7 @@ func (r *RootPodReconciler) DeletePodInLeafCluster(ctx context.Context, lr *leaf
 	}
 
 	if !podutils.IsKosmosPod(leafPod) {
-		klog.V(5).Info("Pod is not create by kosmos tree, ignore")
+		klog.V(4).Info("Pod is not created by kosmos tree, ignore")
 		return nil
 	}
 
@@ -776,11 +824,11 @@ func (r *RootPodReconciler) DeletePodInLeafCluster(ctx context.Context, lr *leaf
 	err = lr.Client.Delete(ctx, leafPod, deleteOption)
 	if err != nil {
 		if errors.IsNotFound(err) {
-			klog.V(5).Infof("Tried to delete pod %s/%s, but it did not exist in the cluster", leafPod.Namespace, leafPod.Name)
+			klog.V(4).Infof("Tried to delete pod %s/%s, but it did not exist in the cluster", leafPod.Namespace, leafPod.Name)
 			return nil
 		}
 		return fmt.Errorf("could not delete pod: %v", err)
 	}
-	klog.V(5).Infof("Delete pod %v/%+v success", leafPod.Namespace, leafPod.Name)
+	klog.V(4).Infof("Delete pod %v/%+v success", leafPod.Namespace, leafPod.Name)
 	return nil
 }
diff --git a/pkg/clustertree/cluster-manager/controllers/pv/leaf_pv_controller.go b/pkg/clustertree/cluster-manager/controllers/pv/leaf_pv_controller.go
index d6930a80d..f41ab3406 100644
--- a/pkg/clustertree/cluster-manager/controllers/pv/leaf_pv_controller.go
+++ b/pkg/clustertree/cluster-manager/controllers/pv/leaf_pv_controller.go
@@ -33,12 +33,13 @@ type LeafPVController struct {
 	RootClient    client.Client
 	RootClientSet kubernetes.Interface
 	ClusterName   string
+	IsOne2OneMode bool
 }
 
 func (l *LeafPVController) Reconcile(ctx context.Context, request reconcile.Request) (reconcile.Result, error) {
 	pv := &v1.PersistentVolume{}
-	err := l.LeafClient.Get(ctx, request.NamespacedName, pv)
 	pvNeedDelete := false
+	err := l.LeafClient.Get(ctx, request.NamespacedName, pv)
 	if err != nil {
 		if !errors.IsNotFound(err) {
 			klog.Errorf("get pv from leaf cluster failed, error: %v", err)
@@ -56,7 +57,7 @@ func (l *LeafPVController) Reconcile(ctx context.Context, request reconcile.Requ
 		return reconcile.Result{RequeueAfter: LeafPVRequeueTime}, nil
 	}
 
-	if pv.DeletionTimestamp != nil {
+	if pvNeedDelete || pv.DeletionTimestamp != nil {
 		return reconcile.Result{}, nil
 	}
 
@@ -84,7 +85,7 @@ func (l *LeafPVController) Reconcile(ctx context.Context, request reconcile.Requ
 	}
 
 	rootPV = pv.DeepCopy()
-	filterPV(rootPV, l.ClusterName)
+	
filterPV(rootPV, utils.NodeAffinity4RootPV(pv, l.IsOne2OneMode, l.ClusterName)) nn := types.NamespacedName{ Name: rootPV.Spec.ClaimRef.Name, Namespace: rootPV.Spec.ClaimRef.Namespace, @@ -101,7 +102,7 @@ func (l *LeafPVController) Reconcile(ctx context.Context, request reconcile.Requ rootPV.Spec.ClaimRef.UID = rootPVC.UID rootPV.Spec.ClaimRef.ResourceVersion = rootPVC.ResourceVersion - utils.AddResourceOwnersAnnotations(rootPV.Annotations, l.ClusterName) + utils.AddResourceClusters(rootPV.Annotations, l.ClusterName) rootPV, err = l.RootClientSet.CoreV1().PersistentVolumes().Create(ctx, rootPV, metav1.CreateOptions{}) if err != nil || rootPV == nil { @@ -112,7 +113,7 @@ func (l *LeafPVController) Reconcile(ctx context.Context, request reconcile.Requ return reconcile.Result{}, nil } - if !utils.HasResourceOwnersAnnotations(rootPV.Annotations, l.ClusterName) { + if !utils.HasResourceClusters(rootPV.Annotations, l.ClusterName) { klog.Errorf("meet the same name root pv name: %q !", request.NamespacedName.Name) return reconcile.Result{}, nil } @@ -128,7 +129,7 @@ func (l *LeafPVController) Reconcile(ctx context.Context, request reconcile.Requ return reconcile.Result{}, nil } - filterPV(rootPV, l.ClusterName) + filterPV(rootPV, utils.NodeAffinity4RootPV(pv, l.IsOne2OneMode, l.ClusterName)) if pvCopy.Spec.ClaimRef != nil || rootPV.Spec.ClaimRef == nil { nn := types.NamespacedName{ Name: pvCopy.Spec.ClaimRef.Name, @@ -151,7 +152,7 @@ func (l *LeafPVController) Reconcile(ctx context.Context, request reconcile.Requ pvCopy.Spec.NodeAffinity = rootPV.Spec.NodeAffinity pvCopy.UID = rootPV.UID pvCopy.ResourceVersion = rootPV.ResourceVersion - utils.AddResourceOwnersAnnotations(pvCopy.Annotations, l.ClusterName) + utils.AddResourceClusters(pvCopy.Annotations, l.ClusterName) if utils.IsPVEqual(rootPV, pvCopy) { return reconcile.Result{}, nil diff --git a/pkg/clustertree/cluster-manager/controllers/pv/oneway_pv_controller.go b/pkg/clustertree/cluster-manager/controllers/pv/oneway_pv_controller.go new file mode 100644 index 000000000..65e6ff62d --- /dev/null +++ b/pkg/clustertree/cluster-manager/controllers/pv/oneway_pv_controller.go @@ -0,0 +1,206 @@ +package pv + +import ( + "context" + "time" + + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" + "k8s.io/apimachinery/pkg/runtime/schema" + "k8s.io/apimachinery/pkg/types" + "k8s.io/client-go/dynamic" + "k8s.io/klog" + controllerruntime "sigs.k8s.io/controller-runtime" + "sigs.k8s.io/controller-runtime/pkg/builder" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/controller" + "sigs.k8s.io/controller-runtime/pkg/event" + "sigs.k8s.io/controller-runtime/pkg/manager" + "sigs.k8s.io/controller-runtime/pkg/predicate" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + leafUtils "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/utils" + "github.com/kosmos.io/kosmos/pkg/utils" +) + +const ( + controllerName = "oneway-pv-controller" + requeueTime = 10 * time.Second + quickRequeueTime = 3 * time.Second + csiDriverName = "infini.volumepath.csi" +) + +var VolumePathGVR = schema.GroupVersionResource{ + Version: "v1alpha1", + Group: "lvm.infinilabs.com", + Resource: "volumepaths", +} + +type OnewayPVController struct { + Root client.Client + RootDynamic dynamic.Interface + GlobalLeafManager leafUtils.LeafResourceManager +} + +func (c *OnewayPVController) SetupWithManager(mgr manager.Manager) error { + 
predicatesFunc := predicate.Funcs{ + CreateFunc: func(createEvent event.CreateEvent) bool { + curr := createEvent.Object.(*corev1.PersistentVolume) + return curr.Spec.CSI != nil && curr.Spec.CSI.Driver == csiDriverName + }, + UpdateFunc: func(updateEvent event.UpdateEvent) bool { + curr := updateEvent.ObjectNew.(*corev1.PersistentVolume) + return curr.Spec.CSI != nil && curr.Spec.CSI.Driver == csiDriverName + }, + DeleteFunc: func(deleteEvent event.DeleteEvent) bool { + curr := deleteEvent.Object.(*corev1.PersistentVolume) + return curr.Spec.CSI != nil && curr.Spec.CSI.Driver == csiDriverName + }, + GenericFunc: func(genericEvent event.GenericEvent) bool { + return false + }, + } + + return controllerruntime.NewControllerManagedBy(mgr). + Named(controllerName). + WithOptions(controller.Options{}). + For(&corev1.PersistentVolume{}, builder.WithPredicates(predicatesFunc)). + Complete(c) +} + +func (c *OnewayPVController) Reconcile(ctx context.Context, request reconcile.Request) (reconcile.Result, error) { + klog.V(4).Infof("============ %s starts to reconcile %s ============", controllerName, request.Name) + defer func() { + klog.V(4).Infof("============ %s has been reconciled =============", request.Name) + }() + + pv := &corev1.PersistentVolume{} + pvErr := c.Root.Get(ctx, types.NamespacedName{Name: request.Name}, pv) + if pvErr != nil && !errors.IsNotFound(pvErr) { + klog.Errorf("get pv %s error: %v", request.Name, pvErr) + return reconcile.Result{RequeueAfter: requeueTime}, nil + } + + // volumePath has the same name with pv + vp, err := c.RootDynamic.Resource(VolumePathGVR).Get(ctx, request.Name, metav1.GetOptions{}) + if err != nil { + if errors.IsNotFound(err) { + klog.V(4).Infof("vp %s not found", request.Name) + return reconcile.Result{RequeueAfter: requeueTime}, nil + } + klog.Errorf("get volumePath %s error: %v", request.Name, err) + return reconcile.Result{RequeueAfter: requeueTime}, nil + } + + nodeName, _, _ := unstructured.NestedString(vp.Object, "spec", "node") + if nodeName == "" { + klog.Warningf("vp %s's nodeName is empty, skip", request.Name) + return reconcile.Result{}, nil + } + + node := &corev1.Node{} + err = c.Root.Get(ctx, types.NamespacedName{Name: nodeName}, node) + if err != nil { + if errors.IsNotFound(err) { + klog.Warningf("cannot find node %s, error: %v", nodeName, err) + return reconcile.Result{}, nil + } + klog.Warningf("get node %s error: %v, will requeue", nodeName, err) + return reconcile.Result{RequeueAfter: requeueTime}, nil + } + + if !utils.IsKosmosNode(node) { + return reconcile.Result{}, nil + } + + clusterName := node.Annotations[utils.KosmosNodeOwnedByClusterAnnotations] + if clusterName == "" { + klog.Warningf("node %s is kosmos node, but node's %s annotation is empty, will requeue", node.Name, utils.KosmosNodeOwnedByClusterAnnotations) + return reconcile.Result{RequeueAfter: requeueTime}, nil + } + + leaf, err := c.GlobalLeafManager.GetLeafResource(clusterName) + if err != nil { + klog.Warningf("get leafManager for cluster %s failed, error: %v, will requeue", clusterName, err) + return reconcile.Result{RequeueAfter: requeueTime}, nil + } + + if pvErr != nil && errors.IsNotFound(pvErr) || + !pv.DeletionTimestamp.IsZero() { + return c.clearLeafPV(ctx, leaf, pv) + } + + return c.ensureLeafPV(ctx, leaf, pv) +} + +func (c *OnewayPVController) clearLeafPV(ctx context.Context, leaf *leafUtils.LeafResource, pv *corev1.PersistentVolume) (reconcile.Result, error) { + err := leaf.Clientset.CoreV1().PersistentVolumes().Delete(ctx, pv.Name, 
metav1.DeleteOptions{}) + if err != nil && !errors.IsNotFound(err) { + klog.Errorf("delete pv %s in %s cluster failed, error: %v", pv.Name, leaf.ClusterName, err) + return reconcile.Result{RequeueAfter: requeueTime}, nil + } + return reconcile.Result{}, nil +} + +func (c *OnewayPVController) ensureLeafPV(ctx context.Context, leaf *leafUtils.LeafResource, pv *corev1.PersistentVolume) (reconcile.Result, error) { + clusterName := leaf.ClusterName + newPV := pv.DeepCopy() + + pvc := &corev1.PersistentVolumeClaim{} + err := leaf.Client.Get(ctx, types.NamespacedName{ + Namespace: newPV.Spec.ClaimRef.Namespace, + Name: newPV.Spec.ClaimRef.Name, + }, pvc) + if err != nil { + klog.Errorf("get pvc from cluster %s error: %v, will requeue", leaf.ClusterName, err) + return reconcile.Result{RequeueAfter: quickRequeueTime}, nil + } + + newPV.Spec.ClaimRef.ResourceVersion = pvc.ResourceVersion + newPV.Spec.ClaimRef.UID = pvc.UID + + anno := newPV.GetAnnotations() + anno = utils.AddResourceClusters(anno, leaf.ClusterName) + anno[utils.KosmosGlobalLabel] = "true" + newPV.SetAnnotations(anno) + + oldPV := &corev1.PersistentVolume{} + err = leaf.Client.Get(ctx, types.NamespacedName{ + Name: newPV.Name, + }, oldPV) + if err != nil && !errors.IsNotFound(err) { + klog.Errorf("get pv from cluster %s error: %v, will requeue", leaf.ClusterName, err) + return reconcile.Result{RequeueAfter: requeueTime}, nil + } + + // create + if err != nil && errors.IsNotFound(err) { + newPV.UID = "" + newPV.ResourceVersion = "" + if err = leaf.Client.Create(ctx, newPV); err != nil && !errors.IsAlreadyExists(err) { + klog.Errorf("create pv to cluster %s error: %v, will requeue", clusterName, err) + return reconcile.Result{RequeueAfter: requeueTime}, nil + } + return reconcile.Result{}, nil + } + + // update + newPV.ResourceVersion = oldPV.ResourceVersion + newPV.UID = oldPV.UID + if utils.IsPVEqual(oldPV, newPV) { + return reconcile.Result{}, nil + } + patch, err := utils.CreateMergePatch(oldPV, newPV) + if err != nil { + klog.Errorf("patch pv error: %v", err) + return reconcile.Result{}, err + } + _, err = leaf.Clientset.CoreV1().PersistentVolumes().Patch(ctx, newPV.Name, types.MergePatchType, patch, metav1.PatchOptions{}) + if err != nil { + klog.Errorf("patch pv %s to %s cluster failed, error: %v", newPV.Name, leaf.ClusterName, err) + return reconcile.Result{RequeueAfter: requeueTime}, nil + } + return reconcile.Result{}, nil +} diff --git a/pkg/clustertree/cluster-manager/controllers/pv/root_pv_controller.go b/pkg/clustertree/cluster-manager/controllers/pv/root_pv_controller.go index d7d2f2045..0936497e7 100644 --- a/pkg/clustertree/cluster-manager/controllers/pv/root_pv_controller.go +++ b/pkg/clustertree/cluster-manager/controllers/pv/root_pv_controller.go @@ -52,7 +52,7 @@ func (r *RootPVController) SetupWithManager(mgr manager.Manager) error { } pv := deleteEvent.Object.(*v1.PersistentVolume) - clusters := utils.ListResourceOwnersAnnotations(pv.Annotations) + clusters := utils.ListResourceClusters(pv.Annotations) if len(clusters) == 0 { klog.Warningf("pv leaf %q doesn't existed", deleteEvent.Object.GetName()) return false diff --git a/pkg/clustertree/cluster-manager/controllers/pvc/leaf_pvc_controller.go b/pkg/clustertree/cluster-manager/controllers/pvc/leaf_pvc_controller.go index 07a85bab7..821ee7687 100644 --- a/pkg/clustertree/cluster-manager/controllers/pvc/leaf_pvc_controller.go +++ b/pkg/clustertree/cluster-manager/controllers/pvc/leaf_pvc_controller.go @@ -10,6 +10,7 @@ import ( "k8s.io/apimachinery/pkg/api/errors" 
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" mergetypes "k8s.io/apimachinery/pkg/types" + "k8s.io/apimachinery/pkg/util/wait" "k8s.io/client-go/kubernetes" "k8s.io/klog" ctrl "sigs.k8s.io/controller-runtime" @@ -65,6 +66,29 @@ func (l *LeafPVCController) Reconcile(ctx context.Context, request reconcile.Req if reflect.DeepEqual(rootPVC.Status, pvcCopy.Status) { return reconcile.Result{}, nil } + + //when root pvc is not bound, it's status can't be changed to bound + if pvcCopy.Status.Phase == v1.ClaimBound { + err = wait.PollImmediate(500*time.Millisecond, 1*time.Minute, func() (bool, error) { + if rootPVC.Spec.VolumeName == "" { + klog.Warningf("pvc namespace: %q, name: %q is not bounded", request.NamespacedName.Namespace, + request.NamespacedName.Name) + err = l.RootClient.Get(ctx, request.NamespacedName, rootPVC) + if err != nil { + return false, err + } + return false, nil + } + return true, nil + }) + if err != nil { + if !errors.IsNotFound(err) { + return reconcile.Result{RequeueAfter: LeafPVCRequeueTime}, nil + } + return reconcile.Result{}, nil + } + } + if err = filterPVC(pvcCopy, l.ClusterName); err != nil { return reconcile.Result{}, nil } @@ -72,7 +96,7 @@ func (l *LeafPVCController) Reconcile(ctx context.Context, request reconcile.Req delete(pvcCopy.Annotations, utils.PVCSelectedNodeKey) pvcCopy.ResourceVersion = rootPVC.ResourceVersion pvcCopy.OwnerReferences = rootPVC.OwnerReferences - utils.AddResourceOwnersAnnotations(pvcCopy.Annotations, l.ClusterName) + utils.AddResourceClusters(pvcCopy.Annotations, l.ClusterName) pvcCopy.Spec = rootPVC.Spec klog.V(4).Infof("rootPVC %+v\n, pvc %+v", rootPVC, pvcCopy) diff --git a/pkg/clustertree/cluster-manager/controllers/pvc/oneway_pvc_controller.go b/pkg/clustertree/cluster-manager/controllers/pvc/oneway_pvc_controller.go new file mode 100644 index 000000000..c6f005a46 --- /dev/null +++ b/pkg/clustertree/cluster-manager/controllers/pvc/oneway_pvc_controller.go @@ -0,0 +1,198 @@ +package pvc + +import ( + "context" + "time" + + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" + "k8s.io/apimachinery/pkg/runtime/schema" + "k8s.io/apimachinery/pkg/types" + "k8s.io/client-go/dynamic" + "k8s.io/klog" + controllerruntime "sigs.k8s.io/controller-runtime" + "sigs.k8s.io/controller-runtime/pkg/builder" + "sigs.k8s.io/controller-runtime/pkg/client" + "sigs.k8s.io/controller-runtime/pkg/controller" + "sigs.k8s.io/controller-runtime/pkg/event" + "sigs.k8s.io/controller-runtime/pkg/manager" + "sigs.k8s.io/controller-runtime/pkg/predicate" + "sigs.k8s.io/controller-runtime/pkg/reconcile" + + leafUtils "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/utils" + "github.com/kosmos.io/kosmos/pkg/utils" +) + +const ( + controllerName = "oneway-pvc-controller" + requeueTime = 10 * time.Second + vpAnnotationKey = "volumepath" +) + +var VolumePathGVR = schema.GroupVersionResource{ + Version: "v1alpha1", + Group: "lvm.infinilabs.com", + Resource: "volumepaths", +} + +type OnewayPVCController struct { + Root client.Client + RootDynamic dynamic.Interface + GlobalLeafManager leafUtils.LeafResourceManager +} + +func pvcEventFilter(pvc *corev1.PersistentVolumeClaim) bool { + anno := pvc.GetAnnotations() + if anno == nil { + return false + } + if _, ok := anno[vpAnnotationKey]; ok { + return true + } + return false +} + +func (c *OnewayPVCController) SetupWithManager(mgr manager.Manager) error { + predicatesFunc := predicate.Funcs{ + 
CreateFunc: func(createEvent event.CreateEvent) bool { + curr := createEvent.Object.(*corev1.PersistentVolumeClaim) + return pvcEventFilter(curr) + }, + UpdateFunc: func(updateEvent event.UpdateEvent) bool { + curr := updateEvent.ObjectNew.(*corev1.PersistentVolumeClaim) + return pvcEventFilter(curr) + }, + DeleteFunc: func(deleteEvent event.DeleteEvent) bool { + curr := deleteEvent.Object.(*corev1.PersistentVolumeClaim) + return pvcEventFilter(curr) + }, + GenericFunc: func(genericEvent event.GenericEvent) bool { + return false + }, + } + return controllerruntime.NewControllerManagedBy(mgr). + Named(controllerName). + WithOptions(controller.Options{}). + For(&corev1.PersistentVolumeClaim{}, builder.WithPredicates(predicatesFunc)). + Complete(c) +} + +func (c *OnewayPVCController) Reconcile(ctx context.Context, request reconcile.Request) (reconcile.Result, error) { + klog.V(4).Infof("============ %s starts to reconcile %s ============", controllerName, request.Name) + defer func() { + klog.V(4).Infof("============ %s has been reconciled =============", request.Name) + }() + + rootPVC := &corev1.PersistentVolumeClaim{} + pvcErr := c.Root.Get(ctx, types.NamespacedName{Namespace: request.Namespace, Name: request.Name}, rootPVC) + if pvcErr != nil && !errors.IsNotFound(pvcErr) { + klog.Errorf("get pvc %s error: %v", request.Name, pvcErr) + return reconcile.Result{RequeueAfter: requeueTime}, nil + } + + // volumePath has the same name with pvc + vp, err := c.RootDynamic.Resource(VolumePathGVR).Get(ctx, request.Name, metav1.GetOptions{}) + if err != nil { + if errors.IsNotFound(err) { + klog.V(4).Infof("vp %s not found", request.Name) + return reconcile.Result{RequeueAfter: requeueTime}, nil + } + klog.Errorf("get volumePath %s error: %v", request.Name, err) + return reconcile.Result{RequeueAfter: requeueTime}, nil + } + + nodeName, _, _ := unstructured.NestedString(vp.Object, "spec", "node") + if nodeName == "" { + klog.Warningf("vp %s's nodeName is empty, skip", request.Name) + return reconcile.Result{}, nil + } + + node := &corev1.Node{} + err = c.Root.Get(ctx, types.NamespacedName{Name: nodeName}, node) + if err != nil { + if errors.IsNotFound(err) { + klog.Warningf("cannot find node %s, error: %v", nodeName, err) + return reconcile.Result{}, nil + } + klog.Warningf("get node %s error: %v, will requeue", nodeName, err) + return reconcile.Result{RequeueAfter: requeueTime}, nil + } + + if !utils.IsKosmosNode(node) { + return reconcile.Result{}, nil + } + + clusterName := node.Annotations[utils.KosmosNodeOwnedByClusterAnnotations] + if clusterName == "" { + klog.Warningf("node %s is kosmos node, but node's %s annotation is empty, will requeue", node.Name, utils.KosmosNodeOwnedByClusterAnnotations) + return reconcile.Result{RequeueAfter: requeueTime}, nil + } + + leaf, err := c.GlobalLeafManager.GetLeafResource(clusterName) + if err != nil { + klog.Warningf("get leafManager for cluster %s failed, error: %v, will requeue", clusterName, err) + return reconcile.Result{RequeueAfter: requeueTime}, nil + } + + if pvcErr != nil && errors.IsNotFound(pvcErr) || + !rootPVC.DeletionTimestamp.IsZero() { + return c.clearLeafPVC(ctx, leaf, rootPVC) + } + + return c.ensureLeafPVC(ctx, leaf, rootPVC) +} + +func (c *OnewayPVCController) clearLeafPVC(ctx context.Context, leaf *leafUtils.LeafResource, pvc *corev1.PersistentVolumeClaim) (reconcile.Result, error) { + return reconcile.Result{}, nil +} + +func (c *OnewayPVCController) ensureLeafPVC(ctx context.Context, leaf *leafUtils.LeafResource, pvc 
*corev1.PersistentVolumeClaim) (reconcile.Result, error) {
+	clusterName := leaf.ClusterName
+	newPVC := pvc.DeepCopy()
+
+	anno := newPVC.GetAnnotations()
+	anno = utils.AddResourceClusters(anno, leaf.ClusterName)
+	anno[utils.KosmosGlobalLabel] = "true"
+	newPVC.SetAnnotations(anno)
+
+	oldPVC := &corev1.PersistentVolumeClaim{}
+	err := leaf.Client.Get(ctx, types.NamespacedName{
+		Name:      newPVC.Name,
+		Namespace: newPVC.Namespace,
+	}, oldPVC)
+	if err != nil && !errors.IsNotFound(err) {
+		klog.Errorf("get pvc from cluster %s error: %v, will requeue", leaf.ClusterName, err)
+		return reconcile.Result{RequeueAfter: requeueTime}, nil
+	}
+
+	// create
+	if err != nil && errors.IsNotFound(err) {
+		newPVC.UID = ""
+		newPVC.ResourceVersion = ""
+		if err = leaf.Client.Create(ctx, newPVC); err != nil && !errors.IsAlreadyExists(err) {
+			klog.Errorf("create pvc to cluster %s error: %v, will requeue", clusterName, err)
+			return reconcile.Result{RequeueAfter: requeueTime}, nil
+		}
+		return reconcile.Result{}, nil
+	}
+
+	// update
+	newPVC.ResourceVersion = oldPVC.ResourceVersion
+	newPVC.UID = oldPVC.UID
+	if utils.IsPVCEqual(oldPVC, newPVC) {
+		return reconcile.Result{}, nil
+	}
+	patch, err := utils.CreateMergePatch(oldPVC, newPVC)
+	if err != nil {
+		klog.Errorf("patch pvc error: %v", err)
+		return reconcile.Result{}, err
+	}
+	_, err = leaf.Clientset.CoreV1().PersistentVolumeClaims(newPVC.Namespace).Patch(ctx, newPVC.Name, types.MergePatchType, patch, metav1.PatchOptions{})
+	if err != nil {
+		klog.Errorf("patch pvc %s to %s cluster failed, error: %v", newPVC.Name, leaf.ClusterName, err)
+		return reconcile.Result{RequeueAfter: requeueTime}, nil
+	}
+	return reconcile.Result{}, nil
+}
diff --git a/pkg/clustertree/cluster-manager/controllers/pvc/root_pvc_controller.go b/pkg/clustertree/cluster-manager/controllers/pvc/root_pvc_controller.go
index 83d0247c0..eba645ceb 100644
--- a/pkg/clustertree/cluster-manager/controllers/pvc/root_pvc_controller.go
+++ b/pkg/clustertree/cluster-manager/controllers/pvc/root_pvc_controller.go
@@ -44,7 +44,7 @@ func (r *RootPVCController) Reconcile(ctx context.Context, request reconcile.Req
 		return reconcile.Result{}, nil
 	}
 
-	clusters := utils.ListResourceOwnersAnnotations(pvc.Annotations)
+	clusters := utils.ListResourceClusters(pvc.Annotations)
 	if len(clusters) == 0 {
 		klog.V(4).Infof("pvc leaf %q: %q doesn't existed", request.NamespacedName.Namespace, request.NamespacedName.Name)
 		return reconcile.Result{RequeueAfter: RootPVCRequeueTime}, nil
@@ -71,9 +71,11 @@ func (r *RootPVCController) Reconcile(ctx context.Context, request reconcile.Req
 		return reconcile.Result{}, nil
 	}*/
 
-	if reflect.DeepEqual(pvcOld.Spec, pvc.Spec) {
+	if reflect.DeepEqual(pvcOld.Spec.Resources.Requests, pvc.Spec.Resources.Requests) {
 		return reconcile.Result{}, nil
 	}
+	pvcOld.Spec.Resources.Requests = pvc.Spec.Resources.Requests
+	pvc.Spec = pvcOld.Spec
 
 	pvc.Annotations = pvcOld.Annotations
 	pvc.ObjectMeta.UID = pvcOld.ObjectMeta.UID
@@ -114,7 +116,7 @@ func (r *RootPVCController) SetupWithManager(mgr manager.Manager) error {
 			}
 			pvc := deleteEvent.Object.(*v1.PersistentVolumeClaim)
-			clusters := utils.ListResourceOwnersAnnotations(pvc.Annotations)
+			clusters := utils.ListResourceClusters(pvc.Annotations)
 			if len(clusters) == 0 {
 				klog.V(4).Infof("pvc leaf %q: %q doesn't existed", deleteEvent.Object.GetNamespace(), deleteEvent.Object.GetName())
 				return false
diff --git a/pkg/clustertree/cluster-manager/node-server/api/errdefs.go b/pkg/clustertree/cluster-manager/node-server/api/errdefs.go
new file mode 
100644 index 000000000..59c38885a --- /dev/null +++ b/pkg/clustertree/cluster-manager/node-server/api/errdefs.go @@ -0,0 +1,73 @@ +package api + +import ( + "errors" + "fmt" +) + +const ( + ERR_NOT_FOUND = "ErrNotFound" + ERR_INVALID_INPUT = "ErrInvalidInput" +) + +type causal interface { + Cause() error + error +} + +type ErrNodeServer interface { + GetErrorType() string + error +} + +type errNodeServer struct { + errType string + error +} + +func (e *errNodeServer) GetErrorType() string { + return e.errType +} + +func ErrNotFound(msg string) error { + return &errNodeServer{ERR_NOT_FOUND, errors.New(msg)} +} + +func ErrInvalidInput(msg string) error { + return &errNodeServer{ERR_INVALID_INPUT, errors.New(msg)} +} + +func IsMatchErrType(err error, errType string) bool { + if err == nil { + return false + } + if e, ok := err.(ErrNodeServer); ok { + return e.GetErrorType() == errType + } + + if e, ok := err.(causal); ok { + return IsMatchErrType(e.Cause(), errType) + } + + return false +} + +func IsNotFound(err error) bool { + return IsMatchErrType(err, ERR_NOT_FOUND) +} + +func IsInvalidInput(err error) bool { + return IsMatchErrType(err, ERR_INVALID_INPUT) +} + +func ConvertNotFound(err error) error { + return &errNodeServer{ERR_NOT_FOUND, err} +} + +func ConvertInvalidInput(err error) error { + return &errNodeServer{ERR_INVALID_INPUT, err} +} + +func ErrInvalidInputf(format string, args ...interface{}) error { + return &errNodeServer{ERR_INVALID_INPUT, fmt.Errorf(format, args...)} +} diff --git a/pkg/clustertree/cluster-manager/node-server/api/exec.go b/pkg/clustertree/cluster-manager/node-server/api/exec.go new file mode 100644 index 000000000..295772914 --- /dev/null +++ b/pkg/clustertree/cluster-manager/node-server/api/exec.go @@ -0,0 +1,256 @@ +package api + +import ( + "context" + "fmt" + "io" + "net/http" + "strings" + "time" + + "github.com/gorilla/mux" + "github.com/pkg/errors" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/types" + "k8s.io/apimachinery/pkg/util/httpstream" + "k8s.io/client-go/kubernetes/scheme" + remoteutils "k8s.io/client-go/tools/remotecommand" + + "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/node-server/api/remotecommand" +) + +type execIO struct { + tty bool + stdin io.Reader + stdout io.WriteCloser + stderr io.WriteCloser + chResize chan TermSize +} + +type ContainerExecHandlerFunc func(ctx context.Context, namespace, podName, containerName string, cmd []string, attach AttachIO, getClient getClientFunc) error + +func (e *execIO) TTY() bool { + return e.tty +} + +func (e *execIO) Stdin() io.Reader { + return e.stdin +} + +func (e *execIO) Stdout() io.WriteCloser { + return e.stdout +} + +func (e *execIO) Stderr() io.WriteCloser { + return e.stderr +} + +func (e *execIO) Resize() <-chan TermSize { + return e.chResize +} + +type containerExecutor struct { + h ContainerExecHandlerFunc + namespace, pod, container string + ctx context.Context + getClient getClientFunc +} + +func (c *containerExecutor) ExecInContainer(name string, uid types.UID, container string, cmd []string, in io.Reader, out, err io.WriteCloser, tty bool, resize <-chan remoteutils.TerminalSize, timeout time.Duration) error { + eio := &execIO{ + tty: tty, + stdin: in, + stdout: out, + stderr: err, + } + + if tty { + eio.chResize = make(chan TermSize) + } + + ctx, cancel := context.WithCancel(c.ctx) + defer cancel() + + if tty { + go func() { + send := func(s remoteutils.TerminalSize) bool { + select { + case eio.chResize <- TermSize{Width: s.Width, Height: s.Height}: + 
return false + case <-ctx.Done(): + return true + } + } + + for { + select { + case s := <-resize: + if send(s) { + return + } + case <-ctx.Done(): + return + } + } + }() + } + + return c.h(c.ctx, c.namespace, c.pod, c.container, cmd, eio, c.getClient) +} + +type AttachIO interface { + Stdin() io.Reader + Stdout() io.WriteCloser + Stderr() io.WriteCloser + TTY() bool + Resize() <-chan TermSize +} + +type TermSize struct { + Width uint16 + Height uint16 +} + +type termSize struct { + attach AttachIO +} + +func (t *termSize) Next() *remoteutils.TerminalSize { + resize := <-t.attach.Resize() + return &remoteutils.TerminalSize{ + Height: resize.Height, + Width: resize.Width, + } +} + +type ContainerExecOptions struct { + StreamIdleTimeout time.Duration + StreamCreationTimeout time.Duration +} + +func getVarFromReq(req *http.Request) (string, string, string, []string, []string) { + vars := mux.Vars(req) + namespace := vars[namespaceVar] + pod := vars[podVar] + container := vars[containerVar] + + supportedStreamProtocols := strings.Split(req.Header.Get(httpstream.HeaderProtocolVersion), ",") + + q := req.URL.Query() + command := q[commandVar] + + return namespace, pod, container, supportedStreamProtocols, command +} + +func getExecOptions(req *http.Request) (*remotecommand.Options, error) { + tty := req.FormValue(execTTYParam) == "1" + stdin := req.FormValue(execStdinParam) == "1" + stdout := req.FormValue(execStdoutParam) == "1" + stderr := req.FormValue(execStderrParam) == "1" + + if tty && stderr { + return nil, errors.New("cannot exec with tty and stderr") + } + + if !stdin && !stdout && !stderr { + return nil, errors.New("you must specify at least one of stdin, stdout, stderr") + } + return &remotecommand.Options{ + Stdin: stdin, + Stdout: stdout, + Stderr: stderr, + TTY: tty, + }, nil +} + +func execInContainer(ctx context.Context, namespace string, podName string, containerName string, cmd []string, attach AttachIO, getClient getClientFunc) error { + defer func() { + if attach.Stdout() != nil { + attach.Stdout().Close() + } + if attach.Stderr() != nil { + attach.Stderr().Close() + } + }() + + client, config, err := getClient(ctx, namespace, podName) + + if err != nil { + return fmt.Errorf("could not get the leaf client, podName: %s, namespace: %s, err: %v", podName, namespace, err) + } + + req := client.CoreV1().RESTClient(). + Post(). + Namespace(namespace). + Resource("pods"). + Name(podName). + SubResource("exec"). + Timeout(0). 
+ VersionedParams(&corev1.PodExecOptions{ + Container: containerName, + Command: cmd, + Stdin: attach.Stdin() != nil, + Stdout: attach.Stdout() != nil, + Stderr: attach.Stderr() != nil, + TTY: attach.TTY(), + }, scheme.ParameterCodec) + + exec, err := remoteutils.NewSPDYExecutor(config, "POST", req.URL()) + if err != nil { + return fmt.Errorf("could not make remote command: %v", err) + } + + ts := &termSize{attach: attach} + + err = exec.StreamWithContext(ctx, remoteutils.StreamOptions{ + Stdin: attach.Stdin(), + Stdout: attach.Stdout(), + Stderr: attach.Stderr(), + Tty: attach.TTY(), + TerminalSizeQueue: ts, + }) + + if err != nil { + return err + } + + return nil +} + +func ContainerExecHandler(cfg ContainerExecOptions, getClient getClientFunc) http.HandlerFunc { + return handleError(func(w http.ResponseWriter, req *http.Request) error { + namespace, pod, container, supportedStreamProtocols, command := getVarFromReq(req) + + streamOpts, err := getExecOptions(req) + if err != nil { + return ConvertInvalidInput(err) + } + + ctx, cancel := context.WithCancel(req.Context()) + defer cancel() + + exec := &containerExecutor{ + ctx: ctx, + h: execInContainer, + pod: pod, + namespace: namespace, + container: container, + getClient: getClient, + } + remotecommand.ServeExec( + w, + req, + exec, + "", + "", + container, + command, + streamOpts, + cfg.StreamIdleTimeout, + cfg.StreamCreationTimeout, + supportedStreamProtocols, + ) + + return nil + }) +} diff --git a/pkg/clustertree/cluster-manager/node-server/api/helper.go b/pkg/clustertree/cluster-manager/node-server/api/helper.go new file mode 100644 index 000000000..fe465f1e9 --- /dev/null +++ b/pkg/clustertree/cluster-manager/node-server/api/helper.go @@ -0,0 +1,94 @@ +package api + +import ( + "context" + "io" + "net/http" + + "k8s.io/client-go/kubernetes" + "k8s.io/client-go/rest" + "k8s.io/klog/v2" +) + +const ( + execTTYParam = "tty" + execStdinParam = "input" + execStdoutParam = "output" + execStderrParam = "error" + namespaceVar = "namespace" + podVar = "pod" + containerVar = "container" + commandVar = "command" +) + +type handlerFunc func(http.ResponseWriter, *http.Request) error + +type getClientFunc func(ctx context.Context, namespace string, podName string) (kubernetes.Interface, *rest.Config, error) + +func handleError(f handlerFunc) http.HandlerFunc { + return func(w http.ResponseWriter, req *http.Request) { + err := f(w, req) + if err == nil { + return + } + + code := httpStatusCode(err) + w.WriteHeader(code) + if _, err := io.WriteString(w, err.Error()); err != nil { + klog.Error("error writing error response") + } + + if code >= 500 { + klog.Error("Internal server error on request") + } else { + klog.Error("Error on request") + } + } +} + +func flushOnWrite(w io.Writer) io.Writer { + if fw, ok := w.(writeFlusher); ok { + return &flushWriter{fw} + } + return w +} + +type flushWriter struct { + w writeFlusher +} + +type writeFlusher interface { + Flush() + Write([]byte) (int, error) +} + +func (fw *flushWriter) Write(p []byte) (int, error) { + n, err := fw.w.Write(p) + if n > 0 { + fw.w.Flush() + } + return n, err +} + +func httpStatusCode(err error) int { + switch { + case err == nil: + return http.StatusOK + case IsNotFound(err): + return http.StatusNotFound + case IsInvalidInput(err): + return http.StatusBadRequest + default: + return http.StatusInternalServerError + } +} + +func NotImplemented(w http.ResponseWriter, r *http.Request) { + klog.Warning("501 not implemented") + http.Error(w, "501 not implemented", 
http.StatusNotImplemented) +} + +func NotFound(w http.ResponseWriter, r *http.Request) { + klog.Warningf("404 request not found, url: %s", r.URL) + http.Error(w, "404 request not found", http.StatusNotFound) +} diff --git a/pkg/clustertree/cluster-manager/node-server/api/logs.go b/pkg/clustertree/cluster-manager/node-server/api/logs.go new file mode 100644 index 000000000..b0a7317fa --- /dev/null +++ b/pkg/clustertree/cluster-manager/node-server/api/logs.go @@ -0,0 +1,169 @@ +package api + +import ( + "context" + "fmt" + "io" + "net/http" + "net/url" + "strconv" + "time" + + "github.com/gorilla/mux" + "github.com/pkg/errors" + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/klog/v2" +) + +type ContainerLogsHandlerFunc func(ctx context.Context, namespace, podName, containerName string, opts ContainerLogOpts) (io.ReadCloser, error) + +type ContainerLogOpts struct { + Tail int + LimitBytes int + Timestamps bool + Follow bool + Previous bool + SinceSeconds int + SinceTime time.Time +} + +func parseLogOptions(q url.Values) (opts ContainerLogOpts, err error) { + if tailLines := q.Get("tailLines"); tailLines != "" { + opts.Tail, err = strconv.Atoi(tailLines) + if err != nil { + return opts, ConvertInvalidInput(errors.Wrap(err, "could not parse \"tailLines\"")) + } + if opts.Tail < 0 { + return opts, ErrInvalidInputf("\"tailLines\" is %d", opts.Tail) + } + } + if follow := q.Get("follow"); follow != "" { + opts.Follow, err = strconv.ParseBool(follow) + if err != nil { + return opts, ConvertInvalidInput(errors.Wrap(err, "could not parse \"follow\"")) + } + } + if limitBytes := q.Get("limitBytes"); limitBytes != "" { + opts.LimitBytes, err = strconv.Atoi(limitBytes) + if err != nil { + return opts, ConvertInvalidInput(errors.Wrap(err, "could not parse \"limitBytes\"")) + } + if opts.LimitBytes < 1 { + return opts, ErrInvalidInputf("\"limitBytes\" is %d", opts.LimitBytes) + } + } + if previous := q.Get("previous"); previous != "" { + opts.Previous, err = strconv.ParseBool(previous) + if err != nil { + return opts, ConvertInvalidInput(errors.Wrap(err, "could not parse \"previous\"")) + } + } + if sinceSeconds := q.Get("sinceSeconds"); sinceSeconds != "" { + opts.SinceSeconds, err = strconv.Atoi(sinceSeconds) + if err != nil { + return opts, ConvertInvalidInput(errors.Wrap(err, "could not parse \"sinceSeconds\"")) + } + if opts.SinceSeconds < 1 { + return opts, ErrInvalidInputf("\"sinceSeconds\" is %d", opts.SinceSeconds) + } + } + if sinceTime := q.Get("sinceTime"); sinceTime != "" { + opts.SinceTime, err = time.Parse(time.RFC3339, sinceTime) + if err != nil { + return opts, ConvertInvalidInput(errors.Wrap(err, "could not parse \"sinceTime\"")) + } + if opts.SinceSeconds > 0 { + return opts, ErrInvalidInputf("both \"sinceSeconds\" and \"sinceTime\" are set") + } + } + if timestamps := q.Get("timestamps"); timestamps != "" { + opts.Timestamps, err = strconv.ParseBool(timestamps) + if err != nil { + return opts, ConvertInvalidInput(errors.Wrap(err, "could not parse \"timestamps\"")) + } + } + return opts, nil +} + +func getContainerLogs(ctx context.Context, namespace string, + podName string, containerName string, opts ContainerLogOpts, getClient getClientFunc) (io.ReadCloser, error) { + tailLine := int64(opts.Tail) + limitBytes := int64(opts.LimitBytes) + sinceSeconds := opts.SinceSeconds + options := &corev1.PodLogOptions{ + Container: containerName, + Timestamps: opts.Timestamps, + Follow: opts.Follow, + } + if tailLine != 0 { + options.TailLines = &tailLine + } 
+	if limitBytes != 0 {
+		options.LimitBytes = &limitBytes
+	}
+	if !opts.SinceTime.IsZero() {
+		options.SinceTime = &metav1.Time{Time: opts.SinceTime}
+	}
+	if sinceSeconds != 0 {
+		ss := int64(sinceSeconds)
+		options.SinceSeconds = &ss
+	}
+	if opts.Previous {
+		options.Previous = opts.Previous
+	}
+	if opts.Follow {
+		options.Follow = opts.Follow
+	}
+
+	client, _, err := getClient(ctx, namespace, podName)
+
+	if err != nil {
+		return nil, fmt.Errorf("could not get the leaf client, podName: %s, namespace: %s, err: %v", podName, namespace, err)
+	}
+
+	logs := client.CoreV1().Pods(namespace).GetLogs(podName, options)
+	stream, err := logs.Stream(ctx)
+	if err != nil {
+		return nil, fmt.Errorf("could not get stream from logs request: %v", err)
+	}
+	return stream, nil
+}
+
+func ContainerLogsHandler(getClient getClientFunc) http.HandlerFunc {
+	return handleError(func(w http.ResponseWriter, req *http.Request) error {
+		vars := mux.Vars(req)
+		if len(vars) != 3 {
+			return ErrNotFound("not found")
+		}
+
+		ctx := req.Context()
+
+		namespace := vars[namespaceVar]
+		pod := vars[podVar]
+		container := vars[containerVar]
+
+		query := req.URL.Query()
+		opts, err := parseLogOptions(query)
+		if err != nil {
+			return err
+		}
+
+		logs, err := getContainerLogs(ctx, namespace, pod, container, opts, getClient)
+		if err != nil {
+			return errors.Wrap(err, "error getting container logs")
+		}
+
+		defer logs.Close()
+
+		req.Header.Set("Transfer-Encoding", "chunked")
+
+		if _, ok := w.(writeFlusher); !ok {
+			klog.V(4).Info("http response writer does not support flushes")
+		}
+
+		if _, err := io.Copy(flushOnWrite(w), logs); err != nil {
+			return errors.Wrap(err, "error writing response to client")
+		}
+		return nil
+	})
+}
diff --git a/pkg/clustertree/cluster-manager/node-server/api/remotecommand/attach.go b/pkg/clustertree/cluster-manager/node-server/api/remotecommand/attach.go
new file mode 100644
index 000000000..69d78de33
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/node-server/api/remotecommand/attach.go
@@ -0,0 +1,70 @@
+// This code is directly lifted from the Kubernetes
+// For reference:
+// https://github.com/kubernetes/kubernetes/staging/src/k8s.io/kubelet/pkg/cri/streaming/remotecommand/attach.go
+
+package remotecommand
+
+import (
+	"fmt"
+	"io"
+	"net/http"
+	"time"
+
+	apierrors "k8s.io/apimachinery/pkg/api/errors"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/types"
+	remotecommandconsts "k8s.io/apimachinery/pkg/util/remotecommand"
+	"k8s.io/apimachinery/pkg/util/runtime"
+	"k8s.io/client-go/tools/remotecommand"
+	utilexec "k8s.io/utils/exec"
+)
+
+// Attacher knows how to attach to a container in a pod.
+type Attacher interface {
+	// AttachToContainer attaches to a container in the pod, copying data
+	// between in/out/err and the container's stdin/stdout/stderr.
+	AttachToContainer(name string, uid types.UID, container string, in io.Reader, out, err io.WriteCloser, tty bool, resize <-chan remotecommand.TerminalSize, timeout time.Duration) error
+}
+
+// ServeAttach handles requests to attach to a container. After
+// creating/receiving the required streams, it delegates the actual attachment
+// to the attacher.
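+//
+// A caller is expected to mirror the ServeExec wiring in
+// node-server/api/exec.go (sketch; attacher, streamOpts, cfg and
+// supportedStreamProtocols are assumed to be built the same way
+// ContainerExecHandler builds them for exec):
+//
+//	remotecommand.ServeAttach(w, req, attacher, "", "", container,
+//		streamOpts, cfg.StreamIdleTimeout, cfg.StreamCreationTimeout, supportedStreamProtocols)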
+func ServeAttach(w http.ResponseWriter, req *http.Request, attacher Attacher, podName string, uid types.UID, container string, streamOpts *Options, idleTimeout, streamCreationTimeout time.Duration, supportedProtocols []string) { + ctx, ok := createStreams(req, w, streamOpts, supportedProtocols, idleTimeout, streamCreationTimeout) + if !ok { + // error is handled by createStreams + return + } + defer ctx.conn.Close() + + err := attacher.AttachToContainer(podName, uid, container, ctx.stdinStream, ctx.stdoutStream, ctx.stderrStream, ctx.tty, ctx.resizeChan, 0) + if err != nil { + if exitErr, ok := err.(utilexec.ExitError); ok && exitErr.Exited() { + rc := exitErr.ExitStatus() + // nolint:errcheck + ctx.writeStatus(&apierrors.StatusError{ErrStatus: metav1.Status{ + Status: metav1.StatusFailure, + Reason: remotecommandconsts.NonZeroExitCodeReason, + Details: &metav1.StatusDetails{ + Causes: []metav1.StatusCause{ + { + Type: remotecommandconsts.ExitCodeCauseType, + Message: fmt.Sprintf("%d", rc), + }, + }, + }, + Message: fmt.Sprintf("command terminated with non-zero exit code: %v", exitErr), + }}) + return + } + err = fmt.Errorf("error attaching to container: %v", err) + runtime.HandleError(err) + // nolint:errcheck + ctx.writeStatus(apierrors.NewInternalError(err)) + return + } + // nolint:errcheck + ctx.writeStatus(&apierrors.StatusError{ErrStatus: metav1.Status{ + Status: metav1.StatusSuccess, + }}) +} diff --git a/pkg/clustertree/cluster-manager/node-server/api/remotecommand/exec.go b/pkg/clustertree/cluster-manager/node-server/api/remotecommand/exec.go new file mode 100644 index 000000000..f2bcc0cbc --- /dev/null +++ b/pkg/clustertree/cluster-manager/node-server/api/remotecommand/exec.go @@ -0,0 +1,70 @@ +// This code is directly lifted from the Kubernetes +// For reference: +// https://github.com/kubernetes/kubernetes/staging/src/k8s.io/kubelet/pkg/cri/streaming/remotecommand/exec.go + +package remotecommand + +import ( + "fmt" + "io" + "net/http" + "time" + + apierrors "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/types" + remotecommandconsts "k8s.io/apimachinery/pkg/util/remotecommand" + "k8s.io/apimachinery/pkg/util/runtime" + "k8s.io/client-go/tools/remotecommand" + utilexec "k8s.io/utils/exec" +) + +// Executor knows how to execute a command in a container in a pod. +type Executor interface { + // ExecInContainer executes a command in a container in the pod, copying data + // between in/out/err and the container's stdin/stdout/stderr. + ExecInContainer(name string, uid types.UID, container string, cmd []string, in io.Reader, out, err io.WriteCloser, tty bool, resize <-chan remotecommand.TerminalSize, timeout time.Duration) error +} + +// ServeExec handles requests to execute a command in a container. After +// creating/receiving the required streams, it delegates the actual execution +// to the executor. 
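+//
+// In this repository ServeExec is driven by ContainerExecHandler in
+// node-server/api/exec.go, which wraps the request in a containerExecutor
+// whose ExecInContainer proxies the command into the leaf cluster over SPDY;
+// a non-zero exit code is reported back to the client as a StatusError with
+// reason NonZeroExitCodeReason, as implemented below.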
+func ServeExec(w http.ResponseWriter, req *http.Request, executor Executor, podName string, uid types.UID, container string, cmd []string, streamOpts *Options, idleTimeout, streamCreationTimeout time.Duration, supportedProtocols []string) { + ctx, ok := createStreams(req, w, streamOpts, supportedProtocols, idleTimeout, streamCreationTimeout) + if !ok { + // error is handled by createStreams + return + } + defer ctx.conn.Close() + + err := executor.ExecInContainer(podName, uid, container, cmd, ctx.stdinStream, ctx.stdoutStream, ctx.stderrStream, ctx.tty, ctx.resizeChan, 0) + if err != nil { + if exitErr, ok := err.(utilexec.ExitError); ok && exitErr.Exited() { + rc := exitErr.ExitStatus() + // nolint:errcheck + ctx.writeStatus(&apierrors.StatusError{ErrStatus: metav1.Status{ + Status: metav1.StatusFailure, + Reason: remotecommandconsts.NonZeroExitCodeReason, + Details: &metav1.StatusDetails{ + Causes: []metav1.StatusCause{ + { + Type: remotecommandconsts.ExitCodeCauseType, + Message: fmt.Sprintf("%d", rc), + }, + }, + }, + Message: fmt.Sprintf("command terminated with non-zero exit code: %v", exitErr), + }}) + } else { + err = fmt.Errorf("error executing command in container: %v", err) + runtime.HandleError(err) + // nolint:errcheck + ctx.writeStatus(apierrors.NewInternalError(err)) + } + } else { + // nolint:errcheck + ctx.writeStatus(&apierrors.StatusError{ErrStatus: metav1.Status{ + Status: metav1.StatusSuccess, + }}) + } +} diff --git a/pkg/clustertree/cluster-manager/node-server/api/remotecommand/httpstream.go b/pkg/clustertree/cluster-manager/node-server/api/remotecommand/httpstream.go new file mode 100644 index 000000000..462a5544d --- /dev/null +++ b/pkg/clustertree/cluster-manager/node-server/api/remotecommand/httpstream.go @@ -0,0 +1,439 @@ +// This code is directly lifted from the Kubernetes +// For reference: +// https://github.com/kubernetes/kubernetes/staging/src/k8s.io/kubelet/pkg/cri/streaming/remotecommand/httpstream.go + +package remotecommand + +import ( + "encoding/json" + "errors" + "fmt" + "io" + "net/http" + "time" + + api "k8s.io/api/core/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/util/httpstream" + "k8s.io/apimachinery/pkg/util/httpstream/spdy" + remotecommandconsts "k8s.io/apimachinery/pkg/util/remotecommand" + "k8s.io/apimachinery/pkg/util/runtime" + "k8s.io/apiserver/pkg/util/wsstream" + "k8s.io/client-go/tools/remotecommand" + "k8s.io/klog/v2" +) + +// Options contains details about which streams are required for +// remote command execution. +type Options struct { + Stdin bool + Stdout bool + Stderr bool + TTY bool +} + +// NewOptions creates a new Options from the Request. +func NewOptions(req *http.Request) (*Options, error) { + tty := req.FormValue(api.ExecTTYParam) == "1" + stdin := req.FormValue(api.ExecStdinParam) == "1" + stdout := req.FormValue(api.ExecStdoutParam) == "1" + stderr := req.FormValue(api.ExecStderrParam) == "1" + if tty && stderr { + // TODO: make this an error before we reach this method + klog.V(4).Infof("Access to exec with tty and stderr is not supported, bypassing stderr") + stderr = false + } + + if !stdin && !stdout && !stderr { + return nil, fmt.Errorf("you must specify at least 1 of stdin, stdout, stderr") + } + + return &Options{ + Stdin: stdin, + Stdout: stdout, + Stderr: stderr, + TTY: tty, + }, nil +} + +// context contains the connection and streams used when +// forwarding an attach or execute session into a container. 
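+// The conn field owns the underlying network connection; ServeAttach and
+// ServeExec close it via defer when the session ends, which also tears down
+// the individual streams.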
+type context struct { + conn io.Closer + stdinStream io.ReadCloser + stdoutStream io.WriteCloser + stderrStream io.WriteCloser + writeStatus func(status *apierrors.StatusError) error + resizeStream io.ReadCloser + resizeChan chan remotecommand.TerminalSize + tty bool +} + +// streamAndReply holds both a Stream and a channel that is closed when the stream's reply frame is +// enqueued. Consumers can wait for replySent to be closed prior to proceeding, to ensure that the +// replyFrame is enqueued before the connection's goaway frame is sent (e.g. if a stream was +// received and right after, the connection gets closed). +type streamAndReply struct { + httpstream.Stream + replySent <-chan struct{} +} + +// waitStreamReply waits until either replySent or stop is closed. If replySent is closed, it sends +// an empty struct to the notify channel. +func waitStreamReply(replySent <-chan struct{}, notify chan<- struct{}, stop <-chan struct{}) { + select { + case <-replySent: + notify <- struct{}{} + case <-stop: + } +} + +func createStreams(req *http.Request, w http.ResponseWriter, opts *Options, supportedStreamProtocols []string, idleTimeout, streamCreationTimeout time.Duration) (*context, bool) { + var ctx *context + var ok bool + if wsstream.IsWebSocketRequest(req) { + ctx, ok = createWebSocketStreams(req, w, opts, idleTimeout) + } else { + ctx, ok = createHTTPStreamStreams(req, w, opts, supportedStreamProtocols, idleTimeout, streamCreationTimeout) + } + if !ok { + return nil, false + } + + if ctx.resizeStream != nil { + ctx.resizeChan = make(chan remotecommand.TerminalSize) + go handleResizeEvents(ctx.resizeStream, ctx.resizeChan) + } + + return ctx, true +} + +func createHTTPStreamStreams(req *http.Request, w http.ResponseWriter, opts *Options, supportedStreamProtocols []string, idleTimeout, streamCreationTimeout time.Duration) (*context, bool) { + protocol, err := httpstream.Handshake(req, w, supportedStreamProtocols) + if err != nil { + http.Error(w, err.Error(), http.StatusBadRequest) + return nil, false + } + + streamCh := make(chan streamAndReply) + + upgrader := spdy.NewResponseUpgrader() + conn := upgrader.UpgradeResponse(w, req, func(stream httpstream.Stream, replySent <-chan struct{}) error { + streamCh <- streamAndReply{Stream: stream, replySent: replySent} + return nil + }) + // from this point on, we can no longer call methods on response + if conn == nil { + // The upgrader is responsible for notifying the client of any errors that + // occurred during upgrading. All we can do is return here at this point + // if we weren't successful in upgrading. + return nil, false + } + + conn.SetIdleTimeout(idleTimeout) + + var handler protocolHandler + switch protocol { + case remotecommandconsts.StreamProtocolV4Name: + handler = &v4ProtocolHandler{} + case remotecommandconsts.StreamProtocolV3Name: + handler = &v3ProtocolHandler{} + case remotecommandconsts.StreamProtocolV2Name: + handler = &v2ProtocolHandler{} + case "": + klog.V(4).Infof("Client did not request protocol negotiation. 
Falling back to %q", remotecommandconsts.StreamProtocolV1Name)
+		fallthrough
+	case remotecommandconsts.StreamProtocolV1Name:
+		handler = &v1ProtocolHandler{}
+	}
+
+	// count the streams client asked for, starting with 1
+	expectedStreams := 1
+	if opts.Stdin {
+		expectedStreams++
+	}
+	if opts.Stdout {
+		expectedStreams++
+	}
+	if opts.Stderr {
+		expectedStreams++
+	}
+	if opts.TTY && handler.supportsTerminalResizing() {
+		expectedStreams++
+	}
+
+	expired := time.NewTimer(streamCreationTimeout)
+	defer expired.Stop()
+
+	ctx, err := handler.waitForStreams(streamCh, expectedStreams, expired.C)
+	if err != nil {
+		runtime.HandleError(err)
+		return nil, false
+	}
+
+	ctx.conn = conn
+	ctx.tty = opts.TTY
+
+	return ctx, true
+}
+
+type protocolHandler interface {
+	// waitForStreams waits for the expected streams or a timeout, returning a
+	// remoteCommandContext if all the streams were received, or an error if not.
+	waitForStreams(streams <-chan streamAndReply, expectedStreams int, expired <-chan time.Time) (*context, error)
+	// supportsTerminalResizing returns true if the protocol handler supports terminal resizing
+	supportsTerminalResizing() bool
+}
+
+// v4ProtocolHandler implements the V4 protocol version for streaming command execution. It only differs
+// from v3 in the error stream format, using a JSON-marshaled metav1.Status which carries
+// the process' exit code.
+type v4ProtocolHandler struct{}
+
+// nolint:dupl
+func (*v4ProtocolHandler) waitForStreams(streams <-chan streamAndReply, expectedStreams int, expired <-chan time.Time) (*context, error) {
+	ctx := &context{}
+	receivedStreams := 0
+	replyChan := make(chan struct{})
+	stop := make(chan struct{})
+	defer close(stop)
+WaitForStreams:
+	for {
+		select {
+		case stream := <-streams:
+			streamType := stream.Headers().Get(api.StreamType)
+			switch streamType {
+			case api.StreamTypeError:
+				ctx.writeStatus = v4WriteStatusFunc(stream) // write json errors
+				go waitStreamReply(stream.replySent, replyChan, stop)
+			case api.StreamTypeStdin:
+				ctx.stdinStream = stream
+				go waitStreamReply(stream.replySent, replyChan, stop)
+			case api.StreamTypeStdout:
+				ctx.stdoutStream = stream
+				go waitStreamReply(stream.replySent, replyChan, stop)
+			case api.StreamTypeStderr:
+				ctx.stderrStream = stream
+				go waitStreamReply(stream.replySent, replyChan, stop)
+			case api.StreamTypeResize:
+				ctx.resizeStream = stream
+				go waitStreamReply(stream.replySent, replyChan, stop)
+			default:
+				runtime.HandleError(fmt.Errorf("unexpected stream type: %q", streamType))
+			}
+		case <-replyChan:
+			receivedStreams++
+			if receivedStreams == expectedStreams {
+				break WaitForStreams
+			}
+		case <-expired:
+			// TODO find a way to return the error to the user. Maybe use a separate
+			// stream to report errors?
+			return nil, errors.New("timed out waiting for client to create streams")
+		}
+	}
+
+	return ctx, nil
+}
+
+// supportsTerminalResizing returns true because v4ProtocolHandler supports it
+func (*v4ProtocolHandler) supportsTerminalResizing() bool { return true }
+
+// v3ProtocolHandler implements the V3 protocol version for streaming command execution.
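+// Like v1 and v2 it writes errors to the error stream as plain text
+// (v1WriteStatusFunc), but unlike v2 it already understands the resize stream.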
+type v3ProtocolHandler struct{} + +// nolint:dupl +func (*v3ProtocolHandler) waitForStreams(streams <-chan streamAndReply, expectedStreams int, expired <-chan time.Time) (*context, error) { + ctx := &context{} + receivedStreams := 0 + replyChan := make(chan struct{}) + stop := make(chan struct{}) + defer close(stop) +WaitForStreams: + for { + select { + case stream := <-streams: + streamType := stream.Headers().Get(api.StreamType) + switch streamType { + case api.StreamTypeError: + ctx.writeStatus = v1WriteStatusFunc(stream) + go waitStreamReply(stream.replySent, replyChan, stop) + case api.StreamTypeStdin: + ctx.stdinStream = stream + go waitStreamReply(stream.replySent, replyChan, stop) + case api.StreamTypeStdout: + ctx.stdoutStream = stream + go waitStreamReply(stream.replySent, replyChan, stop) + case api.StreamTypeStderr: + ctx.stderrStream = stream + go waitStreamReply(stream.replySent, replyChan, stop) + case api.StreamTypeResize: + ctx.resizeStream = stream + go waitStreamReply(stream.replySent, replyChan, stop) + default: + runtime.HandleError(fmt.Errorf("unexpected stream type: %q", streamType)) + } + case <-replyChan: + receivedStreams++ + if receivedStreams == expectedStreams { + break WaitForStreams + } + case <-expired: + // TODO find a way to return the error to the user. Maybe use a separate + // stream to report errors? + return nil, errors.New("timed out waiting for client to create streams") + } + } + + return ctx, nil +} + +// supportsTerminalResizing returns true because v3ProtocolHandler supports it +func (*v3ProtocolHandler) supportsTerminalResizing() bool { return true } + +// v2ProtocolHandler implements the V2 protocol version for streaming command execution. +type v2ProtocolHandler struct{} + +// nolint:dupl +func (*v2ProtocolHandler) waitForStreams(streams <-chan streamAndReply, expectedStreams int, expired <-chan time.Time) (*context, error) { + ctx := &context{} + receivedStreams := 0 + replyChan := make(chan struct{}) + stop := make(chan struct{}) + defer close(stop) +WaitForStreams: + for { + select { + case stream := <-streams: + streamType := stream.Headers().Get(api.StreamType) + switch streamType { + case api.StreamTypeError: + ctx.writeStatus = v1WriteStatusFunc(stream) + go waitStreamReply(stream.replySent, replyChan, stop) + case api.StreamTypeStdin: + ctx.stdinStream = stream + go waitStreamReply(stream.replySent, replyChan, stop) + case api.StreamTypeStdout: + ctx.stdoutStream = stream + go waitStreamReply(stream.replySent, replyChan, stop) + case api.StreamTypeStderr: + ctx.stderrStream = stream + go waitStreamReply(stream.replySent, replyChan, stop) + default: + runtime.HandleError(fmt.Errorf("unexpected stream type: %q", streamType)) + } + case <-replyChan: + receivedStreams++ + if receivedStreams == expectedStreams { + break WaitForStreams + } + case <-expired: + // TODO find a way to return the error to the user. Maybe use a separate + // stream to report errors? + return nil, errors.New("timed out waiting for client to create streams") + } + } + + return ctx, nil +} + +// supportsTerminalResizing returns false because v2ProtocolHandler doesn't support it. +func (*v2ProtocolHandler) supportsTerminalResizing() bool { return false } + +// v1ProtocolHandler implements the V1 protocol version for streaming command execution. 
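+// It writes errors as plain text and, mirroring 1.0.x kubelet behavior, it
+// resets the error stream once received and closes stdin after all streams
+// have been created.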
+type v1ProtocolHandler struct{} + +// nolint:dupl +func (*v1ProtocolHandler) waitForStreams(streams <-chan streamAndReply, expectedStreams int, expired <-chan time.Time) (*context, error) { + ctx := &context{} + receivedStreams := 0 + replyChan := make(chan struct{}) + stop := make(chan struct{}) + defer close(stop) +WaitForStreams: + for { + select { + case stream := <-streams: + streamType := stream.Headers().Get(api.StreamType) + switch streamType { + case api.StreamTypeError: + ctx.writeStatus = v1WriteStatusFunc(stream) + + // This defer statement shouldn't be here, but due to previous refactoring, it ended up in + // here. This is what 1.0.x kubelets do, so we're retaining that behavior. This is fixed in + // the v2ProtocolHandler. + // nolint:errcheck + defer stream.Reset() + + go waitStreamReply(stream.replySent, replyChan, stop) + case api.StreamTypeStdin: + ctx.stdinStream = stream + go waitStreamReply(stream.replySent, replyChan, stop) + case api.StreamTypeStdout: + ctx.stdoutStream = stream + go waitStreamReply(stream.replySent, replyChan, stop) + case api.StreamTypeStderr: + ctx.stderrStream = stream + go waitStreamReply(stream.replySent, replyChan, stop) + default: + runtime.HandleError(fmt.Errorf("unexpected stream type: %q", streamType)) + } + case <-replyChan: + receivedStreams++ + if receivedStreams == expectedStreams { + break WaitForStreams + } + case <-expired: + // TODO find a way to return the error to the user. Maybe use a separate + // stream to report errors? + return nil, errors.New("timed out waiting for client to create streams") + } + } + + if ctx.stdinStream != nil { + ctx.stdinStream.Close() + } + + return ctx, nil +} + +// supportsTerminalResizing returns false because v1ProtocolHandler doesn't support it. +func (*v1ProtocolHandler) supportsTerminalResizing() bool { return false } + +func handleResizeEvents(stream io.Reader, channel chan<- remotecommand.TerminalSize) { + defer runtime.HandleCrash() + defer close(channel) + + decoder := json.NewDecoder(stream) + for { + size := remotecommand.TerminalSize{} + if err := decoder.Decode(&size); err != nil { + break + } + channel <- size + } +} + +func v1WriteStatusFunc(stream io.Writer) func(status *apierrors.StatusError) error { + return func(status *apierrors.StatusError) error { + if status.Status().Status == metav1.StatusSuccess { + return nil // send error messages + } + _, err := stream.Write([]byte(status.Error())) + return err + } +} + +// v4WriteStatusFunc returns a WriteStatusFunc that marshals a given api Status +// as json in the error channel. 
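+//
+// For a failed command the payload looks roughly like this (field values are
+// illustrative):
+//
+//	{"status":"Failure","reason":"NonZeroExitCode",
+//	 "details":{"causes":[{"reason":"ExitCode","message":"42"}]}}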
+func v4WriteStatusFunc(stream io.Writer) func(status *apierrors.StatusError) error { + return func(status *apierrors.StatusError) error { + bs, err := json.Marshal(status.Status()) + if err != nil { + return err + } + _, err = stream.Write(bs) + return err + } +} diff --git a/pkg/clustertree/cluster-manager/node-server/api/remotecommand/websocket.go b/pkg/clustertree/cluster-manager/node-server/api/remotecommand/websocket.go new file mode 100644 index 000000000..ccde42a5c --- /dev/null +++ b/pkg/clustertree/cluster-manager/node-server/api/remotecommand/websocket.go @@ -0,0 +1,123 @@ +// This code is directly lifted from the Kubernetes +// For reference: +// https://github.com/kubernetes/kubernetes/staging/src/k8s.io/kubelet/pkg/cri/streaming/remotecommand/websocket.go + +package remotecommand + +import ( + "fmt" + "net/http" + "time" + + "k8s.io/apimachinery/pkg/util/runtime" + "k8s.io/apiserver/pkg/server/httplog" + "k8s.io/apiserver/pkg/util/wsstream" +) + +const ( + stdinChannel = iota + stdoutChannel + stderrChannel + errorChannel + resizeChannel + + preV4BinaryWebsocketProtocol = wsstream.ChannelWebSocketProtocol + preV4Base64WebsocketProtocol = wsstream.Base64ChannelWebSocketProtocol + v4BinaryWebsocketProtocol = "v4." + wsstream.ChannelWebSocketProtocol + v4Base64WebsocketProtocol = "v4." + wsstream.Base64ChannelWebSocketProtocol +) + +// createChannels returns the standard channel types for a shell connection (STDIN 0, STDOUT 1, STDERR 2) +// along with the approximate duplex value. It also creates the error (3) and resize (4) channels. +func createChannels(opts *Options) []wsstream.ChannelType { + // open the requested channels, and always open the error channel + channels := make([]wsstream.ChannelType, 5) + channels[stdinChannel] = readChannel(opts.Stdin) + channels[stdoutChannel] = writeChannel(opts.Stdout) + channels[stderrChannel] = writeChannel(opts.Stderr) + channels[errorChannel] = wsstream.WriteChannel + channels[resizeChannel] = wsstream.ReadChannel + return channels +} + +// readChannel returns wsstream.ReadChannel if real is true, or wsstream.IgnoreChannel. +func readChannel(real bool) wsstream.ChannelType { + if real { + return wsstream.ReadChannel + } + return wsstream.IgnoreChannel +} + +// writeChannel returns wsstream.WriteChannel if real is true, or wsstream.IgnoreChannel. +func writeChannel(real bool) wsstream.ChannelType { + if real { + return wsstream.WriteChannel + } + return wsstream.IgnoreChannel +} + +// createWebSocketStreams returns a context containing the websocket connection and +// streams needed to perform an exec or an attach. 
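+//
+// Everything is multiplexed over the numbered websocket channels declared
+// above: 0 stdin, 1 stdout, 2 stderr, 3 error, 4 resize.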
+func createWebSocketStreams(req *http.Request, w http.ResponseWriter, opts *Options, idleTimeout time.Duration) (*context, bool) { + channels := createChannels(opts) + conn := wsstream.NewConn(map[string]wsstream.ChannelProtocolConfig{ + "": { + Binary: true, + Channels: channels, + }, + preV4BinaryWebsocketProtocol: { + Binary: true, + Channels: channels, + }, + preV4Base64WebsocketProtocol: { + Binary: false, + Channels: channels, + }, + v4BinaryWebsocketProtocol: { + Binary: true, + Channels: channels, + }, + v4Base64WebsocketProtocol: { + Binary: false, + Channels: channels, + }, + }) + conn.SetIdleTimeout(idleTimeout) + negotiatedProtocol, streams, err := conn.Open(httplog.Unlogged(req, w), req) + if err != nil { + runtime.HandleError(fmt.Errorf("unable to upgrade websocket connection: %v", err)) + return nil, false + } + + // Send an empty message to the lowest writable channel to notify the client the connection is established + // TODO: make generic to SPDY and WebSockets and do it outside of this method? + switch { + case opts.Stdout: + // nolint:errcheck + streams[stdoutChannel].Write([]byte{}) + case opts.Stderr: + // nolint:errcheck + streams[stderrChannel].Write([]byte{}) + default: + // nolint:errcheck + streams[errorChannel].Write([]byte{}) + } + + ctx := &context{ + conn: conn, + stdinStream: streams[stdinChannel], + stdoutStream: streams[stdoutChannel], + stderrStream: streams[stderrChannel], + tty: opts.TTY, + resizeStream: streams[resizeChannel], + } + + switch negotiatedProtocol { + case v4BinaryWebsocketProtocol, v4Base64WebsocketProtocol: + ctx.writeStatus = v4WriteStatusFunc(streams[errorChannel]) + default: + ctx.writeStatus = v1WriteStatusFunc(streams[errorChannel]) + } + + return ctx, true +} diff --git a/pkg/clustertree/cluster-manager/node-server/server.go b/pkg/clustertree/cluster-manager/node-server/server.go new file mode 100644 index 000000000..6b2d00fae --- /dev/null +++ b/pkg/clustertree/cluster-manager/node-server/server.go @@ -0,0 +1,191 @@ +package nodeserver + +import ( + "context" + "crypto/tls" + "crypto/x509" + "fmt" + "net/http" + "os" + "time" + + "github.com/gorilla/mux" + "github.com/pkg/errors" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/types" + "k8s.io/client-go/kubernetes" + "k8s.io/client-go/rest" + "k8s.io/klog/v2" + "sigs.k8s.io/controller-runtime/pkg/client" + + "github.com/kosmos.io/kosmos/cmd/clustertree/cluster-manager/app/options" + "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/node-server/api" + leafUtils "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/utils" +) + +func DefaultServerCiphers() []uint16 { + return []uint16{ + tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, + tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, + tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, + tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, + + tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, + tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, + tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, + tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, + } +} + +type NodeServer struct { + RootClient client.Client + GlobalLeafManager leafUtils.LeafResourceManager +} + +type HttpConfig struct { + listenAddr string + handler http.Handler + tlsConfig *tls.Config +} + +func (n *NodeServer) getClient(ctx context.Context, namespace string, podName string) (kubernetes.Interface, *rest.Config, error) { + nsname := types.NamespacedName{ + Namespace: namespace, + Name: podName, + } + + rootPod := &corev1.Pod{} + if err := n.RootClient.Get(ctx, nsname, rootPod); err != nil { + 
return nil, nil, err + } + + nodeName := rootPod.Spec.NodeName + + lr, err := n.GlobalLeafManager.GetLeafResourceByNodeName(nodeName) + if err != nil { + return nil, nil, err + } + + return lr.Clientset, lr.RestConfig, nil +} + +func (s *NodeServer) RunHTTP(ctx context.Context, httpConfig HttpConfig) (func(), error) { + if httpConfig.tlsConfig == nil { + klog.Warning("TLS config not provided, not starting up http service") + return func() {}, nil + } + if httpConfig.handler == nil { + klog.Warning("No http handler, not starting up http service") + return func() {}, nil + } + + l, err := tls.Listen("tcp", httpConfig.listenAddr, httpConfig.tlsConfig) + if err != nil { + return nil, errors.Wrap(err, "error starting http listener") + } + + klog.V(4).Info("Started TLS listener") + + srv := &http.Server{Handler: httpConfig.handler, TLSConfig: httpConfig.tlsConfig, ReadHeaderTimeout: 30 * time.Second} + // nolint:errcheck + go srv.Serve(l) + klog.V(4).Infof("HTTP server running, port: %s", httpConfig.listenAddr) + + return func() { + srv.Close() + l.Close() + }, nil +} + +func (s *NodeServer) AttachRoutes(m *http.ServeMux) { + r := mux.NewRouter() + r.StrictSlash(true) + + r.HandleFunc( + "/containerLogs/{namespace}/{pod}/{container}", + api.ContainerLogsHandler(s.getClient), + ).Methods("GET") + + r.HandleFunc( + "/exec/{namespace}/{pod}/{container}", + api.ContainerExecHandler( + api.ContainerExecOptions{ + StreamIdleTimeout: 30 * time.Second, + StreamCreationTimeout: 30 * time.Second, + }, + s.getClient, + ), + ).Methods("POST", "GET") + + // append func here + // TODO: return node status, url: /stats/summary?only_cpu_and_memory=true + + r.NotFoundHandler = http.HandlerFunc(api.NotFound) + + m.Handle("/", r) +} + +func (s *NodeServer) initTLSConfig() (*tls.Config, error) { + CertPath := os.Getenv("APISERVER_CERT_LOCATION") + KeyPath := os.Getenv("APISERVER_KEY_LOCATION") + CACertPath := os.Getenv("APISERVER_CA_CERT_LOCATION") + + tlsCfg := &tls.Config{ + MinVersion: tls.VersionTLS12, + PreferServerCipherSuites: true, + CipherSuites: DefaultServerCiphers(), + ClientAuth: tls.RequestClientCert, + } + + cert, err := tls.LoadX509KeyPair(CertPath, KeyPath) + if err != nil { + return nil, err + } + tlsCfg.Certificates = append(tlsCfg.Certificates, cert) + + if CACertPath != "" { + pem, err := os.ReadFile(CACertPath) + if err != nil { + return nil, fmt.Errorf("error reading ca cert pem: %w", err) + } + tlsCfg.ClientAuth = tls.RequireAndVerifyClientCert + + if tlsCfg.ClientCAs == nil { + tlsCfg.ClientCAs = x509.NewCertPool() + } + if !tlsCfg.ClientCAs.AppendCertsFromPEM(pem) { + return nil, fmt.Errorf("could not parse ca cert pem") + } + } + + return tlsCfg, nil +} + +func (s *NodeServer) Start(ctx context.Context, opts *options.Options) error { + tlsConfig, err := s.initTLSConfig() + + if err != nil { + klog.Fatalf("Node http server start failed: %s", err) + return err + } + + handler := http.NewServeMux() + s.AttachRoutes(handler) + + cancelHTTP, err := s.RunHTTP(ctx, HttpConfig{ + listenAddr: fmt.Sprintf(":%d", opts.ListenPort), + tlsConfig: tlsConfig, + handler: handler, + }) + + if err != nil { + return err + } + defer cancelHTTP() + + <-ctx.Done() + + klog.V(4).Infof("Stop node http proxy") + + return nil +} diff --git a/pkg/clustertree/cluster-manager/utils/leaf_model_handler.go b/pkg/clustertree/cluster-manager/utils/leaf_model_handler.go new file mode 100644 index 000000000..1bbf092e1 --- /dev/null +++ b/pkg/clustertree/cluster-manager/utils/leaf_model_handler.go @@ -0,0 +1,272 @@ +package 
utils + +import ( + "context" + "fmt" + + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/errors" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/types" + "k8s.io/client-go/kubernetes" + "k8s.io/client-go/util/retry" + "sigs.k8s.io/controller-runtime/pkg/client" + + kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1" + "github.com/kosmos.io/kosmos/pkg/utils" +) + +// LeafModelHandler is the interface to handle the leafModel logic +type LeafModelHandler interface { + // GetLeafModelType returns the leafModelType for a Cluster + GetLeafModelType() LeafModelType + + // GetLeafNodes returns nodes in leaf cluster by the rootNode + GetLeafNodes(ctx context.Context, rootNode *corev1.Node) (*corev1.NodeList, error) + + // GetLeafPods returns pods in leaf cluster by the rootNode + GetLeafPods(ctx context.Context, rootNode *corev1.Node) (*corev1.PodList, error) + + // UpdateNodeStatus updates the node's status in root cluster + UpdateNodeStatus(ctx context.Context, node []*corev1.Node) error + + // CreateNodeInRoot creates the node in root cluster + CreateNodeInRoot(ctx context.Context, cluster *kosmosv1alpha1.Cluster, listenPort int32, gitVersion string) ([]*corev1.Node, error) +} + +// LeafModelType represents the type of leaf model +type LeafModelType string + +const ( + AggregationModel LeafModelType = "aggregation" + DispersionModel LeafModelType = "dispersion" +) + +// AggregationModelHandler handles the aggregation leaf model +type AggregationModelHandler struct { + Cluster *kosmosv1alpha1.Cluster + LeafClient client.Client + RootClient client.Client + RootClientset kubernetes.Interface +} + +// CreateNodeInRoot creates the node in root cluster +func (h AggregationModelHandler) CreateNodeInRoot(ctx context.Context, cluster *kosmosv1alpha1.Cluster, listenPort int32, gitVersion string) ([]*corev1.Node, error) { + nodes := make([]*corev1.Node, 0) + nodeName := fmt.Sprintf("%s%s", utils.KosmosNodePrefix, cluster.Name) + node, err := h.RootClientset.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{}) + if err != nil { + if !errors.IsNotFound(err) { + return nil, err + } + node = utils.BuildNodeTemplate(nodeName) + node.Status.NodeInfo.KubeletVersion = gitVersion + node.Status.DaemonEndpoints = corev1.NodeDaemonEndpoints{ + KubeletEndpoint: corev1.DaemonEndpoint{ + Port: listenPort, + }, + } + + node.Status.Addresses = GetAddress() + + node, err = h.RootClientset.CoreV1().Nodes().Create(ctx, node, metav1.CreateOptions{}) + if err != nil { + return nil, err + } + } + nodes = append(nodes, node) + return nodes, nil +} + +// UpdateNodeStatus updates the node's status in root cluster +func (h AggregationModelHandler) UpdateNodeStatus(ctx context.Context, n []*corev1.Node) error { + var name string + if len(n) > 0 { + name = n[0].Name + } + + node := &corev1.Node{} + namespacedName := types.NamespacedName{ + Name: name, + } + err := retry.RetryOnConflict(retry.DefaultRetry, func() error { + err := h.RootClient.Get(ctx, namespacedName, node) + if err != nil { + // TODO: If a node is accidentally deleted, recreate it + return fmt.Errorf("cannot get node while update node status %s, err: %v", name, err) + } + + clone := node.DeepCopy() + clone.Status.Conditions = utils.NodeConditions() + + patch, err := utils.CreateMergePatch(node, clone) + if err != nil { + return fmt.Errorf("cannot get node while update node status %s, err: %v", node.Name, err) + } + + if node, err = h.RootClientset.CoreV1().Nodes().PatchStatus(ctx, node.Name, patch); err != 
nil { + return err + } + return nil + }) + if err != nil { + return err + } + return nil +} + +// GetLeafPods returns pods in leaf cluster by the rootNode +func (h AggregationModelHandler) GetLeafPods(ctx context.Context, rootNode *corev1.Node) (*corev1.PodList, error) { + pods := &corev1.PodList{} + err := h.LeafClient.List(ctx, pods) + if err != nil { + return nil, err + } + return pods, nil +} + +// GetLeafNodes returns nodes in leaf cluster by the rootNode +func (h AggregationModelHandler) GetLeafNodes(ctx context.Context, _ *corev1.Node) (*corev1.NodeList, error) { + nodesInLeaf := &corev1.NodeList{} + err := h.LeafClient.List(ctx, nodesInLeaf) + if err != nil { + return nil, err + } + return nodesInLeaf, nil +} + +// GetLeafModelType returns the leafModelType for a Cluster +func (h AggregationModelHandler) GetLeafModelType() LeafModelType { + return AggregationModel +} + +// DispersionModelHandler handles the dispersion leaf model +type DispersionModelHandler struct { + Cluster *kosmosv1alpha1.Cluster + LeafClient client.Client + RootClient client.Client + RootClientset kubernetes.Interface + LeafClientset kubernetes.Interface +} + +// CreateNodeInRoot creates the node in root cluster +func (h DispersionModelHandler) CreateNodeInRoot(ctx context.Context, cluster *kosmosv1alpha1.Cluster, listenPort int32, gitVersion string) ([]*corev1.Node, error) { + nodes := make([]*corev1.Node, 0) + for _, leafModel := range cluster.Spec.ClusterTreeOptions.LeafModels { + // todo only support nodeName now + if leafModel.NodeSelector.NodeName != "" { + nodeName := leafModel.NodeSelector.NodeName + node, err := h.RootClientset.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{}) + if err != nil { + if !errors.IsNotFound(err) { + return nil, err + } + + node = utils.BuildNodeTemplate(nodeName) + nodeAnnotations := node.GetAnnotations() + if nodeAnnotations == nil { + nodeAnnotations = make(map[string]string, 1) + } + nodeAnnotations[utils.KosmosNodeOwnedByClusterAnnotations] = cluster.Name + node.SetAnnotations(nodeAnnotations) + + node.Status.NodeInfo.KubeletVersion = gitVersion + node.Status.DaemonEndpoints = corev1.NodeDaemonEndpoints{ + KubeletEndpoint: corev1.DaemonEndpoint{ + Port: listenPort, + }, + } + + node.Status.Addresses = GetAddress() + + node, err = h.RootClientset.CoreV1().Nodes().Create(ctx, node, metav1.CreateOptions{}) + if err != nil { + return nil, err + } + } + nodes = append(nodes, node) + } + } + return nodes, nil +} + +// UpdateNodeStatus updates the node's status in root cluster +func (h DispersionModelHandler) UpdateNodeStatus(ctx context.Context, n []*corev1.Node) error { + for _, node := range n { + nodeCopy := node.DeepCopy() + namespacedName := types.NamespacedName{ + Name: nodeCopy.Name, + } + err := retry.RetryOnConflict(retry.DefaultRetry, func() error { + nodeInLeaf := &corev1.Node{} + err := h.LeafClient.Get(ctx, namespacedName, nodeInLeaf) + if err != nil { + // TODO: If a node is accidentally deleted, recreate it + return fmt.Errorf("cannot get node in leaf cluster while update node status %s, err: %v", nodeCopy.Name, err) + } + + nodeRoot := &corev1.Node{} + err = h.RootClient.Get(ctx, namespacedName, nodeRoot) + if err != nil { + // TODO: If a node is accidentally deleted, recreate it + return fmt.Errorf("cannot get node in root cluster while update node status %s, err: %v", nodeCopy.Name, err) + } + + rootCopy := nodeRoot.DeepCopy() + nodeRoot.Status = nodeInLeaf.Status + nodeRoot.Status.Addresses = GetAddress() + nodeRoot.Status.Allocatable = 
rootCopy.Status.Allocatable + nodeRoot.Status.Capacity = rootCopy.Status.Capacity + + if node, err = h.RootClientset.CoreV1().Nodes().UpdateStatus(ctx, nodeRoot, metav1.UpdateOptions{}); err != nil { + return err + } + return nil + }) + if err != nil { + return err + } + } + return nil +} + +func (h DispersionModelHandler) GetLeafPods(ctx context.Context, rootNode *corev1.Node) (*corev1.PodList, error) { + pods, err := h.LeafClientset.CoreV1().Pods("").List(ctx, metav1.ListOptions{FieldSelector: fmt.Sprintf("spec.nodeName=%s", rootNode.Name)}) + if err != nil { + return nil, err + } + return pods, nil +} + +func (h DispersionModelHandler) GetLeafNodes(ctx context.Context, rootNode *corev1.Node) (*corev1.NodeList, error) { + nodesInLeaf, err := h.LeafClientset.CoreV1().Nodes().List(ctx, metav1.ListOptions{FieldSelector: fmt.Sprintf("metadata.name=%s", rootNode.Name)}) + if err != nil { + return nil, err + } + return nodesInLeaf, nil +} + +func (h DispersionModelHandler) GetLeafModelType() LeafModelType { + return DispersionModel +} + +// NewLeafModelHandler create a LeafModelHandler for Cluster +func NewLeafModelHandler(cluster *kosmosv1alpha1.Cluster, root, leafClient client.Client, rootClientset, leafClientset kubernetes.Interface) LeafModelHandler { + // todo support nodeSelector mode + if cluster.Spec.ClusterTreeOptions.LeafModels != nil { + return &DispersionModelHandler{ + Cluster: cluster, + LeafClient: leafClient, + RootClient: root, + RootClientset: rootClientset, + LeafClientset: leafClientset, + } + } else { + return &AggregationModelHandler{ + Cluster: cluster, + LeafClient: leafClient, + RootClient: root, + RootClientset: rootClientset, + } + } +} diff --git a/pkg/clustertree/cluster-manager/utils/leaf_resource_manager.go b/pkg/clustertree/cluster-manager/utils/leaf_resource_manager.go index c6617c063..5c40a96d0 100644 --- a/pkg/clustertree/cluster-manager/utils/leaf_resource_manager.go +++ b/pkg/clustertree/cluster-manager/utils/leaf_resource_manager.go @@ -1,16 +1,19 @@ -package leafUtils +package utils import ( "fmt" + "strings" "sync" corev1 "k8s.io/api/core/v1" "k8s.io/client-go/dynamic" "k8s.io/client-go/kubernetes" + "k8s.io/client-go/rest" "sigs.k8s.io/controller-runtime/pkg/client" kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1" kosmosversioned "github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned" + "github.com/kosmos.io/kosmos/pkg/utils" ) var ( @@ -41,20 +44,26 @@ type LeafResource struct { IgnoreLabels []string EnableServiceAccount bool Nodes []ClusterNode + RestConfig *rest.Config } type LeafResourceManager interface { - AddLeafResource(string, *LeafResource, []kosmosv1alpha1.LeafModel, []*corev1.Node) - RemoveLeafResource(string) + AddLeafResource(lr *LeafResource, cluster *kosmosv1alpha1.Cluster, node []*corev1.Node) + RemoveLeafResource(clusterName string) // get leafresource by cluster name - GetLeafResource(string) (*LeafResource, error) - // get leafresource by knode name - GetLeafResourceByNodeName(string) (*LeafResource, error) - // judge if the map has leafresource of nodename - Has(string) bool - HasNodeName(string) bool - ListNodeNames() []string - GetClusterNode(string) *ClusterNode + GetLeafResource(clusterName string) (*LeafResource, error) + // get leafresource by node name + GetLeafResourceByNodeName(nodeName string) (*LeafResource, error) + // determine if the cluster is present in the map + HasCluster(clusterName string) bool + // determine if the node is present in the map + HasNode(nodeName string) bool + // list 
all node names
+	ListNodes() []string
+	// list all cluster names
+	ListClusters() []string
+	// get the ClusterNode struct by node name
+	GetClusterNode(nodeName string) *ClusterNode
 }
 
 type leafResourceManager struct {
@@ -62,6 +71,10 @@ type leafResourceManager struct {
 	leafResourceManagersLock sync.Mutex
 }
 
+func trimNamePrefix(name string) string {
+	return strings.TrimPrefix(name, utils.KosmosNodePrefix)
+}
+
 func has(clusternodes []ClusterNode, target string) bool {
 	for _, v := range clusternodes {
 		if v.NodeName == target {
@@ -80,9 +93,14 @@ func getClusterNode(clusternodes []ClusterNode, target string) *ClusterNode {
 	return nil
 }
 
-func (l *leafResourceManager) AddLeafResource(clustername string, lptr *LeafResource, leafModels []kosmosv1alpha1.LeafModel, nodes []*corev1.Node) {
+func (l *leafResourceManager) AddLeafResource(lptr *LeafResource, cluster *kosmosv1alpha1.Cluster, nodes []*corev1.Node) {
 	l.leafResourceManagersLock.Lock()
 	defer l.leafResourceManagersLock.Unlock()
+
+	clusterName := cluster.Name
+
+	leafModels := cluster.Spec.ClusterTreeOptions.LeafModels
+
 	clusterNodes := []ClusterNode{}
 	for i, n := range nodes {
 		if leafModels != nil && len(leafModels[i].NodeSelector.NodeName) > 0 {
@@ -91,50 +109,51 @@ func (l *leafResourceManager) AddLeafResource(clustername string, lptr *LeafReso
 				LeafMode: Node,
 			})
 			// } else if leafModels != nil && leafModels[i].NodeSelector.LabelSelector != nil {
-			// 	// TODO:
+			// TODO: support labelselector
 		} else {
 			clusterNodes = append(clusterNodes, ClusterNode{
-				NodeName: n.Name,
+				NodeName: trimNamePrefix(n.Name),
 				LeafMode: ALL,
 			})
 		}
 	}
 	lptr.Nodes = clusterNodes
-	l.resourceMap[clustername] = lptr
+	l.resourceMap[clusterName] = lptr
 }
 
-func (l *leafResourceManager) RemoveLeafResource(clustername string) {
+func (l *leafResourceManager) RemoveLeafResource(clusterName string) {
 	l.leafResourceManagersLock.Lock()
 	defer l.leafResourceManagersLock.Unlock()
-	delete(l.resourceMap, clustername)
+	delete(l.resourceMap, clusterName)
 }
 
-func (l *leafResourceManager) GetLeafResource(clustername string) (*LeafResource, error) {
+func (l *leafResourceManager) GetLeafResource(clusterName string) (*LeafResource, error) {
 	l.leafResourceManagersLock.Lock()
 	defer l.leafResourceManagersLock.Unlock()
-	if m, ok := l.resourceMap[clustername]; ok {
+	if m, ok := l.resourceMap[clusterName]; ok {
 		return m, nil
 	} else {
-		return nil, fmt.Errorf("cannot get leaf resource, clustername: %s", clustername)
+		return nil, fmt.Errorf("cannot get leaf resource, clusterName: %s", clusterName)
 	}
 }
 
-func (l *leafResourceManager) GetLeafResourceByNodeName(nodename string) (*LeafResource, error) {
+func (l *leafResourceManager) GetLeafResourceByNodeName(nodeName string) (*LeafResource, error) {
 	l.leafResourceManagersLock.Lock()
 	defer l.leafResourceManagersLock.Unlock()
-
+	nodeName = trimNamePrefix(nodeName)
 	for k := range l.resourceMap {
-		if has(l.resourceMap[k].Nodes, nodename) {
+		if has(l.resourceMap[k].Nodes, nodeName) {
 			return l.resourceMap[k], nil
 		}
	}
-	return nil, fmt.Errorf("cannot get leaf resource, nodename: %s", nodename)
+	return nil, fmt.Errorf("cannot get leaf resource, nodeName: %s", nodeName)
 }
 
-func (l *leafResourceManager) HasNodeName(nodename string) bool {
+func (l *leafResourceManager) HasNode(nodeName string) bool {
+	nodeName = trimNamePrefix(nodeName)
 	for k := range l.resourceMap {
-		if has(l.resourceMap[k].Nodes, nodename) {
+		if has(l.resourceMap[k].Nodes, nodeName) {
 			return true
 		}
 	}
@@ -142,7 +161,7 @@ func (l *leafResourceManager) HasNodeName(nodename 
string) bool { return false } -func (l *leafResourceManager) Has(clustername string) bool { +func (l *leafResourceManager) HasCluster(clustername string) bool { for k := range l.resourceMap { if k == clustername { return true @@ -152,19 +171,34 @@ func (l *leafResourceManager) Has(clustername string) bool { return false } -func (l *leafResourceManager) GetClusterNode(nodename string) *ClusterNode { +func (l *leafResourceManager) GetClusterNode(nodeName string) *ClusterNode { + nodeName = trimNamePrefix(nodeName) for k := range l.resourceMap { - if clusterNode := getClusterNode(l.resourceMap[k].Nodes, nodename); clusterNode != nil { + if clusterNode := getClusterNode(l.resourceMap[k].Nodes, nodeName); clusterNode != nil { return clusterNode } } return nil } -func (l *leafResourceManager) ListNodeNames() []string { +func (l *leafResourceManager) ListClusters() []string { + l.leafResourceManagersLock.Lock() + defer l.leafResourceManagersLock.Unlock() + keys := make([]string, 0) + for k := range l.resourceMap { + if len(k) == 0 { + continue + } + + keys = append(keys, k) + } + return keys +} + +func (l *leafResourceManager) ListNodes() []string { l.leafResourceManagersLock.Lock() defer l.leafResourceManagersLock.Unlock() - keys := make([]string, 0, len(l.resourceMap)) + keys := make([]string, 0) for k := range l.resourceMap { if len(k) == 0 { continue diff --git a/pkg/clustertree/cluster-manager/utils/rootcluster.go b/pkg/clustertree/cluster-manager/utils/rootcluster.go index 7dcfae21d..73cf17e9a 100644 --- a/pkg/clustertree/cluster-manager/utils/rootcluster.go +++ b/pkg/clustertree/cluster-manager/utils/rootcluster.go @@ -1,16 +1,11 @@ -package leafUtils +package utils import ( - "context" - "sort" + "os" corev1 "k8s.io/api/core/v1" - metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/client-go/kubernetes" - "k8s.io/klog" kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1" - "github.com/kosmos.io/kosmos/pkg/utils" ) const ( @@ -27,55 +22,9 @@ func IsRootCluster(cluster *kosmosv1alpha1.Cluster) bool { return false } -func SortAddress(ctx context.Context, rootClient kubernetes.Interface, nodeName string, leafClient kubernetes.Interface, originAddress []corev1.NodeAddress) ([]corev1.NodeAddress, error) { - rootnodes, err := rootClient.CoreV1().Nodes().List(ctx, metav1.ListOptions{}) - if err != nil { - klog.Errorf("create node %s failed, cannot get node from root cluster, err: %v", nodeName, err) - return nil, err +func GetAddress() []corev1.NodeAddress { + address := []corev1.NodeAddress{ + {Type: corev1.NodeInternalIP, Address: os.Getenv("KNODE_POD_IP")}, } - - if len(rootnodes.Items) == 0 { - klog.Errorf("create node %s failed, cannot get node from root cluster, len of leafnodes is 0", nodeName) - return nil, err - } - - isIPv4First := true - for _, addr := range rootnodes.Items[0].Status.Addresses { - if addr.Type == corev1.NodeInternalIP { - if utils.IsIPv6(addr.Address) { - isIPv4First = false - } - break - } - } - - address := []corev1.NodeAddress{} - - for _, addr := range originAddress { - if addr.Type == corev1.NodeInternalIP { - address = append(address, corev1.NodeAddress{Type: corev1.NodeInternalIP, Address: addr.Address}) - } - } - - sort.Slice(address, func(i, j int) bool { - if isIPv4First { - if !utils.IsIPv6(address[i].Address) && utils.IsIPv6(address[j].Address) { - return true - } - if utils.IsIPv6(address[i].Address) && !utils.IsIPv6(address[j].Address) { - return false - } - return true - } else { - if !utils.IsIPv6(address[i].Address) && 
utils.IsIPv6(address[j].Address) { - return false - } - if utils.IsIPv6(address[i].Address) && !utils.IsIPv6(address[j].Address) { - return true - } - return true - } - }) - - return address, nil + return address } diff --git a/pkg/kosmosctl/floater/check.go b/pkg/kosmosctl/floater/check.go index a1c0f17b3..04a19cf0c 100644 --- a/pkg/kosmosctl/floater/check.go +++ b/pkg/kosmosctl/floater/check.go @@ -104,13 +104,13 @@ func (o *CommandCheckOptions) Complete() error { o.DstImageRepository = o.ImageRepository } - srcFloater := NewCheckFloater(o) + srcFloater := NewCheckFloater(o, false) if err := srcFloater.completeFromKubeConfigPath(o.SrcKubeConfig); err != nil { return err } o.SrcFloater = srcFloater - dstFloater := NewCheckFloater(o) + dstFloater := NewCheckFloater(o, true) if err := dstFloater.completeFromKubeConfigPath(o.DstKubeConfig); err != nil { return err } diff --git a/pkg/kosmosctl/floater/floater.go b/pkg/kosmosctl/floater/floater.go index 0d6150c2c..874615177 100644 --- a/pkg/kosmosctl/floater/floater.go +++ b/pkg/kosmosctl/floater/floater.go @@ -66,11 +66,15 @@ type Floater struct { Client kubernetes.Interface } -func NewCheckFloater(o *CommandCheckOptions) *Floater { +func NewCheckFloater(o *CommandCheckOptions, isDst bool) *Floater { + imageRepository := o.ImageRepository + if isDst { + imageRepository = o.DstImageRepository + } floater := &Floater{ Namespace: o.Namespace, Name: DefaultFloaterName, - ImageRepository: o.DstImageRepository, + ImageRepository: imageRepository, Version: o.Version, PodWaitTime: o.PodWaitTime, Port: o.Port, diff --git a/pkg/kosmosctl/get/get.go b/pkg/kosmosctl/get/get.go index 4f3a47534..92b462784 100644 --- a/pkg/kosmosctl/get/get.go +++ b/pkg/kosmosctl/get/get.go @@ -1,22 +1,31 @@ package get import ( + "context" "fmt" "strings" "github.com/spf13/cobra" + authenticationv1 "k8s.io/api/authentication/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/cli-runtime/pkg/genericclioptions" + "k8s.io/client-go/kubernetes" + "k8s.io/client-go/tools/clientcmd" ctlget "k8s.io/kubectl/pkg/cmd/get" ctlutil "k8s.io/kubectl/pkg/cmd/util" "k8s.io/kubectl/pkg/util/i18n" + "k8s.io/utils/pointer" + "github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned" + "github.com/kosmos.io/kosmos/pkg/kosmosctl/manifest" + "github.com/kosmos.io/kosmos/pkg/kosmosctl/util" "github.com/kosmos.io/kosmos/pkg/utils" ) const ( ClustersGroupResource = "clusters.kosmos.io" ClusterNodesGroupResource = "clusternodes.kosmos.io" - KnodesGroupResource = "knodes.kosmos.io" + NodeConfigsGroupResource = "nodeconfigs.kosmos.io" ) type CommandGetOptions struct { @@ -28,6 +37,8 @@ type CommandGetOptions struct { GetOptions *ctlget.GetOptions } +var newF ctlutil.Factory + // NewCmdGet Display resources from the Kosmos control plane. 
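+// A usage sketch for the new --cluster flag (the member cluster name below is
+// illustrative, not part of this change):
+//
+//	kosmosctl get clusters
+//	kosmosctl get pods -n default --cluster member1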
 func NewCmdGet(f ctlutil.Factory, streams genericclioptions.IOStreams) *cobra.Command {
 	o := NewCommandGetOptions(streams)
 
@@ -47,9 +58,10 @@ func NewCmdGet(f ctlutil.Factory, streams genericclioptions.IOStreams) *cobra.Co
 		},
 	}
 
-	o.GetOptions.PrintFlags.AddFlags(cmd)
 	flags := cmd.Flags()
 	flags.StringVarP(&o.Namespace, "namespace", "n", utils.DefaultNamespace, "If present, the namespace scope for this CLI request.")
+	flags.StringVar(&o.Cluster, "cluster", utils.DefaultClusterName, "Specify a cluster, the default is the control cluster.")
+	o.GetOptions.PrintFlags.AddFlags(cmd)
 
 	return cmd
 }
@@ -63,13 +75,71 @@ func NewCommandGetOptions(streams genericclioptions.IOStreams) *CommandGetOption
 }
 
 func (o *CommandGetOptions) Complete(f ctlutil.Factory, cmd *cobra.Command, args []string) error {
-	err := o.GetOptions.Complete(f, cmd, args)
-	if err != nil {
-		return fmt.Errorf("kosmosctl get complete error, options failed: %s", err)
+	if o.Cluster != utils.DefaultClusterName {
+		controlConfig, err := f.ToRESTConfig()
+		if err != nil {
+			return err
+		}
+
+		rootClient, err := versioned.NewForConfig(controlConfig)
+		if err != nil {
+			return err
+		}
+		cluster, err := rootClient.KosmosV1alpha1().Clusters().Get(context.TODO(), o.Cluster, metav1.GetOptions{})
+		if err != nil {
+			return err
+		}
+
+		leafConfig, err := clientcmd.RESTConfigFromKubeConfig(cluster.Spec.Kubeconfig)
+		if err != nil {
+			return fmt.Errorf("kosmosctl get complete error, load leaf cluster kubeconfig failed: %s", err)
+		}
+
+		leafClient, err := kubernetes.NewForConfig(leafConfig)
+		if err != nil {
+			return fmt.Errorf("kosmosctl get complete error, generate leaf cluster client failed: %s", err)
+		}
+
+		kosmosControlSA, err := util.GenerateServiceAccount(manifest.KosmosControlServiceAccount, manifest.ServiceAccountReplace{
+			Namespace: o.Namespace,
+		})
+		if err != nil {
+			return fmt.Errorf("kosmosctl get complete error, generate kosmos serviceaccount failed: %s", err)
+		}
+		expirationSeconds := int64(600)
+		leafToken, err := leafClient.CoreV1().ServiceAccounts(kosmosControlSA.Namespace).CreateToken(
+			context.TODO(), kosmosControlSA.Name, &authenticationv1.TokenRequest{
+				Spec: authenticationv1.TokenRequestSpec{
+					ExpirationSeconds: &expirationSeconds,
+				},
+			}, metav1.CreateOptions{})
+		if err != nil {
+			return fmt.Errorf("kosmosctl get complete error, create leaf cluster token failed: %s", err)
+		}
+
+		configFlags := genericclioptions.NewConfigFlags(false)
+		configFlags.APIServer = &leafConfig.Host
+		configFlags.BearerToken = &leafToken.Status.Token
+		configFlags.Insecure = pointer.Bool(true)
+		configFlags.Namespace = &o.Namespace
+
+		newF = ctlutil.NewFactory(configFlags)
+
+		err = o.GetOptions.Complete(newF, cmd, args)
+		if err != nil {
+			return fmt.Errorf("kosmosctl get complete error, options failed: %s", err)
+		}
+
+		o.GetOptions.Namespace = o.Namespace
+	} else {
+		err := o.GetOptions.Complete(f, cmd, args)
+		if err != nil {
+			return fmt.Errorf("kosmosctl get complete error, options failed: %s", err)
+		}
+
+		o.GetOptions.Namespace = o.Namespace
 	}
 
-	o.GetOptions.Namespace = o.Namespace
-
 	return nil
 }
 
@@ -88,13 +158,20 @@ func (o *CommandGetOptions) Run(f ctlutil.Factory, cmd *cobra.Command, args []st
 		args[0] = ClustersGroupResource
 	case "clusternode", "clusternodes":
 		args[0] = ClusterNodesGroupResource
-	case "knode", "knodes":
-		args[0] = KnodesGroupResource
+	case "nodeconfig", "nodeconfigs":
+		args[0] = NodeConfigsGroupResource
 	}
 
-	err := o.GetOptions.Run(f, cmd, args)
-	if err != nil {
-		return fmt.Errorf("kosmosctl get run 
error, options failed: %s", err) + if o.Cluster != utils.DefaultClusterName { + err := o.GetOptions.Run(newF, cmd, args) + if err != nil { + return fmt.Errorf("kosmosctl get run error, options failed: %s", err) + } + } else { + err := o.GetOptions.Run(f, cmd, args) + if err != nil { + return fmt.Errorf("kosmosctl get run error, options failed: %s", err) + } } return nil diff --git a/pkg/kosmosctl/install/install.go b/pkg/kosmosctl/install/install.go index c3bd72288..8fe6f535c 100644 --- a/pkg/kosmosctl/install/install.go +++ b/pkg/kosmosctl/install/install.go @@ -12,7 +12,6 @@ import ( extensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset" apierrors "k8s.io/apimachinery/pkg/api/errors" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/client-go/dynamic" "k8s.io/client-go/kubernetes" "k8s.io/client-go/rest" "k8s.io/client-go/tools/clientcmd" @@ -32,21 +31,20 @@ import ( ) var installExample = templates.Examples(i18n.T(` - # Install all module to Kosmos control plane, e.g: - kosmosctl install --cni cni-name --default-nic nic-name - + # Install all module to Kosmos control plane, e.g: + kosmosctl install --cni cni-name --default-nic nic-name + # Install Kosmos control plane, if you need to specify a special control plane cluster kubeconfig, e.g: - kosmosctl install --kubeconfig ~/kubeconfig/cluster-kubeconfig - - # Install clusterlink module to Kosmos control plane, e.g: - kosmosctl install -m clusterlink --cni cni-name --default-nic nic-name - + kosmosctl install --kubeconfig ~/kubeconfig/cluster-kubeconfig + # Install clustertree module to Kosmos control plane, e.g: - kosmosctl install -m clustertree - + kosmosctl install -m clustertree + + # Install clusterlink module to Kosmos control plane and set the necessary parameters, e.g: + kosmosctl install -m clusterlink --cni cni-name --default-nic nic-name + # Install coredns module to Kosmos control plane, e.g: - kosmosctl install -m coredns -`)) + kosmosctl install -m coredns`)) type CommandInstallOptions struct { Namespace string @@ -65,7 +63,6 @@ type CommandInstallOptions struct { KosmosClient versioned.Interface K8sClient kubernetes.Interface - K8sDynamicClient *dynamic.DynamicClient K8sExtensionsClient extensionsclient.Interface } @@ -137,11 +134,6 @@ func (o *CommandInstallOptions) Complete(f ctlutil.Factory) error { return fmt.Errorf("kosmosctl install complete error, generate K8s basic client failed: %v", err) } - o.K8sDynamicClient, err = dynamic.NewForConfig(config) - if err != nil { - return fmt.Errorf("kosmosctl install complete error, generate K8s dynamic client failed: %s", err) - } - o.K8sExtensionsClient, err = extensionsclient.NewForConfig(config) if err != nil { return fmt.Errorf("kosmosctl install complete error, generate K8s extensions client failed: %v", err) @@ -312,7 +304,7 @@ func (o *CommandInstallOptions) runClusterlink() error { } operatorDeploy, err := util.GenerateDeployment(manifest.KosmosOperatorDeployment, manifest.DeploymentReplace{ - Namespace: utils.DefaultNamespace, + Namespace: o.Namespace, Version: version.GetReleaseVersion().PatchRelease(), UseProxy: o.UseProxy, ImageRepository: o.ImageRegistry, @@ -470,7 +462,7 @@ func (o *CommandInstallOptions) runClustertree() error { } operatorDeploy, err := util.GenerateDeployment(manifest.KosmosOperatorDeployment, manifest.DeploymentReplace{ - Namespace: utils.DefaultNamespace, + Namespace: o.Namespace, Version: version.GetReleaseVersion().PatchRelease(), UseProxy: o.UseProxy, ImageRepository: o.ImageRegistry, @@ -580,7 +572,6 @@ 
func (o *CommandInstallOptions) createControlCluster() error { WaitTime: o.WaitTime, KosmosClient: o.KosmosClient, K8sClient: o.K8sClient, - K8sDynamicClient: o.K8sDynamicClient, RootFlag: true, EnableLink: true, CNI: o.CNI, @@ -637,7 +628,6 @@ func (o *CommandInstallOptions) createControlCluster() error { WaitTime: o.WaitTime, KosmosClient: o.KosmosClient, K8sClient: o.K8sClient, - K8sDynamicClient: o.K8sDynamicClient, RootFlag: true, EnableTree: true, } @@ -674,7 +664,6 @@ func (o *CommandInstallOptions) createControlCluster() error { WaitTime: o.WaitTime, KosmosClient: o.KosmosClient, K8sClient: o.K8sClient, - K8sDynamicClient: o.K8sDynamicClient, RootFlag: true, EnableLink: true, CNI: o.CNI, diff --git a/pkg/kosmosctl/join/join.go b/pkg/kosmosctl/join/join.go index 236224332..a307adc6f 100644 --- a/pkg/kosmosctl/join/join.go +++ b/pkg/kosmosctl/join/join.go @@ -8,10 +8,8 @@ import ( "github.com/spf13/cobra" corev1 "k8s.io/api/core/v1" - extensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset" apierrors "k8s.io/apimachinery/pkg/api/errors" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" - "k8s.io/client-go/dynamic" "k8s.io/client-go/kubernetes" "k8s.io/client-go/rest" "k8s.io/client-go/tools/clientcmd" @@ -63,10 +61,8 @@ type CommandJoinOptions struct { EnableTree bool - KosmosClient versioned.Interface - K8sClient kubernetes.Interface - K8sDynamicClient *dynamic.DynamicClient - K8sExtensionsClient extensionsclient.Interface + KosmosClient versioned.Interface + K8sClient kubernetes.Interface } // NewCmdJoin join resource to Kosmos control plane. @@ -138,11 +134,6 @@ func (o *CommandJoinOptions) Complete(f ctlutil.Factory) error { return fmt.Errorf("kosmosctl install complete error, generate Kosmos client failed: %v", err) } - o.K8sDynamicClient, err = dynamic.NewForConfig(hostConfig) - if err != nil { - return fmt.Errorf("kosmosctl join complete error, generate dynamic client failed: %s", err) - } - if len(o.KubeConfig) > 0 { o.KubeConfigStream, err = os.ReadFile(o.KubeConfig) if err != nil { @@ -158,11 +149,6 @@ func (o *CommandJoinOptions) Complete(f ctlutil.Factory) error { if err != nil { return fmt.Errorf("kosmosctl join complete error, generate basic client failed: %v", err) } - - o.K8sExtensionsClient, err = extensionsclient.NewForConfig(clusterConfig) - if err != nil { - return fmt.Errorf("kosmosctl join complete error, generate extensions client failed: %v", err) - } } else { return fmt.Errorf("kosmosctl join complete error, arg ClusterKubeConfig is required") } @@ -181,7 +167,7 @@ func (o *CommandJoinOptions) Validate(args []string) error { switch args[0] { case "cluster": - _, err := o.K8sDynamicClient.Resource(util.ClusterGVR).Get(context.TODO(), o.Name, metav1.GetOptions{}) + _, err := o.KosmosClient.KosmosV1alpha1().Clusters().Get(context.TODO(), o.Name, metav1.GetOptions{}) if err != nil { if apierrors.IsAlreadyExists(err) { return fmt.Errorf("kosmosctl join validate error, clsuter already exists: %s", err) @@ -278,63 +264,75 @@ func (o *CommandJoinOptions) runCluster() error { klog.Info("Cluster " + o.Name + " has been created.") // create ns if it does not exist - namespace := &corev1.Namespace{} - namespace.Name = utils.DefaultNamespace - _, err = o.K8sClient.CoreV1().Namespaces().Create(context.TODO(), namespace, metav1.CreateOptions{}) + kosmosNS := &corev1.Namespace{} + kosmosNS.Name = o.Namespace + _, err = o.K8sClient.CoreV1().Namespaces().Create(context.TODO(), kosmosNS, metav1.CreateOptions{}) if err != nil && 
!apierrors.IsAlreadyExists(err) { return fmt.Errorf("kosmosctl join run error, create namespace failed: %s", err) } // create rbac - secret := &corev1.Secret{ + kosmosControlSA, err := util.GenerateServiceAccount(manifest.KosmosControlServiceAccount, manifest.ServiceAccountReplace{ + Namespace: o.Namespace, + }) + if err != nil { + return fmt.Errorf("kosmosctl join run error, generate kosmos serviceaccount failed: %s", err) + } + _, err = o.K8sClient.CoreV1().ServiceAccounts(kosmosControlSA.Namespace).Create(context.TODO(), kosmosControlSA, metav1.CreateOptions{}) + if err != nil && !apierrors.IsAlreadyExists(err) { + return fmt.Errorf("kosmosctl join run error, create kosmos serviceaccount failed: %s", err) + } + klog.Info("ServiceAccount " + kosmosControlSA.Name + " has been created.") + + controlPanelSecret := &corev1.Secret{ TypeMeta: metav1.TypeMeta{}, ObjectMeta: metav1.ObjectMeta{ Name: utils.ControlPanelSecretName, - Namespace: utils.DefaultNamespace, + Namespace: o.Namespace, }, Data: map[string][]byte{ "kubeconfig": o.HostKubeConfigStream, }, } - _, err = o.K8sClient.CoreV1().Secrets(secret.Namespace).Create(context.TODO(), secret, metav1.CreateOptions{}) + _, err = o.K8sClient.CoreV1().Secrets(controlPanelSecret.Namespace).Create(context.TODO(), controlPanelSecret, metav1.CreateOptions{}) if err != nil && !apierrors.IsAlreadyExists(err) { return fmt.Errorf("kosmosctl join run error, create secret failed: %s", err) } - klog.Info("Secret " + secret.Name + " has been created.") + klog.Info("Secret " + controlPanelSecret.Name + " has been created.") - clusterRole, err := util.GenerateClusterRole(manifest.KosmosClusterRole, nil) + kosmosCR, err := util.GenerateClusterRole(manifest.KosmosClusterRole, nil) if err != nil { return fmt.Errorf("kosmosctl join run error, generate clusterrole failed: %s", err) } - _, err = o.K8sClient.RbacV1().ClusterRoles().Create(context.TODO(), clusterRole, metav1.CreateOptions{}) + _, err = o.K8sClient.RbacV1().ClusterRoles().Create(context.TODO(), kosmosCR, metav1.CreateOptions{}) if err != nil && !apierrors.IsAlreadyExists(err) { return fmt.Errorf("kosmosctl join run error, create clusterrole failed: %s", err) } - klog.Info("ClusterRole " + clusterRole.Name + " has been created.") + klog.Info("ClusterRole " + kosmosCR.Name + " has been created.") - clusterRoleBinding, err := util.GenerateClusterRoleBinding(manifest.KosmosClusterRoleBinding, manifest.ClusterRoleBindingReplace{ - Namespace: utils.DefaultNamespace, + kosmosCRB, err := util.GenerateClusterRoleBinding(manifest.KosmosClusterRoleBinding, manifest.ClusterRoleBindingReplace{ + Namespace: o.Namespace, }) if err != nil { return fmt.Errorf("kosmosctl join run error, generate clusterrolebinding failed: %s", err) } - _, err = o.K8sClient.RbacV1().ClusterRoleBindings().Create(context.TODO(), clusterRoleBinding, metav1.CreateOptions{}) + _, err = o.K8sClient.RbacV1().ClusterRoleBindings().Create(context.TODO(), kosmosCRB, metav1.CreateOptions{}) if err != nil && !apierrors.IsAlreadyExists(err) { return fmt.Errorf("kosmosctl join run error, create clusterrolebinding failed: %s", err) } - klog.Info("ClusterRoleBinding " + clusterRoleBinding.Name + " has been created.") + klog.Info("ClusterRoleBinding " + kosmosCRB.Name + " has been created.") - serviceAccount, err := util.GenerateServiceAccount(manifest.KosmosOperatorServiceAccount, manifest.ServiceAccountReplace{ - Namespace: utils.DefaultNamespace, + kosmosOperatorSA, err := util.GenerateServiceAccount(manifest.KosmosOperatorServiceAccount, 
manifest.ServiceAccountReplace{ + Namespace: o.Namespace, }) if err != nil { return fmt.Errorf("kosmosctl join run error, generate serviceaccount failed: %s", err) } - _, err = o.K8sClient.CoreV1().ServiceAccounts(serviceAccount.Namespace).Create(context.TODO(), serviceAccount, metav1.CreateOptions{}) + _, err = o.K8sClient.CoreV1().ServiceAccounts(kosmosOperatorSA.Namespace).Create(context.TODO(), kosmosOperatorSA, metav1.CreateOptions{}) if err != nil && !apierrors.IsAlreadyExists(err) { return fmt.Errorf("kosmosctl join run error, create serviceaccount failed: %s", err) } - klog.Info("ServiceAccount " + serviceAccount.Name + " has been created.") + klog.Info("ServiceAccount " + kosmosOperatorSA.Name + " has been created.") //ToDo Wait for all services to be running diff --git a/pkg/kosmosctl/kosmosctl.go b/pkg/kosmosctl/kosmosctl.go index a819184ad..2664d936e 100644 --- a/pkg/kosmosctl/kosmosctl.go +++ b/pkg/kosmosctl/kosmosctl.go @@ -17,6 +17,7 @@ import ( "github.com/kosmos.io/kosmos/pkg/kosmosctl/get" "github.com/kosmos.io/kosmos/pkg/kosmosctl/install" "github.com/kosmos.io/kosmos/pkg/kosmosctl/join" + "github.com/kosmos.io/kosmos/pkg/kosmosctl/logs" "github.com/kosmos.io/kosmos/pkg/kosmosctl/rsmigrate" "github.com/kosmos.io/kosmos/pkg/kosmosctl/uninstall" "github.com/kosmos.io/kosmos/pkg/kosmosctl/unjoin" @@ -69,8 +70,9 @@ func NewKosmosCtlCommand() *cobra.Command { }, }, { - Message: "Cluster Check/Analysis Commands:", + Message: "Troubleshooting and Debugging Commands:", Commands: []*cobra.Command{ + logs.NewCmdLogs(f, ioStreams), floater.NewCmdCheck(), floater.NewCmdAnalysis(f), }, diff --git a/pkg/kosmosctl/logs/logs.go b/pkg/kosmosctl/logs/logs.go new file mode 100644 index 000000000..a06b567c3 --- /dev/null +++ b/pkg/kosmosctl/logs/logs.go @@ -0,0 +1,154 @@ +package logs + +import ( + "context" + "fmt" + + "github.com/spf13/cobra" + authenticationv1 "k8s.io/api/authentication/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/cli-runtime/pkg/genericclioptions" + "k8s.io/client-go/kubernetes" + "k8s.io/client-go/tools/clientcmd" + ctllogs "k8s.io/kubectl/pkg/cmd/logs" + ctlutil "k8s.io/kubectl/pkg/cmd/util" + "k8s.io/kubectl/pkg/util/i18n" + "k8s.io/kubectl/pkg/util/templates" + "k8s.io/utils/pointer" + + "github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned" + "github.com/kosmos.io/kosmos/pkg/kosmosctl/manifest" + "github.com/kosmos.io/kosmos/pkg/kosmosctl/util" + "github.com/kosmos.io/kosmos/pkg/utils" +) + +var ( + logsLong = templates.LongDesc(i18n.T(` + Print the logs for a container in a pod or specified resource from the specified cluster. 
+	If the pod has only one container, the container name is optional.`))
+
+	logsExample = templates.Examples(i18n.T(`
+		# Return logs from a pod, e.g:
+		kosmosctl logs pod-name --cluster cluster-name
+
+		# Return logs from a specific container of a pod, e.g:
+		kosmosctl logs pod-name --cluster cluster-name -c container-name`))
+)
+
+type CommandLogsOptions struct {
+	Cluster string
+
+	Namespace string
+
+	LogsOptions *ctllogs.LogsOptions
+}
+
+func NewCmdLogs(f ctlutil.Factory, streams genericclioptions.IOStreams) *cobra.Command {
+	o := NewCommandLogsOptions(streams)
+
+	cmd := &cobra.Command{
+		Use:                   "logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER] (--cluster CLUSTER_NAME)",
+		Short:                 i18n.T("Print the logs for a container in a pod from the specified cluster"),
+		Long:                  logsLong,
+		Example:               logsExample,
+		SilenceUsage:          true,
+		DisableFlagsInUseLine: true,
+		RunE: func(cmd *cobra.Command, args []string) error {
+			ctlutil.CheckErr(o.Complete(f, cmd, args))
+			ctlutil.CheckErr(o.Validate())
+			ctlutil.CheckErr(o.Run())
+			return nil
+		},
+	}
+
+	flags := cmd.Flags()
+	flags.StringVarP(&o.Namespace, "namespace", "n", utils.DefaultNamespace, "If present, the namespace scope for this CLI request.")
+	flags.StringVar(&o.Cluster, "cluster", utils.DefaultClusterName, "Specify a cluster, the default is the control cluster.")
+	o.LogsOptions.AddFlags(cmd)
+
+	return cmd
+}
+
+func NewCommandLogsOptions(streams genericclioptions.IOStreams) *CommandLogsOptions {
+	logsOptions := ctllogs.NewLogsOptions(streams, false)
+	return &CommandLogsOptions{
+		LogsOptions: logsOptions,
+	}
+}
+
+func (o *CommandLogsOptions) Complete(f ctlutil.Factory, cmd *cobra.Command, args []string) error {
+	controlConfig, err := f.ToRESTConfig()
+	if err != nil {
+		return err
+	}
+
+	rootClient, err := versioned.NewForConfig(controlConfig)
+	if err != nil {
+		return err
+	}
+	cluster, err := rootClient.KosmosV1alpha1().Clusters().Get(context.TODO(), o.Cluster, metav1.GetOptions{})
+	if err != nil {
+		return err
+	}
+
+	leafConfig, err := clientcmd.RESTConfigFromKubeConfig(cluster.Spec.Kubeconfig)
+	if err != nil {
+		return fmt.Errorf("kosmosctl logs complete error, load leaf cluster kubeconfig failed: %s", err)
+	}
+
+	leafClient, err := kubernetes.NewForConfig(leafConfig)
+	if err != nil {
+		return fmt.Errorf("kosmosctl logs complete error, generate leaf cluster client failed: %s", err)
+	}
+
+	kosmosControlSA, err := util.GenerateServiceAccount(manifest.KosmosControlServiceAccount, manifest.ServiceAccountReplace{
+		Namespace: o.Namespace,
+	})
+	if err != nil {
+		return fmt.Errorf("kosmosctl logs complete error, generate kosmos serviceaccount failed: %s", err)
+	}
+	expirationSeconds := int64(600)
+	leafToken, err := leafClient.CoreV1().ServiceAccounts(kosmosControlSA.Namespace).CreateToken(
+		context.TODO(), kosmosControlSA.Name, &authenticationv1.TokenRequest{
+			Spec: authenticationv1.TokenRequestSpec{
+				ExpirationSeconds: &expirationSeconds,
+			},
+		}, metav1.CreateOptions{})
+	if err != nil {
+		return fmt.Errorf("kosmosctl logs complete error, create leaf cluster serviceaccount token failed: %s", err)
+	}
+
+	configFlags := genericclioptions.NewConfigFlags(false)
+	configFlags.APIServer = &leafConfig.Host
+	configFlags.BearerToken = &leafToken.Status.Token
+	configFlags.Insecure = pointer.Bool(true)
+	configFlags.Namespace = &o.Namespace
+
+	o.LogsOptions.Namespace = o.Namespace
+
+	newF := ctlutil.NewFactory(configFlags)
+	err = o.LogsOptions.Complete(newF, cmd, args)
+	if err != nil {
+		return fmt.Errorf("kosmosctl logs complete error, options failed: %s", err)
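+		// NOTE: access to the leaf cluster above is authenticated with a
+		// short-lived (600s) ServiceAccount token minted via the TokenRequest
+		// API for the kosmos-control account; TLS verification is skipped
+		// (Insecure=true), which assumes the leaf apiserver is reached over a
+		// trusted network path.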
+	}
+
+	return nil
+}
+
+func (o *CommandLogsOptions) Validate() error {
+	err := o.LogsOptions.Validate()
+	if err != nil {
+		return fmt.Errorf("kosmosctl logs validate error, options failed: %s", err)
+	}
+
+	return nil
+}
+
+func (o *CommandLogsOptions) Run() error {
+	err := o.LogsOptions.RunLogs()
+	if err != nil {
+		return fmt.Errorf("kosmosctl logs run error, options failed: %s", err)
+	}
+
+	return nil
+}
diff --git a/pkg/kosmosctl/manifest/manifest_clusterrolebindings.go b/pkg/kosmosctl/manifest/manifest_clusterrolebindings.go
index 81c7020e2..1c000f674 100644
--- a/pkg/kosmosctl/manifest/manifest_clusterrolebindings.go
+++ b/pkg/kosmosctl/manifest/manifest_clusterrolebindings.go
@@ -40,7 +40,10 @@ roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: kosmos
-subjects:
+subjects:
+  - kind: ServiceAccount
+    name: kosmos-control
+    namespace: {{ .Namespace }}
   - kind: ServiceAccount
     name: clusterlink-controller-manager
     namespace: {{ .Namespace }}
diff --git a/pkg/kosmosctl/manifest/manifest_clusterroles.go b/pkg/kosmosctl/manifest/manifest_clusterroles.go
index b4ebca507..7e4d55c36 100644
--- a/pkg/kosmosctl/manifest/manifest_clusterroles.go
+++ b/pkg/kosmosctl/manifest/manifest_clusterroles.go
@@ -37,7 +37,7 @@ rules:
     resources: ['*']
     verbs: ["*"]
   - nonResourceURLs: ['*']
-    verbs: ["get"]
+    verbs: ["*"]
 `

 	ClusterTreeClusterRole = `
diff --git a/pkg/kosmosctl/manifest/manifest_serviceaccounts.go b/pkg/kosmosctl/manifest/manifest_serviceaccounts.go
index bc361c5e8..a23318507 100644
--- a/pkg/kosmosctl/manifest/manifest_serviceaccounts.go
+++ b/pkg/kosmosctl/manifest/manifest_serviceaccounts.go
@@ -1,6 +1,14 @@
 package manifest

 const (
+	KosmosControlServiceAccount = `
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: kosmos-control
+  namespace: {{ .Namespace }}
+`
+
 	KosmosOperatorServiceAccount = `
 apiVersion: v1
 kind: ServiceAccount
diff --git a/pkg/kosmosctl/uninstall/uninstall.go b/pkg/kosmosctl/uninstall/uninstall.go
index b8a6cedc9..cd6f4accc 100644
--- a/pkg/kosmosctl/uninstall/uninstall.go
+++ b/pkg/kosmosctl/uninstall/uninstall.go
@@ -340,7 +340,7 @@ func (o *CommandUninstallOptions) runClustertree() error {
 		klog.Info("CRD " + clusterCRD.Name + " is deleted.")
 	}

-	err = o.K8sClient.CoreV1().ConfigMaps(utils.DefaultNamespace).Delete(context.TODO(), utils.HostKubeConfigName, metav1.DeleteOptions{})
+	err = o.K8sClient.CoreV1().ConfigMaps(o.Namespace).Delete(context.TODO(), utils.HostKubeConfigName, metav1.DeleteOptions{})
 	if err != nil {
 		if !apierrors.IsNotFound(err) {
 			return fmt.Errorf("kosmosctl uninstall clustertree run error, configmap options failed: %v", err)
diff --git a/pkg/utils/constants.go b/pkg/utils/constants.go
index 20451302e..37ff95e98 100644
--- a/pkg/utils/constants.go
+++ b/pkg/utils/constants.go
@@ -14,6 +14,7 @@ const (
 	DefaultWaitTime            = 120
 	RootClusterAnnotationKey   = "kosmos.io/cluster-role"
 	RootClusterAnnotationValue = "root"
+	KosmosSchedulerName        = "kosmos-scheduler"
 )

 const (
@@ -74,7 +75,9 @@ const (
 	KosmosTrippedLabels    = "kosmos-io/tripped"
 	KosmosPvcLabelSelector = "kosmos-io/label-selector"
-	KosmosResourceOwnersAnnotations = "kosmos-io/cluster-owners"
+	// on a resource (pv, configmap, secret), records which clusters the resource belongs to
+	KosmosResourceOwnersAnnotations = "kosmos-io/cluster-owners"
+	// on a node, records which cluster the node belongs to
 	KosmosNodeOwnedByClusterAnnotations = "kosmos-io/owned-by-cluster"

 	KosmosDaemonsetAllowAnnotations = "kosmos-io/daemonset-allow"
diff --git a/pkg/utils/k8s.go
b/pkg/utils/k8s.go index 1d72d424c..168a4918d 100644 --- a/pkg/utils/k8s.go +++ b/pkg/utils/k8s.go @@ -19,9 +19,10 @@ import ( ) type ClustersNodeSelection struct { - NodeSelector map[string]string `json:"nodeSelector,omitempty"` - Affinity *corev1.Affinity `json:"affinity,omitempty"` - Tolerations []corev1.Toleration `json:"tolerations,omitempty"` + NodeSelector map[string]string `json:"nodeSelector,omitempty"` + Affinity *corev1.Affinity `json:"affinity,omitempty"` + Tolerations []corev1.Toleration `json:"tolerations,omitempty"` + TopologySpreadConstraints []corev1.TopologySpreadConstraint `json:"topologySpreadConstraints,omitempty"` } type EnvResourceManager interface { @@ -281,7 +282,7 @@ func IsObjectUnstructuredGlobal(obj map[string]string) bool { return false } -func AddResourceOwnersAnnotations(anno map[string]string, owner string) map[string]string { +func AddResourceClusters(anno map[string]string, clusterName string) map[string]string { if anno == nil { anno = map[string]string{} } @@ -294,28 +295,28 @@ func AddResourceOwnersAnnotations(anno map[string]string, owner string) map[stri continue } newowners = append(newowners, v) - if v == owner { + if v == clusterName { // already existed flag = true } } if !flag { - newowners = append(newowners, owner) + newowners = append(newowners, clusterName) } anno[KosmosResourceOwnersAnnotations] = strings.Join(newowners, ",") return anno } -func HasResourceOwnersAnnotations(anno map[string]string, owner string) bool { +func HasResourceClusters(anno map[string]string, clusterName string) bool { if anno == nil { anno = map[string]string{} } owners := strings.Split(anno[KosmosResourceOwnersAnnotations], ",") for _, v := range owners { - if v == owner { + if v == clusterName { // already existed return true } @@ -323,7 +324,7 @@ func HasResourceOwnersAnnotations(anno map[string]string, owner string) bool { return false } -func ListResourceOwnersAnnotations(anno map[string]string) []string { +func ListResourceClusters(anno map[string]string) []string { if anno == nil || anno[KosmosResourceOwnersAnnotations] == "" { return []string{} } diff --git a/pkg/utils/podutils/env.go b/pkg/utils/podutils/env.go index c51cd851d..58a42ab4d 100644 --- a/pkg/utils/podutils/env.go +++ b/pkg/utils/podutils/env.go @@ -1,4 +1,4 @@ -// This code is directly lifted from the karmada +// This code is directly lifted from the VIRTUAL-KUBELET // For reference: // https://github.com/virtual-kubelet/virtual-kubelet/blob/master/internal/podutils/env.go @@ -19,6 +19,7 @@ import ( "k8s.io/apimachinery/pkg/util/sets" apivalidation "k8s.io/apimachinery/pkg/util/validation" "k8s.io/klog" + "k8s.io/utils/pointer" "github.com/kosmos.io/kosmos/pkg/utils" ) @@ -295,13 +296,13 @@ func makeEnvironmentMap(ctx context.Context, pod *corev1.Pod, container *corev1. // If the variable's Value is set, expand the `$(var)` references to other // variables in the .Value field; the sources of variables are the declared // variables of the container and the service environment variables. - // mappingFunc := expansion.MappingFuncFor(res, svcEnv) + mappingFunc := MappingFuncFor(res, svcEnv) // Iterate over environment variables in order to populate the map. 
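+	// For illustration: with res = {"FOO": "bar"}, a declared env value of
+	// "$(FOO)-suffix" expands to "bar-suffix", while a reference to an
+	// undefined variable such as "$(MISSING)" is left untouched as literal
+	// text (see MappingFuncFor in expand.go below).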
var keys []corev1.EnvVar for _, env := range container.Env { envptr := env - val, err := getEnvironmentVariableValue(ctx, &envptr, pod, container, rm) + val, err := getEnvironmentVariableValue(ctx, &envptr, mappingFunc, pod, container, rm) if err != nil { keys = append(keys, env) } @@ -320,12 +321,12 @@ func makeEnvironmentMap(ctx context.Context, pod *corev1.Pod, container *corev1. return keys, nil } -func getEnvironmentVariableValue(ctx context.Context, env *corev1.EnvVar, pod *corev1.Pod, container *corev1.Container, rm utils.EnvResourceManager) (*string, error) { +func getEnvironmentVariableValue(ctx context.Context, env *corev1.EnvVar, mappingFunc func(string) string, pod *corev1.Pod, container *corev1.Container, rm utils.EnvResourceManager) (*string, error) { if env.ValueFrom != nil { return getEnvironmentVariableValueWithValueFrom(ctx, env, pod, container, rm) } // Handle values that have been directly provided after expanding variable references. - return &env.Value, nil + return pointer.String(Expand(env.Value, mappingFunc)), nil } func getEnvironmentVariableValueWithValueFrom(ctx context.Context, env *corev1.EnvVar, pod *corev1.Pod, container *corev1.Container, rm utils.EnvResourceManager) (*string, error) { diff --git a/pkg/utils/podutils/expand.go b/pkg/utils/podutils/expand.go new file mode 100644 index 000000000..5f9bab3ed --- /dev/null +++ b/pkg/utils/podutils/expand.go @@ -0,0 +1,107 @@ +//Copied from +//https://github.com/kubernetes/kubernetes/tree/master/third_party/forked/golang/expansion . +// +//This is to eliminate a direct dependency on kubernetes/kubernetes. + +package podutils + +import ( + "bytes" +) + +const ( + operator = '$' + referenceOpener = '(' + referenceCloser = ')' +) + +// syntaxWrap returns the input string wrapped by the expansion syntax. +func syntaxWrap(input string) string { + return string(operator) + string(referenceOpener) + input + string(referenceCloser) +} + +// MappingFuncFor returns a mapping function for use with Expand that +// implements the expansion semantics defined in the expansion spec; it +// returns the input string wrapped in the expansion syntax if no mapping +// for the input is found. +func MappingFuncFor(context ...map[string]string) func(string) string { + return func(input string) string { + for _, vars := range context { + val, ok := vars[input] + if ok { + return val + } + } + + return syntaxWrap(input) + } +} + +// Expand replaces variable references in the input string according to +// the expansion spec using the given mapping function to resolve the +// values of variables. 
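+//
+// For example, with m := MappingFuncFor(map[string]string{"VAR": "value"}):
+//
+//	Expand("a-$(VAR)", m)   // "a-value"
+//	Expand("a-$$(VAR)", m)  // "a-$(VAR)" (escaped operator)
+//	Expand("a-$(OTHER)", m) // "a-$(OTHER)" (unknown references are kept)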
+func Expand(input string, mapping func(string) string) string { + var buf bytes.Buffer + checkpoint := 0 + for cursor := 0; cursor < len(input); cursor++ { + if input[cursor] == operator && cursor+1 < len(input) { + // Copy the portion of the input string since the last + // checkpoint into the buffer + buf.WriteString(input[checkpoint:cursor]) + + // Attempt to read the variable name as defined by the + // syntax from the input string + read, isVar, advance := tryReadVariableName(input[cursor+1:]) + + if isVar { + // We were able to read a variable name correctly; + // apply the mapping to the variable name and copy the + // bytes into the buffer + buf.WriteString(mapping(read)) + } else { + // Not a variable name; copy the read bytes into the buffer + buf.WriteString(read) + } + + // Advance the cursor in the input string to account for + // bytes consumed to read the variable name expression + cursor += advance + + // Advance the checkpoint in the input string + checkpoint = cursor + 1 + } + } + + // Return the buffer and any remaining unwritten bytes in the + // input string. + return buf.String() + input[checkpoint:] +} + +// tryReadVariableName attempts to read a variable name from the input +// string and returns the content read from the input, whether that content +// represents a variable name to perform mapping on, and the number of bytes +// consumed in the input string. +// +// The input string is assumed not to contain the initial operator. +func tryReadVariableName(input string) (string, bool, int) { + switch input[0] { + case operator: + // Escaped operator; return it. + return input[0:1], false, 1 + case referenceOpener: + // Scan to expression closer + for i := 1; i < len(input); i++ { + if input[i] == referenceCloser { + return input[1:i], true, i + 1 + } + } + + // Incomplete reference; return it. + return string(operator) + string(referenceOpener), false, 1 + default: + // Not the beginning of an expression, ie, an operator + // that doesn't begin an expression. Return the operator + // and the first rune in the string. + return (string(operator) + string(input[0])), false, 1 + } +} diff --git a/pkg/utils/podutils/pod.go b/pkg/utils/podutils/pod.go index 5d5b9c01b..d6da91e18 100644 --- a/pkg/utils/podutils/pod.go +++ b/pkg/utils/podutils/pod.go @@ -166,7 +166,9 @@ func FitPod(pod *corev1.Pod, ignoreLabels []string, cleanNodeName bool) *corev1. 
podCopy.Spec.Volumes = vols podCopy.Status = corev1.PodStatus{} - podCopy.Spec.SchedulerName = "" + if podCopy.Spec.SchedulerName == utils.KosmosSchedulerName { + podCopy.Spec.SchedulerName = "" + } if cleanNodeName { podCopy.Spec.NodeName = "" @@ -286,6 +288,7 @@ func recoverSelectors(pod *corev1.Pod, cns *utils.ClustersNodeSelection) { if cns != nil { pod.Spec.NodeSelector = cns.NodeSelector pod.Spec.Tolerations = cns.Tolerations + pod.Spec.TopologySpreadConstraints = cns.TopologySpreadConstraints if pod.Spec.Affinity == nil { pod.Spec.Affinity = cns.Affinity } else { @@ -302,6 +305,7 @@ func recoverSelectors(pod *corev1.Pod, cns *utils.ClustersNodeSelection) { } else { pod.Spec.NodeSelector = nil pod.Spec.Tolerations = nil + pod.Spec.TopologySpreadConstraints = nil if pod.Spec.Affinity != nil && pod.Spec.Affinity.NodeAffinity != nil { pod.Spec.Affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution = nil } diff --git a/pkg/utils/pvpvc.go b/pkg/utils/pvpvc.go index d033b4d2e..9ceaebbf4 100644 --- a/pkg/utils/pvpvc.go +++ b/pkg/utils/pvpvc.go @@ -1,9 +1,12 @@ package utils import ( + "fmt" "reflect" v1 "k8s.io/api/core/v1" + + kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1" ) func IsPVEqual(pv *v1.PersistentVolume, clone *v1.PersistentVolume) bool { @@ -14,3 +17,35 @@ func IsPVEqual(pv *v1.PersistentVolume, clone *v1.PersistentVolume) bool { } return false } + +func IsOne2OneMode(cluster *kosmosv1alpha1.Cluster) bool { + return cluster.Spec.ClusterTreeOptions.LeafModels != nil +} + +func NodeAffinity4RootPV(pv *v1.PersistentVolume, isOne2OneMode bool, clusterName string) string { + node4RootPV := fmt.Sprintf("%s%s", KosmosNodePrefix, clusterName) + if isOne2OneMode { + for _, v := range pv.Spec.NodeAffinity.Required.NodeSelectorTerms { + for _, val := range v.MatchFields { + if val.Key == NodeHostnameValue || val.Key == NodeHostnameValueBeta { + node4RootPV = val.Values[0] + } + } + for _, val := range v.MatchExpressions { + if val.Key == NodeHostnameValue || val.Key == NodeHostnameValueBeta { + node4RootPV = val.Values[0] + } + } + } + } + return node4RootPV +} + +func IsPVCEqual(pvc *v1.PersistentVolumeClaim, clone *v1.PersistentVolumeClaim) bool { + if reflect.DeepEqual(pvc.Annotations, clone.Annotations) && + reflect.DeepEqual(pvc.Spec, clone.Spec) && + reflect.DeepEqual(pvc.Status, clone.Status) { + return true + } + return false +} diff --git a/pkg/utils/resources.go b/pkg/utils/resources.go index 161f1618d..13fdee5df 100644 --- a/pkg/utils/resources.go +++ b/pkg/utils/resources.go @@ -2,13 +2,20 @@ package utils import ( corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" v1resource "k8s.io/kubernetes/pkg/api/v1/resource" ) +const ( + podResourceName corev1.ResourceName = "pods" +) + func CalculateClusterResources(nodes *corev1.NodeList, pods *corev1.PodList) corev1.ResourceList { base := GetNodesTotalResources(nodes) reqs, _ := GetPodsTotalRequestsAndLimits(pods) + podNums := GetUsedPodNums(pods) SubResourceList(base, reqs) + SubResourceList(base, podNums) return base } @@ -70,3 +77,18 @@ func GetPodsTotalRequestsAndLimits(podList *corev1.PodList) (reqs corev1.Resourc } return } + +func GetUsedPodNums(podList *corev1.PodList) (res corev1.ResourceList) { + podQuantity := resource.Quantity{} + res = corev1.ResourceList{} + for _, p := range podList.Items { + pod := p + if IsVirtualPod(&pod) { + continue + } + q := resource.MustParse("1") + podQuantity.Add(q) + } + res[podResourceName] = podQuantity + return +} diff --git 
a/vendor/github.com/fatih/camelcase/.travis.yml b/vendor/github.com/fatih/camelcase/.travis.yml new file mode 100644 index 000000000..3489e3871 --- /dev/null +++ b/vendor/github.com/fatih/camelcase/.travis.yml @@ -0,0 +1,3 @@ +language: go +go: 1.x + diff --git a/vendor/github.com/fatih/camelcase/LICENSE.md b/vendor/github.com/fatih/camelcase/LICENSE.md new file mode 100644 index 000000000..aa4a536ca --- /dev/null +++ b/vendor/github.com/fatih/camelcase/LICENSE.md @@ -0,0 +1,20 @@ +The MIT License (MIT) + +Copyright (c) 2015 Fatih Arslan + +Permission is hereby granted, free of charge, to any person obtaining a copy of +this software and associated documentation files (the "Software"), to deal in +the Software without restriction, including without limitation the rights to +use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of +the Software, and to permit persons to whom the Software is furnished to do so, +subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS +FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR +COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER +IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN +CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. diff --git a/vendor/github.com/fatih/camelcase/README.md b/vendor/github.com/fatih/camelcase/README.md new file mode 100644 index 000000000..105a6ae33 --- /dev/null +++ b/vendor/github.com/fatih/camelcase/README.md @@ -0,0 +1,58 @@ +# CamelCase [![GoDoc](http://img.shields.io/badge/go-documentation-blue.svg?style=flat-square)](http://godoc.org/github.com/fatih/camelcase) [![Build Status](http://img.shields.io/travis/fatih/camelcase.svg?style=flat-square)](https://travis-ci.org/fatih/camelcase) + +CamelCase is a Golang (Go) package to split the words of a camelcase type +string into a slice of words. It can be used to convert a camelcase word (lower +or upper case) into any type of word. + +## Splitting rules: + +1. If string is not valid UTF-8, return it without splitting as + single item array. +2. Assign all unicode characters into one of 4 sets: lower case + letters, upper case letters, numbers, and all other characters. +3. Iterate through characters of string, introducing splits + between adjacent characters that belong to different sets. +4. Iterate through array of split strings, and if a given string + is upper case: + * if subsequent string is lower case: + * move last character of upper case string to beginning of + lower case string + +## Install + +```bash +go get github.com/fatih/camelcase +``` + +## Usage and examples + +```go +splitted := camelcase.Split("GolangPackage") + +fmt.Println(splitted[0], splitted[1]) // prints: "Golang", "Package" +``` + +Both lower camel case and upper camel case are supported. 
For more info please +check: [http://en.wikipedia.org/wiki/CamelCase](http://en.wikipedia.org/wiki/CamelCase) + +Below are some example cases: + +``` +"" => [] +"lowercase" => ["lowercase"] +"Class" => ["Class"] +"MyClass" => ["My", "Class"] +"MyC" => ["My", "C"] +"HTML" => ["HTML"] +"PDFLoader" => ["PDF", "Loader"] +"AString" => ["A", "String"] +"SimpleXMLParser" => ["Simple", "XML", "Parser"] +"vimRPCPlugin" => ["vim", "RPC", "Plugin"] +"GL11Version" => ["GL", "11", "Version"] +"99Bottles" => ["99", "Bottles"] +"May5" => ["May", "5"] +"BFG9000" => ["BFG", "9000"] +"BöseÜberraschung" => ["Böse", "Überraschung"] +"Two spaces" => ["Two", " ", "spaces"] +"BadUTF8\xe2\xe2\xa1" => ["BadUTF8\xe2\xe2\xa1"] +``` diff --git a/vendor/github.com/fatih/camelcase/camelcase.go b/vendor/github.com/fatih/camelcase/camelcase.go new file mode 100644 index 000000000..02160c9a4 --- /dev/null +++ b/vendor/github.com/fatih/camelcase/camelcase.go @@ -0,0 +1,90 @@ +// Package camelcase is a micro package to split the words of a camelcase type +// string into a slice of words. +package camelcase + +import ( + "unicode" + "unicode/utf8" +) + +// Split splits the camelcase word and returns a list of words. It also +// supports digits. Both lower camel case and upper camel case are supported. +// For more info please check: http://en.wikipedia.org/wiki/CamelCase +// +// Examples +// +// "" => [""] +// "lowercase" => ["lowercase"] +// "Class" => ["Class"] +// "MyClass" => ["My", "Class"] +// "MyC" => ["My", "C"] +// "HTML" => ["HTML"] +// "PDFLoader" => ["PDF", "Loader"] +// "AString" => ["A", "String"] +// "SimpleXMLParser" => ["Simple", "XML", "Parser"] +// "vimRPCPlugin" => ["vim", "RPC", "Plugin"] +// "GL11Version" => ["GL", "11", "Version"] +// "99Bottles" => ["99", "Bottles"] +// "May5" => ["May", "5"] +// "BFG9000" => ["BFG", "9000"] +// "BöseÜberraschung" => ["Böse", "Überraschung"] +// "Two spaces" => ["Two", " ", "spaces"] +// "BadUTF8\xe2\xe2\xa1" => ["BadUTF8\xe2\xe2\xa1"] +// +// Splitting rules +// +// 1) If string is not valid UTF-8, return it without splitting as +// single item array. +// 2) Assign all unicode characters into one of 4 sets: lower case +// letters, upper case letters, numbers, and all other characters. +// 3) Iterate through characters of string, introducing splits +// between adjacent characters that belong to different sets. +// 4) Iterate through array of split strings, and if a given string +// is upper case: +// if subsequent string is lower case: +// move last character of upper case string to beginning of +// lower case string +func Split(src string) (entries []string) { + // don't split invalid utf8 + if !utf8.ValidString(src) { + return []string{src} + } + entries = []string{} + var runes [][]rune + lastClass := 0 + class := 0 + // split into fields based on class of unicode character + for _, r := range src { + switch true { + case unicode.IsLower(r): + class = 1 + case unicode.IsUpper(r): + class = 2 + case unicode.IsDigit(r): + class = 3 + default: + class = 4 + } + if class == lastClass { + runes[len(runes)-1] = append(runes[len(runes)-1], r) + } else { + runes = append(runes, []rune{r}) + } + lastClass = class + } + // handle upper case -> lower case sequences, e.g. + // "PDFL", "oader" -> "PDF", "Loader" + for i := 0; i < len(runes)-1; i++ { + if unicode.IsUpper(runes[i][0]) && unicode.IsLower(runes[i+1][0]) { + runes[i+1] = append([]rune{runes[i][len(runes[i])-1]}, runes[i+1]...) 
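+			// drop the rune that was just moved from the end of the upper-case run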
+ runes[i] = runes[i][:len(runes[i])-1] + } + } + // construct []string from results + for _, s := range runes { + if len(s) > 0 { + entries = append(entries, string(s)) + } + } + return +} diff --git a/vendor/github.com/gorilla/mux/.editorconfig b/vendor/github.com/gorilla/mux/.editorconfig new file mode 100644 index 000000000..c6b74c3e0 --- /dev/null +++ b/vendor/github.com/gorilla/mux/.editorconfig @@ -0,0 +1,20 @@ +; https://editorconfig.org/ + +root = true + +[*] +insert_final_newline = true +charset = utf-8 +trim_trailing_whitespace = true +indent_style = space +indent_size = 2 + +[{Makefile,go.mod,go.sum,*.go,.gitmodules}] +indent_style = tab +indent_size = 4 + +[*.md] +indent_size = 4 +trim_trailing_whitespace = false + +eclint_indent_style = unset \ No newline at end of file diff --git a/vendor/github.com/gorilla/mux/.gitignore b/vendor/github.com/gorilla/mux/.gitignore new file mode 100644 index 000000000..84039fec6 --- /dev/null +++ b/vendor/github.com/gorilla/mux/.gitignore @@ -0,0 +1 @@ +coverage.coverprofile diff --git a/vendor/github.com/gorilla/mux/LICENSE b/vendor/github.com/gorilla/mux/LICENSE new file mode 100644 index 000000000..bb9d80bc9 --- /dev/null +++ b/vendor/github.com/gorilla/mux/LICENSE @@ -0,0 +1,27 @@ +Copyright (c) 2023 The Gorilla Authors. All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright +notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above +copyright notice, this list of conditions and the following disclaimer +in the documentation and/or other materials provided with the +distribution. + * Neither the name of Google Inc. nor the names of its +contributors may be used to endorse or promote products derived from +this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
diff --git a/vendor/github.com/gorilla/mux/Makefile b/vendor/github.com/gorilla/mux/Makefile new file mode 100644 index 000000000..98f5ab75f --- /dev/null +++ b/vendor/github.com/gorilla/mux/Makefile @@ -0,0 +1,34 @@ +GO_LINT=$(shell which golangci-lint 2> /dev/null || echo '') +GO_LINT_URI=github.com/golangci/golangci-lint/cmd/golangci-lint@latest + +GO_SEC=$(shell which gosec 2> /dev/null || echo '') +GO_SEC_URI=github.com/securego/gosec/v2/cmd/gosec@latest + +GO_VULNCHECK=$(shell which govulncheck 2> /dev/null || echo '') +GO_VULNCHECK_URI=golang.org/x/vuln/cmd/govulncheck@latest + +.PHONY: golangci-lint +golangci-lint: + $(if $(GO_LINT), ,go install $(GO_LINT_URI)) + @echo "##### Running golangci-lint" + golangci-lint run -v + +.PHONY: gosec +gosec: + $(if $(GO_SEC), ,go install $(GO_SEC_URI)) + @echo "##### Running gosec" + gosec ./... + +.PHONY: govulncheck +govulncheck: + $(if $(GO_VULNCHECK), ,go install $(GO_VULNCHECK_URI)) + @echo "##### Running govulncheck" + govulncheck ./... + +.PHONY: verify +verify: golangci-lint gosec govulncheck + +.PHONY: test +test: + @echo "##### Running tests" + go test -race -cover -coverprofile=coverage.coverprofile -covermode=atomic -v ./... \ No newline at end of file diff --git a/vendor/github.com/gorilla/mux/README.md b/vendor/github.com/gorilla/mux/README.md new file mode 100644 index 000000000..382513d57 --- /dev/null +++ b/vendor/github.com/gorilla/mux/README.md @@ -0,0 +1,812 @@ +# gorilla/mux + +![testing](https://github.com/gorilla/mux/actions/workflows/test.yml/badge.svg) +[![codecov](https://codecov.io/github/gorilla/mux/branch/main/graph/badge.svg)](https://codecov.io/github/gorilla/mux) +[![godoc](https://godoc.org/github.com/gorilla/mux?status.svg)](https://godoc.org/github.com/gorilla/mux) +[![sourcegraph](https://sourcegraph.com/github.com/gorilla/mux/-/badge.svg)](https://sourcegraph.com/github.com/gorilla/mux?badge) + + +![Gorilla Logo](https://github.com/gorilla/.github/assets/53367916/d92caabf-98e0-473e-bfbf-ab554ba435e5) + +Package `gorilla/mux` implements a request router and dispatcher for matching incoming requests to +their respective handler. + +The name mux stands for "HTTP request multiplexer". Like the standard `http.ServeMux`, `mux.Router` matches incoming requests against a list of registered routes and calls a handler for the route that matches the URL or other conditions. The main features are: + +* It implements the `http.Handler` interface so it is compatible with the standard `http.ServeMux`. +* Requests can be matched based on URL host, path, path prefix, schemes, header and query values, HTTP methods or using custom matchers. +* URL hosts, paths and query values can have variables with an optional regular expression. +* Registered URLs can be built, or "reversed", which helps maintaining references to resources. +* Routes can be used as subrouters: nested routes are only tested if the parent route matches. This is useful to define groups of routes that share common conditions like a host, a path prefix or other repeated attributes. As a bonus, this optimizes request matching. + +--- + +* [Install](#install) +* [Examples](#examples) +* [Matching Routes](#matching-routes) +* [Static Files](#static-files) +* [Serving Single Page Applications](#serving-single-page-applications) (e.g. React, Vue, Ember.js, etc.) 
+* [Registered URLs](#registered-urls) +* [Walking Routes](#walking-routes) +* [Graceful Shutdown](#graceful-shutdown) +* [Middleware](#middleware) +* [Handling CORS Requests](#handling-cors-requests) +* [Testing Handlers](#testing-handlers) +* [Full Example](#full-example) + +--- + +## Install + +With a [correctly configured](https://golang.org/doc/install#testing) Go toolchain: + +```sh +go get -u github.com/gorilla/mux +``` + +## Examples + +Let's start registering a couple of URL paths and handlers: + +```go +func main() { + r := mux.NewRouter() + r.HandleFunc("/", HomeHandler) + r.HandleFunc("/products", ProductsHandler) + r.HandleFunc("/articles", ArticlesHandler) + http.Handle("/", r) +} +``` + +Here we register three routes mapping URL paths to handlers. This is equivalent to how `http.HandleFunc()` works: if an incoming request URL matches one of the paths, the corresponding handler is called passing (`http.ResponseWriter`, `*http.Request`) as parameters. + +Paths can have variables. They are defined using the format `{name}` or `{name:pattern}`. If a regular expression pattern is not defined, the matched variable will be anything until the next slash. For example: + +```go +r := mux.NewRouter() +r.HandleFunc("/products/{key}", ProductHandler) +r.HandleFunc("/articles/{category}/", ArticlesCategoryHandler) +r.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler) +``` + +The names are used to create a map of route variables which can be retrieved calling `mux.Vars()`: + +```go +func ArticlesCategoryHandler(w http.ResponseWriter, r *http.Request) { + vars := mux.Vars(r) + w.WriteHeader(http.StatusOK) + fmt.Fprintf(w, "Category: %v\n", vars["category"]) +} +``` + +And this is all you need to know about the basic usage. More advanced options are explained below. + +### Matching Routes + +Routes can also be restricted to a domain or subdomain. Just define a host pattern to be matched. They can also have variables: + +```go +r := mux.NewRouter() +// Only matches if domain is "www.example.com". +r.Host("www.example.com") +// Matches a dynamic subdomain. +r.Host("{subdomain:[a-z]+}.example.com") +``` + +There are several other matchers that can be added. To match path prefixes: + +```go +r.PathPrefix("/products/") +``` + +...or HTTP methods: + +```go +r.Methods("GET", "POST") +``` + +...or URL schemes: + +```go +r.Schemes("https") +``` + +...or header values: + +```go +r.Headers("X-Requested-With", "XMLHttpRequest") +``` + +...or query values: + +```go +r.Queries("key", "value") +``` + +...or to use a custom matcher function: + +```go +r.MatcherFunc(func(r *http.Request, rm *RouteMatch) bool { + return r.ProtoMajor == 0 +}) +``` + +...and finally, it is possible to combine several matchers in a single route: + +```go +r.HandleFunc("/products", ProductsHandler). + Host("www.example.com"). + Methods("GET"). + Schemes("http") +``` + +Routes are tested in the order they were added to the router. If two routes match, the first one wins: + +```go +r := mux.NewRouter() +r.HandleFunc("/specific", specificHandler) +r.PathPrefix("/").Handler(catchAllHandler) +``` + +Setting the same matching conditions again and again can be boring, so we have a way to group several routes that share the same requirements. We call it "subrouting". + +For example, let's say we have several URLs that should only match when the host is `www.example.com`. 
Create a route for that host and get a "subrouter" from it: + +```go +r := mux.NewRouter() +s := r.Host("www.example.com").Subrouter() +``` + +Then register routes in the subrouter: + +```go +s.HandleFunc("/products/", ProductsHandler) +s.HandleFunc("/products/{key}", ProductHandler) +s.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler) +``` + +The three URL paths we registered above will only be tested if the domain is `www.example.com`, because the subrouter is tested first. This is not only convenient, but also optimizes request matching. You can create subrouters combining any attribute matchers accepted by a route. + +Subrouters can be used to create domain or path "namespaces": you define subrouters in a central place and then parts of the app can register its paths relatively to a given subrouter. + +There's one more thing about subroutes. When a subrouter has a path prefix, the inner routes use it as base for their paths: + +```go +r := mux.NewRouter() +s := r.PathPrefix("/products").Subrouter() +// "/products/" +s.HandleFunc("/", ProductsHandler) +// "/products/{key}/" +s.HandleFunc("/{key}/", ProductHandler) +// "/products/{key}/details" +s.HandleFunc("/{key}/details", ProductDetailsHandler) +``` + + +### Static Files + +Note that the path provided to `PathPrefix()` represents a "wildcard": calling +`PathPrefix("/static/").Handler(...)` means that the handler will be passed any +request that matches "/static/\*". This makes it easy to serve static files with mux: + +```go +func main() { + var dir string + + flag.StringVar(&dir, "dir", ".", "the directory to serve files from. Defaults to the current dir") + flag.Parse() + r := mux.NewRouter() + + // This will serve files under http://localhost:8000/static/ + r.PathPrefix("/static/").Handler(http.StripPrefix("/static/", http.FileServer(http.Dir(dir)))) + + srv := &http.Server{ + Handler: r, + Addr: "127.0.0.1:8000", + // Good practice: enforce timeouts for servers you create! + WriteTimeout: 15 * time.Second, + ReadTimeout: 15 * time.Second, + } + + log.Fatal(srv.ListenAndServe()) +} +``` + +### Serving Single Page Applications + +Most of the time it makes sense to serve your SPA on a separate web server from your API, +but sometimes it's desirable to serve them both from one place. It's possible to write a simple +handler for serving your SPA (for use with React Router's [BrowserRouter](https://reacttraining.com/react-router/web/api/BrowserRouter) for example), and leverage +mux's powerful routing for your API endpoints. + +```go +package main + +import ( + "encoding/json" + "log" + "net/http" + "os" + "path/filepath" + "time" + + "github.com/gorilla/mux" +) + +// spaHandler implements the http.Handler interface, so we can use it +// to respond to HTTP requests. The path to the static directory and +// path to the index file within that static directory are used to +// serve the SPA in the given static directory. +type spaHandler struct { + staticPath string + indexPath string +} + +// ServeHTTP inspects the URL path to locate a file within the static dir +// on the SPA handler. If a file is found, it will be served. If not, the +// file located at the index path on the SPA handler will be served. This +// is suitable behavior for serving an SPA (single page application). 
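+//
+// Note: if os.Stat fails with an error other than NotExist (for example a
+// permission error), fi will be nil, so the fi.IsDir() check below assumes
+// err is either nil or NotExist; checking err first would be more defensive.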
+func (h spaHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) { + // Join internally call path.Clean to prevent directory traversal + path := filepath.Join(h.staticPath, r.URL.Path) + + // check whether a file exists or is a directory at the given path + fi, err := os.Stat(path) + if os.IsNotExist(err) || fi.IsDir() { + // file does not exist or path is a directory, serve index.html + http.ServeFile(w, r, filepath.Join(h.staticPath, h.indexPath)) + return + } + + if err != nil { + // if we got an error (that wasn't that the file doesn't exist) stating the + // file, return a 500 internal server error and stop + http.Error(w, err.Error(), http.StatusInternalServerError) + return + } + + // otherwise, use http.FileServer to serve the static file + http.FileServer(http.Dir(h.staticPath)).ServeHTTP(w, r) +} + +func main() { + router := mux.NewRouter() + + router.HandleFunc("/api/health", func(w http.ResponseWriter, r *http.Request) { + // an example API handler + json.NewEncoder(w).Encode(map[string]bool{"ok": true}) + }) + + spa := spaHandler{staticPath: "build", indexPath: "index.html"} + router.PathPrefix("/").Handler(spa) + + srv := &http.Server{ + Handler: router, + Addr: "127.0.0.1:8000", + // Good practice: enforce timeouts for servers you create! + WriteTimeout: 15 * time.Second, + ReadTimeout: 15 * time.Second, + } + + log.Fatal(srv.ListenAndServe()) +} +``` + +### Registered URLs + +Now let's see how to build registered URLs. + +Routes can be named. All routes that define a name can have their URLs built, or "reversed". We define a name calling `Name()` on a route. For example: + +```go +r := mux.NewRouter() +r.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler). + Name("article") +``` + +To build a URL, get the route and call the `URL()` method, passing a sequence of key/value pairs for the route variables. For the previous route, we would do: + +```go +url, err := r.Get("article").URL("category", "technology", "id", "42") +``` + +...and the result will be a `url.URL` with the following path: + +``` +"/articles/technology/42" +``` + +This also works for host and query value variables: + +```go +r := mux.NewRouter() +r.Host("{subdomain}.example.com"). + Path("/articles/{category}/{id:[0-9]+}"). + Queries("filter", "{filter}"). + HandlerFunc(ArticleHandler). + Name("article") + +// url.String() will be "http://news.example.com/articles/technology/42?filter=gorilla" +url, err := r.Get("article").URL("subdomain", "news", + "category", "technology", + "id", "42", + "filter", "gorilla") +``` + +All variables defined in the route are required, and their values must conform to the corresponding patterns. These requirements guarantee that a generated URL will always match a registered route -- the only exception is for explicitly defined "build-only" routes which never match. + +Regex support also exists for matching Headers within a route. For example, we could do: + +```go +r.HeadersRegexp("Content-Type", "application/(text|json)") +``` + +...and the route will match both requests with a Content-Type of `application/json` as well as `application/text` + +There's also a way to build only the URL host or path for a route: use the methods `URLHost()` or `URLPath()` instead. 
For the previous route, we would do: + +```go +// "http://news.example.com/" +host, err := r.Get("article").URLHost("subdomain", "news") + +// "/articles/technology/42" +path, err := r.Get("article").URLPath("category", "technology", "id", "42") +``` + +And if you use subrouters, host and path defined separately can be built as well: + +```go +r := mux.NewRouter() +s := r.Host("{subdomain}.example.com").Subrouter() +s.Path("/articles/{category}/{id:[0-9]+}"). + HandlerFunc(ArticleHandler). + Name("article") + +// "http://news.example.com/articles/technology/42" +url, err := r.Get("article").URL("subdomain", "news", + "category", "technology", + "id", "42") +``` + +To find all the required variables for a given route when calling `URL()`, the method `GetVarNames()` is available: +```go +r := mux.NewRouter() +r.Host("{domain}"). + Path("/{group}/{item_id}"). + Queries("some_data1", "{some_data1}"). + Queries("some_data2", "{some_data2}"). + Name("article") + +// Will print [domain group item_id some_data1 some_data2] +fmt.Println(r.Get("article").GetVarNames()) + +``` +### Walking Routes + +The `Walk` function on `mux.Router` can be used to visit all of the routes that are registered on a router. For example, +the following prints all of the registered routes: + +```go +package main + +import ( + "fmt" + "net/http" + "strings" + + "github.com/gorilla/mux" +) + +func handler(w http.ResponseWriter, r *http.Request) { + return +} + +func main() { + r := mux.NewRouter() + r.HandleFunc("/", handler) + r.HandleFunc("/products", handler).Methods("POST") + r.HandleFunc("/articles", handler).Methods("GET") + r.HandleFunc("/articles/{id}", handler).Methods("GET", "PUT") + r.HandleFunc("/authors", handler).Queries("surname", "{surname}") + err := r.Walk(func(route *mux.Route, router *mux.Router, ancestors []*mux.Route) error { + pathTemplate, err := route.GetPathTemplate() + if err == nil { + fmt.Println("ROUTE:", pathTemplate) + } + pathRegexp, err := route.GetPathRegexp() + if err == nil { + fmt.Println("Path regexp:", pathRegexp) + } + queriesTemplates, err := route.GetQueriesTemplates() + if err == nil { + fmt.Println("Queries templates:", strings.Join(queriesTemplates, ",")) + } + queriesRegexps, err := route.GetQueriesRegexp() + if err == nil { + fmt.Println("Queries regexps:", strings.Join(queriesRegexps, ",")) + } + methods, err := route.GetMethods() + if err == nil { + fmt.Println("Methods:", strings.Join(methods, ",")) + } + fmt.Println() + return nil + }) + + if err != nil { + fmt.Println(err) + } + + http.Handle("/", r) +} +``` + +### Graceful Shutdown + +Go 1.8 introduced the ability to [gracefully shutdown](https://golang.org/doc/go1.8#http_shutdown) a `*http.Server`. Here's how to do that alongside `mux`: + +```go +package main + +import ( + "context" + "flag" + "log" + "net/http" + "os" + "os/signal" + "time" + + "github.com/gorilla/mux" +) + +func main() { + var wait time.Duration + flag.DurationVar(&wait, "graceful-timeout", time.Second * 15, "the duration for which the server gracefully wait for existing connections to finish - e.g. 15s or 1m") + flag.Parse() + + r := mux.NewRouter() + // Add your routes as needed + + srv := &http.Server{ + Addr: "0.0.0.0:8080", + // Good practice to set timeouts to avoid Slowloris attacks. + WriteTimeout: time.Second * 15, + ReadTimeout: time.Second * 15, + IdleTimeout: time.Second * 60, + Handler: r, // Pass our instance of gorilla/mux in. + } + + // Run our server in a goroutine so that it doesn't block. 
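+	// (After srv.Shutdown is called below, ListenAndServe returns
+	// http.ErrServerClosed, which is why the error is only logged
+	// rather than treated as fatal.)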
+ go func() { + if err := srv.ListenAndServe(); err != nil { + log.Println(err) + } + }() + + c := make(chan os.Signal, 1) + // We'll accept graceful shutdowns when quit via SIGINT (Ctrl+C) + // SIGKILL, SIGQUIT or SIGTERM (Ctrl+/) will not be caught. + signal.Notify(c, os.Interrupt) + + // Block until we receive our signal. + <-c + + // Create a deadline to wait for. + ctx, cancel := context.WithTimeout(context.Background(), wait) + defer cancel() + // Doesn't block if no connections, but will otherwise wait + // until the timeout deadline. + srv.Shutdown(ctx) + // Optionally, you could run srv.Shutdown in a goroutine and block on + // <-ctx.Done() if your application should wait for other services + // to finalize based on context cancellation. + log.Println("shutting down") + os.Exit(0) +} +``` + +### Middleware + +Mux supports the addition of middlewares to a [Router](https://godoc.org/github.com/gorilla/mux#Router), which are executed in the order they are added if a match is found, including its subrouters. +Middlewares are (typically) small pieces of code which take one request, do something with it, and pass it down to another middleware or the final handler. Some common use cases for middleware are request logging, header manipulation, or `ResponseWriter` hijacking. + +Mux middlewares are defined using the de facto standard type: + +```go +type MiddlewareFunc func(http.Handler) http.Handler +``` + +Typically, the returned handler is a closure which does something with the http.ResponseWriter and http.Request passed to it, and then calls the handler passed as parameter to the MiddlewareFunc. This takes advantage of closures being able access variables from the context where they are created, while retaining the signature enforced by the receivers. + +A very basic middleware which logs the URI of the request being handled could be written as: + +```go +func loggingMiddleware(next http.Handler) http.Handler { + return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + // Do stuff here + log.Println(r.RequestURI) + // Call the next handler, which can be another middleware in the chain, or the final handler. 
+ next.ServeHTTP(w, r) + }) +} +``` + +Middlewares can be added to a router using `Router.Use()`: + +```go +r := mux.NewRouter() +r.HandleFunc("/", handler) +r.Use(loggingMiddleware) +``` + +A more complex authentication middleware, which maps session token to users, could be written as: + +```go +// Define our struct +type authenticationMiddleware struct { + tokenUsers map[string]string +} + +// Initialize it somewhere +func (amw *authenticationMiddleware) Populate() { + amw.tokenUsers["00000000"] = "user0" + amw.tokenUsers["aaaaaaaa"] = "userA" + amw.tokenUsers["05f717e5"] = "randomUser" + amw.tokenUsers["deadbeef"] = "user0" +} + +// Middleware function, which will be called for each request +func (amw *authenticationMiddleware) Middleware(next http.Handler) http.Handler { + return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + token := r.Header.Get("X-Session-Token") + + if user, found := amw.tokenUsers[token]; found { + // We found the token in our map + log.Printf("Authenticated user %s\n", user) + // Pass down the request to the next middleware (or final handler) + next.ServeHTTP(w, r) + } else { + // Write an error and stop the handler chain + http.Error(w, "Forbidden", http.StatusForbidden) + } + }) +} +``` + +```go +r := mux.NewRouter() +r.HandleFunc("/", handler) + +amw := authenticationMiddleware{tokenUsers: make(map[string]string)} +amw.Populate() + +r.Use(amw.Middleware) +``` + +Note: The handler chain will be stopped if your middleware doesn't call `next.ServeHTTP()` with the corresponding parameters. This can be used to abort a request if the middleware writer wants to. Middlewares _should_ write to `ResponseWriter` if they _are_ going to terminate the request, and they _should not_ write to `ResponseWriter` if they _are not_ going to terminate it. + +### Handling CORS Requests + +[CORSMethodMiddleware](https://godoc.org/github.com/gorilla/mux#CORSMethodMiddleware) intends to make it easier to strictly set the `Access-Control-Allow-Methods` response header. + +* You will still need to use your own CORS handler to set the other CORS headers such as `Access-Control-Allow-Origin` +* The middleware will set the `Access-Control-Allow-Methods` header to all the method matchers (e.g. `r.Methods(http.MethodGet, http.MethodPut, http.MethodOptions)` -> `Access-Control-Allow-Methods: GET,PUT,OPTIONS`) on a route +* If you do not specify any methods, then: +> _Important_: there must be an `OPTIONS` method matcher for the middleware to set the headers. + +Here is an example of using `CORSMethodMiddleware` along with a custom `OPTIONS` handler to set all the required CORS headers: + +```go +package main + +import ( + "net/http" + "github.com/gorilla/mux" +) + +func main() { + r := mux.NewRouter() + + // IMPORTANT: you must specify an OPTIONS method matcher for the middleware to set CORS headers + r.HandleFunc("/foo", fooHandler).Methods(http.MethodGet, http.MethodPut, http.MethodPatch, http.MethodOptions) + r.Use(mux.CORSMethodMiddleware(r)) + + http.ListenAndServe(":8080", r) +} + +func fooHandler(w http.ResponseWriter, r *http.Request) { + w.Header().Set("Access-Control-Allow-Origin", "*") + if r.Method == http.MethodOptions { + return + } + + w.Write([]byte("foo")) +} +``` + +And an request to `/foo` using something like: + +```bash +curl localhost:8080/foo -v +``` + +Would look like: + +```bash +* Trying ::1... 
+* TCP_NODELAY set +* Connected to localhost (::1) port 8080 (#0) +> GET /foo HTTP/1.1 +> Host: localhost:8080 +> User-Agent: curl/7.59.0 +> Accept: */* +> +< HTTP/1.1 200 OK +< Access-Control-Allow-Methods: GET,PUT,PATCH,OPTIONS +< Access-Control-Allow-Origin: * +< Date: Fri, 28 Jun 2019 20:13:30 GMT +< Content-Length: 3 +< Content-Type: text/plain; charset=utf-8 +< +* Connection #0 to host localhost left intact +foo +``` + +### Testing Handlers + +Testing handlers in a Go web application is straightforward, and _mux_ doesn't complicate this any further. Given two files: `endpoints.go` and `endpoints_test.go`, here's how we'd test an application using _mux_. + +First, our simple HTTP handler: + +```go +// endpoints.go +package main + +func HealthCheckHandler(w http.ResponseWriter, r *http.Request) { + // A very simple health check. + w.Header().Set("Content-Type", "application/json") + w.WriteHeader(http.StatusOK) + + // In the future we could report back on the status of our DB, or our cache + // (e.g. Redis) by performing a simple PING, and include them in the response. + io.WriteString(w, `{"alive": true}`) +} + +func main() { + r := mux.NewRouter() + r.HandleFunc("/health", HealthCheckHandler) + + log.Fatal(http.ListenAndServe("localhost:8080", r)) +} +``` + +Our test code: + +```go +// endpoints_test.go +package main + +import ( + "net/http" + "net/http/httptest" + "testing" +) + +func TestHealthCheckHandler(t *testing.T) { + // Create a request to pass to our handler. We don't have any query parameters for now, so we'll + // pass 'nil' as the third parameter. + req, err := http.NewRequest("GET", "/health", nil) + if err != nil { + t.Fatal(err) + } + + // We create a ResponseRecorder (which satisfies http.ResponseWriter) to record the response. + rr := httptest.NewRecorder() + handler := http.HandlerFunc(HealthCheckHandler) + + // Our handlers satisfy http.Handler, so we can call their ServeHTTP method + // directly and pass in our Request and ResponseRecorder. + handler.ServeHTTP(rr, req) + + // Check the status code is what we expect. + if status := rr.Code; status != http.StatusOK { + t.Errorf("handler returned wrong status code: got %v want %v", + status, http.StatusOK) + } + + // Check the response body is what we expect. + expected := `{"alive": true}` + if rr.Body.String() != expected { + t.Errorf("handler returned unexpected body: got %v want %v", + rr.Body.String(), expected) + } +} +``` + +In the case that our routes have [variables](#examples), we can pass those in the request. We could write +[table-driven tests](https://dave.cheney.net/2013/06/09/writing-table-driven-tests-in-go) to test multiple +possible route variables as needed. + +```go +// endpoints.go +func main() { + r := mux.NewRouter() + // A route with a route variable: + r.HandleFunc("/metrics/{type}", MetricsHandler) + + log.Fatal(http.ListenAndServe("localhost:8080", r)) +} +``` + +Our test file, with a table-driven test of `routeVariables`: + +```go +// endpoints_test.go +func TestMetricsHandler(t *testing.T) { + tt := []struct{ + routeVariable string + shouldPass bool + }{ + {"goroutines", true}, + {"heap", true}, + {"counters", true}, + {"queries", true}, + {"adhadaeqm3k", false}, + } + + for _, tc := range tt { + path := fmt.Sprintf("/metrics/%s", tc.routeVariable) + req, err := http.NewRequest("GET", path, nil) + if err != nil { + t.Fatal(err) + } + + rr := httptest.NewRecorder() + + // To add the vars to the context, + // we need to create a router through which we can pass the request. 
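+		// (Alternatively, mux.SetURLVars(req, vars) can attach route variables
+		// to the request directly when unit-testing a handler in isolation.)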
+ router := mux.NewRouter() + router.HandleFunc("/metrics/{type}", MetricsHandler) + router.ServeHTTP(rr, req) + + // In this case, our MetricsHandler returns a non-200 response + // for a route variable it doesn't know about. + if rr.Code == http.StatusOK && !tc.shouldPass { + t.Errorf("handler should have failed on routeVariable %s: got %v want %v", + tc.routeVariable, rr.Code, http.StatusOK) + } + } +} +``` + +## Full Example + +Here's a complete, runnable example of a small `mux` based server: + +```go +package main + +import ( + "net/http" + "log" + "github.com/gorilla/mux" +) + +func YourHandler(w http.ResponseWriter, r *http.Request) { + w.Write([]byte("Gorilla!\n")) +} + +func main() { + r := mux.NewRouter() + // Routes consist of a path and a handler function. + r.HandleFunc("/", YourHandler) + + // Bind to a port and pass our router in + log.Fatal(http.ListenAndServe(":8000", r)) +} +``` + +## License + +BSD licensed. See the LICENSE file for details. diff --git a/vendor/github.com/gorilla/mux/doc.go b/vendor/github.com/gorilla/mux/doc.go new file mode 100644 index 000000000..80601351f --- /dev/null +++ b/vendor/github.com/gorilla/mux/doc.go @@ -0,0 +1,305 @@ +// Copyright 2012 The Gorilla Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +/* +Package mux implements a request router and dispatcher. + +The name mux stands for "HTTP request multiplexer". Like the standard +http.ServeMux, mux.Router matches incoming requests against a list of +registered routes and calls a handler for the route that matches the URL +or other conditions. The main features are: + + - Requests can be matched based on URL host, path, path prefix, schemes, + header and query values, HTTP methods or using custom matchers. + - URL hosts, paths and query values can have variables with an optional + regular expression. + - Registered URLs can be built, or "reversed", which helps maintaining + references to resources. + - Routes can be used as subrouters: nested routes are only tested if the + parent route matches. This is useful to define groups of routes that + share common conditions like a host, a path prefix or other repeated + attributes. As a bonus, this optimizes request matching. + - It implements the http.Handler interface so it is compatible with the + standard http.ServeMux. + +Let's start registering a couple of URL paths and handlers: + + func main() { + r := mux.NewRouter() + r.HandleFunc("/", HomeHandler) + r.HandleFunc("/products", ProductsHandler) + r.HandleFunc("/articles", ArticlesHandler) + http.Handle("/", r) + } + +Here we register three routes mapping URL paths to handlers. This is +equivalent to how http.HandleFunc() works: if an incoming request URL matches +one of the paths, the corresponding handler is called passing +(http.ResponseWriter, *http.Request) as parameters. + +Paths can have variables. They are defined using the format {name} or +{name:pattern}. If a regular expression pattern is not defined, the matched +variable will be anything until the next slash. For example: + + r := mux.NewRouter() + r.HandleFunc("/products/{key}", ProductHandler) + r.HandleFunc("/articles/{category}/", ArticlesCategoryHandler) + r.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler) + +Groups can be used inside patterns, as long as they are non-capturing (?:re). 
For example:
+
+	r.HandleFunc("/articles/{category}/{sort:(?:asc|desc|new)}", ArticlesCategoryHandler)
+
+The names are used to create a map of route variables which can be retrieved
+calling mux.Vars():
+
+	vars := mux.Vars(request)
+	category := vars["category"]
+
+Note that if any capturing groups are present, mux will panic() during parsing. To prevent
+this, convert any capturing groups to non-capturing, e.g. change "/{sort:(asc|desc)}" to
+"/{sort:(?:asc|desc)}". This is a change from prior versions which behaved unpredictably
+when capturing groups were present.
+
+And this is all you need to know about the basic usage. More advanced options
+are explained below.
+
+Routes can also be restricted to a domain or subdomain. Just define a host
+pattern to be matched. They can also have variables:
+
+	r := mux.NewRouter()
+	// Only matches if domain is "www.example.com".
+	r.Host("www.example.com")
+	// Matches a dynamic subdomain.
+	r.Host("{subdomain:[a-z]+}.domain.com")
+
+There are several other matchers that can be added. To match path prefixes:
+
+	r.PathPrefix("/products/")
+
+...or HTTP methods:
+
+	r.Methods("GET", "POST")
+
+...or URL schemes:
+
+	r.Schemes("https")
+
+...or header values:
+
+	r.Headers("X-Requested-With", "XMLHttpRequest")
+
+...or query values:
+
+	r.Queries("key", "value")
+
+...or to use a custom matcher function:
+
+	r.MatcherFunc(func(r *http.Request, rm *RouteMatch) bool {
+		return r.ProtoMajor == 0
+	})
+
+...and finally, it is possible to combine several matchers in a single route:
+
+	r.HandleFunc("/products", ProductsHandler).
+		Host("www.example.com").
+		Methods("GET").
+		Schemes("http")
+
+Setting the same matching conditions again and again can be boring, so we have
+a way to group several routes that share the same requirements.
+We call it "subrouting".
+
+For example, let's say we have several URLs that should only match when the
+host is "www.example.com". Create a route for that host and get a "subrouter"
+from it:
+
+	r := mux.NewRouter()
+	s := r.Host("www.example.com").Subrouter()
+
+Then register routes in the subrouter:
+
+	s.HandleFunc("/products/", ProductsHandler)
+	s.HandleFunc("/products/{key}", ProductHandler)
+	s.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler)
+
+The three URL paths we registered above will only be tested if the domain is
+"www.example.com", because the subrouter is tested first. This is not
+only convenient, but also optimizes request matching. You can create
+subrouters combining any attribute matchers accepted by a route.
+
+Subrouters can be used to create domain or path "namespaces": you define
+subrouters in a central place and then parts of the app can register their
+paths relative to a given subrouter.
+
+There's one more thing about subrouters. When a subrouter has a path prefix,
+the inner routes use it as the base for their paths:
+
+	r := mux.NewRouter()
+	s := r.PathPrefix("/products").Subrouter()
+	// "/products/"
+	s.HandleFunc("/", ProductsHandler)
+	// "/products/{key}/"
+	s.HandleFunc("/{key}/", ProductHandler)
+	// "/products/{key}/details"
+	s.HandleFunc("/{key}/details", ProductDetailsHandler)
+
+Note that the path provided to PathPrefix() represents a "wildcard": calling
+PathPrefix("/static/").Handler(...) means that the handler will be passed any
+request that matches "/static/*". This makes it easy to serve static files with mux:
+
+	func main() {
+		var dir string
+
+		flag.StringVar(&dir, "dir", ".", "the directory to serve files from. 
Defaults to the current dir")
+		flag.Parse()
+		r := mux.NewRouter()
+
+		// This will serve files under http://localhost:8000/static/<filename>
+		r.PathPrefix("/static/").Handler(http.StripPrefix("/static/", http.FileServer(http.Dir(dir))))
+
+		srv := &http.Server{
+			Handler: r,
+			Addr:    "127.0.0.1:8000",
+			// Good practice: enforce timeouts for servers you create!
+			WriteTimeout: 15 * time.Second,
+			ReadTimeout:  15 * time.Second,
+		}
+
+		log.Fatal(srv.ListenAndServe())
+	}
+
+Now let's see how to build registered URLs.
+
+Routes can be named. All routes that define a name can have their URLs built,
+or "reversed". We define a name calling Name() on a route. For example:
+
+	r := mux.NewRouter()
+	r.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler).
+		Name("article")
+
+To build a URL, get the route and call the URL() method, passing a sequence of
+key/value pairs for the route variables. For the previous route, we would do:
+
+	url, err := r.Get("article").URL("category", "technology", "id", "42")
+
+...and the result will be a url.URL with the following path:
+
+	"/articles/technology/42"
+
+This also works for host and query value variables:
+
+	r := mux.NewRouter()
+	r.Host("{subdomain}.domain.com").
+		Path("/articles/{category}/{id:[0-9]+}").
+		Queries("filter", "{filter}").
+		HandlerFunc(ArticleHandler).
+		Name("article")
+
+	// url.String() will be "http://news.domain.com/articles/technology/42?filter=gorilla"
+	url, err := r.Get("article").URL("subdomain", "news",
+		"category", "technology",
+		"id", "42",
+		"filter", "gorilla")
+
+All variables defined in the route are required, and their values must
+conform to the corresponding patterns. These requirements guarantee that a
+generated URL will always match a registered route -- the only exception is
+for explicitly defined "build-only" routes which never match.
+
+Regex support also exists for matching Headers within a route. For example, we could do:
+
+	r.HeadersRegexp("Content-Type", "application/(text|json)")
+
+...and the route will match requests with a Content-Type of either
+`application/json` or `application/text`.
+
+There's also a way to build only the URL host or path for a route:
+use the methods URLHost() or URLPath() instead. For the previous route,
+we would do:
+
+	// "http://news.domain.com/"
+	host, err := r.Get("article").URLHost("subdomain", "news")
+
+	// "/articles/technology/42"
+	path, err := r.Get("article").URLPath("category", "technology", "id", "42")
+
+And if you use subrouters, host and path defined separately can be built
+as well:
+
+	r := mux.NewRouter()
+	s := r.Host("{subdomain}.domain.com").Subrouter()
+	s.Path("/articles/{category}/{id:[0-9]+}").
+		HandlerFunc(ArticleHandler).
+		Name("article")
+
+	// "http://news.domain.com/articles/technology/42"
+	url, err := r.Get("article").URL("subdomain", "news",
+		"category", "technology",
+		"id", "42")
+
+Mux supports the addition of middlewares to a Router, which are executed in the order they are added whenever a match is found, including matches on its subrouters. Middlewares are (typically) small pieces of code which take one request, do something with it, and pass it down to another middleware or the final handler. Some common use cases for middleware are request logging, header manipulation, or ResponseWriter hijacking.
+ + type MiddlewareFunc func(http.Handler) http.Handler + +Typically, the returned handler is a closure which does something with the http.ResponseWriter and http.Request passed to it, and then calls the handler passed as parameter to the MiddlewareFunc (closures can access variables from the context where they are created). + +A very basic middleware which logs the URI of the request being handled could be written as: + + func simpleMw(next http.Handler) http.Handler { + return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + // Do stuff here + log.Println(r.RequestURI) + // Call the next handler, which can be another middleware in the chain, or the final handler. + next.ServeHTTP(w, r) + }) + } + +Middlewares can be added to a router using `Router.Use()`: + + r := mux.NewRouter() + r.HandleFunc("/", handler) + r.Use(simpleMw) + +A more complex authentication middleware, which maps session token to users, could be written as: + + // Define our struct + type authenticationMiddleware struct { + tokenUsers map[string]string + } + + // Initialize it somewhere + func (amw *authenticationMiddleware) Populate() { + amw.tokenUsers["00000000"] = "user0" + amw.tokenUsers["aaaaaaaa"] = "userA" + amw.tokenUsers["05f717e5"] = "randomUser" + amw.tokenUsers["deadbeef"] = "user0" + } + + // Middleware function, which will be called for each request + func (amw *authenticationMiddleware) Middleware(next http.Handler) http.Handler { + return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) { + token := r.Header.Get("X-Session-Token") + + if user, found := amw.tokenUsers[token]; found { + // We found the token in our map + log.Printf("Authenticated user %s\n", user) + next.ServeHTTP(w, r) + } else { + http.Error(w, "Forbidden", http.StatusForbidden) + } + }) + } + + r := mux.NewRouter() + r.HandleFunc("/", handler) + + amw := authenticationMiddleware{tokenUsers: make(map[string]string)} + amw.Populate() + + r.Use(amw.Middleware) + +Note: The handler chain will be stopped if your middleware doesn't call `next.ServeHTTP()` with the corresponding parameters. This can be used to abort a request if the middleware writer wants to. +*/ +package mux diff --git a/vendor/github.com/gorilla/mux/middleware.go b/vendor/github.com/gorilla/mux/middleware.go new file mode 100644 index 000000000..cb51c565e --- /dev/null +++ b/vendor/github.com/gorilla/mux/middleware.go @@ -0,0 +1,74 @@ +package mux + +import ( + "net/http" + "strings" +) + +// MiddlewareFunc is a function which receives an http.Handler and returns another http.Handler. +// Typically, the returned handler is a closure which does something with the http.ResponseWriter and http.Request passed +// to it, and then calls the handler passed as parameter to the MiddlewareFunc. +type MiddlewareFunc func(http.Handler) http.Handler + +// middleware interface is anything which implements a MiddlewareFunc named Middleware. +type middleware interface { + Middleware(handler http.Handler) http.Handler +} + +// Middleware allows MiddlewareFunc to implement the middleware interface. +func (mw MiddlewareFunc) Middleware(handler http.Handler) http.Handler { + return mw(handler) +} + +// Use appends a MiddlewareFunc to the chain. Middleware can be used to intercept or otherwise modify requests and/or responses, and are executed in the order that they are applied to the Router. 
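+//
+// For example, r.Use(mw1, mw2) wraps a matched handler as mw1(mw2(handler)),
+// so mw1 sees each request first (mw1 and mw2 being hypothetical middlewares).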
+func (r *Router) Use(mwf ...MiddlewareFunc) {
+	for _, fn := range mwf {
+		r.middlewares = append(r.middlewares, fn)
+	}
+}
+
+// useInterface appends a middleware to the chain. Middleware can be used to intercept or otherwise modify requests and/or responses, and are executed in the order that they are applied to the Router.
+func (r *Router) useInterface(mw middleware) {
+	r.middlewares = append(r.middlewares, mw)
+}
+
+// CORSMethodMiddleware automatically sets the Access-Control-Allow-Methods response header
+// on requests for routes that have an OPTIONS method matcher to all the method matchers on
+// the route. Routes that do not explicitly handle OPTIONS requests will not be processed
+// by the middleware. See examples for usage.
+func CORSMethodMiddleware(r *Router) MiddlewareFunc {
+	return func(next http.Handler) http.Handler {
+		return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
+			allMethods, err := getAllMethodsForRoute(r, req)
+			if err == nil {
+				for _, v := range allMethods {
+					if v == http.MethodOptions {
+						w.Header().Set("Access-Control-Allow-Methods", strings.Join(allMethods, ","))
+					}
+				}
+			}
+
+			next.ServeHTTP(w, req)
+		})
+	}
+}
+
+// getAllMethodsForRoute returns all the methods from method matchers matching a given
+// request.
+func getAllMethodsForRoute(r *Router, req *http.Request) ([]string, error) {
+	var allMethods []string
+
+	for _, route := range r.routes {
+		var match RouteMatch
+		if route.Match(req, &match) || match.MatchErr == ErrMethodMismatch {
+			methods, err := route.GetMethods()
+			if err != nil {
+				return nil, err
+			}
+
+			allMethods = append(allMethods, methods...)
+		}
+	}
+
+	return allMethods, nil
+}
diff --git a/vendor/github.com/gorilla/mux/mux.go b/vendor/github.com/gorilla/mux/mux.go
new file mode 100644
index 000000000..1e089906f
--- /dev/null
+++ b/vendor/github.com/gorilla/mux/mux.go
@@ -0,0 +1,608 @@
+// Copyright 2012 The Gorilla Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package mux
+
+import (
+	"context"
+	"errors"
+	"fmt"
+	"net/http"
+	"path"
+	"regexp"
+)
+
+var (
+	// ErrMethodMismatch is returned when the method in the request does not match
+	// the method defined against the route.
+	ErrMethodMismatch = errors.New("method is not allowed")
+	// ErrNotFound is returned when no route match is found.
+	ErrNotFound = errors.New("no matching route was found")
+)
+
+// NewRouter returns a new router instance.
+func NewRouter() *Router {
+	return &Router{namedRoutes: make(map[string]*Route)}
+}
+
+// Router registers routes to be matched and dispatches a handler.
+//
+// It implements the http.Handler interface, so it can be registered to serve
+// requests:
+//
+//	var router = mux.NewRouter()
+//
+//	func main() {
+//		http.Handle("/", router)
+//	}
+//
+// Or, for Google App Engine, register it in an init() function:
+//
+//	func init() {
+//		http.Handle("/", router)
+//	}
+//
+// This will send all incoming requests to the router.
+type Router struct {
+	// Configurable Handler to be used when no route matches.
+	// This can be used to render your own 404 Not Found errors.
+	NotFoundHandler http.Handler
+
+	// Configurable Handler to be used when the request method does not match the route.
+	// This can be used to render your own 405 Method Not Allowed errors.
+	MethodNotAllowedHandler http.Handler
+
+	// Routes to be matched, in order.
+	routes []*Route
+
+	// Routes by name for URL building.
+	namedRoutes map[string]*Route
+
+	// If true, do not clear the request context after handling the request.
+	//
+	// Deprecated: No effect, since the context is stored on the request itself.
+	KeepContext bool
+
+	// Slice of middlewares to be called after a match is found
+	middlewares []middleware
+
+	// configuration shared with `Route`
+	routeConf
+}
+
+// common route configuration shared between `Router` and `Route`
+type routeConf struct {
+	// If true, "/path/foo%2Fbar/to" will match the path "/path/{var}/to"
+	useEncodedPath bool
+
+	// If true, when the path pattern is "/path/", accessing "/path" will
+	// redirect to the former and vice versa.
+	strictSlash bool
+
+	// If true, when the path pattern is "/path//to", accessing "/path//to"
+	// will not redirect to the cleaned path "/path/to".
+	skipClean bool
+
+	// Manager for the variables from host and path.
+	regexp routeRegexpGroup
+
+	// List of matchers.
+	matchers []matcher
+
+	// The scheme used when building URLs.
+	buildScheme string
+
+	buildVarsFunc BuildVarsFunc
+}
+
+// returns an effective deep copy of `routeConf`
+func copyRouteConf(r routeConf) routeConf {
+	c := r
+
+	if r.regexp.path != nil {
+		c.regexp.path = copyRouteRegexp(r.regexp.path)
+	}
+
+	if r.regexp.host != nil {
+		c.regexp.host = copyRouteRegexp(r.regexp.host)
+	}
+
+	c.regexp.queries = make([]*routeRegexp, 0, len(r.regexp.queries))
+	for _, q := range r.regexp.queries {
+		c.regexp.queries = append(c.regexp.queries, copyRouteRegexp(q))
+	}
+
+	c.matchers = make([]matcher, len(r.matchers))
+	copy(c.matchers, r.matchers)
+
+	return c
+}
+
+func copyRouteRegexp(r *routeRegexp) *routeRegexp {
+	c := *r
+	return &c
+}
+
+// Match attempts to match the given request against the router's registered routes.
+//
+// If the request matches a route of this router or one of its subrouters, the Route,
+// Handler, and Vars fields of the match argument are filled and this function
+// returns true.
+//
+// If the request does not match any of this router's or its subrouters' routes
+// then this function returns false. If available, a reason for the match failure
+// will be filled in the match argument's MatchErr field. If the match failure type
+// (e.g. not found) has a registered handler, the handler is assigned to the Handler
+// field of the match argument.
+func (r *Router) Match(req *http.Request, match *RouteMatch) bool {
+	for _, route := range r.routes {
+		if route.Match(req, match) {
+			// Build middleware chain if no error was found
+			if match.MatchErr == nil {
+				for i := len(r.middlewares) - 1; i >= 0; i-- {
+					match.Handler = r.middlewares[i].Middleware(match.Handler)
+				}
+			}
+			return true
+		}
+	}
+
+	if match.MatchErr == ErrMethodMismatch {
+		if r.MethodNotAllowedHandler != nil {
+			match.Handler = r.MethodNotAllowedHandler
+			return true
+		}
+
+		return false
+	}
+
+	// Closest match for a router (includes sub-routers)
+	if r.NotFoundHandler != nil {
+		match.Handler = r.NotFoundHandler
+		match.MatchErr = ErrNotFound
+		return true
+	}
+
+	match.MatchErr = ErrNotFound
+	return false
+}
+
+// ServeHTTP dispatches the handler registered in the matched route.
+//
+// When there is a match, the route variables can be retrieved calling
+// mux.Vars(request).
+func (r *Router) ServeHTTP(w http.ResponseWriter, req *http.Request) {
+	if !r.skipClean {
+		path := req.URL.Path
+		if r.useEncodedPath {
+			path = req.URL.EscapedPath()
+		}
+		// Clean path to canonical form and redirect.
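+		// For example, "/a//b/../c" cleans to "/a/c", and the request is
+		// redirected there.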
+		if p := cleanPath(path); p != path {
+
+			// Added 3 lines (Philip Schlump) - It was dropping the query string and #whatever from query.
+			// This matches with fix in go 1.2 r.c. 4 for same problem. Go Issue:
+			// http://code.google.com/p/go/issues/detail?id=5252
+			url := *req.URL
+			url.Path = p
+			p = url.String()
+
+			w.Header().Set("Location", p)
+			w.WriteHeader(http.StatusMovedPermanently)
+			return
+		}
+	}
+	var match RouteMatch
+	var handler http.Handler
+	if r.Match(req, &match) {
+		handler = match.Handler
+		req = requestWithVars(req, match.Vars)
+		req = requestWithRoute(req, match.Route)
+	}
+
+	if handler == nil && match.MatchErr == ErrMethodMismatch {
+		handler = methodNotAllowedHandler()
+	}
+
+	if handler == nil {
+		handler = http.NotFoundHandler()
+	}
+
+	handler.ServeHTTP(w, req)
+}
+
+// Get returns a route registered with the given name.
+func (r *Router) Get(name string) *Route {
+	return r.namedRoutes[name]
+}
+
+// GetRoute returns a route registered with the given name. This method
+// was renamed to Get() and remains here for backwards compatibility.
+func (r *Router) GetRoute(name string) *Route {
+	return r.namedRoutes[name]
+}
+
+// StrictSlash defines the trailing slash behavior for new routes. The initial
+// value is false.
+//
+// When true, if the route path is "/path/", accessing "/path" will perform a redirect
+// to the former and vice versa. In other words, your application will always
+// see the path as specified in the route.
+//
+// When false, if the route path is "/path", accessing "/path/" will not match
+// this route and vice versa.
+//
+// The redirect is an HTTP 301 (Moved Permanently). Note that when this is set for
+// routes with methods other than GET (e.g. POST, PUT), the subsequent redirected
+// request will be made as a GET by most clients. Use middleware or client settings
+// to modify this behaviour as needed.
+//
+// Special case: when a route sets a path prefix using the PathPrefix() method,
+// strict slash is ignored for that route because the redirect behavior can't
+// be determined from a prefix alone. However, any subrouters created from that
+// route inherit the original StrictSlash setting.
+func (r *Router) StrictSlash(value bool) *Router {
+	r.strictSlash = value
+	return r
+}
+
+// SkipClean defines the path cleaning behaviour for new routes. The initial
+// value is false. Users should be careful about which routes are not cleaned.
+//
+// When true, if the route path is "/path//to", it will remain with the double
+// slash. This is helpful if you have a route like: /fetch/http://xkcd.com/534/
+//
+// When false, the path will be cleaned, so /fetch/http://xkcd.com/534/ will
+// become /fetch/http/xkcd.com/534
+func (r *Router) SkipClean(value bool) *Router {
+	r.skipClean = value
+	return r
+}
+
+// UseEncodedPath tells the router to match the encoded original path
+// to the routes.
+// For example, "/path/foo%2Fbar/to" will match the path "/path/{var}/to".
+//
+// If not called, the router will match the unencoded path to the routes.
+// For example, "/path/foo%2Fbar/to" will match the path "/path/foo/bar/to".
+func (r *Router) UseEncodedPath() *Router {
+	r.useEncodedPath = true
+	return r
+}
+
+// ----------------------------------------------------------------------------
+// Route factories
+// ----------------------------------------------------------------------------
+
+// NewRoute registers an empty route.
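+// The new route inherits the router's configuration, such as StrictSlash,
+// SkipClean, UseEncodedPath and any matchers already set on the router.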
+func (r *Router) NewRoute() *Route { + // initialize a route with a copy of the parent router's configuration + route := &Route{routeConf: copyRouteConf(r.routeConf), namedRoutes: r.namedRoutes} + r.routes = append(r.routes, route) + return route +} + +// Name registers a new route with a name. +// See Route.Name(). +func (r *Router) Name(name string) *Route { + return r.NewRoute().Name(name) +} + +// Handle registers a new route with a matcher for the URL path. +// See Route.Path() and Route.Handler(). +func (r *Router) Handle(path string, handler http.Handler) *Route { + return r.NewRoute().Path(path).Handler(handler) +} + +// HandleFunc registers a new route with a matcher for the URL path. +// See Route.Path() and Route.HandlerFunc(). +func (r *Router) HandleFunc(path string, f func(http.ResponseWriter, + *http.Request)) *Route { + return r.NewRoute().Path(path).HandlerFunc(f) +} + +// Headers registers a new route with a matcher for request header values. +// See Route.Headers(). +func (r *Router) Headers(pairs ...string) *Route { + return r.NewRoute().Headers(pairs...) +} + +// Host registers a new route with a matcher for the URL host. +// See Route.Host(). +func (r *Router) Host(tpl string) *Route { + return r.NewRoute().Host(tpl) +} + +// MatcherFunc registers a new route with a custom matcher function. +// See Route.MatcherFunc(). +func (r *Router) MatcherFunc(f MatcherFunc) *Route { + return r.NewRoute().MatcherFunc(f) +} + +// Methods registers a new route with a matcher for HTTP methods. +// See Route.Methods(). +func (r *Router) Methods(methods ...string) *Route { + return r.NewRoute().Methods(methods...) +} + +// Path registers a new route with a matcher for the URL path. +// See Route.Path(). +func (r *Router) Path(tpl string) *Route { + return r.NewRoute().Path(tpl) +} + +// PathPrefix registers a new route with a matcher for the URL path prefix. +// See Route.PathPrefix(). +func (r *Router) PathPrefix(tpl string) *Route { + return r.NewRoute().PathPrefix(tpl) +} + +// Queries registers a new route with a matcher for URL query values. +// See Route.Queries(). +func (r *Router) Queries(pairs ...string) *Route { + return r.NewRoute().Queries(pairs...) +} + +// Schemes registers a new route with a matcher for URL schemes. +// See Route.Schemes(). +func (r *Router) Schemes(schemes ...string) *Route { + return r.NewRoute().Schemes(schemes...) +} + +// BuildVarsFunc registers a new route with a custom function for modifying +// route variables before building a URL. +func (r *Router) BuildVarsFunc(f BuildVarsFunc) *Route { + return r.NewRoute().BuildVarsFunc(f) +} + +// Walk walks the router and all its sub-routers, calling walkFn for each route +// in the tree. The routes are walked in the order they were added. Sub-routers +// are explored depth-first. +func (r *Router) Walk(walkFn WalkFunc) error { + return r.walk(walkFn, []*Route{}) +} + +// SkipRouter is used as a return value from WalkFuncs to indicate that the +// router that walk is about to descend down to should be skipped. +var SkipRouter = errors.New("skip this router") + +// WalkFunc is the type of the function called for each route visited by Walk. +// At every invocation, it is given the current route, and the current router, +// and a list of ancestor routes that lead to the current route. 
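+//
+// Returning SkipRouter from the function stops Walk from descending into a
+// matched subrouter. As a sketch (r being some *mux.Router), printing every
+// registered path template:
+//
+//	_ = r.Walk(func(route *mux.Route, router *mux.Router, ancestors []*mux.Route) error {
+//		if tpl, err := route.GetPathTemplate(); err == nil {
+//			fmt.Println(tpl)
+//		}
+//		return nil
+//	})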
+type WalkFunc func(route *Route, router *Router, ancestors []*Route) error + +func (r *Router) walk(walkFn WalkFunc, ancestors []*Route) error { + for _, t := range r.routes { + err := walkFn(t, r, ancestors) + if err == SkipRouter { + continue + } + if err != nil { + return err + } + for _, sr := range t.matchers { + if h, ok := sr.(*Router); ok { + ancestors = append(ancestors, t) + err := h.walk(walkFn, ancestors) + if err != nil { + return err + } + ancestors = ancestors[:len(ancestors)-1] + } + } + if h, ok := t.handler.(*Router); ok { + ancestors = append(ancestors, t) + err := h.walk(walkFn, ancestors) + if err != nil { + return err + } + ancestors = ancestors[:len(ancestors)-1] + } + } + return nil +} + +// ---------------------------------------------------------------------------- +// Context +// ---------------------------------------------------------------------------- + +// RouteMatch stores information about a matched route. +type RouteMatch struct { + Route *Route + Handler http.Handler + Vars map[string]string + + // MatchErr is set to appropriate matching error + // It is set to ErrMethodMismatch if there is a mismatch in + // the request method and route method + MatchErr error +} + +type contextKey int + +const ( + varsKey contextKey = iota + routeKey +) + +// Vars returns the route variables for the current request, if any. +func Vars(r *http.Request) map[string]string { + if rv := r.Context().Value(varsKey); rv != nil { + return rv.(map[string]string) + } + return nil +} + +// CurrentRoute returns the matched route for the current request, if any. +// This only works when called inside the handler of the matched route +// because the matched route is stored in the request context which is cleared +// after the handler returns. +func CurrentRoute(r *http.Request) *Route { + if rv := r.Context().Value(routeKey); rv != nil { + return rv.(*Route) + } + return nil +} + +func requestWithVars(r *http.Request, vars map[string]string) *http.Request { + ctx := context.WithValue(r.Context(), varsKey, vars) + return r.WithContext(ctx) +} + +func requestWithRoute(r *http.Request, route *Route) *http.Request { + ctx := context.WithValue(r.Context(), routeKey, route) + return r.WithContext(ctx) +} + +// ---------------------------------------------------------------------------- +// Helpers +// ---------------------------------------------------------------------------- + +// cleanPath returns the canonical path for p, eliminating . and .. elements. +// Borrowed from the net/http package. +func cleanPath(p string) string { + if p == "" { + return "/" + } + if p[0] != '/' { + p = "/" + p + } + np := path.Clean(p) + // path.Clean removes trailing slash except for root; + // put the trailing slash back if necessary. + if p[len(p)-1] == '/' && np != "/" { + np += "/" + } + + return np +} + +// uniqueVars returns an error if two slices contain duplicated strings. +func uniqueVars(s1, s2 []string) error { + for _, v1 := range s1 { + for _, v2 := range s2 { + if v1 == v2 { + return fmt.Errorf("mux: duplicated route variable %q", v2) + } + } + } + return nil +} + +// checkPairs returns the count of strings passed in, and an error if +// the count is not an even number. +func checkPairs(pairs ...string) (int, error) { + length := len(pairs) + if length%2 != 0 { + return length, fmt.Errorf( + "mux: number of parameters must be multiple of 2, got %v", pairs) + } + return length, nil +} + +// mapFromPairsToString converts variadic string parameters to a +// string to string map. 
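+// For example, ("k1", "v1", "k2", "v2") yields {"k1": "v1", "k2": "v2"};
+// an odd number of parameters is an error (see checkPairs).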
+func mapFromPairsToString(pairs ...string) (map[string]string, error) { + length, err := checkPairs(pairs...) + if err != nil { + return nil, err + } + m := make(map[string]string, length/2) + for i := 0; i < length; i += 2 { + m[pairs[i]] = pairs[i+1] + } + return m, nil +} + +// mapFromPairsToRegex converts variadic string parameters to a +// string to regex map. +func mapFromPairsToRegex(pairs ...string) (map[string]*regexp.Regexp, error) { + length, err := checkPairs(pairs...) + if err != nil { + return nil, err + } + m := make(map[string]*regexp.Regexp, length/2) + for i := 0; i < length; i += 2 { + regex, err := regexp.Compile(pairs[i+1]) + if err != nil { + return nil, err + } + m[pairs[i]] = regex + } + return m, nil +} + +// matchInArray returns true if the given string value is in the array. +func matchInArray(arr []string, value string) bool { + for _, v := range arr { + if v == value { + return true + } + } + return false +} + +// matchMapWithString returns true if the given key/value pairs exist in a given map. +func matchMapWithString(toCheck map[string]string, toMatch map[string][]string, canonicalKey bool) bool { + for k, v := range toCheck { + // Check if key exists. + if canonicalKey { + k = http.CanonicalHeaderKey(k) + } + if values := toMatch[k]; values == nil { + return false + } else if v != "" { + // If value was defined as an empty string we only check that the + // key exists. Otherwise we also check for equality. + valueExists := false + for _, value := range values { + if v == value { + valueExists = true + break + } + } + if !valueExists { + return false + } + } + } + return true +} + +// matchMapWithRegex returns true if the given key/value pairs exist in a given map compiled against +// the given regex +func matchMapWithRegex(toCheck map[string]*regexp.Regexp, toMatch map[string][]string, canonicalKey bool) bool { + for k, v := range toCheck { + // Check if key exists. + if canonicalKey { + k = http.CanonicalHeaderKey(k) + } + if values := toMatch[k]; values == nil { + return false + } else if v != nil { + // If value was defined as an empty string we only check that the + // key exists. Otherwise we also check for equality. + valueExists := false + for _, value := range values { + if v.MatchString(value) { + valueExists = true + break + } + } + if !valueExists { + return false + } + } + } + return true +} + +// methodNotAllowed replies to the request with an HTTP status code 405. +func methodNotAllowed(w http.ResponseWriter, r *http.Request) { + w.WriteHeader(http.StatusMethodNotAllowed) +} + +// methodNotAllowedHandler returns a simple request handler +// that replies to each request with a status code 405. +func methodNotAllowedHandler() http.Handler { return http.HandlerFunc(methodNotAllowed) } diff --git a/vendor/github.com/gorilla/mux/regexp.go b/vendor/github.com/gorilla/mux/regexp.go new file mode 100644 index 000000000..5d05cfa0e --- /dev/null +++ b/vendor/github.com/gorilla/mux/regexp.go @@ -0,0 +1,388 @@ +// Copyright 2012 The Gorilla Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. 
+ +package mux + +import ( + "bytes" + "fmt" + "net/http" + "net/url" + "regexp" + "strconv" + "strings" +) + +type routeRegexpOptions struct { + strictSlash bool + useEncodedPath bool +} + +type regexpType int + +const ( + regexpTypePath regexpType = iota + regexpTypeHost + regexpTypePrefix + regexpTypeQuery +) + +// newRouteRegexp parses a route template and returns a routeRegexp, +// used to match a host, a path or a query string. +// +// It will extract named variables, assemble a regexp to be matched, create +// a "reverse" template to build URLs and compile regexps to validate variable +// values used in URL building. +// +// Previously we accepted only Python-like identifiers for variable +// names ([a-zA-Z_][a-zA-Z0-9_]*), but currently the only restriction is that +// name and pattern can't be empty, and names can't contain a colon. +func newRouteRegexp(tpl string, typ regexpType, options routeRegexpOptions) (*routeRegexp, error) { + // Check if it is well-formed. + idxs, errBraces := braceIndices(tpl) + if errBraces != nil { + return nil, errBraces + } + // Backup the original. + template := tpl + // Now let's parse it. + defaultPattern := "[^/]+" + if typ == regexpTypeQuery { + defaultPattern = ".*" + } else if typ == regexpTypeHost { + defaultPattern = "[^.]+" + } + // Only match strict slash if not matching + if typ != regexpTypePath { + options.strictSlash = false + } + // Set a flag for strictSlash. + endSlash := false + if options.strictSlash && strings.HasSuffix(tpl, "/") { + tpl = tpl[:len(tpl)-1] + endSlash = true + } + varsN := make([]string, len(idxs)/2) + varsR := make([]*regexp.Regexp, len(idxs)/2) + pattern := bytes.NewBufferString("") + pattern.WriteByte('^') + reverse := bytes.NewBufferString("") + var end int + var err error + for i := 0; i < len(idxs); i += 2 { + // Set all values we are interested in. + raw := tpl[end:idxs[i]] + end = idxs[i+1] + parts := strings.SplitN(tpl[idxs[i]+1:end-1], ":", 2) + name := parts[0] + patt := defaultPattern + if len(parts) == 2 { + patt = parts[1] + } + // Name or pattern can't be empty. + if name == "" || patt == "" { + return nil, fmt.Errorf("mux: missing name or pattern in %q", + tpl[idxs[i]:end]) + } + // Build the regexp pattern. + fmt.Fprintf(pattern, "%s(?P<%s>%s)", regexp.QuoteMeta(raw), varGroupName(i/2), patt) + + // Build the reverse template. + fmt.Fprintf(reverse, "%s%%s", raw) + + // Append variable name and compiled pattern. + varsN[i/2] = name + varsR[i/2], err = regexp.Compile(fmt.Sprintf("^%s$", patt)) + if err != nil { + return nil, err + } + } + // Add the remaining. + raw := tpl[end:] + pattern.WriteString(regexp.QuoteMeta(raw)) + if options.strictSlash { + pattern.WriteString("[/]?") + } + if typ == regexpTypeQuery { + // Add the default pattern if the query value is empty + if queryVal := strings.SplitN(template, "=", 2)[1]; queryVal == "" { + pattern.WriteString(defaultPattern) + } + } + if typ != regexpTypePrefix { + pattern.WriteByte('$') + } + + var wildcardHostPort bool + if typ == regexpTypeHost { + if !strings.Contains(pattern.String(), ":") { + wildcardHostPort = true + } + } + reverse.WriteString(raw) + if endSlash { + reverse.WriteByte('/') + } + // Compile full regexp. + reg, errCompile := regexp.Compile(pattern.String()) + if errCompile != nil { + return nil, errCompile + } + + // Check for capturing groups which used to work in older versions + if reg.NumSubexp() != len(idxs)/2 { + panic(fmt.Sprintf("route %s contains capture groups in its regexp. 
", template) + + "Only non-capturing groups are accepted: e.g. (?:pattern) instead of (pattern)") + } + + // Done! + return &routeRegexp{ + template: template, + regexpType: typ, + options: options, + regexp: reg, + reverse: reverse.String(), + varsN: varsN, + varsR: varsR, + wildcardHostPort: wildcardHostPort, + }, nil +} + +// routeRegexp stores a regexp to match a host or path and information to +// collect and validate route variables. +type routeRegexp struct { + // The unmodified template. + template string + // The type of match + regexpType regexpType + // Options for matching + options routeRegexpOptions + // Expanded regexp. + regexp *regexp.Regexp + // Reverse template. + reverse string + // Variable names. + varsN []string + // Variable regexps (validators). + varsR []*regexp.Regexp + // Wildcard host-port (no strict port match in hostname) + wildcardHostPort bool +} + +// Match matches the regexp against the URL host or path. +func (r *routeRegexp) Match(req *http.Request, match *RouteMatch) bool { + if r.regexpType == regexpTypeHost { + host := getHost(req) + if r.wildcardHostPort { + // Don't be strict on the port match + if i := strings.Index(host, ":"); i != -1 { + host = host[:i] + } + } + return r.regexp.MatchString(host) + } + + if r.regexpType == regexpTypeQuery { + return r.matchQueryString(req) + } + path := req.URL.Path + if r.options.useEncodedPath { + path = req.URL.EscapedPath() + } + return r.regexp.MatchString(path) +} + +// url builds a URL part using the given values. +func (r *routeRegexp) url(values map[string]string) (string, error) { + urlValues := make([]interface{}, len(r.varsN)) + for k, v := range r.varsN { + value, ok := values[v] + if !ok { + return "", fmt.Errorf("mux: missing route variable %q", v) + } + if r.regexpType == regexpTypeQuery { + value = url.QueryEscape(value) + } + urlValues[k] = value + } + rv := fmt.Sprintf(r.reverse, urlValues...) + if !r.regexp.MatchString(rv) { + // The URL is checked against the full regexp, instead of checking + // individual variables. This is faster but to provide a good error + // message, we check individual regexps if the URL doesn't match. + for k, v := range r.varsN { + if !r.varsR[k].MatchString(values[v]) { + return "", fmt.Errorf( + "mux: variable %q doesn't match, expected %q", values[v], + r.varsR[k].String()) + } + } + } + return rv, nil +} + +// getURLQuery returns a single query parameter from a request URL. +// For a URL with foo=bar&baz=ding, we return only the relevant key +// value pair for the routeRegexp. +func (r *routeRegexp) getURLQuery(req *http.Request) string { + if r.regexpType != regexpTypeQuery { + return "" + } + templateKey := strings.SplitN(r.template, "=", 2)[0] + val, ok := findFirstQueryKey(req.URL.RawQuery, templateKey) + if ok { + return templateKey + "=" + val + } + return "" +} + +// findFirstQueryKey returns the same result as (*url.URL).Query()[key][0]. +// If key was not found, empty string and false is returned. +func findFirstQueryKey(rawQuery, key string) (value string, ok bool) { + query := []byte(rawQuery) + for len(query) > 0 { + foundKey := query + if i := bytes.IndexAny(foundKey, "&;"); i >= 0 { + foundKey, query = foundKey[:i], foundKey[i+1:] + } else { + query = query[:0] + } + if len(foundKey) == 0 { + continue + } + var value []byte + if i := bytes.IndexByte(foundKey, '='); i >= 0 { + foundKey, value = foundKey[:i], foundKey[i+1:] + } + if len(foundKey) < len(key) { + // Cannot possibly be key. 
+ continue + } + keyString, err := url.QueryUnescape(string(foundKey)) + if err != nil { + continue + } + if keyString != key { + continue + } + valueString, err := url.QueryUnescape(string(value)) + if err != nil { + continue + } + return valueString, true + } + return "", false +} + +func (r *routeRegexp) matchQueryString(req *http.Request) bool { + return r.regexp.MatchString(r.getURLQuery(req)) +} + +// braceIndices returns the first level curly brace indices from a string. +// It returns an error in case of unbalanced braces. +func braceIndices(s string) ([]int, error) { + var level, idx int + var idxs []int + for i := 0; i < len(s); i++ { + switch s[i] { + case '{': + if level++; level == 1 { + idx = i + } + case '}': + if level--; level == 0 { + idxs = append(idxs, idx, i+1) + } else if level < 0 { + return nil, fmt.Errorf("mux: unbalanced braces in %q", s) + } + } + } + if level != 0 { + return nil, fmt.Errorf("mux: unbalanced braces in %q", s) + } + return idxs, nil +} + +// varGroupName builds a capturing group name for the indexed variable. +func varGroupName(idx int) string { + return "v" + strconv.Itoa(idx) +} + +// ---------------------------------------------------------------------------- +// routeRegexpGroup +// ---------------------------------------------------------------------------- + +// routeRegexpGroup groups the route matchers that carry variables. +type routeRegexpGroup struct { + host *routeRegexp + path *routeRegexp + queries []*routeRegexp +} + +// setMatch extracts the variables from the URL once a route matches. +func (v routeRegexpGroup) setMatch(req *http.Request, m *RouteMatch, r *Route) { + // Store host variables. + if v.host != nil { + host := getHost(req) + if v.host.wildcardHostPort { + // Don't be strict on the port match + if i := strings.Index(host, ":"); i != -1 { + host = host[:i] + } + } + matches := v.host.regexp.FindStringSubmatchIndex(host) + if len(matches) > 0 { + extractVars(host, matches, v.host.varsN, m.Vars) + } + } + path := req.URL.Path + if r.useEncodedPath { + path = req.URL.EscapedPath() + } + // Store path variables. + if v.path != nil { + matches := v.path.regexp.FindStringSubmatchIndex(path) + if len(matches) > 0 { + extractVars(path, matches, v.path.varsN, m.Vars) + // Check if we should redirect. + if v.path.options.strictSlash { + p1 := strings.HasSuffix(path, "/") + p2 := strings.HasSuffix(v.path.template, "/") + if p1 != p2 { + u, _ := url.Parse(req.URL.String()) + if p1 { + u.Path = u.Path[:len(u.Path)-1] + } else { + u.Path += "/" + } + m.Handler = http.RedirectHandler(u.String(), http.StatusMovedPermanently) + } + } + } + } + // Store query string variables. + for _, q := range v.queries { + queryURL := q.getURLQuery(req) + matches := q.regexp.FindStringSubmatchIndex(queryURL) + if len(matches) > 0 { + extractVars(queryURL, matches, q.varsN, m.Vars) + } + } +} + +// getHost tries its best to return the request host. +// According to section 14.23 of RFC 2616 the Host header +// can include the port number if the default value of 80 is not used. 
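+// For absolute request URLs, such as proxied requests, the host from the
+// request URL is preferred over the Host header.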
+func getHost(r *http.Request) string { + if r.URL.IsAbs() { + return r.URL.Host + } + return r.Host +} + +func extractVars(input string, matches []int, names []string, output map[string]string) { + for i, name := range names { + output[name] = input[matches[2*i+2]:matches[2*i+3]] + } +} diff --git a/vendor/github.com/gorilla/mux/route.go b/vendor/github.com/gorilla/mux/route.go new file mode 100644 index 000000000..e8f11df22 --- /dev/null +++ b/vendor/github.com/gorilla/mux/route.go @@ -0,0 +1,765 @@ +// Copyright 2012 The Gorilla Authors. All rights reserved. +// Use of this source code is governed by a BSD-style +// license that can be found in the LICENSE file. + +package mux + +import ( + "errors" + "fmt" + "net/http" + "net/url" + "regexp" + "strings" +) + +// Route stores information to match a request and build URLs. +type Route struct { + // Request handler for the route. + handler http.Handler + // If true, this route never matches: it is only used to build URLs. + buildOnly bool + // The name used to build URLs. + name string + // Error resulted from building a route. + err error + + // "global" reference to all named routes + namedRoutes map[string]*Route + + // config possibly passed in from `Router` + routeConf +} + +// SkipClean reports whether path cleaning is enabled for this route via +// Router.SkipClean. +func (r *Route) SkipClean() bool { + return r.skipClean +} + +// Match matches the route against the request. +func (r *Route) Match(req *http.Request, match *RouteMatch) bool { + if r.buildOnly || r.err != nil { + return false + } + + var matchErr error + + // Match everything. + for _, m := range r.matchers { + if matched := m.Match(req, match); !matched { + if _, ok := m.(methodMatcher); ok { + matchErr = ErrMethodMismatch + continue + } + + // Ignore ErrNotFound errors. These errors arise from match call + // to Subrouters. + // + // This prevents subsequent matching subrouters from failing to + // run middleware. If not ignored, the middleware would see a + // non-nil MatchErr and be skipped, even when there was a + // matching route. + if match.MatchErr == ErrNotFound { + match.MatchErr = nil + } + + matchErr = nil // nolint:ineffassign + return false + } else { + // Multiple routes may share the same path but use different HTTP methods. For instance: + // Route 1: POST "/users/{id}". + // Route 2: GET "/users/{id}", parameters: "id": "[0-9]+". + // + // The router must handle these cases correctly. For a GET request to "/users/abc" with "id" as "-2", + // The router should return a "Not Found" error as no route fully matches this request. + if match.MatchErr == ErrMethodMismatch { + match.MatchErr = nil + } + } + } + + if matchErr != nil { + match.MatchErr = matchErr + return false + } + + if match.MatchErr == ErrMethodMismatch && r.handler != nil { + // We found a route which matches request method, clear MatchErr + match.MatchErr = nil + // Then override the mis-matched handler + match.Handler = r.handler + } + + // Yay, we have a match. Let's collect some info about it. + if match.Route == nil { + match.Route = r + } + if match.Handler == nil { + match.Handler = r.handler + } + if match.Vars == nil { + match.Vars = make(map[string]string) + } + + // Set variables. 
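+	// Host, path and query variables are extracted into match.Vars here.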
+ r.regexp.setMatch(req, match, r) + return true +} + +// ---------------------------------------------------------------------------- +// Route attributes +// ---------------------------------------------------------------------------- + +// GetError returns an error resulted from building the route, if any. +func (r *Route) GetError() error { + return r.err +} + +// BuildOnly sets the route to never match: it is only used to build URLs. +func (r *Route) BuildOnly() *Route { + r.buildOnly = true + return r +} + +// Handler -------------------------------------------------------------------- + +// Handler sets a handler for the route. +func (r *Route) Handler(handler http.Handler) *Route { + if r.err == nil { + r.handler = handler + } + return r +} + +// HandlerFunc sets a handler function for the route. +func (r *Route) HandlerFunc(f func(http.ResponseWriter, *http.Request)) *Route { + return r.Handler(http.HandlerFunc(f)) +} + +// GetHandler returns the handler for the route, if any. +func (r *Route) GetHandler() http.Handler { + return r.handler +} + +// Name ----------------------------------------------------------------------- + +// Name sets the name for the route, used to build URLs. +// It is an error to call Name more than once on a route. +func (r *Route) Name(name string) *Route { + if r.name != "" { + r.err = fmt.Errorf("mux: route already has name %q, can't set %q", + r.name, name) + } + if r.err == nil { + r.name = name + r.namedRoutes[name] = r + } + return r +} + +// GetName returns the name for the route, if any. +func (r *Route) GetName() string { + return r.name +} + +// ---------------------------------------------------------------------------- +// Matchers +// ---------------------------------------------------------------------------- + +// matcher types try to match a request. +type matcher interface { + Match(*http.Request, *RouteMatch) bool +} + +// addMatcher adds a matcher to the route. +func (r *Route) addMatcher(m matcher) *Route { + if r.err == nil { + r.matchers = append(r.matchers, m) + } + return r +} + +// addRegexpMatcher adds a host or path matcher and builder to a route. +func (r *Route) addRegexpMatcher(tpl string, typ regexpType) error { + if r.err != nil { + return r.err + } + if typ == regexpTypePath || typ == regexpTypePrefix { + if len(tpl) > 0 && tpl[0] != '/' { + return fmt.Errorf("mux: path must start with a slash, got %q", tpl) + } + if r.regexp.path != nil { + tpl = strings.TrimRight(r.regexp.path.template, "/") + tpl + } + } + rr, err := newRouteRegexp(tpl, typ, routeRegexpOptions{ + strictSlash: r.strictSlash, + useEncodedPath: r.useEncodedPath, + }) + if err != nil { + return err + } + for _, q := range r.regexp.queries { + if err = uniqueVars(rr.varsN, q.varsN); err != nil { + return err + } + } + if typ == regexpTypeHost { + if r.regexp.path != nil { + if err = uniqueVars(rr.varsN, r.regexp.path.varsN); err != nil { + return err + } + } + r.regexp.host = rr + } else { + if r.regexp.host != nil { + if err = uniqueVars(rr.varsN, r.regexp.host.varsN); err != nil { + return err + } + } + if typ == regexpTypeQuery { + r.regexp.queries = append(r.regexp.queries, rr) + } else { + r.regexp.path = rr + } + } + r.addMatcher(rr) + return nil +} + +// Headers -------------------------------------------------------------------- + +// headerMatcher matches the request against header values. 
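+// An empty value in the map matches any value, as long as the header key
+// is present in the request.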
+type headerMatcher map[string]string
+
+func (m headerMatcher) Match(r *http.Request, match *RouteMatch) bool {
+	return matchMapWithString(m, r.Header, true)
+}
+
+// Headers adds a matcher for request header values.
+// It accepts a sequence of key/value pairs to be matched. For example:
+//
+//	r := mux.NewRouter().NewRoute()
+//	r.Headers("Content-Type", "application/json",
+//		"X-Requested-With", "XMLHttpRequest")
+//
+// The above route will only match if both request header values match.
+// If the value is an empty string, it will match any value if the key is set.
+func (r *Route) Headers(pairs ...string) *Route {
+	if r.err == nil {
+		var headers map[string]string
+		headers, r.err = mapFromPairsToString(pairs...)
+		return r.addMatcher(headerMatcher(headers))
+	}
+	return r
+}
+
+// headerRegexMatcher matches request header values against regular expressions.
+type headerRegexMatcher map[string]*regexp.Regexp
+
+func (m headerRegexMatcher) Match(r *http.Request, match *RouteMatch) bool {
+	return matchMapWithRegex(m, r.Header, true)
+}
+
+// HeadersRegexp accepts a sequence of key/value pairs, where the value has regex
+// support. For example:
+//
+//	r := mux.NewRouter().NewRoute()
+//	r.HeadersRegexp("Content-Type", "application/(text|json)",
+//		"X-Requested-With", "XMLHttpRequest")
+//
+// The above route will only match if both request header values match their
+// corresponding regular expressions.
+// If the value is an empty string, it will match any value if the key is set.
+// Use the start and end of string anchors (^ and $) to match an exact value.
+func (r *Route) HeadersRegexp(pairs ...string) *Route {
+	if r.err == nil {
+		var headers map[string]*regexp.Regexp
+		headers, r.err = mapFromPairsToRegex(pairs...)
+		return r.addMatcher(headerRegexMatcher(headers))
+	}
+	return r
+}
+
+// Host -----------------------------------------------------------------------
+
+// Host adds a matcher for the URL host.
+// It accepts a template with zero or more URL variables enclosed by {}.
+// Variables can define an optional regexp pattern to be matched:
+//
+// - {name} matches anything until the next dot.
+//
+// - {name:pattern} matches the given regexp pattern.
+//
+// For example:
+//
+//	r := mux.NewRouter().NewRoute()
+//	r.Host("www.example.com")
+//	r.Host("{subdomain}.domain.com")
+//	r.Host("{subdomain:[a-z]+}.domain.com")
+//
+// Variable names must be unique in a given route. They can be retrieved
+// calling mux.Vars(request).
+func (r *Route) Host(tpl string) *Route {
+	r.err = r.addRegexpMatcher(tpl, regexpTypeHost)
+	return r
+}
+
+// MatcherFunc ----------------------------------------------------------------
+
+// MatcherFunc is the function signature used by custom matchers.
+type MatcherFunc func(*http.Request, *RouteMatch) bool
+
+// Match returns the match for a given request.
+func (m MatcherFunc) Match(r *http.Request, match *RouteMatch) bool {
+	return m(r, match)
+}
+
+// MatcherFunc adds a custom function to be used as request matcher.
+func (r *Route) MatcherFunc(f MatcherFunc) *Route {
+	return r.addMatcher(f)
+}
+
+// Methods --------------------------------------------------------------------
+
+// methodMatcher matches the request against HTTP methods.
+type methodMatcher []string
+
+func (m methodMatcher) Match(r *http.Request, match *RouteMatch) bool {
+	return matchInArray(m, r.Method)
+}
+
+// Methods adds a matcher for HTTP methods.
+// It accepts a sequence of one or more methods to be matched, e.g.:
+// "GET", "POST", "PUT".
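+// Method names are upper-cased at registration, so "get" and "GET" are
+// equivalent.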
+func (r *Route) Methods(methods ...string) *Route {
+	for k, v := range methods {
+		methods[k] = strings.ToUpper(v)
+	}
+	return r.addMatcher(methodMatcher(methods))
+}
+
+// Path -----------------------------------------------------------------------
+
+// Path adds a matcher for the URL path.
+// It accepts a template with zero or more URL variables enclosed by {}. The
+// template must start with a "/".
+// Variables can define an optional regexp pattern to be matched:
+//
+// - {name} matches anything until the next slash.
+//
+// - {name:pattern} matches the given regexp pattern.
+//
+// For example:
+//
+//	r := mux.NewRouter().NewRoute()
+//	r.Path("/products/").Handler(ProductsHandler)
+//	r.Path("/products/{key}").Handler(ProductsHandler)
+//	r.Path("/articles/{category}/{id:[0-9]+}").
+//		Handler(ArticleHandler)
+//
+// Variable names must be unique in a given route. They can be retrieved
+// calling mux.Vars(request).
+func (r *Route) Path(tpl string) *Route {
+	r.err = r.addRegexpMatcher(tpl, regexpTypePath)
+	return r
+}
+
+// PathPrefix -----------------------------------------------------------------
+
+// PathPrefix adds a matcher for the URL path prefix. This matches if the given
+// template is a prefix of the full URL path. See Route.Path() for details on
+// the tpl argument.
+//
+// Note that it does not treat slashes specially ("/foobar/" will be matched by
+// the prefix "/foo") so you may want to use a trailing slash here.
+//
+// Also note that the setting of Router.StrictSlash() has no effect on routes
+// with a PathPrefix matcher.
+func (r *Route) PathPrefix(tpl string) *Route {
+	r.err = r.addRegexpMatcher(tpl, regexpTypePrefix)
+	return r
+}
+
+// Query ----------------------------------------------------------------------
+
+// Queries adds a matcher for URL query values.
+// It accepts a sequence of key/value pairs. Values may define variables.
+// For example:
+//
+//	r := mux.NewRouter().NewRoute()
+//	r.Queries("foo", "bar", "id", "{id:[0-9]+}")
+//
+// The above route will only match if the URL contains the defined query
+// values, e.g.: ?foo=bar&id=42.
+//
+// If the value is an empty string, it will match any value if the key is set.
+//
+// Variables can define an optional regexp pattern to be matched:
+//
+// - {name} matches anything.
+//
+// - {name:pattern} matches the given regexp pattern.
+func (r *Route) Queries(pairs ...string) *Route {
+	length := len(pairs)
+	if length%2 != 0 {
+		r.err = fmt.Errorf(
+			"mux: number of parameters must be multiple of 2, got %v", pairs)
+		return nil
+	}
+	for i := 0; i < length; i += 2 {
+		if r.err = r.addRegexpMatcher(pairs[i]+"="+pairs[i+1], regexpTypeQuery); r.err != nil {
+			return r
+		}
+	}
+
+	return r
+}
+
+// Schemes --------------------------------------------------------------------
+
+// schemeMatcher matches the request against URL schemes.
+type schemeMatcher []string
+
+func (m schemeMatcher) Match(r *http.Request, match *RouteMatch) bool {
+	scheme := r.URL.Scheme
+	// https://golang.org/pkg/net/http/#Request
+	// "For [most] server requests, fields other than Path and RawQuery will be
+	// empty."
+	// Since we're an http muxer, the scheme is either going to be http or https
+	// though, so we can just set it based on the tls termination state.
+	if scheme == "" {
+		if r.TLS == nil {
+			scheme = "http"
+		} else {
+			scheme = "https"
+		}
+	}
+	return matchInArray(m, scheme)
+}
+
+// Schemes adds a matcher for URL schemes.
+// It accepts a sequence of schemes to be matched, e.g.: "http", "https".
+// If the request's URL has a scheme set, it will be matched against.
+// Generally, the URL scheme will only be set if a previous handler set it,
+// such as the ProxyHeaders handler from gorilla/handlers.
+// If unset, the scheme will be determined based on the request's TLS
+// termination state.
+// The first argument to Schemes will be used when constructing a route URL.
+func (r *Route) Schemes(schemes ...string) *Route {
+	for k, v := range schemes {
+		schemes[k] = strings.ToLower(v)
+	}
+	if len(schemes) > 0 {
+		r.buildScheme = schemes[0]
+	}
+	return r.addMatcher(schemeMatcher(schemes))
+}
+
+// BuildVarsFunc --------------------------------------------------------------
+
+// BuildVarsFunc is the function signature used by custom build variable
+// functions (which can modify route variables before a route's URL is built).
+type BuildVarsFunc func(map[string]string) map[string]string
+
+// BuildVarsFunc adds a custom function to be used to modify build variables
+// before a route's URL is built.
+func (r *Route) BuildVarsFunc(f BuildVarsFunc) *Route {
+	if r.buildVarsFunc != nil {
+		// compose the old and new functions
+		old := r.buildVarsFunc
+		r.buildVarsFunc = func(m map[string]string) map[string]string {
+			return f(old(m))
+		}
+	} else {
+		r.buildVarsFunc = f
+	}
+	return r
+}
+
+// Subrouter ------------------------------------------------------------------
+
+// Subrouter creates a subrouter for the route.
+//
+// It will test the inner routes only if the parent route matched. For example:
+//
+//	r := mux.NewRouter().NewRoute()
+//	s := r.Host("www.example.com").Subrouter()
+//	s.HandleFunc("/products/", ProductsHandler)
+//	s.HandleFunc("/products/{key}", ProductHandler)
+//	s.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler)
+//
+// Here, the routes registered in the subrouter won't be tested if the host
+// doesn't match.
+func (r *Route) Subrouter() *Router {
+	// initialize a subrouter with a copy of the parent route's configuration
+	router := &Router{routeConf: copyRouteConf(r.routeConf), namedRoutes: r.namedRoutes}
+	r.addMatcher(router)
+	return router
+}
+
+// ----------------------------------------------------------------------------
+// URL building
+// ----------------------------------------------------------------------------
+
+// URL builds a URL for the route.
+//
+// It accepts a sequence of key/value pairs for the route variables. For
+// example, given this route:
+//
+//	r := mux.NewRouter()
+//	r.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler).
+//		Name("article")
+//
+// ...a URL for it can be built using:
+//
+//	url, err := r.Get("article").URL("category", "technology", "id", "42")
+//
+// ...which will return a url.URL with the following path:
+//
+//	"/articles/technology/42"
+//
+// This also works for host variables:
+//
+//	r := mux.NewRouter()
+//	r.HandleFunc("/articles/{category}/{id:[0-9]+}", ArticleHandler).
+//		Host("{subdomain}.domain.com").
+//		Name("article")
+//
+//	// url.String() will be "http://news.domain.com/articles/technology/42"
+//	url, err := r.Get("article").URL("subdomain", "news",
+//		"category", "technology",
+//		"id", "42")
+//
+// The scheme of the resulting URL will be the first argument that was passed to Schemes:
+//
+//	// url.String() will be "https://example.com"
+//	r := mux.NewRouter().NewRoute()
+//	url, err := r.Host("example.com").
+//		Schemes("https", "http").URL()
+//
+// All variables defined in the route are required, and their values must
+// conform to the corresponding patterns.
+func (r *Route) URL(pairs ...string) (*url.URL, error) {
+	if r.err != nil {
+		return nil, r.err
+	}
+	values, err := r.prepareVars(pairs...)
+	if err != nil {
+		return nil, err
+	}
+	var scheme, host, path string
+	queries := make([]string, 0, len(r.regexp.queries))
+	if r.regexp.host != nil {
+		if host, err = r.regexp.host.url(values); err != nil {
+			return nil, err
+		}
+		scheme = "http"
+		if r.buildScheme != "" {
+			scheme = r.buildScheme
+		}
+	}
+	if r.regexp.path != nil {
+		if path, err = r.regexp.path.url(values); err != nil {
+			return nil, err
+		}
+	}
+	for _, q := range r.regexp.queries {
+		var query string
+		if query, err = q.url(values); err != nil {
+			return nil, err
+		}
+		queries = append(queries, query)
+	}
+	return &url.URL{
+		Scheme:   scheme,
+		Host:     host,
+		Path:     path,
+		RawQuery: strings.Join(queries, "&"),
+	}, nil
+}
+
+// URLHost builds the host part of the URL for a route. See Route.URL().
+//
+// The route must have a host defined.
+func (r *Route) URLHost(pairs ...string) (*url.URL, error) {
+	if r.err != nil {
+		return nil, r.err
+	}
+	if r.regexp.host == nil {
+		return nil, errors.New("mux: route doesn't have a host")
+	}
+	values, err := r.prepareVars(pairs...)
+	if err != nil {
+		return nil, err
+	}
+	host, err := r.regexp.host.url(values)
+	if err != nil {
+		return nil, err
+	}
+	u := &url.URL{
+		Scheme: "http",
+		Host:   host,
+	}
+	if r.buildScheme != "" {
+		u.Scheme = r.buildScheme
+	}
+	return u, nil
+}
+
+// URLPath builds the path part of the URL for a route. See Route.URL().
+//
+// The route must have a path defined.
+func (r *Route) URLPath(pairs ...string) (*url.URL, error) {
+	if r.err != nil {
+		return nil, r.err
+	}
+	if r.regexp.path == nil {
+		return nil, errors.New("mux: route doesn't have a path")
+	}
+	values, err := r.prepareVars(pairs...)
+	if err != nil {
+		return nil, err
+	}
+	path, err := r.regexp.path.url(values)
+	if err != nil {
+		return nil, err
+	}
+	return &url.URL{
+		Path: path,
+	}, nil
+}
+
+// GetPathTemplate returns the template used to build the
+// route match.
+// This is useful for building simple REST API documentation and for instrumentation
+// against third-party services.
+// An error will be returned if the route does not define a path.
+func (r *Route) GetPathTemplate() (string, error) {
+	if r.err != nil {
+		return "", r.err
+	}
+	if r.regexp.path == nil {
+		return "", errors.New("mux: route doesn't have a path")
+	}
+	return r.regexp.path.template, nil
+}
+
+// GetPathRegexp returns the expanded regular expression used to match the route path.
+// This is useful for building simple REST API documentation and for instrumentation
+// against third-party services.
+// An error will be returned if the route does not define a path.
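+// For example, a route defined with Path("/articles/{id:[0-9]+}") would,
+// under current mux versions, report a regexp similar to
+// "^/articles/(?P<v0>[0-9]+)$"; the exact form is an implementation detail
+// and may change between releases.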
+func (r *Route) GetPathRegexp() (string, error) {
+	if r.err != nil {
+		return "", r.err
+	}
+	if r.regexp.path == nil {
+		return "", errors.New("mux: route does not have a path")
+	}
+	return r.regexp.path.regexp.String(), nil
+}
+
+// GetQueriesRegexp returns the expanded regular expressions used to match the
+// route queries.
+// This is useful for building simple REST API documentation and for instrumentation
+// against third-party services.
+// An error will be returned if the route does not have queries.
+func (r *Route) GetQueriesRegexp() ([]string, error) {
+	if r.err != nil {
+		return nil, r.err
+	}
+	if r.regexp.queries == nil {
+		return nil, errors.New("mux: route doesn't have queries")
+	}
+	queries := make([]string, 0, len(r.regexp.queries))
+	for _, query := range r.regexp.queries {
+		queries = append(queries, query.regexp.String())
+	}
+	return queries, nil
+}
+
+// GetQueriesTemplates returns the templates used to build the
+// query matching.
+// This is useful for building simple REST API documentation and for instrumentation
+// against third-party services.
+// An error will be returned if the route does not define queries.
+func (r *Route) GetQueriesTemplates() ([]string, error) {
+	if r.err != nil {
+		return nil, r.err
+	}
+	if r.regexp.queries == nil {
+		return nil, errors.New("mux: route doesn't have queries")
+	}
+	queries := make([]string, 0, len(r.regexp.queries))
+	for _, query := range r.regexp.queries {
+		queries = append(queries, query.template)
+	}
+	return queries, nil
+}
+
+// GetMethods returns the methods the route matches against.
+// This is useful for building simple REST API documentation and for instrumentation
+// against third-party services.
+// An error will be returned if the route does not have methods.
+func (r *Route) GetMethods() ([]string, error) {
+	if r.err != nil {
+		return nil, r.err
+	}
+	for _, m := range r.matchers {
+		if methods, ok := m.(methodMatcher); ok {
+			return []string(methods), nil
+		}
+	}
+	return nil, errors.New("mux: route doesn't have methods")
+}
+
+// GetHostTemplate returns the template used to build the
+// route match.
+// This is useful for building simple REST API documentation and for instrumentation
+// against third-party services.
+// An error will be returned if the route does not define a host.
+func (r *Route) GetHostTemplate() (string, error) {
+	if r.err != nil {
+		return "", r.err
+	}
+	if r.regexp.host == nil {
+		return "", errors.New("mux: route doesn't have a host")
+	}
+	return r.regexp.host.template, nil
+}
+
+// GetVarNames returns the names of all variables added by regexp matchers.
+// These can be used to know which route variables should be passed into r.URL().
+func (r *Route) GetVarNames() ([]string, error) {
+	if r.err != nil {
+		return nil, r.err
+	}
+	var varNames []string
+	if r.regexp.host != nil {
+		varNames = append(varNames, r.regexp.host.varsN...)
+	}
+	if r.regexp.path != nil {
+		varNames = append(varNames, r.regexp.path.varsN...)
+	}
+	for _, regx := range r.regexp.queries {
+		varNames = append(varNames, regx.varsN...)
+	}
+	return varNames, nil
+}
+
+// prepareVars converts the route variable pairs into a map. If the route has a
+// BuildVarsFunc, it is invoked.
+func (r *Route) prepareVars(pairs ...string) (map[string]string, error) {
+	m, err := mapFromPairsToString(pairs...)
+	if err != nil {
+		return nil, err
+	}
+	return r.buildVars(m), nil
+}
+
+func (r *Route) buildVars(m map[string]string) map[string]string {
+	if r.buildVarsFunc != nil {
+		m = r.buildVarsFunc(m)
+	}
+	return m
+}
diff --git a/vendor/github.com/gorilla/mux/test_helpers.go b/vendor/github.com/gorilla/mux/test_helpers.go
new file mode 100644
index 000000000..5f5c496de
--- /dev/null
+++ b/vendor/github.com/gorilla/mux/test_helpers.go
@@ -0,0 +1,19 @@
+// Copyright 2012 The Gorilla Authors. All rights reserved.
+// Use of this source code is governed by a BSD-style
+// license that can be found in the LICENSE file.
+
+package mux
+
+import "net/http"
+
+// SetURLVars sets the URL variables for the given request, to be accessed via
+// mux.Vars for testing route behaviour. Arguments are not modified; a shallow
+// copy is returned.
+//
+// This API should only be used for testing purposes; it provides a way to
+// inject variables into the request context. Alternatively, URL variables
+// can be set by making a route that captures the required variables,
+// starting a server and sending the request to that server.
+func SetURLVars(r *http.Request, val map[string]string) *http.Request {
+	return requestWithVars(r, val)
+}
diff --git a/vendor/k8s.io/client-go/util/certificate/csr/csr.go b/vendor/k8s.io/client-go/util/certificate/csr/csr.go
new file mode 100644
index 000000000..0390d1c02
--- /dev/null
+++ b/vendor/k8s.io/client-go/util/certificate/csr/csr.go
@@ -0,0 +1,364 @@
+/*
+Copyright 2016 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package csr
+
+import (
+	"context"
+	"crypto"
+	"crypto/x509"
+	"encoding/pem"
+	"fmt"
+	"reflect"
+	"time"
+
+	certificatesv1 "k8s.io/api/certificates/v1"
+	certificatesv1beta1 "k8s.io/api/certificates/v1beta1"
+	apierrors "k8s.io/apimachinery/pkg/api/errors"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/fields"
+	"k8s.io/apimachinery/pkg/runtime"
+	"k8s.io/apimachinery/pkg/types"
+	"k8s.io/apimachinery/pkg/util/wait"
+	"k8s.io/apimachinery/pkg/watch"
+	clientset "k8s.io/client-go/kubernetes"
+	"k8s.io/client-go/tools/cache"
+	watchtools "k8s.io/client-go/tools/watch"
+	certutil "k8s.io/client-go/util/cert"
+	"k8s.io/klog/v2"
+	"k8s.io/utils/pointer"
+)
+
+// RequestCertificate will either reuse an existing certificate signing request
+// (if this process has run before but not to completion) or create a new one
+// using the PEM-encoded CSR and send it to the API server. An optional
+// requestedDuration may be passed to set the spec.expirationSeconds field on
+// the CSR to control the lifetime of the issued certificate. This is not
+// guaranteed as the signer may choose to ignore the request.
+func RequestCertificate(client clientset.Interface, csrData []byte, name, signerName string, requestedDuration *time.Duration, usages []certificatesv1.KeyUsage, privateKey interface{}) (reqName string, reqUID types.UID, err error) {
+	csr := &certificatesv1.CertificateSigningRequest{
+		// Username, UID, Groups will be injected by API server.
+ TypeMeta: metav1.TypeMeta{Kind: "CertificateSigningRequest"}, + ObjectMeta: metav1.ObjectMeta{ + Name: name, + }, + Spec: certificatesv1.CertificateSigningRequestSpec{ + Request: csrData, + Usages: usages, + SignerName: signerName, + }, + } + if len(csr.Name) == 0 { + csr.GenerateName = "csr-" + } + if requestedDuration != nil { + csr.Spec.ExpirationSeconds = DurationToExpirationSeconds(*requestedDuration) + } + + reqName, reqUID, err = create(client, csr) + switch { + case err == nil: + return reqName, reqUID, err + + case apierrors.IsAlreadyExists(err) && len(name) > 0: + klog.Infof("csr for this node already exists, reusing") + req, err := get(client, name) + if err != nil { + return "", "", formatError("cannot retrieve certificate signing request: %v", err) + } + if err := ensureCompatible(req, csr, privateKey); err != nil { + return "", "", fmt.Errorf("retrieved csr is not compatible: %v", err) + } + klog.Infof("csr for this node is still valid") + return req.Name, req.UID, nil + + default: + return "", "", formatError("cannot create certificate signing request: %v", err) + } +} + +func DurationToExpirationSeconds(duration time.Duration) *int32 { + return pointer.Int32(int32(duration / time.Second)) +} + +func ExpirationSecondsToDuration(expirationSeconds int32) time.Duration { + return time.Duration(expirationSeconds) * time.Second +} + +func get(client clientset.Interface, name string) (*certificatesv1.CertificateSigningRequest, error) { + v1req, v1err := client.CertificatesV1().CertificateSigningRequests().Get(context.TODO(), name, metav1.GetOptions{}) + if v1err == nil || !apierrors.IsNotFound(v1err) { + return v1req, v1err + } + + v1beta1req, v1beta1err := client.CertificatesV1beta1().CertificateSigningRequests().Get(context.TODO(), name, metav1.GetOptions{}) + if v1beta1err != nil { + return nil, v1beta1err + } + + v1req = &certificatesv1.CertificateSigningRequest{ + ObjectMeta: v1beta1req.ObjectMeta, + Spec: certificatesv1.CertificateSigningRequestSpec{ + Request: v1beta1req.Spec.Request, + }, + } + if v1beta1req.Spec.SignerName != nil { + v1req.Spec.SignerName = *v1beta1req.Spec.SignerName + } + for _, usage := range v1beta1req.Spec.Usages { + v1req.Spec.Usages = append(v1req.Spec.Usages, certificatesv1.KeyUsage(usage)) + } + return v1req, nil +} + +func create(client clientset.Interface, csr *certificatesv1.CertificateSigningRequest) (reqName string, reqUID types.UID, err error) { + // only attempt a create via v1 if we specified signerName and usages and are not using the legacy unknown signerName + if len(csr.Spec.Usages) > 0 && len(csr.Spec.SignerName) > 0 && csr.Spec.SignerName != "kubernetes.io/legacy-unknown" { + v1req, v1err := client.CertificatesV1().CertificateSigningRequests().Create(context.TODO(), csr, metav1.CreateOptions{}) + switch { + case v1err != nil && apierrors.IsNotFound(v1err): + // v1 CSR API was not found, continue to try v1beta1 + + case v1err != nil: + // other creation error + return "", "", v1err + + default: + // success + return v1req.Name, v1req.UID, v1err + } + } + + // convert relevant bits to v1beta1 + v1beta1csr := &certificatesv1beta1.CertificateSigningRequest{ + ObjectMeta: csr.ObjectMeta, + Spec: certificatesv1beta1.CertificateSigningRequestSpec{ + SignerName: &csr.Spec.SignerName, + Request: csr.Spec.Request, + }, + } + for _, usage := range csr.Spec.Usages { + v1beta1csr.Spec.Usages = append(v1beta1csr.Spec.Usages, certificatesv1beta1.KeyUsage(usage)) + } + + // create v1beta1 + v1beta1req, v1beta1err := 
client.CertificatesV1beta1().CertificateSigningRequests().Create(context.TODO(), v1beta1csr, metav1.CreateOptions{}) + if v1beta1err != nil { + return "", "", v1beta1err + } + return v1beta1req.Name, v1beta1req.UID, nil +} + +// WaitForCertificate waits for a certificate to be issued until timeout, or returns an error. +func WaitForCertificate(ctx context.Context, client clientset.Interface, reqName string, reqUID types.UID) (certData []byte, err error) { + fieldSelector := fields.OneTermEqualSelector("metadata.name", reqName).String() + + var lw *cache.ListWatch + var obj runtime.Object + for { + // see if the v1 API is available + if _, err := client.CertificatesV1().CertificateSigningRequests().List(ctx, metav1.ListOptions{FieldSelector: fieldSelector}); err == nil { + // watch v1 objects + obj = &certificatesv1.CertificateSigningRequest{} + lw = &cache.ListWatch{ + ListFunc: func(options metav1.ListOptions) (runtime.Object, error) { + options.FieldSelector = fieldSelector + return client.CertificatesV1().CertificateSigningRequests().List(ctx, options) + }, + WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) { + options.FieldSelector = fieldSelector + return client.CertificatesV1().CertificateSigningRequests().Watch(ctx, options) + }, + } + break + } else { + klog.V(2).Infof("error fetching v1 certificate signing request: %v", err) + } + + // return if we've timed out + if err := ctx.Err(); err != nil { + return nil, wait.ErrWaitTimeout + } + + // see if the v1beta1 API is available + if _, err := client.CertificatesV1beta1().CertificateSigningRequests().List(ctx, metav1.ListOptions{FieldSelector: fieldSelector}); err == nil { + // watch v1beta1 objects + obj = &certificatesv1beta1.CertificateSigningRequest{} + lw = &cache.ListWatch{ + ListFunc: func(options metav1.ListOptions) (runtime.Object, error) { + options.FieldSelector = fieldSelector + return client.CertificatesV1beta1().CertificateSigningRequests().List(ctx, options) + }, + WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) { + options.FieldSelector = fieldSelector + return client.CertificatesV1beta1().CertificateSigningRequests().Watch(ctx, options) + }, + } + break + } else { + klog.V(2).Infof("error fetching v1beta1 certificate signing request: %v", err) + } + + // return if we've timed out + if err := ctx.Err(); err != nil { + return nil, wait.ErrWaitTimeout + } + + // wait and try again + time.Sleep(time.Second) + } + + var issuedCertificate []byte + _, err = watchtools.UntilWithSync( + ctx, + lw, + obj, + nil, + func(event watch.Event) (bool, error) { + switch event.Type { + case watch.Modified, watch.Added: + case watch.Deleted: + return false, fmt.Errorf("csr %q was deleted", reqName) + default: + return false, nil + } + + switch csr := event.Object.(type) { + case *certificatesv1.CertificateSigningRequest: + if csr.UID != reqUID { + return false, fmt.Errorf("csr %q changed UIDs", csr.Name) + } + approved := false + for _, c := range csr.Status.Conditions { + if c.Type == certificatesv1.CertificateDenied { + return false, fmt.Errorf("certificate signing request is denied, reason: %v, message: %v", c.Reason, c.Message) + } + if c.Type == certificatesv1.CertificateFailed { + return false, fmt.Errorf("certificate signing request failed, reason: %v, message: %v", c.Reason, c.Message) + } + if c.Type == certificatesv1.CertificateApproved { + approved = true + } + } + if approved { + if len(csr.Status.Certificate) > 0 { + klog.V(2).Infof("certificate signing request %s is issued", 
csr.Name)
+					issuedCertificate = csr.Status.Certificate
+					return true, nil
+				}
+				klog.V(2).Infof("certificate signing request %s is approved, waiting to be issued", csr.Name)
+			}
+
+		case *certificatesv1beta1.CertificateSigningRequest:
+			if csr.UID != reqUID {
+				return false, fmt.Errorf("csr %q changed UIDs", csr.Name)
+			}
+			approved := false
+			for _, c := range csr.Status.Conditions {
+				if c.Type == certificatesv1beta1.CertificateDenied {
+					return false, fmt.Errorf("certificate signing request is denied, reason: %v, message: %v", c.Reason, c.Message)
+				}
+				if c.Type == certificatesv1beta1.CertificateFailed {
+					return false, fmt.Errorf("certificate signing request failed, reason: %v, message: %v", c.Reason, c.Message)
+				}
+				if c.Type == certificatesv1beta1.CertificateApproved {
+					approved = true
+				}
+			}
+			if approved {
+				if len(csr.Status.Certificate) > 0 {
+					klog.V(2).Infof("certificate signing request %s is issued", csr.Name)
+					issuedCertificate = csr.Status.Certificate
+					return true, nil
+				}
+				klog.V(2).Infof("certificate signing request %s is approved, waiting to be issued", csr.Name)
+			}
+
+		default:
+			return false, fmt.Errorf("unexpected type received: %T", event.Object)
+		}
+
+		return false, nil
+	},
+	)
+	if err == wait.ErrWaitTimeout {
+		return nil, wait.ErrWaitTimeout
+	}
+	if err != nil {
+		return nil, formatError("cannot watch on the certificate signing request: %v", err)
+	}
+
+	return issuedCertificate, nil
+}
+
+// ensureCompatible ensures that a CSR object is compatible with an original CSR
+func ensureCompatible(new, orig *certificatesv1.CertificateSigningRequest, privateKey interface{}) error {
+	newCSR, err := parseCSR(new.Spec.Request)
+	if err != nil {
+		return fmt.Errorf("unable to parse new csr: %v", err)
+	}
+	origCSR, err := parseCSR(orig.Spec.Request)
+	if err != nil {
+		return fmt.Errorf("unable to parse original csr: %v", err)
+	}
+	if !reflect.DeepEqual(newCSR.Subject, origCSR.Subject) {
+		return fmt.Errorf("csr subjects differ: new: %#v, orig: %#v", newCSR.Subject, origCSR.Subject)
+	}
+	if len(new.Spec.SignerName) > 0 && len(orig.Spec.SignerName) > 0 && new.Spec.SignerName != orig.Spec.SignerName {
+		return fmt.Errorf("csr signerNames differ: new: %q, orig: %q", new.Spec.SignerName, orig.Spec.SignerName)
+	}
+	signer, ok := privateKey.(crypto.Signer)
+	if !ok {
+		return fmt.Errorf("privateKey is not a signer")
+	}
+	newCSR.PublicKey = signer.Public()
+	if err := newCSR.CheckSignature(); err != nil {
+		return fmt.Errorf("error validating signature of new CSR against old key: %v", err)
+	}
+	if len(new.Status.Certificate) > 0 {
+		certs, err := certutil.ParseCertsPEM(new.Status.Certificate)
+		if err != nil {
+			return fmt.Errorf("error parsing signed certificate for CSR: %v", err)
+		}
+		now := time.Now()
+		for _, cert := range certs {
+			if now.After(cert.NotAfter) {
+				return fmt.Errorf("one of the certificates for the CSR has expired: %s", cert.NotAfter)
+			}
+		}
+	}
+	return nil
+}
+
+// formatError preserves the type of an API message but alters the message. Expects
+// a single argument format string, and returns the wrapped error.
+func formatError(format string, err error) error {
+	if s, ok := err.(apierrors.APIStatus); ok {
+		se := &apierrors.StatusError{ErrStatus: s.Status()}
+		se.ErrStatus.Message = fmt.Sprintf(format, se.ErrStatus.Message)
+		return se
+	}
+	return fmt.Errorf(format, err)
+}
+
+// parseCSR extracts the CSR from the API object and decodes it.
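+// It expects a single PEM block of type "CERTIFICATE REQUEST"; any other PEM
+// type is rejected. A sketch of producing compatible input with the standard
+// library (key is assumed to be a crypto.Signer created elsewhere):
+//
+//	der, _ := x509.CreateCertificateRequest(rand.Reader, &x509.CertificateRequest{}, key)
+//	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der})
+//	req, err := parseCSR(pemBytes)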
+func parseCSR(pemData []byte) (*x509.CertificateRequest, error) { + // extract PEM from request object + block, _ := pem.Decode(pemData) + if block == nil || block.Type != "CERTIFICATE REQUEST" { + return nil, fmt.Errorf("PEM block type must be CERTIFICATE REQUEST") + } + return x509.ParseCertificateRequest(block.Bytes) +} diff --git a/vendor/k8s.io/kubectl/pkg/apps/kind_visitor.go b/vendor/k8s.io/kubectl/pkg/apps/kind_visitor.go new file mode 100644 index 000000000..931c63b18 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/apps/kind_visitor.go @@ -0,0 +1,75 @@ +/* +Copyright 2017 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package apps + +import ( + "fmt" + + "k8s.io/apimachinery/pkg/runtime/schema" +) + +// KindVisitor is used with GroupKindElement to call a particular function depending on the +// Kind of a schema.GroupKind +type KindVisitor interface { + VisitDaemonSet(kind GroupKindElement) + VisitDeployment(kind GroupKindElement) + VisitJob(kind GroupKindElement) + VisitPod(kind GroupKindElement) + VisitReplicaSet(kind GroupKindElement) + VisitReplicationController(kind GroupKindElement) + VisitStatefulSet(kind GroupKindElement) + VisitCronJob(kind GroupKindElement) +} + +// GroupKindElement defines a Kubernetes API group elem +type GroupKindElement schema.GroupKind + +// Accept calls the Visit method on visitor that corresponds to elem's Kind +func (elem GroupKindElement) Accept(visitor KindVisitor) error { + switch { + case elem.GroupMatch("apps", "extensions") && elem.Kind == "DaemonSet": + visitor.VisitDaemonSet(elem) + case elem.GroupMatch("apps", "extensions") && elem.Kind == "Deployment": + visitor.VisitDeployment(elem) + case elem.GroupMatch("batch") && elem.Kind == "Job": + visitor.VisitJob(elem) + case elem.GroupMatch("", "core") && elem.Kind == "Pod": + visitor.VisitPod(elem) + case elem.GroupMatch("apps", "extensions") && elem.Kind == "ReplicaSet": + visitor.VisitReplicaSet(elem) + case elem.GroupMatch("", "core") && elem.Kind == "ReplicationController": + visitor.VisitReplicationController(elem) + case elem.GroupMatch("apps") && elem.Kind == "StatefulSet": + visitor.VisitStatefulSet(elem) + case elem.GroupMatch("batch") && elem.Kind == "CronJob": + visitor.VisitCronJob(elem) + default: + return fmt.Errorf("no visitor method exists for %v", elem) + } + return nil +} + +// GroupMatch returns true if and only if elem's group matches one +// of the group arguments +func (elem GroupKindElement) GroupMatch(groups ...string) bool { + for _, g := range groups { + if elem.Group == g { + return true + } + } + return false +} diff --git a/vendor/k8s.io/kubectl/pkg/cmd/apiresources/apiresources.go b/vendor/k8s.io/kubectl/pkg/cmd/apiresources/apiresources.go new file mode 100644 index 000000000..6a02ac879 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/cmd/apiresources/apiresources.go @@ -0,0 +1,292 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package apiresources + +import ( + "fmt" + "io" + "sort" + "strings" + + "github.com/spf13/cobra" + + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime/schema" + "k8s.io/apimachinery/pkg/util/errors" + "k8s.io/apimachinery/pkg/util/sets" + "k8s.io/cli-runtime/pkg/genericclioptions" + "k8s.io/cli-runtime/pkg/printers" + "k8s.io/client-go/discovery" + cmdutil "k8s.io/kubectl/pkg/cmd/util" + "k8s.io/kubectl/pkg/util/i18n" + "k8s.io/kubectl/pkg/util/templates" +) + +var ( + apiresourcesExample = templates.Examples(` + # Print the supported API resources + kubectl api-resources + + # Print the supported API resources with more information + kubectl api-resources -o wide + + # Print the supported API resources sorted by a column + kubectl api-resources --sort-by=name + + # Print the supported namespaced resources + kubectl api-resources --namespaced=true + + # Print the supported non-namespaced resources + kubectl api-resources --namespaced=false + + # Print the supported API resources with a specific APIGroup + kubectl api-resources --api-group=rbac.authorization.k8s.io`) +) + +// APIResourceOptions is the start of the data required to perform the operation. +// As new fields are added, add them here instead of referencing the cmd.Flags() +type APIResourceOptions struct { + Output string + SortBy string + APIGroup string + Namespaced bool + Verbs []string + NoHeaders bool + Cached bool + Categories []string + + groupChanged bool + nsChanged bool + + discoveryClient discovery.CachedDiscoveryInterface + + genericclioptions.IOStreams +} + +// groupResource contains the APIGroup and APIResource +type groupResource struct { + APIGroup string + APIGroupVersion string + APIResource metav1.APIResource +} + +// NewAPIResourceOptions creates the options for APIResource +func NewAPIResourceOptions(ioStreams genericclioptions.IOStreams) *APIResourceOptions { + return &APIResourceOptions{ + IOStreams: ioStreams, + Namespaced: true, + } +} + +// NewCmdAPIResources creates the `api-resources` command +func NewCmdAPIResources(restClientGetter genericclioptions.RESTClientGetter, ioStreams genericclioptions.IOStreams) *cobra.Command { + o := NewAPIResourceOptions(ioStreams) + + cmd := &cobra.Command{ + Use: "api-resources", + Short: i18n.T("Print the supported API resources on the server"), + Long: i18n.T("Print the supported API resources on the server."), + Example: apiresourcesExample, + Run: func(cmd *cobra.Command, args []string) { + cmdutil.CheckErr(o.Complete(restClientGetter, cmd, args)) + cmdutil.CheckErr(o.Validate()) + cmdutil.CheckErr(o.RunAPIResources()) + }, + } + + cmd.Flags().BoolVar(&o.NoHeaders, "no-headers", o.NoHeaders, "When using the default or custom-column output format, don't print headers (default print headers).") + cmd.Flags().StringVarP(&o.Output, "output", "o", o.Output, `Output format. 
One of: (wide, name).`)
+
+	cmd.Flags().StringVar(&o.APIGroup, "api-group", o.APIGroup, "Limit to resources in the specified API group.")
+	cmd.Flags().BoolVar(&o.Namespaced, "namespaced", o.Namespaced, "If false, non-namespaced resources will be returned; otherwise, namespaced resources are returned by default.")
+	cmd.Flags().StringSliceVar(&o.Verbs, "verbs", o.Verbs, "Limit to resources that support the specified verbs.")
+	cmd.Flags().StringVar(&o.SortBy, "sort-by", o.SortBy, "If non-empty, sort list of resources using specified field. The field can be either 'name' or 'kind'.")
+	cmd.Flags().BoolVar(&o.Cached, "cached", o.Cached, "Use the cached list of resources if available.")
+	cmd.Flags().StringSliceVar(&o.Categories, "categories", o.Categories, "Limit to resources that belong to the specified categories.")
+	return cmd
+}
+
+// Validate checks the APIResourceOptions to see if there is sufficient information to run the command
+func (o *APIResourceOptions) Validate() error {
+	supportedOutputTypes := sets.NewString("", "wide", "name")
+	if !supportedOutputTypes.Has(o.Output) {
+		return fmt.Errorf("--output %v is not available", o.Output)
+	}
+	supportedSortTypes := sets.NewString("", "name", "kind")
+	if len(o.SortBy) > 0 {
+		if !supportedSortTypes.Has(o.SortBy) {
+			return fmt.Errorf("--sort-by accepts only name or kind")
+		}
+	}
+	return nil
+}
+
+// Complete adapts from the command line args and validates them
+func (o *APIResourceOptions) Complete(restClientGetter genericclioptions.RESTClientGetter, cmd *cobra.Command, args []string) error {
+	if len(args) != 0 {
+		return cmdutil.UsageErrorf(cmd, "unexpected arguments: %v", args)
+	}
+
+	discoveryClient, err := restClientGetter.ToDiscoveryClient()
+	if err != nil {
+		return err
+	}
+	o.discoveryClient = discoveryClient
+
+	o.groupChanged = cmd.Flags().Changed("api-group")
+	o.nsChanged = cmd.Flags().Changed("namespaced")
+
+	return nil
+}
+
+// RunAPIResources does the work
+func (o *APIResourceOptions) RunAPIResources() error {
+	w := printers.GetNewTabWriter(o.Out)
+	defer w.Flush()
+
+	if !o.Cached {
+		// Always request fresh data from the server
+		o.discoveryClient.Invalidate()
+	}
+
+	errs := []error{}
+	lists, err := o.discoveryClient.ServerPreferredResources()
+	if err != nil {
+		errs = append(errs, err)
+	}
+
+	resources := []groupResource{}
+
+	for _, list := range lists {
+		if len(list.APIResources) == 0 {
+			continue
+		}
+		gv, err := schema.ParseGroupVersion(list.GroupVersion)
+		if err != nil {
+			continue
+		}
+		for _, resource := range list.APIResources {
+			if len(resource.Verbs) == 0 {
+				continue
+			}
+			// filter apiGroup
+			if o.groupChanged && o.APIGroup != gv.Group {
+				continue
+			}
+			// filter namespaced
+			if o.nsChanged && o.Namespaced != resource.Namespaced {
+				continue
+			}
+			// filter to resources that support the specified verbs
+			if len(o.Verbs) > 0 && !sets.NewString(resource.Verbs...).HasAll(o.Verbs...) {
+				continue
+			}
+			// filter to resources that belong to the specified categories
+			if len(o.Categories) > 0 && !sets.NewString(resource.Categories...).HasAll(o.Categories...) 
{ + continue + } + resources = append(resources, groupResource{ + APIGroup: gv.Group, + APIGroupVersion: gv.String(), + APIResource: resource, + }) + } + } + + if o.NoHeaders == false && o.Output != "name" { + if err = printContextHeaders(w, o.Output); err != nil { + return err + } + } + + sort.Stable(sortableResource{resources, o.SortBy}) + for _, r := range resources { + switch o.Output { + case "name": + name := r.APIResource.Name + if len(r.APIGroup) > 0 { + name += "." + r.APIGroup + } + if _, err := fmt.Fprintf(w, "%s\n", name); err != nil { + errs = append(errs, err) + } + case "wide": + if _, err := fmt.Fprintf(w, "%s\t%s\t%s\t%v\t%s\t%v\t%v\n", + r.APIResource.Name, + strings.Join(r.APIResource.ShortNames, ","), + r.APIGroupVersion, + r.APIResource.Namespaced, + r.APIResource.Kind, + strings.Join(r.APIResource.Verbs, ","), + strings.Join(r.APIResource.Categories, ",")); err != nil { + errs = append(errs, err) + } + case "": + if _, err := fmt.Fprintf(w, "%s\t%s\t%s\t%v\t%s\n", + r.APIResource.Name, + strings.Join(r.APIResource.ShortNames, ","), + r.APIGroupVersion, + r.APIResource.Namespaced, + r.APIResource.Kind); err != nil { + errs = append(errs, err) + } + } + } + + if len(errs) > 0 { + return errors.NewAggregate(errs) + } + return nil +} + +func printContextHeaders(out io.Writer, output string) error { + columnNames := []string{"NAME", "SHORTNAMES", "APIVERSION", "NAMESPACED", "KIND"} + if output == "wide" { + columnNames = append(columnNames, "VERBS", "CATEGORIES") + } + _, err := fmt.Fprintf(out, "%s\n", strings.Join(columnNames, "\t")) + return err +} + +type sortableResource struct { + resources []groupResource + sortBy string +} + +func (s sortableResource) Len() int { return len(s.resources) } +func (s sortableResource) Swap(i, j int) { + s.resources[i], s.resources[j] = s.resources[j], s.resources[i] +} +func (s sortableResource) Less(i, j int) bool { + ret := strings.Compare(s.compareValues(i, j)) + if ret > 0 { + return false + } else if ret == 0 { + return strings.Compare(s.resources[i].APIResource.Name, s.resources[j].APIResource.Name) < 0 + } + return true +} + +func (s sortableResource) compareValues(i, j int) (string, string) { + switch s.sortBy { + case "name": + return s.resources[i].APIResource.Name, s.resources[j].APIResource.Name + case "kind": + return s.resources[i].APIResource.Kind, s.resources[j].APIResource.Kind + } + return s.resources[i].APIGroup, s.resources[j].APIGroup +} diff --git a/vendor/k8s.io/kubectl/pkg/cmd/apiresources/apiversions.go b/vendor/k8s.io/kubectl/pkg/cmd/apiresources/apiversions.go new file mode 100644 index 000000000..5003a9476 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/cmd/apiresources/apiversions.go @@ -0,0 +1,95 @@ +/* +Copyright 2014 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package apiresources + +import ( + "fmt" + "sort" + + "github.com/spf13/cobra" + + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/cli-runtime/pkg/genericclioptions" + "k8s.io/client-go/discovery" + cmdutil "k8s.io/kubectl/pkg/cmd/util" + "k8s.io/kubectl/pkg/util/i18n" + "k8s.io/kubectl/pkg/util/templates" +) + +var ( + apiversionsExample = templates.Examples(i18n.T(` + # Print the supported API versions + kubectl api-versions`)) +) + +// APIVersionsOptions have the data required for API versions +type APIVersionsOptions struct { + discoveryClient discovery.CachedDiscoveryInterface + + genericclioptions.IOStreams +} + +// NewAPIVersionsOptions creates the options for APIVersions +func NewAPIVersionsOptions(ioStreams genericclioptions.IOStreams) *APIVersionsOptions { + return &APIVersionsOptions{ + IOStreams: ioStreams, + } +} + +// NewCmdAPIVersions creates the `api-versions` command +func NewCmdAPIVersions(restClientGetter genericclioptions.RESTClientGetter, ioStreams genericclioptions.IOStreams) *cobra.Command { + o := NewAPIVersionsOptions(ioStreams) + cmd := &cobra.Command{ + Use: "api-versions", + Short: i18n.T("Print the supported API versions on the server, in the form of \"group/version\""), + Long: i18n.T("Print the supported API versions on the server, in the form of \"group/version\"."), + Example: apiversionsExample, + DisableFlagsInUseLine: true, + Run: func(cmd *cobra.Command, args []string) { + cmdutil.CheckErr(o.Complete(restClientGetter, cmd, args)) + cmdutil.CheckErr(o.RunAPIVersions()) + }, + } + return cmd +} + +// Complete adapts from the command line args and factory to the data required +func (o *APIVersionsOptions) Complete(restClientGetter genericclioptions.RESTClientGetter, cmd *cobra.Command, args []string) error { + if len(args) != 0 { + return cmdutil.UsageErrorf(cmd, "unexpected arguments: %v", args) + } + var err error + o.discoveryClient, err = restClientGetter.ToDiscoveryClient() + return err +} + +// RunAPIVersions does the work +func (o *APIVersionsOptions) RunAPIVersions() error { + // Always request fresh data from the server + o.discoveryClient.Invalidate() + + groupList, err := o.discoveryClient.ServerGroups() + if err != nil { + return fmt.Errorf("couldn't get available api versions from server: %v", err) + } + apiVersions := metav1.ExtractGroupVersions(groupList) + sort.Strings(apiVersions) + for _, v := range apiVersions { + fmt.Fprintln(o.Out, v) + } + return nil +} diff --git a/vendor/k8s.io/kubectl/pkg/cmd/logs/logs.go b/vendor/k8s.io/kubectl/pkg/cmd/logs/logs.go new file mode 100644 index 000000000..bd901a9c0 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/cmd/logs/logs.go @@ -0,0 +1,463 @@ +/* +Copyright 2014 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package logs + +import ( + "bufio" + "context" + "errors" + "fmt" + "io" + "regexp" + "sync" + "time" + + "github.com/spf13/cobra" + + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/cli-runtime/pkg/genericclioptions" + "k8s.io/client-go/rest" + cmdutil "k8s.io/kubectl/pkg/cmd/util" + "k8s.io/kubectl/pkg/polymorphichelpers" + "k8s.io/kubectl/pkg/scheme" + "k8s.io/kubectl/pkg/util" + "k8s.io/kubectl/pkg/util/completion" + "k8s.io/kubectl/pkg/util/i18n" + "k8s.io/kubectl/pkg/util/templates" +) + +const ( + logsUsageStr = "logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER]" +) + +var ( + logsLong = templates.LongDesc(i18n.T(` + Print the logs for a container in a pod or specified resource. + If the pod has only one container, the container name is optional.`)) + + logsExample = templates.Examples(i18n.T(` + # Return snapshot logs from pod nginx with only one container + kubectl logs nginx + + # Return snapshot logs from pod nginx with multi containers + kubectl logs nginx --all-containers=true + + # Return snapshot logs from all containers in pods defined by label app=nginx + kubectl logs -l app=nginx --all-containers=true + + # Return snapshot of previous terminated ruby container logs from pod web-1 + kubectl logs -p -c ruby web-1 + + # Begin streaming the logs of the ruby container in pod web-1 + kubectl logs -f -c ruby web-1 + + # Begin streaming the logs from all containers in pods defined by label app=nginx + kubectl logs -f -l app=nginx --all-containers=true + + # Display only the most recent 20 lines of output in pod nginx + kubectl logs --tail=20 nginx + + # Show all logs from pod nginx written in the last hour + kubectl logs --since=1h nginx + + # Show logs from a kubelet with an expired serving certificate + kubectl logs --insecure-skip-tls-verify-backend nginx + + # Return snapshot logs from first container of a job named hello + kubectl logs job/hello + + # Return snapshot logs from container nginx-1 of a deployment named nginx + kubectl logs deployment/nginx -c nginx-1`)) + + selectorTail int64 = 10 + logsUsageErrStr = fmt.Sprintf("expected '%s'.\nPOD or TYPE/NAME is a required argument for the logs command", logsUsageStr) +) + +const ( + defaultPodLogsTimeout = 20 * time.Second +) + +type LogsOptions struct { + Namespace string + ResourceArg string + AllContainers bool + Options runtime.Object + Resources []string + + ConsumeRequestFn func(rest.ResponseWrapper, io.Writer) error + + // PodLogOptions + SinceTime string + SinceSeconds time.Duration + Follow bool + Previous bool + Timestamps bool + IgnoreLogErrors bool + LimitBytes int64 + Tail int64 + Container string + InsecureSkipTLSVerifyBackend bool + + // whether or not a container name was given via --container + ContainerNameSpecified bool + Selector string + MaxFollowConcurrency int + Prefix bool + + Object runtime.Object + GetPodTimeout time.Duration + RESTClientGetter genericclioptions.RESTClientGetter + LogsForObject polymorphichelpers.LogsForObjectFunc + + genericclioptions.IOStreams + + TailSpecified bool + + containerNameFromRefSpecRegexp *regexp.Regexp +} + +func NewLogsOptions(streams genericclioptions.IOStreams, allContainers bool) *LogsOptions { + return &LogsOptions{ + IOStreams: streams, + AllContainers: allContainers, + Tail: -1, + MaxFollowConcurrency: 5, + + containerNameFromRefSpecRegexp: regexp.MustCompile(`spec\.(?:initContainers|containers|ephemeralContainers){(.+)}`), + } +} + +// NewCmdLogs creates a new pod logs command 
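+//
+// A minimal wiring sketch; the factory f and command root here are
+// illustrative and not part of this file:
+//
+//	streams := genericclioptions.IOStreams{In: os.Stdin, Out: os.Stdout, ErrOut: os.Stderr}
+//	rootCmd.AddCommand(NewCmdLogs(f, streams))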
+func NewCmdLogs(f cmdutil.Factory, streams genericclioptions.IOStreams) *cobra.Command {
+	o := NewLogsOptions(streams, false)
+
+	cmd := &cobra.Command{
+		Use:                   logsUsageStr,
+		DisableFlagsInUseLine: true,
+		Short:                 i18n.T("Print the logs for a container in a pod"),
+		Long:                  logsLong,
+		Example:               logsExample,
+		ValidArgsFunction:     completion.PodResourceNameAndContainerCompletionFunc(f),
+		Run: func(cmd *cobra.Command, args []string) {
+			cmdutil.CheckErr(o.Complete(f, cmd, args))
+			cmdutil.CheckErr(o.Validate())
+			cmdutil.CheckErr(o.RunLogs())
+		},
+	}
+	o.AddFlags(cmd)
+	return cmd
+}
+
+func (o *LogsOptions) AddFlags(cmd *cobra.Command) {
+	cmd.Flags().BoolVar(&o.AllContainers, "all-containers", o.AllContainers, "Get all containers' logs in the pod(s).")
+	cmd.Flags().BoolVarP(&o.Follow, "follow", "f", o.Follow, "Specify if the logs should be streamed.")
+	cmd.Flags().BoolVar(&o.Timestamps, "timestamps", o.Timestamps, "Include timestamps on each line in the log output")
+	cmd.Flags().Int64Var(&o.LimitBytes, "limit-bytes", o.LimitBytes, "Maximum bytes of logs to return. Defaults to no limit.")
+	cmd.Flags().BoolVarP(&o.Previous, "previous", "p", o.Previous, "If true, print the logs for the previous instance of the container in a pod if it exists.")
+	cmd.Flags().Int64Var(&o.Tail, "tail", o.Tail, "Lines of recent log file to display. Defaults to -1 with no selector, showing all log lines; otherwise 10, if a selector is provided.")
+	cmd.Flags().BoolVar(&o.IgnoreLogErrors, "ignore-errors", o.IgnoreLogErrors, "If watching / following pod logs, allow for any errors that occur to be non-fatal")
+	cmd.Flags().StringVar(&o.SinceTime, "since-time", o.SinceTime, i18n.T("Only return logs after a specific date (RFC3339). Defaults to all logs. Only one of since-time / since may be used."))
+	cmd.Flags().DurationVar(&o.SinceSeconds, "since", o.SinceSeconds, "Only return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to all logs. Only one of since-time / since may be used.")
+	cmd.Flags().StringVarP(&o.Container, "container", "c", o.Container, "Print the logs of this container")
+	cmd.Flags().BoolVar(&o.InsecureSkipTLSVerifyBackend, "insecure-skip-tls-verify-backend", o.InsecureSkipTLSVerifyBackend,
+		"Skip verifying the identity of the kubelet that logs are requested from. In theory, an attacker could provide invalid log content back. You might want to use this if your kubelet serving certificates have expired.")
+	cmdutil.AddPodRunningTimeoutFlag(cmd, defaultPodLogsTimeout)
+	cmdutil.AddLabelSelectorFlagVar(cmd, &o.Selector)
+	cmd.Flags().IntVar(&o.MaxFollowConcurrency, "max-log-requests", o.MaxFollowConcurrency, "Specify maximum number of concurrent logs to follow when using a selector. 
Defaults to 5.") + cmd.Flags().BoolVar(&o.Prefix, "prefix", o.Prefix, "Prefix each log line with the log source (pod name and container name)") +} + +func (o *LogsOptions) ToLogOptions() (*corev1.PodLogOptions, error) { + logOptions := &corev1.PodLogOptions{ + Container: o.Container, + Follow: o.Follow, + Previous: o.Previous, + Timestamps: o.Timestamps, + InsecureSkipTLSVerifyBackend: o.InsecureSkipTLSVerifyBackend, + } + + if len(o.SinceTime) > 0 { + t, err := util.ParseRFC3339(o.SinceTime, metav1.Now) + if err != nil { + return nil, err + } + + logOptions.SinceTime = &t + } + + if o.LimitBytes != 0 { + logOptions.LimitBytes = &o.LimitBytes + } + + if o.SinceSeconds != 0 { + // round up to the nearest second + sec := int64(o.SinceSeconds.Round(time.Second).Seconds()) + logOptions.SinceSeconds = &sec + } + + if len(o.Selector) > 0 && o.Tail == -1 && !o.TailSpecified { + logOptions.TailLines = &selectorTail + } else if o.Tail != -1 { + logOptions.TailLines = &o.Tail + } + + return logOptions, nil +} + +func (o *LogsOptions) Complete(f cmdutil.Factory, cmd *cobra.Command, args []string) error { + o.ContainerNameSpecified = cmd.Flag("container").Changed + o.TailSpecified = cmd.Flag("tail").Changed + o.Resources = args + + switch len(args) { + case 0: + if len(o.Selector) == 0 { + return cmdutil.UsageErrorf(cmd, "%s", logsUsageErrStr) + } + case 1: + o.ResourceArg = args[0] + if len(o.Selector) != 0 { + return cmdutil.UsageErrorf(cmd, "only a selector (-l) or a POD name is allowed") + } + case 2: + o.ResourceArg = args[0] + o.Container = args[1] + default: + return cmdutil.UsageErrorf(cmd, "%s", logsUsageErrStr) + } + var err error + o.Namespace, _, err = f.ToRawKubeConfigLoader().Namespace() + if err != nil { + return err + } + + o.ConsumeRequestFn = DefaultConsumeRequest + + o.GetPodTimeout, err = cmdutil.GetPodRunningTimeoutFlag(cmd) + if err != nil { + return err + } + + o.Options, err = o.ToLogOptions() + if err != nil { + return err + } + + o.RESTClientGetter = f + o.LogsForObject = polymorphichelpers.LogsForObjectFn + + if o.Object == nil { + builder := f.NewBuilder(). + WithScheme(scheme.Scheme, scheme.Scheme.PrioritizedVersionsAllGroups()...). + NamespaceParam(o.Namespace).DefaultNamespace(). 
+ SingleResourceType() + if o.ResourceArg != "" { + builder.ResourceNames("pods", o.ResourceArg) + } + if o.Selector != "" { + builder.ResourceTypes("pods").LabelSelectorParam(o.Selector) + } + infos, err := builder.Do().Infos() + if err != nil { + return err + } + if o.Selector == "" && len(infos) != 1 { + return errors.New("expected a resource") + } + o.Object = infos[0].Object + if o.Selector != "" && len(o.Object.(*corev1.PodList).Items) == 0 { + fmt.Fprintf(o.ErrOut, "No resources found in %s namespace.\n", o.Namespace) + } + } + + return nil +} + +func (o LogsOptions) Validate() error { + if len(o.SinceTime) > 0 && o.SinceSeconds != 0 { + return fmt.Errorf("at most one of `sinceTime` or `sinceSeconds` may be specified") + } + + logsOptions, ok := o.Options.(*corev1.PodLogOptions) + if !ok { + return errors.New("unexpected logs options object") + } + if o.AllContainers && len(logsOptions.Container) > 0 { + return fmt.Errorf("--all-containers=true should not be specified with container name %s", logsOptions.Container) + } + + if o.ContainerNameSpecified && len(o.Resources) == 2 { + return fmt.Errorf("only one of -c or an inline [CONTAINER] arg is allowed") + } + + if o.LimitBytes < 0 { + return fmt.Errorf("--limit-bytes must be greater than 0") + } + + if logsOptions.SinceSeconds != nil && *logsOptions.SinceSeconds < int64(0) { + return fmt.Errorf("--since must be greater than 0") + } + + if logsOptions.TailLines != nil && *logsOptions.TailLines < -1 { + return fmt.Errorf("--tail must be greater than or equal to -1") + } + + return nil +} + +// RunLogs retrieves a pod log +func (o LogsOptions) RunLogs() error { + requests, err := o.LogsForObject(o.RESTClientGetter, o.Object, o.Options, o.GetPodTimeout, o.AllContainers) + if err != nil { + return err + } + + if o.Follow && len(requests) > 1 { + if len(requests) > o.MaxFollowConcurrency { + return fmt.Errorf( + "you are attempting to follow %d log streams, but maximum allowed concurrency is %d, use --max-log-requests to increase the limit", + len(requests), o.MaxFollowConcurrency, + ) + } + + return o.parallelConsumeRequest(requests) + } + + return o.sequentialConsumeRequest(requests) +} + +func (o LogsOptions) parallelConsumeRequest(requests map[corev1.ObjectReference]rest.ResponseWrapper) error { + reader, writer := io.Pipe() + wg := &sync.WaitGroup{} + wg.Add(len(requests)) + for objRef, request := range requests { + go func(objRef corev1.ObjectReference, request rest.ResponseWrapper) { + defer wg.Done() + out := o.addPrefixIfNeeded(objRef, writer) + if err := o.ConsumeRequestFn(request, out); err != nil { + if !o.IgnoreLogErrors { + writer.CloseWithError(err) + + // It's important to return here to propagate the error via the pipe + return + } + + fmt.Fprintf(writer, "error: %v\n", err) + } + + }(objRef, request) + } + + go func() { + wg.Wait() + writer.Close() + }() + + _, err := io.Copy(o.Out, reader) + return err +} + +func (o LogsOptions) sequentialConsumeRequest(requests map[corev1.ObjectReference]rest.ResponseWrapper) error { + for objRef, request := range requests { + out := o.addPrefixIfNeeded(objRef, o.Out) + if err := o.ConsumeRequestFn(request, out); err != nil { + if !o.IgnoreLogErrors { + return err + } + + fmt.Fprintf(o.Out, "error: %v\n", err) + } + } + + return nil +} + +func (o LogsOptions) addPrefixIfNeeded(ref corev1.ObjectReference, writer io.Writer) io.Writer { + if !o.Prefix || ref.FieldPath == "" || ref.Name == "" { + return writer + } + + // We rely on ref.FieldPath to contain a reference to a container + // 
including a container name (not an index) so we can get a container name
+	// without making an extra API request.
+	var containerName string
+	containerNameMatches := o.containerNameFromRefSpecRegexp.FindStringSubmatch(ref.FieldPath)
+	if len(containerNameMatches) == 2 {
+		containerName = containerNameMatches[1]
+	}
+
+	prefix := fmt.Sprintf("[pod/%s/%s] ", ref.Name, containerName)
+	return &prefixingWriter{
+		prefix: []byte(prefix),
+		writer: writer,
+	}
+}
+
+// DefaultConsumeRequest reads the data from request and writes into
+// the out writer. It buffers data from requests until the newline or io.EOF
+// occurs in the data, so it doesn't interleave logs sub-line
+// when running concurrently.
+//
+// A successful read returns err == nil, not err == io.EOF.
+// Because the function is defined to read from request until io.EOF, it does
+// not treat an io.EOF as an error to be reported.
+func DefaultConsumeRequest(request rest.ResponseWrapper, out io.Writer) error {
+	readCloser, err := request.Stream(context.TODO())
+	if err != nil {
+		return err
+	}
+	defer readCloser.Close()
+
+	r := bufio.NewReader(readCloser)
+	for {
+		bytes, err := r.ReadBytes('\n')
+		if _, err := out.Write(bytes); err != nil {
+			return err
+		}
+
+		if err != nil {
+			if err != io.EOF {
+				return err
+			}
+			return nil
+		}
+	}
+}
+
+type prefixingWriter struct {
+	prefix []byte
+	writer io.Writer
+}
+
+func (pw *prefixingWriter) Write(p []byte) (int, error) {
+	if len(p) == 0 {
+		return 0, nil
+	}
+
+	// Perform an "atomic" write of a prefix and p to make sure that it doesn't interleave
+	// sub-line when used concurrently with io.PipeWriter.
+	n, err := pw.writer.Write(append(pw.prefix, p...))
+	if n > len(p) {
+		// To comply with the io.Writer interface requirements we must
+		// return a number of bytes written from p (0 <= n <= len(p)),
+		// so we are ignoring the length of the prefix here.
+		return len(p), err
+	}
+	return n, err
+}
diff --git a/vendor/k8s.io/kubectl/pkg/cmd/util/podcmd/podcmd.go b/vendor/k8s.io/kubectl/pkg/cmd/util/podcmd/podcmd.go
new file mode 100644
index 000000000..bf760645d
--- /dev/null
+++ b/vendor/k8s.io/kubectl/pkg/cmd/util/podcmd/podcmd.go
@@ -0,0 +1,104 @@
+/*
+Copyright 2021 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package podcmd
+
+import (
+	"fmt"
+	"io"
+	"strings"
+
+	v1 "k8s.io/api/core/v1"
+	"k8s.io/klog/v2"
+)
+
+// DefaultContainerAnnotationName is an annotation name that can be used to preselect the interesting container
+// from a pod when running kubectl.
+const DefaultContainerAnnotationName = "kubectl.kubernetes.io/default-container"
+
+// FindContainerByName selects the named container from the spec of
+// the provided pod or returns nil if no such container exists.
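+// The second return value is the field path of the match: for example,
+// FindContainerByName(pod, "nginx") yields "spec.containers{nginx}" for a
+// regular container, while init and ephemeral containers are reported as
+// "spec.initContainers{...}" and "spec.ephemeralContainers{...}".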
+func FindContainerByName(pod *v1.Pod, name string) (*v1.Container, string) {
+	for i := range pod.Spec.Containers {
+		if pod.Spec.Containers[i].Name == name {
+			return &pod.Spec.Containers[i], fmt.Sprintf("spec.containers{%s}", name)
+		}
+	}
+	for i := range pod.Spec.InitContainers {
+		if pod.Spec.InitContainers[i].Name == name {
+			return &pod.Spec.InitContainers[i], fmt.Sprintf("spec.initContainers{%s}", name)
+		}
+	}
+	for i := range pod.Spec.EphemeralContainers {
+		if pod.Spec.EphemeralContainers[i].Name == name {
+			return (*v1.Container)(&pod.Spec.EphemeralContainers[i].EphemeralContainerCommon), fmt.Sprintf("spec.ephemeralContainers{%s}", name)
+		}
+	}
+	return nil, ""
+}
+
+// FindOrDefaultContainerByName defaults a container for a pod to the first container if any
+// exists, or returns an error. It will print a message to the user indicating a default was
+// selected if there was more than one container.
+func FindOrDefaultContainerByName(pod *v1.Pod, name string, quiet bool, warn io.Writer) (*v1.Container, error) {
+	var container *v1.Container
+
+	if len(name) > 0 {
+		container, _ = FindContainerByName(pod, name)
+		if container == nil {
+			return nil, fmt.Errorf("container %s not found in pod %s", name, pod.Name)
+		}
+		return container, nil
+	}
+
+	// this should never happen, but just in case
+	if len(pod.Spec.Containers) == 0 {
+		return nil, fmt.Errorf("pod %s/%s does not have any containers", pod.Namespace, pod.Name)
+	}
+
+	// read the default container from the annotation as per
+	// https://github.com/kubernetes/enhancements/tree/master/keps/sig-cli/2227-kubectl-default-container
+	if name := pod.Annotations[DefaultContainerAnnotationName]; len(name) > 0 {
+		if container, _ = FindContainerByName(pod, name); container != nil {
+			klog.V(4).Infof("Defaulting container name from annotation %s", container.Name)
+			return container, nil
+		}
+		klog.V(4).Infof("Default container name from annotation %s was not found in the pod", name)
+	}
+
+	// pick the first container as per existing behavior
+	container = &pod.Spec.Containers[0]
+	if !quiet && (len(pod.Spec.Containers) > 1 || len(pod.Spec.InitContainers) > 0 || len(pod.Spec.EphemeralContainers) > 0) {
+		fmt.Fprintf(warn, "Defaulted container %q out of: %s\n", container.Name, AllContainerNames(pod))
+	}
+
+	klog.V(4).Infof("Defaulting container name to %s", container.Name)
+	return container, nil
+}
+
+func AllContainerNames(pod *v1.Pod) string {
+	var containers []string
+	for _, container := range pod.Spec.Containers {
+		containers = append(containers, container.Name)
+	}
+	for _, container := range pod.Spec.EphemeralContainers {
+		containers = append(containers, fmt.Sprintf("%s (ephem)", container.Name))
+	}
+	for _, container := range pod.Spec.InitContainers {
+		containers = append(containers, fmt.Sprintf("%s (init)", container.Name))
+	}
+	return strings.Join(containers, ", ")
+}
diff --git a/vendor/k8s.io/kubectl/pkg/describe/describe.go b/vendor/k8s.io/kubectl/pkg/describe/describe.go
new file mode 100644
index 000000000..e190ef2a5
--- /dev/null
+++ b/vendor/k8s.io/kubectl/pkg/describe/describe.go
@@ -0,0 +1,5677 @@
+/*
+Copyright 2014 The Kubernetes Authors.
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package describe + +import ( + "bytes" + "context" + "crypto/x509" + "fmt" + "io" + "net" + "net/url" + "reflect" + "sort" + "strconv" + "strings" + "text/tabwriter" + "time" + "unicode" + + "github.com/fatih/camelcase" + appsv1 "k8s.io/api/apps/v1" + autoscalingv1 "k8s.io/api/autoscaling/v1" + autoscalingv2 "k8s.io/api/autoscaling/v2" + batchv1 "k8s.io/api/batch/v1" + batchv1beta1 "k8s.io/api/batch/v1beta1" + certificatesv1beta1 "k8s.io/api/certificates/v1beta1" + coordinationv1 "k8s.io/api/coordination/v1" + corev1 "k8s.io/api/core/v1" + discoveryv1 "k8s.io/api/discovery/v1" + discoveryv1beta1 "k8s.io/api/discovery/v1beta1" + extensionsv1beta1 "k8s.io/api/extensions/v1beta1" + networkingv1 "k8s.io/api/networking/v1" + networkingv1alpha1 "k8s.io/api/networking/v1alpha1" + networkingv1beta1 "k8s.io/api/networking/v1beta1" + policyv1 "k8s.io/api/policy/v1" + policyv1beta1 "k8s.io/api/policy/v1beta1" + rbacv1 "k8s.io/api/rbac/v1" + schedulingv1 "k8s.io/api/scheduling/v1" + storagev1 "k8s.io/api/storage/v1" + apierrors "k8s.io/apimachinery/pkg/api/errors" + "k8s.io/apimachinery/pkg/api/meta" + "k8s.io/apimachinery/pkg/api/resource" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/fields" + "k8s.io/apimachinery/pkg/labels" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/apimachinery/pkg/runtime/schema" + "k8s.io/apimachinery/pkg/types" + "k8s.io/apimachinery/pkg/util/duration" + "k8s.io/apimachinery/pkg/util/intstr" + "k8s.io/apimachinery/pkg/util/sets" + "k8s.io/cli-runtime/pkg/genericclioptions" + "k8s.io/cli-runtime/pkg/printers" + runtimeresource "k8s.io/cli-runtime/pkg/resource" + "k8s.io/client-go/dynamic" + clientset "k8s.io/client-go/kubernetes" + corev1client "k8s.io/client-go/kubernetes/typed/core/v1" + "k8s.io/client-go/rest" + "k8s.io/client-go/tools/reference" + utilcsr "k8s.io/client-go/util/certificate/csr" + "k8s.io/klog/v2" + "k8s.io/kubectl/pkg/scheme" + "k8s.io/kubectl/pkg/util/certificate" + deploymentutil "k8s.io/kubectl/pkg/util/deployment" + "k8s.io/kubectl/pkg/util/event" + "k8s.io/kubectl/pkg/util/fieldpath" + "k8s.io/kubectl/pkg/util/qos" + "k8s.io/kubectl/pkg/util/rbac" + resourcehelper "k8s.io/kubectl/pkg/util/resource" + "k8s.io/kubectl/pkg/util/slice" + storageutil "k8s.io/kubectl/pkg/util/storage" +) + +// Each level has 2 spaces for PrefixWriter +const ( + LEVEL_0 = iota + LEVEL_1 + LEVEL_2 + LEVEL_3 + LEVEL_4 +) + +var ( + // globally skipped annotations + skipAnnotations = sets.NewString(corev1.LastAppliedConfigAnnotation) + + // DescriberFn gives a way to easily override the function for unit testing if needed + DescriberFn DescriberFunc = Describer +) + +// Describer returns a Describer for displaying the specified RESTMapping type or an error. 
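+//
+// An illustrative call site (the mapping and getter are assumed to come from
+// the caller's resource builder, not from this file):
+//
+//	d, err := Describer(restClientGetter, mapping)
+//	if err != nil {
+//		return err
+//	}
+//	out, err := d.Describe("default", "my-pod", DescriberSettings{ShowEvents: true})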
+func Describer(restClientGetter genericclioptions.RESTClientGetter, mapping *meta.RESTMapping) (ResourceDescriber, error) {
+ clientConfig, err := restClientGetter.ToRESTConfig()
+ if err != nil {
+ return nil, err
+ }
+ // try to get a describer
+ if describer, ok := DescriberFor(mapping.GroupVersionKind.GroupKind(), clientConfig); ok {
+ return describer, nil
+ }
+ // if this is a kind we don't have a describer for yet, go generic if possible
+ if genericDescriber, ok := GenericDescriberFor(mapping, clientConfig); ok {
+ return genericDescriber, nil
+ }
+ // otherwise return an unregistered error
+ return nil, fmt.Errorf("no description has been implemented for %s", mapping.GroupVersionKind.String())
+}
+
+// PrefixWriter can write text at various indentation levels.
+type PrefixWriter interface {
+ // Write writes text with the specified indentation level.
+ Write(level int, format string, a ...interface{})
+ // WriteLine writes an entire line with no indentation level.
+ WriteLine(a ...interface{})
+ // Flush forces indentation to be reset.
+ Flush()
+}
+
+// prefixWriter implements PrefixWriter
+type prefixWriter struct {
+ out io.Writer
+}
+
+var _ PrefixWriter = &prefixWriter{}
+
+// NewPrefixWriter creates a new PrefixWriter.
+func NewPrefixWriter(out io.Writer) PrefixWriter {
+ return &prefixWriter{out: out}
+}
+
+func (pw *prefixWriter) Write(level int, format string, a ...interface{}) {
+ levelSpace := "  "
+ prefix := ""
+ for i := 0; i < level; i++ {
+ prefix += levelSpace
+ }
+ output := fmt.Sprintf(prefix+format, a...)
+ printers.WriteEscaped(pw.out, output)
+}
+
+func (pw *prefixWriter) WriteLine(a ...interface{}) {
+ output := fmt.Sprintln(a...)
+ printers.WriteEscaped(pw.out, output)
+}
+
+func (pw *prefixWriter) Flush() {
+ if f, ok := pw.out.(flusher); ok {
+ f.Flush()
+ }
+}
+
+// nestedPrefixWriter implements PrefixWriter by increasing the level
+// before passing text on to some other writer.
+type nestedPrefixWriter struct {
+ PrefixWriter
+ indent int
+}
+
+var _ PrefixWriter = &nestedPrefixWriter{}
+
+// NewNestedPrefixWriter creates a new PrefixWriter that indents all of its
+// output by the given number of levels.
+func NewNestedPrefixWriter(out PrefixWriter, indent int) PrefixWriter {
+ return &nestedPrefixWriter{PrefixWriter: out, indent: indent}
+}
+
+func (npw *nestedPrefixWriter) Write(level int, format string, a ...interface{}) {
+ npw.PrefixWriter.Write(level+npw.indent, format, a...)
+} + +func (npw *nestedPrefixWriter) WriteLine(a ...interface{}) { + npw.PrefixWriter.Write(npw.indent, "%s", fmt.Sprintln(a...)) +} + +func describerMap(clientConfig *rest.Config) (map[schema.GroupKind]ResourceDescriber, error) { + c, err := clientset.NewForConfig(clientConfig) + if err != nil { + return nil, err + } + + m := map[schema.GroupKind]ResourceDescriber{ + {Group: corev1.GroupName, Kind: "Pod"}: &PodDescriber{c}, + {Group: corev1.GroupName, Kind: "ReplicationController"}: &ReplicationControllerDescriber{c}, + {Group: corev1.GroupName, Kind: "Secret"}: &SecretDescriber{c}, + {Group: corev1.GroupName, Kind: "Service"}: &ServiceDescriber{c}, + {Group: corev1.GroupName, Kind: "ServiceAccount"}: &ServiceAccountDescriber{c}, + {Group: corev1.GroupName, Kind: "Node"}: &NodeDescriber{c}, + {Group: corev1.GroupName, Kind: "LimitRange"}: &LimitRangeDescriber{c}, + {Group: corev1.GroupName, Kind: "ResourceQuota"}: &ResourceQuotaDescriber{c}, + {Group: corev1.GroupName, Kind: "PersistentVolume"}: &PersistentVolumeDescriber{c}, + {Group: corev1.GroupName, Kind: "PersistentVolumeClaim"}: &PersistentVolumeClaimDescriber{c}, + {Group: corev1.GroupName, Kind: "Namespace"}: &NamespaceDescriber{c}, + {Group: corev1.GroupName, Kind: "Endpoints"}: &EndpointsDescriber{c}, + {Group: corev1.GroupName, Kind: "ConfigMap"}: &ConfigMapDescriber{c}, + {Group: corev1.GroupName, Kind: "PriorityClass"}: &PriorityClassDescriber{c}, + {Group: discoveryv1beta1.GroupName, Kind: "EndpointSlice"}: &EndpointSliceDescriber{c}, + {Group: discoveryv1.GroupName, Kind: "EndpointSlice"}: &EndpointSliceDescriber{c}, + {Group: policyv1beta1.GroupName, Kind: "PodSecurityPolicy"}: &PodSecurityPolicyDescriber{c}, + {Group: autoscalingv2.GroupName, Kind: "HorizontalPodAutoscaler"}: &HorizontalPodAutoscalerDescriber{c}, + {Group: extensionsv1beta1.GroupName, Kind: "Ingress"}: &IngressDescriber{c}, + {Group: networkingv1beta1.GroupName, Kind: "Ingress"}: &IngressDescriber{c}, + {Group: networkingv1beta1.GroupName, Kind: "IngressClass"}: &IngressClassDescriber{c}, + {Group: networkingv1.GroupName, Kind: "Ingress"}: &IngressDescriber{c}, + {Group: networkingv1.GroupName, Kind: "IngressClass"}: &IngressClassDescriber{c}, + {Group: networkingv1alpha1.GroupName, Kind: "ClusterCIDR"}: &ClusterCIDRDescriber{c}, + {Group: batchv1.GroupName, Kind: "Job"}: &JobDescriber{c}, + {Group: batchv1.GroupName, Kind: "CronJob"}: &CronJobDescriber{c}, + {Group: batchv1beta1.GroupName, Kind: "CronJob"}: &CronJobDescriber{c}, + {Group: appsv1.GroupName, Kind: "StatefulSet"}: &StatefulSetDescriber{c}, + {Group: appsv1.GroupName, Kind: "Deployment"}: &DeploymentDescriber{c}, + {Group: appsv1.GroupName, Kind: "DaemonSet"}: &DaemonSetDescriber{c}, + {Group: appsv1.GroupName, Kind: "ReplicaSet"}: &ReplicaSetDescriber{c}, + {Group: certificatesv1beta1.GroupName, Kind: "CertificateSigningRequest"}: &CertificateSigningRequestDescriber{c}, + {Group: storagev1.GroupName, Kind: "StorageClass"}: &StorageClassDescriber{c}, + {Group: storagev1.GroupName, Kind: "CSINode"}: &CSINodeDescriber{c}, + {Group: policyv1beta1.GroupName, Kind: "PodDisruptionBudget"}: &PodDisruptionBudgetDescriber{c}, + {Group: policyv1.GroupName, Kind: "PodDisruptionBudget"}: &PodDisruptionBudgetDescriber{c}, + {Group: rbacv1.GroupName, Kind: "Role"}: &RoleDescriber{c}, + {Group: rbacv1.GroupName, Kind: "ClusterRole"}: &ClusterRoleDescriber{c}, + {Group: rbacv1.GroupName, Kind: "RoleBinding"}: &RoleBindingDescriber{c}, + {Group: rbacv1.GroupName, Kind: "ClusterRoleBinding"}: 
&ClusterRoleBindingDescriber{c}, + {Group: networkingv1.GroupName, Kind: "NetworkPolicy"}: &NetworkPolicyDescriber{c}, + {Group: schedulingv1.GroupName, Kind: "PriorityClass"}: &PriorityClassDescriber{c}, + } + + return m, nil +} + +// DescriberFor returns the default describe functions for each of the standard +// Kubernetes types. +func DescriberFor(kind schema.GroupKind, clientConfig *rest.Config) (ResourceDescriber, bool) { + describers, err := describerMap(clientConfig) + if err != nil { + klog.V(1).Info(err) + return nil, false + } + + f, ok := describers[kind] + return f, ok +} + +// GenericDescriberFor returns a generic describer for the specified mapping +// that uses only information available from runtime.Unstructured +func GenericDescriberFor(mapping *meta.RESTMapping, clientConfig *rest.Config) (ResourceDescriber, bool) { + // used to fetch the resource + dynamicClient, err := dynamic.NewForConfig(clientConfig) + if err != nil { + return nil, false + } + + // used to get events for the resource + clientSet, err := clientset.NewForConfig(clientConfig) + if err != nil { + return nil, false + } + eventsClient := clientSet.CoreV1() + + return &genericDescriber{mapping, dynamicClient, eventsClient}, true +} + +type genericDescriber struct { + mapping *meta.RESTMapping + dynamic dynamic.Interface + events corev1client.EventsGetter +} + +func (g *genericDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (output string, err error) { + obj, err := g.dynamic.Resource(g.mapping.Resource).Namespace(namespace).Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + + var events *corev1.EventList + if describerSettings.ShowEvents { + events, _ = searchEvents(g.events, obj, describerSettings.ChunkSize) + } + + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", obj.GetName()) + w.Write(LEVEL_0, "Namespace:\t%s\n", obj.GetNamespace()) + printLabelsMultiline(w, "Labels", obj.GetLabels()) + printAnnotationsMultiline(w, "Annotations", obj.GetAnnotations()) + printUnstructuredContent(w, LEVEL_0, obj.UnstructuredContent(), "", ".metadata.name", ".metadata.namespace", ".metadata.labels", ".metadata.annotations") + if events != nil { + DescribeEvents(events, w) + } + return nil + }) +} + +func printUnstructuredContent(w PrefixWriter, level int, content map[string]interface{}, skipPrefix string, skip ...string) { + fields := []string{} + for field := range content { + fields = append(fields, field) + } + sort.Strings(fields) + + for _, field := range fields { + value := content[field] + switch typedValue := value.(type) { + case map[string]interface{}: + skipExpr := fmt.Sprintf("%s.%s", skipPrefix, field) + if slice.ContainsString(skip, skipExpr, nil) { + continue + } + w.Write(level, "%s:\n", smartLabelFor(field)) + printUnstructuredContent(w, level+1, typedValue, skipExpr, skip...) + + case []interface{}: + skipExpr := fmt.Sprintf("%s.%s", skipPrefix, field) + if slice.ContainsString(skip, skipExpr, nil) { + continue + } + w.Write(level, "%s:\n", smartLabelFor(field)) + for _, child := range typedValue { + switch typedChild := child.(type) { + case map[string]interface{}: + printUnstructuredContent(w, level+1, typedChild, skipExpr, skip...) 
+ default: + w.Write(level+1, "%v\n", typedChild) + } + } + + default: + skipExpr := fmt.Sprintf("%s.%s", skipPrefix, field) + if slice.ContainsString(skip, skipExpr, nil) { + continue + } + w.Write(level, "%s:\t%v\n", smartLabelFor(field), typedValue) + } + } +} + +func smartLabelFor(field string) string { + // skip creating smart label if field name contains + // special characters other than '-' + if strings.IndexFunc(field, func(r rune) bool { + return !unicode.IsLetter(r) && r != '-' + }) != -1 { + return field + } + + commonAcronyms := []string{"API", "URL", "UID", "OSB", "GUID"} + parts := camelcase.Split(field) + result := make([]string, 0, len(parts)) + for _, part := range parts { + if part == "_" { + continue + } + + if slice.ContainsString(commonAcronyms, strings.ToUpper(part), nil) { + part = strings.ToUpper(part) + } else { + part = strings.Title(part) + } + result = append(result, part) + } + + return strings.Join(result, " ") +} + +// DefaultObjectDescriber can describe the default Kubernetes objects. +var DefaultObjectDescriber ObjectDescriber + +func init() { + d := &Describers{} + err := d.Add( + describeLimitRange, + describeQuota, + describePod, + describeService, + describeReplicationController, + describeDaemonSet, + describeNode, + describeNamespace, + ) + if err != nil { + klog.Fatalf("Cannot register describers: %v", err) + } + DefaultObjectDescriber = d +} + +// NamespaceDescriber generates information about a namespace +type NamespaceDescriber struct { + clientset.Interface +} + +func (d *NamespaceDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + ns, err := d.CoreV1().Namespaces().Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + + resourceQuotaList := &corev1.ResourceQuotaList{} + err = runtimeresource.FollowContinue(&metav1.ListOptions{Limit: describerSettings.ChunkSize}, + func(options metav1.ListOptions) (runtime.Object, error) { + newList, err := d.CoreV1().ResourceQuotas(name).List(context.TODO(), options) + if err != nil { + return nil, runtimeresource.EnhanceListError(err, options, corev1.ResourceQuotas.String()) + } + resourceQuotaList.Items = append(resourceQuotaList.Items, newList.Items...) + return newList, nil + }) + if err != nil { + if apierrors.IsNotFound(err) { + // Server does not support resource quotas. + // Not an error, will not show resource quotas information. + resourceQuotaList = nil + } else { + return "", err + } + } + + limitRangeList := &corev1.LimitRangeList{} + err = runtimeresource.FollowContinue(&metav1.ListOptions{Limit: describerSettings.ChunkSize}, + func(options metav1.ListOptions) (runtime.Object, error) { + newList, err := d.CoreV1().LimitRanges(name).List(context.TODO(), options) + if err != nil { + return nil, runtimeresource.EnhanceListError(err, options, "limitranges") + } + limitRangeList.Items = append(limitRangeList.Items, newList.Items...) + return newList, nil + }) + if err != nil { + if apierrors.IsNotFound(err) { + // Server does not support limit ranges. + // Not an error, will not show limit ranges information. 
+ limitRangeList = nil + } else { + return "", err + } + } + return describeNamespace(ns, resourceQuotaList, limitRangeList) +} + +func describeNamespace(namespace *corev1.Namespace, resourceQuotaList *corev1.ResourceQuotaList, limitRangeList *corev1.LimitRangeList) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", namespace.Name) + printLabelsMultiline(w, "Labels", namespace.Labels) + printAnnotationsMultiline(w, "Annotations", namespace.Annotations) + w.Write(LEVEL_0, "Status:\t%s\n", string(namespace.Status.Phase)) + + if len(namespace.Status.Conditions) > 0 { + w.Write(LEVEL_0, "Conditions:\n") + w.Write(LEVEL_1, "Type\tStatus\tLastTransitionTime\tReason\tMessage\n") + w.Write(LEVEL_1, "----\t------\t------------------\t------\t-------\n") + for _, c := range namespace.Status.Conditions { + w.Write(LEVEL_1, "%v\t%v\t%s\t%v\t%v\n", + c.Type, + c.Status, + c.LastTransitionTime.Time.Format(time.RFC1123Z), + c.Reason, + c.Message) + } + } + + if resourceQuotaList != nil { + w.Write(LEVEL_0, "\n") + DescribeResourceQuotas(resourceQuotaList, w) + } + + if limitRangeList != nil { + w.Write(LEVEL_0, "\n") + DescribeLimitRanges(limitRangeList, w) + } + + return nil + }) +} + +func describeLimitRangeSpec(spec corev1.LimitRangeSpec, prefix string, w PrefixWriter) { + for i := range spec.Limits { + item := spec.Limits[i] + maxResources := item.Max + minResources := item.Min + defaultLimitResources := item.Default + defaultRequestResources := item.DefaultRequest + ratio := item.MaxLimitRequestRatio + + set := map[corev1.ResourceName]bool{} + for k := range maxResources { + set[k] = true + } + for k := range minResources { + set[k] = true + } + for k := range defaultLimitResources { + set[k] = true + } + for k := range defaultRequestResources { + set[k] = true + } + for k := range ratio { + set[k] = true + } + + for k := range set { + // if no value is set, we output - + maxValue := "-" + minValue := "-" + defaultLimitValue := "-" + defaultRequestValue := "-" + ratioValue := "-" + + maxQuantity, maxQuantityFound := maxResources[k] + if maxQuantityFound { + maxValue = maxQuantity.String() + } + + minQuantity, minQuantityFound := minResources[k] + if minQuantityFound { + minValue = minQuantity.String() + } + + defaultLimitQuantity, defaultLimitQuantityFound := defaultLimitResources[k] + if defaultLimitQuantityFound { + defaultLimitValue = defaultLimitQuantity.String() + } + + defaultRequestQuantity, defaultRequestQuantityFound := defaultRequestResources[k] + if defaultRequestQuantityFound { + defaultRequestValue = defaultRequestQuantity.String() + } + + ratioQuantity, ratioQuantityFound := ratio[k] + if ratioQuantityFound { + ratioValue = ratioQuantity.String() + } + + msg := "%s%s\t%v\t%v\t%v\t%v\t%v\t%v\n" + w.Write(LEVEL_0, msg, prefix, item.Type, k, minValue, maxValue, defaultRequestValue, defaultLimitValue, ratioValue) + } + } +} + +// DescribeLimitRanges merges a set of limit range items into a single tabular description +func DescribeLimitRanges(limitRanges *corev1.LimitRangeList, w PrefixWriter) { + if len(limitRanges.Items) == 0 { + w.Write(LEVEL_0, "No LimitRange resource.\n") + return + } + w.Write(LEVEL_0, "Resource Limits\n Type\tResource\tMin\tMax\tDefault Request\tDefault Limit\tMax Limit/Request Ratio\n") + w.Write(LEVEL_0, " ----\t--------\t---\t---\t---------------\t-------------\t-----------------------\n") + for _, limitRange := range limitRanges.Items { + describeLimitRangeSpec(limitRange.Spec, " ", w) 
+ } +} + +// DescribeResourceQuotas merges a set of quota items into a single tabular description of all quotas +func DescribeResourceQuotas(quotas *corev1.ResourceQuotaList, w PrefixWriter) { + if len(quotas.Items) == 0 { + w.Write(LEVEL_0, "No resource quota.\n") + return + } + sort.Sort(SortableResourceQuotas(quotas.Items)) + + w.Write(LEVEL_0, "Resource Quotas\n") + for _, q := range quotas.Items { + w.Write(LEVEL_1, "Name:\t%s\n", q.Name) + if len(q.Spec.Scopes) > 0 { + scopes := make([]string, 0, len(q.Spec.Scopes)) + for _, scope := range q.Spec.Scopes { + scopes = append(scopes, string(scope)) + } + sort.Strings(scopes) + w.Write(LEVEL_1, "Scopes:\t%s\n", strings.Join(scopes, ", ")) + for _, scope := range scopes { + helpText := helpTextForResourceQuotaScope(corev1.ResourceQuotaScope(scope)) + if len(helpText) > 0 { + w.Write(LEVEL_1, "* %s\n", helpText) + } + } + } + + w.Write(LEVEL_1, "Resource\tUsed\tHard\n") + w.Write(LEVEL_1, "--------\t---\t---\n") + + resources := make([]corev1.ResourceName, 0, len(q.Status.Hard)) + for resource := range q.Status.Hard { + resources = append(resources, resource) + } + sort.Sort(SortableResourceNames(resources)) + + for _, resource := range resources { + hardQuantity := q.Status.Hard[resource] + usedQuantity := q.Status.Used[resource] + w.Write(LEVEL_1, "%s\t%s\t%s\n", string(resource), usedQuantity.String(), hardQuantity.String()) + } + } +} + +// LimitRangeDescriber generates information about a limit range +type LimitRangeDescriber struct { + clientset.Interface +} + +func (d *LimitRangeDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + lr := d.CoreV1().LimitRanges(namespace) + + limitRange, err := lr.Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + return describeLimitRange(limitRange) +} + +func describeLimitRange(limitRange *corev1.LimitRange) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", limitRange.Name) + w.Write(LEVEL_0, "Namespace:\t%s\n", limitRange.Namespace) + w.Write(LEVEL_0, "Type\tResource\tMin\tMax\tDefault Request\tDefault Limit\tMax Limit/Request Ratio\n") + w.Write(LEVEL_0, "----\t--------\t---\t---\t---------------\t-------------\t-----------------------\n") + describeLimitRangeSpec(limitRange.Spec, "", w) + return nil + }) +} + +// ResourceQuotaDescriber generates information about a resource quota +type ResourceQuotaDescriber struct { + clientset.Interface +} + +func (d *ResourceQuotaDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + rq := d.CoreV1().ResourceQuotas(namespace) + + resourceQuota, err := rq.Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + + return describeQuota(resourceQuota) +} + +func helpTextForResourceQuotaScope(scope corev1.ResourceQuotaScope) string { + switch scope { + case corev1.ResourceQuotaScopeTerminating: + return "Matches all pods that have an active deadline. These pods have a limited lifespan on a node before being actively terminated by the system." + case corev1.ResourceQuotaScopeNotTerminating: + return "Matches all pods that do not have an active deadline. These pods usually include long running pods whose container command is not expected to terminate." + case corev1.ResourceQuotaScopeBestEffort: + return "Matches all pods that do not have resource requirements set. These pods have a best effort quality of service." 
+ case corev1.ResourceQuotaScopeNotBestEffort: + return "Matches all pods that have at least one resource requirement set. These pods have a burstable or guaranteed quality of service." + default: + return "" + } +} +func describeQuota(resourceQuota *corev1.ResourceQuota) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", resourceQuota.Name) + w.Write(LEVEL_0, "Namespace:\t%s\n", resourceQuota.Namespace) + if len(resourceQuota.Spec.Scopes) > 0 { + scopes := make([]string, 0, len(resourceQuota.Spec.Scopes)) + for _, scope := range resourceQuota.Spec.Scopes { + scopes = append(scopes, string(scope)) + } + sort.Strings(scopes) + w.Write(LEVEL_0, "Scopes:\t%s\n", strings.Join(scopes, ", ")) + for _, scope := range scopes { + helpText := helpTextForResourceQuotaScope(corev1.ResourceQuotaScope(scope)) + if len(helpText) > 0 { + w.Write(LEVEL_0, " * %s\n", helpText) + } + } + } + w.Write(LEVEL_0, "Resource\tUsed\tHard\n") + w.Write(LEVEL_0, "--------\t----\t----\n") + + resources := make([]corev1.ResourceName, 0, len(resourceQuota.Status.Hard)) + for resource := range resourceQuota.Status.Hard { + resources = append(resources, resource) + } + sort.Sort(SortableResourceNames(resources)) + + msg := "%v\t%v\t%v\n" + for i := range resources { + resourceName := resources[i] + hardQuantity := resourceQuota.Status.Hard[resourceName] + usedQuantity := resourceQuota.Status.Used[resourceName] + if hardQuantity.Format != usedQuantity.Format { + usedQuantity = *resource.NewQuantity(usedQuantity.Value(), hardQuantity.Format) + } + w.Write(LEVEL_0, msg, resourceName, usedQuantity.String(), hardQuantity.String()) + } + return nil + }) +} + +// PodDescriber generates information about a pod and the replication controllers that +// create it. +type PodDescriber struct { + clientset.Interface +} + +func (d *PodDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + pod, err := d.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + if describerSettings.ShowEvents { + eventsInterface := d.CoreV1().Events(namespace) + selector := eventsInterface.GetFieldSelector(&name, &namespace, nil, nil) + initialOpts := metav1.ListOptions{ + FieldSelector: selector.String(), + Limit: describerSettings.ChunkSize, + } + events := &corev1.EventList{} + err2 := runtimeresource.FollowContinue(&initialOpts, + func(options metav1.ListOptions) (runtime.Object, error) { + newList, err := eventsInterface.List(context.TODO(), options) + if err != nil { + return nil, runtimeresource.EnhanceListError(err, options, "events") + } + events.Items = append(events.Items, newList.Items...) 
+ return newList, nil
+ })
+
+ if err2 == nil && len(events.Items) > 0 {
+ return tabbedString(func(out io.Writer) error {
+ w := NewPrefixWriter(out)
+ w.Write(LEVEL_0, "Pod '%v': error '%v', but found events.\n", name, err)
+ DescribeEvents(events, w)
+ return nil
+ })
+ }
+ }
+ return "", err
+ }
+
+ var events *corev1.EventList
+ if describerSettings.ShowEvents {
+ if ref, err := reference.GetReference(scheme.Scheme, pod); err != nil {
+ klog.Errorf("Unable to construct reference to '%#v': %v", pod, err)
+ } else {
+ ref.Kind = ""
+ if _, isMirrorPod := pod.Annotations[corev1.MirrorPodAnnotationKey]; isMirrorPod {
+ ref.UID = types.UID(pod.Annotations[corev1.MirrorPodAnnotationKey])
+ }
+ events, _ = searchEvents(d.CoreV1(), ref, describerSettings.ChunkSize)
+ }
+ }
+
+ return describePod(pod, events)
+}
+
+func describePod(pod *corev1.Pod, events *corev1.EventList) (string, error) {
+ return tabbedString(func(out io.Writer) error {
+ w := NewPrefixWriter(out)
+ w.Write(LEVEL_0, "Name:\t%s\n", pod.Name)
+ w.Write(LEVEL_0, "Namespace:\t%s\n", pod.Namespace)
+ if pod.Spec.Priority != nil {
+ w.Write(LEVEL_0, "Priority:\t%d\n", *pod.Spec.Priority)
+ }
+ if len(pod.Spec.PriorityClassName) > 0 {
+ w.Write(LEVEL_0, "Priority Class Name:\t%s\n", pod.Spec.PriorityClassName)
+ }
+ if pod.Spec.RuntimeClassName != nil && len(*pod.Spec.RuntimeClassName) > 0 {
+ w.Write(LEVEL_0, "Runtime Class Name:\t%s\n", *pod.Spec.RuntimeClassName)
+ }
+ if len(pod.Spec.ServiceAccountName) > 0 {
+ w.Write(LEVEL_0, "Service Account:\t%s\n", pod.Spec.ServiceAccountName)
+ }
+ if pod.Spec.NodeName == "" {
+ w.Write(LEVEL_0, "Node:\t<none>\n")
+ } else {
+ w.Write(LEVEL_0, "Node:\t%s\n", pod.Spec.NodeName+"/"+pod.Status.HostIP)
+ }
+ if pod.Status.StartTime != nil {
+ w.Write(LEVEL_0, "Start Time:\t%s\n", pod.Status.StartTime.Time.Format(time.RFC1123Z))
+ }
+ printLabelsMultiline(w, "Labels", pod.Labels)
+ printAnnotationsMultiline(w, "Annotations", pod.Annotations)
+ if pod.DeletionTimestamp != nil {
+ w.Write(LEVEL_0, "Status:\tTerminating (lasts %s)\n", translateTimestampSince(*pod.DeletionTimestamp))
+ w.Write(LEVEL_0, "Termination Grace Period:\t%ds\n", *pod.DeletionGracePeriodSeconds)
+ } else {
+ w.Write(LEVEL_0, "Status:\t%s\n", string(pod.Status.Phase))
+ }
+ if len(pod.Status.Reason) > 0 {
+ w.Write(LEVEL_0, "Reason:\t%s\n", pod.Status.Reason)
+ }
+ if len(pod.Status.Message) > 0 {
+ w.Write(LEVEL_0, "Message:\t%s\n", pod.Status.Message)
+ }
+ // remove when .IP field is deprecated
+ w.Write(LEVEL_0, "IP:\t%s\n", pod.Status.PodIP)
+ describePodIPs(pod, w, "")
+ if controlledBy := printController(pod); len(controlledBy) > 0 {
+ w.Write(LEVEL_0, "Controlled By:\t%s\n", controlledBy)
+ }
+ if len(pod.Status.NominatedNodeName) > 0 {
+ w.Write(LEVEL_0, "NominatedNodeName:\t%s\n", pod.Status.NominatedNodeName)
+ }
+
+ if len(pod.Spec.InitContainers) > 0 {
+ describeContainers("Init Containers", pod.Spec.InitContainers, pod.Status.InitContainerStatuses, EnvValueRetriever(pod), w, "")
+ }
+ describeContainers("Containers", pod.Spec.Containers, pod.Status.ContainerStatuses, EnvValueRetriever(pod), w, "")
+ if len(pod.Spec.EphemeralContainers) > 0 {
+ var ec []corev1.Container
+ for i := range pod.Spec.EphemeralContainers {
+ ec = append(ec, corev1.Container(pod.Spec.EphemeralContainers[i].EphemeralContainerCommon))
+ }
+ describeContainers("Ephemeral Containers", ec, pod.Status.EphemeralContainerStatuses, EnvValueRetriever(pod), w, "")
+ }
+ if len(pod.Spec.ReadinessGates) > 0 {
+ w.Write(LEVEL_0, "Readiness Gates:\n Type\tStatus\n")
+ for _, g := range pod.Spec.ReadinessGates {
+ status := "<none>"
+ for _, c := range pod.Status.Conditions {
+ if c.Type == g.ConditionType {
+ status = fmt.Sprintf("%v", c.Status)
+ break
+ }
+ }
+ w.Write(LEVEL_1, "%v \t%v \n",
+ g.ConditionType,
+ status)
+ }
+ }
+ if len(pod.Status.Conditions) > 0 {
+ w.Write(LEVEL_0, "Conditions:\n Type\tStatus\n")
+ for _, c := range pod.Status.Conditions {
+ w.Write(LEVEL_1, "%v \t%v \n",
+ c.Type,
+ c.Status)
+ }
+ }
+ describeVolumes(pod.Spec.Volumes, w, "")
+ if pod.Status.QOSClass != "" {
+ w.Write(LEVEL_0, "QoS Class:\t%s\n", pod.Status.QOSClass)
+ } else {
+ w.Write(LEVEL_0, "QoS Class:\t%s\n", qos.GetPodQOS(pod))
+ }
+ printLabelsMultiline(w, "Node-Selectors", pod.Spec.NodeSelector)
+ printPodTolerationsMultiline(w, "Tolerations", pod.Spec.Tolerations)
+ describeTopologySpreadConstraints(pod.Spec.TopologySpreadConstraints, w, "")
+ if events != nil {
+ DescribeEvents(events, w)
+ }
+ return nil
+ })
+}
+
+func printController(controllee metav1.Object) string {
+ if controllerRef := metav1.GetControllerOf(controllee); controllerRef != nil {
+ return fmt.Sprintf("%s/%s", controllerRef.Kind, controllerRef.Name)
+ }
+ return ""
+}
+
+func describePodIPs(pod *corev1.Pod, w PrefixWriter, space string) {
+ if len(pod.Status.PodIPs) == 0 {
+ w.Write(LEVEL_0, "%sIPs:\t<none>\n", space)
+ return
+ }
+ w.Write(LEVEL_0, "%sIPs:\n", space)
+ for _, ipInfo := range pod.Status.PodIPs {
+ w.Write(LEVEL_1, "IP:\t%s\n", ipInfo.IP)
+ }
+}
+
+func describeTopologySpreadConstraints(tscs []corev1.TopologySpreadConstraint, w PrefixWriter, space string) {
+ if len(tscs) == 0 {
+ return
+ }
+
+ sort.Slice(tscs, func(i, j int) bool {
+ return tscs[i].TopologyKey < tscs[j].TopologyKey
+ })
+
+ w.Write(LEVEL_0, "%sTopology Spread Constraints:\t", space)
+ for i, tsc := range tscs {
+ if i != 0 {
+ w.Write(LEVEL_0, "%s", space)
+ w.Write(LEVEL_0, "%s", "\t")
+ }
+
+ w.Write(LEVEL_0, "%s:", tsc.TopologyKey)
+ w.Write(LEVEL_0, "%v", tsc.WhenUnsatisfiable)
+ w.Write(LEVEL_0, " when max skew %d is exceeded", tsc.MaxSkew)
+ if tsc.LabelSelector != nil {
+ w.Write(LEVEL_0, " for selector %s", metav1.FormatLabelSelector(tsc.LabelSelector))
+ }
+ w.Write(LEVEL_0, "\n")
+ }
+}
+
+func describeVolumes(volumes []corev1.Volume, w PrefixWriter, space string) {
+ if len(volumes) == 0 {
+ w.Write(LEVEL_0, "%sVolumes:\t<none>\n", space)
+ return
+ }
+
+ w.Write(LEVEL_0, "%sVolumes:\n", space)
+ for _, volume := range volumes {
+ nameIndent := ""
+ if len(space) > 0 {
+ nameIndent = " "
+ }
+ w.Write(LEVEL_1, "%s%v:\n", nameIndent, volume.Name)
+ switch {
+ case volume.VolumeSource.HostPath != nil:
+ printHostPathVolumeSource(volume.VolumeSource.HostPath, w)
+ case volume.VolumeSource.EmptyDir != nil:
+ printEmptyDirVolumeSource(volume.VolumeSource.EmptyDir, w)
+ case volume.VolumeSource.GCEPersistentDisk != nil:
+ printGCEPersistentDiskVolumeSource(volume.VolumeSource.GCEPersistentDisk, w)
+ case volume.VolumeSource.AWSElasticBlockStore != nil:
+ printAWSElasticBlockStoreVolumeSource(volume.VolumeSource.AWSElasticBlockStore, w)
+ case volume.VolumeSource.GitRepo != nil:
+ printGitRepoVolumeSource(volume.VolumeSource.GitRepo, w)
+ case volume.VolumeSource.Secret != nil:
+ printSecretVolumeSource(volume.VolumeSource.Secret, w)
+ case volume.VolumeSource.ConfigMap != nil:
+ printConfigMapVolumeSource(volume.VolumeSource.ConfigMap, w)
+ case volume.VolumeSource.NFS != nil:
+ printNFSVolumeSource(volume.VolumeSource.NFS, w)
+ case volume.VolumeSource.ISCSI != nil:
+ printISCSIVolumeSource(volume.VolumeSource.ISCSI, w)
+ case volume.VolumeSource.Glusterfs != nil:
+ printGlusterfsVolumeSource(volume.VolumeSource.Glusterfs, w)
+ case volume.VolumeSource.PersistentVolumeClaim != nil:
+ printPersistentVolumeClaimVolumeSource(volume.VolumeSource.PersistentVolumeClaim, w)
+ case volume.VolumeSource.Ephemeral != nil:
+ printEphemeralVolumeSource(volume.VolumeSource.Ephemeral, w)
+ case volume.VolumeSource.RBD != nil:
+ printRBDVolumeSource(volume.VolumeSource.RBD, w)
+ case volume.VolumeSource.Quobyte != nil:
+ printQuobyteVolumeSource(volume.VolumeSource.Quobyte, w)
+ case volume.VolumeSource.DownwardAPI != nil:
+ printDownwardAPIVolumeSource(volume.VolumeSource.DownwardAPI, w)
+ case volume.VolumeSource.AzureDisk != nil:
+ printAzureDiskVolumeSource(volume.VolumeSource.AzureDisk, w)
+ case volume.VolumeSource.VsphereVolume != nil:
+ printVsphereVolumeSource(volume.VolumeSource.VsphereVolume, w)
+ case volume.VolumeSource.Cinder != nil:
+ printCinderVolumeSource(volume.VolumeSource.Cinder, w)
+ case volume.VolumeSource.PhotonPersistentDisk != nil:
+ printPhotonPersistentDiskVolumeSource(volume.VolumeSource.PhotonPersistentDisk, w)
+ case volume.VolumeSource.PortworxVolume != nil:
+ printPortworxVolumeSource(volume.VolumeSource.PortworxVolume, w)
+ case volume.VolumeSource.ScaleIO != nil:
+ printScaleIOVolumeSource(volume.VolumeSource.ScaleIO, w)
+ case volume.VolumeSource.CephFS != nil:
+ printCephFSVolumeSource(volume.VolumeSource.CephFS, w)
+ case volume.VolumeSource.StorageOS != nil:
+ printStorageOSVolumeSource(volume.VolumeSource.StorageOS, w)
+ case volume.VolumeSource.FC != nil:
+ printFCVolumeSource(volume.VolumeSource.FC, w)
+ case volume.VolumeSource.AzureFile != nil:
+ printAzureFileVolumeSource(volume.VolumeSource.AzureFile, w)
+ case volume.VolumeSource.FlexVolume != nil:
+ printFlexVolumeSource(volume.VolumeSource.FlexVolume, w)
+ case volume.VolumeSource.Flocker != nil:
+ printFlockerVolumeSource(volume.VolumeSource.Flocker, w)
+ case volume.VolumeSource.Projected != nil:
+ printProjectedVolumeSource(volume.VolumeSource.Projected, w)
+ case volume.VolumeSource.CSI != nil:
+ printCSIVolumeSource(volume.VolumeSource.CSI, w)
+ default:
+ w.Write(LEVEL_1, "<unknown>\n")
+ }
+ }
+}
+
+func printHostPathVolumeSource(hostPath *corev1.HostPathVolumeSource, w PrefixWriter) {
+ hostPathType := ""
+ if hostPath.Type != nil {
+ hostPathType = string(*hostPath.Type)
+ }
+ w.Write(LEVEL_2, "Type:\tHostPath (bare host directory volume)\n"+
+ " Path:\t%v\n"+
+ " HostPathType:\t%v\n",
+ hostPath.Path, hostPathType)
+}
+
+func printEmptyDirVolumeSource(emptyDir *corev1.EmptyDirVolumeSource, w PrefixWriter) {
+ var sizeLimit string
+ if emptyDir.SizeLimit != nil && emptyDir.SizeLimit.Cmp(resource.Quantity{}) > 0 {
+ sizeLimit = fmt.Sprintf("%v", emptyDir.SizeLimit)
+ } else {
+ sizeLimit = "<unset>"
+ }
+ w.Write(LEVEL_2, "Type:\tEmptyDir (a temporary directory that shares a pod's lifetime)\n"+
+ " Medium:\t%v\n"+
+ " SizeLimit:\t%v\n",
+ emptyDir.Medium, sizeLimit)
+}
+
+func printGCEPersistentDiskVolumeSource(gce *corev1.GCEPersistentDiskVolumeSource, w PrefixWriter) {
+ w.Write(LEVEL_2, "Type:\tGCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)\n"+
+ " PDName:\t%v\n"+
+ " FSType:\t%v\n"+
+ " Partition:\t%v\n"+
+ " ReadOnly:\t%v\n",
+ gce.PDName, gce.FSType, gce.Partition, gce.ReadOnly)
+}
+
+func printAWSElasticBlockStoreVolumeSource(aws *corev1.AWSElasticBlockStoreVolumeSource, w PrefixWriter) {
+ w.Write(LEVEL_2, "Type:\tAWSElasticBlockStore (a Persistent Disk resource in AWS)\n"+
+ " VolumeID:\t%v\n"+
+ " FSType:\t%v\n"+
+ " Partition:\t%v\n"+
+ " ReadOnly:\t%v\n",
+ aws.VolumeID, aws.FSType, aws.Partition, aws.ReadOnly)
+}
+
+func printGitRepoVolumeSource(git *corev1.GitRepoVolumeSource, w PrefixWriter) {
+ w.Write(LEVEL_2, "Type:\tGitRepo (a volume that is pulled from git when the pod is created)\n"+
+ " Repository:\t%v\n"+
+ " Revision:\t%v\n",
+ git.Repository, git.Revision)
+}
+
+func printSecretVolumeSource(secret *corev1.SecretVolumeSource, w PrefixWriter) {
+ optional := secret.Optional != nil && *secret.Optional
+ w.Write(LEVEL_2, "Type:\tSecret (a volume populated by a Secret)\n"+
+ " SecretName:\t%v\n"+
+ " Optional:\t%v\n",
+ secret.SecretName, optional)
+}
+
+func printConfigMapVolumeSource(configMap *corev1.ConfigMapVolumeSource, w PrefixWriter) {
+ optional := configMap.Optional != nil && *configMap.Optional
+ w.Write(LEVEL_2, "Type:\tConfigMap (a volume populated by a ConfigMap)\n"+
+ " Name:\t%v\n"+
+ " Optional:\t%v\n",
+ configMap.Name, optional)
+}
+
+func printProjectedVolumeSource(projected *corev1.ProjectedVolumeSource, w PrefixWriter) {
+ w.Write(LEVEL_2, "Type:\tProjected (a volume that contains injected data from multiple sources)\n")
+ for _, source := range projected.Sources {
+ if source.Secret != nil {
+ w.Write(LEVEL_2, "SecretName:\t%v\n"+
+ " SecretOptionalName:\t%v\n",
+ source.Secret.Name, source.Secret.Optional)
+ } else if source.DownwardAPI != nil {
+ w.Write(LEVEL_2, "DownwardAPI:\ttrue\n")
+ } else if source.ConfigMap != nil {
+ w.Write(LEVEL_2, "ConfigMapName:\t%v\n"+
+ " ConfigMapOptional:\t%v\n",
+ source.ConfigMap.Name, source.ConfigMap.Optional)
+ } else if source.ServiceAccountToken != nil {
+ w.Write(LEVEL_2, "TokenExpirationSeconds:\t%d\n",
+ *source.ServiceAccountToken.ExpirationSeconds)
+ }
+ }
+}
+
+func printNFSVolumeSource(nfs *corev1.NFSVolumeSource, w PrefixWriter) {
+ w.Write(LEVEL_2, "Type:\tNFS (an NFS mount that lasts the lifetime of a pod)\n"+
+ " Server:\t%v\n"+
+ " Path:\t%v\n"+
+ " ReadOnly:\t%v\n",
+ nfs.Server, nfs.Path, nfs.ReadOnly)
+}
+
+func printQuobyteVolumeSource(quobyte *corev1.QuobyteVolumeSource, w PrefixWriter) {
+ w.Write(LEVEL_2, "Type:\tQuobyte (a Quobyte mount on the host that shares a pod's lifetime)\n"+
+ " Registry:\t%v\n"+
+ " Volume:\t%v\n"+
+ " ReadOnly:\t%v\n",
+ quobyte.Registry, quobyte.Volume, quobyte.ReadOnly)
+}
+
+func printPortworxVolumeSource(pwxVolume *corev1.PortworxVolumeSource, w PrefixWriter) {
+ w.Write(LEVEL_2, "Type:\tPortworxVolume (a Portworx Volume resource)\n"+
+ " VolumeID:\t%v\n",
+ pwxVolume.VolumeID)
+}
+
+func printISCSIVolumeSource(iscsi *corev1.ISCSIVolumeSource, w PrefixWriter) {
+ initiator := "<none>"
+ if iscsi.InitiatorName != nil {
+ initiator = *iscsi.InitiatorName
+ }
+ w.Write(LEVEL_2, "Type:\tISCSI (an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod)\n"+
+ " TargetPortal:\t%v\n"+
+ " IQN:\t%v\n"+
+ " Lun:\t%v\n"+
+ " ISCSIInterface\t%v\n"+
+ " FSType:\t%v\n"+
+ " ReadOnly:\t%v\n"+
+ " Portals:\t%v\n"+
+ " DiscoveryCHAPAuth:\t%v\n"+
+ " SessionCHAPAuth:\t%v\n"+
+ " SecretRef:\t%v\n"+
+ " InitiatorName:\t%v\n",
+ iscsi.TargetPortal, iscsi.IQN, iscsi.Lun, iscsi.ISCSIInterface, iscsi.FSType, iscsi.ReadOnly, iscsi.Portals, iscsi.DiscoveryCHAPAuth, iscsi.SessionCHAPAuth, iscsi.SecretRef, initiator)
+}
+
+func printISCSIPersistentVolumeSource(iscsi *corev1.ISCSIPersistentVolumeSource, w PrefixWriter) {
+ initiatorName := "<none>"
+ if iscsi.InitiatorName != nil {
+ initiatorName = *iscsi.InitiatorName
+ }
+ w.Write(LEVEL_2, "Type:\tISCSI (an ISCSI Disk resource that is attached to a kubelet's host machine and then exposed to the pod)\n"+
+ " TargetPortal:\t%v\n"+
+ " IQN:\t%v\n"+
+ " Lun:\t%v\n"+
+ " ISCSIInterface\t%v\n"+
+ " FSType:\t%v\n"+
+ " ReadOnly:\t%v\n"+
+ " Portals:\t%v\n"+
+ " DiscoveryCHAPAuth:\t%v\n"+
+ " SessionCHAPAuth:\t%v\n"+
+ " SecretRef:\t%v\n"+
+ " InitiatorName:\t%v\n",
+ iscsi.TargetPortal, iscsi.IQN, iscsi.Lun, iscsi.ISCSIInterface, iscsi.FSType, iscsi.ReadOnly, iscsi.Portals, iscsi.DiscoveryCHAPAuth, iscsi.SessionCHAPAuth, iscsi.SecretRef, initiatorName)
+}
+
+func printGlusterfsVolumeSource(glusterfs *corev1.GlusterfsVolumeSource, w PrefixWriter) {
+ w.Write(LEVEL_2, "Type:\tGlusterfs (a Glusterfs mount on the host that shares a pod's lifetime)\n"+
+ " EndpointsName:\t%v\n"+
+ " Path:\t%v\n"+
+ " ReadOnly:\t%v\n",
+ glusterfs.EndpointsName, glusterfs.Path, glusterfs.ReadOnly)
+}
+
+func printGlusterfsPersistentVolumeSource(glusterfs *corev1.GlusterfsPersistentVolumeSource, w PrefixWriter) {
+ endpointsNamespace := "<unset>"
+ if glusterfs.EndpointsNamespace != nil {
+ endpointsNamespace = *glusterfs.EndpointsNamespace
+ }
+ w.Write(LEVEL_2, "Type:\tGlusterfs (a Glusterfs mount on the host that shares a pod's lifetime)\n"+
+ " EndpointsName:\t%v\n"+
+ " EndpointsNamespace:\t%v\n"+
+ " Path:\t%v\n"+
+ " ReadOnly:\t%v\n",
+ glusterfs.EndpointsName, endpointsNamespace, glusterfs.Path, glusterfs.ReadOnly)
+}
+
+func printPersistentVolumeClaimVolumeSource(claim *corev1.PersistentVolumeClaimVolumeSource, w PrefixWriter) {
+ w.Write(LEVEL_2, "Type:\tPersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)\n"+
+ " ClaimName:\t%v\n"+
+ " ReadOnly:\t%v\n",
+ claim.ClaimName, claim.ReadOnly)
+}
+
+func printEphemeralVolumeSource(ephemeral *corev1.EphemeralVolumeSource, w PrefixWriter) {
+ w.Write(LEVEL_2, "Type:\tEphemeralVolume (an inline specification for a volume that gets created and deleted with the pod)\n")
+ if ephemeral.VolumeClaimTemplate != nil {
+ printPersistentVolumeClaim(NewNestedPrefixWriter(w, LEVEL_2),
+ &corev1.PersistentVolumeClaim{
+ ObjectMeta: ephemeral.VolumeClaimTemplate.ObjectMeta,
+ Spec: ephemeral.VolumeClaimTemplate.Spec,
+ }, false /* not a full PVC */)
+ }
+}
+
+func printRBDVolumeSource(rbd *corev1.RBDVolumeSource, w PrefixWriter) {
+ w.Write(LEVEL_2, "Type:\tRBD (a Rados Block Device mount on the host that shares a pod's lifetime)\n"+
+ " CephMonitors:\t%v\n"+
+ " RBDImage:\t%v\n"+
+ " FSType:\t%v\n"+
+ " RBDPool:\t%v\n"+
+ " RadosUser:\t%v\n"+
+ " Keyring:\t%v\n"+
+ " SecretRef:\t%v\n"+
+ " ReadOnly:\t%v\n",
+ rbd.CephMonitors, rbd.RBDImage, rbd.FSType, rbd.RBDPool, rbd.RadosUser, rbd.Keyring, rbd.SecretRef, rbd.ReadOnly)
+}
+
+func printRBDPersistentVolumeSource(rbd *corev1.RBDPersistentVolumeSource, w PrefixWriter) {
+ w.Write(LEVEL_2, "Type:\tRBD (a Rados Block Device mount on the host that shares a pod's lifetime)\n"+
+ " CephMonitors:\t%v\n"+
+ " RBDImage:\t%v\n"+
+ " FSType:\t%v\n"+
+ " RBDPool:\t%v\n"+
+ " RadosUser:\t%v\n"+
+ " Keyring:\t%v\n"+
+ " SecretRef:\t%v\n"+
+ " ReadOnly:\t%v\n",
+ rbd.CephMonitors, rbd.RBDImage, rbd.FSType, rbd.RBDPool, rbd.RadosUser, rbd.Keyring, rbd.SecretRef, rbd.ReadOnly)
+}
+
+func printDownwardAPIVolumeSource(d *corev1.DownwardAPIVolumeSource, w PrefixWriter) {
+ w.Write(LEVEL_2, "Type:\tDownwardAPI (a volume populated by information about the pod)\n Items:\n")
+ for _, mapping := range d.Items {
+ if mapping.FieldRef != nil {
+ w.Write(LEVEL_3, "%v -> 
%v\n", mapping.FieldRef.FieldPath, mapping.Path) + } + if mapping.ResourceFieldRef != nil { + w.Write(LEVEL_3, "%v -> %v\n", mapping.ResourceFieldRef.Resource, mapping.Path) + } + } +} + +func printAzureDiskVolumeSource(d *corev1.AzureDiskVolumeSource, w PrefixWriter) { + w.Write(LEVEL_2, "Type:\tAzureDisk (an Azure Data Disk mount on the host and bind mount to the pod)\n"+ + " DiskName:\t%v\n"+ + " DiskURI:\t%v\n"+ + " Kind: \t%v\n"+ + " FSType:\t%v\n"+ + " CachingMode:\t%v\n"+ + " ReadOnly:\t%v\n", + d.DiskName, d.DataDiskURI, *d.Kind, *d.FSType, *d.CachingMode, *d.ReadOnly) +} + +func printVsphereVolumeSource(vsphere *corev1.VsphereVirtualDiskVolumeSource, w PrefixWriter) { + w.Write(LEVEL_2, "Type:\tvSphereVolume (a Persistent Disk resource in vSphere)\n"+ + " VolumePath:\t%v\n"+ + " FSType:\t%v\n"+ + " StoragePolicyName:\t%v\n", + vsphere.VolumePath, vsphere.FSType, vsphere.StoragePolicyName) +} + +func printPhotonPersistentDiskVolumeSource(photon *corev1.PhotonPersistentDiskVolumeSource, w PrefixWriter) { + w.Write(LEVEL_2, "Type:\tPhotonPersistentDisk (a Persistent Disk resource in photon platform)\n"+ + " PdID:\t%v\n"+ + " FSType:\t%v\n", + photon.PdID, photon.FSType) +} + +func printCinderVolumeSource(cinder *corev1.CinderVolumeSource, w PrefixWriter) { + w.Write(LEVEL_2, "Type:\tCinder (a Persistent Disk resource in OpenStack)\n"+ + " VolumeID:\t%v\n"+ + " FSType:\t%v\n"+ + " ReadOnly:\t%v\n"+ + " SecretRef:\t%v\n", + cinder.VolumeID, cinder.FSType, cinder.ReadOnly, cinder.SecretRef) +} + +func printCinderPersistentVolumeSource(cinder *corev1.CinderPersistentVolumeSource, w PrefixWriter) { + w.Write(LEVEL_2, "Type:\tCinder (a Persistent Disk resource in OpenStack)\n"+ + " VolumeID:\t%v\n"+ + " FSType:\t%v\n"+ + " ReadOnly:\t%v\n"+ + " SecretRef:\t%v\n", + cinder.VolumeID, cinder.FSType, cinder.ReadOnly, cinder.SecretRef) +} + +func printScaleIOVolumeSource(sio *corev1.ScaleIOVolumeSource, w PrefixWriter) { + w.Write(LEVEL_2, "Type:\tScaleIO (a persistent volume backed by a block device in ScaleIO)\n"+ + " Gateway:\t%v\n"+ + " System:\t%v\n"+ + " Protection Domain:\t%v\n"+ + " Storage Pool:\t%v\n"+ + " Storage Mode:\t%v\n"+ + " VolumeName:\t%v\n"+ + " FSType:\t%v\n"+ + " ReadOnly:\t%v\n", + sio.Gateway, sio.System, sio.ProtectionDomain, sio.StoragePool, sio.StorageMode, sio.VolumeName, sio.FSType, sio.ReadOnly) +} + +func printScaleIOPersistentVolumeSource(sio *corev1.ScaleIOPersistentVolumeSource, w PrefixWriter) { + var secretNS, secretName string + if sio.SecretRef != nil { + secretName = sio.SecretRef.Name + secretNS = sio.SecretRef.Namespace + } + w.Write(LEVEL_2, "Type:\tScaleIO (a persistent volume backed by a block device in ScaleIO)\n"+ + " Gateway:\t%v\n"+ + " System:\t%v\n"+ + " Protection Domain:\t%v\n"+ + " Storage Pool:\t%v\n"+ + " Storage Mode:\t%v\n"+ + " VolumeName:\t%v\n"+ + " SecretName:\t%v\n"+ + " SecretNamespace:\t%v\n"+ + " FSType:\t%v\n"+ + " ReadOnly:\t%v\n", + sio.Gateway, sio.System, sio.ProtectionDomain, sio.StoragePool, sio.StorageMode, sio.VolumeName, secretName, secretNS, sio.FSType, sio.ReadOnly) +} + +func printLocalVolumeSource(ls *corev1.LocalVolumeSource, w PrefixWriter) { + w.Write(LEVEL_2, "Type:\tLocalVolume (a persistent volume backed by local storage on a node)\n"+ + " Path:\t%v\n", + ls.Path) +} + +func printCephFSVolumeSource(cephfs *corev1.CephFSVolumeSource, w PrefixWriter) { + w.Write(LEVEL_2, "Type:\tCephFS (a CephFS mount on the host that shares a pod's lifetime)\n"+ + " Monitors:\t%v\n"+ + " Path:\t%v\n"+ + " User:\t%v\n"+ + " 
SecretFile:\t%v\n"+
+ " SecretRef:\t%v\n"+
+ " ReadOnly:\t%v\n",
+ cephfs.Monitors, cephfs.Path, cephfs.User, cephfs.SecretFile, cephfs.SecretRef, cephfs.ReadOnly)
+}
+
+func printCephFSPersistentVolumeSource(cephfs *corev1.CephFSPersistentVolumeSource, w PrefixWriter) {
+ w.Write(LEVEL_2, "Type:\tCephFS (a CephFS mount on the host that shares a pod's lifetime)\n"+
+ " Monitors:\t%v\n"+
+ " Path:\t%v\n"+
+ " User:\t%v\n"+
+ " SecretFile:\t%v\n"+
+ " SecretRef:\t%v\n"+
+ " ReadOnly:\t%v\n",
+ cephfs.Monitors, cephfs.Path, cephfs.User, cephfs.SecretFile, cephfs.SecretRef, cephfs.ReadOnly)
+}
+
+func printStorageOSVolumeSource(storageos *corev1.StorageOSVolumeSource, w PrefixWriter) {
+ w.Write(LEVEL_2, "Type:\tStorageOS (a StorageOS Persistent Disk resource)\n"+
+ " VolumeName:\t%v\n"+
+ " VolumeNamespace:\t%v\n"+
+ " FSType:\t%v\n"+
+ " ReadOnly:\t%v\n",
+ storageos.VolumeName, storageos.VolumeNamespace, storageos.FSType, storageos.ReadOnly)
+}
+
+func printStorageOSPersistentVolumeSource(storageos *corev1.StorageOSPersistentVolumeSource, w PrefixWriter) {
+ w.Write(LEVEL_2, "Type:\tStorageOS (a StorageOS Persistent Disk resource)\n"+
+ " VolumeName:\t%v\n"+
+ " VolumeNamespace:\t%v\n"+
+ " FSType:\t%v\n"+
+ " ReadOnly:\t%v\n",
+ storageos.VolumeName, storageos.VolumeNamespace, storageos.FSType, storageos.ReadOnly)
+}
+
+func printFCVolumeSource(fc *corev1.FCVolumeSource, w PrefixWriter) {
+ lun := "<none>"
+ if fc.Lun != nil {
+ lun = strconv.Itoa(int(*fc.Lun))
+ }
+ w.Write(LEVEL_2, "Type:\tFC (a Fibre Channel disk)\n"+
+ " TargetWWNs:\t%v\n"+
+ " LUN:\t%v\n"+
+ " FSType:\t%v\n"+
+ " ReadOnly:\t%v\n",
+ strings.Join(fc.TargetWWNs, ", "), lun, fc.FSType, fc.ReadOnly)
+}
+
+func printAzureFileVolumeSource(azureFile *corev1.AzureFileVolumeSource, w PrefixWriter) {
+ w.Write(LEVEL_2, "Type:\tAzureFile (an Azure File Service mount on the host and bind mount to the pod)\n"+
+ " SecretName:\t%v\n"+
+ " ShareName:\t%v\n"+
+ " ReadOnly:\t%v\n",
+ azureFile.SecretName, azureFile.ShareName, azureFile.ReadOnly)
+}
+
+func printAzureFilePersistentVolumeSource(azureFile *corev1.AzureFilePersistentVolumeSource, w PrefixWriter) {
+ ns := ""
+ if azureFile.SecretNamespace != nil {
+ ns = *azureFile.SecretNamespace
+ }
+ w.Write(LEVEL_2, "Type:\tAzureFile (an Azure File Service mount on the host and bind mount to the pod)\n"+
+ " SecretName:\t%v\n"+
+ " SecretNamespace:\t%v\n"+
+ " ShareName:\t%v\n"+
+ " ReadOnly:\t%v\n",
+ azureFile.SecretName, ns, azureFile.ShareName, azureFile.ReadOnly)
+}
+
+func printFlexPersistentVolumeSource(flex *corev1.FlexPersistentVolumeSource, w PrefixWriter) {
+ w.Write(LEVEL_2, "Type:\tFlexVolume (a generic volume resource that is provisioned/attached using an exec based plugin)\n"+
+ " Driver:\t%v\n"+
+ " FSType:\t%v\n"+
+ " SecretRef:\t%v\n"+
+ " ReadOnly:\t%v\n"+
+ " Options:\t%v\n",
+ flex.Driver, flex.FSType, flex.SecretRef, flex.ReadOnly, flex.Options)
+}
+
+func printFlexVolumeSource(flex *corev1.FlexVolumeSource, w PrefixWriter) {
+ w.Write(LEVEL_2, "Type:\tFlexVolume (a generic volume resource that is provisioned/attached using an exec based plugin)\n"+
+ " Driver:\t%v\n"+
+ " FSType:\t%v\n"+
+ " SecretRef:\t%v\n"+
+ " ReadOnly:\t%v\n"+
+ " Options:\t%v\n",
+ flex.Driver, flex.FSType, flex.SecretRef, flex.ReadOnly, flex.Options)
+}
+
+func printFlockerVolumeSource(flocker *corev1.FlockerVolumeSource, w PrefixWriter) {
+ w.Write(LEVEL_2, "Type:\tFlocker (a Flocker volume mounted by the Flocker agent)\n"+
+ " DatasetName:\t%v\n"+
+ " DatasetUUID:\t%v\n",
+ flocker.DatasetName, flocker.DatasetUUID)
+}
+
+func printCSIVolumeSource(csi *corev1.CSIVolumeSource, w PrefixWriter) {
+ var readOnly bool
+ var fsType string
+ if csi.ReadOnly != nil && *csi.ReadOnly {
+ readOnly = true
+ }
+ if csi.FSType != nil {
+ fsType = *csi.FSType
+ }
+ w.Write(LEVEL_2, "Type:\tCSI (a Container Storage Interface (CSI) volume source)\n"+
+ " Driver:\t%v\n"+
+ " FSType:\t%v\n"+
+ " ReadOnly:\t%v\n",
+ csi.Driver, fsType, readOnly)
+ printCSIPersistentVolumeAttributesMultiline(w, "VolumeAttributes", csi.VolumeAttributes)
+}
+
+func printCSIPersistentVolumeSource(csi *corev1.CSIPersistentVolumeSource, w PrefixWriter) {
+ w.Write(LEVEL_2, "Type:\tCSI (a Container Storage Interface (CSI) volume source)\n"+
+ " Driver:\t%v\n"+
+ " FSType:\t%v\n"+
+ " VolumeHandle:\t%v\n"+
+ " ReadOnly:\t%v\n",
+ csi.Driver, csi.FSType, csi.VolumeHandle, csi.ReadOnly)
+ printCSIPersistentVolumeAttributesMultiline(w, "VolumeAttributes", csi.VolumeAttributes)
+}
+
+func printCSIPersistentVolumeAttributesMultiline(w PrefixWriter, title string, annotations map[string]string) {
+ printCSIPersistentVolumeAttributesMultilineIndent(w, "", title, "\t", annotations, sets.NewString())
+}
+
+func printCSIPersistentVolumeAttributesMultilineIndent(w PrefixWriter, initialIndent, title, innerIndent string, attributes map[string]string, skip sets.String) {
+ w.Write(LEVEL_2, "%s%s:%s", initialIndent, title, innerIndent)
+
+ if len(attributes) == 0 {
+ w.WriteLine("<none>")
+ return
+ }
+
+ // to print labels in the sorted order
+ keys := make([]string, 0, len(attributes))
+ for key := range attributes {
+ if skip.Has(key) {
+ continue
+ }
+ keys = append(keys, key)
+ }
+ if len(keys) == 0 {
+ w.WriteLine("<none>")
+ return
+ }
+ sort.Strings(keys)
+
+ for i, key := range keys {
+ if i != 0 {
+ w.Write(LEVEL_2, initialIndent)
+ w.Write(LEVEL_2, innerIndent)
+ }
+ line := fmt.Sprintf("%s=%s", key, attributes[key])
+ if len(line) > maxAnnotationLen {
+ w.Write(LEVEL_2, "%s...\n", line[:maxAnnotationLen])
+ } else {
+ w.Write(LEVEL_2, "%s\n", line)
+ }
+ }
+}
+
+type PersistentVolumeDescriber struct {
+ clientset.Interface
+}
+
+func (d *PersistentVolumeDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) {
+ c := d.CoreV1().PersistentVolumes()
+
+ pv, err := c.Get(context.TODO(), name, metav1.GetOptions{})
+ if err != nil {
+ return "", err
+ }
+
+ var events *corev1.EventList
+ if describerSettings.ShowEvents {
+ events, _ = searchEvents(d.CoreV1(), pv, describerSettings.ChunkSize)
+ }
+
+ return describePersistentVolume(pv, events)
+}
+
+func printVolumeNodeAffinity(w PrefixWriter, affinity *corev1.VolumeNodeAffinity) {
+ w.Write(LEVEL_0, "Node Affinity:\t")
+ if affinity == nil || affinity.Required == nil {
+ w.WriteLine("<none>")
+ return
+ }
+ w.WriteLine("")
+
+ if affinity.Required != nil {
+ w.Write(LEVEL_1, "Required Terms:\t")
+ if len(affinity.Required.NodeSelectorTerms) == 0 {
+ w.WriteLine("<none>")
+ } else {
+ w.WriteLine("")
+ for i, term := range affinity.Required.NodeSelectorTerms {
+ printNodeSelectorTermsMultilineWithIndent(w, LEVEL_2, fmt.Sprintf("Term %v", i), "\t", term.MatchExpressions)
+ }
+ }
+ }
+}
+
+// printNodeSelectorTermsMultilineWithIndent prints multiple node selector requirements with a user-defined alignment.
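+// For example (illustrative sketch): a requirement with Key "topology.kubernetes.io/zone",
+// Operator "In", and Values ["us-east-1a", "us-east-1b"] is rendered as
+// "topology.kubernetes.io/zone in [us-east-1a, us-east-1b]".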
+func printNodeSelectorTermsMultilineWithIndent(w PrefixWriter, indentLevel int, title, innerIndent string, reqs []corev1.NodeSelectorRequirement) {
+ w.Write(indentLevel, "%s:%s", title, innerIndent)
+
+ if len(reqs) == 0 {
+ w.WriteLine("<none>")
+ return
+ }
+
+ for i, req := range reqs {
+ if i != 0 {
+ w.Write(indentLevel, "%s", innerIndent)
+ }
+ exprStr := fmt.Sprintf("%s %s", req.Key, strings.ToLower(string(req.Operator)))
+ if len(req.Values) > 0 {
+ exprStr = fmt.Sprintf("%s [%s]", exprStr, strings.Join(req.Values, ", "))
+ }
+ w.Write(LEVEL_0, "%s\n", exprStr)
+ }
+}
+
+func describePersistentVolume(pv *corev1.PersistentVolume, events *corev1.EventList) (string, error) {
+ return tabbedString(func(out io.Writer) error {
+ w := NewPrefixWriter(out)
+ w.Write(LEVEL_0, "Name:\t%s\n", pv.Name)
+ printLabelsMultiline(w, "Labels", pv.ObjectMeta.Labels)
+ printAnnotationsMultiline(w, "Annotations", pv.ObjectMeta.Annotations)
+ w.Write(LEVEL_0, "Finalizers:\t%v\n", pv.ObjectMeta.Finalizers)
+ w.Write(LEVEL_0, "StorageClass:\t%s\n", storageutil.GetPersistentVolumeClass(pv))
+ if pv.ObjectMeta.DeletionTimestamp != nil {
+ w.Write(LEVEL_0, "Status:\tTerminating (lasts %s)\n", translateTimestampSince(*pv.ObjectMeta.DeletionTimestamp))
+ } else {
+ w.Write(LEVEL_0, "Status:\t%v\n", pv.Status.Phase)
+ }
+ if pv.Spec.ClaimRef != nil {
+ w.Write(LEVEL_0, "Claim:\t%s\n", pv.Spec.ClaimRef.Namespace+"/"+pv.Spec.ClaimRef.Name)
+ } else {
+ w.Write(LEVEL_0, "Claim:\t%s\n", "")
+ }
+ w.Write(LEVEL_0, "Reclaim Policy:\t%v\n", pv.Spec.PersistentVolumeReclaimPolicy)
+ w.Write(LEVEL_0, "Access Modes:\t%s\n", storageutil.GetAccessModesAsString(pv.Spec.AccessModes))
+ if pv.Spec.VolumeMode != nil {
+ w.Write(LEVEL_0, "VolumeMode:\t%v\n", *pv.Spec.VolumeMode)
+ }
+ storage := pv.Spec.Capacity[corev1.ResourceStorage]
+ w.Write(LEVEL_0, "Capacity:\t%s\n", storage.String())
+ printVolumeNodeAffinity(w, pv.Spec.NodeAffinity)
+ w.Write(LEVEL_0, "Message:\t%s\n", pv.Status.Message)
+ w.Write(LEVEL_0, "Source:\n")
+
+ switch {
+ case pv.Spec.HostPath != nil:
+ printHostPathVolumeSource(pv.Spec.HostPath, w)
+ case pv.Spec.GCEPersistentDisk != nil:
+ printGCEPersistentDiskVolumeSource(pv.Spec.GCEPersistentDisk, w)
+ case pv.Spec.AWSElasticBlockStore != nil:
+ printAWSElasticBlockStoreVolumeSource(pv.Spec.AWSElasticBlockStore, w)
+ case pv.Spec.NFS != nil:
+ printNFSVolumeSource(pv.Spec.NFS, w)
+ case pv.Spec.ISCSI != nil:
+ printISCSIPersistentVolumeSource(pv.Spec.ISCSI, w)
+ case pv.Spec.Glusterfs != nil:
+ printGlusterfsPersistentVolumeSource(pv.Spec.Glusterfs, w)
+ case pv.Spec.RBD != nil:
+ printRBDPersistentVolumeSource(pv.Spec.RBD, w)
+ case pv.Spec.Quobyte != nil:
+ printQuobyteVolumeSource(pv.Spec.Quobyte, w)
+ case pv.Spec.VsphereVolume != nil:
+ printVsphereVolumeSource(pv.Spec.VsphereVolume, w)
+ case pv.Spec.Cinder != nil:
+ printCinderPersistentVolumeSource(pv.Spec.Cinder, w)
+ case pv.Spec.AzureDisk != nil:
+ printAzureDiskVolumeSource(pv.Spec.AzureDisk, w)
+ case pv.Spec.PhotonPersistentDisk != nil:
+ printPhotonPersistentDiskVolumeSource(pv.Spec.PhotonPersistentDisk, w)
+ case pv.Spec.PortworxVolume != nil:
+ printPortworxVolumeSource(pv.Spec.PortworxVolume, w)
+ case pv.Spec.ScaleIO != nil:
+ printScaleIOPersistentVolumeSource(pv.Spec.ScaleIO, w)
+ case pv.Spec.Local != nil:
+ printLocalVolumeSource(pv.Spec.Local, w)
+ case pv.Spec.CephFS != nil:
+ printCephFSPersistentVolumeSource(pv.Spec.CephFS, w)
+ case pv.Spec.StorageOS != nil:
+ printStorageOSPersistentVolumeSource(pv.Spec.StorageOS, w)
+ case pv.Spec.FC != nil:
+ printFCVolumeSource(pv.Spec.FC, w)
+ case pv.Spec.AzureFile != nil:
+ printAzureFilePersistentVolumeSource(pv.Spec.AzureFile, w)
+ case pv.Spec.FlexVolume != nil:
+ printFlexPersistentVolumeSource(pv.Spec.FlexVolume, w)
+ case pv.Spec.Flocker != nil:
+ printFlockerVolumeSource(pv.Spec.Flocker, w)
+ case pv.Spec.CSI != nil:
+ printCSIPersistentVolumeSource(pv.Spec.CSI, w)
+ default:
+ w.Write(LEVEL_1, "<unknown>\n")
+ }
+
+ if events != nil {
+ DescribeEvents(events, w)
+ }
+
+ return nil
+ })
+}
+
+type PersistentVolumeClaimDescriber struct {
+ clientset.Interface
+}
+
+func (d *PersistentVolumeClaimDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) {
+ c := d.CoreV1().PersistentVolumeClaims(namespace)
+
+ pvc, err := c.Get(context.TODO(), name, metav1.GetOptions{})
+ if err != nil {
+ return "", err
+ }
+
+ pc := d.CoreV1().Pods(namespace)
+
+ pods, err := getPodsForPVC(pc, pvc.Name, describerSettings)
+ if err != nil {
+ return "", err
+ }
+
+ events, _ := searchEvents(d.CoreV1(), pvc, describerSettings.ChunkSize)
+
+ return describePersistentVolumeClaim(pvc, events, pods)
+}
+
+func getPodsForPVC(c corev1client.PodInterface, pvcName string, settings DescriberSettings) ([]corev1.Pod, error) {
+ nsPods, err := getPodsInChunks(c, metav1.ListOptions{Limit: settings.ChunkSize})
+ if err != nil {
+ return []corev1.Pod{}, err
+ }
+
+ var pods []corev1.Pod
+
+ for _, pod := range nsPods.Items {
+ for _, volume := range pod.Spec.Volumes {
+ if volume.VolumeSource.PersistentVolumeClaim != nil && volume.VolumeSource.PersistentVolumeClaim.ClaimName == pvcName {
+ pods = append(pods, pod)
+ }
+ }
+ }
+
+ return pods, nil
+}
+
+func describePersistentVolumeClaim(pvc *corev1.PersistentVolumeClaim, events *corev1.EventList, pods []corev1.Pod) (string, error) {
+ return tabbedString(func(out io.Writer) error {
+ w := NewPrefixWriter(out)
+ printPersistentVolumeClaim(w, pvc, true)
+ printPodsMultiline(w, "Used By", pods)
+
+ if len(pvc.Status.Conditions) > 0 {
+ w.Write(LEVEL_0, "Conditions:\n")
+ w.Write(LEVEL_1, "Type\tStatus\tLastProbeTime\tLastTransitionTime\tReason\tMessage\n")
+ w.Write(LEVEL_1, "----\t------\t-----------------\t------------------\t------\t-------\n")
+ for _, c := range pvc.Status.Conditions {
+ w.Write(LEVEL_1, "%v \t%v \t%s \t%s \t%v \t%v\n",
+ c.Type,
+ c.Status,
+ c.LastProbeTime.Time.Format(time.RFC1123Z),
+ c.LastTransitionTime.Time.Format(time.RFC1123Z),
+ c.Reason,
+ c.Message)
+ }
+ }
+ if events != nil {
+ DescribeEvents(events, w)
+ }
+
+ return nil
+ })
+}
+
+// printPersistentVolumeClaim is used for both PVCs and PersistentVolumeClaimTemplate. For the latter,
+// we need to skip some fields which have no meaning.
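+//
+// Illustrative call sites (sketch): describePersistentVolumeClaim passes a full PVC
+// with isFullPVC=true, while printEphemeralVolumeSource passes an ephemeral volume's
+// PersistentVolumeClaimTemplate with isFullPVC=false so that the name, namespace,
+// status, and finalizers fields are skipped.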
+func printPersistentVolumeClaim(w PrefixWriter, pvc *corev1.PersistentVolumeClaim, isFullPVC bool) {
+ if isFullPVC {
+ w.Write(LEVEL_0, "Name:\t%s\n", pvc.Name)
+ w.Write(LEVEL_0, "Namespace:\t%s\n", pvc.Namespace)
+ }
+ w.Write(LEVEL_0, "StorageClass:\t%s\n", storageutil.GetPersistentVolumeClaimClass(pvc))
+ if isFullPVC {
+ if pvc.ObjectMeta.DeletionTimestamp != nil {
+ w.Write(LEVEL_0, "Status:\tTerminating (lasts %s)\n", translateTimestampSince(*pvc.ObjectMeta.DeletionTimestamp))
+ } else {
+ w.Write(LEVEL_0, "Status:\t%v\n", pvc.Status.Phase)
+ }
+ }
+ w.Write(LEVEL_0, "Volume:\t%s\n", pvc.Spec.VolumeName)
+ printLabelsMultiline(w, "Labels", pvc.Labels)
+ printAnnotationsMultiline(w, "Annotations", pvc.Annotations)
+ if isFullPVC {
+ w.Write(LEVEL_0, "Finalizers:\t%v\n", pvc.ObjectMeta.Finalizers)
+ }
+ storage := pvc.Spec.Resources.Requests[corev1.ResourceStorage]
+ capacity := ""
+ accessModes := ""
+ if pvc.Spec.VolumeName != "" {
+ accessModes = storageutil.GetAccessModesAsString(pvc.Status.AccessModes)
+ storage = pvc.Status.Capacity[corev1.ResourceStorage]
+ capacity = storage.String()
+ }
+ w.Write(LEVEL_0, "Capacity:\t%s\n", capacity)
+ w.Write(LEVEL_0, "Access Modes:\t%s\n", accessModes)
+ if pvc.Spec.VolumeMode != nil {
+ w.Write(LEVEL_0, "VolumeMode:\t%v\n", *pvc.Spec.VolumeMode)
+ }
+ if pvc.Spec.DataSource != nil {
+ w.Write(LEVEL_0, "DataSource:\n")
+ if pvc.Spec.DataSource.APIGroup != nil {
+ w.Write(LEVEL_1, "APIGroup:\t%v\n", *pvc.Spec.DataSource.APIGroup)
+ }
+ w.Write(LEVEL_1, "Kind:\t%v\n", pvc.Spec.DataSource.Kind)
+ w.Write(LEVEL_1, "Name:\t%v\n", pvc.Spec.DataSource.Name)
+ }
+}
+
+func describeContainers(label string, containers []corev1.Container, containerStatuses []corev1.ContainerStatus,
+ resolverFn EnvVarResolverFunc, w PrefixWriter, space string) {
+ statuses := map[string]corev1.ContainerStatus{}
+ for _, status := range containerStatuses {
+ statuses[status.Name] = status
+ }
+
+ describeContainersLabel(containers, label, space, w)
+
+ for _, container := range containers {
+ status, ok := statuses[container.Name]
+ describeContainerBasicInfo(container, status, ok, space, w)
+ describeContainerCommand(container, w)
+ if ok {
+ describeContainerState(status, w)
+ }
+ describeContainerResource(container, w)
+ describeContainerProbe(container, w)
+ if len(container.EnvFrom) > 0 {
+ describeContainerEnvFrom(container, resolverFn, w)
+ }
+ describeContainerEnvVars(container, resolverFn, w)
+ describeContainerVolumes(container, w)
+ }
+}
+
+func describeContainersLabel(containers []corev1.Container, label, space string, w PrefixWriter) {
+ none := ""
+ if len(containers) == 0 {
+ none = " <none>"
+ }
+ w.Write(LEVEL_0, "%s%s:%s\n", space, label, none)
+}
+
+func describeContainerBasicInfo(container corev1.Container, status corev1.ContainerStatus, ok bool, space string, w PrefixWriter) {
+ nameIndent := ""
+ if len(space) > 0 {
+ nameIndent = " "
+ }
+ w.Write(LEVEL_1, "%s%v:\n", nameIndent, container.Name)
+ if ok {
+ w.Write(LEVEL_2, "Container ID:\t%s\n", status.ContainerID)
+ }
+ w.Write(LEVEL_2, "Image:\t%s\n", container.Image)
+ if ok {
+ w.Write(LEVEL_2, "Image ID:\t%s\n", status.ImageID)
+ }
+ portString := describeContainerPorts(container.Ports)
+ if strings.Contains(portString, ",") {
+ w.Write(LEVEL_2, "Ports:\t%s\n", portString)
+ } else {
+ w.Write(LEVEL_2, "Port:\t%s\n", stringOrNone(portString))
+ }
+ hostPortString := describeContainerHostPorts(container.Ports)
+ if strings.Contains(hostPortString, ",") {
hostPortString) + } else { + w.Write(LEVEL_2, "Host Port:\t%s\n", stringOrNone(hostPortString)) + } +} + +func describeContainerPorts(cPorts []corev1.ContainerPort) string { + ports := make([]string, 0, len(cPorts)) + for _, cPort := range cPorts { + ports = append(ports, fmt.Sprintf("%d/%s", cPort.ContainerPort, cPort.Protocol)) + } + return strings.Join(ports, ", ") +} + +func describeContainerHostPorts(cPorts []corev1.ContainerPort) string { + ports := make([]string, 0, len(cPorts)) + for _, cPort := range cPorts { + ports = append(ports, fmt.Sprintf("%d/%s", cPort.HostPort, cPort.Protocol)) + } + return strings.Join(ports, ", ") +} + +func describeContainerCommand(container corev1.Container, w PrefixWriter) { + if len(container.Command) > 0 { + w.Write(LEVEL_2, "Command:\n") + for _, c := range container.Command { + for _, s := range strings.Split(c, "\n") { + w.Write(LEVEL_3, "%s\n", s) + } + } + } + if len(container.Args) > 0 { + w.Write(LEVEL_2, "Args:\n") + for _, arg := range container.Args { + for _, s := range strings.Split(arg, "\n") { + w.Write(LEVEL_3, "%s\n", s) + } + } + } +} + +func describeContainerResource(container corev1.Container, w PrefixWriter) { + resources := container.Resources + if len(resources.Limits) > 0 { + w.Write(LEVEL_2, "Limits:\n") + } + for _, name := range SortedResourceNames(resources.Limits) { + quantity := resources.Limits[name] + w.Write(LEVEL_3, "%s:\t%s\n", name, quantity.String()) + } + + if len(resources.Requests) > 0 { + w.Write(LEVEL_2, "Requests:\n") + } + for _, name := range SortedResourceNames(resources.Requests) { + quantity := resources.Requests[name] + w.Write(LEVEL_3, "%s:\t%s\n", name, quantity.String()) + } +} + +func describeContainerState(status corev1.ContainerStatus, w PrefixWriter) { + describeStatus("State", status.State, w) + if status.LastTerminationState.Terminated != nil { + describeStatus("Last State", status.LastTerminationState, w) + } + w.Write(LEVEL_2, "Ready:\t%v\n", printBool(status.Ready)) + w.Write(LEVEL_2, "Restart Count:\t%d\n", status.RestartCount) +} + +func describeContainerProbe(container corev1.Container, w PrefixWriter) { + if container.LivenessProbe != nil { + probe := DescribeProbe(container.LivenessProbe) + w.Write(LEVEL_2, "Liveness:\t%s\n", probe) + } + if container.ReadinessProbe != nil { + probe := DescribeProbe(container.ReadinessProbe) + w.Write(LEVEL_2, "Readiness:\t%s\n", probe) + } + if container.StartupProbe != nil { + probe := DescribeProbe(container.StartupProbe) + w.Write(LEVEL_2, "Startup:\t%s\n", probe) + } +} + +func describeContainerVolumes(container corev1.Container, w PrefixWriter) { + // Show volumeMounts + none := "" + if len(container.VolumeMounts) == 0 { + none = "\t" + } + w.Write(LEVEL_2, "Mounts:%s\n", none) + sort.Sort(SortableVolumeMounts(container.VolumeMounts)) + for _, mount := range container.VolumeMounts { + flags := []string{} + if mount.ReadOnly { + flags = append(flags, "ro") + } else { + flags = append(flags, "rw") + } + if len(mount.SubPath) > 0 { + flags = append(flags, fmt.Sprintf("path=%q", mount.SubPath)) + } + w.Write(LEVEL_3, "%s from %s (%s)\n", mount.MountPath, mount.Name, strings.Join(flags, ",")) + } + // Show volumeDevices if exists + if len(container.VolumeDevices) > 0 { + w.Write(LEVEL_2, "Devices:%s\n", none) + sort.Sort(SortableVolumeDevices(container.VolumeDevices)) + for _, device := range container.VolumeDevices { + w.Write(LEVEL_3, "%s from %s\n", device.DevicePath, device.Name) + } + } +} + +func describeContainerEnvVars(container 
corev1.Container, resolverFn EnvVarResolverFunc, w PrefixWriter) { + none := "" + if len(container.Env) == 0 { + none = "\t<none>" + } + w.Write(LEVEL_2, "Environment:%s\n", none) + + for _, e := range container.Env { + if e.ValueFrom == nil { + for i, s := range strings.Split(e.Value, "\n") { + if i == 0 { + w.Write(LEVEL_3, "%s:\t%s\n", e.Name, s) + } else { + w.Write(LEVEL_3, "\t%s\n", s) + } + } + continue + } + + switch { + case e.ValueFrom.FieldRef != nil: + var valueFrom string + if resolverFn != nil { + valueFrom = resolverFn(e) + } + w.Write(LEVEL_3, "%s:\t%s (%s:%s)\n", e.Name, valueFrom, e.ValueFrom.FieldRef.APIVersion, e.ValueFrom.FieldRef.FieldPath) + case e.ValueFrom.ResourceFieldRef != nil: + valueFrom, err := resourcehelper.ExtractContainerResourceValue(e.ValueFrom.ResourceFieldRef, &container) + if err != nil { + valueFrom = "" + } + resource := e.ValueFrom.ResourceFieldRef.Resource + if valueFrom == "0" && (resource == "limits.cpu" || resource == "limits.memory") { + valueFrom = "node allocatable" + } + w.Write(LEVEL_3, "%s:\t%s (%s)\n", e.Name, valueFrom, resource) + case e.ValueFrom.SecretKeyRef != nil: + optional := e.ValueFrom.SecretKeyRef.Optional != nil && *e.ValueFrom.SecretKeyRef.Optional + w.Write(LEVEL_3, "%s:\t<set to the key '%s' in secret '%s'>\tOptional: %t\n", e.Name, e.ValueFrom.SecretKeyRef.Key, e.ValueFrom.SecretKeyRef.Name, optional) + case e.ValueFrom.ConfigMapKeyRef != nil: + optional := e.ValueFrom.ConfigMapKeyRef.Optional != nil && *e.ValueFrom.ConfigMapKeyRef.Optional + w.Write(LEVEL_3, "%s:\t<set to the key '%s' of config map '%s'>\tOptional: %t\n", e.Name, e.ValueFrom.ConfigMapKeyRef.Key, e.ValueFrom.ConfigMapKeyRef.Name, optional) + } + } +} + +func describeContainerEnvFrom(container corev1.Container, resolverFn EnvVarResolverFunc, w PrefixWriter) { + none := "" + if len(container.EnvFrom) == 0 { + none = "\t<none>" + } + w.Write(LEVEL_2, "Environment Variables from:%s\n", none) + + for _, e := range container.EnvFrom { + from := "" + name := "" + optional := false + if e.ConfigMapRef != nil { + from = "ConfigMap" + name = e.ConfigMapRef.Name + optional = e.ConfigMapRef.Optional != nil && *e.ConfigMapRef.Optional + } else if e.SecretRef != nil { + from = "Secret" + name = e.SecretRef.Name + optional = e.SecretRef.Optional != nil && *e.SecretRef.Optional + } + if len(e.Prefix) == 0 { + w.Write(LEVEL_3, "%s\t%s\tOptional: %t\n", name, from, optional) + } else { + w.Write(LEVEL_3, "%s\t%s with prefix '%s'\tOptional: %t\n", name, from, e.Prefix, optional) + } + } +} + +// DescribeProbe is exported for consumers in other API groups that have probes +func DescribeProbe(probe *corev1.Probe) string { + attrs := fmt.Sprintf("delay=%ds timeout=%ds period=%ds #success=%d #failure=%d", probe.InitialDelaySeconds, probe.TimeoutSeconds, probe.PeriodSeconds, probe.SuccessThreshold, probe.FailureThreshold) + switch { + case probe.Exec != nil: + return fmt.Sprintf("exec %v %s", probe.Exec.Command, attrs) + case probe.HTTPGet != nil: + url := &url.URL{} + url.Scheme = strings.ToLower(string(probe.HTTPGet.Scheme)) + if len(probe.HTTPGet.Port.String()) > 0 { + url.Host = net.JoinHostPort(probe.HTTPGet.Host, probe.HTTPGet.Port.String()) + } else { + url.Host = probe.HTTPGet.Host + } + url.Path = probe.HTTPGet.Path + return fmt.Sprintf("http-get %s %s", url.String(), attrs) + case probe.TCPSocket != nil: + return fmt.Sprintf("tcp-socket %s:%s %s", probe.TCPSocket.Host, probe.TCPSocket.Port.String(), attrs) + + case probe.GRPC != nil: + return fmt.Sprintf("grpc :%d %s %s", probe.GRPC.Port, *(probe.GRPC.Service), attrs) + } + return fmt.Sprintf("unknown
%s", attrs) +} + +type EnvVarResolverFunc func(e corev1.EnvVar) string + +// EnvValueFrom is exported for use by describers in other packages +func EnvValueRetriever(pod *corev1.Pod) EnvVarResolverFunc { + return func(e corev1.EnvVar) string { + gv, err := schema.ParseGroupVersion(e.ValueFrom.FieldRef.APIVersion) + if err != nil { + return "" + } + gvk := gv.WithKind("Pod") + internalFieldPath, _, err := scheme.Scheme.ConvertFieldLabel(gvk, e.ValueFrom.FieldRef.FieldPath, "") + if err != nil { + return "" // pod validation should catch this on create + } + + valueFrom, err := fieldpath.ExtractFieldPathAsString(pod, internalFieldPath) + if err != nil { + return "" // pod validation should catch this on create + } + + return valueFrom + } +} + +func describeStatus(stateName string, state corev1.ContainerState, w PrefixWriter) { + switch { + case state.Running != nil: + w.Write(LEVEL_2, "%s:\tRunning\n", stateName) + w.Write(LEVEL_3, "Started:\t%v\n", state.Running.StartedAt.Time.Format(time.RFC1123Z)) + case state.Waiting != nil: + w.Write(LEVEL_2, "%s:\tWaiting\n", stateName) + if state.Waiting.Reason != "" { + w.Write(LEVEL_3, "Reason:\t%s\n", state.Waiting.Reason) + } + case state.Terminated != nil: + w.Write(LEVEL_2, "%s:\tTerminated\n", stateName) + if state.Terminated.Reason != "" { + w.Write(LEVEL_3, "Reason:\t%s\n", state.Terminated.Reason) + } + if state.Terminated.Message != "" { + w.Write(LEVEL_3, "Message:\t%s\n", state.Terminated.Message) + } + w.Write(LEVEL_3, "Exit Code:\t%d\n", state.Terminated.ExitCode) + if state.Terminated.Signal > 0 { + w.Write(LEVEL_3, "Signal:\t%d\n", state.Terminated.Signal) + } + w.Write(LEVEL_3, "Started:\t%s\n", state.Terminated.StartedAt.Time.Format(time.RFC1123Z)) + w.Write(LEVEL_3, "Finished:\t%s\n", state.Terminated.FinishedAt.Time.Format(time.RFC1123Z)) + default: + w.Write(LEVEL_2, "%s:\tWaiting\n", stateName) + } +} + +func describeVolumeClaimTemplates(templates []corev1.PersistentVolumeClaim, w PrefixWriter) { + if len(templates) == 0 { + w.Write(LEVEL_0, "Volume Claims:\t\n") + return + } + w.Write(LEVEL_0, "Volume Claims:\n") + for _, pvc := range templates { + w.Write(LEVEL_1, "Name:\t%s\n", pvc.Name) + w.Write(LEVEL_1, "StorageClass:\t%s\n", storageutil.GetPersistentVolumeClaimClass(&pvc)) + printLabelsMultilineWithIndent(w, " ", "Labels", "\t", pvc.Labels, sets.NewString()) + printLabelsMultilineWithIndent(w, " ", "Annotations", "\t", pvc.Annotations, sets.NewString()) + if capacity, ok := pvc.Spec.Resources.Requests[corev1.ResourceStorage]; ok { + w.Write(LEVEL_1, "Capacity:\t%s\n", capacity.String()) + } else { + w.Write(LEVEL_1, "Capacity:\t%s\n", "") + } + w.Write(LEVEL_1, "Access Modes:\t%s\n", pvc.Spec.AccessModes) + } +} + +func printBoolPtr(value *bool) string { + if value != nil { + return printBool(*value) + } + + return "" +} + +func printBool(value bool) string { + if value { + return "True" + } + + return "False" +} + +// ReplicationControllerDescriber generates information about a replication controller +// and the pods it has created. 
+type ReplicationControllerDescriber struct { + clientset.Interface +} + +func (d *ReplicationControllerDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + rc := d.CoreV1().ReplicationControllers(namespace) + pc := d.CoreV1().Pods(namespace) + + controller, err := rc.Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + + selector := labels.SelectorFromSet(controller.Spec.Selector) + running, waiting, succeeded, failed, err := getPodStatusForController(pc, selector, controller.UID, describerSettings) + if err != nil { + return "", err + } + + var events *corev1.EventList + if describerSettings.ShowEvents { + events, _ = searchEvents(d.CoreV1(), controller, describerSettings.ChunkSize) + } + + return describeReplicationController(controller, events, running, waiting, succeeded, failed) +} + +func describeReplicationController(controller *corev1.ReplicationController, events *corev1.EventList, running, waiting, succeeded, failed int) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", controller.Name) + w.Write(LEVEL_0, "Namespace:\t%s\n", controller.Namespace) + w.Write(LEVEL_0, "Selector:\t%s\n", labels.FormatLabels(controller.Spec.Selector)) + printLabelsMultiline(w, "Labels", controller.Labels) + printAnnotationsMultiline(w, "Annotations", controller.Annotations) + w.Write(LEVEL_0, "Replicas:\t%d current / %d desired\n", controller.Status.Replicas, *controller.Spec.Replicas) + w.Write(LEVEL_0, "Pods Status:\t%d Running / %d Waiting / %d Succeeded / %d Failed\n", running, waiting, succeeded, failed) + DescribePodTemplate(controller.Spec.Template, w) + if len(controller.Status.Conditions) > 0 { + w.Write(LEVEL_0, "Conditions:\n Type\tStatus\tReason\n") + w.Write(LEVEL_1, "----\t------\t------\n") + for _, c := range controller.Status.Conditions { + w.Write(LEVEL_1, "%v \t%v\t%v\n", c.Type, c.Status, c.Reason) + } + } + if events != nil { + DescribeEvents(events, w) + } + return nil + }) +} + +func DescribePodTemplate(template *corev1.PodTemplateSpec, w PrefixWriter) { + w.Write(LEVEL_0, "Pod Template:\n") + if template == nil { + w.Write(LEVEL_1, "") + return + } + printLabelsMultiline(w, " Labels", template.Labels) + if len(template.Annotations) > 0 { + printAnnotationsMultiline(w, " Annotations", template.Annotations) + } + if len(template.Spec.ServiceAccountName) > 0 { + w.Write(LEVEL_1, "Service Account:\t%s\n", template.Spec.ServiceAccountName) + } + if len(template.Spec.InitContainers) > 0 { + describeContainers("Init Containers", template.Spec.InitContainers, nil, nil, w, " ") + } + describeContainers("Containers", template.Spec.Containers, nil, nil, w, " ") + describeVolumes(template.Spec.Volumes, w, " ") + describeTopologySpreadConstraints(template.Spec.TopologySpreadConstraints, w, " ") + if len(template.Spec.PriorityClassName) > 0 { + w.Write(LEVEL_1, "Priority Class Name:\t%s\n", template.Spec.PriorityClassName) + } +} + +// ReplicaSetDescriber generates information about a ReplicaSet and the pods it has created. 
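+// +// Unlike a ReplicationController, whose selector is a plain map handled with +// labels.SelectorFromSet above, a ReplicaSet carries a *metav1.LabelSelector +// and is converted below with: +// +//	selector, err := metav1.LabelSelectorAsSelector(rs.Spec.Selector) // handles matchLabels and matchExpressions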
+type ReplicaSetDescriber struct { + clientset.Interface +} + +func (d *ReplicaSetDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + rsc := d.AppsV1().ReplicaSets(namespace) + pc := d.CoreV1().Pods(namespace) + + rs, err := rsc.Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + + selector, err := metav1.LabelSelectorAsSelector(rs.Spec.Selector) + if err != nil { + return "", err + } + + running, waiting, succeeded, failed, getPodErr := getPodStatusForController(pc, selector, rs.UID, describerSettings) + + var events *corev1.EventList + if describerSettings.ShowEvents { + events, _ = searchEvents(d.CoreV1(), rs, describerSettings.ChunkSize) + } + + return describeReplicaSet(rs, events, running, waiting, succeeded, failed, getPodErr) +} + +func describeReplicaSet(rs *appsv1.ReplicaSet, events *corev1.EventList, running, waiting, succeeded, failed int, getPodErr error) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", rs.Name) + w.Write(LEVEL_0, "Namespace:\t%s\n", rs.Namespace) + w.Write(LEVEL_0, "Selector:\t%s\n", metav1.FormatLabelSelector(rs.Spec.Selector)) + printLabelsMultiline(w, "Labels", rs.Labels) + printAnnotationsMultiline(w, "Annotations", rs.Annotations) + if controlledBy := printController(rs); len(controlledBy) > 0 { + w.Write(LEVEL_0, "Controlled By:\t%s\n", controlledBy) + } + w.Write(LEVEL_0, "Replicas:\t%d current / %d desired\n", rs.Status.Replicas, *rs.Spec.Replicas) + w.Write(LEVEL_0, "Pods Status:\t") + if getPodErr != nil { + w.Write(LEVEL_0, "error in fetching pods: %s\n", getPodErr) + } else { + w.Write(LEVEL_0, "%d Running / %d Waiting / %d Succeeded / %d Failed\n", running, waiting, succeeded, failed) + } + DescribePodTemplate(&rs.Spec.Template, w) + if len(rs.Status.Conditions) > 0 { + w.Write(LEVEL_0, "Conditions:\n Type\tStatus\tReason\n") + w.Write(LEVEL_1, "----\t------\t------\n") + for _, c := range rs.Status.Conditions { + w.Write(LEVEL_1, "%v \t%v\t%v\n", c.Type, c.Status, c.Reason) + } + } + if events != nil { + DescribeEvents(events, w) + } + return nil + }) +} + +// JobDescriber generates information about a job and the pods it has created. 
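+// +// The pod status line written below is Ready-aware when job.Status.Ready is +// populated; an illustrative rendering: +// +//	Pods Statuses:  1 Active (1 Ready) / 3 Succeeded / 0 Failed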
+type JobDescriber struct { + clientset.Interface +} + +func (d *JobDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + job, err := d.BatchV1().Jobs(namespace).Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + + var events *corev1.EventList + if describerSettings.ShowEvents { + events, _ = searchEvents(d.CoreV1(), job, describerSettings.ChunkSize) + } + + return describeJob(job, events) +} + +func describeJob(job *batchv1.Job, events *corev1.EventList) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", job.Name) + w.Write(LEVEL_0, "Namespace:\t%s\n", job.Namespace) + if selector, err := metav1.LabelSelectorAsSelector(job.Spec.Selector); err == nil { + w.Write(LEVEL_0, "Selector:\t%s\n", selector) + } else { + w.Write(LEVEL_0, "Selector:\tFailed to get selector: %s\n", err) + } + printLabelsMultiline(w, "Labels", job.Labels) + printAnnotationsMultiline(w, "Annotations", job.Annotations) + if controlledBy := printController(job); len(controlledBy) > 0 { + w.Write(LEVEL_0, "Controlled By:\t%s\n", controlledBy) + } + if job.Spec.Parallelism != nil { + w.Write(LEVEL_0, "Parallelism:\t%d\n", *job.Spec.Parallelism) + } + if job.Spec.Completions != nil { + w.Write(LEVEL_0, "Completions:\t%d\n", *job.Spec.Completions) + } else { + w.Write(LEVEL_0, "Completions:\t<unset>\n") + } + if job.Spec.CompletionMode != nil { + w.Write(LEVEL_0, "Completion Mode:\t%s\n", *job.Spec.CompletionMode) + } + if job.Status.StartTime != nil { + w.Write(LEVEL_0, "Start Time:\t%s\n", job.Status.StartTime.Time.Format(time.RFC1123Z)) + } + if job.Status.CompletionTime != nil { + w.Write(LEVEL_0, "Completed At:\t%s\n", job.Status.CompletionTime.Time.Format(time.RFC1123Z)) + } + if job.Status.StartTime != nil && job.Status.CompletionTime != nil { + w.Write(LEVEL_0, "Duration:\t%s\n", duration.HumanDuration(job.Status.CompletionTime.Sub(job.Status.StartTime.Time))) + } + if job.Spec.ActiveDeadlineSeconds != nil { + w.Write(LEVEL_0, "Active Deadline Seconds:\t%ds\n", *job.Spec.ActiveDeadlineSeconds) + } + if job.Status.Ready == nil { + w.Write(LEVEL_0, "Pods Statuses:\t%d Active / %d Succeeded / %d Failed\n", job.Status.Active, job.Status.Succeeded, job.Status.Failed) + } else { + w.Write(LEVEL_0, "Pods Statuses:\t%d Active (%d Ready) / %d Succeeded / %d Failed\n", job.Status.Active, *job.Status.Ready, job.Status.Succeeded, job.Status.Failed) + } + if job.Spec.CompletionMode != nil && *job.Spec.CompletionMode == batchv1.IndexedCompletion { + w.Write(LEVEL_0, "Completed Indexes:\t%s\n", capIndexesListOrNone(job.Status.CompletedIndexes, 50)) + } + DescribePodTemplate(&job.Spec.Template, w) + if events != nil { + DescribeEvents(events, w) + } + return nil + }) +} + +func capIndexesListOrNone(indexes string, softLimit int) string { + if len(indexes) == 0 { + return "<none>" + } + ix := softLimit + for ; ix < len(indexes); ix++ { + if indexes[ix] == ',' { + break + } + } + if ix >= len(indexes) { + return indexes + } + return indexes[:ix+1] + "..." +} + +// CronJobDescriber generates information about a cron job and the jobs it has created.
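+// +// capIndexesListOrNone above keeps the Completed Indexes line readable by +// cutting at the first comma at or after softLimit; for example (softLimit=5, +// illustrative values): +// +//	capIndexesListOrNone("1-3,7,10-12", 5) // "1-3,7,..." +//	capIndexesListOrNone("", 5)            // "<none>"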
+type CronJobDescriber struct { + client clientset.Interface +} + +func (d *CronJobDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + var events *corev1.EventList + + cronJob, err := d.client.BatchV1().CronJobs(namespace).Get(context.TODO(), name, metav1.GetOptions{}) + if err == nil { + if describerSettings.ShowEvents { + events, _ = searchEvents(d.client.CoreV1(), cronJob, describerSettings.ChunkSize) + } + return describeCronJob(cronJob, events) + } + + // TODO: drop this condition when beta disappears in 1.25 + cronJobBeta, err := d.client.BatchV1beta1().CronJobs(namespace).Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + if describerSettings.ShowEvents { + events, _ = searchEvents(d.client.CoreV1(), cronJobBeta, describerSettings.ChunkSize) + } + return describeCronJobBeta(cronJobBeta, events) +} + +func describeCronJob(cronJob *batchv1.CronJob, events *corev1.EventList) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", cronJob.Name) + w.Write(LEVEL_0, "Namespace:\t%s\n", cronJob.Namespace) + printLabelsMultiline(w, "Labels", cronJob.Labels) + printAnnotationsMultiline(w, "Annotations", cronJob.Annotations) + w.Write(LEVEL_0, "Schedule:\t%s\n", cronJob.Spec.Schedule) + w.Write(LEVEL_0, "Concurrency Policy:\t%s\n", cronJob.Spec.ConcurrencyPolicy) + w.Write(LEVEL_0, "Suspend:\t%s\n", printBoolPtr(cronJob.Spec.Suspend)) + if cronJob.Spec.SuccessfulJobsHistoryLimit != nil { + w.Write(LEVEL_0, "Successful Job History Limit:\t%d\n", *cronJob.Spec.SuccessfulJobsHistoryLimit) + } else { + w.Write(LEVEL_0, "Successful Job History Limit:\t\n") + } + if cronJob.Spec.FailedJobsHistoryLimit != nil { + w.Write(LEVEL_0, "Failed Job History Limit:\t%d\n", *cronJob.Spec.FailedJobsHistoryLimit) + } else { + w.Write(LEVEL_0, "Failed Job History Limit:\t\n") + } + if cronJob.Spec.StartingDeadlineSeconds != nil { + w.Write(LEVEL_0, "Starting Deadline Seconds:\t%ds\n", *cronJob.Spec.StartingDeadlineSeconds) + } else { + w.Write(LEVEL_0, "Starting Deadline Seconds:\t\n") + } + describeJobTemplate(cronJob.Spec.JobTemplate, w) + if cronJob.Status.LastScheduleTime != nil { + w.Write(LEVEL_0, "Last Schedule Time:\t%s\n", cronJob.Status.LastScheduleTime.Time.Format(time.RFC1123Z)) + } else { + w.Write(LEVEL_0, "Last Schedule Time:\t\n") + } + printActiveJobs(w, "Active Jobs", cronJob.Status.Active) + if events != nil { + DescribeEvents(events, w) + } + return nil + }) +} + +func describeJobTemplate(jobTemplate batchv1.JobTemplateSpec, w PrefixWriter) { + if jobTemplate.Spec.Selector != nil { + if selector, err := metav1.LabelSelectorAsSelector(jobTemplate.Spec.Selector); err == nil { + w.Write(LEVEL_0, "Selector:\t%s\n", selector) + } else { + w.Write(LEVEL_0, "Selector:\tFailed to get selector: %s\n", err) + } + } else { + w.Write(LEVEL_0, "Selector:\t\n") + } + if jobTemplate.Spec.Parallelism != nil { + w.Write(LEVEL_0, "Parallelism:\t%d\n", *jobTemplate.Spec.Parallelism) + } else { + w.Write(LEVEL_0, "Parallelism:\t\n") + } + if jobTemplate.Spec.Completions != nil { + w.Write(LEVEL_0, "Completions:\t%d\n", *jobTemplate.Spec.Completions) + } else { + w.Write(LEVEL_0, "Completions:\t\n") + } + if jobTemplate.Spec.ActiveDeadlineSeconds != nil { + w.Write(LEVEL_0, "Active Deadline Seconds:\t%ds\n", *jobTemplate.Spec.ActiveDeadlineSeconds) + } + DescribePodTemplate(&jobTemplate.Spec.Template, w) +} + +func describeCronJobBeta(cronJob 
*batchv1beta1.CronJob, events *corev1.EventList) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", cronJob.Name) + w.Write(LEVEL_0, "Namespace:\t%s\n", cronJob.Namespace) + printLabelsMultiline(w, "Labels", cronJob.Labels) + printAnnotationsMultiline(w, "Annotations", cronJob.Annotations) + w.Write(LEVEL_0, "Schedule:\t%s\n", cronJob.Spec.Schedule) + w.Write(LEVEL_0, "Concurrency Policy:\t%s\n", cronJob.Spec.ConcurrencyPolicy) + w.Write(LEVEL_0, "Suspend:\t%s\n", printBoolPtr(cronJob.Spec.Suspend)) + if cronJob.Spec.SuccessfulJobsHistoryLimit != nil { + w.Write(LEVEL_0, "Successful Job History Limit:\t%d\n", *cronJob.Spec.SuccessfulJobsHistoryLimit) + } else { + w.Write(LEVEL_0, "Successful Job History Limit:\t\n") + } + if cronJob.Spec.FailedJobsHistoryLimit != nil { + w.Write(LEVEL_0, "Failed Job History Limit:\t%d\n", *cronJob.Spec.FailedJobsHistoryLimit) + } else { + w.Write(LEVEL_0, "Failed Job History Limit:\t\n") + } + if cronJob.Spec.StartingDeadlineSeconds != nil { + w.Write(LEVEL_0, "Starting Deadline Seconds:\t%ds\n", *cronJob.Spec.StartingDeadlineSeconds) + } else { + w.Write(LEVEL_0, "Starting Deadline Seconds:\t\n") + } + describeJobTemplateBeta(cronJob.Spec.JobTemplate, w) + if cronJob.Status.LastScheduleTime != nil { + w.Write(LEVEL_0, "Last Schedule Time:\t%s\n", cronJob.Status.LastScheduleTime.Time.Format(time.RFC1123Z)) + } else { + w.Write(LEVEL_0, "Last Schedule Time:\t\n") + } + printActiveJobs(w, "Active Jobs", cronJob.Status.Active) + if events != nil { + DescribeEvents(events, w) + } + return nil + }) +} + +func describeJobTemplateBeta(jobTemplate batchv1beta1.JobTemplateSpec, w PrefixWriter) { + if jobTemplate.Spec.Selector != nil { + if selector, err := metav1.LabelSelectorAsSelector(jobTemplate.Spec.Selector); err == nil { + w.Write(LEVEL_0, "Selector:\t%s\n", selector) + } else { + w.Write(LEVEL_0, "Selector:\tFailed to get selector: %s\n", err) + } + } else { + w.Write(LEVEL_0, "Selector:\t\n") + } + if jobTemplate.Spec.Parallelism != nil { + w.Write(LEVEL_0, "Parallelism:\t%d\n", *jobTemplate.Spec.Parallelism) + } else { + w.Write(LEVEL_0, "Parallelism:\t\n") + } + if jobTemplate.Spec.Completions != nil { + w.Write(LEVEL_0, "Completions:\t%d\n", *jobTemplate.Spec.Completions) + } else { + w.Write(LEVEL_0, "Completions:\t\n") + } + if jobTemplate.Spec.ActiveDeadlineSeconds != nil { + w.Write(LEVEL_0, "Active Deadline Seconds:\t%ds\n", *jobTemplate.Spec.ActiveDeadlineSeconds) + } + DescribePodTemplate(&jobTemplate.Spec.Template, w) +} + +func printActiveJobs(w PrefixWriter, title string, jobs []corev1.ObjectReference) { + w.Write(LEVEL_0, "%s:\t", title) + if len(jobs) == 0 { + w.WriteLine("") + return + } + + for i, job := range jobs { + if i != 0 { + w.Write(LEVEL_0, ", ") + } + w.Write(LEVEL_0, "%s", job.Name) + } + w.WriteLine("") +} + +// DaemonSetDescriber generates information about a daemon set and the pods it has created. 
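+// +// The CronJob describer above illustrates the fallback pattern used throughout +// this file for APIs that recently graduated: query the GA group first and only +// fall back to the beta group when the GA get fails, roughly (sketch): +// +//	if cj, err := c.BatchV1().CronJobs(ns).Get(ctx, name, opts); err == nil { +//		return describeCronJob(cj, events) +//	} +//	// otherwise retry against batch/v1beta1 until that API is removed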
+type DaemonSetDescriber struct { + clientset.Interface +} + +func (d *DaemonSetDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + dc := d.AppsV1().DaemonSets(namespace) + pc := d.CoreV1().Pods(namespace) + + daemon, err := dc.Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + + selector, err := metav1.LabelSelectorAsSelector(daemon.Spec.Selector) + if err != nil { + return "", err + } + running, waiting, succeeded, failed, err := getPodStatusForController(pc, selector, daemon.UID, describerSettings) + if err != nil { + return "", err + } + + var events *corev1.EventList + if describerSettings.ShowEvents { + events, _ = searchEvents(d.CoreV1(), daemon, describerSettings.ChunkSize) + } + + return describeDaemonSet(daemon, events, running, waiting, succeeded, failed) +} + +func describeDaemonSet(daemon *appsv1.DaemonSet, events *corev1.EventList, running, waiting, succeeded, failed int) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", daemon.Name) + selector, err := metav1.LabelSelectorAsSelector(daemon.Spec.Selector) + if err != nil { + // this shouldn't happen if LabelSelector passed validation + return err + } + w.Write(LEVEL_0, "Selector:\t%s\n", selector) + w.Write(LEVEL_0, "Node-Selector:\t%s\n", labels.FormatLabels(daemon.Spec.Template.Spec.NodeSelector)) + printLabelsMultiline(w, "Labels", daemon.Labels) + printAnnotationsMultiline(w, "Annotations", daemon.Annotations) + w.Write(LEVEL_0, "Desired Number of Nodes Scheduled: %d\n", daemon.Status.DesiredNumberScheduled) + w.Write(LEVEL_0, "Current Number of Nodes Scheduled: %d\n", daemon.Status.CurrentNumberScheduled) + w.Write(LEVEL_0, "Number of Nodes Scheduled with Up-to-date Pods: %d\n", daemon.Status.UpdatedNumberScheduled) + w.Write(LEVEL_0, "Number of Nodes Scheduled with Available Pods: %d\n", daemon.Status.NumberAvailable) + w.Write(LEVEL_0, "Number of Nodes Misscheduled: %d\n", daemon.Status.NumberMisscheduled) + w.Write(LEVEL_0, "Pods Status:\t%d Running / %d Waiting / %d Succeeded / %d Failed\n", running, waiting, succeeded, failed) + DescribePodTemplate(&daemon.Spec.Template, w) + if events != nil { + DescribeEvents(events, w) + } + return nil + }) +} + +// SecretDescriber generates information about a secret +type SecretDescriber struct { + clientset.Interface +} + +func (d *SecretDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + c := d.CoreV1().Secrets(namespace) + + secret, err := c.Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + + return describeSecret(secret) +} + +func describeSecret(secret *corev1.Secret) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", secret.Name) + w.Write(LEVEL_0, "Namespace:\t%s\n", secret.Namespace) + printLabelsMultiline(w, "Labels", secret.Labels) + printAnnotationsMultiline(w, "Annotations", secret.Annotations) + + w.Write(LEVEL_0, "\nType:\t%s\n", secret.Type) + + w.Write(LEVEL_0, "\nData\n====\n") + for k, v := range secret.Data { + switch { + case k == corev1.ServiceAccountTokenKey && secret.Type == corev1.SecretTypeServiceAccountToken: + w.Write(LEVEL_0, "%s:\t%s\n", k, string(v)) + default: + w.Write(LEVEL_0, "%s:\t%d bytes\n", k, len(v)) + } + } + + return nil + }) +} + +type IngressDescriber struct { + client clientset.Interface +} + +func (i 
*IngressDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + var events *corev1.EventList + + // try ingress/v1 first (v1.19) and fallback to ingress/v1beta if an err occurs + netV1, err := i.client.NetworkingV1().Ingresses(namespace).Get(context.TODO(), name, metav1.GetOptions{}) + if err == nil { + if describerSettings.ShowEvents { + events, _ = searchEvents(i.client.CoreV1(), netV1, describerSettings.ChunkSize) + } + return i.describeIngressV1(netV1, events) + } + netV1beta1, err := i.client.NetworkingV1beta1().Ingresses(namespace).Get(context.TODO(), name, metav1.GetOptions{}) + if err == nil { + if describerSettings.ShowEvents { + events, _ = searchEvents(i.client.CoreV1(), netV1beta1, describerSettings.ChunkSize) + } + return i.describeIngressV1beta1(netV1beta1, events) + } + return "", err +} + +func (i *IngressDescriber) describeBackendV1beta1(ns string, backend *networkingv1beta1.IngressBackend) string { + endpoints, err := i.client.CoreV1().Endpoints(ns).Get(context.TODO(), backend.ServiceName, metav1.GetOptions{}) + if err != nil { + return fmt.Sprintf("<error: %v>", err) + } + service, err := i.client.CoreV1().Services(ns).Get(context.TODO(), backend.ServiceName, metav1.GetOptions{}) + if err != nil { + return fmt.Sprintf("<error: %v>", err) + } + spName := "" + for i := range service.Spec.Ports { + sp := &service.Spec.Ports[i] + switch backend.ServicePort.Type { + case intstr.String: + if backend.ServicePort.StrVal == sp.Name { + spName = sp.Name + } + case intstr.Int: + if int32(backend.ServicePort.IntVal) == sp.Port { + spName = sp.Name + } + } + } + return formatEndpoints(endpoints, sets.NewString(spName)) +} + +func (i *IngressDescriber) describeBackendV1(ns string, backend *networkingv1.IngressBackend) string { + + if backend.Service != nil { + sb := serviceBackendStringer(backend.Service) + endpoints, err := i.client.CoreV1().Endpoints(ns).Get(context.TODO(), backend.Service.Name, metav1.GetOptions{}) + if err != nil { + return fmt.Sprintf("%v (<error: %v>)", sb, err) + } + service, err := i.client.CoreV1().Services(ns).Get(context.TODO(), backend.Service.Name, metav1.GetOptions{}) + if err != nil { + return fmt.Sprintf("%v(<error: %v>)", sb, err) + } + spName := "" + for i := range service.Spec.Ports { + sp := &service.Spec.Ports[i] + if backend.Service.Port.Number != 0 && backend.Service.Port.Number == sp.Port { + spName = sp.Name + } else if len(backend.Service.Port.Name) > 0 && backend.Service.Port.Name == sp.Name { + spName = sp.Name + } + } + ep := formatEndpoints(endpoints, sets.NewString(spName)) + return fmt.Sprintf("%s (%s)", sb, ep) + } + if backend.Resource != nil { + ic := backend.Resource + apiGroup := "<none>" + if ic.APIGroup != nil { + apiGroup = fmt.Sprintf("%v", *ic.APIGroup) + } + return fmt.Sprintf("APIGroup: %v, Kind: %v, Name: %v", apiGroup, ic.Kind, ic.Name) + } + return "" +} + +func (i *IngressDescriber) describeIngressV1(ing *networkingv1.Ingress, events *corev1.EventList) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%v\n", ing.Name) + printLabelsMultiline(w, "Labels", ing.Labels) + w.Write(LEVEL_0, "Namespace:\t%v\n", ing.Namespace) + w.Write(LEVEL_0, "Address:\t%v\n", ingressLoadBalancerStatusStringerV1(ing.Status.LoadBalancer, true)) + ingressClassName := "<none>" + if ing.Spec.IngressClassName != nil { + ingressClassName = *ing.Spec.IngressClassName + } + w.Write(LEVEL_0, "Ingress Class:\t%v\n", ingressClassName) + def := ing.Spec.DefaultBackend + ns := ing.Namespace +
defaultBackendDescribe := "" + if def != nil { + defaultBackendDescribe = i.describeBackendV1(ns, def) + } + w.Write(LEVEL_0, "Default backend:\t%s\n", defaultBackendDescribe) + if len(ing.Spec.TLS) != 0 { + describeIngressTLSV1(w, ing.Spec.TLS) + } + w.Write(LEVEL_0, "Rules:\n Host\tPath\tBackends\n") + w.Write(LEVEL_1, "----\t----\t--------\n") + count := 0 + for _, rules := range ing.Spec.Rules { + + if rules.HTTP == nil { + continue + } + count++ + host := rules.Host + if len(host) == 0 { + host = "*" + } + w.Write(LEVEL_1, "%s\t\n", host) + for _, path := range rules.HTTP.Paths { + w.Write(LEVEL_2, "\t%s \t%s\n", path.Path, i.describeBackendV1(ing.Namespace, &path.Backend)) + } + } + if count == 0 { + w.Write(LEVEL_1, "%s\t%s\t%s\n", "*", "*", defaultBackendDescribe) + } + printAnnotationsMultiline(w, "Annotations", ing.Annotations) + + if events != nil { + DescribeEvents(events, w) + } + return nil + }) +} + +func (i *IngressDescriber) describeIngressV1beta1(ing *networkingv1beta1.Ingress, events *corev1.EventList) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%v\n", ing.Name) + printLabelsMultiline(w, "Labels", ing.Labels) + w.Write(LEVEL_0, "Namespace:\t%v\n", ing.Namespace) + w.Write(LEVEL_0, "Address:\t%v\n", ingressLoadBalancerStatusStringerV1beta1(ing.Status.LoadBalancer, true)) + ingressClassName := "" + if ing.Spec.IngressClassName != nil { + ingressClassName = *ing.Spec.IngressClassName + } + w.Write(LEVEL_0, "Ingress Class:\t%v\n", ingressClassName) + def := ing.Spec.Backend + ns := ing.Namespace + if def == nil { + w.Write(LEVEL_0, "Default backend:\t\n") + } else { + w.Write(LEVEL_0, "Default backend:\t%s\n", i.describeBackendV1beta1(ns, def)) + } + if len(ing.Spec.TLS) != 0 { + describeIngressTLSV1beta1(w, ing.Spec.TLS) + } + w.Write(LEVEL_0, "Rules:\n Host\tPath\tBackends\n") + w.Write(LEVEL_1, "----\t----\t--------\n") + count := 0 + for _, rules := range ing.Spec.Rules { + + if rules.HTTP == nil { + continue + } + count++ + host := rules.Host + if len(host) == 0 { + host = "*" + } + w.Write(LEVEL_1, "%s\t\n", host) + for _, path := range rules.HTTP.Paths { + w.Write(LEVEL_2, "\t%s \t%s (%s)\n", path.Path, backendStringer(&path.Backend), i.describeBackendV1beta1(ing.Namespace, &path.Backend)) + } + } + if count == 0 { + w.Write(LEVEL_1, "%s\t%s \t\n", "*", "*") + } + printAnnotationsMultiline(w, "Annotations", ing.Annotations) + + if events != nil { + DescribeEvents(events, w) + } + return nil + }) +} + +func describeIngressTLSV1beta1(w PrefixWriter, ingTLS []networkingv1beta1.IngressTLS) { + w.Write(LEVEL_0, "TLS:\n") + for _, t := range ingTLS { + if t.SecretName == "" { + w.Write(LEVEL_1, "SNI routes %v\n", strings.Join(t.Hosts, ",")) + } else { + w.Write(LEVEL_1, "%v terminates %v\n", t.SecretName, strings.Join(t.Hosts, ",")) + } + } +} + +func describeIngressTLSV1(w PrefixWriter, ingTLS []networkingv1.IngressTLS) { + w.Write(LEVEL_0, "TLS:\n") + for _, t := range ingTLS { + if t.SecretName == "" { + w.Write(LEVEL_1, "SNI routes %v\n", strings.Join(t.Hosts, ",")) + } else { + w.Write(LEVEL_1, "%v terminates %v\n", t.SecretName, strings.Join(t.Hosts, ",")) + } + } +} + +type IngressClassDescriber struct { + client clientset.Interface +} + +func (i *IngressClassDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + var events *corev1.EventList + // try IngressClass/v1 first (v1.19) and fallback to IngressClass/v1beta if an err occurs + netV1, err 
:= i.client.NetworkingV1().IngressClasses().Get(context.TODO(), name, metav1.GetOptions{}) + if err == nil { + if describerSettings.ShowEvents { + events, _ = searchEvents(i.client.CoreV1(), netV1, describerSettings.ChunkSize) + } + return i.describeIngressClassV1(netV1, events) + } + netV1beta1, err := i.client.NetworkingV1beta1().IngressClasses().Get(context.TODO(), name, metav1.GetOptions{}) + if err == nil { + if describerSettings.ShowEvents { + events, _ = searchEvents(i.client.CoreV1(), netV1beta1, describerSettings.ChunkSize) + } + return i.describeIngressClassV1beta1(netV1beta1, events) + } + return "", err +} + +func (i *IngressClassDescriber) describeIngressClassV1beta1(ic *networkingv1beta1.IngressClass, events *corev1.EventList) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", ic.Name) + printLabelsMultiline(w, "Labels", ic.Labels) + printAnnotationsMultiline(w, "Annotations", ic.Annotations) + w.Write(LEVEL_0, "Controller:\t%v\n", ic.Spec.Controller) + + if ic.Spec.Parameters != nil { + w.Write(LEVEL_0, "Parameters:\n") + if ic.Spec.Parameters.APIGroup != nil { + w.Write(LEVEL_1, "APIGroup:\t%v\n", *ic.Spec.Parameters.APIGroup) + } + w.Write(LEVEL_1, "Kind:\t%v\n", ic.Spec.Parameters.Kind) + w.Write(LEVEL_1, "Name:\t%v\n", ic.Spec.Parameters.Name) + } + if events != nil { + DescribeEvents(events, w) + } + return nil + }) +} + +func (i *IngressClassDescriber) describeIngressClassV1(ic *networkingv1.IngressClass, events *corev1.EventList) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", ic.Name) + printLabelsMultiline(w, "Labels", ic.Labels) + printAnnotationsMultiline(w, "Annotations", ic.Annotations) + w.Write(LEVEL_0, "Controller:\t%v\n", ic.Spec.Controller) + + if ic.Spec.Parameters != nil { + w.Write(LEVEL_0, "Parameters:\n") + if ic.Spec.Parameters.APIGroup != nil { + w.Write(LEVEL_1, "APIGroup:\t%v\n", *ic.Spec.Parameters.APIGroup) + } + w.Write(LEVEL_1, "Kind:\t%v\n", ic.Spec.Parameters.Kind) + w.Write(LEVEL_1, "Name:\t%v\n", ic.Spec.Parameters.Name) + } + if events != nil { + DescribeEvents(events, w) + } + return nil + }) +} + +// ClusterCIDRDescriber generates information about a ClusterCIDR. 
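+// +// ClusterCIDR exists only in networking.k8s.io/v1alpha1, so there is no GA/beta +// fallback here; the output shape is roughly (values and spacing illustrative): +// +//	Name:             sample-cc +//	NodeSelector: +//	  NodeSelector Terms: +//	    Term 0:  node-role.kubernetes.io/worker in [true] +//	PerNodeHostBits:  8 +//	IPv4:             10.0.0.0/8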
+type ClusterCIDRDescriber struct { + client clientset.Interface +} + +func (c *ClusterCIDRDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + var events *corev1.EventList + + ccV1alpha1, err := c.client.NetworkingV1alpha1().ClusterCIDRs().Get(context.TODO(), name, metav1.GetOptions{}) + if err == nil { + if describerSettings.ShowEvents { + events, _ = searchEvents(c.client.CoreV1(), ccV1alpha1, describerSettings.ChunkSize) + } + return c.describeClusterCIDRV1alpha1(ccV1alpha1, events) + } + return "", err +} + +func (c *ClusterCIDRDescriber) describeClusterCIDRV1alpha1(cc *networkingv1alpha1.ClusterCIDR, events *corev1.EventList) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%v\n", cc.Name) + printLabelsMultiline(w, "Labels", cc.Labels) + printAnnotationsMultiline(w, "Annotations", cc.Annotations) + + w.Write(LEVEL_0, "NodeSelector:\n") + if cc.Spec.NodeSelector != nil { + w.Write(LEVEL_1, "NodeSelector Terms:") + if len(cc.Spec.NodeSelector.NodeSelectorTerms) == 0 { + w.WriteLine("") + } else { + w.WriteLine("") + for i, term := range cc.Spec.NodeSelector.NodeSelectorTerms { + printNodeSelectorTermsMultilineWithIndent(w, LEVEL_2, fmt.Sprintf("Term %v", i), "\t", term.MatchExpressions) + } + } + } + + if cc.Spec.PerNodeHostBits != 0 { + w.Write(LEVEL_0, "PerNodeHostBits:\t%s\n", fmt.Sprint(cc.Spec.PerNodeHostBits)) + } + + if cc.Spec.IPv4 != "" { + w.Write(LEVEL_0, "IPv4:\t%s\n", cc.Spec.IPv4) + } + + if cc.Spec.IPv6 != "" { + w.Write(LEVEL_0, "IPv6:\t%s\n", cc.Spec.IPv6) + } + + if events != nil { + DescribeEvents(events, w) + } + return nil + }) +} + +// ServiceDescriber generates information about a service. +type ServiceDescriber struct { + clientset.Interface +} + +func (d *ServiceDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + c := d.CoreV1().Services(namespace) + + service, err := c.Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + + endpoints, _ := d.CoreV1().Endpoints(namespace).Get(context.TODO(), name, metav1.GetOptions{}) + var events *corev1.EventList + if describerSettings.ShowEvents { + events, _ = searchEvents(d.CoreV1(), service, describerSettings.ChunkSize) + } + return describeService(service, endpoints, events) +} + +func buildIngressString(ingress []corev1.LoadBalancerIngress) string { + var buffer bytes.Buffer + + for i := range ingress { + if i != 0 { + buffer.WriteString(", ") + } + if ingress[i].IP != "" { + buffer.WriteString(ingress[i].IP) + } else { + buffer.WriteString(ingress[i].Hostname) + } + } + return buffer.String() +} + +func describeService(service *corev1.Service, endpoints *corev1.Endpoints, events *corev1.EventList) (string, error) { + if endpoints == nil { + endpoints = &corev1.Endpoints{} + } + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", service.Name) + w.Write(LEVEL_0, "Namespace:\t%s\n", service.Namespace) + printLabelsMultiline(w, "Labels", service.Labels) + printAnnotationsMultiline(w, "Annotations", service.Annotations) + w.Write(LEVEL_0, "Selector:\t%s\n", labels.FormatLabels(service.Spec.Selector)) + w.Write(LEVEL_0, "Type:\t%s\n", service.Spec.Type) + + if service.Spec.IPFamilyPolicy != nil { + w.Write(LEVEL_0, "IP Family Policy:\t%s\n", *(service.Spec.IPFamilyPolicy)) + } + + if len(service.Spec.IPFamilies) > 0 { + ipfamiliesStrings := make([]string, 0, 
len(service.Spec.IPFamilies)) + for _, family := range service.Spec.IPFamilies { + ipfamiliesStrings = append(ipfamiliesStrings, string(family)) + } + + w.Write(LEVEL_0, "IP Families:\t%s\n", strings.Join(ipfamiliesStrings, ",")) + } else { + w.Write(LEVEL_0, "IP Families:\t%s\n", "<none>") + } + + w.Write(LEVEL_0, "IP:\t%s\n", service.Spec.ClusterIP) + if len(service.Spec.ClusterIPs) > 0 { + w.Write(LEVEL_0, "IPs:\t%s\n", strings.Join(service.Spec.ClusterIPs, ",")) + } else { + w.Write(LEVEL_0, "IPs:\t%s\n", "<none>") + } + + if len(service.Spec.ExternalIPs) > 0 { + w.Write(LEVEL_0, "External IPs:\t%v\n", strings.Join(service.Spec.ExternalIPs, ",")) + } + if service.Spec.LoadBalancerIP != "" { + w.Write(LEVEL_0, "IP:\t%s\n", service.Spec.LoadBalancerIP) + } + if service.Spec.ExternalName != "" { + w.Write(LEVEL_0, "External Name:\t%s\n", service.Spec.ExternalName) + } + if len(service.Status.LoadBalancer.Ingress) > 0 { + list := buildIngressString(service.Status.LoadBalancer.Ingress) + w.Write(LEVEL_0, "LoadBalancer Ingress:\t%s\n", list) + } + for i := range service.Spec.Ports { + sp := &service.Spec.Ports[i] + + name := sp.Name + if name == "" { + name = "<unset>" + } + w.Write(LEVEL_0, "Port:\t%s\t%d/%s\n", name, sp.Port, sp.Protocol) + if sp.TargetPort.Type == intstr.Type(intstr.Int) { + w.Write(LEVEL_0, "TargetPort:\t%d/%s\n", sp.TargetPort.IntVal, sp.Protocol) + } else { + w.Write(LEVEL_0, "TargetPort:\t%s/%s\n", sp.TargetPort.StrVal, sp.Protocol) + } + if sp.NodePort != 0 { + w.Write(LEVEL_0, "NodePort:\t%s\t%d/%s\n", name, sp.NodePort, sp.Protocol) + } + w.Write(LEVEL_0, "Endpoints:\t%s\n", formatEndpoints(endpoints, sets.NewString(sp.Name))) + } + w.Write(LEVEL_0, "Session Affinity:\t%s\n", service.Spec.SessionAffinity) + if service.Spec.ExternalTrafficPolicy != "" { + w.Write(LEVEL_0, "External Traffic Policy:\t%s\n", service.Spec.ExternalTrafficPolicy) + } + if service.Spec.HealthCheckNodePort != 0 { + w.Write(LEVEL_0, "HealthCheck NodePort:\t%d\n", service.Spec.HealthCheckNodePort) + } + if len(service.Spec.LoadBalancerSourceRanges) > 0 { + w.Write(LEVEL_0, "LoadBalancer Source Ranges:\t%v\n", strings.Join(service.Spec.LoadBalancerSourceRanges, ",")) + } + if events != nil { + DescribeEvents(events, w) + } + return nil + }) +} + +// EndpointsDescriber generates information about an Endpoint.
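+// +// For each Service port, the loop above emits Port/TargetPort/NodePort/Endpoints +// rows; an illustrative fragment for a NodePort service: +// +//	Port:              http  80/TCP +//	TargetPort:        8080/TCP +//	NodePort:          http  30080/TCP +//	Endpoints:         10.244.0.5:8080,10.244.1.7:8080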
+type EndpointsDescriber struct { + clientset.Interface +} + +func (d *EndpointsDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + c := d.CoreV1().Endpoints(namespace) + + ep, err := c.Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + + var events *corev1.EventList + if describerSettings.ShowEvents { + events, _ = searchEvents(d.CoreV1(), ep, describerSettings.ChunkSize) + } + + return describeEndpoints(ep, events) +} + +func describeEndpoints(ep *corev1.Endpoints, events *corev1.EventList) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", ep.Name) + w.Write(LEVEL_0, "Namespace:\t%s\n", ep.Namespace) + printLabelsMultiline(w, "Labels", ep.Labels) + printAnnotationsMultiline(w, "Annotations", ep.Annotations) + + w.Write(LEVEL_0, "Subsets:\n") + for i := range ep.Subsets { + subset := &ep.Subsets[i] + + addresses := make([]string, 0, len(subset.Addresses)) + for _, addr := range subset.Addresses { + addresses = append(addresses, addr.IP) + } + addressesString := strings.Join(addresses, ",") + if len(addressesString) == 0 { + addressesString = "<none>" + } + w.Write(LEVEL_1, "Addresses:\t%s\n", addressesString) + + notReadyAddresses := make([]string, 0, len(subset.NotReadyAddresses)) + for _, addr := range subset.NotReadyAddresses { + notReadyAddresses = append(notReadyAddresses, addr.IP) + } + notReadyAddressesString := strings.Join(notReadyAddresses, ",") + if len(notReadyAddressesString) == 0 { + notReadyAddressesString = "<none>" + } + w.Write(LEVEL_1, "NotReadyAddresses:\t%s\n", notReadyAddressesString) + + if len(subset.Ports) > 0 { + w.Write(LEVEL_1, "Ports:\n") + w.Write(LEVEL_2, "Name\tPort\tProtocol\n") + w.Write(LEVEL_2, "----\t----\t--------\n") + for _, port := range subset.Ports { + name := port.Name + if len(name) == 0 { + name = "<unset>" + } + w.Write(LEVEL_2, "%s\t%d\t%s\n", name, port.Port, port.Protocol) + } + } + w.Write(LEVEL_0, "\n") + } + + if events != nil { + DescribeEvents(events, w) + } + return nil + }) +} + +// EndpointSliceDescriber generates information about an EndpointSlice.
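+// +// A Service's endpoints may be sharded across several EndpointSlices; the +// slices are tied to the Service by the kubernetes.io/service-name label, e.g.: +// +//	kubectl get endpointslices -l kubernetes.io/service-name=my-svc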
+type EndpointSliceDescriber struct { + clientset.Interface +} + +func (d *EndpointSliceDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + var events *corev1.EventList + // try endpointslice/v1 first (v1.21) and fallback to v1beta1 if error occurs + + epsV1, err := d.DiscoveryV1().EndpointSlices(namespace).Get(context.TODO(), name, metav1.GetOptions{}) + if err == nil { + if describerSettings.ShowEvents { + events, _ = searchEvents(d.CoreV1(), epsV1, describerSettings.ChunkSize) + } + return describeEndpointSliceV1(epsV1, events) + } + + epsV1beta1, err := d.DiscoveryV1beta1().EndpointSlices(namespace).Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + + if describerSettings.ShowEvents { + events, _ = searchEvents(d.CoreV1(), epsV1beta1, describerSettings.ChunkSize) + } + + return describeEndpointSliceV1beta1(epsV1beta1, events) +} + +func describeEndpointSliceV1(eps *discoveryv1.EndpointSlice, events *corev1.EventList) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", eps.Name) + w.Write(LEVEL_0, "Namespace:\t%s\n", eps.Namespace) + printLabelsMultiline(w, "Labels", eps.Labels) + printAnnotationsMultiline(w, "Annotations", eps.Annotations) + + w.Write(LEVEL_0, "AddressType:\t%s\n", string(eps.AddressType)) + + if len(eps.Ports) == 0 { + w.Write(LEVEL_0, "Ports: \n") + } else { + w.Write(LEVEL_0, "Ports:\n") + w.Write(LEVEL_1, "Name\tPort\tProtocol\n") + w.Write(LEVEL_1, "----\t----\t--------\n") + for _, port := range eps.Ports { + portName := "" + if port.Name != nil && len(*port.Name) > 0 { + portName = *port.Name + } + + portNum := "" + if port.Port != nil { + portNum = strconv.Itoa(int(*port.Port)) + } + + w.Write(LEVEL_1, "%s\t%s\t%s\n", portName, portNum, *port.Protocol) + } + } + + if len(eps.Endpoints) == 0 { + w.Write(LEVEL_0, "Endpoints: \n") + } else { + w.Write(LEVEL_0, "Endpoints:\n") + for i := range eps.Endpoints { + endpoint := &eps.Endpoints[i] + + addressesString := strings.Join(endpoint.Addresses, ", ") + if len(addressesString) == 0 { + addressesString = "" + } + w.Write(LEVEL_1, "- Addresses:\t%s\n", addressesString) + + w.Write(LEVEL_2, "Conditions:\n") + readyText := "" + if endpoint.Conditions.Ready != nil { + readyText = strconv.FormatBool(*endpoint.Conditions.Ready) + } + w.Write(LEVEL_3, "Ready:\t%s\n", readyText) + + hostnameText := "" + if endpoint.Hostname != nil { + hostnameText = *endpoint.Hostname + } + w.Write(LEVEL_2, "Hostname:\t%s\n", hostnameText) + + if endpoint.TargetRef != nil { + w.Write(LEVEL_2, "TargetRef:\t%s/%s\n", endpoint.TargetRef.Kind, endpoint.TargetRef.Name) + } + + nodeNameText := "" + if endpoint.NodeName != nil { + nodeNameText = *endpoint.NodeName + } + w.Write(LEVEL_2, "NodeName:\t%s\n", nodeNameText) + + zoneText := "" + if endpoint.Zone != nil { + zoneText = *endpoint.Zone + } + w.Write(LEVEL_2, "Zone:\t%s\n", zoneText) + } + } + + if events != nil { + DescribeEvents(events, w) + } + return nil + }) +} + +func describeEndpointSliceV1beta1(eps *discoveryv1beta1.EndpointSlice, events *corev1.EventList) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", eps.Name) + w.Write(LEVEL_0, "Namespace:\t%s\n", eps.Namespace) + printLabelsMultiline(w, "Labels", eps.Labels) + printAnnotationsMultiline(w, "Annotations", eps.Annotations) + + w.Write(LEVEL_0, "AddressType:\t%s\n", string(eps.AddressType)) 
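+ +// In discovery/v1beta1, per-endpoint topology is a plain string map rather than +// the NodeName/Zone fields used by discovery/v1 above; it is printed below via +// printLabelsMultilineWithIndent.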
+ + if len(eps.Ports) == 0 { + w.Write(LEVEL_0, "Ports: \n") + } else { + w.Write(LEVEL_0, "Ports:\n") + w.Write(LEVEL_1, "Name\tPort\tProtocol\n") + w.Write(LEVEL_1, "----\t----\t--------\n") + for _, port := range eps.Ports { + portName := "" + if port.Name != nil && len(*port.Name) > 0 { + portName = *port.Name + } + + portNum := "" + if port.Port != nil { + portNum = strconv.Itoa(int(*port.Port)) + } + + w.Write(LEVEL_1, "%s\t%s\t%s\n", portName, portNum, *port.Protocol) + } + } + + if len(eps.Endpoints) == 0 { + w.Write(LEVEL_0, "Endpoints: \n") + } else { + w.Write(LEVEL_0, "Endpoints:\n") + for i := range eps.Endpoints { + endpoint := &eps.Endpoints[i] + + addressesString := strings.Join(endpoint.Addresses, ",") + if len(addressesString) == 0 { + addressesString = "" + } + w.Write(LEVEL_1, "- Addresses:\t%s\n", addressesString) + + w.Write(LEVEL_2, "Conditions:\n") + readyText := "" + if endpoint.Conditions.Ready != nil { + readyText = strconv.FormatBool(*endpoint.Conditions.Ready) + } + w.Write(LEVEL_3, "Ready:\t%s\n", readyText) + + hostnameText := "" + if endpoint.Hostname != nil { + hostnameText = *endpoint.Hostname + } + w.Write(LEVEL_2, "Hostname:\t%s\n", hostnameText) + + if endpoint.TargetRef != nil { + w.Write(LEVEL_2, "TargetRef:\t%s/%s\n", endpoint.TargetRef.Kind, endpoint.TargetRef.Name) + } + + printLabelsMultilineWithIndent(w, " ", "Topology", "\t", endpoint.Topology, sets.NewString()) + } + } + + if events != nil { + DescribeEvents(events, w) + } + return nil + }) +} + +// ServiceAccountDescriber generates information about a service. +type ServiceAccountDescriber struct { + clientset.Interface +} + +func (d *ServiceAccountDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + c := d.CoreV1().ServiceAccounts(namespace) + + serviceAccount, err := c.Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + + tokens := []corev1.Secret{} + + // missingSecrets is the set of all secrets present in the + // serviceAccount but not present in the set of existing secrets. + missingSecrets := sets.NewString() + secrets := corev1.SecretList{} + err = runtimeresource.FollowContinue(&metav1.ListOptions{Limit: describerSettings.ChunkSize}, + func(options metav1.ListOptions) (runtime.Object, error) { + newList, err := d.CoreV1().Secrets(namespace).List(context.TODO(), options) + if err != nil { + return nil, runtimeresource.EnhanceListError(err, options, corev1.ResourceSecrets.String()) + } + secrets.Items = append(secrets.Items, newList.Items...) + return newList, nil + }) + + // errors are tolerated here in order to describe the serviceAccount with all + // of the secrets that it references, even if those secrets cannot be fetched. + if err == nil { + // existingSecrets is the set of all secrets remaining on a + // service account that are not present in the "tokens" slice. 
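+// A secret counts as a token below only if it is of type +// kubernetes.io/service-account-token and its kubernetes.io/service-account.name +// and kubernetes.io/service-account.uid annotations point back at this +// ServiceAccount (see the loop that follows).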
+ existingSecrets := sets.NewString() + + for _, s := range secrets.Items { + if s.Type == corev1.SecretTypeServiceAccountToken { + name := s.Annotations[corev1.ServiceAccountNameKey] + uid := s.Annotations[corev1.ServiceAccountUIDKey] + if name == serviceAccount.Name && uid == string(serviceAccount.UID) { + tokens = append(tokens, s) + } + } + existingSecrets.Insert(s.Name) + } + + for _, s := range serviceAccount.Secrets { + if !existingSecrets.Has(s.Name) { + missingSecrets.Insert(s.Name) + } + } + for _, s := range serviceAccount.ImagePullSecrets { + if !existingSecrets.Has(s.Name) { + missingSecrets.Insert(s.Name) + } + } + } + + var events *corev1.EventList + if describerSettings.ShowEvents { + events, _ = searchEvents(d.CoreV1(), serviceAccount, describerSettings.ChunkSize) + } + + return describeServiceAccount(serviceAccount, tokens, missingSecrets, events) +} + +func describeServiceAccount(serviceAccount *corev1.ServiceAccount, tokens []corev1.Secret, missingSecrets sets.String, events *corev1.EventList) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", serviceAccount.Name) + w.Write(LEVEL_0, "Namespace:\t%s\n", serviceAccount.Namespace) + printLabelsMultiline(w, "Labels", serviceAccount.Labels) + printAnnotationsMultiline(w, "Annotations", serviceAccount.Annotations) + + var ( + emptyHeader = " " + pullHeader = "Image pull secrets:" + mountHeader = "Mountable secrets: " + tokenHeader = "Tokens: " + + pullSecretNames = []string{} + mountSecretNames = []string{} + tokenSecretNames = []string{} + ) + + for _, s := range serviceAccount.ImagePullSecrets { + pullSecretNames = append(pullSecretNames, s.Name) + } + for _, s := range serviceAccount.Secrets { + mountSecretNames = append(mountSecretNames, s.Name) + } + for _, s := range tokens { + tokenSecretNames = append(tokenSecretNames, s.Name) + } + + types := map[string][]string{ + pullHeader: pullSecretNames, + mountHeader: mountSecretNames, + tokenHeader: tokenSecretNames, + } + for _, header := range sets.StringKeySet(types).List() { + names := types[header] + if len(names) == 0 { + w.Write(LEVEL_0, "%s\t\n", header) + } else { + prefix := header + for _, name := range names { + if missingSecrets.Has(name) { + w.Write(LEVEL_0, "%s\t%s (not found)\n", prefix, name) + } else { + w.Write(LEVEL_0, "%s\t%s\n", prefix, name) + } + prefix = emptyHeader + } + } + } + + if events != nil { + DescribeEvents(events, w) + } + + return nil + }) +} + +// RoleDescriber generates information about a node. +type RoleDescriber struct { + clientset.Interface +} + +func (d *RoleDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + role, err := d.RbacV1().Roles(namespace).Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + + breakdownRules := []rbacv1.PolicyRule{} + for _, rule := range role.Rules { + breakdownRules = append(breakdownRules, rbac.BreakdownRule(rule)...) 
+ } + + compactRules, err := rbac.CompactRules(breakdownRules) + if err != nil { + return "", err + } + sort.Stable(rbac.SortableRuleSlice(compactRules)) + + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", role.Name) + printLabelsMultiline(w, "Labels", role.Labels) + printAnnotationsMultiline(w, "Annotations", role.Annotations) + + w.Write(LEVEL_0, "PolicyRule:\n") + w.Write(LEVEL_1, "Resources\tNon-Resource URLs\tResource Names\tVerbs\n") + w.Write(LEVEL_1, "---------\t-----------------\t--------------\t-----\n") + for _, r := range compactRules { + w.Write(LEVEL_1, "%s\t%v\t%v\t%v\n", CombineResourceGroup(r.Resources, r.APIGroups), r.NonResourceURLs, r.ResourceNames, r.Verbs) + } + + return nil + }) +} + +// ClusterRoleDescriber generates information about a node. +type ClusterRoleDescriber struct { + clientset.Interface +} + +func (d *ClusterRoleDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + role, err := d.RbacV1().ClusterRoles().Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + + breakdownRules := []rbacv1.PolicyRule{} + for _, rule := range role.Rules { + breakdownRules = append(breakdownRules, rbac.BreakdownRule(rule)...) + } + + compactRules, err := rbac.CompactRules(breakdownRules) + if err != nil { + return "", err + } + sort.Stable(rbac.SortableRuleSlice(compactRules)) + + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", role.Name) + printLabelsMultiline(w, "Labels", role.Labels) + printAnnotationsMultiline(w, "Annotations", role.Annotations) + + w.Write(LEVEL_0, "PolicyRule:\n") + w.Write(LEVEL_1, "Resources\tNon-Resource URLs\tResource Names\tVerbs\n") + w.Write(LEVEL_1, "---------\t-----------------\t--------------\t-----\n") + for _, r := range compactRules { + w.Write(LEVEL_1, "%s\t%v\t%v\t%v\n", CombineResourceGroup(r.Resources, r.APIGroups), r.NonResourceURLs, r.ResourceNames, r.Verbs) + } + + return nil + }) +} + +func CombineResourceGroup(resource, group []string) string { + if len(resource) == 0 { + return "" + } + parts := strings.SplitN(resource[0], "/", 2) + combine := parts[0] + + if len(group) > 0 && group[0] != "" { + combine = combine + "." + group[0] + } + + if len(parts) == 2 { + combine = combine + "/" + parts[1] + } + return combine +} + +// RoleBindingDescriber generates information about a node. +type RoleBindingDescriber struct { + clientset.Interface +} + +func (d *RoleBindingDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + binding, err := d.RbacV1().RoleBindings(namespace).Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", binding.Name) + printLabelsMultiline(w, "Labels", binding.Labels) + printAnnotationsMultiline(w, "Annotations", binding.Annotations) + + w.Write(LEVEL_0, "Role:\n") + w.Write(LEVEL_1, "Kind:\t%s\n", binding.RoleRef.Kind) + w.Write(LEVEL_1, "Name:\t%s\n", binding.RoleRef.Name) + + w.Write(LEVEL_0, "Subjects:\n") + w.Write(LEVEL_1, "Kind\tName\tNamespace\n") + w.Write(LEVEL_1, "----\t----\t---------\n") + for _, s := range binding.Subjects { + w.Write(LEVEL_1, "%s\t%s\t%s\n", s.Kind, s.Name, s.Namespace) + } + + return nil + }) +} + +// ClusterRoleBindingDescriber generates information about a node. 
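+// +// CombineResourceGroup above joins a resource with its API group for the +// PolicyRule table while preserving any subresource suffix, e.g.: +// +//	CombineResourceGroup([]string{"deployments/scale"}, []string{"apps"}) // "deployments.apps/scale" +//	CombineResourceGroup([]string{"pods"}, []string{""})                  // "pods"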
+type ClusterRoleBindingDescriber struct { + clientset.Interface +} + +func (d *ClusterRoleBindingDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + binding, err := d.RbacV1().ClusterRoleBindings().Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", binding.Name) + printLabelsMultiline(w, "Labels", binding.Labels) + printAnnotationsMultiline(w, "Annotations", binding.Annotations) + + w.Write(LEVEL_0, "Role:\n") + w.Write(LEVEL_1, "Kind:\t%s\n", binding.RoleRef.Kind) + w.Write(LEVEL_1, "Name:\t%s\n", binding.RoleRef.Name) + + w.Write(LEVEL_0, "Subjects:\n") + w.Write(LEVEL_1, "Kind\tName\tNamespace\n") + w.Write(LEVEL_1, "----\t----\t---------\n") + for _, s := range binding.Subjects { + w.Write(LEVEL_1, "%s\t%s\t%s\n", s.Kind, s.Name, s.Namespace) + } + + return nil + }) +} + +// NodeDescriber generates information about a node. +type NodeDescriber struct { + clientset.Interface +} + +func (d *NodeDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + mc := d.CoreV1().Nodes() + node, err := mc.Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + + fieldSelector, err := fields.ParseSelector("spec.nodeName=" + name + ",status.phase!=" + string(corev1.PodSucceeded) + ",status.phase!=" + string(corev1.PodFailed)) + if err != nil { + return "", err + } + // in a policy aware setting, users may have access to a node, but not all pods + // in that case, we note that the user does not have access to the pods + canViewPods := true + initialOpts := metav1.ListOptions{ + FieldSelector: fieldSelector.String(), + Limit: describerSettings.ChunkSize, + } + nodeNonTerminatedPodsList, err := getPodsInChunks(d.CoreV1().Pods(namespace), initialOpts) + if err != nil { + if !apierrors.IsForbidden(err) { + return "", err + } + canViewPods = false + } + + var events *corev1.EventList + if describerSettings.ShowEvents { + if ref, err := reference.GetReference(scheme.Scheme, node); err != nil { + klog.Errorf("Unable to construct reference to '%#v': %v", node, err) + } else { + // TODO: We haven't decided the namespace for Node object yet. + // there are two UIDs for host events: + // controller use node.uid + // kubelet use node.name + // TODO: Uniform use of UID + events, _ = searchEvents(d.CoreV1(), ref, describerSettings.ChunkSize) + + ref.UID = types.UID(ref.Name) + eventsInvName, _ := searchEvents(d.CoreV1(), ref, describerSettings.ChunkSize) + + // Merge the results of two queries + events.Items = append(events.Items, eventsInvName.Items...) 
+		}
+	}
+
+	return describeNode(node, nodeNonTerminatedPodsList, events, canViewPods, &LeaseDescriber{d})
+}
+
+type LeaseDescriber struct {
+	client clientset.Interface
+}
+
+func describeNode(node *corev1.Node, nodeNonTerminatedPodsList *corev1.PodList, events *corev1.EventList,
+	canViewPods bool, ld *LeaseDescriber) (string, error) {
+	return tabbedString(func(out io.Writer) error {
+		w := NewPrefixWriter(out)
+		w.Write(LEVEL_0, "Name:\t%s\n", node.Name)
+		if roles := findNodeRoles(node); len(roles) > 0 {
+			w.Write(LEVEL_0, "Roles:\t%s\n", strings.Join(roles, ","))
+		} else {
+			w.Write(LEVEL_0, "Roles:\t%s\n", "<none>")
+		}
+		printLabelsMultiline(w, "Labels", node.Labels)
+		printAnnotationsMultiline(w, "Annotations", node.Annotations)
+		w.Write(LEVEL_0, "CreationTimestamp:\t%s\n", node.CreationTimestamp.Time.Format(time.RFC1123Z))
+		printNodeTaintsMultiline(w, "Taints", node.Spec.Taints)
+		w.Write(LEVEL_0, "Unschedulable:\t%v\n", node.Spec.Unschedulable)
+
+		if ld != nil {
+			if lease, err := ld.client.CoordinationV1().Leases(corev1.NamespaceNodeLease).Get(context.TODO(), node.Name, metav1.GetOptions{}); err == nil {
+				describeNodeLease(lease, w)
+			} else {
+				w.Write(LEVEL_0, "Lease:\tFailed to get lease: %s\n", err)
+			}
+		}
+
+		if len(node.Status.Conditions) > 0 {
+			w.Write(LEVEL_0, "Conditions:\n  Type\tStatus\tLastHeartbeatTime\tLastTransitionTime\tReason\tMessage\n")
+			w.Write(LEVEL_1, "----\t------\t-----------------\t------------------\t------\t-------\n")
+			for _, c := range node.Status.Conditions {
+				w.Write(LEVEL_1, "%v \t%v \t%s \t%s \t%v \t%v\n",
+					c.Type,
+					c.Status,
+					c.LastHeartbeatTime.Time.Format(time.RFC1123Z),
+					c.LastTransitionTime.Time.Format(time.RFC1123Z),
+					c.Reason,
+					c.Message)
+			}
+		}
+
+		w.Write(LEVEL_0, "Addresses:\n")
+		for _, address := range node.Status.Addresses {
+			w.Write(LEVEL_1, "%s:\t%s\n", address.Type, address.Address)
+		}
+
+		printResourceList := func(resourceList corev1.ResourceList) {
+			resources := make([]corev1.ResourceName, 0, len(resourceList))
+			for resource := range resourceList {
+				resources = append(resources, resource)
+			}
+			sort.Sort(SortableResourceNames(resources))
+			for _, resource := range resources {
+				value := resourceList[resource]
+				w.Write(LEVEL_0, "  %s:\t%s\n", resource, value.String())
+			}
+		}
+
+		if len(node.Status.Capacity) > 0 {
+			w.Write(LEVEL_0, "Capacity:\n")
+			printResourceList(node.Status.Capacity)
+		}
+		if len(node.Status.Allocatable) > 0 {
+			w.Write(LEVEL_0, "Allocatable:\n")
+			printResourceList(node.Status.Allocatable)
+		}
+
+		w.Write(LEVEL_0, "System Info:\n")
+		w.Write(LEVEL_0, "  Machine ID:\t%s\n", node.Status.NodeInfo.MachineID)
+		w.Write(LEVEL_0, "  System UUID:\t%s\n", node.Status.NodeInfo.SystemUUID)
+		w.Write(LEVEL_0, "  Boot ID:\t%s\n", node.Status.NodeInfo.BootID)
+		w.Write(LEVEL_0, "  Kernel Version:\t%s\n", node.Status.NodeInfo.KernelVersion)
+		w.Write(LEVEL_0, "  OS Image:\t%s\n", node.Status.NodeInfo.OSImage)
+		w.Write(LEVEL_0, "  Operating System:\t%s\n", node.Status.NodeInfo.OperatingSystem)
+		w.Write(LEVEL_0, "  Architecture:\t%s\n", node.Status.NodeInfo.Architecture)
+		w.Write(LEVEL_0, "  Container Runtime Version:\t%s\n", node.Status.NodeInfo.ContainerRuntimeVersion)
+		w.Write(LEVEL_0, "  Kubelet Version:\t%s\n", node.Status.NodeInfo.KubeletVersion)
+		w.Write(LEVEL_0, "  Kube-Proxy Version:\t%s\n", node.Status.NodeInfo.KubeProxyVersion)
+
+		// remove when .PodCIDR is deprecated
+		if len(node.Spec.PodCIDR) > 0 {
+			w.Write(LEVEL_0, "PodCIDR:\t%s\n", node.Spec.PodCIDR)
+		}
+
+		if len(node.Spec.PodCIDRs) > 0 {
+			w.Write(LEVEL_0, "PodCIDRs:\t%s\n", strings.Join(node.Spec.PodCIDRs, ","))
+		}
+		if len(node.Spec.ProviderID) > 0 {
+			w.Write(LEVEL_0, "ProviderID:\t%s\n", node.Spec.ProviderID)
+		}
+		if canViewPods && nodeNonTerminatedPodsList != nil {
+			describeNodeResource(nodeNonTerminatedPodsList, node, w)
+		} else {
+			w.Write(LEVEL_0, "Pods:\tnot authorized\n")
+		}
+		if events != nil {
+			DescribeEvents(events, w)
+		}
+		return nil
+	})
+}
+
+func describeNodeLease(lease *coordinationv1.Lease, w PrefixWriter) {
+	w.Write(LEVEL_0, "Lease:\n")
+	holderIdentity := "<unset>"
+	if lease != nil && lease.Spec.HolderIdentity != nil {
+		holderIdentity = *lease.Spec.HolderIdentity
+	}
+	w.Write(LEVEL_1, "HolderIdentity:\t%s\n", holderIdentity)
+	acquireTime := "<unset>"
+	if lease != nil && lease.Spec.AcquireTime != nil {
+		acquireTime = lease.Spec.AcquireTime.Time.Format(time.RFC1123Z)
+	}
+	w.Write(LEVEL_1, "AcquireTime:\t%s\n", acquireTime)
+	renewTime := "<unset>"
+	if lease != nil && lease.Spec.RenewTime != nil {
+		renewTime = lease.Spec.RenewTime.Time.Format(time.RFC1123Z)
+	}
+	w.Write(LEVEL_1, "RenewTime:\t%s\n", renewTime)
+}
+
+type StatefulSetDescriber struct {
+	client clientset.Interface
+}
+
+func (p *StatefulSetDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) {
+	ps, err := p.client.AppsV1().StatefulSets(namespace).Get(context.TODO(), name, metav1.GetOptions{})
+	if err != nil {
+		return "", err
+	}
+	pc := p.client.CoreV1().Pods(namespace)
+
+	selector, err := metav1.LabelSelectorAsSelector(ps.Spec.Selector)
+	if err != nil {
+		return "", err
+	}
+
+	running, waiting, succeeded, failed, err := getPodStatusForController(pc, selector, ps.UID, describerSettings)
+	if err != nil {
+		return "", err
+	}
+
+	var events *corev1.EventList
+	if describerSettings.ShowEvents {
+		events, _ = searchEvents(p.client.CoreV1(), ps, describerSettings.ChunkSize)
+	}
+
+	return describeStatefulSet(ps, selector, events, running, waiting, succeeded, failed)
+}
+
+func describeStatefulSet(ps *appsv1.StatefulSet, selector labels.Selector, events *corev1.EventList, running, waiting, succeeded, failed int) (string, error) {
+	return tabbedString(func(out io.Writer) error {
+		w := NewPrefixWriter(out)
+		w.Write(LEVEL_0, "Name:\t%s\n", ps.ObjectMeta.Name)
+		w.Write(LEVEL_0, "Namespace:\t%s\n", ps.ObjectMeta.Namespace)
+		w.Write(LEVEL_0, "CreationTimestamp:\t%s\n", ps.CreationTimestamp.Time.Format(time.RFC1123Z))
+		w.Write(LEVEL_0, "Selector:\t%s\n", selector)
+		printLabelsMultiline(w, "Labels", ps.Labels)
+		printAnnotationsMultiline(w, "Annotations", ps.Annotations)
+		w.Write(LEVEL_0, "Replicas:\t%d desired | %d total\n", *ps.Spec.Replicas, ps.Status.Replicas)
+		w.Write(LEVEL_0, "Update Strategy:\t%s\n", ps.Spec.UpdateStrategy.Type)
+		if ps.Spec.UpdateStrategy.RollingUpdate != nil {
+			ru := ps.Spec.UpdateStrategy.RollingUpdate
+			if ru.Partition != nil {
+				w.Write(LEVEL_1, "Partition:\t%d\n", *ru.Partition)
+				if ru.MaxUnavailable != nil {
+					w.Write(LEVEL_1, "MaxUnavailable:\t%s\n", ru.MaxUnavailable.String())
+				}
+			}
+		}
+
+		w.Write(LEVEL_0, "Pods Status:\t%d Running / %d Waiting / %d Succeeded / %d Failed\n", running, waiting, succeeded, failed)
+		DescribePodTemplate(&ps.Spec.Template, w)
+		describeVolumeClaimTemplates(ps.Spec.VolumeClaimTemplates, w)
+		if events != nil {
+			DescribeEvents(events, w)
+		}
+
+		return nil
+	})
+}
+
+type CertificateSigningRequestDescriber struct {
+	client clientset.Interface
+}
+
+func (p *CertificateSigningRequestDescriber) Describe(namespace, name string,
describerSettings DescriberSettings) (string, error) { + + var ( + crBytes []byte + metadata metav1.ObjectMeta + status string + signerName string + expirationSeconds *int32 + username string + events *corev1.EventList + ) + + if csr, err := p.client.CertificatesV1().CertificateSigningRequests().Get(context.TODO(), name, metav1.GetOptions{}); err == nil { + crBytes = csr.Spec.Request + metadata = csr.ObjectMeta + conditionTypes := []string{} + for _, c := range csr.Status.Conditions { + conditionTypes = append(conditionTypes, string(c.Type)) + } + status = extractCSRStatus(conditionTypes, csr.Status.Certificate) + signerName = csr.Spec.SignerName + expirationSeconds = csr.Spec.ExpirationSeconds + username = csr.Spec.Username + if describerSettings.ShowEvents { + events, _ = searchEvents(p.client.CoreV1(), csr, describerSettings.ChunkSize) + } + } else if csr, err := p.client.CertificatesV1beta1().CertificateSigningRequests().Get(context.TODO(), name, metav1.GetOptions{}); err == nil { + crBytes = csr.Spec.Request + metadata = csr.ObjectMeta + conditionTypes := []string{} + for _, c := range csr.Status.Conditions { + conditionTypes = append(conditionTypes, string(c.Type)) + } + status = extractCSRStatus(conditionTypes, csr.Status.Certificate) + if csr.Spec.SignerName != nil { + signerName = *csr.Spec.SignerName + } + expirationSeconds = csr.Spec.ExpirationSeconds + username = csr.Spec.Username + if describerSettings.ShowEvents { + events, _ = searchEvents(p.client.CoreV1(), csr, describerSettings.ChunkSize) + } + } else { + return "", err + } + + cr, err := certificate.ParseCSR(crBytes) + if err != nil { + return "", fmt.Errorf("Error parsing CSR: %v", err) + } + + return describeCertificateSigningRequest(metadata, signerName, expirationSeconds, username, cr, status, events) +} + +func describeCertificateSigningRequest(csr metav1.ObjectMeta, signerName string, expirationSeconds *int32, username string, cr *x509.CertificateRequest, status string, events *corev1.EventList) (string, error) { + printListHelper := func(w PrefixWriter, prefix, name string, values []string) { + if len(values) == 0 { + return + } + w.Write(LEVEL_0, prefix+name+":\t") + w.Write(LEVEL_0, strings.Join(values, "\n"+prefix+"\t")) + w.Write(LEVEL_0, "\n") + } + + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", csr.Name) + w.Write(LEVEL_0, "Labels:\t%s\n", labels.FormatLabels(csr.Labels)) + w.Write(LEVEL_0, "Annotations:\t%s\n", labels.FormatLabels(csr.Annotations)) + w.Write(LEVEL_0, "CreationTimestamp:\t%s\n", csr.CreationTimestamp.Time.Format(time.RFC1123Z)) + w.Write(LEVEL_0, "Requesting User:\t%s\n", username) + if len(signerName) > 0 { + w.Write(LEVEL_0, "Signer:\t%s\n", signerName) + } + if expirationSeconds != nil { + w.Write(LEVEL_0, "Requested Duration:\t%s\n", duration.HumanDuration(utilcsr.ExpirationSecondsToDuration(*expirationSeconds))) + } + w.Write(LEVEL_0, "Status:\t%s\n", status) + + w.Write(LEVEL_0, "Subject:\n") + w.Write(LEVEL_0, "\tCommon Name:\t%s\n", cr.Subject.CommonName) + w.Write(LEVEL_0, "\tSerial Number:\t%s\n", cr.Subject.SerialNumber) + printListHelper(w, "\t", "Organization", cr.Subject.Organization) + printListHelper(w, "\t", "Organizational Unit", cr.Subject.OrganizationalUnit) + printListHelper(w, "\t", "Country", cr.Subject.Country) + printListHelper(w, "\t", "Locality", cr.Subject.Locality) + printListHelper(w, "\t", "Province", cr.Subject.Province) + printListHelper(w, "\t", "StreetAddress", cr.Subject.StreetAddress) + 
printListHelper(w, "\t", "PostalCode", cr.Subject.PostalCode) + + if len(cr.DNSNames)+len(cr.EmailAddresses)+len(cr.IPAddresses)+len(cr.URIs) > 0 { + w.Write(LEVEL_0, "Subject Alternative Names:\n") + printListHelper(w, "\t", "DNS Names", cr.DNSNames) + printListHelper(w, "\t", "Email Addresses", cr.EmailAddresses) + var uris []string + for _, uri := range cr.URIs { + uris = append(uris, uri.String()) + } + printListHelper(w, "\t", "URIs", uris) + var ipaddrs []string + for _, ipaddr := range cr.IPAddresses { + ipaddrs = append(ipaddrs, ipaddr.String()) + } + printListHelper(w, "\t", "IP Addresses", ipaddrs) + } + + if events != nil { + DescribeEvents(events, w) + } + + return nil + }) +} + +// HorizontalPodAutoscalerDescriber generates information about a horizontal pod autoscaler. +type HorizontalPodAutoscalerDescriber struct { + client clientset.Interface +} + +func (d *HorizontalPodAutoscalerDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + var events *corev1.EventList + + // autoscaling/v2 is introduced since v1.23 and autoscaling/v1 does not have full backward compatibility + // with autoscaling/v2, so describer will try to get and describe hpa v2 object firstly, if it fails, + // describer will fall back to do with hpa v1 object + hpaV2, err := d.client.AutoscalingV2().HorizontalPodAutoscalers(namespace).Get(context.TODO(), name, metav1.GetOptions{}) + if err == nil { + if describerSettings.ShowEvents { + events, _ = searchEvents(d.client.CoreV1(), hpaV2, describerSettings.ChunkSize) + } + return describeHorizontalPodAutoscalerV2(hpaV2, events, d) + } + + hpaV1, err := d.client.AutoscalingV1().HorizontalPodAutoscalers(namespace).Get(context.TODO(), name, metav1.GetOptions{}) + if err == nil { + if describerSettings.ShowEvents { + events, _ = searchEvents(d.client.CoreV1(), hpaV1, describerSettings.ChunkSize) + } + return describeHorizontalPodAutoscalerV1(hpaV1, events, d) + } + + return "", err +} + +func describeHorizontalPodAutoscalerV2(hpa *autoscalingv2.HorizontalPodAutoscaler, events *corev1.EventList, d *HorizontalPodAutoscalerDescriber) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", hpa.Name) + w.Write(LEVEL_0, "Namespace:\t%s\n", hpa.Namespace) + printLabelsMultiline(w, "Labels", hpa.Labels) + printAnnotationsMultiline(w, "Annotations", hpa.Annotations) + w.Write(LEVEL_0, "CreationTimestamp:\t%s\n", hpa.CreationTimestamp.Time.Format(time.RFC1123Z)) + w.Write(LEVEL_0, "Reference:\t%s/%s\n", + hpa.Spec.ScaleTargetRef.Kind, + hpa.Spec.ScaleTargetRef.Name) + w.Write(LEVEL_0, "Metrics:\t( current / target )\n") + for i, metric := range hpa.Spec.Metrics { + switch metric.Type { + case autoscalingv2.ExternalMetricSourceType: + if metric.External.Target.AverageValue != nil { + current := "" + if len(hpa.Status.CurrentMetrics) > i && hpa.Status.CurrentMetrics[i].External != nil && + hpa.Status.CurrentMetrics[i].External.Current.AverageValue != nil { + current = hpa.Status.CurrentMetrics[i].External.Current.AverageValue.String() + } + w.Write(LEVEL_1, "%q (target average value):\t%s / %s\n", metric.External.Metric.Name, current, metric.External.Target.AverageValue.String()) + } else { + current := "" + if len(hpa.Status.CurrentMetrics) > i && hpa.Status.CurrentMetrics[i].External != nil { + current = hpa.Status.CurrentMetrics[i].External.Current.Value.String() + } + w.Write(LEVEL_1, "%q (target value):\t%s / %s\n", metric.External.Metric.Name, current, 
metric.External.Target.Value.String()) + + } + case autoscalingv2.PodsMetricSourceType: + current := "" + if len(hpa.Status.CurrentMetrics) > i && hpa.Status.CurrentMetrics[i].Pods != nil { + current = hpa.Status.CurrentMetrics[i].Pods.Current.AverageValue.String() + } + w.Write(LEVEL_1, "%q on pods:\t%s / %s\n", metric.Pods.Metric.Name, current, metric.Pods.Target.AverageValue.String()) + case autoscalingv2.ObjectMetricSourceType: + w.Write(LEVEL_1, "\"%s\" on %s/%s ", metric.Object.Metric.Name, metric.Object.DescribedObject.Kind, metric.Object.DescribedObject.Name) + if metric.Object.Target.Type == autoscalingv2.AverageValueMetricType { + current := "" + if len(hpa.Status.CurrentMetrics) > i && hpa.Status.CurrentMetrics[i].Object != nil { + current = hpa.Status.CurrentMetrics[i].Object.Current.AverageValue.String() + } + w.Write(LEVEL_0, "(target average value):\t%s / %s\n", current, metric.Object.Target.AverageValue.String()) + } else { + current := "" + if len(hpa.Status.CurrentMetrics) > i && hpa.Status.CurrentMetrics[i].Object != nil { + current = hpa.Status.CurrentMetrics[i].Object.Current.Value.String() + } + w.Write(LEVEL_0, "(target value):\t%s / %s\n", current, metric.Object.Target.Value.String()) + } + case autoscalingv2.ResourceMetricSourceType: + w.Write(LEVEL_1, "resource %s on pods", string(metric.Resource.Name)) + if metric.Resource.Target.AverageValue != nil { + current := "" + if len(hpa.Status.CurrentMetrics) > i && hpa.Status.CurrentMetrics[i].Resource != nil { + current = hpa.Status.CurrentMetrics[i].Resource.Current.AverageValue.String() + } + w.Write(LEVEL_0, ":\t%s / %s\n", current, metric.Resource.Target.AverageValue.String()) + } else { + current := "" + if len(hpa.Status.CurrentMetrics) > i && hpa.Status.CurrentMetrics[i].Resource != nil && hpa.Status.CurrentMetrics[i].Resource.Current.AverageUtilization != nil { + current = fmt.Sprintf("%d%% (%s)", *hpa.Status.CurrentMetrics[i].Resource.Current.AverageUtilization, hpa.Status.CurrentMetrics[i].Resource.Current.AverageValue.String()) + } + + target := "" + if metric.Resource.Target.AverageUtilization != nil { + target = fmt.Sprintf("%d%%", *metric.Resource.Target.AverageUtilization) + } + w.Write(LEVEL_1, "(as a percentage of request):\t%s / %s\n", current, target) + } + case autoscalingv2.ContainerResourceMetricSourceType: + w.Write(LEVEL_1, "resource %s of container \"%s\" on pods", string(metric.ContainerResource.Name), metric.ContainerResource.Container) + if metric.ContainerResource.Target.AverageValue != nil { + current := "" + if len(hpa.Status.CurrentMetrics) > i && hpa.Status.CurrentMetrics[i].ContainerResource != nil { + current = hpa.Status.CurrentMetrics[i].ContainerResource.Current.AverageValue.String() + } + w.Write(LEVEL_0, ":\t%s / %s\n", current, metric.ContainerResource.Target.AverageValue.String()) + } else { + current := "" + if len(hpa.Status.CurrentMetrics) > i && hpa.Status.CurrentMetrics[i].ContainerResource != nil && hpa.Status.CurrentMetrics[i].ContainerResource.Current.AverageUtilization != nil { + current = fmt.Sprintf("%d%% (%s)", *hpa.Status.CurrentMetrics[i].ContainerResource.Current.AverageUtilization, hpa.Status.CurrentMetrics[i].ContainerResource.Current.AverageValue.String()) + } + + target := "" + if metric.ContainerResource.Target.AverageUtilization != nil { + target = fmt.Sprintf("%d%%", *metric.ContainerResource.Target.AverageUtilization) + } + w.Write(LEVEL_1, "(as a percentage of request):\t%s / %s\n", current, target) + } + default: + w.Write(LEVEL_1, "\n", 
string(metric.Type)) + } + } + minReplicas := "" + if hpa.Spec.MinReplicas != nil { + minReplicas = fmt.Sprintf("%d", *hpa.Spec.MinReplicas) + } + w.Write(LEVEL_0, "Min replicas:\t%s\n", minReplicas) + w.Write(LEVEL_0, "Max replicas:\t%d\n", hpa.Spec.MaxReplicas) + // only print the hpa behavior if present + if hpa.Spec.Behavior != nil { + w.Write(LEVEL_0, "Behavior:\n") + printDirectionBehavior(w, "Scale Up", hpa.Spec.Behavior.ScaleUp) + printDirectionBehavior(w, "Scale Down", hpa.Spec.Behavior.ScaleDown) + } + w.Write(LEVEL_0, "%s pods:\t", hpa.Spec.ScaleTargetRef.Kind) + w.Write(LEVEL_0, "%d current / %d desired\n", hpa.Status.CurrentReplicas, hpa.Status.DesiredReplicas) + + if len(hpa.Status.Conditions) > 0 { + w.Write(LEVEL_0, "Conditions:\n") + w.Write(LEVEL_1, "Type\tStatus\tReason\tMessage\n") + w.Write(LEVEL_1, "----\t------\t------\t-------\n") + for _, c := range hpa.Status.Conditions { + w.Write(LEVEL_1, "%v\t%v\t%v\t%v\n", c.Type, c.Status, c.Reason, c.Message) + } + } + + if events != nil { + DescribeEvents(events, w) + } + + return nil + }) +} + +func printDirectionBehavior(w PrefixWriter, direction string, rules *autoscalingv2.HPAScalingRules) { + if rules != nil { + w.Write(LEVEL_1, "%s:\n", direction) + if rules.StabilizationWindowSeconds != nil { + w.Write(LEVEL_2, "Stabilization Window: %d seconds\n", *rules.StabilizationWindowSeconds) + } + if len(rules.Policies) > 0 { + if rules.SelectPolicy != nil { + w.Write(LEVEL_2, "Select Policy: %s\n", *rules.SelectPolicy) + } else { + w.Write(LEVEL_2, "Select Policy: %s\n", autoscalingv2.MaxChangePolicySelect) + } + w.Write(LEVEL_2, "Policies:\n") + for _, p := range rules.Policies { + w.Write(LEVEL_3, "- Type: %s\tValue: %d\tPeriod: %d seconds\n", p.Type, p.Value, p.PeriodSeconds) + } + } + } +} + +func describeHorizontalPodAutoscalerV1(hpa *autoscalingv1.HorizontalPodAutoscaler, events *corev1.EventList, d *HorizontalPodAutoscalerDescriber) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", hpa.Name) + w.Write(LEVEL_0, "Namespace:\t%s\n", hpa.Namespace) + printLabelsMultiline(w, "Labels", hpa.Labels) + printAnnotationsMultiline(w, "Annotations", hpa.Annotations) + w.Write(LEVEL_0, "CreationTimestamp:\t%s\n", hpa.CreationTimestamp.Time.Format(time.RFC1123Z)) + w.Write(LEVEL_0, "Reference:\t%s/%s\n", + hpa.Spec.ScaleTargetRef.Kind, + hpa.Spec.ScaleTargetRef.Name) + + if hpa.Spec.TargetCPUUtilizationPercentage != nil { + w.Write(LEVEL_0, "Target CPU utilization:\t%d%%\n", *hpa.Spec.TargetCPUUtilizationPercentage) + current := "" + if hpa.Status.CurrentCPUUtilizationPercentage != nil { + current = fmt.Sprintf("%d", *hpa.Status.CurrentCPUUtilizationPercentage) + } + w.Write(LEVEL_0, "Current CPU utilization:\t%s%%\n", current) + } + + minReplicas := "" + if hpa.Spec.MinReplicas != nil { + minReplicas = fmt.Sprintf("%d", *hpa.Spec.MinReplicas) + } + w.Write(LEVEL_0, "Min replicas:\t%s\n", minReplicas) + w.Write(LEVEL_0, "Max replicas:\t%d\n", hpa.Spec.MaxReplicas) + w.Write(LEVEL_0, "%s pods:\t", hpa.Spec.ScaleTargetRef.Kind) + w.Write(LEVEL_0, "%d current / %d desired\n", hpa.Status.CurrentReplicas, hpa.Status.DesiredReplicas) + + if events != nil { + DescribeEvents(events, w) + } + + return nil + }) +} + +func describeNodeResource(nodeNonTerminatedPodsList *corev1.PodList, node *corev1.Node, w PrefixWriter) { + w.Write(LEVEL_0, "Non-terminated Pods:\t(%d in total)\n", len(nodeNonTerminatedPodsList.Items)) + w.Write(LEVEL_1, "Namespace\tName\t\tCPU 
Requests\tCPU Limits\tMemory Requests\tMemory Limits\tAge\n") + w.Write(LEVEL_1, "---------\t----\t\t------------\t----------\t---------------\t-------------\t---\n") + allocatable := node.Status.Capacity + if len(node.Status.Allocatable) > 0 { + allocatable = node.Status.Allocatable + } + + for _, pod := range nodeNonTerminatedPodsList.Items { + req, limit := resourcehelper.PodRequestsAndLimits(&pod) + cpuReq, cpuLimit, memoryReq, memoryLimit := req[corev1.ResourceCPU], limit[corev1.ResourceCPU], req[corev1.ResourceMemory], limit[corev1.ResourceMemory] + fractionCpuReq := float64(cpuReq.MilliValue()) / float64(allocatable.Cpu().MilliValue()) * 100 + fractionCpuLimit := float64(cpuLimit.MilliValue()) / float64(allocatable.Cpu().MilliValue()) * 100 + fractionMemoryReq := float64(memoryReq.Value()) / float64(allocatable.Memory().Value()) * 100 + fractionMemoryLimit := float64(memoryLimit.Value()) / float64(allocatable.Memory().Value()) * 100 + w.Write(LEVEL_1, "%s\t%s\t\t%s (%d%%)\t%s (%d%%)\t%s (%d%%)\t%s (%d%%)\t%s\n", pod.Namespace, pod.Name, + cpuReq.String(), int64(fractionCpuReq), cpuLimit.String(), int64(fractionCpuLimit), + memoryReq.String(), int64(fractionMemoryReq), memoryLimit.String(), int64(fractionMemoryLimit), translateTimestampSince(pod.CreationTimestamp)) + } + + w.Write(LEVEL_0, "Allocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n") + w.Write(LEVEL_1, "Resource\tRequests\tLimits\n") + w.Write(LEVEL_1, "--------\t--------\t------\n") + reqs, limits := getPodsTotalRequestsAndLimits(nodeNonTerminatedPodsList) + cpuReqs, cpuLimits, memoryReqs, memoryLimits, ephemeralstorageReqs, ephemeralstorageLimits := + reqs[corev1.ResourceCPU], limits[corev1.ResourceCPU], reqs[corev1.ResourceMemory], limits[corev1.ResourceMemory], reqs[corev1.ResourceEphemeralStorage], limits[corev1.ResourceEphemeralStorage] + fractionCpuReqs := float64(0) + fractionCpuLimits := float64(0) + if allocatable.Cpu().MilliValue() != 0 { + fractionCpuReqs = float64(cpuReqs.MilliValue()) / float64(allocatable.Cpu().MilliValue()) * 100 + fractionCpuLimits = float64(cpuLimits.MilliValue()) / float64(allocatable.Cpu().MilliValue()) * 100 + } + fractionMemoryReqs := float64(0) + fractionMemoryLimits := float64(0) + if allocatable.Memory().Value() != 0 { + fractionMemoryReqs = float64(memoryReqs.Value()) / float64(allocatable.Memory().Value()) * 100 + fractionMemoryLimits = float64(memoryLimits.Value()) / float64(allocatable.Memory().Value()) * 100 + } + fractionEphemeralStorageReqs := float64(0) + fractionEphemeralStorageLimits := float64(0) + if allocatable.StorageEphemeral().Value() != 0 { + fractionEphemeralStorageReqs = float64(ephemeralstorageReqs.Value()) / float64(allocatable.StorageEphemeral().Value()) * 100 + fractionEphemeralStorageLimits = float64(ephemeralstorageLimits.Value()) / float64(allocatable.StorageEphemeral().Value()) * 100 + } + w.Write(LEVEL_1, "%s\t%s (%d%%)\t%s (%d%%)\n", + corev1.ResourceCPU, cpuReqs.String(), int64(fractionCpuReqs), cpuLimits.String(), int64(fractionCpuLimits)) + w.Write(LEVEL_1, "%s\t%s (%d%%)\t%s (%d%%)\n", + corev1.ResourceMemory, memoryReqs.String(), int64(fractionMemoryReqs), memoryLimits.String(), int64(fractionMemoryLimits)) + w.Write(LEVEL_1, "%s\t%s (%d%%)\t%s (%d%%)\n", + corev1.ResourceEphemeralStorage, ephemeralstorageReqs.String(), int64(fractionEphemeralStorageReqs), ephemeralstorageLimits.String(), int64(fractionEphemeralStorageLimits)) + + extResources := make([]string, 0, len(allocatable)) + hugePageResources := 
make([]string, 0, len(allocatable)) + for resource := range allocatable { + if resourcehelper.IsHugePageResourceName(resource) { + hugePageResources = append(hugePageResources, string(resource)) + } else if !resourcehelper.IsStandardContainerResourceName(string(resource)) && resource != corev1.ResourcePods { + extResources = append(extResources, string(resource)) + } + } + + sort.Strings(extResources) + sort.Strings(hugePageResources) + + for _, resource := range hugePageResources { + hugePageSizeRequests, hugePageSizeLimits, hugePageSizeAllocable := reqs[corev1.ResourceName(resource)], limits[corev1.ResourceName(resource)], allocatable[corev1.ResourceName(resource)] + fractionHugePageSizeRequests := float64(0) + fractionHugePageSizeLimits := float64(0) + if hugePageSizeAllocable.Value() != 0 { + fractionHugePageSizeRequests = float64(hugePageSizeRequests.Value()) / float64(hugePageSizeAllocable.Value()) * 100 + fractionHugePageSizeLimits = float64(hugePageSizeLimits.Value()) / float64(hugePageSizeAllocable.Value()) * 100 + } + w.Write(LEVEL_1, "%s\t%s (%d%%)\t%s (%d%%)\n", + resource, hugePageSizeRequests.String(), int64(fractionHugePageSizeRequests), hugePageSizeLimits.String(), int64(fractionHugePageSizeLimits)) + } + + for _, ext := range extResources { + extRequests, extLimits := reqs[corev1.ResourceName(ext)], limits[corev1.ResourceName(ext)] + w.Write(LEVEL_1, "%s\t%s\t%s\n", ext, extRequests.String(), extLimits.String()) + } +} + +func getPodsTotalRequestsAndLimits(podList *corev1.PodList) (reqs map[corev1.ResourceName]resource.Quantity, limits map[corev1.ResourceName]resource.Quantity) { + reqs, limits = map[corev1.ResourceName]resource.Quantity{}, map[corev1.ResourceName]resource.Quantity{} + for _, pod := range podList.Items { + podReqs, podLimits := resourcehelper.PodRequestsAndLimits(&pod) + for podReqName, podReqValue := range podReqs { + if value, ok := reqs[podReqName]; !ok { + reqs[podReqName] = podReqValue.DeepCopy() + } else { + value.Add(podReqValue) + reqs[podReqName] = value + } + } + for podLimitName, podLimitValue := range podLimits { + if value, ok := limits[podLimitName]; !ok { + limits[podLimitName] = podLimitValue.DeepCopy() + } else { + value.Add(podLimitValue) + limits[podLimitName] = value + } + } + } + return +} + +func DescribeEvents(el *corev1.EventList, w PrefixWriter) { + if len(el.Items) == 0 { + w.Write(LEVEL_0, "Events:\t\n") + return + } + w.Flush() + sort.Sort(event.SortableEvents(el.Items)) + w.Write(LEVEL_0, "Events:\n Type\tReason\tAge\tFrom\tMessage\n") + w.Write(LEVEL_1, "----\t------\t----\t----\t-------\n") + for _, e := range el.Items { + var interval string + firstTimestampSince := translateMicroTimestampSince(e.EventTime) + if e.EventTime.IsZero() { + firstTimestampSince = translateTimestampSince(e.FirstTimestamp) + } + if e.Series != nil { + interval = fmt.Sprintf("%s (x%d over %s)", translateMicroTimestampSince(e.Series.LastObservedTime), e.Series.Count, firstTimestampSince) + } else if e.Count > 1 { + interval = fmt.Sprintf("%s (x%d over %s)", translateTimestampSince(e.LastTimestamp), e.Count, firstTimestampSince) + } else { + interval = firstTimestampSince + } + source := e.Source.Component + if source == "" { + source = e.ReportingController + } + w.Write(LEVEL_1, "%v\t%v\t%s\t%v\t%v\n", + e.Type, + e.Reason, + interval, + source, + strings.TrimSpace(e.Message), + ) + } +} + +// DeploymentDescriber generates information about a deployment. 
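+// It also resolves the deployment's old and new ReplicaSets so that rollout progress can be shown alongside the spec.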
+type DeploymentDescriber struct { + client clientset.Interface +} + +func (dd *DeploymentDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + d, err := dd.client.AppsV1().Deployments(namespace).Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + + var events *corev1.EventList + if describerSettings.ShowEvents { + events, _ = searchEvents(dd.client.CoreV1(), d, describerSettings.ChunkSize) + } + + var oldRSs, newRSs []*appsv1.ReplicaSet + if oldResult, _, newResult, err := deploymentutil.GetAllReplicaSetsInChunks(d, dd.client.AppsV1(), describerSettings.ChunkSize); err == nil { + oldRSs = oldResult + if newResult != nil { + newRSs = append(newRSs, newResult) + } + } + + return describeDeployment(d, oldRSs, newRSs, events) +} + +func describeDeployment(d *appsv1.Deployment, oldRSs []*appsv1.ReplicaSet, newRSs []*appsv1.ReplicaSet, events *corev1.EventList) (string, error) { + selector, err := metav1.LabelSelectorAsSelector(d.Spec.Selector) + if err != nil { + return "", err + } + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", d.ObjectMeta.Name) + w.Write(LEVEL_0, "Namespace:\t%s\n", d.ObjectMeta.Namespace) + w.Write(LEVEL_0, "CreationTimestamp:\t%s\n", d.CreationTimestamp.Time.Format(time.RFC1123Z)) + printLabelsMultiline(w, "Labels", d.Labels) + printAnnotationsMultiline(w, "Annotations", d.Annotations) + w.Write(LEVEL_0, "Selector:\t%s\n", selector) + w.Write(LEVEL_0, "Replicas:\t%d desired | %d updated | %d total | %d available | %d unavailable\n", *(d.Spec.Replicas), d.Status.UpdatedReplicas, d.Status.Replicas, d.Status.AvailableReplicas, d.Status.UnavailableReplicas) + w.Write(LEVEL_0, "StrategyType:\t%s\n", d.Spec.Strategy.Type) + w.Write(LEVEL_0, "MinReadySeconds:\t%d\n", d.Spec.MinReadySeconds) + if d.Spec.Strategy.RollingUpdate != nil { + ru := d.Spec.Strategy.RollingUpdate + w.Write(LEVEL_0, "RollingUpdateStrategy:\t%s max unavailable, %s max surge\n", ru.MaxUnavailable.String(), ru.MaxSurge.String()) + } + DescribePodTemplate(&d.Spec.Template, w) + if len(d.Status.Conditions) > 0 { + w.Write(LEVEL_0, "Conditions:\n Type\tStatus\tReason\n") + w.Write(LEVEL_1, "----\t------\t------\n") + for _, c := range d.Status.Conditions { + w.Write(LEVEL_1, "%v \t%v\t%v\n", c.Type, c.Status, c.Reason) + } + } + + if len(oldRSs) > 0 || len(newRSs) > 0 { + w.Write(LEVEL_0, "OldReplicaSets:\t%s\n", printReplicaSetsByLabels(oldRSs)) + w.Write(LEVEL_0, "NewReplicaSet:\t%s\n", printReplicaSetsByLabels(newRSs)) + } + if events != nil { + DescribeEvents(events, w) + } + + return nil + }) +} + +func printReplicaSetsByLabels(matchingRSs []*appsv1.ReplicaSet) string { + // Format the matching ReplicaSets into strings. 
+ rsStrings := make([]string, 0, len(matchingRSs)) + for _, rs := range matchingRSs { + rsStrings = append(rsStrings, fmt.Sprintf("%s (%d/%d replicas created)", rs.Name, rs.Status.Replicas, *rs.Spec.Replicas)) + } + + list := strings.Join(rsStrings, ", ") + if list == "" { + return "" + } + return list +} + +func getPodStatusForController(c corev1client.PodInterface, selector labels.Selector, uid types.UID, settings DescriberSettings) ( + running, waiting, succeeded, failed int, err error) { + initialOpts := metav1.ListOptions{LabelSelector: selector.String(), Limit: settings.ChunkSize} + rcPods, err := getPodsInChunks(c, initialOpts) + if err != nil { + return + } + for _, pod := range rcPods.Items { + controllerRef := metav1.GetControllerOf(&pod) + // Skip pods that are orphans or owned by other controllers. + if controllerRef == nil || controllerRef.UID != uid { + continue + } + switch pod.Status.Phase { + case corev1.PodRunning: + running++ + case corev1.PodPending: + waiting++ + case corev1.PodSucceeded: + succeeded++ + case corev1.PodFailed: + failed++ + } + } + return +} + +func getPodsInChunks(c corev1client.PodInterface, initialOpts metav1.ListOptions) (*corev1.PodList, error) { + podList := &corev1.PodList{} + err := runtimeresource.FollowContinue(&initialOpts, + func(options metav1.ListOptions) (runtime.Object, error) { + newList, err := c.List(context.TODO(), options) + if err != nil { + return nil, runtimeresource.EnhanceListError(err, options, corev1.ResourcePods.String()) + } + podList.Items = append(podList.Items, newList.Items...) + return newList, nil + }) + return podList, err +} + +// ConfigMapDescriber generates information about a ConfigMap +type ConfigMapDescriber struct { + clientset.Interface +} + +func (d *ConfigMapDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + c := d.CoreV1().ConfigMaps(namespace) + + configMap, err := c.Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", configMap.Name) + w.Write(LEVEL_0, "Namespace:\t%s\n", configMap.Namespace) + printLabelsMultiline(w, "Labels", configMap.Labels) + printAnnotationsMultiline(w, "Annotations", configMap.Annotations) + + w.Write(LEVEL_0, "\nData\n====\n") + for k, v := range configMap.Data { + w.Write(LEVEL_0, "%s:\n----\n", k) + w.Write(LEVEL_0, "%s\n", string(v)) + } + w.Write(LEVEL_0, "\nBinaryData\n====\n") + for k, v := range configMap.BinaryData { + w.Write(LEVEL_0, "%s: %s bytes\n", k, strconv.Itoa(len(v))) + } + w.Write(LEVEL_0, "\n") + + if describerSettings.ShowEvents { + events, err := searchEvents(d.CoreV1(), configMap, describerSettings.ChunkSize) + if err != nil { + return err + } + if events != nil { + DescribeEvents(events, w) + } + } + return nil + }) +} + +// NetworkPolicyDescriber generates information about a networkingv1.NetworkPolicy +type NetworkPolicyDescriber struct { + clientset.Interface +} + +func (d *NetworkPolicyDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + c := d.NetworkingV1().NetworkPolicies(namespace) + + networkPolicy, err := c.Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + + return describeNetworkPolicy(networkPolicy) +} + +func describeNetworkPolicy(networkPolicy *networkingv1.NetworkPolicy) (string, error) { + return tabbedString(func(out io.Writer) error { + w := 
NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", networkPolicy.Name) + w.Write(LEVEL_0, "Namespace:\t%s\n", networkPolicy.Namespace) + w.Write(LEVEL_0, "Created on:\t%s\n", networkPolicy.CreationTimestamp) + printLabelsMultiline(w, "Labels", networkPolicy.Labels) + printAnnotationsMultiline(w, "Annotations", networkPolicy.Annotations) + describeNetworkPolicySpec(networkPolicy.Spec, w) + return nil + }) +} + +func describeNetworkPolicySpec(nps networkingv1.NetworkPolicySpec, w PrefixWriter) { + w.Write(LEVEL_0, "Spec:\n") + w.Write(LEVEL_1, "PodSelector: ") + if len(nps.PodSelector.MatchLabels) == 0 && len(nps.PodSelector.MatchExpressions) == 0 { + w.Write(LEVEL_2, " (Allowing the specific traffic to all pods in this namespace)\n") + } else { + w.Write(LEVEL_2, "%s\n", metav1.FormatLabelSelector(&nps.PodSelector)) + } + + ingressEnabled, egressEnabled := getPolicyType(nps) + if ingressEnabled { + w.Write(LEVEL_1, "Allowing ingress traffic:\n") + printNetworkPolicySpecIngressFrom(nps.Ingress, " ", w) + } else { + w.Write(LEVEL_1, "Not affecting ingress traffic\n") + } + if egressEnabled { + w.Write(LEVEL_1, "Allowing egress traffic:\n") + printNetworkPolicySpecEgressTo(nps.Egress, " ", w) + } else { + w.Write(LEVEL_1, "Not affecting egress traffic\n") + + } + w.Write(LEVEL_1, "Policy Types: %v\n", policyTypesToString(nps.PolicyTypes)) +} + +func getPolicyType(nps networkingv1.NetworkPolicySpec) (bool, bool) { + var ingress, egress bool + for _, pt := range nps.PolicyTypes { + switch pt { + case networkingv1.PolicyTypeIngress: + ingress = true + case networkingv1.PolicyTypeEgress: + egress = true + } + } + + return ingress, egress +} + +func printNetworkPolicySpecIngressFrom(npirs []networkingv1.NetworkPolicyIngressRule, initialIndent string, w PrefixWriter) { + if len(npirs) == 0 { + w.Write(LEVEL_0, "%s%s\n", initialIndent, " (Selected pods are isolated for ingress connectivity)") + return + } + for i, npir := range npirs { + if len(npir.Ports) == 0 { + w.Write(LEVEL_0, "%s%s\n", initialIndent, "To Port: (traffic allowed to all ports)") + } else { + for _, port := range npir.Ports { + var proto corev1.Protocol + if port.Protocol != nil { + proto = *port.Protocol + } else { + proto = corev1.ProtocolTCP + } + w.Write(LEVEL_0, "%s%s: %s/%s\n", initialIndent, "To Port", port.Port, proto) + } + } + if len(npir.From) == 0 { + w.Write(LEVEL_0, "%s%s\n", initialIndent, "From: (traffic not restricted by source)") + } else { + for _, from := range npir.From { + w.Write(LEVEL_0, "%s%s\n", initialIndent, "From:") + if from.PodSelector != nil && from.NamespaceSelector != nil { + w.Write(LEVEL_1, "%s%s: %s\n", initialIndent, "NamespaceSelector", metav1.FormatLabelSelector(from.NamespaceSelector)) + w.Write(LEVEL_1, "%s%s: %s\n", initialIndent, "PodSelector", metav1.FormatLabelSelector(from.PodSelector)) + } else if from.PodSelector != nil { + w.Write(LEVEL_1, "%s%s: %s\n", initialIndent, "PodSelector", metav1.FormatLabelSelector(from.PodSelector)) + } else if from.NamespaceSelector != nil { + w.Write(LEVEL_1, "%s%s: %s\n", initialIndent, "NamespaceSelector", metav1.FormatLabelSelector(from.NamespaceSelector)) + } else if from.IPBlock != nil { + w.Write(LEVEL_1, "%sIPBlock:\n", initialIndent) + w.Write(LEVEL_2, "%sCIDR: %s\n", initialIndent, from.IPBlock.CIDR) + w.Write(LEVEL_2, "%sExcept: %v\n", initialIndent, strings.Join(from.IPBlock.Except, ", ")) + } + } + } + if i != len(npirs)-1 { + w.Write(LEVEL_0, "%s%s\n", initialIndent, "----------") + } + } +} + +func printNetworkPolicySpecEgressTo(npers 
[]networkingv1.NetworkPolicyEgressRule, initialIndent string, w PrefixWriter) { + if len(npers) == 0 { + w.Write(LEVEL_0, "%s%s\n", initialIndent, " (Selected pods are isolated for egress connectivity)") + return + } + for i, nper := range npers { + if len(nper.Ports) == 0 { + w.Write(LEVEL_0, "%s%s\n", initialIndent, "To Port: (traffic allowed to all ports)") + } else { + for _, port := range nper.Ports { + var proto corev1.Protocol + if port.Protocol != nil { + proto = *port.Protocol + } else { + proto = corev1.ProtocolTCP + } + w.Write(LEVEL_0, "%s%s: %s/%s\n", initialIndent, "To Port", port.Port, proto) + } + } + if len(nper.To) == 0 { + w.Write(LEVEL_0, "%s%s\n", initialIndent, "To: (traffic not restricted by destination)") + } else { + for _, to := range nper.To { + w.Write(LEVEL_0, "%s%s\n", initialIndent, "To:") + if to.PodSelector != nil && to.NamespaceSelector != nil { + w.Write(LEVEL_1, "%s%s: %s\n", initialIndent, "NamespaceSelector", metav1.FormatLabelSelector(to.NamespaceSelector)) + w.Write(LEVEL_1, "%s%s: %s\n", initialIndent, "PodSelector", metav1.FormatLabelSelector(to.PodSelector)) + } else if to.PodSelector != nil { + w.Write(LEVEL_1, "%s%s: %s\n", initialIndent, "PodSelector", metav1.FormatLabelSelector(to.PodSelector)) + } else if to.NamespaceSelector != nil { + w.Write(LEVEL_1, "%s%s: %s\n", initialIndent, "NamespaceSelector", metav1.FormatLabelSelector(to.NamespaceSelector)) + } else if to.IPBlock != nil { + w.Write(LEVEL_1, "%sIPBlock:\n", initialIndent) + w.Write(LEVEL_2, "%sCIDR: %s\n", initialIndent, to.IPBlock.CIDR) + w.Write(LEVEL_2, "%sExcept: %v\n", initialIndent, strings.Join(to.IPBlock.Except, ", ")) + } + } + } + if i != len(npers)-1 { + w.Write(LEVEL_0, "%s%s\n", initialIndent, "----------") + } + } +} + +type StorageClassDescriber struct { + clientset.Interface +} + +func (s *StorageClassDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + sc, err := s.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + + var events *corev1.EventList + if describerSettings.ShowEvents { + events, _ = searchEvents(s.CoreV1(), sc, describerSettings.ChunkSize) + } + + return describeStorageClass(sc, events) +} + +func describeStorageClass(sc *storagev1.StorageClass, events *corev1.EventList) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", sc.Name) + w.Write(LEVEL_0, "IsDefaultClass:\t%s\n", storageutil.IsDefaultAnnotationText(sc.ObjectMeta)) + w.Write(LEVEL_0, "Annotations:\t%s\n", labels.FormatLabels(sc.Annotations)) + w.Write(LEVEL_0, "Provisioner:\t%s\n", sc.Provisioner) + w.Write(LEVEL_0, "Parameters:\t%s\n", labels.FormatLabels(sc.Parameters)) + w.Write(LEVEL_0, "AllowVolumeExpansion:\t%s\n", printBoolPtr(sc.AllowVolumeExpansion)) + if len(sc.MountOptions) == 0 { + w.Write(LEVEL_0, "MountOptions:\t\n") + } else { + w.Write(LEVEL_0, "MountOptions:\n") + for _, option := range sc.MountOptions { + w.Write(LEVEL_1, "%s\n", option) + } + } + if sc.ReclaimPolicy != nil { + w.Write(LEVEL_0, "ReclaimPolicy:\t%s\n", *sc.ReclaimPolicy) + } + if sc.VolumeBindingMode != nil { + w.Write(LEVEL_0, "VolumeBindingMode:\t%s\n", *sc.VolumeBindingMode) + } + if sc.AllowedTopologies != nil { + printAllowedTopologies(w, sc.AllowedTopologies) + } + if events != nil { + DescribeEvents(events, w) + } + + return nil + }) +} + +type CSINodeDescriber struct { + clientset.Interface +} + +func (c 
*CSINodeDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + csi, err := c.StorageV1().CSINodes().Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + + var events *corev1.EventList + if describerSettings.ShowEvents { + events, _ = searchEvents(c.CoreV1(), csi, describerSettings.ChunkSize) + } + + return describeCSINode(csi, events) +} + +func describeCSINode(csi *storagev1.CSINode, events *corev1.EventList) (output string, err error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", csi.GetName()) + printLabelsMultiline(w, "Labels", csi.GetLabels()) + printAnnotationsMultiline(w, "Annotations", csi.GetAnnotations()) + w.Write(LEVEL_0, "CreationTimestamp:\t%s\n", csi.CreationTimestamp.Time.Format(time.RFC1123Z)) + w.Write(LEVEL_0, "Spec:\n") + if csi.Spec.Drivers != nil { + w.Write(LEVEL_1, "Drivers:\n") + for _, driver := range csi.Spec.Drivers { + w.Write(LEVEL_2, "%s:\n", driver.Name) + w.Write(LEVEL_3, "Node ID:\t%s\n", driver.NodeID) + if driver.Allocatable != nil && driver.Allocatable.Count != nil { + w.Write(LEVEL_3, "Allocatables:\n") + w.Write(LEVEL_4, "Count:\t%d\n", *driver.Allocatable.Count) + } + if driver.TopologyKeys != nil { + w.Write(LEVEL_3, "Topology Keys:\t%s\n", driver.TopologyKeys) + } + } + } + if events != nil { + DescribeEvents(events, w) + } + return nil + }) +} + +func printAllowedTopologies(w PrefixWriter, topologies []corev1.TopologySelectorTerm) { + w.Write(LEVEL_0, "AllowedTopologies:\t") + if len(topologies) == 0 { + w.WriteLine("") + return + } + w.WriteLine("") + for i, term := range topologies { + printTopologySelectorTermsMultilineWithIndent(w, LEVEL_1, fmt.Sprintf("Term %d", i), "\t", term.MatchLabelExpressions) + } +} + +func printTopologySelectorTermsMultilineWithIndent(w PrefixWriter, indentLevel int, title, innerIndent string, reqs []corev1.TopologySelectorLabelRequirement) { + w.Write(indentLevel, "%s:%s", title, innerIndent) + + if len(reqs) == 0 { + w.WriteLine("") + return + } + + for i, req := range reqs { + if i != 0 { + w.Write(indentLevel, "%s", innerIndent) + } + exprStr := fmt.Sprintf("%s %s", req.Key, "in") + if len(req.Values) > 0 { + exprStr = fmt.Sprintf("%s [%s]", exprStr, strings.Join(req.Values, ", ")) + } + w.Write(LEVEL_0, "%s\n", exprStr) + } +} + +type PodDisruptionBudgetDescriber struct { + clientset.Interface +} + +func (p *PodDisruptionBudgetDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + var ( + pdbv1 *policyv1.PodDisruptionBudget + pdbv1beta1 *policyv1beta1.PodDisruptionBudget + err error + ) + + pdbv1, err = p.PolicyV1().PodDisruptionBudgets(namespace).Get(context.TODO(), name, metav1.GetOptions{}) + if err == nil { + var events *corev1.EventList + if describerSettings.ShowEvents { + events, _ = searchEvents(p.CoreV1(), pdbv1, describerSettings.ChunkSize) + } + return describePodDisruptionBudgetV1(pdbv1, events) + } + + // try falling back to v1beta1 in NotFound error cases + if apierrors.IsNotFound(err) { + pdbv1beta1, err = p.PolicyV1beta1().PodDisruptionBudgets(namespace).Get(context.TODO(), name, metav1.GetOptions{}) + } + if err == nil { + var events *corev1.EventList + if describerSettings.ShowEvents { + events, _ = searchEvents(p.CoreV1(), pdbv1beta1, describerSettings.ChunkSize) + } + return describePodDisruptionBudgetV1beta1(pdbv1beta1, events) + } + + return "", err +} + +func describePodDisruptionBudgetV1(pdb 
*policyv1.PodDisruptionBudget, events *corev1.EventList) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", pdb.Name) + w.Write(LEVEL_0, "Namespace:\t%s\n", pdb.Namespace) + + if pdb.Spec.MinAvailable != nil { + w.Write(LEVEL_0, "Min available:\t%s\n", pdb.Spec.MinAvailable.String()) + } else if pdb.Spec.MaxUnavailable != nil { + w.Write(LEVEL_0, "Max unavailable:\t%s\n", pdb.Spec.MaxUnavailable.String()) + } + + if pdb.Spec.Selector != nil { + w.Write(LEVEL_0, "Selector:\t%s\n", metav1.FormatLabelSelector(pdb.Spec.Selector)) + } else { + w.Write(LEVEL_0, "Selector:\t\n") + } + w.Write(LEVEL_0, "Status:\n") + w.Write(LEVEL_2, "Allowed disruptions:\t%d\n", pdb.Status.DisruptionsAllowed) + w.Write(LEVEL_2, "Current:\t%d\n", pdb.Status.CurrentHealthy) + w.Write(LEVEL_2, "Desired:\t%d\n", pdb.Status.DesiredHealthy) + w.Write(LEVEL_2, "Total:\t%d\n", pdb.Status.ExpectedPods) + if events != nil { + DescribeEvents(events, w) + } + + return nil + }) +} + +func describePodDisruptionBudgetV1beta1(pdb *policyv1beta1.PodDisruptionBudget, events *corev1.EventList) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", pdb.Name) + w.Write(LEVEL_0, "Namespace:\t%s\n", pdb.Namespace) + + if pdb.Spec.MinAvailable != nil { + w.Write(LEVEL_0, "Min available:\t%s\n", pdb.Spec.MinAvailable.String()) + } else if pdb.Spec.MaxUnavailable != nil { + w.Write(LEVEL_0, "Max unavailable:\t%s\n", pdb.Spec.MaxUnavailable.String()) + } + + if pdb.Spec.Selector != nil { + w.Write(LEVEL_0, "Selector:\t%s\n", metav1.FormatLabelSelector(pdb.Spec.Selector)) + } else { + w.Write(LEVEL_0, "Selector:\t\n") + } + w.Write(LEVEL_0, "Status:\n") + w.Write(LEVEL_2, "Allowed disruptions:\t%d\n", pdb.Status.DisruptionsAllowed) + w.Write(LEVEL_2, "Current:\t%d\n", pdb.Status.CurrentHealthy) + w.Write(LEVEL_2, "Desired:\t%d\n", pdb.Status.DesiredHealthy) + w.Write(LEVEL_2, "Total:\t%d\n", pdb.Status.ExpectedPods) + if events != nil { + DescribeEvents(events, w) + } + + return nil + }) +} + +// PriorityClassDescriber generates information about a PriorityClass. +type PriorityClassDescriber struct { + clientset.Interface +} + +func (s *PriorityClassDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + pc, err := s.SchedulingV1().PriorityClasses().Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + + var events *corev1.EventList + if describerSettings.ShowEvents { + events, _ = searchEvents(s.CoreV1(), pc, describerSettings.ChunkSize) + } + + return describePriorityClass(pc, events) +} + +func describePriorityClass(pc *schedulingv1.PriorityClass, events *corev1.EventList) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", pc.Name) + w.Write(LEVEL_0, "Value:\t%v\n", pc.Value) + w.Write(LEVEL_0, "GlobalDefault:\t%v\n", pc.GlobalDefault) + w.Write(LEVEL_0, "PreemptionPolicy:\t%s\n", *pc.PreemptionPolicy) + w.Write(LEVEL_0, "Description:\t%s\n", pc.Description) + + w.Write(LEVEL_0, "Annotations:\t%s\n", labels.FormatLabels(pc.Annotations)) + if events != nil { + DescribeEvents(events, w) + } + + return nil + }) +} + +// PodSecurityPolicyDescriber generates information about a PodSecuritypolicyv1beta1. 
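+// The output covers privilege and capability settings, allowed volume and host access, and the
+// SELinux, RunAsUser, FSGroup, and Supplemental Groups strategies.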
+type PodSecurityPolicyDescriber struct { + clientset.Interface +} + +func (d *PodSecurityPolicyDescriber) Describe(namespace, name string, describerSettings DescriberSettings) (string, error) { + psp, err := d.PolicyV1beta1().PodSecurityPolicies().Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", err + } + + return describePodSecurityPolicy(psp) +} + +func describePodSecurityPolicy(psp *policyv1beta1.PodSecurityPolicy) (string, error) { + return tabbedString(func(out io.Writer) error { + w := NewPrefixWriter(out) + w.Write(LEVEL_0, "Name:\t%s\n", psp.Name) + + w.Write(LEVEL_0, "\nSettings:\n") + + w.Write(LEVEL_1, "Allow Privileged:\t%t\n", psp.Spec.Privileged) + if psp.Spec.AllowPrivilegeEscalation != nil { + w.Write(LEVEL_1, "Allow Privilege Escalation:\t%t\n", *psp.Spec.AllowPrivilegeEscalation) + } else { + w.Write(LEVEL_1, "Allow Privilege Escalation:\t\n") + } + w.Write(LEVEL_1, "Default Add Capabilities:\t%v\n", capsToString(psp.Spec.DefaultAddCapabilities)) + w.Write(LEVEL_1, "Required Drop Capabilities:\t%s\n", capsToString(psp.Spec.RequiredDropCapabilities)) + w.Write(LEVEL_1, "Allowed Capabilities:\t%s\n", capsToString(psp.Spec.AllowedCapabilities)) + w.Write(LEVEL_1, "Allowed Volume Types:\t%s\n", fsTypeToString(psp.Spec.Volumes)) + + if len(psp.Spec.AllowedFlexVolumes) > 0 { + w.Write(LEVEL_1, "Allowed FlexVolume Types:\t%s\n", flexVolumesToString(psp.Spec.AllowedFlexVolumes)) + } + + if len(psp.Spec.AllowedCSIDrivers) > 0 { + w.Write(LEVEL_1, "Allowed CSI Drivers:\t%s\n", csiDriversToString(psp.Spec.AllowedCSIDrivers)) + } + + if len(psp.Spec.AllowedUnsafeSysctls) > 0 { + w.Write(LEVEL_1, "Allowed Unsafe Sysctls:\t%s\n", sysctlsToString(psp.Spec.AllowedUnsafeSysctls)) + } + if len(psp.Spec.ForbiddenSysctls) > 0 { + w.Write(LEVEL_1, "Forbidden Sysctls:\t%s\n", sysctlsToString(psp.Spec.ForbiddenSysctls)) + } + w.Write(LEVEL_1, "Allow Host Network:\t%t\n", psp.Spec.HostNetwork) + w.Write(LEVEL_1, "Allow Host Ports:\t%s\n", hostPortRangeToString(psp.Spec.HostPorts)) + w.Write(LEVEL_1, "Allow Host PID:\t%t\n", psp.Spec.HostPID) + w.Write(LEVEL_1, "Allow Host IPC:\t%t\n", psp.Spec.HostIPC) + w.Write(LEVEL_1, "Read Only Root Filesystem:\t%v\n", psp.Spec.ReadOnlyRootFilesystem) + + w.Write(LEVEL_1, "SELinux Context Strategy: %s\t\n", string(psp.Spec.SELinux.Rule)) + var user, role, seLinuxType, level string + if psp.Spec.SELinux.SELinuxOptions != nil { + user = psp.Spec.SELinux.SELinuxOptions.User + role = psp.Spec.SELinux.SELinuxOptions.Role + seLinuxType = psp.Spec.SELinux.SELinuxOptions.Type + level = psp.Spec.SELinux.SELinuxOptions.Level + } + w.Write(LEVEL_2, "User:\t%s\n", stringOrNone(user)) + w.Write(LEVEL_2, "Role:\t%s\n", stringOrNone(role)) + w.Write(LEVEL_2, "Type:\t%s\n", stringOrNone(seLinuxType)) + w.Write(LEVEL_2, "Level:\t%s\n", stringOrNone(level)) + + w.Write(LEVEL_1, "Run As User Strategy: %s\t\n", string(psp.Spec.RunAsUser.Rule)) + w.Write(LEVEL_2, "Ranges:\t%s\n", idRangeToString(psp.Spec.RunAsUser.Ranges)) + + w.Write(LEVEL_1, "FSGroup Strategy: %s\t\n", string(psp.Spec.FSGroup.Rule)) + w.Write(LEVEL_2, "Ranges:\t%s\n", idRangeToString(psp.Spec.FSGroup.Ranges)) + + w.Write(LEVEL_1, "Supplemental Groups Strategy: %s\t\n", string(psp.Spec.SupplementalGroups.Rule)) + w.Write(LEVEL_2, "Ranges:\t%s\n", idRangeToString(psp.Spec.SupplementalGroups.Ranges)) + + return nil + }) +} + +func stringOrNone(s string) string { + return stringOrDefaultValue(s, "") +} + +func stringOrDefaultValue(s, defaultValue string) string { + if len(s) > 0 { + 
return s
+	}
+	return defaultValue
+}
+
+func fsTypeToString(volumes []policyv1beta1.FSType) string {
+	strVolumes := []string{}
+	for _, v := range volumes {
+		strVolumes = append(strVolumes, string(v))
+	}
+	return stringOrNone(strings.Join(strVolumes, ","))
+}
+
+func flexVolumesToString(flexVolumes []policyv1beta1.AllowedFlexVolume) string {
+	volumes := []string{}
+	for _, flexVolume := range flexVolumes {
+		volumes = append(volumes, "driver="+flexVolume.Driver)
+	}
+	return stringOrDefaultValue(strings.Join(volumes, ","), "<all>")
+}
+
+func csiDriversToString(csiDrivers []policyv1beta1.AllowedCSIDriver) string {
+	drivers := []string{}
+	for _, csiDriver := range csiDrivers {
+		drivers = append(drivers, "driver="+csiDriver.Name)
+	}
+	return stringOrDefaultValue(strings.Join(drivers, ","), "<all>")
+}
+
+func sysctlsToString(sysctls []string) string {
+	return stringOrNone(strings.Join(sysctls, ","))
+}
+
+func hostPortRangeToString(ranges []policyv1beta1.HostPortRange) string {
+	formattedString := ""
+	if ranges != nil {
+		strRanges := []string{}
+		for _, r := range ranges {
+			strRanges = append(strRanges, fmt.Sprintf("%d-%d", r.Min, r.Max))
+		}
+		formattedString = strings.Join(strRanges, ",")
+	}
+	return stringOrNone(formattedString)
+}
+
+func idRangeToString(ranges []policyv1beta1.IDRange) string {
+	formattedString := ""
+	if ranges != nil {
+		strRanges := []string{}
+		for _, r := range ranges {
+			strRanges = append(strRanges, fmt.Sprintf("%d-%d", r.Min, r.Max))
+		}
+		formattedString = strings.Join(strRanges, ",")
+	}
+	return stringOrNone(formattedString)
+}
+
+func capsToString(caps []corev1.Capability) string {
+	formattedString := ""
+	if caps != nil {
+		strCaps := []string{}
+		for _, c := range caps {
+			strCaps = append(strCaps, string(c))
+		}
+		formattedString = strings.Join(strCaps, ",")
+	}
+	return stringOrNone(formattedString)
+}
+
+func policyTypesToString(pts []networkingv1.PolicyType) string {
+	formattedString := ""
+	if pts != nil {
+		strPts := []string{}
+		for _, p := range pts {
+			strPts = append(strPts, string(p))
+		}
+		formattedString = strings.Join(strPts, ", ")
+	}
+	return stringOrNone(formattedString)
+}
+
+// newErrNoDescriber creates a new ErrNoDescriber with the names of the provided types.
+func newErrNoDescriber(types ...reflect.Type) error {
+	names := make([]string, 0, len(types))
+	for _, t := range types {
+		names = append(names, t.String())
+	}
+	return ErrNoDescriber{Types: names}
+}
+
+// Describers implements ObjectDescriber against functions registered via Add. Those functions can
+// be strongly typed. Types are exactly matched (no conversion or assignable checks).
+type Describers struct {
+	searchFns map[reflect.Type][]typeFunc
+}
+
+// DescribeObject implements ObjectDescriber and will attempt to print the provided object to a string,
+// if at least one describer function has been registered with the exact types passed, or if any
+// describer can print the exact object in its first argument (the remainder will be provided empty
+// values). If no function registered with Add can satisfy the passed objects, an ErrNoDescriber will
+// be returned.
+// TODO: reorder and partial match extra.
+func (d *Describers) DescribeObject(exact interface{}, extra ...interface{}) (string, error) {
+	exactType := reflect.TypeOf(exact)
+	fns, ok := d.searchFns[exactType]
+	if !ok {
+		return "", newErrNoDescriber(exactType)
+	}
+	if len(extra) == 0 {
+		for _, typeFn := range fns {
+			if len(typeFn.Extra) == 0 {
+				return typeFn.Describe(exact, extra...)
+ } + } + typeFn := fns[0] + for _, t := range typeFn.Extra { + v := reflect.New(t).Elem() + extra = append(extra, v.Interface()) + } + return fns[0].Describe(exact, extra...) + } + + types := make([]reflect.Type, 0, len(extra)) + for _, obj := range extra { + types = append(types, reflect.TypeOf(obj)) + } + for _, typeFn := range fns { + if typeFn.Matches(types) { + return typeFn.Describe(exact, extra...) + } + } + return "", newErrNoDescriber(append([]reflect.Type{exactType}, types...)...) +} + +// Add adds one or more describer functions to the Describer. The passed function must +// match the signature: +// +// func(...) (string, error) +// +// Any number of arguments may be provided. +func (d *Describers) Add(fns ...interface{}) error { + for _, fn := range fns { + fv := reflect.ValueOf(fn) + ft := fv.Type() + if ft.Kind() != reflect.Func { + return fmt.Errorf("expected func, got: %v", ft) + } + numIn := ft.NumIn() + if numIn == 0 { + return fmt.Errorf("expected at least one 'in' params, got: %v", ft) + } + if ft.NumOut() != 2 { + return fmt.Errorf("expected two 'out' params - (string, error), got: %v", ft) + } + types := make([]reflect.Type, 0, numIn) + for i := 0; i < numIn; i++ { + types = append(types, ft.In(i)) + } + if ft.Out(0) != reflect.TypeOf(string("")) { + return fmt.Errorf("expected string return, got: %v", ft) + } + var forErrorType error + // This convolution is necessary, otherwise TypeOf picks up on the fact + // that forErrorType is nil. + errorType := reflect.TypeOf(&forErrorType).Elem() + if ft.Out(1) != errorType { + return fmt.Errorf("expected error return, got: %v", ft) + } + + exact := types[0] + extra := types[1:] + if d.searchFns == nil { + d.searchFns = make(map[reflect.Type][]typeFunc) + } + fns := d.searchFns[exact] + fn := typeFunc{Extra: extra, Fn: fv} + fns = append(fns, fn) + d.searchFns[exact] = fns + } + return nil +} + +// typeFunc holds information about a describer function and the types it accepts +type typeFunc struct { + Extra []reflect.Type + Fn reflect.Value +} + +// Matches returns true when the passed types exactly match the Extra list. +func (fn typeFunc) Matches(types []reflect.Type) bool { + if len(fn.Extra) != len(types) { + return false + } + // reorder the items in array types and fn.Extra + // convert the type into string and sort them, check if they are matched + varMap := make(map[reflect.Type]bool) + for i := range fn.Extra { + varMap[fn.Extra[i]] = true + } + for i := range types { + if _, found := varMap[types[i]]; !found { + return false + } + } + return true +} + +// Describe invokes the nested function with the exact number of arguments. +func (fn typeFunc) Describe(exact interface{}, extra ...interface{}) (string, error) { + values := []reflect.Value{reflect.ValueOf(exact)} + for _, obj := range extra { + values = append(values, reflect.ValueOf(obj)) + } + out := fn.Fn.Call(values) + s := out[0].Interface().(string) + var err error + if !out[1].IsNil() { + err = out[1].Interface().(error) + } + return s, err +} + +// printLabelsMultiline prints multiple labels with a proper alignment. +func printLabelsMultiline(w PrefixWriter, title string, labels map[string]string) { + printLabelsMultilineWithIndent(w, "", title, "\t", labels, sets.NewString()) +} + +// printLabelsMultiline prints multiple labels with a user-defined alignment. 
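+// A hypothetical call (illustrative sketch, not part of the vendored upstream source):
+//
+//	printLabelsMultilineWithIndent(w, "", "Labels", "\t", map[string]string{"app": "web", "tier": "fe"}, sets.NewString())
+//
+// writes "Labels:" followed by one sorted "key=value" pair per line, with each
+// continuation line re-indented using initialIndent and innerIndent.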
+func printLabelsMultilineWithIndent(w PrefixWriter, initialIndent, title, innerIndent string, labels map[string]string, skip sets.String) {
+	w.Write(LEVEL_0, "%s%s:%s", initialIndent, title, innerIndent)
+
+	if len(labels) == 0 {
+		w.WriteLine("<none>")
+		return
+	}
+
+	// to print labels in the sorted order
+	keys := make([]string, 0, len(labels))
+	for key := range labels {
+		if skip.Has(key) {
+			continue
+		}
+		keys = append(keys, key)
+	}
+	if len(keys) == 0 {
+		w.WriteLine("<none>")
+		return
+	}
+	sort.Strings(keys)
+
+	for i, key := range keys {
+		if i != 0 {
+			w.Write(LEVEL_0, "%s", initialIndent)
+			w.Write(LEVEL_0, "%s", innerIndent)
+		}
+		w.Write(LEVEL_0, "%s=%s\n", key, labels[key])
+	}
+}
+
+// printTaintsMultiline prints multiple taints with a proper alignment.
+func printNodeTaintsMultiline(w PrefixWriter, title string, taints []corev1.Taint) {
+	printTaintsMultilineWithIndent(w, "", title, "\t", taints)
+}
+
+// printTaintsMultilineWithIndent prints multiple taints with a user-defined alignment.
+func printTaintsMultilineWithIndent(w PrefixWriter, initialIndent, title, innerIndent string, taints []corev1.Taint) {
+	w.Write(LEVEL_0, "%s%s:%s", initialIndent, title, innerIndent)
+
+	if len(taints) == 0 {
+		w.WriteLine("<none>")
+		return
+	}
+
+	// to print taints in the sorted order
+	sort.Slice(taints, func(i, j int) bool {
+		cmpKey := func(taint corev1.Taint) string {
+			return string(taint.Effect) + "," + taint.Key
+		}
+		return cmpKey(taints[i]) < cmpKey(taints[j])
+	})
+
+	for i, taint := range taints {
+		if i != 0 {
+			w.Write(LEVEL_0, "%s", initialIndent)
+			w.Write(LEVEL_0, "%s", innerIndent)
+		}
+		w.Write(LEVEL_0, "%s\n", taint.ToString())
+	}
+}
+
+// printPodsMultiline prints multiple pods with a proper alignment.
+func printPodsMultiline(w PrefixWriter, title string, pods []corev1.Pod) {
+	printPodsMultilineWithIndent(w, "", title, "\t", pods)
+}
+
+// printPodsMultilineWithIndent prints multiple pods with a user-defined alignment.
+func printPodsMultilineWithIndent(w PrefixWriter, initialIndent, title, innerIndent string, pods []corev1.Pod) {
+	w.Write(LEVEL_0, "%s%s:%s", initialIndent, title, innerIndent)
+
+	if len(pods) == 0 {
+		w.WriteLine("<none>")
+		return
+	}
+
+	// to print pods in the sorted order
+	sort.Slice(pods, func(i, j int) bool {
+		cmpKey := func(pod corev1.Pod) string {
+			return pod.Name
+		}
+		return cmpKey(pods[i]) < cmpKey(pods[j])
+	})
+
+	for i, pod := range pods {
+		if i != 0 {
+			w.Write(LEVEL_0, "%s", initialIndent)
+			w.Write(LEVEL_0, "%s", innerIndent)
+		}
+		w.Write(LEVEL_0, "%s\n", pod.Name)
+	}
+}
+
+// printPodTolerationsMultiline prints multiple tolerations with a proper alignment.
+func printPodTolerationsMultiline(w PrefixWriter, title string, tolerations []corev1.Toleration) {
+	printTolerationsMultilineWithIndent(w, "", title, "\t", tolerations)
+}
+
+// printTolerationsMultilineWithIndent prints multiple tolerations with a user-defined alignment.
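+// For example (illustrative, not part of the vendored upstream source), a toleration
+// {Key: "node.kubernetes.io/not-ready", Operator: Exists, Effect: NoExecute, TolerationSeconds: 300}
+// is rendered by the function below as:
+//
+//	node.kubernetes.io/not-ready:NoExecute op=Exists for 300s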
+func printTolerationsMultilineWithIndent(w PrefixWriter, initialIndent, title, innerIndent string, tolerations []corev1.Toleration) {
+	w.Write(LEVEL_0, "%s%s:%s", initialIndent, title, innerIndent)
+
+	if len(tolerations) == 0 {
+		w.WriteLine("<none>")
+		return
+	}
+
+	// to print tolerations in the sorted order
+	sort.Slice(tolerations, func(i, j int) bool {
+		return tolerations[i].Key < tolerations[j].Key
+	})
+
+	for i, toleration := range tolerations {
+		if i != 0 {
+			w.Write(LEVEL_0, "%s", initialIndent)
+			w.Write(LEVEL_0, "%s", innerIndent)
+		}
+		w.Write(LEVEL_0, "%s", toleration.Key)
+		if len(toleration.Value) != 0 {
+			w.Write(LEVEL_0, "=%s", toleration.Value)
+		}
+		if len(toleration.Effect) != 0 {
+			w.Write(LEVEL_0, ":%s", toleration.Effect)
+		}
+		// tolerations:
+		// - operator: "Exists"
+		// is a special case which tolerates everything
+		if toleration.Operator == corev1.TolerationOpExists && len(toleration.Value) == 0 {
+			if len(toleration.Key) != 0 || len(toleration.Effect) != 0 {
+				w.Write(LEVEL_0, " op=Exists")
+			} else {
+				w.Write(LEVEL_0, "op=Exists")
+			}
+		}
+
+		if toleration.TolerationSeconds != nil {
+			w.Write(LEVEL_0, " for %ds", *toleration.TolerationSeconds)
+		}
+		w.Write(LEVEL_0, "\n")
+	}
+}
+
+type flusher interface {
+	Flush()
+}
+
+func tabbedString(f func(io.Writer) error) (string, error) {
+	out := new(tabwriter.Writer)
+	buf := &bytes.Buffer{}
+	out.Init(buf, 0, 8, 2, ' ', 0)
+
+	err := f(out)
+	if err != nil {
+		return "", err
+	}
+
+	out.Flush()
+	str := string(buf.String())
+	return str, nil
+}
+
+type SortableResourceNames []corev1.ResourceName
+
+func (list SortableResourceNames) Len() int {
+	return len(list)
+}
+
+func (list SortableResourceNames) Swap(i, j int) {
+	list[i], list[j] = list[j], list[i]
+}
+
+func (list SortableResourceNames) Less(i, j int) bool {
+	return list[i] < list[j]
+}
+
+// SortedResourceNames returns the sorted resource names of a resource list.
+func SortedResourceNames(list corev1.ResourceList) []corev1.ResourceName {
+	resources := make([]corev1.ResourceName, 0, len(list))
+	for res := range list {
+		resources = append(resources, res)
+	}
+	sort.Sort(SortableResourceNames(resources))
+	return resources
+}
+
+type SortableResourceQuotas []corev1.ResourceQuota
+
+func (list SortableResourceQuotas) Len() int {
+	return len(list)
+}
+
+func (list SortableResourceQuotas) Swap(i, j int) {
+	list[i], list[j] = list[j], list[i]
+}
+
+func (list SortableResourceQuotas) Less(i, j int) bool {
+	return list[i].Name < list[j].Name
+}
+
+type SortableVolumeMounts []corev1.VolumeMount
+
+func (list SortableVolumeMounts) Len() int {
+	return len(list)
+}
+
+func (list SortableVolumeMounts) Swap(i, j int) {
+	list[i], list[j] = list[j], list[i]
+}
+
+func (list SortableVolumeMounts) Less(i, j int) bool {
+	return list[i].MountPath < list[j].MountPath
+}
+
+type SortableVolumeDevices []corev1.VolumeDevice
+
+func (list SortableVolumeDevices) Len() int {
+	return len(list)
+}
+
+func (list SortableVolumeDevices) Swap(i, j int) {
+	list[i], list[j] = list[j], list[i]
+}
+
+func (list SortableVolumeDevices) Less(i, j int) bool {
+	return list[i].DevicePath < list[j].DevicePath
+}
+
+var maxAnnotationLen = 140
+
+// printAnnotationsMultiline prints multiple annotations with a proper alignment.
+// If annotation string is too long, we omit chars more than 200 length.
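+// For example (illustrative, not part of the vendored upstream source): with
+// maxAnnotationLen = 140, a 300-character annotation value is printed on its own
+// indented line and clipped by shorten, i.e. shorten("abcdef", 3) == "abc...".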
+func printAnnotationsMultiline(w PrefixWriter, title string, annotations map[string]string) {
+	w.Write(LEVEL_0, "%s:\t", title)
+
+	// to print labels in the sorted order
+	keys := make([]string, 0, len(annotations))
+	for key := range annotations {
+		if skipAnnotations.Has(key) {
+			continue
+		}
+		keys = append(keys, key)
+	}
+	if len(keys) == 0 {
+		w.WriteLine("<none>")
+		return
+	}
+	sort.Strings(keys)
+	indent := "\t"
+	for i, key := range keys {
+		if i != 0 {
+			w.Write(LEVEL_0, indent)
+		}
+		value := strings.TrimSuffix(annotations[key], "\n")
+		if (len(value)+len(key)+2) > maxAnnotationLen || strings.Contains(value, "\n") {
+			w.Write(LEVEL_0, "%s:\n", key)
+			for _, s := range strings.Split(value, "\n") {
+				w.Write(LEVEL_0, "%s %s\n", indent, shorten(s, maxAnnotationLen-2))
+			}
+		} else {
+			w.Write(LEVEL_0, "%s: %s\n", key, value)
+		}
+	}
+}
+
+func shorten(s string, maxLength int) string {
+	if len(s) > maxLength {
+		return s[:maxLength] + "..."
+	}
+	return s
+}
+
+// translateMicroTimestampSince returns the elapsed time since timestamp in
+// human-readable approximation.
+func translateMicroTimestampSince(timestamp metav1.MicroTime) string {
+	if timestamp.IsZero() {
+		return "<unknown>"
+	}
+
+	return duration.HumanDuration(time.Since(timestamp.Time))
+}
+
+// translateTimestampSince returns the elapsed time since timestamp in
+// human-readable approximation.
+func translateTimestampSince(timestamp metav1.Time) string {
+	if timestamp.IsZero() {
+		return "<unknown>"
+	}
+
+	return duration.HumanDuration(time.Since(timestamp.Time))
+}
+
+// Pass ports=nil for all ports.
+func formatEndpoints(endpoints *corev1.Endpoints, ports sets.String) string {
+	if len(endpoints.Subsets) == 0 {
+		return "<none>"
+	}
+	list := []string{}
+	max := 3
+	more := false
+	count := 0
+	for i := range endpoints.Subsets {
+		ss := &endpoints.Subsets[i]
+		if len(ss.Ports) == 0 {
+			// It's possible to have headless services with no ports.
+			for i := range ss.Addresses {
+				if len(list) == max {
+					more = true
+				}
+				if !more {
+					list = append(list, ss.Addresses[i].IP)
+				}
+				count++
+			}
+		} else {
+			// "Normal" services with ports defined.
+			for i := range ss.Ports {
+				port := &ss.Ports[i]
+				if ports == nil || ports.Has(port.Name) {
+					for i := range ss.Addresses {
+						if len(list) == max {
+							more = true
+						}
+						addr := &ss.Addresses[i]
+						if !more {
+							hostPort := net.JoinHostPort(addr.IP, strconv.Itoa(int(port.Port)))
+							list = append(list, hostPort)
+						}
+						count++
+					}
+				}
+			}
+		}
+	}
+	ret := strings.Join(list, ",")
+	if more {
+		return fmt.Sprintf("%s + %d more...", ret, count-max)
+	}
+	return ret
+}
+
+func extractCSRStatus(conditions []string, certificateBytes []byte) string {
+	var approved, denied, failed bool
+	for _, c := range conditions {
+		switch c {
+		case string(certificatesv1beta1.CertificateApproved):
+			approved = true
+		case string(certificatesv1beta1.CertificateDenied):
+			denied = true
+		case string(certificatesv1beta1.CertificateFailed):
+			failed = true
+		}
+	}
+	var status string
+	// must be in order of precedence
+	if denied {
+		status += "Denied"
+	} else if approved {
+		status += "Approved"
+	} else {
+		status += "Pending"
+	}
+	if failed {
+		status += ",Failed"
+	}
+	if len(certificateBytes) > 0 {
+		status += ",Issued"
+	}
+	return status
+}
+
+// backendStringer behaves just like a string interface and converts the given backend to a string.
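+// For example (illustrative, not part of the vendored upstream source): a backend
+// {Name: "web", Port: {Number: 8080}} renders as "web:8080", while a named port
+// {Name: "web", Port: {Name: "http"}} renders as "web:http".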
+func serviceBackendStringer(backend *networkingv1.IngressServiceBackend) string {
+	if backend == nil {
+		return ""
+	}
+	var bPort string
+	if backend.Port.Number != 0 {
+		sNum := int64(backend.Port.Number)
+		bPort = strconv.FormatInt(sNum, 10)
+	} else {
+		bPort = backend.Port.Name
+	}
+	return fmt.Sprintf("%v:%v", backend.Name, bPort)
+}
+
+// backendStringer behaves just like a string interface and converts the given backend to a string.
+func backendStringer(backend *networkingv1beta1.IngressBackend) string {
+	if backend == nil {
+		return ""
+	}
+	return fmt.Sprintf("%v:%v", backend.ServiceName, backend.ServicePort.String())
+}
+
+// findNodeRoles returns the roles of a given node.
+// The roles are determined by looking for:
+// * a node-role.kubernetes.io/<role>="" label
+// * a kubernetes.io/role="<role>" label
+func findNodeRoles(node *corev1.Node) []string {
+	roles := sets.NewString()
+	for k, v := range node.Labels {
+		switch {
+		case strings.HasPrefix(k, LabelNodeRolePrefix):
+			if role := strings.TrimPrefix(k, LabelNodeRolePrefix); len(role) > 0 {
+				roles.Insert(role)
+			}
+
+		case k == NodeLabelRole && v != "":
+			roles.Insert(v)
+		}
+	}
+	return roles.List()
+}
+
+// ingressLoadBalancerStatusStringerV1 behaves mostly like a string interface and converts the given status to a string.
+// `wide` indicates whether the returned value is meant for --o=wide output. If not, it's clipped to 16 bytes.
+func ingressLoadBalancerStatusStringerV1(s networkingv1.IngressLoadBalancerStatus, wide bool) string {
+	ingress := s.Ingress
+	result := sets.NewString()
+	for i := range ingress {
+		if ingress[i].IP != "" {
+			result.Insert(ingress[i].IP)
+		} else if ingress[i].Hostname != "" {
+			result.Insert(ingress[i].Hostname)
+		}
+	}
+
+	r := strings.Join(result.List(), ",")
+	if !wide && len(r) > LoadBalancerWidth {
+		r = r[0:(LoadBalancerWidth-3)] + "..."
+	}
+	return r
+}
+
+// ingressLoadBalancerStatusStringerV1beta1 behaves mostly like a string interface and converts the given status to a string.
+// `wide` indicates whether the returned value is meant for --o=wide output. If not, it's clipped to 16 bytes.
+func ingressLoadBalancerStatusStringerV1beta1(s networkingv1beta1.IngressLoadBalancerStatus, wide bool) string {
+	ingress := s.Ingress
+	result := sets.NewString()
+	for i := range ingress {
+		if ingress[i].IP != "" {
+			result.Insert(ingress[i].IP)
+		} else if ingress[i].Hostname != "" {
+			result.Insert(ingress[i].Hostname)
+		}
+	}
+
+	r := strings.Join(result.List(), ",")
+	if !wide && len(r) > LoadBalancerWidth {
+		r = r[0:(LoadBalancerWidth-3)] + "..."
+	}
+	return r
+}
+
+// searchEvents finds events about the specified object.
+// It is very similar to CoreV1.Events.Search, but supports the Limit parameter.
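+// A hypothetical call (illustrative sketch; the pod variable and chunk size are assumptions):
+//
+//	events, err := searchEvents(clientset.CoreV1(), pod, 500)
+//
+// pages through the events referencing pod, following Continue tokens between
+// chunked List calls.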
+func searchEvents(client corev1client.EventsGetter, objOrRef runtime.Object, limit int64) (*corev1.EventList, error) { + ref, err := reference.GetReference(scheme.Scheme, objOrRef) + if err != nil { + return nil, err + } + stringRefKind := string(ref.Kind) + var refKind *string + if len(stringRefKind) > 0 { + refKind = &stringRefKind + } + stringRefUID := string(ref.UID) + var refUID *string + if len(stringRefUID) > 0 { + refUID = &stringRefUID + } + + e := client.Events(ref.Namespace) + fieldSelector := e.GetFieldSelector(&ref.Name, &ref.Namespace, refKind, refUID) + initialOpts := metav1.ListOptions{FieldSelector: fieldSelector.String(), Limit: limit} + eventList := &corev1.EventList{} + err = runtimeresource.FollowContinue(&initialOpts, + func(options metav1.ListOptions) (runtime.Object, error) { + newEvents, err := e.List(context.TODO(), options) + if err != nil { + return nil, runtimeresource.EnhanceListError(err, options, "events") + } + eventList.Items = append(eventList.Items, newEvents.Items...) + return newEvents, nil + }) + return eventList, err +} diff --git a/vendor/k8s.io/kubectl/pkg/describe/interface.go b/vendor/k8s.io/kubectl/pkg/describe/interface.go new file mode 100644 index 000000000..180821e1c --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/describe/interface.go @@ -0,0 +1,72 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package describe + +import ( + "fmt" + "k8s.io/apimachinery/pkg/api/meta" + "k8s.io/cli-runtime/pkg/genericclioptions" +) + +const ( + // LoadBalancerWidth is the width how we describe load balancer + LoadBalancerWidth = 16 + + // LabelNodeRolePrefix is a label prefix for node roles + // It's copied over to here until it's merged in core: https://github.com/kubernetes/kubernetes/pull/39112 + LabelNodeRolePrefix = "node-role.kubernetes.io/" + + // NodeLabelRole specifies the role of a node + NodeLabelRole = "kubernetes.io/role" +) + +// DescriberFunc gives a way to display the specified RESTMapping type +type DescriberFunc func(restClientGetter genericclioptions.RESTClientGetter, mapping *meta.RESTMapping) (ResourceDescriber, error) + +// ResourceDescriber generates output for the named resource or an error +// if the output could not be generated. Implementers typically +// abstract the retrieval of the named object from a remote server. +type ResourceDescriber interface { + Describe(namespace, name string, describerSettings DescriberSettings) (output string, err error) +} + +// DescriberSettings holds display configuration for each object +// describer to control what is printed. +type DescriberSettings struct { + ShowEvents bool + ChunkSize int64 +} + +// ObjectDescriber is an interface for displaying arbitrary objects with extra +// information. Use when an object is in hand (on disk, or already retrieved). +// Implementers may ignore the additional information passed on extra, or use it +// by default. ObjectDescribers may return ErrNoDescriber if no suitable describer +// is found. 
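+//
+// A hypothetical registry built on the concrete Describers type (illustrative
+// sketch; pod is an assumed *corev1.Pod):
+//
+//	var d Describers
+//	_ = d.Add(func(p *corev1.Pod) (string, error) { return p.Name, nil })
+//	out, err := d.DescribeObject(pod)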
+type ObjectDescriber interface { + DescribeObject(object interface{}, extra ...interface{}) (output string, err error) +} + +// ErrNoDescriber is a structured error indicating the provided object or objects +// cannot be described. +type ErrNoDescriber struct { + Types []string +} + +// Error implements the error interface. +func (e ErrNoDescriber) Error() string { + return fmt.Sprintf("no describer has been defined for %v", e.Types) +} diff --git a/vendor/k8s.io/kubectl/pkg/polymorphichelpers/attachablepodforobject.go b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/attachablepodforobject.go new file mode 100644 index 000000000..7b18111fd --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/attachablepodforobject.go @@ -0,0 +1,54 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package polymorphichelpers + +import ( + "fmt" + "sort" + "time" + + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/cli-runtime/pkg/genericclioptions" + corev1client "k8s.io/client-go/kubernetes/typed/core/v1" + "k8s.io/kubectl/pkg/util/podutils" +) + +// attachablePodForObject returns the pod to which to attach given an object. +func attachablePodForObject(restClientGetter genericclioptions.RESTClientGetter, object runtime.Object, timeout time.Duration) (*corev1.Pod, error) { + switch t := object.(type) { + case *corev1.Pod: + return t, nil + } + + clientConfig, err := restClientGetter.ToRESTConfig() + if err != nil { + return nil, err + } + clientset, err := corev1client.NewForConfig(clientConfig) + if err != nil { + return nil, err + } + + namespace, selector, err := SelectorsForObject(object) + if err != nil { + return nil, fmt.Errorf("cannot attach to %T: %v", object, err) + } + sortBy := func(pods []*corev1.Pod) sort.Interface { return sort.Reverse(podutils.ActivePods(pods)) } + pod, _, err := GetFirstPod(clientset, namespace, selector.String(), timeout, sortBy) + return pod, err +} diff --git a/vendor/k8s.io/kubectl/pkg/polymorphichelpers/canbeexposed.go b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/canbeexposed.go new file mode 100644 index 000000000..b232ff853 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/canbeexposed.go @@ -0,0 +1,44 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package polymorphichelpers + +import ( + "fmt" + + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + extensionsv1beta1 "k8s.io/api/extensions/v1beta1" + "k8s.io/apimachinery/pkg/runtime/schema" +) + +// Check whether the kind of resources could be exposed +func canBeExposed(kind schema.GroupKind) error { + switch kind { + case + corev1.SchemeGroupVersion.WithKind("ReplicationController").GroupKind(), + corev1.SchemeGroupVersion.WithKind("Service").GroupKind(), + corev1.SchemeGroupVersion.WithKind("Pod").GroupKind(), + appsv1.SchemeGroupVersion.WithKind("Deployment").GroupKind(), + appsv1.SchemeGroupVersion.WithKind("ReplicaSet").GroupKind(), + extensionsv1beta1.SchemeGroupVersion.WithKind("Deployment").GroupKind(), + extensionsv1beta1.SchemeGroupVersion.WithKind("ReplicaSet").GroupKind(): + // nothing to do here + default: + return fmt.Errorf("cannot expose a %s", kind) + } + return nil +} diff --git a/vendor/k8s.io/kubectl/pkg/polymorphichelpers/helpers.go b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/helpers.go new file mode 100644 index 000000000..762d953f2 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/helpers.go @@ -0,0 +1,191 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package polymorphichelpers + +import ( + "context" + "fmt" + "sort" + "time" + + appsv1 "k8s.io/api/apps/v1" + appsv1beta1 "k8s.io/api/apps/v1beta1" + appsv1beta2 "k8s.io/api/apps/v1beta2" + batchv1 "k8s.io/api/batch/v1" + corev1 "k8s.io/api/core/v1" + extensionsv1beta1 "k8s.io/api/extensions/v1beta1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/labels" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/apimachinery/pkg/watch" + coreclient "k8s.io/client-go/kubernetes/typed/core/v1" + watchtools "k8s.io/client-go/tools/watch" +) + +// GetFirstPod returns a pod matching the namespace and label selector +// and the number of all pods that match the label selector. 
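+// A hypothetical call (illustrative sketch; client, namespace and selector are assumptions):
+//
+//	sortBy := func(pods []*corev1.Pod) sort.Interface { return podutils.ActivePods(pods) }
+//	pod, count, err := GetFirstPod(client, "default", "app=web", 30*time.Second, sortBy)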
+func GetFirstPod(client coreclient.PodsGetter, namespace string, selector string, timeout time.Duration, sortBy func([]*corev1.Pod) sort.Interface) (*corev1.Pod, int, error) { + options := metav1.ListOptions{LabelSelector: selector} + + podList, err := client.Pods(namespace).List(context.TODO(), options) + if err != nil { + return nil, 0, err + } + pods := []*corev1.Pod{} + for i := range podList.Items { + pod := podList.Items[i] + pods = append(pods, &pod) + } + if len(pods) > 0 { + sort.Sort(sortBy(pods)) + return pods[0], len(podList.Items), nil + } + + // Watch until we observe a pod + options.ResourceVersion = podList.ResourceVersion + w, err := client.Pods(namespace).Watch(context.TODO(), options) + if err != nil { + return nil, 0, err + } + defer w.Stop() + + condition := func(event watch.Event) (bool, error) { + return event.Type == watch.Added || event.Type == watch.Modified, nil + } + + ctx, cancel := watchtools.ContextWithOptionalTimeout(context.Background(), timeout) + defer cancel() + event, err := watchtools.UntilWithoutRetry(ctx, w, condition) + if err != nil { + return nil, 0, err + } + pod, ok := event.Object.(*corev1.Pod) + if !ok { + return nil, 0, fmt.Errorf("%#v is not a pod event", event) + } + return pod, 1, nil +} + +// SelectorsForObject returns the pod label selector for a given object +func SelectorsForObject(object runtime.Object) (namespace string, selector labels.Selector, err error) { + switch t := object.(type) { + case *extensionsv1beta1.ReplicaSet: + namespace = t.Namespace + selector, err = metav1.LabelSelectorAsSelector(t.Spec.Selector) + if err != nil { + return "", nil, fmt.Errorf("invalid label selector: %v", err) + } + case *appsv1.ReplicaSet: + namespace = t.Namespace + selector, err = metav1.LabelSelectorAsSelector(t.Spec.Selector) + if err != nil { + return "", nil, fmt.Errorf("invalid label selector: %v", err) + } + case *appsv1beta2.ReplicaSet: + namespace = t.Namespace + selector, err = metav1.LabelSelectorAsSelector(t.Spec.Selector) + if err != nil { + return "", nil, fmt.Errorf("invalid label selector: %v", err) + } + + case *corev1.ReplicationController: + namespace = t.Namespace + selector = labels.SelectorFromSet(t.Spec.Selector) + + case *appsv1.StatefulSet: + namespace = t.Namespace + selector, err = metav1.LabelSelectorAsSelector(t.Spec.Selector) + if err != nil { + return "", nil, fmt.Errorf("invalid label selector: %v", err) + } + case *appsv1beta1.StatefulSet: + namespace = t.Namespace + selector, err = metav1.LabelSelectorAsSelector(t.Spec.Selector) + if err != nil { + return "", nil, fmt.Errorf("invalid label selector: %v", err) + } + case *appsv1beta2.StatefulSet: + namespace = t.Namespace + selector, err = metav1.LabelSelectorAsSelector(t.Spec.Selector) + if err != nil { + return "", nil, fmt.Errorf("invalid label selector: %v", err) + } + + case *extensionsv1beta1.DaemonSet: + namespace = t.Namespace + selector, err = metav1.LabelSelectorAsSelector(t.Spec.Selector) + if err != nil { + return "", nil, fmt.Errorf("invalid label selector: %v", err) + } + case *appsv1.DaemonSet: + namespace = t.Namespace + selector, err = metav1.LabelSelectorAsSelector(t.Spec.Selector) + if err != nil { + return "", nil, fmt.Errorf("invalid label selector: %v", err) + } + case *appsv1beta2.DaemonSet: + namespace = t.Namespace + selector, err = metav1.LabelSelectorAsSelector(t.Spec.Selector) + if err != nil { + return "", nil, fmt.Errorf("invalid label selector: %v", err) + } + + case *extensionsv1beta1.Deployment: + namespace = t.Namespace + 
selector, err = metav1.LabelSelectorAsSelector(t.Spec.Selector) + if err != nil { + return "", nil, fmt.Errorf("invalid label selector: %v", err) + } + case *appsv1.Deployment: + namespace = t.Namespace + selector, err = metav1.LabelSelectorAsSelector(t.Spec.Selector) + if err != nil { + return "", nil, fmt.Errorf("invalid label selector: %v", err) + } + case *appsv1beta1.Deployment: + namespace = t.Namespace + selector, err = metav1.LabelSelectorAsSelector(t.Spec.Selector) + if err != nil { + return "", nil, fmt.Errorf("invalid label selector: %v", err) + } + case *appsv1beta2.Deployment: + namespace = t.Namespace + selector, err = metav1.LabelSelectorAsSelector(t.Spec.Selector) + if err != nil { + return "", nil, fmt.Errorf("invalid label selector: %v", err) + } + + case *batchv1.Job: + namespace = t.Namespace + selector, err = metav1.LabelSelectorAsSelector(t.Spec.Selector) + if err != nil { + return "", nil, fmt.Errorf("invalid label selector: %v", err) + } + + case *corev1.Service: + namespace = t.Namespace + if t.Spec.Selector == nil || len(t.Spec.Selector) == 0 { + return "", nil, fmt.Errorf("invalid service '%s': Service is defined without a selector", t.Name) + } + selector = labels.SelectorFromSet(t.Spec.Selector) + + default: + return "", nil, fmt.Errorf("selector for %T not implemented", object) + } + + return namespace, selector, nil +} diff --git a/vendor/k8s.io/kubectl/pkg/polymorphichelpers/history.go b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/history.go new file mode 100644 index 000000000..20afbe985 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/history.go @@ -0,0 +1,478 @@ +/* +Copyright 2016 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package polymorphichelpers + +import ( + "bytes" + "context" + "fmt" + "io" + "text/tabwriter" + + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/labels" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/apimachinery/pkg/runtime/schema" + "k8s.io/apimachinery/pkg/util/json" + "k8s.io/apimachinery/pkg/util/strategicpatch" + "k8s.io/client-go/kubernetes" + clientappsv1 "k8s.io/client-go/kubernetes/typed/apps/v1" + "k8s.io/klog/v2" + "k8s.io/kubectl/pkg/apps" + "k8s.io/kubectl/pkg/describe" + deploymentutil "k8s.io/kubectl/pkg/util/deployment" + sliceutil "k8s.io/kubectl/pkg/util/slice" +) + +const ( + ChangeCauseAnnotation = "kubernetes.io/change-cause" +) + +// HistoryViewer provides an interface for resources have historical information. 
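+// A typical flow (illustrative sketch; clientset and the resource names are assumptions):
+//
+//	viewer, err := HistoryViewerFor(schema.GroupKind{Group: "apps", Kind: "Deployment"}, clientset)
+//	if err == nil {
+//		out, _ := viewer.ViewHistory("default", "web", 2)
+//		fmt.Print(out)
+//	}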
+type HistoryViewer interface {
+	ViewHistory(namespace, name string, revision int64) (string, error)
+	GetHistory(namespace, name string) (map[int64]runtime.Object, error)
+}
+
+type HistoryVisitor struct {
+	clientset kubernetes.Interface
+	result    HistoryViewer
+}
+
+func (v *HistoryVisitor) VisitDeployment(elem apps.GroupKindElement) {
+	v.result = &DeploymentHistoryViewer{v.clientset}
+}
+
+func (v *HistoryVisitor) VisitStatefulSet(kind apps.GroupKindElement) {
+	v.result = &StatefulSetHistoryViewer{v.clientset}
+}
+
+func (v *HistoryVisitor) VisitDaemonSet(kind apps.GroupKindElement) {
+	v.result = &DaemonSetHistoryViewer{v.clientset}
+}
+
+func (v *HistoryVisitor) VisitJob(kind apps.GroupKindElement)                   {}
+func (v *HistoryVisitor) VisitPod(kind apps.GroupKindElement)                   {}
+func (v *HistoryVisitor) VisitReplicaSet(kind apps.GroupKindElement)            {}
+func (v *HistoryVisitor) VisitReplicationController(kind apps.GroupKindElement) {}
+func (v *HistoryVisitor) VisitCronJob(kind apps.GroupKindElement)               {}
+
+// HistoryViewerFor returns an implementation of HistoryViewer interface for the given schema kind
+func HistoryViewerFor(kind schema.GroupKind, c kubernetes.Interface) (HistoryViewer, error) {
+	elem := apps.GroupKindElement(kind)
+	visitor := &HistoryVisitor{
+		clientset: c,
+	}
+
+	// Determine which HistoryViewer we need here
+	err := elem.Accept(visitor)
+
+	if err != nil {
+		return nil, fmt.Errorf("error retrieving history for %q, %v", kind.String(), err)
+	}
+
+	if visitor.result == nil {
+		return nil, fmt.Errorf("no history viewer has been implemented for %q", kind.String())
+	}
+
+	return visitor.result, nil
+}
+
+type DeploymentHistoryViewer struct {
+	c kubernetes.Interface
+}
+
+// ViewHistory returns a revision-to-replicaset map as the revision history of a deployment
+// TODO: this should be a describer
+func (h *DeploymentHistoryViewer) ViewHistory(namespace, name string, revision int64) (string, error) {
+	allRSs, err := getDeploymentReplicaSets(h.c.AppsV1(), namespace, name)
+	if err != nil {
+		return "", err
+	}
+
+	historyInfo := make(map[int64]*corev1.PodTemplateSpec)
+	for _, rs := range allRSs {
+		v, err := deploymentutil.Revision(rs)
+		if err != nil {
+			klog.Warningf("unable to get revision from replicaset %s for deployment %s in namespace %s: %v", rs.Name, name, namespace, err)
+			continue
+		}
+		historyInfo[v] = &rs.Spec.Template
+		changeCause := getChangeCause(rs)
+		if historyInfo[v].Annotations == nil {
+			historyInfo[v].Annotations = make(map[string]string)
+		}
+		if len(changeCause) > 0 {
+			historyInfo[v].Annotations[ChangeCauseAnnotation] = changeCause
+		}
+	}
+
+	if len(historyInfo) == 0 {
+		return "No rollout history found.", nil
+	}
+
+	if revision > 0 {
+		// Print details of a specific revision
+		template, ok := historyInfo[revision]
+		if !ok {
+			return "", fmt.Errorf("unable to find the specified revision")
+		}
+		return printTemplate(template)
+	}
+
+	// Sort the revisionToChangeCause map by revision
+	revisions := make([]int64, 0, len(historyInfo))
+	for r := range historyInfo {
+		revisions = append(revisions, r)
+	}
+	sliceutil.SortInts64(revisions)
+
+	return tabbedString(func(out io.Writer) error {
+		fmt.Fprintf(out, "REVISION\tCHANGE-CAUSE\n")
+		for _, r := range revisions {
+			// Find the change-cause of revision r
+			changeCause := historyInfo[r].Annotations[ChangeCauseAnnotation]
+			if len(changeCause) == 0 {
+				changeCause = "<none>"
+			}
+			fmt.Fprintf(out, "%d\t%s\n", r, changeCause)
+		}
+		return nil
+	})
+}
+
+// GetHistory returns the ReplicaSet revisions associated with a Deployment
+func (h *DeploymentHistoryViewer) GetHistory(namespace, name string) (map[int64]runtime.Object, error) {
+	allRSs, err := getDeploymentReplicaSets(h.c.AppsV1(), namespace, name)
+	if err != nil {
+		return nil, err
+	}
+
+	result := make(map[int64]runtime.Object)
+	for _, rs := range allRSs {
+		v, err := deploymentutil.Revision(rs)
+		if err != nil {
+			klog.Warningf("unable to get revision from replicaset %s for deployment %s in namespace %s: %v", rs.Name, name, namespace, err)
+			continue
+		}
+		result[v] = rs
+	}
+
+	return result, nil
+}
+
+func printTemplate(template *corev1.PodTemplateSpec) (string, error) {
+	buf := bytes.NewBuffer([]byte{})
+	w := describe.NewPrefixWriter(buf)
+	describe.DescribePodTemplate(template, w)
+	return buf.String(), nil
+}
+
+type DaemonSetHistoryViewer struct {
+	c kubernetes.Interface
+}
+
+// ViewHistory returns a revision-to-history map as the revision history of a deployment
+// TODO: this should be a describer
+func (h *DaemonSetHistoryViewer) ViewHistory(namespace, name string, revision int64) (string, error) {
+	ds, history, err := daemonSetHistory(h.c.AppsV1(), namespace, name)
+	if err != nil {
+		return "", err
+	}
+	return printHistory(history, revision, func(history *appsv1.ControllerRevision) (*corev1.PodTemplateSpec, error) {
+		dsOfHistory, err := applyDaemonSetHistory(ds, history)
+		if err != nil {
+			return nil, err
+		}
+		return &dsOfHistory.Spec.Template, err
+	})
+}
+
+// GetHistory returns the revisions associated with a DaemonSet
+func (h *DaemonSetHistoryViewer) GetHistory(namespace, name string) (map[int64]runtime.Object, error) {
+	ds, history, err := daemonSetHistory(h.c.AppsV1(), namespace, name)
+	if err != nil {
+		return nil, err
+	}
+
+	result := make(map[int64]runtime.Object)
+	for _, h := range history {
+		applied, err := applyDaemonSetHistory(ds, h)
+		if err != nil {
+			return nil, err
+		}
+		result[h.Revision] = applied
+	}
+
+	return result, nil
+}
+
+// printHistory returns the podTemplate of the given revision if it is non-zero
+// else returns the overall revisions
+func printHistory(history []*appsv1.ControllerRevision, revision int64, getPodTemplate func(history *appsv1.ControllerRevision) (*corev1.PodTemplateSpec, error)) (string, error) {
+	historyInfo := make(map[int64]*appsv1.ControllerRevision)
+	for _, history := range history {
+		// TODO: for now we assume revisions don't overlap, we may need to handle it
+		historyInfo[history.Revision] = history
+	}
+	if len(historyInfo) == 0 {
+		return "No rollout history found.", nil
+	}
+
+	// Print details of a specific revision
+	if revision > 0 {
+		history, ok := historyInfo[revision]
+		if !ok {
+			return "", fmt.Errorf("unable to find the specified revision")
+		}
+		podTemplate, err := getPodTemplate(history)
+		if err != nil {
+			return "", fmt.Errorf("unable to parse history %s", history.Name)
+		}
+		return printTemplate(podTemplate)
+	}
+
+	// Print an overview of all Revisions
+	// Sort the revisionToChangeCause map by revision
+	revisions := make([]int64, 0, len(historyInfo))
+	for r := range historyInfo {
+		revisions = append(revisions, r)
+	}
+	sliceutil.SortInts64(revisions)
+
+	return tabbedString(func(out io.Writer) error {
+		fmt.Fprintf(out, "REVISION\tCHANGE-CAUSE\n")
+		for _, r := range revisions {
+			// Find the change-cause of revision r
+			changeCause := historyInfo[r].Annotations[ChangeCauseAnnotation]
+			if len(changeCause) == 0 {
+				changeCause = "<none>"
+			}
+			fmt.Fprintf(out, "%d\t%s\n", r, changeCause)
+		}
+		return nil
+	})
+}
+
+type 
StatefulSetHistoryViewer struct { + c kubernetes.Interface +} + +// ViewHistory returns a list of the revision history of a statefulset +// TODO: this should be a describer +func (h *StatefulSetHistoryViewer) ViewHistory(namespace, name string, revision int64) (string, error) { + sts, history, err := statefulSetHistory(h.c.AppsV1(), namespace, name) + if err != nil { + return "", err + } + return printHistory(history, revision, func(history *appsv1.ControllerRevision) (*corev1.PodTemplateSpec, error) { + stsOfHistory, err := applyStatefulSetHistory(sts, history) + if err != nil { + return nil, err + } + return &stsOfHistory.Spec.Template, err + }) +} + +// GetHistory returns the revisions associated with a StatefulSet +func (h *StatefulSetHistoryViewer) GetHistory(namespace, name string) (map[int64]runtime.Object, error) { + sts, history, err := statefulSetHistory(h.c.AppsV1(), namespace, name) + if err != nil { + return nil, err + } + + result := make(map[int64]runtime.Object) + for _, h := range history { + applied, err := applyStatefulSetHistory(sts, h) + if err != nil { + return nil, err + } + result[h.Revision] = applied + } + + return result, nil +} + +func getDeploymentReplicaSets(apps clientappsv1.AppsV1Interface, namespace, name string) ([]*appsv1.ReplicaSet, error) { + deployment, err := apps.Deployments(namespace).Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return nil, fmt.Errorf("failed to retrieve deployment %s: %v", name, err) + } + + _, oldRSs, newRS, err := deploymentutil.GetAllReplicaSets(deployment, apps) + if err != nil { + return nil, fmt.Errorf("failed to retrieve replica sets from deployment %s: %v", name, err) + } + + if newRS == nil { + return oldRSs, nil + } + return append(oldRSs, newRS), nil +} + +// controlledHistories returns all ControllerRevisions in namespace that selected by selector and owned by accessor +// TODO: Rename this to controllerHistory when other controllers have been upgraded +func controlledHistoryV1( + apps clientappsv1.AppsV1Interface, + namespace string, + selector labels.Selector, + accessor metav1.Object) ([]*appsv1.ControllerRevision, error) { + var result []*appsv1.ControllerRevision + historyList, err := apps.ControllerRevisions(namespace).List(context.TODO(), metav1.ListOptions{LabelSelector: selector.String()}) + if err != nil { + return nil, err + } + for i := range historyList.Items { + history := historyList.Items[i] + // Only add history that belongs to the API object + if metav1.IsControlledBy(&history, accessor) { + result = append(result, &history) + } + } + return result, nil +} + +// controlledHistories returns all ControllerRevisions in namespace that selected by selector and owned by accessor +func controlledHistory( + apps clientappsv1.AppsV1Interface, + namespace string, + selector labels.Selector, + accessor metav1.Object) ([]*appsv1.ControllerRevision, error) { + var result []*appsv1.ControllerRevision + historyList, err := apps.ControllerRevisions(namespace).List(context.TODO(), metav1.ListOptions{LabelSelector: selector.String()}) + if err != nil { + return nil, err + } + for i := range historyList.Items { + history := historyList.Items[i] + // Only add history that belongs to the API object + if metav1.IsControlledBy(&history, accessor) { + result = append(result, &history) + } + } + return result, nil +} + +// daemonSetHistory returns the DaemonSet named name in namespace and all ControllerRevisions in its history. 
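+// A hypothetical call (illustrative sketch; namespace and name are assumptions):
+//
+//	ds, revisions, err := daemonSetHistory(clientset.AppsV1(), "kube-system", "kube-proxy")
+//
+// returns the live DaemonSet together with the ControllerRevisions it owns, which
+// applyDaemonSetHistory below can replay into historical DaemonSet snapshots.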
+func daemonSetHistory( + apps clientappsv1.AppsV1Interface, + namespace, name string) (*appsv1.DaemonSet, []*appsv1.ControllerRevision, error) { + ds, err := apps.DaemonSets(namespace).Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return nil, nil, fmt.Errorf("failed to retrieve DaemonSet %s: %v", name, err) + } + selector, err := metav1.LabelSelectorAsSelector(ds.Spec.Selector) + if err != nil { + return nil, nil, fmt.Errorf("failed to create selector for DaemonSet %s: %v", ds.Name, err) + } + accessor, err := meta.Accessor(ds) + if err != nil { + return nil, nil, fmt.Errorf("failed to create accessor for DaemonSet %s: %v", ds.Name, err) + } + history, err := controlledHistory(apps, ds.Namespace, selector, accessor) + if err != nil { + return nil, nil, fmt.Errorf("unable to find history controlled by DaemonSet %s: %v", ds.Name, err) + } + return ds, history, nil +} + +// statefulSetHistory returns the StatefulSet named name in namespace and all ControllerRevisions in its history. +func statefulSetHistory( + apps clientappsv1.AppsV1Interface, + namespace, name string) (*appsv1.StatefulSet, []*appsv1.ControllerRevision, error) { + sts, err := apps.StatefulSets(namespace).Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return nil, nil, fmt.Errorf("failed to retrieve Statefulset %s: %s", name, err.Error()) + } + selector, err := metav1.LabelSelectorAsSelector(sts.Spec.Selector) + if err != nil { + return nil, nil, fmt.Errorf("failed to create selector for StatefulSet %s: %s", name, err.Error()) + } + accessor, err := meta.Accessor(sts) + if err != nil { + return nil, nil, fmt.Errorf("failed to obtain accessor for StatefulSet %s: %s", name, err.Error()) + } + history, err := controlledHistoryV1(apps, namespace, selector, accessor) + if err != nil { + return nil, nil, fmt.Errorf("unable to find history controlled by StatefulSet %s: %v", name, err) + } + return sts, history, nil +} + +// applyDaemonSetHistory returns a specific revision of DaemonSet by applying the given history to a copy of the given DaemonSet +func applyDaemonSetHistory(ds *appsv1.DaemonSet, history *appsv1.ControllerRevision) (*appsv1.DaemonSet, error) { + dsBytes, err := json.Marshal(ds) + if err != nil { + return nil, err + } + patched, err := strategicpatch.StrategicMergePatch(dsBytes, history.Data.Raw, ds) + if err != nil { + return nil, err + } + result := &appsv1.DaemonSet{} + err = json.Unmarshal(patched, result) + if err != nil { + return nil, err + } + return result, nil +} + +// applyStatefulSetHistory returns a specific revision of StatefulSet by applying the given history to a copy of the given StatefulSet +func applyStatefulSetHistory(sts *appsv1.StatefulSet, history *appsv1.ControllerRevision) (*appsv1.StatefulSet, error) { + stsBytes, err := json.Marshal(sts) + if err != nil { + return nil, err + } + patched, err := strategicpatch.StrategicMergePatch(stsBytes, history.Data.Raw, sts) + if err != nil { + return nil, err + } + result := &appsv1.StatefulSet{} + err = json.Unmarshal(patched, result) + if err != nil { + return nil, err + } + return result, nil +} + +// TODO: copied here until this becomes a describer +func tabbedString(f func(io.Writer) error) (string, error) { + out := new(tabwriter.Writer) + buf := &bytes.Buffer{} + out.Init(buf, 0, 8, 2, ' ', 0) + + err := f(out) + if err != nil { + return "", err + } + + out.Flush() + str := string(buf.String()) + return str, nil +} + +// getChangeCause returns the change-cause annotation of the input object +func 
getChangeCause(obj runtime.Object) string { + accessor, err := meta.Accessor(obj) + if err != nil { + return "" + } + return accessor.GetAnnotations()[ChangeCauseAnnotation] +} diff --git a/vendor/k8s.io/kubectl/pkg/polymorphichelpers/historyviewer.go b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/historyviewer.go new file mode 100644 index 000000000..6ad9d217c --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/historyviewer.go @@ -0,0 +1,37 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package polymorphichelpers + +import ( + "k8s.io/apimachinery/pkg/api/meta" + "k8s.io/cli-runtime/pkg/genericclioptions" + "k8s.io/client-go/kubernetes" +) + +// historyViewer Returns a HistoryViewer for viewing change history +func historyViewer(restClientGetter genericclioptions.RESTClientGetter, mapping *meta.RESTMapping) (HistoryViewer, error) { + clientConfig, err := restClientGetter.ToRESTConfig() + if err != nil { + return nil, err + } + + external, err := kubernetes.NewForConfig(clientConfig) + if err != nil { + return nil, err + } + return HistoryViewerFor(mapping.GroupVersionKind.GroupKind(), external) +} diff --git a/vendor/k8s.io/kubectl/pkg/polymorphichelpers/interface.go b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/interface.go new file mode 100644 index 000000000..83e13714f --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/interface.go @@ -0,0 +1,114 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package polymorphichelpers + +import ( + "time" + + "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/meta" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/apimachinery/pkg/runtime/schema" + "k8s.io/cli-runtime/pkg/genericclioptions" + "k8s.io/client-go/rest" +) + +// LogsForObjectFunc is a function type that can tell you how to get logs for a runtime.object +type LogsForObjectFunc func(restClientGetter genericclioptions.RESTClientGetter, object, options runtime.Object, timeout time.Duration, allContainers bool) (map[v1.ObjectReference]rest.ResponseWrapper, error) + +// LogsForObjectFn gives a way to easily override the function for unit testing if needed. 
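+// For example, a test might stub it out (illustrative sketch, not part of the vendored source):
+//
+//	LogsForObjectFn = func(_ genericclioptions.RESTClientGetter, _, _ runtime.Object,
+//		_ time.Duration, _ bool) (map[v1.ObjectReference]rest.ResponseWrapper, error) {
+//		return map[v1.ObjectReference]rest.ResponseWrapper{}, nil
+//	}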
+var LogsForObjectFn LogsForObjectFunc = logsForObject + +// AttachablePodForObjectFunc is a function type that can tell you how to get the pod for which to attach a given object +type AttachablePodForObjectFunc func(restClientGetter genericclioptions.RESTClientGetter, object runtime.Object, timeout time.Duration) (*v1.Pod, error) + +// AttachablePodForObjectFn gives a way to easily override the function for unit testing if needed. +var AttachablePodForObjectFn AttachablePodForObjectFunc = attachablePodForObject + +// HistoryViewerFunc is a function type that can tell you how to view change history +type HistoryViewerFunc func(restClientGetter genericclioptions.RESTClientGetter, mapping *meta.RESTMapping) (HistoryViewer, error) + +// HistoryViewerFn gives a way to easily override the function for unit testing if needed +var HistoryViewerFn HistoryViewerFunc = historyViewer + +// StatusViewerFunc is a function type that can tell you how to print rollout status +type StatusViewerFunc func(mapping *meta.RESTMapping) (StatusViewer, error) + +// StatusViewerFn gives a way to easily override the function for unit testing if needed +var StatusViewerFn StatusViewerFunc = statusViewer + +// UpdatePodSpecForObjectFunc will call the provided function on the pod spec this object supports, +// return false if no pod spec is supported, or return an error. +type UpdatePodSpecForObjectFunc func(obj runtime.Object, fn func(*v1.PodSpec) error) (bool, error) + +// UpdatePodSpecForObjectFn gives a way to easily override the function for unit testing if needed +var UpdatePodSpecForObjectFn UpdatePodSpecForObjectFunc = updatePodSpecForObject + +// MapBasedSelectorForObjectFunc will call the provided function on mapping the baesd selector for object, +// return "" if object is not supported, or return an error. +type MapBasedSelectorForObjectFunc func(object runtime.Object) (string, error) + +// MapBasedSelectorForObjectFn gives a way to easily override the function for unit testing if needed +var MapBasedSelectorForObjectFn MapBasedSelectorForObjectFunc = mapBasedSelectorForObject + +// ProtocolsForObjectFunc will call the provided function on the protocols for the object, +// return nil-map if no protocols for the object, or return an error. +type ProtocolsForObjectFunc func(object runtime.Object) (map[string]string, error) + +// ProtocolsForObjectFn gives a way to easily override the function for unit testing if needed +var ProtocolsForObjectFn ProtocolsForObjectFunc = protocolsForObject + +// PortsForObjectFunc returns the ports associated with the provided object +type PortsForObjectFunc func(object runtime.Object) ([]string, error) + +// PortsForObjectFn gives a way to easily override the function for unit testing if needed +var PortsForObjectFn PortsForObjectFunc = portsForObject + +// CanBeExposedFunc is a function type that can tell you whether a given GroupKind is capable of being exposed +type CanBeExposedFunc func(kind schema.GroupKind) error + +// CanBeExposedFn gives a way to easily override the function for unit testing if needed +var CanBeExposedFn CanBeExposedFunc = canBeExposed + +// ObjectPauserFunc is a function type that marks the object in a given info as paused. +type ObjectPauserFunc func(runtime.Object) ([]byte, error) + +// ObjectPauserFn gives a way to easily override the function for unit testing if needed. +// Returns the patched object in bytes and any error that occurred during the encoding or +// in case the object is already paused. 
+var ObjectPauserFn ObjectPauserFunc = defaultObjectPauser + +// ObjectResumerFunc is a function type that marks the object in a given info as resumed. +type ObjectResumerFunc func(runtime.Object) ([]byte, error) + +// ObjectResumerFn gives a way to easily override the function for unit testing if needed. +// Returns the patched object in bytes and any error that occurred during the encoding or +// in case the object is already resumed. +var ObjectResumerFn ObjectResumerFunc = defaultObjectResumer + +// RollbackerFunc gives a way to change the rollback version of the specified RESTMapping type +type RollbackerFunc func(restClientGetter genericclioptions.RESTClientGetter, mapping *meta.RESTMapping) (Rollbacker, error) + +// RollbackerFn gives a way to easily override the function for unit testing if needed +var RollbackerFn RollbackerFunc = rollbacker + +// ObjectRestarterFunc is a function type that updates an annotation in a deployment to restart it.. +type ObjectRestarterFunc func(runtime.Object) ([]byte, error) + +// ObjectRestarterFn gives a way to easily override the function for unit testing if needed. +// Returns the patched object in bytes and any error that occurred during the encoding. +var ObjectRestarterFn ObjectRestarterFunc = defaultObjectRestarter diff --git a/vendor/k8s.io/kubectl/pkg/polymorphichelpers/logsforobject.go b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/logsforobject.go new file mode 100644 index 000000000..c436c06d3 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/logsforobject.go @@ -0,0 +1,169 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package polymorphichelpers + +import ( + "errors" + "fmt" + "os" + "sort" + "time" + + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/cli-runtime/pkg/genericclioptions" + corev1client "k8s.io/client-go/kubernetes/typed/core/v1" + "k8s.io/client-go/rest" + "k8s.io/client-go/tools/reference" + "k8s.io/kubectl/pkg/cmd/util/podcmd" + "k8s.io/kubectl/pkg/scheme" + "k8s.io/kubectl/pkg/util/podutils" +) + +func logsForObject(restClientGetter genericclioptions.RESTClientGetter, object, options runtime.Object, timeout time.Duration, allContainers bool) (map[corev1.ObjectReference]rest.ResponseWrapper, error) { + clientConfig, err := restClientGetter.ToRESTConfig() + if err != nil { + return nil, err + } + + clientset, err := corev1client.NewForConfig(clientConfig) + if err != nil { + return nil, err + } + return logsForObjectWithClient(clientset, object, options, timeout, allContainers) +} + +// this is split for easy test-ability +func logsForObjectWithClient(clientset corev1client.CoreV1Interface, object, options runtime.Object, timeout time.Duration, allContainers bool) (map[corev1.ObjectReference]rest.ResponseWrapper, error) { + opts, ok := options.(*corev1.PodLogOptions) + if !ok { + return nil, errors.New("provided options object is not a PodLogOptions") + } + + switch t := object.(type) { + case *corev1.PodList: + ret := make(map[corev1.ObjectReference]rest.ResponseWrapper) + for i := range t.Items { + currRet, err := logsForObjectWithClient(clientset, &t.Items[i], options, timeout, allContainers) + if err != nil { + return nil, err + } + for k, v := range currRet { + ret[k] = v + } + } + return ret, nil + + case *corev1.Pod: + // if allContainers is true, then we're going to locate all containers and then iterate through them. At that point, "allContainers" is false + if !allContainers { + currOpts := new(corev1.PodLogOptions) + if opts != nil { + opts.DeepCopyInto(currOpts) + } + // in case the "kubectl.kubernetes.io/default-container" annotation is present, we preset the opts.Containers to default to selected + // container. This gives users ability to preselect the most interesting container in pod. + if annotations := t.GetAnnotations(); annotations != nil && currOpts.Container == "" { + var defaultContainer string + if len(annotations[podcmd.DefaultContainerAnnotationName]) > 0 { + defaultContainer = annotations[podcmd.DefaultContainerAnnotationName] + } + if len(defaultContainer) > 0 { + if exists, _ := podcmd.FindContainerByName(t, defaultContainer); exists == nil { + fmt.Fprintf(os.Stderr, "Default container name %q not found in pod %s\n", defaultContainer, t.Name) + } else { + currOpts.Container = defaultContainer + } + } + } + + if currOpts.Container == "" { + // Default to the first container name(aligning behavior with `kubectl exec'). 
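+			// (Illustrative note: for a pod with containers ["app", "sidecar"] and no
+			// container selected in the options, logs default to "app" and a
+			// "Defaulted container ..." notice is written to stderr below.)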
+ currOpts.Container = t.Spec.Containers[0].Name + if len(t.Spec.Containers) > 1 || len(t.Spec.InitContainers) > 0 || len(t.Spec.EphemeralContainers) > 0 { + fmt.Fprintf(os.Stderr, "Defaulted container %q out of: %s\n", currOpts.Container, podcmd.AllContainerNames(t)) + } + } + + container, fieldPath := podcmd.FindContainerByName(t, currOpts.Container) + if container == nil { + return nil, fmt.Errorf("container %s is not valid for pod %s", currOpts.Container, t.Name) + } + ref, err := reference.GetPartialReference(scheme.Scheme, t, fieldPath) + if err != nil { + return nil, fmt.Errorf("Unable to construct reference to '%#v': %v", t, err) + } + + ret := make(map[corev1.ObjectReference]rest.ResponseWrapper, 1) + ret[*ref] = clientset.Pods(t.Namespace).GetLogs(t.Name, currOpts) + return ret, nil + } + + ret := make(map[corev1.ObjectReference]rest.ResponseWrapper) + for _, c := range t.Spec.InitContainers { + currOpts := opts.DeepCopy() + currOpts.Container = c.Name + currRet, err := logsForObjectWithClient(clientset, t, currOpts, timeout, false) + if err != nil { + return nil, err + } + for k, v := range currRet { + ret[k] = v + } + } + for _, c := range t.Spec.Containers { + currOpts := opts.DeepCopy() + currOpts.Container = c.Name + currRet, err := logsForObjectWithClient(clientset, t, currOpts, timeout, false) + if err != nil { + return nil, err + } + for k, v := range currRet { + ret[k] = v + } + } + for _, c := range t.Spec.EphemeralContainers { + currOpts := opts.DeepCopy() + currOpts.Container = c.Name + currRet, err := logsForObjectWithClient(clientset, t, currOpts, timeout, false) + if err != nil { + return nil, err + } + for k, v := range currRet { + ret[k] = v + } + } + + return ret, nil + } + + namespace, selector, err := SelectorsForObject(object) + if err != nil { + return nil, fmt.Errorf("cannot get the logs from %T: %v", object, err) + } + + sortBy := func(pods []*corev1.Pod) sort.Interface { return podutils.ByLogging(pods) } + pod, numPods, err := GetFirstPod(clientset, namespace, selector.String(), timeout, sortBy) + if err != nil { + return nil, err + } + if numPods > 1 { + fmt.Fprintf(os.Stderr, "Found %v pods, using pod/%v\n", numPods, pod.Name) + } + + return logsForObjectWithClient(clientset, pod, options, timeout, allContainers) +} diff --git a/vendor/k8s.io/kubectl/pkg/polymorphichelpers/mapbasedselectorforobject.go b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/mapbasedselectorforobject.go new file mode 100644 index 000000000..729ae6d4f --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/mapbasedselectorforobject.go @@ -0,0 +1,160 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package polymorphichelpers + +import ( + "fmt" + "strings" + + appsv1 "k8s.io/api/apps/v1" + appsv1beta1 "k8s.io/api/apps/v1beta1" + appsv1beta2 "k8s.io/api/apps/v1beta2" + corev1 "k8s.io/api/core/v1" + extensionsv1beta1 "k8s.io/api/extensions/v1beta1" + "k8s.io/apimachinery/pkg/runtime" +) + +// mapBasedSelectorForObject returns the map-based selector associated with the provided object. If a +// new set-based selector is provided, an error is returned if the selector cannot be converted to a +// map-based selector +func mapBasedSelectorForObject(object runtime.Object) (string, error) { + // TODO: replace with a swagger schema based approach (identify pod selector via schema introspection) + switch t := object.(type) { + case *corev1.ReplicationController: + return MakeLabels(t.Spec.Selector), nil + + case *corev1.Pod: + if len(t.Labels) == 0 { + return "", fmt.Errorf("the pod has no labels and cannot be exposed") + } + return MakeLabels(t.Labels), nil + + case *corev1.Service: + if t.Spec.Selector == nil { + return "", fmt.Errorf("the service has no pod selector set") + } + return MakeLabels(t.Spec.Selector), nil + + case *extensionsv1beta1.Deployment: + // "extensions" deployments use pod template labels if selector is not set. + var labels map[string]string + if t.Spec.Selector != nil { + // TODO(madhusudancs): Make this smarter by admitting MatchExpressions with Equals + // operator, DoubleEquals operator and In operator with only one element in the set. + if len(t.Spec.Selector.MatchExpressions) > 0 { + return "", fmt.Errorf("couldn't convert expressions - \"%+v\" to map-based selector format", t.Spec.Selector.MatchExpressions) + } + labels = t.Spec.Selector.MatchLabels + } else { + labels = t.Spec.Template.Labels + } + if len(labels) == 0 { + return "", fmt.Errorf("the deployment has no labels or selectors and cannot be exposed") + } + return MakeLabels(labels), nil + + case *appsv1.Deployment: + // "apps" deployments must have the selector set. + if t.Spec.Selector == nil || len(t.Spec.Selector.MatchLabels) == 0 { + return "", fmt.Errorf("invalid deployment: no selectors, therefore cannot be exposed") + } + // TODO(madhusudancs): Make this smarter by admitting MatchExpressions with Equals + // operator, DoubleEquals operator and In operator with only one element in the set. + if len(t.Spec.Selector.MatchExpressions) > 0 { + return "", fmt.Errorf("couldn't convert expressions - \"%+v\" to map-based selector format", t.Spec.Selector.MatchExpressions) + } + return MakeLabels(t.Spec.Selector.MatchLabels), nil + + case *appsv1beta2.Deployment: + // "apps" deployments must have the selector set. + if t.Spec.Selector == nil || len(t.Spec.Selector.MatchLabels) == 0 { + return "", fmt.Errorf("invalid deployment: no selectors, therefore cannot be exposed") + } + // TODO(madhusudancs): Make this smarter by admitting MatchExpressions with Equals + // operator, DoubleEquals operator and In operator with only one element in the set. + if len(t.Spec.Selector.MatchExpressions) > 0 { + return "", fmt.Errorf("couldn't convert expressions - \"%+v\" to map-based selector format", t.Spec.Selector.MatchExpressions) + } + return MakeLabels(t.Spec.Selector.MatchLabels), nil + + case *appsv1beta1.Deployment: + // "apps" deployments must have the selector set. 
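+		// As with the other "apps" Deployment variants above, only MatchLabels
+		// can be converted to the map-based key=value form; any MatchExpressions
+		// entry is rejected below because it has no map equivalent.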
+ if t.Spec.Selector == nil || len(t.Spec.Selector.MatchLabels) == 0 { + return "", fmt.Errorf("invalid deployment: no selectors, therefore cannot be exposed") + } + // TODO(madhusudancs): Make this smarter by admitting MatchExpressions with Equals + // operator, DoubleEquals operator and In operator with only one element in the set. + if len(t.Spec.Selector.MatchExpressions) > 0 { + return "", fmt.Errorf("couldn't convert expressions - \"%+v\" to map-based selector format", t.Spec.Selector.MatchExpressions) + } + return MakeLabels(t.Spec.Selector.MatchLabels), nil + + case *extensionsv1beta1.ReplicaSet: + // "extensions" replicasets use pod template labels if selector is not set. + var labels map[string]string + if t.Spec.Selector != nil { + // TODO(madhusudancs): Make this smarter by admitting MatchExpressions with Equals + // operator, DoubleEquals operator and In operator with only one element in the set. + if len(t.Spec.Selector.MatchExpressions) > 0 { + return "", fmt.Errorf("couldn't convert expressions - \"%+v\" to map-based selector format", t.Spec.Selector.MatchExpressions) + } + labels = t.Spec.Selector.MatchLabels + } else { + labels = t.Spec.Template.Labels + } + if len(labels) == 0 { + return "", fmt.Errorf("the replica set has no labels or selectors and cannot be exposed") + } + return MakeLabels(labels), nil + + case *appsv1.ReplicaSet: + // "apps" replicasets must have the selector set. + if t.Spec.Selector == nil || len(t.Spec.Selector.MatchLabels) == 0 { + return "", fmt.Errorf("invalid replicaset: no selectors, therefore cannot be exposed") + } + // TODO(madhusudancs): Make this smarter by admitting MatchExpressions with Equals + // operator, DoubleEquals operator and In operator with only one element in the set. + if len(t.Spec.Selector.MatchExpressions) > 0 { + return "", fmt.Errorf("couldn't convert expressions - \"%+v\" to map-based selector format", t.Spec.Selector.MatchExpressions) + } + return MakeLabels(t.Spec.Selector.MatchLabels), nil + + case *appsv1beta2.ReplicaSet: + // "apps" replicasets must have the selector set. + if t.Spec.Selector == nil || len(t.Spec.Selector.MatchLabels) == 0 { + return "", fmt.Errorf("invalid replicaset: no selectors, therefore cannot be exposed") + } + // TODO(madhusudancs): Make this smarter by admitting MatchExpressions with Equals + // operator, DoubleEquals operator and In operator with only one element in the set. + if len(t.Spec.Selector.MatchExpressions) > 0 { + return "", fmt.Errorf("couldn't convert expressions - \"%+v\" to map-based selector format", t.Spec.Selector.MatchExpressions) + } + return MakeLabels(t.Spec.Selector.MatchLabels), nil + + default: + return "", fmt.Errorf("cannot extract pod selector from %T", object) + } + +} + +func MakeLabels(labels map[string]string) string { + out := []string{} + for key, value := range labels { + out = append(out, fmt.Sprintf("%s=%s", key, value)) + } + return strings.Join(out, ",") +} diff --git a/vendor/k8s.io/kubectl/pkg/polymorphichelpers/objectpauser.go b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/objectpauser.go new file mode 100644 index 000000000..f50daf255 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/objectpauser.go @@ -0,0 +1,65 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package polymorphichelpers + +import ( + "errors" + "fmt" + + appsv1 "k8s.io/api/apps/v1" + appsv1beta1 "k8s.io/api/apps/v1beta1" + appsv1beta2 "k8s.io/api/apps/v1beta2" + extensionsv1beta1 "k8s.io/api/extensions/v1beta1" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/kubectl/pkg/scheme" +) + +// Currently only supports Deployments. +func defaultObjectPauser(obj runtime.Object) ([]byte, error) { + switch obj := obj.(type) { + case *extensionsv1beta1.Deployment: + if obj.Spec.Paused { + return nil, errors.New("is already paused") + } + obj.Spec.Paused = true + return runtime.Encode(scheme.Codecs.LegacyCodec(extensionsv1beta1.SchemeGroupVersion), obj) + + case *appsv1.Deployment: + if obj.Spec.Paused { + return nil, errors.New("is already paused") + } + obj.Spec.Paused = true + return runtime.Encode(scheme.Codecs.LegacyCodec(appsv1.SchemeGroupVersion), obj) + + case *appsv1beta2.Deployment: + if obj.Spec.Paused { + return nil, errors.New("is already paused") + } + obj.Spec.Paused = true + return runtime.Encode(scheme.Codecs.LegacyCodec(appsv1beta2.SchemeGroupVersion), obj) + + case *appsv1beta1.Deployment: + if obj.Spec.Paused { + return nil, errors.New("is already paused") + } + obj.Spec.Paused = true + return runtime.Encode(scheme.Codecs.LegacyCodec(appsv1beta1.SchemeGroupVersion), obj) + + default: + return nil, fmt.Errorf("pausing is not supported") + } +} diff --git a/vendor/k8s.io/kubectl/pkg/polymorphichelpers/objectrestarter.go b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/objectrestarter.go new file mode 100644 index 000000000..cbcf7c882 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/objectrestarter.go @@ -0,0 +1,119 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package polymorphichelpers + +import ( + "errors" + "fmt" + "time" + + appsv1 "k8s.io/api/apps/v1" + appsv1beta1 "k8s.io/api/apps/v1beta1" + appsv1beta2 "k8s.io/api/apps/v1beta2" + extensionsv1beta1 "k8s.io/api/extensions/v1beta1" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/kubectl/pkg/scheme" +) + +func defaultObjectRestarter(obj runtime.Object) ([]byte, error) { + switch obj := obj.(type) { + case *extensionsv1beta1.Deployment: + if obj.Spec.Paused { + return nil, errors.New("can't restart paused deployment (run rollout resume first)") + } + if obj.Spec.Template.ObjectMeta.Annotations == nil { + obj.Spec.Template.ObjectMeta.Annotations = make(map[string]string) + } + obj.Spec.Template.ObjectMeta.Annotations["kubectl.kubernetes.io/restartedAt"] = time.Now().Format(time.RFC3339) + return runtime.Encode(scheme.Codecs.LegacyCodec(extensionsv1beta1.SchemeGroupVersion), obj) + + case *appsv1.Deployment: + if obj.Spec.Paused { + return nil, errors.New("can't restart paused deployment (run rollout resume first)") + } + if obj.Spec.Template.ObjectMeta.Annotations == nil { + obj.Spec.Template.ObjectMeta.Annotations = make(map[string]string) + } + obj.Spec.Template.ObjectMeta.Annotations["kubectl.kubernetes.io/restartedAt"] = time.Now().Format(time.RFC3339) + return runtime.Encode(scheme.Codecs.LegacyCodec(appsv1.SchemeGroupVersion), obj) + + case *appsv1beta2.Deployment: + if obj.Spec.Paused { + return nil, errors.New("can't restart paused deployment (run rollout resume first)") + } + if obj.Spec.Template.ObjectMeta.Annotations == nil { + obj.Spec.Template.ObjectMeta.Annotations = make(map[string]string) + } + obj.Spec.Template.ObjectMeta.Annotations["kubectl.kubernetes.io/restartedAt"] = time.Now().Format(time.RFC3339) + return runtime.Encode(scheme.Codecs.LegacyCodec(appsv1beta2.SchemeGroupVersion), obj) + + case *appsv1beta1.Deployment: + if obj.Spec.Paused { + return nil, errors.New("can't restart paused deployment (run rollout resume first)") + } + if obj.Spec.Template.ObjectMeta.Annotations == nil { + obj.Spec.Template.ObjectMeta.Annotations = make(map[string]string) + } + obj.Spec.Template.ObjectMeta.Annotations["kubectl.kubernetes.io/restartedAt"] = time.Now().Format(time.RFC3339) + return runtime.Encode(scheme.Codecs.LegacyCodec(appsv1beta1.SchemeGroupVersion), obj) + + case *extensionsv1beta1.DaemonSet: + if obj.Spec.Template.ObjectMeta.Annotations == nil { + obj.Spec.Template.ObjectMeta.Annotations = make(map[string]string) + } + obj.Spec.Template.ObjectMeta.Annotations["kubectl.kubernetes.io/restartedAt"] = time.Now().Format(time.RFC3339) + return runtime.Encode(scheme.Codecs.LegacyCodec(extensionsv1beta1.SchemeGroupVersion), obj) + + case *appsv1.DaemonSet: + if obj.Spec.Template.ObjectMeta.Annotations == nil { + obj.Spec.Template.ObjectMeta.Annotations = make(map[string]string) + } + obj.Spec.Template.ObjectMeta.Annotations["kubectl.kubernetes.io/restartedAt"] = time.Now().Format(time.RFC3339) + return runtime.Encode(scheme.Codecs.LegacyCodec(appsv1.SchemeGroupVersion), obj) + + case *appsv1beta2.DaemonSet: + if obj.Spec.Template.ObjectMeta.Annotations == nil { + obj.Spec.Template.ObjectMeta.Annotations = make(map[string]string) + } + obj.Spec.Template.ObjectMeta.Annotations["kubectl.kubernetes.io/restartedAt"] = time.Now().Format(time.RFC3339) + return runtime.Encode(scheme.Codecs.LegacyCodec(appsv1beta2.SchemeGroupVersion), obj) + + case *appsv1.StatefulSet: + if obj.Spec.Template.ObjectMeta.Annotations == nil { + obj.Spec.Template.ObjectMeta.Annotations = 
make(map[string]string) + } + obj.Spec.Template.ObjectMeta.Annotations["kubectl.kubernetes.io/restartedAt"] = time.Now().Format(time.RFC3339) + return runtime.Encode(scheme.Codecs.LegacyCodec(appsv1.SchemeGroupVersion), obj) + + case *appsv1beta1.StatefulSet: + if obj.Spec.Template.ObjectMeta.Annotations == nil { + obj.Spec.Template.ObjectMeta.Annotations = make(map[string]string) + } + obj.Spec.Template.ObjectMeta.Annotations["kubectl.kubernetes.io/restartedAt"] = time.Now().Format(time.RFC3339) + return runtime.Encode(scheme.Codecs.LegacyCodec(appsv1beta1.SchemeGroupVersion), obj) + + case *appsv1beta2.StatefulSet: + if obj.Spec.Template.ObjectMeta.Annotations == nil { + obj.Spec.Template.ObjectMeta.Annotations = make(map[string]string) + } + obj.Spec.Template.ObjectMeta.Annotations["kubectl.kubernetes.io/restartedAt"] = time.Now().Format(time.RFC3339) + return runtime.Encode(scheme.Codecs.LegacyCodec(appsv1beta2.SchemeGroupVersion), obj) + + default: + return nil, fmt.Errorf("restarting is not supported") + } +} diff --git a/vendor/k8s.io/kubectl/pkg/polymorphichelpers/objectresumer.go b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/objectresumer.go new file mode 100644 index 000000000..783da6125 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/objectresumer.go @@ -0,0 +1,64 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package polymorphichelpers + +import ( + "errors" + "fmt" + + appsv1 "k8s.io/api/apps/v1" + appsv1beta1 "k8s.io/api/apps/v1beta1" + appsv1beta2 "k8s.io/api/apps/v1beta2" + extensionsv1beta1 "k8s.io/api/extensions/v1beta1" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/kubectl/pkg/scheme" +) + +func defaultObjectResumer(obj runtime.Object) ([]byte, error) { + switch obj := obj.(type) { + case *extensionsv1beta1.Deployment: + if !obj.Spec.Paused { + return nil, errors.New("is not paused") + } + obj.Spec.Paused = false + return runtime.Encode(scheme.Codecs.LegacyCodec(extensionsv1beta1.SchemeGroupVersion), obj) + + case *appsv1.Deployment: + if !obj.Spec.Paused { + return nil, errors.New("is not paused") + } + obj.Spec.Paused = false + return runtime.Encode(scheme.Codecs.LegacyCodec(appsv1.SchemeGroupVersion), obj) + + case *appsv1beta2.Deployment: + if !obj.Spec.Paused { + return nil, errors.New("is not paused") + } + obj.Spec.Paused = false + return runtime.Encode(scheme.Codecs.LegacyCodec(appsv1beta2.SchemeGroupVersion), obj) + + case *appsv1beta1.Deployment: + if !obj.Spec.Paused { + return nil, errors.New("is not paused") + } + obj.Spec.Paused = false + return runtime.Encode(scheme.Codecs.LegacyCodec(appsv1beta1.SchemeGroupVersion), obj) + + default: + return nil, fmt.Errorf("resuming is not supported") + } +} diff --git a/vendor/k8s.io/kubectl/pkg/polymorphichelpers/portsforobject.go b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/portsforobject.go new file mode 100644 index 000000000..6cc9a2a4e --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/portsforobject.go @@ -0,0 +1,78 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package polymorphichelpers + +import ( + "fmt" + "strconv" + + appsv1 "k8s.io/api/apps/v1" + appsv1beta1 "k8s.io/api/apps/v1beta1" + appsv1beta2 "k8s.io/api/apps/v1beta2" + corev1 "k8s.io/api/core/v1" + extensionsv1beta1 "k8s.io/api/extensions/v1beta1" + "k8s.io/apimachinery/pkg/runtime" +) + +func portsForObject(object runtime.Object) ([]string, error) { + switch t := object.(type) { + case *corev1.ReplicationController: + return getPorts(t.Spec.Template.Spec), nil + + case *corev1.Pod: + return getPorts(t.Spec), nil + + case *corev1.Service: + return getServicePorts(t.Spec), nil + + case *extensionsv1beta1.Deployment: + return getPorts(t.Spec.Template.Spec), nil + case *appsv1.Deployment: + return getPorts(t.Spec.Template.Spec), nil + case *appsv1beta2.Deployment: + return getPorts(t.Spec.Template.Spec), nil + case *appsv1beta1.Deployment: + return getPorts(t.Spec.Template.Spec), nil + + case *extensionsv1beta1.ReplicaSet: + return getPorts(t.Spec.Template.Spec), nil + case *appsv1.ReplicaSet: + return getPorts(t.Spec.Template.Spec), nil + case *appsv1beta2.ReplicaSet: + return getPorts(t.Spec.Template.Spec), nil + default: + return nil, fmt.Errorf("cannot extract ports from %T", object) + } +} + +func getPorts(spec corev1.PodSpec) []string { + result := []string{} + for _, container := range spec.Containers { + for _, port := range container.Ports { + result = append(result, strconv.Itoa(int(port.ContainerPort))) + } + } + return result +} + +func getServicePorts(spec corev1.ServiceSpec) []string { + result := []string{} + for _, servicePort := range spec.Ports { + result = append(result, strconv.Itoa(int(servicePort.Port))) + } + return result +} diff --git a/vendor/k8s.io/kubectl/pkg/polymorphichelpers/protocolsforobject.go b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/protocolsforobject.go new file mode 100644 index 000000000..2e5e5a208 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/protocolsforobject.go @@ -0,0 +1,89 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package polymorphichelpers + +import ( + "fmt" + "strconv" + + appsv1 "k8s.io/api/apps/v1" + appsv1beta1 "k8s.io/api/apps/v1beta1" + appsv1beta2 "k8s.io/api/apps/v1beta2" + corev1 "k8s.io/api/core/v1" + extensionsv1beta1 "k8s.io/api/extensions/v1beta1" + "k8s.io/apimachinery/pkg/runtime" +) + +func protocolsForObject(object runtime.Object) (map[string]string, error) { + // TODO: replace with a swagger schema based approach (identify pod selector via schema introspection) + switch t := object.(type) { + case *corev1.ReplicationController: + return getProtocols(t.Spec.Template.Spec), nil + + case *corev1.Pod: + return getProtocols(t.Spec), nil + + case *corev1.Service: + return getServiceProtocols(t.Spec), nil + + case *extensionsv1beta1.Deployment: + return getProtocols(t.Spec.Template.Spec), nil + case *appsv1.Deployment: + return getProtocols(t.Spec.Template.Spec), nil + case *appsv1beta2.Deployment: + return getProtocols(t.Spec.Template.Spec), nil + case *appsv1beta1.Deployment: + return getProtocols(t.Spec.Template.Spec), nil + + case *extensionsv1beta1.ReplicaSet: + return getProtocols(t.Spec.Template.Spec), nil + case *appsv1.ReplicaSet: + return getProtocols(t.Spec.Template.Spec), nil + case *appsv1beta2.ReplicaSet: + return getProtocols(t.Spec.Template.Spec), nil + + default: + return nil, fmt.Errorf("cannot extract protocols from %T", object) + } +} + +func getProtocols(spec corev1.PodSpec) map[string]string { + result := make(map[string]string) + for _, container := range spec.Containers { + for _, port := range container.Ports { + // Empty protocol must be defaulted (TCP) + if len(port.Protocol) == 0 { + port.Protocol = corev1.ProtocolTCP + } + result[strconv.Itoa(int(port.ContainerPort))] = string(port.Protocol) + } + } + return result +} + +// Extracts the protocols exposed by a service from the given service spec. +func getServiceProtocols(spec corev1.ServiceSpec) map[string]string { + result := make(map[string]string) + for _, servicePort := range spec.Ports { + // Empty protocol must be defaulted (TCP) + if len(servicePort.Protocol) == 0 { + servicePort.Protocol = corev1.ProtocolTCP + } + result[strconv.Itoa(int(servicePort.Port))] = string(servicePort.Protocol) + } + return result +} diff --git a/vendor/k8s.io/kubectl/pkg/polymorphichelpers/rollback.go b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/rollback.go new file mode 100644 index 000000000..a47fb2d86 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/rollback.go @@ -0,0 +1,500 @@ +/* +Copyright 2016 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package polymorphichelpers + +import ( + "bytes" + "context" + "fmt" + "sort" + + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + apiequality "k8s.io/apimachinery/pkg/api/equality" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/apimachinery/pkg/runtime/schema" + "k8s.io/apimachinery/pkg/types" + "k8s.io/apimachinery/pkg/util/json" + "k8s.io/apimachinery/pkg/util/strategicpatch" + "k8s.io/client-go/kubernetes" + "k8s.io/kubectl/pkg/apps" + cmdutil "k8s.io/kubectl/pkg/cmd/util" + "k8s.io/kubectl/pkg/scheme" + deploymentutil "k8s.io/kubectl/pkg/util/deployment" +) + +const ( + rollbackSuccess = "rolled back" + rollbackSkipped = "skipped rollback" +) + +// Rollbacker provides an interface for resources that can be rolled back. +type Rollbacker interface { + Rollback(obj runtime.Object, updatedAnnotations map[string]string, toRevision int64, dryRunStrategy cmdutil.DryRunStrategy) (string, error) +} + +type RollbackVisitor struct { + clientset kubernetes.Interface + result Rollbacker +} + +func (v *RollbackVisitor) VisitDeployment(elem apps.GroupKindElement) { + v.result = &DeploymentRollbacker{v.clientset} +} + +func (v *RollbackVisitor) VisitStatefulSet(kind apps.GroupKindElement) { + v.result = &StatefulSetRollbacker{v.clientset} +} + +func (v *RollbackVisitor) VisitDaemonSet(kind apps.GroupKindElement) { + v.result = &DaemonSetRollbacker{v.clientset} +} + +func (v *RollbackVisitor) VisitJob(kind apps.GroupKindElement) {} +func (v *RollbackVisitor) VisitPod(kind apps.GroupKindElement) {} +func (v *RollbackVisitor) VisitReplicaSet(kind apps.GroupKindElement) {} +func (v *RollbackVisitor) VisitReplicationController(kind apps.GroupKindElement) {} +func (v *RollbackVisitor) VisitCronJob(kind apps.GroupKindElement) {} + +// RollbackerFor returns an implementation of Rollbacker interface for the given schema kind +func RollbackerFor(kind schema.GroupKind, c kubernetes.Interface) (Rollbacker, error) { + elem := apps.GroupKindElement(kind) + visitor := &RollbackVisitor{ + clientset: c, + } + + err := elem.Accept(visitor) + + if err != nil { + return nil, fmt.Errorf("error retrieving rollbacker for %q, %v", kind.String(), err) + } + + if visitor.result == nil { + return nil, fmt.Errorf("no rollbacker has been implemented for %q", kind) + } + + return visitor.result, nil +} + +type DeploymentRollbacker struct { + c kubernetes.Interface +} + +func (r *DeploymentRollbacker) Rollback(obj runtime.Object, updatedAnnotations map[string]string, toRevision int64, dryRunStrategy cmdutil.DryRunStrategy) (string, error) { + if toRevision < 0 { + return "", revisionNotFoundErr(toRevision) + } + accessor, err := meta.Accessor(obj) + if err != nil { + return "", fmt.Errorf("failed to create accessor for kind %v: %s", obj.GetObjectKind(), err.Error()) + } + name := accessor.GetName() + namespace := accessor.GetNamespace() + + // TODO: Fix this after kubectl has been removed from core. It is not possible to convert the runtime.Object + // to the external appsv1 Deployment without round-tripping through an internal version of Deployment. We're + // currently getting rid of all internal versions of resources. So we specifically request the appsv1 version + // here. This follows the same pattern as for DaemonSet and StatefulSet. 
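+	// Fetch the live appsv1 Deployment from the API server so the rollback
+	// operates on current state rather than the possibly stale object that
+	// was passed in.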
+ deployment, err := r.c.AppsV1().Deployments(namespace).Get(context.TODO(), name, metav1.GetOptions{}) + if err != nil { + return "", fmt.Errorf("failed to retrieve Deployment %s: %v", name, err) + } + + rsForRevision, err := deploymentRevision(deployment, r.c, toRevision) + if err != nil { + return "", err + } + if dryRunStrategy == cmdutil.DryRunClient { + return printTemplate(&rsForRevision.Spec.Template) + } + if deployment.Spec.Paused { + return "", fmt.Errorf("you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume' and try again") + } + + // Skip if the revision already matches current Deployment + if equalIgnoreHash(&rsForRevision.Spec.Template, &deployment.Spec.Template) { + return fmt.Sprintf("%s (current template already matches revision %d)", rollbackSkipped, toRevision), nil + } + + // remove hash label before patching back into the deployment + delete(rsForRevision.Spec.Template.Labels, appsv1.DefaultDeploymentUniqueLabelKey) + + // compute deployment annotations + annotations := map[string]string{} + for k := range annotationsToSkip { + if v, ok := deployment.Annotations[k]; ok { + annotations[k] = v + } + } + for k, v := range rsForRevision.Annotations { + if !annotationsToSkip[k] { + annotations[k] = v + } + } + + // make patch to restore + patchType, patch, err := getDeploymentPatch(&rsForRevision.Spec.Template, annotations) + if err != nil { + return "", fmt.Errorf("failed restoring revision %d: %v", toRevision, err) + } + + patchOptions := metav1.PatchOptions{} + if dryRunStrategy == cmdutil.DryRunServer { + patchOptions.DryRun = []string{metav1.DryRunAll} + } + // Restore revision + if _, err = r.c.AppsV1().Deployments(namespace).Patch(context.TODO(), name, patchType, patch, patchOptions); err != nil { + return "", fmt.Errorf("failed restoring revision %d: %v", toRevision, err) + } + return rollbackSuccess, nil +} + +// equalIgnoreHash returns true if two given podTemplateSpec are equal, ignoring the diff in value of Labels[pod-template-hash] +// We ignore pod-template-hash because: +// 1. The hash result would be different upon podTemplateSpec API changes +// (e.g. the addition of a new field will cause the hash code to change) +// 2. The deployment template won't have hash labels +func equalIgnoreHash(template1, template2 *corev1.PodTemplateSpec) bool { + t1Copy := template1.DeepCopy() + t2Copy := template2.DeepCopy() + // Remove hash labels from template.Labels before comparing + delete(t1Copy.Labels, appsv1.DefaultDeploymentUniqueLabelKey) + delete(t2Copy.Labels, appsv1.DefaultDeploymentUniqueLabelKey) + return apiequality.Semantic.DeepEqual(t1Copy, t2Copy) +} + +// annotationsToSkip lists the annotations that should be preserved from the deployment and not +// copied from the replicaset when rolling a deployment back +var annotationsToSkip = map[string]bool{ + corev1.LastAppliedConfigAnnotation: true, + deploymentutil.RevisionAnnotation: true, + deploymentutil.RevisionHistoryAnnotation: true, + deploymentutil.DesiredReplicasAnnotation: true, + deploymentutil.MaxReplicasAnnotation: true, + appsv1.DeprecatedRollbackTo: true, +} + +// getPatch returns a patch that can be applied to restore a Deployment to a +// previous version. If the returned error is nil the patch is valid. 
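+// The result is a JSON Patch (types.JSONPatchType) of the form:
+//
+//	[{"op": "replace", "path": "/spec/template", "value": {...}},
+//	 {"op": "replace", "path": "/metadata/annotations", "value": {...}}]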
+func getDeploymentPatch(podTemplate *corev1.PodTemplateSpec, annotations map[string]string) (types.PatchType, []byte, error) { + // Create a patch of the Deployment that replaces spec.template + patch, err := json.Marshal([]interface{}{ + map[string]interface{}{ + "op": "replace", + "path": "/spec/template", + "value": podTemplate, + }, + map[string]interface{}{ + "op": "replace", + "path": "/metadata/annotations", + "value": annotations, + }, + }) + return types.JSONPatchType, patch, err +} + +func deploymentRevision(deployment *appsv1.Deployment, c kubernetes.Interface, toRevision int64) (revision *appsv1.ReplicaSet, err error) { + + _, allOldRSs, newRS, err := deploymentutil.GetAllReplicaSets(deployment, c.AppsV1()) + if err != nil { + return nil, fmt.Errorf("failed to retrieve replica sets from deployment %s: %v", deployment.Name, err) + } + allRSs := allOldRSs + if newRS != nil { + allRSs = append(allRSs, newRS) + } + + var ( + latestReplicaSet *appsv1.ReplicaSet + latestRevision = int64(-1) + previousReplicaSet *appsv1.ReplicaSet + previousRevision = int64(-1) + ) + for _, rs := range allRSs { + if v, err := deploymentutil.Revision(rs); err == nil { + if toRevision == 0 { + if latestRevision < v { + // newest one we've seen so far + previousRevision = latestRevision + previousReplicaSet = latestReplicaSet + latestRevision = v + latestReplicaSet = rs + } else if previousRevision < v { + // second newest one we've seen so far + previousRevision = v + previousReplicaSet = rs + } + } else if toRevision == v { + return rs, nil + } + } + } + + if toRevision > 0 { + return nil, revisionNotFoundErr(toRevision) + } + + if previousReplicaSet == nil { + return nil, fmt.Errorf("no rollout history found for deployment %q", deployment.Name) + } + return previousReplicaSet, nil +} + +type DaemonSetRollbacker struct { + c kubernetes.Interface +} + +func (r *DaemonSetRollbacker) Rollback(obj runtime.Object, updatedAnnotations map[string]string, toRevision int64, dryRunStrategy cmdutil.DryRunStrategy) (string, error) { + if toRevision < 0 { + return "", revisionNotFoundErr(toRevision) + } + accessor, err := meta.Accessor(obj) + if err != nil { + return "", fmt.Errorf("failed to create accessor for kind %v: %s", obj.GetObjectKind(), err.Error()) + } + ds, history, err := daemonSetHistory(r.c.AppsV1(), accessor.GetNamespace(), accessor.GetName()) + if err != nil { + return "", err + } + if toRevision == 0 && len(history) <= 1 { + return "", fmt.Errorf("no last revision to roll back to") + } + + toHistory := findHistory(toRevision, history) + if toHistory == nil { + return "", revisionNotFoundErr(toRevision) + } + + if dryRunStrategy == cmdutil.DryRunClient { + appliedDS, err := applyDaemonSetHistory(ds, toHistory) + if err != nil { + return "", err + } + return printPodTemplate(&appliedDS.Spec.Template) + } + + // Skip if the revision already matches current DaemonSet + done, err := daemonSetMatch(ds, toHistory) + if err != nil { + return "", err + } + if done { + return fmt.Sprintf("%s (current template already matches revision %d)", rollbackSkipped, toRevision), nil + } + + patchOptions := metav1.PatchOptions{} + if dryRunStrategy == cmdutil.DryRunServer { + patchOptions.DryRun = []string{metav1.DryRunAll} + } + // Restore revision + if _, err = r.c.AppsV1().DaemonSets(accessor.GetNamespace()).Patch(context.TODO(), accessor.GetName(), types.StrategicMergePatchType, toHistory.Data.Raw, patchOptions); err != nil { + return "", fmt.Errorf("failed restoring revision %d: %v", toRevision, err) + } + + 
return rollbackSuccess, nil +} + +// daemonMatch check if the given DaemonSet's template matches the template stored in the given history. +func daemonSetMatch(ds *appsv1.DaemonSet, history *appsv1.ControllerRevision) (bool, error) { + patch, err := getDaemonSetPatch(ds) + if err != nil { + return false, err + } + return bytes.Equal(patch, history.Data.Raw), nil +} + +// getPatch returns a strategic merge patch that can be applied to restore a Daemonset to a +// previous version. If the returned error is nil the patch is valid. The current state that we save is just the +// PodSpecTemplate. We can modify this later to encompass more state (or less) and remain compatible with previously +// recorded patches. +func getDaemonSetPatch(ds *appsv1.DaemonSet) ([]byte, error) { + dsBytes, err := json.Marshal(ds) + if err != nil { + return nil, err + } + var raw map[string]interface{} + err = json.Unmarshal(dsBytes, &raw) + if err != nil { + return nil, err + } + objCopy := make(map[string]interface{}) + specCopy := make(map[string]interface{}) + + // Create a patch of the DaemonSet that replaces spec.template + spec := raw["spec"].(map[string]interface{}) + template := spec["template"].(map[string]interface{}) + specCopy["template"] = template + template["$patch"] = "replace" + objCopy["spec"] = specCopy + patch, err := json.Marshal(objCopy) + return patch, err +} + +type StatefulSetRollbacker struct { + c kubernetes.Interface +} + +// toRevision is a non-negative integer, with 0 being reserved to indicate rolling back to previous configuration +func (r *StatefulSetRollbacker) Rollback(obj runtime.Object, updatedAnnotations map[string]string, toRevision int64, dryRunStrategy cmdutil.DryRunStrategy) (string, error) { + if toRevision < 0 { + return "", revisionNotFoundErr(toRevision) + } + accessor, err := meta.Accessor(obj) + if err != nil { + return "", fmt.Errorf("failed to create accessor for kind %v: %s", obj.GetObjectKind(), err.Error()) + } + sts, history, err := statefulSetHistory(r.c.AppsV1(), accessor.GetNamespace(), accessor.GetName()) + if err != nil { + return "", err + } + if toRevision == 0 && len(history) <= 1 { + return "", fmt.Errorf("no last revision to roll back to") + } + + toHistory := findHistory(toRevision, history) + if toHistory == nil { + return "", revisionNotFoundErr(toRevision) + } + + if dryRunStrategy == cmdutil.DryRunClient { + appliedSS, err := applyRevision(sts, toHistory) + if err != nil { + return "", err + } + return printPodTemplate(&appliedSS.Spec.Template) + } + + // Skip if the revision already matches current StatefulSet + done, err := statefulsetMatch(sts, toHistory) + if err != nil { + return "", err + } + if done { + return fmt.Sprintf("%s (current template already matches revision %d)", rollbackSkipped, toRevision), nil + } + + patchOptions := metav1.PatchOptions{} + if dryRunStrategy == cmdutil.DryRunServer { + patchOptions.DryRun = []string{metav1.DryRunAll} + } + // Restore revision + if _, err = r.c.AppsV1().StatefulSets(sts.Namespace).Patch(context.TODO(), sts.Name, types.StrategicMergePatchType, toHistory.Data.Raw, patchOptions); err != nil { + return "", fmt.Errorf("failed restoring revision %d: %v", toRevision, err) + } + + return rollbackSuccess, nil +} + +var appsCodec = scheme.Codecs.LegacyCodec(appsv1.SchemeGroupVersion) + +// applyRevision returns a new StatefulSet constructed by restoring the state in revision to set. If the returned error +// is nil, the returned StatefulSet is valid. 
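+// The ControllerRevision's Data field holds the strategic-merge patch recorded
+// by the StatefulSet controller for that revision; it is applied on top of the
+// current object and the result is decoded back into a StatefulSet.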
+func applyRevision(set *appsv1.StatefulSet, revision *appsv1.ControllerRevision) (*appsv1.StatefulSet, error) { + patched, err := strategicpatch.StrategicMergePatch([]byte(runtime.EncodeOrDie(appsCodec, set)), revision.Data.Raw, set) + if err != nil { + return nil, err + } + result := &appsv1.StatefulSet{} + err = json.Unmarshal(patched, result) + if err != nil { + return nil, err + } + return result, nil +} + +// statefulsetMatch check if the given StatefulSet's template matches the template stored in the given history. +func statefulsetMatch(ss *appsv1.StatefulSet, history *appsv1.ControllerRevision) (bool, error) { + patch, err := getStatefulSetPatch(ss) + if err != nil { + return false, err + } + return bytes.Equal(patch, history.Data.Raw), nil +} + +// getStatefulSetPatch returns a strategic merge patch that can be applied to restore a StatefulSet to a +// previous version. If the returned error is nil the patch is valid. The current state that we save is just the +// PodSpecTemplate. We can modify this later to encompass more state (or less) and remain compatible with previously +// recorded patches. +func getStatefulSetPatch(set *appsv1.StatefulSet) ([]byte, error) { + str, err := runtime.Encode(appsCodec, set) + if err != nil { + return nil, err + } + var raw map[string]interface{} + if err := json.Unmarshal([]byte(str), &raw); err != nil { + return nil, err + } + objCopy := make(map[string]interface{}) + specCopy := make(map[string]interface{}) + spec := raw["spec"].(map[string]interface{}) + template := spec["template"].(map[string]interface{}) + specCopy["template"] = template + template["$patch"] = "replace" + objCopy["spec"] = specCopy + patch, err := json.Marshal(objCopy) + return patch, err +} + +// findHistory returns a controllerrevision of a specific revision from the given controllerrevisions. +// It returns nil if no such controllerrevision exists. +// If toRevision is 0, the last previously used history is returned. +func findHistory(toRevision int64, allHistory []*appsv1.ControllerRevision) *appsv1.ControllerRevision { + if toRevision == 0 && len(allHistory) <= 1 { + return nil + } + + // Find the history to rollback to + var toHistory *appsv1.ControllerRevision + if toRevision == 0 { + // If toRevision == 0, find the latest revision (2nd max) + sort.Sort(historiesByRevision(allHistory)) + toHistory = allHistory[len(allHistory)-2] + } else { + for _, h := range allHistory { + if h.Revision == toRevision { + // If toRevision != 0, find the history with matching revision + return h + } + } + } + + return toHistory +} + +// printPodTemplate converts a given pod template into a human-readable string. 
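+// It is used by the rollbackers above to produce the "will roll back to ..."
+// message for client-side dry runs.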
+func printPodTemplate(specTemplate *corev1.PodTemplateSpec) (string, error) { + podSpec, err := printTemplate(specTemplate) + if err != nil { + return "", err + } + return fmt.Sprintf("will roll back to %s", podSpec), nil +} + +func revisionNotFoundErr(r int64) error { + return fmt.Errorf("unable to find specified revision %v in history", r) +} + +// TODO: copied from daemon controller, should extract to a library +type historiesByRevision []*appsv1.ControllerRevision + +func (h historiesByRevision) Len() int { return len(h) } +func (h historiesByRevision) Swap(i, j int) { h[i], h[j] = h[j], h[i] } +func (h historiesByRevision) Less(i, j int) bool { + return h[i].Revision < h[j].Revision +} diff --git a/vendor/k8s.io/kubectl/pkg/polymorphichelpers/rollbacker.go b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/rollbacker.go new file mode 100644 index 000000000..f02223690 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/rollbacker.go @@ -0,0 +1,37 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package polymorphichelpers + +import ( + "k8s.io/apimachinery/pkg/api/meta" + "k8s.io/cli-runtime/pkg/genericclioptions" + "k8s.io/client-go/kubernetes" +) + +// Returns a Rollbacker for changing the rollback version of the specified RESTMapping type or an error +func rollbacker(restClientGetter genericclioptions.RESTClientGetter, mapping *meta.RESTMapping) (Rollbacker, error) { + clientConfig, err := restClientGetter.ToRESTConfig() + if err != nil { + return nil, err + } + external, err := kubernetes.NewForConfig(clientConfig) + if err != nil { + return nil, err + } + + return RollbackerFor(mapping.GroupVersionKind.GroupKind(), external) +} diff --git a/vendor/k8s.io/kubectl/pkg/polymorphichelpers/rollout_status.go b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/rollout_status.go new file mode 100644 index 000000000..86af73350 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/rollout_status.go @@ -0,0 +1,152 @@ +/* +Copyright 2016 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package polymorphichelpers + +import ( + "fmt" + + appsv1 "k8s.io/api/apps/v1" + extensionsv1beta1 "k8s.io/api/extensions/v1beta1" + "k8s.io/apimachinery/pkg/runtime" + "k8s.io/apimachinery/pkg/runtime/schema" + deploymentutil "k8s.io/kubectl/pkg/util/deployment" +) + +// StatusViewer provides an interface for resources that have rollout status. 
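+// Status returns a human-readable description of the rollout, a bool that is
+// true once the rollout is considered done, and an error if the status cannot
+// be determined; a non-zero revision restricts the check to that revision
+// where the workload supports it.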
+type StatusViewer interface { + Status(obj runtime.Unstructured, revision int64) (string, bool, error) +} + +// StatusViewerFor returns a StatusViewer for the resource specified by kind. +func StatusViewerFor(kind schema.GroupKind) (StatusViewer, error) { + switch kind { + case extensionsv1beta1.SchemeGroupVersion.WithKind("Deployment").GroupKind(), + appsv1.SchemeGroupVersion.WithKind("Deployment").GroupKind(): + return &DeploymentStatusViewer{}, nil + case extensionsv1beta1.SchemeGroupVersion.WithKind("DaemonSet").GroupKind(), + appsv1.SchemeGroupVersion.WithKind("DaemonSet").GroupKind(): + return &DaemonSetStatusViewer{}, nil + case appsv1.SchemeGroupVersion.WithKind("StatefulSet").GroupKind(): + return &StatefulSetStatusViewer{}, nil + } + return nil, fmt.Errorf("no status viewer has been implemented for %v", kind) +} + +// DeploymentStatusViewer implements the StatusViewer interface. +type DeploymentStatusViewer struct{} + +// DaemonSetStatusViewer implements the StatusViewer interface. +type DaemonSetStatusViewer struct{} + +// StatefulSetStatusViewer implements the StatusViewer interface. +type StatefulSetStatusViewer struct{} + +// Status returns a message describing deployment status, and a bool value indicating if the status is considered done. +func (s *DeploymentStatusViewer) Status(obj runtime.Unstructured, revision int64) (string, bool, error) { + deployment := &appsv1.Deployment{} + err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), deployment) + if err != nil { + return "", false, fmt.Errorf("failed to convert %T to %T: %v", obj, deployment, err) + } + + if revision > 0 { + deploymentRev, err := deploymentutil.Revision(deployment) + if err != nil { + return "", false, fmt.Errorf("cannot get the revision of deployment %q: %v", deployment.Name, err) + } + if revision != deploymentRev { + return "", false, fmt.Errorf("desired revision (%d) is different from the running revision (%d)", revision, deploymentRev) + } + } + if deployment.Generation <= deployment.Status.ObservedGeneration { + cond := deploymentutil.GetDeploymentCondition(deployment.Status, appsv1.DeploymentProgressing) + if cond != nil && cond.Reason == deploymentutil.TimedOutReason { + return "", false, fmt.Errorf("deployment %q exceeded its progress deadline", deployment.Name) + } + if deployment.Spec.Replicas != nil && deployment.Status.UpdatedReplicas < *deployment.Spec.Replicas { + return fmt.Sprintf("Waiting for deployment %q rollout to finish: %d out of %d new replicas have been updated...\n", deployment.Name, deployment.Status.UpdatedReplicas, *deployment.Spec.Replicas), false, nil + } + if deployment.Status.Replicas > deployment.Status.UpdatedReplicas { + return fmt.Sprintf("Waiting for deployment %q rollout to finish: %d old replicas are pending termination...\n", deployment.Name, deployment.Status.Replicas-deployment.Status.UpdatedReplicas), false, nil + } + if deployment.Status.AvailableReplicas < deployment.Status.UpdatedReplicas { + return fmt.Sprintf("Waiting for deployment %q rollout to finish: %d of %d updated replicas are available...\n", deployment.Name, deployment.Status.AvailableReplicas, deployment.Status.UpdatedReplicas), false, nil + } + return fmt.Sprintf("deployment %q successfully rolled out\n", deployment.Name), true, nil + } + return fmt.Sprintf("Waiting for deployment spec update to be observed...\n"), false, nil +} + +// Status returns a message describing daemon set status, and a bool value indicating if the status is considered done. 
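+// Only the RollingUpdate strategy is supported; with OnDelete the controller
+// does not drive pod updates, so there is no rollout progress to report.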
+func (s *DaemonSetStatusViewer) Status(obj runtime.Unstructured, revision int64) (string, bool, error) {
+	// ignoring revision as DaemonSets do not have history yet
+
+	daemon := &appsv1.DaemonSet{}
+	err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), daemon)
+	if err != nil {
+		return "", false, fmt.Errorf("failed to convert %T to %T: %v", obj, daemon, err)
+	}
+
+	if daemon.Spec.UpdateStrategy.Type != appsv1.RollingUpdateDaemonSetStrategyType {
+		return "", true, fmt.Errorf("rollout status is only available for %s strategy type", appsv1.RollingUpdateDaemonSetStrategyType)
+	}
+	if daemon.Generation <= daemon.Status.ObservedGeneration {
+		if daemon.Status.UpdatedNumberScheduled < daemon.Status.DesiredNumberScheduled {
+			return fmt.Sprintf("Waiting for daemon set %q rollout to finish: %d out of %d new pods have been updated...\n", daemon.Name, daemon.Status.UpdatedNumberScheduled, daemon.Status.DesiredNumberScheduled), false, nil
+		}
+		if daemon.Status.NumberAvailable < daemon.Status.DesiredNumberScheduled {
+			return fmt.Sprintf("Waiting for daemon set %q rollout to finish: %d of %d updated pods are available...\n", daemon.Name, daemon.Status.NumberAvailable, daemon.Status.DesiredNumberScheduled), false, nil
+		}
+		return fmt.Sprintf("daemon set %q successfully rolled out\n", daemon.Name), true, nil
+	}
+	return fmt.Sprintf("Waiting for daemon set spec update to be observed...\n"), false, nil
+}
+
+// Status returns a message describing statefulset status, and a bool value indicating if the status is considered done.
+func (s *StatefulSetStatusViewer) Status(obj runtime.Unstructured, revision int64) (string, bool, error) {
+	sts := &appsv1.StatefulSet{}
+	err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), sts)
+	if err != nil {
+		return "", false, fmt.Errorf("failed to convert %T to %T: %v", obj, sts, err)
+	}
+
+	if sts.Spec.UpdateStrategy.Type != appsv1.RollingUpdateStatefulSetStrategyType {
+		return "", true, fmt.Errorf("rollout status is only available for %s strategy type", appsv1.RollingUpdateStatefulSetStrategyType)
+	}
+	if sts.Status.ObservedGeneration == 0 || sts.Generation > sts.Status.ObservedGeneration {
+		return "Waiting for statefulset spec update to be observed...\n", false, nil
+	}
+	if sts.Spec.Replicas != nil && sts.Status.ReadyReplicas < *sts.Spec.Replicas {
+		return fmt.Sprintf("Waiting for %d pods to be ready...\n", *sts.Spec.Replicas-sts.Status.ReadyReplicas), false, nil
+	}
+	if sts.Spec.UpdateStrategy.Type == appsv1.RollingUpdateStatefulSetStrategyType && sts.Spec.UpdateStrategy.RollingUpdate != nil {
+		if sts.Spec.Replicas != nil && sts.Spec.UpdateStrategy.RollingUpdate.Partition != nil {
+			if sts.Status.UpdatedReplicas < (*sts.Spec.Replicas - *sts.Spec.UpdateStrategy.RollingUpdate.Partition) {
+				return fmt.Sprintf("Waiting for partitioned roll out to finish: %d out of %d new pods have been updated...\n",
+					sts.Status.UpdatedReplicas, *sts.Spec.Replicas-*sts.Spec.UpdateStrategy.RollingUpdate.Partition), false, nil
+			}
+		}
+		return fmt.Sprintf("partitioned roll out complete: %d new pods have been updated...\n",
+			sts.Status.UpdatedReplicas), true, nil
+	}
+	if sts.Status.UpdateRevision != sts.Status.CurrentRevision {
+		return fmt.Sprintf("waiting for statefulset rolling update to complete %d pods at revision %s...\n",
+			sts.Status.UpdatedReplicas, sts.Status.UpdateRevision), false, nil
+	}
+	return fmt.Sprintf("statefulset rolling update complete %d pods at revision %s...\n",
sts.Status.CurrentReplicas, sts.Status.CurrentRevision), true, nil + +} diff --git a/vendor/k8s.io/kubectl/pkg/polymorphichelpers/statusviewer.go b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/statusviewer.go new file mode 100644 index 000000000..0d6dd39f4 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/statusviewer.go @@ -0,0 +1,26 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package polymorphichelpers + +import ( + "k8s.io/apimachinery/pkg/api/meta" +) + +// statusViewer returns a StatusViewer for printing rollout status. +func statusViewer(mapping *meta.RESTMapping) (StatusViewer, error) { + return StatusViewerFor(mapping.GroupVersionKind.GroupKind()) +} diff --git a/vendor/k8s.io/kubectl/pkg/polymorphichelpers/updatepodspec.go b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/updatepodspec.go new file mode 100644 index 000000000..f386447c1 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/polymorphichelpers/updatepodspec.go @@ -0,0 +1,90 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. 
+*/ + +package polymorphichelpers + +import ( + "fmt" + + appsv1 "k8s.io/api/apps/v1" + appsv1beta1 "k8s.io/api/apps/v1beta1" + appsv1beta2 "k8s.io/api/apps/v1beta2" + batchv1 "k8s.io/api/batch/v1" + batchv1beta1 "k8s.io/api/batch/v1beta1" + "k8s.io/api/core/v1" + extensionsv1beta1 "k8s.io/api/extensions/v1beta1" + "k8s.io/apimachinery/pkg/runtime" +) + +func updatePodSpecForObject(obj runtime.Object, fn func(*v1.PodSpec) error) (bool, error) { + switch t := obj.(type) { + case *v1.Pod: + return true, fn(&t.Spec) + // ReplicationController + case *v1.ReplicationController: + if t.Spec.Template == nil { + t.Spec.Template = &v1.PodTemplateSpec{} + } + return true, fn(&t.Spec.Template.Spec) + + // Deployment + case *extensionsv1beta1.Deployment: + return true, fn(&t.Spec.Template.Spec) + case *appsv1beta1.Deployment: + return true, fn(&t.Spec.Template.Spec) + case *appsv1beta2.Deployment: + return true, fn(&t.Spec.Template.Spec) + case *appsv1.Deployment: + return true, fn(&t.Spec.Template.Spec) + + // DaemonSet + case *extensionsv1beta1.DaemonSet: + return true, fn(&t.Spec.Template.Spec) + case *appsv1beta2.DaemonSet: + return true, fn(&t.Spec.Template.Spec) + case *appsv1.DaemonSet: + return true, fn(&t.Spec.Template.Spec) + + // ReplicaSet + case *extensionsv1beta1.ReplicaSet: + return true, fn(&t.Spec.Template.Spec) + case *appsv1beta2.ReplicaSet: + return true, fn(&t.Spec.Template.Spec) + case *appsv1.ReplicaSet: + return true, fn(&t.Spec.Template.Spec) + + // StatefulSet + case *appsv1beta1.StatefulSet: + return true, fn(&t.Spec.Template.Spec) + case *appsv1beta2.StatefulSet: + return true, fn(&t.Spec.Template.Spec) + case *appsv1.StatefulSet: + return true, fn(&t.Spec.Template.Spec) + + // Job + case *batchv1.Job: + return true, fn(&t.Spec.Template.Spec) + + // CronJob + case *batchv1beta1.CronJob: + return true, fn(&t.Spec.JobTemplate.Spec.Template.Spec) + case *batchv1.CronJob: + return true, fn(&t.Spec.JobTemplate.Spec.Template.Spec) + + default: + return false, fmt.Errorf("the object is not a pod or does not have a pod template: %T", t) + } +} diff --git a/vendor/k8s.io/kubectl/pkg/util/apply.go b/vendor/k8s.io/kubectl/pkg/util/apply.go new file mode 100644 index 000000000..77ea59384 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/util/apply.go @@ -0,0 +1,146 @@ +/* +Copyright 2014 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package util + +import ( + "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/meta" + "k8s.io/apimachinery/pkg/runtime" +) + +var metadataAccessor = meta.NewAccessor() + +// GetOriginalConfiguration retrieves the original configuration of the object +// from the annotation, or nil if no annotation was found. 
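+// The configuration is read from the
+// "kubectl.kubernetes.io/last-applied-configuration" annotation
+// (v1.LastAppliedConfigAnnotation).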
+func GetOriginalConfiguration(obj runtime.Object) ([]byte, error) { + annots, err := metadataAccessor.Annotations(obj) + if err != nil { + return nil, err + } + + if annots == nil { + return nil, nil + } + + original, ok := annots[v1.LastAppliedConfigAnnotation] + if !ok { + return nil, nil + } + + return []byte(original), nil +} + +// SetOriginalConfiguration sets the original configuration of the object +// as the annotation on the object for later use in computing a three way patch. +func setOriginalConfiguration(obj runtime.Object, original []byte) error { + if len(original) < 1 { + return nil + } + + annots, err := metadataAccessor.Annotations(obj) + if err != nil { + return err + } + + if annots == nil { + annots = map[string]string{} + } + + annots[v1.LastAppliedConfigAnnotation] = string(original) + return metadataAccessor.SetAnnotations(obj, annots) +} + +// GetModifiedConfiguration retrieves the modified configuration of the object. +// If annotate is true, it embeds the result as an annotation in the modified +// configuration. If an object was read from the command input, it will use that +// version of the object. Otherwise, it will use the version from the server. +func GetModifiedConfiguration(obj runtime.Object, annotate bool, codec runtime.Encoder) ([]byte, error) { + // First serialize the object without the annotation to prevent recursion, + // then add that serialization to it as the annotation and serialize it again. + var modified []byte + + // Otherwise, use the server side version of the object. + // Get the current annotations from the object. + annots, err := metadataAccessor.Annotations(obj) + if err != nil { + return nil, err + } + + if annots == nil { + annots = map[string]string{} + } + + original := annots[v1.LastAppliedConfigAnnotation] + delete(annots, v1.LastAppliedConfigAnnotation) + if err := metadataAccessor.SetAnnotations(obj, annots); err != nil { + return nil, err + } + + modified, err = runtime.Encode(codec, obj) + if err != nil { + return nil, err + } + + if annotate { + annots[v1.LastAppliedConfigAnnotation] = string(modified) + if err := metadataAccessor.SetAnnotations(obj, annots); err != nil { + return nil, err + } + + modified, err = runtime.Encode(codec, obj) + if err != nil { + return nil, err + } + } + + // Restore the object to its original condition. + annots[v1.LastAppliedConfigAnnotation] = original + if err := metadataAccessor.SetAnnotations(obj, annots); err != nil { + return nil, err + } + + return modified, nil +} + +// updateApplyAnnotation calls CreateApplyAnnotation if the last applied +// configuration annotation is already present. Otherwise, it does nothing. +func updateApplyAnnotation(obj runtime.Object, codec runtime.Encoder) error { + if original, err := GetOriginalConfiguration(obj); err != nil || len(original) <= 0 { + return err + } + return CreateApplyAnnotation(obj, codec) +} + +// CreateApplyAnnotation gets the modified configuration of the object, +// without embedding it again, and then sets it on the object as the annotation. 
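+// +// Illustrative use (a sketch mirroring how kubectl's create-style commands call it, assuming scheme.DefaultJSONEncoder() as the runtime.Encoder): +// +// if err := CreateApplyAnnotation(obj, scheme.DefaultJSONEncoder()); err != nil { +// return err +// }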
+func CreateApplyAnnotation(obj runtime.Object, codec runtime.Encoder) error { + modified, err := GetModifiedConfiguration(obj, false, codec) + if err != nil { + return err + } + return setOriginalConfiguration(obj, modified) +} + +// CreateOrUpdateAnnotation creates the annotation used by +// kubectl apply only when createAnnotation is true +// Otherwise, only update the annotation when it already exists +func CreateOrUpdateAnnotation(createAnnotation bool, obj runtime.Object, codec runtime.Encoder) error { + if createAnnotation { + return CreateApplyAnnotation(obj, codec) + } + return updateApplyAnnotation(obj, codec) +} diff --git a/vendor/k8s.io/kubectl/pkg/util/certificate/certificate.go b/vendor/k8s.io/kubectl/pkg/util/certificate/certificate.go new file mode 100644 index 000000000..c55e5963f --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/util/certificate/certificate.go @@ -0,0 +1,38 @@ +/* +Copyright 2016 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package certificate + +import ( + "crypto/x509" + "encoding/pem" + "errors" +) + +// TODO(yue9944882): Remove this helper package once it's copied to k/api + +// ParseCSR extracts the CSR from the API object and decodes it. +func ParseCSR(pemBytes []byte) (*x509.CertificateRequest, error) { + block, _ := pem.Decode(pemBytes) + if block == nil || block.Type != "CERTIFICATE REQUEST" { + return nil, errors.New("PEM block type must be CERTIFICATE REQUEST") + } + csr, err := x509.ParseCertificateRequest(block.Bytes) + if err != nil { + return nil, err + } + return csr, nil +} diff --git a/vendor/k8s.io/kubectl/pkg/util/completion/completion.go b/vendor/k8s.io/kubectl/pkg/util/completion/completion.go new file mode 100644 index 000000000..154c5e685 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/util/completion/completion.go @@ -0,0 +1,455 @@ +/* +Copyright 2021 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package completion + +import ( + "bytes" + "fmt" + "io/ioutil" + "os" + "strings" + "time" + + "github.com/spf13/cobra" + "k8s.io/apimachinery/pkg/api/meta" + "k8s.io/cli-runtime/pkg/genericclioptions" + "k8s.io/cli-runtime/pkg/printers" + "k8s.io/kubectl/pkg/cmd/apiresources" + "k8s.io/kubectl/pkg/cmd/get" + cmdutil "k8s.io/kubectl/pkg/cmd/util" + "k8s.io/kubectl/pkg/polymorphichelpers" + "k8s.io/kubectl/pkg/scheme" +) + +var factory cmdutil.Factory + +// SetFactoryForCompletion Store the factory which is needed by the completion functions. 
+// Not all commands have access to the factory, so cannot pass it to the completion functions. +func SetFactoryForCompletion(f cmdutil.Factory) { + factory = f +} + +// ResourceTypeAndNameCompletionFunc Returns a completion function that completes resource types +// and resource names that match the toComplete prefix. It supports the <type>/<name> form. +func ResourceTypeAndNameCompletionFunc(f cmdutil.Factory) func(*cobra.Command, []string, string) ([]string, cobra.ShellCompDirective) { + return resourceTypeAndNameCompletionFunc(f, nil, true) +} + +// SpecifiedResourceTypeAndNameCompletionFunc Returns a completion function that completes resource +// types limited to the specified allowedTypes, and resource names that match the toComplete prefix. +// It allows for multiple resources. It supports the <type>/<name> form. +func SpecifiedResourceTypeAndNameCompletionFunc(f cmdutil.Factory, allowedTypes []string) func(*cobra.Command, []string, string) ([]string, cobra.ShellCompDirective) { + return resourceTypeAndNameCompletionFunc(f, allowedTypes, true) +} + +// SpecifiedResourceTypeAndNameNoRepeatCompletionFunc Returns a completion function that completes resource +// types limited to the specified allowedTypes, and resource names that match the toComplete prefix. +// It only allows for one resource. It supports the <type>/<name> form. +func SpecifiedResourceTypeAndNameNoRepeatCompletionFunc(f cmdutil.Factory, allowedTypes []string) func(*cobra.Command, []string, string) ([]string, cobra.ShellCompDirective) { + return resourceTypeAndNameCompletionFunc(f, allowedTypes, false) +} + +// ResourceNameCompletionFunc Returns a completion function that completes as a first argument +// the resource names specified by the resourceType parameter, and which match the toComplete prefix. +// This function does NOT support the <type>/<name> form: it is meant to be used by commands +// that don't support that form. For commands that apply to pods and that support the <type>/<name> +// form, please use PodResourceNameCompletionFunc() +func ResourceNameCompletionFunc(f cmdutil.Factory, resourceType string) func(*cobra.Command, []string, string) ([]string, cobra.ShellCompDirective) { + return func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { + var comps []string + if len(args) == 0 { + comps = CompGetResource(f, cmd, resourceType, toComplete) + } + return comps, cobra.ShellCompDirectiveNoFileComp + } +} + +// PodResourceNameCompletionFunc Returns a completion function that completes: +// 1- pod names that match the toComplete prefix +// 2- resource types containing pods which match the toComplete prefix +func PodResourceNameCompletionFunc(f cmdutil.Factory) func(*cobra.Command, []string, string) ([]string, cobra.ShellCompDirective) { + return func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { + var comps []string + directive := cobra.ShellCompDirectiveNoFileComp + if len(args) == 0 { + comps, directive = doPodResourceCompletion(f, cmd, toComplete) + } + return comps, directive + } +} + +// PodResourceNameAndContainerCompletionFunc Returns a completion function that completes, as a first argument: +// 1- pod names that match the toComplete prefix +// 2- resource types containing pods which match the toComplete prefix +// and as a second argument the containers within the specified pod.
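+// +// Illustrative wiring (hypothetical command; a sketch, not taken from this file): +// +// cmd := &cobra.Command{Use: "exec POD [-c CONTAINER]"} +// cmd.ValidArgsFunction = PodResourceNameAndContainerCompletionFunc(f)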
+func PodResourceNameAndContainerCompletionFunc(f cmdutil.Factory) func(*cobra.Command, []string, string) ([]string, cobra.ShellCompDirective) { + return func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { + var comps []string + directive := cobra.ShellCompDirectiveNoFileComp + if len(args) == 0 { + comps, directive = doPodResourceCompletion(f, cmd, toComplete) + } else if len(args) == 1 { + podName := convertResourceNameToPodName(f, args[0]) + comps = CompGetContainers(f, cmd, podName, toComplete) + } + return comps, directive + } +} + +// ContainerCompletionFunc Returns a completion function that completes the containers within the +// pod specified by the first argument. The resource containing the pod can be specified in +// the <type>/<name> form. +func ContainerCompletionFunc(f cmdutil.Factory) func(*cobra.Command, []string, string) ([]string, cobra.ShellCompDirective) { + return func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { + var comps []string + // We need the pod name to be able to complete the container names, it must be in args[0]. + // That first argument can also be of the form <type>/<name> so we need to convert it. + if len(args) > 0 { + podName := convertResourceNameToPodName(f, args[0]) + comps = CompGetContainers(f, cmd, podName, toComplete) + } + return comps, cobra.ShellCompDirectiveNoFileComp + } +} + +// ContextCompletionFunc is a completion function that completes as a first argument the +// context names that match the toComplete prefix +func ContextCompletionFunc(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { + if len(args) == 0 { + return ListContextsInConfig(toComplete), cobra.ShellCompDirectiveNoFileComp + } + return nil, cobra.ShellCompDirectiveNoFileComp +} + +// ClusterCompletionFunc is a completion function that completes as a first argument the +// cluster names that match the toComplete prefix +func ClusterCompletionFunc(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { + if len(args) == 0 { + return ListClustersInConfig(toComplete), cobra.ShellCompDirectiveNoFileComp + } + return nil, cobra.ShellCompDirectiveNoFileComp +} + +// UserCompletionFunc is a completion function that completes as a first argument the +// user names that match the toComplete prefix +func UserCompletionFunc(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { + if len(args) == 0 { + return ListUsersInConfig(toComplete), cobra.ShellCompDirectiveNoFileComp + } + return nil, cobra.ShellCompDirectiveNoFileComp +} + +// CompGetResource gets the list of the resource specified which begin with `toComplete`. +func CompGetResource(f cmdutil.Factory, cmd *cobra.Command, resourceName string, toComplete string) []string { + template := "{{ range .items }}{{ .metadata.name }} {{ end }}" + return CompGetFromTemplate(&template, f, "", cmd, []string{resourceName}, toComplete) +} + +// CompGetContainers gets the list of containers of the specified pod which begin with `toComplete`.
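+// +// For example (illustrative values), completing the container flag for pod "web" with prefix "ng" calls CompGetContainers(f, cmd, "web", "ng") and could return []string{"nginx"}.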
+func CompGetContainers(f cmdutil.Factory, cmd *cobra.Command, podName string, toComplete string) []string { + template := "{{ range .spec.initContainers }}{{ .name }} {{end}}{{ range .spec.containers }}{{ .name }} {{ end }}" + return CompGetFromTemplate(&template, f, "", cmd, []string{"pod", podName}, toComplete) +} + +// CompGetFromTemplate executes a Get operation using the specified template and args and returns the results +// which begin with `toComplete`. +func CompGetFromTemplate(template *string, f cmdutil.Factory, namespace string, cmd *cobra.Command, args []string, toComplete string) []string { + buf := new(bytes.Buffer) + streams := genericclioptions.IOStreams{In: os.Stdin, Out: buf, ErrOut: ioutil.Discard} + o := get.NewGetOptions("kubectl", streams) + + // Get the list of names of the specified resource + o.PrintFlags.TemplateFlags.GoTemplatePrintFlags.TemplateArgument = template + format := "go-template" + o.PrintFlags.OutputFormat = &format + + // Do the steps Complete() would have done. + // We cannot actually call Complete() or Validate() as these functions check for + // the presence of flags, which, in our case, won't be there + if namespace != "" { + o.Namespace = namespace + o.ExplicitNamespace = true + } else { + var err error + o.Namespace, o.ExplicitNamespace, err = f.ToRawKubeConfigLoader().Namespace() + if err != nil { + return nil + } + } + + o.ToPrinter = func(mapping *meta.RESTMapping, outputObjects *bool, withNamespace bool, withKind bool) (printers.ResourcePrinterFunc, error) { + printer, err := o.PrintFlags.ToPrinter() + if err != nil { + return nil, err + } + return printer.PrintObj, nil + } + + o.Run(f, cmd, args) + + var comps []string + resources := strings.Split(buf.String(), " ") + for _, res := range resources { + if res != "" && strings.HasPrefix(res, toComplete) { + comps = append(comps, res) + } + } + return comps +} + +// ListContextsInConfig returns a list of context names which begin with `toComplete` +func ListContextsInConfig(toComplete string) []string { + config, err := factory.ToRawKubeConfigLoader().RawConfig() + if err != nil { + return nil + } + var ret []string + for name := range config.Contexts { + if strings.HasPrefix(name, toComplete) { + ret = append(ret, name) + } + } + return ret +} + +// ListClustersInConfig returns a list of cluster names which begin with `toComplete` +func ListClustersInConfig(toComplete string) []string { + config, err := factory.ToRawKubeConfigLoader().RawConfig() + if err != nil { + return nil + } + var ret []string + for name := range config.Clusters { + if strings.HasPrefix(name, toComplete) { + ret = append(ret, name) + } + } + return ret +} + +// ListUsersInConfig returns a list of user names which begin with `toComplete` +func ListUsersInConfig(toComplete string) []string { + config, err := factory.ToRawKubeConfigLoader().RawConfig() + if err != nil { + return nil + } + var ret []string + for name := range config.AuthInfos { + if strings.HasPrefix(name, toComplete) { + ret = append(ret, name) + } + } + return ret +} + +// compGetResourceList returns the list of api resources which begin with `toComplete`.
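+// +// For example (illustrative values), toComplete=="po" could yield []string{"pods"}, while toComplete=="pods,sec" completes only the element after the last comma and could yield []string{"pods,secrets"}.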
+func compGetResourceList(restClientGetter genericclioptions.RESTClientGetter, cmd *cobra.Command, toComplete string) []string { + buf := new(bytes.Buffer) + streams := genericclioptions.IOStreams{In: os.Stdin, Out: buf, ErrOut: ioutil.Discard} + o := apiresources.NewAPIResourceOptions(streams) + + o.Complete(restClientGetter, cmd, nil) + + // Get the list of resources + o.Output = "name" + o.Cached = true + o.Verbs = []string{"get"} + // TODO: Should set --request-timeout=5s + + // Ignore errors as the output may still be valid + o.RunAPIResources() + + // Resources can be a comma-separated list. The last element is then + // the one we should complete. For example if toComplete=="pods,secre" + // we should return "pods,secrets" + prefix := "" + suffix := toComplete + lastIdx := strings.LastIndex(toComplete, ",") + if lastIdx != -1 { + prefix = toComplete[0 : lastIdx+1] + suffix = toComplete[lastIdx+1:] + } + var comps []string + resources := strings.Split(buf.String(), "\n") + for _, res := range resources { + if res != "" && strings.HasPrefix(res, suffix) { + comps = append(comps, fmt.Sprintf("%s%s", prefix, res)) + } + } + return comps +} + +// resourceTypeAndNameCompletionFunc Returns a completion function that completes resource types +// and resource names that match the toComplete prefix. It supports the <type>/<name> form. +func resourceTypeAndNameCompletionFunc(f cmdutil.Factory, allowedTypes []string, allowRepeat bool) func(*cobra.Command, []string, string) ([]string, cobra.ShellCompDirective) { + return func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { + var comps []string + directive := cobra.ShellCompDirectiveNoFileComp + + if len(args) > 0 && !strings.Contains(args[0], "/") { + // The first argument is of the form <type> (e.g., pods) + // All following arguments should be a resource name. + if allowRepeat || len(args) == 1 { + comps = CompGetResource(f, cmd, args[0], toComplete) + + // Remove choices already on the command-line + if len(args) > 1 { + comps = cmdutil.Difference(comps, args[1:]) + } + } + } else { + slashIdx := strings.Index(toComplete, "/") + if slashIdx == -1 { + if len(args) == 0 { + // We are completing the first argument. We default to the normal + // form (not the form <type>/<name>). + // So we suggest resource types and let the shell add a space after + // the completion. + if len(allowedTypes) == 0 { + comps = compGetResourceList(f, cmd, toComplete) + } else { + for _, c := range allowedTypes { + if strings.HasPrefix(c, toComplete) { + comps = append(comps, c) + } + } + } + } else { + // Here we know the first argument contains a / (<type>/<name>). + // All other arguments must also use that form. + if allowRepeat { + // Since toComplete does not already contain a / we know we are completing a + // resource type. Disable adding a space after the completion, and add the / + directive |= cobra.ShellCompDirectiveNoSpace + + if len(allowedTypes) == 0 { + typeComps := compGetResourceList(f, cmd, toComplete) + for _, c := range typeComps { + comps = append(comps, fmt.Sprintf("%s/", c)) + } + } else { + for _, c := range allowedTypes { + if strings.HasPrefix(c, toComplete) { + comps = append(comps, fmt.Sprintf("%s/", c)) + } + } + } + } + } + } else { + // We are completing an argument of the form <type>/<name> + // and since the / is already present, we are completing the resource name.
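+ // For example, toComplete=="pod/ng" is split into resource type "pod" and name + // prefix "ng", and each completion below is re-prefixed as "pod/<name>".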
+ if allowRepeat || len(args) == 0 { + resourceType := toComplete[:slashIdx] + toComplete = toComplete[slashIdx+1:] + nameComps := CompGetResource(f, cmd, resourceType, toComplete) + for _, c := range nameComps { + comps = append(comps, fmt.Sprintf("%s/%s", resourceType, c)) + } + + // Remove choices already on the command-line. + if len(args) > 0 { + comps = cmdutil.Difference(comps, args[0:]) + } + } + } + } + return comps, directive + } +} + +// doPodResourceCompletion Returns completions of: +// 1- pod names that match the toComplete prefix +// 2- resource types containing pods which match the toComplete prefix +func doPodResourceCompletion(f cmdutil.Factory, cmd *cobra.Command, toComplete string) ([]string, cobra.ShellCompDirective) { + var comps []string + directive := cobra.ShellCompDirectiveNoFileComp + slashIdx := strings.Index(toComplete, "/") + if slashIdx == -1 { + // Standard case, complete pod names + comps = CompGetResource(f, cmd, "pod", toComplete) + + // Also include resource choices for the <type>/ form, + // but only for resources that contain pods + resourcesWithPods := []string{ + "daemonsets", + "deployments", + "pods", + "jobs", + "replicasets", + "replicationcontrollers", + "services", + "statefulsets"} + + if len(comps) == 0 { + // If there are no pods to complete, we will only be completing + // <type>/. We should disable adding a space after the /. + directive |= cobra.ShellCompDirectiveNoSpace + } + + for _, resource := range resourcesWithPods { + if strings.HasPrefix(resource, toComplete) { + comps = append(comps, fmt.Sprintf("%s/", resource)) + } + } + } else { + // Dealing with the <type>/<name> form, use the specified resource type + resourceType := toComplete[:slashIdx] + toComplete = toComplete[slashIdx+1:] + nameComps := CompGetResource(f, cmd, resourceType, toComplete) + for _, c := range nameComps { + comps = append(comps, fmt.Sprintf("%s/%s", resourceType, c)) + } + } + return comps, directive +} + +// convertResourceNameToPodName Converts a resource name to a pod name. +// If the resource name is of the form <type>/<name>, we use +// polymorphichelpers.AttachablePodForObjectFn(), if not, the resource name +// is already a pod name. +func convertResourceNameToPodName(f cmdutil.Factory, resourceName string) string { + var podName string + if !strings.Contains(resourceName, "/") { + // When we don't have the <type>/<name> form, the resource name is the pod name + podName = resourceName + } else { + // if the resource name is of the form <type>/<name>, we need to convert it to a pod name + ns, _, err := f.ToRawKubeConfigLoader().Namespace() + if err != nil { + return "" + } + + resourceWithPod, err := f.NewBuilder(). + WithScheme(scheme.Scheme, scheme.Scheme.PrioritizedVersionsAllGroups()...). + ContinueOnError(). + NamespaceParam(ns).DefaultNamespace(). + ResourceNames("pods", resourceName). + Do().Object() + if err != nil { + return "" + } + + // For shell completion, use a short timeout + forwardablePod, err := polymorphichelpers.AttachablePodForObjectFn(f, resourceWithPod, 100*time.Millisecond) + if err != nil { + return "" + } + podName = forwardablePod.Name + } + return podName +} diff --git a/vendor/k8s.io/kubectl/pkg/util/deployment/deployment.go b/vendor/k8s.io/kubectl/pkg/util/deployment/deployment.go new file mode 100644 index 000000000..f0352d9ef --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/util/deployment/deployment.go @@ -0,0 +1,257 @@ +/* +Copyright 2016 The Kubernetes Authors.
+ +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package deployment + +import ( + "context" + "sort" + "strconv" + + appsv1 "k8s.io/api/apps/v1" + corev1 "k8s.io/api/core/v1" + apiequality "k8s.io/apimachinery/pkg/api/equality" + "k8s.io/apimachinery/pkg/api/meta" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" + intstrutil "k8s.io/apimachinery/pkg/util/intstr" + runtimeresource "k8s.io/cli-runtime/pkg/resource" + appsclient "k8s.io/client-go/kubernetes/typed/apps/v1" +) + +const ( + // RevisionAnnotation is the revision annotation of a deployment's replica sets which records its rollout sequence + RevisionAnnotation = "deployment.kubernetes.io/revision" + // RevisionHistoryAnnotation maintains the history of all old revisions that a replica set has served for a deployment. + RevisionHistoryAnnotation = "deployment.kubernetes.io/revision-history" + // DesiredReplicasAnnotation is the desired replicas for a deployment recorded as an annotation + // in its replica sets. Helps in separating scaling events from the rollout process and for + // determining if the new replica set for a deployment is really saturated. + DesiredReplicasAnnotation = "deployment.kubernetes.io/desired-replicas" + // MaxReplicasAnnotation is the maximum replicas a deployment can have at a given point, which + // is deployment.spec.replicas + maxSurge. Used by the underlying replica sets to estimate their + // proportions in case the deployment has surge replicas. + MaxReplicasAnnotation = "deployment.kubernetes.io/max-replicas" + // RollbackRevisionNotFound is not found rollback event reason + RollbackRevisionNotFound = "DeploymentRollbackRevisionNotFound" + // RollbackTemplateUnchanged is the template unchanged rollback event reason + RollbackTemplateUnchanged = "DeploymentRollbackTemplateUnchanged" + // RollbackDone is the done rollback event reason + RollbackDone = "DeploymentRollback" + // TimedOutReason is added in a deployment when its newest replica set fails to show any progress + // within the given deadline (progressDeadlineSeconds). + TimedOutReason = "ProgressDeadlineExceeded" +) + +// GetDeploymentCondition returns the condition with the provided type. +func GetDeploymentCondition(status appsv1.DeploymentStatus, condType appsv1.DeploymentConditionType) *appsv1.DeploymentCondition { + for i := range status.Conditions { + c := status.Conditions[i] + if c.Type == condType { + return &c + } + } + return nil +} + +// Revision returns the revision number of the input object. +func Revision(obj runtime.Object) (int64, error) { + acc, err := meta.Accessor(obj) + if err != nil { + return 0, err + } + v, ok := acc.GetAnnotations()[RevisionAnnotation] + if !ok { + return 0, nil + } + return strconv.ParseInt(v, 10, 64) +} + +// GetAllReplicaSets returns the old and new replica sets targeted by the given Deployment. It gets PodList and +// ReplicaSetList from client interface. 
Note that the first set of old replica sets doesn't include the ones +// with no pods, and the second set of old replica sets include all old replica sets. The third returned value +// is the new replica set, and it may be nil if it doesn't exist yet. +func GetAllReplicaSets(deployment *appsv1.Deployment, c appsclient.AppsV1Interface) ([]*appsv1.ReplicaSet, []*appsv1.ReplicaSet, *appsv1.ReplicaSet, error) { + rsList, err := listReplicaSets(deployment, rsListFromClient(c), nil) + if err != nil { + return nil, nil, nil, err + } + newRS := findNewReplicaSet(deployment, rsList) + oldRSes, allOldRSes := findOldReplicaSets(deployment, rsList, newRS) + return oldRSes, allOldRSes, newRS, nil +} + +// GetAllReplicaSetsInChunks is the same as GetAllReplicaSets, but accepts a chunk size argument. +// It returns the old and new replica sets targeted by the given Deployment. It gets PodList and +// ReplicaSetList from client interface. Note that the first set of old replica sets doesn't include the ones +// with no pods, and the second set of old replica sets include all old replica sets. The third returned value +// is the new replica set, and it may be nil if it doesn't exist yet. +func GetAllReplicaSetsInChunks(deployment *appsv1.Deployment, c appsclient.AppsV1Interface, chunkSize int64) ([]*appsv1.ReplicaSet, []*appsv1.ReplicaSet, *appsv1.ReplicaSet, error) { + rsList, err := listReplicaSets(deployment, rsListFromClient(c), &chunkSize) + if err != nil { + return nil, nil, nil, err + } + newRS := findNewReplicaSet(deployment, rsList) + oldRSes, allOldRSes := findOldReplicaSets(deployment, rsList, newRS) + return oldRSes, allOldRSes, newRS, nil +} + +// RsListFromClient returns an rsListFunc that wraps the given client. +func rsListFromClient(c appsclient.AppsV1Interface) rsListFunc { + return func(namespace string, initialOpts metav1.ListOptions) ([]*appsv1.ReplicaSet, error) { + rsList := &appsv1.ReplicaSetList{} + err := runtimeresource.FollowContinue(&initialOpts, + func(opts metav1.ListOptions) (runtime.Object, error) { + newRs, err := c.ReplicaSets(namespace).List(context.TODO(), opts) + if err != nil { + return nil, runtimeresource.EnhanceListError(err, opts, "replicasets") + } + rsList.Items = append(rsList.Items, newRs.Items...) + return newRs, nil + }) + if err != nil { + return nil, err + } + var ret []*appsv1.ReplicaSet + for i := range rsList.Items { + ret = append(ret, &rsList.Items[i]) + } + return ret, err + } +} + +// TODO: switch this to full namespacers +type rsListFunc func(string, metav1.ListOptions) ([]*appsv1.ReplicaSet, error) + +// listReplicaSets returns a slice of RSes the given deployment targets. +// Note that this does NOT attempt to reconcile ControllerRef (adopt/orphan), +// because only the controller itself should do that. +// However, it does filter out anything whose ControllerRef doesn't match. +func listReplicaSets(deployment *appsv1.Deployment, getRSList rsListFunc, chunkSize *int64) ([]*appsv1.ReplicaSet, error) { + // TODO: Right now we list replica sets by their labels. We should list them by selector, i.e. the replica set's selector + // should be a superset of the deployment's selector, see https://github.com/kubernetes/kubernetes/issues/19830. 
+ namespace := deployment.Namespace + selector, err := metav1.LabelSelectorAsSelector(deployment.Spec.Selector) + if err != nil { + return nil, err + } + options := metav1.ListOptions{LabelSelector: selector.String()} + if chunkSize != nil { + options.Limit = *chunkSize + } + all, err := getRSList(namespace, options) + if err != nil { + return nil, err + } + // Only include those whose ControllerRef matches the Deployment. + owned := make([]*appsv1.ReplicaSet, 0, len(all)) + for _, rs := range all { + if metav1.IsControlledBy(rs, deployment) { + owned = append(owned, rs) + } + } + return owned, nil +} + +// EqualIgnoreHash returns true if two given podTemplateSpec are equal, ignoring the diff in value of Labels[pod-template-hash] +// We ignore pod-template-hash because: +// 1. The hash result would be different upon podTemplateSpec API changes +// (e.g. the addition of a new field will cause the hash code to change) +// 2. The deployment template won't have hash labels +func equalIgnoreHash(template1, template2 *corev1.PodTemplateSpec) bool { + t1Copy := template1.DeepCopy() + t2Copy := template2.DeepCopy() + // Remove hash labels from template.Labels before comparing + delete(t1Copy.Labels, appsv1.DefaultDeploymentUniqueLabelKey) + delete(t2Copy.Labels, appsv1.DefaultDeploymentUniqueLabelKey) + return apiequality.Semantic.DeepEqual(t1Copy, t2Copy) +} + +// FindNewReplicaSet returns the new RS this given deployment targets (the one with the same pod template). +func findNewReplicaSet(deployment *appsv1.Deployment, rsList []*appsv1.ReplicaSet) *appsv1.ReplicaSet { + sort.Sort(replicaSetsByCreationTimestamp(rsList)) + for i := range rsList { + if equalIgnoreHash(&rsList[i].Spec.Template, &deployment.Spec.Template) { + // In rare cases, such as after cluster upgrades, Deployment may end up with + // having more than one new ReplicaSets that have the same template as its template, + // see https://github.com/kubernetes/kubernetes/issues/40415 + // We deterministically choose the oldest new ReplicaSet. + return rsList[i] + } + } + // new ReplicaSet does not exist. + return nil +} + +// replicaSetsByCreationTimestamp sorts a list of ReplicaSet by creation timestamp, using their names as a tie breaker. +type replicaSetsByCreationTimestamp []*appsv1.ReplicaSet + +func (o replicaSetsByCreationTimestamp) Len() int { return len(o) } +func (o replicaSetsByCreationTimestamp) Swap(i, j int) { o[i], o[j] = o[j], o[i] } +func (o replicaSetsByCreationTimestamp) Less(i, j int) bool { + if o[i].CreationTimestamp.Equal(&o[j].CreationTimestamp) { + return o[i].Name < o[j].Name + } + return o[i].CreationTimestamp.Before(&o[j].CreationTimestamp) +} + +// FindOldReplicaSets returns the old replica sets targeted by the given Deployment, with the given slice of RSes. +// Note that the first set of old replica sets doesn't include the ones with no pods, and the second set of old replica sets include all old replica sets. +func findOldReplicaSets(deployment *appsv1.Deployment, rsList []*appsv1.ReplicaSet, newRS *appsv1.ReplicaSet) ([]*appsv1.ReplicaSet, []*appsv1.ReplicaSet) { + var requiredRSs []*appsv1.ReplicaSet + var allRSs []*appsv1.ReplicaSet + for _, rs := range rsList { + // Filter out new replica set + if newRS != nil && rs.UID == newRS.UID { + continue + } + allRSs = append(allRSs, rs) + if *(rs.Spec.Replicas) != 0 { + requiredRSs = append(requiredRSs, rs) + } + } + return requiredRSs, allRSs +} + +// ResolveFenceposts resolves both maxSurge and maxUnavailable. This needs to happen in one +// step.
For example: +// +// 2 desired, max unavailable 1%, surge 0% - should scale old(-1), then new(+1), then old(-1), then new(+1) +// 1 desired, max unavailable 1%, surge 0% - should scale old(-1), then new(+1) +// 2 desired, max unavailable 25%, surge 1% - should scale new(+1), then old(-1), then new(+1), then old(-1) +// 1 desired, max unavailable 25%, surge 1% - should scale new(+1), then old(-1) +// 2 desired, max unavailable 0%, surge 1% - should scale new(+1), then old(-1), then new(+1), then old(-1) +// 1 desired, max unavailable 0%, surge 1% - should scale new(+1), then old(-1) +func ResolveFenceposts(maxSurge, maxUnavailable *intstrutil.IntOrString, desired int32) (int32, int32, error) { + surge, err := intstrutil.GetScaledValueFromIntOrPercent(intstrutil.ValueOrDefault(maxSurge, intstrutil.FromInt(0)), int(desired), true) + if err != nil { + return 0, 0, err + } + unavailable, err := intstrutil.GetScaledValueFromIntOrPercent(intstrutil.ValueOrDefault(maxUnavailable, intstrutil.FromInt(0)), int(desired), false) + if err != nil { + return 0, 0, err + } + + if surge == 0 && unavailable == 0 { + // Validation should never allow the user to explicitly use zero values for both maxSurge + // maxUnavailable. Due to rounding down maxUnavailable though, it may resolve to zero. + // If both fenceposts resolve to zero, then we should set maxUnavailable to 1 on the + // theory that surge might not work due to quota. + unavailable = 1 + } + + return int32(surge), int32(unavailable), nil +} diff --git a/vendor/k8s.io/kubectl/pkg/util/event/sorted_event_list.go b/vendor/k8s.io/kubectl/pkg/util/event/sorted_event_list.go new file mode 100644 index 000000000..9967f953e --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/util/event/sorted_event_list.go @@ -0,0 +1,36 @@ +/* +Copyright 2014 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package event + +import ( + corev1 "k8s.io/api/core/v1" +) + +// SortableEvents implements sort.Interface for []api.Event based on the Timestamp field +type SortableEvents []corev1.Event + +func (list SortableEvents) Len() int { + return len(list) +} + +func (list SortableEvents) Swap(i, j int) { + list[i], list[j] = list[j], list[i] +} + +func (list SortableEvents) Less(i, j int) bool { + return list[i].LastTimestamp.Time.Before(list[j].LastTimestamp.Time) +} diff --git a/vendor/k8s.io/kubectl/pkg/util/fieldpath/fieldpath.go b/vendor/k8s.io/kubectl/pkg/util/fieldpath/fieldpath.go new file mode 100644 index 000000000..af512af8c --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/util/fieldpath/fieldpath.go @@ -0,0 +1,112 @@ +/* +Copyright 2015 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package fieldpath + +import ( + "fmt" + "strings" + + "k8s.io/apimachinery/pkg/api/meta" + "k8s.io/apimachinery/pkg/util/sets" + "k8s.io/apimachinery/pkg/util/validation" +) + +// TODO(yue9944882): Remove this helper package once it's copied to k/apimachinery + +// FormatMap formats map[string]string to a string. +func FormatMap(m map[string]string) (fmtStr string) { + // output with keys in sorted order to provide stable output + keys := sets.NewString() + for key := range m { + keys.Insert(key) + } + for _, key := range keys.List() { + fmtStr += fmt.Sprintf("%v=%q\n", key, m[key]) + } + fmtStr = strings.TrimSuffix(fmtStr, "\n") + + return +} + +// ExtractFieldPathAsString extracts the field from the given object +// and returns it as a string. The object must be a pointer to an +// API type. +func ExtractFieldPathAsString(obj interface{}, fieldPath string) (string, error) { + accessor, err := meta.Accessor(obj) + if err != nil { + return "", nil + } + + if path, subscript, ok := SplitMaybeSubscriptedPath(fieldPath); ok { + switch path { + case "metadata.annotations": + if errs := validation.IsQualifiedName(strings.ToLower(subscript)); len(errs) != 0 { + return "", fmt.Errorf("invalid key subscript in %s: %s", fieldPath, strings.Join(errs, ";")) + } + return accessor.GetAnnotations()[subscript], nil + case "metadata.labels": + if errs := validation.IsQualifiedName(subscript); len(errs) != 0 { + return "", fmt.Errorf("invalid key subscript in %s: %s", fieldPath, strings.Join(errs, ";")) + } + return accessor.GetLabels()[subscript], nil + default: + return "", fmt.Errorf("fieldPath %q does not support subscript", fieldPath) + } + } + + switch fieldPath { + case "metadata.annotations": + return FormatMap(accessor.GetAnnotations()), nil + case "metadata.labels": + return FormatMap(accessor.GetLabels()), nil + case "metadata.name": + return accessor.GetName(), nil + case "metadata.namespace": + return accessor.GetNamespace(), nil + case "metadata.uid": + return string(accessor.GetUID()), nil + } + + return "", fmt.Errorf("unsupported fieldPath: %v", fieldPath) +} + +// SplitMaybeSubscriptedPath checks whether the specified fieldPath is +// subscripted, and +// - if yes, this function splits the fieldPath into path and subscript, and +// returns (path, subscript, true). +// - if no, this function returns (fieldPath, "", false). 
+// +// Example inputs and outputs: +// +// "metadata.annotations['myKey']" --> ("metadata.annotations", "myKey", true) +// "metadata.annotations['a[b]c']" --> ("metadata.annotations", "a[b]c", true) +// "metadata.labels['']" --> ("metadata.labels", "", true) +// "metadata.labels" --> ("metadata.labels", "", false) +func SplitMaybeSubscriptedPath(fieldPath string) (string, string, bool) { + if !strings.HasSuffix(fieldPath, "']") { + return fieldPath, "", false + } + s := strings.TrimSuffix(fieldPath, "']") + parts := strings.SplitN(s, "['", 2) + if len(parts) < 2 { + return fieldPath, "", false + } + if len(parts[0]) == 0 { + return fieldPath, "", false + } + return parts[0], parts[1], true +} diff --git a/vendor/k8s.io/kubectl/pkg/util/pod_port.go b/vendor/k8s.io/kubectl/pkg/util/pod_port.go new file mode 100644 index 000000000..6d78501a8 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/util/pod_port.go @@ -0,0 +1,36 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package util + +import ( + "fmt" + + "k8s.io/api/core/v1" +) + +// LookupContainerPortNumberByName finds the containerPort number by its named port name +func LookupContainerPortNumberByName(pod v1.Pod, name string) (int32, error) { + for _, ctr := range pod.Spec.Containers { + for _, ctrportspec := range ctr.Ports { + if ctrportspec.Name == name { + return ctrportspec.ContainerPort, nil + } + } + } + + return int32(-1), fmt.Errorf("Pod '%s' does not have a named port '%s'", pod.Name, name) +} diff --git a/vendor/k8s.io/kubectl/pkg/util/podutils/podutils.go b/vendor/k8s.io/kubectl/pkg/util/podutils/podutils.go new file mode 100644 index 000000000..847eb7e88 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/util/podutils/podutils.go @@ -0,0 +1,188 @@ +/* +Copyright 2014 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package podutils + +import ( + "time" + + corev1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/utils/integer" +) + +// IsPodAvailable returns true if a pod is available; false otherwise. +// Precondition for an available pod is that it must be ready. On top +// of that, there are two cases when a pod can be considered available: +// 1. minReadySeconds == 0, or +// 2.
LastTransitionTime (is set) + minReadySeconds < current time +func IsPodAvailable(pod *corev1.Pod, minReadySeconds int32, now metav1.Time) bool { + if !IsPodReady(pod) { + return false + } + + c := getPodReadyCondition(pod.Status) + minReadySecondsDuration := time.Duration(minReadySeconds) * time.Second + if minReadySeconds == 0 || !c.LastTransitionTime.IsZero() && c.LastTransitionTime.Add(minReadySecondsDuration).Before(now.Time) { + return true + } + return false +} + +// IsPodReady returns true if a pod is ready; false otherwise. +func IsPodReady(pod *corev1.Pod) bool { + return isPodReadyConditionTrue(pod.Status) +} + +// IsPodReadyConditionTrue returns true if a pod is ready; false otherwise. +func isPodReadyConditionTrue(status corev1.PodStatus) bool { + condition := getPodReadyCondition(status) + return condition != nil && condition.Status == corev1.ConditionTrue +} + +// GetPodReadyCondition extracts the pod ready condition from the given status and returns that. +// Returns nil if the condition is not present. +func getPodReadyCondition(status corev1.PodStatus) *corev1.PodCondition { + _, condition := getPodCondition(&status, corev1.PodReady) + return condition +} + +// GetPodCondition extracts the provided condition from the given status and returns that. +// Returns nil and -1 if the condition is not present, and the index of the located condition. +func getPodCondition(status *corev1.PodStatus, conditionType corev1.PodConditionType) (int, *corev1.PodCondition) { + if status == nil { + return -1, nil + } + return getPodConditionFromList(status.Conditions, conditionType) +} + +// GetPodConditionFromList extracts the provided condition from the given list of condition and +// returns the index of the condition and the condition. Returns -1 and nil if the condition is not present. +func getPodConditionFromList(conditions []corev1.PodCondition, conditionType corev1.PodConditionType) (int, *corev1.PodCondition) { + if conditions == nil { + return -1, nil + } + for i := range conditions { + if conditions[i].Type == conditionType { + return i, &conditions[i] + } + } + return -1, nil +} + +// ByLogging allows custom sorting of pods so the best one can be picked for getting its logs. +type ByLogging []*corev1.Pod + +func (s ByLogging) Len() int { return len(s) } +func (s ByLogging) Swap(i, j int) { s[i], s[j] = s[j], s[i] } + +func (s ByLogging) Less(i, j int) bool { + // 1. assigned < unassigned + if s[i].Spec.NodeName != s[j].Spec.NodeName && (len(s[i].Spec.NodeName) == 0 || len(s[j].Spec.NodeName) == 0) { + return len(s[i].Spec.NodeName) > 0 + } + // 2. PodRunning < PodUnknown < PodPending + m := map[corev1.PodPhase]int{corev1.PodRunning: 0, corev1.PodUnknown: 1, corev1.PodPending: 2} + if m[s[i].Status.Phase] != m[s[j].Status.Phase] { + return m[s[i].Status.Phase] < m[s[j].Status.Phase] + } + // 3. ready < not ready + if IsPodReady(s[i]) != IsPodReady(s[j]) { + return IsPodReady(s[i]) + } + // TODO: take availability into account when we push minReadySeconds information from deployment into pods, + // see https://github.com/kubernetes/kubernetes/issues/22065 + // 4. Been ready for more time < less time < empty time + if IsPodReady(s[i]) && IsPodReady(s[j]) && !podReadyTime(s[i]).Equal(podReadyTime(s[j])) { + return afterOrZero(podReadyTime(s[j]), podReadyTime(s[i])) + } + // 5. 
Pods with containers with higher restart counts < lower restart counts + if maxContainerRestarts(s[i]) != maxContainerRestarts(s[j]) { + return maxContainerRestarts(s[i]) > maxContainerRestarts(s[j]) + } + // 6. older pods < newer pods < empty timestamp pods + if !s[i].CreationTimestamp.Equal(&s[j].CreationTimestamp) { + return afterOrZero(&s[j].CreationTimestamp, &s[i].CreationTimestamp) + } + return false +} + +// ActivePods type allows custom sorting of pods so a controller can pick the best ones to delete. +type ActivePods []*corev1.Pod + +func (s ActivePods) Len() int { return len(s) } +func (s ActivePods) Swap(i, j int) { s[i], s[j] = s[j], s[i] } + +func (s ActivePods) Less(i, j int) bool { + // 1. Unassigned < assigned + // If only one of the pods is unassigned, the unassigned one is smaller + if s[i].Spec.NodeName != s[j].Spec.NodeName && (len(s[i].Spec.NodeName) == 0 || len(s[j].Spec.NodeName) == 0) { + return len(s[i].Spec.NodeName) == 0 + } + // 2. PodPending < PodUnknown < PodRunning + m := map[corev1.PodPhase]int{corev1.PodPending: 0, corev1.PodUnknown: 1, corev1.PodRunning: 2} + if m[s[i].Status.Phase] != m[s[j].Status.Phase] { + return m[s[i].Status.Phase] < m[s[j].Status.Phase] + } + // 3. Not ready < ready + // If only one of the pods is not ready, the not ready one is smaller + if IsPodReady(s[i]) != IsPodReady(s[j]) { + return !IsPodReady(s[i]) + } + // TODO: take availability into account when we push minReadySeconds information from deployment into pods, + // see https://github.com/kubernetes/kubernetes/issues/22065 + // 4. Been ready for empty time < less time < more time + // If both pods are ready, the latest ready one is smaller + if IsPodReady(s[i]) && IsPodReady(s[j]) && !podReadyTime(s[i]).Equal(podReadyTime(s[j])) { + return afterOrZero(podReadyTime(s[i]), podReadyTime(s[j])) + } + // 5. Pods with containers with higher restart counts < lower restart counts + if maxContainerRestarts(s[i]) != maxContainerRestarts(s[j]) { + return maxContainerRestarts(s[i]) > maxContainerRestarts(s[j]) + } + // 6. Empty creation time pods < newer pods < older pods + if !s[i].CreationTimestamp.Equal(&s[j].CreationTimestamp) { + return afterOrZero(&s[i].CreationTimestamp, &s[j].CreationTimestamp) + } + return false +} + +// afterOrZero checks if time t1 is after time t2; if one of them +// is zero, the zero time is seen as after non-zero time. +func afterOrZero(t1, t2 *metav1.Time) bool { + if t1.Time.IsZero() || t2.Time.IsZero() { + return t1.Time.IsZero() + } + return t1.After(t2.Time) +} + +func podReadyTime(pod *corev1.Pod) *metav1.Time { + for _, c := range pod.Status.Conditions { + // we only care about pod ready conditions + if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue { + return &c.LastTransitionTime + } + } + return &metav1.Time{} +} + +func maxContainerRestarts(pod *corev1.Pod) int { + maxRestarts := 0 + for _, c := range pod.Status.ContainerStatuses { + maxRestarts = integer.IntMax(maxRestarts, int(c.RestartCount)) + } + return maxRestarts +} diff --git a/vendor/k8s.io/kubectl/pkg/util/qos/qos.go b/vendor/k8s.io/kubectl/pkg/util/qos/qos.go new file mode 100644 index 000000000..2715e6375 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/util/qos/qos.go @@ -0,0 +1,98 @@ +/* +Copyright 2015 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package qos + +import ( + core "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + "k8s.io/apimachinery/pkg/util/sets" +) + +var supportedQoSComputeResources = sets.NewString(string(core.ResourceCPU), string(core.ResourceMemory)) + +func isSupportedQoSComputeResource(name core.ResourceName) bool { + return supportedQoSComputeResources.Has(string(name)) +} + +// GetPodQOS returns the QoS class of a pod. +// A pod is besteffort if none of its containers have specified any requests or limits. +// A pod is guaranteed only when requests and limits are specified for all the containers and they are equal. +// A pod is burstable if limits and requests do not match across all containers. +func GetPodQOS(pod *core.Pod) core.PodQOSClass { + requests := core.ResourceList{} + limits := core.ResourceList{} + zeroQuantity := resource.MustParse("0") + isGuaranteed := true + allContainers := []core.Container{} + allContainers = append(allContainers, pod.Spec.Containers...) + allContainers = append(allContainers, pod.Spec.InitContainers...) + for _, container := range allContainers { + // process requests + for name, quantity := range container.Resources.Requests { + if !isSupportedQoSComputeResource(name) { + continue + } + if quantity.Cmp(zeroQuantity) == 1 { + delta := quantity.DeepCopy() + if _, exists := requests[name]; !exists { + requests[name] = delta + } else { + delta.Add(requests[name]) + requests[name] = delta + } + } + } + // process limits + qosLimitsFound := sets.NewString() + for name, quantity := range container.Resources.Limits { + if !isSupportedQoSComputeResource(name) { + continue + } + if quantity.Cmp(zeroQuantity) == 1 { + qosLimitsFound.Insert(string(name)) + delta := quantity.DeepCopy() + if _, exists := limits[name]; !exists { + limits[name] = delta + } else { + delta.Add(limits[name]) + limits[name] = delta + } + } + } + + if !qosLimitsFound.HasAll(string(core.ResourceMemory), string(core.ResourceCPU)) { + isGuaranteed = false + } + } + if len(requests) == 0 && len(limits) == 0 { + return core.PodQOSBestEffort + } + // Check if requests match limits for all resources. + if isGuaranteed { + for name, req := range requests { + if lim, exists := limits[name]; !exists || lim.Cmp(req) != 0 { + isGuaranteed = false + break + } + } + } + if isGuaranteed && + len(requests) == len(limits) { + return core.PodQOSGuaranteed + } + return core.PodQOSBurstable +} diff --git a/vendor/k8s.io/kubectl/pkg/util/rbac/rbac.go b/vendor/k8s.io/kubectl/pkg/util/rbac/rbac.go new file mode 100644 index 000000000..b5d1f2d52 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/util/rbac/rbac.go @@ -0,0 +1,135 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and +limitations under the License. +*/ + +package rbac + +import ( + rbacv1 "k8s.io/api/rbac/v1" + "k8s.io/apimachinery/pkg/util/sets" + "reflect" + "strings" +) + +type simpleResource struct { + Group string + Resource string + ResourceNameExist bool + ResourceName string +} + +// CompactRules combines rules that contain a single APIGroup/Resource, differ only by verb, and contain no other attributes. +// This is a fast check, and works well with the decomposed "missing rules" list from a Covers check. +func CompactRules(rules []rbacv1.PolicyRule) ([]rbacv1.PolicyRule, error) { + compacted := make([]rbacv1.PolicyRule, 0, len(rules)) + + simpleRules := map[simpleResource]*rbacv1.PolicyRule{} + for _, rule := range rules { + if resource, isSimple := isSimpleResourceRule(&rule); isSimple { + if existingRule, ok := simpleRules[resource]; ok { + // Add the new verbs to the existing simple resource rule + if existingRule.Verbs == nil { + existingRule.Verbs = []string{} + } + existingVerbs := sets.NewString(existingRule.Verbs...) + for _, verb := range rule.Verbs { + if !existingVerbs.Has(verb) { + existingRule.Verbs = append(existingRule.Verbs, verb) + } + } + + } else { + // Copy the rule to accumulate matching simple resource rules into + simpleRules[resource] = rule.DeepCopy() + } + } else { + compacted = append(compacted, rule) + } + } + + // Once we've consolidated the simple resource rules, add them to the compacted list + for _, simpleRule := range simpleRules { + compacted = append(compacted, *simpleRule) + } + + return compacted, nil +} + +// isSimpleResourceRule returns true if the given rule contains verbs, a single resource, a single API group, at most one Resource Name, and no other values +func isSimpleResourceRule(rule *rbacv1.PolicyRule) (simpleResource, bool) { + resource := simpleResource{} + + // If we have "complex" rule attributes, return early without allocations or expensive comparisons + if len(rule.ResourceNames) > 1 || len(rule.NonResourceURLs) > 0 { + return resource, false + } + // If we have multiple api groups or resources, return early + if len(rule.APIGroups) != 1 || len(rule.Resources) != 1 { + return resource, false + } + + // Test if this rule only contains APIGroups/Resources/Verbs/ResourceNames + simpleRule := &rbacv1.PolicyRule{APIGroups: rule.APIGroups, Resources: rule.Resources, Verbs: rule.Verbs, ResourceNames: rule.ResourceNames} + if !reflect.DeepEqual(simpleRule, rule) { + return resource, false + } + + if len(rule.ResourceNames) == 0 { + resource = simpleResource{Group: rule.APIGroups[0], Resource: rule.Resources[0], ResourceNameExist: false} + } else { + resource = simpleResource{Group: rule.APIGroups[0], Resource: rule.Resources[0], ResourceNameExist: true, ResourceName: rule.ResourceNames[0]} + } + + return resource, true +} + +// BreakdownRule takes a rule and builds an equivalent list of rules that each have at most one verb, one +// resource, and one resource name +func BreakdownRule(rule rbacv1.PolicyRule) []rbacv1.PolicyRule { + subrules := []rbacv1.PolicyRule{} + for _, group := range rule.APIGroups { + for _, resource := range rule.Resources { + for _, verb := range rule.Verbs { + if len(rule.ResourceNames) > 0 { + for _, resourceName := range rule.ResourceNames { + subrules = append(subrules, rbacv1.PolicyRule{APIGroups: []string{group}, Resources: []string{resource}, Verbs: []string{verb}, ResourceNames: []string{resourceName}}) + } + + } else { + subrules =
append(subrules, rbacv1.PolicyRule{APIGroups: []string{group}, Resources: []string{resource}, Verbs: []string{verb}}) + } + + } + } + } + + // Non-resource URLs are unique because they only combine with verbs. + for _, nonResourceURL := range rule.NonResourceURLs { + for _, verb := range rule.Verbs { + subrules = append(subrules, rbacv1.PolicyRule{NonResourceURLs: []string{nonResourceURL}, Verbs: []string{verb}}) + } + } + + return subrules +} + +// SortableRuleSlice is used to sort rule slice +type SortableRuleSlice []rbacv1.PolicyRule + +func (s SortableRuleSlice) Len() int { return len(s) } +func (s SortableRuleSlice) Swap(i, j int) { s[i], s[j] = s[j], s[i] } +func (s SortableRuleSlice) Less(i, j int) bool { + return strings.Compare(s[i].String(), s[j].String()) < 0 +} diff --git a/vendor/k8s.io/kubectl/pkg/util/resource/resource.go b/vendor/k8s.io/kubectl/pkg/util/resource/resource.go new file mode 100644 index 000000000..44ddf96ac --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/util/resource/resource.go @@ -0,0 +1,172 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package resource + +import ( + "fmt" + "math" + "strconv" + "strings" + + corev1 "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/api/resource" + "k8s.io/apimachinery/pkg/util/sets" +) + +// PodRequestsAndLimits returns a dictionary of all defined resources summed up for all +// containers of the pod. If pod overhead is non-nil, the pod overhead is added to the +// total container resource requests and to the total container limits which have a +// non-zero quantity. 
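+// +// Illustrative arithmetic (hypothetical pod): one app container requesting cpu=100m plus one init container requesting cpu=200m yields reqs[cpu]=200m, because init container values act as a floor (a max is taken) rather than being summed.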
+func PodRequestsAndLimits(pod *corev1.Pod) (reqs, limits corev1.ResourceList) { + reqs, limits = corev1.ResourceList{}, corev1.ResourceList{} + for _, container := range pod.Spec.Containers { + addResourceList(reqs, container.Resources.Requests) + addResourceList(limits, container.Resources.Limits) + } + // init containers define the minimum of any resource + for _, container := range pod.Spec.InitContainers { + maxResourceList(reqs, container.Resources.Requests) + maxResourceList(limits, container.Resources.Limits) + } + + // Add overhead for running a pod to the sum of requests and to non-zero limits: + if pod.Spec.Overhead != nil { + addResourceList(reqs, pod.Spec.Overhead) + + for name, quantity := range pod.Spec.Overhead { + if value, ok := limits[name]; ok && !value.IsZero() { + value.Add(quantity) + limits[name] = value + } + } + } + return +} + +// addResourceList adds the resources in newList to list +func addResourceList(list, new corev1.ResourceList) { + for name, quantity := range new { + if value, ok := list[name]; !ok { + list[name] = quantity.DeepCopy() + } else { + value.Add(quantity) + list[name] = value + } + } +} + +// maxResourceList sets list to the greater of list/newList for every resource +// either list +func maxResourceList(list, new corev1.ResourceList) { + for name, quantity := range new { + if value, ok := list[name]; !ok { + list[name] = quantity.DeepCopy() + continue + } else { + if quantity.Cmp(value) > 0 { + list[name] = quantity.DeepCopy() + } + } + } +} + +// ExtractContainerResourceValue extracts the value of a resource +// in an already known container +func ExtractContainerResourceValue(fs *corev1.ResourceFieldSelector, container *corev1.Container) (string, error) { + divisor := resource.Quantity{} + if divisor.Cmp(fs.Divisor) == 0 { + divisor = resource.MustParse("1") + } else { + divisor = fs.Divisor + } + + switch fs.Resource { + case "limits.cpu": + return convertResourceCPUToString(container.Resources.Limits.Cpu(), divisor) + case "limits.memory": + return convertResourceMemoryToString(container.Resources.Limits.Memory(), divisor) + case "limits.ephemeral-storage": + return convertResourceEphemeralStorageToString(container.Resources.Limits.StorageEphemeral(), divisor) + case "requests.cpu": + return convertResourceCPUToString(container.Resources.Requests.Cpu(), divisor) + case "requests.memory": + return convertResourceMemoryToString(container.Resources.Requests.Memory(), divisor) + case "requests.ephemeral-storage": + return convertResourceEphemeralStorageToString(container.Resources.Requests.StorageEphemeral(), divisor) + } + // handle extended standard resources with dynamic names + // example: requests.hugepages- or limits.hugepages- + if strings.HasPrefix(fs.Resource, "requests.") { + resourceName := corev1.ResourceName(strings.TrimPrefix(fs.Resource, "requests.")) + if IsHugePageResourceName(resourceName) { + return convertResourceHugePagesToString(container.Resources.Requests.Name(resourceName, resource.BinarySI), divisor) + } + } + if strings.HasPrefix(fs.Resource, "limits.") { + resourceName := corev1.ResourceName(strings.TrimPrefix(fs.Resource, "limits.")) + if IsHugePageResourceName(resourceName) { + return convertResourceHugePagesToString(container.Resources.Limits.Name(resourceName, resource.BinarySI), divisor) + } + } + return "", fmt.Errorf("Unsupported container resource : %v", fs.Resource) +} + +// convertResourceCPUToString converts cpu value to the format of divisor and returns +// ceiling of the value. 
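+//
+// Illustrative example (hypothetical values): with cpu = 500m and
+// divisor = 1 the result is ceil(500 / 1000), i.e. "1"; with
+// divisor = 1m the result is "500".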
+func convertResourceCPUToString(cpu *resource.Quantity, divisor resource.Quantity) (string, error) { + c := int64(math.Ceil(float64(cpu.MilliValue()) / float64(divisor.MilliValue()))) + return strconv.FormatInt(c, 10), nil +} + +// convertResourceMemoryToString converts memory value to the format of divisor and returns +// ceiling of the value. +func convertResourceMemoryToString(memory *resource.Quantity, divisor resource.Quantity) (string, error) { + m := int64(math.Ceil(float64(memory.Value()) / float64(divisor.Value()))) + return strconv.FormatInt(m, 10), nil +} + +// convertResourceHugePagesToString converts hugepages value to the format of divisor and returns +// ceiling of the value. +func convertResourceHugePagesToString(hugePages *resource.Quantity, divisor resource.Quantity) (string, error) { + m := int64(math.Ceil(float64(hugePages.Value()) / float64(divisor.Value()))) + return strconv.FormatInt(m, 10), nil +} + +// convertResourceEphemeralStorageToString converts ephemeral storage value to the format of divisor and returns +// ceiling of the value. +func convertResourceEphemeralStorageToString(ephemeralStorage *resource.Quantity, divisor resource.Quantity) (string, error) { + m := int64(math.Ceil(float64(ephemeralStorage.Value()) / float64(divisor.Value()))) + return strconv.FormatInt(m, 10), nil +} + +var standardContainerResources = sets.NewString( + string(corev1.ResourceCPU), + string(corev1.ResourceMemory), + string(corev1.ResourceEphemeralStorage), +) + +// IsStandardContainerResourceName returns true if the container can make a resource request +// for the specified resource +func IsStandardContainerResourceName(str string) bool { + return standardContainerResources.Has(str) || IsHugePageResourceName(corev1.ResourceName(str)) +} + +// IsHugePageResourceName returns true if the resource name has the huge page +// resource prefix. +func IsHugePageResourceName(name corev1.ResourceName) bool { + return strings.HasPrefix(string(name), corev1.ResourceHugePagesPrefix) +} diff --git a/vendor/k8s.io/kubectl/pkg/util/service_port.go b/vendor/k8s.io/kubectl/pkg/util/service_port.go new file mode 100644 index 000000000..bc56ab7d6 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/util/service_port.go @@ -0,0 +1,59 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package util + +import ( + "fmt" + + "k8s.io/api/core/v1" + "k8s.io/apimachinery/pkg/util/intstr" +) + +// LookupContainerPortNumberByServicePort implements +// the handling of resolving container named port, as well as ignoring targetPort when clusterIP=None +// It returns an error when a named port can't find a match (with -1 returned), or when the service does not +// declare such port (with the input port number returned). 
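+//
+// Illustrative example (hypothetical values): for a Service port 80 whose
+// targetPort is 8080, 8080 is returned; if targetPort is a name such as
+// "http", the matching containerPort is looked up in the pod; for a
+// headless Service (clusterIP: None), port 80 is returned unchanged.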
+func LookupContainerPortNumberByServicePort(svc v1.Service, pod v1.Pod, port int32) (int32, error) { + for _, svcportspec := range svc.Spec.Ports { + if svcportspec.Port != port { + continue + } + if svc.Spec.ClusterIP == v1.ClusterIPNone { + return port, nil + } + if svcportspec.TargetPort.Type == intstr.Int { + if svcportspec.TargetPort.IntValue() == 0 { + // targetPort is omitted, and the IntValue() would be zero + return svcportspec.Port, nil + } + return int32(svcportspec.TargetPort.IntValue()), nil + } + return LookupContainerPortNumberByName(pod, svcportspec.TargetPort.String()) + } + return port, fmt.Errorf("Service %s does not have a service port %d", svc.Name, port) +} + +// LookupServicePortNumberByName find service port number by its named port name +func LookupServicePortNumberByName(svc v1.Service, name string) (int32, error) { + for _, svcportspec := range svc.Spec.Ports { + if svcportspec.Name == name { + return svcportspec.Port, nil + } + } + + return int32(-1), fmt.Errorf("Service '%s' does not have a named port '%s'", svc.Name, name) +} diff --git a/vendor/k8s.io/kubectl/pkg/util/storage/storage.go b/vendor/k8s.io/kubectl/pkg/util/storage/storage.go new file mode 100644 index 000000000..1f25cf1ab --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/util/storage/storage.go @@ -0,0 +1,110 @@ +/* +Copyright 2018 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package storage + +import ( + "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "strings" +) + +// TODO(yue9944882): Remove this helper package once it's copied to k/api + +// IsDefaultStorageClassAnnotation represents a StorageClass annotation that +// marks a class as the default StorageClass +const IsDefaultStorageClassAnnotation = "storageclass.kubernetes.io/is-default-class" + +// BetaIsDefaultStorageClassAnnotation is the beta version of IsDefaultStorageClassAnnotation. +const BetaIsDefaultStorageClassAnnotation = "storageclass.beta.kubernetes.io/is-default-class" + +// IsDefaultAnnotationText returns a pretty Yes/No String if +// the annotation is set +func IsDefaultAnnotationText(obj metav1.ObjectMeta) string { + if obj.Annotations[IsDefaultStorageClassAnnotation] == "true" { + return "Yes" + } + if obj.Annotations[BetaIsDefaultStorageClassAnnotation] == "true" { + return "Yes" + } + + return "No" +} + +// GetAccessModesAsString returns a string representation of an array of access modes. +// modes, when present, are always in the same order: RWO,ROX,RWX,RWOP. 
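+//
+// Illustrative example: {ReadWriteMany, ReadWriteOnce, ReadWriteOnce}
+// is rendered as "RWO,RWX": duplicates are dropped and the fixed order
+// above is applied.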
+func GetAccessModesAsString(modes []v1.PersistentVolumeAccessMode) string { + modes = removeDuplicateAccessModes(modes) + modesStr := []string{} + if ContainsAccessMode(modes, v1.ReadWriteOnce) { + modesStr = append(modesStr, "RWO") + } + if ContainsAccessMode(modes, v1.ReadOnlyMany) { + modesStr = append(modesStr, "ROX") + } + if ContainsAccessMode(modes, v1.ReadWriteMany) { + modesStr = append(modesStr, "RWX") + } + if ContainsAccessMode(modes, v1.ReadWriteOncePod) { + modesStr = append(modesStr, "RWOP") + } + return strings.Join(modesStr, ",") +} + +// removeDuplicateAccessModes returns an array of access modes without any duplicates +func removeDuplicateAccessModes(modes []v1.PersistentVolumeAccessMode) []v1.PersistentVolumeAccessMode { + accessModes := []v1.PersistentVolumeAccessMode{} + for _, m := range modes { + if !ContainsAccessMode(accessModes, m) { + accessModes = append(accessModes, m) + } + } + return accessModes +} + +func ContainsAccessMode(modes []v1.PersistentVolumeAccessMode, mode v1.PersistentVolumeAccessMode) bool { + for _, m := range modes { + if m == mode { + return true + } + } + return false +} + +// GetPersistentVolumeClass returns StorageClassName. +func GetPersistentVolumeClass(volume *v1.PersistentVolume) string { + // Use beta annotation first + if class, found := volume.Annotations[v1.BetaStorageClassAnnotation]; found { + return class + } + + return volume.Spec.StorageClassName +} + +// GetPersistentVolumeClaimClass returns StorageClassName. If no storage class was +// requested, it returns "". +func GetPersistentVolumeClaimClass(claim *v1.PersistentVolumeClaim) string { + // Use beta annotation first + if class, found := claim.Annotations[v1.BetaStorageClassAnnotation]; found { + return class + } + + if claim.Spec.StorageClassName != nil { + return *claim.Spec.StorageClassName + } + + return "" +} diff --git a/vendor/k8s.io/kubectl/pkg/util/umask.go b/vendor/k8s.io/kubectl/pkg/util/umask.go new file mode 100644 index 000000000..3f0c4e83e --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/util/umask.go @@ -0,0 +1,29 @@ +//go:build !windows +// +build !windows + +/* +Copyright 2014 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package util + +import ( + "golang.org/x/sys/unix" +) + +// Umask is a wrapper for `unix.Umask()` on non-Windows platforms +func Umask(mask int) (old int, err error) { + return unix.Umask(mask), nil +} diff --git a/vendor/k8s.io/kubectl/pkg/util/umask_windows.go b/vendor/k8s.io/kubectl/pkg/util/umask_windows.go new file mode 100644 index 000000000..67f6efb97 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/util/umask_windows.go @@ -0,0 +1,29 @@ +//go:build windows +// +build windows + +/* +Copyright 2014 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. 
+You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package util + +import ( + "errors" +) + +// Umask returns an error on Windows +func Umask(mask int) (int, error) { + return 0, errors.New("platform and architecture is not supported") +} diff --git a/vendor/k8s.io/kubectl/pkg/util/util.go b/vendor/k8s.io/kubectl/pkg/util/util.go new file mode 100644 index 000000000..ea57d3b39 --- /dev/null +++ b/vendor/k8s.io/kubectl/pkg/util/util.go @@ -0,0 +1,93 @@ +/* +Copyright 2017 The Kubernetes Authors. + +Licensed under the Apache License, Version 2.0 (the "License"); +you may not use this file except in compliance with the License. +You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + +Unless required by applicable law or agreed to in writing, software +distributed under the License is distributed on an "AS IS" BASIS, +WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +See the License for the specific language governing permissions and +limitations under the License. +*/ + +package util + +import ( + "crypto/md5" + "errors" + "fmt" + "path" + "path/filepath" + "strings" + "time" + + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime" +) + +// ParseRFC3339 parses an RFC3339 date in either RFC3339Nano or RFC3339 format. +func ParseRFC3339(s string, nowFn func() metav1.Time) (metav1.Time, error) { + if t, timeErr := time.Parse(time.RFC3339Nano, s); timeErr == nil { + return metav1.Time{Time: t}, nil + } + t, err := time.Parse(time.RFC3339, s) + if err != nil { + return metav1.Time{}, err + } + return metav1.Time{Time: t}, nil +} + +// HashObject returns the hash of a Object hash by a Codec +func HashObject(obj runtime.Object, codec runtime.Codec) (string, error) { + data, err := runtime.Encode(codec, obj) + if err != nil { + return "", err + } + return fmt.Sprintf("%x", md5.Sum(data)), nil +} + +// ParseFileSource parses the source given. +// +// Acceptable formats include: +// 1. source-path: the basename will become the key name +// 2. source-name=source-path: the source-name will become the key name and +// source-path is the path to the key file. +// +// Key names cannot include '='. +func ParseFileSource(source string) (keyName, filePath string, err error) { + numSeparators := strings.Count(source, "=") + switch { + case numSeparators == 0: + return path.Base(filepath.ToSlash(source)), source, nil + case numSeparators == 1 && strings.HasPrefix(source, "="): + return "", "", fmt.Errorf("key name for file path %v missing", strings.TrimPrefix(source, "=")) + case numSeparators == 1 && strings.HasSuffix(source, "="): + return "", "", fmt.Errorf("file path for key name %v missing", strings.TrimSuffix(source, "=")) + case numSeparators > 1: + return "", "", errors.New("key names or file paths cannot contain '='") + default: + components := strings.Split(source, "=") + return components[0], components[1], nil + } +} + +// ParseLiteralSource parses the source key=val pair into its component pieces. +// This functionality is distinguished from strings.SplitN(source, "=", 2) since +// it returns an error in the case of empty keys, values, or a missing equals sign. 
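+//
+// Illustrative example: "key=a=b" yields ("key", "a=b"), since only the
+// first '=' splits the pair, while "key" (no '=') and "=value" (empty key)
+// both return an error.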
+func ParseLiteralSource(source string) (keyName, value string, err error) { + // leading equal is invalid + if strings.Index(source, "=") == 0 { + return "", "", fmt.Errorf("invalid literal source %v, expected key=value", source) + } + // split after the first equal (so values can have the = character) + items := strings.SplitN(source, "=", 2) + if len(items) != 2 { + return "", "", fmt.Errorf("invalid literal source %v, expected key=value", source) + } + + return items[0], items[1], nil +} diff --git a/vendor/modules.txt b/vendor/modules.txt index cb2d8d76a..e91704b5c 100644 --- a/vendor/modules.txt +++ b/vendor/modules.txt @@ -66,6 +66,9 @@ github.com/evanphx/json-patch/v5 # github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d ## explicit github.com/exponent-io/jsonpath +# github.com/fatih/camelcase v1.0.0 +## explicit +github.com/fatih/camelcase # github.com/felixge/httpsnoop v1.0.3 ## explicit; go 1.13 github.com/felixge/httpsnoop @@ -196,6 +199,9 @@ github.com/google/shlex # github.com/google/uuid v1.3.0 ## explicit github.com/google/uuid +# github.com/gorilla/mux v1.8.1 +## explicit; go 1.20 +github.com/gorilla/mux # github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7 ## explicit github.com/gregjones/httpcache @@ -1321,6 +1327,7 @@ k8s.io/client-go/tools/watch k8s.io/client-go/transport k8s.io/client-go/transport/spdy k8s.io/client-go/util/cert +k8s.io/client-go/util/certificate/csr k8s.io/client-go/util/connrotation k8s.io/client-go/util/exec k8s.io/client-go/util/flowcontrol @@ -1473,15 +1480,32 @@ k8s.io/kube-scheduler/config/v1beta3 k8s.io/kube-scheduler/extender/v1 # k8s.io/kubectl v0.26.3 => k8s.io/kubectl v0.26.3 ## explicit; go 1.19 +k8s.io/kubectl/pkg/apps +k8s.io/kubectl/pkg/cmd/apiresources k8s.io/kubectl/pkg/cmd/get +k8s.io/kubectl/pkg/cmd/logs k8s.io/kubectl/pkg/cmd/util +k8s.io/kubectl/pkg/cmd/util/podcmd +k8s.io/kubectl/pkg/describe +k8s.io/kubectl/pkg/polymorphichelpers k8s.io/kubectl/pkg/rawhttp k8s.io/kubectl/pkg/scheme +k8s.io/kubectl/pkg/util +k8s.io/kubectl/pkg/util/certificate +k8s.io/kubectl/pkg/util/completion +k8s.io/kubectl/pkg/util/deployment +k8s.io/kubectl/pkg/util/event +k8s.io/kubectl/pkg/util/fieldpath k8s.io/kubectl/pkg/util/i18n k8s.io/kubectl/pkg/util/interrupt k8s.io/kubectl/pkg/util/openapi k8s.io/kubectl/pkg/util/openapi/validation +k8s.io/kubectl/pkg/util/podutils +k8s.io/kubectl/pkg/util/qos +k8s.io/kubectl/pkg/util/rbac +k8s.io/kubectl/pkg/util/resource k8s.io/kubectl/pkg/util/slice +k8s.io/kubectl/pkg/util/storage k8s.io/kubectl/pkg/util/templates k8s.io/kubectl/pkg/util/term k8s.io/kubectl/pkg/validation