diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index ef9c3ab84..9e8300ce7 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -7,16 +7,14 @@
-->
#### What type of PR is this?
+
#### What does this PR do?
@@ -34,18 +32,11 @@ Fixes #
If there's anything specific you'd like your reviewer to pay attention to, mention it here
-->
-#### Screenshots (if applicable):
+#### Does this PR introduce a user-facing change?
```release-note
```
-
-#### Additional Comments:
-
-```release-note
-
-```
\ No newline at end of file
diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 81336f3c6..e04f11d53 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -65,4 +65,40 @@ jobs:
uses: actions/setup-go@v4
with:
go-version: '1.20'
- - run: make test
\ No newline at end of file
+ - run: make test
+ e2e:
+ name: E2e test
+ needs: build
+ runs-on: ubuntu-22.04
+ steps:
+ # Free up disk space on Ubuntu
+ - name: Free Disk Space (Ubuntu)
+ uses: jlumbroso/free-disk-space@main
+ with:
+          # if set to "true", this frees about 6 GB but may remove tools that are actually needed
+ tool-cache: false
+ # all of these default to true, but feel free to set to "false" if necessary for your workflow
+ android: true
+ dotnet: true
+ haskell: true
+ large-packages: false
+ docker-images: false
+ swap-storage: false
+ - name: Checkout code
+ uses: actions/checkout@v3
+ with:
+ fetch-depth: 0
+ - name: Install Go
+ uses: actions/setup-go@v4
+ with:
+ go-version: '1.20'
+ - name: Prepare e2e env
+ run: ./hack/prepare-e2e.sh
+ - name: Run e2e test
+ run: ./hack/rune2e.sh
+ - name: Upload logs
+ uses: actions/upload-artifact@v3
+ if: failure()
+ with:
+ name: kosmos-e2e-logs-${{ github.run_id }}
+ path: ${{ github.workspace }}/e2e-test/logs-*
diff --git a/.github/workflows/e2e.yml b/.github/workflows/e2e.yml
new file mode 100644
index 000000000..5fb84acd6
--- /dev/null
+++ b/.github/workflows/e2e.yml
@@ -0,0 +1,22 @@
+name: E2e Workflow
+on:
+ push:
+ pull_request:
+jobs:
+ e2e:
+ name: E2e test
+ runs-on: ubuntu-22.04
+ steps:
+ - name: Checkout code
+ uses: actions/checkout@v3
+ with:
+ fetch-depth: 0
+ - name: Run e2e test
+ run: ./hack/rune2e.sh
+ - name: Upload logs
+ uses: actions/upload-artifact@v3
+ if: failure()
+ with:
+ name: kosmos-e2e-logs-${{ github.run_id }}
+ path: ${{ github.workspace }}/e2e-test/logs-*
diff --git a/.gitignore b/.gitignore
index eef595cab..badfed0fb 100644
--- a/.gitignore
+++ b/.gitignore
@@ -28,4 +28,6 @@ _delete/
kube-config
-__debug_bin*
\ No newline at end of file
+__debug_bin*
+
+ignore_dir
diff --git a/Makefile b/Makefile
index 878326c5c..f90b439fd 100644
--- a/Makefile
+++ b/Makefile
@@ -7,9 +7,10 @@ REGISTRY?="ghcr.io/kosmos-io"
REGISTRY_USER_NAME?=""
REGISTRY_PASSWORD?=""
REGISTRY_SERVER_ADDRESS?=""
+KIND_IMAGE_TAG?="v1.25.3"
TARGETS := clusterlink-controller-manager \
- clusterlink-operator \
+ kosmos-operator \
clusterlink-agent \
clusterlink-elector \
clusterlink-floater \
@@ -100,7 +101,7 @@ test:
upload-images: images
@echo "push images to $(REGISTRY)"
docker push ${REGISTRY}/clusterlink-controller-manager:${VERSION}
- docker push ${REGISTRY}/clusterlink-operator:${VERSION}
+ docker push ${REGISTRY}/kosmos-operator:${VERSION}
docker push ${REGISTRY}/clusterlink-agent:${VERSION}
docker push ${REGISTRY}/clusterlink-proxy:${VERSION}
docker push ${REGISTRY}/clusterlink-network-manager:${VERSION}
@@ -133,4 +134,11 @@ ifeq (, $(shell which golangci-lint))
GOLANGLINT_BIN=$(shell go env GOPATH)/bin/golangci-lint
else
GOLANGLINT_BIN=$(shell which golangci-lint)
-endif
\ No newline at end of file
+endif
+
+image-base-kind-builder:
+ docker buildx build \
+ -t $(REGISTRY)/node:$(KIND_IMAGE_TAG) \
+ --platform=linux/amd64,linux/arm64 \
+ --push \
+ -f cluster/images/buildx.kind.Dockerfile .
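Assuming a docker buildx builder configured with both `linux/amd64` and `linux/arm64` support, the new target can be invoked as follows (the values shown are the Makefile defaults and can be overridden):

```bash
# Build and push a multi-arch kind node image used by the e2e environment.
make image-base-kind-builder REGISTRY=ghcr.io/kosmos-io KIND_IMAGE_TAG=v1.25.3
```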
diff --git a/README.md b/README.md
index f4e25ee6a..c88c0e1ca 100644
--- a/README.md
+++ b/README.md
@@ -1,70 +1,81 @@
# KOSMOS
-Kosmos是移动云开源的分布式云原生联邦集群技术的集合,其名称kosmos:k代表**k**ubernetes,c**osmos**表示宇宙(希腊语),寓意kubernetes的无限扩展。目前,kosmos主要包括三大模块,分别是:**多集群网络**、**多集群管理编排**、**多集群调度**。此外,kosmos还配备一款kosmosctl工具, 可以快速进行kosmos组件部署、添加集群、测试网络连通性等工作。
+> English | [中文](README_zh.md)
-## 多集群网络
+Kosmos is an open-source, all-in-one distributed cloud-native solution. The name "kosmos" combines 'k' representing Kubernetes and 'cosmos' which means universe in Greek, symbolizing the limitless expansion of Kubernetes. Currently, Kosmos primarily consists of three major modules: ClusterLink, ClusterTree and Scheduler. Additionally, Kosmos is equipped with a tool called kosmosctl, which allows for quick deployment of Kosmos components, adding clusters, and testing network connectivity.
-Kosmos网络的目标是打通多个k8s集群之间的网络,该模块可以独立部署使用。Kosmos网络使`Pod`可以跨集群访问`Pod`、`Service`,就像它们在同一个集 群那样。目前,该模块主要具备以下功能:
-1. **跨集群PodIP、ServiceIP互访**:基于Linux隧道技术,实现了多个k8s集群的L3网络互通,即用户可以在联邦集群范围内进行`Pod-to-Pod`、`Pod-to-Service`访问。
-2. **多模式支持**:对于添加的集群,可以选择`P2P`或者`Gateway`模式,其中`P2P`模式适用于underlay网络互通情况,具有更短的网络路径和更优的性能。`Gateway`模式更具兼容性,适合混合云、多云场景。
-3. **支持全局IP分配**:Kosmos网络允许在联邦集群中存在两个或多个集群使用相同的`Pod/Service`网段,便于用户对存量集群的管理。Kosmos支持配置`PodCIDR/ServiceCIDR` 与 `GlobalCIDR` 的映射关系,`GlobalIP`全局唯一,对于存在网段冲突的集群服务,可以通过`GlobalIP`互访。
-4. **IPv6/IPv4 双栈支持**
+## ClusterLink
-### 网络架构
+The target of Kosmos networking is to establish connectivity between multiple Kubernetes clusters. This module can be deployed and used independently. Kosmos networking enables `Pods` to access `Pods` and `Services` across clusters, as if they were in the same cluster. Currently, this module primarily offers the following capabilities:
+1. **Cross-cluster PodIP and ServiceIP communication**: Based on Linux vxlan tunneling technology, Kosmos establishes L3 network connectivity across multiple Kubernetes clusters, allowing users to conduct `Pod-to-Pod` and `Pod-to-Service` communication within the scope of the global clusters.
+2. **Multi-Mode Support**: When joining clusters, you can choose either `P2P` or `Gateway` mode. The `P2P` mode suits underlay network interconnection, offering shorter network paths and better performance. The `Gateway` mode offers better compatibility, making it well-suited for hybrid-cloud and multi-cloud scenarios.
+3. **Support for Global IP Allocation**: Kosmos networking allows two or more clusters within the global clusters to use the same `Pod/Service` network segments, making it convenient for users to manage existing clusters. Kosmos supports configuring the mapping between `PodCIDR/ServiceCIDR` and `GlobalCIDR`. A `GlobalIP` is globally unique, so services in clusters with conflicting network segments can still communicate across clusters through their `GlobalIP`s.
+4. **IPv6/IPv4 Dual-Stack Support**
-Kosmos多集群网络模块目前包含以下几个关键组件:
+### Network Architecture
+
+The Kosmos ClusterLink module currently includes the following key components:
-- `Controller-Manager`:用于收集所在集群的网络信息,监听网络设置的变化;
-- `Network-manager`:用于计算各个节点需要的网络配置;
-- `Agent`:是一个`Daemonset`,用于配置主机网络,例如隧道创建、路由、NAT等;
-- `Multi-Cluster-Coredns`: 实现多集群服务发现;
-- `Elector`:负责gateway节点选举;
+- `Controller-Manager`: Collects the network information of the current cluster and monitors changes in network settings.
+- `Network-manager`: Calculates the network configuration required by each node.
+- `Agent`: A DaemonSet used for configuring the host network, including tasks such as tunnel creation, routing, and NAT.
+- `Multi-Cluster-Coredns`: Implements multi-cluster service discovery.
+- `Elector`: Elects the gateway node.
-### 快速开始
+## ClusterTree
-#### 本地启动
-通过以下命令可以快速在本地运行一个实验环境,该命令将基于`kind`(因此需要先安装docker)创建两个k8s集群,并部署ClusterLink。
-```bash
-./hack/local-up-clusterlink.sh
-```
-检查服务是否正常运行
-```bash
-kubectl --context=kind-cluster-host-local get pods -nclusterlink-system
-kubectl --context=kind-cluster-member1-local get pods -nclusterlink-system
-```
-确认跨集群网络是否打通
-```bash
-kubectl --context=kind-cluster-host-local exec -it -- ping
-```
+The Kosmos ClusterTree module extends Kubernetes in a tree-like topology and enables cross-cluster orchestration of applications.
+
+
+Currently, it primarily supports the following capabilities:
+1. **Full Compatibility with k8s API**: Users can interact with the host cluster's `kube-apiserver` using tools like `kubectl`, `client-go`, and others just like they normally would. However, the `Pods` are actually distributed across the entire multi-cloud, multi-cluster environment.
+2. **Support for Stateful and k8s-native Applications**: In addition to stateless applications, Kosmos also facilitates the orchestration of stateful and k8s-native applications (those interacting with `kube-apiserver`). Kosmos automatically detects the storage and permission resources that `Pods` depend on, such as pv/pvc, sa, etc., and performs automatic two-way synchronization.
+3. **Diverse Pod Topology Constraints**: Users can easily control the distribution of Pods within the global clusters, such as by region, availability zone, cluster, or node. This helps achieve high availability and improve resource utilization.
+
+## Scheduler
+
+The Kosmos scheduling module is an extension developed on top of the Kubernetes scheduling framework, aiming to meet the container management needs in mixed-node and sub-cluster environments. It provides the following core features to enhance the flexibility and efficiency of container management:
-## 多集群管理编排
-Kosmos多集群管理编排模块实现了Kubernetes的树形扩展和应用的跨集群编排。
-
+1. **Flexible Node and Cluster Hybrid Scheduling**: The Kosmos scheduling module allows users to intelligently schedule workloads between real nodes and sub-clusters based on custom configurations. This enables users to make optimal use of resources across different nodes, ensuring the best performance and availability of workloads. Based on this capability, Kosmos enables workloads to achieve flexible cross-cloud and cross-cluster deployments.
-目前主要支持以下功能:
-1. **完全兼容k8s api**:用户可以像往常那样,使用 `kubectl`、`client-go`等工具与host集群的`kube-apiserver`交互,而`Pod`实际上是分布在整个多云多集群中。
-2. **有状态应用、k8s-native应用支持**:除了无状态应用,Kosmos还支持对有状态应用和 k8s-native(与 `kube-apiserver`存在交互)应用的编排。Kosmos会自动检测`Pod`依赖的存储、权限资源,例如:pv/pvc、sa等,并自动进行双向同步。
-3. **多样化Pod拓扑分布约束**:用户可以轻易的控制Pod在联邦集群中的分布,如:区域(Region)、可用区(Zone)、集群或者节点,有助于实现高可用并提升资源利用率。
+2. **Fine-grained Container Distribution Strategy**: By introducing Custom Resource Definitions (CRD), users can exert precise control over the distribution of workloads. The configuration of CRD allows users to explicitly specify the number of pods for the workload in different clusters and adjust the distribution ratio as needed.
-## 多集群调度(建设中)
-Kosmos调度模块是基于Kubernetes调度框架的扩展开发,旨在满足混合节点和子集群环境下的容器管理需求。这一调度器经过精心设计与定制,提供了以下核心功能,以增强容器管理的灵活性和效率:
+3. **Fine-grained Fragmented Resource Handling**: The Kosmos scheduling module intelligently detects fragmented resources within sub-clusters, avoiding situations where a pod is scheduled to a sub-cluster that turns out to lack sufficient resources. This helps ensure a more balanced allocation of resources across different nodes, enhancing system stability and performance.
+   Whether building a hybrid cloud environment or flexibly deploying workloads across different clusters, the Kosmos scheduling module serves as a reliable solution, helping users manage containerized applications more efficiently.
-1. **灵活的节点和集群混合调度**: Kosmos调度模块允许用户依据自定义配置,轻松地将工作负载在真实节点和子集群之间智能地调度。这使得用户能够充分利用不同节点的资源,以确保工作负载在性能和可用性方面的最佳表现。基于该功能,Kosmos可以让工作负载实现灵活的跨云跨集群部署。
-2. **精细化的容器分发策略**: 通过引入自定义资源定义(CRD),用户可以精确控制工作负载的拓扑分布。CRD的配置允许用户明确指定工作负载的pod在不同集群中的数量,并根据需求调整分布比例。
-3. **细粒度的碎片资源整理**: Kosmos调度模块能够智能感知子集群中的碎片资源,有效避免了pod被调度之后部署时子集群资源不足的情况。这有助于确保工作负载在不同节点上的资源分配更均匀,提升系统的稳定性和性能。
+## Quick Start
+The following commands quickly set up an experimental environment with three clusters.
+Install the control plane in the host cluster.
+```Shell
+kosmosctl install --cni calico --default-nic eth0  # the network tunnel is built on the network interface passed via --default-nic
+```
+
+Join the two member clusters.
+```Shell
+kosmosctl join cluster --name cluster1 --kubeconfig ~/kubeconfig/cluster1-kubeconfig --cni calico --default-nic eth0 --enable-all
+kosmosctl join cluster --name cluster2 --kubeconfig ~/kubeconfig/cluster2-kubeconfig --cni calico --default-nic eth0 --enable-all
+```
+Then you can use the Kosmos clusters as if they were a single cluster.
-无论是构建混合云环境还是需要在不同集群中进行工作负载的灵活部署,Kosmos调度模块都可作为可靠的解决方案,协助用户更高效地管理容器化应用。
+## Contact
+If you have questions, feel free to reach out to us in the following ways:
+- [Email](mailto:wuyingjun@cmss.chinamobile.com)
+- [WeChat](./docs/images/kosmos-WeChatIMG.png)
-## 贡献者
+## Contributors
-
+
Made with [contrib.rocks](https://contrib.rocks).
## License
+
Copyright 2023 the KOSMOS Authors. All rights reserved.
-Licensed under the Apache License, Version 2.0.
\ No newline at end of file
+Licensed under the Apache License, Version 2.0.
diff --git a/README_zh.md b/README_zh.md
index f4e25ee6a..f4df99343 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -1,5 +1,7 @@
# KOSMOS
+> [English](README.md) | 中文
+
Kosmos是移动云开源的分布式云原生联邦集群技术的集合,其名称kosmos:k代表**k**ubernetes,c**osmos**表示宇宙(希腊语),寓意kubernetes的无限扩展。目前,kosmos主要包括三大模块,分别是:**多集群网络**、**多集群管理编排**、**多集群调度**。此外,kosmos还配备一款kosmosctl工具, 可以快速进行kosmos组件部署、添加集群、测试网络连通性等工作。
## 多集群网络
@@ -21,33 +23,17 @@ Kosmos多集群网络模块目前包含以下几个关键组件:
- `Multi-Cluster-Coredns`: 实现多集群服务发现;
- `Elector`:负责gateway节点选举;
-### 快速开始
-
-#### 本地启动
-通过以下命令可以快速在本地运行一个实验环境,该命令将基于`kind`(因此需要先安装docker)创建两个k8s集群,并部署ClusterLink。
-```bash
-./hack/local-up-clusterlink.sh
-```
-检查服务是否正常运行
-```bash
-kubectl --context=kind-cluster-host-local get pods -nclusterlink-system
-kubectl --context=kind-cluster-member1-local get pods -nclusterlink-system
-```
-确认跨集群网络是否打通
-```bash
-kubectl --context=kind-cluster-host-local exec -it -- ping
-```
## 多集群管理编排
Kosmos多集群管理编排模块实现了Kubernetes的树形扩展和应用的跨集群编排。
-
+
目前主要支持以下功能:
1. **完全兼容k8s api**:用户可以像往常那样,使用 `kubectl`、`client-go`等工具与host集群的`kube-apiserver`交互,而`Pod`实际上是分布在整个多云多集群中。
2. **有状态应用、k8s-native应用支持**:除了无状态应用,Kosmos还支持对有状态应用和 k8s-native(与 `kube-apiserver`存在交互)应用的编排。Kosmos会自动检测`Pod`依赖的存储、权限资源,例如:pv/pvc、sa等,并自动进行双向同步。
3. **多样化Pod拓扑分布约束**:用户可以轻易的控制Pod在联邦集群中的分布,如:区域(Region)、可用区(Zone)、集群或者节点,有助于实现高可用并提升资源利用率。
-## 多集群调度(建设中)
+## 多集群调度
Kosmos调度模块是基于Kubernetes调度框架的扩展开发,旨在满足混合节点和子集群环境下的容器管理需求。这一调度器经过精心设计与定制,提供了以下核心功能,以增强容器管理的灵活性和效率:
1. **灵活的节点和集群混合调度**: Kosmos调度模块允许用户依据自定义配置,轻松地将工作负载在真实节点和子集群之间智能地调度。这使得用户能够充分利用不同节点的资源,以确保工作负载在性能和可用性方面的最佳表现。基于该功能,Kosmos可以让工作负载实现灵活的跨云跨集群部署。
@@ -56,8 +42,21 @@ Kosmos调度模块是基于Kubernetes调度框架的扩展开发,旨在满足
无论是构建混合云环境还是需要在不同集群中进行工作负载的灵活部署,Kosmos调度模块都可作为可靠的解决方案,协助用户更高效地管理容器化应用。
-## 贡献者
+## 快速开始
+通过以下命令可以快速在本地运行一个包含三个集群的实验环境:
+在主集群部署管理组件
+```bash
+kosmosctl install --cni calico --default-nic eth0  # 参数default-nic表示基于哪个网卡创建网络隧道
+```
+加入两个子集群
+```bash
+kosmosctl join cluster --name cluster1 --kubeconfig ~/kubeconfig/cluster1-kubeconfig --cni calico --default-nic eth0 --enable-all
+kosmosctl join cluster --name cluster2 --kubeconfig ~/kubeconfig/cluster2-kubeconfig --cni calico --default-nic eth0 --enable-all
+```
+
+然后我们就可以像使用单集群一样去使用多集群了。
+## 贡献者
diff --git a/cluster/images/buildx.floater.Dockerfile b/cluster/images/buildx.floater.Dockerfile
index 43091bc98..6cf8cf957 100644
--- a/cluster/images/buildx.floater.Dockerfile
+++ b/cluster/images/buildx.floater.Dockerfile
@@ -7,13 +7,4 @@ RUN apk add --no-cache ca-certificates
RUN apk update && apk upgrade
RUN apk add ip6tables iptables curl
-COPY ${TARGETPLATFORM}/certificate /bin/certificate/
-
COPY ${TARGETPLATFORM}/${BINARY} /bin/${BINARY}
-
-RUN adduser -D -g clusterlink -u 1002 clusterlink && \
- chown -R clusterlink:clusterlink /bin/certificate && \
- chown -R clusterlink:clusterlink /bin/${BINARY} && \
- chmod u+s /bin/ping
-
-USER clusterlink
diff --git a/cluster/images/buildx.kind.Dockerfile b/cluster/images/buildx.kind.Dockerfile
new file mode 100644
index 000000000..4347ca4fa
--- /dev/null
+++ b/cluster/images/buildx.kind.Dockerfile
@@ -0,0 +1,2 @@
+FROM kindest/node:v1.25.3
+RUN clean-install tcpdump iputils-ping iptables net-tools
diff --git a/cluster/images/floater.Dockerfile b/cluster/images/floater.Dockerfile
index 4f747c1ec..2f61e7736 100644
--- a/cluster/images/floater.Dockerfile
+++ b/cluster/images/floater.Dockerfile
@@ -6,13 +6,4 @@ RUN apk add --no-cache ca-certificates
RUN apk update && apk upgrade
RUN apk add ip6tables iptables curl
-COPY certificate /bin/certificate/
-
COPY ${BINARY} /bin/${BINARY}
-
-RUN adduser -D -g clusterlink -u 1002 clusterlink && \
- chown -R clusterlink:clusterlink /bin/certificate && \
- chown -R clusterlink:clusterlink /bin/${BINARY} && \
- chmod u+s /bin/ping
-
-USER clusterlink
diff --git a/cmd/clusterlink/OWNERS b/cmd/clusterlink/OWNERS
new file mode 100644
index 000000000..b79835439
--- /dev/null
+++ b/cmd/clusterlink/OWNERS
@@ -0,0 +1,10 @@
+approvers:
+ - wuyingjun-lucky
+ - hanweisen
+ - wangyizhi1
+ - OrangeBao
+reviewers:
+ - wuyingjun-lucky
+ - hanweisen
+ - wangyizhi1
+ - OrangeBao
\ No newline at end of file
diff --git a/cmd/clusterlink/agent/app/agent.go b/cmd/clusterlink/agent/app/agent.go
index dc1b54abb..68447f5c2 100644
--- a/cmd/clusterlink/agent/app/agent.go
+++ b/cmd/clusterlink/agent/app/agent.go
@@ -15,7 +15,7 @@ import (
ctrl "sigs.k8s.io/controller-runtime"
"github.com/kosmos.io/kosmos/cmd/clusterlink/agent/app/options"
- linkagent "github.com/kosmos.io/kosmos/pkg/clusterlink/agent"
+ linkagent "github.com/kosmos.io/kosmos/pkg/clusterlink/agent-manager"
"github.com/kosmos.io/kosmos/pkg/clusterlink/network"
kosmosclientset "github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned"
kosmosinformer "github.com/kosmos.io/kosmos/pkg/generated/informers/externalversions"
@@ -116,6 +116,17 @@ func run(ctx context.Context, opts *options.Options) error {
return err
}
+ autoDetectController := linkagent.AutoDetectReconciler{
+ Client: mgr.GetClient(),
+ NodeName: os.Getenv(utils.EnvNodeName),
+ ClusterName: os.Getenv(utils.EnvClusterName),
+ }
+
+ if err = autoDetectController.SetupWithManager(mgr); err != nil {
+		klog.Errorf("Unable to create auto detect controller: %v", err)
+ return err
+ }
+
factory.Start(ctx.Done())
factory.WaitForCacheSync(ctx.Done())
diff --git a/cmd/clusterlink/floater/app/floater.go b/cmd/clusterlink/floater/app/floater.go
index 9349011f0..52534a01c 100644
--- a/cmd/clusterlink/floater/app/floater.go
+++ b/cmd/clusterlink/floater/app/floater.go
@@ -5,14 +5,19 @@ import (
"fmt"
"net/http"
"os"
+ "path/filepath"
+ "strconv"
"time"
"github.com/spf13/cobra"
+ "k8s.io/apimachinery/pkg/util/json"
cliflag "k8s.io/component-base/cli/flag"
"k8s.io/component-base/term"
"k8s.io/klog/v2"
"github.com/kosmos.io/kosmos/cmd/clusterlink/floater/app/options"
+ networkmanager "github.com/kosmos.io/kosmos/pkg/clusterlink/agent-manager/network-manager"
+ "github.com/kosmos.io/kosmos/pkg/clusterlink/network"
"github.com/kosmos.io/kosmos/pkg/sharedcli"
"github.com/kosmos.io/kosmos/pkg/sharedcli/klogflag"
)
@@ -28,7 +33,7 @@ func NewFloaterCommand(ctx context.Context) *cobra.Command {
if errs := opts.Validate(); len(errs) != 0 {
return errs.ToAggregate()
}
- if err := run(ctx, opts); err != nil {
+ if err := Run(ctx, opts); err != nil {
return err
}
return nil
@@ -63,8 +68,20 @@ func NewFloaterCommand(ctx context.Context) *cobra.Command {
return cmd
}
-func run(_ context.Context, _ *options.Options) error {
+func Run(_ context.Context, _ *options.Options) error {
+ enableAnalysis, err := strconv.ParseBool(os.Getenv("ENABLE_ANALYSIS"))
+ if err != nil {
+		klog.Warningf("ENABLE_ANALYSIS not set or invalid, defaulting to false: %v", err)
+ }
+
+ if enableAnalysis {
+ if err = collectNetworkConfig(); err != nil {
+ return err
+ }
+ }
+
port := os.Getenv("PORT")
+	klog.Infof("PORT: %s", port)
if len(port) == 0 {
port = "8889"
}
@@ -74,14 +91,46 @@ func run(_ context.Context, _ *options.Options) error {
klog.Errorf("response writer error: %s", err)
}
})
- srv := &http.Server{
- ReadTimeout: 5 * time.Second,
- WriteTimeout: 10 * time.Second,
- Addr: fmt.Sprintf(":%s", port),
+
+ server := &http.Server{
+ Addr: fmt.Sprintf(":%s", port),
+ ReadHeaderTimeout: 3 * time.Second,
+ }
+
+ err = server.ListenAndServe()
+ if err != nil {
+ klog.Errorf("launch server error: %s", err)
+		return err
+ }
+
+ return nil
+}
+
+func collectNetworkConfig() error {
+ var nodeConfigSpecByte []byte
+
+ net := network.NewNetWork(false)
+ nManager := networkmanager.NewNetworkManager(net)
+	klog.Infof("Starting to collect network config, creating network manager...")
+
+ nodeConfigSpec, err := nManager.LoadSystemConfig()
+ if err != nil {
+		return fmt.Errorf("nodeConfigSpec query error: %v", err)
}
- if err := srv.ListenAndServeTLS("/bin/certificate/file.crt", "/bin/certificate/file.key"); err != nil {
- klog.Errorf("lanch server error: %s", err)
- return err
+	klog.Infof("load system config into nodeConfigSpec succeeded, nodeConfigSpec: [%v]", nodeConfigSpec)
+
+ nodeConfigSpecByte, err = json.Marshal(nodeConfigSpec)
+ if err != nil {
+		return fmt.Errorf("nodeConfigSpec marshal error: %v", err)
+ }
+	klog.Infof("marshal nodeConfigSpec into bytes succeeded, nodeConfigSpecByte: [%s]", string(nodeConfigSpecByte))
+
+ filePath := filepath.Join(os.Getenv("HOME"), "nodeconfig.json")
+	err = os.WriteFile(filePath, nodeConfigSpecByte, 0600)
+ if err != nil {
+		return fmt.Errorf("nodeconfig.json write error: %v", err)
}
+	klog.Infof("nodeconfig.json has been written, filePath: [%s]", filePath)
+
return nil
}
diff --git a/cmd/clusterlink/floater/certificate/file.crt b/cmd/clusterlink/floater/certificate/file.crt
deleted file mode 100644
index 0b76c10dc..000000000
--- a/cmd/clusterlink/floater/certificate/file.crt
+++ /dev/null
@@ -1,86 +0,0 @@
------BEGIN CERTIFICATE-----
-MIIF6jCCBNKgAwIBAgIMYDcO6CTAlMYK4STIMA0GCSqGSIb3DQEBCwUAMFAxCzAJ
-BgNVBAYTAkJFMRkwFwYDVQQKExBHbG9iYWxTaWduIG52LXNhMSYwJAYDVQQDEx1H
-bG9iYWxTaWduIFJTQSBPViBTU0wgQ0EgMjAxODAeFw0yMDEwMzAwODAyMzJaFw0y
-MTEyMDEwODAyMzJaMHwxCzAJBgNVBAYTAkNOMQ8wDQYDVQQIDAbmsZ/oi48xDzAN
-BgNVBAcMBuiLj+W3njEzMDEGA1UECgwq5Lit56e777yI6IuP5bee77yJ6L2v5Lu2
-5oqA5pyv5pyJ6ZmQ5YWs5Y+4MRYwFAYDVQQDDA0qLmNtZWNsb3VkLmNuMIIBIjAN
-BgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAyzQPGYVZabBVx7DG3CCMAtjver7U
-cqcbTtCDjmYlwodvEn57kyeHg9kODphVB/yOm0+WSXlADjgxw7RMPIOfIIZGmuRz
-mEGNi4LCnIVAvsqLniCBEkG8sUENM/pNAc884r3u4oO0OV6t5t/ld9YZMlVBagJ4
-nzJUYraqE0rfq/5R+fcsTxndeckahm/7uRevGkrzDuDjIRVuuR1bCNOjaoWt2xTP
-PEUPWQt6CKI7OgaCP9IZnIT/GV8YFm6zmWjjk/cBs4U+gXZ4ltQgoioxgxwJ9+hm
-OiFgJ3azwApu8RtYParDfq6+jeL2QWjg/VfaHgQy2LERvhS+Iz7wAfUqKQIDAQAB
-o4ICljCCApIwDgYDVR0PAQH/BAQDAgWgMIGOBggrBgEFBQcBAQSBgTB/MEQGCCsG
-AQUFBzAChjhodHRwOi8vc2VjdXJlLmdsb2JhbHNpZ24uY29tL2NhY2VydC9nc3Jz
-YW92c3NsY2EyMDE4LmNydDA3BggrBgEFBQcwAYYraHR0cDovL29jc3AuZ2xvYmFs
-c2lnbi5jb20vZ3Nyc2FvdnNzbGNhMjAxODBWBgNVHSAETzBNMEEGCSsGAQQBoDIB
-FDA0MDIGCCsGAQUFBwIBFiZodHRwczovL3d3dy5nbG9iYWxzaWduLmNvbS9yZXBv
-c2l0b3J5LzAIBgZngQwBAgIwCQYDVR0TBAIwADAlBgNVHREEHjAcgg0qLmNtZWNs
-b3VkLmNuggtjbWVjbG91ZC5jbjAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUH
-AwIwHwYDVR0jBBgwFoAU+O9/8s14Z6jeb48kjYjxhwMCs+swHQYDVR0OBBYEFDot
-zODhWUl+ppPEO+ust45uvb4IMIIBBAYKKwYBBAHWeQIEAgSB9QSB8gDwAHYAb1N2
-rDHwMRnYmQCkURX/dxUcEdkCwQApBo2yCJo32RMAAAF1eIehVgAABAMARzBFAiEA
-nNltVQqd5NiFh6a1VliKYDs1OF42Qz2qrwpdDfjF/uICIAdpmaZsEYdAskX0cW+4
-PIbj76/eOmlP891LI2+bFSAEAHYA9lyUL9F3MCIUVBgIMJRWjuNNExkzv98MLyAL
-zE7xZOMAAAF1eIekEwAABAMARzBFAiBMge7OwxtF82j0KbsOpcBcjTxbyfN2s99i
-JZC56VCMfwIhAL9nHBI1/irtZAGlDJRmNHnnEeqBtBfw2lGusks1ooIpMA0GCSqG
-SIb3DQEBCwUAA4IBAQBR/ERRd3iJEL0w9vAnfCCRgCsuqnhC3m9ixB9klvOhcKbG
-UPQ+Y2q67fbTMN/Jp4p2odf6g2WGSYtFAxKG5/DSg4flDW48F1fPq/sOEaWRt8nV
-2uFvbMb9XA68zT9EostCYG4RZcALb8qTYMwTT72LqJ3ZpV4WCXlXZZapn90uAQPf
-AAkkqYCfXFnCyhzKanGEoFFgSzso1/9aNLd4SAY0/DbZWm3br+wQM7kWNwp65SYo
-rNEm2OW53lUcGG76fnPb6VmbwguCE4DDGbwigV3VqUMcTYp4LYirKeIu8iTTb5y/
-nL+hJqdg2U7MHpqt3aA9OTcLBkm6WyhM/qhKeXMa
------END CERTIFICATE-----
------BEGIN CERTIFICATE-----
-MIIETjCCAzagAwIBAgINAe5fIh38YjvUMzqFVzANBgkqhkiG9w0BAQsFADBMMSAw
-HgYDVQQLExdHbG9iYWxTaWduIFJvb3QgQ0EgLSBSMzETMBEGA1UEChMKR2xvYmFs
-U2lnbjETMBEGA1UEAxMKR2xvYmFsU2lnbjAeFw0xODExMjEwMDAwMDBaFw0yODEx
-MjEwMDAwMDBaMFAxCzAJBgNVBAYTAkJFMRkwFwYDVQQKExBHbG9iYWxTaWduIG52
-LXNhMSYwJAYDVQQDEx1HbG9iYWxTaWduIFJTQSBPViBTU0wgQ0EgMjAxODCCASIw
-DQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKdaydUMGCEAI9WXD+uu3Vxoa2uP
-UGATeoHLl+6OimGUSyZ59gSnKvuk2la77qCk8HuKf1UfR5NhDW5xUTolJAgvjOH3
-idaSz6+zpz8w7bXfIa7+9UQX/dhj2S/TgVprX9NHsKzyqzskeU8fxy7quRU6fBhM
-abO1IFkJXinDY+YuRluqlJBJDrnw9UqhCS98NE3QvADFBlV5Bs6i0BDxSEPouVq1
-lVW9MdIbPYa+oewNEtssmSStR8JvA+Z6cLVwzM0nLKWMjsIYPJLJLnNvBhBWk0Cq
-o8VS++XFBdZpaFwGue5RieGKDkFNm5KQConpFmvv73W+eka440eKHRwup08CAwEA
-AaOCASkwggElMA4GA1UdDwEB/wQEAwIBhjASBgNVHRMBAf8ECDAGAQH/AgEAMB0G
-A1UdDgQWBBT473/yzXhnqN5vjySNiPGHAwKz6zAfBgNVHSMEGDAWgBSP8Et/qC5F
-JK5NUPpjmove4t0bvDA+BggrBgEFBQcBAQQyMDAwLgYIKwYBBQUHMAGGImh0dHA6
-Ly9vY3NwMi5nbG9iYWxzaWduLmNvbS9yb290cjMwNgYDVR0fBC8wLTAroCmgJ4Yl
-aHR0cDovL2NybC5nbG9iYWxzaWduLmNvbS9yb290LXIzLmNybDBHBgNVHSAEQDA+
-MDwGBFUdIAAwNDAyBggrBgEFBQcCARYmaHR0cHM6Ly93d3cuZ2xvYmFsc2lnbi5j
-b20vcmVwb3NpdG9yeS8wDQYJKoZIhvcNAQELBQADggEBAJmQyC1fQorUC2bbmANz
-EdSIhlIoU4r7rd/9c446ZwTbw1MUcBQJfMPg+NccmBqixD7b6QDjynCy8SIwIVbb
-0615XoFYC20UgDX1b10d65pHBf9ZjQCxQNqQmJYaumxtf4z1s4DfjGRzNpZ5eWl0
-6r/4ngGPoJVpjemEuunl1Ig423g7mNA2eymw0lIYkN5SQwCuaifIFJ6GlazhgDEw
-fpolu4usBCOmmQDo8dIm7A9+O4orkjgTHY+GzYZSR+Y0fFukAj6KYXwidlNalFMz
-hriSqHKvoflShx8xpfywgVcvzfTO3PYkz6fiNJBonf6q8amaEsybwMbDqKWwIX7e
-SPY=
------END CERTIFICATE-----
------BEGIN CERTIFICATE-----
-MIIETjCCAzagAwIBAgINAe5fFp3/lzUrZGXWajANBgkqhkiG9w0BAQsFADBXMQsw
-CQYDVQQGEwJCRTEZMBcGA1UEChMQR2xvYmFsU2lnbiBudi1zYTEQMA4GA1UECxMH
-Um9vdCBDQTEbMBkGA1UEAxMSR2xvYmFsU2lnbiBSb290IENBMB4XDTE4MDkxOTAw
-MDAwMFoXDTI4MDEyODEyMDAwMFowTDEgMB4GA1UECxMXR2xvYmFsU2lnbiBSb290
-IENBIC0gUjMxEzARBgNVBAoTCkdsb2JhbFNpZ24xEzARBgNVBAMTCkdsb2JhbFNp
-Z24wggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDMJXaQeQZ4Ihb1wIO2
-hMoonv0FdhHFrYhy/EYCQ8eyip0EXyTLLkvhYIJG4VKrDIFHcGzdZNHr9SyjD4I9
-DCuul9e2FIYQebs7E4B3jAjhSdJqYi8fXvqWaN+JJ5U4nwbXPsnLJlkNc96wyOkm
-DoMVxu9bi9IEYMpJpij2aTv2y8gokeWdimFXN6x0FNx04Druci8unPvQu7/1PQDh
-BjPogiuuU6Y6FnOM3UEOIDrAtKeh6bJPkC4yYOlXy7kEkmho5TgmYHWyn3f/kRTv
-riBJ/K1AFUjRAjFhGV64l++td7dkmnq/X8ET75ti+w1s4FRpFqkD2m7pg5NxdsZp
-hYIXAgMBAAGjggEiMIIBHjAOBgNVHQ8BAf8EBAMCAQYwDwYDVR0TAQH/BAUwAwEB
-/zAdBgNVHQ4EFgQUj/BLf6guRSSuTVD6Y5qL3uLdG7wwHwYDVR0jBBgwFoAUYHtm
-GkUNl8qJUC99BM00qP/8/UswPQYIKwYBBQUHAQEEMTAvMC0GCCsGAQUFBzABhiFo
-dHRwOi8vb2NzcC5nbG9iYWxzaWduLmNvbS9yb290cjEwMwYDVR0fBCwwKjAooCag
-JIYiaHR0cDovL2NybC5nbG9iYWxzaWduLmNvbS9yb290LmNybDBHBgNVHSAEQDA+
-MDwGBFUdIAAwNDAyBggrBgEFBQcCARYmaHR0cHM6Ly93d3cuZ2xvYmFsc2lnbi5j
-b20vcmVwb3NpdG9yeS8wDQYJKoZIhvcNAQELBQADggEBACNw6c/ivvVZrpRCb8RD
-M6rNPzq5ZBfyYgZLSPFAiAYXof6r0V88xjPy847dHx0+zBpgmYILrMf8fpqHKqV9
-D6ZX7qw7aoXW3r1AY/itpsiIsBL89kHfDwmXHjjqU5++BfQ+6tOfUBJ2vgmLwgtI
-fR4uUfaNU9OrH0Abio7tfftPeVZwXwzTjhuzp3ANNyuXlava4BJrHEDOxcd+7cJi
-WOx37XMiwor1hkOIreoTbv3Y/kIvuX1erRjvlJDKPSerJpSZdcfL03v3ykzTr1Eh
-kluEfSufFT90y1HonoMOFm8b50bOI7355KKL0jlrqnkckSziYSQtjipIcJDEHsXo
-4HA=
------END CERTIFICATE-----
diff --git a/cmd/clusterlink/floater/certificate/file.key b/cmd/clusterlink/floater/certificate/file.key
deleted file mode 100644
index 8f61ed6de..000000000
--- a/cmd/clusterlink/floater/certificate/file.key
+++ /dev/null
@@ -1,27 +0,0 @@
------BEGIN RSA PRIVATE KEY-----
-MIIEpAIBAAKCAQEAyzQPGYVZabBVx7DG3CCMAtjver7UcqcbTtCDjmYlwodvEn57
-kyeHg9kODphVB/yOm0+WSXlADjgxw7RMPIOfIIZGmuRzmEGNi4LCnIVAvsqLniCB
-EkG8sUENM/pNAc884r3u4oO0OV6t5t/ld9YZMlVBagJ4nzJUYraqE0rfq/5R+fcs
-Txndeckahm/7uRevGkrzDuDjIRVuuR1bCNOjaoWt2xTPPEUPWQt6CKI7OgaCP9IZ
-nIT/GV8YFm6zmWjjk/cBs4U+gXZ4ltQgoioxgxwJ9+hmOiFgJ3azwApu8RtYParD
-fq6+jeL2QWjg/VfaHgQy2LERvhS+Iz7wAfUqKQIDAQABAoIBADlQfLXRE/AoiXli
-liR+lZ8z+xAfBSM1mRE45PJkQ2BD/QM1Y7uU2bdJoJpjQxCWns6VuykMJxIbrYWq
-tBoZceelmAKWTzhxvO/NuQCW4TUvQgQe3Oj+W6+PTp8LiW7qOh0mP1vqlAned6R4
-IGwVmlPFEkdJXSZh9sVFCmGYq9AB0gcTw4bH+otbyZzFE1h8Vyt4CB3duCdPMR1F
-TNuH+olhOQFnBlICFZddn2f0k5J41RNuKH5Vz4o/BX5gylA0WsFmJuF2kdvyZEP9
-jhVypE/lC+9WNY2mFTDSkFx5sQ2NBFcovW4L4SyQmF8ga/6GPQEvCUzfVnuU+2i6
-vsA3W9UCgYEA5u+5cLqUGtQ0YQp3Fxt90aJgep9uCTVKEuEA6AGc0Nz30SlDa4ch
-4FFXS4MqETt9RBed/zlCypiL6sFKqVo7FlIGS2et7cswQEJKAu0P6cOE7AUcuEJd
-MdMh/ZY+RqUBdadNgOHVyi0+o9QO/SZNVyOF6xZqFITHBdinrIYW7gsCgYEA4UHO
-dgI93YXuZczsoo/ItRndNXLmOkE5d4IaSX9GI3tfqAEMSxx1sCqZu7DUwGiIAK1q
-GgVUfNJ/ueivT34AvMthNEXPfHCxb8ND6gJCdbEyNw0tFOHZKh7gfNdonen+TJ0G
-SzO6kRLTwYOtvpKqPUxJCay/PFXMbNTw39AKjRsCgYEAlr4mgvIXWQfphOqK4Bd+
-4ocmiQRmlDYnuvkKWWcsEJ4cWXig3KChuUX/QHhGzmbRls//vyiGc65trngrny4Z
-4bD7EN+FhfIa9ecPXqeVupZ4voN7wr73DF3wExKuZfixYjYp/hXsMoOkHtZ+Tjph
-Q58ZfGHuLqSZMTTCBnikoQ8CgYAbT6lCsZ7invx6p1ABncFOA+bINjgn1AStsr6R
-LrdIUgsVCZt99+NlCqU9FoGVGpdyzZPRt9e4kqUd21J2Jubb/SS5+8TeZ6N704cG
-dmOsdWGLPzO6FnAIJVo+iLeMffRxQZCjyY/TSx8VlWuZcZrmd7tbSvCc1iJFB8R0
-vnqpBQKBgQDiZY+lSVdEVsdIygsi7AeReSQnI66ZElO64QrE1s116zDmIvvDJwpT
-+8WFNvrt33PLrpbDgkPu6cenWGysl/munvfwwmULUZ+hddbxv2TUX7WvDnWMnfw8
-NuW0aG4YFjkyzGwKsWkMkKYZ2l9Ls4SYcnxsc+EfbhiQj5GrlPJgPQ==
------END RSA PRIVATE KEY-----
diff --git a/cmd/clustertree/cluster-manager/app/extersion_apps.go b/cmd/clustertree/cluster-manager/app/extersion_apps.go
new file mode 100644
index 000000000..1f9b11c36
--- /dev/null
+++ b/cmd/clustertree/cluster-manager/app/extersion_apps.go
@@ -0,0 +1,222 @@
+package app
+
+import (
+ "context"
+ "time"
+
+ "k8s.io/client-go/informers"
+ clientset "k8s.io/client-go/kubernetes"
+ "k8s.io/client-go/tools/clientcmd"
+ "k8s.io/client-go/util/flowcontrol"
+ "k8s.io/klog/v2"
+ "sigs.k8s.io/controller-runtime/pkg/manager"
+
+ "github.com/kosmos.io/kosmos/cmd/clustertree/cluster-manager/app/options"
+ "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/extensions/daemonset"
+ "github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned"
+ "github.com/kosmos.io/kosmos/pkg/generated/informers/externalversions"
+ "github.com/kosmos.io/kosmos/pkg/utils/flags"
+)
+
+// StartHostDaemonSetsController starts a new HostDaemonSetsController.
+func StartHostDaemonSetsController(ctx context.Context, opts *options.Options, workNum int) (*daemonset.HostDaemonSetsController, error) {
+ kubeconfig, err := clientcmd.BuildConfigFromFlags(opts.KubernetesOptions.Master, opts.KubernetesOptions.KubeConfig)
+ if err != nil {
+		klog.Errorf("Unable to build kubeconfig: %v", err)
+		return nil, err
+ }
+ kubeClient, err := clientset.NewForConfig(kubeconfig)
+ if err != nil {
+ klog.Errorf("Unable to create kubeClient: %v", err)
+ return nil, err
+ }
+ kosmosClient, err := versioned.NewForConfig(kubeconfig)
+ if err != nil {
+		klog.Errorf("Unable to create kosmosClient: %v", err)
+		return nil, err
+ }
+
+ kubeFactory := informers.NewSharedInformerFactory(kubeClient, 0)
+ kosmosFactory := externalversions.NewSharedInformerFactory(kosmosClient, 0)
+
+ controller, err := daemonset.NewHostDaemonSetsController(
+ kosmosFactory.Kosmos().V1alpha1().ShadowDaemonSets(),
+ kubeFactory.Apps().V1().ControllerRevisions(),
+ kubeFactory.Core().V1().Pods(),
+ kubeFactory.Core().V1().Nodes(),
+ kosmosClient,
+ kubeClient,
+ flowcontrol.NewBackOff(1*time.Second, 15*time.Minute),
+ )
+ kubeFactory.Start(ctx.Done())
+ kosmosFactory.Start(ctx.Done())
+ if err != nil {
+ return nil, err
+ }
+ go controller.Run(ctx, workNum)
+ return controller, nil
+}
+
+func StartDistributeController(ctx context.Context, opts *options.Options, workNum int) (*daemonset.DistributeController, error) {
+ kubeconfig, err := clientcmd.BuildConfigFromFlags(opts.KubernetesOptions.Master, opts.KubernetesOptions.KubeConfig)
+ if err != nil {
+		klog.Errorf("Unable to build kubeconfig: %v", err)
+		return nil, err
+ }
+ kosmosClient, err := versioned.NewForConfig(kubeconfig)
+ if err != nil {
+		klog.Errorf("Unable to create kosmosClient: %v", err)
+		return nil, err
+ }
+
+ kosmosFactory := externalversions.NewSharedInformerFactory(kosmosClient, 0)
+ option := flags.Options{}
+ controller := daemonset.NewDistributeController(
+ kosmosClient,
+ kosmosFactory.Kosmos().V1alpha1().ShadowDaemonSets(),
+ kosmosFactory.Kosmos().V1alpha1().Clusters(),
+ option,
+ )
+ kosmosFactory.Start(ctx.Done())
+
+	go controller.Run(ctx, workNum)
+
+ return controller, nil
+}
+
+func StartDaemonSetsController(ctx context.Context, opts *options.Options, workNum int) (*daemonset.DaemonSetsController, error) {
+ kubeconfig, err := clientcmd.BuildConfigFromFlags(opts.KubernetesOptions.Master, opts.KubernetesOptions.KubeConfig)
+	if err != nil {
+		klog.Errorf("Unable to build kubeconfig: %v", err)
+		return nil, err
+	}
+ kubeClient, err := clientset.NewForConfig(kubeconfig)
+ if err != nil {
+ klog.Errorf("Unable to create kubeClient: %v", err)
+ return nil, err
+ }
+ kosmosClient, err := versioned.NewForConfig(kubeconfig)
+ if err != nil {
+ klog.Errorf("Unable to create kosmosClient: %v", err)
+ return nil, err
+ }
+
+ kosmosFactory := externalversions.NewSharedInformerFactory(kosmosClient, 0)
+ option := flags.Options{}
+ controller := daemonset.NewDaemonSetsController(
+ kosmosFactory.Kosmos().V1alpha1().ShadowDaemonSets(),
+ kosmosFactory.Kosmos().V1alpha1().DaemonSets(),
+ kosmosFactory.Kosmos().V1alpha1().Clusters(),
+ kubeClient,
+ kosmosClient,
+ option,
+ )
+ kosmosFactory.Start(ctx.Done())
+
+	go controller.Run(ctx, workNum)
+
+ return controller, nil
+}
+
+func StartDaemonSetsMirrorController(ctx context.Context, opts *options.Options, workNum int) (*daemonset.DaemonSetsMirrorController, error) {
+ kubeconfig, err := clientcmd.BuildConfigFromFlags(opts.KubernetesOptions.Master, opts.KubernetesOptions.KubeConfig)
+	if err != nil {
+		klog.Errorf("Unable to build kubeconfig: %v", err)
+		return nil, err
+	}
+ kubeClient, err := clientset.NewForConfig(kubeconfig)
+ if err != nil {
+ klog.Errorf("Unable to create kubeClient: %v", err)
+ return nil, err
+ }
+ kosmosClient, err := versioned.NewForConfig(kubeconfig)
+ if err != nil {
+ klog.Errorf("Unable to create kosmosClient: %v", err)
+ return nil, err
+ }
+ kosmosFactory := externalversions.NewSharedInformerFactory(kosmosClient, 0)
+ kubeFactory := informers.NewSharedInformerFactory(kubeClient, 0)
+ option := flags.Options{}
+ controller := daemonset.NewDaemonSetsMirrorController(
+ kosmosClient,
+ kubeClient,
+ kosmosFactory.Kosmos().V1alpha1().DaemonSets(),
+ kubeFactory.Apps().V1().DaemonSets(),
+ option,
+ )
+ kosmosFactory.Start(ctx.Done())
+ kubeFactory.Start(ctx.Done())
+	go controller.Run(ctx, workNum)
+ return controller, nil
+}
+
+func StartPodReflectController(ctx context.Context, opts *options.Options, workNum int) (*daemonset.PodReflectorController, error) {
+ kubeconfig, err := clientcmd.BuildConfigFromFlags(opts.KubernetesOptions.Master, opts.KubernetesOptions.KubeConfig)
+	if err != nil {
+		klog.Errorf("Unable to build kubeconfig: %v", err)
+		return nil, err
+	}
+ kubeClient, err := clientset.NewForConfig(kubeconfig)
+ if err != nil {
+ klog.Errorf("Unable to create kubeClient: %v", err)
+ return nil, err
+ }
+ kosmosClient, err := versioned.NewForConfig(kubeconfig)
+ if err != nil {
+ klog.Errorf("Unable to create kosmosClient: %v", err)
+ return nil, err
+ }
+ kosmosFactory := externalversions.NewSharedInformerFactory(kosmosClient, 0)
+ kubeFactory := informers.NewSharedInformerFactory(kubeClient, 0)
+ option := flags.Options{}
+ controller := daemonset.NewPodReflectorController(
+ kubeClient,
+ kubeFactory.Apps().V1().DaemonSets(),
+ kosmosFactory.Kosmos().V1alpha1().DaemonSets(),
+ kosmosFactory.Kosmos().V1alpha1().Clusters(),
+ kubeFactory.Core().V1().Pods(),
+ option,
+ )
+ kosmosFactory.Start(ctx.Done())
+ kubeFactory.Start(ctx.Done())
+	go controller.Run(ctx, workNum)
+ return controller, nil
+}
+
+type GlobalDaemonSetService struct {
+ ctx context.Context
+ opts *options.Options
+ defaultWorkNum int
+}
+
+func (g *GlobalDaemonSetService) SetupWithManager(mgr manager.Manager) error {
+ return mgr.Add(g)
+}
+
+func (g *GlobalDaemonSetService) Start(context.Context) error {
+ return enableGlobalDaemonSet(g.ctx, g.opts, g.defaultWorkNum)
+}
+
+func enableGlobalDaemonSet(ctx context.Context, opts *options.Options, defaultWorkNum int) error {
+ _, err := StartHostDaemonSetsController(ctx, opts, defaultWorkNum)
+ if err != nil {
+ klog.Errorf("start host daemonset controller failed: %v", err)
+ return err
+ }
+ _, err = StartDaemonSetsController(ctx, opts, defaultWorkNum)
+ if err != nil {
+ klog.Errorf("start daemon set controller failed: %v", err)
+ return err
+ }
+ _, err = StartDistributeController(ctx, opts, defaultWorkNum)
+ if err != nil {
+ klog.Errorf("start distribute controller failed: %v", err)
+ return err
+ }
+ _, err = StartDaemonSetsMirrorController(ctx, opts, defaultWorkNum)
+ if err != nil {
+ klog.Errorf("start daemon set mirror controller failed: %v", err)
+ return err
+ }
+
+ _, err = StartPodReflectController(ctx, opts, defaultWorkNum)
+ if err != nil {
+ klog.Errorf("start pod reflect controller failed: %v", err)
+ return err
+ }
+ return nil
+}
diff --git a/cmd/clustertree/cluster-manager/app/manager.go b/cmd/clustertree/cluster-manager/app/manager.go
index 00a5448f9..87dc0edb2 100644
--- a/cmd/clustertree/cluster-manager/app/manager.go
+++ b/cmd/clustertree/cluster-manager/app/manager.go
@@ -3,21 +3,39 @@ package app
import (
"context"
"fmt"
+ "os"
"github.com/spf13/cobra"
+ "k8s.io/apimachinery/pkg/util/uuid"
+ "k8s.io/client-go/dynamic"
+ "k8s.io/client-go/rest"
+ "k8s.io/client-go/tools/cache"
"k8s.io/client-go/tools/clientcmd"
+ "k8s.io/client-go/tools/leaderelection"
+ "k8s.io/client-go/tools/leaderelection/resourcelock"
cliflag "k8s.io/component-base/cli/flag"
"k8s.io/klog/v2"
controllerruntime "sigs.k8s.io/controller-runtime"
"github.com/kosmos.io/kosmos/cmd/clustertree/cluster-manager/app/options"
clusterManager "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager"
+ "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/controllers"
+ "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/controllers/mcs"
+ podcontrollers "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/controllers/pod"
+ "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/controllers/pv"
+ "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/controllers/pvc"
+ nodeserver "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/node-server"
+ leafUtils "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/utils"
"github.com/kosmos.io/kosmos/pkg/scheme"
"github.com/kosmos.io/kosmos/pkg/sharedcli/klogflag"
+ "github.com/kosmos.io/kosmos/pkg/utils"
)
-func NewAgentCommand(ctx context.Context) *cobra.Command {
- opts := options.NewOptions()
+func NewAgentCommand(ctx context.Context) (*cobra.Command, error) {
+ opts, err := options.NewOptions()
+ if err != nil {
+ return nil, err
+ }
cmd := &cobra.Command{
Use: "clustertree-cluster-manager",
@@ -26,7 +44,7 @@ func NewAgentCommand(ctx context.Context) *cobra.Command {
if errs := opts.Validate(); len(errs) != 0 {
return errs.ToAggregate()
}
- if err := run(ctx, opts); err != nil {
+ if err := leaderElectionRun(ctx, opts); err != nil {
return err
}
return nil
@@ -44,38 +62,244 @@ func NewAgentCommand(ctx context.Context) *cobra.Command {
cmd.Flags().AddFlagSet(genericFlagSet)
cmd.Flags().AddFlagSet(logsFlagSet)
- return cmd
+ return cmd, nil
+}
+
+func leaderElectionRun(ctx context.Context, opts *options.Options) error {
+ if !opts.LeaderElection.LeaderElect {
+ return run(ctx, opts)
+ }
+
+ kubeConfig, err := clientcmd.BuildConfigFromFlags("", opts.KubernetesOptions.KubeConfig)
+ if err != nil {
+ return err
+ }
+
+ id, err := os.Hostname()
+ if err != nil {
+ return err
+ }
+ id += "_" + string(uuid.NewUUID())
+
+ rl, err := resourcelock.NewFromKubeconfig(
+ opts.LeaderElection.ResourceLock,
+ opts.LeaderElection.ResourceNamespace,
+ opts.LeaderElection.ResourceName,
+ resourcelock.ResourceLockConfig{
+ Identity: id,
+ },
+ kubeConfig,
+ opts.LeaderElection.RenewDeadline.Duration,
+ )
+ if err != nil {
+ return err
+ }
+
+ leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
+ Lock: rl,
+ Name: opts.LeaderElection.ResourceName,
+ LeaseDuration: opts.LeaderElection.LeaseDuration.Duration,
+ RenewDeadline: opts.LeaderElection.RenewDeadline.Duration,
+ RetryPeriod: opts.LeaderElection.RetryPeriod.Duration,
+
+ Callbacks: leaderelection.LeaderCallbacks{
+ OnStartedLeading: func(ctx context.Context) {
+				klog.Warning("leader election won, clustertree is starting")
+				if err := run(ctx, opts); err != nil {
+					klog.Errorf("clustertree run failed: %v", err)
+				}
+				os.Exit(0)
+ },
+ OnStoppedLeading: func() {
+				klog.Warning("leader election lost, clustertree is exiting")
+ os.Exit(0)
+ },
+ },
+ })
+ return nil
}
func run(ctx context.Context, opts *options.Options) error {
+ globalleafManager := leafUtils.GetGlobalLeafResourceManager()
+
config, err := clientcmd.BuildConfigFromFlags(opts.KubernetesOptions.Master, opts.KubernetesOptions.KubeConfig)
if err != nil {
panic(err)
}
config.QPS, config.Burst = opts.KubernetesOptions.QPS, opts.KubernetesOptions.Burst
+ configOptFunc := func(config *rest.Config) {
+ config.QPS = opts.KubernetesOptions.QPS
+ config.Burst = opts.KubernetesOptions.Burst
+ }
+
+ // init root client
+ rootClient, err := utils.NewClientFromConfigPath(opts.KubernetesOptions.KubeConfig, configOptFunc)
+ if err != nil {
+ return fmt.Errorf("could not build clientset for root cluster: %v", err)
+ }
+
+ // init Kosmos client
+ rootKosmosClient, err := utils.NewKosmosClientFromConfigPath(opts.KubernetesOptions.KubeConfig, configOptFunc)
+ if err != nil {
+ return fmt.Errorf("could not build kosmos clientset for root cluster: %v", err)
+ }
+
+ rootResourceManager := utils.NewResourceManager(rootClient, rootKosmosClient)
mgr, err := controllerruntime.NewManager(config, controllerruntime.Options{
- Logger: klog.Background(),
- Scheme: scheme.NewSchema(),
- LeaderElection: opts.LeaderElection.LeaderElect,
- LeaderElectionID: opts.LeaderElection.ResourceName,
- LeaderElectionNamespace: opts.LeaderElection.ResourceNamespace,
+ Logger: klog.Background(),
+ Scheme: scheme.NewSchema(),
+ LeaderElection: false,
+ MetricsBindAddress: "0",
+ HealthProbeBindAddress: "0",
})
if err != nil {
return fmt.Errorf("failed to build controller manager: %v", err)
}
- ClusterController := clusterManager.ClusterController{
- Master: mgr.GetClient(),
- EventRecorder: mgr.GetEventRecorderFor(clusterManager.ControllerName),
+ dynamicClient, err := dynamic.NewForConfig(config)
+ if err != nil {
+ klog.Errorf("Unable to create dynamicClient: %v", err)
+ return err
+ }
+
+ // add cluster controller
+ clusterController := clusterManager.ClusterController{
+ Root: mgr.GetClient(),
+ RootDynamic: dynamicClient,
+ RootClientset: rootClient,
+ EventRecorder: mgr.GetEventRecorderFor(clusterManager.ControllerName),
+ Options: opts,
+ RootResourceManager: rootResourceManager,
+ GlobalLeafManager: globalleafManager,
}
- if err = ClusterController.SetupWithManager(mgr); err != nil {
+ if err = clusterController.SetupWithManager(mgr); err != nil {
return fmt.Errorf("error starting %s: %v", clusterManager.ControllerName, err)
}
- if err := mgr.Start(ctx); err != nil {
- return fmt.Errorf("failed to start controller manager: %v", err)
+ if opts.MultiClusterService {
+ // add serviceExport controller
+		serviceExportController := mcs.ServiceExportController{
+			RootClient:         mgr.GetClient(),
+			EventRecorder:      mgr.GetEventRecorderFor(mcs.ServiceExportControllerName),
+			Logger:             mgr.GetLogger(),
+			ReservedNamespaces: opts.ReservedNamespaces,
+		}
+		if err = serviceExportController.SetupWithManager(mgr); err != nil {
+ return fmt.Errorf("error starting %s: %v", mcs.ServiceExportControllerName, err)
+ }
+
+ // add auto create mcs resources controller
+ autoCreateMCSController := mcs.AutoCreateMCSController{
+ RootClient: mgr.GetClient(),
+ EventRecorder: mgr.GetEventRecorderFor(mcs.AutoCreateMCSControllerName),
+ Logger: mgr.GetLogger(),
+ AutoCreateMCSPrefix: opts.AutoCreateMCSPrefix,
+ RootKosmosClient: rootKosmosClient,
+ GlobalLeafManager: globalleafManager,
+ ReservedNamespaces: opts.ReservedNamespaces,
+ }
+ if err = autoCreateMCSController.SetupWithManager(mgr); err != nil {
+ return fmt.Errorf("error starting %s: %v", mcs.AutoCreateMCSControllerName, err)
+ }
+ }
+
+ if opts.DaemonSetController {
+ daemonSetController := &GlobalDaemonSetService{
+ opts: opts,
+ ctx: ctx,
+ defaultWorkNum: 1,
+ }
+ if err = daemonSetController.SetupWithManager(mgr); err != nil {
+			return fmt.Errorf("error starting global daemonset: %v", err)
+ }
+ }
+
+ // init rootPodController
+ rootPodReconciler := podcontrollers.RootPodReconciler{
+ GlobalLeafManager: globalleafManager,
+ RootClient: mgr.GetClient(),
+ DynamicRootClient: dynamicClient,
+ Options: opts,
+ }
+ if err := rootPodReconciler.SetupWithManager(mgr); err != nil {
+ return fmt.Errorf("error starting rootPodReconciler %s: %v", podcontrollers.RootPodControllerName, err)
+ }
+
+ if !opts.OnewayStorageControllers {
+ rootPVCController := pvc.RootPVCController{
+ RootClient: mgr.GetClient(),
+ GlobalLeafManager: globalleafManager,
+ }
+ if err := rootPVCController.SetupWithManager(mgr); err != nil {
+ return fmt.Errorf("error starting root pvc controller %v", err)
+ }
+
+ rootPVController := pv.RootPVController{
+ RootClient: mgr.GetClient(),
+ GlobalLeafManager: globalleafManager,
+ }
+ if err := rootPVController.SetupWithManager(mgr); err != nil {
+ return fmt.Errorf("error starting root pv controller %v", err)
+ }
+ } else {
+ onewayPVController := pv.OnewayPVController{
+ Root: mgr.GetClient(),
+ RootDynamic: dynamicClient,
+ GlobalLeafManager: globalleafManager,
+ }
+ if err := onewayPVController.SetupWithManager(mgr); err != nil {
+ return fmt.Errorf("error starting oneway pv controller %v", err)
+ }
+
+ onewayPVCController := pvc.OnewayPVCController{
+ Root: mgr.GetClient(),
+ RootDynamic: dynamicClient,
+ GlobalLeafManager: globalleafManager,
+ }
+ if err := onewayPVCController.SetupWithManager(mgr); err != nil {
+ return fmt.Errorf("error starting oneway pvc controller %v", err)
+ }
+ }
+
+ // init commonController
+ for i, gvr := range controllers.SYNC_GVRS {
+ commonController := controllers.SyncResourcesReconciler{
+ GlobalLeafManager: globalleafManager,
+ GroupVersionResource: gvr,
+ Object: controllers.SYNC_OBJS[i],
+ DynamicRootClient: dynamicClient,
+ // DynamicLeafClient: clientDynamic,
+ ControllerName: "async-controller-" + gvr.Resource,
+ // Namespace: cluster.Spec.Namespace,
+ }
+ if err := commonController.SetupWithManager(mgr, gvr); err != nil {
+			klog.Errorf("Unable to create %s sync resources controller: %v", gvr.Resource, err)
+ return err
+ }
+ }
+
+ go func() {
+		if err := mgr.Start(ctx); err != nil {
+ klog.Errorf("failed to start controller manager: %v", err)
+ }
+ }()
+
+ nodeServer := nodeserver.NodeServer{
+ RootClient: mgr.GetClient(),
+ GlobalLeafManager: globalleafManager,
+ }
+ go func() {
+ if err := nodeServer.Start(ctx, opts); err != nil {
+ klog.Errorf("failed to start node server: %v", err)
+ }
+ }()
+
+ rootResourceManager.InformerFactory.Start(ctx.Done())
+ rootResourceManager.KosmosInformerFactory.Start(ctx.Done())
+ if !cache.WaitForCacheSync(ctx.Done(), rootResourceManager.EndpointSliceInformer.HasSynced, rootResourceManager.ServiceInformer.HasSynced) {
+ klog.Fatal("cluster manager: wait for informer factory failed")
}
+ <-ctx.Done()
+	klog.Info("cluster manager stopped.")
return nil
}
diff --git a/cmd/clustertree/cluster-manager/app/options/options.go b/cmd/clustertree/cluster-manager/app/options/options.go
index dcac42c36..f9fbb320a 100644
--- a/cmd/clustertree/cluster-manager/app/options/options.go
+++ b/cmd/clustertree/cluster-manager/app/options/options.go
@@ -2,7 +2,10 @@ package options
import (
"github.com/spf13/pflag"
+ "k8s.io/client-go/tools/leaderelection/resourcelock"
componentbaseconfig "k8s.io/component-base/config"
+ "k8s.io/component-base/config/options"
+ componentbaseconfigv1alpha1 "k8s.io/component-base/config/v1alpha1"
)
const (
@@ -11,11 +14,32 @@ const (
DefaultKubeQPS = 40.0
DefaultKubeBurst = 60
+
+ CoreDNSServiceNamespace = "kube-system"
+ CoreDNSServiceName = "kube-dns"
)
type Options struct {
- LeaderElection componentbaseconfig.LeaderElectionConfiguration
- KubernetesOptions KubernetesOptions
+ LeaderElection componentbaseconfig.LeaderElectionConfiguration
+ KubernetesOptions KubernetesOptions
+ ListenPort int32
+ DaemonSetController bool
+ MultiClusterService bool
+
+ // If MultiClusterService is disabled, the clustertree will rewrite the dnsPolicy configuration for pods deployed in
+ // the leaf clusters, directing them to the root cluster's CoreDNS, thus facilitating access to services across all
+ // clusters.
+ RootCoreDNSServiceNamespace string
+ RootCoreDNSServiceName string
+
+ // Enable oneway storage controllers
+ OnewayStorageControllers bool
+
+	// AutoCreateMCSPrefix lists the namespace prefixes; services in matching namespaces automatically have MCS resources created in the leaf clusters
+ AutoCreateMCSPrefix []string
+
+	// ReservedNamespaces are the protected namespaces, preventing Kosmos from deleting system resources
+ ReservedNamespaces []string
}
type KubernetesOptions struct {
@@ -25,8 +49,20 @@ type KubernetesOptions struct {
Burst int `json:"burst,omitempty" yaml:"burst,omitempty"`
}
-func NewOptions() *Options {
- return &Options{}
+func NewOptions() (*Options, error) {
+ var leaderElection componentbaseconfigv1alpha1.LeaderElectionConfiguration
+ componentbaseconfigv1alpha1.RecommendedDefaultLeaderElectionConfiguration(&leaderElection)
+
+ leaderElection.ResourceName = LeaderElectionResourceName
+ leaderElection.ResourceNamespace = LeaderElectionNamespace
+ leaderElection.ResourceLock = resourcelock.LeasesResourceLock
+
+ var opts Options
+ if err := componentbaseconfigv1alpha1.Convert_v1alpha1_LeaderElectionConfiguration_To_config_LeaderElectionConfiguration(&leaderElection, &opts.LeaderElection, nil); err != nil {
+ return nil, err
+ }
+
+ return &opts, nil
}
func (o *Options) AddFlags(flags *pflag.FlagSet) {
@@ -34,11 +70,17 @@ func (o *Options) AddFlags(flags *pflag.FlagSet) {
return
}
- flags.BoolVar(&o.LeaderElection.LeaderElect, "leader-elect", true, "Start a leader election client and gain leadership before executing the main loop. Enable this when running replicated components for high availability.")
- flags.StringVar(&o.LeaderElection.ResourceName, "leader-elect-resource-name", LeaderElectionResourceName, "The name of resource object that is used for locking during leader election.")
- flags.StringVar(&o.LeaderElection.ResourceNamespace, "leader-elect-resource-namespace", LeaderElectionNamespace, "The namespace of resource object that is used for locking during leader election.")
flags.Float32Var(&o.KubernetesOptions.QPS, "kube-qps", DefaultKubeQPS, "QPS to use while talking with kube-apiserver.")
flags.IntVar(&o.KubernetesOptions.Burst, "kube-burst", DefaultKubeBurst, "Burst to use while talking with kube-apiserver.")
flags.StringVar(&o.KubernetesOptions.KubeConfig, "kubeconfig", "", "Path for kubernetes kubeconfig file, if left blank, will use in cluster way.")
flags.StringVar(&o.KubernetesOptions.Master, "master", "", "Used to generate kubeconfig for downloading, if not specified, will use host in kubeconfig.")
+ flags.Int32Var(&o.ListenPort, "listen-port", 10250, "Listen port for requests from the kube-apiserver.")
+ flags.BoolVar(&o.DaemonSetController, "daemonset-controller", false, "Turn on or off daemonset controller.")
+ flags.BoolVar(&o.MultiClusterService, "multi-cluster-service", false, "Turn on or off mcs support.")
+ flags.StringVar(&o.RootCoreDNSServiceNamespace, "root-coredns-service-namespace", CoreDNSServiceNamespace, "The namespace of the CoreDNS service in the root cluster, used to locate the CoreDNS service when MultiClusterService is disabled.")
+ flags.StringVar(&o.RootCoreDNSServiceName, "root-coredns-service-name", CoreDNSServiceName, "The name of the CoreDNS service in the root cluster, used to locate the CoreDNS service when MultiClusterService is disabled.")
+ flags.BoolVar(&o.OnewayStorageControllers, "oneway-storage-controllers", false, "Turn on or off oneway storage controllers.")
+	flags.StringSliceVar(&o.AutoCreateMCSPrefix, "auto-mcs-prefix", []string{}, "Namespace prefixes for which services automatically have MCS resources created in the leaf clusters.")
+ flags.StringSliceVar(&o.ReservedNamespaces, "reserved-namespaces", []string{"kube-system"}, "The namespaces protected by Kosmos that the controller-manager will skip.")
+ options.BindLeaderElectionFlags(&o.LeaderElection, flags)
}
diff --git a/cmd/clustertree/cluster-manager/main.go b/cmd/clustertree/cluster-manager/main.go
index 3c440637d..f89385609 100644
--- a/cmd/clustertree/cluster-manager/main.go
+++ b/cmd/clustertree/cluster-manager/main.go
@@ -5,13 +5,17 @@ import (
apiserver "k8s.io/apiserver/pkg/server"
"k8s.io/component-base/cli"
+	"k8s.io/klog/v2"
"github.com/kosmos.io/kosmos/cmd/clustertree/cluster-manager/app"
)
func main() {
ctx := apiserver.SetupSignalContext()
- cmd := app.NewAgentCommand(ctx)
+ cmd, err := app.NewAgentCommand(ctx)
+	if err != nil {
+		klog.Errorf("failed to create agent command: %v", err)
+		os.Exit(1)
+	}
code := cli.Run(cmd)
os.Exit(code)
}
diff --git a/cmd/clusterlink/operator/app/operator.go b/cmd/operator/app/operator.go
similarity index 88%
rename from cmd/clusterlink/operator/app/operator.go
rename to cmd/operator/app/operator.go
index b1a9824c9..782e47581 100644
--- a/cmd/clusterlink/operator/app/operator.go
+++ b/cmd/operator/app/operator.go
@@ -16,8 +16,8 @@ import (
"k8s.io/klog/v2"
ctrl "sigs.k8s.io/controller-runtime"
- "github.com/kosmos.io/kosmos/cmd/clusterlink/operator/app/options"
- "github.com/kosmos.io/kosmos/pkg/clusterlink/operator"
+ "github.com/kosmos.io/kosmos/cmd/operator/app/options"
+ "github.com/kosmos.io/kosmos/pkg/operator"
"github.com/kosmos.io/kosmos/pkg/scheme"
"github.com/kosmos.io/kosmos/pkg/sharedcli"
"github.com/kosmos.io/kosmos/pkg/sharedcli/klogflag"
@@ -29,14 +29,14 @@ func NewOperatorCommand(ctx context.Context) *cobra.Command {
opts := options.NewOptions()
cmd := &cobra.Command{
- Use: "clusterlink-operator",
- Long: `Configure the network based on clusternodes and clusters`,
+ Use: "kosmos-operator",
+ Long: `Deploy Kosmos components according to the cluster`,
RunE: func(cmd *cobra.Command, args []string) error {
// validate options
if errs := opts.Validate(); len(errs) != 0 {
return errs.ToAggregate()
}
- if err := run(ctx, opts); err != nil {
+ if err := Run(ctx, opts); err != nil {
return err
}
return nil
@@ -70,12 +70,12 @@ func NewOperatorCommand(ctx context.Context) *cobra.Command {
return cmd
}
-func run(ctx context.Context, opts *options.Options) error {
+func Run(ctx context.Context, opts *options.Options) error {
restConfig, err := clientcmd.BuildConfigFromFlags("", opts.KubeConfig)
if err != nil {
return fmt.Errorf("error building kubeconfig: %s", err.Error())
}
- clientset, err := kubernetes.NewForConfig(restConfig)
+ clientSet, err := kubernetes.NewForConfig(restConfig)
if err != nil {
return fmt.Errorf("error get kubeclient: %v", err)
}
@@ -88,7 +88,7 @@ func run(ctx context.Context, opts *options.Options) error {
}
} else {
// try get kubeconfig from configmap
- cm, err := clientset.CoreV1().ConfigMaps(utils.DefaultNamespace).Get(context.Background(), opts.ExternalKubeConfigName, metav1.GetOptions{})
+ cm, err := clientSet.CoreV1().ConfigMaps(utils.DefaultNamespace).Get(context.Background(), opts.ExternalKubeConfigName, metav1.GetOptions{})
if err != nil {
return fmt.Errorf("failed to configmap %s: %v", opts.ExternalKubeConfigName, err)
}
diff --git a/cmd/clusterlink/operator/app/options/options.go b/cmd/operator/app/options/options.go
similarity index 100%
rename from cmd/clusterlink/operator/app/options/options.go
rename to cmd/operator/app/options/options.go
diff --git a/cmd/clusterlink/operator/app/options/validation.go b/cmd/operator/app/options/validation.go
similarity index 100%
rename from cmd/clusterlink/operator/app/options/validation.go
rename to cmd/operator/app/options/validation.go
diff --git a/cmd/clusterlink/operator/main.go b/cmd/operator/main.go
similarity index 79%
rename from cmd/clusterlink/operator/main.go
rename to cmd/operator/main.go
index 89666ee02..88f6d1761 100644
--- a/cmd/clusterlink/operator/main.go
+++ b/cmd/operator/main.go
@@ -6,7 +6,7 @@ import (
apiserver "k8s.io/apiserver/pkg/server"
"k8s.io/component-base/cli"
- "github.com/kosmos.io/kosmos/cmd/clusterlink/operator/app"
+ "github.com/kosmos.io/kosmos/cmd/operator/app"
)
func main() {
diff --git a/deploy/clusterlink-agent-proxy-configmap.yml b/deploy/clusterlink-agent-proxy-configmap.yml
index e96914028..068f4ad50 100644
--- a/deploy/clusterlink-agent-proxy-configmap.yml
+++ b/deploy/clusterlink-agent-proxy-configmap.yml
@@ -2,7 +2,7 @@ apiVersion: v1
kind: ConfigMap
metadata:
name: clusterlink-agent-proxy
- namespace: clusterlink-system
+ namespace: kosmos-system
data:
kubeconfig: |
apiVersion: v1
diff --git a/deploy/clusterlink-agent.yaml b/deploy/clusterlink-agent.yaml
index 58040410a..6ab502061 100644
--- a/deploy/clusterlink-agent.yaml
+++ b/deploy/clusterlink-agent.yaml
@@ -2,7 +2,7 @@ apiVersion: apps/v1
kind: DaemonSet
metadata:
name: clusterlink-agent
- namespace: clusterlink-system
+ namespace: kosmos-system
spec:
selector:
matchLabels:
@@ -28,6 +28,7 @@ spec:
command:
- clusterlink-agent
- -kubeconfig=/etc/clusterlink/kubeconfig
+ - --v=4
env:
- name: CLUSTER_NAME
value: ""
@@ -46,6 +47,12 @@ spec:
- mountPath: /etc/clusterlink/kubeconfig
name: proxy-config
readOnly: true
+ - mountPath: /run/xtables.lock
+ name: iptableslock
+ readOnly: false
+ - mountPath: /lib/modules
+ name: lib-modules
+ readOnly: true
terminationGracePeriodSeconds: 30
securityContext:
privileged: true
@@ -55,3 +62,12 @@ spec:
configMap:
defaultMode: 420
name: proxy-config
+ - hostPath:
+ path: /run/xtables.lock
+ type: FileOrCreate
+ name: iptableslock
+ - name: lib-modules
+ hostPath:
+ path: /lib/modules
diff --git a/deploy/clusterlink-controller-manager.yml b/deploy/clusterlink-controller-manager.yml
index ac4e7c306..c8e5c014e 100644
--- a/deploy/clusterlink-controller-manager.yml
+++ b/deploy/clusterlink-controller-manager.yml
@@ -2,13 +2,13 @@ apiVersion: v1
kind: ServiceAccount
metadata:
name: clusterlink-controller-manager
- namespace: clusterlink-system
+ namespace: kosmos-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: clusterlink-controller-manager
- namespace: clusterlink-system
+ namespace: kosmos-system
labels:
app: clusterlink-controller-manager
spec:
diff --git a/deploy/clusterlink-datapanel-rbac.yml b/deploy/clusterlink-datapanel-rbac.yml
index aeedf7d50..d4aa77a1a 100644
--- a/deploy/clusterlink-datapanel-rbac.yml
+++ b/deploy/clusterlink-datapanel-rbac.yml
@@ -20,7 +20,7 @@ roleRef:
subjects:
- kind: ServiceAccount
name: clusterlink-controller-manager
- namespace: clusterlink-system
+ namespace: kosmos-system
- kind: ServiceAccount
name: clusterlink-operator
- namespace: clusterlink-system
+ namespace: kosmos-system
diff --git a/deploy/clusterlink-elector-rbac.yml b/deploy/clusterlink-elector-rbac.yml
index 1bc2dde9b..771eaf210 100644
--- a/deploy/clusterlink-elector-rbac.yml
+++ b/deploy/clusterlink-elector-rbac.yml
@@ -20,4 +20,4 @@ roleRef:
subjects:
- kind: ServiceAccount
name: clusterlink-elector
- namespace: clusterlink-system
+ namespace: kosmos-system
diff --git a/deploy/clusterlink-elector.yml b/deploy/clusterlink-elector.yml
index cee28626f..42840cafe 100644
--- a/deploy/clusterlink-elector.yml
+++ b/deploy/clusterlink-elector.yml
@@ -2,7 +2,7 @@ apiVersion: v1
kind: ServiceAccount
metadata:
name: clusterlink-elector
- namespace: clusterlink-system
+ namespace: kosmos-system
---
apiVersion: apps/v1
kind: Deployment
@@ -10,7 +10,7 @@ metadata:
labels:
app: elector
name: clusterlink-elector
- namespace: clusterlink-system
+ namespace: kosmos-system
spec:
replicas: 2
selector:
@@ -21,6 +21,23 @@ spec:
labels:
app: elector
spec:
+      affinity:
+        nodeAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+            nodeSelectorTerms:
+            - matchExpressions:
+              - key: kosmos.io/exclude
+                operator: DoesNotExist
+        podAntiAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+          - labelSelector:
+              matchExpressions:
+              - key: app
+                operator: In
+                values:
+                - elector
+            namespaces:
+            - kosmos-system
+            topologyKey: kubernetes.io/hostname
containers:
- command:
- clusterlink-elector
diff --git a/deploy/clusterlink-namespace.yml b/deploy/clusterlink-namespace.yml
index 0cd622aec..64ebe0f7d 100644
--- a/deploy/clusterlink-namespace.yml
+++ b/deploy/clusterlink-namespace.yml
@@ -1,4 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
- name: clusterlink-system
\ No newline at end of file
+ name: kosmos-system
\ No newline at end of file
diff --git a/deploy/clusterlink-network-manager.yml b/deploy/clusterlink-network-manager.yml
index c08fa334b..fd2d4e630 100644
--- a/deploy/clusterlink-network-manager.yml
+++ b/deploy/clusterlink-network-manager.yml
@@ -2,7 +2,7 @@ apiVersion: v1
kind: ServiceAccount
metadata:
name: clusterlink-network-manager
- namespace: clusterlink-system
+ namespace: kosmos-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
@@ -26,13 +26,13 @@ roleRef:
subjects:
- kind: ServiceAccount
name: clusterlink-network-manager
- namespace: clusterlink-system
+ namespace: kosmos-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: clusterlink-network-manager
- namespace: clusterlink-system
+ namespace: kosmos-system
labels:
app: clusterlink-network-manager
spec:
@@ -56,7 +56,7 @@ spec:
values:
- clusterlink-network-manager
namespaces:
- - clusterlink-system
+ - kosmos-system
topologyKey: kubernetes.io/hostname
containers:
- name: manager
@@ -64,7 +64,7 @@ spec:
imagePullPolicy: IfNotPresent
command:
- clusterlink-network-manager
- - v=4
+ - --v=4
resources:
limits:
memory: 500Mi
diff --git a/deploy/clusterlink-proxy.yml b/deploy/clusterlink-proxy.yml
index c380eb9bc..453a293b0 100644
--- a/deploy/clusterlink-proxy.yml
+++ b/deploy/clusterlink-proxy.yml
@@ -2,7 +2,7 @@ apiVersion: v1
kind: Service
metadata:
name: clusterlink-proxy-service
- namespace: clusterlink-system
+ namespace: kosmos-system
spec:
selector:
app: clusterlink-proxy
@@ -17,7 +17,7 @@ apiVersion: apps/v1
kind: Deployment
metadata:
name: clusterlink-proxy
- namespace: clusterlink-system
+ namespace: kosmos-system
labels:
app: clusterlink-proxy
spec:
diff --git a/deploy/clustertree-cluster-manager.yml b/deploy/clustertree-cluster-manager.yml
new file mode 100644
index 000000000..03d2dad30
--- /dev/null
+++ b/deploy/clustertree-cluster-manager.yml
@@ -0,0 +1,87 @@
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: clustertree
+ namespace: kosmos-system
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: clustertree
+rules:
+  - apiGroups: ["*"]
+    resources: ["*"]
+    verbs: ["*"]
+  - nonResourceURLs: ["*"]
+    verbs: ["get"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+ name: clustertree
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: clustertree
+subjects:
+ - kind: ServiceAccount
+ name: clustertree
+ namespace: kosmos-system
+---
+apiVersion: v1
+kind: Secret
+metadata:
+ name: clustertree-cluster-manager
+ namespace: kosmos-system
+type: Opaque
+data:
+ cert.pem: __CERT__
+ key.pem: __KEY__
+
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: clustertree-cluster-manager
+ namespace: kosmos-system
+ labels:
+ app: clustertree-cluster-manager
+spec:
+ replicas: 1
+ selector:
+ matchLabels:
+ app: clustertree-cluster-manager
+ template:
+ metadata:
+ labels:
+ app: clustertree-cluster-manager
+ spec:
+ serviceAccountName: clustertree
+ containers:
+ - name: clustertree-cluster-manager
+ image: ghcr.io/kosmos-io/clustertree-cluster-manager:__VERSION__
+ imagePullPolicy: IfNotPresent
+ env:
+ - name: APISERVER_CERT_LOCATION
+ value: /etc/cluster-tree/cert/cert.pem
+ - name: APISERVER_KEY_LOCATION
+ value: /etc/cluster-tree/cert/key.pem
+ - name: LEAF_NODE_IP
+ valueFrom:
+ fieldRef:
+ fieldPath: status.podIP
+ - name: PREFERRED-ADDRESS-TYPE
+ value: InternalDNS
+ volumeMounts:
+ - name: credentials
+ mountPath: "/etc/cluster-tree/cert"
+ readOnly: true
+ command:
+ - clustertree-cluster-manager
+ - --multi-cluster-service=true
+ - --auto-mcs-prefix=kosmos-e2e
+ - --v=4
+ volumes:
+ - name: credentials
+ secret:
+ secretName: clustertree-cluster-manager
diff --git a/deploy/clustertree-knode-controllers.yml b/deploy/clustertree-knode-controllers.yml
deleted file mode 100644
index e178fbaf2..000000000
--- a/deploy/clustertree-knode-controllers.yml
+++ /dev/null
@@ -1,73 +0,0 @@
-apiVersion: v1
-kind: ServiceAccount
-metadata:
- name: clustertree-cluster-manager
- namespace: kosmos-system
----
-apiVersion: v1
-data:
- Kubeconfig: |
- apiVersion: v1
- clusters:
- - cluster:
- insecure-skip-tls-verify: true
- server: https://[2409:8720:4a00::1:644a:5f13]:6443
- name: cluster.local
- contexts:
- - context:
- cluster: cluster.local
- user: kubernetes-admin
- name: kubernetes-admin@cluster.local
- current-context: kubernetes-admin@cluster.local
- kind: Config
- preferences: {}
- users:
- - name: kubernetes-admin
- user:
- client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJS1ltanowaHhvU0F3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TXpBMk1UVXhNRFF4TWpOYUZ3MHpNekEyTVRJeE1EUXhNalZhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTBJS0cvQnVXRXJSL0w3QmEKWExmYUZpbVovRTY5VlpTMjNyd2JORWk4U2pSUEs4SE41VU53UXlBNEIwVG1WTkxLYU5vWWVOd2pmWDRWYlVJbwpMY29uU2dudGttamRuc1g1SURDRWY2UjFGT3pzVXFMbFR3aXJ4cjlSNGw0QjdrQi95T1ordFlhQzNUaWFvNGc4Ck9LdkVaU0tWblRvVUptcU5CSHRCZkoxVFU0VE12cmFoaTN2b2YwWWpKR0o4ck1PYmVtWERuOTU1S1JucWlJTGYKZ1ZZY3pOL0pDRldvbEoybkNNeGhMcnNWMlZQakNnN2s3UWZMNDJEUlBZZS9icXN1S01Mc0VoTzVPOTh5Y1JnaApZNkhnV2thazc3U3RmOHNoUnFyRE5vTjhDbnp5S0wrbmdMTEJYUW9tUDkxZHNMOVlEU2FrSkI2SlQ3Ly9xa2RDCnNtVXROUUlEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JRclhiUHhLdDdCSWVEQk9MZG43WkdRcVlVUwpVakFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBZUNROXJ6WGdmd2JaaEUzeXA3bmNxQjlzcFdXdXpjQUFkRDVFCmtXSU1jZ3Rwc1RxTUR5YlVhOUp6WTdRNExtSEI4M3psYTRJWmJNVnFDVTdXTlZFLzZQYU5rNGUzU1Q1NERqbEYKUmFRTjBWMHdQTitIRldOZTkwRlFiSCtYMGp2b3ZEU3JIL2t4WGdLWnFNbTlHMWZRRW9jUG1wWWdpdWUvTi81VwpSdkI4Y1B2RlpMTjVhNG9Wa3NjUEp5Z2dRbzFDbU5PNmFNQkFaYlVmZjR4QjVuL0t2eklDT0pCTTJiK3R3VVNkCmpzcjU2ZkxCYk5MbUNaTWpURDk3SnRrMmgvMmU0dHFUY3dON0RsK2ZpRTdSYm9VMGJmekxBVjFUQXhLbHI2NGQKdFBPbXBDY0xRV0lVWnFIMzNMM3I5VHRBVUtsZHB2SjlvMlo3eEkzNW5XSTVZdWxGOFE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
- client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBMElLRy9CdVdFclIvTDdCYVhMZmFGaW1aL0U2OVZaUzIzcndiTkVpOFNqUlBLOEhOCjVVTndReUE0QjBUbVZOTEthTm9ZZU53amZYNFZiVUlvTGNvblNnbnRrbWpkbnNYNUlEQ0VmNlIxRk96c1VxTGwKVHdpcnhyOVI0bDRCN2tCL3lPWit0WWFDM1RpYW80ZzhPS3ZFWlNLVm5Ub1VKbXFOQkh0QmZKMVRVNFRNdnJhaAppM3ZvZjBZakpHSjhyTU9iZW1YRG45NTVLUm5xaUlMZmdWWWN6Ti9KQ0ZXb2xKMm5DTXhoTHJzVjJWUGpDZzdrCjdRZkw0MkRSUFllL2Jxc3VLTUxzRWhPNU85OHljUmdoWTZIZ1drYWs3N1N0ZjhzaFJxckROb044Q256eUtMK24KZ0xMQlhRb21QOTFkc0w5WURTYWtKQjZKVDcvL3FrZENzbVV0TlFJREFRQUJBb0lCQUd0UVlxenFmY2pPd1E4SQpVdG1aZmxNZHdqVUxTWUw4Y1VvZHdscWNmTndzSS9zL1dmci9SSTRuek81ZzFiTWVjaktZM1RPSENYVVRLVy84Ck5yV3FiNkk1amQ1bXZubHpKdzhjS1hXUWJQb0NIbmRCZzRlenpNVVR2czhrMXhXS2VMb3JkMWR5RFhSU0o3UzIKNzFlemYvY1ZYNjkyTHR5K3hpbGlUb2dXYU1aNGlsb2hMWEFEanlIY0JMUFR0RjdqbHI5ajVFZXlSOTJYcENMbAoxaTY1TGxNeE11aE40eUVNZS9uUWY2Wi81NXJQNkVqNTNud2VkTEppTFRhNjJDbGJSWlVPV3BtdEtncGVsYVQ3Cis3TlppQ0VadWNaSm0zNnp1Ukw1eEpROEYyVUoyUkduVjlyeFdKcm91bm5uT3E3TzRpYnlaSzcvT0k4V0UvTW4KUVNHRVNrRUNnWUVBMVRxcEgxcE1YK09LMjdPbWhDYkJXVFhrZ25WanRDSEZtRk5MQnpPMUIrcWZIT2EvemJ5bgpOZHlYVE5MOHdhaHBTWlY3WGdaV0R2VDU1M3p4MjlsbHZXU3JlZlpZRG1XSDhYaXRRSXliUlA0ei91K3NoN0hiCjQrclAvMjNPbW9aM3FwaHFvMDRmTTRyY3h2NjZSYThzS0N6VmpXVUtwang2K0JtSW11bjVobmtDZ1lFQStsV0kKRmRGT0pFYnlNWmt6Q1Vib2xBQkUwQlV4bjZTUG5TRVBaaDZyU2hISVo5QUpLVi9oc3dTOHhaMk1wV0ZMOTN3ZQpoMHZBV1h3Z0IweTBybUM5WVhaekdmSkR6RXZQSjg1blliK0lzcTkvN3VnUmZCRUF2a0cxbENheExoSmt4YW1lCko0Q3FSQWc4YTJTazA4NFRISVlhQlZzQU44bjZsUU0zNS9DeUhaMENnWUFmcHBWMEVmTkVTSUpVR2xhZFJ5TnMKR3BQUXlad0RJUUF6bkNtRzZDWDNCdHlYYmFrSzRQWHhDTTFzbWVUcTJoVEcxMmw0aTNnNndDSllPak9zYnBpcgpoRVh2MUtFOWdkU3NBejIwVnlxMUV3YWswTzdMTlp0dU9XeW1mYVl0U2NoNWlpWktGMDZLV0JKdGQySXU5ZEdZCkpRK043WEduTzFNRmdNVEdPZlRRQVFLQmdRQ0NKV1dTc2phRjlieUV2TGtqNFpHWklHcW1JOTZndU5WUlE1YlYKNkt2MDNqbnFmdVhFZE96S1BYUkc2Um51QVIrVmt4bnNEUjM3WitUZTVxb28zbktXOFJYMkwxWEFLTW1TVUdTLwpGT3prdVFreUU4VERVN09uTmxKSXE3VUIxdDQ5UlduTDc4Q1ZqaEtiWXIrdXZqeUJYOWEzWWhCQzhPY3VBWFpYClIzUFNvUUtCZ0ZyalIxNmdORWVmdDRqZ
EdCd29FNXJFR1RtMm1GQlVhSitrcmtNKzFWbEhoazJPa1ZySSt5a1IKbURFMnlPc2lkQXhYazMzVThKT3Rva2ZVc0gzM1pQWXZNa29pMXdETUhBNVFOTWUvNkV4U0dGczNuRytzYkt4egoyNVM3bnpPUVVPSEFnOUptK0VvWlVzNnE4ck4xMy84eEpDRnptVFNnakhVZWUvTXpGRWxlCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
-kind: ConfigMap
-metadata:
- name: host-kubeconfig
- namespace: kosmos-system
----
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: clustertree-cluster-manager
- namespace: kosmos-system
- labels:
- app: clustertree-cluster-manager
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: clustertree-cluster-manager
- template:
- metadata:
- labels:
- app: clustertree-cluster-manager
- spec:
- containers:
- - name: clustertree-cluster-manager
- image: ghcr.io/kosmos-io/clustertree-cluster-manager:__VERSION__
- imagePullPolicy: Always
- command:
- - clustertree-cluster-manager
- - --kube-api-qps=500
- - --kube-api-burst=1000
- - --kubeconfig=/etc/kube/config
- - --leader-elect=false
- volumeMounts:
- - mountPath: /etc/kube
- name: config-volume
- readOnly: true
- serviceAccountName: clustertree-cluster-manager
- volumes:
- - configMap:
- defaultMode: 420
- items:
- - key: kubeconfig
- path: config
- name: host-kubeconfig
- name: config-volume
\ No newline at end of file
diff --git a/deploy/crds/kosmos.io_clusters.yaml b/deploy/crds/kosmos.io_clusters.yaml
index 2666514ba..cbf674914 100644
--- a/deploy/crds/kosmos.io_clusters.yaml
+++ b/deploy/crds/kosmos.io_clusters.yaml
@@ -16,10 +16,10 @@ spec:
scope: Cluster
versions:
- additionalPrinterColumns:
- - jsonPath: .spec.networkType
+ - jsonPath: .spec.clusterLinkOptions.networkType
name: NETWORK_TYPE
type: string
- - jsonPath: .spec.ipFamily
+ - jsonPath: .spec.clusterLinkOptions.ipFamily
name: IP_FAMILY
type: string
name: v1alpha1
@@ -41,88 +41,234 @@ spec:
spec:
description: Spec is the specification for the behaviour of the cluster.
properties:
- bridgeCIDRs:
- default:
- ip: 220.0.0.0/8
- ip6: 9470::/16
+ clusterLinkOptions:
properties:
- ip:
+ autodetectionMethod:
type: string
- ip6:
+ bridgeCIDRs:
+ default:
+ ip: 220.0.0.0/8
+ ip6: 9470::/16
+ properties:
+ ip:
+ type: string
+ ip6:
+ type: string
+ required:
+ - ip
+ - ip6
+ type: object
+ cni:
+ default: calico
type: string
- required:
- - ip
- - ip6
+ defaultNICName:
+ default: '*'
+ type: string
+ enable:
+ default: true
+ type: boolean
+ globalCIDRsMap:
+ additionalProperties:
+ type: string
+ type: object
+ ipFamily:
+ default: all
+ type: string
+ localCIDRs:
+ default:
+ ip: 210.0.0.0/8
+ ip6: 9480::/16
+ properties:
+ ip:
+ type: string
+ ip6:
+ type: string
+ required:
+ - ip
+ - ip6
+ type: object
+ networkType:
+ default: p2p
+ enum:
+ - p2p
+ - gateway
+ type: string
+ nicNodeNames:
+ items:
+ properties:
+ interfaceName:
+ type: string
+ nodeName:
+ items:
+ type: string
+ type: array
+ required:
+ - interfaceName
+ - nodeName
+ type: object
+ type: array
+ useIPPool:
+ default: false
+ type: boolean
type: object
- cni:
- default: calico
- type: string
- defaultNICName:
- default: '*'
- type: string
- globalCIDRsMap:
- additionalProperties:
- type: string
+ clusterTreeOptions:
+ properties:
+ enable:
+ default: true
+ type: boolean
+ leafModels:
+                    description: LeafModels provides an API to arrange member cluster
+                      nodes into one or more pretended leaf nodes by given rules
+ items:
+ properties:
+ labels:
+ additionalProperties:
+ type: string
+                        description: Labels that will be set on the pretended
+                          Node
+ type: object
+ leafNodeName:
+                        description: LeafNodeName defines the leaf node name. If
+                          nil or empty, the name will be generated by the controller
+                          and filled in the cluster link status
+ type: string
+ nodeSelector:
+ description: NodeSelector is a selector to select member
+                          cluster nodes that act as a pretended leaf node in clusterTree.
+ properties:
+ labelSelector:
+ description: LabelSelector is a filter to select member
+                                cluster nodes that act as a pretended leaf node in
+                                clusterTree by labels. It takes effect in second-level
+                                scheduling when pods are created in member clusters.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list of label
+ selector requirements. The requirements are ANDed.
+ items:
+ description: A label selector requirement is a
+ selector that contains values, a key, and an
+ operator that relates the key and values.
+ properties:
+ key:
+ description: key is the label key that the
+ selector applies to.
+ type: string
+ operator:
+ description: operator represents a key's relationship
+ to a set of values. Valid operators are
+ In, NotIn, Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array of string
+ values. If the operator is In or NotIn,
+ the values array must be non-empty. If the
+ operator is Exists or DoesNotExist, the
+ values array must be empty. This array is
+ replaced during a strategic merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator is "In",
+ and the values array contains only "value". The
+ requirements are ANDed.
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ nodeName:
+                              description: NodeName is the origin node name in the
+                                member cluster
+ type: string
+ type: object
+ taints:
+                        description: Taints attached to the pretended leaf Node.
+                          If nil or empty, the controller will set the default no-schedule
+ taint
+ items:
+ description: The node this Taint is attached to has the
+ "effect" on any pod that does not tolerate the Taint.
+ properties:
+ effect:
+ description: Required. The effect of the taint on
+ pods that do not tolerate the taint. Valid effects
+ are NoSchedule, PreferNoSchedule and NoExecute.
+ type: string
+ key:
+ description: Required. The taint key to be applied
+ to a node.
+ type: string
+ timeAdded:
+ description: TimeAdded represents the time at which
+ the taint was added. It is only written for NoExecute
+ taints.
+ format: date-time
+ type: string
+ value:
+ description: The taint value corresponding to the
+ taint key.
+ type: string
+ required:
+ - effect
+ - key
+ type: object
+ type: array
+ type: object
+ type: array
type: object
imageRepository:
type: string
- ipFamily:
- default: all
- type: string
kubeconfig:
format: byte
type: string
- localCIDRs:
- default:
- ip: 210.0.0.0/8
- ip6: 9480::/16
- properties:
- ip:
- type: string
- ip6:
- type: string
- required:
- - ip
- - ip6
- type: object
namespace:
- default: clusterlink-system
+ default: kosmos-system
type: string
- networkType:
- default: p2p
- enum:
- - p2p
- - gateway
- type: string
- nicNodeNames:
- items:
- properties:
- interfaceName:
- type: string
- nodeName:
- items:
- type: string
- type: array
- required:
- - interfaceName
- - nodeName
- type: object
- type: array
- useIPPool:
- default: false
- type: boolean
type: object
status:
description: Status describes the current status of a cluster.
properties:
- podCIDRs:
- items:
- type: string
- type: array
- serviceCIDRs:
- items:
- type: string
- type: array
+ clusterLinkStatus:
+                description: ClusterLinkStatus contains the cluster network information
+ properties:
+ podCIDRs:
+ items:
+ type: string
+ type: array
+ serviceCIDRs:
+ items:
+ type: string
+ type: array
+ type: object
+ clusterTreeStatus:
+                description: ClusterTreeStatus contains the member cluster leafNode
+                  status
+ properties:
+ leafNodeItems:
+                    description: LeafNodeItems represents the list of leaf node items
+                      calculated in each member cluster.
+ items:
+ properties:
+ leafNodeName:
+ description: LeafNodeName represents the leaf node name
+                            generated by the controller. The suggested name format
+                            is cluster-shortLabel-number, e.g. member-az1-1
+ type: string
+ required:
+ - leafNodeName
+ type: object
+ type: array
+ type: object
type: object
required:
- spec
diff --git a/deploy/crds/kosmos.io_knodes.yaml b/deploy/crds/kosmos.io_knodes.yaml
index 193826fd1..647c18313 100644
--- a/deploy/crds/kosmos.io_knodes.yaml
+++ b/deploy/crds/kosmos.io_knodes.yaml
@@ -35,6 +35,9 @@ spec:
properties:
disableTaint:
type: boolean
+ kubeAPIBurst:
+ default: 100
+ type: integer
kubeconfig:
format: byte
type: string
diff --git a/deploy/crds/kosmos.io_podconversions.yaml b/deploy/crds/kosmos.io_podconversions.yaml
new file mode 100644
index 000000000..e9318211c
--- /dev/null
+++ b/deploy/crds/kosmos.io_podconversions.yaml
@@ -0,0 +1,1255 @@
+---
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+ annotations:
+ controller-gen.kubebuilder.io/version: v0.11.0
+ creationTimestamp: null
+ name: podconversions.kosmos.io
+spec:
+ group: kosmos.io
+ names:
+ kind: PodConversion
+ listKind: PodConversionList
+ plural: podconversions
+ shortNames:
+ - pc
+ - pcs
+ singular: podconversion
+ scope: Namespaced
+ versions:
+ - name: v1alpha1
+ schema:
+ openAPIV3Schema:
+ properties:
+ apiVersion:
+ description: 'APIVersion defines the versioned schema of this representation
+ of an object. Servers should convert recognized schemas to the latest
+ internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
+ type: string
+ kind:
+ description: 'Kind is a string value representing the REST resource this
+ object represents. Servers may infer this from the endpoint the client
+ submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
+ type: string
+ metadata:
+ type: object
+ spec:
+ description: Spec is the specification for the behaviour of the podConversion.
+ properties:
+ converters:
+                description: Converters are a set of converters used to convert
+                  a pod when it is synced from the root cluster to a leaf cluster,
+                  so the pod can be scheduled in the leaf cluster
+ properties:
+ affinityConverter:
+                    description: AffinityConverter is used to modify the pod's Affinity
+                      when the pod is synced to the leaf cluster
+ properties:
+ affinity:
+ description: Affinity is a group of affinity scheduling rules.
+ properties:
+ nodeAffinity:
+ description: Describes node affinity scheduling rules
+ for the pod.
+ properties:
+ preferredDuringSchedulingIgnoredDuringExecution:
+ description: The scheduler will prefer to schedule
+ pods to nodes that satisfy the affinity expressions
+ specified by this field, but it may choose a node
+ that violates one or more of the expressions. The
+ node that is most preferred is the one with the
+ greatest sum of weights, i.e. for each node that
+ meets all of the scheduling requirements (resource
+ request, requiredDuringScheduling affinity expressions,
+ etc.), compute a sum by iterating through the elements
+ of this field and adding "weight" to the sum if
+ the node matches the corresponding matchExpressions;
+ the node(s) with the highest sum are the most preferred.
+ items:
+ description: An empty preferred scheduling term
+ matches all objects with implicit weight 0 (i.e.
+ it's a no-op). A null preferred scheduling term
+ matches no objects (i.e. is also a no-op).
+ properties:
+ preference:
+ description: A node selector term, associated
+ with the corresponding weight.
+ properties:
+ matchExpressions:
+ description: A list of node selector requirements
+ by node's labels.
+ items:
+ description: A node selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: The label key that the
+ selector applies to.
+ type: string
+ operator:
+ description: Represents a key's relationship
+ to a set of values. Valid operators
+ are In, NotIn, Exists, DoesNotExist.
+ Gt, and Lt.
+ type: string
+ values:
+ description: An array of string values.
+ If the operator is In or NotIn,
+ the values array must be non-empty.
+ If the operator is Exists or DoesNotExist,
+ the values array must be empty.
+ If the operator is Gt or Lt, the
+ values array must have a single
+ element, which will be interpreted
+ as an integer. This array is replaced
+ during a strategic merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchFields:
+ description: A list of node selector requirements
+ by node's fields.
+ items:
+ description: A node selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: The label key that the
+ selector applies to.
+ type: string
+ operator:
+ description: Represents a key's relationship
+ to a set of values. Valid operators
+ are In, NotIn, Exists, DoesNotExist.
+ Gt, and Lt.
+ type: string
+ values:
+ description: An array of string values.
+ If the operator is In or NotIn,
+ the values array must be non-empty.
+ If the operator is Exists or DoesNotExist,
+ the values array must be empty.
+ If the operator is Gt or Lt, the
+ values array must have a single
+ element, which will be interpreted
+ as an integer. This array is replaced
+ during a strategic merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ type: object
+ x-kubernetes-map-type: atomic
+ weight:
+ description: Weight associated with matching
+ the corresponding nodeSelectorTerm, in the
+ range 1-100.
+ format: int32
+ type: integer
+ required:
+ - preference
+ - weight
+ type: object
+ type: array
+ requiredDuringSchedulingIgnoredDuringExecution:
+ description: If the affinity requirements specified
+ by this field are not met at scheduling time, the
+ pod will not be scheduled onto the node. If the
+ affinity requirements specified by this field cease
+ to be met at some point during pod execution (e.g.
+ due to an update), the system may or may not try
+ to eventually evict the pod from its node.
+ properties:
+ nodeSelectorTerms:
+ description: Required. A list of node selector
+ terms. The terms are ORed.
+ items:
+ description: A null or empty node selector term
+ matches no objects. The requirements of them
+ are ANDed. The TopologySelectorTerm type implements
+ a subset of the NodeSelectorTerm.
+ properties:
+ matchExpressions:
+ description: A list of node selector requirements
+ by node's labels.
+ items:
+ description: A node selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: The label key that the
+ selector applies to.
+ type: string
+ operator:
+ description: Represents a key's relationship
+ to a set of values. Valid operators
+ are In, NotIn, Exists, DoesNotExist.
+ Gt, and Lt.
+ type: string
+ values:
+ description: An array of string values.
+ If the operator is In or NotIn,
+ the values array must be non-empty.
+ If the operator is Exists or DoesNotExist,
+ the values array must be empty.
+ If the operator is Gt or Lt, the
+ values array must have a single
+ element, which will be interpreted
+ as an integer. This array is replaced
+ during a strategic merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchFields:
+ description: A list of node selector requirements
+ by node's fields.
+ items:
+ description: A node selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: The label key that the
+ selector applies to.
+ type: string
+ operator:
+ description: Represents a key's relationship
+ to a set of values. Valid operators
+ are In, NotIn, Exists, DoesNotExist.
+ Gt, and Lt.
+ type: string
+ values:
+ description: An array of string values.
+ If the operator is In or NotIn,
+ the values array must be non-empty.
+ If the operator is Exists or DoesNotExist,
+ the values array must be empty.
+ If the operator is Gt or Lt, the
+ values array must have a single
+ element, which will be interpreted
+ as an integer. This array is replaced
+ during a strategic merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ type: object
+ x-kubernetes-map-type: atomic
+ type: array
+ required:
+ - nodeSelectorTerms
+ type: object
+ x-kubernetes-map-type: atomic
+ type: object
+ podAffinity:
+ description: Describes pod affinity scheduling rules (e.g.
+ co-locate this pod in the same node, zone, etc. as some
+ other pod(s)).
+ properties:
+ preferredDuringSchedulingIgnoredDuringExecution:
+ description: The scheduler will prefer to schedule
+ pods to nodes that satisfy the affinity expressions
+ specified by this field, but it may choose a node
+ that violates one or more of the expressions. The
+ node that is most preferred is the one with the
+ greatest sum of weights, i.e. for each node that
+ meets all of the scheduling requirements (resource
+ request, requiredDuringScheduling affinity expressions,
+ etc.), compute a sum by iterating through the elements
+ of this field and adding "weight" to the sum if
+ the node has pods which matches the corresponding
+ podAffinityTerm; the node(s) with the highest sum
+ are the most preferred.
+ items:
+ description: The weights of all of the matched WeightedPodAffinityTerm
+ fields are added per-node to find the most preferred
+ node(s)
+ properties:
+ podAffinityTerm:
+ description: Required. A pod affinity term,
+ associated with the corresponding weight.
+ properties:
+ labelSelector:
+ description: A label query over a set of
+ resources, in this case pods.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list
+ of label selector requirements. The
+ requirements are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label
+ key that the selector applies
+ to.
+ type: string
+ operator:
+ description: operator represents
+ a key's relationship to a set
+ of values. Valid operators are
+ In, NotIn, Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array
+ of string values. If the operator
+ is In or NotIn, the values array
+ must be non-empty. If the operator
+ is Exists or DoesNotExist, the
+ values array must be empty.
+ This array is replaced during
+ a strategic merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of
+ {key,value} pairs. A single {key,value}
+ in the matchLabels map is equivalent
+ to an element of matchExpressions,
+ whose key field is "key", the operator
+ is "In", and the values array contains
+ only "value". The requirements are
+ ANDed.
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ namespaceSelector:
+ description: A label query over the set
+ of namespaces that the term applies to.
+ The term is applied to the union of the
+ namespaces selected by this field and
+ the ones listed in the namespaces field.
+ null selector and null or empty namespaces
+ list means "this pod's namespace". An
+ empty selector ({}) matches all namespaces.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list
+ of label selector requirements. The
+ requirements are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label
+ key that the selector applies
+ to.
+ type: string
+ operator:
+ description: operator represents
+ a key's relationship to a set
+ of values. Valid operators are
+ In, NotIn, Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array
+ of string values. If the operator
+ is In or NotIn, the values array
+ must be non-empty. If the operator
+ is Exists or DoesNotExist, the
+ values array must be empty.
+ This array is replaced during
+ a strategic merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of
+ {key,value} pairs. A single {key,value}
+ in the matchLabels map is equivalent
+ to an element of matchExpressions,
+ whose key field is "key", the operator
+ is "In", and the values array contains
+ only "value". The requirements are
+ ANDed.
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ namespaces:
+ description: namespaces specifies a static
+ list of namespace names that the term
+ applies to. The term is applied to the
+ union of the namespaces listed in this
+ field and the ones selected by namespaceSelector.
+ null or empty namespaces list and null
+ namespaceSelector means "this pod's namespace".
+ items:
+ type: string
+ type: array
+ topologyKey:
+ description: This pod should be co-located
+ (affinity) or not co-located (anti-affinity)
+ with the pods matching the labelSelector
+ in the specified namespaces, where co-located
+ is defined as running on a node whose
+ value of the label with key topologyKey
+ matches that of any node on which any
+ of the selected pods is running. Empty
+ topologyKey is not allowed.
+ type: string
+ required:
+ - topologyKey
+ type: object
+ weight:
+ description: weight associated with matching
+ the corresponding podAffinityTerm, in the
+ range 1-100.
+ format: int32
+ type: integer
+ required:
+ - podAffinityTerm
+ - weight
+ type: object
+ type: array
+ requiredDuringSchedulingIgnoredDuringExecution:
+ description: If the affinity requirements specified
+ by this field are not met at scheduling time, the
+ pod will not be scheduled onto the node. If the
+ affinity requirements specified by this field cease
+ to be met at some point during pod execution (e.g.
+ due to a pod label update), the system may or may
+ not try to eventually evict the pod from its node.
+ When there are multiple elements, the lists of nodes
+ corresponding to each podAffinityTerm are intersected,
+ i.e. all terms must be satisfied.
+ items:
+ description: Defines a set of pods (namely those
+ matching the labelSelector relative to the given
+ namespace(s)) that this pod should be co-located
+ (affinity) or not co-located (anti-affinity) with,
+ where co-located is defined as running on a node
+ whose value of the label with key
+ matches that of any node on which a pod of the
+ set of pods is running
+ properties:
+ labelSelector:
+ description: A label query over a set of resources,
+ in this case pods.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list
+ of label selector requirements. The requirements
+ are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label key
+ that the selector applies to.
+ type: string
+ operator:
+ description: operator represents a
+ key's relationship to a set of values.
+ Valid operators are In, NotIn, Exists
+ and DoesNotExist.
+ type: string
+ values:
+ description: values is an array of
+ string values. If the operator is
+ In or NotIn, the values array must
+ be non-empty. If the operator is
+ Exists or DoesNotExist, the values
+ array must be empty. This array
+ is replaced during a strategic merge
+ patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator
+ is "In", and the values array contains
+ only "value". The requirements are ANDed.
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ namespaceSelector:
+ description: A label query over the set of namespaces
+ that the term applies to. The term is applied
+ to the union of the namespaces selected by
+ this field and the ones listed in the namespaces
+ field. null selector and null or empty namespaces
+ list means "this pod's namespace". An empty
+ selector ({}) matches all namespaces.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list
+ of label selector requirements. The requirements
+ are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label key
+ that the selector applies to.
+ type: string
+ operator:
+ description: operator represents a
+ key's relationship to a set of values.
+ Valid operators are In, NotIn, Exists
+ and DoesNotExist.
+ type: string
+ values:
+ description: values is an array of
+ string values. If the operator is
+ In or NotIn, the values array must
+ be non-empty. If the operator is
+ Exists or DoesNotExist, the values
+ array must be empty. This array
+ is replaced during a strategic merge
+ patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator
+ is "In", and the values array contains
+ only "value". The requirements are ANDed.
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ namespaces:
+ description: namespaces specifies a static list
+ of namespace names that the term applies to.
+ The term is applied to the union of the namespaces
+ listed in this field and the ones selected
+ by namespaceSelector. null or empty namespaces
+ list and null namespaceSelector means "this
+ pod's namespace".
+ items:
+ type: string
+ type: array
+ topologyKey:
+ description: This pod should be co-located (affinity)
+ or not co-located (anti-affinity) with the
+ pods matching the labelSelector in the specified
+ namespaces, where co-located is defined as
+ running on a node whose value of the label
+ with key topologyKey matches that of any node
+ on which any of the selected pods is running.
+ Empty topologyKey is not allowed.
+ type: string
+ required:
+ - topologyKey
+ type: object
+ type: array
+ type: object
+ podAntiAffinity:
+ description: Describes pod anti-affinity scheduling rules
+ (e.g. avoid putting this pod in the same node, zone,
+ etc. as some other pod(s)).
+ properties:
+ preferredDuringSchedulingIgnoredDuringExecution:
+ description: The scheduler will prefer to schedule
+ pods to nodes that satisfy the anti-affinity expressions
+ specified by this field, but it may choose a node
+ that violates one or more of the expressions. The
+ node that is most preferred is the one with the
+ greatest sum of weights, i.e. for each node that
+ meets all of the scheduling requirements (resource
+ request, requiredDuringScheduling anti-affinity
+ expressions, etc.), compute a sum by iterating through
+ the elements of this field and adding "weight" to
+ the sum if the node has pods which matches the corresponding
+ podAffinityTerm; the node(s) with the highest sum
+ are the most preferred.
+ items:
+ description: The weights of all of the matched WeightedPodAffinityTerm
+ fields are added per-node to find the most preferred
+ node(s)
+ properties:
+ podAffinityTerm:
+ description: Required. A pod affinity term,
+ associated with the corresponding weight.
+ properties:
+ labelSelector:
+ description: A label query over a set of
+ resources, in this case pods.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list
+ of label selector requirements. The
+ requirements are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label
+ key that the selector applies
+ to.
+ type: string
+ operator:
+ description: operator represents
+ a key's relationship to a set
+ of values. Valid operators are
+ In, NotIn, Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array
+ of string values. If the operator
+ is In or NotIn, the values array
+ must be non-empty. If the operator
+ is Exists or DoesNotExist, the
+ values array must be empty.
+ This array is replaced during
+ a strategic merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of
+ {key,value} pairs. A single {key,value}
+ in the matchLabels map is equivalent
+ to an element of matchExpressions,
+ whose key field is "key", the operator
+ is "In", and the values array contains
+ only "value". The requirements are
+ ANDed.
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ namespaceSelector:
+ description: A label query over the set
+ of namespaces that the term applies to.
+ The term is applied to the union of the
+ namespaces selected by this field and
+ the ones listed in the namespaces field.
+ null selector and null or empty namespaces
+ list means "this pod's namespace". An
+ empty selector ({}) matches all namespaces.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list
+ of label selector requirements. The
+ requirements are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label
+ key that the selector applies
+ to.
+ type: string
+ operator:
+ description: operator represents
+ a key's relationship to a set
+ of values. Valid operators are
+ In, NotIn, Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array
+ of string values. If the operator
+ is In or NotIn, the values array
+ must be non-empty. If the operator
+ is Exists or DoesNotExist, the
+ values array must be empty.
+ This array is replaced during
+ a strategic merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of
+ {key,value} pairs. A single {key,value}
+ in the matchLabels map is equivalent
+ to an element of matchExpressions,
+ whose key field is "key", the operator
+ is "In", and the values array contains
+ only "value". The requirements are
+ ANDed.
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ namespaces:
+ description: namespaces specifies a static
+ list of namespace names that the term
+ applies to. The term is applied to the
+ union of the namespaces listed in this
+ field and the ones selected by namespaceSelector.
+ null or empty namespaces list and null
+ namespaceSelector means "this pod's namespace".
+ items:
+ type: string
+ type: array
+ topologyKey:
+ description: This pod should be co-located
+ (affinity) or not co-located (anti-affinity)
+ with the pods matching the labelSelector
+ in the specified namespaces, where co-located
+ is defined as running on a node whose
+ value of the label with key topologyKey
+ matches that of any node on which any
+ of the selected pods is running. Empty
+ topologyKey is not allowed.
+ type: string
+ required:
+ - topologyKey
+ type: object
+ weight:
+ description: weight associated with matching
+ the corresponding podAffinityTerm, in the
+ range 1-100.
+ format: int32
+ type: integer
+ required:
+ - podAffinityTerm
+ - weight
+ type: object
+ type: array
+ requiredDuringSchedulingIgnoredDuringExecution:
+ description: If the anti-affinity requirements specified
+ by this field are not met at scheduling time, the
+ pod will not be scheduled onto the node. If the
+ anti-affinity requirements specified by this field
+ cease to be met at some point during pod execution
+ (e.g. due to a pod label update), the system may
+ or may not try to eventually evict the pod from
+ its node. When there are multiple elements, the
+ lists of nodes corresponding to each podAffinityTerm
+ are intersected, i.e. all terms must be satisfied.
+ items:
+ description: Defines a set of pods (namely those
+ matching the labelSelector relative to the given
+ namespace(s)) that this pod should be co-located
+ (affinity) or not co-located (anti-affinity) with,
+ where co-located is defined as running on a node
+                            whose value of the label with key topologyKey
+ matches that of any node on which a pod of the
+ set of pods is running
+ properties:
+ labelSelector:
+ description: A label query over a set of resources,
+ in this case pods.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list
+ of label selector requirements. The requirements
+ are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label key
+ that the selector applies to.
+ type: string
+ operator:
+ description: operator represents a
+ key's relationship to a set of values.
+ Valid operators are In, NotIn, Exists
+ and DoesNotExist.
+ type: string
+ values:
+ description: values is an array of
+ string values. If the operator is
+ In or NotIn, the values array must
+ be non-empty. If the operator is
+ Exists or DoesNotExist, the values
+ array must be empty. This array
+ is replaced during a strategic merge
+ patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator
+ is "In", and the values array contains
+ only "value". The requirements are ANDed.
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ namespaceSelector:
+ description: A label query over the set of namespaces
+ that the term applies to. The term is applied
+ to the union of the namespaces selected by
+ this field and the ones listed in the namespaces
+ field. null selector and null or empty namespaces
+ list means "this pod's namespace". An empty
+ selector ({}) matches all namespaces.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list
+ of label selector requirements. The requirements
+ are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label key
+ that the selector applies to.
+ type: string
+ operator:
+ description: operator represents a
+ key's relationship to a set of values.
+ Valid operators are In, NotIn, Exists
+ and DoesNotExist.
+ type: string
+ values:
+ description: values is an array of
+ string values. If the operator is
+ In or NotIn, the values array must
+ be non-empty. If the operator is
+ Exists or DoesNotExist, the values
+ array must be empty. This array
+ is replaced during a strategic merge
+ patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator
+ is "In", and the values array contains
+ only "value". The requirements are ANDed.
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ namespaces:
+ description: namespaces specifies a static list
+ of namespace names that the term applies to.
+ The term is applied to the union of the namespaces
+ listed in this field and the ones selected
+ by namespaceSelector. null or empty namespaces
+ list and null namespaceSelector means "this
+ pod's namespace".
+ items:
+ type: string
+ type: array
+ topologyKey:
+ description: This pod should be co-located (affinity)
+ or not co-located (anti-affinity) with the
+ pods matching the labelSelector in the specified
+ namespaces, where co-located is defined as
+ running on a node whose value of the label
+ with key topologyKey matches that of any node
+ on which any of the selected pods is running.
+ Empty topologyKey is not allowed.
+ type: string
+ required:
+ - topologyKey
+ type: object
+ type: array
+ type: object
+ type: object
+ conversionType:
+                description: ConversionType is the operation type used when converting
+                  the pod from the root cluster to the leaf cluster.
+ type: string
+ required:
+ - conversionType
+ type: object
+ nodeNameConverter:
+            description: NodeNameConverter is used to modify the pod's nodeName
+              when the pod is synced to the leaf cluster
+ properties:
+ conversionType:
+                description: ConversionType is the operation type used when converting
+                  the pod from the root cluster to the leaf cluster.
+ type: string
+ nodeName:
+ type: string
+ required:
+ - conversionType
+ type: object
+ nodeSelectorConverter:
+            description: NodeSelectorConverter is used to modify the pod's NodeSelector
+              when the pod is synced to the leaf cluster
+ properties:
+ conversionType:
+                description: ConversionType is the operation type used when converting
+                  the pod from the root cluster to the leaf cluster.
+ enum:
+ - add
+ - remove
+ - replace
+ type: string
+ nodeSelector:
+ additionalProperties:
+ type: string
+ type: object
+ required:
+ - conversionType
+ type: object
+ schedulerNameConverter:
+            description: SchedulerNameConverter is used to modify the pod's schedulerName
+              when the pod is synced to the leaf cluster
+ properties:
+ conversionType:
+                description: ConversionType is the operation type used when converting
+                  the pod from the root cluster to the leaf cluster.
+ type: string
+ schedulerName:
+ type: string
+ required:
+ - conversionType
+ type: object
+ topologySpreadConstraintsConverter:
+            description: TopologySpreadConstraintsConverter is used to modify
+              the pod's TopologySpreadConstraints when the pod is synced to the
+              leaf cluster
+ properties:
+ conversionType:
+                description: ConversionType is the operation type used when converting
+                  the pod from the root cluster to the leaf cluster.
+ type: string
+ topologySpreadConstraints:
+ description: TopologySpreadConstraints describes how a group
+ of pods ought to spread across topology domains. Scheduler
+ will schedule pods in a way which abides by the constraints.
+ All topologySpreadConstraints are ANDed.
+ items:
+ description: TopologySpreadConstraint specifies how to spread
+ matching pods among the given topology.
+ properties:
+ labelSelector:
+ description: LabelSelector is used to find matching
+ pods. Pods that match this label selector are counted
+ to determine the number of pods in their corresponding
+ topology domain.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list of label
+ selector requirements. The requirements are ANDed.
+ items:
+ description: A label selector requirement is a
+ selector that contains values, a key, and an
+ operator that relates the key and values.
+ properties:
+ key:
+ description: key is the label key that the
+ selector applies to.
+ type: string
+ operator:
+ description: operator represents a key's relationship
+ to a set of values. Valid operators are
+ In, NotIn, Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array of string
+ values. If the operator is In or NotIn,
+ the values array must be non-empty. If the
+ operator is Exists or DoesNotExist, the
+ values array must be empty. This array is
+ replaced during a strategic merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator is "In",
+ and the values array contains only "value". The
+ requirements are ANDed.
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ matchLabelKeys:
+ description: MatchLabelKeys is a set of pod label keys
+ to select the pods over which spreading will be calculated.
+ The keys are used to lookup values from the incoming
+ pod labels, those key-value labels are ANDed with
+ labelSelector to select the group of existing pods
+ over which spreading will be calculated for the incoming
+ pod. Keys that don't exist in the incoming pod labels
+ will be ignored. A null or empty list means only match
+ against labelSelector.
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ maxSkew:
+ description: 'MaxSkew describes the degree to which
+ pods may be unevenly distributed. When `whenUnsatisfiable=DoNotSchedule`,
+ it is the maximum permitted difference between the
+ number of matching pods in the target topology and
+ the global minimum. The global minimum is the minimum
+ number of matching pods in an eligible domain or zero
+ if the number of eligible domains is less than MinDomains.
+ For example, in a 3-zone cluster, MaxSkew is set to
+ 1, and pods with the same labelSelector spread as
+ 2/2/1: In this case, the global minimum is 1. | zone1
+ | zone2 | zone3 | | P P | P P | P | - if MaxSkew
+ is 1, incoming pod can only be scheduled to zone3
+ to become 2/2/2; scheduling it onto zone1(zone2) would
+ make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1).
+ - if MaxSkew is 2, incoming pod can be scheduled onto
+ any zone. When `whenUnsatisfiable=ScheduleAnyway`,
+ it is used to give higher precedence to topologies
+ that satisfy it. It''s a required field. Default value
+ is 1 and 0 is not allowed.'
+ format: int32
+ type: integer
+ minDomains:
+ description: "MinDomains indicates a minimum number
+ of eligible domains. When the number of eligible domains
+ with matching topology keys is less than minDomains,
+ Pod Topology Spread treats \"global minimum\" as 0,
+ and then the calculation of Skew is performed. And
+ when the number of eligible domains with matching
+ topology keys equals or greater than minDomains, this
+ value has no effect on scheduling. As a result, when
+ the number of eligible domains is less than minDomains,
+ scheduler won't schedule more than maxSkew Pods to
+ those domains. If value is nil, the constraint behaves
+ as if MinDomains is equal to 1. Valid values are integers
+ greater than 0. When value is not nil, WhenUnsatisfiable
+ must be DoNotSchedule. \n For example, in a 3-zone
+ cluster, MaxSkew is set to 2, MinDomains is set to
+ 5 and pods with the same labelSelector spread as 2/2/2:
+ | zone1 | zone2 | zone3 | | P P | P P | P P |
+ The number of domains is less than 5(MinDomains),
+ so \"global minimum\" is treated as 0. In this situation,
+ new pod with the same labelSelector cannot be scheduled,
+ because computed skew will be 3(3 - 0) if new Pod
+ is scheduled to any of the three zones, it will violate
+ MaxSkew. \n This is a beta field and requires the
+ MinDomainsInPodTopologySpread feature gate to be enabled
+ (enabled by default)."
+ format: int32
+ type: integer
+ nodeAffinityPolicy:
+ description: "NodeAffinityPolicy indicates how we will
+ treat Pod's nodeAffinity/nodeSelector when calculating
+ pod topology spread skew. Options are: - Honor: only
+ nodes matching nodeAffinity/nodeSelector are included
+ in the calculations. - Ignore: nodeAffinity/nodeSelector
+ are ignored. All nodes are included in the calculations.
+ \n If this value is nil, the behavior is equivalent
+ to the Honor policy. This is a beta-level feature
+ default enabled by the NodeInclusionPolicyInPodTopologySpread
+ feature flag."
+ type: string
+ nodeTaintsPolicy:
+ description: "NodeTaintsPolicy indicates how we will
+ treat node taints when calculating pod topology spread
+ skew. Options are: - Honor: nodes without taints,
+ along with tainted nodes for which the incoming pod
+ has a toleration, are included. - Ignore: node taints
+ are ignored. All nodes are included. \n If this value
+ is nil, the behavior is equivalent to the Ignore policy.
+ This is a beta-level feature default enabled by the
+ NodeInclusionPolicyInPodTopologySpread feature flag."
+ type: string
+ topologyKey:
+ description: TopologyKey is the key of node labels.
+ Nodes that have a label with this key and identical
+ values are considered to be in the same topology.
+                      We consider each key-value pair as a "bucket", and try
+ to put balanced number of pods into each bucket. We
+ define a domain as a particular instance of a topology.
+ Also, we define an eligible domain as a domain whose
+ nodes meet the requirements of nodeAffinityPolicy
+ and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname",
+ each Node is a domain of that topology. And, if TopologyKey
+ is "topology.kubernetes.io/zone", each zone is a domain
+ of that topology. It's a required field.
+ type: string
+ whenUnsatisfiable:
+ description: 'WhenUnsatisfiable indicates how to deal
+ with a pod if it doesn''t satisfy the spread constraint.
+ - DoNotSchedule (default) tells the scheduler not
+ to schedule it. - ScheduleAnyway tells the scheduler
+ to schedule the pod in any location, but giving higher
+ precedence to topologies that would help reduce the
+ skew. A constraint is considered "Unsatisfiable" for
+ an incoming pod if and only if every possible node
+ assignment for that pod would violate "MaxSkew" on
+ some topology. For example, in a 3-zone cluster, MaxSkew
+ is set to 1, and pods with the same labelSelector
+ spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P
+ | P | P | If WhenUnsatisfiable is set to DoNotSchedule,
+ incoming pod can only be scheduled to zone2(zone3)
+ to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3)
+ satisfies MaxSkew(1). In other words, the cluster
+ can still be imbalanced, but scheduler won''t make
+ it *more* imbalanced. It''s a required field.'
+ type: string
+ required:
+ - maxSkew
+ - topologyKey
+ - whenUnsatisfiable
+ type: object
+ type: array
+ required:
+ - conversionType
+ type: object
+ type: object
+ labelSelector:
+ description: A label query over a set of resources. If name is not
+ empty, labelSelector will be ignored.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list of label selector requirements.
+ The requirements are ANDed.
+ items:
+ description: A label selector requirement is a selector that
+ contains values, a key, and an operator that relates the key
+ and values.
+ properties:
+ key:
+ description: key is the label key that the selector applies
+ to.
+ type: string
+ operator:
+ description: operator represents a key's relationship to
+ a set of values. Valid operators are In, NotIn, Exists
+ and DoesNotExist.
+ type: string
+ values:
+ description: values is an array of string values. If the
+ operator is In or NotIn, the values array must be non-empty.
+ If the operator is Exists or DoesNotExist, the values
+ array must be empty. This array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value} pairs. A single
+ {key,value} in the matchLabels map is equivalent to an element
+ of matchExpressions, whose key field is "key", the operator
+ is "In", and the values array contains only "value". The requirements
+ are ANDed.
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ required:
+ - labelSelector
+ type: object
+ type: object
+ served: true
+ storage: true
+ subresources:
+ status: {}
diff --git a/deploy/crds/kosmos.io_podconvertpolicies.yaml b/deploy/crds/kosmos.io_podconvertpolicies.yaml
new file mode 100644
index 000000000..1010ffffc
--- /dev/null
+++ b/deploy/crds/kosmos.io_podconvertpolicies.yaml
@@ -0,0 +1,1319 @@
+---
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+ annotations:
+ controller-gen.kubebuilder.io/version: v0.11.0
+ creationTimestamp: null
+ name: podconvertpolicies.kosmos.io
+spec:
+ group: kosmos.io
+ names:
+ kind: PodConvertPolicy
+ listKind: PodConvertPolicyList
+ plural: podconvertpolicies
+ shortNames:
+ - pc
+ - pcs
+ singular: podconvertpolicy
+ scope: Namespaced
+ versions:
+ - name: v1alpha1
+ schema:
+ openAPIV3Schema:
+ properties:
+ apiVersion:
+ description: 'APIVersion defines the versioned schema of this representation
+ of an object. Servers should convert recognized schemas to the latest
+ internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
+ type: string
+ kind:
+ description: 'Kind is a string value representing the REST resource this
+ object represents. Servers may infer this from the endpoint the client
+ submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
+ type: string
+ metadata:
+ type: object
+ spec:
+ description: Spec is the specification for the behaviour of the podConversion.
+ properties:
+ converters:
+          description: Converters are a set of converters applied to the pod when
+            it is synced from the root cluster to the leaf cluster, so that the
+            pod can be scheduled in the leaf cluster
+ properties:
+ affinityConverter:
+              description: AffinityConverter is used to modify the pod's Affinity
+                when the pod is synced to the leaf cluster
+ properties:
+ affinity:
+ description: Affinity is a group of affinity scheduling rules.
+ properties:
+ nodeAffinity:
+ description: Describes node affinity scheduling rules
+ for the pod.
+ properties:
+ preferredDuringSchedulingIgnoredDuringExecution:
+ description: The scheduler will prefer to schedule
+ pods to nodes that satisfy the affinity expressions
+ specified by this field, but it may choose a node
+ that violates one or more of the expressions. The
+ node that is most preferred is the one with the
+ greatest sum of weights, i.e. for each node that
+ meets all of the scheduling requirements (resource
+ request, requiredDuringScheduling affinity expressions,
+ etc.), compute a sum by iterating through the elements
+ of this field and adding "weight" to the sum if
+ the node matches the corresponding matchExpressions;
+ the node(s) with the highest sum are the most preferred.
+ items:
+ description: An empty preferred scheduling term
+ matches all objects with implicit weight 0 (i.e.
+ it's a no-op). A null preferred scheduling term
+ matches no objects (i.e. is also a no-op).
+ properties:
+ preference:
+ description: A node selector term, associated
+ with the corresponding weight.
+ properties:
+ matchExpressions:
+ description: A list of node selector requirements
+ by node's labels.
+ items:
+ description: A node selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: The label key that the
+ selector applies to.
+ type: string
+ operator:
+ description: Represents a key's relationship
+ to a set of values. Valid operators
+                                      are In, NotIn, Exists, DoesNotExist,
+ Gt, and Lt.
+ type: string
+ values:
+ description: An array of string values.
+ If the operator is In or NotIn,
+ the values array must be non-empty.
+ If the operator is Exists or DoesNotExist,
+ the values array must be empty.
+ If the operator is Gt or Lt, the
+ values array must have a single
+ element, which will be interpreted
+ as an integer. This array is replaced
+ during a strategic merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchFields:
+ description: A list of node selector requirements
+ by node's fields.
+ items:
+ description: A node selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: The label key that the
+ selector applies to.
+ type: string
+ operator:
+ description: Represents a key's relationship
+ to a set of values. Valid operators
+                                      are In, NotIn, Exists, DoesNotExist,
+ Gt, and Lt.
+ type: string
+ values:
+ description: An array of string values.
+ If the operator is In or NotIn,
+ the values array must be non-empty.
+ If the operator is Exists or DoesNotExist,
+ the values array must be empty.
+ If the operator is Gt or Lt, the
+ values array must have a single
+ element, which will be interpreted
+ as an integer. This array is replaced
+ during a strategic merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ type: object
+ x-kubernetes-map-type: atomic
+ weight:
+ description: Weight associated with matching
+ the corresponding nodeSelectorTerm, in the
+ range 1-100.
+ format: int32
+ type: integer
+ required:
+ - preference
+ - weight
+ type: object
+ type: array
+ requiredDuringSchedulingIgnoredDuringExecution:
+ description: If the affinity requirements specified
+ by this field are not met at scheduling time, the
+ pod will not be scheduled onto the node. If the
+ affinity requirements specified by this field cease
+ to be met at some point during pod execution (e.g.
+ due to an update), the system may or may not try
+ to eventually evict the pod from its node.
+ properties:
+ nodeSelectorTerms:
+ description: Required. A list of node selector
+ terms. The terms are ORed.
+ items:
+ description: A null or empty node selector term
+ matches no objects. The requirements of them
+ are ANDed. The TopologySelectorTerm type implements
+ a subset of the NodeSelectorTerm.
+ properties:
+ matchExpressions:
+ description: A list of node selector requirements
+ by node's labels.
+ items:
+ description: A node selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: The label key that the
+ selector applies to.
+ type: string
+ operator:
+ description: Represents a key's relationship
+ to a set of values. Valid operators
+                                      are In, NotIn, Exists, DoesNotExist,
+ Gt, and Lt.
+ type: string
+ values:
+ description: An array of string values.
+ If the operator is In or NotIn,
+ the values array must be non-empty.
+ If the operator is Exists or DoesNotExist,
+ the values array must be empty.
+ If the operator is Gt or Lt, the
+ values array must have a single
+ element, which will be interpreted
+ as an integer. This array is replaced
+ during a strategic merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchFields:
+ description: A list of node selector requirements
+ by node's fields.
+ items:
+ description: A node selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: The label key that the
+ selector applies to.
+ type: string
+ operator:
+ description: Represents a key's relationship
+ to a set of values. Valid operators
+                                      are In, NotIn, Exists, DoesNotExist,
+ Gt, and Lt.
+ type: string
+ values:
+ description: An array of string values.
+ If the operator is In or NotIn,
+ the values array must be non-empty.
+ If the operator is Exists or DoesNotExist,
+ the values array must be empty.
+ If the operator is Gt or Lt, the
+ values array must have a single
+ element, which will be interpreted
+ as an integer. This array is replaced
+ during a strategic merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ type: object
+ x-kubernetes-map-type: atomic
+ type: array
+ required:
+ - nodeSelectorTerms
+ type: object
+ x-kubernetes-map-type: atomic
+ type: object
+ podAffinity:
+ description: Describes pod affinity scheduling rules (e.g.
+ co-locate this pod in the same node, zone, etc. as some
+ other pod(s)).
+ properties:
+ preferredDuringSchedulingIgnoredDuringExecution:
+ description: The scheduler will prefer to schedule
+ pods to nodes that satisfy the affinity expressions
+ specified by this field, but it may choose a node
+ that violates one or more of the expressions. The
+ node that is most preferred is the one with the
+ greatest sum of weights, i.e. for each node that
+ meets all of the scheduling requirements (resource
+ request, requiredDuringScheduling affinity expressions,
+ etc.), compute a sum by iterating through the elements
+ of this field and adding "weight" to the sum if
+ the node has pods which matches the corresponding
+ podAffinityTerm; the node(s) with the highest sum
+ are the most preferred.
+ items:
+ description: The weights of all of the matched WeightedPodAffinityTerm
+ fields are added per-node to find the most preferred
+ node(s)
+ properties:
+ podAffinityTerm:
+ description: Required. A pod affinity term,
+ associated with the corresponding weight.
+ properties:
+ labelSelector:
+ description: A label query over a set of
+ resources, in this case pods.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list
+ of label selector requirements. The
+ requirements are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label
+ key that the selector applies
+ to.
+ type: string
+ operator:
+ description: operator represents
+ a key's relationship to a set
+ of values. Valid operators are
+ In, NotIn, Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array
+ of string values. If the operator
+ is In or NotIn, the values array
+ must be non-empty. If the operator
+ is Exists or DoesNotExist, the
+ values array must be empty.
+ This array is replaced during
+ a strategic merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of
+ {key,value} pairs. A single {key,value}
+ in the matchLabels map is equivalent
+ to an element of matchExpressions,
+ whose key field is "key", the operator
+ is "In", and the values array contains
+ only "value". The requirements are
+ ANDed.
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ namespaceSelector:
+ description: A label query over the set
+ of namespaces that the term applies to.
+ The term is applied to the union of the
+ namespaces selected by this field and
+ the ones listed in the namespaces field.
+ null selector and null or empty namespaces
+ list means "this pod's namespace". An
+ empty selector ({}) matches all namespaces.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list
+ of label selector requirements. The
+ requirements are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label
+ key that the selector applies
+ to.
+ type: string
+ operator:
+ description: operator represents
+ a key's relationship to a set
+ of values. Valid operators are
+ In, NotIn, Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array
+ of string values. If the operator
+ is In or NotIn, the values array
+ must be non-empty. If the operator
+ is Exists or DoesNotExist, the
+ values array must be empty.
+ This array is replaced during
+ a strategic merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of
+ {key,value} pairs. A single {key,value}
+ in the matchLabels map is equivalent
+ to an element of matchExpressions,
+ whose key field is "key", the operator
+ is "In", and the values array contains
+ only "value". The requirements are
+ ANDed.
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ namespaces:
+ description: namespaces specifies a static
+ list of namespace names that the term
+ applies to. The term is applied to the
+ union of the namespaces listed in this
+ field and the ones selected by namespaceSelector.
+ null or empty namespaces list and null
+ namespaceSelector means "this pod's namespace".
+ items:
+ type: string
+ type: array
+ topologyKey:
+ description: This pod should be co-located
+ (affinity) or not co-located (anti-affinity)
+ with the pods matching the labelSelector
+ in the specified namespaces, where co-located
+ is defined as running on a node whose
+ value of the label with key topologyKey
+ matches that of any node on which any
+ of the selected pods is running. Empty
+ topologyKey is not allowed.
+ type: string
+ required:
+ - topologyKey
+ type: object
+ weight:
+ description: weight associated with matching
+ the corresponding podAffinityTerm, in the
+ range 1-100.
+ format: int32
+ type: integer
+ required:
+ - podAffinityTerm
+ - weight
+ type: object
+ type: array
+ requiredDuringSchedulingIgnoredDuringExecution:
+ description: If the affinity requirements specified
+ by this field are not met at scheduling time, the
+ pod will not be scheduled onto the node. If the
+ affinity requirements specified by this field cease
+ to be met at some point during pod execution (e.g.
+ due to a pod label update), the system may or may
+ not try to eventually evict the pod from its node.
+ When there are multiple elements, the lists of nodes
+ corresponding to each podAffinityTerm are intersected,
+ i.e. all terms must be satisfied.
+ items:
+ description: Defines a set of pods (namely those
+ matching the labelSelector relative to the given
+ namespace(s)) that this pod should be co-located
+ (affinity) or not co-located (anti-affinity) with,
+ where co-located is defined as running on a node
+                    whose value of the label with key <topologyKey>
+ matches that of any node on which a pod of the
+ set of pods is running
+ properties:
+ labelSelector:
+ description: A label query over a set of resources,
+ in this case pods.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list
+ of label selector requirements. The requirements
+ are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label key
+ that the selector applies to.
+ type: string
+ operator:
+ description: operator represents a
+ key's relationship to a set of values.
+ Valid operators are In, NotIn, Exists
+ and DoesNotExist.
+ type: string
+ values:
+ description: values is an array of
+ string values. If the operator is
+ In or NotIn, the values array must
+ be non-empty. If the operator is
+ Exists or DoesNotExist, the values
+ array must be empty. This array
+ is replaced during a strategic merge
+ patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator
+ is "In", and the values array contains
+ only "value". The requirements are ANDed.
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ namespaceSelector:
+ description: A label query over the set of namespaces
+ that the term applies to. The term is applied
+ to the union of the namespaces selected by
+ this field and the ones listed in the namespaces
+ field. null selector and null or empty namespaces
+ list means "this pod's namespace". An empty
+ selector ({}) matches all namespaces.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list
+ of label selector requirements. The requirements
+ are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label key
+ that the selector applies to.
+ type: string
+ operator:
+ description: operator represents a
+ key's relationship to a set of values.
+ Valid operators are In, NotIn, Exists
+ and DoesNotExist.
+ type: string
+ values:
+ description: values is an array of
+ string values. If the operator is
+ In or NotIn, the values array must
+ be non-empty. If the operator is
+ Exists or DoesNotExist, the values
+ array must be empty. This array
+ is replaced during a strategic merge
+ patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator
+ is "In", and the values array contains
+ only "value". The requirements are ANDed.
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ namespaces:
+ description: namespaces specifies a static list
+ of namespace names that the term applies to.
+ The term is applied to the union of the namespaces
+ listed in this field and the ones selected
+ by namespaceSelector. null or empty namespaces
+ list and null namespaceSelector means "this
+ pod's namespace".
+ items:
+ type: string
+ type: array
+ topologyKey:
+ description: This pod should be co-located (affinity)
+ or not co-located (anti-affinity) with the
+ pods matching the labelSelector in the specified
+ namespaces, where co-located is defined as
+ running on a node whose value of the label
+ with key topologyKey matches that of any node
+ on which any of the selected pods is running.
+ Empty topologyKey is not allowed.
+ type: string
+ required:
+ - topologyKey
+ type: object
+ type: array
+ type: object
+ podAntiAffinity:
+ description: Describes pod anti-affinity scheduling rules
+ (e.g. avoid putting this pod in the same node, zone,
+ etc. as some other pod(s)).
+ properties:
+ preferredDuringSchedulingIgnoredDuringExecution:
+ description: The scheduler will prefer to schedule
+ pods to nodes that satisfy the anti-affinity expressions
+ specified by this field, but it may choose a node
+ that violates one or more of the expressions. The
+ node that is most preferred is the one with the
+ greatest sum of weights, i.e. for each node that
+ meets all of the scheduling requirements (resource
+ request, requiredDuringScheduling anti-affinity
+ expressions, etc.), compute a sum by iterating through
+ the elements of this field and adding "weight" to
+ the sum if the node has pods which matches the corresponding
+ podAffinityTerm; the node(s) with the highest sum
+ are the most preferred.
+ items:
+ description: The weights of all of the matched WeightedPodAffinityTerm
+ fields are added per-node to find the most preferred
+ node(s)
+ properties:
+ podAffinityTerm:
+ description: Required. A pod affinity term,
+ associated with the corresponding weight.
+ properties:
+ labelSelector:
+ description: A label query over a set of
+ resources, in this case pods.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list
+ of label selector requirements. The
+ requirements are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label
+ key that the selector applies
+ to.
+ type: string
+ operator:
+ description: operator represents
+ a key's relationship to a set
+ of values. Valid operators are
+ In, NotIn, Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array
+ of string values. If the operator
+ is In or NotIn, the values array
+ must be non-empty. If the operator
+ is Exists or DoesNotExist, the
+ values array must be empty.
+ This array is replaced during
+ a strategic merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of
+ {key,value} pairs. A single {key,value}
+ in the matchLabels map is equivalent
+ to an element of matchExpressions,
+ whose key field is "key", the operator
+ is "In", and the values array contains
+ only "value". The requirements are
+ ANDed.
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ namespaceSelector:
+ description: A label query over the set
+ of namespaces that the term applies to.
+ The term is applied to the union of the
+ namespaces selected by this field and
+ the ones listed in the namespaces field.
+ null selector and null or empty namespaces
+ list means "this pod's namespace". An
+ empty selector ({}) matches all namespaces.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list
+ of label selector requirements. The
+ requirements are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label
+ key that the selector applies
+ to.
+ type: string
+ operator:
+ description: operator represents
+ a key's relationship to a set
+ of values. Valid operators are
+ In, NotIn, Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array
+ of string values. If the operator
+ is In or NotIn, the values array
+ must be non-empty. If the operator
+ is Exists or DoesNotExist, the
+ values array must be empty.
+ This array is replaced during
+ a strategic merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of
+ {key,value} pairs. A single {key,value}
+ in the matchLabels map is equivalent
+ to an element of matchExpressions,
+ whose key field is "key", the operator
+ is "In", and the values array contains
+ only "value". The requirements are
+ ANDed.
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ namespaces:
+ description: namespaces specifies a static
+ list of namespace names that the term
+ applies to. The term is applied to the
+ union of the namespaces listed in this
+ field and the ones selected by namespaceSelector.
+ null or empty namespaces list and null
+ namespaceSelector means "this pod's namespace".
+ items:
+ type: string
+ type: array
+ topologyKey:
+ description: This pod should be co-located
+ (affinity) or not co-located (anti-affinity)
+ with the pods matching the labelSelector
+ in the specified namespaces, where co-located
+ is defined as running on a node whose
+ value of the label with key topologyKey
+ matches that of any node on which any
+ of the selected pods is running. Empty
+ topologyKey is not allowed.
+ type: string
+ required:
+ - topologyKey
+ type: object
+ weight:
+ description: weight associated with matching
+ the corresponding podAffinityTerm, in the
+ range 1-100.
+ format: int32
+ type: integer
+ required:
+ - podAffinityTerm
+ - weight
+ type: object
+ type: array
+ requiredDuringSchedulingIgnoredDuringExecution:
+ description: If the anti-affinity requirements specified
+ by this field are not met at scheduling time, the
+ pod will not be scheduled onto the node. If the
+ anti-affinity requirements specified by this field
+ cease to be met at some point during pod execution
+ (e.g. due to a pod label update), the system may
+ or may not try to eventually evict the pod from
+ its node. When there are multiple elements, the
+ lists of nodes corresponding to each podAffinityTerm
+ are intersected, i.e. all terms must be satisfied.
+ items:
+ description: Defines a set of pods (namely those
+ matching the labelSelector relative to the given
+ namespace(s)) that this pod should be co-located
+ (affinity) or not co-located (anti-affinity) with,
+ where co-located is defined as running on a node
+                  whose value of the label with key <topologyKey>
+ matches that of any node on which a pod of the
+ set of pods is running
+ properties:
+ labelSelector:
+ description: A label query over a set of resources,
+ in this case pods.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list
+ of label selector requirements. The requirements
+ are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label key
+ that the selector applies to.
+ type: string
+ operator:
+ description: operator represents a
+ key's relationship to a set of values.
+ Valid operators are In, NotIn, Exists
+ and DoesNotExist.
+ type: string
+ values:
+ description: values is an array of
+ string values. If the operator is
+ In or NotIn, the values array must
+ be non-empty. If the operator is
+ Exists or DoesNotExist, the values
+ array must be empty. This array
+ is replaced during a strategic merge
+ patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator
+ is "In", and the values array contains
+ only "value". The requirements are ANDed.
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ namespaceSelector:
+ description: A label query over the set of namespaces
+ that the term applies to. The term is applied
+ to the union of the namespaces selected by
+ this field and the ones listed in the namespaces
+ field. null selector and null or empty namespaces
+ list means "this pod's namespace". An empty
+ selector ({}) matches all namespaces.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list
+ of label selector requirements. The requirements
+ are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label key
+ that the selector applies to.
+ type: string
+ operator:
+ description: operator represents a
+ key's relationship to a set of values.
+ Valid operators are In, NotIn, Exists
+ and DoesNotExist.
+ type: string
+ values:
+ description: values is an array of
+ string values. If the operator is
+ In or NotIn, the values array must
+ be non-empty. If the operator is
+ Exists or DoesNotExist, the values
+ array must be empty. This array
+ is replaced during a strategic merge
+ patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator
+ is "In", and the values array contains
+ only "value". The requirements are ANDed.
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ namespaces:
+ description: namespaces specifies a static list
+ of namespace names that the term applies to.
+ The term is applied to the union of the namespaces
+ listed in this field and the ones selected
+ by namespaceSelector. null or empty namespaces
+ list and null namespaceSelector means "this
+ pod's namespace".
+ items:
+ type: string
+ type: array
+ topologyKey:
+ description: This pod should be co-located (affinity)
+ or not co-located (anti-affinity) with the
+ pods matching the labelSelector in the specified
+ namespaces, where co-located is defined as
+ running on a node whose value of the label
+ with key topologyKey matches that of any node
+ on which any of the selected pods is running.
+ Empty topologyKey is not allowed.
+ type: string
+ required:
+ - topologyKey
+ type: object
+ type: array
+ type: object
+ type: object
+ convertType:
+                description: ConvertType is the operation type used when converting
+                  a pod from the root cluster to a leaf cluster.
+ enum:
+ - add
+ - remove
+ - replace
+ type: string
+ required:
+ - convertType
+ type: object
+ nodeNameConverter:
+              description: NodeNameConverter is used to modify the pod's nodeName
+                when the pod is synced to a leaf cluster
+ properties:
+ convertType:
+                  description: ConvertType is the operation type used when converting
+                    a pod from the root cluster to a leaf cluster.
+ enum:
+ - add
+ - remove
+ - replace
+ type: string
+ nodeName:
+ type: string
+ required:
+ - convertType
+ type: object
+ nodeSelectorConverter:
+              description: NodeSelectorConverter is used to modify the pod's NodeSelector
+                when the pod is synced to a leaf cluster
+ properties:
+ convertType:
+                  description: ConvertType is the operation type used when converting
+                    a pod from the root cluster to a leaf cluster.
+ enum:
+ - add
+ - remove
+ - replace
+ type: string
+ nodeSelector:
+ additionalProperties:
+ type: string
+ type: object
+ required:
+ - convertType
+ type: object
+ schedulerNameConverter:
+              description: SchedulerNameConverter is used to modify the pod's schedulerName
+                when the pod is synced to a leaf cluster
+ properties:
+ convertType:
+                  description: ConvertType is the operation type used when converting
+                    a pod from the root cluster to a leaf cluster.
+ enum:
+ - add
+ - remove
+ - replace
+ type: string
+ schedulerName:
+ type: string
+ required:
+ - convertType
+ type: object
+ topologySpreadConstraintsConverter:
+              description: TopologySpreadConstraintsConverter is used to modify
+                the pod's TopologySpreadConstraints when the pod is synced to
+                a leaf cluster
+ properties:
+ convertType:
+                  description: ConvertType is the operation type used when converting
+                    a pod from the root cluster to a leaf cluster.
+ enum:
+ - add
+ - remove
+ - replace
+ type: string
+ topologySpreadConstraints:
+ description: TopologySpreadConstraints describes how a group
+ of pods ought to spread across topology domains. Scheduler
+ will schedule pods in a way which abides by the constraints.
+ All topologySpreadConstraints are ANDed.
+ items:
+ description: TopologySpreadConstraint specifies how to spread
+ matching pods among the given topology.
+ properties:
+ labelSelector:
+ description: LabelSelector is used to find matching
+ pods. Pods that match this label selector are counted
+ to determine the number of pods in their corresponding
+ topology domain.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list of label
+ selector requirements. The requirements are ANDed.
+ items:
+ description: A label selector requirement is a
+ selector that contains values, a key, and an
+ operator that relates the key and values.
+ properties:
+ key:
+ description: key is the label key that the
+ selector applies to.
+ type: string
+ operator:
+ description: operator represents a key's relationship
+ to a set of values. Valid operators are
+ In, NotIn, Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array of string
+ values. If the operator is In or NotIn,
+ the values array must be non-empty. If the
+ operator is Exists or DoesNotExist, the
+ values array must be empty. This array is
+ replaced during a strategic merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator is "In",
+ and the values array contains only "value". The
+ requirements are ANDed.
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ matchLabelKeys:
+ description: MatchLabelKeys is a set of pod label keys
+ to select the pods over which spreading will be calculated.
+ The keys are used to lookup values from the incoming
+ pod labels, those key-value labels are ANDed with
+ labelSelector to select the group of existing pods
+ over which spreading will be calculated for the incoming
+ pod. Keys that don't exist in the incoming pod labels
+ will be ignored. A null or empty list means only match
+ against labelSelector.
+ items:
+ type: string
+ type: array
+ x-kubernetes-list-type: atomic
+ maxSkew:
+ description: 'MaxSkew describes the degree to which
+ pods may be unevenly distributed. When `whenUnsatisfiable=DoNotSchedule`,
+ it is the maximum permitted difference between the
+ number of matching pods in the target topology and
+ the global minimum. The global minimum is the minimum
+ number of matching pods in an eligible domain or zero
+ if the number of eligible domains is less than MinDomains.
+ For example, in a 3-zone cluster, MaxSkew is set to
+ 1, and pods with the same labelSelector spread as
+ 2/2/1: In this case, the global minimum is 1. | zone1
+ | zone2 | zone3 | | P P | P P | P | - if MaxSkew
+ is 1, incoming pod can only be scheduled to zone3
+ to become 2/2/2; scheduling it onto zone1(zone2) would
+ make the ActualSkew(3-1) on zone1(zone2) violate MaxSkew(1).
+ - if MaxSkew is 2, incoming pod can be scheduled onto
+ any zone. When `whenUnsatisfiable=ScheduleAnyway`,
+ it is used to give higher precedence to topologies
+ that satisfy it. It''s a required field. Default value
+ is 1 and 0 is not allowed.'
+ format: int32
+ type: integer
+ minDomains:
+ description: "MinDomains indicates a minimum number
+ of eligible domains. When the number of eligible domains
+ with matching topology keys is less than minDomains,
+ Pod Topology Spread treats \"global minimum\" as 0,
+ and then the calculation of Skew is performed. And
+ when the number of eligible domains with matching
+ topology keys equals or greater than minDomains, this
+ value has no effect on scheduling. As a result, when
+ the number of eligible domains is less than minDomains,
+ scheduler won't schedule more than maxSkew Pods to
+ those domains. If value is nil, the constraint behaves
+ as if MinDomains is equal to 1. Valid values are integers
+ greater than 0. When value is not nil, WhenUnsatisfiable
+ must be DoNotSchedule. \n For example, in a 3-zone
+ cluster, MaxSkew is set to 2, MinDomains is set to
+ 5 and pods with the same labelSelector spread as 2/2/2:
+ | zone1 | zone2 | zone3 | | P P | P P | P P |
+ The number of domains is less than 5(MinDomains),
+ so \"global minimum\" is treated as 0. In this situation,
+ new pod with the same labelSelector cannot be scheduled,
+ because computed skew will be 3(3 - 0) if new Pod
+ is scheduled to any of the three zones, it will violate
+ MaxSkew. \n This is a beta field and requires the
+ MinDomainsInPodTopologySpread feature gate to be enabled
+ (enabled by default)."
+ format: int32
+ type: integer
+ nodeAffinityPolicy:
+ description: "NodeAffinityPolicy indicates how we will
+ treat Pod's nodeAffinity/nodeSelector when calculating
+ pod topology spread skew. Options are: - Honor: only
+ nodes matching nodeAffinity/nodeSelector are included
+ in the calculations. - Ignore: nodeAffinity/nodeSelector
+ are ignored. All nodes are included in the calculations.
+ \n If this value is nil, the behavior is equivalent
+ to the Honor policy. This is a beta-level feature
+ default enabled by the NodeInclusionPolicyInPodTopologySpread
+ feature flag."
+ type: string
+ nodeTaintsPolicy:
+ description: "NodeTaintsPolicy indicates how we will
+ treat node taints when calculating pod topology spread
+ skew. Options are: - Honor: nodes without taints,
+ along with tainted nodes for which the incoming pod
+ has a toleration, are included. - Ignore: node taints
+ are ignored. All nodes are included. \n If this value
+ is nil, the behavior is equivalent to the Ignore policy.
+ This is a beta-level feature default enabled by the
+ NodeInclusionPolicyInPodTopologySpread feature flag."
+ type: string
+ topologyKey:
+ description: TopologyKey is the key of node labels.
+ Nodes that have a label with this key and identical
+ values are considered to be in the same topology.
+                    We consider each <key, value> as a "bucket", and try
+ to put balanced number of pods into each bucket. We
+ define a domain as a particular instance of a topology.
+ Also, we define an eligible domain as a domain whose
+ nodes meet the requirements of nodeAffinityPolicy
+ and nodeTaintsPolicy. e.g. If TopologyKey is "kubernetes.io/hostname",
+ each Node is a domain of that topology. And, if TopologyKey
+ is "topology.kubernetes.io/zone", each zone is a domain
+ of that topology. It's a required field.
+ type: string
+ whenUnsatisfiable:
+ description: 'WhenUnsatisfiable indicates how to deal
+ with a pod if it doesn''t satisfy the spread constraint.
+ - DoNotSchedule (default) tells the scheduler not
+ to schedule it. - ScheduleAnyway tells the scheduler
+ to schedule the pod in any location, but giving higher
+ precedence to topologies that would help reduce the
+ skew. A constraint is considered "Unsatisfiable" for
+ an incoming pod if and only if every possible node
+ assignment for that pod would violate "MaxSkew" on
+ some topology. For example, in a 3-zone cluster, MaxSkew
+ is set to 1, and pods with the same labelSelector
+ spread as 3/1/1: | zone1 | zone2 | zone3 | | P P P
+ | P | P | If WhenUnsatisfiable is set to DoNotSchedule,
+ incoming pod can only be scheduled to zone2(zone3)
+ to become 3/2/1(3/1/2) as ActualSkew(2-1) on zone2(zone3)
+ satisfies MaxSkew(1). In other words, the cluster
+ can still be imbalanced, but scheduler won''t make
+ it *more* imbalanced. It''s a required field.'
+ type: string
+ required:
+ - maxSkew
+ - topologyKey
+ - whenUnsatisfiable
+ type: object
+ type: array
+ required:
+ - convertType
+ type: object
+ type: object
+ labelSelector:
+ description: A label query over a set of resources. If name is not
+ empty, labelSelector will be ignored.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list of label selector requirements.
+ The requirements are ANDed.
+ items:
+ description: A label selector requirement is a selector that
+ contains values, a key, and an operator that relates the key
+ and values.
+ properties:
+ key:
+ description: key is the label key that the selector applies
+ to.
+ type: string
+ operator:
+ description: operator represents a key's relationship to
+ a set of values. Valid operators are In, NotIn, Exists
+ and DoesNotExist.
+ type: string
+ values:
+ description: values is an array of string values. If the
+ operator is In or NotIn, the values array must be non-empty.
+ If the operator is Exists or DoesNotExist, the values
+ array must be empty. This array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value} pairs. A single
+ {key,value} in the matchLabels map is equivalent to an element
+ of matchExpressions, whose key field is "key", the operator
+ is "In", and the values array contains only "value". The requirements
+ are ANDed.
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ leafNodeSelector:
+ description: A label query over a set of resources. If name is not
+ empty, LeafNodeSelector will be ignored.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list of label selector requirements.
+ The requirements are ANDed.
+ items:
+ description: A label selector requirement is a selector that
+ contains values, a key, and an operator that relates the key
+ and values.
+ properties:
+ key:
+ description: key is the label key that the selector applies
+ to.
+ type: string
+ operator:
+ description: operator represents a key's relationship to
+ a set of values. Valid operators are In, NotIn, Exists
+ and DoesNotExist.
+ type: string
+ values:
+ description: values is an array of string values. If the
+ operator is In or NotIn, the values array must be non-empty.
+ If the operator is Exists or DoesNotExist, the values
+ array must be empty. This array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value} pairs. A single
+ {key,value} in the matchLabels map is equivalent to an element
+ of matchExpressions, whose key field is "key", the operator
+ is "In", and the values array contains only "value". The requirements
+ are ANDed.
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ required:
+ - labelSelector
+ type: object
+ required:
+ - spec
+ type: object
+ served: true
+ storage: true
+ subresources:
+ status: {}
diff --git a/deploy/crds/kosmos.io_shadowdaemonsets.yaml b/deploy/crds/kosmos.io_shadowdaemonsets.yaml
index c2030de44..f944f7055 100644
--- a/deploy/crds/kosmos.io_shadowdaemonsets.yaml
+++ b/deploy/crds/kosmos.io_shadowdaemonsets.yaml
@@ -64,6 +64,8 @@ spec:
of an object. Servers should convert recognized schemas to the latest
internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
type: string
+ cluster:
+ type: string
daemonSetSpec:
description: DaemonSetSpec is the specification of a daemon set.
properties:
@@ -204,8 +206,6 @@ spec:
object represents. Servers may infer this from the endpoint the client
submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
type: string
- knode:
- type: string
metadata:
type: object
refType:
diff --git a/deploy/clusterlink-operator.yml b/deploy/kosmos-operator.yml
similarity index 71%
rename from deploy/clusterlink-operator.yml
rename to deploy/kosmos-operator.yml
index e6f4d4eb9..481a03f15 100644
--- a/deploy/clusterlink-operator.yml
+++ b/deploy/kosmos-operator.yml
@@ -1,14 +1,14 @@
apiVersion: v1
kind: ServiceAccount
metadata:
- name: clusterlink-operator
- namespace: clusterlink-system
+ name: kosmos-operator
+ namespace: kosmos-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
- name: clusterlink-operator
- namespace: clusterlink-system
+ name: kosmos-operator
+ namespace: kosmos-system
labels:
app: operator
spec:
@@ -21,7 +21,7 @@ spec:
labels:
app: operator
spec:
- serviceAccountName: clusterlink-operator
+ serviceAccountName: kosmos-operator
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
@@ -32,15 +32,15 @@ spec:
values:
- operator
namespaces:
- - clusterlink-system
+ - kosmos-system
topologyKey: kubernetes.io/hostname
containers:
- name: operator
- image: ghcr.io/kosmos-io/clusterlink-operator:__VERSION__
+ image: ghcr.io/kosmos-io/kosmos-operator:__VERSION__
imagePullPolicy: IfNotPresent
command:
- - clusterlink-operator
- - --controlpanelconfig=/etc/clusterlink/kubeconfig
+ - kosmos-operator
+ - --controlpanelconfig=/etc/kosmos-operator/kubeconfig
resources:
limits:
memory: 200Mi
@@ -51,12 +51,10 @@ spec:
env:
- name: VERSION
value: __VERSION__
- - name: CLUSTER_NAME
- value: __CLUSTER_NAME__
- name: USE_PROXY
value: "false"
volumeMounts:
- - mountPath: /etc/clusterlink
+ - mountPath: /etc/kosmos-operator
name: proxy-config
readOnly: true
volumes:
diff --git a/deploy/kosmos-rbac.yml b/deploy/kosmos-rbac.yml
new file mode 100644
index 000000000..844ffd034
--- /dev/null
+++ b/deploy/kosmos-rbac.yml
@@ -0,0 +1,29 @@
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: kosmos
+rules:
+ - apiGroups: ['*']
+ resources: ['*']
+ verbs: ["*"]
+ - nonResourceURLs: ['*']
+ verbs: ["*"]
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+ name: kosmos
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: kosmos
+subjects:
+ - kind: ServiceAccount
+ name: kosmos-control
+ namespace: kosmos-system
+ - kind: ServiceAccount
+ name: clusterlink-controller-manager
+ namespace: kosmos-system
+ - kind: ServiceAccount
+ name: kosmos-operator
+ namespace: kosmos-system
diff --git a/deploy/scheduler/deployment.yaml b/deploy/scheduler/deployment.yaml
index bd50e07d7..45bee02d4 100644
--- a/deploy/scheduler/deployment.yaml
+++ b/deploy/scheduler/deployment.yaml
@@ -23,12 +23,10 @@ spec:
defaultMode: 420
containers:
- name: kosmos-scheduler
- image: ghcr.io/kosmos-io/scheduler:0.0.2
+ image: ghcr.io/kosmos-io/scheduler:__VERSION__
+ imagePullPolicy: IfNotPresent
command:
- scheduler
- - --leader-elect=true
- - --leader-elect-resource-name=kosmos-scheduler
- - --leader-elect-resource-namespace=kosmos-system
- --config=/etc/kubernetes/kube-scheduler/scheduler-config.yaml
resources:
requests:
@@ -67,8 +65,10 @@ data:
kind: KubeSchedulerConfiguration
leaderElection:
leaderElect: true
+ resourceName: kosmos-scheduler
+ resourceNamespace: kosmos-system
profiles:
- - schedulerName: kosmos-scheduler
+ - schedulerName: default-scheduler
plugins:
preFilter:
disabled:
@@ -78,9 +78,9 @@ data:
filter:
disabled:
- name: "VolumeBinding"
- - name: "TaintToleration"
+ # - name: "TaintToleration"
enabled:
- - name: "KNodeTaintToleration"
+ # - name: "KNodeTaintToleration"
- name: "KnodeVolumeBinding"
score:
disabled:
diff --git a/deploy/scheduler/rbac.yaml b/deploy/scheduler/rbac.yaml
index 0f008445d..d7f9c28ee 100644
--- a/deploy/scheduler/rbac.yaml
+++ b/deploy/scheduler/rbac.yaml
@@ -62,8 +62,6 @@ rules:
- ''
resources:
- endpoints
- resourceNames:
- - kube-scheduler
- verbs:
- get
- list
diff --git a/docs/images/clustertree-arch.png b/docs/images/clustertree-arch.png
new file mode 100644
index 000000000..add83396a
Binary files /dev/null and b/docs/images/clustertree-arch.png differ
diff --git a/docs/images/knode-arch.png b/docs/images/knode-arch.png
deleted file mode 100644
index 19095bac7..000000000
Binary files a/docs/images/knode-arch.png and /dev/null differ
diff --git a/docs/images/kosmos-WeChatIMG.png b/docs/images/kosmos-WeChatIMG.png
new file mode 100644
index 000000000..48a72a1d3
Binary files /dev/null and b/docs/images/kosmos-WeChatIMG.png differ
diff --git a/docs/images/link-arch.png b/docs/images/link-arch.png
index 9be6a2dec..cf6180e3a 100644
Binary files a/docs/images/link-arch.png and b/docs/images/link-arch.png differ
diff --git a/docs/proposals/leafnodegenerate/README.md b/docs/proposals/leafnodegenerate/README.md
new file mode 100644
index 000000000..0d76cd5a1
--- /dev/null
+++ b/docs/proposals/leafnodegenerate/README.md
@@ -0,0 +1,12 @@
+# Kosmos ClusterTree leaf node generation rules
+
+## Summary
+Allow a member cluster to present itself as one or more leaf nodes in the ClusterTree, based on rules such as a Node labelSelector.
+
+## Motivation & User Stories
+1. Some products can provide idle Nodes to join Kosmos.
+2. Some products want an easy way to perform second-level pod scheduling in their member clusters.
+
+## Design Details
+### Architecture
+![leaf_node_rules](img/leaf-nodes.png)
\ No newline at end of file
diff --git a/docs/proposals/leafnodegenerate/img/leaf-nodes.png b/docs/proposals/leafnodegenerate/img/leaf-nodes.png
new file mode 100644
index 000000000..3e54b844f
Binary files /dev/null and b/docs/proposals/leafnodegenerate/img/leaf-nodes.png differ
diff --git a/go.mod b/go.mod
index f32a6a5d9..cadaa3ff4 100644
--- a/go.mod
+++ b/go.mod
@@ -4,24 +4,29 @@ go 1.20
require (
github.com/bep/debounce v1.2.1
- github.com/coreos/go-iptables v0.6.0
+ github.com/containerd/console v1.0.3
+ github.com/containerd/containerd v1.6.14
+ github.com/coreos/go-iptables v0.7.1-0.20231102141700-50d824baaa46
+ github.com/docker/docker v24.0.6+incompatible
github.com/evanphx/json-patch v4.12.0+incompatible
+ github.com/go-logr/logr v1.2.3
github.com/gogo/protobuf v1.3.2
github.com/google/go-cmp v0.5.9
- github.com/mattbaird/jsonpatch v0.0.0-20230413205102-771768614e91
+ github.com/gorilla/mux v1.8.1
github.com/olekukonko/tablewriter v0.0.4
github.com/onsi/ginkgo/v2 v2.9.2
github.com/onsi/gomega v1.27.4
github.com/pkg/errors v0.9.1
github.com/projectcalico/api v0.0.0-20230602153125-fb7148692637
github.com/projectcalico/calico v1.11.0-cni-plugin.0.20220623222645-a52cb86dbaad
+ github.com/sirupsen/logrus v1.9.0
+ github.com/spf13/cast v1.6.0
github.com/spf13/cobra v1.6.0
github.com/spf13/pflag v1.0.5
github.com/vishvananda/netlink v1.2.1-beta.2.0.20220630165224-c591ada0fb2b
- golang.org/x/sync v0.1.0
- golang.org/x/sys v0.6.0
+ golang.org/x/sys v0.12.0
golang.org/x/time v0.3.0
- golang.org/x/tools v0.7.0
+ golang.org/x/tools v0.13.0
k8s.io/api v0.26.3
k8s.io/apiextensions-apiserver v0.26.3
k8s.io/apimachinery v0.26.3
@@ -31,11 +36,13 @@ require (
k8s.io/cluster-bootstrap v0.26.3
k8s.io/code-generator v0.26.3
k8s.io/component-base v0.26.3
+ k8s.io/component-helpers v0.26.3
k8s.io/klog v1.0.0
k8s.io/klog/v2 v2.90.0
k8s.io/kube-openapi v0.0.0-20230303024457-afdc3dddf62d
+ k8s.io/kube-scheduler v0.0.0
k8s.io/kubectl v0.26.3
- k8s.io/kubernetes v0.0.0-00010101000000-000000000000
+ k8s.io/kubernetes v1.13.0
k8s.io/metrics v0.26.3
k8s.io/utils v0.0.0-20221128185143-99ec85e7a448
sigs.k8s.io/controller-runtime v0.14.5
@@ -43,21 +50,29 @@ require (
)
require (
- cloud.google.com/go/compute/metadata v0.2.3 // indirect
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 // indirect
- github.com/BurntSushi/toml v1.0.0 // indirect
github.com/MakeNowJust/heredoc v1.0.0 // indirect
+ github.com/Microsoft/go-winio v0.6.0 // indirect
+ github.com/Microsoft/hcsshim v0.9.6 // indirect
github.com/NYTimes/gziphandler v1.1.1 // indirect
github.com/antlr/antlr4/runtime/Go/antlr v1.4.10 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/blang/semver/v4 v4.0.0 // indirect
github.com/cenkalti/backoff/v4 v4.1.3 // indirect
- github.com/cespare/xxhash/v2 v2.1.2 // indirect
+ github.com/cespare/xxhash/v2 v2.2.0 // indirect
github.com/chai2010/gettext-go v1.0.2 // indirect
+ github.com/containerd/cgroups v1.0.4 // indirect
+ github.com/containerd/continuity v0.3.0 // indirect
+ github.com/containerd/fifo v1.0.0 // indirect
+ github.com/containerd/ttrpc v1.1.0 // indirect
+ github.com/containerd/typeurl v1.0.2 // indirect
github.com/coreos/go-semver v0.3.1 // indirect
github.com/coreos/go-systemd/v22 v22.3.3-0.20220203105225-a9a7ef127534 // indirect
github.com/davecgh/go-spew v1.1.1 // indirect
github.com/docker/distribution v2.8.1+incompatible // indirect
+ github.com/docker/go-connections v0.4.0 // indirect
+ github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c // indirect
+ github.com/docker/go-units v0.5.0 // indirect
github.com/emicklei/go-restful/v3 v3.9.0 // indirect
github.com/evanphx/json-patch/v5 v5.6.0 // indirect
github.com/exponent-io/jsonpath v0.0.0-20151013193312-d6023ce2651d // indirect
@@ -66,7 +81,6 @@ require (
github.com/fsnotify/fsnotify v1.6.0 // indirect
github.com/fvbommel/sortorder v1.0.1 // indirect
github.com/go-errors/errors v1.0.1 // indirect
- github.com/go-logr/logr v1.2.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/go-openapi/jsonpointer v0.19.6 // indirect
github.com/go-openapi/jsonreference v0.20.1 // indirect
@@ -74,6 +88,7 @@ require (
github.com/go-playground/locales v0.12.1 // indirect
github.com/go-playground/universal-translator v0.0.0-20170327191703-71201497bace // indirect
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 // indirect
+ github.com/gogo/googleapis v1.4.1 // indirect
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
github.com/golang/protobuf v1.5.3 // indirect
github.com/google/btree v1.0.1 // indirect
@@ -85,21 +100,23 @@ require (
github.com/google/uuid v1.3.0 // indirect
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7 // indirect
github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0 // indirect
- github.com/grpc-ecosystem/grpc-gateway/v2 v2.7.0 // indirect
- github.com/imdario/mergo v0.3.9 // indirect
+ github.com/grpc-ecosystem/grpc-gateway/v2 v2.11.3 // indirect
+ github.com/imdario/mergo v0.3.12 // indirect
github.com/inconshreveable/mousetrap v1.0.1 // indirect
github.com/josharian/intern v1.0.0 // indirect
github.com/json-iterator/go v1.1.12 // indirect
github.com/kelseyhightower/envconfig v0.0.0-20180517194557-dd1402a4d99d // indirect
- github.com/kr/pretty v0.3.0 // indirect
+ github.com/klauspost/compress v1.11.13 // indirect
github.com/leodido/go-urn v0.0.0-20181204092800-a67a23e1c1af // indirect
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de // indirect
github.com/mailru/easyjson v0.7.7 // indirect
github.com/mattn/go-runewidth v0.0.7 // indirect
- github.com/matttproud/golang_protobuf_extensions v1.0.2 // indirect
+ github.com/matttproud/golang_protobuf_extensions v1.0.4 // indirect
github.com/mitchellh/go-wordwrap v1.0.0 // indirect
+ github.com/moby/locker v1.0.1 // indirect
github.com/moby/spdystream v0.2.0 // indirect
github.com/moby/sys/mountinfo v0.6.2 // indirect
+ github.com/moby/sys/signal v0.6.0 // indirect
github.com/moby/term v0.0.0-20220808134915-39b0c02b01ae // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
@@ -107,7 +124,11 @@ require (
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
- github.com/opencontainers/selinux v1.10.0 // indirect
+ github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799 // indirect
+ github.com/opencontainers/runc v1.1.4 // indirect
+ github.com/opencontainers/runtime-spec v1.1.0-rc.1 // indirect
+ github.com/opencontainers/selinux v1.10.1 // indirect
+ github.com/pelletier/go-toml v1.9.5 // indirect
github.com/peterbourgon/diskv v2.0.1+incompatible // indirect
github.com/projectcalico/go-json v0.0.0-20161128004156-6219dc7339ba // indirect
github.com/projectcalico/go-yaml-wrapper v0.0.0-20191112210931-090425220c54 // indirect
@@ -116,7 +137,6 @@ require (
github.com/prometheus/common v0.37.0 // indirect
github.com/prometheus/procfs v0.8.0 // indirect
github.com/russross/blackfriday/v2 v2.1.0 // indirect
- github.com/sirupsen/logrus v1.9.0 // indirect
github.com/stoewer/go-strcase v1.2.0 // indirect
github.com/vishvananda/netns v0.0.0-20210104183010-2eb08e3e575f // indirect
github.com/xlab/treeprint v1.1.0 // indirect
@@ -124,6 +144,7 @@ require (
go.etcd.io/etcd/client/pkg/v3 v3.5.7 // indirect
go.etcd.io/etcd/client/v2 v2.305.7 // indirect
go.etcd.io/etcd/client/v3 v3.5.7 // indirect
+ go.opencensus.io v0.24.0 // indirect
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.35.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.35.0 // indirect
go.opentelemetry.io/otel v1.10.0 // indirect
@@ -138,30 +159,29 @@ require (
go.uber.org/atomic v1.9.0 // indirect
go.uber.org/multierr v1.8.0 // indirect
go.uber.org/zap v1.24.0 // indirect
- golang.org/x/crypto v0.1.0 // indirect
- golang.org/x/mod v0.9.0 // indirect
- golang.org/x/net v0.8.0 // indirect
- golang.org/x/oauth2 v0.0.0-20221014153046-6fdb5e3db783 // indirect
- golang.org/x/term v0.6.0 // indirect
- golang.org/x/text v0.8.0 // indirect
+ golang.org/x/crypto v0.13.0 // indirect
+ golang.org/x/mod v0.12.0 // indirect
+ golang.org/x/net v0.15.0 // indirect
+ golang.org/x/oauth2 v0.4.0 // indirect
+ golang.org/x/sync v0.3.0 // indirect
+ golang.org/x/term v0.12.0 // indirect
+ golang.org/x/text v0.13.0 // indirect
golang.zx2c4.com/wireguard/wgctrl v0.0.0-20200324154536-ceff61240acf // indirect
gomodules.xyz/jsonpatch/v2 v2.2.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
- google.golang.org/genproto v0.0.0-20221024183307-1bc688fe9f3e // indirect
- google.golang.org/grpc v1.50.1 // indirect
- google.golang.org/protobuf v1.28.1 // indirect
+ google.golang.org/genproto v0.0.0-20230110181048-76db0878b65f // indirect
+ google.golang.org/grpc v1.53.0-dev.0.20230123225046-4075ef07c5d5 // indirect
+ google.golang.org/protobuf v1.28.2-0.20230118093459-a9481185b34d // indirect
gopkg.in/go-playground/validator.v9 v9.27.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.0.0 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
- k8s.io/cloud-provider v0.23.3 // indirect
- k8s.io/component-helpers v0.26.3 // indirect
- k8s.io/csi-translation-lib v0.23.3 // indirect
+ k8s.io/cloud-provider v0.26.3 // indirect
+ k8s.io/csi-translation-lib v0.26.3 // indirect
k8s.io/dynamic-resource-allocation v0.0.0 // indirect
k8s.io/gengo v0.0.0-20220902162205-c0856e24416d // indirect
k8s.io/kms v0.26.3 // indirect
- k8s.io/kube-scheduler v0.0.0 // indirect
k8s.io/mount-utils v0.23.3 // indirect
sigs.k8s.io/apiserver-network-proxy/konnectivity-client v0.0.36 // indirect
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd // indirect
@@ -173,6 +193,7 @@ require (
replace (
golang.org/x/oauth2 => golang.org/x/oauth2 v0.1.0
+ golang.zx2c4.com/wireguard => golang.zx2c4.com/wireguard v0.0.0-20231022001213-2e0774f246fb
k8s.io/api => k8s.io/api v0.26.3
k8s.io/apiextensions-apiserver => k8s.io/apiextensions-apiserver v0.26.3
k8s.io/apimachinery => k8s.io/apimachinery v0.26.3
@@ -188,6 +209,7 @@ replace (
k8s.io/cri-api => k8s.io/cri-api v0.26.3
k8s.io/csi-translation-lib => k8s.io/csi-translation-lib v0.26.3
k8s.io/dynamic-resource-allocation => k8s.io/dynamic-resource-allocation v0.26.3
+ k8s.io/kms => k8s.io/kms v0.26.3
k8s.io/kube-aggregator => k8s.io/kube-aggregator v0.26.3
k8s.io/kube-controller-manager => k8s.io/kube-controller-manager v0.26.3
k8s.io/kube-proxy => k8s.io/kube-proxy v0.26.3
diff --git a/go.sum b/go.sum
index eb058a665..2d14e37f2 100644
--- a/go.sum
+++ b/go.sum
@@ -1,8 +1,11 @@
+bazil.org/fuse v0.0.0-20160811212531-371fbbdaa898/go.mod h1:Xbm+BRKSBEpa4q4hTSxohYNQpsxXPbPry4JJWOB3LB8=
+bitbucket.org/bertimus9/systemstat v0.5.0/go.mod h1:EkUWPp8lKFPMXP8vnbpT5JDI0W/sTiLZAvN8ONWErHY=
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.38.0/go.mod h1:990N+gfupTy94rShfmMCWGDn0LpTmnzTp2qbd1dvSRU=
cloud.google.com/go v0.44.1/go.mod h1:iSa0KzasP4Uvy3f1mN/7PiObzGgflwredwwASm/v6AU=
cloud.google.com/go v0.44.2/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
+cloud.google.com/go v0.44.3/go.mod h1:60680Gw3Yr4ikxnPRS/oxxkBccT6SA1yMk63TGekxKY=
cloud.google.com/go v0.45.1/go.mod h1:RpBamKRgapWJb87xiFSdk4g1CME7QZg3uwTez+TSTjc=
cloud.google.com/go v0.46.3/go.mod h1:a6bKKbmY7er1mI7TEI4lsAkts/mkhTSZK8w33B4RAg0=
cloud.google.com/go v0.50.0/go.mod h1:r9sluTvynVuxRIOHXQEHMFffphuXHOMZMycpNR5e6To=
@@ -15,6 +18,7 @@ cloud.google.com/go v0.62.0/go.mod h1:jmCYTdRCQuc1PHIIJ/maLInMho30T/Y0M4hTdTShOY
cloud.google.com/go v0.65.0/go.mod h1:O5N8zS7uWy9vkA9vayVHs65eM1ubvY4h553ofrNHObY=
cloud.google.com/go v0.72.0/go.mod h1:M+5Vjvlc2wnp6tjzE102Dw08nGShTscUx2nZMufOKPI=
cloud.google.com/go v0.74.0/go.mod h1:VV1xSbzvo+9QJOxLDaJfTjx5e+MePCpCWwvftOeQmWk=
+cloud.google.com/go v0.75.0/go.mod h1:VGuuCn7PG0dwsd5XPVm2Mm3wlh3EL55/79EKB6hlPTY=
cloud.google.com/go v0.78.0/go.mod h1:QjdrLG0uq+YwhjoVOLsS1t7TW8fs36kLs4XO5R5ECHg=
cloud.google.com/go v0.79.0/go.mod h1:3bzgcEeQlzbuEAYu4mrWhKqWjmpprinYgKJLgKHnbb8=
cloud.google.com/go v0.81.0/go.mod h1:mk/AM35KwGk/Nm2YSeZbxXdrNK3KZOYHmLkOqC2V6E0=
@@ -26,59 +30,430 @@ cloud.google.com/go v0.93.3/go.mod h1:8utlLll2EF5XMAV15woO4lSbWQlk8rer9aLOfLh7+Y
cloud.google.com/go v0.94.1/go.mod h1:qAlAugsXlC+JWO+Bke5vCtc9ONxjQT3drlTTnAplMW4=
cloud.google.com/go v0.97.0/go.mod h1:GF7l59pYBVlXQIBLx3a761cZ41F9bBH3JUlihCt2Udc=
cloud.google.com/go v0.99.0/go.mod h1:w0Xx2nLzqWJPuozYQX+hFfCSI8WioryfRDzkoI/Y2ZA=
+cloud.google.com/go v0.100.1/go.mod h1:fs4QogzfH5n2pBXBP9vRiU+eCny7lD2vmFZy79Iuw1U=
cloud.google.com/go v0.100.2/go.mod h1:4Xra9TjzAeYHrl5+oeLlzbM2k3mjVhZh4UqTZ//w99A=
-cloud.google.com/go v0.102.0 h1:DAq3r8y4mDgyB/ZPJ9v/5VJNqjgJAxTn6ZYLlUywOu8=
cloud.google.com/go v0.102.0/go.mod h1:oWcCzKlqJ5zgHQt9YsaeTY9KzIvjyy0ArmiBUgpQ+nc=
+cloud.google.com/go v0.102.1/go.mod h1:XZ77E9qnTEnrgEOvr4xzfdX5TRo7fB4T2F4O6+34hIU=
+cloud.google.com/go v0.104.0/go.mod h1:OO6xxXdJyvuJPcEPBLN9BJPD+jep5G1+2U5B5gkRYtA=
+cloud.google.com/go v0.105.0/go.mod h1:PrLgOJNe5nfE9UMxKxgXj4mD3voiP+YQ6gdt6KMFOKM=
+cloud.google.com/go v0.107.0 h1:qkj22L7bgkl6vIeZDlOY2po43Mx/TIa2Wsa7VR+PEww=
+cloud.google.com/go v0.107.0/go.mod h1:wpc2eNrD7hXUTy8EKS10jkxpZBjASrORK7goS+3YX2I=
+cloud.google.com/go/accessapproval v1.4.0/go.mod h1:zybIuC3KpDOvotz59lFe5qxRZx6C75OtwbisN56xYB4=
+cloud.google.com/go/accessapproval v1.5.0/go.mod h1:HFy3tuiGvMdcd/u+Cu5b9NkO1pEICJ46IR82PoUdplw=
+cloud.google.com/go/accesscontextmanager v1.3.0/go.mod h1:TgCBehyr5gNMz7ZaH9xubp+CE8dkrszb4oK9CWyvD4o=
+cloud.google.com/go/accesscontextmanager v1.4.0/go.mod h1:/Kjh7BBu/Gh83sv+K60vN9QE5NJcd80sU33vIe2IFPE=
+cloud.google.com/go/aiplatform v1.22.0/go.mod h1:ig5Nct50bZlzV6NvKaTwmplLLddFx0YReh9WfTO5jKw=
+cloud.google.com/go/aiplatform v1.24.0/go.mod h1:67UUvRBKG6GTayHKV8DBv2RtR1t93YRu5B1P3x99mYY=
+cloud.google.com/go/aiplatform v1.27.0/go.mod h1:Bvxqtl40l0WImSb04d0hXFU7gDOiq9jQmorivIiWcKg=
+cloud.google.com/go/analytics v0.11.0/go.mod h1:DjEWCu41bVbYcKyvlws9Er60YE4a//bK6mnhWvQeFNI=
+cloud.google.com/go/analytics v0.12.0/go.mod h1:gkfj9h6XRf9+TS4bmuhPEShsh3hH8PAZzm/41OOhQd4=
+cloud.google.com/go/apigateway v1.3.0/go.mod h1:89Z8Bhpmxu6AmUxuVRg/ECRGReEdiP3vQtk4Z1J9rJk=
+cloud.google.com/go/apigateway v1.4.0/go.mod h1:pHVY9MKGaH9PQ3pJ4YLzoj6U5FUDeDFBllIz7WmzJoc=
+cloud.google.com/go/apigeeconnect v1.3.0/go.mod h1:G/AwXFAKo0gIXkPTVfZDd2qA1TxBXJ3MgMRBQkIi9jc=
+cloud.google.com/go/apigeeconnect v1.4.0/go.mod h1:kV4NwOKqjvt2JYR0AoIWo2QGfoRtn/pkS3QlHp0Ni04=
+cloud.google.com/go/appengine v1.4.0/go.mod h1:CS2NhuBuDXM9f+qscZ6V86m1MIIqPj3WC/UoEuR1Sno=
+cloud.google.com/go/appengine v1.5.0/go.mod h1:TfasSozdkFI0zeoxW3PTBLiNqRmzraodCWatWI9Dmak=
+cloud.google.com/go/area120 v0.5.0/go.mod h1:DE/n4mp+iqVyvxHN41Vf1CR602GiHQjFPusMFW6bGR4=
+cloud.google.com/go/area120 v0.6.0/go.mod h1:39yFJqWVgm0UZqWTOdqkLhjoC7uFfgXRC8g/ZegeAh0=
+cloud.google.com/go/artifactregistry v1.6.0/go.mod h1:IYt0oBPSAGYj/kprzsBjZ/4LnG/zOcHyFHjWPCi6SAQ=
+cloud.google.com/go/artifactregistry v1.7.0/go.mod h1:mqTOFOnGZx8EtSqK/ZWcsm/4U8B77rbcLP6ruDU2Ixk=
+cloud.google.com/go/artifactregistry v1.8.0/go.mod h1:w3GQXkJX8hiKN0v+at4b0qotwijQbYUqF2GWkZzAhC0=
+cloud.google.com/go/artifactregistry v1.9.0/go.mod h1:2K2RqvA2CYvAeARHRkLDhMDJ3OXy26h3XW+3/Jh2uYc=
+cloud.google.com/go/asset v1.5.0/go.mod h1:5mfs8UvcM5wHhqtSv8J1CtxxaQq3AdBxxQi2jGW/K4o=
+cloud.google.com/go/asset v1.7.0/go.mod h1:YbENsRK4+xTiL+Ofoj5Ckf+O17kJtgp3Y3nn4uzZz5s=
+cloud.google.com/go/asset v1.8.0/go.mod h1:mUNGKhiqIdbr8X7KNayoYvyc4HbbFO9URsjbytpUaW0=
+cloud.google.com/go/asset v1.9.0/go.mod h1:83MOE6jEJBMqFKadM9NLRcs80Gdw76qGuHn8m3h8oHQ=
+cloud.google.com/go/asset v1.10.0/go.mod h1:pLz7uokL80qKhzKr4xXGvBQXnzHn5evJAEAtZiIb0wY=
+cloud.google.com/go/assuredworkloads v1.5.0/go.mod h1:n8HOZ6pff6re5KYfBXcFvSViQjDwxFkAkmUFffJRbbY=
+cloud.google.com/go/assuredworkloads v1.6.0/go.mod h1:yo2YOk37Yc89Rsd5QMVECvjaMKymF9OP+QXWlKXUkXw=
+cloud.google.com/go/assuredworkloads v1.7.0/go.mod h1:z/736/oNmtGAyU47reJgGN+KVoYoxeLBoj4XkKYscNI=
+cloud.google.com/go/assuredworkloads v1.8.0/go.mod h1:AsX2cqyNCOvEQC8RMPnoc0yEarXQk6WEKkxYfL6kGIo=
+cloud.google.com/go/assuredworkloads v1.9.0/go.mod h1:kFuI1P78bplYtT77Tb1hi0FMxM0vVpRC7VVoJC3ZoT0=
+cloud.google.com/go/automl v1.5.0/go.mod h1:34EjfoFGMZ5sgJ9EoLsRtdPSNZLcfflJR39VbVNS2M0=
+cloud.google.com/go/automl v1.6.0/go.mod h1:ugf8a6Fx+zP0D59WLhqgTDsQI9w07o64uf/Is3Nh5p8=
+cloud.google.com/go/automl v1.7.0/go.mod h1:RL9MYCCsJEOmt0Wf3z9uzG0a7adTT1fe+aObgSpkCt8=
+cloud.google.com/go/automl v1.8.0/go.mod h1:xWx7G/aPEe/NP+qzYXktoBSDfjO+vnKMGgsApGJJquM=
+cloud.google.com/go/baremetalsolution v0.3.0/go.mod h1:XOrocE+pvK1xFfleEnShBlNAXf+j5blPPxrhjKgnIFc=
+cloud.google.com/go/baremetalsolution v0.4.0/go.mod h1:BymplhAadOO/eBa7KewQ0Ppg4A4Wplbn+PsFKRLo0uI=
+cloud.google.com/go/batch v0.3.0/go.mod h1:TR18ZoAekj1GuirsUsR1ZTKN3FC/4UDnScjT8NXImFE=
+cloud.google.com/go/batch v0.4.0/go.mod h1:WZkHnP43R/QCGQsZ+0JyG4i79ranE2u8xvjq/9+STPE=
+cloud.google.com/go/beyondcorp v0.2.0/go.mod h1:TB7Bd+EEtcw9PCPQhCJtJGjk/7TC6ckmnSFS+xwTfm4=
+cloud.google.com/go/beyondcorp v0.3.0/go.mod h1:E5U5lcrcXMsCuoDNyGrpyTm/hn7ne941Jz2vmksAxW8=
cloud.google.com/go/bigquery v1.0.1/go.mod h1:i/xbL2UlR5RvWAURpBYZTtm/cXjCha9lbfbpx4poX+o=
cloud.google.com/go/bigquery v1.3.0/go.mod h1:PjpwJnslEMmckchkHFfq+HTD2DmtT67aNFKH1/VBDHE=
cloud.google.com/go/bigquery v1.4.0/go.mod h1:S8dzgnTigyfTmLBfrtrhyYhwRxG72rYxvftPBK2Dvzc=
cloud.google.com/go/bigquery v1.5.0/go.mod h1:snEHRnqQbz117VIFhE8bmtwIDY80NLUZUMb4Nv6dBIg=
cloud.google.com/go/bigquery v1.7.0/go.mod h1://okPTzCYNXSlb24MZs83e2Do+h+VXtc4gLoIoXIAPc=
cloud.google.com/go/bigquery v1.8.0/go.mod h1:J5hqkt3O0uAFnINi6JXValWIb1v0goeZM77hZzJN/fQ=
+cloud.google.com/go/bigquery v1.42.0/go.mod h1:8dRTJxhtG+vwBKzE5OseQn/hiydoQN3EedCaOdYmxRA=
+cloud.google.com/go/bigquery v1.43.0/go.mod h1:ZMQcXHsl+xmU1z36G2jNGZmKp9zNY5BUua5wDgmNCfw=
+cloud.google.com/go/bigquery v1.44.0/go.mod h1:0Y33VqXTEsbamHJvJHdFmtqHvMIY28aK1+dFsvaChGc=
+cloud.google.com/go/billing v1.4.0/go.mod h1:g9IdKBEFlItS8bTtlrZdVLWSSdSyFUZKXNS02zKMOZY=
+cloud.google.com/go/billing v1.5.0/go.mod h1:mztb1tBc3QekhjSgmpf/CV4LzWXLzCArwpLmP2Gm88s=
+cloud.google.com/go/billing v1.6.0/go.mod h1:WoXzguj+BeHXPbKfNWkqVtDdzORazmCjraY+vrxcyvI=
+cloud.google.com/go/billing v1.7.0/go.mod h1:q457N3Hbj9lYwwRbnlD7vUpyjq6u5U1RAOArInEiD5Y=
+cloud.google.com/go/binaryauthorization v1.1.0/go.mod h1:xwnoWu3Y84jbuHa0zd526MJYmtnVXn0syOjaJgy4+dM=
+cloud.google.com/go/binaryauthorization v1.2.0/go.mod h1:86WKkJHtRcv5ViNABtYMhhNWRrD1Vpi//uKEy7aYEfI=
+cloud.google.com/go/binaryauthorization v1.3.0/go.mod h1:lRZbKgjDIIQvzYQS1p99A7/U1JqvqeZg0wiI5tp6tg0=
+cloud.google.com/go/binaryauthorization v1.4.0/go.mod h1:tsSPQrBd77VLplV70GUhBf/Zm3FsKmgSqgm4UmiDItk=
+cloud.google.com/go/certificatemanager v1.3.0/go.mod h1:n6twGDvcUBFu9uBgt4eYvvf3sQ6My8jADcOVwHmzadg=
+cloud.google.com/go/certificatemanager v1.4.0/go.mod h1:vowpercVFyqs8ABSmrdV+GiFf2H/ch3KyudYQEMM590=
+cloud.google.com/go/channel v1.8.0/go.mod h1:W5SwCXDJsq/rg3tn3oG0LOxpAo6IMxNa09ngphpSlnk=
+cloud.google.com/go/channel v1.9.0/go.mod h1:jcu05W0my9Vx4mt3/rEHpfxc9eKi9XwsdDL8yBMbKUk=
+cloud.google.com/go/cloudbuild v1.3.0/go.mod h1:WequR4ULxlqvMsjDEEEFnOG5ZSRSgWOywXYDb1vPE6U=
+cloud.google.com/go/cloudbuild v1.4.0/go.mod h1:5Qwa40LHiOXmz3386FrjrYM93rM/hdRr7b53sySrTqA=
+cloud.google.com/go/clouddms v1.3.0/go.mod h1:oK6XsCDdW4Ib3jCCBugx+gVjevp2TMXFtgxvPSee3OM=
+cloud.google.com/go/clouddms v1.4.0/go.mod h1:Eh7sUGCC+aKry14O1NRljhjyrr0NFC0G2cjwX0cByRk=
+cloud.google.com/go/cloudtasks v1.5.0/go.mod h1:fD92REy1x5woxkKEkLdvavGnPJGEn8Uic9nWuLzqCpY=
+cloud.google.com/go/cloudtasks v1.6.0/go.mod h1:C6Io+sxuke9/KNRkbQpihnW93SWDU3uXt92nu85HkYI=
+cloud.google.com/go/cloudtasks v1.7.0/go.mod h1:ImsfdYWwlWNJbdgPIIGJWC+gemEGTBK/SunNQQNCAb4=
+cloud.google.com/go/cloudtasks v1.8.0/go.mod h1:gQXUIwCSOI4yPVK7DgTVFiiP0ZW/eQkydWzwVMdHxrI=
cloud.google.com/go/compute v0.1.0/go.mod h1:GAesmwr110a34z04OlxYkATPBEfVhkymfTBXtfbBFow=
cloud.google.com/go/compute v1.3.0/go.mod h1:cCZiE1NHEtai4wiufUhW8I8S1JKkAnhnQJWM7YD99wM=
cloud.google.com/go/compute v1.5.0/go.mod h1:9SMHyhJlzhlkJqrPAc839t2BZFTSk6Jdj6mkzQJeu0M=
cloud.google.com/go/compute v1.6.0/go.mod h1:T29tfhtVbq1wvAPo0E3+7vhgmkOYeXjhFvz/FMzPu0s=
cloud.google.com/go/compute v1.6.1/go.mod h1:g85FgpzFvNULZ+S8AYq87axRKuf2Kh7deLqV/jJ3thU=
cloud.google.com/go/compute v1.7.0/go.mod h1:435lt8av5oL9P3fv1OEzSbSUe+ybHXGMPQHHZWZxy9U=
-cloud.google.com/go/compute v1.14.0 h1:hfm2+FfxVmnRlh6LpB7cg1ZNU+5edAHmW679JePztk0=
+cloud.google.com/go/compute v1.10.0/go.mod h1:ER5CLbMxl90o2jtNbGSbtfOpQKR0t15FOtRsugnLrlU=
+cloud.google.com/go/compute v1.12.0/go.mod h1:e8yNOBcBONZU1vJKCvCoDw/4JQsA0dpM4x/6PIIOocU=
+cloud.google.com/go/compute v1.12.1/go.mod h1:e8yNOBcBONZU1vJKCvCoDw/4JQsA0dpM4x/6PIIOocU=
+cloud.google.com/go/compute v1.13.0/go.mod h1:5aPTS0cUNMIc1CE546K+Th6weJUNQErARyZtRXDJ8GE=
+cloud.google.com/go/compute v1.14.0/go.mod h1:YfLtxrj9sU4Yxv+sXzZkyPjEyPBZfXHUvjxega5vAdo=
+cloud.google.com/go/compute v1.15.1 h1:7UGq3QknM33pw5xATlpzeoomNxsacIVvTqTTvbfajmE=
+cloud.google.com/go/compute v1.15.1/go.mod h1:bjjoF/NtFUrkD/urWfdHaKuOPDR5nWIs63rR+SXhcpA=
+cloud.google.com/go/compute/metadata v0.1.0/go.mod h1:Z1VN+bulIf6bt4P/C37K4DyZYZEXYonfTBHHFPO/4UU=
+cloud.google.com/go/compute/metadata v0.2.1/go.mod h1:jgHgmJd2RKBGzXqF5LR2EZMGxBkeanZ9wwa75XHJgOM=
cloud.google.com/go/compute/metadata v0.2.3 h1:mg4jlk7mCAj6xXp9UJ4fjI9VUI5rubuGBW5aJ7UnBMY=
cloud.google.com/go/compute/metadata v0.2.3/go.mod h1:VAV5nSsACxMJvgaAuX6Pk2AawlZn8kiOGuCv6gTkwuA=
+cloud.google.com/go/contactcenterinsights v1.3.0/go.mod h1:Eu2oemoePuEFc/xKFPjbTuPSj0fYJcPls9TFlPNnHHY=
+cloud.google.com/go/contactcenterinsights v1.4.0/go.mod h1:L2YzkGbPsv+vMQMCADxJoT9YiTTnSEd6fEvCeHTYVck=
+cloud.google.com/go/container v1.6.0/go.mod h1:Xazp7GjJSeUYo688S+6J5V+n/t+G5sKBTFkKNudGRxg=
+cloud.google.com/go/container v1.7.0/go.mod h1:Dp5AHtmothHGX3DwwIHPgq45Y8KmNsgN3amoYfxVkLo=
+cloud.google.com/go/containeranalysis v0.5.1/go.mod h1:1D92jd8gRR/c0fGMlymRgxWD3Qw9C1ff6/T7mLgVL8I=
+cloud.google.com/go/containeranalysis v0.6.0/go.mod h1:HEJoiEIu+lEXM+k7+qLCci0h33lX3ZqoYFdmPcoO7s4=
+cloud.google.com/go/datacatalog v1.3.0/go.mod h1:g9svFY6tuR+j+hrTw3J2dNcmI0dzmSiyOzm8kpLq0a0=
+cloud.google.com/go/datacatalog v1.5.0/go.mod h1:M7GPLNQeLfWqeIm3iuiruhPzkt65+Bx8dAKvScX8jvs=
+cloud.google.com/go/datacatalog v1.6.0/go.mod h1:+aEyF8JKg+uXcIdAmmaMUmZ3q1b/lKLtXCmXdnc0lbc=
+cloud.google.com/go/datacatalog v1.7.0/go.mod h1:9mEl4AuDYWw81UGc41HonIHH7/sn52H0/tc8f8ZbZIE=
+cloud.google.com/go/datacatalog v1.8.0/go.mod h1:KYuoVOv9BM8EYz/4eMFxrr4DUKhGIOXxZoKYF5wdISM=
+cloud.google.com/go/dataflow v0.6.0/go.mod h1:9QwV89cGoxjjSR9/r7eFDqqjtvbKxAK2BaYU6PVk9UM=
+cloud.google.com/go/dataflow v0.7.0/go.mod h1:PX526vb4ijFMesO1o202EaUmouZKBpjHsTlCtB4parQ=
+cloud.google.com/go/dataform v0.3.0/go.mod h1:cj8uNliRlHpa6L3yVhDOBrUXH+BPAO1+KFMQQNSThKo=
+cloud.google.com/go/dataform v0.4.0/go.mod h1:fwV6Y4Ty2yIFL89huYlEkwUPtS7YZinZbzzj5S9FzCE=
+cloud.google.com/go/dataform v0.5.0/go.mod h1:GFUYRe8IBa2hcomWplodVmUx/iTL0FrsauObOM3Ipr0=
+cloud.google.com/go/datafusion v1.4.0/go.mod h1:1Zb6VN+W6ALo85cXnM1IKiPw+yQMKMhB9TsTSRDo/38=
+cloud.google.com/go/datafusion v1.5.0/go.mod h1:Kz+l1FGHB0J+4XF2fud96WMmRiq/wj8N9u007vyXZ2w=
+cloud.google.com/go/datalabeling v0.5.0/go.mod h1:TGcJ0G2NzcsXSE/97yWjIZO0bXj0KbVlINXMG9ud42I=
+cloud.google.com/go/datalabeling v0.6.0/go.mod h1:WqdISuk/+WIGeMkpw/1q7bK/tFEZxsrFJOJdY2bXvTQ=
+cloud.google.com/go/dataplex v1.3.0/go.mod h1:hQuRtDg+fCiFgC8j0zV222HvzFQdRd+SVX8gdmFcZzA=
+cloud.google.com/go/dataplex v1.4.0/go.mod h1:X51GfLXEMVJ6UN47ESVqvlsRplbLhcsAt0kZCCKsU0A=
+cloud.google.com/go/dataproc v1.7.0/go.mod h1:CKAlMjII9H90RXaMpSxQ8EU6dQx6iAYNPcYPOkSbi8s=
+cloud.google.com/go/dataproc v1.8.0/go.mod h1:5OW+zNAH0pMpw14JVrPONsxMQYMBqJuzORhIBfBn9uI=
+cloud.google.com/go/dataqna v0.5.0/go.mod h1:90Hyk596ft3zUQ8NkFfvICSIfHFh1Bc7C4cK3vbhkeo=
+cloud.google.com/go/dataqna v0.6.0/go.mod h1:1lqNpM7rqNLVgWBJyk5NF6Uen2PHym0jtVJonplVsDA=
cloud.google.com/go/datastore v1.0.0/go.mod h1:LXYbyblFSglQ5pkeyhO+Qmw7ukd3C+pD7TKLgZqpHYE=
cloud.google.com/go/datastore v1.1.0/go.mod h1:umbIZjpQpHh4hmRpGhH4tLFup+FVzqBi1b3c64qFpCk=
+cloud.google.com/go/datastore v1.10.0/go.mod h1:PC5UzAmDEkAmkfaknstTYbNpgE49HAgW2J1gcgUfmdM=
+cloud.google.com/go/datastream v1.2.0/go.mod h1:i/uTP8/fZwgATHS/XFu0TcNUhuA0twZxxQ3EyCUQMwo=
+cloud.google.com/go/datastream v1.3.0/go.mod h1:cqlOX8xlyYF/uxhiKn6Hbv6WjwPPuI9W2M9SAXwaLLQ=
+cloud.google.com/go/datastream v1.4.0/go.mod h1:h9dpzScPhDTs5noEMQVWP8Wx8AFBRyS0s8KWPx/9r0g=
+cloud.google.com/go/datastream v1.5.0/go.mod h1:6TZMMNPwjUqZHBKPQ1wwXpb0d5VDVPl2/XoS5yi88q4=
+cloud.google.com/go/deploy v1.4.0/go.mod h1:5Xghikd4VrmMLNaF6FiRFDlHb59VM59YoDQnOUdsH/c=
+cloud.google.com/go/deploy v1.5.0/go.mod h1:ffgdD0B89tToyW/U/D2eL0jN2+IEV/3EMuXHA0l4r+s=
+cloud.google.com/go/dialogflow v1.15.0/go.mod h1:HbHDWs33WOGJgn6rfzBW1Kv807BE3O1+xGbn59zZWI4=
+cloud.google.com/go/dialogflow v1.16.1/go.mod h1:po6LlzGfK+smoSmTBnbkIZY2w8ffjz/RcGSS+sh1el0=
+cloud.google.com/go/dialogflow v1.17.0/go.mod h1:YNP09C/kXA1aZdBgC/VtXX74G/TKn7XVCcVumTflA+8=
+cloud.google.com/go/dialogflow v1.18.0/go.mod h1:trO7Zu5YdyEuR+BhSNOqJezyFQ3aUzz0njv7sMx/iek=
+cloud.google.com/go/dialogflow v1.19.0/go.mod h1:JVmlG1TwykZDtxtTXujec4tQ+D8SBFMoosgy+6Gn0s0=
+cloud.google.com/go/dlp v1.6.0/go.mod h1:9eyB2xIhpU0sVwUixfBubDoRwP+GjeUoxxeueZmqvmM=
+cloud.google.com/go/dlp v1.7.0/go.mod h1:68ak9vCiMBjbasxeVD17hVPxDEck+ExiHavX8kiHG+Q=
+cloud.google.com/go/documentai v1.7.0/go.mod h1:lJvftZB5NRiFSX4moiye1SMxHx0Bc3x1+p9e/RfXYiU=
+cloud.google.com/go/documentai v1.8.0/go.mod h1:xGHNEB7CtsnySCNrCFdCyyMz44RhFEEX2Q7UD0c5IhU=
+cloud.google.com/go/documentai v1.9.0/go.mod h1:FS5485S8R00U10GhgBC0aNGrJxBP8ZVpEeJ7PQDZd6k=
+cloud.google.com/go/documentai v1.10.0/go.mod h1:vod47hKQIPeCfN2QS/jULIvQTugbmdc0ZvxxfQY1bg4=
+cloud.google.com/go/domains v0.6.0/go.mod h1:T9Rz3GasrpYk6mEGHh4rymIhjlnIuB4ofT1wTxDeT4Y=
+cloud.google.com/go/domains v0.7.0/go.mod h1:PtZeqS1xjnXuRPKE/88Iru/LdfoRyEHYA9nFQf4UKpg=
+cloud.google.com/go/edgecontainer v0.1.0/go.mod h1:WgkZ9tp10bFxqO8BLPqv2LlfmQF1X8lZqwW4r1BTajk=
+cloud.google.com/go/edgecontainer v0.2.0/go.mod h1:RTmLijy+lGpQ7BXuTDa4C4ssxyXT34NIuHIgKuP4s5w=
+cloud.google.com/go/errorreporting v0.3.0/go.mod h1:xsP2yaAp+OAW4OIm60An2bbLpqIhKXdWR/tawvl7QzU=
+cloud.google.com/go/essentialcontacts v1.3.0/go.mod h1:r+OnHa5jfj90qIfZDO/VztSFqbQan7HV75p8sA+mdGI=
+cloud.google.com/go/essentialcontacts v1.4.0/go.mod h1:8tRldvHYsmnBCHdFpvU+GL75oWiBKl80BiqlFh9tp+8=
+cloud.google.com/go/eventarc v1.7.0/go.mod h1:6ctpF3zTnaQCxUjHUdcfgcA1A2T309+omHZth7gDfmc=
+cloud.google.com/go/eventarc v1.8.0/go.mod h1:imbzxkyAU4ubfsaKYdQg04WS1NvncblHEup4kvF+4gw=
+cloud.google.com/go/filestore v1.3.0/go.mod h1:+qbvHGvXU1HaKX2nD0WEPo92TP/8AQuCVEBXNY9z0+w=
+cloud.google.com/go/filestore v1.4.0/go.mod h1:PaG5oDfo9r224f8OYXURtAsY+Fbyq/bLYoINEK8XQAI=
cloud.google.com/go/firestore v1.1.0/go.mod h1:ulACoGHTpvq5r8rxGJ4ddJZBZqakUQqClKRT5SZwBmk=
+cloud.google.com/go/firestore v1.9.0/go.mod h1:HMkjKHNTtRyZNiMzu7YAsLr9K3X2udY2AMwDaMEQiiE=
+cloud.google.com/go/functions v1.6.0/go.mod h1:3H1UA3qiIPRWD7PeZKLvHZ9SaQhR26XIJcC0A5GbvAk=
+cloud.google.com/go/functions v1.7.0/go.mod h1:+d+QBcWM+RsrgZfV9xo6KfA1GlzJfxcfZcRPEhDDfzg=
+cloud.google.com/go/functions v1.8.0/go.mod h1:RTZ4/HsQjIqIYP9a9YPbU+QFoQsAlYgrwOXJWHn1POY=
+cloud.google.com/go/functions v1.9.0/go.mod h1:Y+Dz8yGguzO3PpIjhLTbnqV1CWmgQ5UwtlpzoyquQ08=
+cloud.google.com/go/gaming v1.5.0/go.mod h1:ol7rGcxP/qHTRQE/RO4bxkXq+Fix0j6D4LFPzYTIrDM=
+cloud.google.com/go/gaming v1.6.0/go.mod h1:YMU1GEvA39Qt3zWGyAVA9bpYz/yAhTvaQ1t2sK4KPUA=
+cloud.google.com/go/gaming v1.7.0/go.mod h1:LrB8U7MHdGgFG851iHAfqUdLcKBdQ55hzXy9xBJz0+w=
+cloud.google.com/go/gaming v1.8.0/go.mod h1:xAqjS8b7jAVW0KFYeRUxngo9My3f33kFmua++Pi+ggM=
+cloud.google.com/go/gkebackup v0.2.0/go.mod h1:XKvv/4LfG829/B8B7xRkk8zRrOEbKtEam6yNfuQNH60=
+cloud.google.com/go/gkebackup v0.3.0/go.mod h1:n/E671i1aOQvUxT541aTkCwExO/bTer2HDlj4TsBRAo=
+cloud.google.com/go/gkeconnect v0.5.0/go.mod h1:c5lsNAg5EwAy7fkqX/+goqFsU1Da/jQFqArp+wGNr/o=
+cloud.google.com/go/gkeconnect v0.6.0/go.mod h1:Mln67KyU/sHJEBY8kFZ0xTeyPtzbq9StAVvEULYK16A=
+cloud.google.com/go/gkehub v0.9.0/go.mod h1:WYHN6WG8w9bXU0hqNxt8rm5uxnk8IH+lPY9J2TV7BK0=
+cloud.google.com/go/gkehub v0.10.0/go.mod h1:UIPwxI0DsrpsVoWpLB0stwKCP+WFVG9+y977wO+hBH0=
+cloud.google.com/go/gkemulticloud v0.3.0/go.mod h1:7orzy7O0S+5kq95e4Hpn7RysVA7dPs8W/GgfUtsPbrA=
+cloud.google.com/go/gkemulticloud v0.4.0/go.mod h1:E9gxVBnseLWCk24ch+P9+B2CoDFJZTyIgLKSalC7tuI=
+cloud.google.com/go/grafeas v0.2.0/go.mod h1:KhxgtF2hb0P191HlY5besjYm6MqTSTj3LSI+M+ByZHc=
+cloud.google.com/go/gsuiteaddons v1.3.0/go.mod h1:EUNK/J1lZEZO8yPtykKxLXI6JSVN2rg9bN8SXOa0bgM=
+cloud.google.com/go/gsuiteaddons v1.4.0/go.mod h1:rZK5I8hht7u7HxFQcFei0+AtfS9uSushomRlg+3ua1o=
+cloud.google.com/go/iam v0.1.0/go.mod h1:vcUNEa0pEm0qRVpmWepWaFMIAI8/hjB9mO8rNCJtF6c=
cloud.google.com/go/iam v0.3.0/go.mod h1:XzJPvDayI+9zsASAFO68Hk07u3z+f+JrT2xXNdp4bnY=
+cloud.google.com/go/iam v0.5.0/go.mod h1:wPU9Vt0P4UmCux7mqtRu6jcpPAb74cP1fh50J3QpkUc=
+cloud.google.com/go/iam v0.6.0/go.mod h1:+1AH33ueBne5MzYccyMHtEKqLE4/kJOibtffMHDMFMc=
+cloud.google.com/go/iam v0.7.0/go.mod h1:H5Br8wRaDGNc8XP3keLc4unfUUZeyH3Sfl9XpQEYOeg=
+cloud.google.com/go/iam v0.8.0/go.mod h1:lga0/y3iH6CX7sYqypWJ33hf7kkfXJag67naqGESjkE=
+cloud.google.com/go/iap v1.4.0/go.mod h1:RGFwRJdihTINIe4wZ2iCP0zF/qu18ZwyKxrhMhygBEc=
+cloud.google.com/go/iap v1.5.0/go.mod h1:UH/CGgKd4KyohZL5Pt0jSKE4m3FR51qg6FKQ/z/Ix9A=
+cloud.google.com/go/ids v1.1.0/go.mod h1:WIuwCaYVOzHIj2OhN9HAwvW+DBdmUAdcWlFxRl+KubM=
+cloud.google.com/go/ids v1.2.0/go.mod h1:5WXvp4n25S0rA/mQWAg1YEEBBq6/s+7ml1RDCW1IrcY=
+cloud.google.com/go/iot v1.3.0/go.mod h1:r7RGh2B61+B8oz0AGE+J72AhA0G7tdXItODWsaA2oLs=
+cloud.google.com/go/iot v1.4.0/go.mod h1:dIDxPOn0UvNDUMD8Ger7FIaTuvMkj+aGk94RPP0iV+g=
+cloud.google.com/go/kms v1.4.0/go.mod h1:fajBHndQ+6ubNw6Ss2sSd+SWvjL26RNo/dr7uxsnnOA=
+cloud.google.com/go/kms v1.5.0/go.mod h1:QJS2YY0eJGBg3mnDfuaCyLauWwBJiHRboYxJ++1xJNg=
+cloud.google.com/go/kms v1.6.0/go.mod h1:Jjy850yySiasBUDi6KFUwUv2n1+o7QZFyuUJg6OgjA0=
+cloud.google.com/go/language v1.4.0/go.mod h1:F9dRpNFQmJbkaop6g0JhSBXCNlO90e1KWx5iDdxbWic=
+cloud.google.com/go/language v1.6.0/go.mod h1:6dJ8t3B+lUYfStgls25GusK04NLh3eDLQnWM3mdEbhI=
+cloud.google.com/go/language v1.7.0/go.mod h1:DJ6dYN/W+SQOjF8e1hLQXMF21AkH2w9wiPzPCJa2MIE=
+cloud.google.com/go/language v1.8.0/go.mod h1:qYPVHf7SPoNNiCL2Dr0FfEFNil1qi3pQEyygwpgVKB8=
+cloud.google.com/go/lifesciences v0.5.0/go.mod h1:3oIKy8ycWGPUyZDR/8RNnTOYevhaMLqh5vLUXs9zvT8=
+cloud.google.com/go/lifesciences v0.6.0/go.mod h1:ddj6tSX/7BOnhxCSd3ZcETvtNr8NZ6t/iPhY2Tyfu08=
+cloud.google.com/go/logging v1.6.1/go.mod h1:5ZO0mHHbvm8gEmeEUHrmDlTDSu5imF6MUP9OfilNXBw=
+cloud.google.com/go/longrunning v0.1.1/go.mod h1:UUFxuDWkv22EuY93jjmDMFT5GPQKeFVJBIF6QlTqdsE=
+cloud.google.com/go/longrunning v0.3.0/go.mod h1:qth9Y41RRSUE69rDcOn6DdK3HfQfsUI0YSmW3iIlLJc=
+cloud.google.com/go/managedidentities v1.3.0/go.mod h1:UzlW3cBOiPrzucO5qWkNkh0w33KFtBJU281hacNvsdE=
+cloud.google.com/go/managedidentities v1.4.0/go.mod h1:NWSBYbEMgqmbZsLIyKvxrYbtqOsxY1ZrGM+9RgDqInM=
+cloud.google.com/go/maps v0.1.0/go.mod h1:BQM97WGyfw9FWEmQMpZ5T6cpovXXSd1cGmFma94eubI=
+cloud.google.com/go/mediatranslation v0.5.0/go.mod h1:jGPUhGTybqsPQn91pNXw0xVHfuJ3leR1wj37oU3y1f4=
+cloud.google.com/go/mediatranslation v0.6.0/go.mod h1:hHdBCTYNigsBxshbznuIMFNe5QXEowAuNmmC7h8pu5w=
+cloud.google.com/go/memcache v1.4.0/go.mod h1:rTOfiGZtJX1AaFUrOgsMHX5kAzaTQ8azHiuDoTPzNsE=
+cloud.google.com/go/memcache v1.5.0/go.mod h1:dk3fCK7dVo0cUU2c36jKb4VqKPS22BTkf81Xq617aWM=
+cloud.google.com/go/memcache v1.6.0/go.mod h1:XS5xB0eQZdHtTuTF9Hf8eJkKtR3pVRCcvJwtm68T3rA=
+cloud.google.com/go/memcache v1.7.0/go.mod h1:ywMKfjWhNtkQTxrWxCkCFkoPjLHPW6A7WOTVI8xy3LY=
+cloud.google.com/go/metastore v1.5.0/go.mod h1:2ZNrDcQwghfdtCwJ33nM0+GrBGlVuh8rakL3vdPY3XY=
+cloud.google.com/go/metastore v1.6.0/go.mod h1:6cyQTls8CWXzk45G55x57DVQ9gWg7RiH65+YgPsNh9s=
+cloud.google.com/go/metastore v1.7.0/go.mod h1:s45D0B4IlsINu87/AsWiEVYbLaIMeUSoxlKKDqBGFS8=
+cloud.google.com/go/metastore v1.8.0/go.mod h1:zHiMc4ZUpBiM7twCIFQmJ9JMEkDSyZS9U12uf7wHqSI=
+cloud.google.com/go/monitoring v1.7.0/go.mod h1:HpYse6kkGo//7p6sT0wsIC6IBDET0RhIsnmlA53dvEk=
+cloud.google.com/go/monitoring v1.8.0/go.mod h1:E7PtoMJ1kQXWxPjB6mv2fhC5/15jInuulFdYYtlcvT4=
+cloud.google.com/go/networkconnectivity v1.4.0/go.mod h1:nOl7YL8odKyAOtzNX73/M5/mGZgqqMeryi6UPZTk/rA=
+cloud.google.com/go/networkconnectivity v1.5.0/go.mod h1:3GzqJx7uhtlM3kln0+x5wyFvuVH1pIBJjhCpjzSt75o=
+cloud.google.com/go/networkconnectivity v1.6.0/go.mod h1:OJOoEXW+0LAxHh89nXd64uGG+FbQoeH8DtxCHVOMlaM=
+cloud.google.com/go/networkconnectivity v1.7.0/go.mod h1:RMuSbkdbPwNMQjB5HBWD5MpTBnNm39iAVpC3TmsExt8=
+cloud.google.com/go/networkmanagement v1.4.0/go.mod h1:Q9mdLLRn60AsOrPc8rs8iNV6OHXaGcDdsIQe1ohekq8=
+cloud.google.com/go/networkmanagement v1.5.0/go.mod h1:ZnOeZ/evzUdUsnvRt792H0uYEnHQEMaz+REhhzJRcf4=
+cloud.google.com/go/networksecurity v0.5.0/go.mod h1:xS6fOCoqpVC5zx15Z/MqkfDwH4+m/61A3ODiDV1xmiQ=
+cloud.google.com/go/networksecurity v0.6.0/go.mod h1:Q5fjhTr9WMI5mbpRYEbiexTzROf7ZbDzvzCrNl14nyU=
+cloud.google.com/go/notebooks v1.2.0/go.mod h1:9+wtppMfVPUeJ8fIWPOq1UnATHISkGXGqTkxeieQ6UY=
+cloud.google.com/go/notebooks v1.3.0/go.mod h1:bFR5lj07DtCPC7YAAJ//vHskFBxA5JzYlH68kXVdk34=
+cloud.google.com/go/notebooks v1.4.0/go.mod h1:4QPMngcwmgb6uw7Po99B2xv5ufVoIQ7nOGDyL4P8AgA=
+cloud.google.com/go/notebooks v1.5.0/go.mod h1:q8mwhnP9aR8Hpfnrc5iN5IBhrXUy8S2vuYs+kBJ/gu0=
+cloud.google.com/go/optimization v1.1.0/go.mod h1:5po+wfvX5AQlPznyVEZjGJTMr4+CAkJf2XSTQOOl9l4=
+cloud.google.com/go/optimization v1.2.0/go.mod h1:Lr7SOHdRDENsh+WXVmQhQTrzdu9ybg0NecjHidBq6xs=
+cloud.google.com/go/orchestration v1.3.0/go.mod h1:Sj5tq/JpWiB//X/q3Ngwdl5K7B7Y0KZ7bfv0wL6fqVA=
+cloud.google.com/go/orchestration v1.4.0/go.mod h1:6W5NLFWs2TlniBphAViZEVhrXRSMgUGDfW7vrWKvsBk=
+cloud.google.com/go/orgpolicy v1.4.0/go.mod h1:xrSLIV4RePWmP9P3tBl8S93lTmlAxjm06NSm2UTmKvE=
+cloud.google.com/go/orgpolicy v1.5.0/go.mod h1:hZEc5q3wzwXJaKrsx5+Ewg0u1LxJ51nNFlext7Tanwc=
+cloud.google.com/go/osconfig v1.7.0/go.mod h1:oVHeCeZELfJP7XLxcBGTMBvRO+1nQ5tFG9VQTmYS2Fs=
+cloud.google.com/go/osconfig v1.8.0/go.mod h1:EQqZLu5w5XA7eKizepumcvWx+m8mJUhEwiPqWiZeEdg=
+cloud.google.com/go/osconfig v1.9.0/go.mod h1:Yx+IeIZJ3bdWmzbQU4fxNl8xsZ4amB+dygAwFPlvnNo=
+cloud.google.com/go/osconfig v1.10.0/go.mod h1:uMhCzqC5I8zfD9zDEAfvgVhDS8oIjySWh+l4WK6GnWw=
+cloud.google.com/go/oslogin v1.4.0/go.mod h1:YdgMXWRaElXz/lDk1Na6Fh5orF7gvmJ0FGLIs9LId4E=
+cloud.google.com/go/oslogin v1.5.0/go.mod h1:D260Qj11W2qx/HVF29zBg+0fd6YCSjSqLUkY/qEenQU=
+cloud.google.com/go/oslogin v1.6.0/go.mod h1:zOJ1O3+dTU8WPlGEkFSh7qeHPPSoxrcMbbK1Nm2iX70=
+cloud.google.com/go/oslogin v1.7.0/go.mod h1:e04SN0xO1UNJ1M5GP0vzVBFicIe4O53FOfcixIqTyXo=
+cloud.google.com/go/phishingprotection v0.5.0/go.mod h1:Y3HZknsK9bc9dMi+oE8Bim0lczMU6hrX0UpADuMefr0=
+cloud.google.com/go/phishingprotection v0.6.0/go.mod h1:9Y3LBLgy0kDTcYET8ZH3bq/7qni15yVUoAxiFxnlSUA=
+cloud.google.com/go/policytroubleshooter v1.3.0/go.mod h1:qy0+VwANja+kKrjlQuOzmlvscn4RNsAc0e15GGqfMxg=
+cloud.google.com/go/policytroubleshooter v1.4.0/go.mod h1:DZT4BcRw3QoO8ota9xw/LKtPa8lKeCByYeKTIf/vxdE=
+cloud.google.com/go/privatecatalog v0.5.0/go.mod h1:XgosMUvvPyxDjAVNDYxJ7wBW8//hLDDYmnsNcMGq1K0=
+cloud.google.com/go/privatecatalog v0.6.0/go.mod h1:i/fbkZR0hLN29eEWiiwue8Pb+GforiEIBnV9yrRUOKI=
cloud.google.com/go/pubsub v1.0.1/go.mod h1:R0Gpsv3s54REJCy4fxDixWD93lHJMoZTyQ2kNxGRt3I=
cloud.google.com/go/pubsub v1.1.0/go.mod h1:EwwdRX2sKPjnvnqCa270oGRyludottCI76h+R3AArQw=
cloud.google.com/go/pubsub v1.2.0/go.mod h1:jhfEVHT8odbXTkndysNHCcx0awwzvfOlguIAii9o8iA=
cloud.google.com/go/pubsub v1.3.1/go.mod h1:i+ucay31+CNRpDW4Lu78I4xXG+O1r/MAHgjpRVR+TSU=
+cloud.google.com/go/pubsub v1.26.0/go.mod h1:QgBH3U/jdJy/ftjPhTkyXNj543Tin1pRYcdcPRnFIRI=
+cloud.google.com/go/pubsub v1.27.1/go.mod h1:hQN39ymbV9geqBnfQq6Xf63yNhUAhv9CZhzp5O6qsW0=
+cloud.google.com/go/pubsublite v1.5.0/go.mod h1:xapqNQ1CuLfGi23Yda/9l4bBCKz/wC3KIJ5gKcxveZg=
+cloud.google.com/go/recaptchaenterprise v1.3.1/go.mod h1:OdD+q+y4XGeAlxRaMn1Y7/GveP6zmq76byL6tjPE7d4=
+cloud.google.com/go/recaptchaenterprise/v2 v2.1.0/go.mod h1:w9yVqajwroDNTfGuhmOjPDN//rZGySaf6PtFVcSCa7o=
+cloud.google.com/go/recaptchaenterprise/v2 v2.2.0/go.mod h1:/Zu5jisWGeERrd5HnlS3EUGb/D335f9k51B/FVil0jk=
+cloud.google.com/go/recaptchaenterprise/v2 v2.3.0/go.mod h1:O9LwGCjrhGHBQET5CA7dd5NwwNQUErSgEDit1DLNTdo=
+cloud.google.com/go/recaptchaenterprise/v2 v2.4.0/go.mod h1:Am3LHfOuBstrLrNCBrlI5sbwx9LBg3te2N6hGvHn2mE=
+cloud.google.com/go/recaptchaenterprise/v2 v2.5.0/go.mod h1:O8LzcHXN3rz0j+LBC91jrwI3R+1ZSZEWrfL7XHgNo9U=
+cloud.google.com/go/recommendationengine v0.5.0/go.mod h1:E5756pJcVFeVgaQv3WNpImkFP8a+RptV6dDLGPILjvg=
+cloud.google.com/go/recommendationengine v0.6.0/go.mod h1:08mq2umu9oIqc7tDy8sx+MNJdLG0fUi3vaSVbztHgJ4=
+cloud.google.com/go/recommender v1.5.0/go.mod h1:jdoeiBIVrJe9gQjwd759ecLJbxCDED4A6p+mqoqDvTg=
+cloud.google.com/go/recommender v1.6.0/go.mod h1:+yETpm25mcoiECKh9DEScGzIRyDKpZ0cEhWGo+8bo+c=
+cloud.google.com/go/recommender v1.7.0/go.mod h1:XLHs/W+T8olwlGOgfQenXBTbIseGclClff6lhFVe9Bs=
+cloud.google.com/go/recommender v1.8.0/go.mod h1:PkjXrTT05BFKwxaUxQmtIlrtj0kph108r02ZZQ5FE70=
+cloud.google.com/go/redis v1.7.0/go.mod h1:V3x5Jq1jzUcg+UNsRvdmsfuFnit1cfe3Z/PGyq/lm4Y=
+cloud.google.com/go/redis v1.8.0/go.mod h1:Fm2szCDavWzBk2cDKxrkmWBqoCiL1+Ctwq7EyqBCA/A=
+cloud.google.com/go/redis v1.9.0/go.mod h1:HMYQuajvb2D0LvMgZmLDZW8V5aOC/WxstZHiy4g8OiA=
+cloud.google.com/go/redis v1.10.0/go.mod h1:ThJf3mMBQtW18JzGgh41/Wld6vnDDc/F/F35UolRZPM=
+cloud.google.com/go/resourcemanager v1.3.0/go.mod h1:bAtrTjZQFJkiWTPDb1WBjzvc6/kifjj4QBYuKCCoqKA=
+cloud.google.com/go/resourcemanager v1.4.0/go.mod h1:MwxuzkumyTX7/a3n37gmsT3py7LIXwrShilPh3P1tR0=
+cloud.google.com/go/resourcesettings v1.3.0/go.mod h1:lzew8VfESA5DQ8gdlHwMrqZs1S9V87v3oCnKCWoOuQU=
+cloud.google.com/go/resourcesettings v1.4.0/go.mod h1:ldiH9IJpcrlC3VSuCGvjR5of/ezRrOxFtpJoJo5SmXg=
+cloud.google.com/go/retail v1.8.0/go.mod h1:QblKS8waDmNUhghY2TI9O3JLlFk8jybHeV4BF19FrE4=
+cloud.google.com/go/retail v1.9.0/go.mod h1:g6jb6mKuCS1QKnH/dpu7isX253absFl6iE92nHwlBUY=
+cloud.google.com/go/retail v1.10.0/go.mod h1:2gDk9HsL4HMS4oZwz6daui2/jmKvqShXKQuB2RZ+cCc=
+cloud.google.com/go/retail v1.11.0/go.mod h1:MBLk1NaWPmh6iVFSz9MeKG/Psyd7TAgm6y/9L2B4x9Y=
+cloud.google.com/go/run v0.2.0/go.mod h1:CNtKsTA1sDcnqqIFR3Pb5Tq0usWxJJvsWOCPldRU3Do=
+cloud.google.com/go/run v0.3.0/go.mod h1:TuyY1+taHxTjrD0ZFk2iAR+xyOXEA0ztb7U3UNA0zBo=
+cloud.google.com/go/scheduler v1.4.0/go.mod h1:drcJBmxF3aqZJRhmkHQ9b3uSSpQoltBPGPxGAWROx6s=
+cloud.google.com/go/scheduler v1.5.0/go.mod h1:ri073ym49NW3AfT6DZi21vLZrG07GXr5p3H1KxN5QlI=
+cloud.google.com/go/scheduler v1.6.0/go.mod h1:SgeKVM7MIwPn3BqtcBntpLyrIJftQISRrYB5ZtT+KOk=
+cloud.google.com/go/scheduler v1.7.0/go.mod h1:jyCiBqWW956uBjjPMMuX09n3x37mtyPJegEWKxRsn44=
+cloud.google.com/go/secretmanager v1.6.0/go.mod h1:awVa/OXF6IiyaU1wQ34inzQNc4ISIDIrId8qE5QGgKA=
+cloud.google.com/go/secretmanager v1.8.0/go.mod h1:hnVgi/bN5MYHd3Gt0SPuTPPp5ENina1/LxM+2W9U9J4=
+cloud.google.com/go/secretmanager v1.9.0/go.mod h1:b71qH2l1yHmWQHt9LC80akm86mX8AL6X1MA01dW8ht4=
+cloud.google.com/go/security v1.5.0/go.mod h1:lgxGdyOKKjHL4YG3/YwIL2zLqMFCKs0UbQwgyZmfJl4=
+cloud.google.com/go/security v1.7.0/go.mod h1:mZklORHl6Bg7CNnnjLH//0UlAlaXqiG7Lb9PsPXLfD0=
+cloud.google.com/go/security v1.8.0/go.mod h1:hAQOwgmaHhztFhiQ41CjDODdWP0+AE1B3sX4OFlq+GU=
+cloud.google.com/go/security v1.9.0/go.mod h1:6Ta1bO8LXI89nZnmnsZGp9lVoVWXqsVbIq/t9dzI+2Q=
+cloud.google.com/go/security v1.10.0/go.mod h1:QtOMZByJVlibUT2h9afNDWRZ1G96gVywH8T5GUSb9IA=
+cloud.google.com/go/securitycenter v1.13.0/go.mod h1:cv5qNAqjY84FCN6Y9z28WlkKXyWsgLO832YiWwkCWcU=
+cloud.google.com/go/securitycenter v1.14.0/go.mod h1:gZLAhtyKv85n52XYWt6RmeBdydyxfPeTrpToDPw4Auc=
+cloud.google.com/go/securitycenter v1.15.0/go.mod h1:PeKJ0t8MoFmmXLXWm41JidyzI3PJjd8sXWaVqg43WWk=
+cloud.google.com/go/securitycenter v1.16.0/go.mod h1:Q9GMaLQFUD+5ZTabrbujNWLtSLZIZF7SAR0wWECrjdk=
+cloud.google.com/go/servicecontrol v1.4.0/go.mod h1:o0hUSJ1TXJAmi/7fLJAedOovnujSEvjKCAFNXPQ1RaU=
+cloud.google.com/go/servicecontrol v1.5.0/go.mod h1:qM0CnXHhyqKVuiZnGKrIurvVImCs8gmqWsDoqe9sU1s=
+cloud.google.com/go/servicedirectory v1.4.0/go.mod h1:gH1MUaZCgtP7qQiI+F+A+OpeKF/HQWgtAddhTbhL2bs=
+cloud.google.com/go/servicedirectory v1.5.0/go.mod h1:QMKFL0NUySbpZJ1UZs3oFAmdvVxhhxB6eJ/Vlp73dfg=
+cloud.google.com/go/servicedirectory v1.6.0/go.mod h1:pUlbnWsLH9c13yGkxCmfumWEPjsRs1RlmJ4pqiNjVL4=
+cloud.google.com/go/servicedirectory v1.7.0/go.mod h1:5p/U5oyvgYGYejufvxhgwjL8UVXjkuw7q5XcG10wx1U=
+cloud.google.com/go/servicemanagement v1.4.0/go.mod h1:d8t8MDbezI7Z2R1O/wu8oTggo3BI2GKYbdG4y/SJTco=
+cloud.google.com/go/servicemanagement v1.5.0/go.mod h1:XGaCRe57kfqu4+lRxaFEAuqmjzF0r+gWHjWqKqBvKFo=
+cloud.google.com/go/serviceusage v1.3.0/go.mod h1:Hya1cozXM4SeSKTAgGXgj97GlqUvF5JaoXacR1JTP/E=
+cloud.google.com/go/serviceusage v1.4.0/go.mod h1:SB4yxXSaYVuUBYUml6qklyONXNLt83U0Rb+CXyhjEeU=
+cloud.google.com/go/shell v1.3.0/go.mod h1:VZ9HmRjZBsjLGXusm7K5Q5lzzByZmJHf1d0IWHEN5X4=
+cloud.google.com/go/shell v1.4.0/go.mod h1:HDxPzZf3GkDdhExzD/gs8Grqk+dmYcEjGShZgYa9URw=
+cloud.google.com/go/spanner v1.41.0/go.mod h1:MLYDBJR/dY4Wt7ZaMIQ7rXOTLjYrmxLE/5ve9vFfWos=
+cloud.google.com/go/speech v1.6.0/go.mod h1:79tcr4FHCimOp56lwC01xnt/WPJZc4v3gzyT7FoBkCM=
+cloud.google.com/go/speech v1.7.0/go.mod h1:KptqL+BAQIhMsj1kOP2la5DSEEerPDuOP/2mmkhHhZQ=
+cloud.google.com/go/speech v1.8.0/go.mod h1:9bYIl1/tjsAnMgKGHKmBZzXKEkGgtU+MpdDPTE9f7y0=
+cloud.google.com/go/speech v1.9.0/go.mod h1:xQ0jTcmnRFFM2RfX/U+rk6FQNUF6DQlydUSyoooSpco=
cloud.google.com/go/storage v1.0.0/go.mod h1:IhtSnM/ZTZV8YYJWCY8RULGVqBDmpoyjwiyrjsg+URw=
cloud.google.com/go/storage v1.5.0/go.mod h1:tpKbwo567HUNpVclU5sGELwQWBDZ8gh0ZeosJ0Rtdos=
cloud.google.com/go/storage v1.6.0/go.mod h1:N7U0C8pVQ/+NIKOBQyamJIeKQKkZ+mxpohlUTyfDhBk=
cloud.google.com/go/storage v1.8.0/go.mod h1:Wv1Oy7z6Yz3DshWRJFhqM/UCfaWIRTdp0RXyy7KQOVs=
cloud.google.com/go/storage v1.10.0/go.mod h1:FLPqc6j+Ki4BU591ie1oL6qBQGu2Bl/tZ9ullr3+Kg0=
+cloud.google.com/go/storage v1.14.0/go.mod h1:GrKmX003DSIwi9o29oFT7YDnHYwZoctc3fOKtUw0Xmo=
cloud.google.com/go/storage v1.22.1/go.mod h1:S8N1cAStu7BOeFfE8KAQzmyyLkK8p/vmRq6kuBTW58Y=
+cloud.google.com/go/storage v1.23.0/go.mod h1:vOEEDNFnciUMhBeT6hsJIn3ieU5cFRmzeLgDvXzfIXc=
+cloud.google.com/go/storage v1.27.0/go.mod h1:x9DOL8TK/ygDUMieqwfhdpQryTeEkhGKMi80i/iqR2s=
+cloud.google.com/go/storagetransfer v1.5.0/go.mod h1:dxNzUopWy7RQevYFHewchb29POFv3/AaBgnhqzqiK0w=
+cloud.google.com/go/storagetransfer v1.6.0/go.mod h1:y77xm4CQV/ZhFZH75PLEXY0ROiS7Gh6pSKrM8dJyg6I=
+cloud.google.com/go/talent v1.1.0/go.mod h1:Vl4pt9jiHKvOgF9KoZo6Kob9oV4lwd/ZD5Cto54zDRw=
+cloud.google.com/go/talent v1.2.0/go.mod h1:MoNF9bhFQbiJ6eFD3uSsg0uBALw4n4gaCaEjBw9zo8g=
+cloud.google.com/go/talent v1.3.0/go.mod h1:CmcxwJ/PKfRgd1pBjQgU6W3YBwiewmUzQYH5HHmSCmM=
+cloud.google.com/go/talent v1.4.0/go.mod h1:ezFtAgVuRf8jRsvyE6EwmbTK5LKciD4KVnHuDEFmOOA=
+cloud.google.com/go/texttospeech v1.4.0/go.mod h1:FX8HQHA6sEpJ7rCMSfXuzBcysDAuWusNNNvN9FELDd8=
+cloud.google.com/go/texttospeech v1.5.0/go.mod h1:oKPLhR4n4ZdQqWKURdwxMy0uiTS1xU161C8W57Wkea4=
+cloud.google.com/go/tpu v1.3.0/go.mod h1:aJIManG0o20tfDQlRIej44FcwGGl/cD0oiRyMKG19IQ=
+cloud.google.com/go/tpu v1.4.0/go.mod h1:mjZaX8p0VBgllCzF6wcU2ovUXN9TONFLd7iz227X2Xg=
+cloud.google.com/go/trace v1.3.0/go.mod h1:FFUE83d9Ca57C+K8rDl/Ih8LwOzWIV1krKgxg6N0G28=
+cloud.google.com/go/trace v1.4.0/go.mod h1:UG0v8UBqzusp+z63o7FK74SdFE+AXpCLdFb1rshXG+Y=
+cloud.google.com/go/translate v1.3.0/go.mod h1:gzMUwRjvOqj5i69y/LYLd8RrNQk+hOmIXTi9+nb3Djs=
+cloud.google.com/go/translate v1.4.0/go.mod h1:06Dn/ppvLD6WvA5Rhdp029IX2Mi3Mn7fpMRLPvXT5Wg=
+cloud.google.com/go/video v1.8.0/go.mod h1:sTzKFc0bUSByE8Yoh8X0mn8bMymItVGPfTuUBUyRgxk=
+cloud.google.com/go/video v1.9.0/go.mod h1:0RhNKFRF5v92f8dQt0yhaHrEuH95m068JYOvLZYnJSw=
+cloud.google.com/go/videointelligence v1.6.0/go.mod h1:w0DIDlVRKtwPCn/C4iwZIJdvC69yInhW0cfi+p546uU=
+cloud.google.com/go/videointelligence v1.7.0/go.mod h1:k8pI/1wAhjznARtVT9U1llUaFNPh7muw8QyOUpavru4=
+cloud.google.com/go/videointelligence v1.8.0/go.mod h1:dIcCn4gVDdS7yte/w+koiXn5dWVplOZkE+xwG9FgK+M=
+cloud.google.com/go/videointelligence v1.9.0/go.mod h1:29lVRMPDYHikk3v8EdPSaL8Ku+eMzDljjuvRs105XoU=
+cloud.google.com/go/vision v1.2.0/go.mod h1:SmNwgObm5DpFBme2xpyOyasvBc1aPdjvMk2bBk0tKD0=
+cloud.google.com/go/vision/v2 v2.2.0/go.mod h1:uCdV4PpN1S0jyCyq8sIM42v2Y6zOLkZs+4R9LrGYwFo=
+cloud.google.com/go/vision/v2 v2.3.0/go.mod h1:UO61abBx9QRMFkNBbf1D8B1LXdS2cGiiCRx0vSpZoUo=
+cloud.google.com/go/vision/v2 v2.4.0/go.mod h1:VtI579ll9RpVTrdKdkMzckdnwMyX2JILb+MhPqRbPsY=
+cloud.google.com/go/vision/v2 v2.5.0/go.mod h1:MmaezXOOE+IWa+cS7OhRRLK2cNv1ZL98zhqFFZaaH2E=
+cloud.google.com/go/vmmigration v1.2.0/go.mod h1:IRf0o7myyWFSmVR1ItrBSFLFD/rJkfDCUTO4vLlJvsE=
+cloud.google.com/go/vmmigration v1.3.0/go.mod h1:oGJ6ZgGPQOFdjHuocGcLqX4lc98YQ7Ygq8YQwHh9A7g=
+cloud.google.com/go/vmwareengine v0.1.0/go.mod h1:RsdNEf/8UDvKllXhMz5J40XxDrNJNN4sagiox+OI208=
+cloud.google.com/go/vpcaccess v1.4.0/go.mod h1:aQHVbTWDYUR1EbTApSVvMq1EnT57ppDmQzZ3imqIk4w=
+cloud.google.com/go/vpcaccess v1.5.0/go.mod h1:drmg4HLk9NkZpGfCmZ3Tz0Bwnm2+DKqViEpeEpOq0m8=
+cloud.google.com/go/webrisk v1.4.0/go.mod h1:Hn8X6Zr+ziE2aNd8SliSDWpEnSS1u4R9+xXZmFiHmGE=
+cloud.google.com/go/webrisk v1.5.0/go.mod h1:iPG6fr52Tv7sGk0H6qUFzmL3HHZev1htXuWDEEsqMTg=
+cloud.google.com/go/webrisk v1.6.0/go.mod h1:65sW9V9rOosnc9ZY7A7jsy1zoHS5W9IAXv6dGqhMQMc=
+cloud.google.com/go/webrisk v1.7.0/go.mod h1:mVMHgEYH0r337nmt1JyLthzMr6YxwN1aAIEc2fTcq7A=
+cloud.google.com/go/websecurityscanner v1.3.0/go.mod h1:uImdKm2wyeXQevQJXeh8Uun/Ym1VqworNDlBXQevGMo=
+cloud.google.com/go/websecurityscanner v1.4.0/go.mod h1:ebit/Fp0a+FWu5j4JOmJEV8S8CzdTkAS77oDsiSqYWQ=
+cloud.google.com/go/workflows v1.6.0/go.mod h1:6t9F5h/unJz41YqfBmqSASJSXccBLtD1Vwf+KmJENM0=
+cloud.google.com/go/workflows v1.7.0/go.mod h1:JhSrZuVZWuiDfKEFxU0/F1PQjmpnpcoISEXH2bcHC3M=
+cloud.google.com/go/workflows v1.8.0/go.mod h1:ysGhmEajwZxGn1OhGOGKsTXc5PyxOc0vfKf5Af+to4M=
+cloud.google.com/go/workflows v1.9.0/go.mod h1:ZGkj1aFIOd9c8Gerkjjq7OW7I5+l6cSvT3ujaO/WwSA=
dmitri.shuralyov.com/gpu/mtl v0.0.0-20190408044501-666a987793e9/go.mod h1:H6x//7gZCb22OMCxBHrMx7a5I7Hp++hsVxbQ4BYO7hU=
+github.com/Azure/azure-sdk-for-go v16.2.1+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
+github.com/Azure/azure-sdk-for-go v55.0.0+incompatible/go.mod h1:9XXNKU+eRnpl9moKnB4QOLf1HestfXbmab5FXxiDBjc=
+github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1 h1:UQHMgLO+TxOElx5B5HZ4hJQsoJ/PvUvKRhJHDQXO8P8=
github.com/Azure/go-ansiterm v0.0.0-20210617225240-d185dfc1b5a1/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
+github.com/Azure/go-autorest v10.8.1+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
+github.com/Azure/go-autorest v14.2.0+incompatible/go.mod h1:r+4oMnoxhatjLLJ6zxSWATqVooLgysK6ZNox3g/xq24=
+github.com/Azure/go-autorest/autorest v0.11.27/go.mod h1:7l8ybrIdUmGqZMTD0sRtAr8NvbHjfofbf8RSP2q7w7U=
+github.com/Azure/go-autorest/autorest/adal v0.9.18/go.mod h1:XVVeme+LZwABT8K5Lc3hA4nAe8LDBVle26gTrguhhPQ=
+github.com/Azure/go-autorest/autorest/adal v0.9.20/go.mod h1:XVVeme+LZwABT8K5Lc3hA4nAe8LDBVle26gTrguhhPQ=
+github.com/Azure/go-autorest/autorest/date v0.3.0/go.mod h1:BI0uouVdmngYNUzGWeSYnokU+TrmwEsOqdt8Y6sso74=
+github.com/Azure/go-autorest/autorest/mocks v0.4.1/go.mod h1:LTp+uSrOhSkaKrUy935gNZuuIPPVsHlr9DSOxSayd+k=
+github.com/Azure/go-autorest/autorest/mocks v0.4.2/go.mod h1:Vy7OitM9Kei0i1Oj+LvyAWMXJHeKH1MVlzFugfVrmyU=
+github.com/Azure/go-autorest/autorest/to v0.4.0/go.mod h1:fE8iZBn7LQR7zH/9XU2NcPR4o9jEImooCeWJcYV/zLE=
+github.com/Azure/go-autorest/autorest/validation v0.1.0/go.mod h1:Ha3z/SqBeaalWQvokg3NZAlQTalVMtOIAs1aGK7G6u8=
+github.com/Azure/go-autorest/logger v0.2.1/go.mod h1:T9E3cAhj2VqvPOtCYAvby9aBXkZmbF5NWuPV8+WeEW8=
+github.com/Azure/go-autorest/tracing v0.6.0/go.mod h1:+vhtPC754Xsa23ID7GlGsrdKBpUA79WCAKPPZVC2DeU=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
-github.com/BurntSushi/toml v1.0.0 h1:dtDWrepsVPfW9H/4y7dDgFc2MBUSeJhlaDtK13CxFlU=
-github.com/BurntSushi/toml v1.0.0/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
+github.com/BurntSushi/toml v1.2.1 h1:9F2/+DoOYIOksmaJFPw1tGFy1eDnIJXg+UHjuD8lTak=
+github.com/BurntSushi/toml v1.2.1/go.mod h1:CxXYINrC8qIiEnFrOxCa7Jy5BFHlXnUU2pbicEuybxQ=
github.com/BurntSushi/xgb v0.0.0-20160522181843-27f122750802/go.mod h1:IVnqGOEym/WlBOVXweHU+Q+/VP0lqqI8lqeDx9IjBqo=
+github.com/GoogleCloudPlatform/k8s-cloud-provider v1.18.1-0.20220218231025-f11817397a1b/go.mod h1:FNj4KYEAAHfYu68kRYolGoxkaJn+6mdEsaM12VTwuI0=
+github.com/JeffAshton/win_pdh v0.0.0-20161109143554-76bb4ee9f0ab/go.mod h1:3VYc5hodBMJ5+l/7J4xAyMeuM2PNuepvHlGs8yilUCA=
github.com/MakeNowJust/heredoc v1.0.0 h1:cXCdzVdstXyiTqTvfqk9SDHpKNjxuom+DOlyEeQ4pzQ=
github.com/MakeNowJust/heredoc v1.0.0/go.mod h1:mG5amYoWBHf8vpLOuehzbGGw0EHxpZZ6lCpQ4fNJ8LE=
+github.com/Microsoft/go-winio v0.4.11/go.mod h1:VhR8bwka0BXejwEJY73c50VrPtXAaKcyvVC4A4RozmA=
+github.com/Microsoft/go-winio v0.4.14/go.mod h1:qXqCSQ3Xa7+6tgxaGTIe4Kpcdsi+P8jBhyzoq1bpyYA=
+github.com/Microsoft/go-winio v0.4.15-0.20190919025122-fc70bd9a86b5/go.mod h1:tTuCMEN+UleMWgg9dVx4Hu52b1bJo+59jBh3ajtinzw=
+github.com/Microsoft/go-winio v0.4.15/go.mod h1:tTuCMEN+UleMWgg9dVx4Hu52b1bJo+59jBh3ajtinzw=
+github.com/Microsoft/go-winio v0.4.16-0.20201130162521-d1ffc52c7331/go.mod h1:XB6nPKklQyQ7GC9LdcBEcBl8PF76WugXOPRXwdLnMv0=
+github.com/Microsoft/go-winio v0.4.16/go.mod h1:XB6nPKklQyQ7GC9LdcBEcBl8PF76WugXOPRXwdLnMv0=
+github.com/Microsoft/go-winio v0.4.17-0.20210211115548-6eac466e5fa3/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
+github.com/Microsoft/go-winio v0.4.17-0.20210324224401-5516f17a5958/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
+github.com/Microsoft/go-winio v0.4.17/go.mod h1:JPGBdM1cNvN/6ISo+n8V5iA4v8pBzdOpzfwIujj1a84=
+github.com/Microsoft/go-winio v0.5.2/go.mod h1:WpS1mjBmmwHBEWmogvA2mj8546UReBk4v8QkMxJ6pZY=
+github.com/Microsoft/go-winio v0.6.0 h1:slsWYD/zyx7lCXoZVlvQrj0hPTM1HI4+v1sIda2yDvg=
+github.com/Microsoft/go-winio v0.6.0/go.mod h1:cTAf44im0RAYeL23bpB+fzCyDH2MJiz2BO69KH/soAE=
+github.com/Microsoft/hcsshim v0.8.6/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
+github.com/Microsoft/hcsshim v0.8.7-0.20190325164909-8abdbb8205e4/go.mod h1:Op3hHsoHPAvb6lceZHDtd9OkTew38wNoXnJs8iY7rUg=
+github.com/Microsoft/hcsshim v0.8.7/go.mod h1:OHd7sQqRFrYd3RmSgbgji+ctCwkbq2wbEYNSzOYtcBQ=
+github.com/Microsoft/hcsshim v0.8.9/go.mod h1:5692vkUqntj1idxauYlpoINNKeqCiG6Sg38RRsjT5y8=
+github.com/Microsoft/hcsshim v0.8.14/go.mod h1:NtVKoYxQuTLx6gEq0L96c9Ju4JbRJ4nY2ow3VK6a9Lg=
+github.com/Microsoft/hcsshim v0.8.15/go.mod h1:x38A4YbHbdxJtc0sF6oIz+RG0npwSCAvn69iY6URG00=
+github.com/Microsoft/hcsshim v0.8.16/go.mod h1:o5/SZqmR7x9JNKsW3pu+nqHm0MF8vbA+VxGOoXdC600=
+github.com/Microsoft/hcsshim v0.8.21/go.mod h1:+w2gRZ5ReXQhFOrvSQeNfhrYB/dg3oDwTOcER2fw4I4=
+github.com/Microsoft/hcsshim v0.8.22/go.mod h1:91uVCVzvX2QD16sMCenoxxXo6L1wJnLMX2PSufFMtF0=
+github.com/Microsoft/hcsshim v0.9.6 h1:VwnDOgLeoi2du6dAznfmspNqTiwczvjv4K7NxuY9jsY=
+github.com/Microsoft/hcsshim v0.9.6/go.mod h1:7pLA8lDk46WKDWlVsENo92gC0XFa8rbKfyFRBqxEbCc=
+github.com/Microsoft/hcsshim/test v0.0.0-20201218223536-d3e5debf77da/go.mod h1:5hlzMzRKMLyo42nCZ9oml8AdTlq/0cvIaBv6tK1RehU=
+github.com/Microsoft/hcsshim/test v0.0.0-20210227013316-43a75bb4edd3/go.mod h1:mw7qgWloBUl75W/gVH3cQszUg1+gUITj7D6NY7ywVnY=
github.com/NYTimes/gziphandler v0.0.0-20170623195520-56545f4a5d46/go.mod h1:3wb06e3pkSAbeQ52E9H9iFoQsEEwGN64994WTCIhntQ=
github.com/NYTimes/gziphandler v1.1.1 h1:ZUDjpQae29j0ryrS0u/B8HZfJBtBQHjqw2rQ2cqUQ3I=
github.com/NYTimes/gziphandler v1.1.1/go.mod h1:n/CVRwUEOgIxrgPvAQhUUr9oeUtvrhMomdKFjzJNB0c=
github.com/OneOfOne/xxhash v1.2.2/go.mod h1:HSdplMjZKSmBqAxg5vPj2TmRDmfkzw+cTzAElWljhcU=
github.com/PuerkitoBio/purell v1.1.1/go.mod h1:c11w/QuzBsJSee3cPx9rAFu61PvFxuPbtSwDGJws/X0=
github.com/PuerkitoBio/urlesc v0.0.0-20170810143723-de5bf2ad4578/go.mod h1:uGdkoq3SwY9Y+13GIhn11/XLaGBb4BfwItxLd5jeuXE=
+github.com/Shopify/logrus-bugsnag v0.0.0-20171204204709-577dee27f20d/go.mod h1:HI8ITrYtUY+O+ZhtlqUnD8+KwNPOyugEhfP9fdUIaEQ=
github.com/alecthomas/template v0.0.0-20160405071501-a0175ee3bccc/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/template v0.0.0-20190718012654-fb15b899a751/go.mod h1:LOuyumcjzFXgccqObfd/Ljyb9UuFJ6TxHnclSeseNhc=
github.com/alecthomas/units v0.0.0-20151022065526-2efee857e7cf/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190717042225-c3de453c63f4/go.mod h1:ybxpYRFXyAe+OPACYpWeL0wqObRcbAqCMya13uyzqw0=
github.com/alecthomas/units v0.0.0-20190924025748-f65c72e2690d/go.mod h1:rBZYJk541a8SKzHPHnH3zbiI+7dagKZ0cgpgrD7Fyho=
github.com/alessio/shellescape v1.2.2/go.mod h1:PZAiSCk0LJaZkiCSkPv8qIobYglO3FPpyFjDCtHLS30=
+github.com/alexflint/go-filemutex v0.0.0-20171022225611-72bdc8eae2ae/go.mod h1:CgnQgUtFrFz9mxFNtED3jI5tLDjKlOM+oUF/sTk6ps0=
github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
github.com/antlr/antlr4/runtime/Go/antlr v0.0.0-20220418222510-f25a4f6275ed/go.mod h1:F7bn7fEU90QkQ3tnmaTx3LTKLEDqnwWODIYppRQ5hnY=
github.com/antlr/antlr4/runtime/Go/antlr v1.4.10 h1:yL7+Jz0jTC6yykIK/Wh74gnTJnrGr5AyrNMXuA0gves=
@@ -90,8 +465,13 @@ github.com/armon/go-radix v0.0.0-20180808171621-7fddfc383310/go.mod h1:ufUuZ+zHj
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5 h1:0CwZNZbxp69SHPdPJAN/hZIm0C4OItdklCFmMRWYpio=
github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5/go.mod h1:wHh0iHkYZB8zMSxRWpUBQtwG5a7fFgvEO+odwuTv2gs=
github.com/asaskevich/govalidator v0.0.0-20190424111038-f61b66f89f4a/go.mod h1:lB+ZfQJz7igIIfQNfa7Ml4HSf2uFQQRzpGGRXenZAgY=
+github.com/aws/aws-sdk-go v1.15.11/go.mod h1:mFuSZ37Z9YOHbQEwBWztmVzqXrEkub65tZoCYDt7FT0=
+github.com/aws/aws-sdk-go v1.35.24/go.mod h1:tlPOdRjfxPBpNIwqDj61rmsnA85v9jc0Ps9+muhnW+k=
+github.com/aws/aws-sdk-go v1.44.116/go.mod h1:y4AeaBuwd2Lk+GepC1E9v0qOiTws0MIWAX4oIKwKHZo=
+github.com/bazelbuild/rules_go v0.38.1/go.mod h1:TMHmtfpvyfsxaqfL9WnahCsXMWDMICTw7XeK9yVb+YU=
github.com/benbjohnson/clock v1.1.0 h1:Q92kusRqC1XV2MjkWETPvjJVqKetz1OzxZB7mHJLju8=
github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
+github.com/beorn7/perks v0.0.0-20160804104726-4c0e84591b9a/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.0/go.mod h1:KWe93zE9D1o94FZ5RNwFwVgaQK1VOXiVxmqh+CedLV8=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
@@ -99,71 +479,240 @@ github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6r
github.com/bep/debounce v1.2.1 h1:v67fRdBA9UQu2NhLFXrSg0Brw7CexQekrBwDMM8bzeY=
github.com/bep/debounce v1.2.1/go.mod h1:H8yggRPQKLUhUoqrJC1bO2xNya7vanpDl7xR3ISbCJ0=
github.com/bgentry/speakeasy v0.1.0/go.mod h1:+zsyZBPWlz7T6j88CTgSN5bM796AkVf0kBD4zp0CCIs=
+github.com/bitly/go-simplejson v0.5.0/go.mod h1:cXHtHw4XUPsvGaxgjIAn8PhEWG9NfngEKAMDJEczWVA=
+github.com/bits-and-blooms/bitset v1.2.0/go.mod h1:gIdJ4wp64HaoK2YrL1Q5/N7Y16edYb8uY+O0FJTyyDA=
github.com/bketelsen/crypt v0.0.3-0.20200106085610-5cbc8cc4026c/go.mod h1:MKsuJmJgSg28kpZDP6UIiPt0e0Oz0kqKNGyRaWEPv84=
+github.com/blang/semver v3.1.0+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
+github.com/blang/semver v3.5.1+incompatible/go.mod h1:kRBLl5iJ+tD4TcOOxsy/0fnwebNt5EWlYSAyrTnjyyk=
github.com/blang/semver/v4 v4.0.0 h1:1PFHFE6yCCTv8C1TeyNNarDzntLi7wMI5i/pzqYIsAM=
github.com/blang/semver/v4 v4.0.0/go.mod h1:IbckMUScFkM3pff0VJDNKRiT6TG/YpiHIM2yvyW5YoQ=
+github.com/bmizerany/assert v0.0.0-20160611221934-b7ed37b82869/go.mod h1:Ekp36dRnpXw/yCqJaO+ZrUyxD+3VXMFFr56k5XYrpB4=
+github.com/bshuster-repo/logrus-logstash-hook v0.4.1/go.mod h1:zsTqEiSzDgAa/8GZR7E1qaXrhYNDKBYy5/dWPTIflbk=
+github.com/buger/jsonparser v0.0.0-20180808090653-f4dd9f5a6b44/go.mod h1:bbYlZJ7hK1yFx9hf58LP0zeX7UjIGs20ufpu3evjr+s=
+github.com/bugsnag/bugsnag-go v0.0.0-20141110184014-b1d153021fcd/go.mod h1:2oa8nejYd4cQ/b0hMIopN0lCRxU0bueqREvZLWFrtK8=
+github.com/bugsnag/osext v0.0.0-20130617224835-0dd3f918b21b/go.mod h1:obH5gd0BsqsP2LwDJ9aOkm/6J86V6lyAXCoQWGw3K50=
+github.com/bugsnag/panicwrap v0.0.0-20151223152923-e2c28503fcd0/go.mod h1:D/8v3kj0zr8ZAKg1AQ6crr+5VwKN5eIywRkfhyM/+dE=
+github.com/cenkalti/backoff v2.2.1+incompatible/go.mod h1:90ReRw6GdpyfrHakVjL/QHaoyV4aDUVVkXQJJJ3NXXM=
github.com/cenkalti/backoff/v4 v4.1.1/go.mod h1:scbssz8iZGpm3xbr14ovlUdkxfGXNInqkPWOWmG2CLw=
github.com/cenkalti/backoff/v4 v4.1.3 h1:cFAlzYUlVYDysBEH2T5hyJZMh3+5+WCBvSnK6Q8UtC4=
github.com/cenkalti/backoff/v4 v4.1.3/go.mod h1:scbssz8iZGpm3xbr14ovlUdkxfGXNInqkPWOWmG2CLw=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
+github.com/census-instrumentation/opencensus-proto v0.3.0/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
+github.com/census-instrumentation/opencensus-proto v0.4.1/go.mod h1:4T9NM4+4Vw91VeyqjLS6ao50K5bOcLKN6Q42XnYaRYw=
github.com/certifi/gocertifi v0.0.0-20191021191039-0944d244cd40/go.mod h1:sGbDF6GwGcLpkNXPUTkMRoywsNa/ol15pxFe6ERfguA=
github.com/certifi/gocertifi v0.0.0-20200922220541-2c3bb06c6054/go.mod h1:sGbDF6GwGcLpkNXPUTkMRoywsNa/ol15pxFe6ERfguA=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/cespare/xxhash/v2 v2.1.1/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
-github.com/cespare/xxhash/v2 v2.1.2 h1:YRXhKfTDauu4ajMg1TPgFO5jnlC2HCbmLXMcTG5cbYE=
github.com/cespare/xxhash/v2 v2.1.2/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
+github.com/cespare/xxhash/v2 v2.2.0 h1:DC2CZ1Ep5Y4k3ZQ899DldepgrayRUGE6BBZ/cd9Cj44=
+github.com/cespare/xxhash/v2 v2.2.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/chai2010/gettext-go v1.0.2 h1:1Lwwip6Q2QGsAdl/ZKPCwTe9fe0CjlUbqj5bFNSjIRk=
github.com/chai2010/gettext-go v1.0.2/go.mod h1:y+wnP2cHYaVj19NZhYKAwEMH2CI1gNHeQQ+5AjwawxA=
+github.com/checkpoint-restore/go-criu/v4 v4.1.0/go.mod h1:xUQBLp4RLc5zJtWY++yjOoMoB5lihDt7fai+75m+rGw=
+github.com/checkpoint-restore/go-criu/v5 v5.0.0/go.mod h1:cfwC0EG7HMUenopBsUf9d89JlCLQIfgVcNsNN0t6T2M=
+github.com/checkpoint-restore/go-criu/v5 v5.3.0/go.mod h1:E/eQpaFtUKGOOSEBZgmKAcn+zUUwWxqcaKZlF54wK8E=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
+github.com/cilium/ebpf v0.0.0-20200110133405-4032b1d8aae3/go.mod h1:MA5e5Lr8slmEg9bt0VpxxWqJlO4iwu3FBdHUzV7wQVg=
+github.com/cilium/ebpf v0.0.0-20200702112145-1c8d4c9ef775/go.mod h1:7cR51M8ViRLIdUjrmSXlK9pkrsDlLHbO8jiB8X8JnOc=
+github.com/cilium/ebpf v0.2.0/go.mod h1:To2CFviqOWL/M0gIMsvSMlqe7em/l1ALkX1PyjrX2Qs=
+github.com/cilium/ebpf v0.4.0/go.mod h1:4tRaxcgiL706VnOzHOdBlY8IEAIdxINsQBcU4xJJXRs=
+github.com/cilium/ebpf v0.6.2/go.mod h1:4tRaxcgiL706VnOzHOdBlY8IEAIdxINsQBcU4xJJXRs=
+github.com/cilium/ebpf v0.7.0/go.mod h1:/oI2+1shJiTGAMgl6/RgJr36Eo1jzrRcAWbcXO2usCA=
+github.com/cilium/ebpf v0.9.3/go.mod h1:w27N4UjpaQ9X/DGrSugxUG+H+NhgntDuPb5lCzxCn8A=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/cncf/udpa/go v0.0.0-20200629203442-efcf912fb354/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/udpa/go v0.0.0-20201120205902-5459f2c99403/go.mod h1:WmhPx2Nbnhtbo57+VJT5O0JRkEi1Wbu0z5j0R8u5Hbk=
github.com/cncf/udpa/go v0.0.0-20210930031921-04548b0d99d4/go.mod h1:6pvJx4me5XPnfI9Z40ddWsdw2W/uZgQLFXToKeRcDiI=
+github.com/cncf/udpa/go v0.0.0-20220112060539-c52dc94e7fbe/go.mod h1:6pvJx4me5XPnfI9Z40ddWsdw2W/uZgQLFXToKeRcDiI=
github.com/cncf/xds/go v0.0.0-20210312221358-fbca930ec8ed/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20210805033703-aa0b78936158/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20210922020428-25de7278fc84/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20211001041855-01bcc9b48dfe/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cncf/xds/go v0.0.0-20211011173535-cb28da3451f1/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
+github.com/cncf/xds/go v0.0.0-20220314180256-7f1daf1720fc/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
+github.com/cncf/xds/go v0.0.0-20230105202645-06c439db220b/go.mod h1:eXthEFrGJvWHgFFCl3hGmgk+/aYT6PnTQLykKQRLhEs=
github.com/cockroachdb/datadriven v0.0.0-20200714090401-bf6692d28da5/go.mod h1:h6jFvWxBdQXxjopDMZyH2UVceIRfR84bdzbkoKrsWNo=
github.com/cockroachdb/errors v1.2.4/go.mod h1:rQD95gz6FARkaKkQXUksEje/d9a6wBJoCr5oaCLELYA=
github.com/cockroachdb/logtags v0.0.0-20190617123548-eb05cc24525f/go.mod h1:i/u985jwjWRlyHXQbwatDASoW0RMlZ/3i9yJHE2xLkI=
+github.com/container-storage-interface/spec v1.7.0/go.mod h1:JYuzLqr9VVNoDJl44xp/8fmCOvWPDKzuGTwCoklhuqk=
+github.com/containerd/aufs v0.0.0-20200908144142-dab0cbea06f4/go.mod h1:nukgQABAEopAHvB6j7cnP5zJ+/3aVcE7hCYqvIwAHyE=
+github.com/containerd/aufs v0.0.0-20201003224125-76a6863f2989/go.mod h1:AkGGQs9NM2vtYHaUen+NljV0/baGCAPELGm2q9ZXpWU=
+github.com/containerd/aufs v0.0.0-20210316121734-20793ff83c97/go.mod h1:kL5kd6KM5TzQjR79jljyi4olc1Vrx6XBlcyj3gNv2PU=
+github.com/containerd/aufs v1.0.0/go.mod h1:kL5kd6KM5TzQjR79jljyi4olc1Vrx6XBlcyj3gNv2PU=
+github.com/containerd/btrfs v0.0.0-20201111183144-404b9149801e/go.mod h1:jg2QkJcsabfHugurUvvPhS3E08Oxiuh5W/g1ybB4e0E=
+github.com/containerd/btrfs v0.0.0-20210316141732-918d888fb676/go.mod h1:zMcX3qkXTAi9GI50+0HOeuV8LU2ryCE/V2vG/ZBiTss=
+github.com/containerd/btrfs v1.0.0/go.mod h1:zMcX3qkXTAi9GI50+0HOeuV8LU2ryCE/V2vG/ZBiTss=
+github.com/containerd/cgroups v0.0.0-20190717030353-c4b9ac5c7601/go.mod h1:X9rLEHIqSf/wfK8NsPqxJmeZgW4pcfzdXITDrUSJ6uI=
+github.com/containerd/cgroups v0.0.0-20190919134610-bf292b21730f/go.mod h1:OApqhQ4XNSNC13gXIwDjhOQxjWa/NxkwZXJ1EvqT0ko=
+github.com/containerd/cgroups v0.0.0-20200531161412-0dbf7f05ba59/go.mod h1:pA0z1pT8KYB3TCXK/ocprsh7MAkoW8bZVzPdih9snmM=
+github.com/containerd/cgroups v0.0.0-20200710171044-318312a37340/go.mod h1:s5q4SojHctfxANBDvMeIaIovkq29IP48TKAxnhYRxvo=
+github.com/containerd/cgroups v0.0.0-20200824123100-0b889c03f102/go.mod h1:s5q4SojHctfxANBDvMeIaIovkq29IP48TKAxnhYRxvo=
+github.com/containerd/cgroups v0.0.0-20210114181951-8a68de567b68/go.mod h1:ZJeTFisyysqgcCdecO57Dj79RfL0LNeGiFUqLYQRYLE=
+github.com/containerd/cgroups v1.0.1/go.mod h1:0SJrPIenamHDcZhEcJMNBB85rHcUsw4f25ZfBiPYRkU=
+github.com/containerd/cgroups v1.0.4 h1:jN/mbWBEaz+T1pi5OFtnkQ+8qnmEbAr1Oo1FRm5B0dA=
+github.com/containerd/cgroups v1.0.4/go.mod h1:nLNQtsF7Sl2HxNebu77i1R0oDlhiTG+kO4JTrUzo6IA=
+github.com/containerd/console v0.0.0-20180822173158-c12b1e7919c1/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
+github.com/containerd/console v0.0.0-20181022165439-0650fd9eeb50/go.mod h1:Tj/on1eG8kiEhd0+fhSDzsPAFESxzBBvdyEgyryXffw=
+github.com/containerd/console v0.0.0-20191206165004-02ecf6a7291e/go.mod h1:8Pf4gM6VEbTNRIT26AyyU7hxdQU3MvAvxVI0sc00XBE=
+github.com/containerd/console v1.0.1/go.mod h1:XUsP6YE/mKtz6bxc+I8UiKKTP04qjQL4qcS3XoQ5xkw=
+github.com/containerd/console v1.0.2/go.mod h1:ytZPjGgY2oeTkAONYafi2kSj0aYggsf8acV1PGKCbzQ=
+github.com/containerd/console v1.0.3 h1:lIr7SlA5PxZyMV30bDW0MGbiOPXwc63yRuCP0ARubLw=
+github.com/containerd/console v1.0.3/go.mod h1:7LqA/THxQ86k76b8c/EMSiaJ3h1eZkMkXar0TQ1gf3U=
+github.com/containerd/containerd v1.2.10/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.3.0-beta.2.0.20190828155532-0293cbd26c69/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.3.0/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.3.1-0.20191213020239-082f7e3aed57/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.3.2/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.4.0-beta.2.0.20200729163537-40b22ef07410/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.4.1/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.4.3/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.4.9/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.4.13/go.mod h1:bC6axHOhabU15QhwfG7w5PipXdVtMXFTttgp+kVtyUA=
+github.com/containerd/containerd v1.5.0-beta.1/go.mod h1:5HfvG1V2FsKesEGQ17k5/T7V960Tmcumvqn8Mc+pCYQ=
+github.com/containerd/containerd v1.5.0-beta.3/go.mod h1:/wr9AVtEM7x9c+n0+stptlo/uBBoBORwEx6ardVcmKU=
+github.com/containerd/containerd v1.5.0-beta.4/go.mod h1:GmdgZd2zA2GYIBZ0w09ZvgqEq8EfBp/m3lcVZIvPHhI=
+github.com/containerd/containerd v1.5.0-rc.0/go.mod h1:V/IXoMqNGgBlabz3tHD2TWDoTJseu1FGOKuoA4nNb2s=
+github.com/containerd/containerd v1.5.1/go.mod h1:0DOxVqwDy2iZvrZp2JUx/E+hS0UNTVn7dJnIOwtYR4g=
+github.com/containerd/containerd v1.5.7/go.mod h1:gyvv6+ugqY25TiXxcZC3L5yOeYgEw0QMhscqVp1AR9c=
+github.com/containerd/containerd v1.6.14 h1:W+d0AJKVG3ioTZZyQwcw1Y3vvo6ZDYzAcjDcY4tkgGI=
+github.com/containerd/containerd v1.6.14/go.mod h1:U2NnBPIhzJDm59xF7xB2MMHnKtggpZ+phKg8o2TKj2c=
+github.com/containerd/continuity v0.0.0-20190426062206-aaeac12a7ffc/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
+github.com/containerd/continuity v0.0.0-20190815185530-f2a389ac0a02/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
+github.com/containerd/continuity v0.0.0-20191127005431-f65d91d395eb/go.mod h1:GL3xCUCBDV3CZiTSEKksMWbLE66hEyuu9qyDOOqM47Y=
+github.com/containerd/continuity v0.0.0-20200710164510-efbc4488d8fe/go.mod h1:cECdGN1O8G9bgKTlLhuPJimka6Xb/Gg7vYzCTNVxhvo=
+github.com/containerd/continuity v0.0.0-20201208142359-180525291bb7/go.mod h1:kR3BEg7bDFaEddKm54WSmrol1fKWDU1nKYkgrcgZT7Y=
+github.com/containerd/continuity v0.0.0-20210208174643-50096c924a4e/go.mod h1:EXlVlkqNba9rJe3j7w3Xa924itAMLgZH4UD/Q4PExuQ=
+github.com/containerd/continuity v0.1.0/go.mod h1:ICJu0PwR54nI0yPEnJ6jcS+J7CZAUXrLh8lPo2knzsM=
+github.com/containerd/continuity v0.3.0 h1:nisirsYROK15TAMVukJOUyGJjz4BNQJBVsNvAXZJ/eg=
+github.com/containerd/continuity v0.3.0/go.mod h1:wJEAIwKOm/pBZuBd0JmeTvnLquTB1Ag8espWhkykbPM=
+github.com/containerd/fifo v0.0.0-20180307165137-3d5202aec260/go.mod h1:ODA38xgv3Kuk8dQz2ZQXpnv/UZZUHUCL7pnLehbXgQI=
+github.com/containerd/fifo v0.0.0-20190226154929-a9fb20d87448/go.mod h1:ODA38xgv3Kuk8dQz2ZQXpnv/UZZUHUCL7pnLehbXgQI=
+github.com/containerd/fifo v0.0.0-20200410184934-f15a3290365b/go.mod h1:jPQ2IAeZRCYxpS/Cm1495vGFww6ecHmMk1YJH2Q5ln0=
+github.com/containerd/fifo v0.0.0-20201026212402-0724c46b320c/go.mod h1:jPQ2IAeZRCYxpS/Cm1495vGFww6ecHmMk1YJH2Q5ln0=
+github.com/containerd/fifo v0.0.0-20210316144830-115abcc95a1d/go.mod h1:ocF/ME1SX5b1AOlWi9r677YJmCPSwwWnQ9O123vzpE4=
+github.com/containerd/fifo v1.0.0 h1:6PirWBr9/L7GDamKr+XM0IeUFXu5mf3M/BPpH9gaLBU=
+github.com/containerd/fifo v1.0.0/go.mod h1:ocF/ME1SX5b1AOlWi9r677YJmCPSwwWnQ9O123vzpE4=
+github.com/containerd/go-cni v1.0.1/go.mod h1:+vUpYxKvAF72G9i1WoDOiPGRtQpqsNW/ZHtSlv++smU=
+github.com/containerd/go-cni v1.0.2/go.mod h1:nrNABBHzu0ZwCug9Ije8hL2xBCYh/pjfMb1aZGrrohk=
+github.com/containerd/go-runc v0.0.0-20180907222934-5a6d9f37cfa3/go.mod h1:IV7qH3hrUgRmyYrtgEeGWJfWbgcHL9CSRruz2Vqcph0=
+github.com/containerd/go-runc v0.0.0-20190911050354-e029b79d8cda/go.mod h1:IV7qH3hrUgRmyYrtgEeGWJfWbgcHL9CSRruz2Vqcph0=
+github.com/containerd/go-runc v0.0.0-20200220073739-7016d3ce2328/go.mod h1:PpyHrqVs8FTi9vpyHwPwiNEGaACDxT/N/pLcvMSRA9g=
+github.com/containerd/go-runc v0.0.0-20201020171139-16b287bc67d0/go.mod h1:cNU0ZbCgCQVZK4lgG3P+9tn9/PaJNmoDXPpoJhDR+Ok=
+github.com/containerd/go-runc v1.0.0/go.mod h1:cNU0ZbCgCQVZK4lgG3P+9tn9/PaJNmoDXPpoJhDR+Ok=
+github.com/containerd/imgcrypt v1.0.1/go.mod h1:mdd8cEPW7TPgNG4FpuP3sGBiQ7Yi/zak9TYCG3juvb0=
+github.com/containerd/imgcrypt v1.0.4-0.20210301171431-0ae5c75f59ba/go.mod h1:6TNsg0ctmizkrOgXRNQjAPFWpMYRWuiB6dSF4Pfa5SA=
+github.com/containerd/imgcrypt v1.1.1-0.20210312161619-7ed62a527887/go.mod h1:5AZJNI6sLHJljKuI9IHnw1pWqo/F0nGDOuR9zgTs7ow=
+github.com/containerd/imgcrypt v1.1.1/go.mod h1:xpLnwiQmEUJPvQoAapeb2SNCxz7Xr6PJrXQb0Dpc4ms=
+github.com/containerd/nri v0.0.0-20201007170849-eb1350a75164/go.mod h1:+2wGSDGFYfE5+So4M5syatU0N0f0LbWpuqyMi4/BE8c=
+github.com/containerd/nri v0.0.0-20210316161719-dbaa18c31c14/go.mod h1:lmxnXF6oMkbqs39FiCt1s0R2HSMhcLel9vNL3m4AaeY=
+github.com/containerd/nri v0.1.0/go.mod h1:lmxnXF6oMkbqs39FiCt1s0R2HSMhcLel9vNL3m4AaeY=
+github.com/containerd/stargz-snapshotter/estargz v0.4.1/go.mod h1:x7Q9dg9QYb4+ELgxmo4gBUeJB0tl5dqH1Sdz0nJU1QM=
+github.com/containerd/ttrpc v0.0.0-20190828154514-0e0f228740de/go.mod h1:PvCDdDGpgqzQIzDW1TphrGLssLDZp2GuS+X5DkEJB8o=
+github.com/containerd/ttrpc v0.0.0-20190828172938-92c8520ef9f8/go.mod h1:PvCDdDGpgqzQIzDW1TphrGLssLDZp2GuS+X5DkEJB8o=
+github.com/containerd/ttrpc v0.0.0-20191028202541-4f1b8fe65a5c/go.mod h1:LPm1u0xBw8r8NOKoOdNMeVHSawSsltak+Ihv+etqsE8=
+github.com/containerd/ttrpc v1.0.1/go.mod h1:UAxOpgT9ziI0gJrmKvgcZivgxOp8iFPSk8httJEt98Y=
+github.com/containerd/ttrpc v1.0.2/go.mod h1:UAxOpgT9ziI0gJrmKvgcZivgxOp8iFPSk8httJEt98Y=
+github.com/containerd/ttrpc v1.1.0 h1:GbtyLRxb0gOLR0TYQWt3O6B0NvT8tMdorEHqIQo/lWI=
+github.com/containerd/ttrpc v1.1.0/go.mod h1:XX4ZTnoOId4HklF4edwc4DcqskFZuvXB1Evzy5KFQpQ=
+github.com/containerd/typeurl v0.0.0-20180627222232-a93fcdb778cd/go.mod h1:Cm3kwCdlkCfMSHURc+r6fwoGH6/F1hH3S4sg0rLFWPc=
+github.com/containerd/typeurl v0.0.0-20190911142611-5eb25027c9fd/go.mod h1:GeKYzf2pQcqv7tJ0AoCuuhtnqhva5LNU3U+OyKxxJpk=
+github.com/containerd/typeurl v1.0.1/go.mod h1:TB1hUtrpaiO88KEK56ijojHS1+NeF0izUACaJW2mdXg=
+github.com/containerd/typeurl v1.0.2 h1:Chlt8zIieDbzQFzXzAeBEF92KhExuE4p9p92/QmY7aY=
+github.com/containerd/typeurl v1.0.2/go.mod h1:9trJWW2sRlGub4wZJRTW83VtbOLS6hwcDZXTn6oPz9s=
+github.com/containerd/zfs v0.0.0-20200918131355-0a33824f23a2/go.mod h1:8IgZOBdv8fAgXddBT4dBXJPtxyRsejFIpXoklgxgEjw=
+github.com/containerd/zfs v0.0.0-20210301145711-11e8f1707f62/go.mod h1:A9zfAbMlQwE+/is6hi0Xw8ktpL+6glmqZYtevJgaB8Y=
+github.com/containerd/zfs v0.0.0-20210315114300-dde8f0fda960/go.mod h1:m+m51S1DvAP6r3FcmYCp54bQ34pyOwTieQDNRIRHsFY=
+github.com/containerd/zfs v0.0.0-20210324211415-d5c4544f0433/go.mod h1:m+m51S1DvAP6r3FcmYCp54bQ34pyOwTieQDNRIRHsFY=
+github.com/containerd/zfs v1.0.0/go.mod h1:m+m51S1DvAP6r3FcmYCp54bQ34pyOwTieQDNRIRHsFY=
+github.com/containernetworking/cni v0.7.1/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
+github.com/containernetworking/cni v0.8.0/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
+github.com/containernetworking/cni v0.8.1/go.mod h1:LGwApLUm2FpoOfxTDEeq8T9ipbpZ61X79hmU3w8FmsY=
+github.com/containernetworking/plugins v0.8.6/go.mod h1:qnw5mN19D8fIwkqW7oHHYDHVlzhJpcY6TQxn/fUyDDM=
+github.com/containernetworking/plugins v0.9.1/go.mod h1:xP/idU2ldlzN6m4p5LmGiwRDjeJr6FLK6vuiUwoH7P8=
+github.com/containers/ocicrypt v1.0.1/go.mod h1:MeJDzk1RJHv89LjsH0Sp5KTY3ZYkjXO/C+bKAeWFIrc=
+github.com/containers/ocicrypt v1.1.0/go.mod h1:b8AOe0YR67uU8OqfVNcznfFpAzu3rdgUV4GP9qXPfu4=
+github.com/containers/ocicrypt v1.1.1/go.mod h1:Dm55fwWm1YZAjYRaJ94z2mfZikIyIN4B0oB3dj3jFxY=
+github.com/coredns/caddy v1.1.0/go.mod h1:A6ntJQlAWuQfFlsd9hvigKbo2WS0VUs2l1e2F+BawD4=
+github.com/coredns/corefile-migration v1.0.17/go.mod h1:XnhgULOEouimnzgn0t4WPuFDN2/PJQcTxdWKC5eXNGE=
github.com/coreos/bbolt v1.3.2/go.mod h1:iRUV2dpdMOn7Bo10OQBFzIJO9kkE559Wcmn+qkEiiKk=
github.com/coreos/etcd v3.3.10+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/etcd v3.3.13+incompatible/go.mod h1:uF7uidLiAD3TWHmW31ZFd/JWoc32PjwdhPthX9715RE=
github.com/coreos/go-etcd v2.0.0+incompatible/go.mod h1:Jez6KQU2B/sWsbdaef3ED8NzMklzPG4d5KIOhIy30Tk=
-github.com/coreos/go-iptables v0.6.0 h1:is9qnZMPYjLd8LYqmm/qlE+wwEgJIkTYdhV3rfZo4jk=
-github.com/coreos/go-iptables v0.6.0/go.mod h1:Qe8Bv2Xik5FyTXwgIbLAnv2sWSBmvWdFETJConOQ//Q=
+github.com/coreos/go-iptables v0.4.5/go.mod h1:/mVI274lEDI2ns62jHCDnCyBF9Iwsmekav8Dbxlm1MU=
+github.com/coreos/go-iptables v0.5.0/go.mod h1:/mVI274lEDI2ns62jHCDnCyBF9Iwsmekav8Dbxlm1MU=
+github.com/coreos/go-iptables v0.7.1-0.20231102141700-50d824baaa46 h1:AVVvARdGRuTtYO/DetrN9Z1G0kMbrqV7KLOH/J4byiM=
+github.com/coreos/go-iptables v0.7.1-0.20231102141700-50d824baaa46/go.mod h1:Qe8Bv2Xik5FyTXwgIbLAnv2sWSBmvWdFETJConOQ//Q=
github.com/coreos/go-oidc v2.1.0+incompatible/go.mod h1:CgnwVTmzoESiwO9qyAFEMiHoZ1nMCKZlZ9V6mm3/LKc=
github.com/coreos/go-semver v0.2.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-semver v0.3.0/go.mod h1:nnelYz7RCh+5ahJtPPxZlU+153eP4D4r3EedlOD2RNk=
github.com/coreos/go-semver v0.3.1 h1:yi21YpKnrx1gt5R+la8n5WgS0kCrsPp33dmEyHReZr4=
github.com/coreos/go-semver v0.3.1/go.mod h1:irMmmIw/7yzSRPWryHsK7EYSg09caPQL03VsM8rvUec=
+github.com/coreos/go-systemd v0.0.0-20161114122254-48702e0da86b/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
+github.com/coreos/go-systemd v0.0.0-20180511133405-39ca1b05acc7/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/coreos/go-systemd v0.0.0-20190321100706-95778dfbb74e/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
+github.com/coreos/go-systemd/v22 v22.0.0/go.mod h1:xO0FLkIi5MaZafQlIrOotqXZ90ih+1atmu1JpKERPPk=
+github.com/coreos/go-systemd/v22 v22.1.0/go.mod h1:xO0FLkIi5MaZafQlIrOotqXZ90ih+1atmu1JpKERPPk=
github.com/coreos/go-systemd/v22 v22.3.2/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/coreos/go-systemd/v22 v22.3.3-0.20220203105225-a9a7ef127534 h1:rtAn27wIbmOGUs7RIbVgPEjb31ehTVniDwPGXyMxm5U=
github.com/coreos/go-systemd/v22 v22.3.3-0.20220203105225-a9a7ef127534/go.mod h1:Y58oyj3AT4RCenI/lSvhwexgC+NSVTIJ3seZv2GcEnc=
github.com/coreos/pkg v0.0.0-20180928190104-399ea9e2e55f/go.mod h1:E3G3o1h8I7cfcXa63jLwjI0eiQQMgzzUDFVpN/nH/eA=
github.com/cpuguy83/go-md2man v1.0.10/go.mod h1:SmD6nW6nTyfqj6ABTjUi3V3JVMnlJmwcJI5acqYI6dE=
+github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/cpuguy83/go-md2man/v2 v2.0.0/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
+github.com/cpuguy83/go-md2man/v2 v2.0.1/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/cpuguy83/go-md2man/v2 v2.0.2/go.mod h1:tgQtvFlXSQOSOSIRvRPT7W67SCa46tRHOmNcaadrF8o=
github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
github.com/creack/pty v1.1.11 h1:07n33Z8lZxZ2qwegKbObQohDhXDQxiMMz1NOUGYlesw=
github.com/creack/pty v1.1.11/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
+github.com/cyphar/filepath-securejoin v0.2.2/go.mod h1:FpkQEhXnPnOthhzymB7CGsFk2G9VLXONKD9G7QGMM+4=
+github.com/cyphar/filepath-securejoin v0.2.3/go.mod h1:aPGpWjXOXUn2NCNjFvBE6aRxGGx79pTxQpKOJNYHHl4=
+github.com/d2g/dhcp4 v0.0.0-20170904100407-a1d1b6c41b1c/go.mod h1:Ct2BUK8SB0YC1SMSibvLzxjeJLnrYEVLULFNiHY9YfQ=
+github.com/d2g/dhcp4client v1.0.0/go.mod h1:j0hNfjhrt2SxUOw55nL0ATM/z4Yt3t2Kd1mW34z5W5s=
+github.com/d2g/dhcp4server v0.0.0-20181031114812-7d4a0a7f59a5/go.mod h1:Eo87+Kg/IX2hfWJfwxMzLyuSZyxSoAug2nGa1G2QAi8=
+github.com/d2g/hardwareaddr v0.0.0-20190221164911-e7d9fbe030e4/go.mod h1:bMl4RjIciD2oAxI7DmWRx6gbeqrkoLqv3MV0vzNad+I=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/daviddengcn/go-colortext v1.0.0/go.mod h1:zDqEI5NVUop5QPpVJUxE9UO10hRnmkD5G4Pmri9+m4c=
+github.com/denverdino/aliyungo v0.0.0-20190125010748-a747050bb1ba/go.mod h1:dV8lFg6daOBZbT6/BDGIz6Y3WFGn8juu6G+CQ6LHtl0=
+github.com/dgrijalva/jwt-go v0.0.0-20170104182250-a601269ab70c/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgrijalva/jwt-go v3.2.0+incompatible/go.mod h1:E3ru+11k8xSBh+hMPgOLZmtrrCbhqsmaPHjLKYnJCaQ=
github.com/dgryski/go-sip13 v0.0.0-20181026042036-e10d5fee7954/go.mod h1:vAd38F8PWV+bWy6jNmig1y/TA+kYO4g3RSRF0IAv0no=
+github.com/dnaeon/go-vcr v1.0.1/go.mod h1:aBB1+wY4s93YsC3HHjMBMrwTj2R9FHDzUr9KyGc8n1E=
+github.com/dnaeon/go-vcr v1.2.0/go.mod h1:R4UdLID7HZT3taECzJs4YgbbH6PIGXB6W/sc5OLb6RQ=
+github.com/docker/cli v0.0.0-20191017083524-a8ff7f821017/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
+github.com/docker/distribution v0.0.0-20190905152932-14b96e55d84c/go.mod h1:0+TTO4EOBfRPhZXAeF1Vu+W3hHZ8eLp8PgKVZlcvtFY=
+github.com/docker/distribution v2.7.1-0.20190205005809-0d3efadf0154+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
+github.com/docker/distribution v2.7.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/distribution v2.8.1+incompatible h1:Q50tZOPR6T/hjNsyc9g8/syEs6bk8XXApsHjKukMl68=
github.com/docker/distribution v2.8.1+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
+github.com/docker/docker v1.4.2-0.20190924003213-a8608b5b67c7/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
+github.com/docker/docker v20.10.18+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
+github.com/docker/docker v24.0.6+incompatible h1:hceabKCtUgDqPu+qm0NgsaXf28Ljf4/pWFL7xjWWDgE=
+github.com/docker/docker v24.0.6+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
+github.com/docker/docker-credential-helpers v0.6.3/go.mod h1:WRaJzqw3CTB9bk10avuGsjVBZsD05qeibJ1/TYlvc0Y=
+github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
+github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
+github.com/docker/go-events v0.0.0-20170721190031-9461782956ad/go.mod h1:Uw6UezgYA44ePAFQYUehOuCzmy5zmg/+nl2ZfMWGkpA=
+github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c h1:+pKlWGMw7gf6bQ+oDZB4KHQFypsfjYlq/C4rfL7D3g8=
+github.com/docker/go-events v0.0.0-20190806004212-e31b211e4f1c/go.mod h1:Uw6UezgYA44ePAFQYUehOuCzmy5zmg/+nl2ZfMWGkpA=
+github.com/docker/go-metrics v0.0.0-20180209012529-399ea8c73916/go.mod h1:/u0gXw0Gay3ceNrsHubL3BtdOL2fHf93USgMTe0W5dI=
+github.com/docker/go-metrics v0.0.1/go.mod h1:cG1hvH2utMXtqgqqYE9plW6lDxS3/5ayHzueweSI3Vw=
+github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
+github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
+github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
+github.com/docker/libtrust v0.0.0-20150114040149-fa567046d9b1/go.mod h1:cyGadeNEkKy96OOhEzfZl+yxihPEzKnqJwvfuSUqbZE=
github.com/docopt/docopt-go v0.0.0-20180111231733-ee0de3bc6815/go.mod h1:WwZ+bS3ebgob9U8Nd0kOddGdZWjyMGR8Wziv+TBNwSE=
+github.com/dustin/go-humanize v0.0.0-20171111073723-bb3d318650d4/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/dustin/go-humanize v1.0.0 h1:VSnTsYCnlFHaM2/igO1h6X3HA71jcobQuxemgkq4zYo=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/elazarl/goproxy v0.0.0-20180725130230-947c36da3153 h1:yUdfgN0XgIJw7foRItutHYUIhlcKzcSf5vDpdhQAKTc=
github.com/elazarl/goproxy v0.0.0-20180725130230-947c36da3153/go.mod h1:/Zj4wYkgs4iZTTu3o/KG3Itv/qCCa8VVMlb3i9OVuzc=
+github.com/emicklei/go-restful v0.0.0-20170410110728-ff4f55a20633/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
+github.com/emicklei/go-restful v2.9.5+incompatible/go.mod h1:otzb+WCGbkyDHkqmQmT5YD2WR4BBwUdeQoFo8l/7tVs=
github.com/emicklei/go-restful/v3 v3.8.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
github.com/emicklei/go-restful/v3 v3.9.0 h1:XwGDlfxEnQZzuopoqxwSEllNcCOM9DhhFyhFIIGKwxE=
github.com/emicklei/go-restful/v3 v3.9.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc=
@@ -176,9 +725,14 @@ github.com/envoyproxy/go-control-plane v0.9.9-0.20210217033140-668b12f5399d/go.m
github.com/envoyproxy/go-control-plane v0.9.9-0.20210512163311-63b5d3c536b0/go.mod h1:hliV/p42l8fGbc6Y9bQ70uLwIvmJyVE5k4iMKlh8wCQ=
github.com/envoyproxy/go-control-plane v0.9.10-0.20210907150352-cf90f659a021/go.mod h1:AFq3mo9L8Lqqiid3OhADV3RfLJnjiw63cSpi+fDTRC0=
github.com/envoyproxy/go-control-plane v0.10.2-0.20220325020618-49ff273808a1/go.mod h1:KJwIaB5Mv44NWtYuAOFCVOjcI94vtpEz2JU/D2v6IjE=
+github.com/envoyproxy/go-control-plane v0.10.3/go.mod h1:fJJn/j26vwOu972OllsvAgJJM//w9BV6Fxbg2LuVd34=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
+github.com/envoyproxy/protoc-gen-validate v0.6.7/go.mod h1:dyJXwwfPK2VSqiB9Klm1J6romD608Ba7Hij42vrOBCo=
+github.com/envoyproxy/protoc-gen-validate v0.9.1/go.mod h1:OKNgG7TCp5pF4d6XftA0++PMirau2/yoOwVac3AbF2w=
+github.com/euank/go-kmsg-parser v2.0.0+incompatible/go.mod h1:MhmAMZ8V4CYH4ybgdRwPr2TU5ThnS43puaKEMpja1uw=
github.com/evanphx/json-patch v0.5.2/go.mod h1:ZWS5hhDbVDyob71nXKNL0+PWn6ToqBHMikGIFbs31qQ=
github.com/evanphx/json-patch v4.5.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
+github.com/evanphx/json-patch v4.11.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch v4.12.0+incompatible h1:4onqiflcdA9EOZ4RxV643DvftH5pOlLGNtQ5lPWQu84=
github.com/evanphx/json-patch v4.12.0+incompatible/go.mod h1:50XU6AFN0ol/bzJsmQLiYLvXMP4fmwYFNcr97nuDLSk=
github.com/evanphx/json-patch/v5 v5.0.0/go.mod h1:G79N1coSVB93tBe7j6PhzjmR3/2VvlbKOFpnXhI9Bw4=
@@ -191,14 +745,21 @@ github.com/fatih/camelcase v1.0.0/go.mod h1:yN2Sb0lFhZJUdVvtELVWefmrXpuZESvPmqwo
github.com/fatih/color v1.7.0/go.mod h1:Zm6kSWBoL9eyXnKyktHP6abPY2pDugNf5KwzbycvMj4=
github.com/felixge/httpsnoop v1.0.3 h1:s/nj+GCswXYzN5v2DpNMuMQYe+0DDwt5WVCU6CWBdXk=
github.com/felixge/httpsnoop v1.0.3/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
+github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc=
github.com/form3tech-oss/jwt-go v3.2.3+incompatible h1:7ZaBxOI7TMoYBfyA3cQHErNNyAWIKUMIwqxEtgHOs5c=
github.com/form3tech-oss/jwt-go v3.2.3+incompatible/go.mod h1:pbq4aXjuKjdthFRnoDwaVPLA+WlJuPGy+QneDUgJi2k=
+github.com/frankban/quicktest v1.11.3/go.mod h1:wRf/ReqHper53s+kmmSZizM8NamnL3IM0I9ntUbOk+k=
+github.com/frankban/quicktest v1.14.0/go.mod h1:NeW+ay9A/U67EYXNFA1nPE8e/tnQv/09mUdL/ijj8og=
+github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/fsnotify/fsnotify v1.4.9/go.mod h1:znqG4EE+3YCdAaPaxE2ZRY/06pZUdp0tY4IgpuI1SZQ=
github.com/fsnotify/fsnotify v1.6.0 h1:n+5WquG0fcWoWp6xPWfHdbskMCQaFnG6PfBrh1Ky4HY=
github.com/fsnotify/fsnotify v1.6.0/go.mod h1:sl3t1tCWJFWoRz9R8WJCbQihKKwmorjAbSClcnxKAGw=
+github.com/fullsailor/pkcs7 v0.0.0-20190404230743-d7302db945fa/go.mod h1:KnogPXtdwXqoenmZCw6S+25EAm2MkxbG0deNDu4cbSA=
github.com/fvbommel/sortorder v1.0.1 h1:dSnXLt4mJYH25uDDGa3biZNQsozaUWDSWeKJ0qqFfzE=
github.com/fvbommel/sortorder v1.0.1/go.mod h1:uk88iVf1ovNn1iLfgUVU2F9o5eO30ui720w+kxuqRs0=
+github.com/garyburd/redigo v0.0.0-20150301180006-535138d7bcd7/go.mod h1:NR3MbYisc3/PwhQ00EMzDiPmrwpPxAn5GI05/YaO1SY=
+github.com/getkin/kin-openapi v0.76.0/go.mod h1:660oXbgy5JFMKreazJaQTw7o+X00qeSyhcnluiMv+Xg=
github.com/getsentry/raven-go v0.2.0/go.mod h1:KungGk8q33+aIAZUIVWZDr2OfAEBsO49PX4NzFV5kcQ=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/go-errors/errors v1.0.1 h1:LUHzmkK3GUKUrL/1gfBUxAHzcev3apQlezX/+O7ma6w=
@@ -206,6 +767,7 @@ github.com/go-errors/errors v1.0.1/go.mod h1:f4zRHt4oKfwPJE5k8C9vpYG+aDHdBFUsgrm
github.com/go-gl/glfw v0.0.0-20190409004039-e6da0acd62b1/go.mod h1:vR7hzQXu2zJy9AVAgeJqvqgH9Q5CA+iKCZ2gyEVpxRU=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20191125211704-12ad95a8df72/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
github.com/go-gl/glfw/v3.3/glfw v0.0.0-20200222043503-6f7a984d4dc4/go.mod h1:tQ2UAYgL5IevRw8kRxooKSPJfGvJ9fJQFa0TUsXzTg8=
+github.com/go-ini/ini v1.25.4/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-kit/kit v0.8.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2as=
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
@@ -230,6 +792,7 @@ github.com/go-openapi/jsonpointer v0.19.5/go.mod h1:Pl9vOtqEWErmShwVjC8pYs9cog34
github.com/go-openapi/jsonpointer v0.19.6 h1:eCs3fxoIi3Wh6vtgmLTOjdhSpiqphQ+DaPn38N2ZdrE=
github.com/go-openapi/jsonpointer v0.19.6/go.mod h1:osyAmYz/mB/C3I+WsTTSgw1ONzaLJoLCyoi6/zppojs=
github.com/go-openapi/jsonreference v0.19.3/go.mod h1:rjx6GuL8TTa9VaixXglHmQmIL98+wF9xc8zWvFonSJ8=
+github.com/go-openapi/jsonreference v0.19.5/go.mod h1:RdybgQwPxbL4UEjuAruzK1x3nE69AqPYEJeo/TWfEeg=
github.com/go-openapi/jsonreference v0.20.0/go.mod h1:Ag74Ico3lPc+zR+qjn4XBUmXymS4zJbYVCZmcgkasdo=
github.com/go-openapi/jsonreference v0.20.1 h1:FBLnyygC4/IZZr893oiomc9XaghoveYTrLC1F86HID8=
github.com/go-openapi/jsonreference v0.20.1/go.mod h1:Bl1zwGIM8/wsvqjsOQLJ/SH+En5Ap4rVB5KVcIDZG2k=
@@ -246,12 +809,26 @@ github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg78
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572 h1:tfuBGBXKqDEevZMzYi5KSi8KkcZtzBcTgAUUtapy0OI=
github.com/go-task/slim-sprig v0.0.0-20230315185526-52ccab3ef572/go.mod h1:9Pwr4B2jHnOSGXyyzV8ROjYa2ojvAY6HCGYYfMoC3Ls=
github.com/gobuffalo/flect v0.2.0/go.mod h1:W3K3X9ksuZfir8f/LrfVtWmCDQFfayuylOJ7sz/Fj80=
+github.com/godbus/dbus v0.0.0-20151105175453-c7fdd8b5cd55/go.mod h1:/YcGZj5zSblfDWMMoOzV4fas9FZnQYTkDnsGvmh2Grw=
+github.com/godbus/dbus v0.0.0-20180201030542-885f9cc04c9c/go.mod h1:/YcGZj5zSblfDWMMoOzV4fas9FZnQYTkDnsGvmh2Grw=
+github.com/godbus/dbus v0.0.0-20190422162347-ade71ed3457e/go.mod h1:bBOAhwG1umN6/6ZUMtDFBMQR8jRg9O75tm9K00oMsK4=
+github.com/godbus/dbus/v5 v5.0.3/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
github.com/godbus/dbus/v5 v5.0.4/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
+github.com/godbus/dbus/v5 v5.0.6/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
+github.com/gofrs/flock v0.8.0/go.mod h1:F1TvTiK9OcQqauNUHlbJvyl9Qa1QvF/gOUDKA14jxHU=
+github.com/gofrs/uuid v4.0.0+incompatible/go.mod h1:b2aQJv3Z4Fp6yNu3cdSllBxTCLRxnplIgP/c0N/04lM=
+github.com/gogo/googleapis v1.2.0/go.mod h1:Njal3psf3qN6dwBtQfUmBZh2ybovJ0tlu3o/AC7HYjU=
+github.com/gogo/googleapis v1.4.0/go.mod h1:5YRNX2z1oM5gXdAkurHa942MDgEJyk02w4OecKY87+c=
+github.com/gogo/googleapis v1.4.1 h1:1Yx4Myt7BxzvUr5ldGSbwYiZG6t9wGBZ+8/fX3Wvtq0=
+github.com/gogo/googleapis v1.4.1/go.mod h1:2lpHqI5OcWCtVElxXnPt+s8oJvMpySlOyM6xDCrzib4=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.2.1/go.mod h1:hp+jE20tsWTFYpLwKvXlhS1hjn+gTNwPg2I6zVXpSg4=
+github.com/gogo/protobuf v1.3.0/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/gogo/protobuf v1.3.1/go.mod h1:SlYgWuQ5SjCEi6WLHjHCa1yvBfUnHcTbrrZtXPKa29o=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
+github.com/golang-jwt/jwt/v4 v4.0.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
+github.com/golang-jwt/jwt/v4 v4.2.0/go.mod h1:/xlHOz8bRuivTWchD4jCa+NbatV+wEUSzwAxVc6locg=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/glog v1.0.0 h1:nfP3RFugxnNRyKgeWd4oI1nYvXpxrx8ck8ZrcizshdQ=
github.com/golang/glog v1.0.0/go.mod h1:EWib/APOK0SL3dFbYqvxE3UYd8E6s1ouQ7iEp/0LWV4=
@@ -290,10 +867,15 @@ github.com/golang/protobuf v1.5.2/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiu
github.com/golang/protobuf v1.5.3 h1:KhyjKVUg7Usr/dYsdSqoFveMYd5ko72D+zANwlG1mmg=
github.com/golang/protobuf v1.5.3/go.mod h1:XVQd3VNwM+JqD3oG2Ue2ip4fOMUkwXdXDdiuN0vRsmY=
github.com/golang/snappy v0.0.3/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q=
+github.com/golangplus/bytes v0.0.0-20160111154220-45c989fe5450/go.mod h1:Bk6SMAONeMXrxql8uvOKuAZSu8aM5RUGv+1C6IJaEho=
+github.com/golangplus/bytes v1.0.0/go.mod h1:AdRaCFwmc/00ZzELMWb01soso6W1R/++O1XL80yAn+A=
+github.com/golangplus/fmt v1.0.0/go.mod h1:zpM0OfbMCjPtd2qkTD/jX2MgiFCqklhSUFyDW44gVQE=
+github.com/golangplus/testing v1.0.0/go.mod h1:ZDreixUV3YzhoVraIDyOzHrr76p6NUh6k/pPg/Q3gYA=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.0/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/btree v1.0.1 h1:gK4Kx5IaGY9CD5sPJ36FHiBJ6ZXl0kilRiiCj+jdYp4=
github.com/google/btree v1.0.1/go.mod h1:xXMiIv4Fb/0kKde4SpL7qlzvu5cMJDRkFDxJfI9uaxA=
+github.com/google/cadvisor v0.46.0/go.mod h1:YnCDnR8amaS0HoMEjheOI0TMPzFKCBLc30mciLEjwGI=
github.com/google/cel-go v0.12.6 h1:kjeKudqV0OygrAqA9fX6J55S8gj+Jre2tckIm5RoG4M=
github.com/google/cel-go v0.12.6/go.mod h1:Jk7ljRzLBhkmiAwBoUxB1sZSCVBAzkqPF25olK/iRDw=
github.com/google/gnostic v0.5.7-v3refs h1:FhTMOKj2VhjpouxvWJAV1TL304uMlb9zcDqkl6cEI54=
@@ -314,6 +896,7 @@ github.com/google/go-cmp v0.5.7/go.mod h1:n+brtR0CgQNWTVd5ZUFpTBC8YFBDLK/h/bpaJ8
github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
+github.com/google/go-containerregistry v0.5.1/go.mod h1:Ct15B4yir3PLOP5jsy0GNeYVaIZs/MK/Jz5any1wFW0=
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.1.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0=
@@ -331,6 +914,7 @@ github.com/google/pprof v0.0.0-20200430221834-fc25d7d30c6d/go.mod h1:ZgVRPoUq/hf
github.com/google/pprof v0.0.0-20200708004538-1a94d8640e99/go.mod h1:ZgVRPoUq/hfqzAqh7sHMqb3I9Rq5C59dIz2SbBwJ4eM=
github.com/google/pprof v0.0.0-20201023163331-3e6fc7fc9c4c/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20201203190320-1bf35d6f28c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
+github.com/google/pprof v0.0.0-20201218002935-b9804c9f04c2/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210122040257-d980be63207e/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210226084205-cbba55b83ad5/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/pprof v0.0.0-20210407192527-94a9f03dee38/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
@@ -342,10 +926,15 @@ github.com/google/pprof v0.0.0-20211214055906-6f57359322fd/go.mod h1:KgnwoLYCZ8I
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 h1:El6M4kTTCOh6aBiKaUGG7oYTSPP8MxqL4YI3kZKwcP4=
github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510/go.mod h1:pupxD2MaaD3pAXIBCelhxNneeOaAeabZDe5s4K6zSpQ=
+github.com/google/subcommands v1.0.2-0.20190508160503-636abe8753b8/go.mod h1:ZjhPrFU+Olkh9WazFPsl27BQ4UPiG37m3yTrtFlrHVk=
+github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/google/uuid v1.2.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.3.0 h1:t6JiXgmwXMjEs8VusXIJk2BXHsn+wx8BZdTaoZ5fu7I=
github.com/google/uuid v1.3.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/enterprise-certificate-proxy v0.0.0-20220520183353-fd19c99a87aa/go.mod h1:17drOmN3MwGY7t0e+Ei9b45FFGA3fBs3x36SsCg1hq8=
+github.com/googleapis/enterprise-certificate-proxy v0.1.0/go.mod h1:17drOmN3MwGY7t0e+Ei9b45FFGA3fBs3x36SsCg1hq8=
+github.com/googleapis/enterprise-certificate-proxy v0.2.0/go.mod h1:8C0jb7/mgJe/9KK8Lm7X9ctZC2t60YyIpYEI16jx0Qg=
github.com/googleapis/gax-go/v2 v2.0.4/go.mod h1:0Wqv26UfaUD9n4G6kQubkQ+KchISgw+vpHVxEJEs9eg=
github.com/googleapis/gax-go/v2 v2.0.5/go.mod h1:DWXyrwAJ9X0FpwwEdw+IPEYBICEFu5mhpdKc/us6bOk=
github.com/googleapis/gax-go/v2 v2.1.0/go.mod h1:Q3nei7sK6ybPYH7twZdmQpAd1MKb7pfu6SK+H1/DsU0=
@@ -353,9 +942,21 @@ github.com/googleapis/gax-go/v2 v2.1.1/go.mod h1:hddJymUZASv3XPyGkUpKj8pPO47Rmb0
github.com/googleapis/gax-go/v2 v2.2.0/go.mod h1:as02EH8zWkzwUoLbBaFeQ+arQaj/OthfcblKl4IGNaM=
github.com/googleapis/gax-go/v2 v2.3.0/go.mod h1:b8LNqSzNabLiUpXKkY7HAR5jr6bIT99EXz9pXxye9YM=
github.com/googleapis/gax-go/v2 v2.4.0/go.mod h1:XOTVJ59hdnfJLIP/dh8n5CGryZR2LxK9wbMD5+iXC6c=
+github.com/googleapis/gax-go/v2 v2.5.1/go.mod h1:h6B0KMMFNtI2ddbGJn3T3ZbwkeT6yqEF02fYlzkUCyo=
+github.com/googleapis/gax-go/v2 v2.6.0/go.mod h1:1mjbznJAPHFpesgE5ucqfYEscaz5kMdcIDwU/6+DDoY=
+github.com/googleapis/gax-go/v2 v2.7.0/go.mod h1:TEop28CZZQ2y+c0VxMUmu1lV+fQx57QpBWsYpwqHJx8=
github.com/googleapis/gnostic v0.3.1/go.mod h1:on+2t9HRStVgn95RSsFWFz+6Q0Snyqv1awfrALZdbtU=
+github.com/googleapis/gnostic v0.5.1/go.mod h1:6U4PtQXGIEt/Z3h5MAT7FNofLnw9vXk2cUuW7uA/OeU=
+github.com/googleapis/gnostic v0.5.5/go.mod h1:7+EbHbldMins07ALC74bsA81Ovc97DwqyJO1AENw9kA=
github.com/googleapis/go-type-adapters v1.0.0/go.mod h1:zHW75FOG2aur7gAO2B+MLby+cLsWGBF62rFAi7WjWO4=
+github.com/googleapis/google-cloud-go-testing v0.0.0-20200911160855-bcd43fbb19e8/go.mod h1:dvDLG8qkwmyD9a/MJJN3XJcT3xFxOKAvTZGvuZmac9g=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
+github.com/gorilla/handlers v0.0.0-20150720190736-60c7bfde3e33/go.mod h1:Qkdc/uu4tH4g6mTK6auzZ766c4CA0Ng8+o/OAirnOIQ=
+github.com/gorilla/mux v1.7.2/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
+github.com/gorilla/mux v1.7.3/go.mod h1:1lud6UwP+6orDFRuTfBEV8e9/aOM/c4fVVCaMa2zaAs=
+github.com/gorilla/mux v1.8.0/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=
+github.com/gorilla/mux v1.8.1 h1:TuBL49tXwgrFYWhqrNgrUNEY92u81SPhu7sTdzQEiWY=
+github.com/gorilla/mux v1.8.1/go.mod h1:AKf9I4AEqPTmMytcMc0KkNouC66V3BtZ4qD5fmWSiMQ=
github.com/gorilla/websocket v1.4.0/go.mod h1:E7qHFY5m1UJ88s3WnNqhKjPHQ0heANvMoAMk2YaljkQ=
github.com/gorilla/websocket v1.4.2 h1:+/TMaTYc4QFitKJxsQ7Yye35DkWvkdLcvGKqM+x0Ufc=
github.com/gorilla/websocket v1.4.2/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
@@ -369,15 +970,20 @@ github.com/grpc-ecosystem/go-grpc-prometheus v1.2.0/go.mod h1:8NvIoxWQoOIhqOTXgf
github.com/grpc-ecosystem/grpc-gateway v1.9.0/go.mod h1:vNeuVxBJEsws4ogUvrchl83t/GYV9WGTSLVdBhOQFDY=
github.com/grpc-ecosystem/grpc-gateway v1.16.0 h1:gmcG1KaJ57LophUzW0Hy8NmPhnMZb4M0+kPpLofRdBo=
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
-github.com/grpc-ecosystem/grpc-gateway/v2 v2.7.0 h1:BZHcxBETFHIdVyhyEfOvn/RdU/QGdLI4y34qQGjGWO0=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.7.0/go.mod h1:hgWBS7lorOAVIJEQMi4ZsPv9hVvWI6+ch50m39Pf2Ks=
+github.com/grpc-ecosystem/grpc-gateway/v2 v2.11.3 h1:lLT7ZLSzGLI08vc9cpd+tYmNWjdKDqyr/2L+f6U12Fk=
+github.com/grpc-ecosystem/grpc-gateway/v2 v2.11.3/go.mod h1:o//XUCC/F+yRGJoPO/VU0GSB0f8Nhgmxx0VIRUvaC0w=
+github.com/hanwen/go-fuse/v2 v2.3.0/go.mod h1:xKwi1cF7nXAOBCXujD5ie0ZKsxc8GGSA1rlMJc+8IJs=
github.com/hashicorp/consul/api v1.1.0/go.mod h1:VmuI/Lkw1nC05EYQWNKwWGbkg+FbDBtguAZLlVdkD9Q=
github.com/hashicorp/consul/sdk v0.1.1/go.mod h1:VKf9jXwCTEY1QZP2MOLRhb5i/I/ssyNV1vwHyQBF0x8=
+github.com/hashicorp/errwrap v0.0.0-20141028054710-7554cd9344ce/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-cleanhttp v0.5.1/go.mod h1:JpRdi6/HCYpAwUzNwuwqhbovhLtngrth3wmdIIUrZ80=
github.com/hashicorp/go-immutable-radix v1.0.0/go.mod h1:0y9vanUI8NX6FsYoO3zeMjhV/C5i9g4Q3DwcSNZ4P60=
github.com/hashicorp/go-msgpack v0.5.3/go.mod h1:ahLV/dePpqEmjfWmKiqvPkv/twdG7iPBM1vqhUKIvfM=
+github.com/hashicorp/go-multierror v0.0.0-20161216184304-ed905158d874/go.mod h1:JMRHfdO9jKNzS/+BTlxCjKNQHg/jZAft8U7LloJvN7I=
github.com/hashicorp/go-multierror v1.0.0/go.mod h1:dHtQlpGsu+cZNNAkkCN/P3hoUDHhCYQXV3UM06sGGrk=
+github.com/hashicorp/go-multierror v1.1.0/go.mod h1:spPvp8C1qA32ftKqdAHm4hHTbPw+vmowP0z+KUhOZdA=
github.com/hashicorp/go-rootcerts v1.0.0/go.mod h1:K6zTfqpRlCUIjkwsN4Z+hiSfzSTQa6eBIzfwKfwNnHU=
github.com/hashicorp/go-sockaddr v1.0.0/go.mod h1:7Xibr9yA9JjQq1JpNB2Vw7kxv8xerXegt+ozgdvDeDU=
github.com/hashicorp/go-syslog v1.0.0/go.mod h1:qPfqrKkXGihmCqbJM2mZgkZGvKG1dFdvsLplgctolz4=
@@ -393,16 +999,28 @@ github.com/hashicorp/mdns v1.0.0/go.mod h1:tL+uN++7HEJ6SQLQ2/p+z2pH24WQKWjBPkE0m
github.com/hashicorp/memberlist v0.1.3/go.mod h1:ajVTdAv/9Im8oMAAj5G31PhhMCZJV2pPBoIllUwCN7I=
github.com/hashicorp/serf v0.8.2/go.mod h1:6hOLApaqBFA1NXqRQAsxw9QxuDEvNxSQRwA/JwenrHc=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
+github.com/iancoleman/strcase v0.2.0/go.mod h1:iwCmte+B7n89clKwxIoIXy/HfoL7AsD47ZCWhYzw7ho=
github.com/ianlancetaylor/demangle v0.0.0-20181102032728-5e5cf60278f6/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/ianlancetaylor/demangle v0.0.0-20210905161508-09a460cdf81d/go.mod h1:aYm2/VgdVmcIU8iMfdMvDMsRAQjcfZSKFby6HOFvi/w=
github.com/imdario/mergo v0.3.6/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
-github.com/imdario/mergo v0.3.9 h1:UauaLniWCFHWd+Jp9oCEkTBj8VO/9DKg3PV3VCNMDIg=
+github.com/imdario/mergo v0.3.8/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
github.com/imdario/mergo v0.3.9/go.mod h1:2EnlNZ0deacrJVfApfmtdGgDfMuh/nq6Ok1EcJh5FfA=
+github.com/imdario/mergo v0.3.10/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
+github.com/imdario/mergo v0.3.11/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
+github.com/imdario/mergo v0.3.12 h1:b6R2BslTbIEToALKP7LxUvijTsNI9TAe80pLWN2g/HU=
+github.com/imdario/mergo v0.3.12/go.mod h1:jmQim1M+e3UYxmgPu/WyfjB3N3VflVyUjjjwH0dnCYA=
github.com/inconshreveable/mousetrap v1.0.0/go.mod h1:PxqpIevigyE2G7u3NXJIT2ANytuPF1OarO4DADm73n8=
github.com/inconshreveable/mousetrap v1.0.1 h1:U3uMjPSQEBMNp1lFxmllqCPM6P5u/Xq7Pgzkat/bFNc=
github.com/inconshreveable/mousetrap v1.0.1/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
+github.com/ishidawataru/sctp v0.0.0-20190723014705-7c296d48a2b5/go.mod h1:DM4VvS+hD/kDi1U1QsX2fnZowwBhqD0Dk3bRPKF/Oc8=
+github.com/j-keck/arping v0.0.0-20160618110441-2cf9dc699c56/go.mod h1:ymszkNOg6tORTn+6F6j+Jc8TOr5osrynvN6ivFWZ2GA=
github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
+github.com/jmespath/go-jmespath v0.0.0-20160202185014-0b12d6b521d8/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
+github.com/jmespath/go-jmespath v0.0.0-20160803190731-bd40a432e4c7/go.mod h1:Nht3zPeWKUH0NzdCt2Blrr5ys8VGpn0CEB0cQHVjt7k=
+github.com/jmespath/go-jmespath v0.4.0/go.mod h1:T8mJZnbsbmF+m6zOOFylbeCJqk5+pHWvzYPziyZiYoo=
+github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfCI6z80xFu9LTZmf1ZRjMHUOPmWr69U=
+github.com/joefitzgerald/rainbow-reporter v0.1.0/go.mod h1:481CNgqmVHQZzdIbN52CupLJyoVwB10FQ/IQlF1pdL8=
github.com/jonboulle/clockwork v0.1.0/go.mod h1:Ii8DK3G1RaLaWxj9trq07+26W01tbo22gdxWY5EU2bo=
github.com/jonboulle/clockwork v0.2.2 h1:UOGuzwb1PwsrDAObMuhUnj0p5ULPj8V/xJ7Kx9qUBdQ=
github.com/jonboulle/clockwork v0.2.2/go.mod h1:Pkfl5aHPm1nk2H9h0bjmnJD/BcgbGXUBGnn1kMkgxc8=
@@ -412,6 +1030,7 @@ github.com/jpillora/backoff v1.0.0/go.mod h1:J/6gKK9jxlEcS3zixgDgUAsiuZ7yrSoa/FX
github.com/jsimonetti/rtnetlink v0.0.0-20190606172950-9527aa82566a/go.mod h1:Oz+70psSo5OFh8DBl0Zv2ACw7Esh6pPUphlvZG9x7uw=
github.com/jsimonetti/rtnetlink v0.0.0-20200117123717-f846d4f6c1f4/go.mod h1:WGuG/smIU4J/54PblvSbh+xvCZmpJnFgr3ds6Z55XMQ=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
+github.com/json-iterator/go v1.1.7/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.10/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.11/go.mod h1:KdQUCv79m/52Kvf8AW2vK1V8akMuk1QjK/uOdHXbAo4=
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
@@ -421,28 +1040,40 @@ github.com/jstemmer/go-junit-report v0.9.1/go.mod h1:Brl9GWCQeLvo8nXZwPNNblvFj/X
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/julienschmidt/httprouter v1.2.0/go.mod h1:SYymIcj16QtmaHHD7aYtjjsJG7VTCxuUUipMqKk8s4w=
github.com/julienschmidt/httprouter v1.3.0/go.mod h1:JR6WtHb+2LUe8TCKY3cZOxFyyO8IZAc4RVcycCCAKdM=
+github.com/karrick/godirwalk v1.17.0/go.mod h1:j4mkqPuvaLI8mp1DroR3P6ad7cyYd4c1qeJ3RV7ULlk=
github.com/kelseyhightower/envconfig v0.0.0-20180517194557-dd1402a4d99d h1:Tqg6yg0as+P38tbKytv9/yk+ifNq0CrvjlgADEniKog=
github.com/kelseyhightower/envconfig v0.0.0-20180517194557-dd1402a4d99d/go.mod h1:cccZRl6mQpaq41TPp5QxidR+Sa3axMbJDNb//FQX6Gg=
github.com/kisielk/errcheck v1.1.0/go.mod h1:EZBBE59ingxPouuu3KfxchcWSUPOHkagtvWXihfKN4Q=
github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
+github.com/klauspost/compress v1.11.3/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
+github.com/klauspost/compress v1.11.13 h1:eSvu8Tmq6j2psUJqJrLcWH6K3w5Dwc+qipbaA6eVEN4=
+github.com/klauspost/compress v1.11.13/go.mod h1:aoV0uJVorq1K+umq18yTdKaF57EivdYsUV+/s2qKfXs=
github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
+github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
github.com/konsorten/go-windows-terminal-sequences v1.0.3/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ=
+github.com/kr/fs v0.1.0/go.mod h1:FFnZGqtBN9Gxj7eW1uZ42v5BccTP0vu6NEaFoC2HwRg=
github.com/kr/logfmt v0.0.0-20140226030751-b84e30acd515/go.mod h1:+0opPa2QZZtGFBFZlji/RkVcI2GknAs/DXo4wKdlNEc=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.0/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
-github.com/kr/pretty v0.3.0 h1:WgNl7dwNpEZ6jJ9k1snq4pZsg7DOEN8hP9Xw0Tsjwk0=
github.com/kr/pretty v0.3.0/go.mod h1:640gp4NfQd8pI5XOwp5fnNeVWj67G7CFk/SaSQn7NBk=
+github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
+github.com/kylelemons/godebug v0.0.0-20170820004349-d65d576e9348/go.mod h1:B69LEHPfb2qLo0BaaOLcbitczOKLWTsrBG9LczfCD4k=
github.com/leodido/go-urn v0.0.0-20181204092800-a67a23e1c1af h1:EhEGUQX36JFkvSWzrwGjjTJxrx7atfJdxv8cxFzmaB0=
github.com/leodido/go-urn v0.0.0-20181204092800-a67a23e1c1af/go.mod h1:+cyI34gQWZcE1eQU7NVgKkkzdXDQHr1dBMtdAPozLkw=
+github.com/libopenstorage/openstorage v1.0.0/go.mod h1:Sp1sIObHjat1BeXhfMqLZ14wnOzEhNx2YQedreMcUyc=
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de h1:9TO3cAIGXtEhnIaL+V+BEER86oLrvS+kWobKpbJuye0=
github.com/liggitt/tabwriter v0.0.0-20181228230101-89fcab3d43de/go.mod h1:zAbeS9B/r2mtpb6U+EI2rYA5OAXxsYw6wTamcNW+zcE=
+github.com/linuxkit/virtsock v0.0.0-20201010232012-f8cee7dfc7a3/go.mod h1:3r6x7q95whyfWQpmGZTu3gk3v2YkMi05HEzl7Tf7YEo=
+github.com/lithammer/dedent v1.1.0/go.mod h1:jrXYCQtgg0nJiN+StA2KgR7w6CiQNv9Fd/Z9BP0jIOc=
+github.com/lyft/protoc-gen-star v0.6.0/go.mod h1:TGAoBVkt8w7MPG72TrKIu85MIdXwDuzJYeZuUPFPNwA=
+github.com/lyft/protoc-gen-star v0.6.1/go.mod h1:TGAoBVkt8w7MPG72TrKIu85MIdXwDuzJYeZuUPFPNwA=
github.com/magiconair/properties v1.8.0/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/magiconair/properties v1.8.1/go.mod h1:PppfXfuXeibc/6YijjN8zIbojt8czPbwD3XqdrwzmxQ=
github.com/mailru/easyjson v0.0.0-20190614124828-94de47d64c63/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
@@ -450,8 +1081,8 @@ github.com/mailru/easyjson v0.0.0-20190626092158-b2ccc519800e/go.mod h1:C1wdFJiN
github.com/mailru/easyjson v0.7.6/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
github.com/mailru/easyjson v0.7.7 h1:UGYAvKxe3sBsEDzO8ZeWOSlIQfWFlxbzLZe7hwFURr0=
github.com/mailru/easyjson v0.7.7/go.mod h1:xzfreul335JAWq5oZzymOObrkdz5UnU4kGfJJLY9Nlc=
-github.com/mattbaird/jsonpatch v0.0.0-20230413205102-771768614e91 h1:JnZSkFP1/GLwKCEuuWVhsacvbDQIVa5BRwAwd+9k2Vw=
-github.com/mattbaird/jsonpatch v0.0.0-20230413205102-771768614e91/go.mod h1:M1qoD/MqPgTZIk0EWKB38wE28ACRfVcn+cU08jyArI0=
+github.com/marstr/guid v1.1.0/go.mod h1:74gB1z2wpxxInTG6yaqA7KrtM0NZ+RbrcqDvYHefzho=
+github.com/mattbaird/jsonpatch v0.0.0-20171005235357-81af80346b1a/go.mod h1:M1qoD/MqPgTZIk0EWKB38wE28ACRfVcn+cU08jyArI0=
github.com/mattn/go-colorable v0.0.9/go.mod h1:9vuHe8Xs5qXnSaW/c/ABM9alt+Vo+STaOChaDxuIBZU=
github.com/mattn/go-colorable v0.1.2/go.mod h1:U0ppj6V5qS13XJ6of8GYAs25YV2eR4EVcfRqFIhoBtE=
github.com/mattn/go-isatty v0.0.3/go.mod h1:M+lRXTBqGeGNdLjl/ufCoiOlB5xdOkqRJdNxMWT7Zi4=
@@ -459,15 +1090,22 @@ github.com/mattn/go-isatty v0.0.8/go.mod h1:Iq45c/XA43vh69/j3iqttzPXn0bhXyGjM0Hd
github.com/mattn/go-isatty v0.0.12/go.mod h1:cbi8OIDigv2wuxKPP5vlRcQ1OAZbq2CE4Kysco4FUpU=
github.com/mattn/go-runewidth v0.0.7 h1:Ei8KR0497xHyKJPAv59M1dkC+rOZCMBJ+t3fZ+twI54=
github.com/mattn/go-runewidth v0.0.7/go.mod h1:H031xJmbD/WCDINGzjvQ9THkh0rPKHF+m2gUSrubnMI=
+github.com/mattn/go-shellwords v1.0.3/go.mod h1:3xCvwCdWdlDJUrvuMn7Wuy9eWs4pE8vqg+NOMyg4B2o=
+github.com/mattn/go-shellwords v1.0.6/go.mod h1:3xCvwCdWdlDJUrvuMn7Wuy9eWs4pE8vqg+NOMyg4B2o=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
-github.com/matttproud/golang_protobuf_extensions v1.0.2 h1:hAHbPm5IJGijwng3PWk09JkG9WeqChjprR5s9bBZ+OM=
github.com/matttproud/golang_protobuf_extensions v1.0.2/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
+github.com/matttproud/golang_protobuf_extensions v1.0.4 h1:mmDVorXM7PCGKw94cs5zkfA9PSy5pEvNWRP0ET0TIVo=
+github.com/matttproud/golang_protobuf_extensions v1.0.4/go.mod h1:BSXmuO+STAnVfrANrmjBb36TMTDstsz7MSK+HVaYKv4=
+github.com/maxbrunsfeld/counterfeiter/v6 v6.2.2/go.mod h1:eD9eIE7cdwcMi9rYluz88Jz2VyhSmden33/aXg4oVIY=
github.com/mdlayher/genetlink v1.0.0/go.mod h1:0rJ0h4itni50A86M2kHcgS85ttZazNt7a8H2a2cw0Gc=
github.com/mdlayher/netlink v0.0.0-20190409211403-11939a169225/go.mod h1:eQB3mZE4aiYnlUsyGGCOpPETfdQq4Jhsgf1fk3cwQaA=
github.com/mdlayher/netlink v1.0.0/go.mod h1:KxeJAFOFLG6AjpyDkQ/iIhxygIUKD+vcwqcnu43w/+M=
github.com/mdlayher/netlink v1.1.0/go.mod h1:H4WCitaheIsdF9yOYu8CFmCgQthAPIWZmcKp9uZHgmY=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
+github.com/miekg/pkcs11 v1.0.3/go.mod h1:XsNlhZGX73bx86s2hdc/FuaLm2CPZJemRLMA+WTFxgs=
github.com/mikioh/ipaddr v0.0.0-20190404000644-d465c8ab6721/go.mod h1:Ickgr2WtCLZ2MDGd4Gr0geeCH5HybhRJbonOgQpvSxc=
+github.com/mindprince/gonvml v0.0.0-20190828220739-9ebdce4bb989/go.mod h1:2eu9pRWp8mo84xCg6KswZ+USQHjwgRhNp06sozOdsTY=
+github.com/mistifyio/go-zfs v2.1.2-0.20190413222219-f784269be439+incompatible/go.mod h1:8AuVvqP/mXw1px98n46wfvcGfQ4ci2FwoAjKYxuo3Z4=
github.com/mitchellh/cli v1.0.0/go.mod h1:hNIlj7HEI86fIcpObd7a0FcrxTWetlwJDGcceTlRvqc=
github.com/mitchellh/go-homedir v1.0.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
github.com/mitchellh/go-homedir v1.1.0/go.mod h1:SfyaCUpYCn1Vlf4IUYiD9fPX4A5wJrkLzIz1N1q0pr0=
@@ -479,10 +1117,20 @@ github.com/mitchellh/iochan v1.0.0/go.mod h1:JwYml1nuB7xOzsp52dPpHFffvOCDupsG0Qu
github.com/mitchellh/mapstructure v0.0.0-20160808181253-ca63d7c062ee/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y=
github.com/mitchellh/mapstructure v1.4.1/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo=
+github.com/mitchellh/osext v0.0.0-20151018003038-5e2d6d41470f/go.mod h1:OkQIRizQZAeMln+1tSwduZz7+Af5oFlKirV/MSYes2A=
+github.com/moby/ipvs v1.0.1/go.mod h1:2pngiyseZbIKXNv7hsKj3O9UEz30c53MT9005gt2hxQ=
+github.com/moby/locker v1.0.1 h1:fOXqR41zeveg4fFODix+1Ch4mj/gT0NE1XJbp/epuBg=
+github.com/moby/locker v1.0.1/go.mod h1:S7SDdo5zpBK84bzzVlKr2V0hz+7x9hWbYC/kq7oQppc=
github.com/moby/spdystream v0.2.0 h1:cjW1zVyyoiM0T7b6UoySUFqzXMoqRckQtXwGPiBhOM8=
github.com/moby/spdystream v0.2.0/go.mod h1:f7i0iNDQJ059oMTcWxx8MA/zKFIuD/lY+0GqbN2Wy8c=
+github.com/moby/sys/mountinfo v0.4.0/go.mod h1:rEr8tzG/lsIZHBtN/JjGG+LMYx9eXgW2JI+6q0qou+A=
+github.com/moby/sys/mountinfo v0.4.1/go.mod h1:rEr8tzG/lsIZHBtN/JjGG+LMYx9eXgW2JI+6q0qou+A=
+github.com/moby/sys/mountinfo v0.5.0/go.mod h1:3bMD3Rg+zkqx8MRYPi7Pyb0Ie97QEBmdxbhnCLlSvSU=
github.com/moby/sys/mountinfo v0.6.2 h1:BzJjoreD5BMFNmD9Rus6gdd1pLuecOFPt8wC+Vygl78=
github.com/moby/sys/mountinfo v0.6.2/go.mod h1:IJb6JQeOklcdMU9F5xQ8ZALD+CUr5VlGpwtX+VE0rpI=
+github.com/moby/sys/signal v0.6.0 h1:aDpY94H8VlhTGa9sNYUFCFsMZIUh5wm0B6XkIoJj/iY=
+github.com/moby/sys/signal v0.6.0/go.mod h1:GQ6ObYZfqacOwTtlXvcmh9A26dVRul/hbOZn88Kg8Tg=
+github.com/moby/sys/symlink v0.1.0/go.mod h1:GGDODQmbFOjFsXvfLVn3+ZRxkch54RkSiGqsZeMYowQ=
github.com/moby/term v0.0.0-20220808134915-39b0c02b01ae h1:O4SWKdcHVCvYqyDV+9CJA1fcDN2L11Bule0iFy3YlAI=
github.com/moby/term v0.0.0-20220808134915-39b0c02b01ae/go.mod h1:E2VnQOmVuvZB6UYnnDB0qG5Nq/1tD9acaOpo6xmt0Kw=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
@@ -492,8 +1140,14 @@ github.com/modern-go/reflect2 v0.0.0-20180701023420-4b7aa43c6742/go.mod h1:bx2lN
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
+github.com/modocache/gover v0.0.0-20171022184752-b58185e213c5/go.mod h1:caMODM3PzxT8aQXRPkAt8xlV/e7d7w8GM5g0fa5F0D8=
+github.com/mohae/deepcopy v0.0.0-20170308212314-bb9b5e7adda9/go.mod h1:TaXosZuwdSHYgviHp1DAtfrULt5eUgsSMsZf+YrPgl8=
+github.com/mohae/deepcopy v0.0.0-20170603005431-491d3605edfb/go.mod h1:TaXosZuwdSHYgviHp1DAtfrULt5eUgsSMsZf+YrPgl8=
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 h1:n6/2gBQ3RWajuToeY6ZtZTIKv2v7ThUy5KKusIT0yc0=
github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00/go.mod h1:Pm3mSP3c5uWn86xMLZ5Sa7JB9GsEZySvHYXCTK4E9q4=
+github.com/morikuni/aec v1.0.0 h1:nP9CBfwrvYnBRgY6qfDQkygYDmYwOilePFkwzv4dU8A=
+github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
+github.com/mrunalp/fileutils v0.5.0/go.mod h1:M1WthSahJixYnrXQl/DFQuteStB1weuxD2QJNHXfbSQ=
github.com/munnerz/goautoneg v0.0.0-20120707110453-a547fc61f48d/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
@@ -501,6 +1155,7 @@ github.com/mwitkow/go-conntrack v0.0.0-20161129095857-cc309e4a2223/go.mod h1:qRW
github.com/mwitkow/go-conntrack v0.0.0-20190716064945-2f068394615f/go.mod h1:qRWi+5nqEBWmkhHvq77mSJWrCKwh8bxhgT7d/eI7P4U=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f h1:y5//uYreIhSUg3J1GEMiLbxo1LJaP8RfCpH6pymGZus=
github.com/mxk/go-flowrate v0.0.0-20140419014527-cca7078d478f/go.mod h1:ZdcZmHo+o7JKHSa8/e818NopupXU1YMK5fe1lsApnBw=
+github.com/ncw/swift v1.0.47/go.mod h1:23YIA4yWVnGwv2dQlN4bB7egfYX6YLn0Yo/S6zZO/ZM=
github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno=
github.com/nxadm/tail v1.4.4/go.mod h1:kenIhsEOeOJmVchQTgglprH7qJGnHDVpk1VPCcaMI8A=
github.com/nxadm/tail v1.4.8 h1:nPr65rt6Y5JFSKQO7qToXr7pePgD6Gwiw05lkbyAQTE=
@@ -508,8 +1163,14 @@ github.com/nxadm/tail v1.4.8/go.mod h1:+ncqLTQzXmGhMZNUePPaPqPvBxHAIsmXswZKocGu+
github.com/oklog/ulid v1.3.1/go.mod h1:CirwcVhetQ6Lv90oh/F+FBtV6XMibvdAFo93nm5qn4U=
github.com/olekukonko/tablewriter v0.0.4 h1:vHD/YYe1Wolo78koG299f7V/VAS08c6IpCLn+Ejf/w8=
github.com/olekukonko/tablewriter v0.0.4/go.mod h1:zq6QwlOf5SlnkVbMSr5EoBv3636FWnp+qbPhuoO21uA=
+github.com/onsi/ginkgo v0.0.0-20151202141238-7f8ab55aaf3b/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
+github.com/onsi/ginkgo v0.0.0-20170829012221-11459a886d9c/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
+github.com/onsi/ginkgo v1.8.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
+github.com/onsi/ginkgo v1.10.1/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
+github.com/onsi/ginkgo v1.10.3/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.11.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
+github.com/onsi/ginkgo v1.12.0/go.mod h1:oUhWkIvk5aDxtKvDDuw8gItl8pKl42LzjC9KZE0HfGg=
github.com/onsi/ginkgo v1.12.1/go.mod h1:zj2OWP4+oCPe1qIXoGWkgMRwljMUYCdkwsT2108oapk=
github.com/onsi/ginkgo v1.14.0/go.mod h1:iSB4RoI2tjJc9BBv4NKIKWKya62Rps+oPG/Lv9klQyY=
github.com/onsi/ginkgo v1.16.4 h1:29JGrr5oVBm5ulCWet69zQkzWipVXIol6ygQUe/EzNc=
@@ -521,9 +1182,15 @@ github.com/onsi/ginkgo/v2 v2.3.0/go.mod h1:Eew0uilEqZmIEZr8JrvYlvOM7Rr6xzTmMV8Ay
github.com/onsi/ginkgo/v2 v2.4.0/go.mod h1:iHkDK1fKGcBoEHT5W7YBq4RFWaQulw+caOMkAt4OrFo=
github.com/onsi/ginkgo/v2 v2.9.2 h1:BA2GMJOtfGAfagzYtrAlufIP0lq6QERkFmHLMLPwFSU=
github.com/onsi/ginkgo/v2 v2.9.2/go.mod h1:WHcJJG2dIlcCqVfBAwUCrJxSPFb6v4azBwgxeMeDuts=
+github.com/onsi/gomega v0.0.0-20151007035656-2152b45fa28a/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
+github.com/onsi/gomega v0.0.0-20170829124025-dcabb60a477c/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
+github.com/onsi/gomega v1.5.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
+github.com/onsi/gomega v1.7.0/go.mod h1:ex+gbHU/CVuBBDIJjb2X0qEXbFg53c61hWP/1CpauHY=
github.com/onsi/gomega v1.7.1/go.mod h1:XdKZgCCFLUoM/7CFJVPcG8C1xQ1AJ0vpAezJrB7JYyY=
github.com/onsi/gomega v1.8.1/go.mod h1:Ho0h+IUsWyvy1OpqCwxlQ/21gkhVunqlU8fDGcoTdcA=
+github.com/onsi/gomega v1.9.0/go.mod h1:Ho0h+IUsWyvy1OpqCwxlQ/21gkhVunqlU8fDGcoTdcA=
github.com/onsi/gomega v1.10.1/go.mod h1:iN09h71vgCQne3DLsj+A5owkum+a2tYe+TOCB1ybHNo=
+github.com/onsi/gomega v1.10.3/go.mod h1:V9xEwhxec5O8UDM77eCW8vLymOMltsqPVYWrpDsH8xc=
github.com/onsi/gomega v1.17.0/go.mod h1:HnhC7FXeEQY45zxNK3PPoIUhzk/80Xly9PcubAlGdZY=
github.com/onsi/gomega v1.19.0/go.mod h1:LY+I3pBVzYsTBU1AnDwOSxaYi9WoWiqgwooUqq9yPro=
github.com/onsi/gomega v1.20.1/go.mod h1:DtrZpjmvpn2mPm4YWQa0/ALMDj9v4YxLgojwPeREyVo=
@@ -532,20 +1199,56 @@ github.com/onsi/gomega v1.22.1/go.mod h1:x6n7VNe4hw0vkyYUM4mjIXx3JbLiPaBPNgB7PRQ
github.com/onsi/gomega v1.23.0/go.mod h1:Z/NWtiqwBrwUt4/2loMmHL63EDLnYHmVbuBpDr2vQAg=
github.com/onsi/gomega v1.27.4 h1:Z2AnStgsdSayCMDiCU42qIz+HLqEPcgiOCXjAU/w+8E=
github.com/onsi/gomega v1.27.4/go.mod h1:riYq/GJKh8hhoM01HN6Vmuy93AarCXCBGpvFDK3q3fQ=
+github.com/opencontainers/go-digest v0.0.0-20170106003457-a6d0ee40d420/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
+github.com/opencontainers/go-digest v0.0.0-20180430190053-c9281466c8b2/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
+github.com/opencontainers/go-digest v1.0.0-rc1/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
+github.com/opencontainers/go-digest v1.0.0-rc1.0.20180430190053-c9281466c8b2/go.mod h1:cMLVZDEM3+U2I4VmLI6N8jQYUd2OVphdqWwCJHrFt2s=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
-github.com/opencontainers/selinux v1.10.0 h1:rAiKF8hTcgLI3w0DHm6i0ylVVcOrlgR1kK99DRLDhyU=
+github.com/opencontainers/image-spec v1.0.0/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
+github.com/opencontainers/image-spec v1.0.1/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
+github.com/opencontainers/image-spec v1.0.2/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
+github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799 h1:rc3tiVYb5z54aKaDfakKn0dDjIyPpTtszkjuMzyt7ec=
+github.com/opencontainers/image-spec v1.0.3-0.20211202183452-c5a74bcca799/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
+github.com/opencontainers/runc v0.0.0-20190115041553-12f6a991201f/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
+github.com/opencontainers/runc v0.1.1/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
+github.com/opencontainers/runc v1.0.0-rc8.0.20190926000215-3e425f80a8c9/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
+github.com/opencontainers/runc v1.0.0-rc9/go.mod h1:qT5XzbpPznkRYVz/mWwUaVBUv2rmF59PVA73FjuZG0U=
+github.com/opencontainers/runc v1.0.0-rc93/go.mod h1:3NOsor4w32B2tC0Zbl8Knk4Wg84SM2ImC1fxBuqJ/H0=
+github.com/opencontainers/runc v1.0.2/go.mod h1:aTaHFFwQXuA71CiyxOdFFIorAoemI04suvGRQFzWTD0=
+github.com/opencontainers/runc v1.1.4 h1:nRCz/8sKg6K6jgYAFLDlXzPeITBZJyX28DBVhWD+5dg=
+github.com/opencontainers/runc v1.1.4/go.mod h1:1J5XiS+vdZ3wCyZybsuxXZWGrgSr8fFJHLXuG2PsnNg=
+github.com/opencontainers/runtime-spec v0.1.2-0.20190507144316-5b71a03e2700/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-spec v1.0.1/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-spec v1.0.2-0.20190207185410-29686dbc5559/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-spec v1.0.2/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-spec v1.0.3-0.20200929063507-e6143ca7d51d/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-spec v1.0.3-0.20210326190908-1c3f411f0417/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-spec v1.1.0-rc.1 h1:wHa9jroFfKGQqFHj0I1fMRKLl0pfj+ynAqBxo3v6u9w=
+github.com/opencontainers/runtime-spec v1.1.0-rc.1/go.mod h1:jwyrGlmzljRJv/Fgzds9SsS/C5hL+LL3ko9hs6T5lQ0=
+github.com/opencontainers/runtime-tools v0.0.0-20181011054405-1d69bd0f9c39/go.mod h1:r3f7wjNzSs2extwzU3Y+6pKfobzPh+kKFJ3ofN+3nfs=
+github.com/opencontainers/selinux v1.6.0/go.mod h1:VVGKuOLlE7v4PJyT6h7mNWvq1rzqiriPsEqVhc+svHE=
+github.com/opencontainers/selinux v1.8.0/go.mod h1:RScLhm78qiWa2gbVCcGkC7tCGdgk3ogry1nUQF8Evvo=
+github.com/opencontainers/selinux v1.8.2/go.mod h1:MUIHuUEvKB1wtJjQdOyYRgOnLD2xAPP8dBsCoU0KuF8=
github.com/opencontainers/selinux v1.10.0/go.mod h1:2i0OySw99QjzBBQByd1Gr9gSjvuho1lHsJxIJ3gGbJI=
+github.com/opencontainers/selinux v1.10.1 h1:09LIPVRP3uuZGQvgR+SgMSNBd1Eb3vlRbGqQpoHsF8w=
+github.com/opencontainers/selinux v1.10.1/go.mod h1:2i0OySw99QjzBBQByd1Gr9gSjvuho1lHsJxIJ3gGbJI=
github.com/opentracing/opentracing-go v1.1.0/go.mod h1:UkNAQd3GIcIGf0SeVgPpRdFStlNbqXla1AfSYxPUl2o=
github.com/pascaldekloe/goe v0.0.0-20180627143212-57f6aae5913c/go.mod h1:lzWF7FIEvWOWxwDKqyGYQf6ZUaNfKdP144TG7ZOy1lc=
github.com/pelletier/go-toml v1.2.0/go.mod h1:5z9KED0ma1S8pY6P1sdut58dfprrGBbd/94hg7ilaic=
github.com/pelletier/go-toml v1.7.0/go.mod h1:vwGMzjaWMwyfHwgIBhI2YUM4fB6nL6lVAvS1LBMMhTE=
+github.com/pelletier/go-toml v1.8.1/go.mod h1:T2/BmBdy8dvIRq1a/8aqjN41wvWlN4lrapLU/GW4pbc=
+github.com/pelletier/go-toml v1.9.5 h1:4yBQzkHv+7BHq2PQUZF3Mx0IYxG7LsP222s7Agd3ve8=
+github.com/pelletier/go-toml v1.9.5/go.mod h1:u1nR/EPcESfeI/szUZKdtJ0xRNbUoANCkoOuaOx1Y+c=
github.com/peterbourgon/diskv v2.0.1+incompatible h1:UBdAOUP5p4RWqPBg048CAvpKN+vxiaj6gdUUzhl4XmI=
github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU=
github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/pkg/errors v0.8.1-0.20171018195549-f15c970de5b7/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/pkg/sftp v1.10.1/go.mod h1:lYOWFsE0bwd1+KfKJaKeuokY15vzFx25BLbzYYoAxZI=
+github.com/pkg/sftp v1.13.1/go.mod h1:3HaPG6Dq1ILlpPZRO0HVMrsydcdLt6HRDccSgb87qRg=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/posener/complete v1.1.1/go.mod h1:em0nMJCgc9GFtwrmVmEMR/ZL6WyhyjMBndrE9hABlRI=
@@ -558,32 +1261,43 @@ github.com/projectcalico/go-json v0.0.0-20161128004156-6219dc7339ba h1:aaF2byUCZ
github.com/projectcalico/go-json v0.0.0-20161128004156-6219dc7339ba/go.mod h1:q8EdCgBdMQzgiX/uk4GXLWLk+gIHd1a7mWUAamJKDb4=
github.com/projectcalico/go-yaml-wrapper v0.0.0-20191112210931-090425220c54 h1:Jt2Pic9dxgJisekm8q2WV9FaWxUJhhRfwHSP640drww=
github.com/projectcalico/go-yaml-wrapper v0.0.0-20191112210931-090425220c54/go.mod h1:UgC0aTQ2KMDxlX3lU/stndk7DMUBJqzN40yFiILHgxc=
+github.com/prometheus/client_golang v0.0.0-20180209125602-c332b6f63c06/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.1/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v0.9.3/go.mod h1:/TN21ttK/J9q6uSwhBd54HahCDft0ttaMvbicHlPoso=
github.com/prometheus/client_golang v1.0.0/go.mod h1:db9x61etRT2tGnBNRi70OPL5FsnadC4Ky3P0J6CfImo=
+github.com/prometheus/client_golang v1.1.0/go.mod h1:I1FGZT9+L76gKKOs5djB6ezCbFQP1xR9D75/vuwEF3g=
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_golang v1.11.1/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_golang v1.12.1/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY=
+github.com/prometheus/client_golang v1.13.0/go.mod h1:vTeo+zgvILHsnnj/39Ou/1fPN5nJFOEMgftOUOmlvYQ=
github.com/prometheus/client_golang v1.14.0 h1:nJdhIvne2eSX/XRAFV9PcvFFRbrjbcTUj0VP62TMhnw=
github.com/prometheus/client_golang v1.14.0/go.mod h1:8vpkKitgIVNcqrRBWh1C4TIUQgYNtG/XQE4E/Zae36Y=
+github.com/prometheus/client_model v0.0.0-20171117100541-99fa1f4be8e5/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190129233127-fd36f4220a90/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.2.0/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.3.0 h1:UBgGFHqYdG/TPFD1B1ogZywDqEkwp3fBMvqdiQ7Xew4=
github.com/prometheus/client_model v0.3.0/go.mod h1:LDGWKZIo7rky3hgvBe+caln+Dr3dPggB5dvjtD7w9+w=
+github.com/prometheus/common v0.0.0-20180110214958-89604d197083/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.0.0-20181113130724-41aa239b4cce/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.4.0/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
github.com/prometheus/common v0.4.1/go.mod h1:TNfzLD0ON7rHzMJeJkieUDPYmFC7Snx/y86RQel1bk4=
+github.com/prometheus/common v0.6.0/go.mod h1:eBmuwkDJBwy6iBfxCBob6t6dR6ENT/y+J+Zk0j9GMYc=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
github.com/prometheus/common v0.32.1/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
github.com/prometheus/common v0.37.0 h1:ccBbHCgIiT9uSoFY0vX8H3zsNR5eLt17/RQLUvn8pXE=
github.com/prometheus/common v0.37.0/go.mod h1:phzohg0JFMnBEFGxTDbfu3QyL5GI8gTQJFhYO5B3mfA=
+github.com/prometheus/procfs v0.0.0-20180125133057-cb4147076ac7/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20190507164030-5867b95ac084/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
+github.com/prometheus/procfs v0.0.0-20190522114515-bc1a522cf7b1/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
github.com/prometheus/procfs v0.0.2/go.mod h1:TjEm7ze935MbeOT/UhFTIMYKhuLP4wbCsTZCD3I8kEA=
+github.com/prometheus/procfs v0.0.3/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
+github.com/prometheus/procfs v0.0.5/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDaekg4FpcdQ=
+github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A=
github.com/prometheus/procfs v0.0.11/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
@@ -591,20 +1305,31 @@ github.com/prometheus/procfs v0.7.3/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1
github.com/prometheus/procfs v0.8.0 h1:ODq8ZFEaYeCaZOJlZZdJA2AbQR98dSHSM1KW/You5mo=
github.com/prometheus/procfs v0.8.0/go.mod h1:z7EfXMXOkbkqb9IINtpCn86r/to3BnA0uaxHdg830/4=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
+github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/rogpeppe/fastuuid v0.0.0-20150106093220-6724a57986af/go.mod h1:XWv6SoW27p1b0cqNHllgS5HIMJraePCO15w5zCzIWYg=
github.com/rogpeppe/fastuuid v1.2.0/go.mod h1:jVj6XXZzXRy/MSR5jhDC/2q6DgLz+nrA6LYCDYWNEvQ=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
-github.com/rogpeppe/go-internal v1.6.1 h1:/FiVV8dS/e+YqF2JvO3yXRFbBLTIuSDkuC7aBOAvL+k=
github.com/rogpeppe/go-internal v1.6.1/go.mod h1:xXDCJY+GAPziupqXw64V24skbSoqbTEfhy4qGm1nDQc=
+github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8=
+github.com/rubiojr/go-vhd v0.0.0-20200706105327-02e210299021/go.mod h1:DM5xW0nvfNNm2uytzsvhI3OnX8uzaRAg8UX/CnDqbto=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/russross/blackfriday/v2 v2.1.0 h1:JIOH55/0cWyOuilr9/qlrm0BSXldqnqwMsf35Ld67mk=
github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
+github.com/safchain/ethtool v0.0.0-20190326074333-42ed695e3de8/go.mod h1:Z0q5wiBQGYcxhMZ6gUqHn6pYNLypFAvaL3UvgZLR0U4=
+github.com/satori/go.uuid v1.2.0/go.mod h1:dA0hQrYB0VpLJoorglMZABFdXlWrHn1NEOzdhQKdks0=
+github.com/sclevine/spec v1.2.0/go.mod h1:W4J29eT/Kzv7/b9IWLB055Z+qvVC9vt0Arko24q7p+U=
github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
+github.com/seccomp/libseccomp-golang v0.9.1/go.mod h1:GbW5+tmTXfcxTToHLXlScSlAvWlF4P2Ca7zGrPiEpWo=
+github.com/seccomp/libseccomp-golang v0.9.2-0.20220502022130-f33da4d89646/go.mod h1:JA8cRccbGaA1s33RQf7Y1+q9gHmZX1yB/z9WDN1C6fg=
github.com/sergi/go-diff v1.1.0 h1:we8PVUC3FE2uYfodKH/nBHMSetSfHDR6scGdBi+erh0=
+github.com/sergi/go-diff v1.1.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
+github.com/sirupsen/logrus v1.0.4-0.20170822132746-89742aefa4b2/go.mod h1:pMByvHTf9Beacp5x1UXfOR9xyW/9antXMhjMPG0dEzc=
+github.com/sirupsen/logrus v1.0.6/go.mod h1:pMByvHTf9Beacp5x1UXfOR9xyW/9antXMhjMPG0dEzc=
github.com/sirupsen/logrus v1.2.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo=
+github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q=
github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE=
github.com/sirupsen/logrus v1.6.0/go.mod h1:7uNnSEd1DgxDLC74fIahvMZmmYsHGZGEOFrfsX/uA88=
github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
@@ -612,6 +1337,7 @@ github.com/sirupsen/logrus v1.8.1/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic
github.com/sirupsen/logrus v1.9.0 h1:trlNQbNUG3OdDrDil03MCb1H2o9nJ1x4/5LYw7byDE0=
github.com/sirupsen/logrus v1.9.0/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/smartystreets/assertions v0.0.0-20180927180507-b2de0cb4f26d/go.mod h1:OnSkiWE9lh6wB0YB77sQom3nweQdgAjqCqsofrRNTgc=
+github.com/smartystreets/goconvey v0.0.0-20190330032615-68dc04aab96a/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/smartystreets/goconvey v1.6.4/go.mod h1:syvi0/a8iFYH4r/RixwvyeAJjdLS9QV7WQ/tjFTllLA=
github.com/soheilhy/cmux v0.1.4/go.mod h1:IM3LyeVVIOuxMH7sFAkER9+bJ4dT7Ms6E4xg4kGIyLM=
github.com/soheilhy/cmux v0.1.5 h1:jjzc5WVemNEDTLwv9tlmemhC73tI08BNOIGwBOo10Js=
@@ -619,26 +1345,39 @@ github.com/soheilhy/cmux v0.1.5/go.mod h1:T7TcVDs9LWfQgPlPsdngu6I6QIoyIFZDDC6sNE
github.com/spaolacci/murmur3 v0.0.0-20180118202830-f09979ecbc72/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/spf13/afero v1.1.2/go.mod h1:j4pytiNVoe2o6bmDsKpLACNPDBIoEAkihy7loJ1B0CQ=
github.com/spf13/afero v1.2.2/go.mod h1:9ZxEEn6pIJ8Rxe320qSDBk6AsU0r9pR7Q4OcevTdifk=
+github.com/spf13/afero v1.3.3/go.mod h1:5KUK8ByomD5Ti5Artl0RtHeI5pTF7MIDuXL3yY520V4=
+github.com/spf13/afero v1.6.0/go.mod h1:Ai8FlHk4v/PARR026UzYexafAt9roJ7LcLMAmO6Z93I=
+github.com/spf13/afero v1.9.2/go.mod h1:iUV7ddyEEZPO5gA3zD4fJt6iStLlL+Lg4m2cihcDf8Y=
github.com/spf13/cast v1.3.0/go.mod h1:Qx5cxh0v+4UWYiBimWS+eyWzqEqokIECu5etghLkUJE=
+github.com/spf13/cast v1.6.0 h1:GEiTHELF+vaR5dhz3VqZfFSzZjYbgeKDpBxQVS4GYJ0=
+github.com/spf13/cast v1.6.0/go.mod h1:ancEpBxwJDODSW/UG4rDrAqiKolqNNh2DX3mk86cAdo=
+github.com/spf13/cobra v0.0.2-0.20171109065643-2da4a54c5cee/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
+github.com/spf13/cobra v0.0.3/go.mod h1:1l0Ry5zgKvJasoi3XT1TypsSe7PqH0Sj9dhYf7v3XqQ=
github.com/spf13/cobra v0.0.5/go.mod h1:3K3wKZymM7VvHMDS9+Akkh4K60UwM26emMESw8tLCHU=
github.com/spf13/cobra v1.0.0/go.mod h1:/6GTrnGXV9HjY+aR4k0oJ5tcvakLuG6EuKReYlHNrgE=
github.com/spf13/cobra v1.1.3/go.mod h1:pGADOWyqRD/YMrPZigI/zbliZ2wVD/23d+is3pSWzOo=
+github.com/spf13/cobra v1.4.0/go.mod h1:Wo4iy3BUC+X2Fybo0PDqwJIv3dNRiZLHQymsfxlB84g=
github.com/spf13/cobra v1.6.0 h1:42a0n6jwCot1pUmomAp4T7DeMD+20LFv4Q54pxLf2LI=
github.com/spf13/cobra v1.6.0/go.mod h1:IOw/AERYS7UzyrGinqmz6HLUo219MORXGxhbaJUqzrY=
github.com/spf13/jwalterweatherman v1.0.0/go.mod h1:cQK4TGJAtQXfYWX+Ddv3mKDzgVb68N+wFjFa4jdeBTo=
+github.com/spf13/pflag v1.0.1-0.20171106142849-4c012f6dcd95/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.3/go.mod h1:DYY7MBk1bdzusC3SYhjObp+wFpr4gzcvqqNjLnInEg4=
github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/spf13/viper v1.3.2/go.mod h1:ZiWeW+zYFKm7srdB9IoDzzZXaJaI5eL9QjNiN/DMA2s=
github.com/spf13/viper v1.4.0/go.mod h1:PTJ7Z/lr49W6bUbkmS1V3by4uWynFiR9p7+dSq/yZzE=
github.com/spf13/viper v1.7.0/go.mod h1:8WkrPz2fc9jxqZNCJI/76HCieCp4Q8HaLFoCha5qpdg=
+github.com/stefanberger/go-pkcs11uri v0.0.0-20201008174630-78d3cae3a980/go.mod h1:AO3tvPzVZ/ayst6UlUKUv6rcPQInYe3IknH3jYhAKu8=
github.com/stoewer/go-strcase v1.2.0 h1:Z2iHWqGXH00XYgqDmNgQbIBxf3wrNq0F3feEy0ainaU=
github.com/stoewer/go-strcase v1.2.0/go.mod h1:IBiWB2sKIp3wVVQ3Y035++gc+knqhUQag1KpM8ahLw8=
+github.com/stretchr/objx v0.0.0-20180129172003-8a3f7159479f/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
+github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0 h1:1zr/of2m5FGMsad5YfcqgdqdWrIhu+EBEJRhR1U7z/c=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
+github.com/stretchr/testify v0.0.0-20180303142811-b89eecf5ca5d/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
@@ -650,16 +1389,35 @@ github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO
github.com/stretchr/testify v1.8.1 h1:w7B6lhMri9wdJUVmEZPGGhZzrYTPvgJArz7wNPgYKsk=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/subosito/gotenv v1.2.0/go.mod h1:N0PQaV/YGNqwC0u51sEeR/aUtSLEXKX9iv69rRypqCw=
+github.com/syndtr/gocapability v0.0.0-20170704070218-db04d3cc01c8/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
+github.com/syndtr/gocapability v0.0.0-20180916011248-d98352740cb2/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
+github.com/syndtr/gocapability v0.0.0-20200815063812-42c35b437635/go.mod h1:hkRG7XYTFWNJGYcbNJQlaLq0fg1yr4J4t/NcTQtrfww=
+github.com/tchap/go-patricia v2.2.6+incompatible/go.mod h1:bmLyhP68RS6kStMGxByiQ23RP/odRBOTVjwp2cDyi6I=
github.com/tmc/grpc-websocket-proxy v0.0.0-20190109142713-0ad062ec5ee5/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/tmc/grpc-websocket-proxy v0.0.0-20201229170055-e5319fda7802 h1:uruHq4dN7GR16kFc5fp3d1RIYzJW5onx8Ybykw2YQFA=
github.com/tmc/grpc-websocket-proxy v0.0.0-20201229170055-e5319fda7802/go.mod h1:ncp9v5uamzpCO7NfCPTXjqaC+bZgJeR0sMTm6dMHP7U=
github.com/ugorji/go v1.1.4/go.mod h1:uQMGLiO92mf5W77hV/PUCpI3pbzQx3CRekS0kk+RGrc=
github.com/ugorji/go/codec v0.0.0-20181204163529-d75b2dcb6bc8/go.mod h1:VFNgLljTbGfSG7qAOspJ7OScBnGdDN/yBr0sguwnwf0=
+github.com/urfave/cli v0.0.0-20171014202726-7bc6a0acffa5/go.mod h1:70zkFmudgCuE/ngEzBv17Jvp/497gISqfk5gWijbERA=
+github.com/urfave/cli v1.22.1/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
+github.com/urfave/cli v1.22.2/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
+github.com/vishvananda/netlink v0.0.0-20181108222139-023a6dafdcdf/go.mod h1:+SR5DhBJrl6ZM7CoCKvpw5BKroDKQ+PJqOg65H/2ktk=
+github.com/vishvananda/netlink v1.1.0/go.mod h1:cTgwzPIzzgDAYoQrMm0EdrjRUBkTqKYppBueQtXaqoE=
+github.com/vishvananda/netlink v1.1.1-0.20201029203352-d40f9887b852/go.mod h1:twkDnbuQxJYemMlGd4JFIcuhgX83tXhKS2B/PRMpOho=
+github.com/vishvananda/netlink v1.1.1-0.20211118161826-650dca95af54/go.mod h1:twkDnbuQxJYemMlGd4JFIcuhgX83tXhKS2B/PRMpOho=
github.com/vishvananda/netlink v1.2.1-beta.2.0.20220630165224-c591ada0fb2b h1:CyMWBGvc1ZOvUBxW51DVTSIIAeJWWJJs+Ko3ouM/AVI=
github.com/vishvananda/netlink v1.2.1-beta.2.0.20220630165224-c591ada0fb2b/go.mod h1:twkDnbuQxJYemMlGd4JFIcuhgX83tXhKS2B/PRMpOho=
+github.com/vishvananda/netns v0.0.0-20180720170159-13995c7128cc/go.mod h1:ZjcWmFBXmLKZu9Nxj3WKYEafiSqer2rnvPr0en9UNpI=
+github.com/vishvananda/netns v0.0.0-20191106174202-0a2b9b5464df/go.mod h1:JP3t17pCcGlemwknint6hfoeCVQrEMVwxRLRjXpq+BU=
github.com/vishvananda/netns v0.0.0-20200728191858-db3c7e526aae/go.mod h1:DD4vA1DwXk04H54A1oHXtwZmA0grkVMdPxx/VGLCah0=
github.com/vishvananda/netns v0.0.0-20210104183010-2eb08e3e575f h1:p4VB7kIXpOQvVn1ZaTIVp+3vuYAXFe3OJEvjbUYJLaA=
github.com/vishvananda/netns v0.0.0-20210104183010-2eb08e3e575f/go.mod h1:DD4vA1DwXk04H54A1oHXtwZmA0grkVMdPxx/VGLCah0=
+github.com/vmware/govmomi v0.20.3/go.mod h1:URlwyTFZX72RmxtxuaFL2Uj3fD1JTvZdx59bHWk6aFU=
+github.com/willf/bitset v1.1.11-0.20200630133818-d5bec3311243/go.mod h1:RjeCKbqT1RxIR/KWY6phxZiaY1IyutSBfGjNPySAYV4=
+github.com/willf/bitset v1.1.11/go.mod h1:83CECat5yLh5zVOf4P1ErAgKA5UDvKtgyUABdr3+MjI=
+github.com/xeipuuv/gojsonpointer v0.0.0-20180127040702-4e3ac2762d5f/go.mod h1:N2zxlSyiKSe5eX1tZViRH5QA0qijqEDrYZiPEAiq3wU=
+github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415/go.mod h1:GwrjFmJcFw6At/Gs6z4yjiIwzuJ1/+UwLxMQDVQXShQ=
+github.com/xeipuuv/gojsonschema v0.0.0-20180618132009-1d523034197f/go.mod h1:5yf86TLmAcydyeJq5YvxkGPE2fm/u4myDekKRoLuqhs=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2 h1:eY9dn8+vbi4tKz5Qo6v2eYzo7kUS51QINcR5jNpbZS8=
github.com/xiang90/probing v0.0.0-20190116061207-43a291ad63a2/go.mod h1:UETIi67q53MR2AWcXfiuqkDkRtnGDLqkBTpCHuJHxtU=
github.com/xlab/treeprint v1.1.0 h1:G/1DjNkPpfZCFt9CSh6b5/nY4VimlbHF3Rh4obvtzDk=
@@ -672,7 +1430,12 @@ github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9dec
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.1/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
+github.com/yvasiyarov/go-metrics v0.0.0-20140926110328-57bccd1ccd43/go.mod h1:aX5oPXxHm3bOH+xeAttToC8pqch2ScQN/JoXYupl6xs=
+github.com/yvasiyarov/gorelic v0.0.0-20141212073537-a9bba5b9ab50/go.mod h1:NUSPSUX/bi6SeDMUh6brw0nXpxHnc96TguQh0+r/ssA=
+github.com/yvasiyarov/newrelic_platform_go v0.0.0-20140908184405-b21fdbd4370f/go.mod h1:GlGEuHIJweS1mbCqG+7vt2nvWLzLLnRHbXz5JKd/Qbg=
go.etcd.io/bbolt v1.3.2/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
+go.etcd.io/bbolt v1.3.3/go.mod h1:IbVyRI1SCnLcuJnV2u8VeU0CEYM7e686BmAb1XKL+uU=
+go.etcd.io/bbolt v1.3.5/go.mod h1:G5EMThwa9y8QZGBClrRx5EY+Yw9kAhnjy3bSjsnlVTQ=
go.etcd.io/bbolt v1.3.6 h1:/ecaJf0sk1l4l6V4awd65v2C3ILy7MSj+s/x1ADCIMU=
go.etcd.io/bbolt v1.3.6/go.mod h1:qXsaaIqmgQH0T+OPdb99Bf+PKfBBQVAdyD6TY9G8XM4=
go.etcd.io/etcd/api/v3 v3.5.5/go.mod h1:KFtNaxGDw4Yx/BA4iPPwevUTAuqcsPxzyX8PHydchN8=
@@ -693,6 +1456,7 @@ go.etcd.io/etcd/raft/v3 v3.5.5 h1:Ibz6XyZ60OYyRopu73lLM/P+qco3YtlZMOhnXNS051I=
go.etcd.io/etcd/raft/v3 v3.5.5/go.mod h1:76TA48q03g1y1VpTue92jZLr9lIHKUNcYdZOOGyx8rI=
go.etcd.io/etcd/server/v3 v3.5.5 h1:jNjYm/9s+f9A9r6+SC4RvNaz6AqixpOvhrFdT0PvIj0=
go.etcd.io/etcd/server/v3 v3.5.5/go.mod h1:rZ95vDw/jrvsbj9XpTqPrTAB9/kzchVdhRirySPkUBc=
+go.mozilla.org/pkcs7 v0.0.0-20200128120323-432b2356ecb1/go.mod h1:SNgMg+EgDFwmvSmLRTNKC5fegJjB7v23qTQ0XLGUNHk=
go.opencensus.io v0.21.0/go.mod h1:mSImk1erAIZhrmZN+AvHh14ztQfjbGwt4TtuofqLduU=
go.opencensus.io v0.22.0/go.mod h1:+kGneAE2xo2IficOXnaByMWTGM9T73dGwxeWcUqIpI8=
go.opencensus.io v0.22.2/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
@@ -700,11 +1464,15 @@ go.opencensus.io v0.22.3/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.4/go.mod h1:yxeiOL68Rb0Xd1ddK5vPZ/oVn4vY4Ynel7k9FzqtOIw=
go.opencensus.io v0.22.5/go.mod h1:5pWMHQbX5EPX2/62yrJeAkowc+lfs/XD7Uxpq3pI6kk=
go.opencensus.io v0.23.0/go.mod h1:XItmlyltB5F7CS4xOC1DcqMoFqwtC6OG2xF7mCv7P7E=
+go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
+go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
+go.opentelemetry.io/contrib/instrumentation/github.com/emicklei/go-restful/otelrestful v0.35.0/go.mod h1:DQYkU9srMFqLUTVA/7/WlRHdnYDB7wyMMlle2ktMjfI=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.25.0/go.mod h1:E5NNboN0UqSAki0Atn9kVwaN7I+l25gGxDqBueo/74E=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.35.0 h1:xFSRQBbXF6VvYRf2lqMJXxoB72XI1K/azav8TekHHSw=
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.35.0/go.mod h1:h8TWwRAhQpOd0aM5nYsRD8+flnkj+526GEIVlarH7eY=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.35.0 h1:Ajldaqhxqw/gNzQA45IKFWLdG7jZuXX/wBW1d5qvbUI=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.35.0/go.mod h1:9NiG9I2aHTKkcxqCILhjtyNA1QEiCjdBACv4IvrFQ+c=
+go.opentelemetry.io/contrib/propagators/b3 v1.10.0/go.mod h1:oxvamQ/mTDFQVugml/uFS59+aEUnFLhmd1wsG+n5MOE=
go.opentelemetry.io/otel v1.0.1/go.mod h1:OPEOD4jIT2SlZPMmwT6FqZz2C0ZNdQqiWcoK6M0SNFU=
go.opentelemetry.io/otel v1.8.0/go.mod h1:2pkj+iMj0o03Y+cW6/m8Y4WkRdYN3AvCXCnzRMp9yvM=
go.opentelemetry.io/otel v1.10.0 h1:Y7DTJMR6zs1xkS/upamJYk0SxxN4C9AqRd77jmZnyY4=
@@ -728,6 +1496,7 @@ go.opentelemetry.io/otel/trace v1.10.0 h1:npQMbR8o7mum8uF95yFbOEJffhs1sbCOfDh8zA
go.opentelemetry.io/otel/trace v1.10.0/go.mod h1:Sij3YYczqAdz+EhmGhE6TpTxUO5/F/AzrK+kxfGqySM=
go.opentelemetry.io/proto/otlp v0.7.0/go.mod h1:PqfVotwruBrMGOCsRd/89rSnXhoiJIqeYNgFYFoEGnI=
go.opentelemetry.io/proto/otlp v0.9.0/go.mod h1:1vKfU9rv61e9EVGthD1zNvUbiwPcimSsOPU9brfSHJg=
+go.opentelemetry.io/proto/otlp v0.15.0/go.mod h1:H7XAot3MsfNsj7EXtrA2q5xSNQ10UqI405h3+duxN4U=
go.opentelemetry.io/proto/otlp v0.19.0 h1:IVN6GR+mhC4s5yfcTbmzHYODqvWAp3ZedA2SJPI1Nnw=
go.opentelemetry.io/proto/otlp v0.19.0/go.mod h1:H7XAot3MsfNsj7EXtrA2q5xSNQ10UqI405h3+duxN4U=
go.starlark.net v0.0.0-20200306205701-8dd3e2ee1dd5 h1:+FNtrFTmVw0YZGpBGX56XDee331t6JAXeK2bcyhLOOc=
@@ -748,20 +1517,29 @@ go.uber.org/zap v1.17.0/go.mod h1:MXVU+bhUf/A7Xi2HNOnopQOrmycQ5Ih87HtOu4q5SSo=
go.uber.org/zap v1.19.0/go.mod h1:xg/QME4nWcxGxrpdeYfq7UvYrLh66cuVKdrbD1XF/NI=
go.uber.org/zap v1.24.0 h1:FiJd5l1UOLj0wCgbSE0rwwXHzEdAZS6hiiSnxJN/D60=
go.uber.org/zap v1.24.0/go.mod h1:2kMP+WWQ8aoFoedH3T2sq6iJ2yDWpHbP0f6MQbS9Gkg=
+golang.org/x/crypto v0.0.0-20171113213409-9f005a07e0d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20180904163835-0709b304e793/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
+golang.org/x/crypto v0.0.0-20181009213950-7c1a557ab941/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181029021203-45a5f77698d3/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20181203042331-505ab145d0a9/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190605123033-f99c8df09eb5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
-golang.org/x/crypto v0.0.0-20191002192127-34f69633bfdc/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20190701094942-4def268fd1a4/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
+golang.org/x/crypto v0.0.0-20190820162420-60c769a6c586/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200204104054-c9f3fb736b72/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/crypto v0.0.0-20200728195943-123391ffb6de/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
+golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
+golang.org/x/crypto v0.0.0-20210421170649-83a5a9bb288b/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
+golang.org/x/crypto v0.0.0-20211108221036-ceb1ce70b4fa/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
+golang.org/x/crypto v0.0.0-20211215153901-e495a2d5b3d3/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.0.0-20220411220226-7b82a4e95df4/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
-golang.org/x/crypto v0.1.0 h1:MDRAIl0xIo9Io2xV565hzXHw3zVseKrJKodhohM5CjU=
golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw=
+golang.org/x/crypto v0.13.0 h1:mvySKfSWJ+UKUii46M40LOvyWfN0s2U+46/jDd0e6Ck=
+golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190306152737-a1d7652674e8/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20190510132918-efd6b22b2522/go.mod h1:ZjyILWgesfNpC6sMxTJOJm9Kp84zZh5NQWvqDGG3Qr8=
@@ -772,6 +1550,8 @@ golang.org/x/exp v0.0.0-20191227195350-da58074b4299/go.mod h1:2RIsYlXP63K8oxa1u0
golang.org/x/exp v0.0.0-20200119233911-0405dc783f0a/go.mod h1:2RIsYlXP63K8oxa1u096TMicItID8zy7Y6sNkU49FU4=
golang.org/x/exp v0.0.0-20200207192155-f17229e696bd/go.mod h1:J/WKrq2StrnmMY6+EHIKF9dgMWnmCNThgcyBT1FY9mM=
golang.org/x/exp v0.0.0-20200224162631-6cc2880d07d6/go.mod h1:3jZMyOhIsHpP37uCMkUooju7aAi5cS1Q23tOzKc+0MU=
+golang.org/x/exp v0.0.0-20230725093048-515e97ebf090/go.mod h1:FXUEEKJgO7OQYeo8N01OfiKP8RXMtf6e8aTskBGqWdc=
+golang.org/x/exp/typeparams v0.0.0-20221208152030-732eee02a75a/go.mod h1:AbB0pIl9nAr9wVwH+Z2ZpaocVmF5I4GyWCDIsVjR0bk=
golang.org/x/image v0.0.0-20190227222117-0694c2d4d067/go.mod h1:kZ7UVZpmo3dzQBMxlp+ypCbDeSB+sBbTgSJuh5dn5js=
golang.org/x/image v0.0.0-20190802002840-cff245a6509b/go.mod h1:FeLwcggjj3mMvU+oOTbSwawSJRM1uh48EjtB4UJZlP0=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
@@ -797,14 +1577,19 @@ golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.1/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
+golang.org/x/mod v0.5.0/go.mod h1:5OXOZSfqPIIbmVBIIKWRFfZjPR0E5r58TLhUjH0a2Ro=
golang.org/x/mod v0.6.0-dev.0.20220106191415-9b9b3d81d5e3/go.mod h1:3p9vT2HGsQu2K1YbXdKPJLVgG5VJdoTa1poYQBtP1AY=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.6.0/go.mod h1:4mET923SAdbXp2ki8ey+zGs1SLqsuM2Y0uvdZR/fUNI=
-golang.org/x/mod v0.9.0 h1:KENHtAZL2y3NLMYZeHY9DW8HW8V+kQyJsY/V9JlKvCs=
-golang.org/x/mod v0.9.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
+golang.org/x/mod v0.7.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
+golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
+golang.org/x/mod v0.11.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
+golang.org/x/mod v0.12.0 h1:rmsUpXtvNzj340zd98LZ4KntptpfRHwpFOHG188oHXc=
+golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
+golang.org/x/net v0.0.0-20181011144130-49bb7cea24b1/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181023162649-9b4f9f5ad519/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181114220301-adae6a3d119a/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181201002055-351d144fa1fc/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
@@ -817,11 +1602,12 @@ golang.org/x/net v0.0.0-20190503192946-f4e77d36d62c/go.mod h1:t9HGtf8HONx5eT2rtn
golang.org/x/net v0.0.0-20190522155817-f3200d17e092/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20190619014844-b5b0513f8c1b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190628185345-da137c7871d7/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190724013045-ca1201d0de80/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
-golang.org/x/net v0.0.0-20191003171128-d98b1b443823/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
+golang.org/x/net v0.0.0-20191004110552-13f9640d40b9/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191007182048-72f939374954/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20191209160850-c0dbc17a3553/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200114155413-6afb5195e5aa/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
@@ -838,11 +1624,13 @@ golang.org/x/net v0.0.0-20200520182314-0ba52f642ac2/go.mod h1:qpuaurCH72eLCgpAm/
golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
+golang.org/x/net v0.0.0-20201006153459-a7d1128ccaa0/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201031054903-ff519b6c9102/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201202161906-c7110b5ffcbb/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201209123823-ac852fbbde11/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
+golang.org/x/net v0.0.0-20201224014010-6772e930b67b/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210316092652-d523dce5a7f4/go.mod h1:RBQZq4jEuRlivfhVLdyRGr576XBO4/greRjx4P4O3yc=
@@ -850,6 +1638,8 @@ golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96b
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
golang.org/x/net v0.0.0-20210503060351-7fd8e65b6420/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
+golang.org/x/net v0.0.0-20210813160813-60bc85c4be6d/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
+golang.org/x/net v0.0.0-20210825183410-e898025ed96a/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211015210444-4f30a5c0130f/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
@@ -858,11 +1648,21 @@ golang.org/x/net v0.0.0-20220325170049-de3da57026de/go.mod h1:CfG3xpIq0wQ8r1q4Su
golang.org/x/net v0.0.0-20220412020605-290c469a71a5/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220425223048-2871e0cb64e4/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220607020251-c690dde0001d/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
+golang.org/x/net v0.0.0-20220617184016-355a448f1bc9/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
+golang.org/x/net v0.0.0-20220624214902-1bab6f366d9e/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
+golang.org/x/net v0.0.0-20220909164309-bea034e7d591/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk=
+golang.org/x/net v0.0.0-20221012135044-0b7e1fb9d458/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk=
+golang.org/x/net v0.0.0-20221014081412-f15817d10f9b/go.mod h1:YDH+HFinaLZZlnHAfSS6ZXJJ9M9t4Dl22yv3iI2vPwk=
golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
+golang.org/x/net v0.2.0/go.mod h1:KqCZLdyyvdV855qA2rE3GC2aiw5xGR5TEjj8smXukLY=
+golang.org/x/net v0.3.0/go.mod h1:MBQ8lrhLObU/6UmLb4fmbmk5OcyYmqtbGd/9yIeKjEE=
+golang.org/x/net v0.5.0/go.mod h1:DivGGAXEgPSlEBzxGzZI+ZLohi+xUj054jfeKui00ws=
+golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.7.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
-golang.org/x/net v0.8.0 h1:Zrh2ngAOFYneWTAIAPethzeaQLuHwhuBkuV6ZiRnUaQ=
-golang.org/x/net v0.8.0/go.mod h1:QVkue5JL9kW//ek3r6jTKnTFis1tRmNAW2P1shuFdJc=
+golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
+golang.org/x/net v0.15.0 h1:ugBLEUaxABaB5AJqW9enI0ACdci2RUd4eP51NTBvuJ8=
+golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
golang.org/x/oauth2 v0.1.0 h1:isLCZuhj4v+tYv7eskaN4v/TM+A1begWWgyVJDdl1+Y=
golang.org/x/oauth2 v0.1.0/go.mod h1:G9FE4dLTsbXUu90h/Pf85g4w1D+SSAgR+q46nJZ8M4A=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@@ -878,8 +1678,10 @@ golang.org/x/sync v0.0.0-20201207232520-09787c993a3a/go.mod h1:RxMgew5VJxzue5/jJ
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220601150217-0de741cfad7f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.1.0 h1:wsuoTGHzEhffawBOhz5CYhcrV4IdKZbEyZjBMuTp12o=
+golang.org/x/sync v0.0.0-20220929204114-8fcdb60fdcc0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
+golang.org/x/sync v0.3.0 h1:ftCYgMx6zT/asHUrPw8BLLscYtGznsLAnjq5RH9P66E=
+golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sys v0.0.0-20180823144017-11551d06cbcc/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -896,25 +1698,37 @@ golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190502145724-3ef323f4f1fd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190507160741-ecd444e8653b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190514135907-3a4b5fb9f71f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190522044717-8097e1b27ff5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190602015325-4c4f7f33c9ed/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190606165138-5da285871e9c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190606203320-7fc4e5ec1444/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190624142023-c5567b49c5d0/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190626221950-04f50cda93cb/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190801041406-cbf593c0f2f3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190812073006-9eafafc0a87e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190826190057-c7b8b68b1456/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190904154756-749cb33beabd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191001151750-bb3f8db39f24/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191002063906-3421d5a6bb1c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
-golang.org/x/sys v0.0.0-20191003212358-c178f38b412c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191005200804-aed5e4c7ecf9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191008105621-543471e840be/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191022100944-742c48ecaeb7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191115151921-52ab43148777/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191120155948-bd437916bb0e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20191210023423-ac6580df4449/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191228213918-04cbcbbfeed8/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200106162015-b016eb3dc98e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200113162924-86b910548bc1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200116001909-b77594299b42/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200120151820-655fe14d7479/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200122134326-e047566fdf82/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200124204421-9fbb57f87de9/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200202164722-d101bd2416d5/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200212091648-12a6c2dcc1e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200217220822-9197077df867/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -928,26 +1742,36 @@ golang.org/x/sys v0.0.0-20200515095857-1151b9dac4a9/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200519105757-fe76b779f299/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200523222454-059865788121/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200615200032-f1bc736245b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200622214017-ed371f2e16b4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200728102440-3e129f6d46b1/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200803210538-64077c9b5642/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200817155316-9781c653f443/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200905004654-be1d3432aa8f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200909081042-eff7692f9009/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200916030750-2334cc1a136f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20200922070232-aee5d888a860/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200923182605-d9f96fdee20d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201117170446-d9b008d0a637/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201201145000-ef89a241ccb3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20201202213521-69691e467435/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210104204734-6f8348627aad/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210112080510-489259a85091/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210220050731-9a76102bfb43/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210225134936-a50acf3fe073/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210305230114-8fe3ee5dd75b/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210315160823-c6e025ad8005/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210320140829-1e4c9ba3b0c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210324051608-47abb6519492/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210403161142-5e06dd20ab57/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423185535-09eb48e85fd7/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
+golang.org/x/sys v0.0.0-20210426230700-d19ff857e887/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210514084401-e8d321eab015/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210603081109-ebe580a85c40/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -956,10 +1780,15 @@ golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20210616094352-59db8d763f22/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210806184541-e5e7981a1069/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210816183151-1e6c022a8912/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210823070655-63515b42dcdf/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20210906170528-6f6e22806c34/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210908233432-aa78b53d3365/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211007075335-d3039528d8ac/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211019181941-9d821ace8654/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20211025201205-69cdffdb9359/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20211103235746-7861aae1554b/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20211116061358-0a5406a5449c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211124211545-fe61309f8881/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211210111614-af8b64212486/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -969,25 +1798,38 @@ golang.org/x/sys v0.0.0-20220209214540-3681064d5158/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20220227234510-4e6760a101f9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220319134239-a9b59b0215f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220328115105-d36c6a25d886/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220405210540-1e041c57c461/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220412211240-33da011f77ad/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220422013727-9388b58f7150/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220502124256-b6088ccd6cba/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220503163025-988cb79eb6c6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220610221304-9f5ed59c137d/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220615213510-4f61da869c0c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220624220833-87e55d714810/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220728004956-3c1f35247d10/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220908164124-27713097b956/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220928140112-f11e5e49a4ec/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.4.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.6.0 h1:MVltZSvRTcU2ljQOhs94SXPftV6DCNnZViHeQps87pQ=
-golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.12.0 h1:CM0HF96J0hcLAwsHPJZjfdNzs0gftsLfgKt57wWHJ0o=
+golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
+golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc=
+golang.org/x/term v0.3.0/go.mod h1:q750SLmJuPmVoN1blW3UFBPREJfb1KmY3vwxfr+nFDA=
+golang.org/x/term v0.4.0/go.mod h1:9P2UbLfCdcvo3p/nzKvsmas4TnlujnuoV9hGgYzW1lQ=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
-golang.org/x/term v0.6.0 h1:clScbb1cHjoCkyRbWwBEUZ5H/tIFu5TAXIqaZD0Gcjw=
-golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U=
+golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
+golang.org/x/term v0.12.0 h1:/ZfYdc3zq+q02Rv9vGqTeSItdzZTSNDmfTi0mBAuidU=
+golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
@@ -997,15 +1839,22 @@ golang.org/x/text v0.3.4/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
+golang.org/x/text v0.3.8/go.mod h1:E6s5w1FMmriuDzIBO73fBruAKo1PCIq6d2Q6DHfQ8WQ=
golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
+golang.org/x/text v0.5.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
+golang.org/x/text v0.6.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
-golang.org/x/text v0.8.0 h1:57P1ETyNKtuIjB4SRd15iJxuhj8Gc416Y78H3qgMh68=
-golang.org/x/text v0.8.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
+golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
+golang.org/x/text v0.13.0 h1:ablQoSUd0tRdKxZewP80B+BaqeKJuVhuRxj/dkrun3k=
+golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20190308202827-9d24e82272b4/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20191024005414-555d28b269f0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/time v0.0.0-20200416051211-89c76fbcd5d1/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20210220033141-f8bda1e9f3ba/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20220210224613-90d013bbcef8/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/time v0.0.0-20220922220347-f3bd1da661af/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
+golang.org/x/time v0.1.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.3.0 h1:rg5rLMjNzMS1RkNLzCG38eapWhnYLFYXDXj2gOlr8j4=
golang.org/x/time v0.3.0/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180221164845-07fd8470d635/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
@@ -1024,6 +1873,7 @@ golang.org/x/tools v0.0.0-20190606124116-d0a3d012864b/go.mod h1:/rFqwRUd4F7ZHNgw
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190624222133-a101b041ded4/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20190628153133-6cdbf07be9d0/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
+golang.org/x/tools v0.0.0-20190706070813-72ffa07ba3db/go.mod h1:jcCCGcm9btYwXyDqrUWc6MKQKKGJCWEQ3AfLSRIbEuI=
golang.org/x/tools v0.0.0-20190816200558-6889da9d5479/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190911174233-4f2ddba30aff/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20190920225731-5eefd052ad72/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
@@ -1058,12 +1908,14 @@ golang.org/x/tools v0.0.0-20200729194436-6467de6f59a7/go.mod h1:njjCfa9FT2d7l9Bc
golang.org/x/tools v0.0.0-20200804011535-6c149bb5ef0d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200825202427-b303f430e36d/go.mod h1:njjCfa9FT2d7l9Bc6FUM5FLjQPp3cFF28FI3qnDFljA=
golang.org/x/tools v0.0.0-20200904185747-39188db58858/go.mod h1:Cj7w3i3Rnn0Xh82ur9kSqwfTHTeVxaDqrfMjpcNT6bE=
+golang.org/x/tools v0.0.0-20200916195026-c9a70fc28ce3/go.mod h1:z6u4i615ZeAfBE4XtMziQW1fSVJXACjjbWkB/mvPzlU=
golang.org/x/tools v0.0.0-20201110124207-079ba7bd75cd/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201201161351-ac6f37ff4c2a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201208233053-a543418bbed2/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20201224043029-2b0845dc783e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210105154028-b0ab187a4818/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
+golang.org/x/tools v0.0.0-20210108195828-e2f9c7f1fc8e/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.2/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
@@ -1073,8 +1925,11 @@ golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.10/go.mod h1:Uh6Zz+xoGYZom868N8YTex3t7RhtHDBrE8Gzo9bV56E=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.2.0/go.mod h1:y4OqIKeOV/fWJetJ8bXPU1sEVniLMIyDAZWeHdV+NTA=
-golang.org/x/tools v0.7.0 h1:W4OVu8VVOaIO0yzWMNdepAulS7YfoS3Zabrm8DOXXU4=
-golang.org/x/tools v0.7.0/go.mod h1:4pg6aUX35JBAogB10C9AtvVL+qowtN4pT3CGSQex14s=
+golang.org/x/tools v0.3.0/go.mod h1:/rWhSS2+zyEVwoJf8YAX6L2f0ntZ7Kn/mGgAWcipA5k=
+golang.org/x/tools v0.4.1-0.20221208213631-3f74d914ae6d/go.mod h1:UE5sM2OK9E/d67R0ANs2xJizIymRP5gJU295PvKXxjQ=
+golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
+golang.org/x/tools v0.13.0 h1:Iey4qkscZuv0VvIt8E0neZjtPVQFSc870HQ448QgEmQ=
+golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
@@ -1082,12 +1937,15 @@ golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8T
golang.org/x/xerrors v0.0.0-20220411194840-2f41105eb62f/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20220517211312-f3a8303e98df/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8=
golang.org/x/xerrors v0.0.0-20220609144429-65e65417b02f/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8=
-golang.zx2c4.com/wireguard v0.0.20200121/go.mod h1:P2HsVp8SKwZEufsnezXZA4GRX/T49/HlU7DGuelXsU4=
+golang.org/x/xerrors v0.0.0-20220907171357-04be3eba64a2/go.mod h1:K8+ghG5WaK9qNqU5K3HdILfMLy1f3aNYFI/wnl100a8=
+golang.zx2c4.com/wintun v0.0.0-20230126152724-0fa3db229ce2/go.mod h1:deeaetjYA+DHMHg+sMSMI58GrEteJUUzzw7en6TJQcI=
+golang.zx2c4.com/wireguard v0.0.0-20231022001213-2e0774f246fb/go.mod h1:tkCQ4FQXmpAgYVh++1cq16/dH4QJtmvpRv19DWGAHSA=
golang.zx2c4.com/wireguard/wgctrl v0.0.0-20200324154536-ceff61240acf h1:rWUZHukj3poXegPQMZOXgxjTGIBe3mLNHNVvL5DsHus=
golang.zx2c4.com/wireguard/wgctrl v0.0.0-20200324154536-ceff61240acf/go.mod h1:UdS9frhv65KTfwxME1xE8+rHYoFpbm36gOud1GhBe9c=
gomodules.xyz/jsonpatch/v2 v2.0.1/go.mod h1:IhYNNY4jnS53ZnfE4PAmpKtDpTCj1JFXc+3mwe7XcUU=
gomodules.xyz/jsonpatch/v2 v2.2.0 h1:4pT439QV83L+G9FkcCriY6EkpcK6r6bK+A5FBUMI7qY=
gomodules.xyz/jsonpatch/v2 v2.2.0/go.mod h1:WXp+iVDkoLQqPudfQ9GBlwB2eZ5DKOnjQZCYdOS8GPY=
+google.golang.org/api v0.0.0-20160322025152-9bf6e6e569ff/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0=
google.golang.org/api v0.4.0/go.mod h1:8k5glujaEP+g9n7WNsDg8QP6cUVNI86fCNMcbazEtwE=
google.golang.org/api v0.7.0/go.mod h1:WtwebWUNSVBH/HAw79HIFXZNqEvBhG+Ra+ax0hx3E3M=
google.golang.org/api v0.8.0/go.mod h1:o4eAsZoiT+ibD93RtjEohWalFOjRDx6CVaqeizhEnKg=
@@ -1117,6 +1975,7 @@ google.golang.org/api v0.54.0/go.mod h1:7C4bFFOvVDGXjfDTAsgGwDgAxRDeQ4X8NvUedIt6
google.golang.org/api v0.55.0/go.mod h1:38yMfeP1kfjsl8isn0tliTjIb1rJXcQi4UXlbqivdVE=
google.golang.org/api v0.56.0/go.mod h1:38yMfeP1kfjsl8isn0tliTjIb1rJXcQi4UXlbqivdVE=
google.golang.org/api v0.57.0/go.mod h1:dVPlbZyBo2/OjBpmvNdpn2GRm6rPy75jyU7bmhdrMgI=
+google.golang.org/api v0.60.0/go.mod h1:d7rl65NZAkEQ90JFzqBjcRq1TVeG5ZoGV3sSpEnnVb4=
google.golang.org/api v0.61.0/go.mod h1:xQRti5UdCmoCEqFxcz93fTl338AVqDgyaDRuOZ3hg9I=
google.golang.org/api v0.63.0/go.mod h1:gs4ij2ffTRXwuzzgJl/56BdwJaA194ijkfn++9tDuPo=
google.golang.org/api v0.67.0/go.mod h1:ShHKP8E60yPsKNw/w8w+VYaj9H6buA5UqDp8dhbQZ6g=
@@ -1124,9 +1983,21 @@ google.golang.org/api v0.70.0/go.mod h1:Bs4ZM2HGifEvXwd50TtW70ovgJffJYw2oRCOFU/S
google.golang.org/api v0.71.0/go.mod h1:4PyU6e6JogV1f9eA4voyrTY2batOLdgZ5qZ5HOCc4j8=
google.golang.org/api v0.74.0/go.mod h1:ZpfMZOVRMywNyvJFeqL9HRWBgAuRfSjJFpe9QtRRyDs=
google.golang.org/api v0.75.0/go.mod h1:pU9QmyHLnzlpar1Mjt4IbapUCy8J+6HD6GeELN69ljA=
+google.golang.org/api v0.77.0/go.mod h1:pU9QmyHLnzlpar1Mjt4IbapUCy8J+6HD6GeELN69ljA=
google.golang.org/api v0.78.0/go.mod h1:1Sg78yoMLOhlQTeF+ARBoytAcH1NNyyl390YMy6rKmw=
google.golang.org/api v0.80.0/go.mod h1:xY3nI94gbvBrE0J6NHXhxOmW97HG7Khjkku6AFB3Hyg=
google.golang.org/api v0.84.0/go.mod h1:NTsGnUFJMYROtiquksZHBWtHfeMC7iYthki7Eq3pa8o=
+google.golang.org/api v0.85.0/go.mod h1:AqZf8Ep9uZ2pyTvgL+x0D3Zt0eoT9b5E8fmzfu6FO2g=
+google.golang.org/api v0.90.0/go.mod h1:+Sem1dnrKlrXMR/X0bPnMWyluQe4RsNoYfmNLhOIkzw=
+google.golang.org/api v0.93.0/go.mod h1:+Sem1dnrKlrXMR/X0bPnMWyluQe4RsNoYfmNLhOIkzw=
+google.golang.org/api v0.95.0/go.mod h1:eADj+UBuxkh5zlrSntJghuNeg8HwQ1w5lTKkuqaETEI=
+google.golang.org/api v0.96.0/go.mod h1:w7wJQLTM+wvQpNf5JyEcBoxK0RH7EDrh/L4qfsuJ13s=
+google.golang.org/api v0.97.0/go.mod h1:w7wJQLTM+wvQpNf5JyEcBoxK0RH7EDrh/L4qfsuJ13s=
+google.golang.org/api v0.98.0/go.mod h1:w7wJQLTM+wvQpNf5JyEcBoxK0RH7EDrh/L4qfsuJ13s=
+google.golang.org/api v0.99.0/go.mod h1:1YOf74vkVndF7pG6hIHuINsM7eWwpVTAfNMNiL91A08=
+google.golang.org/api v0.100.0/go.mod h1:ZE3Z2+ZOr87Rx7dqFsdRQkRBk36kDtp/h+QpHbB7a70=
+google.golang.org/api v0.102.0/go.mod h1:3VFl6/fzoA+qNuS1N1/VfXY4LjoXN/wzeIp7TweWwGo=
+google.golang.org/api v0.103.0/go.mod h1:hGtW6nK1AC+d9si/UBhw8Xli+QMOf6xyNAyJw4qU9w0=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.5.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
@@ -1135,11 +2006,13 @@ google.golang.org/appengine v1.6.5/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCID
google.golang.org/appengine v1.6.6/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
google.golang.org/appengine v1.6.7 h1:FZR1q0exgwxzPzp/aF+VccGrSfxfPpkBqjIIEq3ru6c=
google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
+google.golang.org/cloud v0.0.0-20151119220103-975617b05ea8/go.mod h1:0H1ncTHf11KCFhTc/+EFRbzSCOZx+VUbRMk55Yv5MYk=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20190307195333-5fe7a883aa19/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190418145605-e7d98fc518a7/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190425155659-357c62f0e4bb/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190502173448-54afdca5d873/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
+google.golang.org/genproto v0.0.0-20190522204451-c2c4e71fbf69/go.mod h1:z3L6/3dTEVtUr6QSP8miRzeRqwQOioJ9I66odjN4I7s=
google.golang.org/genproto v0.0.0-20190801165951-fa694d86fc64/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20190911173649-1774047e7e51/go.mod h1:IbNlFCBrqXvoKpeg0TB2l7cyZUmoaFKYIwrEpbDKLA8=
@@ -1148,6 +2021,7 @@ google.golang.org/genproto v0.0.0-20191115194625-c23dd37a84c9/go.mod h1:n3cpQtvx
google.golang.org/genproto v0.0.0-20191216164720-4f79533eabd1/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20191230161307-f3c370f40bfb/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200115191322-ca5a22157cba/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
+google.golang.org/genproto v0.0.0-20200117163144-32f20d992d24/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200122232147-0452cf42e150/go.mod h1:n3cpQtvxv34hfy77yVDNjmbRyujviMdxYliBSkLhpCc=
google.golang.org/genproto v0.0.0-20200204135345-fa8e72b47b90/go.mod h1:GmwEX6Z4W5gMy59cAlVYjN9JhxgbQH6Gn+gFDQe2lzA=
google.golang.org/genproto v0.0.0-20200212174721-66ed5ce911ce/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
@@ -1162,6 +2036,7 @@ google.golang.org/genproto v0.0.0-20200511104702-f5ebc3bea380/go.mod h1:55QSHmfG
google.golang.org/genproto v0.0.0-20200513103714-09dca8ec2884/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
google.golang.org/genproto v0.0.0-20200515170657-fc4c6c6a6587/go.mod h1:YsZOwe1myG/8QRHRsmBRE1LrgQY60beZKjly0O1fX9U=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
+google.golang.org/genproto v0.0.0-20200527145253-8367513e4ece/go.mod h1:jDfRM7FcilCzHH/e9qn6dsT145K34l5v+OpcnNgKAAA=
google.golang.org/genproto v0.0.0-20200618031413-b414f8b61790/go.mod h1:jDfRM7FcilCzHH/e9qn6dsT145K34l5v+OpcnNgKAAA=
google.golang.org/genproto v0.0.0-20200729003335-053ba62fc06f/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20200804131852-c06518451d9c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
@@ -1172,7 +2047,9 @@ google.golang.org/genproto v0.0.0-20201109203340-2640f1f9cdfb/go.mod h1:FWY/as6D
google.golang.org/genproto v0.0.0-20201201144952-b05cb90ed32e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201210142538-e3217bee35cc/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20201214200347-8c77b98c765d/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20210108203827-ffc7fda8c3d7/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210222152913-aa3ee6e6a81c/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
+google.golang.org/genproto v0.0.0-20210226172003-ab064af71705/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210303154014-9728d6b83eeb/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210310155132-4ce2db91004e/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
google.golang.org/genproto v0.0.0-20210319143718-93e7006c17a6/go.mod h1:FWY/as6DDZQgahTzZj3fqbO1CbirC29ZNUFHwi0/+no=
@@ -1194,6 +2071,7 @@ google.golang.org/genproto v0.0.0-20210831024726-fe130286e0e2/go.mod h1:eFjDcFEc
google.golang.org/genproto v0.0.0-20210903162649-d08c68adba83/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
google.golang.org/genproto v0.0.0-20210909211513-a8c4777a87af/go.mod h1:eFjDcFEctNawg4eG61bRv87N7iHBWyVhJu7u1kqDUXY=
google.golang.org/genproto v0.0.0-20210924002016-3dee208752a0/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
+google.golang.org/genproto v0.0.0-20211021150943-2b146023228c/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20211118181313-81c1377c94b1/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20211206160659-862468c7d6e0/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
google.golang.org/genproto v0.0.0-20211208223120-3a66f561d7aa/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
@@ -1205,6 +2083,7 @@ google.golang.org/genproto v0.0.0-20220222213610-43724f9ea8cf/go.mod h1:kGP+zUP2
google.golang.org/genproto v0.0.0-20220304144024-325a89244dc8/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
google.golang.org/genproto v0.0.0-20220310185008-1973136f34c6/go.mod h1:kGP+zUP2Ddo0ayMi4YuN7C3WZyJvGLZRh8Z5wnAqvEI=
google.golang.org/genproto v0.0.0-20220324131243-acbaeb5b85eb/go.mod h1:hAL49I2IFola2sVEjAn7MEwsja0xp51I0tlGAf9hz4E=
+google.golang.org/genproto v0.0.0-20220329172620-7be39ac1afc7/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
google.golang.org/genproto v0.0.0-20220407144326-9054f6ed7bac/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
google.golang.org/genproto v0.0.0-20220413183235-5e96e2839df9/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
google.golang.org/genproto v0.0.0-20220414192740-2d67ff6cf2b4/go.mod h1:8w6bsBMX6yCPbAVTeqQHvzxW0EIFigd5lZyahWgyfDo=
@@ -1216,13 +2095,45 @@ google.golang.org/genproto v0.0.0-20220518221133-4f43b3371335/go.mod h1:RAyBrSAP
google.golang.org/genproto v0.0.0-20220523171625-347a074981d8/go.mod h1:RAyBrSAP7Fh3Nc84ghnVLDPuV51xc9agzmm4Ph6i0Q4=
google.golang.org/genproto v0.0.0-20220608133413-ed9918b62aac/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
google.golang.org/genproto v0.0.0-20220616135557-88e70c0c3a90/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
-google.golang.org/genproto v0.0.0-20221024183307-1bc688fe9f3e h1:S9GbmC1iCgvbLyAokVCwiO6tVIrU9Y7c5oMx1V/ki/Y=
+google.golang.org/genproto v0.0.0-20220617124728-180714bec0ad/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
+google.golang.org/genproto v0.0.0-20220624142145-8cd45d7dbd1f/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
+google.golang.org/genproto v0.0.0-20220628213854-d9e0b6570c03/go.mod h1:KEWEmljWE5zPzLBa/oHl6DaEt9LmfH6WtH1OHIvleBA=
+google.golang.org/genproto v0.0.0-20220722212130-b98a9ff5e252/go.mod h1:GkXuJDJ6aQ7lnJcRF+SJVgFdQhypqgl3LB1C9vabdRE=
+google.golang.org/genproto v0.0.0-20220801145646-83ce21fca29f/go.mod h1:iHe1svFLAZg9VWz891+QbRMwUv9O/1Ww+/mngYeThbc=
+google.golang.org/genproto v0.0.0-20220815135757-37a418bb8959/go.mod h1:dbqgFATTzChvnt+ujMdZwITVAJHFtfyN1qUhDqEiIlk=
+google.golang.org/genproto v0.0.0-20220817144833-d7fd3f11b9b1/go.mod h1:dbqgFATTzChvnt+ujMdZwITVAJHFtfyN1qUhDqEiIlk=
+google.golang.org/genproto v0.0.0-20220822174746-9e6da59bd2fc/go.mod h1:dbqgFATTzChvnt+ujMdZwITVAJHFtfyN1qUhDqEiIlk=
+google.golang.org/genproto v0.0.0-20220829144015-23454907ede3/go.mod h1:dbqgFATTzChvnt+ujMdZwITVAJHFtfyN1qUhDqEiIlk=
+google.golang.org/genproto v0.0.0-20220829175752-36a9c930ecbf/go.mod h1:dbqgFATTzChvnt+ujMdZwITVAJHFtfyN1qUhDqEiIlk=
+google.golang.org/genproto v0.0.0-20220913154956-18f8339a66a5/go.mod h1:0Nb8Qy+Sk5eDzHnzlStwW3itdNaWoZA5XeSG+R3JHSo=
+google.golang.org/genproto v0.0.0-20220914142337-ca0e39ece12f/go.mod h1:0Nb8Qy+Sk5eDzHnzlStwW3itdNaWoZA5XeSG+R3JHSo=
+google.golang.org/genproto v0.0.0-20220915135415-7fd63a7952de/go.mod h1:0Nb8Qy+Sk5eDzHnzlStwW3itdNaWoZA5XeSG+R3JHSo=
+google.golang.org/genproto v0.0.0-20220916172020-2692e8806bfa/go.mod h1:0Nb8Qy+Sk5eDzHnzlStwW3itdNaWoZA5XeSG+R3JHSo=
+google.golang.org/genproto v0.0.0-20220919141832-68c03719ef51/go.mod h1:0Nb8Qy+Sk5eDzHnzlStwW3itdNaWoZA5XeSG+R3JHSo=
+google.golang.org/genproto v0.0.0-20220920201722-2b89144ce006/go.mod h1:ht8XFiar2npT/g4vkk7O0WYS1sHOHbdujxbEp7CJWbw=
+google.golang.org/genproto v0.0.0-20220926165614-551eb538f295/go.mod h1:woMGP53BroOrRY3xTxlbr8Y3eB/nzAvvFM83q7kG2OI=
+google.golang.org/genproto v0.0.0-20220926220553-6981cbe3cfce/go.mod h1:woMGP53BroOrRY3xTxlbr8Y3eB/nzAvvFM83q7kG2OI=
+google.golang.org/genproto v0.0.0-20221010155953-15ba04fc1c0e/go.mod h1:3526vdqwhZAwq4wsRUaVG555sVgsNmIjRtO7t/JH29U=
+google.golang.org/genproto v0.0.0-20221014173430-6e2ab493f96b/go.mod h1:1vXfmgAz9N9Jx0QA82PqRVauvCz1SGSz739p0f183jM=
+google.golang.org/genproto v0.0.0-20221014213838-99cd37c6964a/go.mod h1:1vXfmgAz9N9Jx0QA82PqRVauvCz1SGSz739p0f183jM=
+google.golang.org/genproto v0.0.0-20221024153911-1573dae28c9c/go.mod h1:9qHF0xnpdSfF6knlcsnpzUu5y+rpwgbvsyGAZPBMg4s=
google.golang.org/genproto v0.0.0-20221024183307-1bc688fe9f3e/go.mod h1:9qHF0xnpdSfF6knlcsnpzUu5y+rpwgbvsyGAZPBMg4s=
+google.golang.org/genproto v0.0.0-20221027153422-115e99e71e1c/go.mod h1:CGI5F/G+E5bKwmfYo09AXuVN4dD894kIKUFmVbP2/Fo=
+google.golang.org/genproto v0.0.0-20221114212237-e4508ebdbee1/go.mod h1:rZS5c/ZVYMaOGBfO68GWtjOw/eLaZM1X6iVtgjZ+EWg=
+google.golang.org/genproto v0.0.0-20221117204609-8f9c96812029/go.mod h1:rZS5c/ZVYMaOGBfO68GWtjOw/eLaZM1X6iVtgjZ+EWg=
+google.golang.org/genproto v0.0.0-20221118155620-16455021b5e6/go.mod h1:rZS5c/ZVYMaOGBfO68GWtjOw/eLaZM1X6iVtgjZ+EWg=
+google.golang.org/genproto v0.0.0-20221201164419-0e50fba7f41c/go.mod h1:rZS5c/ZVYMaOGBfO68GWtjOw/eLaZM1X6iVtgjZ+EWg=
+google.golang.org/genproto v0.0.0-20221202195650-67e5cbc046fd/go.mod h1:cTsE614GARnxrLsqKREzmNYJACSWWpAWdNMwnD7c2BE=
+google.golang.org/genproto v0.0.0-20230110181048-76db0878b65f h1:BWUVssLB0HVOSY78gIdvk1dTVYtT1y8SBWtPYuTJ/6w=
+google.golang.org/genproto v0.0.0-20230110181048-76db0878b65f/go.mod h1:RGgjbofJ8xD9Sq1VVhDM1Vok1vRONV+rg+CjzG4SZKM=
+google.golang.org/grpc v0.0.0-20160317175043-d3ddb4469d5a/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.20.1/go.mod h1:10oTOabMzJvdu6/UiuZezV6QK5dSlG84ov/aaiqXj38=
google.golang.org/grpc v1.21.0/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.21.1/go.mod h1:oYelfM1adQP15Ek0mdvEgi9Df8B9CZIaU1084ijfRaM=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
+google.golang.org/grpc v1.23.1/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
+google.golang.org/grpc v1.24.0/go.mod h1:XDChyiUovWa60DnaeDeZmSW86xtLtjtZbwvSiRnRtcA=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.26.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
@@ -1252,9 +2163,13 @@ google.golang.org/grpc v1.45.0/go.mod h1:lN7owxKUQEqMfSyQikvvk5tf/6zMPsrK+ONuO11
google.golang.org/grpc v1.46.0/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk=
google.golang.org/grpc v1.46.2/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk=
google.golang.org/grpc v1.47.0/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk=
+google.golang.org/grpc v1.48.0/go.mod h1:vN9eftEi1UMyUsIF80+uQXhHjbXYbm0uXoFCACuMGWk=
google.golang.org/grpc v1.49.0/go.mod h1:ZgQEeidpAuNRZ8iRrlBKXZQP1ghovWIVhdJRyCDK+GI=
-google.golang.org/grpc v1.50.1 h1:DS/BukOZWp8s6p4Dt/tOaJaTQyPyOoCcrjroHuCeLzY=
+google.golang.org/grpc v1.50.0/go.mod h1:ZgQEeidpAuNRZ8iRrlBKXZQP1ghovWIVhdJRyCDK+GI=
google.golang.org/grpc v1.50.1/go.mod h1:ZgQEeidpAuNRZ8iRrlBKXZQP1ghovWIVhdJRyCDK+GI=
+google.golang.org/grpc v1.51.0/go.mod h1:wgNDFcnuBGmxLKI/qn4T+m5BtEBYXJPvibbUPsAIPww=
+google.golang.org/grpc v1.53.0-dev.0.20230123225046-4075ef07c5d5 h1:qq9WB3Dez2tMAKtZTVtZsZSmTkDgPeXx+FRPt5kLEkM=
+google.golang.org/grpc v1.53.0-dev.0.20230123225046-4075ef07c5d5/go.mod h1:OnIrk0ipVdj4N5d9IUoFUx72/VlD7+jUsHwZgwSMQpw=
google.golang.org/grpc/cmd/protoc-gen-go-grpc v1.1.0/go.mod h1:6Kw0yEErY5E/yWrBtf03jp27GLLJujG4z/JK95pnjjw=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
@@ -1270,10 +2185,13 @@ google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp0
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
-google.golang.org/protobuf v1.28.1 h1:d0NfwRgPtno5B1Wa6L2DAG+KivqkdutMf1UhdNx175w=
google.golang.org/protobuf v1.28.1/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
+google.golang.org/protobuf v1.28.2-0.20230118093459-a9481185b34d h1:qp0AnQCvRCMlu9jBjtdbTaaEmThIgZOrbVyDEOcmKhQ=
+google.golang.org/protobuf v1.28.2-0.20230118093459-a9481185b34d/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
+gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U=
gopkg.in/alecthomas/kingpin.v2 v2.2.6/go.mod h1:FMv+mEhP44yOT+4EoQTLFTRgOQ1FBLkstjWtayDeSgw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20141024133853-64131543e789/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20190902080502-41f04d3bba15/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -1281,6 +2199,8 @@ gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntN
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/fsnotify.v1 v1.4.7/go.mod h1:Tz8NjZHkW78fSQdbUxIjBTcgA1z1m8ZHf0WmKUhAMys=
+gopkg.in/gcfg.v1 v1.2.0/go.mod h1:yesOnuUOFQAhST5vPY4nbZsb/huCgGGXlipJsBn0b3o=
+gopkg.in/gemnasium/logrus-airbrake-hook.v2 v2.1.2/go.mod h1:Xk6kEKp8OKb+X14hQBKWaSkCsqBpgog8nAV2xsGOxlo=
gopkg.in/go-playground/assert.v1 v1.2.1 h1:xoYuJVE7KT85PYWrN730RguIQO0ePzVRfFMXadIrXTM=
gopkg.in/go-playground/validator.v9 v9.27.0 h1:wCg/0hk9RzcB0CYw8pYV6FiBYug1on0cpco9YZF8jqA=
gopkg.in/go-playground/validator.v9 v9.27.0/go.mod h1:+c9/zcJMFNgbLvly1L1V+PpxWdVbfP1avr/N00E2vyQ=
@@ -1291,8 +2211,11 @@ gopkg.in/natefinch/lumberjack.v2 v2.0.0 h1:1Lc07Kr7qY4U2YPouBjpCLxpiyxIVoxqXgkXL
gopkg.in/natefinch/lumberjack.v2 v2.0.0/go.mod h1:l0ndWWf7gzL7RNwBG7wST/UCcT4T24xpD6X8LsfU/+k=
gopkg.in/resty.v1 v1.12.0/go.mod h1:mDo4pnntr5jdWRML875a/NmxYqAlA73dVijT2AXvQQo=
gopkg.in/square/go-jose.v2 v2.2.2/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
+gopkg.in/square/go-jose.v2 v2.3.1/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
+gopkg.in/square/go-jose.v2 v2.5.1/go.mod h1:M9dMgbHiYLoDGQrXy7OpJDJWiKiU//h+vD76mk0e1AI=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7 h1:uRGJdciOHaEIrze2W8Q3AKkepLTh2hOroT7a+7czfdQ=
gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWDmTeBkI65Dw0HsyUHuEVlX15mw=
+gopkg.in/warnings.v0 v0.1.1/go.mod h1:jksf8JmL6Qr/oQM2OXTHunEvvTAsrWBLb6OOjuVWRNI=
gopkg.in/yaml.v2 v2.0.0-20170812160011-eb3733d160e7/go.mod h1:JAlM8MvJe8wmxCU4Bli9HhUf9+ttbYbLASfIpnQbh74=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
@@ -1310,9 +2233,13 @@ gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+gotest.tools v2.2.0+incompatible h1:VsBPFP1AI068pPrMxtb/S8Zkgf9xEmTLJjfM+P5UIEo=
+gotest.tools v2.2.0+incompatible/go.mod h1:DsYFclhRJ6vuDpmuTbkuFWG+y2sxOXAzmJt81HFBacw=
gotest.tools/v3 v3.0.2/go.mod h1:3SzNCllyD9/Y+b5r9JIKQ474KzkZyqLqEfYqMsX94Bk=
-gotest.tools/v3 v3.0.3 h1:4AuOwCGf4lLR9u3YOe2awrHygurzhO/HeQ6laiA6Sx0=
gotest.tools/v3 v3.0.3/go.mod h1:Z7Lb0S5l+klDB31fvDQX8ss/FlKDxtlFlw3Oa8Ymbl8=
+gotest.tools/v3 v3.4.0 h1:ZazjZUfuVeZGLAmlKKuyv3IKP5orXcwtOwDQH6YVr6o=
+gotest.tools/v3 v3.4.0/go.mod h1:CtbdzLSsqVhDgMtKsx03ird5YTGB3ar27v0u/yKBW5g=
+gvisor.dev/gvisor v0.0.0-20230927004350-cbd86285d259/go.mod h1:AVgIgHMwK63XvmAzWG9vLQ41YnVHN0du0tEC46fI7yY=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190418001031-e561f6794a2a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
@@ -1320,6 +2247,7 @@ honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWh
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
honnef.co/go/tools v0.0.1-2020.1.3/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
honnef.co/go/tools v0.0.1-2020.1.4/go.mod h1:X/FiERA/W4tHapMX5mGpAtMSVEeEUOyHaw9vFzvIQ3k=
+honnef.co/go/tools v0.4.2/go.mod h1:36ZgoUOrqOk1GxwHhyryEkq8FQWkUO2xGuSMhUCcdvA=
k8s.io/api v0.26.3 h1:emf74GIQMTik01Aum9dPP0gAypL8JTLl/lHa4V9RFSU=
k8s.io/api v0.26.3/go.mod h1:PXsqwPMXBSBcL1lJ9CYDKy7kIReUydukS5JiRlxC3qE=
k8s.io/apiextensions-apiserver v0.26.3 h1:5PGMm3oEzdB1W/FTMgGIDmm100vn7IaUP5er36dB+YE=
@@ -1342,10 +2270,13 @@ k8s.io/component-base v0.26.3 h1:oC0WMK/ggcbGDTkdcqefI4wIZRYdK3JySx9/HADpV0g=
k8s.io/component-base v0.26.3/go.mod h1:5kj1kZYwSC6ZstHJN7oHBqcJC6yyn41eR+Sqa/mQc8E=
k8s.io/component-helpers v0.26.3 h1:eQ682yg1GiIGAsde+l2xL0P2yMYJOvypWsz6h6FtkZo=
k8s.io/component-helpers v0.26.3/go.mod h1:feC+CaxJXULs5TSD3lG8K5ecftOkF8eY0pHQgd7koEI=
+k8s.io/controller-manager v0.26.3/go.mod h1:YS449osPmX9Q4xLcyuqEfVzqYcEDxjzzr1kMABouA1I=
+k8s.io/cri-api v0.26.3/go.mod h1:Oo8O7MKFPNDxfDf2LmrF/3Hf30q1C6iliGuv3la3tIA=
k8s.io/csi-translation-lib v0.26.3 h1:XpC3yeBSz+qls9zk3Tqg6r2sZxFePGe1fIlSlmds5XA=
k8s.io/csi-translation-lib v0.26.3/go.mod h1:i++hspfN3AZsA5cYQ1QFnflIQhirsoc/8bOLkaVJG/g=
k8s.io/dynamic-resource-allocation v0.26.3 h1:WBI4xDoMySaH5VJe0y514O3rC4gyiJppC0EkDo6mHq0=
k8s.io/dynamic-resource-allocation v0.26.3/go.mod h1:8nUg20y2DgAv8v+Yuv7O5+rLZa7jbjsU3KvDcBiv2b8=
+k8s.io/gengo v0.0.0-20201113003025-83324d819ded/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E=
k8s.io/gengo v0.0.0-20210813121822-485abfe95c7c/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E=
k8s.io/gengo v0.0.0-20220902162205-c0856e24416d h1:U9tB195lKdzwqicbJvyJeOXV7Klv+wNAWENRnXEGi08=
k8s.io/gengo v0.0.0-20220902162205-c0856e24416d/go.mod h1:FiNAH4ZV3gBg2Kwh89tzAEV2be7d5xI0vBa/VySYy3E=
@@ -1353,26 +2284,41 @@ k8s.io/klog v1.0.0 h1:Pt+yjF5aB1xDSVbau4VsWe+dQNzA0qv1LlXdC2dF6Q8=
k8s.io/klog v1.0.0/go.mod h1:4Bi6QPql/J/LkTDqv7R/cd3hPo4k2DG6Ptcz060Ez5I=
k8s.io/klog/v2 v2.0.0/go.mod h1:PBfzABfn139FHAV07az/IF9Wp1bkk3vpT2XSJ76fSDE=
k8s.io/klog/v2 v2.2.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
+k8s.io/klog/v2 v2.4.0/go.mod h1:Od+F08eJP+W3HUb4pSrPpgp9DGU4GzlpG/TmITuYh/Y=
+k8s.io/klog/v2 v2.30.0/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
k8s.io/klog/v2 v2.80.1/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
k8s.io/klog/v2 v2.90.0 h1:VkTxIV/FjRXn1fgNNcKGM8cfmL1Z33ZjXRTVxKCoF5M=
k8s.io/klog/v2 v2.90.0/go.mod h1:y1WjHnz7Dj687irZUWR/WLkLc5N1YHtjLdmgWjndZn0=
k8s.io/kms v0.26.3 h1:+rC4BMeMBkH5hrfZt9WFMRrs2m3vY2rXymisNactcTY=
k8s.io/kms v0.26.3/go.mod h1:69qGnf1NsFOQP07fBYqNLZklqEHSJF024JqYCaeVxHg=
+k8s.io/kube-aggregator v0.26.3/go.mod h1:SgBESB/+PfZAyceTPIanfQ7GtX9G/+mjfUbTHg3Twbo=
+k8s.io/kube-controller-manager v0.26.3/go.mod h1:Dt7WNicFEaOr5R1X1Am8rkPFf+rsG3xTHuwiKl11TBU=
+k8s.io/kube-openapi v0.0.0-20211115234752-e816edb12b65/go.mod h1:sX9MT8g7NVZM5lVL/j8QyCCJe8YSMW30QvGZWaCIDIk=
+k8s.io/kube-openapi v0.0.0-20220401212409-b28bf2818661/go.mod h1:daOouuuwd9JXpv1L7Y34iV3yf6nxzipkKMWWlqlvK9M=
k8s.io/kube-openapi v0.0.0-20221012153701-172d655c2280/go.mod h1:+Axhij7bCpeqhklhUTe3xmOn6bWxolyZEeyaFpjGtl4=
k8s.io/kube-openapi v0.0.0-20230303024457-afdc3dddf62d h1:VcFq5n7wCJB2FQMCIHfC+f+jNcGgNMar1uKd6rVlifU=
k8s.io/kube-openapi v0.0.0-20230303024457-afdc3dddf62d/go.mod h1:y5VtZWM9sHHc2ZodIH/6SHzXj+TPU5USoA8lcIeKEKY=
+k8s.io/kube-proxy v0.26.3/go.mod h1:995p2WbS0zgatqXcB81/4vdhdOrP1cq6jqmPtDmaPhc=
k8s.io/kube-scheduler v0.26.3 h1:IOMIG5dUILnJIMnCbTUQokTb37Sd28vexquBSeajk6c=
k8s.io/kube-scheduler v0.26.3/go.mod h1:mbym95ZdWOaftRMmdLRcAam9MCNf7iUjnPfZTl8uHTA=
k8s.io/kubectl v0.26.3 h1:bZ5SgFyeEXw6XTc1Qji0iNdtqAC76lmeIIQULg2wNXM=
k8s.io/kubectl v0.26.3/go.mod h1:02+gv7Qn4dupzN3fi/9OvqqdW+uG/4Zi56vc4Zmsp1g=
+k8s.io/kubelet v0.26.3/go.mod h1:yd5GJNMOFLMKxP1rmZhg6etbYAbdTimF87fBIBtRimA=
k8s.io/kubernetes v1.26.3 h1:LtjNGNNpCTRyrWhDJMwTWDX+4h+GLwfULS8pu0xzSdk=
k8s.io/kubernetes v1.26.3/go.mod h1:NxzR7U7mS+OGa3J/qweI86Pek//mlfHqDgt6NNGdz8g=
+k8s.io/legacy-cloud-providers v0.26.3/go.mod h1:Scn0CIcptay5seel6MhAzLtoBseK+fL46uJSP84cnPo=
k8s.io/metrics v0.26.3 h1:pHI8XtmBbGGdh7bL0s2C3v93fJfxyktHPAFsnRYnDTo=
k8s.io/metrics v0.26.3/go.mod h1:NNnWARAAz+ZJTs75Z66fJTV7jHcVb3GtrlDszSIr3fE=
k8s.io/mount-utils v0.26.3 h1:FxMDiPLCkrYgonfSaKHWltLNkyTg3Q/Xrwn94uwhd8k=
k8s.io/mount-utils v0.26.3/go.mod h1:95yx9K6N37y8YZ0/lUh9U6ITosMODNaW0/v4wvaa0Xw=
+k8s.io/pod-security-admission v0.26.3/go.mod h1:9I+AV3O26WYsn4jpCD8WvdJy3xvqBWYz43kz0jwco1k=
+k8s.io/sample-apiserver v0.26.3/go.mod h1:M01DHxlAtd35ntPp5YdBoXLLOMN2+ApPlrLPKvATzWA=
+k8s.io/system-validators v1.8.0/go.mod h1:gP1Ky+R9wtrSiFbrpEPwWMeYz9yqyy1S/KOh0Vci7WI=
k8s.io/utils v0.0.0-20200603063816-c1c6865ac451/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
+k8s.io/utils v0.0.0-20201110183641-67b214c5f920/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
k8s.io/utils v0.0.0-20210802155522-efc7438f0176/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
+k8s.io/utils v0.0.0-20211116205334-6203023598ed/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
+k8s.io/utils v0.0.0-20220210201930-3a6ce19ff2f9/go.mod h1:jPW/WVKK9YHAvNhRxK0md/EJ228hCsBRufyofKtW8HA=
k8s.io/utils v0.0.0-20221107191617-1a15be271d1d/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
k8s.io/utils v0.0.0-20221128185143-99ec85e7a448 h1:KTgPnR10d5zhztWptI952TNtt/4u5h3IzDXkdIMuo2Y=
k8s.io/utils v0.0.0-20221128185143-99ec85e7a448/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0=
@@ -1385,16 +2331,20 @@ sigs.k8s.io/controller-runtime v0.6.1/go.mod h1:XRYBPdbf5XJu9kpS84VJiZ7h/u1hF3gE
sigs.k8s.io/controller-runtime v0.14.5 h1:6xaWFqzT5KuAQ9ufgUaj1G/+C4Y1GRkhrxl+BJ9i+5s=
sigs.k8s.io/controller-runtime v0.14.5/go.mod h1:WqIdsAY6JBsjfc/CqO0CORmNtoCtE4S6qbPc9s68h+0=
sigs.k8s.io/controller-tools v0.3.0/go.mod h1:enhtKGfxZD1GFEoMgP8Fdbu+uKQ/cq1/WGJhdVChfvI=
+sigs.k8s.io/json v0.0.0-20211020170558-c049b76a60c6/go.mod h1:p4QtZmO4uMYipTQNzagwnNoseA6OxSUutVw05NhYDRs=
sigs.k8s.io/json v0.0.0-20220713155537-f223a00ba0e2/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0=
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd h1:EDPBXCAspyGV4jQlpZSudPeMmr1bNJefnuqLsRAsHZo=
sigs.k8s.io/json v0.0.0-20221116044647-bc3834ca7abd/go.mod h1:B8JuhiUyNFVKdsE8h686QcCxMaH6HrOAZj4vswFpcB0=
sigs.k8s.io/kind v0.8.1/go.mod h1:oNKTxUVPYkV9lWzY6CVMNluVq8cBsyq+UgPJdvA3uu4=
sigs.k8s.io/kustomize/api v0.12.1 h1:7YM7gW3kYBwtKvoY216ZzY+8hM+lV53LUayghNRJ0vM=
sigs.k8s.io/kustomize/api v0.12.1/go.mod h1:y3JUhimkZkR6sbLNwfJHxvo1TCLwuwm14sCYnkH6S1s=
+sigs.k8s.io/kustomize/cmd/config v0.10.9/go.mod h1:T0s850zPV3wKfBALA0dyeP/K74jlJcoP8Pr9ZWwE3MQ=
+sigs.k8s.io/kustomize/kustomize/v4 v4.5.7/go.mod h1:VSNKEH9D9d9bLiWEGbS6Xbg/Ih0tgQalmPvntzRxZ/Q=
sigs.k8s.io/kustomize/kyaml v0.13.9 h1:Qz53EAaFFANyNgyOEJbT/yoIHygK40/ZcvU3rgry2Tk=
sigs.k8s.io/kustomize/kyaml v0.13.9/go.mod h1:QsRbD0/KcU+wdk0/L0fIp2KLnohkVzs6fQ85/nOXac4=
sigs.k8s.io/mcs-api v0.1.0 h1:edDbg0oRGfXw8TmZjKYep06LcJLv/qcYLidejnUp0PM=
sigs.k8s.io/mcs-api v0.1.0/go.mod h1:gGiAryeFNB4GBsq2LBmVqSgKoobLxt+p7ii/WG5QYYw=
+sigs.k8s.io/structured-merge-diff/v4 v4.0.2/go.mod h1:bJZC9H9iH24zzfZ/41RGcq60oK1F7G282QMXDPYydCw=
sigs.k8s.io/structured-merge-diff/v4 v4.2.3 h1:PRbqxJClWWYMNV1dhaG4NsibJbArud9kFxnAMREiWFE=
sigs.k8s.io/structured-merge-diff/v4 v4.2.3/go.mod h1:qjx8mGObPmV2aSZepjQjbmb2ihdVs8cGKBraizNC69E=
sigs.k8s.io/yaml v1.2.0/go.mod h1:yfXDCHCao9+ENCvLSE62v9VSji2MKu5jeNfTrofGhJc=
diff --git a/hack/OWNERS b/hack/OWNERS
new file mode 100644
index 000000000..c6b629db9
--- /dev/null
+++ b/hack/OWNERS
@@ -0,0 +1,6 @@
+approvers:
+ - wuyingjun-lucky
+ - duanmengkk
+reviewers:
+ - wuyingjun-lucky
+ - duanmengkk
diff --git a/hack/build.sh b/hack/build.sh
index d88cdc889..3d547f6c9 100755
--- a/hack/build.sh
+++ b/hack/build.sh
@@ -52,10 +52,6 @@ function build_binary_for_platform() {
-o "_output/bin/${platform}/$target" \
"${target_pkg}"
set +x
-
- if [[ "${target}" == "clusterlink-floater" ]]; then
- cp -r "cmd/clusterlink/floater/certificate" "_output/bin/${platform}/"
- fi
}
build_binary "$@"
diff --git a/hack/cluster.sh b/hack/cluster.sh
index 4f1401a15..6a04e1f5b 100755
--- a/hack/cluster.sh
+++ b/hack/cluster.sh
@@ -4,14 +4,18 @@ set -o errexit
set -o nounset
set -o pipefail
+HOST_CLUSTER_NAME="cluster-host"
CURRENT="$(dirname "${BASH_SOURCE[0]}")"
ROOT=$(dirname "${BASH_SOURCE[0]}")/..
-DEFAULT_NAMESPACE="clusterlink-system"
-KIND_IMAGE="ghcr.io/kosmos-io/kindest/node:v1.25.3_1"
+KIND_IMAGE="ghcr.io/kosmos-io/node:v1.25.3"
# true: when cluster is exist, reuse exist one!
REUSE=${REUSE:-false}
VERSION=${VERSION:-latest}
+# Default cert and key for the node server's HTTPS endpoint
+CERT=$(base64 -w 0 < "${ROOT}/pkg/cert/crt.pem")
+KEY=$(base64 -w 0 < "${ROOT}/pkg/cert/key.pem")
+
CN_ZONE=${CN_ZONE:-false}
if [ $REUSE == true ]; then
@@ -45,8 +49,6 @@ function create_cluster() {
sed -e "s|__POD_CIDR__|$podcidr|g" -e "s|__SERVICE_CIDR__|$servicecidr|g" -e "w ${CLUSTER_DIR}/calicoconfig" "${CURRENT}/clustertemplete/calicoconfig"
fi
-
-
if [[ "$(kind get clusters | grep -c "${clustername}")" -eq 1 && "${REUSE}" = true ]]; then
echo "cluster ${clustername} exist reuse it"
else
@@ -69,6 +71,9 @@ function create_cluster() {
docker pull docker.io/calico/kube-controllers:v3.25.0
docker pull docker.io/calico/node:v3.25.0
docker pull docker.io/calico/csi:v3.25.0
+ docker pull docker.io/percona:5.7
+ docker pull docker.io/library/nginx:latest
+ docker pull docker.io/library/busybox:latest
else
docker pull quay.m.daocloud.io/tigera/operator:v1.29.0
docker pull docker.m.daocloud.io/calico/cni:v3.25.0
@@ -77,6 +82,9 @@ function create_cluster() {
docker pull docker.m.daocloud.io/calico/kube-controllers:v3.25.0
docker pull docker.m.daocloud.io/calico/node:v3.25.0
docker pull docker.m.daocloud.io/calico/csi:v3.25.0
+ docker pull docker.m.daocloud.io/percona:5.7
+ docker pull docker.m.daocloud.io/library/nginx:latest
+ docker pull docker.m.daocloud.io/library/busybox:latest
docker tag quay.m.daocloud.io/tigera/operator:v1.29.0 quay.io/tigera/operator:v1.29.0
docker tag docker.m.daocloud.io/calico/cni:v3.25.0 docker.io/calico/cni:v3.25.0
@@ -85,6 +93,9 @@ function create_cluster() {
docker tag docker.m.daocloud.io/calico/kube-controllers:v3.25.0 docker.io/calico/kube-controllers:v3.25.0
docker tag docker.m.daocloud.io/calico/node:v3.25.0 docker.io/calico/node:v3.25.0
docker tag docker.m.daocloud.io/calico/csi:v3.25.0 docker.io/calico/csi:v3.25.0
+ docker tag docker.m.daocloud.io/percona:5.7 docker.io/percona:5.7
+ docker tag docker.m.daocloud.io/library/nginx:latest docker.io/library/nginx:latest
+ docker tag docker.m.daocloud.io/library/busybox:latest docker.io/library/busybox:latest
fi
kind load docker-image -n "$clustername" quay.io/tigera/operator:v1.29.0
@@ -94,7 +105,36 @@ function create_cluster() {
kind load docker-image -n "$clustername" docker.io/calico/kube-controllers:v3.25.0
kind load docker-image -n "$clustername" docker.io/calico/node:v3.25.0
kind load docker-image -n "$clustername" docker.io/calico/csi:v3.25.0
-
+ kind load docker-image -n "$clustername" docker.io/percona:5.7
+ kind load docker-image -n "$clustername" docker.io/library/nginx:latest
+ kind load docker-image -n "$clustername" docker.io/library/busybox:latest
+
+  if [ "${clustername}" == "${HOST_CLUSTER_NAME}" ]; then
+ if [ "${CN_ZONE}" == false ]; then
+ docker pull docker.io/bitpoke/mysql-operator-orchestrator:v0.6.3
+ docker pull docker.io/prom/mysqld-exporter:v0.13.0
+ docker pull docker.io/bitpoke/mysql-operator-sidecar-8.0:v0.6.3
+ docker pull docker.io/bitpoke/mysql-operator-sidecar-5.7:v0.6.3
+ docker pull docker.io/bitpoke/mysql-operator:v0.6.3
+ else
+ docker pull docker.m.daocloud.io/bitpoke/mysql-operator-orchestrator:v0.6.3
+ docker pull docker.m.daocloud.io/prom/mysqld-exporter:v0.13.0
+ docker pull docker.m.daocloud.io/bitpoke/mysql-operator-sidecar-8.0:v0.6.3
+ docker pull docker.m.daocloud.io/bitpoke/mysql-operator-sidecar-5.7:v0.6.3
+ docker pull docker.m.daocloud.io/bitpoke/mysql-operator:v0.6.3
+
+ docker tag docker.m.daocloud.io/bitpoke/mysql-operator-orchestrator:v0.6.3 docker.io/bitpoke/mysql-operator-orchestrator:v0.6.3
+ docker tag docker.m.daocloud.io/prom/mysqld-exporter:v0.13.0 docker.io/prom/mysqld-exporter:v0.13.0
+ docker tag docker.m.daocloud.io/bitpoke/mysql-operator-sidecar-8.0:v0.6.3 docker.io/bitpoke/mysql-operator-sidecar-8.0:v0.6.3
+ docker tag docker.m.daocloud.io/bitpoke/mysql-operator-sidecar-5.7:v0.6.3 docker.io/bitpoke/mysql-operator-sidecar-5.7:v0.6.3
+ docker tag docker.m.daocloud.io/bitpoke/mysql-operator:v0.6.3 docker.io/bitpoke/mysql-operator:v0.6.3
+ fi
+ kind load docker-image -n "$clustername" docker.io/bitpoke/mysql-operator-orchestrator:v0.6.3
+ kind load docker-image -n "$clustername" docker.io/prom/mysqld-exporter:v0.13.0
+ kind load docker-image -n "$clustername" docker.io/bitpoke/mysql-operator-sidecar-8.0:v0.6.3
+ kind load docker-image -n "$clustername" docker.io/bitpoke/mysql-operator-sidecar-5.7:v0.6.3
+ kind load docker-image -n "$clustername" docker.io/bitpoke/mysql-operator:v0.6.3
+ fi
kubectl --context="kind-${clustername}" create -f "$CURRENT/calicooperator/tigera-operator.yaml" || $("${REUSE}" -eq "true")
kind export kubeconfig --name "$clustername"
util::wait_for_crd installations.operator.tigera.io
@@ -108,37 +148,50 @@ function create_cluster() {
300
echo "all node ready"
+ kubectl --context="kind-${clustername}" apply -f "$ROOT"/deploy/crds/mcs
}
function join_cluster() {
local host_cluster=$1
local member_cluster=$2
- local container_ip_port
+ local kubeconfig_path="${ROOT}/environments/${member_cluster}/kubeconfig"
+ local base64_kubeconfig=$(base64 -w 0 < "$kubeconfig_path")
+  echo "kubeconfig successfully encoded to base64: ${base64_kubeconfig}"
+
+ local common_metadata=""
+ if [ "$host_cluster" == "$member_cluster" ]; then
+ common_metadata="annotations:
+ kosmos.io/cluster-role: root"
+ fi
+
cat < 0) exit 0; else exit 1}'" \
+ 300
+ echo "cluster $clustername deploy kosmos-scheduler success"
+
+ docker exec ${clustername}-control-plane /bin/sh -c "mv /etc/kubernetes/manifests/kube-scheduler.yaml /etc/kubernetes"
}
-function load_clusterlink_images() {
+function load_cluster_images() {
local -r clustername=$1
- kind load docker-image -n "$clustername" ghcr.io/kosmos-io/clusterlink/clusterlink-network-manager:"${VERSION}"
- kind load docker-image -n "$clustername" ghcr.io/kosmos-io/clusterlink/clusterlink-controller-manager:"${VERSION}"
- kind load docker-image -n "$clustername" ghcr.io/kosmos-io/clusterlink/clusterlink-elector:"${VERSION}"
- kind load docker-image -n "$clustername" ghcr.io/kosmos-io/clusterlink/clusterlink-operator:"${VERSION}"
- kind load docker-image -n "$clustername" ghcr.io/kosmos-io/clusterlink/clusterlink-agent:"${VERSION}"
- kind load docker-image -n "$clustername" ghcr.io/kosmos-io/clusterlink/clusterlink-proxy:"${VERSION}"
+ kind load docker-image -n "$clustername" ghcr.io/kosmos-io/clusterlink-network-manager:"${VERSION}"
+ kind load docker-image -n "$clustername" ghcr.io/kosmos-io/clusterlink-controller-manager:"${VERSION}"
+ kind load docker-image -n "$clustername" ghcr.io/kosmos-io/clusterlink-elector:"${VERSION}"
+ kind load docker-image -n "$clustername" ghcr.io/kosmos-io/kosmos-operator:"${VERSION}"
+ kind load docker-image -n "$clustername" ghcr.io/kosmos-io/clusterlink-agent:"${VERSION}"
+ kind load docker-image -n "$clustername" ghcr.io/kosmos-io/clusterlink-proxy:"${VERSION}"
+ kind load docker-image -n "$clustername" ghcr.io/kosmos-io/clustertree-cluster-manager:"${VERSION}"
+ kind load docker-image -n "$clustername" ghcr.io/kosmos-io/scheduler:"${VERSION}"
}
function delete_cluster() {
diff --git a/hack/clustertemplete/kindconfig b/hack/clustertemplete/kindconfig
index 1da57bce6..c0f612fa2 100644
--- a/hack/clustertemplete/kindconfig
+++ b/hack/clustertemplete/kindconfig
@@ -2,9 +2,9 @@ kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
-- role: worker
+#- role: worker
networking:
ipFamily: __IP_FAMILY__
disableDefaultCNI: true # disable kindnet
podSubnet: __POD_CIDR__
- serviceSubnet: __SERVICE_CIDR__
\ No newline at end of file
+ serviceSubnet: __SERVICE_CIDR__
diff --git a/hack/generate/generate.go b/hack/generate/generate.go
index 1e832344f..61f9935cc 100644
--- a/hack/generate/generate.go
+++ b/hack/generate/generate.go
@@ -8,28 +8,94 @@ import (
"go/parser"
"go/token"
"os"
+ "strings"
"github.com/kosmos.io/kosmos/hack/projectpath"
)
+func readFileAndTransformBackQuote(path string) (string, error) {
+ content, err := os.ReadFile(path)
+ if err != nil {
+ return "", err
+ }
+ parts := strings.Split(string(content), "`")
+ return strings.Join(parts, "` + \"`\" + `"), nil
+}
+
+func updateAST(node *ast.File, name string, value string) {
+ found := false
+ ast.Inspect(node, func(n ast.Node) bool {
+ if ident, ok := n.(*ast.Ident); ok && ident.Obj != nil && ident.Obj.Kind == ast.Con && ident.Obj.Name == name {
+ valueSpec := ident.Obj.Decl.(*ast.ValueSpec)
+ valueSpec.Values[0] = &ast.BasicLit{
+ Kind: token.STRING,
+ Value: fmt.Sprintf("`%s`", value),
+ }
+ found = true
+ return false
+ }
+ return true
+ })
+
+ if !found {
+ // Add new variable if not found
+ valueSpec := &ast.ValueSpec{
+ Names: []*ast.Ident{ast.NewIdent(name)},
+ Values: []ast.Expr{
+ &ast.BasicLit{
+ Kind: token.STRING,
+ Value: fmt.Sprintf("`%s`", value),
+ },
+ },
+ }
+ decl := &ast.GenDecl{
+ Tok: token.CONST,
+ Specs: []ast.Spec{valueSpec},
+ }
+ node.Decls = append(node.Decls, decl)
+ }
+}
+
func main() {
- clusterNodeCRD, err := os.ReadFile(fmt.Sprintf("%s/deploy/crds/kosmos.io_clusternodes.yaml", projectpath.Root))
+ clusterNodeCRD, err := readFileAndTransformBackQuote(fmt.Sprintf("%s/deploy/crds/kosmos.io_clusternodes.yaml", projectpath.Root))
+ if err != nil {
+ fmt.Println("can not read file:", err)
+ return
+ }
+ clusterCRD, err := readFileAndTransformBackQuote(fmt.Sprintf("%s/deploy/crds/kosmos.io_clusters.yaml", projectpath.Root))
+ if err != nil {
+ fmt.Println("can not read file:", err)
+ return
+ }
+ nodeConfigCRD, err := readFileAndTransformBackQuote(fmt.Sprintf("%s/deploy/crds/kosmos.io_nodeconfigs.yaml", projectpath.Root))
+ if err != nil {
+ fmt.Println("can not read file:", err)
+ return
+ }
+ serviceImportCRD, err := readFileAndTransformBackQuote(fmt.Sprintf("%s/deploy/crds/mcs/multicluster.x-k8s.io_serviceimports.yaml", projectpath.Root))
+ if err != nil {
+ fmt.Println("can not read file:", err)
+ return
+ }
+ serviceExportCRD, err := readFileAndTransformBackQuote(fmt.Sprintf("%s/deploy/crds/mcs/multicluster.x-k8s.io_serviceexports.yaml", projectpath.Root))
if err != nil {
fmt.Println("can not read file:", err)
return
}
- clusterCRD, err := os.ReadFile(fmt.Sprintf("%s/deploy/crds/kosmos.io_clusters.yaml", projectpath.Root))
+
+ daemonSetCRD, err := readFileAndTransformBackQuote(fmt.Sprintf("%s/deploy/crds/kosmos.io_daemonsets.yaml", projectpath.Root))
if err != nil {
fmt.Println("can not read file:", err)
return
}
- nodeConfigCRD, err := os.ReadFile(fmt.Sprintf("%s/deploy/crds/kosmos.io_nodeconfigs.yaml", projectpath.Root))
+
+ shadowDaemonSetCRD, err := readFileAndTransformBackQuote(fmt.Sprintf("%s/deploy/crds/kosmos.io_shadowdaemonsets.yaml", projectpath.Root))
if err != nil {
fmt.Println("can not read file:", err)
return
}
- filename := fmt.Sprintf("%s/pkg/clusterlinkctl/initmaster/ctlmaster/manifests_crd.go", projectpath.Root)
+ filename := fmt.Sprintf("%s/pkg/kosmosctl/manifest/manifest_crds.go", projectpath.Root)
fset := token.NewFileSet()
node, err := parser.ParseFile(fset, filename, nil, parser.ParseComments)
if err != nil {
@@ -37,41 +103,13 @@ func main() {
os.Exit(1)
}
- ast.Inspect(node, func(n ast.Node) bool {
- if ident, ok := n.(*ast.Ident); ok && ident.Obj != nil && ident.Obj.Kind == ast.Con && ident.Obj.Name == "ClusterNode" {
- valueSpec := ident.Obj.Decl.(*ast.ValueSpec)
- valueSpec.Values[0] = &ast.BasicLit{
- Kind: token.STRING,
- Value: fmt.Sprintf("`%s`", clusterNodeCRD),
- }
- return false
- }
- return true
- })
-
- ast.Inspect(node, func(n ast.Node) bool {
- if ident, ok := n.(*ast.Ident); ok && ident.Obj != nil && ident.Obj.Kind == ast.Con && ident.Obj.Name == "Cluster" {
- valueSpec := ident.Obj.Decl.(*ast.ValueSpec)
- valueSpec.Values[0] = &ast.BasicLit{
- Kind: token.STRING,
- Value: fmt.Sprintf("`%s`", clusterCRD),
- }
- return false
- }
- return true
- })
-
- ast.Inspect(node, func(n ast.Node) bool {
- if ident, ok := n.(*ast.Ident); ok && ident.Obj != nil && ident.Obj.Kind == ast.Con && ident.Obj.Name == "NodeConfig" {
- valueSpec := ident.Obj.Decl.(*ast.ValueSpec)
- valueSpec.Values[0] = &ast.BasicLit{
- Kind: token.STRING,
- Value: fmt.Sprintf("`%s`", nodeConfigCRD),
- }
- return false
- }
- return true
- })
+ updateAST(node, "ClusterNode", clusterNodeCRD)
+ updateAST(node, "Cluster", clusterCRD)
+ updateAST(node, "NodeConfig", nodeConfigCRD)
+ updateAST(node, "ServiceImport", serviceImportCRD)
+ updateAST(node, "ServiceExport", serviceExportCRD)
+ updateAST(node, "DaemonSet", daemonSetCRD)
+ updateAST(node, "ShadowDaemonSet", shadowDaemonSetCRD)
var buf bytes.Buffer
err = format.Node(&buf, fset, node)
@@ -80,7 +118,6 @@ func main() {
os.Exit(1)
}
code := buf.String()
- //fmt.Println(code)
err = os.WriteFile(filename, []byte(code), 0600)
if err != nil {
diff --git a/hack/install-go.sh b/hack/install-go.sh
new file mode 100644
index 000000000..bc5c4d754
--- /dev/null
+++ b/hack/install-go.sh
@@ -0,0 +1,43 @@
+#!/usr/bin/env bash
+
+set -o errexit
+set -o nounset
+set -o pipefail
+
+# Function to install Go if it is not already installed
+install_go() {
+ echo "Go is not installed. Installing..."
+
+ # Specify the Go version you want to install
+ GO_VERSION="1.20" # Change this to the desired Go version
+
+ # Set the Go installation path
+ GO_INSTALL_PATH="/usr/local"
+
+ # Download and install Go
+ # -L follows the redirect from golang.org/dl to dl.google.com; without it the saved file is the redirect page
+ curl -OL https://golang.org/dl/go$GO_VERSION.linux-amd64.tar.gz
+ # Remove any previous installation first so versions do not mix, then unpack
+ rm -rf "$GO_INSTALL_PATH/go"
+ tar -C "$GO_INSTALL_PATH" -xzf go$GO_VERSION.linux-amd64.tar.gz
+
+ # Set Go environment variables
+ export GOROOT=$GO_INSTALL_PATH/go
+ export GOPATH=$HOME/go
+ export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
+
+ # Cleanup downloaded files
+ rm go$GO_VERSION.linux-amd64.tar.gz
+
+ echo "Go installation complete."
+}
+
+# Check if Go is installed
+if ! command -v go &> /dev/null; then
+ install_go
+fi
+
+# Verify the Go version
+if ! go version | grep -q "go1.20"; then
+ echo "Installed Go version does not match the required version (1.20)."
+ install_go
+fi
+
+echo "Go is installed and the version is correct."
diff --git a/hack/install_kind_kubectl.sh b/hack/install_kind_kubectl.sh
new file mode 100644
index 000000000..fbfe1f0c9
--- /dev/null
+++ b/hack/install_kind_kubectl.sh
@@ -0,0 +1,40 @@
+#!/usr/bin/env bash
+
+set -o errexit
+set -o nounset
+set -o pipefail
+
+ROOT="$(dirname "${BASH_SOURCE[0]}")"
+source "${ROOT}/util.sh"
+
+# Make sure go exists and the go version is a viable version.
+if command -v go &> /dev/null; then
+ util::verify_go_version
+else
+ source "$(dirname "${BASH_SOURCE[0]}")/install-go.sh"
+fi
+
+# Make sure docker exists
+util::cmd_must_exist "docker"
+
+# install kind and kubectl
+kind_version=v0.20.0
+echo -n "Preparing: 'kind' existence check - "
+if util::cmd_exist kind; then
+ echo "passed"
+else
+ echo "not passed, installing"
+ util::install_tools "sigs.k8s.io/kind" $kind_version
+fi
+# get arch name and os name in bootstrap
+BS_ARCH=$(go env GOARCH)
+BS_OS=$(go env GOOS)
+# check arch and os name before installing
+util::install_environment_check "${BS_ARCH}" "${BS_OS}"
+echo -n "Preparing: 'kubectl' existence check - "
+if util::cmd_exist kubectl; then
+ echo "passed"
+else
+ echo "not passed, installing"
+ util::install_kubectl "" "${BS_ARCH}" "${BS_OS}"
+fi
diff --git a/hack/local-down-clusterlink.sh b/hack/local-down-clusterlink.sh
index 514e02ccb..5faad623c 100755
--- a/hack/local-down-clusterlink.sh
+++ b/hack/local-down-clusterlink.sh
@@ -4,9 +4,10 @@ set -o errexit
set -o nounset
set -o pipefail
-HOST_CLUSTER_NAME="cluster-host-local"
+HOST_CLUSTER_NAME="cluster-host"
-MEMBER1_CLUSTER_NAME="cluster-member1-local"
+MEMBER1_CLUSTER_NAME="cluster-member1"
+MEMBER2_CLUSTER_NAME="cluster-member2"
ROOT="$(dirname "${BASH_SOURCE[0]}")"
source "$(dirname "${BASH_SOURCE[0]}")/cluster.sh"
@@ -14,6 +15,7 @@ source "$(dirname "${BASH_SOURCE[0]}")/cluster.sh"
#cluster cluster
delete_cluster $HOST_CLUSTER_NAME
delete_cluster $MEMBER1_CLUSTER_NAME
+delete_cluster $MEMBER2_CLUSTER_NAME
-echo "clusterlink local down success"
\ No newline at end of file
+echo "clusterlink local down success"
diff --git a/hack/local-up-clusterlink.sh b/hack/local-up-clusterlink.sh
index 246b6fc1b..1f1a6db79 100755
--- a/hack/local-up-clusterlink.sh
+++ b/hack/local-up-clusterlink.sh
@@ -4,29 +4,38 @@ set -o errexit
set -o nounset
set -o pipefail
-HOST_CLUSTER_NAME="cluster-host-local"
+HOST_CLUSTER_NAME="cluster-host"
HOST_CLUSTER_POD_CIDR="10.233.64.0/18"
HOST_CLUSTER_SERVICE_CIDR="10.233.0.0/18"
-MEMBER1_CLUSTER_NAME="cluster-member1-local"
+MEMBER1_CLUSTER_NAME="cluster-member1"
MEMBER1_CLUSTER_POD_CIDR="10.234.64.0/18"
MEMBER1_CLUSTER_SERVICE_CIDR="10.234.0.0/18"
+MEMBER2_CLUSTER_NAME="cluster-member2"
+MEMBER2_CLUSTER_POD_CIDR="10.235.64.0/18"
+MEMBER2_CLUSTER_SERVICE_CIDR="10.235.0.0/18"
+
export VERSION="latest"
ROOT="$(dirname "${BASH_SOURCE[0]}")"
+source "$(dirname "${BASH_SOURCE[0]}")/install_kind_kubectl.sh"
source "$(dirname "${BASH_SOURCE[0]}")/cluster.sh"
make images GOOS="linux" --directory="${ROOT}"
#cluster cluster
create_cluster $HOST_CLUSTER_NAME $HOST_CLUSTER_POD_CIDR $HOST_CLUSTER_SERVICE_CIDR true
create_cluster $MEMBER1_CLUSTER_NAME $MEMBER1_CLUSTER_POD_CIDR $MEMBER1_CLUSTER_SERVICE_CIDR true
-#deploy clusterlink
-deploy_clusterlink $HOST_CLUSTER_NAME
-load_clusterlink_images $MEMBER1_CLUSTER_NAME
+create_cluster $MEMBER2_CLUSTER_NAME $MEMBER2_CLUSTER_POD_CIDR $MEMBER2_CLUSTER_SERVICE_CIDR true
+
+#deploy cluster
+deploy_cluster $HOST_CLUSTER_NAME
+load_cluster_images $MEMBER1_CLUSTER_NAME
+load_cluster_images $MEMBER2_CLUSTER_NAME
#join cluster
join_cluster $HOST_CLUSTER_NAME $HOST_CLUSTER_NAME
join_cluster $HOST_CLUSTER_NAME $MEMBER1_CLUSTER_NAME
+join_cluster $HOST_CLUSTER_NAME $MEMBER2_CLUSTER_NAME
-echo "clusterlink local start success enjoy it!"
+echo "cluster local start success, enjoy it!"
diff --git a/hack/prepare-e2e.sh b/hack/prepare-e2e.sh
new file mode 100755
index 000000000..763bcffc7
--- /dev/null
+++ b/hack/prepare-e2e.sh
@@ -0,0 +1,42 @@
+#!/usr/bin/env bash
+
+set -o errexit
+set -o nounset
+set -o pipefail
+
+ENABLED_MYSQL_E2E=${MYSQL_E2E:-true}
+
+KUBECONFIG_PATH=${KUBECONFIG_PATH:-"${HOME}/.kube"}
+export KUBECONFIG=$KUBECONFIG_PATH/"config"
+
+HOST_CLUSTER_NAME="cluster-host"
+HOST_CLUSTER_POD_CIDR="10.233.64.0/18"
+HOST_CLUSTER_SERVICE_CIDR="10.233.0.0/18"
+
+MEMBER1_CLUSTER_NAME="cluster-member1"
+MEMBER1_CLUSTER_POD_CIDR="10.234.64.0/18"
+MEMBER1_CLUSTER_SERVICE_CIDR="10.234.0.0/18"
+
+MEMBER2_CLUSTER_NAME="cluster-member2"
+MEMBER2_CLUSTER_POD_CIDR="10.235.64.0/18"
+MEMBER2_CLUSTER_SERVICE_CIDR="10.235.0.0/18"
+
+ROOT="$(dirname "${BASH_SOURCE[0]}")"
+export VERSION="latest"
+source "$(dirname "${BASH_SOURCE[0]}")/install_kind_kubectl.sh"
+source "$(dirname "${BASH_SOURCE[0]}")/cluster.sh"
+make images GOOS="linux" --directory="${ROOT}"
+
+#create clusters
+create_cluster $HOST_CLUSTER_NAME $HOST_CLUSTER_POD_CIDR $HOST_CLUSTER_SERVICE_CIDR
+create_cluster $MEMBER1_CLUSTER_NAME $MEMBER1_CLUSTER_POD_CIDR $MEMBER1_CLUSTER_SERVICE_CIDR false
+create_cluster $MEMBER2_CLUSTER_NAME $MEMBER2_CLUSTER_POD_CIDR $MEMBER2_CLUSTER_SERVICE_CIDR false
+#deploy cluster
+deploy_cluster $HOST_CLUSTER_NAME
+load_cluster_images $MEMBER1_CLUSTER_NAME
+load_cluster_images $MEMBER2_CLUSTER_NAME
+
+#join cluster
+join_cluster $HOST_CLUSTER_NAME $HOST_CLUSTER_NAME
+join_cluster $HOST_CLUSTER_NAME $MEMBER1_CLUSTER_NAME
+join_cluster $HOST_CLUSTER_NAME $MEMBER2_CLUSTER_NAME
diff --git a/hack/rune2e.sh b/hack/rune2e.sh
index 52a3899a7..2f2e1e9cd 100755
--- a/hack/rune2e.sh
+++ b/hack/rune2e.sh
@@ -7,35 +7,48 @@ set -o pipefail
KUBECONFIG_PATH=${KUBECONFIG_PATH:-"${HOME}/.kube"}
export KUBECONFIG=$KUBECONFIG_PATH/"config"
-HOST_CLUSTER_NAME="cluster-host"
-HOST_CLUSTER_POD_CIDR="10.233.64.0/18"
-HOST_CLUSTER_SERVICE_CIDR="10.233.0.0/18"
+E2E_NAMESPACE="kosmos-e2e"
+HOST_CLUSTER_NAME="cluster-host"
MEMBER1_CLUSTER_NAME="cluster-member1"
-MEMBER1_CLUSTER_POD_CIDR="10.234.64.0/18"
-MEMBER1_CLUSTER_SERVICE_CIDR="10.234.0.0/18"
-
MEMBER2_CLUSTER_NAME="cluster-member2"
-MEMBER2_CLUSTER_POD_CIDR="10.235.64.0/18"
-MEMBER2_CLUSTER_SERVICE_CIDR="10.235.0.0/18"
ROOT="$(dirname "${BASH_SOURCE[0]}")"
-export VERSION="latest"
-source "$(dirname "${BASH_SOURCE[0]}")/cluster.sh"
-make images GOOS="linux" --directory="${ROOT}"
+source "${ROOT}/util.sh"
+
+# e2e for nginx and mcs
+kubectl --context="kind-${HOST_CLUSTER_NAME}" apply -f "${ROOT}"/../test/e2e/deploy/nginx
+util::wait_for_condition "nginx are ready" \
+ "kubectl --context=kind-${HOST_CLUSTER_NAME} -n ${E2E_NAMESPACE} get pod -l app=nginx | awk 'NR>1 {if (\$3 == \"Running\") exit 0; else exit 1; }'" \
+ 120
+
+util::wait_for_condition "mcs of member1 are ready" \
+ "[ \$(kubectl --context=kind-${MEMBER1_CLUSTER_NAME} -n ${E2E_NAMESPACE} get endpointslices.discovery.k8s.io --no-headers -l kubernetes.io\/service-name=nginx-service | wc -l) -eq 1 ] " \
+ 120
-#cluster cluster
-create_cluster $HOST_CLUSTER_NAME $HOST_CLUSTER_POD_CIDR $HOST_CLUSTER_SERVICE_CIDR
-create_cluster $MEMBER1_CLUSTER_NAME $MEMBER1_CLUSTER_POD_CIDR $MEMBER1_CLUSTER_SERVICE_CIDR true
-#deploy clusterlink
-deploy_clusterlink $HOST_CLUSTER_NAME
-load_clusterlink_images $MEMBER1_CLUSTER_NAME
+util::wait_for_condition "mcs of member2 are ready" \
+ "[ \$(kubectl --context=kind-${MEMBER2_CLUSTER_NAME} -n ${E2E_NAMESPACE} get endpointslices.discovery.k8s.io --no-headers -l kubernetes.io\/service-name=nginx-service | wc -l) -eq 1 ] " \
+ 120
-#join cluster
-join_cluster $HOST_CLUSTER_NAME $HOST_CLUSTER_NAME
-join_cluster $HOST_CLUSTER_NAME $MEMBER1_CLUSTER_NAME
+nginx_service_ip=$(kubectl -n kosmos-e2e get svc nginx-service -o=jsonpath='{.spec.clusterIP}')
-echo "e2e test enviroment init success"
+# e2e test for access nginx service
+sleep 100 && docker exec -i ${HOST_CLUSTER_NAME}-control-plane sh -c "curl -sSf -m 5 ${nginx_service_ip}:80" && echo "success" || { echo "fail"; exit 1; }
+
+# e2e for mysql-operator
+kubectl --context="kind-cluster-host" apply -f "${ROOT}"/../test/e2e/deploy/mysql-operator
+util::wait_for_condition "mysql operator are ready" \
+ "kubectl --context=kind-${HOST_CLUSTER_NAME} get pods -n mysql-operator mysql-operator-0 | awk 'NR>1 {if (\$3 == \"Running\") exit 0; else exit 1; }'" \
+ 300
+
+kubectl --context="kind-${HOST_CLUSTER_NAME}" apply -f "${ROOT}"/../test/e2e/deploy/cr
+
+util::wait_for_condition "mysql cr are ready" \
+ "[ \$(kubectl get pods -n kosmos-e2e --field-selector=status.phase=Running --no-headers | wc -l) -eq 2 ]" \
+ 1200
+
+echo "E2E test of mysql-operator succeeded"
# Install ginkgo
GO111MODULE=on go install github.com/onsi/ginkgo/v2/ginkgo
@@ -44,7 +57,7 @@ set +e
ginkgo -v --race --trace --fail-fast -p --randomize-all ./test/e2e/ --
TESTING_RESULT=$?
-LOG_PATH=$ROOT/e2e-logs
+LOG_PATH=$ROOT/../e2e-logs
echo "Collect logs to $LOG_PATH..."
mkdir -p "$LOG_PATH"
@@ -56,6 +69,10 @@ echo "Collecting $MEMBER1_CLUSTER_NAME logs..."
mkdir -p "$MEMBER1_CLUSTER_NAME/$MEMBER1_CLUSTER_NAME"
kind export logs --name="$MEMBER1_CLUSTER_NAME" "$LOG_PATH/$MEMBER1_CLUSTER_NAME"
+echo "Collecting $MEMBER2_CLUSTER_NAME logs..."
+mkdir -p "$LOG_PATH/$MEMBER2_CLUSTER_NAME"
+kind export logs --name="$MEMBER2_CLUSTER_NAME" "$LOG_PATH/$MEMBER2_CLUSTER_NAME"
+
#TODO delete cluster
-exit $TESTING_RESULT
\ No newline at end of file
+exit $TESTING_RESULT
diff --git a/hack/update-crds.sh b/hack/update-crds.sh
index 6d44b72bc..caffc8b0d 100755
--- a/hack/update-crds.sh
+++ b/hack/update-crds.sh
@@ -14,4 +14,4 @@ export PATH=$PATH:$GOPATH/bin
controller-gen crd paths=./pkg/apis/kosmos/... output:crd:dir="${REPO_ROOT}/deploy/crds"
-go run "${REPO_ROOT}/hack/generate/generate.go"
\ No newline at end of file
+#go run "${REPO_ROOT}/hack/generate/generate.go"
diff --git a/hack/util.sh b/hack/util.sh
index e9a620e9c..d879d7837 100755
--- a/hack/util.sh
+++ b/hack/util.sh
@@ -15,7 +15,7 @@ MIN_Go_VERSION=go1.19.0
CLUSTERLINK_TARGET_SOURCE=(
scheduler=cmd/scheduler
clusterlink-proxy=cmd/clusterlink/proxy
- clusterlink-operator=cmd/clusterlink/operator
+ kosmos-operator=cmd/operator
clusterlink-elector=cmd/clusterlink/elector
clusterlink-agent=cmd/clusterlink/agent
clusterlink-floater=cmd/clusterlink/floater
diff --git a/pkg/apis/kosmos/v1alpha1/cluster_types.go b/pkg/apis/kosmos/v1alpha1/cluster_types.go
index fa51372a0..8ba7bf20c 100644
--- a/pkg/apis/kosmos/v1alpha1/cluster_types.go
+++ b/pkg/apis/kosmos/v1alpha1/cluster_types.go
@@ -1,6 +1,7 @@
package v1alpha1
import (
+ corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
@@ -8,8 +9,8 @@ import (
// +genclient:nonNamespaced
// +kubebuilder:resource:scope="Cluster"
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
-// +kubebuilder:printcolumn:name="NETWORK_TYPE",type=string,JSONPath=`.spec.networkType`
-// +kubebuilder:printcolumn:name="IP_FAMILY",type=string,JSONPath=`.spec.ipFamily`
+// +kubebuilder:printcolumn:name="NETWORK_TYPE",type=string,JSONPath=`.spec.clusterLinkOptions.networkType`
+// +kubebuilder:printcolumn:name="IP_FAMILY",type=string,JSONPath=`.spec.clusterLinkOptions.ipFamily`
type Cluster struct {
metav1.TypeMeta `json:",inline"`
@@ -24,32 +25,66 @@ type Cluster struct {
}
type ClusterSpec struct {
+ // +optional
+ Kubeconfig []byte `json:"kubeconfig,omitempty"`
+
+ // +kubebuilder:default=kosmos-system
+ // +optional
+ Namespace string `json:"namespace"`
+
+ // +optional
+ ImageRepository string `json:"imageRepository,omitempty"`
+
+ // +optional
+ ClusterLinkOptions *ClusterLinkOptions `json:"clusterLinkOptions,omitempty"`
+
+ // +optional
+ ClusterTreeOptions *ClusterTreeOptions `json:"clusterTreeOptions,omitempty"`
+}
+
+type ClusterStatus struct {
+ // ClusterLinkStatus contains the cluster network information
+ // +optional
+ ClusterLinkStatus ClusterLinkStatus `json:"clusterLinkStatus,omitempty"`
+
+ // ClusterTreeStatus contains the leaf node status of the member cluster
+ // +optional
+ ClusterTreeStatus ClusterTreeStatus `json:"clusterTreeStatus,omitempty"`
+}
+
+type ClusterLinkOptions struct {
+ // +kubebuilder:default=true
+ // +optional
+ Enable bool `json:"enable"`
+
// +kubebuilder:default=calico
// +optional
CNI string `json:"cni"`
+
// +kubebuilder:validation:Enum=p2p;gateway
// +kubebuilder:default=p2p
// +optional
NetworkType NetworkType `json:"networkType"`
+
// +kubebuilder:default=all
// +optional
- IPFamily IPFamilyType `json:"ipFamily"`
- ImageRepository string `json:"imageRepository,omitempty"`
- // +kubebuilder:default=kosmos-system
- // +optional
- Namespace string `json:"namespace"`
+ IPFamily IPFamilyType `json:"ipFamily"`
// +kubebuilder:default=false
// +optional
UseIPPool bool `json:"useIPPool,omitempty"`
+
// +kubebuilder:default={ip:"210.0.0.0/8",ip6:"9480::/16"}
// +optional
LocalCIDRs VxlanCIDRs `json:"localCIDRs,omitempty"`
+
// +kubebuilder:default={ip:"220.0.0.0/8",ip6:"9470::/16"}
// +optional
BridgeCIDRs VxlanCIDRs `json:"bridgeCIDRs,omitempty"`
+
// +optional
NICNodeNames []NICNodeNames `json:"nicNodeNames,omitempty"`
+
// +kubebuilder:default=*
// +optional
DefaultNICName string `json:"defaultNICName,omitempty"`
@@ -58,16 +93,70 @@ type ClusterSpec struct {
GlobalCIDRsMap map[string]string `json:"globalCIDRsMap,omitempty"`
// +optional
- Kubeconfig []byte `json:"kubeconfig,omitempty"`
+ AutodetectionMethod string `json:"autodetectionMethod,omitempty"`
}
-type ClusterStatus struct {
+type ClusterTreeOptions struct {
+ // +kubebuilder:default=true
+ // +optional
+ Enable bool `json:"enable"`
+
+ // LeafModels provides an API to arrange the member cluster with rules so that it presents itself as one or more leaf nodes
+ // +optional
+ LeafModels []LeafModel `json:"leafModels,omitempty"`
+}
+
+type LeafModel struct {
+ // LeafNodeName defines the leaf node name.
+ // If nil or empty, the leaf node name is generated by the controller and filled into the cluster status.
+ // +optional
+ LeafNodeName string `json:"leafNodeName,omitempty"`
+
+ // Labels that will be set on the pretended Node.
+ // +optional
+ Labels map[string]string `json:"labels,omitempty" protobuf:"bytes,11,rep,name=labels"`
+
+ // Taints attached to the pretended leaf Node.
+ // If nil or empty, the controller sets a default no-schedule taint.
+ // +optional
+ Taints []corev1.Taint `json:"taints,omitempty"`
+
+ // NodeSelector selects member cluster nodes that will be presented as a leaf node in clusterTree.
+ // +optional
+ NodeSelector NodeSelector `json:"nodeSelector,omitempty"`
+}
+
+type NodeSelector struct {
+ // NodeName is the original node name in the member cluster.
+ // +optional
+ NodeName string `json:"nodeName,omitempty"`
+
+ // LabelSelector filters member cluster nodes by labels to be presented as a leaf node in clusterTree.
+ // It takes effect during second-level scheduling, when pods are created in member clusters.
+ // +optional
+ LabelSelector *metav1.LabelSelector `json:"labelSelector,omitempty"`
+}
+
+type ClusterLinkStatus struct {
// +optional
PodCIDRs []string `json:"podCIDRs,omitempty"`
// +optional
ServiceCIDRs []string `json:"serviceCIDRs,omitempty"`
}
+type ClusterTreeStatus struct {
+ // LeafNodeItems represents the list of leaf node items calculated for each member cluster.
+ // +optional
+ LeafNodeItems []LeafNodeItem `json:"leafNodeItems,omitempty"`
+}
+
+type LeafNodeItem struct {
+ // LeafNodeName represents the leaf node name generated by the controller.
+ // The suggested format is cluster-shortLabel-number, e.g. member-az1-1.
+ // +required
+ LeafNodeName string `json:"leafNodeName"`
+}
+
type VxlanCIDRs struct {
IP string `json:"ip"`
IP6 string `json:"ip6"`
@@ -87,9 +176,15 @@ type ClusterList struct {
}
func (c *Cluster) IsP2P() bool {
- return c.Spec.NetworkType == NetworkTypeP2P
+ if c.Spec.ClusterLinkOptions == nil {
+ return false
+ }
+ return c.Spec.ClusterLinkOptions.NetworkType == NetworkTypeP2P
}
func (c *Cluster) IsGateway() bool {
- return c.Spec.NetworkType == NetWorkTypeGateWay
+ if c.Spec.ClusterLinkOptions == nil {
+ return false
+ }
+ return c.Spec.ClusterLinkOptions.NetworkType == NetWorkTypeGateWay
}
diff --git a/pkg/apis/kosmos/v1alpha1/daemonset_types.go b/pkg/apis/kosmos/v1alpha1/daemonset_types.go
index 7b73bc1c7..8e3cd2c69 100644
--- a/pkg/apis/kosmos/v1alpha1/daemonset_types.go
+++ b/pkg/apis/kosmos/v1alpha1/daemonset_types.go
@@ -90,7 +90,7 @@ type ShadowDaemonSet struct {
Status DaemonSetStatus `json:"status,omitempty" protobuf:"bytes,3,opt,name=status"`
// +optional
- Knode string `json:"knode"`
+ Cluster string `json:"cluster"`
RefType RefType `json:"refType"`
}
diff --git a/pkg/apis/kosmos/v1alpha1/knode_types.go b/pkg/apis/kosmos/v1alpha1/knode_types.go
index 427bf95d2..7fb63db77 100644
--- a/pkg/apis/kosmos/v1alpha1/knode_types.go
+++ b/pkg/apis/kosmos/v1alpha1/knode_types.go
@@ -28,10 +28,6 @@ type KnodeSpec struct {
// +optional
Kubeconfig []byte `json:"kubeconfig,omitempty"`
- // +kubebuilder:default=50
- // +optional
- KubeAPIQPS float32 `json:"kubeAPIQPS,omitempty"`
-
// +kubebuilder:default=100
// +optional
KubeAPIBurst int `json:"kubeAPIBurst,omitempty"`
diff --git a/pkg/apis/kosmos/v1alpha1/podconvertpolicy_types.go b/pkg/apis/kosmos/v1alpha1/podconvertpolicy_types.go
new file mode 100644
index 000000000..e1da2a0aa
--- /dev/null
+++ b/pkg/apis/kosmos/v1alpha1/podconvertpolicy_types.go
@@ -0,0 +1,132 @@
+package v1alpha1
+
+import (
+ corev1 "k8s.io/api/core/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+// +genclient
+// +kubebuilder:subresource:status
+// +kubebuilder:resource:shortName=pc;pcs
+// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
+
+type PodConvertPolicy struct {
+ metav1.TypeMeta `json:",inline"`
+ metav1.ObjectMeta `json:"metadata,omitempty"`
+
+ // Spec is the specification for the behaviour of the podConversion.
+ // +required
+ Spec PodConvertPolicySpec `json:"spec"`
+}
+
+type PodConvertPolicySpec struct {
+ // A label query over a set of resources.
+ // If name is not empty, labelSelector will be ignored.
+ // +required
+ LabelSelector metav1.LabelSelector `json:"labelSelector"`
+
+ // A label query over a set of resources.
+ // If name is not empty, LeafNodeSelector will be ignored.
+ // +optional
+ LeafNodeSelector *metav1.LabelSelector `json:"leafNodeSelector,omitempty"`
+
+ // Converters are converters used to mutate the pod when it is synced from the root cluster to a leaf cluster,
+ // so that the pod can be scheduled in the leaf cluster.
+ // +optional
+ Converters *Converters `json:"converters,omitempty"`
+}
+
+// Converters holds the converters applied to a pod so that it can be scheduled in a leaf cluster
+type Converters struct {
+ // +optional
+ SchedulerNameConverter *SchedulerNameConverter `json:"schedulerNameConverter,omitempty"`
+ // +optional
+ NodeNameConverter *NodeNameConverter `json:"nodeNameConverter,omitempty"`
+ // +optional
+ NodeSelectorConverter *NodeSelectorConverter `json:"nodeSelectorConverter,omitempty"`
+ // +optional
+ AffinityConverter *AffinityConverter `json:"affinityConverter,omitempty"`
+ // +optional
+ TopologySpreadConstraintsConverter *TopologySpreadConstraintsConverter `json:"topologySpreadConstraintsConverter,omitempty"`
+}
+
+// ConvertType is the operation type used when converting a pod from the root cluster to a leaf cluster.
+type ConvertType string
+
+// These are valid conversion types.
+const (
+ Add ConvertType = "add"
+ Remove ConvertType = "remove"
+ Replace ConvertType = "replace"
+)
+
+// SchedulerNameConverter is used to modify the pod's schedulerName when the pod is synced to a leaf cluster
+type SchedulerNameConverter struct {
+ // +kubebuilder:validation:Enum=add;remove;replace
+ // +required
+ ConvertType ConvertType `json:"convertType"`
+
+ // +optional
+ SchedulerName string `json:"schedulerName,omitempty"`
+}
+
+// NodeNameConverter is used to modify the pod's nodeName when the pod is synced to a leaf cluster
+type NodeNameConverter struct {
+ // +kubebuilder:validation:Enum=add;remove;replace
+ // +required
+ ConvertType ConvertType `json:"convertType"`
+
+ // +optional
+ NodeName string `json:"nodeName,omitempty"`
+}
+
+// NodeSelectorConverter is used to modify the pod's nodeSelector when the pod is synced to a leaf cluster
+type NodeSelectorConverter struct {
+ // +kubebuilder:validation:Enum=add;remove;replace
+ // +required
+ ConvertType ConvertType `json:"convertType"`
+
+ // +optional
+ NodeSelector map[string]string `json:"nodeSelector,omitempty"`
+}
+
+// TolerationConverter is used to modify the pod's tolerations when the pod is synced to a leaf cluster
+type TolerationConverter struct {
+ // +kubebuilder:validation:Enum=add;remove;replace
+ // +required
+ ConvertType ConvertType `json:"convertType"`
+
+ // +optional
+ Tolerations []corev1.Toleration `json:"tolerations,omitempty"`
+}
+
+// AffinityConverter is used to modify the pod's affinity when the pod is synced to a leaf cluster
+type AffinityConverter struct {
+ // +kubebuilder:validation:Enum=add;remove;replace
+ // +required
+ ConvertType ConvertType `json:"convertType"`
+
+ // +optional
+ Affinity *corev1.Affinity `json:"affinity,omitempty"`
+}
+
+// TopologySpreadConstraintsConverter is used to modify the pod's topologySpreadConstraints when the pod is synced to a leaf cluster
+type TopologySpreadConstraintsConverter struct {
+ // +kubebuilder:validation:Enum=add;remove;replace
+ // +required
+ ConvertType ConvertType `json:"convertType"`
+
+ // TopologySpreadConstraints describes how a group of pods ought to spread across topology
+ // domains. Scheduler will schedule pods in a way which abides by the constraints.
+ // All topologySpreadConstraints are ANDed.
+ // +optional
+ TopologySpreadConstraints []corev1.TopologySpreadConstraint `json:"topologySpreadConstraints,omitempty"`
+}
+
+// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
+
+type PodConvertPolicyList struct {
+ metav1.TypeMeta `json:",inline"`
+ metav1.ListMeta `json:"metadata"`
+ Items []PodConvertPolicy `json:"items"`
+}
diff --git a/pkg/apis/kosmos/v1alpha1/zz_generated.deepcopy.go b/pkg/apis/kosmos/v1alpha1/zz_generated.deepcopy.go
index 50d4223b8..181799bf6 100644
--- a/pkg/apis/kosmos/v1alpha1/zz_generated.deepcopy.go
+++ b/pkg/apis/kosmos/v1alpha1/zz_generated.deepcopy.go
@@ -7,10 +7,32 @@ package v1alpha1
import (
appsv1 "k8s.io/api/apps/v1"
- v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ v1 "k8s.io/api/core/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
runtime "k8s.io/apimachinery/pkg/runtime"
)
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *AffinityConverter) DeepCopyInto(out *AffinityConverter) {
+ *out = *in
+ if in.Affinity != nil {
+ in, out := &in.Affinity, &out.Affinity
+ *out = new(v1.Affinity)
+ (*in).DeepCopyInto(*out)
+ }
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new AffinityConverter.
+func (in *AffinityConverter) DeepCopy() *AffinityConverter {
+ if in == nil {
+ return nil
+ }
+ out := new(AffinityConverter)
+ in.DeepCopyInto(out)
+ return out
+}
+
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Arp) DeepCopyInto(out *Arp) {
*out = *in
@@ -55,6 +77,64 @@ func (in *Cluster) DeepCopyObject() runtime.Object {
return nil
}
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *ClusterLinkOptions) DeepCopyInto(out *ClusterLinkOptions) {
+ *out = *in
+ out.LocalCIDRs = in.LocalCIDRs
+ out.BridgeCIDRs = in.BridgeCIDRs
+ if in.NICNodeNames != nil {
+ in, out := &in.NICNodeNames, &out.NICNodeNames
+ *out = make([]NICNodeNames, len(*in))
+ for i := range *in {
+ (*in)[i].DeepCopyInto(&(*out)[i])
+ }
+ }
+ if in.GlobalCIDRsMap != nil {
+ in, out := &in.GlobalCIDRsMap, &out.GlobalCIDRsMap
+ *out = make(map[string]string, len(*in))
+ for key, val := range *in {
+ (*out)[key] = val
+ }
+ }
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ClusterLinkOptions.
+func (in *ClusterLinkOptions) DeepCopy() *ClusterLinkOptions {
+ if in == nil {
+ return nil
+ }
+ out := new(ClusterLinkOptions)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *ClusterLinkStatus) DeepCopyInto(out *ClusterLinkStatus) {
+ *out = *in
+ if in.PodCIDRs != nil {
+ in, out := &in.PodCIDRs, &out.PodCIDRs
+ *out = make([]string, len(*in))
+ copy(*out, *in)
+ }
+ if in.ServiceCIDRs != nil {
+ in, out := &in.ServiceCIDRs, &out.ServiceCIDRs
+ *out = make([]string, len(*in))
+ copy(*out, *in)
+ }
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ClusterLinkStatus.
+func (in *ClusterLinkStatus) DeepCopy() *ClusterLinkStatus {
+ if in == nil {
+ return nil
+ }
+ out := new(ClusterLinkStatus)
+ in.DeepCopyInto(out)
+ return out
+}
+
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ClusterList) DeepCopyInto(out *ClusterList) {
*out = *in
@@ -194,27 +274,21 @@ func (in *ClusterNodeStatus) DeepCopy() *ClusterNodeStatus {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ClusterSpec) DeepCopyInto(out *ClusterSpec) {
*out = *in
- out.LocalCIDRs = in.LocalCIDRs
- out.BridgeCIDRs = in.BridgeCIDRs
- if in.NICNodeNames != nil {
- in, out := &in.NICNodeNames, &out.NICNodeNames
- *out = make([]NICNodeNames, len(*in))
- for i := range *in {
- (*in)[i].DeepCopyInto(&(*out)[i])
- }
- }
- if in.GlobalCIDRsMap != nil {
- in, out := &in.GlobalCIDRsMap, &out.GlobalCIDRsMap
- *out = make(map[string]string, len(*in))
- for key, val := range *in {
- (*out)[key] = val
- }
- }
if in.Kubeconfig != nil {
in, out := &in.Kubeconfig, &out.Kubeconfig
*out = make([]byte, len(*in))
copy(*out, *in)
}
+ if in.ClusterLinkOptions != nil {
+ in, out := &in.ClusterLinkOptions, &out.ClusterLinkOptions
+ *out = new(ClusterLinkOptions)
+ (*in).DeepCopyInto(*out)
+ }
+ if in.ClusterTreeOptions != nil {
+ in, out := &in.ClusterTreeOptions, &out.ClusterTreeOptions
+ *out = new(ClusterTreeOptions)
+ (*in).DeepCopyInto(*out)
+ }
return
}
@@ -231,16 +305,8 @@ func (in *ClusterSpec) DeepCopy() *ClusterSpec {
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ClusterStatus) DeepCopyInto(out *ClusterStatus) {
*out = *in
- if in.PodCIDRs != nil {
- in, out := &in.PodCIDRs, &out.PodCIDRs
- *out = make([]string, len(*in))
- copy(*out, *in)
- }
- if in.ServiceCIDRs != nil {
- in, out := &in.ServiceCIDRs, &out.ServiceCIDRs
- *out = make([]string, len(*in))
- copy(*out, *in)
- }
+ in.ClusterLinkStatus.DeepCopyInto(&out.ClusterLinkStatus)
+ in.ClusterTreeStatus.DeepCopyInto(&out.ClusterTreeStatus)
return
}
@@ -254,6 +320,91 @@ func (in *ClusterStatus) DeepCopy() *ClusterStatus {
return out
}
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *ClusterTreeOptions) DeepCopyInto(out *ClusterTreeOptions) {
+ *out = *in
+ if in.LeafModels != nil {
+ in, out := &in.LeafModels, &out.LeafModels
+ *out = make([]LeafModel, len(*in))
+ for i := range *in {
+ (*in)[i].DeepCopyInto(&(*out)[i])
+ }
+ }
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ClusterTreeOptions.
+func (in *ClusterTreeOptions) DeepCopy() *ClusterTreeOptions {
+ if in == nil {
+ return nil
+ }
+ out := new(ClusterTreeOptions)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *ClusterTreeStatus) DeepCopyInto(out *ClusterTreeStatus) {
+ *out = *in
+ if in.LeafNodeItems != nil {
+ in, out := &in.LeafNodeItems, &out.LeafNodeItems
+ *out = make([]LeafNodeItem, len(*in))
+ copy(*out, *in)
+ }
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new ClusterTreeStatus.
+func (in *ClusterTreeStatus) DeepCopy() *ClusterTreeStatus {
+ if in == nil {
+ return nil
+ }
+ out := new(ClusterTreeStatus)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *Converters) DeepCopyInto(out *Converters) {
+ *out = *in
+ if in.SchedulerNameConverter != nil {
+ in, out := &in.SchedulerNameConverter, &out.SchedulerNameConverter
+ *out = new(SchedulerNameConverter)
+ **out = **in
+ }
+ if in.NodeNameConverter != nil {
+ in, out := &in.NodeNameConverter, &out.NodeNameConverter
+ *out = new(NodeNameConverter)
+ **out = **in
+ }
+ if in.NodeSelectorConverter != nil {
+ in, out := &in.NodeSelectorConverter, &out.NodeSelectorConverter
+ *out = new(NodeSelectorConverter)
+ (*in).DeepCopyInto(*out)
+ }
+ if in.AffinityConverter != nil {
+ in, out := &in.AffinityConverter, &out.AffinityConverter
+ *out = new(AffinityConverter)
+ (*in).DeepCopyInto(*out)
+ }
+ if in.TopologySpreadConstraintsConverter != nil {
+ in, out := &in.TopologySpreadConstraintsConverter, &out.TopologySpreadConstraintsConverter
+ *out = new(TopologySpreadConstraintsConverter)
+ (*in).DeepCopyInto(*out)
+ }
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new Converters.
+func (in *Converters) DeepCopy() *Converters {
+ if in == nil {
+ return nil
+ }
+ out := new(Converters)
+ in.DeepCopyInto(out)
+ return out
+}
+
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *DaemonSet) DeepCopyInto(out *DaemonSet) {
*out = *in
@@ -320,7 +471,7 @@ func (in *DaemonSetSpec) DeepCopyInto(out *DaemonSetSpec) {
*out = *in
if in.Selector != nil {
in, out := &in.Selector, &out.Selector
- *out = new(v1.LabelSelector)
+ *out = new(metav1.LabelSelector)
(*in).DeepCopyInto(*out)
}
in.Template.DeepCopyInto(&out.Template)
@@ -506,7 +657,7 @@ func (in *KnodeStatus) DeepCopyInto(out *KnodeStatus) {
*out = *in
if in.Conditions != nil {
in, out := &in.Conditions, &out.Conditions
- *out = make([]v1.Condition, len(*in))
+ *out = make([]metav1.Condition, len(*in))
for i := range *in {
(*in)[i].DeepCopyInto(&(*out)[i])
}
@@ -524,6 +675,53 @@ func (in *KnodeStatus) DeepCopy() *KnodeStatus {
return out
}
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *LeafModel) DeepCopyInto(out *LeafModel) {
+ *out = *in
+ if in.Labels != nil {
+ in, out := &in.Labels, &out.Labels
+ *out = make(map[string]string, len(*in))
+ for key, val := range *in {
+ (*out)[key] = val
+ }
+ }
+ if in.Taints != nil {
+ in, out := &in.Taints, &out.Taints
+ *out = make([]v1.Taint, len(*in))
+ for i := range *in {
+ (*in)[i].DeepCopyInto(&(*out)[i])
+ }
+ }
+ in.NodeSelector.DeepCopyInto(&out.NodeSelector)
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LeafModel.
+func (in *LeafModel) DeepCopy() *LeafModel {
+ if in == nil {
+ return nil
+ }
+ out := new(LeafModel)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *LeafNodeItem) DeepCopyInto(out *LeafNodeItem) {
+ *out = *in
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new LeafNodeItem.
+func (in *LeafNodeItem) DeepCopy() *LeafNodeItem {
+ if in == nil {
+ return nil
+ }
+ out := new(LeafNodeItem)
+ in.DeepCopyInto(out)
+ return out
+}
+
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *NICNodeNames) DeepCopyInto(out *NICNodeNames) {
*out = *in
@@ -665,6 +863,153 @@ func (in *NodeConfigStatus) DeepCopy() *NodeConfigStatus {
return out
}
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *NodeNameConverter) DeepCopyInto(out *NodeNameConverter) {
+ *out = *in
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NodeNameConverter.
+func (in *NodeNameConverter) DeepCopy() *NodeNameConverter {
+ if in == nil {
+ return nil
+ }
+ out := new(NodeNameConverter)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *NodeSelector) DeepCopyInto(out *NodeSelector) {
+ *out = *in
+ if in.LabelSelector != nil {
+ in, out := &in.LabelSelector, &out.LabelSelector
+ *out = new(metav1.LabelSelector)
+ (*in).DeepCopyInto(*out)
+ }
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NodeSelector.
+func (in *NodeSelector) DeepCopy() *NodeSelector {
+ if in == nil {
+ return nil
+ }
+ out := new(NodeSelector)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *NodeSelectorConverter) DeepCopyInto(out *NodeSelectorConverter) {
+ *out = *in
+ if in.NodeSelector != nil {
+ in, out := &in.NodeSelector, &out.NodeSelector
+ *out = make(map[string]string, len(*in))
+ for key, val := range *in {
+ (*out)[key] = val
+ }
+ }
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new NodeSelectorConverter.
+func (in *NodeSelectorConverter) DeepCopy() *NodeSelectorConverter {
+ if in == nil {
+ return nil
+ }
+ out := new(NodeSelectorConverter)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *PodConvertPolicy) DeepCopyInto(out *PodConvertPolicy) {
+ *out = *in
+ out.TypeMeta = in.TypeMeta
+ in.ObjectMeta.DeepCopyInto(&out.ObjectMeta)
+ in.Spec.DeepCopyInto(&out.Spec)
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodConvertPolicy.
+func (in *PodConvertPolicy) DeepCopy() *PodConvertPolicy {
+ if in == nil {
+ return nil
+ }
+ out := new(PodConvertPolicy)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
+func (in *PodConvertPolicy) DeepCopyObject() runtime.Object {
+ if c := in.DeepCopy(); c != nil {
+ return c
+ }
+ return nil
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *PodConvertPolicyList) DeepCopyInto(out *PodConvertPolicyList) {
+ *out = *in
+ out.TypeMeta = in.TypeMeta
+ in.ListMeta.DeepCopyInto(&out.ListMeta)
+ if in.Items != nil {
+ in, out := &in.Items, &out.Items
+ *out = make([]PodConvertPolicy, len(*in))
+ for i := range *in {
+ (*in)[i].DeepCopyInto(&(*out)[i])
+ }
+ }
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodConvertPolicyList.
+func (in *PodConvertPolicyList) DeepCopy() *PodConvertPolicyList {
+ if in == nil {
+ return nil
+ }
+ out := new(PodConvertPolicyList)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyObject is an autogenerated deepcopy function, copying the receiver, creating a new runtime.Object.
+func (in *PodConvertPolicyList) DeepCopyObject() runtime.Object {
+ if c := in.DeepCopy(); c != nil {
+ return c
+ }
+ return nil
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *PodConvertPolicySpec) DeepCopyInto(out *PodConvertPolicySpec) {
+ *out = *in
+ in.LabelSelector.DeepCopyInto(&out.LabelSelector)
+ if in.LeafNodeSelector != nil {
+ in, out := &in.LeafNodeSelector, &out.LeafNodeSelector
+ *out = new(metav1.LabelSelector)
+ (*in).DeepCopyInto(*out)
+ }
+ if in.Converters != nil {
+ in, out := &in.Converters, &out.Converters
+ *out = new(Converters)
+ (*in).DeepCopyInto(*out)
+ }
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new PodConvertPolicySpec.
+func (in *PodConvertPolicySpec) DeepCopy() *PodConvertPolicySpec {
+ if in == nil {
+ return nil
+ }
+ out := new(PodConvertPolicySpec)
+ in.DeepCopyInto(out)
+ return out
+}
+
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *Proxy) DeepCopyInto(out *Proxy) {
*out = *in
@@ -706,6 +1051,22 @@ func (in *Route) DeepCopy() *Route {
return out
}
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *SchedulerNameConverter) DeepCopyInto(out *SchedulerNameConverter) {
+ *out = *in
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new SchedulerNameConverter.
+func (in *SchedulerNameConverter) DeepCopy() *SchedulerNameConverter {
+ if in == nil {
+ return nil
+ }
+ out := new(SchedulerNameConverter)
+ in.DeepCopyInto(out)
+ return out
+}
+
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *ShadowDaemonSet) DeepCopyInto(out *ShadowDaemonSet) {
*out = *in
@@ -767,6 +1128,52 @@ func (in *ShadowDaemonSetList) DeepCopyObject() runtime.Object {
return nil
}
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *TolerationConverter) DeepCopyInto(out *TolerationConverter) {
+ *out = *in
+ if in.Tolerations != nil {
+ in, out := &in.Tolerations, &out.Tolerations
+ *out = make([]v1.Toleration, len(*in))
+ for i := range *in {
+ (*in)[i].DeepCopyInto(&(*out)[i])
+ }
+ }
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TolerationConverter.
+func (in *TolerationConverter) DeepCopy() *TolerationConverter {
+ if in == nil {
+ return nil
+ }
+ out := new(TolerationConverter)
+ in.DeepCopyInto(out)
+ return out
+}
+
+// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
+func (in *TopologySpreadConstraintsConverter) DeepCopyInto(out *TopologySpreadConstraintsConverter) {
+ *out = *in
+ if in.TopologySpreadConstraints != nil {
+ in, out := &in.TopologySpreadConstraints, &out.TopologySpreadConstraints
+ *out = make([]v1.TopologySpreadConstraint, len(*in))
+ for i := range *in {
+ (*in)[i].DeepCopyInto(&(*out)[i])
+ }
+ }
+ return
+}
+
+// DeepCopy is an autogenerated deepcopy function, copying the receiver, creating a new TopologySpreadConstraintsConverter.
+func (in *TopologySpreadConstraintsConverter) DeepCopy() *TopologySpreadConstraintsConverter {
+ if in == nil {
+ return nil
+ }
+ out := new(TopologySpreadConstraintsConverter)
+ in.DeepCopyInto(out)
+ return out
+}
+
// DeepCopyInto is an autogenerated deepcopy function, copying the receiver, writing into out. in must be non-nil.
func (in *VxlanCIDRs) DeepCopyInto(out *VxlanCIDRs) {
*out = *in
diff --git a/pkg/apis/kosmos/v1alpha1/zz_generated.register.go b/pkg/apis/kosmos/v1alpha1/zz_generated.register.go
index 385db658c..42af84880 100644
--- a/pkg/apis/kosmos/v1alpha1/zz_generated.register.go
+++ b/pkg/apis/kosmos/v1alpha1/zz_generated.register.go
@@ -52,6 +52,8 @@ func addKnownTypes(scheme *runtime.Scheme) error {
&KnodeList{},
&NodeConfig{},
&NodeConfigList{},
+ &PodConvertPolicy{},
+ &PodConvertPolicyList{},
&Proxy{},
&ShadowDaemonSet{},
&ShadowDaemonSetList{},
diff --git a/pkg/cert/cert.go b/pkg/cert/cert.go
new file mode 100644
index 000000000..5bd2fae7e
--- /dev/null
+++ b/pkg/cert/cert.go
@@ -0,0 +1,28 @@
+package cert
+
+import (
+ _ "embed"
+ "encoding/base64"
+)
+
+//go:embed crt.pem
+var crt []byte
+
+//go:embed key.pem
+var key []byte
+
+func GetCrtEncode() string {
+ return base64.StdEncoding.EncodeToString(crt)
+}
+
+func GetKeyEncode() string {
+ return base64.StdEncoding.EncodeToString(key)
+}
+
+func GetCrt() []byte {
+ return crt
+}
+
+func GetKey() []byte {
+ return key
+}
diff --git a/pkg/cert/crt.pem b/pkg/cert/crt.pem
new file mode 100644
index 000000000..bd6a817d9
--- /dev/null
+++ b/pkg/cert/crt.pem
@@ -0,0 +1,19 @@
+-----BEGIN CERTIFICATE-----
+MIIDEzCCAfsCFH4WHXLqM/y7lcp+lOzUGymu0kdFMA0GCSqGSIb3DQEBCwUAMEUx
+CzAJBgNVBAYTAlpIMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRl
+cm5ldCBXaWRnaXRzIFB0eSBMdGQwIBcNMjMxMTIyMDY0MzMwWhgPMzAyMzAzMjUw
+NjQzMzBaMEUxCzAJBgNVBAYTAlpIMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYD
+VQQKDBhJbnRlcm5ldCBXaWRnaXRzIFB0eSBMdGQwggEiMA0GCSqGSIb3DQEBAQUA
+A4IBDwAwggEKAoIBAQCI+Kfjgljh9HaATEzxBjA8YwwsPvkUiR/5lEpKkpF9JpgB
+uYz5F/btwHYncFCmvY60aJLwv4xn06tZeijaNgZf9o5HIOFhosjosi5pCQnUszcQ
+8DJ+5iwXHPrmCOv6ncZCpt2XsbBQ7k/gW4Buzvb+FN85p+n6GsRz3R+JJ62MI4JO
+4QAhtTyyCunCp2mp5kAa0l9iemWkXUV4qW07RcLUgmmsjyEwgsz3hsYjZ/wAaGTp
+GdEBAbhk9/lJSYFy/0TRG/evi/6Ba2jiYTrELa7Y0elrTsL+ulxs7jUH43hQ29VD
+oF3ufcwDJdrOcvQ53c9LRLUh6UIrLFVPZJ1SOeJTAgMBAAEwDQYJKoZIhvcNAQEL
+BQADggEBABWP5dBbHhLw+ppBIWolwkNzEIlBplUooMFotDhNTmsXk5MzSUmu3sJT
+ejR/sLP5HKS644FblpF/8nSdvrPQ8oyfEc91itQT9CS4v8KAF9my7+/6y5iJDxYW
+Cp8lsSvnK1pr766NKF2og+8Z1QrRunCmuc8Vf8UhLpdXAFCygR/oAwc6Y7qrH1Uz
+sijQE3ybRCvreGlLdNTouq2/nlGUvUtbABAd/G2U40xMvP438gOIBlfE4i3if6Ys
+7og6ZbagAcVc+MH3owL6NkYM2dUU2h8w83CPXpML9hHGWpRNtL808q0jLWOxlFVR
+xrUPjT8yrA4b2OeqkntiV9ybTpNVBVk=
+-----END CERTIFICATE-----
diff --git a/pkg/cert/key.pem b/pkg/cert/key.pem
new file mode 100644
index 000000000..a04afeed1
--- /dev/null
+++ b/pkg/cert/key.pem
@@ -0,0 +1,28 @@
+-----BEGIN PRIVATE KEY-----
+MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCI+Kfjgljh9HaA
+TEzxBjA8YwwsPvkUiR/5lEpKkpF9JpgBuYz5F/btwHYncFCmvY60aJLwv4xn06tZ
+eijaNgZf9o5HIOFhosjosi5pCQnUszcQ8DJ+5iwXHPrmCOv6ncZCpt2XsbBQ7k/g
+W4Buzvb+FN85p+n6GsRz3R+JJ62MI4JO4QAhtTyyCunCp2mp5kAa0l9iemWkXUV4
+qW07RcLUgmmsjyEwgsz3hsYjZ/wAaGTpGdEBAbhk9/lJSYFy/0TRG/evi/6Ba2ji
+YTrELa7Y0elrTsL+ulxs7jUH43hQ29VDoF3ufcwDJdrOcvQ53c9LRLUh6UIrLFVP
+ZJ1SOeJTAgMBAAECggEAGe9DEr/mhnocSfSoiOaMEZMLhgEydmH0bPRYEMCpzZGW
+LJVujOetuJy9goAwtTGlKKG4WN9b/XjFs/5+Z7rdACSWEf+2zR7efbjnMrokY2K/
+pXRli0OXy5SQKSg9Tkm7dXlU8dkSMnC9LRUGP3TurXNURP13PwT8d5fB1d1ubd8w
+bbdcHVvzKZU5T7rdcLBsZ5/70eSqyJstcNDcNr28yI3xY0a2z2NX1XTnRbWXMayF
+PNNOJuoX/mmwxhCkLxyoVsSIz1nlyV3MKXKlDMK2I+LW0LqkvD97Xh3U2P895LTC
+BqRaX3FfFaMjJ5feS/A3CUZdeiPLjT8MWlphTCAR+QKBgQC5YEiwJxV4W+2LfCjD
+32HNIHXWEbvxaS9Ebv93pKolxOqaAptEL81+LRKEhdDRMYQHqHZzHxM75RM0YIYq
+njFLflFWSYskTn+x5z11Re1BS54XgNh05qp7OPpQdBTWkg8fw/eKK3g90fdXDrkn
+7XAK9A/9sJAlAMnVlrqGRDKu5wKBgQC9J3TFJMUt6SNaayFauh6DcJsOq+YSFnVy
+B4hktfI1TkCmW5OPTq/rMUKAe9ejd/ujDtNfpvcnJF0vCTStIeciKZM0s0pAb8nU
+QAxtrsdfPhYJQUO10ycYuiCZZ421/A7QSf4XaZMRUUHO477gFj5MI5n+ysuDwjXn
+m1Arl4/ftQKBgQCe4YErSTRDpjageFfQGWMPlqSoRybYMBjNBH18o+sY1/9i5J0D
+Ah2T6TmXz8E7qr7IeYCcBqRLj3i4SYp0eIUzeR5pYDsbcRRM/C5WlwpUDmV/K3Va
+LGEtn5Ya4oMBrMm9pg5BpCQ4h/7/5KSZLg37tVcHTg8dR+G1aKyRa14tPQKBgGxS
+mymHLDBlkexm63wEmBLXusSFJsV2/R0nOTHLjICAZr+eM/veqRn8ZMQlp9EilgXE
+KMJfYKyWw5J7KCJ6Bt5mhrmobz5FhoS5hSSO8fgWGxKDwJ3w5TPg62hOiDYOugEI
+Tq3jtOg264PqotW7h0OdI8RpKHE1GB+hryC3tBn9AoGAcR1OTKP1S66EcsR/6+o6
+kS9VEAH/4t181f1Km+DGJ5i9GgAQ7OHlqeFZ37JV+MhcjIPVa9lLVGjFTbU9yKun
+hYjbFaAevlvPz95iRWydgYiiEXr877EPS8YO6WzhFXJnSEBNEIkSNKr+ba/860/Z
+MdXk6ivt94ELiDoQMENIq9s=
+-----END PRIVATE KEY-----
diff --git a/pkg/clusterlink/agent/controller.go b/pkg/clusterlink/agent-manager/agent_controller.go
similarity index 78%
rename from pkg/clusterlink/agent/controller.go
rename to pkg/clusterlink/agent-manager/agent_controller.go
index 2a662ef9a..835bcf0c1 100644
--- a/pkg/clusterlink/agent/controller.go
+++ b/pkg/clusterlink/agent-manager/agent_controller.go
@@ -2,7 +2,6 @@ package agent
import (
"context"
- "fmt"
"time"
apierrors "k8s.io/apimachinery/pkg/api/errors"
@@ -18,7 +17,8 @@ import (
"sigs.k8s.io/controller-runtime/pkg/reconcile"
kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
- networkmanager "github.com/kosmos.io/kosmos/pkg/clusterlink/agent/network-manager"
+ networkmanager "github.com/kosmos.io/kosmos/pkg/clusterlink/agent-manager/network-manager"
+ "github.com/kosmos.io/kosmos/pkg/clusterlink/controllers/node"
"github.com/kosmos.io/kosmos/pkg/clusterlink/network"
kosmosv1alpha1lister "github.com/kosmos.io/kosmos/pkg/generated/listers/kosmos/v1alpha1"
)
@@ -40,34 +40,46 @@ type Reconciler struct {
}
func NetworkManager() *networkmanager.NetworkManager {
- net := network.NewNetWork()
+ net := network.NewNetWork(true)
return networkmanager.NewNetworkManager(net)
}
-var predicatesFunc = predicate.Funcs{
- CreateFunc: func(createEvent event.CreateEvent) bool {
- return true
- },
- UpdateFunc: func(updateEvent event.UpdateEvent) bool {
- return true
- },
- DeleteFunc: func(deleteEvent event.DeleteEvent) bool {
- return true
- },
- GenericFunc: func(genericEvent event.GenericEvent) bool {
- return true
- },
-}
-
func (r *Reconciler) SetupWithManager(mgr manager.Manager) error {
if r.Client == nil {
r.Client = mgr.GetClient()
}
+ skipEvent := func(obj client.Object) bool {
+ eventObj, ok := obj.(*kosmosv1alpha1.NodeConfig)
+ if !ok {
+ return false
+ }
+
+ if eventObj.Name != node.ClusterNodeName(r.ClusterName, r.NodeName) {
+ klog.Infof("reconcile node name: %s, current node name: %s-%s", eventObj.Name, r.ClusterName, r.NodeName)
+ return false
+ }
+
+ return true
+ }
+
return ctrl.NewControllerManagedBy(mgr).
Named(controllerName).
WithOptions(controller.Options{}).
- For(&kosmosv1alpha1.NodeConfig{}, builder.WithPredicates(predicatesFunc)).
+ For(&kosmosv1alpha1.NodeConfig{}, builder.WithPredicates(predicate.Funcs{
+ CreateFunc: func(createEvent event.CreateEvent) bool {
+ return skipEvent(createEvent.Object)
+ },
+ UpdateFunc: func(updateEvent event.UpdateEvent) bool {
+ return skipEvent(updateEvent.ObjectNew)
+ },
+ DeleteFunc: func(deleteEvent event.DeleteEvent) bool {
+ return skipEvent(deleteEvent.Object)
+ },
+ GenericFunc: func(genericEvent event.GenericEvent) bool {
+ return skipEvent(genericEvent.Object)
+ },
+ })).
Complete(r)
}
@@ -93,16 +105,10 @@ func (r *Reconciler) Reconcile(ctx context.Context, request reconcile.Request) (
r.logResult(nodeConfigSyncStatus)
return reconcile.Result{}, nil
}
- klog.Errorf("get clusternode %s error: %v", request.NamespacedName, err)
+ klog.Errorf("get nodeconfig %s error: %v", request.NamespacedName, err)
return reconcile.Result{RequeueAfter: RequeueTime}, nil
}
- klog.Infof("reconcile node name: %s, current node name: %s-%s", reconcileNode.Name, r.ClusterName, r.NodeName)
- if reconcileNode.Name != fmt.Sprintf("%s-%s", r.ClusterName, r.NodeName) {
- klog.Infof("not match, drop this event.")
- return reconcile.Result{}, nil
- }
-
localCluster, err := r.ClusterLister.Get(r.ClusterName)
if err != nil {
klog.Errorf("could not get local cluster, clusterNode: %s, err: %v", r.NodeName, err)
diff --git a/pkg/clusterlink/agent-manager/auto_detect_controller.go b/pkg/clusterlink/agent-manager/auto_detect_controller.go
new file mode 100644
index 000000000..40c292506
--- /dev/null
+++ b/pkg/clusterlink/agent-manager/auto_detect_controller.go
@@ -0,0 +1,232 @@
+package agent
+
+import (
+ "context"
+ "fmt"
+ "strings"
+ "time"
+
+ apierrors "k8s.io/apimachinery/pkg/api/errors"
+ "k8s.io/apimachinery/pkg/types"
+ "k8s.io/klog/v2"
+ ctrl "sigs.k8s.io/controller-runtime"
+ "sigs.k8s.io/controller-runtime/pkg/builder"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+ "sigs.k8s.io/controller-runtime/pkg/controller"
+ "sigs.k8s.io/controller-runtime/pkg/event"
+ "sigs.k8s.io/controller-runtime/pkg/handler"
+ "sigs.k8s.io/controller-runtime/pkg/manager"
+ "sigs.k8s.io/controller-runtime/pkg/predicate"
+ "sigs.k8s.io/controller-runtime/pkg/reconcile"
+ "sigs.k8s.io/controller-runtime/pkg/source"
+
+ kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ "github.com/kosmos.io/kosmos/pkg/clusterlink/controllers/node"
+ "github.com/kosmos.io/kosmos/pkg/clusterlink/network"
+ "github.com/kosmos.io/kosmos/pkg/utils"
+ interfacepolicy "github.com/kosmos.io/kosmos/pkg/utils/interface-policy"
+ "github.com/kosmos.io/kosmos/pkg/utils/lifted/autodetection"
+)
+
+const (
+ AutoDetectControllerName = "cluster-node-controller"
+ AutoDetectRequeueTime = 10 * time.Second
+)
+
+const (
+ AUTODETECTION_METHOD_CAN_REACH = "can-reach="
+)
+
+type AutoDetectReconciler struct {
+ client.Client
+ ClusterName string
+ NodeName string
+}
+
+func (r *AutoDetectReconciler) SetupWithManager(mgr manager.Manager) error {
+ if r.Client == nil {
+ r.Client = mgr.GetClient()
+ }
+
+ skipEvent := func(obj client.Object) bool {
+ eventObj, ok := obj.(*kosmosv1alpha1.ClusterNode)
+ if !ok {
+ return false
+ }
+
+ if eventObj.Name != node.ClusterNodeName(r.ClusterName, r.NodeName) {
+ klog.V(4).Infof("skip event, reconcile node name: %s, current node name: %s-%s", eventObj.Name, r.ClusterName, r.NodeName)
+ return false
+ }
+
+ return true
+ }
+
+ return ctrl.NewControllerManagedBy(mgr).
+ Named(AutoDetectControllerName).
+ WithOptions(controller.Options{}).
+ For(&kosmosv1alpha1.ClusterNode{}, builder.WithPredicates(predicate.Funcs{
+ CreateFunc: func(createEvent event.CreateEvent) bool {
+ return skipEvent(createEvent.Object)
+ },
+ UpdateFunc: func(updateEvent event.UpdateEvent) bool {
+ return skipEvent(updateEvent.ObjectNew)
+ },
+ DeleteFunc: func(deleteEvent event.DeleteEvent) bool {
+ return false
+ },
+ GenericFunc: func(genericEvent event.GenericEvent) bool {
+ return skipEvent(genericEvent.Object)
+ },
+ })).
+ Watches(&source.Kind{Type: &kosmosv1alpha1.Cluster{}}, handler.EnqueueRequestsFromMapFunc(r.newClusterMapFunc())).
+ Complete(r)
+}
+
+func (r *AutoDetectReconciler) newClusterMapFunc() handler.MapFunc {
+ return func(a client.Object) []reconcile.Request {
+ var requests []reconcile.Request
+ cluster := a.(*kosmosv1alpha1.Cluster)
+ klog.V(4).Infof("auto detect cluster change: %s, currentNode cluster name: %s", cluster.Name, r.ClusterName)
+ if cluster.Name == r.ClusterName {
+ requests = append(requests, reconcile.Request{NamespacedName: types.NamespacedName{
+ Name: node.ClusterNodeName(r.ClusterName, r.NodeName),
+ }})
+ }
+ return requests
+ }
+}
+
+func (r *AutoDetectReconciler) detectInterfaceName(ctx context.Context) (string, error) {
+ var cluster kosmosv1alpha1.Cluster
+
+ if err := r.Get(ctx, types.NamespacedName{
+ Name: r.ClusterName,
+ Namespace: "",
+ }, &cluster); err != nil {
+ return "", err
+ }
+
+ if cluster.Spec.ClusterLinkOptions != nil {
+ defaultNICName := interfacepolicy.GetInterfaceName(cluster.Spec.ClusterLinkOptions.NICNodeNames, r.NodeName, cluster.Spec.ClusterLinkOptions.DefaultNICName)
+
+ if defaultNICName != network.AutoSelectInterfaceFlag {
+ return defaultNICName, nil
+ }
+
+ method := cluster.Spec.ClusterLinkOptions.AutodetectionMethod
+ // TODO: set default reachable ip when defaultNICName == * and method == ""
+ if method == "" {
+ method = fmt.Sprintf("%s%s", AUTODETECTION_METHOD_CAN_REACH, "8.8.8.8")
+ }
+ if strings.HasPrefix(method, AUTODETECTION_METHOD_CAN_REACH) {
+ // Autodetect the IP by connecting a UDP socket to a supplied address.
+ destStr := strings.TrimPrefix(method, AUTODETECTION_METHOD_CAN_REACH)
+
+ version := 4
+ if utils.IsIPv6(destStr) {
+ version = 6
+ }
+
+ if i, _, err := autodetection.ReachDestination(destStr, version); err != nil {
+ return "", err
+ } else {
+ return i.Name, nil
+ }
+ }
+ }
+ return "", fmt.Errorf("cannot detect nic")
+}
+
+func detectIP(interfaceName string) (string, string) {
+ detectFunc := func(version int) (string, error) {
+ _, n, err := autodetection.FilteredEnumeration([]string{interfaceName}, nil, nil, version)
+ if err != nil {
+ return "", fmt.Errorf("auto detect interface error: %v, version: %d", err, version)
+ }
+
+ if len(n.IP) == 0 {
+ return "", fmt.Errorf("auto detect interface error: ip is nil, version: %d", version)
+ }
+ return n.IP.String(), nil
+ }
+
+ ipv4, err := detectFunc(4)
+ if err != nil {
+ klog.Warning(err)
+ }
+ ipv6, err := detectFunc(6)
+ if err != nil {
+ klog.Warning(err)
+ }
+ return ipv4, ipv6
+}
+
+func shouldUpdate(old, new kosmosv1alpha1.ClusterNode) bool {
+ return old.Spec.IP != new.Spec.IP ||
+ old.Spec.IP6 != new.Spec.IP6 ||
+ old.Spec.InterfaceName != new.Spec.InterfaceName
+}
+
+func (r *AutoDetectReconciler) Reconcile(ctx context.Context, request reconcile.Request) (reconcile.Result, error) {
+ klog.V(4).Infof("########################### auto_detect_controller starts to reconcile %s ###########################", request.NamespacedName)
+ defer klog.V(4).Infof("####################### auto_detect_controller finished reconciling %s ###########################", request.NamespacedName)
+
+ // get clusternode
+ var clusterNode kosmosv1alpha1.ClusterNode
+ if err := r.Get(ctx, request.NamespacedName, &clusterNode); err != nil {
+ if apierrors.IsNotFound(err) {
+ klog.V(4).Infof("auto_detect_controller cluster node not found %s", request.NamespacedName)
+ return reconcile.Result{}, nil
+ }
+ klog.Errorf("get clusternode %s error: %v", request.NamespacedName, err)
+ return reconcile.Result{RequeueAfter: AutoDetectRequeueTime}, nil
+ }
+
+ // skip when deleting
+ if !clusterNode.GetDeletionTimestamp().IsZero() {
+ return reconcile.Result{}, nil
+ }
+
+ // update clusterNode
+ newClusterNode := clusterNode.DeepCopy()
+
+ currentInterfaceName, err := r.detectInterfaceName(ctx)
+ if err != nil {
+ if apierrors.IsNotFound(err) {
+ klog.V(4).Infof("cluster is not found, %s", request.NamespacedName)
+ return reconcile.Result{}, nil
+ }
+ klog.Errorf("get cluster %s error: %v", request.NamespacedName, err)
+ return reconcile.Result{RequeueAfter: AutoDetectRequeueTime}, nil
+ }
+
+ // update interface
+ newClusterNode.Spec.InterfaceName = currentInterfaceName
+
+ klog.V(4).Infof("auto detect interface name: %s", currentInterfaceName)
+
+ // detect IP by Name
+ ipv4, ipv6 := detectIP(currentInterfaceName)
+ klog.V(4).Infof("auto detect ipv4: %s, ipv6: %s", ipv4, ipv6)
+
+ if ipv4 != "" {
+ newClusterNode.Spec.IP = ipv4
+ }
+ if ipv6 != "" {
+ newClusterNode.Spec.IP6 = ipv6
+ }
+
+ if shouldUpdate(clusterNode, *newClusterNode) {
+ if err := r.Update(ctx, newClusterNode); err != nil {
+ klog.Errorf("update clusternode %s error: %v", request.NamespacedName, err)
+ return reconcile.Result{RequeueAfter: AutoDetectRequeueTime}, nil
+ } else {
+ klog.V(4).Infof("update clusternode interface: %s, ipv4: %s, ipv6: %s succeeded", newClusterNode.Spec.InterfaceName, newClusterNode.Spec.IP, newClusterNode.Spec.IP6)
+ }
+ } else {
+ klog.V(4).Info("clusternode does not need to be updated")
+ }
+
+ return reconcile.Result{}, nil
+}
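The `can-reach=` method above delegates to `autodetection.ReachDestination`, which (per the `lifted` import path) is adapted from Calico: it "connects" a UDP socket so the kernel selects the egress source address, then maps that address back to an interface. A standalone sketch of the same trick — `reachDestination` here is a hypothetical simplification, not the lifted API:

```go
package main

import (
	"fmt"
	"net"
)

// reachDestination finds the local interface the kernel would use to reach
// dest by "connecting" a UDP socket; no packets are actually sent.
func reachDestination(dest string) (*net.Interface, net.IP, error) {
	conn, err := net.Dial("udp", net.JoinHostPort(dest, "53"))
	if err != nil {
		return nil, nil, err
	}
	defer conn.Close()

	localIP := conn.LocalAddr().(*net.UDPAddr).IP

	// Match the chosen source IP back to the owning interface.
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, nil, err
	}
	for i := range ifaces {
		addrs, err := ifaces[i].Addrs()
		if err != nil {
			continue
		}
		for _, a := range addrs {
			if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.Equal(localIP) {
				return &ifaces[i], localIP, nil
			}
		}
	}
	return nil, nil, fmt.Errorf("no interface owns %s", localIP)
}

func main() {
	if iface, ip, err := reachDestination("8.8.8.8"); err == nil {
		fmt.Printf("egress interface %s, source IP %s\n", iface.Name, ip)
	}
}
```

Because the socket is never written to, this works even when the destination is unreachable at the application layer; only a route must exist.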
diff --git a/pkg/clusterlink/agent/network-manager/network_manager.go b/pkg/clusterlink/agent-manager/network-manager/network_manager.go
similarity index 100%
rename from pkg/clusterlink/agent/network-manager/network_manager.go
rename to pkg/clusterlink/agent-manager/network-manager/network_manager.go
diff --git a/pkg/clusterlink/controllers/calicoippool/calicoippool_controller.go b/pkg/clusterlink/controllers/calicoippool/calicoippool_controller.go
index 32303a34a..ba409c794 100644
--- a/pkg/clusterlink/controllers/calicoippool/calicoippool_controller.go
+++ b/pkg/clusterlink/controllers/calicoippool/calicoippool_controller.go
@@ -339,8 +339,8 @@ func (c *Controller) Reconcile(key utils.QueueKey) error {
}
klog.Infof("start reconcile cluster %s", cluster.Name)
- if cluster.Spec.CNI != utils.CNITypeCalico {
- klog.Infof("cluster %s cni type is %s skip reconcile", cluster.Name, cluster.Spec.CNI)
+ if cluster.Spec.ClusterLinkOptions.CNI != utils.CNITypeCalico {
+ klog.Infof("cluster %s cni type is %s skip reconcile", cluster.Name, cluster.Spec.ClusterLinkOptions.CNI)
return nil
}
for ipPool := range c.globalExtIPPoolSet {
@@ -355,9 +355,9 @@ func (c *Controller) Reconcile(key utils.QueueKey) error {
return cidr
}
}
- cidrMap := cluster.Spec.GlobalCIDRsMap
- podCIDRS := cluster.Status.PodCIDRs
- serviceCIDR := cluster.Status.ServiceCIDRs
+ cidrMap := cluster.Spec.ClusterLinkOptions.GlobalCIDRsMap
+ podCIDRS := cluster.Status.ClusterLinkStatus.PodCIDRs
+ serviceCIDR := cluster.Status.ClusterLinkStatus.ServiceCIDRs
for _, cidr := range podCIDRS {
extIPPool := ExternalClusterIPPool{
cluster: cluster.Name,
diff --git a/pkg/clusterlink/controllers/cluster/cluster_controller.go b/pkg/clusterlink/controllers/cluster/cluster_controller.go
index 306de3c64..c3353484f 100644
--- a/pkg/clusterlink/controllers/cluster/cluster_controller.go
+++ b/pkg/clusterlink/controllers/cluster/cluster_controller.go
@@ -11,6 +11,7 @@ import (
calicoclient "github.com/projectcalico/calico/libcalico-go/lib/clientv3"
"github.com/projectcalico/calico/libcalico-go/lib/options"
cwatch "github.com/projectcalico/calico/libcalico-go/lib/watch"
+ "github.com/spf13/cast"
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/meta"
@@ -39,8 +40,9 @@ import (
// KubeFlannelNetworkConfig
const (
FlannelCNI = "flannel"
- KubeFlannelNamespace = "kube-flannel"
+ CiliumCNI = "cilium"
KubeFlannelConfigMap = "kube-flannel-cfg"
+ KubeCiliumConfigMap = "cilium-config"
KubeFlannelNetworkConf = "net-conf.json"
KubeFlannelIPPool = "Network"
)
@@ -96,7 +98,6 @@ func (c *Controller) Start(ctx context.Context) error {
informer := factory.Core().V1().Pods().Informer()
c.podLister = factory.Core().V1().Pods().Lister()
podFilterFunc := func(pod *corev1.Pod) bool {
- //TODO 确认这个写法是否正确
return pod.Labels["component"] == "kube-apiserver"
}
@@ -124,7 +125,13 @@ func (c *Controller) Start(ctx context.Context) error {
return err
}
- if cluster.Spec.CNI == FlannelCNI {
+ if cluster.Spec.ClusterLinkOptions.CNI == CiliumCNI {
+ c.setClusterPodCIDRFun, err = c.initCiliumInformer(ctx, cluster, c.kubeClient)
+ if err != nil {
+ klog.Errorf("cluster %s initCiliumInformer err: %v", cluster.Name, err)
+ return err
+ }
+ } else if cluster.Spec.ClusterLinkOptions.CNI == FlannelCNI {
c.setClusterPodCIDRFun, err = c.initFlannelInformer(ctx, cluster, c.kubeClient)
if err != nil {
klog.Errorf("cluster %s initCalicoInformer err: %v", err)
@@ -202,7 +209,7 @@ func (c *Controller) Reconcile(key utils.QueueKey) error {
return err
}
- reconcileCluster.Status.ServiceCIDRs = serviceCIDRS
+ reconcileCluster.Status.ClusterLinkStatus.ServiceCIDRs = serviceCIDRS
//TODO use sub resource
_, err = c.clusterLinkClient.KosmosV1alpha1().Clusters().Update(context.TODO(), reconcileCluster, metav1.UpdateOptions{})
if err != nil {
@@ -213,7 +220,6 @@ func (c *Controller) Reconcile(key utils.QueueKey) error {
}
func (c *Controller) initCalicoInformer(context context.Context, cluster *clusterlinkv1alpha1.Cluster, dynamicClient dynamic.Interface) (SetClusterPodCIDRFun, error) {
- //TODO 这里应该判断cluster的cni插件类型,如果是calico才去观测ippool事件,否则可能都没有ippool这个资源对象,watch一个不存在的资源对象可能会导致这里报错
dynamicInformerFactory := dynamicinformer.NewDynamicSharedInformerFactory(dynamicClient, 0)
gvr := schema.GroupVersionResource{
Group: "crd.projectcalico.org",
@@ -289,7 +295,7 @@ func (c *Controller) initCalicoInformer(context context.Context, cluster *cluste
}
}
}
- cluster.Status.PodCIDRs = podCIDRS
+ cluster.Status.ClusterLinkStatus.PodCIDRs = podCIDRS
return nil
}, nil
}
@@ -364,14 +370,14 @@ func (c *Controller) initCalicoWatcherWithEtcdBackend(ctx context.Context, clust
podCIDRs = append(podCIDRs, ippool.Spec.CIDR)
}
}
- cluster.Status.PodCIDRs = podCIDRs
+ cluster.Status.ClusterLinkStatus.PodCIDRs = podCIDRs
return nil
}, nil
}
// todo by wuyingjun-lucky
func (c *Controller) initFlannelInformer(context context.Context, cluster *clusterlinkv1alpha1.Cluster, kubeClient kubernetes.Interface) (SetClusterPodCIDRFun, error) {
- informerFactory := informers.NewSharedInformerFactoryWithOptions(kubeClient, 0, informers.WithNamespace(KubeFlannelNamespace))
+ informerFactory := informers.NewSharedInformerFactory(kubeClient, 0)
lister := informerFactory.Core().V1().ConfigMaps().Lister()
_, err := informerFactory.Core().V1().ConfigMaps().Informer().AddEventHandler(
cache.ResourceEventHandlerFuncs{
@@ -434,7 +440,60 @@ func (c *Controller) initFlannelInformer(context context.Context, cluster *clust
break
}
}
- cluster.Status.PodCIDRs = podCIDRS
+ cluster.Status.ClusterLinkStatus.PodCIDRs = podCIDRS
+ return nil
+ }, nil
+}
+
+func (c *Controller) initCiliumInformer(ctx context.Context, cluster *clusterlinkv1alpha1.Cluster, kubeClient *kubernetes.Clientset) (SetClusterPodCIDRFun, error) {
+ informerFactory := informers.NewSharedInformerFactory(kubeClient, 0)
+ lister := informerFactory.Core().V1().ConfigMaps().Lister()
+ _, err := informerFactory.Core().V1().ConfigMaps().Informer().AddEventHandler(
+ cache.FilteringResourceEventHandler{
+ FilterFunc: func(obj interface{}) bool {
+ cm, ok := obj.(*corev1.ConfigMap)
+ if !ok {
+ return false
+ }
+ return cm.Name == KubeCiliumConfigMap && cm.Namespace == "kube-system"
+ },
+ Handler: cache.ResourceEventHandlerFuncs{
+ AddFunc: func(obj interface{}) {
+ c.onChange(cluster)
+ },
+ UpdateFunc: func(oldObj, newObj interface{}) {
+ c.onChange(cluster)
+ },
+ DeleteFunc: func(obj interface{}) {
+ c.onChange(cluster)
+ },
+ },
+ })
+
+ if err != nil {
+ return nil, err
+ }
+ informerFactory.Start(ctx.Done())
+ informerFactory.WaitForCacheSync(ctx.Done())
+ return func(cluster *clusterlinkv1alpha1.Cluster) error {
+ cm, err := lister.ConfigMaps("kube-system").Get(KubeCiliumConfigMap)
+ if err != nil {
+ return err
+ }
+ var podCIDRS []string
+ ipv4CIDR, ok := cm.Data["cluster-pool-ipv4-cidr"]
+ if !ok {
+ klog.Warningf("cluster-pool-ipv4-cidr not found in cilium-config ConfigMap")
+ }
+ podCIDRS = append(podCIDRS, cast.ToStringSlice(ipv4CIDR)...)
+
+ ipv6CIDR, ok := cm.Data["cluster-pool-ipv6-cidr"]
+ if !ok {
+ klog.Warningf("cluster-pool-ipv6-cidr not found in cilium-config ConfigMap")
+ }
+ podCIDRS = append(podCIDRS, cast.ToStringSlice(ipv6CIDR)...)
+
+ cluster.Status.ClusterLinkStatus.PodCIDRs = podCIDRS
return nil
}, nil
}
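The Cilium branch above reads pod CIDRs out of the `cilium-config` ConfigMap via `cast.ToStringSlice`, which for a string input splits on whitespace. A minimal sketch of that extraction using only the standard library — the key names follow the diff, but the helper itself is illustrative, not the controller's code:

```go
package main

import (
	"fmt"
	"strings"
)

// podCIDRsFromCiliumConfig pulls the cluster-pool CIDRs out of the
// cilium-config ConfigMap data. Missing keys are simply skipped.
func podCIDRsFromCiliumConfig(data map[string]string) []string {
	var cidrs []string
	for _, k := range []string{"cluster-pool-ipv4-cidr", "cluster-pool-ipv6-cidr"} {
		if v, ok := data[k]; ok && v != "" {
			// A value may list several CIDRs separated by whitespace.
			cidrs = append(cidrs, strings.Fields(v)...)
		}
	}
	return cidrs
}

func main() {
	data := map[string]string{
		"cluster-pool-ipv4-cidr": "10.244.0.0/16",
		"cluster-pool-ipv6-cidr": "fd00::/104",
	}
	fmt.Println(podCIDRsFromCiliumConfig(data))
}
```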
diff --git a/pkg/clusterlink/controllers/node/node_controller.go b/pkg/clusterlink/controllers/node/node_controller.go
index 02ddd3949..68a230aed 100644
--- a/pkg/clusterlink/controllers/node/node_controller.go
+++ b/pkg/clusterlink/controllers/node/node_controller.go
@@ -23,7 +23,7 @@ import (
clusterlinkv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
"github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned"
- interfacepolicy "github.com/kosmos.io/kosmos/pkg/utils/interface-policy"
+ "github.com/kosmos.io/kosmos/pkg/utils"
)
const (
@@ -41,16 +41,19 @@ type Reconciler struct {
var predicatesFunc = predicate.Funcs{
CreateFunc: func(createEvent event.CreateEvent) bool {
- return true
+ node := createEvent.Object.(*corev1.Node)
+ return !utils.IsKosmosNode(node) && !utils.IsExcludeNode(node)
},
UpdateFunc: func(updateEvent event.UpdateEvent) bool {
- return true
+ node := updateEvent.ObjectNew.(*corev1.Node)
+ return !utils.IsKosmosNode(node) && !utils.IsExcludeNode(node)
},
DeleteFunc: func(deleteEvent event.DeleteEvent) bool {
- return true
+ node := deleteEvent.Object.(*corev1.Node)
+ return !utils.IsKosmosNode(node) && !utils.IsExcludeNode(node)
},
GenericFunc: func(genericEvent event.GenericEvent) bool {
- return true
+ return false
},
}
@@ -104,18 +107,11 @@ func (r *Reconciler) Reconcile(ctx context.Context, request reconcile.Request) (
},
},
}
- cluster, err := r.ClusterLinkClient.KosmosV1alpha1().Clusters().Get(ctx, r.ClusterName, metav1.GetOptions{})
- if err != nil {
- klog.Errorf("get cluster %s err: %v", r.ClusterName, err)
- return reconcile.Result{Requeue: true}, nil
- }
- err = CreateOrUpdateClusterNode(r.ClusterLinkClient, clusterNode, func(n *clusterlinkv1alpha1.ClusterNode) error {
+ err := CreateOrUpdateClusterNode(r.ClusterLinkClient, clusterNode, func(n *clusterlinkv1alpha1.ClusterNode) error {
n.Spec.NodeName = node.Name
n.Spec.ClusterName = r.ClusterName
- n.Spec.IP = internalIP
- n.Spec.IP6 = internalIP6
- n.Spec.InterfaceName = interfacepolicy.GetInterfaceName(cluster.Spec.NICNodeNames, node.Name, cluster.Spec.DefaultNICName)
+ // n.Spec.InterfaceName will be set by clusterlink-agent
return nil
})
if err != nil {
diff --git a/pkg/clusterlink/controllers/nodecidr/nodecidr_controller.go b/pkg/clusterlink/controllers/nodecidr/nodecidr_controller.go
index 3f6b1774a..d529040ce 100644
--- a/pkg/clusterlink/controllers/nodecidr/nodecidr_controller.go
+++ b/pkg/clusterlink/controllers/nodecidr/nodecidr_controller.go
@@ -102,7 +102,7 @@ func (c *controller) Start(ctx context.Context) error {
clusterInformerFactory.WaitForCacheSync(stopCh)
// third step: init CNI Adapter
- if cluster.Spec.CNI == calicoCNI {
+ if cluster.Spec.ClusterLinkOptions.CNI == calicoCNI {
c.cniAdapter = NewCalicoAdapter(c.config, c.clusterNodeLister, c.processor)
} else {
c.cniAdapter = NewCommonAdapter(c.config, c.clusterNodeLister, c.processor)
diff --git a/pkg/clusterlink/network-manager/controller.go b/pkg/clusterlink/network-manager/controller.go
index 7bde7592c..50de9a4c9 100644
--- a/pkg/clusterlink/network-manager/controller.go
+++ b/pkg/clusterlink/network-manager/controller.go
@@ -123,7 +123,19 @@ func (c *Controller) Reconcile(ctx context.Context, request reconcile.Request) (
}
str := c.NetworkManager.GetConfigsString()
- klog.V(4).Infof(str)
+ klog.V(6).Infof("the nodeConfigs of this round of calculations: %s", str)
+
+ // clear nodeConfigs
+ for i, nc := range nodeConfigList.Items {
+ if _, ok := nodeConfigs[nc.Name]; !ok {
+ err = c.Delete(ctx, &nodeConfigList.Items[i])
+ if err != nil {
+ klog.Warningf("failed to delete nodeConfig %s, err: %v", nc.Name, err)
+ continue
+ }
+ klog.Infof("nodeConfig %s has been deleted", nc.Name)
+ }
+ }
for nodeName, config := range nodeConfigs {
nc := &clusterlinkv1alpha1.NodeConfig{
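The cleanup loop added above deletes every existing NodeConfig whose name no longer appears in the freshly calculated `nodeConfigs` map. The reconciliation step can be sketched on its own, with plain strings standing in for the real `clusterlinkv1alpha1.NodeConfig` objects:

```go
package main

import "fmt"

// gcStale returns the names of live objects that are absent from the
// desired set; the controller deletes exactly these NodeConfigs and
// keeps (or updates) the rest.
func gcStale(live []string, desired map[string]bool) []string {
	var stale []string
	for _, name := range live {
		if !desired[name] {
			stale = append(stale, name)
		}
	}
	return stale
}

func main() {
	live := []string{"node-a", "node-b", "node-c"}
	desired := map[string]bool{"node-a": true, "node-c": true}
	fmt.Println(gcStale(live, desired))
}
```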
diff --git a/pkg/clusterlink/network-manager/handlers/globalmap.go b/pkg/clusterlink/network-manager/handlers/globalmap.go
index 823685b98..3c8393e46 100644
--- a/pkg/clusterlink/network-manager/handlers/globalmap.go
+++ b/pkg/clusterlink/network-manager/handlers/globalmap.go
@@ -23,10 +23,10 @@ func (h *GlobalMap) Do(c *Context) (err error) {
for _, n := range nodes {
cluster := c.Filter.GetClusterByName(n.Spec.ClusterName)
- globalMap := cluster.Spec.GlobalCIDRsMap
+ globalMap := cluster.Spec.ClusterLinkOptions.GlobalCIDRsMap
if len(globalMap) > 0 {
- for src, dst := range cluster.Spec.GlobalCIDRsMap {
+ for src, dst := range cluster.Spec.ClusterLinkOptions.GlobalCIDRsMap {
ipType := helpers.GetIPType(src)
var vxBridge string
diff --git a/pkg/clusterlink/network-manager/handlers/host_network.go b/pkg/clusterlink/network-manager/handlers/host_network.go
index 18e177e30..8496c57bf 100644
--- a/pkg/clusterlink/network-manager/handlers/host_network.go
+++ b/pkg/clusterlink/network-manager/handlers/host_network.go
@@ -24,13 +24,13 @@ func (h *HostNetwork) Do(c *Context) (err error) {
c.Results[n.Name].Iptables = append(c.Results[n.Name].Iptables, v1alpha1.Iptables{
Table: "nat",
Chain: constants.IPTablesPostRoutingChain,
- Rule: fmt.Sprintf("-s %s -o %s -j MASQUERADE", cluster.Spec.LocalCIDRs.IP, constants.VXLAN_BRIDGE_NAME),
+ Rule: fmt.Sprintf("-s %s -o %s -j MASQUERADE", cluster.Spec.ClusterLinkOptions.LocalCIDRs.IP, constants.VXLAN_BRIDGE_NAME),
})
c.Results[n.Name].Iptables = append(c.Results[n.Name].Iptables, v1alpha1.Iptables{
Table: "nat",
Chain: constants.IPTablesPostRoutingChain,
- Rule: fmt.Sprintf("-s %s -j MASQUERADE", cluster.Spec.BridgeCIDRs.IP),
+ Rule: fmt.Sprintf("-s %s -j MASQUERADE", cluster.Spec.ClusterLinkOptions.BridgeCIDRs.IP),
})
}
@@ -38,13 +38,13 @@ func (h *HostNetwork) Do(c *Context) (err error) {
c.Results[n.Name].Iptables = append(c.Results[n.Name].Iptables, v1alpha1.Iptables{
Table: "nat",
Chain: constants.IPTablesPostRoutingChain,
- Rule: fmt.Sprintf("-s %s -o %s -j MASQUERADE", cluster.Spec.LocalCIDRs.IP6, constants.VXLAN_BRIDGE_NAME_6),
+ Rule: fmt.Sprintf("-s %s -o %s -j MASQUERADE", cluster.Spec.ClusterLinkOptions.LocalCIDRs.IP6, constants.VXLAN_BRIDGE_NAME_6),
})
c.Results[n.Name].Iptables = append(c.Results[n.Name].Iptables, v1alpha1.Iptables{
Table: "nat",
Chain: constants.IPTablesPostRoutingChain,
- Rule: fmt.Sprintf("-s %s -j MASQUERADE", cluster.Spec.BridgeCIDRs.IP6),
+ Rule: fmt.Sprintf("-s %s -j MASQUERADE", cluster.Spec.ClusterLinkOptions.BridgeCIDRs.IP6),
})
}
}
diff --git a/pkg/clusterlink/network-manager/handlers/pod_routes.go b/pkg/clusterlink/network-manager/handlers/pod_routes.go
index 8cba943db..2d208f6df 100644
--- a/pkg/clusterlink/network-manager/handlers/pod_routes.go
+++ b/pkg/clusterlink/network-manager/handlers/pod_routes.go
@@ -27,16 +27,38 @@ func (h *PodRoutes) Do(c *Context) (err error) {
if cluster.IsP2P() {
podCIDRs = target.Spec.PodCIDRs
} else {
- podCIDRs = cluster.Status.PodCIDRs
+ podCIDRs = cluster.Status.ClusterLinkStatus.PodCIDRs
}
- podCIDRs = ConvertToGlobalCIDRs(podCIDRs, cluster.Spec.GlobalCIDRsMap)
+ podCIDRs = FilterByIPFamily(podCIDRs, cluster.Spec.ClusterLinkOptions.IPFamily)
+ podCIDRs = ConvertToGlobalCIDRs(podCIDRs, cluster.Spec.ClusterLinkOptions.GlobalCIDRsMap)
BuildRoutes(c, target, podCIDRs)
}
return nil
}
+func convertIPFamilyTypeToIPType(familyType v1alpha1.IPFamilyType) helpers.IPType {
+ if familyType == v1alpha1.IPFamilyTypeIPV4 {
+ return helpers.IPV4
+ }
+ return helpers.IPV6
+}
+
+func FilterByIPFamily(cidrs []string, familyType v1alpha1.IPFamilyType) (results []string) {
+ if familyType == v1alpha1.IPFamilyTypeALL {
+ return cidrs
+ }
+ specifiedIPType := convertIPFamilyTypeToIPType(familyType)
+ for _, cidr := range cidrs {
+ ipType := helpers.GetIPType(cidr)
+ if ipType == specifiedIPType {
+ results = append(results, cidr)
+ }
+ }
+ return
+}
+
func ConvertToGlobalCIDRs(cidrs []string, globalCIDRMap map[string]string) []string {
if len(globalCIDRMap) == 0 {
return cidrs
@@ -64,6 +86,14 @@ func ifCIDRConflictWithSelf(selfCIDRs []string, tarCIDR string) bool {
return false
}
+func SupportIPType(cluster *v1alpha1.Cluster, ipType helpers.IPType) bool {
+ if cluster.Spec.ClusterLinkOptions.IPFamily == v1alpha1.IPFamilyTypeALL {
+ return true
+ }
+ specifiedIPType := convertIPFamilyTypeToIPType(cluster.Spec.ClusterLinkOptions.IPFamily)
+ return specifiedIPType == ipType
+}
+
func BuildRoutes(ctx *Context, target *v1alpha1.ClusterNode, cidrs []string) {
otherClusterNodes := ctx.Filter.GetAllNodesExceptCluster(target.Spec.ClusterName)
@@ -81,6 +111,11 @@ func BuildRoutes(ctx *Context, target *v1alpha1.ClusterNode, cidrs []string) {
}
targetDev := ctx.GetDeviceFromResults(target.Name, vxBridge)
+ if targetDev == nil {
+ klog.Warningf("cannot find the target dev, nodeName: %s, devName: %s", target.Name, vxBridge)
+ continue
+ }
+
targetIP, _, err := net.ParseCIDR(targetDev.Addr)
if err != nil {
klog.Warning("cannot parse target dev addr, nodeName: %s, devName: %s", target.Name, vxBridge)
@@ -89,8 +124,11 @@ func BuildRoutes(ctx *Context, target *v1alpha1.ClusterNode, cidrs []string) {
for _, n := range otherClusterNodes {
srcCluster := ctx.Filter.GetClusterByName(n.Spec.ClusterName)
+ if !SupportIPType(srcCluster, ipType) {
+ continue
+ }
- allCIDRs := append(srcCluster.Status.PodCIDRs, srcCluster.Status.ServiceCIDRs...)
+ allCIDRs := append(srcCluster.Status.ClusterLinkStatus.PodCIDRs, srcCluster.Status.ClusterLinkStatus.ServiceCIDRs...)
if ifCIDRConflictWithSelf(allCIDRs, cidr) {
continue
}
@@ -105,7 +143,17 @@ func BuildRoutes(ctx *Context, target *v1alpha1.ClusterNode, cidrs []string) {
}
gw := ctx.Filter.GetGatewayNodeByClusterName(n.Spec.ClusterName)
+ if gw == nil {
+ klog.Warningf("cannot find gateway node, cluster name: %s", n.Spec.ClusterName)
+ continue
+ }
+
gwDev := ctx.GetDeviceFromResults(gw.Name, vxLocal)
+ if gwDev == nil {
+ klog.Warningf("cannot find the gw dev, nodeName: %s, devName: %s", gw.Name, vxLocal)
+ continue
+ }
+
gwIP, _, err := net.ParseCIDR(gwDev.Addr)
if err != nil {
klog.Warning("cannot parse gw dev addr, nodeName: %s, devName: %s", gw.Name, vxLocal)
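The new `FilterByIPFamily` helper keeps only the CIDRs matching the cluster's configured IP family before routes are built. A self-contained approximation of that filter, where a colon check stands in for the real `helpers.GetIPType` and plain strings replace the `v1alpha1.IPFamilyType` constants:

```go
package main

import (
	"fmt"
	"strings"
)

// filterByFamily passes everything through for "all" and otherwise keeps
// only CIDRs of the requested family. IPv6 CIDRs are recognized here by
// the presence of a colon, which is sufficient for valid CIDR strings.
func filterByFamily(cidrs []string, family string) []string {
	if family == "all" {
		return cidrs
	}
	var out []string
	for _, cidr := range cidrs {
		isV6 := strings.Contains(cidr, ":")
		if (family == "ipv6") == isV6 {
			out = append(out, cidr)
		}
	}
	return out
}

func main() {
	cidrs := []string{"10.244.0.0/16", "fd00::/64"}
	fmt.Println(filterByFamily(cidrs, "ipv4"))
}
```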
diff --git a/pkg/clusterlink/network-manager/handlers/svc_routes.go b/pkg/clusterlink/network-manager/handlers/svc_routes.go
index d21a98d48..55341ef25 100644
--- a/pkg/clusterlink/network-manager/handlers/svc_routes.go
+++ b/pkg/clusterlink/network-manager/handlers/svc_routes.go
@@ -9,9 +9,10 @@ func (h *ServiceRoutes) Do(c *Context) (err error) {
for _, target := range gwNodes {
cluster := c.Filter.GetClusterByName(target.Spec.ClusterName)
- serviceCIDRs := cluster.Status.ServiceCIDRs
+ serviceCIDRs := cluster.Status.ClusterLinkStatus.ServiceCIDRs
- serviceCIDRs = ConvertToGlobalCIDRs(serviceCIDRs, cluster.Spec.GlobalCIDRsMap)
+ serviceCIDRs = FilterByIPFamily(serviceCIDRs, cluster.Spec.ClusterLinkOptions.IPFamily)
+ serviceCIDRs = ConvertToGlobalCIDRs(serviceCIDRs, cluster.Spec.ClusterLinkOptions.GlobalCIDRsMap)
BuildRoutes(c, target, serviceCIDRs)
}
diff --git a/pkg/clusterlink/network-manager/handlers/vxbridge_network.go b/pkg/clusterlink/network-manager/handlers/vxbridge_network.go
index be2a48b21..e80012cd2 100644
--- a/pkg/clusterlink/network-manager/handlers/vxbridge_network.go
+++ b/pkg/clusterlink/network-manager/handlers/vxbridge_network.go
@@ -40,18 +40,18 @@ func (h *VxBridgeNetwork) Do(c *Context) (err error) {
func (h *VxBridgeNetwork) needToCreateVxBridge(c *Context, clusterNode *v1alpha1.ClusterNode, cluster *v1alpha1.Cluster) bool {
return c.Filter.SupportIPv4(clusterNode) &&
clusterNode.Spec.IP != "" &&
- cluster.Spec.BridgeCIDRs.IP != ""
+ cluster.Spec.ClusterLinkOptions.BridgeCIDRs.IP != ""
}
func (h *VxBridgeNetwork) needToCreateVxBridge6(c *Context, clusterNode *v1alpha1.ClusterNode, cluster *v1alpha1.Cluster) bool {
return c.Filter.SupportIPv6(clusterNode) &&
clusterNode.Spec.IP6 != "" &&
- cluster.Spec.BridgeCIDRs.IP6 != ""
+ cluster.Spec.ClusterLinkOptions.BridgeCIDRs.IP6 != ""
}
func (h *VxBridgeNetwork) createVxBridge(c *Context, clusterNode *v1alpha1.ClusterNode, cluster *v1alpha1.Cluster) *v1alpha1.Device {
devOld := c.Filter.GetDeviceFromNodeConfig(clusterNode.Name, constants.VXLAN_BRIDGE_NAME)
- dev := helpers.BuildVxlanDevice(constants.VXLAN_BRIDGE_NAME, clusterNode.Spec.IP, cluster.Spec.BridgeCIDRs.IP, clusterNode.Spec.InterfaceName)
+ dev := helpers.BuildVxlanDevice(constants.VXLAN_BRIDGE_NAME, clusterNode.Spec.IP, cluster.Spec.ClusterLinkOptions.BridgeCIDRs.IP, clusterNode.Spec.InterfaceName)
if devOld != nil && devOld.Mac != "" {
dev.Mac = devOld.Mac
}
@@ -60,7 +60,7 @@ func (h *VxBridgeNetwork) createVxBridge(c *Context, clusterNode *v1alpha1.Clust
func (h *VxBridgeNetwork) createVxBridge6(c *Context, clusterNode *v1alpha1.ClusterNode, cluster *v1alpha1.Cluster) *v1alpha1.Device {
devOld := c.Filter.GetDeviceFromNodeConfig(clusterNode.Name, constants.VXLAN_BRIDGE_NAME_6)
- dev := helpers.BuildVxlanDevice(constants.VXLAN_BRIDGE_NAME_6, clusterNode.Spec.IP6, cluster.Spec.BridgeCIDRs.IP6, clusterNode.Spec.InterfaceName)
+ dev := helpers.BuildVxlanDevice(constants.VXLAN_BRIDGE_NAME_6, clusterNode.Spec.IP6, cluster.Spec.ClusterLinkOptions.BridgeCIDRs.IP6, clusterNode.Spec.InterfaceName)
if devOld != nil && devOld.Mac != "" {
dev.Mac = devOld.Mac
}
diff --git a/pkg/clusterlink/network-manager/handlers/vxlocal_mac_cache.go b/pkg/clusterlink/network-manager/handlers/vxlocal_mac_cache.go
index c018eae20..1d589a3c1 100644
--- a/pkg/clusterlink/network-manager/handlers/vxlocal_mac_cache.go
+++ b/pkg/clusterlink/network-manager/handlers/vxlocal_mac_cache.go
@@ -20,6 +20,10 @@ func (h *VxLocalMacCache) Do(c *Context) (err error) {
for _, node := range nodes {
ipTypes := h.getSupportIPTypes(node, c)
gw := c.Filter.GetGatewayNodeByClusterName(node.Spec.ClusterName)
+ if gw == nil {
+ klog.Warningf("cannot find gateway node, cluster name: %s", node.Spec.ClusterName)
+ continue
+ }
for _, ipType := range ipTypes {
fdb, arp, err := h.buildVxLocalCachesByNode(c, ipType, gw)
diff --git a/pkg/clusterlink/network-manager/handlers/vxlocal_network.go b/pkg/clusterlink/network-manager/handlers/vxlocal_network.go
index 3ed508b5f..fab4ab352 100644
--- a/pkg/clusterlink/network-manager/handlers/vxlocal_network.go
+++ b/pkg/clusterlink/network-manager/handlers/vxlocal_network.go
@@ -37,18 +37,18 @@ func (h *VxLocalNetwork) Do(c *Context) (err error) {
func (h *VxLocalNetwork) needToCreateVxLocal(c *Context, clusterNode *v1alpha1.ClusterNode, cluster *v1alpha1.Cluster) bool {
return c.Filter.SupportIPv4(clusterNode) &&
clusterNode.Spec.IP != "" &&
- cluster.Spec.LocalCIDRs.IP != ""
+ cluster.Spec.ClusterLinkOptions.LocalCIDRs.IP != ""
}
func (h *VxLocalNetwork) needToCreateVxLocal6(c *Context, clusterNode *v1alpha1.ClusterNode, cluster *v1alpha1.Cluster) bool {
return c.Filter.SupportIPv6(clusterNode) &&
clusterNode.Spec.IP6 != "" &&
- cluster.Spec.LocalCIDRs.IP6 != ""
+ cluster.Spec.ClusterLinkOptions.LocalCIDRs.IP6 != ""
}
func (h *VxLocalNetwork) createVxLocal(c *Context, clusterNode *v1alpha1.ClusterNode, cluster *v1alpha1.Cluster) *v1alpha1.Device {
devOld := c.Filter.GetDeviceFromNodeConfig(clusterNode.Name, constants.VXLAN_LOCAL_NAME)
- dev := helpers.BuildVxlanDevice(constants.VXLAN_LOCAL_NAME, clusterNode.Spec.IP, cluster.Spec.LocalCIDRs.IP, clusterNode.Spec.InterfaceName)
+ dev := helpers.BuildVxlanDevice(constants.VXLAN_LOCAL_NAME, clusterNode.Spec.IP, cluster.Spec.ClusterLinkOptions.LocalCIDRs.IP, clusterNode.Spec.InterfaceName)
if devOld != nil && devOld.Mac != "" {
dev.Mac = devOld.Mac
}
@@ -57,7 +57,7 @@ func (h *VxLocalNetwork) createVxLocal(c *Context, clusterNode *v1alpha1.Cluster
func (h *VxLocalNetwork) createVxLocal6(c *Context, clusterNode *v1alpha1.ClusterNode, cluster *v1alpha1.Cluster) *v1alpha1.Device {
devOld := c.Filter.GetDeviceFromNodeConfig(clusterNode.Name, constants.VXLAN_LOCAL_NAME_6)
- dev := helpers.BuildVxlanDevice(constants.VXLAN_LOCAL_NAME_6, clusterNode.Spec.IP6, cluster.Spec.LocalCIDRs.IP6, clusterNode.Spec.InterfaceName)
+ dev := helpers.BuildVxlanDevice(constants.VXLAN_LOCAL_NAME_6, clusterNode.Spec.IP6, cluster.Spec.ClusterLinkOptions.LocalCIDRs.IP6, clusterNode.Spec.InterfaceName)
if devOld != nil && devOld.Mac != "" {
dev.Mac = devOld.Mac
}
diff --git a/pkg/clusterlink/network-manager/helpers/filter.go b/pkg/clusterlink/network-manager/helpers/filter.go
index 98b89c8c6..67b8761da 100644
--- a/pkg/clusterlink/network-manager/helpers/filter.go
+++ b/pkg/clusterlink/network-manager/helpers/filter.go
@@ -13,7 +13,7 @@ type Filter struct {
clustersMap map[string]*v1alpha1.Cluster
}
-func NewFilter(clusterNodes []v1alpha1.ClusterNode, clusters []v1alpha1.Cluster, nodeConfigs []v1alpha1.NodeConfig) *Filter {
+func NewFilter(clusters []v1alpha1.Cluster, clusterNodes []v1alpha1.ClusterNode, nodeConfigs []v1alpha1.NodeConfig) *Filter {
cm := buildClustersMap(clusters)
cs := convertToPointerSlice(clusters)
cns := convertToPointerSlice(clusterNodes)
@@ -77,6 +77,10 @@ func (f *Filter) GetGatewayNodeByClusterName(clusterName string) *v1alpha1.Clust
func (f *Filter) GetInternalNodesByClusterName(clusterName string) []*v1alpha1.ClusterNode {
var results []*v1alpha1.ClusterNode
for _, node := range f.clusterNodes {
+ cluster := f.GetClusterByName(node.Spec.ClusterName)
+ if cluster.IsP2P() {
+ continue
+ }
if node.Spec.ClusterName == clusterName && !node.IsGateway() {
results = append(results, node)
}
@@ -121,12 +125,12 @@ func (f *Filter) GetGatewayClusterNodes() []*v1alpha1.ClusterNode {
func (f *Filter) SupportIPv4(node *v1alpha1.ClusterNode) bool {
cluster := f.GetClusterByName(node.Spec.ClusterName)
- return cluster.Spec.IPFamily == v1alpha1.IPFamilyTypeALL || cluster.Spec.IPFamily == v1alpha1.IPFamilyTypeIPV4
+ return cluster.Spec.ClusterLinkOptions.IPFamily == v1alpha1.IPFamilyTypeALL || cluster.Spec.ClusterLinkOptions.IPFamily == v1alpha1.IPFamilyTypeIPV4
}
func (f *Filter) SupportIPv6(node *v1alpha1.ClusterNode) bool {
cluster := f.GetClusterByName(node.Spec.ClusterName)
- return cluster.Spec.IPFamily == v1alpha1.IPFamilyTypeALL || cluster.Spec.IPFamily == v1alpha1.IPFamilyTypeIPV6
+ return cluster.Spec.ClusterLinkOptions.IPFamily == v1alpha1.IPFamilyTypeALL || cluster.Spec.ClusterLinkOptions.IPFamily == v1alpha1.IPFamilyTypeIPV6
}
func (f *Filter) GetDeviceFromNodeConfig(nodeName string, devName string) *v1alpha1.Device {
diff --git a/pkg/clusterlink/network-manager/network_manager.go b/pkg/clusterlink/network-manager/network_manager.go
index 55fbfa76d..e2d8a9064 100644
--- a/pkg/clusterlink/network-manager/network_manager.go
+++ b/pkg/clusterlink/network-manager/network_manager.go
@@ -19,9 +19,55 @@ func NewManager() *Manager {
return &Manager{}
}
+// ExcludeInvalidItems verifies that clusters and clusterNodes are valid, logs why invalid items are skipped, and returns only the valid ones
+func ExcludeInvalidItems(clusters []v1alpha1.Cluster, clusterNodes []v1alpha1.ClusterNode) (cs []v1alpha1.Cluster, cns []v1alpha1.ClusterNode) {
+ klog.Infof("Start verifying clusterNodes and clusters")
+ clustersMap := map[string]v1alpha1.Cluster{}
+ for _, c := range clusters {
+ if c.Spec.ClusterLinkOptions == nil {
+ klog.Infof("the ClusterLinkOptions of cluster %s is empty, excluding it.", c.Name)
+ continue
+ }
+ clustersMap[c.Name] = c
+ cs = append(cs, c)
+ }
+
+ for _, cn := range clusterNodes {
+ if len(cn.Spec.ClusterName) == 0 {
+ klog.Infof("the clusterName of clusterNode %s is empty, excluding it.", cn.Name)
+ continue
+ }
+ if len(cn.Spec.InterfaceName) == 0 {
+ klog.Infof("the interfaceName of clusterNode %s is empty, excluding it.", cn.Name)
+ continue
+ }
+
+ if _, ok := clustersMap[cn.Spec.ClusterName]; !ok {
+ klog.Infof("the cluster which clusterNode %s belongs to does not exist, or the cluster lacks the spec.clusterLinkOptions configuration.", cn.Name)
+ continue
+ }
+
+ c := clustersMap[cn.Spec.ClusterName]
+ supportIPv4 := c.Spec.ClusterLinkOptions.IPFamily == v1alpha1.IPFamilyTypeALL || c.Spec.ClusterLinkOptions.IPFamily == v1alpha1.IPFamilyTypeIPV4
+ supportIPv6 := c.Spec.ClusterLinkOptions.IPFamily == v1alpha1.IPFamilyTypeALL || c.Spec.ClusterLinkOptions.IPFamily == v1alpha1.IPFamilyTypeIPV6
+ if supportIPv4 && len(cn.Spec.IP) == 0 {
+ klog.Infof("the clusterNode %s's ip is empty, but cluster's ipFamily is %s", cn.Name, c.Spec.ClusterLinkOptions.IPFamily)
+ continue
+ }
+ if supportIPv6 && len(cn.Spec.IP6) == 0 {
+ klog.Infof("the clusterNode %s's ip6 is empty, but cluster's ipFamily is %s", cn.Name, c.Spec.ClusterLinkOptions.IPFamily)
+ continue
+ }
+
+ cns = append(cns, cn)
+ }
+ return
+}
+
// CalculateNetworkConfigs Calculate the network configuration required for each node
func (n *Manager) CalculateNetworkConfigs(clusters []v1alpha1.Cluster, clusterNodes []v1alpha1.ClusterNode, nodeConfigs []v1alpha1.NodeConfig) (map[string]*handlers.NodeConfig, error) {
- filter := helpers.NewFilter(clusterNodes, clusters, nodeConfigs)
+ cs, cns := ExcludeInvalidItems(clusters, clusterNodes)
+ filter := helpers.NewFilter(cs, cns, nodeConfigs)
c := &handlers.Context{
Filter: filter,
diff --git a/pkg/clusterlink/network/adapter.go b/pkg/clusterlink/network/adapter.go
index 458e690d8..c8ce08b80 100644
--- a/pkg/clusterlink/network/adapter.go
+++ b/pkg/clusterlink/network/adapter.go
@@ -185,5 +185,5 @@ func (n *DefaultNetWork) InitSys() {
}
func (n *DefaultNetWork) UpdateCidrConfig(cluster *clusterlinkv1alpha1.Cluster) {
- UpdateCidr(cluster.Spec.BridgeCIDRs.IP, cluster.Spec.BridgeCIDRs.IP6, cluster.Spec.LocalCIDRs.IP, cluster.Spec.LocalCIDRs.IP6)
+ UpdateCidr(cluster.Spec.ClusterLinkOptions.BridgeCIDRs.IP, cluster.Spec.ClusterLinkOptions.BridgeCIDRs.IP6, cluster.Spec.ClusterLinkOptions.LocalCIDRs.IP, cluster.Spec.ClusterLinkOptions.LocalCIDRs.IP6)
}
diff --git a/pkg/clusterlink/network/device.go b/pkg/clusterlink/network/device.go
index 7ab28c52c..e4ae3e1f8 100644
--- a/pkg/clusterlink/network/device.go
+++ b/pkg/clusterlink/network/device.go
@@ -3,6 +3,8 @@ package network
import (
"fmt"
"net"
+ "os"
+ "strings"
"syscall"
"github.com/pkg/errors"
@@ -15,11 +17,11 @@ import (
)
type IfaceInfo struct {
- MTU int
- name string
- // index int
- ip string
- ip6 string
+ MTU int
+ name string
+ index int
+ ip string
+ ip6 string
}
func getIfaceIPByName(name string) (*IfaceInfo, error) {
@@ -33,6 +35,7 @@ func getIfaceIPByName(name string) (*IfaceInfo, error) {
devIface.MTU = iface.Attrs().MTU
devIface.name = iface.Attrs().Name
+ devIface.index = iface.Attrs().Index
addrListV4, err := getFristScopeIPInLink(iface, name, netlink.FAMILY_V4)
if err == nil {
@@ -50,7 +53,7 @@ func getIfaceIPByName(name string) (*IfaceInfo, error) {
return devIface, nil
}
-func createNewVxlanIface(name string, addrIPWithMask *netlink.Addr, vxlanId int, vxlanPort int, hardwareAddr net.HardwareAddr, rIface *IfaceInfo, deviceIP string) error {
+func createNewVxlanIface(name string, addrIPWithMask *netlink.Addr, vxlanId int, vxlanPort int, hardwareAddr net.HardwareAddr, rIface *IfaceInfo, deviceIP string, vtepDevIndex int) error {
// srcAddr := rIface.ip
klog.Infof("name %v ------------------------- %v", name, deviceIP)
@@ -61,10 +64,11 @@ func createNewVxlanIface(name string, addrIPWithMask *netlink.Addr, vxlanId int,
Flags: net.FlagUp,
HardwareAddr: hardwareAddr,
},
- SrcAddr: net.ParseIP(deviceIP),
- VxlanId: vxlanId,
- Port: vxlanPort,
- Learning: false,
+ SrcAddr: net.ParseIP(deviceIP),
+ VxlanId: vxlanId,
+ Port: vxlanPort,
+ Learning: false,
+ VtepDevIndex: vtepDevIndex,
}
err := netlink.LinkAdd(iface)
@@ -99,30 +103,6 @@ func createNewVxlanIface(name string, addrIPWithMask *netlink.Addr, vxlanId int,
return nil
}
-func CIDRIPGenerator(ipcidr string, internalIP string) (*net.IP, error) {
- cidrip, ipNet, err := net.ParseCIDR(ipcidr)
- if err != nil {
- return nil, fmt.Errorf("CIDRIPGenerator err: %v", err)
- }
-
- nodeIP := net.ParseIP(internalIP)
-
- ret := net.ParseIP("0.0.0.0")
- for i := range ipNet.Mask {
- ret[len(ret)-i-1] = ^byte(ipNet.Mask[len(ipNet.Mask)-i-1])
- }
-
- for i := range nodeIP {
- ret[i] = byte(ret[i]) & byte(nodeIP[i])
- }
-
- for i := range cidrip {
- ret[i] = byte(ret[i]) | byte(cidrip[i])
- }
-
- return &ret, nil
-}
-
// load device info from environment
func loadDevices() ([]clusterlinkv1alpha1.Device, error) {
ret := []clusterlinkv1alpha1.Device{}
@@ -157,44 +137,36 @@ func loadDevices() ([]clusterlinkv1alpha1.Device, error) {
}
}
+ createNoneDevice := func() clusterlinkv1alpha1.Device {
+ // the device info is incomplete; the device will be recreated later
+ return clusterlinkv1alpha1.Device{
+ Type: clusterlinkv1alpha1.DeviceType(vxlanIface.Type()),
+ Name: vxlan.LinkAttrs.Name,
+ Addr: vxlanNet.String(),
+ Mac: vxlan.LinkAttrs.HardwareAddr.String(),
+ BindDev: "",
+ ID: int32(vxlan.VxlanId),
+ Port: int32(vxlan.Port),
+ }
+ }
+
if vxlanNet == nil {
msg := fmt.Sprintf("Cannot get ip of device: %s", d.name)
klog.Error(msg)
- return nil, fmt.Errorf(msg)
- }
-
- addrListAll, err := netlink.AddrList(nil, d.family)
-
- if err != nil {
- return nil, err
- }
-
- interfaceIndex := -1
- vxlanIPAddr := vxlanNet.IP.String()
-
- for _, r := range addrListAll {
- if r.LinkIndex != vxlanIface.Attrs().Index {
- if ip, err := CIDRIPGenerator(d.cidr, r.IP.String()); err == nil {
- if ip.String() == vxlanIPAddr {
- interfaceIndex = r.LinkIndex
- break
- }
- }
- }
+ ret = append(ret, createNoneDevice())
+ continue
}
+ interfaceIndex := vxlan.VtepDevIndex
bindDev := ""
- if interfaceIndex == -1 {
- klog.Warningf("can not find the phys_dev for vxlan, name: %s", d.name)
+ defaultIface, err := netlink.LinkByIndex(interfaceIndex)
+ if err != nil {
+ klog.Errorf("failed to get device by link index: %v", err)
+ ret = append(ret, createNoneDevice())
+ continue
} else {
- defaultIface, err := netlink.LinkByIndex(interfaceIndex)
- if err != nil {
- klog.Errorf("When we get device by linkinx, get error : %v", err)
- return nil, err
- } else {
- bindDev = defaultIface.Attrs().Name
- }
+ bindDev = defaultIface.Attrs().Name
}
ret = append(ret, clusterlinkv1alpha1.Device{
@@ -245,8 +217,9 @@ func addDevice(d clusterlinkv1alpha1.Device) error {
deviceIP = currentIfaceInfo.ip6
family = netlink.FAMILY_V6
}
+ vtepDevIndex := currentIfaceInfo.index
- if err = createNewVxlanIface(d.Name, addrIPvWithMask, id, port, hardwareAddr, currentIfaceInfo, deviceIP); err != nil {
+ if err = createNewVxlanIface(d.Name, addrIPvWithMask, id, port, hardwareAddr, currentIfaceInfo, deviceIP, vtepDevIndex); err != nil {
klog.Errorf("ipv4 err: %v", err)
return err
}
@@ -323,14 +296,24 @@ func UpdateDefaultIptablesAndKernalConfig(name string, ipFamily int) error {
return err
}
- // tunl0 device
- if err := UpdateDefaultIp4tablesBehavior("tunl0"); err != nil {
- klog.Errorf("Try to add iptables rule for tunl0: %v", err)
+ nicNames := []string{"tunl0", "vxlan.calico"}
+
+ deviceNameStr := os.Getenv("AGENT_RP_FILTER_DEVICES")
+ if len(deviceNameStr) > 0 {
+ nicNames = append(nicNames, strings.Split(deviceNameStr, ",")...)
}
- // tunl0 device
- if err := EnableLooseModeByIFaceNmae("tunl0"); err != nil {
- klog.Errorf("Try to change kernel parameters(rp_filter) for tunl0: %v", err)
+ for _, nicName := range nicNames {
+ if len(nicName) == 0 {
+ continue
+ }
+ if err := UpdateDefaultIp4tablesBehavior(nicName); err != nil {
+ klog.Errorf("failed to add iptables rule for %s: %v", nicName, err)
+ }
+
+ if err := EnableLooseModeByIFaceNmae(nicName); err != nil {
+ klog.Errorf("failed to change kernel parameter (rp_filter) for %s: %v", nicName, err)
+ }
}
}
diff --git a/pkg/clusterlink/network/interface.go b/pkg/clusterlink/network/interface.go
index ac1204bc7..58b6a43d1 100644
--- a/pkg/clusterlink/network/interface.go
+++ b/pkg/clusterlink/network/interface.go
@@ -37,8 +37,12 @@ type NetWork interface {
UpdateCidrConfig(cluster *clusterlinkv1alpha1.Cluster)
}
-func NewNetWork() NetWork {
+func NewNetWork(enableInitSys bool) NetWork {
dn := &DefaultNetWork{}
- dn.InitSys()
+
+ if enableInitSys {
+ dn.InitSys()
+ }
+
return dn
}
diff --git a/pkg/clusterlink/network/iptables/iptables.go b/pkg/clusterlink/network/iptables/iptables.go
index 8209c6b25..8ce54f6da 100644
--- a/pkg/clusterlink/network/iptables/iptables.go
+++ b/pkg/clusterlink/network/iptables/iptables.go
@@ -22,6 +22,8 @@ limitations under the License.
package iptables
import (
+ "os"
+
"github.com/coreos/go-iptables/iptables"
"github.com/pkg/errors"
)
@@ -60,7 +62,8 @@ func New(proto iptables.Protocol) (Interface, error) {
return NewFunc()
}
- ipt, err := iptables.New(iptables.IPFamily(proto), iptables.Timeout(5))
+ // IPTABLES_PATH selects the iptables backend by binary path, e.g. /sbin/xtables-nft-multi => nf_tables
+ ipt, err := iptables.New(iptables.IPFamily(proto), iptables.Timeout(5), iptables.Path(os.Getenv("IPTABLES_PATH")))
if err != nil {
return nil, errors.Wrap(err, "error creating IP tables")
}
diff --git a/pkg/clustertree/cluster-manager/cluster_controller.go b/pkg/clustertree/cluster-manager/cluster_controller.go
index f894ec20f..5e417c811 100644
--- a/pkg/clustertree/cluster-manager/cluster_controller.go
+++ b/pkg/clustertree/cluster-manager/cluster_controller.go
@@ -4,10 +4,16 @@ import (
"bytes"
"context"
"fmt"
+ "strings"
"sync"
+ "time"
"github.com/go-logr/logr"
+ corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/api/errors"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/client-go/dynamic"
+ "k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/record"
"k8s.io/klog/v2"
@@ -21,54 +27,56 @@ import (
"sigs.k8s.io/controller-runtime/pkg/predicate"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
- clusterlinkv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
- networkmanager "github.com/kosmos.io/kosmos/pkg/clusterlink/network-manager"
+ "github.com/kosmos.io/kosmos/cmd/clustertree/cluster-manager/app/options"
+ kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
"github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/controllers"
- "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/utils"
+ "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/controllers/mcs"
+ podcontrollers "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/controllers/pod"
+ "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/controllers/pv"
+ "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/controllers/pvc"
+ leafUtils "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/utils"
+ kosmosversioned "github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned"
"github.com/kosmos.io/kosmos/pkg/scheme"
+ "github.com/kosmos.io/kosmos/pkg/utils"
)
const (
ControllerName = "cluster-controller"
- //RequeueTime = 5 * time.Second
+ RequeueTime = 10 * time.Second
- ControllerFinalizerName = "kosmos.io/cluster-manager"
- MasterClusterAnnotationKey = "kosmos.io/cluster-role"
- MasterClusterAnnotationValue = "master"
-
- DefaultClusterKubeQPS = 40.0
- DefalutClusterKubeBurst = 60
+ ControllerFinalizerName = "kosmos.io/cluster-manager" // TODO merge to constants
)
type ClusterController struct {
- Master client.Client
+ Root client.Client
+ RootDynamic dynamic.Interface
+ RootClientset kubernetes.Interface
+
EventRecorder record.EventRecorder
Logger logr.Logger
+ Options *options.Options
- // clusterName: Manager
- ControllerManagers map[string]*manager.Manager
+ ControllerManagers map[string]manager.Manager
ManagerCancelFuncs map[string]*context.CancelFunc
ControllerManagersLock sync.Mutex
-}
-func isMasterCluster(cluster *clusterlinkv1alpha1.Cluster) bool {
- annotations := cluster.GetAnnotations()
- if val, ok := annotations[MasterClusterAnnotationKey]; ok {
- return val == MasterClusterAnnotationValue
- }
- return false
+ RootResourceManager *utils.ResourceManager
+
+ GlobalLeafManager leafUtils.LeafResourceManager
+
+ LeafModelHandler leafUtils.LeafModelHandler
}
var predicatesFunc = predicate.Funcs{
CreateFunc: func(createEvent event.CreateEvent) bool {
- obj := createEvent.Object.(*clusterlinkv1alpha1.Cluster)
- return !isMasterCluster(obj)
+ obj := createEvent.Object.(*kosmosv1alpha1.Cluster)
+ return !leafUtils.IsRootCluster(obj)
},
UpdateFunc: func(updateEvent event.UpdateEvent) bool {
- obj := updateEvent.ObjectNew.(*clusterlinkv1alpha1.Cluster)
- old := updateEvent.ObjectOld.(*clusterlinkv1alpha1.Cluster)
+ obj := updateEvent.ObjectNew.(*kosmosv1alpha1.Cluster)
+ old := updateEvent.ObjectOld.(*kosmosv1alpha1.Cluster)
- if isMasterCluster(obj) {
+ if leafUtils.IsRootCluster(obj) {
return false
}
@@ -84,8 +92,8 @@ var predicatesFunc = predicate.Funcs{
return false
},
DeleteFunc: func(deleteEvent event.DeleteEvent) bool {
- obj := deleteEvent.Object.(*clusterlinkv1alpha1.Cluster)
- return !isMasterCluster(obj)
+ obj := deleteEvent.Object.(*kosmosv1alpha1.Cluster)
+ return !leafUtils.IsRootCluster(obj)
},
GenericFunc: func(genericEvent event.GenericEvent) bool {
return false
@@ -94,66 +102,82 @@ var predicatesFunc = predicate.Funcs{
func (c *ClusterController) SetupWithManager(mgr manager.Manager) error {
c.ManagerCancelFuncs = make(map[string]*context.CancelFunc)
- c.ControllerManagers = make(map[string]*manager.Manager)
+ c.ControllerManagers = make(map[string]manager.Manager)
c.Logger = mgr.GetLogger()
-
return controllerruntime.NewControllerManagedBy(mgr).
Named(ControllerName).
WithOptions(controller.Options{}).
- For(&clusterlinkv1alpha1.Cluster{}, builder.WithPredicates(predicatesFunc)).
+ For(&kosmosv1alpha1.Cluster{}, builder.WithPredicates(predicatesFunc)).
Complete(c)
}
func (c *ClusterController) Reconcile(ctx context.Context, request reconcile.Request) (reconcile.Result, error) {
klog.V(4).Infof("============ %s starts to reconcile %s ============", ControllerName, request.Name)
- defer func() {
- klog.V(4).Infof("============ %s has been reconciled =============", request.Name)
- }()
- cluster := &clusterlinkv1alpha1.Cluster{}
- if err := c.Master.Get(ctx, request.NamespacedName, cluster); err != nil {
+ cluster := &kosmosv1alpha1.Cluster{}
+ if err := c.Root.Get(ctx, request.NamespacedName, cluster); err != nil {
if errors.IsNotFound(err) {
klog.Infof("Cluster %s has been deleted", request.Name)
return controllerruntime.Result{}, nil
}
- return controllerruntime.Result{}, err
+ return controllerruntime.Result{RequeueAfter: RequeueTime}, err
+ }
+
+ config, err := utils.NewConfigFromBytes(cluster.Spec.Kubeconfig, func(config *rest.Config) {
+ config.QPS = utils.DefaultLeafKubeQPS
+ config.Burst = utils.DefaultLeafKubeBurst
+ })
+ if err != nil {
+ return reconcile.Result{}, fmt.Errorf("could not build kubeconfig for cluster %s: %v", cluster.Name, err)
+ }
+
+ leafClient, err := kubernetes.NewForConfig(config)
+ if err != nil {
+ return reconcile.Result{}, fmt.Errorf("could not build clientset for cluster %s: %v", cluster.Name, err)
+ }
+
+ leafDynamic, err := dynamic.NewForConfig(config)
+ if err != nil {
+ return reconcile.Result{}, fmt.Errorf("could not build dynamic client for cluster %s: %v", cluster.Name, err)
+ }
+
+ kosmosClient, err := kosmosversioned.NewForConfig(config)
+ if err != nil {
+ return reconcile.Result{}, fmt.Errorf("could not build kosmos clientset for cluster %s: %v", cluster.Name, err)
}
// ensure finalizer
if cluster.DeletionTimestamp.IsZero() {
if !controllerutil.ContainsFinalizer(cluster, ControllerFinalizerName) {
controllerutil.AddFinalizer(cluster, ControllerFinalizerName)
- if err := c.Master.Update(ctx, cluster); err != nil {
+ if err := c.Root.Update(ctx, cluster); err != nil {
return controllerruntime.Result{}, err
}
}
}
- if !cluster.DeletionTimestamp.IsZero() {
- c.clearClusterControllers(cluster)
+ // cluster deleted || cluster added || kubeconfig changed
+ c.clearClusterControllers(cluster)
+ if !cluster.DeletionTimestamp.IsZero() {
+ if err := c.deleteNode(ctx, cluster); err != nil {
+ return reconcile.Result{
+ Requeue: true,
+ }, err
+ }
if controllerutil.ContainsFinalizer(cluster, ControllerFinalizerName) {
controllerutil.RemoveFinalizer(cluster, ControllerFinalizerName)
- if err := c.Master.Update(ctx, cluster); err != nil {
+ if err := c.Root.Update(ctx, cluster); err != nil {
return controllerruntime.Result{}, err
}
}
+ return reconcile.Result{}, nil
}
- // cluster added or kubeconfig changed
- c.clearClusterControllers(cluster)
-
// build mgr for cluster
- config, err := utils.NewConfigFromBytes(cluster.Spec.Kubeconfig, func(config *rest.Config) {
- config.QPS = DefaultClusterKubeQPS
- config.Burst = DefalutClusterKubeBurst
- })
- if err != nil {
- return reconcile.Result{}, fmt.Errorf("could not build clientset for cluster %s: %v", cluster.Name, err)
- }
-
+ // TODO bug, the v4 log is lost
mgr, err := controllerruntime.NewManager(config, controllerruntime.Options{
- Logger: c.Logger.WithName("cluster-controller-manager"),
+ Logger: c.Logger.WithName("leaf-controller-manager"),
Scheme: scheme.NewSchema(),
LeaderElection: false,
MetricsBindAddress: "0",
@@ -163,14 +187,26 @@ func (c *ClusterController) Reconcile(ctx context.Context, request reconcile.Req
return reconcile.Result{}, fmt.Errorf("new manager with err %v, cluster %s", err, cluster.Name)
}
+ leafModelHandler := leafUtils.NewLeafModelHandler(cluster, c.RootClientset, leafClient)
+ c.LeafModelHandler = leafModelHandler
+
+ nodes, leafNodeSelectors, err := c.createNode(ctx, cluster, leafClient)
+ if err != nil {
+ return reconcile.Result{RequeueAfter: RequeueTime}, fmt.Errorf("create node with err %v, cluster %s", err, cluster.Name)
+ }
+ // TODO @wyz
+ for _, node := range nodes {
+ node.ResourceVersion = ""
+ }
+
subContext, cancel := context.WithCancel(ctx)
c.ControllerManagersLock.Lock()
- c.ControllerManagers[cluster.Name] = &mgr
+ c.ControllerManagers[cluster.Name] = mgr
c.ManagerCancelFuncs[cluster.Name] = &cancel
c.ControllerManagersLock.Unlock()
- if err = c.setupControllers(&mgr); err != nil {
+ if err = c.setupControllers(mgr, cluster, nodes, leafDynamic, leafNodeSelectors, leafClient, kosmosClient, config); err != nil {
return reconcile.Result{}, fmt.Errorf("failed to setup cluster %s controllers: %v", cluster.Name, err)
}
@@ -180,10 +216,12 @@ func (c *ClusterController) Reconcile(ctx context.Context, request reconcile.Req
}
}()
+ klog.V(4).Infof("============ %s has been reconciled =============", request.Name)
+
return reconcile.Result{}, nil
}
-func (c *ClusterController) clearClusterControllers(cluster *clusterlinkv1alpha1.Cluster) {
+func (c *ClusterController) clearClusterControllers(cluster *kosmosv1alpha1.Cluster) {
c.ControllerManagersLock.Lock()
defer c.ControllerManagersLock.Unlock()
@@ -193,18 +231,130 @@ func (c *ClusterController) clearClusterControllers(cluster *clusterlinkv1alpha1
}
delete(c.ManagerCancelFuncs, cluster.Name)
delete(c.ControllerManagers, cluster.Name)
+
+ c.GlobalLeafManager.RemoveLeafResource(cluster.Name)
}
-func (c *ClusterController) setupControllers(m *manager.Manager) error {
- mgr := *m
+func (c *ClusterController) setupControllers(
+ mgr manager.Manager,
+ cluster *kosmosv1alpha1.Cluster,
+ nodes []*corev1.Node,
+ clientDynamic *dynamic.DynamicClient,
+ leafNodeSelector map[string]kosmosv1alpha1.NodeSelector,
+ leafClientset kubernetes.Interface,
+ kosmosClient kosmosversioned.Interface,
+ leafRestConfig *rest.Config) error {
+ c.GlobalLeafManager.AddLeafResource(&leafUtils.LeafResource{
+ Client: mgr.GetClient(),
+ DynamicClient: clientDynamic,
+ Clientset: leafClientset,
+ KosmosClient: kosmosClient,
+ ClusterName: cluster.Name,
+ // TODO: define node options
+ Namespace: "",
+ IgnoreLabels: strings.Split("", ","),
+ EnableServiceAccount: true,
+ RestConfig: leafRestConfig,
+ }, cluster, nodes)
nodeResourcesController := controllers.NodeResourcesController{
- Client: mgr.GetClient(),
- Master: c.Master,
+ Leaf: mgr.GetClient(),
+ GlobalLeafManager: c.GlobalLeafManager,
+ Root: c.Root,
+ RootClientset: c.RootClientset,
+ Nodes: nodes,
+ LeafNodeSelectors: leafNodeSelector,
+ LeafModelHandler: c.LeafModelHandler,
+ Cluster: cluster,
}
if err := nodeResourcesController.SetupWithManager(mgr); err != nil {
- return fmt.Errorf("error starting %s: %v", networkmanager.ControllerName, err)
+ return fmt.Errorf("error starting %s: %v", controllers.NodeResourcesControllerName, err)
}
+ nodeLeaseController := controllers.NewNodeLeaseController(leafClientset, c.Root, nodes, leafNodeSelector, c.RootClientset, c.LeafModelHandler)
+ if err := mgr.Add(nodeLeaseController); err != nil {
+ return fmt.Errorf("error starting %s: %v", controllers.NodeLeaseControllerName, err)
+ }
+
+ if c.Options.MultiClusterService {
+ serviceImportController := &mcs.ServiceImportController{
+ LeafClient: mgr.GetClient(),
+ RootKosmosClient: kosmosClient,
+ EventRecorder: mgr.GetEventRecorderFor(mcs.LeafServiceImportControllerName),
+ Logger: mgr.GetLogger(),
+ LeafNodeName: cluster.Name,
+ // todo @wyz
+ IPFamilyType: cluster.Spec.ClusterLinkOptions.IPFamily,
+ RootResourceManager: c.RootResourceManager,
+ ReservedNamespaces: c.Options.ReservedNamespaces,
+ }
+ if err := serviceImportController.AddController(mgr); err != nil {
+ return fmt.Errorf("error starting %s: %v", mcs.LeafServiceImportControllerName, err)
+ }
+ }
+
+ leafPodController := podcontrollers.LeafPodReconciler{
+ RootClient: c.Root,
+ Namespace: "",
+ }
+
+ if err := leafPodController.SetupWithManager(mgr); err != nil {
+ return fmt.Errorf("error starting podUpstreamReconciler %s: %v", podcontrollers.LeafPodControllerName, err)
+ }
+
+ if !c.Options.OnewayStorageControllers {
+ err := c.setupStorageControllers(mgr, utils.IsOne2OneMode(cluster), cluster.Name)
+ if err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func (c *ClusterController) setupStorageControllers(mgr manager.Manager, isOne2OneMode bool, clustername string) error {
+ leafPVCController := pvc.LeafPVCController{
+ LeafClient: mgr.GetClient(),
+ RootClient: c.Root,
+ RootClientSet: c.RootClientset,
+ ClusterName: clustername,
+ }
+ if err := leafPVCController.SetupWithManager(mgr); err != nil {
+ return fmt.Errorf("error starting leaf pvc controller %v", err)
+ }
+
+ leafPVController := pv.LeafPVController{
+ LeafClient: mgr.GetClient(),
+ RootClient: c.Root,
+ RootClientSet: c.RootClientset,
+ ClusterName: clustername,
+ IsOne2OneMode: isOne2OneMode,
+ }
+ if err := leafPVController.SetupWithManager(mgr); err != nil {
+ return fmt.Errorf("error starting leaf pv controller %v", err)
+ }
+ return nil
+}
+
+func (c *ClusterController) createNode(ctx context.Context, cluster *kosmosv1alpha1.Cluster, leafClient kubernetes.Interface) ([]*corev1.Node, map[string]kosmosv1alpha1.NodeSelector, error) {
+ serverVersion, err := leafClient.Discovery().ServerVersion()
+ if err != nil {
+ klog.Errorf("create node failed, cannot connect to leaf cluster %s", cluster.Name)
+ return nil, nil, err
+ }
+
+ nodes, leafNodeSelectors, err := c.LeafModelHandler.CreateRootNode(ctx, c.Options.ListenPort, serverVersion.GitVersion)
+ if err != nil {
+ klog.Errorf("create node for cluster %s failed, err: %v", cluster.Name, err)
+ return nil, nil, err
+ }
+ return nodes, leafNodeSelectors, nil
+}
+
+func (c *ClusterController) deleteNode(ctx context.Context, cluster *kosmosv1alpha1.Cluster) error {
+ err := c.RootClientset.CoreV1().Nodes().Delete(ctx, cluster.Name, metav1.DeleteOptions{})
+ if err != nil && !errors.IsNotFound(err) {
+ return err
+ }
return nil
}
diff --git a/pkg/clustertree/cluster-manager/controllers/common_controller.go b/pkg/clustertree/cluster-manager/controllers/common_controller.go
new file mode 100644
index 000000000..b86def352
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/controllers/common_controller.go
@@ -0,0 +1,186 @@
+package controllers
+
+import (
+ "context"
+ "time"
+
+ corev1 "k8s.io/api/core/v1"
+ "k8s.io/apimachinery/pkg/api/errors"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+ "k8s.io/apimachinery/pkg/runtime/schema"
+ "k8s.io/client-go/dynamic"
+ "k8s.io/klog/v2"
+ ctrl "sigs.k8s.io/controller-runtime"
+ "sigs.k8s.io/controller-runtime/pkg/builder"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+ "sigs.k8s.io/controller-runtime/pkg/controller"
+ "sigs.k8s.io/controller-runtime/pkg/event"
+ "sigs.k8s.io/controller-runtime/pkg/manager"
+ "sigs.k8s.io/controller-runtime/pkg/predicate"
+ "sigs.k8s.io/controller-runtime/pkg/reconcile"
+
+ leafUtils "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/utils"
+ "github.com/kosmos.io/kosmos/pkg/utils"
+)
+
+const SyncResourcesRequeueTime = 10 * time.Second
+
+var SYNC_GVRS = []schema.GroupVersionResource{utils.GVR_CONFIGMAP, utils.GVR_SECRET}
+var SYNC_OBJS = []client.Object{&corev1.ConfigMap{}, &corev1.Secret{}}
+
+const SYNC_KIND_CONFIGMAP = "ConfigMap"
+const SYNC_KIND_SECRET = "Secret"
+
+type SyncResourcesReconciler struct {
+ GroupVersionResource schema.GroupVersionResource
+ Object client.Object
+ DynamicRootClient dynamic.Interface
+ ControllerName string
+
+ client.Client
+
+ GlobalLeafManager leafUtils.LeafResourceManager
+}
+
+func (r *SyncResourcesReconciler) Reconcile(ctx context.Context, request reconcile.Request) (reconcile.Result, error) {
+ var clusters []string
+ rootobj, err := r.DynamicRootClient.Resource(r.GroupVersionResource).Namespace(request.Namespace).Get(ctx, request.Name, metav1.GetOptions{})
+ if err != nil && !errors.IsNotFound(err) {
+ klog.Errorf("get %s error: %v", request.NamespacedName, err)
+ return reconcile.Result{RequeueAfter: SyncResourcesRequeueTime}, nil
+ }
+
+ if err != nil && errors.IsNotFound(err) {
+ // delete all
+ clusters = r.GlobalLeafManager.ListClusters()
+ } else {
+ clusters = utils.ListResourceClusters(rootobj.GetAnnotations())
+ }
+
+ for _, cluster := range clusters {
+ if r.GlobalLeafManager.HasCluster(cluster) {
+ lr, err := r.GlobalLeafManager.GetLeafResource(cluster)
+ if err != nil {
+ klog.Errorf("get lr(cluster: %s) err: %v", cluster, err)
+ return reconcile.Result{RequeueAfter: SyncResourcesRequeueTime}, nil
+ }
+ if err = r.SyncResource(ctx, request, lr); err != nil {
+ klog.Errorf("sync resource %s error: %v", request.NamespacedName, err)
+ return reconcile.Result{RequeueAfter: SyncResourcesRequeueTime}, nil
+ }
+ }
+ }
+
+ return reconcile.Result{}, nil
+}
+
+func (r *SyncResourcesReconciler) SetupWithManager(mgr manager.Manager, gvr schema.GroupVersionResource) error {
+ if r.Client == nil {
+ r.Client = mgr.GetClient()
+ }
+
+ skipFunc := func(obj client.Object) bool {
+ // skip reservedNS
+ if obj.GetNamespace() == utils.ReservedNS {
+ return false
+ }
+ if _, ok := obj.GetAnnotations()[utils.KosmosResourceOwnersAnnotations]; !ok {
+ return false
+ }
+ return true
+ }
+
+ if err := ctrl.NewControllerManagedBy(mgr).
+ Named(r.ControllerName).
+ WithOptions(controller.Options{}).
+ For(r.Object, builder.WithPredicates(predicate.Funcs{
+ CreateFunc: func(createEvent event.CreateEvent) bool {
+ return false
+ },
+ UpdateFunc: func(updateEvent event.UpdateEvent) bool {
+ return skipFunc(updateEvent.ObjectNew)
+ },
+ DeleteFunc: func(deleteEvent event.DeleteEvent) bool {
+ return skipFunc(deleteEvent.Object)
+ },
+ GenericFunc: func(genericEvent event.GenericEvent) bool {
+ return false
+ },
+ })).
+ Complete(r); err != nil {
+ return err
+ }
+ return nil
+}
+
+func (r *SyncResourcesReconciler) SyncResource(ctx context.Context, request reconcile.Request, lr *leafUtils.LeafResource) error {
+ klog.V(4).Infof("Started sync resource processing, ns: %s, name: %s", request.Namespace, request.Name)
+
+ deleteSecretInClient := false
+
+ obj, err := r.DynamicRootClient.Resource(r.GroupVersionResource).Namespace(request.Namespace).Get(ctx, request.Name, metav1.GetOptions{})
+ if err != nil {
+ if !errors.IsNotFound(err) {
+ return err
+ }
+ // get obj in leaf cluster
+ _, err := lr.DynamicClient.Resource(r.GroupVersionResource).Namespace(request.Namespace).Get(ctx, request.Name, metav1.GetOptions{})
+ if err != nil {
+ if !errors.IsNotFound(err) {
+ klog.Errorf("Get %s from leaf cluster failed, error: %v", obj.GetKind(), err)
+ return err
+ }
+ return nil
+ }
+
+ // delete OBJ in leaf cluster
+ deleteSecretInClient = true
+ }
+
+ if deleteSecretInClient || obj.GetDeletionTimestamp() != nil {
+ // delete OBJ in leaf cluster
+ if err = lr.DynamicClient.Resource(r.GroupVersionResource).Namespace(request.Namespace).Delete(ctx, request.Name, metav1.DeleteOptions{}); err != nil {
+ if errors.IsNotFound(err) {
+ return nil
+ }
+ return err
+ }
+ klog.V(4).Infof("%s %q deleted", r.GroupVersionResource.Resource, request.Name)
+ return nil
+ }
+
+ old, err := lr.DynamicClient.Resource(r.GroupVersionResource).Namespace(request.Namespace).Get(ctx, request.Name, metav1.GetOptions{})
+
+ if err != nil {
+ if errors.IsNotFound(err) {
+ // TODO: may have been deleted in the leaf cluster by someone else
+ klog.Errorf("Get %s from client cluster failed when trying to update, error: %v", obj.GetKind(), err)
+ return nil
+ }
+ klog.Errorf("Get %s from client cluster failed, error: %v", obj.GetKind(), err)
+ return err
+ }
+
+ var latest *unstructured.Unstructured
+ var unstructerr error
+ switch old.GetKind() {
+ case SYNC_KIND_CONFIGMAP:
+ latest, unstructerr = utils.UpdateUnstructured(old, obj, &corev1.ConfigMap{}, &corev1.ConfigMap{}, utils.UpdateConfigMap)
+ case SYNC_KIND_SECRET:
+ latest, unstructerr = utils.UpdateUnstructured(old, obj, &corev1.Secret{}, &corev1.Secret{}, utils.UpdateSecret)
+ }
+
+ if unstructerr != nil {
+ return unstructerr
+ }
+ if !utils.IsObjectUnstructuredGlobal(old.GetAnnotations()) {
+ return nil
+ }
+ _, err = lr.DynamicClient.Resource(r.GroupVersionResource).Namespace(request.Namespace).Update(ctx, latest, metav1.UpdateOptions{})
+ if err != nil {
+ klog.Errorf("update %s from client cluster failed, error: %v", latest.GetKind(), err)
+ return err
+ }
+ return nil
+}
diff --git a/pkg/clustertree/cluster-manager/controllers/mcs/auto_mcs_controller.go b/pkg/clustertree/cluster-manager/controllers/mcs/auto_mcs_controller.go
new file mode 100644
index 000000000..a454111ef
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/controllers/mcs/auto_mcs_controller.go
@@ -0,0 +1,292 @@
+package mcs
+
+import (
+ "context"
+ "strings"
+ "time"
+
+ "github.com/go-logr/logr"
+ corev1 "k8s.io/api/core/v1"
+ apierrors "k8s.io/apimachinery/pkg/api/errors"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/types"
+ "k8s.io/client-go/tools/record"
+ "k8s.io/klog/v2"
+ "k8s.io/utils/strings/slices"
+ controllerruntime "sigs.k8s.io/controller-runtime"
+ "sigs.k8s.io/controller-runtime/pkg/builder"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+ "sigs.k8s.io/controller-runtime/pkg/event"
+ "sigs.k8s.io/controller-runtime/pkg/handler"
+ "sigs.k8s.io/controller-runtime/pkg/manager"
+ "sigs.k8s.io/controller-runtime/pkg/predicate"
+ "sigs.k8s.io/controller-runtime/pkg/reconcile"
+ "sigs.k8s.io/controller-runtime/pkg/source"
+ mcsv1alpha1 "sigs.k8s.io/mcs-api/pkg/apis/v1alpha1"
+
+ kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ clustertreeutils "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/utils"
+ kosmosversioned "github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned"
+ "github.com/kosmos.io/kosmos/pkg/utils"
+)
+
+const AutoCreateMCSControllerName = "auto-mcs-controller"
+
+// AutoCreateMCSController watches services in the root cluster and automatically creates serviceExport in the root cluster and serviceImport in the leaf clusters
+type AutoCreateMCSController struct {
+ RootClient client.Client
+ RootKosmosClient kosmosversioned.Interface
+ EventRecorder record.EventRecorder
+ Logger logr.Logger
+ GlobalLeafManager clustertreeutils.LeafResourceManager
+ // AutoCreateMCSPrefix are the prefix of the namespace for service to auto create in leaf cluster
+ AutoCreateMCSPrefix []string
+ // ReservedNamespaces are the protected namespaces to prevent Kosmos from deleting system resources
+ ReservedNamespaces []string
+}
+
+func (c *AutoCreateMCSController) Reconcile(ctx context.Context, request reconcile.Request) (reconcile.Result, error) {
+ klog.V(4).Infof("============ %s starts to reconcile %s ============", AutoCreateMCSControllerName, request.NamespacedName.String())
+ defer func() {
+ klog.V(4).Infof("============ %s has been reconciled =============", request.NamespacedName.String())
+ }()
+
+ var shouldDelete bool
+ service := &corev1.Service{}
+ if err := c.RootClient.Get(ctx, request.NamespacedName, service); err != nil {
+ if !apierrors.IsNotFound(err) {
+ klog.Errorf("Could not get service in root cluster, Error: %v", err)
+ return controllerruntime.Result{Requeue: true}, err
+ }
+ shouldDelete = true
+ }
+
+ if !matchNamespace(service.Namespace, c.AutoCreateMCSPrefix) && !hasAutoMCSAnnotation(service) {
+ shouldDelete = true
+ }
+
+ clusterList := &kosmosv1alpha1.ClusterList{}
+ if err := c.RootClient.List(ctx, clusterList); err != nil {
+ klog.Errorf("Could not list clusters in root cluster, Error: %v", err)
+ return controllerruntime.Result{Requeue: true}, err
+ }
+
+ // The service is being deleted, in which case we should clear serviceExport and serviceImport.
+ if shouldDelete || !service.DeletionTimestamp.IsZero() {
+ if err := c.cleanUpMcsResources(ctx, request.Namespace, request.Name, clusterList); err != nil {
+ return controllerruntime.Result{Requeue: true, RequeueAfter: 10 * time.Second}, err
+ }
+ return controllerruntime.Result{}, nil
+ }
+
+ err := c.autoCreateMcsResources(ctx, service, clusterList)
+ if err != nil {
+ return controllerruntime.Result{Requeue: true, RequeueAfter: 10 * time.Second}, err
+ }
+ return controllerruntime.Result{}, nil
+}
+
+func matchNamespace(namespace string, prefix []string) bool {
+ for _, p := range prefix {
+ if strings.HasPrefix(namespace, p) {
+ return true
+ }
+ }
+ return false
+}
+
+func hasAutoMCSAnnotation(service *corev1.Service) bool {
+ annotations := service.GetAnnotations()
+ if annotations == nil {
+ return false
+ }
+ if _, exists := annotations[utils.AutoCreateMCSAnnotation]; exists {
+ return true
+ }
+ return false
+}
+
+func (c *AutoCreateMCSController) shouldEnqueue(service *corev1.Service) bool {
+ if slices.Contains(c.ReservedNamespaces, service.Namespace) {
+ return false
+ }
+
+ if len(c.AutoCreateMCSPrefix) > 0 {
+ for _, prefix := range c.AutoCreateMCSPrefix {
+ if strings.HasPrefix(service.GetNamespace(), prefix) {
+ return true
+ }
+ }
+ }
+
+ if hasAutoMCSAnnotation(service) {
+ return true
+ }
+ return false
+}
+
+func (c *AutoCreateMCSController) SetupWithManager(mgr manager.Manager) error {
+ clusterFn := handler.MapFunc(
+ func(object client.Object) []reconcile.Request {
+ requestList := make([]reconcile.Request, 0)
+ serviceList := &corev1.ServiceList{}
+ err := c.RootClient.List(context.TODO(), serviceList)
+ if err != nil {
+ klog.Errorf("Could not list services in root cluster, Error: %v", err)
+ return nil
+ }
+ for _, service := range serviceList.Items {
+ request := reconcile.Request{
+ NamespacedName: types.NamespacedName{
+ Namespace: service.Namespace,
+ Name: service.Name,
+ },
+ }
+ requestList = append(requestList, request)
+ }
+ return requestList
+ },
+ )
+
+ clusterPredicate := builder.WithPredicates(predicate.Funcs{
+ CreateFunc: func(event event.CreateEvent) bool {
+ return true
+ },
+ DeleteFunc: func(deleteEvent event.DeleteEvent) bool {
+ return false
+ },
+ UpdateFunc: func(updateEvent event.UpdateEvent) bool {
+ return false
+ },
+ GenericFunc: func(genericEvent event.GenericEvent) bool {
+ return false
+ },
+ },
+ )
+
+ servicePredicate := builder.WithPredicates(predicate.Funcs{
+ CreateFunc: func(event event.CreateEvent) bool {
+ service, ok := event.Object.(*corev1.Service)
+ if !ok {
+ return false
+ }
+
+ return c.shouldEnqueue(service)
+ },
+ DeleteFunc: func(deleteEvent event.DeleteEvent) bool {
+ service, ok := deleteEvent.Object.(*corev1.Service)
+ if !ok {
+ return false
+ }
+
+ return c.shouldEnqueue(service)
+ },
+ UpdateFunc: func(updateEvent event.UpdateEvent) bool {
+ newService, ok := updateEvent.ObjectNew.(*corev1.Service)
+ if !ok {
+ return false
+ }
+
+ oldService, ok := updateEvent.ObjectOld.(*corev1.Service)
+ if !ok {
+ return false
+ }
+
+ return c.shouldEnqueue(newService) != c.shouldEnqueue(oldService)
+ },
+ GenericFunc: func(genericEvent event.GenericEvent) bool {
+ return false
+ },
+ },
+ )
+
+ return controllerruntime.NewControllerManagedBy(mgr).
+ For(&corev1.Service{}, servicePredicate).
+ Watches(&source.Kind{Type: &kosmosv1alpha1.Cluster{}},
+ handler.EnqueueRequestsFromMapFunc(clusterFn),
+ clusterPredicate,
+ ).
+ Complete(c)
+}
+
+func (c *AutoCreateMCSController) cleanUpMcsResources(ctx context.Context, namespace string, name string, clusterList *kosmosv1alpha1.ClusterList) error {
+ // delete serviceExport in root cluster
+ if err := c.RootKosmosClient.MulticlusterV1alpha1().ServiceExports(namespace).Delete(ctx, name, metav1.DeleteOptions{}); err != nil {
+ if !apierrors.IsNotFound(err) {
+ klog.Errorf("Delete serviceExport in root cluster failed %s/%s, Error: %v", namespace, name, err)
+ return err
+ }
+ }
+ // delete serviceImport in all leaf cluster
+ for _, cluster := range clusterList.Items {
+ newCluster := cluster.DeepCopy()
+ if clustertreeutils.IsRootCluster(newCluster) {
+ continue
+ }
+
+ leafManager, err := c.GlobalLeafManager.GetLeafResource(cluster.Name)
+ if err != nil {
+ klog.Errorf("get leafManager for cluster %s failed, Error: %v", cluster.Name, err)
+ return err
+ }
+ if err = leafManager.KosmosClient.MulticlusterV1alpha1().ServiceImports(namespace).Delete(ctx, name, metav1.DeleteOptions{}); err != nil {
+ if !apierrors.IsNotFound(err) {
+ klog.Errorf("Delete serviceImport in leaf cluster failed %s/%s, Error: %v", namespace, name, err)
+ return err
+ }
+ }
+ }
+ return nil
+}
+
+func (c *AutoCreateMCSController) autoCreateMcsResources(ctx context.Context, service *corev1.Service, clusterList *kosmosv1alpha1.ClusterList) error {
+ // create serviceExport in root cluster
+ serviceExport := &mcsv1alpha1.ServiceExport{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: service.Name,
+ Namespace: service.Namespace,
+ },
+ }
+ if _, err := c.RootKosmosClient.MulticlusterV1alpha1().ServiceExports(service.Namespace).Create(ctx, serviceExport, metav1.CreateOptions{}); err != nil {
+ if !apierrors.IsAlreadyExists(err) {
+ klog.Errorf("Could not create serviceExport(%s/%s) in root cluster, Error: %v", service.Namespace, service.Name, err)
+ return err
+ }
+ }
+
+ // create serviceImport in leaf cluster
+ for _, cluster := range clusterList.Items {
+ newCluster := cluster.DeepCopy()
+ if clustertreeutils.IsRootCluster(newCluster) {
+ continue
+ }
+
+ leafManager, err := c.GlobalLeafManager.GetLeafResource(cluster.Name)
+ if err != nil {
+ klog.Errorf("get leafManager for cluster %s failed, Error: %v", cluster.Name, err)
+ return err
+ }
+ serviceImport := &mcsv1alpha1.ServiceImport{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: service.Name,
+ Namespace: service.Namespace,
+ },
+ Spec: mcsv1alpha1.ServiceImportSpec{
+ Type: mcsv1alpha1.ClusterSetIP,
+ Ports: []mcsv1alpha1.ServicePort{
+ {
+ Protocol: corev1.ProtocolTCP,
+ Port: 80,
+ },
+ },
+ },
+ }
+ if _, err = leafManager.KosmosClient.MulticlusterV1alpha1().ServiceImports(service.Namespace).Create(ctx, serviceImport, metav1.CreateOptions{}); err != nil {
+ if !apierrors.IsAlreadyExists(err) {
+ klog.Errorf("Create serviceImport in leaf cluster failed %s/%s, Error: %v", service.Namespace, service.Name, err)
+ return err
+ }
+ }
+ }
+ return nil
+}
diff --git a/pkg/clustertree/cluster-manager/controllers/mcs/serviceexport_controller.go b/pkg/clustertree/cluster-manager/controllers/mcs/serviceexport_controller.go
new file mode 100644
index 000000000..ad17ecc2e
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/controllers/mcs/serviceexport_controller.go
@@ -0,0 +1,217 @@
+package mcs
+
+import (
+ "context"
+
+ "github.com/go-logr/logr"
+ corev1 "k8s.io/api/core/v1"
+ discoveryv1 "k8s.io/api/discovery/v1"
+ apierrors "k8s.io/apimachinery/pkg/api/errors"
+ "k8s.io/apimachinery/pkg/labels"
+ "k8s.io/apimachinery/pkg/types"
+ "k8s.io/client-go/tools/record"
+ "k8s.io/client-go/util/retry"
+ "k8s.io/klog/v2"
+ "k8s.io/utils/strings/slices"
+ controllerruntime "sigs.k8s.io/controller-runtime"
+ "sigs.k8s.io/controller-runtime/pkg/builder"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+ "sigs.k8s.io/controller-runtime/pkg/event"
+ "sigs.k8s.io/controller-runtime/pkg/handler"
+ "sigs.k8s.io/controller-runtime/pkg/manager"
+ "sigs.k8s.io/controller-runtime/pkg/predicate"
+ "sigs.k8s.io/controller-runtime/pkg/reconcile"
+ "sigs.k8s.io/controller-runtime/pkg/source"
+ mcsv1alpha1 "sigs.k8s.io/mcs-api/pkg/apis/v1alpha1"
+
+ "github.com/kosmos.io/kosmos/pkg/utils"
+ "github.com/kosmos.io/kosmos/pkg/utils/helper"
+)
+
+const ServiceExportControllerName = "service-export-controller"
+
+// ServiceExportController watches serviceExport in the root cluster and annotates the related endpointSlices
+type ServiceExportController struct {
+ RootClient client.Client
+ EventRecorder record.EventRecorder
+ Logger logr.Logger
+ // ReservedNamespaces are the protected namespaces to prevent Kosmos from deleting system resources
+ ReservedNamespaces []string
+}
+
+func (c *ServiceExportController) Reconcile(ctx context.Context, request reconcile.Request) (reconcile.Result, error) {
+ klog.V(4).Infof("============ %s starts to reconcile %s ============", ServiceExportControllerName, request.NamespacedName.String())
+ defer func() {
+ klog.V(4).Infof("============ %s has been reconciled =============", request.NamespacedName.String())
+ }()
+
+ var shouldDelete bool
+ serviceExport := &mcsv1alpha1.ServiceExport{}
+ if err := c.RootClient.Get(ctx, request.NamespacedName, serviceExport); err != nil {
+ if !apierrors.IsNotFound(err) {
+ return controllerruntime.Result{Requeue: true}, err
+ }
+ shouldDelete = true
+ }
+
+ // The serviceExport is being deleted, in which case we should clear endpointSlice.
+ if shouldDelete || !serviceExport.DeletionTimestamp.IsZero() {
+ if err := c.removeAnnotation(ctx, request.Namespace, request.Name); err != nil {
+ return controllerruntime.Result{Requeue: true}, err
+ }
+ return controllerruntime.Result{}, nil
+ }
+
+ err := c.syncServiceExport(ctx, serviceExport)
+ if err != nil {
+ return controllerruntime.Result{Requeue: true}, err
+ }
+ return controllerruntime.Result{}, nil
+}
+
+func (c *ServiceExportController) SetupWithManager(mgr manager.Manager) error {
+ endpointSliceServiceExportFn := handler.MapFunc(
+ func(object client.Object) []reconcile.Request {
+ serviceName := helper.GetLabelOrAnnotationValue(object.GetLabels(), utils.ServiceKey)
+ return []reconcile.Request{
+ {
+ NamespacedName: types.NamespacedName{
+ Namespace: object.GetNamespace(),
+ Name: serviceName,
+ },
+ },
+ }
+ },
+ )
+
+ endpointSlicePredicate := builder.WithPredicates(predicate.Funcs{
+ CreateFunc: func(event event.CreateEvent) bool {
+ return c.shouldEnqueue(event.Object)
+ },
+ DeleteFunc: func(deleteEvent event.DeleteEvent) bool {
+ return c.shouldEnqueue(deleteEvent.Object)
+ },
+ UpdateFunc: func(updateEvent event.UpdateEvent) bool {
+ return c.shouldEnqueue(updateEvent.ObjectNew)
+ },
+ GenericFunc: func(genericEvent event.GenericEvent) bool {
+ return false
+ },
+ },
+ )
+
+ return controllerruntime.NewControllerManagedBy(mgr).
+ For(&mcsv1alpha1.ServiceExport{}).
+ Watches(&source.Kind{Type: &discoveryv1.EndpointSlice{}},
+ handler.EnqueueRequestsFromMapFunc(endpointSliceServiceExportFn),
+ endpointSlicePredicate,
+ ).
+ Complete(c)
+}
+
+func (c *ServiceExportController) shouldEnqueue(object client.Object) bool {
+ eps, ok := object.(*discoveryv1.EndpointSlice)
+ if !ok {
+ return false
+ }
+
+ if slices.Contains(c.ReservedNamespaces, eps.Namespace) {
+ return false
+ }
+
+ return true
+}
+
+func (c *ServiceExportController) removeAnnotation(ctx context.Context, namespace, name string) error {
+ var err error
+ selector := labels.SelectorFromSet(
+ map[string]string{
+ utils.ServiceKey: name,
+ },
+ )
+ epsList := &discoveryv1.EndpointSliceList{}
+ err = c.RootClient.List(ctx, epsList, &client.ListOptions{
+ Namespace: namespace,
+ LabelSelector: selector,
+ })
+ if err != nil {
+ klog.Errorf("List endpointSlice in %s failed, Error: %v", namespace, err)
+ return err
+ }
+
+ endpointSlices := epsList.Items
+ for i := range endpointSlices {
+ newEps := &endpointSlices[i]
+ if newEps.DeletionTimestamp != nil {
+ klog.V(4).Infof("EndpointSlice %s/%s is deleting and does not need to remove serviceExport annotation", namespace, newEps.Name)
+ continue
+ }
+ helper.RemoveAnnotation(newEps, utils.ServiceExportLabelKey)
+ err = c.updateEndpointSlice(ctx, newEps, c.RootClient)
+ if err != nil {
+ klog.Errorf("Update endpointSlice (%s/%s) failed, Error: %v", namespace, newEps.Name, err)
+ return err
+ }
+ }
+ return nil
+}
+
+func (c *ServiceExportController) updateEndpointSlice(ctx context.Context, eps *discoveryv1.EndpointSlice, rootClient client.Client) error {
+ return retry.RetryOnConflict(retry.DefaultRetry, func() error {
+ updateErr := rootClient.Update(ctx, eps)
+ if updateErr == nil {
+ return nil
+ }
+
+ newEps := &discoveryv1.EndpointSlice{}
+ key := types.NamespacedName{
+ Namespace: eps.Namespace,
+ Name: eps.Name,
+ }
+ getErr := rootClient.Get(ctx, key, newEps)
+ if getErr == nil {
+ // Make a copy, so we don't mutate the shared cache
+ eps = newEps.DeepCopy()
+ } else {
+ klog.Errorf("Failed to get updated endpointSlice %s/%s: %v", eps.Namespace, eps.Name, getErr)
+ }
+
+ return updateErr
+ })
+}
+
+func (c *ServiceExportController) syncServiceExport(ctx context.Context, export *mcsv1alpha1.ServiceExport) error {
+ var err error
+ selector := labels.SelectorFromSet(
+ map[string]string{
+ utils.ServiceKey: export.Name,
+ },
+ )
+ epsList := &discoveryv1.EndpointSliceList{}
+ err = c.RootClient.List(ctx, epsList, &client.ListOptions{
+ Namespace: export.Namespace,
+ LabelSelector: selector,
+ })
+ if err != nil {
+ klog.Errorf("List endpointSlice in %s failed, Error: %v", export.Namespace, err)
+ return err
+ }
+
+ endpointSlices := epsList.Items
+ for i := range endpointSlices {
+ newEps := &endpointSlices[i]
+ if newEps.DeletionTimestamp != nil {
+			klog.V(4).Infof("EndpointSlice %s/%s is deleting and does not need to add serviceExport annotation", export.Namespace, newEps.Name)
+ continue
+ }
+ helper.AddEndpointSliceAnnotation(newEps, utils.ServiceExportLabelKey, utils.MCSLabelValue)
+ err = c.updateEndpointSlice(ctx, newEps, c.RootClient)
+ if err != nil {
+ klog.Errorf("Update endpointSlice (%s/%s) failed, Error: %v", export.Namespace, newEps.Name, err)
+ return err
+ }
+ }
+
+ c.EventRecorder.Event(export, corev1.EventTypeNormal, "Synced", "serviceExport has been synced to endpointSlice's annotation successfully")
+ return nil
+}
diff --git a/pkg/clustertree/cluster-manager/controllers/mcs/serviceimport_controller.go b/pkg/clustertree/cluster-manager/controllers/mcs/serviceimport_controller.go
new file mode 100644
index 000000000..c3d22174a
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/controllers/mcs/serviceimport_controller.go
@@ -0,0 +1,528 @@
+package mcs
+
+import (
+ "context"
+ "fmt"
+ "strings"
+
+ "github.com/go-logr/logr"
+ corev1 "k8s.io/api/core/v1"
+ discoveryv1 "k8s.io/api/discovery/v1"
+ apierrors "k8s.io/apimachinery/pkg/api/errors"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/labels"
+ "k8s.io/apimachinery/pkg/runtime"
+ "k8s.io/apimachinery/pkg/types"
+ "k8s.io/client-go/tools/cache"
+ "k8s.io/client-go/tools/record"
+ "k8s.io/client-go/util/retry"
+ "k8s.io/klog/v2"
+ "k8s.io/utils/strings/slices"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+ "sigs.k8s.io/controller-runtime/pkg/manager"
+ mcsv1alpha1 "sigs.k8s.io/mcs-api/pkg/apis/v1alpha1"
+
+ kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ kosmosversioned "github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned"
+ "github.com/kosmos.io/kosmos/pkg/generated/informers/externalversions"
+ "github.com/kosmos.io/kosmos/pkg/utils"
+ "github.com/kosmos.io/kosmos/pkg/utils/helper"
+ "github.com/kosmos.io/kosmos/pkg/utils/keys"
+)
+
+const LeafServiceImportControllerName = "leaf-service-import-controller"
+
+// ServiceImportController watches serviceImports in the leaf cluster and syncs the corresponding services and endpointSlices from the root cluster
+type ServiceImportController struct {
+ LeafClient client.Client
+ RootKosmosClient kosmosversioned.Interface
+ LeafNodeName string
+ IPFamilyType kosmosv1alpha1.IPFamilyType
+ EventRecorder record.EventRecorder
+ Logger logr.Logger
+ processor utils.AsyncWorker
+ RootResourceManager *utils.ResourceManager
+ ctx context.Context
+	// ReservedNamespaces are the protected namespaces that prevent Kosmos from deleting system resources
+ ReservedNamespaces []string
+}
+
+func (c *ServiceImportController) AddController(mgr manager.Manager) error {
+ if err := mgr.Add(c); err != nil {
+		klog.Errorf("Unable to create %s, Error: %v", LeafServiceImportControllerName, err)
+		return err
+	}
+ return nil
+}
+
+func (c *ServiceImportController) Start(ctx context.Context) error {
+ klog.Infof("Starting %s", LeafServiceImportControllerName)
+ defer klog.Infof("Stop %s as process done.", LeafServiceImportControllerName)
+
+ opt := utils.Options{
+ Name: LeafServiceImportControllerName,
+ KeyFunc: func(obj interface{}) (utils.QueueKey, error) {
+ // Don't care about the GVK in the queue
+ return keys.NamespaceWideKeyFunc(obj)
+ },
+ ReconcileFunc: c.Reconcile,
+ }
+ c.processor = utils.NewAsyncWorker(opt)
+ c.ctx = ctx
+
+ serviceImportInformerFactory := externalversions.NewSharedInformerFactory(c.RootKosmosClient, 0)
+ serviceImportInformer := serviceImportInformerFactory.Multicluster().V1alpha1().ServiceImports()
+ _, err := serviceImportInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
+ AddFunc: c.OnAdd,
+ UpdateFunc: c.OnUpdate,
+ DeleteFunc: c.OnDelete,
+ })
+ if err != nil {
+ return err
+ }
+
+ _, err = c.RootResourceManager.EndpointSliceInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
+ AddFunc: c.OnEpsAdd,
+ UpdateFunc: c.OnEpsUpdate,
+ DeleteFunc: c.OnEpsDelete,
+ })
+ if err != nil {
+ return err
+ }
+
+ stopCh := ctx.Done()
+ serviceImportInformerFactory.Start(stopCh)
+ serviceImportInformerFactory.WaitForCacheSync(stopCh)
+
+ c.processor.Run(utils.DefaultWorkers, stopCh)
+ <-stopCh
+ return nil
+}
+
+func (c *ServiceImportController) Reconcile(key utils.QueueKey) error {
+ clusterWideKey, ok := key.(keys.ClusterWideKey)
+ if !ok {
+ klog.Error("invalid key")
+ return fmt.Errorf("invalid key")
+ }
+ klog.V(4).Infof("============ %s starts to reconcile %s in cluster %s ============", LeafServiceImportControllerName, clusterWideKey.NamespaceKey(), c.LeafNodeName)
+ defer func() {
+ klog.V(4).Infof("============ %s has been reconciled in cluster %s =============", clusterWideKey.NamespaceKey(), c.LeafNodeName)
+ }()
+
+ var shouldDelete bool
+ serviceImport := &mcsv1alpha1.ServiceImport{}
+ if err := c.LeafClient.Get(c.ctx, types.NamespacedName{Namespace: clusterWideKey.Namespace, Name: clusterWideKey.Name}, serviceImport); err != nil {
+ if !apierrors.IsNotFound(err) {
+ klog.Errorf("Get %s in cluster %s failed, Error: %v", clusterWideKey.NamespaceKey(), c.LeafNodeName, err)
+ return err
+ }
+ shouldDelete = true
+ }
+
+ // The serviceImport is being deleted, in which case we should clear endpointSlice.
+ if shouldDelete || !serviceImport.DeletionTimestamp.IsZero() {
+ if err := c.cleanupServiceAndEndpointSlice(c.ctx, clusterWideKey.Namespace, clusterWideKey.Name); err != nil {
+ return err
+ }
+ return nil
+ }
+
+ err := c.syncServiceImport(c.ctx, serviceImport)
+ if err != nil {
+ return err
+ }
+ return nil
+}
+
+func (c *ServiceImportController) cleanupServiceAndEndpointSlice(ctx context.Context, namespace, name string) error {
+ service := &corev1.Service{}
+ if err := c.LeafClient.Get(ctx, types.NamespacedName{Namespace: namespace, Name: name}, service); err != nil {
+ if apierrors.IsNotFound(err) {
+ klog.V(4).Infof("ServiceImport %s/%s is deleting and Service %s/%s is not found, ignore it", namespace, name, namespace, name)
+ return nil
+ }
+ klog.Errorf("ServiceImport %s/%s is deleting but clean up service in cluster %s failed, Error: %v", namespace, name, c.LeafNodeName, err)
+ return err
+ }
+
+ if !helper.HasAnnotation(service.ObjectMeta, utils.ServiceImportLabelKey) {
+ klog.V(4).Infof("Service %s/%s is not managed by kosmos, ignore it", namespace, name)
+ return nil
+ }
+
+ if err := c.LeafClient.Delete(ctx, service); err != nil {
+ if apierrors.IsNotFound(err) {
+ klog.V(4).Infof("ServiceImport %s/%s is deleting and Service %s/%s is not found, ignore it", namespace, name, namespace, name)
+ return nil
+ }
+ klog.Errorf("ServiceImport %s/%s is deleting but clean up service in cluster %s failed, Error: %v", namespace, name, c.LeafNodeName, err)
+ return err
+ }
+
+ endpointSlice := &discoveryv1.EndpointSlice{}
+ err := c.LeafClient.DeleteAllOf(ctx, endpointSlice, &client.DeleteAllOfOptions{
+ ListOptions: client.ListOptions{
+ Namespace: namespace,
+ LabelSelector: labels.SelectorFromSet(map[string]string{
+ utils.ServiceKey: name,
+ }),
+ },
+ })
+ if err != nil {
+		if apierrors.IsNotFound(err) {
+			klog.V(4).Infof("ServiceImport %s/%s is deleting and its endpointSlices are not found, ignore it", namespace, name)
+			return nil
+		}
+		klog.Errorf("ServiceImport %s/%s is deleting but clean up endpointSlices in cluster %s failed, Error: %v", namespace, name, c.LeafNodeName, err)
+ return err
+ }
+ return nil
+}
+
+func (c *ServiceImportController) syncServiceImport(ctx context.Context, serviceImport *mcsv1alpha1.ServiceImport) error {
+ rootService, err := c.RootResourceManager.ServiceLister.Services(serviceImport.Namespace).Get(serviceImport.Name)
+ if err != nil {
+ if apierrors.IsNotFound(err) {
+ klog.V(4).Infof("Service %s/%s is not found in root cluster, ignore it", serviceImport.Namespace, serviceImport.Name)
+ return nil
+ }
+		klog.Errorf("Get Service %s/%s from root cluster failed, Error: %v", serviceImport.Namespace, serviceImport.Name, err)
+ return err
+ }
+
+ if err := c.importServiceHandler(ctx, rootService, serviceImport); err != nil {
+ klog.Errorf("Create or update service %s/%s in client cluster %s failed, error: %v", serviceImport.Namespace, serviceImport.Name, c.LeafNodeName, err)
+ return err
+ }
+
+ epsList, err := c.RootResourceManager.EndpointSliceLister.EndpointSlices(serviceImport.Namespace).List(labels.SelectorFromSet(map[string]string{utils.ServiceKey: serviceImport.Name}))
+ if err != nil {
+		klog.Errorf("List endpointSlices in namespace %s from root cluster failed, Error: %v", serviceImport.Namespace, err)
+ return err
+ }
+
+ addresses := make([]string, 0)
+ for _, eps := range epsList {
+ epsCopy := eps.DeepCopy()
+		for _, endpoint := range epsCopy.Endpoints {
+			addresses = append(addresses, endpoint.Addresses...)
+		}
+ err = c.importEndpointSliceHandler(ctx, epsCopy, serviceImport)
+ if err != nil {
+			klog.Errorf("Create or update endpointSlice %s/%s in client cluster failed, error: %v", epsCopy.Namespace, epsCopy.Name, err)
+ return err
+ }
+ }
+
+ addressString := strings.Join(addresses, ",")
+ helper.AddServiceImportAnnotation(serviceImport, utils.ServiceEndpointsKey, addressString)
+ if err = c.updateServiceImport(ctx, serviceImport, addressString); err != nil {
+ klog.Errorf("Update serviceImport (%s/%s) annotation in cluster %s failed, Error: %v", serviceImport.Namespace, serviceImport.Name, c.LeafNodeName, err)
+ return err
+ }
+
+ c.EventRecorder.Event(serviceImport, corev1.EventTypeNormal, "Synced", "serviceImport has been synced successfully")
+ return nil
+}
+
+func (c *ServiceImportController) importEndpointSliceHandler(ctx context.Context, endpointSlice *discoveryv1.EndpointSlice, serviceImport *mcsv1alpha1.ServiceImport) error {
+ if metav1.HasAnnotation(serviceImport.ObjectMeta, utils.DisconnectedEndpointsKey) {
+ annotationValue := helper.GetLabelOrAnnotationValue(serviceImport.Annotations, utils.DisconnectedEndpointsKey)
+ disConnectedAddress := strings.Split(annotationValue, ",")
+ clearEndpointSlice(endpointSlice, disConnectedAddress)
+ }
+
+ if endpointSlice.AddressType == discoveryv1.AddressTypeIPv4 && c.IPFamilyType == kosmosv1alpha1.IPFamilyTypeIPV6 ||
+ endpointSlice.AddressType == discoveryv1.AddressTypeIPv6 && c.IPFamilyType == kosmosv1alpha1.IPFamilyTypeIPV4 {
+		klog.Warningf("The endpointSlice's AddressType does not match the IPFamilyType of leaf cluster %s, so ignore it", c.LeafNodeName)
+ return nil
+ }
+
+ return c.createOrUpdateEndpointSliceInClient(ctx, endpointSlice, serviceImport.Name)
+}
+
+func (c *ServiceImportController) createOrUpdateEndpointSliceInClient(ctx context.Context, endpointSlice *discoveryv1.EndpointSlice, serviceName string) error {
+ newSlice := retainEndpointSlice(endpointSlice, serviceName)
+
+ if err := c.LeafClient.Create(ctx, newSlice); err != nil {
+ if apierrors.IsAlreadyExists(err) {
+ err = c.updateEndpointSlice(ctx, newSlice)
+ if err != nil {
+ klog.Errorf("Update endpointSlice(%s/%s) in cluster %s failed, Error: %v", newSlice.Namespace, newSlice.Name, c.LeafNodeName, err)
+ return err
+ }
+ return nil
+ }
+ klog.Errorf("Create endpointSlice(%s/%s) in cluster %s failed, Error: %v", newSlice.Namespace, newSlice.Name, c.LeafNodeName, err)
+ return err
+ }
+ return nil
+}
+
+func (c *ServiceImportController) updateEndpointSlice(ctx context.Context, endpointSlice *discoveryv1.EndpointSlice) error {
+ newEps := endpointSlice.DeepCopy()
+ return retry.RetryOnConflict(retry.DefaultRetry, func() error {
+ updateErr := c.LeafClient.Update(ctx, newEps)
+ if updateErr == nil {
+ return nil
+ }
+
+ updated := &discoveryv1.EndpointSlice{}
+ getErr := c.LeafClient.Get(ctx, types.NamespacedName{Namespace: newEps.Namespace, Name: newEps.Name}, updated)
+ if getErr == nil {
+			// Make a copy, so we don't mutate the shared cache
+ newEps = updated.DeepCopy()
+ } else {
+ klog.Errorf("Failed to get updated endpointSlice %s/%s in cluster %s: %v", endpointSlice.Namespace, endpointSlice.Name, c.LeafNodeName, getErr)
+ }
+
+ return updateErr
+ })
+}
+
+func retainEndpointSlice(original *discoveryv1.EndpointSlice, serviceName string) *discoveryv1.EndpointSlice {
+ endpointSlice := original.DeepCopy()
+ endpointSlice.ObjectMeta = metav1.ObjectMeta{
+ Namespace: original.Namespace,
+ Name: original.Name,
+ }
+ helper.AddEndpointSliceAnnotation(endpointSlice, utils.ServiceImportLabelKey, utils.MCSLabelValue)
+ helper.AddEndpointSliceLabel(endpointSlice, utils.ServiceKey, serviceName)
+ return endpointSlice
+}
+
+func clearEndpointSlice(slice *discoveryv1.EndpointSlice, disconnectedAddress []string) {
+ disconnectedAddressMap := make(map[string]struct{})
+ for _, name := range disconnectedAddress {
+ disconnectedAddressMap[name] = struct{}{}
+ }
+
+ endpoints := slice.Endpoints
+ newEndpoints := make([]discoveryv1.Endpoint, 0)
+ for _, endpoint := range endpoints {
+ newAddresses := make([]string, 0)
+ for _, address := range endpoint.Addresses {
+ if _, found := disconnectedAddressMap[address]; !found {
+ newAddresses = append(newAddresses, address)
+ }
+ }
+ // Only add non-empty addresses from endpoints
+ if len(newAddresses) > 0 {
+ endpoint.Addresses = newAddresses
+ newEndpoints = append(newEndpoints, endpoint)
+ }
+ }
+ slice.Endpoints = newEndpoints
+}
+
+func (c *ServiceImportController) importServiceHandler(ctx context.Context, rootService *corev1.Service, serviceImport *mcsv1alpha1.ServiceImport) error {
+ err := c.checkServiceType(rootService)
+ if err != nil {
+		klog.Warningf("Could not create service in leaf cluster %s, Error: %v", c.LeafNodeName, err)
+ // return nil will not requeue
+ return nil
+ }
+ clientService := c.generateService(rootService, serviceImport)
+ err = c.createOrUpdateServiceInClient(ctx, clientService)
+ if err != nil {
+ return err
+ }
+ return nil
+}
+
+func (c *ServiceImportController) createOrUpdateServiceInClient(ctx context.Context, service *corev1.Service) error {
+ oldService := &corev1.Service{}
+ if err := c.LeafClient.Get(ctx, types.NamespacedName{Namespace: service.Namespace, Name: service.Name}, oldService); err != nil {
+ if apierrors.IsNotFound(err) {
+			if err = c.LeafClient.Create(ctx, service); err != nil {
+				klog.Errorf("Create serviceImport service(%s/%s) in client cluster %s failed, Error: %v", service.Namespace, service.Name, c.LeafNodeName, err)
+				return err
+			}
+			return nil
+		}
+		klog.Errorf("Get service(%s/%s) in cluster %s failed, Error: %v", service.Namespace, service.Name, c.LeafNodeName, err)
+ return err
+ }
+
+ retainServiceFields(oldService, service)
+
+	if err := c.LeafClient.Update(ctx, service); err != nil {
+		klog.Errorf("Update serviceImport service(%s/%s) in cluster %s failed, Error: %v", service.Namespace, service.Name, c.LeafNodeName, err)
+		return err
+	}
+ return nil
+}
+
+func (c *ServiceImportController) updateServiceImport(ctx context.Context, serviceImport *mcsv1alpha1.ServiceImport, addresses string) error {
+ newImport := serviceImport.DeepCopy()
+ return retry.RetryOnConflict(retry.DefaultRetry, func() error {
+ updateErr := c.LeafClient.Update(ctx, newImport)
+ if updateErr == nil {
+ return nil
+ }
+ updated := &mcsv1alpha1.ServiceImport{}
+ getErr := c.LeafClient.Get(ctx, types.NamespacedName{Namespace: newImport.Namespace, Name: newImport.Name}, updated)
+ if getErr == nil {
+ // Make a copy, so we don't mutate the shared cache
+ newImport = updated.DeepCopy()
+ helper.AddServiceImportAnnotation(newImport, utils.ServiceEndpointsKey, addresses)
+ } else {
+			klog.Errorf("Failed to get updated serviceImport %s/%s in cluster %s, Error: %v", newImport.Namespace, newImport.Name, c.LeafNodeName, getErr)
+ }
+ return updateErr
+ })
+}
+
+func (c *ServiceImportController) OnAdd(obj interface{}) {
+ runtimeObj, ok := obj.(runtime.Object)
+ if !ok {
+ return
+ }
+ c.processor.Enqueue(runtimeObj)
+}
+
+func (c *ServiceImportController) OnUpdate(old interface{}, new interface{}) {
+ runtimeObj, ok := new.(runtime.Object)
+ if !ok {
+ return
+ }
+ c.processor.Enqueue(runtimeObj)
+}
+
+func (c *ServiceImportController) OnDelete(obj interface{}) {
+ runtimeObj, ok := obj.(runtime.Object)
+ if !ok {
+ return
+ }
+ c.processor.Enqueue(runtimeObj)
+}
+
+func (c *ServiceImportController) OnEpsAdd(obj interface{}) {
+ eps := obj.(*discoveryv1.EndpointSlice)
+ if !c.shouldEnqueue(eps) {
+ return
+ }
+
+ if helper.HasAnnotation(eps.ObjectMeta, utils.ServiceExportLabelKey) {
+ serviceExportName := helper.GetLabelOrAnnotationValue(eps.GetLabels(), utils.ServiceKey)
+ key := keys.ClusterWideKey{}
+ key.Namespace = eps.Namespace
+ key.Name = serviceExportName
+ c.processor.Add(key)
+ }
+}
+
+func (c *ServiceImportController) OnEpsUpdate(old interface{}, new interface{}) {
+ newSlice := new.(*discoveryv1.EndpointSlice)
+ oldSlice := old.(*discoveryv1.EndpointSlice)
+ if !c.shouldEnqueue(newSlice) {
+ return
+ }
+
+ isRemoveAnnotationEvent := helper.HasAnnotation(oldSlice.ObjectMeta, utils.ServiceExportLabelKey) && !helper.HasAnnotation(newSlice.ObjectMeta, utils.ServiceExportLabelKey)
+ if helper.HasAnnotation(newSlice.ObjectMeta, utils.ServiceExportLabelKey) || isRemoveAnnotationEvent {
+ serviceExportName := helper.GetLabelOrAnnotationValue(newSlice.GetLabels(), utils.ServiceKey)
+ key := keys.ClusterWideKey{}
+ key.Namespace = newSlice.Namespace
+ key.Name = serviceExportName
+ c.processor.Add(key)
+ }
+}
+
+func (c *ServiceImportController) OnEpsDelete(obj interface{}) {
+ eps := obj.(*discoveryv1.EndpointSlice)
+ if !c.shouldEnqueue(eps) {
+ return
+ }
+
+ if helper.HasAnnotation(eps.ObjectMeta, utils.ServiceExportLabelKey) {
+ serviceExportName := helper.GetLabelOrAnnotationValue(eps.GetLabels(), utils.ServiceKey)
+ key := keys.ClusterWideKey{}
+ key.Namespace = eps.Namespace
+ key.Name = serviceExportName
+ c.processor.Add(key)
+ }
+}
+
+func (c *ServiceImportController) shouldEnqueue(endpointSlice *discoveryv1.EndpointSlice) bool {
+ return !slices.Contains(c.ReservedNamespaces, endpointSlice.Namespace)
+}
+
+func retainServiceFields(oldSvc, newSvc *corev1.Service) {
+ newSvc.Spec.ClusterIP = oldSvc.Spec.ClusterIP
+ newSvc.ResourceVersion = oldSvc.ResourceVersion
+}
+
+func (c *ServiceImportController) generateService(service *corev1.Service, serviceImport *mcsv1alpha1.ServiceImport) *corev1.Service {
+ clusterIP := corev1.ClusterIPNone
+ if isServiceIPSet(service) {
+ clusterIP = ""
+ }
+
+ iPFamilies := make([]corev1.IPFamily, 0)
+ if c.IPFamilyType == kosmosv1alpha1.IPFamilyTypeALL {
+ iPFamilies = service.Spec.IPFamilies
+ } else if c.IPFamilyType == kosmosv1alpha1.IPFamilyTypeIPV4 {
+ iPFamilies = append(iPFamilies, corev1.IPv4Protocol)
+ } else {
+ iPFamilies = append(iPFamilies, corev1.IPv6Protocol)
+ }
+
+ var iPFamilyPolicy corev1.IPFamilyPolicy
+ if c.IPFamilyType == kosmosv1alpha1.IPFamilyTypeALL {
+ iPFamilyPolicy = *service.Spec.IPFamilyPolicy
+ } else {
+ iPFamilyPolicy = corev1.IPFamilyPolicySingleStack
+ }
+
+ return &corev1.Service{
+ ObjectMeta: metav1.ObjectMeta{
+ Namespace: serviceImport.Namespace,
+ Name: service.Name,
+ Annotations: map[string]string{
+ utils.ServiceImportLabelKey: utils.MCSLabelValue,
+ },
+ },
+ Spec: corev1.ServiceSpec{
+ Type: service.Spec.Type,
+ ClusterIP: clusterIP,
+ Ports: servicePorts(service),
+ IPFamilies: iPFamilies,
+ IPFamilyPolicy: &iPFamilyPolicy,
+ },
+ }
+}
+
+func (c *ServiceImportController) checkServiceType(service *corev1.Service) error {
+	if service.Spec.IPFamilyPolicy != nil && *service.Spec.IPFamilyPolicy == corev1.IPFamilyPolicySingleStack {
+ if service.Spec.IPFamilies[0] == corev1.IPv6Protocol && c.IPFamilyType == kosmosv1alpha1.IPFamilyTypeIPV4 ||
+ service.Spec.IPFamilies[0] == corev1.IPv4Protocol && c.IPFamilyType == kosmosv1alpha1.IPFamilyTypeIPV6 {
+			return fmt.Errorf("service's IPFamilyPolicy %s does not match the leaf cluster %s", *service.Spec.IPFamilyPolicy, c.LeafNodeName)
+ }
+ }
+ return nil
+}
+
+func isServiceIPSet(service *corev1.Service) bool {
+ return service.Spec.ClusterIP != corev1.ClusterIPNone && service.Spec.ClusterIP != ""
+}
+
+func servicePorts(service *corev1.Service) []corev1.ServicePort {
+ ports := make([]corev1.ServicePort, len(service.Spec.Ports))
+ for i, p := range service.Spec.Ports {
+ ports[i] = corev1.ServicePort{
+ NodePort: p.NodePort,
+ Name: p.Name,
+ Protocol: p.Protocol,
+ Port: p.Port,
+ AppProtocol: p.AppProtocol,
+ }
+ }
+ return ports
+}
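`checkServiceType` and `generateService` gate imports on the leaf cluster's IP family: a single-stack service can only be imported when its first IP family matches the leaf's IPFamilyType, while `IPFamilyTypeALL` accepts either. A minimal sketch of that decision on plain string types (the type and constant names here are illustrative stand-ins, not the corev1/kosmosv1alpha1 ones):

```go
package main

import "fmt"

// Illustrative stand-ins for corev1.IPFamily and kosmosv1alpha1.IPFamilyType.
type ipFamily string
type ipFamilyType string

const (
	ipv4 ipFamily = "IPv4"
	ipv6 ipFamily = "IPv6"

	typeALL  ipFamilyType = "ALL"
	typeIPV4 ipFamilyType = "IPV4"
	typeIPV6 ipFamilyType = "IPV6"
)

// compatible reports whether a single-stack service whose first IP family is
// family can be imported into a leaf cluster of the given IPFamilyType,
// mirroring the mismatch conditions in checkServiceType.
func compatible(family ipFamily, leafType ipFamilyType) bool {
	if leafType == typeALL {
		return true // dual-stack leaf accepts either family
	}
	mismatch := (family == ipv6 && leafType == typeIPV4) ||
		(family == ipv4 && leafType == typeIPV6)
	return !mismatch
}

func main() {
	fmt.Println(compatible(ipv4, typeIPV4), compatible(ipv6, typeIPV4)) // prints: true false
}
```

The same pair of mismatch conditions also drives `importEndpointSliceHandler`, which skips slices whose AddressType cannot be reached from the leaf cluster.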
diff --git a/pkg/clustertree/cluster-manager/controllers/node_lease_controller.go b/pkg/clustertree/cluster-manager/controllers/node_lease_controller.go
new file mode 100644
index 000000000..31eedd15c
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/controllers/node_lease_controller.go
@@ -0,0 +1,209 @@
+package controllers
+
+import (
+ "context"
+ "sync"
+ "time"
+
+ coordinationv1 "k8s.io/api/coordination/v1"
+ corev1 "k8s.io/api/core/v1"
+ apierrors "k8s.io/apimachinery/pkg/api/errors"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/types"
+ "k8s.io/apimachinery/pkg/util/wait"
+ "k8s.io/client-go/kubernetes"
+ "k8s.io/client-go/util/retry"
+ "k8s.io/klog/v2"
+ "k8s.io/utils/pointer"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+
+ kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ leafUtils "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/utils"
+)
+
+const (
+ NodeLeaseControllerName = "node-lease-controller"
+
+ DefaultLeaseDuration = 40
+ DefaultRenewIntervalFraction = 0.25
+
+ DefaultNodeStatusUpdateInterval = 1 * time.Minute
+)
+
+type NodeLeaseController struct {
+ leafClient kubernetes.Interface
+ rootClient kubernetes.Interface
+ root client.Client
+ LeafModelHandler leafUtils.LeafModelHandler
+
+ leaseInterval time.Duration
+ statusInterval time.Duration
+
+ nodes []*corev1.Node
+ LeafNodeSelectors map[string]kosmosv1alpha1.NodeSelector
+ nodeLock sync.Mutex
+}
+
+func NewNodeLeaseController(leafClient kubernetes.Interface, root client.Client, nodes []*corev1.Node, LeafNodeSelectors map[string]kosmosv1alpha1.NodeSelector, rootClient kubernetes.Interface, LeafModelHandler leafUtils.LeafModelHandler) *NodeLeaseController {
+ c := &NodeLeaseController{
+ leafClient: leafClient,
+ rootClient: rootClient,
+ root: root,
+ nodes: nodes,
+ LeafModelHandler: LeafModelHandler,
+ LeafNodeSelectors: LeafNodeSelectors,
+ leaseInterval: getRenewInterval(),
+ statusInterval: DefaultNodeStatusUpdateInterval,
+ }
+ return c
+}
+
+func (c *NodeLeaseController) Start(ctx context.Context) error {
+ go wait.UntilWithContext(ctx, c.syncLease, c.leaseInterval)
+ go wait.UntilWithContext(ctx, c.syncNodeStatus, c.statusInterval)
+ <-ctx.Done()
+ return nil
+}
+
+func (c *NodeLeaseController) syncNodeStatus(ctx context.Context) {
+ nodes := make([]*corev1.Node, 0)
+ c.nodeLock.Lock()
+ for _, nodeIndex := range c.nodes {
+ nodeCopy := nodeIndex.DeepCopy()
+ nodes = append(nodes, nodeCopy)
+ }
+ c.nodeLock.Unlock()
+
+ err := c.updateNodeStatus(ctx, nodes, c.LeafNodeSelectors)
+ if err != nil {
+ klog.Errorf(err.Error())
+ }
+}
+
+// nolint
+func (c *NodeLeaseController) updateNodeStatus(ctx context.Context, n []*corev1.Node, leafNodeSelector map[string]kosmosv1alpha1.NodeSelector) error {
+ err := c.LeafModelHandler.UpdateRootNodeStatus(ctx, n, leafNodeSelector)
+ if err != nil {
+		klog.Errorf("Could not update node status in root cluster, Error: %v", err)
+ }
+ return nil
+}
+
+func (c *NodeLeaseController) syncLease(ctx context.Context) {
+ nodes := make([]*corev1.Node, 0)
+ c.nodeLock.Lock()
+ for _, nodeIndex := range c.nodes {
+ nodeCopy := nodeIndex.DeepCopy()
+ nodes = append(nodes, nodeCopy)
+ }
+ c.nodeLock.Unlock()
+
+ _, err := c.leafClient.Discovery().ServerVersion()
+ if err != nil {
+		klog.Errorf("failed to ping leaf cluster: %v", err)
+ return
+ }
+
+ err = c.createLeaseIfNotExists(ctx, nodes)
+ if err != nil {
+ return
+ }
+
+ err = c.updateLeaseWithRetry(ctx, nodes)
+ if err != nil {
+ klog.Errorf("lease has failed, and the maximum number of retries has been reached, %v", err)
+ return
+ }
+
+ klog.V(5).Infof("Successfully updated lease")
+}
+
+func (c *NodeLeaseController) createLeaseIfNotExists(ctx context.Context, nodes []*corev1.Node) error {
+ for _, node := range nodes {
+ namespaceName := types.NamespacedName{
+ Namespace: corev1.NamespaceNodeLease,
+ Name: node.Name,
+ }
+ lease := &coordinationv1.Lease{}
+ err := c.root.Get(ctx, namespaceName, lease)
+ if err != nil {
+ if apierrors.IsNotFound(err) {
+ leaseToCreate := c.newLease(node)
+ err = c.root.Create(ctx, leaseToCreate)
+ if err != nil {
+					klog.Errorf("create lease %s failed: %v", node.Name, err)
+ return err
+ }
+ } else {
+				klog.Errorf("get lease %s failed: %v", node.Name, err)
+ return err
+ }
+ }
+ }
+ return nil
+}
+
+func (c *NodeLeaseController) updateLeaseWithRetry(ctx context.Context, nodes []*corev1.Node) error {
+ for _, node := range nodes {
+ err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
+ lease := &coordinationv1.Lease{}
+ namespaceName := types.NamespacedName{
+ Namespace: corev1.NamespaceNodeLease,
+ Name: node.Name,
+ }
+ if err := c.root.Get(ctx, namespaceName, lease); err != nil {
+ klog.Warningf("get lease %s failed with err %v", node.Name, err)
+ return err
+ }
+
+ lease.Spec.RenewTime = &metav1.MicroTime{Time: time.Now()}
+ lease.OwnerReferences = []metav1.OwnerReference{
+ {
+ APIVersion: corev1.SchemeGroupVersion.WithKind("Node").Version,
+ Kind: corev1.SchemeGroupVersion.WithKind("Node").Kind,
+ Name: node.Name,
+ UID: node.UID,
+ },
+ }
+ err := c.root.Update(ctx, lease)
+ if err != nil {
+ klog.Warningf("update lease %s failed with err %v", node.Name, err)
+ return err
+ }
+ return nil
+ })
+ if err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func (c *NodeLeaseController) newLease(node *corev1.Node) *coordinationv1.Lease {
+ lease := &coordinationv1.Lease{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: node.Name,
+ Namespace: corev1.NamespaceNodeLease,
+ OwnerReferences: []metav1.OwnerReference{
+ {
+ APIVersion: corev1.SchemeGroupVersion.WithKind("Node").Version,
+ Kind: corev1.SchemeGroupVersion.WithKind("Node").Kind,
+ Name: node.Name,
+ UID: node.UID,
+ },
+ },
+ },
+ Spec: coordinationv1.LeaseSpec{
+ HolderIdentity: pointer.String(node.Name),
+ LeaseDurationSeconds: pointer.Int32(DefaultLeaseDuration),
+ RenewTime: &metav1.MicroTime{Time: time.Now()},
+ },
+ }
+ return lease
+}
+
+func getRenewInterval() time.Duration {
+ interval := DefaultLeaseDuration * DefaultRenewIntervalFraction
+ intervalDuration := time.Second * time.Duration(int(interval))
+ return intervalDuration
+}
diff --git a/pkg/clustertree/cluster-manager/controllers/node_resources_controller.go b/pkg/clustertree/cluster-manager/controllers/node_resources_controller.go
index a7b765cd6..2ea1472e9 100644
--- a/pkg/clustertree/cluster-manager/controllers/node_resources_controller.go
+++ b/pkg/clustertree/cluster-manager/controllers/node_resources_controller.go
@@ -2,9 +2,14 @@ package controllers
import (
"context"
+ "fmt"
+ "reflect"
"time"
corev1 "k8s.io/api/core/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/types"
+ "k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/record"
"k8s.io/klog"
controllerruntime "sigs.k8s.io/controller-runtime"
@@ -12,20 +17,33 @@ import (
"sigs.k8s.io/controller-runtime/pkg/client"
"sigs.k8s.io/controller-runtime/pkg/controller"
"sigs.k8s.io/controller-runtime/pkg/event"
+ "sigs.k8s.io/controller-runtime/pkg/handler"
"sigs.k8s.io/controller-runtime/pkg/manager"
"sigs.k8s.io/controller-runtime/pkg/predicate"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
+ "sigs.k8s.io/controller-runtime/pkg/source"
+
+ kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ leafUtils "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/utils"
+ "github.com/kosmos.io/kosmos/pkg/utils"
)
const (
- ControllerName = "node-resources-controller"
- RequeueTime = 10 * time.Second
+ NodeResourcesControllerName = "node-resources-controller"
+ RequeueTime = 10 * time.Second
)
type NodeResourcesController struct {
- Client client.Client
- Master client.Client
- EventRecorder record.EventRecorder
+ Leaf client.Client
+ Root client.Client
+ GlobalLeafManager leafUtils.LeafResourceManager
+ RootClientset kubernetes.Interface
+
+ Nodes []*corev1.Node
+ LeafNodeSelectors map[string]kosmosv1alpha1.NodeSelector
+ LeafModelHandler leafUtils.LeafModelHandler
+ Cluster *kosmosv1alpha1.Cluster
+ EventRecorder record.EventRecorder
}
var predicatesFunc = predicate.Funcs{
@@ -33,7 +51,17 @@ var predicatesFunc = predicate.Funcs{
return true
},
UpdateFunc: func(updateEvent event.UpdateEvent) bool {
- return true
+ curr := updateEvent.ObjectNew.(*corev1.Node)
+ old := updateEvent.ObjectOld.(*corev1.Node)
+
+ if old.Spec.Unschedulable != curr.Spec.Unschedulable ||
+ old.DeletionTimestamp != curr.DeletionTimestamp ||
+ utils.NodeReady(old) != utils.NodeReady(curr) ||
+ !reflect.DeepEqual(old.Status.Allocatable, curr.Status.Allocatable) {
+ return true
+ }
+
+ return false
},
DeleteFunc: func(deleteEvent event.DeleteEvent) bool {
return true
@@ -43,19 +71,131 @@ var predicatesFunc = predicate.Funcs{
},
}
+func (c *NodeResourcesController) podMapFunc() handler.MapFunc {
+ return func(a client.Object) []reconcile.Request {
+ var requests []reconcile.Request
+ pod := a.(*corev1.Pod)
+
+ if len(pod.Spec.NodeName) > 0 {
+ requests = append(requests, reconcile.Request{NamespacedName: types.NamespacedName{
+ Name: pod.Spec.NodeName,
+ }})
+ }
+ return requests
+ }
+}
+
func (c *NodeResourcesController) SetupWithManager(mgr manager.Manager) error {
return controllerruntime.NewControllerManagedBy(mgr).
- Named(ControllerName).
+ Named(NodeResourcesControllerName).
WithOptions(controller.Options{}).
For(&corev1.Node{}, builder.WithPredicates(predicatesFunc)).
+ Watches(&source.Kind{Type: &corev1.Pod{}}, handler.EnqueueRequestsFromMapFunc(c.podMapFunc())).
Complete(c)
}
func (c *NodeResourcesController) Reconcile(ctx context.Context, request reconcile.Request) (reconcile.Result, error) {
- klog.Infof("============ %s starts to reconcile %s ============", ControllerName, request.Name)
+ klog.V(4).Infof("============ %s starts to reconcile %s ============", NodeResourcesControllerName, request.Name)
defer func() {
- klog.Infof("============ %s has been reconciled =============", request.Name)
+ klog.V(4).Infof("============ %s has been reconciled =============", request.Name)
}()
+ for _, rootNode := range c.Nodes {
+ nodeInRoot := &corev1.Node{}
+ err := c.Root.Get(ctx, types.NamespacedName{Name: rootNode.Name}, nodeInRoot)
+ if err != nil {
+			klog.Errorf("Could not get node in root cluster, Error: %v", err)
+ return reconcile.Result{
+ Requeue: true,
+ RequeueAfter: RequeueTime,
+ }, fmt.Errorf("cannot get node while update nodeInRoot resources %s, err: %v", rootNode.Name, err)
+ }
+
+ nodesInLeaf, err := c.LeafModelHandler.GetLeafNodes(ctx, rootNode, c.LeafNodeSelectors[rootNode.Name])
+ if err != nil {
+			klog.Errorf("Could not get node in leaf cluster %s, Error: %v", c.Cluster.Name, err)
+ return controllerruntime.Result{
+ RequeueAfter: RequeueTime,
+ }, err
+ }
+
+ pods, err := c.LeafModelHandler.GetLeafPods(ctx, rootNode, c.LeafNodeSelectors[rootNode.Name])
+ if err != nil {
+			klog.Errorf("Could not list pods in leaf cluster %s, Error: %v", c.Cluster.Name, err)
+ return controllerruntime.Result{
+ RequeueAfter: RequeueTime,
+ }, err
+ }
+
+ clone := nodeInRoot.DeepCopy()
+ clone.Status.Conditions = utils.NodeConditions()
+
+		// In Node mode, sync the leaf node's labels and annotations to the corresponding node in the root cluster
+ if c.LeafModelHandler.GetLeafMode() == leafUtils.Node {
+			getNode := func(nodes *corev1.NodeList) *corev1.Node {
+				// index into the slice instead of taking the address of the loop variable
+				for i := range nodes.Items {
+					if nodes.Items[i].Name == rootNode.Name {
+						return &nodes.Items[i]
+					}
+				}
+				return nil
+			}
+ node := getNode(nodesInLeaf)
+ if node != nil {
+ clone.Labels = mergeMap(node.GetLabels(), clone.GetLabels())
+ clone.Annotations = mergeMap(node.GetAnnotations(), clone.GetAnnotations())
+ // TODO @duanmengkk
+ // spec := corev1.NodeSpec{
+ // Taints: rootNode.Spec.Taints,
+ // }
+ clone.Spec.Taints = rootNode.Spec.Taints
+ clone.Status = node.Status
+ clone.Status.Addresses, err = leafUtils.GetAddress(ctx, c.RootClientset, node.Status.Addresses)
+ if err != nil {
+					klog.Errorf("GetAddress node %s, err: %v", rootNode.Name, err)
+ return reconcile.Result{}, err
+ }
+ }
+ }
+		// TODO: aggregate Labels and Annotations for the classification model
+ clusterResources := utils.CalculateClusterResources(nodesInLeaf, pods)
+ clone.Status.Allocatable = clusterResources
+ clone.Status.Capacity = clusterResources
+
+ patch, err := utils.CreateMergePatch(nodeInRoot, clone)
+ if err != nil {
+			klog.Errorf("Could not CreateMergePatch, Error: %v", err)
+ return reconcile.Result{}, err
+ }
+
+ if _, err = c.RootClientset.CoreV1().Nodes().Patch(ctx, rootNode.Name, types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
+ return reconcile.Result{
+ RequeueAfter: RequeueTime,
+ }, fmt.Errorf("failed to patch node resources: %v, will requeue", err)
+ }
+
+ if _, err = c.RootClientset.CoreV1().Nodes().PatchStatus(ctx, rootNode.Name, patch); err != nil {
+ return reconcile.Result{
+ RequeueAfter: RequeueTime,
+ }, fmt.Errorf("failed to patch node resources: %v, will requeue", err)
+ }
+ }
return reconcile.Result{}, nil
}
+
+// mergeMap copies entries from origin into dst without overwriting existing keys,
+// then strips the node-role labels that must not be propagated to the root cluster.
+func mergeMap(origin, dst map[string]string) map[string]string {
+	if origin == nil {
+		return dst
+	}
+	if dst == nil {
+		dst = make(map[string]string, len(origin))
+	}
+	for k, v := range origin {
+		if _, exists := dst[k]; !exists {
+			dst[k] = v
+		}
+	}
+	delete(dst, utils.LabelNodeRoleControlPlane)
+	delete(dst, utils.LabelNodeRoleOldControlPlane)
+	delete(dst, utils.LabelNodeRoleNode)
+	return dst
+}
diff --git a/pkg/clustertree/cluster-manager/controllers/pod/leaf_pod_controller.go b/pkg/clustertree/cluster-manager/controllers/pod/leaf_pod_controller.go
new file mode 100644
index 000000000..9f90f5595
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/controllers/pod/leaf_pod_controller.go
@@ -0,0 +1,172 @@
+package pod
+
+import (
+ "context"
+ "time"
+
+ "github.com/google/go-cmp/cmp"
+ "github.com/pkg/errors"
+ corev1 "k8s.io/api/core/v1"
+ apierrors "k8s.io/apimachinery/pkg/api/errors"
+ "k8s.io/apimachinery/pkg/types"
+ "k8s.io/klog/v2"
+ ctrl "sigs.k8s.io/controller-runtime"
+ "sigs.k8s.io/controller-runtime/pkg/builder"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+ "sigs.k8s.io/controller-runtime/pkg/controller"
+ "sigs.k8s.io/controller-runtime/pkg/event"
+ "sigs.k8s.io/controller-runtime/pkg/manager"
+ "sigs.k8s.io/controller-runtime/pkg/predicate"
+ "sigs.k8s.io/controller-runtime/pkg/reconcile"
+
+ "github.com/kosmos.io/kosmos/pkg/utils"
+ "github.com/kosmos.io/kosmos/pkg/utils/podutils"
+)
+
+const (
+ LeafPodControllerName = "leaf-pod-controller"
+ LeafPodRequeueTime = 10 * time.Second
+)
+
+type LeafPodReconciler struct {
+ client.Client
+ RootClient client.Client
+ Namespace string
+}
+
+func (r *LeafPodReconciler) Reconcile(ctx context.Context, request reconcile.Request) (reconcile.Result, error) {
+ var pod corev1.Pod
+ if err := r.Get(ctx, request.NamespacedName, &pod); err != nil {
+		if apierrors.IsNotFound(err) {
+			// the pod was deleted in the leaf cluster, so delete it in the root cluster as well
+			if err := DeletePodInRootCluster(ctx, request.NamespacedName, r.RootClient); err != nil {
+				klog.Errorf("delete pod %s in root cluster failed, err: %v", request.NamespacedName, err)
+				return reconcile.Result{RequeueAfter: LeafPodRequeueTime}, nil
+			}
+ return reconcile.Result{}, nil
+ }
+ klog.Errorf("get %s error: %v", request.NamespacedName, err)
+ return reconcile.Result{RequeueAfter: LeafPodRequeueTime}, nil
+ }
+
+ podCopy := pod.DeepCopy()
+
+ // if ShouldSkipStatusUpdate(podCopy) {
+ // return reconcile.Result{}, nil
+ // }
+
+ if podutils.IsKosmosPod(podCopy) {
+ podutils.FitObjectMeta(&podCopy.ObjectMeta)
+ podCopy.ResourceVersion = "0"
+ if err := r.RootClient.Status().Update(ctx, podCopy); err != nil && !apierrors.IsNotFound(err) {
+ klog.V(4).Info(errors.Wrap(err, "error while updating pod status in kubernetes"))
+ return reconcile.Result{RequeueAfter: LeafPodRequeueTime}, nil
+ }
+ }
+ return reconcile.Result{}, nil
+}
+
+type rootDeleteOption struct {
+ GracePeriodSeconds *int64
+}
+
+func (dopt *rootDeleteOption) ApplyToDelete(opt *client.DeleteOptions) {
+ opt.GracePeriodSeconds = dopt.GracePeriodSeconds
+}
+
+func NewRootDeleteOption(pod *corev1.Pod) client.DeleteOption {
+ // TODO
+ //gracePeriodSeconds := pod.DeletionGracePeriodSeconds
+ //
+ //current := metav1.NewTime(time.Now())
+	//if pod.DeletionTimestamp.Before(&current) {
+ // gracePeriodSeconds = new(int64)
+ //}
+ return &rootDeleteOption{
+ GracePeriodSeconds: new(int64),
+ }
+}
+
+func NewLeafDeleteOption(pod *corev1.Pod) client.DeleteOption {
+ gracePeriodSeconds := new(int64)
+ if pod.DeletionGracePeriodSeconds != nil {
+ gracePeriodSeconds = pod.DeletionGracePeriodSeconds
+ }
+
+ return &rootDeleteOption{
+ GracePeriodSeconds: gracePeriodSeconds,
+ }
+}
+
+func DeletePodInRootCluster(ctx context.Context, rootnamespacedname types.NamespacedName, rootClient client.Client) error {
+ rPod := corev1.Pod{}
+ err := rootClient.Get(ctx, rootnamespacedname, &rPod)
+
+	if err != nil {
+		if apierrors.IsNotFound(err) {
+			return nil
+		}
+		return err
+	}
+
+ rPodCopy := rPod.DeepCopy()
+ deleteOption := NewRootDeleteOption(rPodCopy)
+
+ if err := rootClient.Delete(ctx, rPodCopy, deleteOption); err != nil {
+ if !apierrors.IsNotFound(err) {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func (r *LeafPodReconciler) SetupWithManager(mgr manager.Manager) error {
+ if r.Client == nil {
+ r.Client = mgr.GetClient()
+ }
+
+ skipFunc := func(obj client.Object) bool {
+ if obj.GetNamespace() == utils.ReservedNS {
+ return false
+ }
+
+ // skip namespace
+ if len(r.Namespace) > 0 && r.Namespace != obj.GetNamespace() {
+ return false
+ }
+
+ p := obj.(*corev1.Pod)
+ return podutils.IsKosmosPod(p)
+ }
+
+ return ctrl.NewControllerManagedBy(mgr).
+ Named(LeafPodControllerName).
+ WithOptions(controller.Options{}).
+ For(&corev1.Pod{}, builder.WithPredicates(predicate.Funcs{
+ CreateFunc: func(createEvent event.CreateEvent) bool {
+ // ignore create event
+ return skipFunc(createEvent.Object)
+ },
+ UpdateFunc: func(updateEvent event.UpdateEvent) bool {
+ pod1 := updateEvent.ObjectOld.(*corev1.Pod)
+ pod2 := updateEvent.ObjectNew.(*corev1.Pod)
+ if !skipFunc(updateEvent.ObjectNew) {
+ return false
+ }
+ return !cmp.Equal(pod1.Status, pod2.Status)
+ },
+ DeleteFunc: func(deleteEvent event.DeleteEvent) bool {
+ return skipFunc(deleteEvent.Object)
+ },
+ GenericFunc: func(genericEvent event.GenericEvent) bool {
+ return false
+ },
+ })).
+ Complete(r)
+}
+
+// func ShouldSkipStatusUpdate(pod *corev1.Pod) bool {
+// return pod.Status.Phase == corev1.PodSucceeded ||
+// pod.Status.Phase == corev1.PodFailed
+// }
diff --git a/pkg/clustertree/cluster-manager/controllers/pod/root_pod_controller.go b/pkg/clustertree/cluster-manager/controllers/pod/root_pod_controller.go
new file mode 100644
index 000000000..9fea35173
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/controllers/pod/root_pod_controller.go
@@ -0,0 +1,850 @@
+package pod
+
+import (
+ "context"
+ "fmt"
+ "reflect"
+ "strings"
+ "time"
+
+ corev1 "k8s.io/api/core/v1"
+ "k8s.io/apimachinery/pkg/api/errors"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/labels"
+ "k8s.io/apimachinery/pkg/runtime"
+ "k8s.io/apimachinery/pkg/runtime/schema"
+ "k8s.io/apimachinery/pkg/types"
+ "k8s.io/apimachinery/pkg/util/wait"
+ "k8s.io/client-go/dynamic"
+ "k8s.io/klog/v2"
+ ctrl "sigs.k8s.io/controller-runtime"
+ "sigs.k8s.io/controller-runtime/pkg/builder"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+ "sigs.k8s.io/controller-runtime/pkg/controller"
+ "sigs.k8s.io/controller-runtime/pkg/event"
+ "sigs.k8s.io/controller-runtime/pkg/manager"
+ "sigs.k8s.io/controller-runtime/pkg/predicate"
+ "sigs.k8s.io/controller-runtime/pkg/reconcile"
+
+ "github.com/kosmos.io/kosmos/cmd/clustertree/cluster-manager/app/options"
+ kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/extensions/daemonset"
+ leafUtils "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/utils"
+ "github.com/kosmos.io/kosmos/pkg/utils"
+ "github.com/kosmos.io/kosmos/pkg/utils/podutils"
+)
+
+const (
+ RootPodControllerName = "root-pod-controller"
+ RootPodRequeueTime = 10 * time.Second
+)
+
+type RootPodReconciler struct {
+ client.Client
+ RootClient client.Client
+
+ DynamicRootClient dynamic.Interface
+ envResourceManager utils.EnvResourceManager
+
+ GlobalLeafManager leafUtils.LeafResourceManager
+
+ Options *options.Options
+}
+
+type envResourceManager struct {
+ DynamicRootClient dynamic.Interface
+}
+
+// GetConfigMap retrieves the specified config map from the cache.
+func (rm *envResourceManager) GetConfigMap(name, namespace string) (*corev1.ConfigMap, error) {
+ // return rm.configMapLister.ConfigMaps(namespace).Get(name)
+ obj, err := rm.DynamicRootClient.Resource(utils.GVR_CONFIGMAP).Namespace(namespace).Get(context.TODO(), name, metav1.GetOptions{})
+ if err != nil {
+ return nil, err
+ }
+
+ retObj := &corev1.ConfigMap{}
+	if err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), retObj); err != nil {
+ return nil, err
+ }
+
+ return retObj, nil
+}
+
+// GetSecret retrieves the specified secret from Kubernetes.
+func (rm *envResourceManager) GetSecret(name, namespace string) (*corev1.Secret, error) {
+ // return rm.secretLister.Secrets(namespace).Get(name)
+ obj, err := rm.DynamicRootClient.Resource(utils.GVR_SECRET).Namespace(namespace).Get(context.TODO(), name, metav1.GetOptions{})
+ if err != nil {
+ return nil, err
+ }
+
+ retObj := &corev1.Secret{}
+	if err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), retObj); err != nil {
+ return nil, err
+ }
+
+ return retObj, nil
+}
+
+// ListServices retrieves the list of services from Kubernetes.
+func (rm *envResourceManager) ListServices() ([]*corev1.Service, error) {
+ // return rm.serviceLister.List(labels.Everything())
+ objs, err := rm.DynamicRootClient.Resource(utils.GVR_SERVICE).List(context.TODO(), metav1.ListOptions{
+ LabelSelector: labels.Everything().String(),
+ })
+
+ if err != nil {
+ return nil, err
+ }
+
+ retObj := make([]*corev1.Service, 0)
+
+ for _, obj := range objs.Items {
+ tmpObj := &corev1.Service{}
+		if err := runtime.DefaultUnstructuredConverter.FromUnstructured(obj.UnstructuredContent(), tmpObj); err != nil {
+ return nil, err
+ }
+ retObj = append(retObj, tmpObj)
+ }
+
+ return retObj, nil
+}
+
+func NewEnvResourceManager(client dynamic.Interface) utils.EnvResourceManager {
+ return &envResourceManager{
+ DynamicRootClient: client,
+ }
+}
+
+func (r *RootPodReconciler) Reconcile(ctx context.Context, request reconcile.Request) (reconcile.Result, error) {
+ var cachepod corev1.Pod
+ if err := r.Get(ctx, request.NamespacedName, &cachepod); err != nil {
+ if errors.IsNotFound(err) {
+			// TODO: we cannot get the leaf pod when we don't know the pod's node name, so try deleting it in every leaf cluster
+ nodeNames := r.GlobalLeafManager.ListNodes()
+ for _, nodeName := range nodeNames {
+ lr, err := r.GlobalLeafManager.GetLeafResourceByNodeName(nodeName)
+ if err != nil {
+ // wait for leaf resource init
+ return reconcile.Result{RequeueAfter: RootPodRequeueTime}, nil
+ }
+ if err := r.DeletePodInLeafCluster(ctx, lr, request.NamespacedName, false); err != nil {
+ klog.Errorf("delete pod in leaf error[1]: %v, %s", err, request.NamespacedName)
+ return reconcile.Result{RequeueAfter: RootPodRequeueTime}, nil
+ }
+ }
+ return reconcile.Result{}, nil
+ }
+ klog.Errorf("get %s error: %v", request.NamespacedName, err)
+ return reconcile.Result{RequeueAfter: RootPodRequeueTime}, nil
+ }
+
+ rootpod := *(cachepod.DeepCopy())
+
+ // node filter
+ if !strings.HasPrefix(rootpod.Spec.NodeName, utils.KosmosNodePrefix) {
+		// ignore pods whose node does not have the "kosmos-io/owned-by-cluster" annotation
+		// TODO: use const
+ nn := types.NamespacedName{
+ Namespace: "",
+ Name: rootpod.Spec.NodeName,
+ }
+
+ targetNode := &corev1.Node{}
+ if err := r.RootClient.Get(ctx, nn, targetNode); err != nil {
+ return reconcile.Result{RequeueAfter: RootPodRequeueTime}, nil
+ }
+
+ if targetNode.Annotations == nil {
+ return reconcile.Result{}, nil
+ }
+
+ clusterName := targetNode.Annotations[utils.KosmosNodeOwnedByClusterAnnotations]
+
+ if len(clusterName) == 0 {
+ return reconcile.Result{}, nil
+ }
+ }
+
+	// TODO: GlobalLeafResourceManager may not be initialized yet
+ // belongs to the current node
+ if !r.GlobalLeafManager.HasNode(rootpod.Spec.NodeName) {
+ return reconcile.Result{RequeueAfter: RootPodRequeueTime}, nil
+ }
+
+ lr, err := r.GlobalLeafManager.GetLeafResourceByNodeName(rootpod.Spec.NodeName)
+ if err != nil {
+ // wait for leaf resource init
+ return reconcile.Result{RequeueAfter: RootPodRequeueTime}, nil
+ }
+
+ // skip namespace
+ if len(lr.Namespace) > 0 && lr.Namespace != rootpod.Namespace {
+ return reconcile.Result{}, nil
+ }
+
+ // delete pod in leaf
+ if !rootpod.GetDeletionTimestamp().IsZero() {
+ if err := r.DeletePodInLeafCluster(ctx, lr, request.NamespacedName, true); err != nil {
+			klog.Errorf("delete pod in leaf error[2]: %v, %s", err, request.NamespacedName)
+ return reconcile.Result{RequeueAfter: RootPodRequeueTime}, nil
+ }
+ return reconcile.Result{}, nil
+ }
+
+ leafPod := &corev1.Pod{}
+ err = lr.Client.Get(ctx, request.NamespacedName, leafPod)
+
+ // create pod in leaf
+ if err != nil {
+ if errors.IsNotFound(err) {
+ if err := r.CreatePodInLeafCluster(ctx, lr, &rootpod, r.GlobalLeafManager.GetClusterNode(rootpod.Spec.NodeName).LeafNodeSelector); err != nil {
+				klog.Errorf("create pod in leaf error, err: %v", err)
+ return reconcile.Result{RequeueAfter: RootPodRequeueTime}, nil
+ } else {
+ return reconcile.Result{}, nil
+ }
+ } else {
+ klog.Errorf("get pod in leaf error[3]: %v, %s", err, request.NamespacedName)
+ return reconcile.Result{RequeueAfter: RootPodRequeueTime}, nil
+ }
+ }
+
+ // update pod in leaf
+ if podutils.ShouldEnqueue(leafPod, &rootpod) {
+ if err := r.UpdatePodInLeafCluster(ctx, lr, &rootpod, leafPod, r.GlobalLeafManager.GetClusterNode(rootpod.Spec.NodeName).LeafNodeSelector); err != nil {
+ return reconcile.Result{RequeueAfter: RootPodRequeueTime}, nil
+ }
+ }
+
+ return reconcile.Result{}, nil
+}
+
+func (r *RootPodReconciler) SetupWithManager(mgr manager.Manager) error {
+ if r.Client == nil {
+ r.Client = mgr.GetClient()
+ }
+
+ r.envResourceManager = NewEnvResourceManager(r.DynamicRootClient)
+
+ skipFunc := func(obj client.Object) bool {
+ // skip reservedNS
+ if obj.GetNamespace() == utils.ReservedNS {
+ return false
+ }
+ // don't create pod if pod has label daemonset.kosmos.io/managed=""
+ if _, ok := obj.GetLabels()[daemonset.ManagedLabel]; ok {
+ return false
+ }
+
+ p := obj.(*corev1.Pod)
+
+ // skip daemonset
+		if len(p.OwnerReferences) > 0 {
+ for _, or := range p.OwnerReferences {
+ if or.Kind == "DaemonSet" {
+ if p.Annotations != nil {
+ if _, ok := p.Annotations[utils.KosmosDaemonsetAllowAnnotations]; ok {
+ return true
+ }
+ }
+ if !p.DeletionTimestamp.IsZero() {
+ return true
+ }
+ return false
+ }
+ }
+ }
+ return true
+ }
+
+ return ctrl.NewControllerManagedBy(mgr).
+ Named(RootPodControllerName).
+ WithOptions(controller.Options{}).
+ For(&corev1.Pod{}, builder.WithPredicates(predicate.Funcs{
+ CreateFunc: func(createEvent event.CreateEvent) bool {
+ return skipFunc(createEvent.Object)
+ },
+ UpdateFunc: func(updateEvent event.UpdateEvent) bool {
+ return skipFunc(updateEvent.ObjectNew)
+ },
+ DeleteFunc: func(deleteEvent event.DeleteEvent) bool {
+ return skipFunc(deleteEvent.Object)
+ },
+ GenericFunc: func(genericEvent event.GenericEvent) bool {
+ // TODO
+ return false
+ },
+ })).
+ Complete(r)
+}
+
+func (r *RootPodReconciler) createStorageInLeafCluster(ctx context.Context, lr *leafUtils.LeafResource, gvr schema.GroupVersionResource, resourcenames []string, rootpod *corev1.Pod, cn *leafUtils.ClusterNode) error {
+ ns := rootpod.Namespace
+ storageHandler, err := NewStorageHandler(gvr)
+ if err != nil {
+ return err
+ }
+ for _, rname := range resourcenames {
+ // add annotations for root
+ rootobj, err := r.DynamicRootClient.Resource(gvr).Namespace(ns).Get(ctx, rname, metav1.GetOptions{})
+ if err != nil {
+ return fmt.Errorf("could not get resource gvr(%v) %s from root cluster: %v", gvr, rname, err)
+ }
+ rootannotations := rootobj.GetAnnotations()
+ rootannotations = utils.AddResourceClusters(rootannotations, lr.ClusterName)
+
+ rootobj.SetAnnotations(rootannotations)
+
+ _, err = r.DynamicRootClient.Resource(gvr).Namespace(ns).Update(ctx, rootobj, metav1.UpdateOptions{})
+ if err != nil {
+ return fmt.Errorf("could not update annotations of resource gvr(%v) %s from root cluster: %v", gvr, rname, err)
+ }
+
+ // create resource in leaf cluster
+ _, err = lr.DynamicClient.Resource(gvr).Namespace(ns).Get(ctx, rname, metav1.GetOptions{})
+ if err == nil {
+ // already existed, so skip
+ continue
+ }
+ if errors.IsNotFound(err) {
+ unstructuredObj := rootobj
+
+ podutils.FitUnstructuredObjMeta(unstructuredObj)
+
+ if err := storageHandler.BeforeCreateInLeaf(ctx, r, lr, unstructuredObj, rootpod, cn); err != nil {
+ return err
+ }
+
+ podutils.SetUnstructuredObjGlobal(unstructuredObj)
+
+ _, err = lr.DynamicClient.Resource(gvr).Namespace(ns).Create(ctx, unstructuredObj, metav1.CreateOptions{})
+ if err != nil {
+ if errors.IsAlreadyExists(err) {
+ continue
+ }
+ klog.Errorf("Failed to create gvr(%v) %v err: %v", gvr, rname, err)
+ return err
+ }
+ klog.V(4).Infof("Create gvr(%v) %v in %v success", gvr, rname, ns)
+ continue
+ }
+ return fmt.Errorf("could not check gvr(%v) %s in external cluster: %v", gvr, rname, err)
+ }
+ return nil
+}
+
+func (r *RootPodReconciler) createSAInLeafCluster(ctx context.Context, lr *leafUtils.LeafResource, sa string, ns string) (*corev1.ServiceAccount, error) {
+ saKey := types.NamespacedName{
+ Namespace: ns,
+ Name: sa,
+ }
+
+ clientSA := &corev1.ServiceAccount{}
+ err := lr.Client.Get(ctx, saKey, clientSA)
+ if err != nil && !errors.IsNotFound(err) {
+ return nil, fmt.Errorf("could not check sa %s in member cluster: %v", sa, err)
+ }
+
+ if err == nil {
+ return clientSA, nil
+ }
+
+ newSA := &corev1.ServiceAccount{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: sa,
+ Namespace: ns,
+ },
+ }
+ err = lr.Client.Create(ctx, newSA)
+ if err != nil && !errors.IsAlreadyExists(err) {
+ return nil, fmt.Errorf("could not create sa %s in member cluster: %v", sa, err)
+ }
+
+ return newSA, nil
+}
+
+func (r *RootPodReconciler) createSATokenInLeafCluster(ctx context.Context, lr *leafUtils.LeafResource, saName string, ns string) (*corev1.Secret, error) {
+ satokenKey := types.NamespacedName{
+ Namespace: ns,
+ Name: saName,
+ }
+ sa := &corev1.ServiceAccount{}
+ err := r.RootClient.Get(ctx, satokenKey, sa)
+ if err != nil {
+ return nil, fmt.Errorf("could not find sa %s in master cluster: %v", saName, err)
+ }
+
+ var secretName string
+ if len(sa.Secrets) > 0 {
+ secretName = sa.Secrets[0].Name
+ }
+
+ csName := fmt.Sprintf("master-%s-token", sa.Name)
+ csKey := types.NamespacedName{
+ Namespace: ns,
+ Name: csName,
+ }
+ clientSecret := &corev1.Secret{}
+ err = lr.Client.Get(ctx, csKey, clientSecret)
+ if err != nil && !errors.IsNotFound(err) {
+ return nil, fmt.Errorf("could not check secret %s in member cluster: %v", secretName, err)
+ }
+ if err == nil {
+ return clientSecret, nil
+ }
+
+ secretKey := types.NamespacedName{
+ Namespace: ns,
+ Name: secretName,
+ }
+
+ masterSecret := &corev1.Secret{}
+ err = r.RootClient.Get(ctx, secretKey, masterSecret)
+ if err != nil {
+ return nil, fmt.Errorf("could not find secret %s in master cluster: %v", secretName, err)
+ }
+
+ nData := map[string][]byte{}
+ nData["token"] = masterSecret.Data["token"]
+
+ newSE := &corev1.Secret{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: csName,
+ Namespace: ns,
+ },
+ Data: nData,
+ }
+ err = lr.Client.Create(ctx, newSE)
+
+ if err != nil && !errors.IsAlreadyExists(err) {
+		return nil, fmt.Errorf("could not create secret %s in member cluster: %v", csName, err)
+ }
+ return newSE, nil
+}
+
+func (r *RootPodReconciler) createCAInLeafCluster(ctx context.Context, lr *leafUtils.LeafResource, ns string) (*corev1.ConfigMap, error) {
+ masterCAConfigmapKey := types.NamespacedName{
+ Namespace: ns,
+ Name: utils.MasterRooTCAName,
+ }
+
+ masterCA := &corev1.ConfigMap{}
+
+ err := lr.Client.Get(ctx, masterCAConfigmapKey, masterCA)
+ if err != nil && !errors.IsNotFound(err) {
+ return nil, fmt.Errorf("could not check configmap %s in member cluster: %v", utils.MasterRooTCAName, err)
+ }
+ if err == nil {
+ return masterCA, nil
+ }
+
+ ca := &corev1.ConfigMap{}
+
+ rootCAConfigmapKey := types.NamespacedName{
+ Namespace: ns,
+ Name: utils.RooTCAConfigMapName,
+ }
+
+ err = r.Client.Get(ctx, rootCAConfigmapKey, ca)
+ if err != nil {
+		return nil, fmt.Errorf("could not find configmap %s in master cluster: %v", utils.RooTCAConfigMapName, err)
+ }
+
+ newCA := ca.DeepCopy()
+ newCA.Name = utils.MasterRooTCAName
+ podutils.FitObjectMeta(&newCA.ObjectMeta)
+
+ err = lr.Client.Create(ctx, newCA)
+ if err != nil && !errors.IsAlreadyExists(err) {
+ return nil, fmt.Errorf("could not create configmap %s in member cluster: %v", newCA.Name, err)
+ }
+
+ return newCA, nil
+}
+
+// changeToMasterCoreDNS point the dns of the pod to the master cluster, so that the pod can access any service.
+// The master cluster holds all the services in the multi-cluster.
+// changeToMasterCoreDNS points the pod's DNS to the master cluster, so that the pod can access any service.
+ if pod.Spec.DNSPolicy != corev1.DNSClusterFirst && pod.Spec.DNSPolicy != corev1.DNSClusterFirstWithHostNet {
+ return
+ }
+
+ ns := pod.Namespace
+ svc := &corev1.Service{}
+ err := r.RootClient.Get(ctx, types.NamespacedName{Namespace: opts.RootCoreDNSServiceNamespace, Name: opts.RootCoreDNSServiceName}, svc)
+ if err != nil {
+ return
+ }
+ if svc != nil && svc.Spec.ClusterIP != "" {
+		pod.Spec.DNSPolicy = corev1.DNSNone
+ dnsConfig := corev1.PodDNSConfig{
+ Nameservers: []string{
+ svc.Spec.ClusterIP,
+ },
+ // TODO, if the master domain is changed, an exception will occur
+ Searches: []string{
+ fmt.Sprintf("%s.svc.cluster.local", ns),
+ "svc.cluster.local",
+ "cluster.local",
+ "localdomain",
+ },
+ }
+ pod.Spec.DNSConfig = &dnsConfig
+ }
+}
+
+func (r *RootPodReconciler) convertAuth(ctx context.Context, lr *leafUtils.LeafResource, pod *corev1.Pod) {
+ if pod.Spec.AutomountServiceAccountToken == nil || *pod.Spec.AutomountServiceAccountToken {
+ falseValue := false
+ pod.Spec.AutomountServiceAccountToken = &falseValue
+
+ sa := pod.Spec.ServiceAccountName
+ _, err := r.createSAInLeafCluster(ctx, lr, sa, pod.Namespace)
+ if err != nil {
+			klog.Errorf("[convertAuth] create sa failed, ns: %s, pod: %s, err: %v", pod.Namespace, pod.Name, err)
+ return
+ }
+
+ se, err := r.createSATokenInLeafCluster(ctx, lr, sa, pod.Namespace)
+ if err != nil {
+			klog.Errorf("[convertAuth] create sa token secret failed, ns: %s, pod: %s, err: %v", pod.Namespace, pod.Name, err)
+ return
+ }
+
+ rootCA, err := r.createCAInLeafCluster(ctx, lr, pod.Namespace)
+ if err != nil {
+			klog.Errorf("[convertAuth] create ca configmap failed, ns: %s, pod: %s, err: %v", pod.Namespace, pod.Name, err)
+ return
+ }
+
+ volumes := pod.Spec.Volumes
+ for _, v := range volumes {
+ if strings.HasPrefix(v.Name, utils.SATokenPrefix) {
+ sources := []corev1.VolumeProjection{}
+ for _, src := range v.Projected.Sources {
+ if src.ServiceAccountToken != nil {
+ continue
+ }
+ if src.ConfigMap != nil && src.ConfigMap.Name == utils.RooTCAConfigMapName {
+ src.ConfigMap.Name = rootCA.Name
+ }
+ sources = append(sources, src)
+ }
+
+ secretProjection := corev1.VolumeProjection{
+ Secret: &corev1.SecretProjection{
+ Items: []corev1.KeyToPath{
+ {
+ Key: "token",
+ Path: "token",
+ },
+ },
+ },
+ }
+ secretProjection.Secret.Name = se.Name
+ sources = append(sources, secretProjection)
+ v.Projected.Sources = sources
+ }
+ }
+ }
+}
+
+func (r *RootPodReconciler) createServiceAccountInLeafCluster(ctx context.Context, lr *leafUtils.LeafResource, secret *corev1.Secret) error {
+ if !lr.EnableServiceAccount {
+ return nil
+ }
+ if secret.Annotations == nil {
+ return fmt.Errorf("parse secret service account error")
+ }
+ klog.V(4).Infof("secret service-account info: [%v]", secret.Annotations)
+ accountName := secret.Annotations[corev1.ServiceAccountNameKey]
+	if accountName == "" {
+		return fmt.Errorf("service account name of secret %s does not exist: [%v]",
+			secret.Name, secret.Annotations)
+	}
+
+ ns := secret.Namespace
+ sa := &corev1.ServiceAccount{}
+ saKey := types.NamespacedName{
+ Namespace: ns,
+ Name: accountName,
+ }
+
+ err := lr.Client.Get(ctx, saKey, sa)
+ if err != nil || sa == nil {
+		klog.V(4).Infof("get serviceAccount [%v] err: [%v]", sa, err)
+ sa = &corev1.ServiceAccount{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: accountName,
+ Namespace: ns,
+ },
+ }
+		err := lr.Client.Create(ctx, sa)
+		if err != nil {
+			klog.Errorf("create serviceAccount [%v] err: [%v]", sa, err)
+			if errors.IsAlreadyExists(err) {
+				return nil
+			}
+			return err
+		}
+ } else {
+ klog.V(4).Infof("get secret serviceAccount info: [%s] [%v] [%v] [%v]",
+ sa.Name, sa.CreationTimestamp, sa.Annotations, sa.UID)
+ }
+ secret.UID = sa.UID
+ secret.Annotations[corev1.ServiceAccountNameKey] = accountName
+ secret.Annotations[corev1.ServiceAccountUIDKey] = string(sa.UID)
+
+ secret.ObjectMeta.Namespace = ns
+
+ err = lr.Client.Create(ctx, secret)
+
+	if err != nil {
+		if errors.IsAlreadyExists(err) {
+			return nil
+		}
+		klog.Errorf("Failed to create secret %v err: %v", secret.Name, err)
+		return err
+	}
+
+ sa.Secrets = []corev1.ObjectReference{{Name: secret.Name}}
+
+ err = lr.Client.Update(ctx, sa)
+ if err != nil {
+		klog.V(4).Infof("update serviceAccount [%v] err: [%v]", sa, err)
+ return err
+ }
+ return nil
+}
+
+func (r *RootPodReconciler) createVolumes(ctx context.Context, lr *leafUtils.LeafResource, basicPod *corev1.Pod, clusterNodeInfo *leafUtils.ClusterNode) error {
+ // create secret configmap pvc
+ secretNames, imagePullSecrets := podutils.GetSecrets(basicPod)
+ configMaps := podutils.GetConfigmaps(basicPod)
+ pvcs := podutils.GetPVCs(basicPod)
+
+ ch := make(chan string, 3)
+
+ // configmap
+ go func() {
+ if err := wait.PollImmediate(500*time.Millisecond, 30*time.Second, func() (bool, error) {
+			klog.V(4).Info("Trying to create dependent configmaps")
+ if err := r.createStorageInLeafCluster(ctx, lr, utils.GVR_CONFIGMAP, configMaps, basicPod, clusterNodeInfo); err != nil {
+ klog.Error(err)
+ return false, nil
+ }
+ klog.V(4).Infof("Create configmaps %v of %v/%v success", configMaps, basicPod.Namespace, basicPod.Name)
+ return true, nil
+		}); err != nil {
+			ch <- fmt.Sprintf("create configmap failed: %v", err)
+			return
+		}
+		ch <- ""
+ }()
+
+ // pvc
+ go func() {
+ if err := wait.PollImmediate(500*time.Millisecond, 30*time.Second, func() (bool, error) {
+ if !r.Options.OnewayStorageControllers {
+				klog.V(4).Info("Trying to create dependent pvcs")
+ if err := r.createStorageInLeafCluster(ctx, lr, utils.GVR_PVC, pvcs, basicPod, clusterNodeInfo); err != nil {
+ klog.Error(err)
+ return false, nil
+ }
+ klog.V(4).Infof("Create pvc %v of %v/%v success", pvcs, basicPod.Namespace, basicPod.Name)
+ }
+ return true, nil
+		}); err != nil {
+			ch <- fmt.Sprintf("create pvc failed: %v", err)
+			return
+		}
+		ch <- ""
+ }()
+
+ // secret
+ go func() {
+ if err := wait.PollImmediate(500*time.Millisecond, 10*time.Second, func() (bool, error) {
+			klog.V(4).Info("Trying to create secrets")
+ if err := r.createStorageInLeafCluster(ctx, lr, utils.GVR_SECRET, secretNames, basicPod, clusterNodeInfo); err != nil {
+ klog.Error(err)
+ return false, nil
+ }
+
+ // try to create image pull secrets, ignore err
+ if errignore := r.createStorageInLeafCluster(ctx, lr, utils.GVR_SECRET, imagePullSecrets, basicPod, clusterNodeInfo); errignore != nil {
+ klog.Warning(errignore)
+ }
+ return true, nil
+		}); err != nil {
+			ch <- fmt.Sprintf("create secrets failed: %v", err)
+			return
+		}
+		ch <- ""
+ }()
+
+ t1 := <-ch
+ t2 := <-ch
+ t3 := <-ch
+
+ errString := ""
+ errs := []string{t1, t2, t3}
+ for i := range errs {
+ if len(errs[i]) > 0 {
+ errString = errString + errs[i]
+ }
+ }
+
+ if len(errString) > 0 {
+ return fmt.Errorf("%s", errString)
+ }
+
+ return nil
+}
+
+func (r *RootPodReconciler) CreatePodInLeafCluster(ctx context.Context, lr *leafUtils.LeafResource, pod *corev1.Pod, nodeSelector kosmosv1alpha1.NodeSelector) error {
+ if err := podutils.PopulateEnvironmentVariables(ctx, pod, r.envResourceManager); err != nil {
+ // span.SetStatus(err)
+ return err
+ }
+
+ clusterNodeInfo := r.GlobalLeafManager.GetClusterNode(pod.Spec.NodeName)
+ if clusterNodeInfo == nil {
+		return fmt.Errorf("clusternode info is nil, name: %s", pod.Spec.NodeName)
+ }
+
+ basicPod := podutils.FitPod(pod, lr.IgnoreLabels, clusterNodeInfo.LeafMode, nodeSelector)
+ klog.V(4).Infof("Creating pod %v/%+v", pod.Namespace, pod.Name)
+
+ // create ns
+ ns := &corev1.Namespace{}
+ nsKey := types.NamespacedName{
+ Name: basicPod.Namespace,
+ }
+ if err := lr.Client.Get(ctx, nsKey, ns); err != nil {
+ if !errors.IsNotFound(err) {
+ // cannot get ns in root cluster, retry
+ return err
+ }
+ klog.V(4).Infof("Namespace %s does not exist for pod %s, creating it", basicPod.Namespace, basicPod.Name)
+ ns := &corev1.Namespace{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: basicPod.Namespace,
+ },
+ }
+
+		if createErr := lr.Client.Create(ctx, ns); createErr != nil {
+			if !errors.IsAlreadyExists(createErr) {
+				klog.V(4).Infof("Namespace %s create failed error: %v", basicPod.Namespace, createErr)
+				return createErr
+			}
+			// namespace already exists, skip creating it
+			klog.V(4).Infof("Namespace %s already exists: %v", basicPod.Namespace, createErr)
+		}
+ }
+
+	if err := r.createVolumes(ctx, lr, basicPod, clusterNodeInfo); err != nil {
+		klog.Errorf("Creating volumes error %+v", basicPod)
+		return err
+	}
+	klog.V(4).Infof("Creating volumes succeeded %+v", basicPod)
+
+ r.convertAuth(ctx, lr, basicPod)
+
+ if !r.Options.MultiClusterService {
+ r.changeToMasterCoreDNS(ctx, basicPod, r.Options)
+ }
+
+ klog.V(4).Infof("Creating pod %+v", basicPod)
+
+ err := lr.Client.Create(ctx, basicPod)
+ if err != nil {
+ return fmt.Errorf("could not create pod: %v", err)
+ }
+ klog.V(4).Infof("Create pod %v/%+v success", basicPod.Namespace, basicPod.Name)
+ return nil
+}
+
+func (r *RootPodReconciler) UpdatePodInLeafCluster(ctx context.Context, lr *leafUtils.LeafResource, rootPod *corev1.Pod, leafPod *corev1.Pod, nodeSelector kosmosv1alpha1.NodeSelector) error {
+ // TODO: update env
+ // TODO: update config secret pv pvc ...
+ klog.V(4).Infof("Updating pod %v/%+v", rootPod.Namespace, rootPod.Name)
+
+ if !podutils.IsKosmosPod(leafPod) {
+ klog.V(4).Info("Pod is not created by kosmos tree, ignore")
+ return nil
+ }
+ // not used
+ podutils.FitLabels(leafPod.ObjectMeta.Labels, lr.IgnoreLabels)
+ podCopy := leafPod.DeepCopy()
+ // util.GetUpdatedPod update PodCopy container image, annotations, labels.
+ // recover toleration, affinity, tripped ignore labels.
+ clusterNodeInfo := r.GlobalLeafManager.GetClusterNode(rootPod.Spec.NodeName)
+ if clusterNodeInfo == nil {
+		return fmt.Errorf("clusternode info is nil, name: %s", rootPod.Spec.NodeName)
+ }
+ podutils.GetUpdatedPod(podCopy, rootPod, lr.IgnoreLabels, clusterNodeInfo.LeafMode, nodeSelector)
+ if reflect.DeepEqual(leafPod.Spec, podCopy.Spec) &&
+ reflect.DeepEqual(leafPod.Annotations, podCopy.Annotations) &&
+ reflect.DeepEqual(leafPod.Labels, podCopy.Labels) {
+ return nil
+ }
+
+ r.convertAuth(ctx, lr, podCopy)
+
+ if !r.Options.MultiClusterService {
+ r.changeToMasterCoreDNS(ctx, podCopy, r.Options)
+ }
+
+ klog.V(4).Infof("Updating pod %+v", podCopy)
+
+ err := lr.Client.Update(ctx, podCopy)
+ if err != nil {
+ return fmt.Errorf("could not update pod: %v", err)
+ }
+ klog.V(4).Infof("Update pod %v/%v success", rootPod.Namespace, rootPod.Name)
+ return nil
+}
+
+func (r *RootPodReconciler) DeletePodInLeafCluster(ctx context.Context, lr *leafUtils.LeafResource, rootnamespacedname types.NamespacedName, cleanflag bool) error {
+ klog.V(4).Infof("Deleting pod %v/%+v", rootnamespacedname.Namespace, rootnamespacedname.Name)
+ leafPod := &corev1.Pod{}
+
+ cleanRootPodFunc := func() error {
+ return DeletePodInRootCluster(ctx, rootnamespacedname, r.Client)
+ }
+
+ err := lr.Client.Get(ctx, rootnamespacedname, leafPod)
+
+ if err != nil {
+ if errors.IsNotFound(err) {
+ if cleanflag {
+ return cleanRootPodFunc()
+ }
+ return nil
+ }
+ return err
+ }
+
+ if !podutils.IsKosmosPod(leafPod) {
+ klog.V(4).Info("Pod is not created by kosmos tree, ignore")
+ return nil
+ }
+
+ deleteOption := NewLeafDeleteOption(leafPod)
+ err = lr.Client.Delete(ctx, leafPod, deleteOption)
+ if err != nil {
+ if errors.IsNotFound(err) {
+ klog.V(4).Infof("Tried to delete pod %s/%s, but it did not exist in the cluster", leafPod.Namespace, leafPod.Name)
+ if cleanflag {
+ return cleanRootPodFunc()
+ }
+ return nil
+ }
+ return fmt.Errorf("could not delete pod: %v", err)
+ }
+ klog.V(4).Infof("Delete pod %v/%+v success", leafPod.Namespace, leafPod.Name)
+ return nil
+}
diff --git a/pkg/clustertree/cluster-manager/controllers/pod/storage_handler.go b/pkg/clustertree/cluster-manager/controllers/pod/storage_handler.go
new file mode 100644
index 000000000..9e4c06565
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/controllers/pod/storage_handler.go
@@ -0,0 +1,77 @@
+package pod
+
+import (
+ "context"
+ "fmt"
+
+ corev1 "k8s.io/api/core/v1"
+ "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+ "k8s.io/apimachinery/pkg/runtime"
+ "k8s.io/apimachinery/pkg/runtime/schema"
+ "k8s.io/klog/v2"
+
+ leafUtils "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/utils"
+ "github.com/kosmos.io/kosmos/pkg/utils"
+)
+
+type StorageHandler interface {
+ BeforeCreateInLeaf(context.Context, *RootPodReconciler, *leafUtils.LeafResource, *unstructured.Unstructured, *corev1.Pod, *leafUtils.ClusterNode) error
+}
+
+func NewStorageHandler(gvr schema.GroupVersionResource) (StorageHandler, error) {
+ switch gvr.Resource {
+ case utils.GVR_CONFIGMAP.Resource:
+ return &ConfigMapHandler{}, nil
+ case utils.GVR_SECRET.Resource:
+ return &SecretHandler{}, nil
+ case utils.GVR_PVC.Resource:
+ return &PVCHandler{}, nil
+ }
+ return nil, fmt.Errorf("unsupported gvr type %q when creating storage handler", gvr.Resource)
+}
+
+type ConfigMapHandler struct {
+}
+
+func (c *ConfigMapHandler) BeforeCreateInLeaf(context.Context, *RootPodReconciler, *leafUtils.LeafResource, *unstructured.Unstructured, *corev1.Pod, *leafUtils.ClusterNode) error {
+ return nil
+}
+
+type SecretHandler struct {
+}
+
+func (s *SecretHandler) BeforeCreateInLeaf(ctx context.Context, r *RootPodReconciler, lr *leafUtils.LeafResource, unstructuredObj *unstructured.Unstructured, rootpod *corev1.Pod, _ *leafUtils.ClusterNode) error {
+ secretObj := &corev1.Secret{}
+ err := runtime.DefaultUnstructuredConverter.FromUnstructured(unstructuredObj.Object, secretObj)
+ if err != nil {
+ return fmt.Errorf("could not convert unstructured object to secret: %v", err)
+ }
+ if secretObj.Type == corev1.SecretTypeServiceAccountToken {
+ if err := r.createServiceAccountInLeafCluster(ctx, lr, secretObj); err != nil {
+ klog.Error(err)
+ return err
+ }
+ }
+ return nil
+}
+
+type PVCHandler struct {
+}
+
+func (v *PVCHandler) BeforeCreateInLeaf(_ context.Context, _ *RootPodReconciler, lr *leafUtils.LeafResource, unstructuredObj *unstructured.Unstructured, rootpod *corev1.Pod, cn *leafUtils.ClusterNode) error {
+ if rootpod == nil || len(rootpod.Spec.NodeName) == 0 {
+ return nil
+ }
+ annotationsMap := unstructuredObj.GetAnnotations()
+ if annotationsMap == nil {
+ annotationsMap = map[string]string{}
+ }
+ // TODO: rootpod.Spec.NodeSelector
+ if cn.LeafMode == leafUtils.Node {
+ annotationsMap[utils.PVCSelectedNodeKey] = rootpod.Spec.NodeName
+ }
+
+ unstructuredObj.SetAnnotations(annotationsMap)
+
+ return nil
+}
diff --git a/pkg/clustertree/cluster-manager/controllers/pv/leaf_pv_controller.go b/pkg/clustertree/cluster-manager/controllers/pv/leaf_pv_controller.go
new file mode 100644
index 000000000..f41ab3406
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/controllers/pv/leaf_pv_controller.go
@@ -0,0 +1,227 @@
+package pv
+
+import (
+ "context"
+ "time"
+
+ v1 "k8s.io/api/core/v1"
+ "k8s.io/apimachinery/pkg/api/errors"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/types"
+ mergetypes "k8s.io/apimachinery/pkg/types"
+ "k8s.io/client-go/kubernetes"
+ "k8s.io/klog"
+ ctrl "sigs.k8s.io/controller-runtime"
+ "sigs.k8s.io/controller-runtime/pkg/builder"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+ "sigs.k8s.io/controller-runtime/pkg/controller"
+ "sigs.k8s.io/controller-runtime/pkg/event"
+ "sigs.k8s.io/controller-runtime/pkg/manager"
+ "sigs.k8s.io/controller-runtime/pkg/predicate"
+ "sigs.k8s.io/controller-runtime/pkg/reconcile"
+
+ "github.com/kosmos.io/kosmos/pkg/utils"
+)
+
+const (
+ LeafPVControllerName = "leaf-pv-controller"
+ LeafPVRequeueTime = 10 * time.Second
+)
+
+type LeafPVController struct {
+ LeafClient client.Client
+ RootClient client.Client
+ RootClientSet kubernetes.Interface
+ ClusterName string
+ IsOne2OneMode bool
+}
+
+func (l *LeafPVController) Reconcile(ctx context.Context, request reconcile.Request) (reconcile.Result, error) {
+ pv := &v1.PersistentVolume{}
+ pvNeedDelete := false
+ err := l.LeafClient.Get(ctx, request.NamespacedName, pv)
+ if err != nil {
+ if !errors.IsNotFound(err) {
+ klog.Errorf("get pv from leaf cluster failed, error: %v", err)
+ return reconcile.Result{RequeueAfter: LeafPVRequeueTime}, nil
+ }
+ pvNeedDelete = true
+ }
+
+ pvCopy := pv.DeepCopy()
+ rootPV := &v1.PersistentVolume{}
+ err = l.RootClient.Get(ctx, request.NamespacedName, rootPV)
+ if err != nil {
+ if !errors.IsNotFound(err) {
+ klog.Errorf("get root pv failed, error: %v", err)
+ return reconcile.Result{RequeueAfter: LeafPVRequeueTime}, nil
+ }
+
+ if pvNeedDelete || pv.DeletionTimestamp != nil {
+ return reconcile.Result{}, nil
+ }
+
+ if pvCopy.Spec.ClaimRef != nil {
+ tmpPVC := &v1.PersistentVolumeClaim{}
+ nn := types.NamespacedName{
+ Name: pvCopy.Spec.ClaimRef.Name,
+ Namespace: pvCopy.Spec.ClaimRef.Namespace,
+ }
+ err := l.LeafClient.Get(ctx, nn, tmpPVC)
+ if err != nil {
+ if !errors.IsNotFound(err) {
+ klog.Errorf("get tmp pvc failed, error: %v", err)
+ return reconcile.Result{RequeueAfter: LeafPVRequeueTime}, nil
+ }
+ klog.Warningf("tmp pvc does not exist, error: %v", err)
+ return reconcile.Result{}, nil
+ }
+ if !utils.IsObjectGlobal(&tmpPVC.ObjectMeta) {
+ return reconcile.Result{}, nil
+ }
+ } else {
+ klog.Warningf("can't find pvc for pv %s, claimRef is nil", pv.Name)
+ return reconcile.Result{}, nil
+ }
+
+ rootPV = pv.DeepCopy()
+ filterPV(rootPV, utils.NodeAffinity4RootPV(pv, l.IsOne2OneMode, l.ClusterName))
+ nn := types.NamespacedName{
+ Name: rootPV.Spec.ClaimRef.Name,
+ Namespace: rootPV.Spec.ClaimRef.Namespace,
+ }
+
+ rootPVC := &v1.PersistentVolumeClaim{}
+ err := l.RootClient.Get(ctx, nn, rootPVC)
+ if err != nil {
+ if !errors.IsNotFound(err) {
+ klog.Errorf("get root pvc failed, error: %v", err)
+ }
+ return reconcile.Result{}, nil
+ }
+
+ rootPV.Spec.ClaimRef.UID = rootPVC.UID
+ rootPV.Spec.ClaimRef.ResourceVersion = rootPVC.ResourceVersion
+ utils.AddResourceClusters(rootPV.Annotations, l.ClusterName)
+
+ rootPV, err = l.RootClientSet.CoreV1().PersistentVolumes().Create(ctx, rootPV, metav1.CreateOptions{})
+ if err != nil || rootPV == nil {
+ klog.Errorf("create pv in root cluster failed, error: %v", err)
+ return reconcile.Result{RequeueAfter: LeafPVRequeueTime}, nil
+ }
+
+ return reconcile.Result{}, nil
+ }
+
+ if !utils.HasResourceClusters(rootPV.Annotations, l.ClusterName) {
+ klog.Errorf("root pv %q has the same name but is not owned by cluster %s, skip", request.NamespacedName.Name, l.ClusterName)
+ return reconcile.Result{}, nil
+ }
+
+ if pvNeedDelete || pv.DeletionTimestamp != nil {
+ if err = l.RootClientSet.CoreV1().PersistentVolumes().Delete(ctx, request.NamespacedName.Name, metav1.DeleteOptions{}); err != nil {
+ if !errors.IsNotFound(err) {
+ klog.Errorf("delete root pv failed, error: %v", err)
+ return reconcile.Result{RequeueAfter: LeafPVRequeueTime}, nil
+ }
+ }
+ klog.V(4).Infof("root pv name: %q deleted", request.NamespacedName.Name)
+ return reconcile.Result{}, nil
+ }
+
+ filterPV(rootPV, utils.NodeAffinity4RootPV(pv, l.IsOne2OneMode, l.ClusterName))
+ if pvCopy.Spec.ClaimRef != nil {
+ nn := types.NamespacedName{
+ Name: pvCopy.Spec.ClaimRef.Name,
+ Namespace: pvCopy.Spec.ClaimRef.Namespace,
+ }
+ rootPVC := &v1.PersistentVolumeClaim{}
+ err := l.RootClient.Get(ctx, nn, rootPVC)
+ if err != nil {
+ if !errors.IsNotFound(err) {
+ klog.Errorf("get root pvc failed, error: %v", err)
+ }
+ return reconcile.Result{}, nil
+ }
+
+ pvCopy.Spec.ClaimRef.UID = rootPVC.UID
+ pvCopy.Spec.ClaimRef.ResourceVersion = rootPVC.ResourceVersion
+ }
+
+ klog.V(4).Infof("root pv %+v\n, leaf pv %+v", rootPV, pvCopy)
+ pvCopy.Spec.NodeAffinity = rootPV.Spec.NodeAffinity
+ pvCopy.UID = rootPV.UID
+ pvCopy.ResourceVersion = rootPV.ResourceVersion
+ utils.AddResourceClusters(pvCopy.Annotations, l.ClusterName)
+
+ if utils.IsPVEqual(rootPV, pvCopy) {
+ return reconcile.Result{}, nil
+ }
+ patch, err := utils.CreateMergePatch(rootPV, pvCopy)
+ if err != nil {
+ klog.Errorf("patch pv error: %v", err)
+ return reconcile.Result{}, err
+ }
+ _, err = l.RootClientSet.CoreV1().PersistentVolumes().Patch(ctx, rootPV.Name, mergetypes.MergePatchType, patch, metav1.PatchOptions{})
+ if err != nil {
+ klog.Errorf("patch pv namespace: %q, name: %q to root cluster failed, error: %v",
+ request.NamespacedName.Namespace, request.NamespacedName.Name, err)
+ return reconcile.Result{RequeueAfter: LeafPVRequeueTime}, nil
+ }
+ return reconcile.Result{}, nil
+}
+
+func (l *LeafPVController) SetupWithManager(mgr manager.Manager) error {
+ return ctrl.NewControllerManagedBy(mgr).
+ Named(LeafPVControllerName).
+ WithOptions(controller.Options{}).
+ For(&v1.PersistentVolume{}, builder.WithPredicates(predicate.Funcs{
+ CreateFunc: func(createEvent event.CreateEvent) bool {
+ return true
+ },
+ UpdateFunc: func(updateEvent event.UpdateEvent) bool {
+ return true
+ },
+ DeleteFunc: func(deleteEvent event.DeleteEvent) bool {
+ return true
+ },
+ GenericFunc: func(genericEvent event.GenericEvent) bool {
+ return false
+ },
+ })).
+ Complete(l)
+}
+
+func filterPV(pv *v1.PersistentVolume, nodeName string) {
+ pv.ObjectMeta.UID = ""
+ pv.ObjectMeta.ResourceVersion = ""
+ pv.ObjectMeta.OwnerReferences = nil
+
+ if pv.Annotations == nil {
+ pv.Annotations = make(map[string]string)
+ }
+ if pv.Spec.NodeAffinity == nil || pv.Spec.NodeAffinity.Required == nil {
+ return
+ }
+
+ selectors := pv.Spec.NodeAffinity.Required.NodeSelectorTerms
+ for k, v := range pv.Spec.NodeAffinity.Required.NodeSelectorTerms {
+ mfs := v.MatchFields
+ mes := v.MatchExpressions
+ for k, val := range v.MatchFields {
+ if val.Key == utils.NodeHostnameValue || val.Key == utils.NodeHostnameValueBeta {
+ val.Values = []string{nodeName}
+ }
+ mfs[k] = val
+ }
+ for k, val := range v.MatchExpressions {
+ if val.Key == utils.NodeHostnameValue || val.Key == utils.NodeHostnameValueBeta {
+ val.Values = []string{nodeName}
+ }
+ mes[k] = val
+ }
+ selectors[k].MatchFields = mfs
+ selectors[k].MatchExpressions = mes
+ }
+ pv.Spec.NodeAffinity.Required.NodeSelectorTerms = selectors
+}
diff --git a/pkg/clustertree/cluster-manager/controllers/pv/oneway_pv_controller.go b/pkg/clustertree/cluster-manager/controllers/pv/oneway_pv_controller.go
new file mode 100644
index 000000000..65e6ff62d
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/controllers/pv/oneway_pv_controller.go
@@ -0,0 +1,206 @@
+package pv
+
+import (
+ "context"
+ "time"
+
+ corev1 "k8s.io/api/core/v1"
+ "k8s.io/apimachinery/pkg/api/errors"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+ "k8s.io/apimachinery/pkg/runtime/schema"
+ "k8s.io/apimachinery/pkg/types"
+ "k8s.io/client-go/dynamic"
+ "k8s.io/klog"
+ controllerruntime "sigs.k8s.io/controller-runtime"
+ "sigs.k8s.io/controller-runtime/pkg/builder"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+ "sigs.k8s.io/controller-runtime/pkg/controller"
+ "sigs.k8s.io/controller-runtime/pkg/event"
+ "sigs.k8s.io/controller-runtime/pkg/manager"
+ "sigs.k8s.io/controller-runtime/pkg/predicate"
+ "sigs.k8s.io/controller-runtime/pkg/reconcile"
+
+ leafUtils "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/utils"
+ "github.com/kosmos.io/kosmos/pkg/utils"
+)
+
+const (
+ controllerName = "oneway-pv-controller"
+ requeueTime = 10 * time.Second
+ quickRequeueTime = 3 * time.Second
+ csiDriverName = "infini.volumepath.csi"
+)
+
+var VolumePathGVR = schema.GroupVersionResource{
+ Version: "v1alpha1",
+ Group: "lvm.infinilabs.com",
+ Resource: "volumepaths",
+}
+
+type OnewayPVController struct {
+ Root client.Client
+ RootDynamic dynamic.Interface
+ GlobalLeafManager leafUtils.LeafResourceManager
+}
+
+func (c *OnewayPVController) SetupWithManager(mgr manager.Manager) error {
+ predicatesFunc := predicate.Funcs{
+ CreateFunc: func(createEvent event.CreateEvent) bool {
+ curr := createEvent.Object.(*corev1.PersistentVolume)
+ return curr.Spec.CSI != nil && curr.Spec.CSI.Driver == csiDriverName
+ },
+ UpdateFunc: func(updateEvent event.UpdateEvent) bool {
+ curr := updateEvent.ObjectNew.(*corev1.PersistentVolume)
+ return curr.Spec.CSI != nil && curr.Spec.CSI.Driver == csiDriverName
+ },
+ DeleteFunc: func(deleteEvent event.DeleteEvent) bool {
+ curr := deleteEvent.Object.(*corev1.PersistentVolume)
+ return curr.Spec.CSI != nil && curr.Spec.CSI.Driver == csiDriverName
+ },
+ GenericFunc: func(genericEvent event.GenericEvent) bool {
+ return false
+ },
+ }
+
+ return controllerruntime.NewControllerManagedBy(mgr).
+ Named(controllerName).
+ WithOptions(controller.Options{}).
+ For(&corev1.PersistentVolume{}, builder.WithPredicates(predicatesFunc)).
+ Complete(c)
+}
+
+func (c *OnewayPVController) Reconcile(ctx context.Context, request reconcile.Request) (reconcile.Result, error) {
+ klog.V(4).Infof("============ %s starts to reconcile %s ============", controllerName, request.Name)
+ defer func() {
+ klog.V(4).Infof("============ %s has been reconciled =============", request.Name)
+ }()
+
+ pv := &corev1.PersistentVolume{}
+ pvErr := c.Root.Get(ctx, types.NamespacedName{Name: request.Name}, pv)
+ if pvErr != nil && !errors.IsNotFound(pvErr) {
+ klog.Errorf("get pv %s error: %v", request.Name, pvErr)
+ return reconcile.Result{RequeueAfter: requeueTime}, nil
+ }
+
+ // volumePath has the same name with pv
+ vp, err := c.RootDynamic.Resource(VolumePathGVR).Get(ctx, request.Name, metav1.GetOptions{})
+ if err != nil {
+ if errors.IsNotFound(err) {
+ klog.V(4).Infof("vp %s not found", request.Name)
+ return reconcile.Result{RequeueAfter: requeueTime}, nil
+ }
+ klog.Errorf("get volumePath %s error: %v", request.Name, err)
+ return reconcile.Result{RequeueAfter: requeueTime}, nil
+ }
+
+ nodeName, _, _ := unstructured.NestedString(vp.Object, "spec", "node")
+ if nodeName == "" {
+ klog.Warningf("vp %s's nodeName is empty, skip", request.Name)
+ return reconcile.Result{}, nil
+ }
+
+ node := &corev1.Node{}
+ err = c.Root.Get(ctx, types.NamespacedName{Name: nodeName}, node)
+ if err != nil {
+ if errors.IsNotFound(err) {
+ klog.Warningf("cannot find node %s, error: %v", nodeName, err)
+ return reconcile.Result{}, nil
+ }
+ klog.Warningf("get node %s error: %v, will requeue", nodeName, err)
+ return reconcile.Result{RequeueAfter: requeueTime}, nil
+ }
+
+ if !utils.IsKosmosNode(node) {
+ return reconcile.Result{}, nil
+ }
+
+ clusterName := node.Annotations[utils.KosmosNodeOwnedByClusterAnnotations]
+ if clusterName == "" {
+ klog.Warningf("node %s is a kosmos node, but its %s annotation is empty, will requeue", node.Name, utils.KosmosNodeOwnedByClusterAnnotations)
+ return reconcile.Result{RequeueAfter: requeueTime}, nil
+ }
+
+ leaf, err := c.GlobalLeafManager.GetLeafResource(clusterName)
+ if err != nil {
+ klog.Warningf("get leafManager for cluster %s failed, error: %v, will requeue", clusterName, err)
+ return reconcile.Result{RequeueAfter: requeueTime}, nil
+ }
+
+ if (pvErr != nil && errors.IsNotFound(pvErr)) ||
+ !pv.DeletionTimestamp.IsZero() {
+ return c.clearLeafPV(ctx, leaf, pv)
+ }
+
+ return c.ensureLeafPV(ctx, leaf, pv)
+}
+
+func (c *OnewayPVController) clearLeafPV(ctx context.Context, leaf *leafUtils.LeafResource, pv *corev1.PersistentVolume) (reconcile.Result, error) {
+ err := leaf.Clientset.CoreV1().PersistentVolumes().Delete(ctx, pv.Name, metav1.DeleteOptions{})
+ if err != nil && !errors.IsNotFound(err) {
+ klog.Errorf("delete pv %s in %s cluster failed, error: %v", pv.Name, leaf.ClusterName, err)
+ return reconcile.Result{RequeueAfter: requeueTime}, nil
+ }
+ return reconcile.Result{}, nil
+}
+
+func (c *OnewayPVController) ensureLeafPV(ctx context.Context, leaf *leafUtils.LeafResource, pv *corev1.PersistentVolume) (reconcile.Result, error) {
+ clusterName := leaf.ClusterName
+ newPV := pv.DeepCopy()
+
+ pvc := &corev1.PersistentVolumeClaim{}
+ err := leaf.Client.Get(ctx, types.NamespacedName{
+ Namespace: newPV.Spec.ClaimRef.Namespace,
+ Name: newPV.Spec.ClaimRef.Name,
+ }, pvc)
+ if err != nil {
+ klog.Errorf("get pvc from cluster %s error: %v, will requeue", leaf.ClusterName, err)
+ return reconcile.Result{RequeueAfter: quickRequeueTime}, nil
+ }
+
+ newPV.Spec.ClaimRef.ResourceVersion = pvc.ResourceVersion
+ newPV.Spec.ClaimRef.UID = pvc.UID
+
+ anno := newPV.GetAnnotations()
+ anno = utils.AddResourceClusters(anno, leaf.ClusterName)
+ anno[utils.KosmosGlobalLabel] = "true"
+ newPV.SetAnnotations(anno)
+
+ oldPV := &corev1.PersistentVolume{}
+ err = leaf.Client.Get(ctx, types.NamespacedName{
+ Name: newPV.Name,
+ }, oldPV)
+ if err != nil && !errors.IsNotFound(err) {
+ klog.Errorf("get pv from cluster %s error: %v, will requeue", leaf.ClusterName, err)
+ return reconcile.Result{RequeueAfter: requeueTime}, nil
+ }
+
+ // create
+ if err != nil && errors.IsNotFound(err) {
+ newPV.UID = ""
+ newPV.ResourceVersion = ""
+ if err = leaf.Client.Create(ctx, newPV); err != nil && !errors.IsAlreadyExists(err) {
+ klog.Errorf("create pv to cluster %s error: %v, will requeue", clusterName, err)
+ return reconcile.Result{RequeueAfter: requeueTime}, nil
+ }
+ return reconcile.Result{}, nil
+ }
+
+ // update
+ newPV.ResourceVersion = oldPV.ResourceVersion
+ newPV.UID = oldPV.UID
+ if utils.IsPVEqual(oldPV, newPV) {
+ return reconcile.Result{}, nil
+ }
+ patch, err := utils.CreateMergePatch(oldPV, newPV)
+ if err != nil {
+ klog.Errorf("patch pv error: %v", err)
+ return reconcile.Result{}, err
+ }
+ _, err = leaf.Clientset.CoreV1().PersistentVolumes().Patch(ctx, newPV.Name, types.MergePatchType, patch, metav1.PatchOptions{})
+ if err != nil {
+ klog.Errorf("patch pv %s to %s cluster failed, error: %v", newPV.Name, leaf.ClusterName, err)
+ return reconcile.Result{RequeueAfter: requeueTime}, nil
+ }
+ return reconcile.Result{}, nil
+}
diff --git a/pkg/clustertree/cluster-manager/controllers/pv/root_pv_controller.go b/pkg/clustertree/cluster-manager/controllers/pv/root_pv_controller.go
new file mode 100644
index 000000000..0936497e7
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/controllers/pv/root_pv_controller.go
@@ -0,0 +1,81 @@
+package pv
+
+import (
+ "context"
+
+ v1 "k8s.io/api/core/v1"
+ "k8s.io/apimachinery/pkg/api/errors"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/klog"
+ ctrl "sigs.k8s.io/controller-runtime"
+ "sigs.k8s.io/controller-runtime/pkg/builder"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+ "sigs.k8s.io/controller-runtime/pkg/controller"
+ "sigs.k8s.io/controller-runtime/pkg/event"
+ "sigs.k8s.io/controller-runtime/pkg/manager"
+ "sigs.k8s.io/controller-runtime/pkg/predicate"
+ "sigs.k8s.io/controller-runtime/pkg/reconcile"
+
+ leafUtils "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/utils"
+ "github.com/kosmos.io/kosmos/pkg/utils"
+)
+
+const (
+ RootPVControllerName = "root-pv-controller"
+)
+
+type RootPVController struct {
+ RootClient client.Client
+ GlobalLeafManager leafUtils.LeafResourceManager
+}
+
+func (r *RootPVController) Reconcile(ctx context.Context, request reconcile.Request) (reconcile.Result, error) {
+ return reconcile.Result{}, nil
+}
+
+func (r *RootPVController) SetupWithManager(mgr manager.Manager) error {
+ return ctrl.NewControllerManagedBy(mgr).
+ Named(RootPVControllerName).
+ WithOptions(controller.Options{}).
+ For(&v1.PersistentVolume{}, builder.WithPredicates(predicate.Funcs{
+ CreateFunc: func(createEvent event.CreateEvent) bool {
+ return false
+ },
+ UpdateFunc: func(updateEvent event.UpdateEvent) bool {
+ return false
+ },
+ DeleteFunc: func(deleteEvent event.DeleteEvent) bool {
+ if deleteEvent.DeleteStateUnknown {
+ //TODO ListAndDelete
+ klog.Warningf("missed delete event for root pv %q", deleteEvent.Object.GetName())
+ return false
+ }
+
+ pv := deleteEvent.Object.(*v1.PersistentVolume)
+ clusters := utils.ListResourceClusters(pv.Annotations)
+ if len(clusters) == 0 {
+ klog.Warningf("pv %q has no owner cluster annotation, skip", deleteEvent.Object.GetName())
+ return false
+ }
+
+ lr, err := r.GlobalLeafManager.GetLeafResource(clusters[0])
+ if err != nil {
+ klog.Warningf("leaf resource for pv %q doesn't exist in LeafResources", deleteEvent.Object.GetName())
+ return false
+ }
+
+ if err = lr.Clientset.CoreV1().PersistentVolumes().Delete(context.TODO(), deleteEvent.Object.GetName(),
+ metav1.DeleteOptions{}); err != nil {
+ if !errors.IsNotFound(err) {
+ klog.Errorf("delete pv from leaf cluster failed, %q, error: %v", deleteEvent.Object.GetName(), err)
+ }
+ }
+
+ return false
+ },
+ GenericFunc: func(genericEvent event.GenericEvent) bool {
+ return false
+ },
+ })).
+ Complete(r)
+}
diff --git a/pkg/clustertree/cluster-manager/controllers/pvc/leaf_pvc_controller.go b/pkg/clustertree/cluster-manager/controllers/pvc/leaf_pvc_controller.go
new file mode 100644
index 000000000..821ee7687
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/controllers/pvc/leaf_pvc_controller.go
@@ -0,0 +1,158 @@
+package pvc
+
+import (
+ "context"
+ "encoding/json"
+ "reflect"
+ "time"
+
+ v1 "k8s.io/api/core/v1"
+ "k8s.io/apimachinery/pkg/api/errors"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ mergetypes "k8s.io/apimachinery/pkg/types"
+ "k8s.io/apimachinery/pkg/util/wait"
+ "k8s.io/client-go/kubernetes"
+ "k8s.io/klog"
+ ctrl "sigs.k8s.io/controller-runtime"
+ "sigs.k8s.io/controller-runtime/pkg/builder"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+ "sigs.k8s.io/controller-runtime/pkg/controller"
+ "sigs.k8s.io/controller-runtime/pkg/event"
+ "sigs.k8s.io/controller-runtime/pkg/manager"
+ "sigs.k8s.io/controller-runtime/pkg/predicate"
+ "sigs.k8s.io/controller-runtime/pkg/reconcile"
+
+ "github.com/kosmos.io/kosmos/pkg/utils"
+ "github.com/kosmos.io/kosmos/pkg/utils/podutils"
+)
+
+const (
+ LeafPVCControllerName = "leaf-pvc-controller"
+ LeafPVCRequeueTime = 10 * time.Second
+)
+
+type LeafPVCController struct {
+ LeafClient client.Client
+ RootClient client.Client
+ RootClientSet kubernetes.Interface
+ ClusterName string
+}
+
+func (l *LeafPVCController) Reconcile(ctx context.Context, request reconcile.Request) (reconcile.Result, error) {
+ pvc := &v1.PersistentVolumeClaim{}
+ err := l.LeafClient.Get(ctx, request.NamespacedName, pvc)
+ if err != nil {
+ if !errors.IsNotFound(err) {
+ klog.Errorf("get pvc from leaf cluster failed, error: %v", err)
+ return reconcile.Result{RequeueAfter: LeafPVCRequeueTime}, nil
+ }
+ klog.V(4).Infof("leaf pvc namespace: %q, name: %q deleted", request.NamespacedName.Namespace,
+ request.NamespacedName.Name)
+ return reconcile.Result{}, nil
+ }
+
+ rootPVC := &v1.PersistentVolumeClaim{}
+ err = l.RootClient.Get(ctx, request.NamespacedName, rootPVC)
+ if err != nil {
+ if !errors.IsNotFound(err) {
+ return reconcile.Result{RequeueAfter: LeafPVCRequeueTime}, nil
+ }
+ klog.Warningf("pvc namespace: %q, name: %q has been deleted from root cluster", request.NamespacedName.Namespace,
+ request.NamespacedName.Name)
+ return reconcile.Result{}, nil
+ }
+
+ pvcCopy := pvc.DeepCopy()
+ if reflect.DeepEqual(rootPVC.Status, pvcCopy.Status) {
+ return reconcile.Result{}, nil
+ }
+
+ // when the root pvc is not bound, its status can't be changed to bound
+ if pvcCopy.Status.Phase == v1.ClaimBound {
+ err = wait.PollImmediate(500*time.Millisecond, 1*time.Minute, func() (bool, error) {
+ if rootPVC.Spec.VolumeName == "" {
+ klog.Warningf("pvc namespace: %q, name: %q is not bound", request.NamespacedName.Namespace,
+ request.NamespacedName.Name)
+ err = l.RootClient.Get(ctx, request.NamespacedName, rootPVC)
+ if err != nil {
+ return false, err
+ }
+ return false, nil
+ }
+ return true, nil
+ })
+ if err != nil {
+ if !errors.IsNotFound(err) {
+ return reconcile.Result{RequeueAfter: LeafPVCRequeueTime}, nil
+ }
+ return reconcile.Result{}, nil
+ }
+ }
+
+ if err = filterPVC(pvcCopy, l.ClusterName); err != nil {
+ return reconcile.Result{}, nil
+ }
+
+ delete(pvcCopy.Annotations, utils.PVCSelectedNodeKey)
+ pvcCopy.ResourceVersion = rootPVC.ResourceVersion
+ pvcCopy.OwnerReferences = rootPVC.OwnerReferences
+ utils.AddResourceClusters(pvcCopy.Annotations, l.ClusterName)
+ pvcCopy.Spec = rootPVC.Spec
+ klog.V(4).Infof("rootPVC %+v\n, pvc %+v", rootPVC, pvcCopy)
+
+ patch, err := utils.CreateMergePatch(rootPVC, pvcCopy)
+ if err != nil {
+ klog.Errorf("patch pvc error: %v", err)
+ return reconcile.Result{}, err
+ }
+ _, err = l.RootClientSet.CoreV1().PersistentVolumeClaims(rootPVC.Namespace).Patch(ctx,
+ rootPVC.Name, mergetypes.MergePatchType, patch, metav1.PatchOptions{})
+ if err != nil {
+ klog.Errorf("patch pvc namespace: %q, name: %q to root cluster failed, error: %v",
+ request.NamespacedName.Namespace, request.NamespacedName.Name, err)
+ return reconcile.Result{RequeueAfter: LeafPVCRequeueTime}, nil
+ }
+
+ return reconcile.Result{}, nil
+}
+
+func (l *LeafPVCController) SetupWithManager(mgr manager.Manager) error {
+ return ctrl.NewControllerManagedBy(mgr).
+ Named(LeafPVCControllerName).
+ WithOptions(controller.Options{}).
+ For(&v1.PersistentVolumeClaim{}, builder.WithPredicates(predicate.Funcs{
+ CreateFunc: func(createEvent event.CreateEvent) bool {
+ return false
+ },
+ UpdateFunc: func(updateEvent event.UpdateEvent) bool {
+ pvc := updateEvent.ObjectOld.(*v1.PersistentVolumeClaim)
+ return utils.IsObjectGlobal(&pvc.ObjectMeta)
+ },
+ DeleteFunc: func(deleteEvent event.DeleteEvent) bool {
+ return false
+ },
+ GenericFunc: func(genericEvent event.GenericEvent) bool {
+ return false
+ },
+ })).
+ Complete(l)
+}
+
+func filterPVC(leafPVC *v1.PersistentVolumeClaim, nodeName string) error {
+ labelSelector := leafPVC.Spec.Selector.DeepCopy()
+ leafPVC.Spec.Selector = nil
+ leafPVC.ObjectMeta.UID = ""
+ leafPVC.ObjectMeta.ResourceVersion = ""
+ leafPVC.ObjectMeta.OwnerReferences = nil
+
+ podutils.SetObjectGlobal(&leafPVC.ObjectMeta)
+ if labelSelector != nil {
+ labelStr, err := json.Marshal(labelSelector)
+ if err != nil {
+ klog.Errorf("pvc namespace: %q, name: %q marshal label failed", leafPVC.Namespace, leafPVC.Name)
+ return err
+ }
+ leafPVC.Annotations[utils.KosmosPvcLabelSelector] = string(labelStr)
+ }
+ return nil
+}
diff --git a/pkg/clustertree/cluster-manager/controllers/pvc/oneway_pvc_controller.go b/pkg/clustertree/cluster-manager/controllers/pvc/oneway_pvc_controller.go
new file mode 100644
index 000000000..c6f005a46
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/controllers/pvc/oneway_pvc_controller.go
@@ -0,0 +1,198 @@
+package pvc
+
+import (
+ "context"
+ "time"
+
+ corev1 "k8s.io/api/core/v1"
+ "k8s.io/apimachinery/pkg/api/errors"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+ "k8s.io/apimachinery/pkg/runtime/schema"
+ "k8s.io/apimachinery/pkg/types"
+ "k8s.io/client-go/dynamic"
+ "k8s.io/klog"
+ controllerruntime "sigs.k8s.io/controller-runtime"
+ "sigs.k8s.io/controller-runtime/pkg/builder"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+ "sigs.k8s.io/controller-runtime/pkg/controller"
+ "sigs.k8s.io/controller-runtime/pkg/event"
+ "sigs.k8s.io/controller-runtime/pkg/manager"
+ "sigs.k8s.io/controller-runtime/pkg/predicate"
+ "sigs.k8s.io/controller-runtime/pkg/reconcile"
+
+ leafUtils "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/utils"
+ "github.com/kosmos.io/kosmos/pkg/utils"
+)
+
+const (
+ controllerName = "oneway-pvc-controller"
+ requeueTime = 10 * time.Second
+ vpAnnotationKey = "volumepath"
+)
+
+var VolumePathGVR = schema.GroupVersionResource{
+ Version: "v1alpha1",
+ Group: "lvm.infinilabs.com",
+ Resource: "volumepaths",
+}
+
+type OnewayPVCController struct {
+ Root client.Client
+ RootDynamic dynamic.Interface
+ GlobalLeafManager leafUtils.LeafResourceManager
+}
+
+func pvcEventFilter(pvc *corev1.PersistentVolumeClaim) bool {
+ anno := pvc.GetAnnotations()
+ if anno == nil {
+ return false
+ }
+ if _, ok := anno[vpAnnotationKey]; ok {
+ return true
+ }
+ return false
+}
+
+func (c *OnewayPVCController) SetupWithManager(mgr manager.Manager) error {
+ predicatesFunc := predicate.Funcs{
+ CreateFunc: func(createEvent event.CreateEvent) bool {
+ curr := createEvent.Object.(*corev1.PersistentVolumeClaim)
+ return pvcEventFilter(curr)
+ },
+ UpdateFunc: func(updateEvent event.UpdateEvent) bool {
+ curr := updateEvent.ObjectNew.(*corev1.PersistentVolumeClaim)
+ return pvcEventFilter(curr)
+ },
+ DeleteFunc: func(deleteEvent event.DeleteEvent) bool {
+ curr := deleteEvent.Object.(*corev1.PersistentVolumeClaim)
+ return pvcEventFilter(curr)
+ },
+ GenericFunc: func(genericEvent event.GenericEvent) bool {
+ return false
+ },
+ }
+ return controllerruntime.NewControllerManagedBy(mgr).
+ Named(controllerName).
+ WithOptions(controller.Options{}).
+ For(&corev1.PersistentVolumeClaim{}, builder.WithPredicates(predicatesFunc)).
+ Complete(c)
+}
+
+func (c *OnewayPVCController) Reconcile(ctx context.Context, request reconcile.Request) (reconcile.Result, error) {
+ klog.V(4).Infof("============ %s starts to reconcile %s ============", controllerName, request.Name)
+ defer func() {
+ klog.V(4).Infof("============ %s has been reconciled =============", request.Name)
+ }()
+
+ rootPVC := &corev1.PersistentVolumeClaim{}
+ pvcErr := c.Root.Get(ctx, types.NamespacedName{Namespace: request.Namespace, Name: request.Name}, rootPVC)
+ if pvcErr != nil && !errors.IsNotFound(pvcErr) {
+ klog.Errorf("get pvc %s error: %v", request.Name, pvcErr)
+ return reconcile.Result{RequeueAfter: requeueTime}, nil
+ }
+
+ // volumePath has the same name with pvc
+ vp, err := c.RootDynamic.Resource(VolumePathGVR).Get(ctx, request.Name, metav1.GetOptions{})
+ if err != nil {
+ if errors.IsNotFound(err) {
+ klog.V(4).Infof("vp %s not found", request.Name)
+ return reconcile.Result{RequeueAfter: requeueTime}, nil
+ }
+ klog.Errorf("get volumePath %s error: %v", request.Name, err)
+ return reconcile.Result{RequeueAfter: requeueTime}, nil
+ }
+
+ nodeName, _, _ := unstructured.NestedString(vp.Object, "spec", "node")
+ if nodeName == "" {
+ klog.Warningf("vp %s's nodeName is empty, skip", request.Name)
+ return reconcile.Result{}, nil
+ }
+
+ node := &corev1.Node{}
+ err = c.Root.Get(ctx, types.NamespacedName{Name: nodeName}, node)
+ if err != nil {
+ if errors.IsNotFound(err) {
+ klog.Warningf("cannot find node %s, error: %v", nodeName, err)
+ return reconcile.Result{}, nil
+ }
+ klog.Warningf("get node %s error: %v, will requeue", nodeName, err)
+ return reconcile.Result{RequeueAfter: requeueTime}, nil
+ }
+
+ if !utils.IsKosmosNode(node) {
+ return reconcile.Result{}, nil
+ }
+
+ clusterName := node.Annotations[utils.KosmosNodeOwnedByClusterAnnotations]
+ if clusterName == "" {
+		klog.Warningf("node %s is a kosmos node, but its %s annotation is empty, will requeue", node.Name, utils.KosmosNodeOwnedByClusterAnnotations)
+ return reconcile.Result{RequeueAfter: requeueTime}, nil
+ }
+
+ leaf, err := c.GlobalLeafManager.GetLeafResource(clusterName)
+ if err != nil {
+ klog.Warningf("get leafManager for cluster %s failed, error: %v, will requeue", clusterName, err)
+ return reconcile.Result{RequeueAfter: requeueTime}, nil
+ }
+
+	if (pvcErr != nil && errors.IsNotFound(pvcErr)) ||
+		!rootPVC.DeletionTimestamp.IsZero() {
+ return c.clearLeafPVC(ctx, leaf, rootPVC)
+ }
+
+ return c.ensureLeafPVC(ctx, leaf, rootPVC)
+}
+
+func (c *OnewayPVCController) clearLeafPVC(ctx context.Context, leaf *leafUtils.LeafResource, pvc *corev1.PersistentVolumeClaim) (reconcile.Result, error) {
+ return reconcile.Result{}, nil
+}
+
+func (c *OnewayPVCController) ensureLeafPVC(ctx context.Context, leaf *leafUtils.LeafResource, pvc *corev1.PersistentVolumeClaim) (reconcile.Result, error) {
+ clusterName := leaf.ClusterName
+ newPVC := pvc.DeepCopy()
+
+ anno := newPVC.GetAnnotations()
+ anno = utils.AddResourceClusters(anno, leaf.ClusterName)
+ anno[utils.KosmosGlobalLabel] = "true"
+ newPVC.SetAnnotations(anno)
+
+ oldPVC := &corev1.PersistentVolumeClaim{}
+ err := leaf.Client.Get(ctx, types.NamespacedName{
+ Name: newPVC.Name,
+ Namespace: newPVC.Namespace,
+ }, oldPVC)
+ if err != nil && !errors.IsNotFound(err) {
+ klog.Errorf("get pvc from cluster %s error: %v, will requeue", leaf.ClusterName, err)
+ return reconcile.Result{RequeueAfter: requeueTime}, nil
+ }
+
+ // create
+ if err != nil && errors.IsNotFound(err) {
+ newPVC.UID = ""
+ newPVC.ResourceVersion = ""
+ if err = leaf.Client.Create(ctx, newPVC); err != nil && !errors.IsAlreadyExists(err) {
+			klog.Errorf("create pvc in cluster %s error: %v, will requeue", clusterName, err)
+ return reconcile.Result{RequeueAfter: requeueTime}, nil
+ }
+ return reconcile.Result{}, nil
+ }
+
+ // update
+ newPVC.ResourceVersion = oldPVC.ResourceVersion
+ newPVC.UID = oldPVC.UID
+ if utils.IsPVCEqual(oldPVC, newPVC) {
+ return reconcile.Result{}, nil
+ }
+ patch, err := utils.CreateMergePatch(oldPVC, newPVC)
+ if err != nil {
+		klog.Errorf("patch pvc error: %v", err)
+ return reconcile.Result{}, err
+ }
+ _, err = leaf.Clientset.CoreV1().PersistentVolumeClaims(newPVC.Namespace).Patch(ctx, newPVC.Name, types.MergePatchType, patch, metav1.PatchOptions{})
+ if err != nil {
+		klog.Errorf("patch pvc %s to cluster %s failed, error: %v", newPVC.Name, leaf.ClusterName, err)
+ return reconcile.Result{RequeueAfter: requeueTime}, nil
+ }
+ return reconcile.Result{}, nil
+}
diff --git a/pkg/clustertree/cluster-manager/controllers/pvc/root_pvc_controller.go b/pkg/clustertree/cluster-manager/controllers/pvc/root_pvc_controller.go
new file mode 100644
index 000000000..eba645ceb
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/controllers/pvc/root_pvc_controller.go
@@ -0,0 +1,147 @@
+package pvc
+
+import (
+ "context"
+ "reflect"
+ "time"
+
+ v1 "k8s.io/api/core/v1"
+ "k8s.io/apimachinery/pkg/api/errors"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ mergetypes "k8s.io/apimachinery/pkg/types"
+ "k8s.io/klog"
+ ctrl "sigs.k8s.io/controller-runtime"
+ "sigs.k8s.io/controller-runtime/pkg/builder"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+ "sigs.k8s.io/controller-runtime/pkg/controller"
+ "sigs.k8s.io/controller-runtime/pkg/event"
+ "sigs.k8s.io/controller-runtime/pkg/manager"
+ "sigs.k8s.io/controller-runtime/pkg/predicate"
+ "sigs.k8s.io/controller-runtime/pkg/reconcile"
+
+ leafUtils "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/utils"
+ "github.com/kosmos.io/kosmos/pkg/utils"
+)
+
+const (
+ RootPVCControllerName = "root-pvc-controller"
+ RootPVCRequeueTime = 10 * time.Second
+)
+
+type RootPVCController struct {
+ RootClient client.Client
+ GlobalLeafManager leafUtils.LeafResourceManager
+}
+
+func (r *RootPVCController) Reconcile(ctx context.Context, request reconcile.Request) (reconcile.Result, error) {
+ pvc := &v1.PersistentVolumeClaim{}
+ err := r.RootClient.Get(ctx, request.NamespacedName, pvc)
+ if err != nil {
+ if !errors.IsNotFound(err) {
+ klog.Warningf("get pvc from root cluster failed, error: %v", err)
+ return reconcile.Result{RequeueAfter: RootPVCRequeueTime}, nil
+ }
+ return reconcile.Result{}, nil
+ }
+
+ clusters := utils.ListResourceClusters(pvc.Annotations)
+ if len(clusters) == 0 {
+		klog.V(4).Infof("pvc leaf %q: %q doesn't exist", request.NamespacedName.Namespace, request.NamespacedName.Name)
+ return reconcile.Result{RequeueAfter: RootPVCRequeueTime}, nil
+ }
+
+ lr, err := r.GlobalLeafManager.GetLeafResource(clusters[0])
+ if err != nil {
+		klog.Warningf("pvc leaf %q: %q doesn't exist in LeafResources", request.NamespacedName.Namespace, request.NamespacedName.Name)
+ return reconcile.Result{RequeueAfter: RootPVCRequeueTime}, nil
+ }
+
+ pvcOld := &v1.PersistentVolumeClaim{}
+ err = lr.Client.Get(ctx, request.NamespacedName, pvcOld)
+ if err != nil {
+ if !errors.IsNotFound(err) {
+ klog.Warningf("get pvc from leaf cluster failed, error: %v", err)
+ return reconcile.Result{RequeueAfter: RootPVCRequeueTime}, nil
+ }
+ // TODO Create?
+ return reconcile.Result{}, nil
+ }
+
+ /* if !utils.IsObjectGlobal(&pvcOld.ObjectMeta) {
+ return reconcile.Result{}, nil
+ }*/
+
+ if reflect.DeepEqual(pvcOld.Spec.Resources.Requests, pvc.Spec.Resources.Requests) {
+ return reconcile.Result{}, nil
+ }
+ pvcOld.Spec.Resources.Requests = pvc.Spec.Resources.Requests
+ pvc.Spec = pvcOld.Spec
+
+ pvc.Annotations = pvcOld.Annotations
+ pvc.ObjectMeta.UID = pvcOld.ObjectMeta.UID
+ pvc.ObjectMeta.ResourceVersion = ""
+ pvc.ObjectMeta.OwnerReferences = pvcOld.ObjectMeta.OwnerReferences
+ patch, err := utils.CreateMergePatch(pvcOld, pvc)
+ if err != nil {
+ klog.Errorf("patch pvc error: %v", err)
+ return reconcile.Result{}, err
+ }
+ _, err = lr.Clientset.CoreV1().PersistentVolumeClaims(pvc.Namespace).Patch(ctx,
+ pvc.Name, mergetypes.MergePatchType, patch, metav1.PatchOptions{})
+ if err != nil && !errors.IsNotFound(err) {
+		klog.Errorf("patch pvc namespace: %q, name: %q to leaf cluster failed, error: %v",
+ request.NamespacedName.Namespace, request.NamespacedName.Name, err)
+ return reconcile.Result{RequeueAfter: RootPVCRequeueTime}, nil
+ }
+
+ return reconcile.Result{}, nil
+}
+
+func (r *RootPVCController) SetupWithManager(mgr manager.Manager) error {
+ return ctrl.NewControllerManagedBy(mgr).
+ Named(RootPVCControllerName).
+ WithOptions(controller.Options{}).
+ For(&v1.PersistentVolumeClaim{}, builder.WithPredicates(predicate.Funcs{
+ CreateFunc: func(createEvent event.CreateEvent) bool {
+ return false
+ },
+ UpdateFunc: func(updateEvent event.UpdateEvent) bool {
+ return true
+ },
+ DeleteFunc: func(deleteEvent event.DeleteEvent) bool {
+ if deleteEvent.DeleteStateUnknown {
+ //TODO ListAndDelete
+					klog.Warningf("missed delete event for root pvc %q: %q", deleteEvent.Object.GetNamespace(), deleteEvent.Object.GetName())
+ return false
+ }
+
+ pvc := deleteEvent.Object.(*v1.PersistentVolumeClaim)
+ clusters := utils.ListResourceClusters(pvc.Annotations)
+ if len(clusters) == 0 {
+					klog.V(4).Infof("pvc leaf %q: %q doesn't exist", deleteEvent.Object.GetNamespace(), deleteEvent.Object.GetName())
+ return false
+ }
+
+ lr, err := r.GlobalLeafManager.GetLeafResource(clusters[0])
+ if err != nil {
+					klog.Warningf("pvc leaf %q: %q doesn't exist in LeafResources", deleteEvent.Object.GetNamespace(),
+ deleteEvent.Object.GetName())
+ return false
+ }
+
+ if err = lr.Clientset.CoreV1().PersistentVolumeClaims(deleteEvent.Object.GetNamespace()).Delete(context.TODO(),
+ deleteEvent.Object.GetName(), metav1.DeleteOptions{}); err != nil {
+ if !errors.IsNotFound(err) {
+ klog.Errorf("delete pvc from leaf cluster failed, %q: %q, error: %v", deleteEvent.Object.GetNamespace(),
+ deleteEvent.Object.GetName(), err)
+ }
+ }
+
+ return false
+ },
+ GenericFunc: func(genericEvent event.GenericEvent) bool {
+ return false
+ },
+ })).
+ Complete(r)
+}
diff --git a/pkg/clustertree/cluster-manager/extensions/daemonset/constants.go b/pkg/clustertree/cluster-manager/extensions/daemonset/constants.go
index dcaf27cd7..cb5defad7 100644
--- a/pkg/clustertree/cluster-manager/extensions/daemonset/constants.go
+++ b/pkg/clustertree/cluster-manager/extensions/daemonset/constants.go
@@ -5,3 +5,5 @@ const ManagedLabel = "daemonset.kosmos.io/managed"
const DistributeControllerFinalizer = "kosmos.io/distribute-controller-finalizer"
const MirrorAnnotation = "kosmos.io/daemonset-mirror"
+
+const ClusterAnnotationKey = "kosmos.io/managed-cluster"
diff --git a/pkg/clustertree/cluster-manager/extensions/daemonset/daemonset_controller.go b/pkg/clustertree/cluster-manager/extensions/daemonset/daemonset_controller.go
index ed9189789..539d64bd0 100644
--- a/pkg/clustertree/cluster-manager/extensions/daemonset/daemonset_controller.go
+++ b/pkg/clustertree/cluster-manager/extensions/daemonset/daemonset_controller.go
@@ -38,7 +38,7 @@ type DaemonSetsController struct {
sdsLister kosmoslister.ShadowDaemonSetLister
- kNodeLister kosmoslister.KnodeLister
+ clusterLister kosmoslister.ClusterLister
eventBroadcaster record.EventBroadcaster
@@ -48,7 +48,7 @@ type DaemonSetsController struct {
shadowDaemonSetSynced cache.InformerSynced
- kNodeSynced cache.InformerSynced
+ clusterSynced cache.InformerSynced
processor utils.AsyncWorker
@@ -59,12 +59,9 @@ type DaemonSetsController struct {
func NewDaemonSetsController(
shadowDaemonSetInformer kosmosinformer.ShadowDaemonSetInformer,
daemonSetInformer kosmosinformer.DaemonSetInformer,
- kNodeInformer kosmosinformer.KnodeInformer,
+ clusterInformer kosmosinformer.ClusterInformer,
kubeClient clientset.Interface,
kosmosClient versioned.Interface,
- dsLister kosmoslister.DaemonSetLister,
- sdsLister kosmoslister.ShadowDaemonSetLister,
- kNodeLister kosmoslister.KnodeLister,
rateLimiterOptions flags.Options,
) *DaemonSetsController {
err := kosmosv1alpha1.Install(scheme.Scheme)
@@ -76,9 +73,9 @@ func NewDaemonSetsController(
dsc := &DaemonSetsController{
kubeClient: kubeClient,
kosmosClient: kosmosClient,
- dsLister: dsLister,
- sdsLister: sdsLister,
- kNodeLister: kNodeLister,
+ dsLister: daemonSetInformer.Lister(),
+ sdsLister: shadowDaemonSetInformer.Lister(),
+ clusterLister: clusterInformer.Lister(),
eventBroadcaster: eventBroadcaster,
eventRecorder: eventBroadcaster.NewRecorder(scheme.Scheme, v1.EventSource{Component: "daemonset-controller"}),
rateLimiterOptions: rateLimiterOptions,
@@ -107,11 +104,12 @@ func NewDaemonSetsController(
dsc.daemonSetSynced = daemonSetInformer.Informer().HasSynced
// nolint:errcheck
- kNodeInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
- AddFunc: dsc.addKNode,
+ clusterInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
+ AddFunc: dsc.addCluster,
+ UpdateFunc: dsc.updateCluster,
DeleteFunc: dsc.deleteKNode,
})
- dsc.kNodeSynced = kNodeInformer.Informer().HasSynced
+ dsc.clusterSynced = clusterInformer.Informer().HasSynced
return dsc
}
@@ -137,13 +135,11 @@ func (dsc *DaemonSetsController) Run(ctx context.Context, workers int) {
}
dsc.processor = utils.NewAsyncWorker(opt)
- if !cache.WaitForNamedCacheSync("kosmos_daemonset_controller", ctx.Done(), dsc.daemonSetSynced, dsc.shadowDaemonSetSynced, dsc.kNodeSynced) {
- klog.V(2).Infof("Timed out waiting for caches to sync")
+ if !cache.WaitForNamedCacheSync("kosmos_daemonset_controller", ctx.Done(), dsc.daemonSetSynced, dsc.shadowDaemonSetSynced, dsc.clusterSynced) {
+ klog.Errorf("Timed out waiting for caches to sync")
return
}
dsc.processor.Run(workers, ctx.Done())
-
- <-ctx.Done()
}
func (dsc *DaemonSetsController) addDaemonSet(obj interface{}) {
@@ -194,7 +190,7 @@ func (dsc *DaemonSetsController) deleteShadowDaemonSet(obj interface{}) {
dsc.processShadowDaemonSet(sds)
}
-func (dsc *DaemonSetsController) processKNode(knode *kosmosv1alpha1.Knode) {
+func (dsc *DaemonSetsController) processCluster(cluster *kosmosv1alpha1.Cluster) {
//TODO add should run on node logic
list, err := dsc.dsLister.List(labels.Everything())
if err != nil {
@@ -206,22 +202,25 @@ func (dsc *DaemonSetsController) processKNode(knode *kosmosv1alpha1.Knode) {
}
}
-func (dsc *DaemonSetsController) addKNode(obj interface{}) {
- kNode := obj.(*kosmosv1alpha1.Knode)
- klog.V(4).Infof("adding knode %s", kNode.Name)
- dsc.processKNode(kNode)
+func (dsc *DaemonSetsController) addCluster(obj interface{}) {
+ cluster := obj.(*kosmosv1alpha1.Cluster)
+ dsc.processCluster(cluster)
+}
+
+func (dsc *DaemonSetsController) updateCluster(old interface{}, new interface{}) {
+ cluster := new.(*kosmosv1alpha1.Cluster)
+ dsc.processCluster(cluster)
}
func (dsc *DaemonSetsController) deleteKNode(obj interface{}) {
- kNode := obj.(*kosmosv1alpha1.Knode)
- klog.V(4).Infof("deleting knode %s", kNode.Name)
- dsc.processKNode(kNode)
+ cluster := obj.(*kosmosv1alpha1.Cluster)
+ dsc.processCluster(cluster)
}
func (dsc *DaemonSetsController) syncDaemonSet(key utils.QueueKey) error {
clusterWideKey, ok := key.(keys.ClusterWideKey)
if !ok {
- klog.V(2).Infof("invalid key type %T", key)
+ klog.Errorf("invalid key type %T", key)
return fmt.Errorf("invalid key")
}
@@ -237,31 +236,33 @@ func (dsc *DaemonSetsController) syncDaemonSet(key utils.QueueKey) error {
err = dsc.removeOrphanShadowDaemonSet(ds)
if err != nil {
- klog.V(2).Infof("failed to remove orphan shadow daemon set for daemon set %s err: %v", ds.Name, err)
+ klog.Errorf("failed to remove orphan shadow daemon set for daemon set %s err: %v", ds.Name, err)
return err
}
- kNodeList, err := dsc.kNodeLister.List(labels.Everything())
+ clusterList, err := dsc.clusterLister.List(labels.Everything())
if err != nil {
- return fmt.Errorf("couldn't get list of knodes when syncing daemon set %#v: %v", ds, err)
+		return fmt.Errorf("couldn't get list of clusters when syncing daemon set %#v: %v", ds, err)
}
+
// sync daemonset
// sync host shadowDaemonSet
sdsHost := createShadowDaemonSet(ds, kosmosv1alpha1.RefTypeHost, "")
err = dsc.createOrUpdate(context.TODO(), sdsHost)
if err != nil {
- klog.V(2).Infof("failed sync ShadowDaemonSet %s err: %v", sdsHost.DaemonSetSpec, err)
+ klog.Errorf("failed sync ShadowDaemonSet %s err: %v", sdsHost.DaemonSetSpec, err)
return err
}
// sync member shadowDaemonSet
- for _, knode := range kNodeList {
- if knode.DeletionTimestamp == nil {
- memberSds := createShadowDaemonSet(ds, kosmosv1alpha1.RefTypeMember, knode.Name)
+ for _, cluster := range clusterList {
+ if cluster.DeletionTimestamp == nil {
+ memberSds := createShadowDaemonSet(ds, kosmosv1alpha1.RefTypeMember, cluster.Name)
err = dsc.createOrUpdate(context.TODO(), memberSds)
if err != nil {
- klog.V(2).Infof("failed sync ShadowDaemonSet %s err: %v", sdsHost.DaemonSetSpec, err)
- return err
+ klog.Errorf("failed sync ShadowDaemonSet %s err: %v", memberSds.DaemonSetSpec, err)
+ //klog.Errorf("failed sync ShadowDaemonSet %s err: %v", sdsHost.DaemonSetSpec, err)
+ //return err
}
}
}
@@ -280,7 +281,7 @@ func (dsc *DaemonSetsController) createOrUpdate(ctx context.Context, ds *kosmosv
// create new
_, err := dsc.kosmosClient.KosmosV1alpha1().ShadowDaemonSets(ds.Namespace).Create(ctx, ds, metav1.CreateOptions{})
if err != nil {
- klog.V(2).Infof("Failed create ShadowDaemonSet %s err: %v", ds.Name, err)
+ klog.Errorf("Failed create ShadowDaemonSet %s err: %v", ds.Name, err)
return err
}
return nil
@@ -292,13 +293,13 @@ func (dsc *DaemonSetsController) createOrUpdate(ctx context.Context, ds *kosmosv
desired.SetAnnotations(ds.Annotations)
_, err = dsc.kosmosClient.KosmosV1alpha1().ShadowDaemonSets(ds.Namespace).Update(ctx, desired, metav1.UpdateOptions{})
if err != nil {
- klog.V(2).Infof("Failed update ShadowDaemonSet %s err: %v", ds.Name, err)
+ klog.Errorf("Failed update ShadowDaemonSet %s err: %v", ds.Name, err)
return err
}
return nil
})
if err != nil {
- klog.V(2).Infof("Failed create or update ShadowDaemonSet %s err: %v", ds.Name, err)
+ klog.Errorf("Failed create or update ShadowDaemonSet %s err: %v", ds.Name, err)
return err
}
return nil
@@ -307,7 +308,7 @@ func (dsc *DaemonSetsController) createOrUpdate(ctx context.Context, ds *kosmosv
func (dsc *DaemonSetsController) updateStatus(ctx context.Context, ds *kosmosv1alpha1.DaemonSet) error {
sds, err := listAllShadowDaemonSet(dsc.sdsLister, ds)
if err != nil {
- klog.V(2).Infof("Failed list ShadowDaemonSet for %s err: %v", ds.Name, err)
+ klog.Errorf("Failed list ShadowDaemonSet for %s err: %v", ds.Name, err)
return err
}
desiredNumberScheduled := int32(0)
@@ -337,7 +338,7 @@ func (dsc *DaemonSetsController) updateStatus(ctx context.Context, ds *kosmosv1a
toUpdate.Status.NumberUnavailable = numberUnavailable
if _, updateErr := dsc.kosmosClient.KosmosV1alpha1().DaemonSets(ds.Namespace).UpdateStatus(ctx, toUpdate, metav1.UpdateOptions{}); updateErr != nil {
- klog.V(2).Infof("Failed update DaemonSet %s status err: %v", ds.Name, updateErr)
+ klog.Errorf("Failed update DaemonSet %s status err: %v", ds.Name, updateErr)
return updateErr
}
return nil
@@ -349,7 +350,7 @@ func (dsc *DaemonSetsController) resolveControllerRef(namespace string, ref *met
}
ds, err := dsc.dsLister.DaemonSets(namespace).Get(ref.Name)
if err != nil {
- klog.V(2).Infof("Failed get DaemonSet %s err: %v", ref.Name, err)
+ klog.Errorf("Failed get DaemonSet %s err: %v", ref.Name, err)
return nil
}
return ds
@@ -358,28 +359,28 @@ func (dsc *DaemonSetsController) resolveControllerRef(namespace string, ref *met
func (dsc *DaemonSetsController) removeOrphanShadowDaemonSet(ds *kosmosv1alpha1.DaemonSet) error {
allSds, err := listAllShadowDaemonSet(dsc.sdsLister, ds)
if err != nil {
- klog.V(2).Infof("Failed list ShadowDaemonSet for %s err: %v", ds.Name, err)
+ klog.Errorf("Failed list ShadowDaemonSet for %s err: %v", ds.Name, err)
return err
}
- kNodeList, err := dsc.kNodeLister.List(labels.Everything())
+ clusterList, err := dsc.clusterLister.List(labels.Everything())
if err != nil {
- klog.V(2).Infof("couldn't get list of knodes when syncing daemon set %#v: %v", ds, err)
+ klog.Errorf("couldn't get list of clusters when syncing daemon set %#v: %v", ds, err)
return err
}
- knodeSet := make(map[string]interface{})
- for _, kNode := range kNodeList {
- knodeSet[kNode.Name] = struct{}{}
+ clusterSet := make(map[string]interface{})
+ for _, cluster := range clusterList {
+ clusterSet[cluster.Name] = struct{}{}
}
for _, s := range allSds {
if s.RefType == kosmosv1alpha1.RefTypeHost {
continue
}
- if _, ok := knodeSet[s.Knode]; !ok {
+ if _, ok := clusterSet[s.Cluster]; !ok {
err = dsc.kosmosClient.KosmosV1alpha1().ShadowDaemonSets(s.Namespace).Delete(context.TODO(), s.Name,
metav1.DeleteOptions{})
if err != nil {
- klog.V(2).Infof("Failed delete ShadowDaemonSet %s err: %v", s.Name, err)
+ klog.Errorf("Failed delete ShadowDaemonSet %s err: %v", s.Name, err)
return err
}
}
@@ -390,7 +391,7 @@ func (dsc *DaemonSetsController) removeOrphanShadowDaemonSet(ds *kosmosv1alpha1.
func listAllShadowDaemonSet(lister kosmoslister.ShadowDaemonSetLister, ds *kosmosv1alpha1.DaemonSet) ([]*kosmosv1alpha1.ShadowDaemonSet, error) {
list, err := lister.ShadowDaemonSets(ds.Namespace).List(labels.Everything())
if err != nil {
- klog.V(2).Infof("Failed list ShadowDaemonSet for %s err: %v", ds.Name, err)
+ klog.Errorf("Failed list ShadowDaemonSet for %s err: %v", ds.Name, err)
return nil, err
}
var sds []*kosmosv1alpha1.ShadowDaemonSet
@@ -406,13 +407,13 @@ func listAllShadowDaemonSet(lister kosmoslister.ShadowDaemonSetLister, ds *kosmo
return sds, nil
}
-func createShadowDaemonSet(ds *kosmosv1alpha1.DaemonSet, refType kosmosv1alpha1.RefType, nodeName string) *kosmosv1alpha1.ShadowDaemonSet {
+func createShadowDaemonSet(ds *kosmosv1alpha1.DaemonSet, refType kosmosv1alpha1.RefType, cluster string) *kosmosv1alpha1.ShadowDaemonSet {
suffix := "-host"
if refType != kosmosv1alpha1.RefTypeHost {
- suffix = "-" + nodeName
+ suffix = "-" + cluster
}
var sds *kosmosv1alpha1.ShadowDaemonSet
- if nodeName != "" {
+ if cluster != "" {
sds = &kosmosv1alpha1.ShadowDaemonSet{
ObjectMeta: metav1.ObjectMeta{
Annotations: ds.Annotations,
@@ -422,7 +423,7 @@ func createShadowDaemonSet(ds *kosmosv1alpha1.DaemonSet, refType kosmosv1alpha1.
OwnerReferences: []metav1.OwnerReference{*metav1.NewControllerRef(ds, ControllerKind)},
},
RefType: refType,
- Knode: nodeName,
+ Cluster: cluster,
DaemonSetSpec: ds.Spec,
}
} else {
diff --git a/pkg/clustertree/cluster-manager/extensions/daemonset/daemonset_mirror_controller.go b/pkg/clustertree/cluster-manager/extensions/daemonset/daemonset_mirror_controller.go
index 74499dd6d..4943fb7f0 100644
--- a/pkg/clustertree/cluster-manager/extensions/daemonset/daemonset_mirror_controller.go
+++ b/pkg/clustertree/cluster-manager/extensions/daemonset/daemonset_mirror_controller.go
@@ -123,13 +123,12 @@ func (dmc *DaemonSetsMirrorController) Run(ctx context.Context, workers int) {
}
dmc.processor.Run(workers, ctx.Done())
- <-ctx.Done()
}
func (dmc *DaemonSetsMirrorController) syncDaemonSet(key utils.QueueKey) error {
clusterWideKey, ok := key.(keys.ClusterWideKey)
if !ok {
- klog.V(2).Infof("invalid key type %T", key)
+ klog.Errorf("invalid key type %T", key)
return fmt.Errorf("invalid key")
}
@@ -138,7 +137,7 @@ func (dmc *DaemonSetsMirrorController) syncDaemonSet(key utils.QueueKey) error {
d, err := dmc.dsLister.DaemonSets(namespace).Get(name)
if apierrors.IsNotFound(err) {
- klog.V(3).Infof("daemon set has been deleted %v", key)
+		klog.V(4).Infof("daemon set has been deleted %v", key)
return nil
}
ds := d.DeepCopy()
@@ -179,12 +178,12 @@ func (dmc *DaemonSetsMirrorController) syncDaemonSet(key utils.QueueKey) error {
}
_, err := dmc.kosmosClient.KosmosV1alpha1().DaemonSets(ds.Namespace).Create(context.Background(), ds, metav1.CreateOptions{})
if err != nil {
- klog.V(2).Infof("failed to create kosmos daemon set %v", err)
+ klog.Errorf("failed to create kosmos daemon set %v", err)
return err
}
return nil
} else {
- klog.V(2).Infof("failed to get kosmos daemon set %v", err)
+ klog.Errorf("failed to get kosmos daemon set %v", err)
return err
}
}
@@ -202,7 +201,7 @@ func (dmc *DaemonSetsMirrorController) syncDaemonSet(key utils.QueueKey) error {
kds.Labels = labels.Set{ManagedLabel: ""}
kds, err = dmc.kosmosClient.KosmosV1alpha1().DaemonSets(ds.Namespace).Update(context.Background(), kds, metav1.UpdateOptions{})
if err != nil {
- klog.V(2).Infof("failed to update shadow daemon set %v", err)
+ klog.Errorf("failed to update shadow daemon set %v", err)
return err
}
ds.Status.CurrentNumberScheduled = kds.Status.CurrentNumberScheduled
@@ -217,7 +216,7 @@ func (dmc *DaemonSetsMirrorController) syncDaemonSet(key utils.QueueKey) error {
ds.Status.Conditions = kds.Status.Conditions
_, err = dmc.kubeClient.AppsV1().DaemonSets(ds.Namespace).UpdateStatus(context.Background(), ds, metav1.UpdateOptions{})
if err != nil {
- klog.V(2).Infof("failed to update daemon set status %v", err)
+ klog.Errorf("failed to update daemon set status %v", err)
return err
}
return nil
@@ -247,7 +246,7 @@ func (dmc *DaemonSetsMirrorController) ProcessKosmosDaemonSet(obj interface{}) {
ds, err := dmc.dsLister.DaemonSets(kds.Namespace).Get(kds.Name)
if err != nil {
if !apierrors.IsNotFound(err) {
- klog.V(2).Infof("failed to get daemon set %v", err)
+ klog.Errorf("failed to get daemon set %v", err)
}
return
}
diff --git a/pkg/clustertree/cluster-manager/extensions/daemonset/distribute_controller.go b/pkg/clustertree/cluster-manager/extensions/daemonset/distribute_controller.go
index d84d1f226..e85581b0c 100644
--- a/pkg/clustertree/cluster-manager/extensions/daemonset/distribute_controller.go
+++ b/pkg/clustertree/cluster-manager/extensions/daemonset/distribute_controller.go
@@ -3,6 +3,7 @@ package daemonset
import (
"context"
"fmt"
+ "reflect"
"sync"
appsv1 "k8s.io/api/apps/v1"
@@ -37,17 +38,17 @@ type DistributeController struct {
sdsLister kosmoslister.ShadowDaemonSetLister
- kNodeLister kosmoslister.KnodeLister
+ clusterLister kosmoslister.ClusterLister
shadowDaemonSetSynced cache.InformerSynced
- kNodeSynced cache.InformerSynced
+ clusterSynced cache.InformerSynced
- knodeProcessor utils.AsyncWorker
+ clusterProcessor utils.AsyncWorker
shadowDaemonSetProcessor utils.AsyncWorker
- knodeDaemonSetManagerMap map[string]*KNodeDaemonSetManager
+ clusterDaemonSetManagerMap map[string]*clusterDaemonSetManager
rateLimiterOptions flags.Options
@@ -57,24 +58,24 @@ type DistributeController struct {
func NewDistributeController(
kosmosClient versioned.Interface,
sdsInformer kosmosinformer.ShadowDaemonSetInformer,
- knodeInformer kosmosinformer.KnodeInformer,
+ clusterInformer kosmosinformer.ClusterInformer,
rateLimiterOptions flags.Options,
) *DistributeController {
dc := &DistributeController{
- kosmosClient: kosmosClient,
- sdsLister: sdsInformer.Lister(),
- kNodeLister: knodeInformer.Lister(),
- shadowDaemonSetSynced: sdsInformer.Informer().HasSynced,
- kNodeSynced: knodeInformer.Informer().HasSynced,
- knodeDaemonSetManagerMap: make(map[string]*KNodeDaemonSetManager),
- rateLimiterOptions: rateLimiterOptions,
+ kosmosClient: kosmosClient,
+ sdsLister: sdsInformer.Lister(),
+ clusterLister: clusterInformer.Lister(),
+ shadowDaemonSetSynced: sdsInformer.Informer().HasSynced,
+ clusterSynced: clusterInformer.Informer().HasSynced,
+ clusterDaemonSetManagerMap: make(map[string]*clusterDaemonSetManager),
+ rateLimiterOptions: rateLimiterOptions,
}
// nolint:errcheck
- knodeInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
- AddFunc: dc.addKNode,
- UpdateFunc: dc.updateKNode,
- DeleteFunc: dc.deleteKNode,
+ clusterInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
+ AddFunc: dc.addCluster,
+ UpdateFunc: dc.updateCluster,
+ DeleteFunc: dc.deleteCluster,
})
// nolint:errcheck
sdsInformer.Informer().AddEventHandler(cache.FilteringResourceEventHandler{
@@ -97,15 +98,15 @@ func (dc *DistributeController) Run(ctx context.Context, workers int) {
klog.Infof("Starting distribute controller")
defer klog.Infof("Shutting down distribute controller")
- knodeOpt := utils.Options{
- Name: "distribute controller: KNode",
+ clusterOpt := utils.Options{
+ Name: "distribute controller: cluster",
KeyFunc: func(obj interface{}) (utils.QueueKey, error) {
return keys.ClusterWideKeyFunc(obj)
},
- ReconcileFunc: dc.syncKNode,
+ ReconcileFunc: dc.syncCluster,
RateLimiterOptions: dc.rateLimiterOptions,
}
- dc.knodeProcessor = utils.NewAsyncWorker(knodeOpt)
+ dc.clusterProcessor = utils.NewAsyncWorker(clusterOpt)
sdsOpt := utils.Options{
Name: "distribute controller: ShadowDaemonSet",
@@ -117,17 +118,16 @@ func (dc *DistributeController) Run(ctx context.Context, workers int) {
}
dc.shadowDaemonSetProcessor = utils.NewAsyncWorker(sdsOpt)
- if !cache.WaitForNamedCacheSync("host_daemon_controller", ctx.Done(), dc.shadowDaemonSetSynced, dc.kNodeSynced) {
+ if !cache.WaitForNamedCacheSync("host_daemon_controller", ctx.Done(), dc.shadowDaemonSetSynced, dc.clusterSynced) {
klog.V(2).Infof("Timed out waiting for caches to sync")
return
}
- dc.knodeProcessor.Run(workers, ctx.Done())
+ dc.clusterProcessor.Run(workers, ctx.Done())
dc.shadowDaemonSetProcessor.Run(workers, ctx.Done())
- <-ctx.Done()
}
-func (dc *DistributeController) syncKNode(key utils.QueueKey) error {
+func (dc *DistributeController) syncCluster(key utils.QueueKey) error {
dc.lock.Lock()
defer dc.lock.Unlock()
clusterWideKey, ok := key.(keys.ClusterWideKey)
@@ -136,27 +136,27 @@ func (dc *DistributeController) syncKNode(key utils.QueueKey) error {
return fmt.Errorf("invalid key")
}
name := clusterWideKey.Name
- knode, err := dc.kNodeLister.Get(name)
+ cluster, err := dc.clusterLister.Get(name)
if err != nil {
if apierrors.IsNotFound(err) {
- klog.V(3).Infof("knode has been deleted %v", key)
+ klog.V(3).Infof("cluster has been deleted %v", key)
return nil
}
return err
}
- manager, ok := dc.knodeDaemonSetManagerMap[knode.Name]
+ manager, ok := dc.clusterDaemonSetManagerMap[cluster.Name]
if !ok {
- config, err := clientcmd.RESTConfigFromKubeConfig(knode.Spec.Kubeconfig)
+ config, err := clientcmd.RESTConfigFromKubeConfig(cluster.Spec.Kubeconfig)
if err != nil {
- klog.V(2).Infof("failed to create rest config for knode %s: %v", knode.Name, err)
+ klog.V(2).Infof("failed to create rest config for cluster %s: %v", cluster.Name, err)
return err
}
kubeClient, err := kubernetes.NewForConfig(config)
if err != nil {
- klog.V(2).Infof("failed to create kube client for knode %s: %v", knode.Name, err)
+ klog.V(2).Infof("failed to create kube client for cluster %s: %v", cluster.Name, err)
}
kubeFactory := informers.NewSharedInformerFactory(kubeClient, 0)
@@ -169,7 +169,7 @@ func (dc *DistributeController) syncKNode(key utils.QueueKey) error {
if !ok {
return false
}
- if ds.Labels[ManagedLabel] == "" {
+ if _, ok := ds.Labels[ManagedLabel]; ok {
return true
}
return false
@@ -182,39 +182,40 @@ func (dc *DistributeController) syncKNode(key utils.QueueKey) error {
})
daemonsetSynced := dsInformer.Informer().HasSynced()
- manager = NewKNodeDaemonSetManager(
+ manager = NewClusterDaemonSetManager(
+ name,
kubeClient,
dsInformer,
kubeFactory,
daemonsetSynced,
)
- dc.knodeDaemonSetManagerMap[knode.Name] = manager
+ dc.clusterDaemonSetManagerMap[cluster.Name] = manager
manager.Start()
}
- if knode.DeletionTimestamp != nil {
+ if cluster.DeletionTimestamp != nil {
list, error := manager.dsLister.List(labels.Set{ManagedLabel: ""}.AsSelector())
if error != nil {
- klog.V(2).Infof("failed to list daemonsets from knode %s: %v", knode.Name, error)
+ klog.V(2).Infof("failed to list daemonsets from cluster %s: %v", cluster.Name, error)
return error
}
for i := range list {
ds := list[i]
error := manager.kubeClient.AppsV1().DaemonSets(ds.Namespace).Delete(context.Background(), ds.Name, metav1.DeleteOptions{})
if err != nil {
- klog.V(2).Infof("failed to delete daemonset %s/%s from knode %s: %v", ds.Namespace, ds.Name, knode.Name, error)
+ klog.V(2).Infof("failed to delete daemonset %s/%s from cluster %s: %v", ds.Namespace, ds.Name, cluster.Name, error)
return error
}
}
- err = dc.removeKNodeFinalizer(knode)
+ err = dc.removeClusterFinalizer(cluster)
if err != nil {
return err
}
manager.Stop()
- delete(dc.knodeDaemonSetManagerMap, knode.Name)
+ delete(dc.clusterDaemonSetManagerMap, cluster.Name)
return err
}
- return dc.ensureKNodeFinalizer(knode)
+ return dc.ensureClusterFinalizer(cluster)
}
func (dc *DistributeController) syncShadowDaemonSet(key utils.QueueKey) error {
@@ -222,7 +223,7 @@ func (dc *DistributeController) syncShadowDaemonSet(key utils.QueueKey) error {
defer dc.lock.RUnlock()
clusterWideKey, ok := key.(keys.ClusterWideKey)
if !ok {
- klog.V(2).Infof("invalid key type %T", key)
+ klog.Errorf("invalid key type %T", key)
return fmt.Errorf("invalid key")
}
@@ -232,31 +233,31 @@ func (dc *DistributeController) syncShadowDaemonSet(key utils.QueueKey) error {
sds, err := dc.sdsLister.ShadowDaemonSets(namespace).Get(name)
if apierrors.IsNotFound(err) {
- klog.V(2).Infof("daemon set has been deleted %v", key)
+		klog.V(4).Infof("daemon set has been deleted %v", key)
return nil
}
- knode, err := dc.kNodeLister.Get(sds.Knode)
+ cluster, err := dc.clusterLister.Get(sds.Cluster)
if err != nil && !apierrors.IsNotFound(err) {
- klog.V(2).Infof("failed to get knode %s: %v", sds.Knode, err)
+ klog.Errorf("failed to get cluster %s: %v", sds.Cluster, err)
return err
}
- // when knode is deleting or not found, skip sync
- if knode == nil || knode.DeletionTimestamp != nil {
+ // when cluster is deleting or not found, skip sync
+ if cluster == nil || cluster.DeletionTimestamp != nil {
return dc.removeShadowDaemonSetFinalizer(sds)
}
- manager, ok := dc.knodeDaemonSetManagerMap[sds.Knode]
+ manager, ok := dc.clusterDaemonSetManagerMap[sds.Cluster]
if !ok {
- msg := fmt.Sprintf("no manager found for knode %s", sds.Knode)
- klog.V(2).Info(msg)
+ msg := fmt.Sprintf("no manager found for cluster %s", sds.Cluster)
+		klog.Error(msg)
return fmt.Errorf(msg)
}
if sds.DeletionTimestamp != nil {
err := manager.kubeClient.AppsV1().DaemonSets(sds.Namespace).Delete(context.Background(), sds.Name, metav1.DeleteOptions{})
if err != nil && !apierrors.IsNotFound(err) {
- klog.V(2).Infof("failed to delete daemonset %s/%s from knode %s: %v", sds.Namespace, sds.Name, sds.Knode, err)
+ klog.Errorf("failed to delete daemonset %s/%s from cluster %s: %v", sds.Namespace, sds.Name, sds.Cluster, err)
return err
}
return dc.removeShadowDaemonSetFinalizer(sds)
@@ -264,28 +265,28 @@ func (dc *DistributeController) syncShadowDaemonSet(key utils.QueueKey) error {
sds, err = dc.ensureShadowDaemonSetFinalizer(sds)
if err != nil {
- klog.V(2).Infof("failed to ensure finalizer for shadow daemonset %s/%s: %v", namespace, name, err)
+ klog.Errorf("failed to ensure finalizer for shadow daemonset %s/%s: %v", namespace, name, err)
return err
}
copy := sds.DeepCopy()
- err = manager.tryCreateOrUpdateDaemonset(sds)
+ err = manager.tryCreateOrUpdateDaemonSet(sds)
if err != nil {
- klog.V(2).Infof("failed to create or update daemonset %s/%s: %v", namespace, name, err)
+ klog.Errorf("failed to create or update daemonset %s/%s: %v", namespace, name, err)
return err
}
ds, error := manager.dsLister.DaemonSets(sds.Namespace).Get(sds.Name)
if error != nil {
- klog.V(2).Infof("failed to get daemonset %s/%s: %v", namespace, name, error)
+ klog.Errorf("failed to get daemonset %s/%s: %v", namespace, name, error)
return error
}
error = dc.updateStatus(copy, ds)
if error != nil {
- klog.V(2).Infof("failed to update status for shadow daemonset %s/%s: %v", namespace, name, error)
+ klog.Errorf("failed to update status for shadow daemonset %s/%s: %v", namespace, name, error)
return error
}
return nil
@@ -319,29 +320,29 @@ func (dc *DistributeController) removeShadowDaemonSetFinalizer(sds *v1alpha1.Sha
return nil
}
-func (dc *DistributeController) ensureKNodeFinalizer(knode *v1alpha1.Knode) error {
- if controllerutil.ContainsFinalizer(knode, DistributeControllerFinalizer) {
+func (dc *DistributeController) ensureClusterFinalizer(cluster *v1alpha1.Cluster) error {
+ if controllerutil.ContainsFinalizer(cluster, DistributeControllerFinalizer) {
return nil
}
- controllerutil.AddFinalizer(knode, DistributeControllerFinalizer)
- _, err := dc.kosmosClient.KosmosV1alpha1().Knodes().Update(context.TODO(), knode, metav1.UpdateOptions{})
+ controllerutil.AddFinalizer(cluster, DistributeControllerFinalizer)
+ _, err := dc.kosmosClient.KosmosV1alpha1().Clusters().Update(context.TODO(), cluster, metav1.UpdateOptions{})
if err != nil {
- klog.Errorf("knode %s failed add finalizer: %v", knode.Name, err)
+		klog.Errorf("failed to add finalizer to cluster %s: %v", cluster.Name, err)
return err
}
return nil
}
-func (dc *DistributeController) removeKNodeFinalizer(knode *v1alpha1.Knode) error {
- if !controllerutil.ContainsFinalizer(knode, DistributeControllerFinalizer) {
+func (dc *DistributeController) removeClusterFinalizer(cluster *v1alpha1.Cluster) error {
+ if !controllerutil.ContainsFinalizer(cluster, DistributeControllerFinalizer) {
return nil
}
- controllerutil.RemoveFinalizer(knode, DistributeControllerFinalizer)
- _, err := dc.kosmosClient.KosmosV1alpha1().Knodes().Update(context.TODO(), knode, metav1.UpdateOptions{})
+ controllerutil.RemoveFinalizer(cluster, DistributeControllerFinalizer)
+ _, err := dc.kosmosClient.KosmosV1alpha1().Clusters().Update(context.TODO(), cluster, metav1.UpdateOptions{})
if err != nil {
- klog.Errorf("knode %s failed remove finalizer: %v", knode.Name, err)
+		klog.Errorf("failed to remove finalizer from cluster %s: %v", cluster.Name, err)
return err
}
return nil
@@ -358,25 +359,23 @@ func (dc *DistributeController) updateStatus(sds *v1alpha1.ShadowDaemonSet, ds *
sds.Status.NumberUnavailable = ds.Status.NumberUnavailable
sds.Status.CollisionCount = ds.Status.CollisionCount
sds.Status.Conditions = ds.Status.Conditions
- _, error := dc.kosmosClient.KosmosV1alpha1().ShadowDaemonSets(sds.Namespace).UpdateStatus(context.Background(), sds, metav1.UpdateOptions{})
- return error
+ _, err := dc.kosmosClient.KosmosV1alpha1().ShadowDaemonSets(sds.Namespace).UpdateStatus(context.Background(), sds, metav1.UpdateOptions{})
+ return err
}
-func (dc *DistributeController) addKNode(obj interface{}) {
- ds := obj.(*v1alpha1.Knode)
- klog.V(4).Infof("Adding daemon set %s", ds.Name)
- dc.knodeProcessor.Enqueue(ds)
+func (dc *DistributeController) addCluster(obj interface{}) {
+ ds := obj.(*v1alpha1.Cluster)
+ dc.clusterProcessor.Enqueue(ds)
}
-func (dc *DistributeController) updateKNode(oldObj, newObj interface{}) {
- newDS := newObj.(*v1alpha1.Knode)
- klog.V(4).Infof("Updating daemon set %s", newDS.Name)
- dc.knodeProcessor.Enqueue(newDS)
+func (dc *DistributeController) updateCluster(oldObj, newObj interface{}) {
+ newDS := newObj.(*v1alpha1.Cluster)
+ dc.clusterProcessor.Enqueue(newDS)
}
-func (dc *DistributeController) deleteKNode(obj interface{}) {
- ds := obj.(*v1alpha1.Knode)
- dc.knodeProcessor.Enqueue(ds)
+func (dc *DistributeController) deleteCluster(obj interface{}) {
+ ds := obj.(*v1alpha1.Cluster)
+ dc.clusterProcessor.Enqueue(ds)
}
func (dc *DistributeController) addDaemonSet(obj interface{}) {
@@ -385,7 +384,12 @@ func (dc *DistributeController) addDaemonSet(obj interface{}) {
}
func (dc *DistributeController) updateDaemonSet(oldObj, newObj interface{}) {
+ oldDs := oldObj.(*appsv1.DaemonSet)
newDS := newObj.(*appsv1.DaemonSet)
+ if reflect.DeepEqual(oldDs.Status, newDS.Status) {
+		klog.V(4).Infof("member cluster daemon set %s/%s is unchanged, skip enqueue", newDS.Namespace, newDS.Name)
+ return
+ }
dc.shadowDaemonSetProcessor.Enqueue(newDS)
}
@@ -396,33 +400,36 @@ func (dc *DistributeController) deleteDaemonSet(obj interface{}) {
func (dc *DistributeController) addShadowDaemonSet(obj interface{}) {
ds := obj.(*v1alpha1.ShadowDaemonSet)
- klog.V(4).Infof("Adding daemon set %s", ds.Name)
dc.shadowDaemonSetProcessor.Enqueue(ds)
}
func (dc *DistributeController) updateShadowDaemonSet(oldObj, newObj interface{}) {
+ oldDs := oldObj.(*v1alpha1.ShadowDaemonSet)
newDS := newObj.(*v1alpha1.ShadowDaemonSet)
- klog.V(4).Infof("Updating daemon set %s", newDS.Name)
+ if reflect.DeepEqual(oldDs.DaemonSetSpec, newDS.DaemonSetSpec) &&
+ reflect.DeepEqual(oldDs.Annotations, newDS.Annotations) &&
+ reflect.DeepEqual(oldDs.Labels, newDS.Labels) &&
+ oldDs.DeletionTimestamp == newDS.DeletionTimestamp {
+		klog.V(4).Infof("shadow daemon set %s/%s is unchanged, skip enqueue", newDS.Namespace, newDS.Name)
+ return
+ }
dc.shadowDaemonSetProcessor.Enqueue(newDS)
}
func (dc *DistributeController) deleteShadowDaemonSet(obj interface{}) {
ds := obj.(*v1alpha1.ShadowDaemonSet)
- klog.V(4).Infof("Deleting daemon set %s", ds.Name)
dc.shadowDaemonSetProcessor.Enqueue(ds)
}
-type KNodeDaemonSetManager struct {
+type clusterDaemonSetManager struct {
+ name string
+
kubeClient clientset.Interface
dsLister appslisters.DaemonSetLister
factory informer.SharedInformerFactory
- version map[string]string
-
- lock sync.RWMutex
-
ctx context.Context
cancelFun context.CancelFunc
@@ -430,31 +437,30 @@ type KNodeDaemonSetManager struct {
daemonSetSynced cache.InformerSynced
}
-func (km *KNodeDaemonSetManager) Start() {
+func (km *clusterDaemonSetManager) Start() {
km.factory.Start(km.ctx.Done())
- if !cache.WaitForNamedCacheSync("km_manager", km.ctx.Done(), km.daemonSetSynced) {
+ if !cache.WaitForNamedCacheSync("distribute controller", km.ctx.Done(), km.daemonSetSynced) {
klog.Errorf("failed to wait for daemonSet caches to sync")
return
}
}
-func (km *KNodeDaemonSetManager) Stop() {
+func (km *clusterDaemonSetManager) Stop() {
if km.cancelFun != nil {
km.cancelFun()
}
}
-func (km *KNodeDaemonSetManager) tryCreateOrUpdateDaemonset(sds *v1alpha1.ShadowDaemonSet) error {
+func (km *clusterDaemonSetManager) tryCreateOrUpdateDaemonSet(sds *v1alpha1.ShadowDaemonSet) error {
err := km.ensureNameSpace(sds.Namespace)
if err != nil {
klog.V(4).Infof("ensure namespace %s failed: %v", sds.Namespace, err)
return err
}
- ds, error := km.dsLister.DaemonSets(sds.Namespace).Get(sds.Name)
- copyDs := ds.DeepCopy()
- if error != nil {
- if apierrors.IsNotFound(error) {
+ ds, err := km.getDaemonSet(sds.Namespace, sds.Name)
+ if err != nil {
+ if apierrors.IsNotFound(err) {
newDaemonSet := &appsv1.DaemonSet{}
newDaemonSet.Spec.Template = sds.DaemonSetSpec.Template
newDaemonSet.Spec.Selector = sds.DaemonSetSpec.Selector
@@ -466,49 +472,55 @@ func (km *KNodeDaemonSetManager) tryCreateOrUpdateDaemonset(sds *v1alpha1.Shadow
newDaemonSet.Labels = sds.Labels
newDaemonSet.Annotations = sds.Annotations
newDaemonSet.Labels = labels.Set{ManagedLabel: ""}
- newDs, error := km.kubeClient.AppsV1().DaemonSets(sds.Namespace).Create(context.Background(), newDaemonSet, metav1.CreateOptions{})
- if error != nil {
- klog.V(2).Infof("failed to create daemonset %s/%s: %v", sds.Namespace, sds.Name, error)
- return error
+ if newDaemonSet.Spec.Template.Annotations != nil {
+ newDaemonSet.Spec.Template.Annotations[ManagedLabel] = ""
+ newDaemonSet.Spec.Template.Annotations[ClusterAnnotationKey] = km.name
+ } else {
+ newDaemonSet.Spec.Template.Annotations = labels.Set{ManagedLabel: "", ClusterAnnotationKey: km.name}
+ }
+ _, err = km.kubeClient.AppsV1().DaemonSets(sds.Namespace).Create(context.Background(), newDaemonSet, metav1.CreateOptions{})
+ if err != nil {
+ klog.Errorf("failed to create daemonset %s/%s: %v", sds.Namespace, sds.Name, err)
+ return err
}
- km.updateVersion(newDs)
return nil
} else {
- klog.V(2).Infof("failed to get daemonset %s/%s: %v", sds.Namespace, sds.Name, error)
- return error
+ klog.Errorf("failed to get daemonset %s/%s: %v", sds.Namespace, sds.Name, err)
+ return err
}
}
- if copyDs.ResourceVersion == km.version[key(sds.ObjectMeta)] {
- return nil
- }
-
- copyDs.Spec.Template = sds.DaemonSetSpec.Template
- copyDs.Spec.Selector = sds.DaemonSetSpec.Selector
- copyDs.Spec.UpdateStrategy = sds.DaemonSetSpec.UpdateStrategy
- copyDs.Spec.MinReadySeconds = sds.DaemonSetSpec.MinReadySeconds
- copyDs.Spec.RevisionHistoryLimit = sds.DaemonSetSpec.RevisionHistoryLimit
+ ds.Spec.Template = sds.DaemonSetSpec.Template
+ ds.Spec.Selector = sds.DaemonSetSpec.Selector
+ ds.Spec.UpdateStrategy = sds.DaemonSetSpec.UpdateStrategy
+ ds.Spec.MinReadySeconds = sds.DaemonSetSpec.MinReadySeconds
+ ds.Spec.RevisionHistoryLimit = sds.DaemonSetSpec.RevisionHistoryLimit
for k, v := range sds.Labels {
- copyDs.Labels[k] = v
+ ds.Labels[k] = v
}
- copyDs.Labels[ManagedLabel] = ""
+ ds.Labels[ManagedLabel] = ""
for k, v := range sds.Annotations {
// TODO delete annotations which add by controller
- copyDs.Annotations[k] = v
+ ds.Annotations[k] = v
+ }
+ if ds.Spec.Template.Annotations != nil {
+ ds.Spec.Template.Annotations[ManagedLabel] = ""
+ ds.Spec.Template.Annotations[ClusterAnnotationKey] = km.name
+ } else {
+ ds.Spec.Template.Annotations = labels.Set{ManagedLabel: "", ClusterAnnotationKey: km.name}
}
- updated, error := km.kubeClient.AppsV1().DaemonSets(sds.Namespace).Update(context.Background(), copyDs, metav1.UpdateOptions{})
- if error != nil {
- klog.V(2).Infof("failed to update daemonset %s/%s: %v", sds.Namespace, sds.Name, error)
- return error
+ _, err = km.kubeClient.AppsV1().DaemonSets(sds.Namespace).Update(context.Background(), ds, metav1.UpdateOptions{})
+ if err != nil {
+ klog.Errorf("failed to update daemonset %s/%s: %v", sds.Namespace, sds.Name, err)
+ return err
}
- km.updateVersion(updated)
return nil
}
-func (km *KNodeDaemonSetManager) ensureNameSpace(namespace string) error {
+func (km *clusterDaemonSetManager) ensureNameSpace(namespace string) error {
ns := &corev1.Namespace{}
ns.Name = namespace
_, err := km.kubeClient.CoreV1().Namespaces().Create(context.Background(), ns, metav1.CreateOptions{})
@@ -516,33 +528,35 @@ func (km *KNodeDaemonSetManager) ensureNameSpace(namespace string) error {
if apierrors.IsAlreadyExists(err) {
return nil
}
- klog.V(2).Infof("failed to create namespace %s: %v", namespace, err)
+ klog.Errorf("failed to create namespace %s: %v", namespace, err)
return err
}
return nil
}
-func (km *KNodeDaemonSetManager) updateVersion(ds *appsv1.DaemonSet) {
- km.lock.Lock()
- defer km.lock.Unlock()
- km.version[key(ds.ObjectMeta)] = ds.ResourceVersion
+func (km *clusterDaemonSetManager) getDaemonSet(namespace, name string) (*appsv1.DaemonSet, error) {
+ ds, err := km.dsLister.DaemonSets(namespace).Get(name)
+ if err != nil {
+ ds, err = km.kubeClient.AppsV1().DaemonSets(namespace).Get(context.Background(), name, metav1.GetOptions{})
+ if err != nil {
+ return nil, err
+ }
+ return ds, nil
+ }
+ return ds.DeepCopy(), nil
}
-func NewKNodeDaemonSetManager(client *clientset.Clientset, dsInformer appsinformers.DaemonSetInformer, factory informer.SharedInformerFactory, synced bool) *KNodeDaemonSetManager {
+func NewClusterDaemonSetManager(name string, client *clientset.Clientset, dsInformer appsinformers.DaemonSetInformer, factory informer.SharedInformerFactory, synced bool) *clusterDaemonSetManager {
ctx := context.TODO()
ctx, cancel := context.WithCancel(ctx)
- return &KNodeDaemonSetManager{
+ return &clusterDaemonSetManager{
+ name: name,
kubeClient: client,
dsLister: dsInformer.Lister(),
factory: factory,
ctx: ctx,
cancelFun: cancel,
- version: map[string]string{},
daemonSetSynced: dsInformer.Informer().HasSynced,
}
}
-
-func key(obj metav1.ObjectMeta) string {
- return obj.Namespace + "/" + obj.Name
-}
diff --git a/pkg/clustertree/cluster-manager/extensions/daemonset/host_daemon_controller.go b/pkg/clustertree/cluster-manager/extensions/daemonset/host_daemon_controller.go
index d8f4dabfd..7e602b33a 100644
--- a/pkg/clustertree/cluster-manager/extensions/daemonset/host_daemon_controller.go
+++ b/pkg/clustertree/cluster-manager/extensions/daemonset/host_daemon_controller.go
@@ -182,10 +182,16 @@ func NewHostDaemonSetsController(
// Watch for creation/deletion of pods. The reason we watch is that we don't want a daemon set to create/delete
// more pods until all the effects (expectations) of a daemon set's create/delete have been observed.
// nolint:errcheck
- podInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
- AddFunc: dsc.addPod,
- UpdateFunc: dsc.updatePod,
- DeleteFunc: dsc.deletePod,
+ podInformer.Informer().AddEventHandler(cache.FilteringResourceEventHandler{
+ FilterFunc: func(obj interface{}) bool {
+			pod, ok := obj.(*v1.Pod)
+			if !ok {
+				return false
+			}
+			// keep only pods that were not reflected from a member cluster; indexing a nil map safely yields ""
+			return len(pod.Annotations[ClusterAnnotationKey]) == 0
+ },
+ Handler: cache.ResourceEventHandlerFuncs{
+ AddFunc: dsc.addPod,
+ UpdateFunc: dsc.updatePod,
+ DeleteFunc: dsc.deletePod,
+ },
})
dsc.podLister = podInformer.Lister()
dsc.podStoreSynced = podInformer.Informer().HasSynced
@@ -193,18 +199,9 @@ func NewHostDaemonSetsController(
// nolint:errcheck
nodeInformer.Informer().AddEventHandler(cache.FilteringResourceEventHandler{
FilterFunc: func(obj interface{}) bool {
- node, ok := obj.(*v1.Node)
- if !ok {
- return false
- }
-
+			node, ok := obj.(*v1.Node)
+			if !ok {
+				return false
+			}
// filter virtual node
- for _, taint := range node.Spec.Taints {
- if taint.Key == "kosmos.io/node" {
- return false
- }
- }
- return true
+ return !isVirtualNode(node)
},
Handler: cache.ResourceEventHandlerFuncs{
AddFunc: dsc.addNode,
@@ -645,6 +642,7 @@ func (dsc *HostDaemonSetsController) addNode(obj interface{}) {
klog.V(4).Infof("Error enqueueing daemon sets: %v", err)
return
}
+ dsList = filterHostShadowDS(dsList)
node := obj.(*v1.Node)
for _, ds := range dsList {
if shouldRun, _ := NodeShouldRunDaemonPod(node, ds); shouldRun {
@@ -703,6 +701,7 @@ func (dsc *HostDaemonSetsController) updateNode(old, cur interface{}) {
klog.V(4).Infof("Error listing daemon sets: %v", err)
return
}
+ dsList = filterHostShadowDS(dsList)
// TODO: it'd be nice to pass a hint with these enqueues, so that each ds would only examine the added node (unless it has other work to do, too).
for _, ds := range dsList {
oldShouldRun, oldShouldContinueRunning := NodeShouldRunDaemonPod(oldNode, ds)
@@ -725,10 +724,18 @@ func (dsc *HostDaemonSetsController) getDaemonPods(ctx context.Context, ds *kosm
// List all pods to include those that don't match the selector anymore but
// have a ControllerRef pointing to this controller.
- pods, err := dsc.podLister.Pods(ds.Namespace).List(labels.Everything())
+ allPods, err := dsc.podLister.Pods(ds.Namespace).List(labels.Everything())
if err != nil {
return nil, err
}
+ var pods []*v1.Pod
+ for i := range allPods {
+ pod := allPods[i]
+ if len(pod.Annotations[ClusterAnnotationKey]) == 0 {
+ pods = append(pods, pod)
+ }
+ }
+
// If any adoptions are attempted, we should first recheck for deletion with
// an uncached quorum read sometime after listing Pods (see #42639).
dsNotDeleted := controller.RecheckDeletionTimestamp(func(ctx context.Context) (metav1.Object, error) {
@@ -1225,10 +1232,17 @@ func (dsc *HostDaemonSetsController) syncDaemonSet(ctx context.Context, key stri
return fmt.Errorf("unable to retrieve ds %v from store: %v", key, err)
}
- nodeList, err := dsc.nodeLister.List(labels.Everything())
+ allNode, err := dsc.nodeLister.List(labels.Everything())
if err != nil {
return fmt.Errorf("couldn't get list of nodes when syncing daemon set %#v: %v", ds, err)
}
+ var nodeList []*v1.Node
+ for i := range allNode {
+ node := allNode[i]
+ if !isVirtualNode(node) {
+ nodeList = append(nodeList, node)
+ }
+ }
everything := metav1.LabelSelector{}
if reflect.DeepEqual(ds.DaemonSetSpec.Selector, &everything) {
@@ -1463,3 +1477,23 @@ func GetHistoryDaemonSets(history *apps.ControllerRevision, s kosmoslister.Shado
return daemonSets, nil
}
+
+func isVirtualNode(node *v1.Node) bool {
+ for _, taint := range node.Spec.Taints {
+ if taint.Key == "kosmos.io/node" {
+ return true
+ }
+ }
+ return false
+}
+
+func filterHostShadowDS(dsList []*kosmosv1alpha1.ShadowDaemonSet) []*kosmosv1alpha1.ShadowDaemonSet {
+ var filtered []*kosmosv1alpha1.ShadowDaemonSet
+ for i := range dsList {
+ ds := dsList[i]
+ if ds.RefType == kosmosv1alpha1.RefTypeHost {
+ filtered = append(filtered, ds)
+ }
+ }
+ return filtered
+}
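The extracted `isVirtualNode` helper centralizes the taint check that the node informer filter, `addNode`/`updateNode`, and `syncDaemonSet` now all share. A self-contained sketch of the same check, with toy `Node`/`Taint` types standing in for the `corev1` types:

```go
package main

import "fmt"

// Taint and Node are toy stand-ins for the corev1 types.
type Taint struct{ Key string }
type Node struct{ Taints []Taint }

// isVirtualNode reports whether a node carries the kosmos virtual-node taint,
// matching the filter the host daemon set controller applies everywhere it
// lists or watches nodes.
func isVirtualNode(node *Node) bool {
	for _, t := range node.Taints {
		if t.Key == "kosmos.io/node" {
			return true
		}
	}
	return false
}

func main() {
	physical := &Node{Taints: []Taint{{Key: "node.kubernetes.io/unreachable"}}}
	virtual := &Node{Taints: []Taint{{Key: "kosmos.io/node"}}}
	fmt.Println(isVirtualNode(physical), isVirtualNode(virtual))
}
```

Centralizing the check means a future change to the virtual-node marker (taint key or a label) touches one function instead of three call sites.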
diff --git a/pkg/clustertree/cluster-manager/extensions/daemonset/pod_reflect_controller.go b/pkg/clustertree/cluster-manager/extensions/daemonset/pod_reflect_controller.go
new file mode 100644
index 000000000..f4a31d83a
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/extensions/daemonset/pod_reflect_controller.go
@@ -0,0 +1,428 @@
+package daemonset
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "strings"
+ "sync"
+
+ appsv1 "k8s.io/api/apps/v1"
+ corev1 "k8s.io/api/core/v1"
+ apierrors "k8s.io/apimachinery/pkg/api/errors"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ utilruntime "k8s.io/apimachinery/pkg/util/runtime"
+ "k8s.io/client-go/informers"
+ appsv1informers "k8s.io/client-go/informers/apps/v1"
+ corev1informers "k8s.io/client-go/informers/core/v1"
+ clientset "k8s.io/client-go/kubernetes"
+ appslisters "k8s.io/client-go/listers/apps/v1"
+ corelisters "k8s.io/client-go/listers/core/v1"
+ "k8s.io/client-go/tools/cache"
+ "k8s.io/client-go/tools/clientcmd"
+ "k8s.io/klog/v2"
+
+ kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ kosmosinformer "github.com/kosmos.io/kosmos/pkg/generated/informers/externalversions/kosmos/v1alpha1"
+ kosmoslister "github.com/kosmos.io/kosmos/pkg/generated/listers/kosmos/v1alpha1"
+ "github.com/kosmos.io/kosmos/pkg/utils"
+ "github.com/kosmos.io/kosmos/pkg/utils/flags"
+ "github.com/kosmos.io/kosmos/pkg/utils/keys"
+)
+
+var KosmosDaemonSetKind = kosmosv1alpha1.SchemeGroupVersion.WithKind("DaemonSet")
+var DaemonSetKind = appsv1.SchemeGroupVersion.WithKind("DaemonSet")
+
+type PodReflectorController struct {
+ // host cluster kube client
+ kubeClient clientset.Interface
+
+ dsLister appslisters.DaemonSetLister
+
+ kdsLister kosmoslister.DaemonSetLister
+
+ clusterLister kosmoslister.ClusterLister
+
+ podLister corelisters.PodLister
+
+ // member cluster podManager map
+ podManagerMap map[string]ClusterPodManager
+
+ daemonsetSynced cache.InformerSynced
+
+ kdaemonsetSynced cache.InformerSynced
+
+ clusterSynced cache.InformerSynced
+
+ podSynced cache.InformerSynced
+
+ clusterProcessor utils.AsyncWorker
+
+ podProcessor utils.AsyncWorker
+
+ rateLimiterOptions flags.Options
+
+ lock sync.RWMutex
+}
+
+func NewPodReflectorController(kubeClient clientset.Interface,
+ dsInformer appsv1informers.DaemonSetInformer,
+ kdsInformer kosmosinformer.DaemonSetInformer,
+ clusterInformer kosmosinformer.ClusterInformer,
+ podInformer corev1informers.PodInformer,
+ rateLimiterOptions flags.Options,
+) *PodReflectorController {
+ pc := &PodReflectorController{
+ kubeClient: kubeClient,
+ dsLister: dsInformer.Lister(),
+ kdsLister: kdsInformer.Lister(),
+ clusterLister: clusterInformer.Lister(),
+ podLister: podInformer.Lister(),
+ daemonsetSynced: dsInformer.Informer().HasSynced,
+ kdaemonsetSynced: kdsInformer.Informer().HasSynced,
+ clusterSynced: clusterInformer.Informer().HasSynced,
+		podSynced:          podInformer.Informer().HasSynced,
+ podManagerMap: map[string]ClusterPodManager{},
+ rateLimiterOptions: rateLimiterOptions,
+ }
+
+ // nolint:errcheck
+ clusterInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
+ AddFunc: pc.addCluster,
+ DeleteFunc: pc.deleteCluster,
+ })
+ // nolint:errcheck
+ podInformer.Informer().AddEventHandler(cache.FilteringResourceEventHandler{
+		FilterFunc: func(obj interface{}) bool {
+			pod, ok := obj.(*corev1.Pod)
+			if !ok {
+				return false
+			}
+			_, ok = pod.Annotations[ManagedLabel]
+			return ok
+		},
+ Handler: cache.ResourceEventHandlerFuncs{
+ AddFunc: pc.addPod,
+ UpdateFunc: pc.updatePod,
+ DeleteFunc: pc.deletePod,
+ },
+ })
+
+ return pc
+}
+
+func (pc *PodReflectorController) Run(ctx context.Context, workers int) {
+ defer utilruntime.HandleCrash()
+
+ klog.Infof("Starting pod reflector controller")
+ defer klog.Infof("Shutting down pod reflector controller")
+
+ clusterOpt := utils.Options{
+ Name: "pod reflector controller: cluster",
+ KeyFunc: func(obj interface{}) (utils.QueueKey, error) {
+ return keys.ClusterWideKeyFunc(obj)
+ },
+ ReconcileFunc: pc.syncCluster,
+ RateLimiterOptions: pc.rateLimiterOptions,
+ }
+ pc.clusterProcessor = utils.NewAsyncWorker(clusterOpt)
+
+ podOpt := utils.Options{
+ Name: "pod reflector controller: pod",
+ KeyFunc: func(obj interface{}) (utils.QueueKey, error) {
+ pod := obj.(*corev1.Pod)
+ cluster := getCluster(pod)
+ if len(cluster) == 0 {
+				return nil, fmt.Errorf("pod is not managed by a kosmos daemon set")
+ }
+ return keys.FederatedKeyFunc(cluster, obj)
+ },
+ ReconcileFunc: pc.syncPod,
+ RateLimiterOptions: pc.rateLimiterOptions,
+ }
+ pc.podProcessor = utils.NewAsyncWorker(podOpt)
+ if !cache.WaitForNamedCacheSync("pod_reflector_controller", ctx.Done(), pc.daemonsetSynced, pc.kdaemonsetSynced, pc.podSynced, pc.clusterSynced) {
+ klog.Errorf("Timed out waiting for caches to sync")
+ return
+ }
+ pc.clusterProcessor.Run(1, ctx.Done())
+ pc.podProcessor.Run(1, ctx.Done())
+}
+
+func getCluster(pod *corev1.Pod) string {
+ return pod.Annotations[ClusterAnnotationKey]
+}
+
+func (pc *PodReflectorController) syncCluster(key utils.QueueKey) error {
+ pc.lock.Lock()
+ defer pc.lock.Unlock()
+ clusterWideKey, exist := key.(keys.ClusterWideKey)
+ if !exist {
+ klog.Errorf("invalid key type %T", key)
+ return fmt.Errorf("invalid key")
+ }
+ name := clusterWideKey.Name
+ cluster, err := pc.clusterLister.Get(name)
+
+ if err != nil {
+ if apierrors.IsNotFound(err) {
+ klog.V(3).Infof("cluster has been deleted %v", key)
+ return nil
+ }
+ return err
+ }
+ manager, exist := pc.podManagerMap[cluster.Name]
+ if cluster.DeletionTimestamp != nil {
+ if exist {
+ manager.Stop()
+ delete(pc.podManagerMap, cluster.Name)
+ }
+ return nil
+ }
+
+ if !exist {
+ config, err := clientcmd.RESTConfigFromKubeConfig(cluster.Spec.Kubeconfig)
+ if err != nil {
+ klog.Errorf("failed to create rest config for cluster %s: %v", cluster.Name, err)
+ return err
+ }
+ kubeClient, err := clientset.NewForConfig(config)
+ if err != nil {
+			klog.Errorf("failed to create kube client for cluster %s: %v", cluster.Name, err)
+			return err
+		}
+ kubeFactory := informers.NewSharedInformerFactory(kubeClient, 0)
+ podInformer := kubeFactory.Core().V1().Pods()
+ // nolint:errcheck
+ podInformer.Informer().AddEventHandler(cache.FilteringResourceEventHandler{
+ FilterFunc: func(obj interface{}) bool {
+ pod, ok := obj.(*corev1.Pod)
+ if !ok {
+ return false
+ }
+ _, ok = pod.Annotations[ManagedLabel]
+ return ok
+ },
+ Handler: cache.ResourceEventHandlerFuncs{
+ AddFunc: pc.addPod,
+ UpdateFunc: pc.updatePod,
+ DeleteFunc: pc.deletePod,
+ },
+ })
+ manager = NewClusterPodManager(kubeClient, podInformer, kubeFactory)
+ pc.podManagerMap[cluster.Name] = manager
+ manager.Start()
+ }
+ return nil
+}
+
+func (pc *PodReflectorController) syncPod(key utils.QueueKey) error {
+ pc.lock.RLock()
+ defer pc.lock.RUnlock()
+ fedKey, ok := key.(keys.FederatedKey)
+ if !ok {
+ klog.Errorf("invalid key type %T", key)
+ return fmt.Errorf("invalid key")
+ }
+ cluster := fedKey.Cluster
+ name := fedKey.Name
+ namespace := fedKey.Namespace
+ manager, ok := pc.podManagerMap[cluster]
+ if !ok {
+ msg := fmt.Sprintf("cluster %s not found", cluster)
+ return errors.New(msg)
+ }
+ memberClusterPod, err := manager.GetPod(namespace, name)
+ if err != nil {
+ if apierrors.IsNotFound(err) {
+			// pod not found in the member cluster: it may have been deleted, so delete it from the host cluster as well
+			err := pc.kubeClient.CoreV1().Pods(namespace).Delete(context.Background(), name, metav1.DeleteOptions{})
+			if err != nil && !apierrors.IsNotFound(err) {
+				return err
+			}
+ return nil
+ }
+ klog.Errorf("failed to get pod %s/%s from member cluster %s: %v", namespace, name, cluster, err)
+ return err
+ }
+ if memberClusterPod.DeletionTimestamp != nil {
+ err := pc.kubeClient.CoreV1().Pods(memberClusterPod.Namespace).Delete(context.Background(), memberClusterPod.Name, metav1.DeleteOptions{})
+ if err != nil {
+ if apierrors.IsNotFound(err) {
+ return nil
+ }
+ klog.Errorf("failed to delete pod %s/%s from member cluster %s: %v", namespace, name, cluster, err)
+ return err
+		}
+		// the member cluster pod is terminating; do not recreate its reflection in the host cluster
+		return nil
+	}
+ return pc.tryUpdateOrCreate(memberClusterPod)
+}
+
+func (pc *PodReflectorController) addPod(obj interface{}) {
+ pod := obj.(*corev1.Pod)
+ pc.podProcessor.Enqueue(pod)
+}
+
+func (pc *PodReflectorController) updatePod(oldObj interface{}, newObj interface{}) {
+	pod := newObj.(*corev1.Pod)
+	pc.podProcessor.Enqueue(pod)
+}
+
+func (pc *PodReflectorController) deletePod(obj interface{}) {
+ pod := obj.(*corev1.Pod)
+ pc.podProcessor.Enqueue(pod)
+}
+
+func (pc *PodReflectorController) tryUpdateOrCreate(pod *corev1.Pod) error {
+ clusterName := pod.Annotations[ClusterAnnotationKey]
+ shadowPod, err := pc.podLister.Pods(pod.Namespace).Get(pod.Name)
+ if err != nil {
+ shadowPod, err = pc.kubeClient.CoreV1().Pods(pod.Namespace).Get(context.Background(), pod.Name, metav1.GetOptions{})
+ if err != nil {
+ if apierrors.IsNotFound(err) {
+ newPod := pod.DeepCopy()
+ err := pc.changeOwnerRef(newPod)
+ if err != nil {
+ klog.Errorf("failed to change owner ref for pod %s/%s: %v", newPod.Namespace, newPod.Name, err)
+ return err
+ }
+ newPod.ResourceVersion = ""
+ newPod.Spec.NodeName = clusterName
+ _, err = pc.kubeClient.CoreV1().Pods(newPod.Namespace).Create(context.Background(), newPod, metav1.CreateOptions{})
+ if err != nil {
+ klog.Errorf("failed to create pod %s/%s: %v", newPod.Namespace, newPod.Name, err)
+ return err
+ }
+ return nil
+ }
+ klog.Errorf("failed to get pod %s/%s: %v", pod.Namespace, pod.Name, err)
+ return err
+ }
+ }
+	podCopy := shadowPod.DeepCopy()
+	podCopy.SetAnnotations(pod.Annotations)
+	podCopy.SetLabels(pod.Labels)
+	podCopy.Spec = pod.Spec
+	podCopy.Spec.NodeName = clusterName
+	podCopy.Status = pod.Status
+	podCopy.UID = ""
+	updated, err := pc.kubeClient.CoreV1().Pods(podCopy.Namespace).Update(context.Background(), podCopy, metav1.UpdateOptions{})
+ if err != nil {
+ klog.Errorf("failed to update pod %s/%s: %v", pod.Namespace, pod.Name, err)
+ return err
+ }
+ updated.Status = pod.Status
+ _, err = pc.kubeClient.CoreV1().Pods(pod.Namespace).UpdateStatus(context.Background(), updated, metav1.UpdateOptions{})
+ if err != nil {
+ klog.Errorf("failed to update pod %s/%s status: %v", pod.Namespace, pod.Name, err)
+ return err
+ }
+ return nil
+}
+
+func (pc *PodReflectorController) addCluster(obj interface{}) {
+ cluster := obj.(*kosmosv1alpha1.Cluster)
+ pc.clusterProcessor.Enqueue(cluster)
+}
+
+func (pc *PodReflectorController) deleteCluster(obj interface{}) {
+ cluster := obj.(*kosmosv1alpha1.Cluster)
+ pc.clusterProcessor.Enqueue(cluster)
+}
+
+func (pc *PodReflectorController) changeOwnerRef(pod *corev1.Pod) error {
+ var newOwnerReference []metav1.OwnerReference
+ for i := range pod.OwnerReferences {
+ ownRef := pod.OwnerReferences[i]
+ if ownRef.Kind == "DaemonSet" {
+ clusterName, ok := pod.Annotations[ClusterAnnotationKey]
+ if !ok {
+ continue
+ }
+ suffix := "-" + clusterName
+ ownerName := strings.TrimSuffix(ownRef.Name, suffix)
+ daemonset, err := pc.dsLister.DaemonSets(pod.Namespace).Get(ownerName)
+ if err != nil {
+ return err
+ }
+ kdaemonset, err := pc.kdsLister.DaemonSets(pod.Namespace).Get(ownerName)
+ if err != nil {
+ return err
+ }
+ if kdaemonset != nil {
+ newOwnerReference = append(newOwnerReference, metav1.OwnerReference{
+					APIVersion: KosmosDaemonSetKind.GroupVersion().String(),
+ Kind: KosmosDaemonSetKind.Kind,
+ Name: kdaemonset.Name,
+ UID: kdaemonset.UID,
+ })
+ }
+ if daemonset != nil {
+ _, isGlobalDs := daemonset.Annotations[MirrorAnnotation]
+ if isGlobalDs {
+ newOwnerReference = append(newOwnerReference, metav1.OwnerReference{
+						APIVersion: DaemonSetKind.GroupVersion().String(),
+ Kind: DaemonSetKind.Kind,
+ Name: daemonset.Name,
+ UID: daemonset.UID,
+ })
+ }
+ }
+ break
+ }
+ }
+ pod.OwnerReferences = newOwnerReference
+ return nil
+}
+
+type ClusterPodManager struct {
+ kubeClient clientset.Interface
+
+ podLister corelisters.PodLister
+
+ factory informers.SharedInformerFactory
+
+ ctx context.Context
+
+ cancelFun context.CancelFunc
+
+ podSynced cache.InformerSynced
+}
+
+func (k *ClusterPodManager) Start() {
+ k.factory.Start(k.ctx.Done())
+ if !cache.WaitForNamedCacheSync("pod reflect controller", k.ctx.Done(), k.podSynced) {
+ klog.Errorf("failed to wait for pod caches to sync")
+ return
+ }
+}
+
+func (k *ClusterPodManager) GetPod(namespace, name string) (*corev1.Pod, error) {
+ pod, err := k.podLister.Pods(namespace).Get(name)
+ if err != nil {
+ pod, err = k.kubeClient.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
+ if err != nil {
+ return nil, err
+ }
+ return pod, nil
+ }
+ return pod.DeepCopy(), nil
+}
+
+func (k *ClusterPodManager) Stop() {
+ if k.cancelFun != nil {
+ k.cancelFun()
+ }
+}
+
+func (k ClusterPodManager) GetPodLister() corelisters.PodLister {
+ return k.podLister
+}
+
+func NewClusterPodManager(kubeClient clientset.Interface, podInformer corev1informers.PodInformer, factory informers.SharedInformerFactory) ClusterPodManager {
+ ctx, cancelFun := context.WithCancel(context.Background())
+ return ClusterPodManager{
+ kubeClient: kubeClient,
+ podLister: podInformer.Lister(),
+ factory: factory,
+ ctx: ctx,
+ cancelFun: cancelFun,
+ podSynced: podInformer.Informer().HasSynced,
+ }
+}
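`changeOwnerRef` recovers the host-side owner name by trimming the `-<cluster>` suffix that a distributed DaemonSet's name carries in a member cluster. A tiny sketch of that naming convention (the `hostOwnerName` helper and the example names are hypothetical, only the `strings.TrimSuffix` call mirrors the controller):

```go
package main

import (
	"fmt"
	"strings"
)

// hostOwnerName strips the "-<cluster>" suffix from a member cluster
// DaemonSet name to recover the host-side owner's name. TrimSuffix returns
// the input unchanged when the suffix is absent, which is the behaviour the
// controller relies on for names that were never suffixed.
func hostOwnerName(memberName, clusterName string) string {
	return strings.TrimSuffix(memberName, "-"+clusterName)
}

func main() {
	fmt.Println(hostOwnerName("node-exporter-cluster1", "cluster1"))
	fmt.Println(hostOwnerName("node-exporter", "cluster1"))
}
```

Both calls above yield the same host-side name, so the lookup against `dsLister`/`kdsLister` works whether or not the member cluster renamed the object.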
diff --git a/pkg/clustertree/cluster-manager/node-server/api/errdefs.go b/pkg/clustertree/cluster-manager/node-server/api/errdefs.go
new file mode 100644
index 000000000..59c38885a
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/node-server/api/errdefs.go
@@ -0,0 +1,73 @@
+package api
+
+import (
+ "errors"
+ "fmt"
+)
+
+const (
+ ERR_NOT_FOUND = "ErrNotFound"
+ ERR_INVALID_INPUT = "ErrInvalidInput"
+)
+
+type causal interface {
+ Cause() error
+ error
+}
+
+type ErrNodeServer interface {
+ GetErrorType() string
+ error
+}
+
+type errNodeServer struct {
+ errType string
+ error
+}
+
+func (e *errNodeServer) GetErrorType() string {
+ return e.errType
+}
+
+func ErrNotFound(msg string) error {
+ return &errNodeServer{ERR_NOT_FOUND, errors.New(msg)}
+}
+
+func ErrInvalidInput(msg string) error {
+ return &errNodeServer{ERR_INVALID_INPUT, errors.New(msg)}
+}
+
+func IsMatchErrType(err error, errType string) bool {
+ if err == nil {
+ return false
+ }
+ if e, ok := err.(ErrNodeServer); ok {
+ return e.GetErrorType() == errType
+ }
+
+ if e, ok := err.(causal); ok {
+ return IsMatchErrType(e.Cause(), errType)
+ }
+
+ return false
+}
+
+func IsNotFound(err error) bool {
+ return IsMatchErrType(err, ERR_NOT_FOUND)
+}
+
+func IsInvalidInput(err error) bool {
+ return IsMatchErrType(err, ERR_INVALID_INPUT)
+}
+
+func ConvertNotFound(err error) error {
+ return &errNodeServer{ERR_NOT_FOUND, err}
+}
+
+func ConvertInvalidInput(err error) error {
+ return &errNodeServer{ERR_INVALID_INPUT, err}
+}
+
+func ErrInvalidInputf(format string, args ...interface{}) error {
+ return &errNodeServer{ERR_INVALID_INPUT, fmt.Errorf(format, args...)}
+}
diff --git a/pkg/clustertree/cluster-manager/node-server/api/exec.go b/pkg/clustertree/cluster-manager/node-server/api/exec.go
new file mode 100644
index 000000000..295772914
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/node-server/api/exec.go
@@ -0,0 +1,256 @@
+package api
+
+import (
+ "context"
+ "fmt"
+ "io"
+ "net/http"
+ "strings"
+ "time"
+
+ "github.com/gorilla/mux"
+ "github.com/pkg/errors"
+ corev1 "k8s.io/api/core/v1"
+ "k8s.io/apimachinery/pkg/types"
+ "k8s.io/apimachinery/pkg/util/httpstream"
+ "k8s.io/client-go/kubernetes/scheme"
+ remoteutils "k8s.io/client-go/tools/remotecommand"
+
+ "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/node-server/api/remotecommand"
+)
+
+type execIO struct {
+ tty bool
+ stdin io.Reader
+ stdout io.WriteCloser
+ stderr io.WriteCloser
+ chResize chan TermSize
+}
+
+type ContainerExecHandlerFunc func(ctx context.Context, namespace, podName, containerName string, cmd []string, attach AttachIO, getClient getClientFunc) error
+
+func (e *execIO) TTY() bool {
+ return e.tty
+}
+
+func (e *execIO) Stdin() io.Reader {
+ return e.stdin
+}
+
+func (e *execIO) Stdout() io.WriteCloser {
+ return e.stdout
+}
+
+func (e *execIO) Stderr() io.WriteCloser {
+ return e.stderr
+}
+
+func (e *execIO) Resize() <-chan TermSize {
+ return e.chResize
+}
+
+type containerExecutor struct {
+ h ContainerExecHandlerFunc
+ namespace, pod, container string
+ ctx context.Context
+ getClient getClientFunc
+}
+
+func (c *containerExecutor) ExecInContainer(name string, uid types.UID, container string, cmd []string, in io.Reader, out, err io.WriteCloser, tty bool, resize <-chan remoteutils.TerminalSize, timeout time.Duration) error {
+ eio := &execIO{
+ tty: tty,
+ stdin: in,
+ stdout: out,
+ stderr: err,
+ }
+
+ if tty {
+ eio.chResize = make(chan TermSize)
+ }
+
+ ctx, cancel := context.WithCancel(c.ctx)
+ defer cancel()
+
+ if tty {
+ go func() {
+ send := func(s remoteutils.TerminalSize) bool {
+ select {
+ case eio.chResize <- TermSize{Width: s.Width, Height: s.Height}:
+ return false
+ case <-ctx.Done():
+ return true
+ }
+ }
+
+ for {
+ select {
+ case s := <-resize:
+ if send(s) {
+ return
+ }
+ case <-ctx.Done():
+ return
+ }
+ }
+ }()
+ }
+
+ return c.h(c.ctx, c.namespace, c.pod, c.container, cmd, eio, c.getClient)
+}
+
+type AttachIO interface {
+ Stdin() io.Reader
+ Stdout() io.WriteCloser
+ Stderr() io.WriteCloser
+ TTY() bool
+ Resize() <-chan TermSize
+}
+
+type TermSize struct {
+ Width uint16
+ Height uint16
+}
+
+type termSize struct {
+ attach AttachIO
+}
+
+func (t *termSize) Next() *remoteutils.TerminalSize {
+ resize := <-t.attach.Resize()
+ return &remoteutils.TerminalSize{
+ Height: resize.Height,
+ Width: resize.Width,
+ }
+}
+
+type ContainerExecOptions struct {
+ StreamIdleTimeout time.Duration
+ StreamCreationTimeout time.Duration
+}
+
+func getVarFromReq(req *http.Request) (string, string, string, []string, []string) {
+ vars := mux.Vars(req)
+ namespace := vars[namespaceVar]
+ pod := vars[podVar]
+ container := vars[containerVar]
+
+ supportedStreamProtocols := strings.Split(req.Header.Get(httpstream.HeaderProtocolVersion), ",")
+
+ q := req.URL.Query()
+ command := q[commandVar]
+
+ return namespace, pod, container, supportedStreamProtocols, command
+}
+
+func getExecOptions(req *http.Request) (*remotecommand.Options, error) {
+ tty := req.FormValue(execTTYParam) == "1"
+ stdin := req.FormValue(execStdinParam) == "1"
+ stdout := req.FormValue(execStdoutParam) == "1"
+ stderr := req.FormValue(execStderrParam) == "1"
+
+ if tty && stderr {
+ return nil, errors.New("cannot exec with tty and stderr")
+ }
+
+ if !stdin && !stdout && !stderr {
+ return nil, errors.New("you must specify at least one of stdin, stdout, stderr")
+ }
+ return &remotecommand.Options{
+ Stdin: stdin,
+ Stdout: stdout,
+ Stderr: stderr,
+ TTY: tty,
+ }, nil
+}
+
+func execInContainer(ctx context.Context, namespace string, podName string, containerName string, cmd []string, attach AttachIO, getClient getClientFunc) error {
+ defer func() {
+ if attach.Stdout() != nil {
+ attach.Stdout().Close()
+ }
+ if attach.Stderr() != nil {
+ attach.Stderr().Close()
+ }
+ }()
+
+	client, config, err := getClient(ctx, namespace, podName)
+	if err != nil {
+		return fmt.Errorf("could not get the leaf client, podName: %s, namespace: %s, err: %v", podName, namespace, err)
+	}
+
+ req := client.CoreV1().RESTClient().
+ Post().
+ Namespace(namespace).
+ Resource("pods").
+ Name(podName).
+ SubResource("exec").
+ Timeout(0).
+ VersionedParams(&corev1.PodExecOptions{
+ Container: containerName,
+ Command: cmd,
+ Stdin: attach.Stdin() != nil,
+ Stdout: attach.Stdout() != nil,
+ Stderr: attach.Stderr() != nil,
+ TTY: attach.TTY(),
+ }, scheme.ParameterCodec)
+
+ exec, err := remoteutils.NewSPDYExecutor(config, "POST", req.URL())
+ if err != nil {
+ return fmt.Errorf("could not make remote command: %v", err)
+ }
+
+ ts := &termSize{attach: attach}
+
+	return exec.StreamWithContext(ctx, remoteutils.StreamOptions{
+		Stdin:             attach.Stdin(),
+		Stdout:            attach.Stdout(),
+		Stderr:            attach.Stderr(),
+		Tty:               attach.TTY(),
+		TerminalSizeQueue: ts,
+	})
+}
+
+func ContainerExecHandler(cfg ContainerExecOptions, getClient getClientFunc) http.HandlerFunc {
+ return handleError(func(w http.ResponseWriter, req *http.Request) error {
+ namespace, pod, container, supportedStreamProtocols, command := getVarFromReq(req)
+
+ streamOpts, err := getExecOptions(req)
+ if err != nil {
+ return ConvertInvalidInput(err)
+ }
+
+ ctx, cancel := context.WithCancel(req.Context())
+ defer cancel()
+
+ exec := &containerExecutor{
+ ctx: ctx,
+ h: execInContainer,
+ pod: pod,
+ namespace: namespace,
+ container: container,
+ getClient: getClient,
+ }
+ remotecommand.ServeExec(
+ w,
+ req,
+ exec,
+ "",
+ "",
+ container,
+ command,
+ streamOpts,
+ cfg.StreamIdleTimeout,
+ cfg.StreamCreationTimeout,
+ supportedStreamProtocols,
+ )
+
+ return nil
+ })
+}
diff --git a/pkg/clustertree/cluster-manager/node-server/api/helper.go b/pkg/clustertree/cluster-manager/node-server/api/helper.go
new file mode 100644
index 000000000..fe465f1e9
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/node-server/api/helper.go
@@ -0,0 +1,94 @@
+package api
+
+import (
+ "context"
+ "io"
+ "net/http"
+
+ "k8s.io/client-go/kubernetes"
+ "k8s.io/client-go/rest"
+ "k8s.io/klog/v2"
+)
+
+const (
+ execTTYParam = "tty"
+ execStdinParam = "input"
+ execStdoutParam = "output"
+ execStderrParam = "error"
+ namespaceVar = "namespace"
+ podVar = "pod"
+ containerVar = "container"
+ commandVar = "command"
+)
+
+type handlerFunc func(http.ResponseWriter, *http.Request) error
+
+type getClientFunc func(ctx context.Context, namespace string, podName string) (kubernetes.Interface, *rest.Config, error)
+
+func handleError(f handlerFunc) http.HandlerFunc {
+ return func(w http.ResponseWriter, req *http.Request) {
+ err := f(w, req)
+ if err == nil {
+ return
+ }
+
+ code := httpStatusCode(err)
+ w.WriteHeader(code)
+		if _, werr := io.WriteString(w, err.Error()); werr != nil {
+			klog.Errorf("error writing error response: %v", werr)
+		}
+
+		if code >= 500 {
+			klog.Errorf("internal server error on request: %v", err)
+		} else {
+			klog.Errorf("error on request: %v", err)
+		}
+ }
+}
+
+func flushOnWrite(w io.Writer) io.Writer {
+ if fw, ok := w.(writeFlusher); ok {
+ return &flushWriter{fw}
+ }
+ return w
+}
+
+type flushWriter struct {
+ w writeFlusher
+}
+
+type writeFlusher interface {
+ Flush()
+ Write([]byte) (int, error)
+}
+
+func (fw *flushWriter) Write(p []byte) (int, error) {
+ n, err := fw.w.Write(p)
+ if n > 0 {
+ fw.w.Flush()
+ }
+ return n, err
+}
+
+func httpStatusCode(err error) int {
+ switch {
+ case err == nil:
+ return http.StatusOK
+ case IsNotFound(err):
+ return http.StatusNotFound
+ case IsInvalidInput(err):
+ return http.StatusBadRequest
+ default:
+ return http.StatusInternalServerError
+ }
+}
+
+func NotImplemented(w http.ResponseWriter, r *http.Request) {
+	klog.Warningf("501 not implemented, url: %s", r.URL)
+ http.Error(w, "501 not implemented", http.StatusNotImplemented)
+}
+
+func NotFound(w http.ResponseWriter, r *http.Request) {
+ klog.Warningf("404 request not found, url: %s", r.URL)
+ http.Error(w, "404 request not found", http.StatusNotFound)
+}
diff --git a/pkg/clustertree/cluster-manager/node-server/api/logs.go b/pkg/clustertree/cluster-manager/node-server/api/logs.go
new file mode 100644
index 000000000..b0a7317fa
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/node-server/api/logs.go
@@ -0,0 +1,169 @@
+package api
+
+import (
+ "context"
+ "fmt"
+ "io"
+ "net/http"
+ "net/url"
+ "strconv"
+ "time"
+
+ "github.com/gorilla/mux"
+ "github.com/pkg/errors"
+ corev1 "k8s.io/api/core/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/klog/v2"
+)
+
+type ContainerLogsHandlerFunc func(ctx context.Context, namespace, podName, containerName string, opts ContainerLogOpts) (io.ReadCloser, error)
+
+type ContainerLogOpts struct {
+ Tail int
+ LimitBytes int
+ Timestamps bool
+ Follow bool
+ Previous bool
+ SinceSeconds int
+ SinceTime time.Time
+}
+
+func parseLogOptions(q url.Values) (opts ContainerLogOpts, err error) {
+ if tailLines := q.Get("tailLines"); tailLines != "" {
+ opts.Tail, err = strconv.Atoi(tailLines)
+ if err != nil {
+ return opts, ConvertInvalidInput(errors.Wrap(err, "could not parse \"tailLines\""))
+ }
+ if opts.Tail < 0 {
+ return opts, ErrInvalidInputf("\"tailLines\" is %d", opts.Tail)
+ }
+ }
+ if follow := q.Get("follow"); follow != "" {
+ opts.Follow, err = strconv.ParseBool(follow)
+ if err != nil {
+ return opts, ConvertInvalidInput(errors.Wrap(err, "could not parse \"follow\""))
+ }
+ }
+ if limitBytes := q.Get("limitBytes"); limitBytes != "" {
+ opts.LimitBytes, err = strconv.Atoi(limitBytes)
+ if err != nil {
+ return opts, ConvertInvalidInput(errors.Wrap(err, "could not parse \"limitBytes\""))
+ }
+ if opts.LimitBytes < 1 {
+ return opts, ErrInvalidInputf("\"limitBytes\" is %d", opts.LimitBytes)
+ }
+ }
+ if previous := q.Get("previous"); previous != "" {
+ opts.Previous, err = strconv.ParseBool(previous)
+ if err != nil {
+ return opts, ConvertInvalidInput(errors.Wrap(err, "could not parse \"previous\""))
+ }
+ }
+ if sinceSeconds := q.Get("sinceSeconds"); sinceSeconds != "" {
+ opts.SinceSeconds, err = strconv.Atoi(sinceSeconds)
+ if err != nil {
+ return opts, ConvertInvalidInput(errors.Wrap(err, "could not parse \"sinceSeconds\""))
+ }
+ if opts.SinceSeconds < 1 {
+ return opts, ErrInvalidInputf("\"sinceSeconds\" is %d", opts.SinceSeconds)
+ }
+ }
+ if sinceTime := q.Get("sinceTime"); sinceTime != "" {
+ opts.SinceTime, err = time.Parse(time.RFC3339, sinceTime)
+ if err != nil {
+ return opts, ConvertInvalidInput(errors.Wrap(err, "could not parse \"sinceTime\""))
+ }
+ if opts.SinceSeconds > 0 {
+ return opts, ErrInvalidInputf("both \"sinceSeconds\" and \"sinceTime\" are set")
+ }
+ }
+ if timestamps := q.Get("timestamps"); timestamps != "" {
+ opts.Timestamps, err = strconv.ParseBool(timestamps)
+ if err != nil {
+ return opts, ConvertInvalidInput(errors.Wrap(err, "could not parse \"timestamps\""))
+ }
+ }
+ return opts, nil
+}
+
+func getContainerLogs(ctx context.Context, namespace string,
+ podName string, containerName string, opts ContainerLogOpts, getClient getClientFunc) (io.ReadCloser, error) {
+ tailLine := int64(opts.Tail)
+ limitBytes := int64(opts.LimitBytes)
+ sinceSeconds := opts.SinceSeconds
+ options := &corev1.PodLogOptions{
+ Container: containerName,
+ Timestamps: opts.Timestamps,
+ Follow: opts.Follow,
+ }
+ if tailLine != 0 {
+ options.TailLines = &tailLine
+ }
+ if limitBytes != 0 {
+ options.LimitBytes = &limitBytes
+ }
+	if !opts.SinceTime.IsZero() {
+		sinceTime := metav1.NewTime(opts.SinceTime)
+		options.SinceTime = &sinceTime
+	}
+	if sinceSeconds != 0 {
+		since := int64(sinceSeconds)
+		options.SinceSeconds = &since
+	}
+	options.Previous = opts.Previous
+
+	client, _, err := getClient(ctx, namespace, podName)
+	if err != nil {
+		return nil, fmt.Errorf("could not get the leaf client, podName: %s, namespace: %s, err: %v", podName, namespace, err)
+	}
+
+ logs := client.CoreV1().Pods(namespace).GetLogs(podName, options)
+ stream, err := logs.Stream(ctx)
+ if err != nil {
+ return nil, fmt.Errorf("could not get stream from logs request: %v", err)
+ }
+ return stream, nil
+}
+
+func ContainerLogsHandler(getClient getClientFunc) http.HandlerFunc {
+ return handleError(func(w http.ResponseWriter, req *http.Request) error {
+ vars := mux.Vars(req)
+ if len(vars) != 3 {
+ return ErrNotFound("not found")
+ }
+
+ ctx := req.Context()
+
+ namespace := vars[namespaceVar]
+ pod := vars[podVar]
+ container := vars[containerVar]
+
+ query := req.URL.Query()
+ opts, err := parseLogOptions(query)
+ if err != nil {
+ return err
+ }
+
+ logs, err := getContainerLogs(ctx, namespace, pod, container, opts, getClient)
+ if err != nil {
+			return errors.Wrap(err, "error getting container logs")
+ }
+
+ defer logs.Close()
+
+ req.Header.Set("Transfer-Encoding", "chunked")
+
+ if _, ok := w.(writeFlusher); !ok {
+ klog.V(4).Info("http response writer does not support flushes")
+ }
+
+ if _, err := io.Copy(flushOnWrite(w), logs); err != nil {
+ return errors.Wrap(err, "error writing response to client")
+ }
+ return nil
+ })
+}
diff --git a/pkg/clustertree/cluster-manager/node-server/api/remotecommand/attach.go b/pkg/clustertree/cluster-manager/node-server/api/remotecommand/attach.go
new file mode 100644
index 000000000..69d78de33
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/node-server/api/remotecommand/attach.go
@@ -0,0 +1,70 @@
+// This code is directly lifted from the Kubernetes project.
+// For reference:
+// https://github.com/kubernetes/kubernetes/staging/src/k8s.io/kubelet/pkg/cri/streaming/remotecommand/attach.go
+
+package remotecommand
+
+import (
+ "fmt"
+ "io"
+ "net/http"
+ "time"
+
+ apierrors "k8s.io/apimachinery/pkg/api/errors"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/types"
+ remotecommandconsts "k8s.io/apimachinery/pkg/util/remotecommand"
+ "k8s.io/apimachinery/pkg/util/runtime"
+ "k8s.io/client-go/tools/remotecommand"
+ utilexec "k8s.io/utils/exec"
+)
+
+// Attacher knows how to attach to a container in a pod.
+type Attacher interface {
+ // AttachToContainer attaches to a container in the pod, copying data
+ // between in/out/err and the container's stdin/stdout/stderr.
+ AttachToContainer(name string, uid types.UID, container string, in io.Reader, out, err io.WriteCloser, tty bool, resize <-chan remotecommand.TerminalSize, timeout time.Duration) error
+}
+
+// ServeAttach handles requests to attach to a container. After
+// creating/receiving the required streams, it delegates the actual attachment
+// to the attacher.
+func ServeAttach(w http.ResponseWriter, req *http.Request, attacher Attacher, podName string, uid types.UID, container string, streamOpts *Options, idleTimeout, streamCreationTimeout time.Duration, supportedProtocols []string) {
+ ctx, ok := createStreams(req, w, streamOpts, supportedProtocols, idleTimeout, streamCreationTimeout)
+ if !ok {
+ // error is handled by createStreams
+ return
+ }
+ defer ctx.conn.Close()
+
+ err := attacher.AttachToContainer(podName, uid, container, ctx.stdinStream, ctx.stdoutStream, ctx.stderrStream, ctx.tty, ctx.resizeChan, 0)
+ if err != nil {
+ if exitErr, ok := err.(utilexec.ExitError); ok && exitErr.Exited() {
+ rc := exitErr.ExitStatus()
+ // nolint:errcheck
+ ctx.writeStatus(&apierrors.StatusError{ErrStatus: metav1.Status{
+ Status: metav1.StatusFailure,
+ Reason: remotecommandconsts.NonZeroExitCodeReason,
+ Details: &metav1.StatusDetails{
+ Causes: []metav1.StatusCause{
+ {
+ Type: remotecommandconsts.ExitCodeCauseType,
+ Message: fmt.Sprintf("%d", rc),
+ },
+ },
+ },
+ Message: fmt.Sprintf("command terminated with non-zero exit code: %v", exitErr),
+ }})
+ return
+ }
+ err = fmt.Errorf("error attaching to container: %v", err)
+ runtime.HandleError(err)
+ // nolint:errcheck
+ ctx.writeStatus(apierrors.NewInternalError(err))
+ return
+ }
+ // nolint:errcheck
+ ctx.writeStatus(&apierrors.StatusError{ErrStatus: metav1.Status{
+ Status: metav1.StatusSuccess,
+ }})
+}
diff --git a/pkg/clustertree/cluster-manager/node-server/api/remotecommand/exec.go b/pkg/clustertree/cluster-manager/node-server/api/remotecommand/exec.go
new file mode 100644
index 000000000..f2bcc0cbc
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/node-server/api/remotecommand/exec.go
@@ -0,0 +1,70 @@
+// This code is directly lifted from the Kubernetes project.
+// For reference:
+// https://github.com/kubernetes/kubernetes/staging/src/k8s.io/kubelet/pkg/cri/streaming/remotecommand/exec.go
+
+package remotecommand
+
+import (
+ "fmt"
+ "io"
+ "net/http"
+ "time"
+
+ apierrors "k8s.io/apimachinery/pkg/api/errors"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/types"
+ remotecommandconsts "k8s.io/apimachinery/pkg/util/remotecommand"
+ "k8s.io/apimachinery/pkg/util/runtime"
+ "k8s.io/client-go/tools/remotecommand"
+ utilexec "k8s.io/utils/exec"
+)
+
+// Executor knows how to execute a command in a container in a pod.
+type Executor interface {
+ // ExecInContainer executes a command in a container in the pod, copying data
+ // between in/out/err and the container's stdin/stdout/stderr.
+ ExecInContainer(name string, uid types.UID, container string, cmd []string, in io.Reader, out, err io.WriteCloser, tty bool, resize <-chan remotecommand.TerminalSize, timeout time.Duration) error
+}
+
+// ServeExec handles requests to execute a command in a container. After
+// creating/receiving the required streams, it delegates the actual execution
+// to the executor.
+func ServeExec(w http.ResponseWriter, req *http.Request, executor Executor, podName string, uid types.UID, container string, cmd []string, streamOpts *Options, idleTimeout, streamCreationTimeout time.Duration, supportedProtocols []string) {
+ ctx, ok := createStreams(req, w, streamOpts, supportedProtocols, idleTimeout, streamCreationTimeout)
+ if !ok {
+ // error is handled by createStreams
+ return
+ }
+ defer ctx.conn.Close()
+
+ err := executor.ExecInContainer(podName, uid, container, cmd, ctx.stdinStream, ctx.stdoutStream, ctx.stderrStream, ctx.tty, ctx.resizeChan, 0)
+ if err != nil {
+ if exitErr, ok := err.(utilexec.ExitError); ok && exitErr.Exited() {
+ rc := exitErr.ExitStatus()
+ // nolint:errcheck
+ ctx.writeStatus(&apierrors.StatusError{ErrStatus: metav1.Status{
+ Status: metav1.StatusFailure,
+ Reason: remotecommandconsts.NonZeroExitCodeReason,
+ Details: &metav1.StatusDetails{
+ Causes: []metav1.StatusCause{
+ {
+ Type: remotecommandconsts.ExitCodeCauseType,
+ Message: fmt.Sprintf("%d", rc),
+ },
+ },
+ },
+ Message: fmt.Sprintf("command terminated with non-zero exit code: %v", exitErr),
+ }})
+ } else {
+ err = fmt.Errorf("error executing command in container: %v", err)
+ runtime.HandleError(err)
+ // nolint:errcheck
+ ctx.writeStatus(apierrors.NewInternalError(err))
+ }
+ } else {
+ // nolint:errcheck
+ ctx.writeStatus(&apierrors.StatusError{ErrStatus: metav1.Status{
+ Status: metav1.StatusSuccess,
+ }})
+ }
+}
diff --git a/pkg/clustertree/cluster-manager/node-server/api/remotecommand/httpstream.go b/pkg/clustertree/cluster-manager/node-server/api/remotecommand/httpstream.go
new file mode 100644
index 000000000..462a5544d
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/node-server/api/remotecommand/httpstream.go
@@ -0,0 +1,439 @@
+// This code is directly lifted from the Kubernetes project.
+// For reference:
+// https://github.com/kubernetes/kubernetes/staging/src/k8s.io/kubelet/pkg/cri/streaming/remotecommand/httpstream.go
+
+package remotecommand
+
+import (
+ "encoding/json"
+ "errors"
+ "fmt"
+ "io"
+ "net/http"
+ "time"
+
+ api "k8s.io/api/core/v1"
+ apierrors "k8s.io/apimachinery/pkg/api/errors"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/util/httpstream"
+ "k8s.io/apimachinery/pkg/util/httpstream/spdy"
+ remotecommandconsts "k8s.io/apimachinery/pkg/util/remotecommand"
+ "k8s.io/apimachinery/pkg/util/runtime"
+ "k8s.io/apiserver/pkg/util/wsstream"
+ "k8s.io/client-go/tools/remotecommand"
+ "k8s.io/klog/v2"
+)
+
+// Options contains details about which streams are required for
+// remote command execution.
+type Options struct {
+ Stdin bool
+ Stdout bool
+ Stderr bool
+ TTY bool
+}
+
+// NewOptions creates a new Options from the Request.
+func NewOptions(req *http.Request) (*Options, error) {
+ tty := req.FormValue(api.ExecTTYParam) == "1"
+ stdin := req.FormValue(api.ExecStdinParam) == "1"
+ stdout := req.FormValue(api.ExecStdoutParam) == "1"
+ stderr := req.FormValue(api.ExecStderrParam) == "1"
+ if tty && stderr {
+ // TODO: make this an error before we reach this method
+ klog.V(4).Infof("Access to exec with tty and stderr is not supported, bypassing stderr")
+ stderr = false
+ }
+
+ if !stdin && !stdout && !stderr {
+ return nil, fmt.Errorf("you must specify at least 1 of stdin, stdout, stderr")
+ }
+
+ return &Options{
+ Stdin: stdin,
+ Stdout: stdout,
+ Stderr: stderr,
+ TTY: tty,
+ }, nil
+}
+
+// context contains the connection and streams used when
+// forwarding an attach or execute session into a container.
+type context struct {
+ conn io.Closer
+ stdinStream io.ReadCloser
+ stdoutStream io.WriteCloser
+ stderrStream io.WriteCloser
+ writeStatus func(status *apierrors.StatusError) error
+ resizeStream io.ReadCloser
+ resizeChan chan remotecommand.TerminalSize
+ tty bool
+}
+
+// streamAndReply holds both a Stream and a channel that is closed when the stream's reply frame is
+// enqueued. Consumers can wait for replySent to be closed prior to proceeding, to ensure that the
+// replyFrame is enqueued before the connection's goaway frame is sent (e.g. if a stream was
+// received and right after, the connection gets closed).
+type streamAndReply struct {
+ httpstream.Stream
+ replySent <-chan struct{}
+}
+
+// waitStreamReply waits until either replySent or stop is closed. If replySent is closed, it sends
+// an empty struct to the notify channel.
+func waitStreamReply(replySent <-chan struct{}, notify chan<- struct{}, stop <-chan struct{}) {
+ select {
+ case <-replySent:
+ notify <- struct{}{}
+ case <-stop:
+ }
+}
+
+func createStreams(req *http.Request, w http.ResponseWriter, opts *Options, supportedStreamProtocols []string, idleTimeout, streamCreationTimeout time.Duration) (*context, bool) {
+ var ctx *context
+ var ok bool
+ if wsstream.IsWebSocketRequest(req) {
+ ctx, ok = createWebSocketStreams(req, w, opts, idleTimeout)
+ } else {
+ ctx, ok = createHTTPStreamStreams(req, w, opts, supportedStreamProtocols, idleTimeout, streamCreationTimeout)
+ }
+ if !ok {
+ return nil, false
+ }
+
+ if ctx.resizeStream != nil {
+ ctx.resizeChan = make(chan remotecommand.TerminalSize)
+ go handleResizeEvents(ctx.resizeStream, ctx.resizeChan)
+ }
+
+ return ctx, true
+}
+
+func createHTTPStreamStreams(req *http.Request, w http.ResponseWriter, opts *Options, supportedStreamProtocols []string, idleTimeout, streamCreationTimeout time.Duration) (*context, bool) {
+ protocol, err := httpstream.Handshake(req, w, supportedStreamProtocols)
+ if err != nil {
+ http.Error(w, err.Error(), http.StatusBadRequest)
+ return nil, false
+ }
+
+ streamCh := make(chan streamAndReply)
+
+ upgrader := spdy.NewResponseUpgrader()
+ conn := upgrader.UpgradeResponse(w, req, func(stream httpstream.Stream, replySent <-chan struct{}) error {
+ streamCh <- streamAndReply{Stream: stream, replySent: replySent}
+ return nil
+ })
+ // from this point on, we can no longer call methods on response
+ if conn == nil {
+ // The upgrader is responsible for notifying the client of any errors that
+ // occurred during upgrading. All we can do is return here at this point
+ // if we weren't successful in upgrading.
+ return nil, false
+ }
+
+ conn.SetIdleTimeout(idleTimeout)
+
+ var handler protocolHandler
+ switch protocol {
+ case remotecommandconsts.StreamProtocolV4Name:
+ handler = &v4ProtocolHandler{}
+ case remotecommandconsts.StreamProtocolV3Name:
+ handler = &v3ProtocolHandler{}
+ case remotecommandconsts.StreamProtocolV2Name:
+ handler = &v2ProtocolHandler{}
+ case "":
+ klog.V(4).Infof("Client did not request protocol negotiation. Falling back to %q", remotecommandconsts.StreamProtocolV1Name)
+ fallthrough
+ case remotecommandconsts.StreamProtocolV1Name:
+ handler = &v1ProtocolHandler{}
+ }
+
+ // count the streams client asked for, starting with 1
+ expectedStreams := 1
+ if opts.Stdin {
+ expectedStreams++
+ }
+ if opts.Stdout {
+ expectedStreams++
+ }
+ if opts.Stderr {
+ expectedStreams++
+ }
+ if opts.TTY && handler.supportsTerminalResizing() {
+ expectedStreams++
+ }
+
+ expired := time.NewTimer(streamCreationTimeout)
+ defer expired.Stop()
+
+ ctx, err := handler.waitForStreams(streamCh, expectedStreams, expired.C)
+ if err != nil {
+ runtime.HandleError(err)
+ return nil, false
+ }
+
+ ctx.conn = conn
+ ctx.tty = opts.TTY
+
+ return ctx, true
+}
+
+type protocolHandler interface {
+ // waitForStreams waits for the expected streams or a timeout, returning a
+ // remoteCommandContext if all the streams were received, or an error if not.
+ waitForStreams(streams <-chan streamAndReply, expectedStreams int, expired <-chan time.Time) (*context, error)
+ // supportsTerminalResizing returns true if the protocol handler supports terminal resizing
+ supportsTerminalResizing() bool
+}
+
+// v4ProtocolHandler implements the V4 protocol version for streaming command execution. It only differs
+// from v3 in the error stream format, using a JSON-marshaled metav1.Status which carries
+// the process' exit code.
+type v4ProtocolHandler struct{}
+
+// nolint:dupl
+func (*v4ProtocolHandler) waitForStreams(streams <-chan streamAndReply, expectedStreams int, expired <-chan time.Time) (*context, error) {
+ ctx := &context{}
+ receivedStreams := 0
+ replyChan := make(chan struct{})
+ stop := make(chan struct{})
+ defer close(stop)
+WaitForStreams:
+ for {
+ select {
+ case stream := <-streams:
+ streamType := stream.Headers().Get(api.StreamType)
+ switch streamType {
+ case api.StreamTypeError:
+ ctx.writeStatus = v4WriteStatusFunc(stream) // write json errors
+ go waitStreamReply(stream.replySent, replyChan, stop)
+ case api.StreamTypeStdin:
+ ctx.stdinStream = stream
+ go waitStreamReply(stream.replySent, replyChan, stop)
+ case api.StreamTypeStdout:
+ ctx.stdoutStream = stream
+ go waitStreamReply(stream.replySent, replyChan, stop)
+ case api.StreamTypeStderr:
+ ctx.stderrStream = stream
+ go waitStreamReply(stream.replySent, replyChan, stop)
+ case api.StreamTypeResize:
+ ctx.resizeStream = stream
+ go waitStreamReply(stream.replySent, replyChan, stop)
+ default:
+ runtime.HandleError(fmt.Errorf("unexpected stream type: %q", streamType))
+ }
+ case <-replyChan:
+ receivedStreams++
+ if receivedStreams == expectedStreams {
+ break WaitForStreams
+ }
+ case <-expired:
+ // TODO find a way to return the error to the user. Maybe use a separate
+ // stream to report errors?
+ return nil, errors.New("timed out waiting for client to create streams")
+ }
+ }
+
+ return ctx, nil
+}
+
+// supportsTerminalResizing returns true because v4ProtocolHandler supports it
+func (*v4ProtocolHandler) supportsTerminalResizing() bool { return true }
+
+// v3ProtocolHandler implements the V3 protocol version for streaming command execution.
+type v3ProtocolHandler struct{}
+
+// nolint:dupl
+func (*v3ProtocolHandler) waitForStreams(streams <-chan streamAndReply, expectedStreams int, expired <-chan time.Time) (*context, error) {
+ ctx := &context{}
+ receivedStreams := 0
+ replyChan := make(chan struct{})
+ stop := make(chan struct{})
+ defer close(stop)
+WaitForStreams:
+ for {
+ select {
+ case stream := <-streams:
+ streamType := stream.Headers().Get(api.StreamType)
+ switch streamType {
+ case api.StreamTypeError:
+ ctx.writeStatus = v1WriteStatusFunc(stream)
+ go waitStreamReply(stream.replySent, replyChan, stop)
+ case api.StreamTypeStdin:
+ ctx.stdinStream = stream
+ go waitStreamReply(stream.replySent, replyChan, stop)
+ case api.StreamTypeStdout:
+ ctx.stdoutStream = stream
+ go waitStreamReply(stream.replySent, replyChan, stop)
+ case api.StreamTypeStderr:
+ ctx.stderrStream = stream
+ go waitStreamReply(stream.replySent, replyChan, stop)
+ case api.StreamTypeResize:
+ ctx.resizeStream = stream
+ go waitStreamReply(stream.replySent, replyChan, stop)
+ default:
+ runtime.HandleError(fmt.Errorf("unexpected stream type: %q", streamType))
+ }
+ case <-replyChan:
+ receivedStreams++
+ if receivedStreams == expectedStreams {
+ break WaitForStreams
+ }
+ case <-expired:
+ // TODO find a way to return the error to the user. Maybe use a separate
+ // stream to report errors?
+ return nil, errors.New("timed out waiting for client to create streams")
+ }
+ }
+
+ return ctx, nil
+}
+
+// supportsTerminalResizing returns true because v3ProtocolHandler supports it
+func (*v3ProtocolHandler) supportsTerminalResizing() bool { return true }
+
+// v2ProtocolHandler implements the V2 protocol version for streaming command execution.
+type v2ProtocolHandler struct{}
+
+// nolint:dupl
+func (*v2ProtocolHandler) waitForStreams(streams <-chan streamAndReply, expectedStreams int, expired <-chan time.Time) (*context, error) {
+ ctx := &context{}
+ receivedStreams := 0
+ replyChan := make(chan struct{})
+ stop := make(chan struct{})
+ defer close(stop)
+WaitForStreams:
+ for {
+ select {
+ case stream := <-streams:
+ streamType := stream.Headers().Get(api.StreamType)
+ switch streamType {
+ case api.StreamTypeError:
+ ctx.writeStatus = v1WriteStatusFunc(stream)
+ go waitStreamReply(stream.replySent, replyChan, stop)
+ case api.StreamTypeStdin:
+ ctx.stdinStream = stream
+ go waitStreamReply(stream.replySent, replyChan, stop)
+ case api.StreamTypeStdout:
+ ctx.stdoutStream = stream
+ go waitStreamReply(stream.replySent, replyChan, stop)
+ case api.StreamTypeStderr:
+ ctx.stderrStream = stream
+ go waitStreamReply(stream.replySent, replyChan, stop)
+ default:
+ runtime.HandleError(fmt.Errorf("unexpected stream type: %q", streamType))
+ }
+ case <-replyChan:
+ receivedStreams++
+ if receivedStreams == expectedStreams {
+ break WaitForStreams
+ }
+ case <-expired:
+ // TODO find a way to return the error to the user. Maybe use a separate
+ // stream to report errors?
+ return nil, errors.New("timed out waiting for client to create streams")
+ }
+ }
+
+ return ctx, nil
+}
+
+// supportsTerminalResizing returns false because v2ProtocolHandler doesn't support it.
+func (*v2ProtocolHandler) supportsTerminalResizing() bool { return false }
+
+// v1ProtocolHandler implements the V1 protocol version for streaming command execution.
+type v1ProtocolHandler struct{}
+
+// nolint:dupl
+func (*v1ProtocolHandler) waitForStreams(streams <-chan streamAndReply, expectedStreams int, expired <-chan time.Time) (*context, error) {
+ ctx := &context{}
+ receivedStreams := 0
+ replyChan := make(chan struct{})
+ stop := make(chan struct{})
+ defer close(stop)
+WaitForStreams:
+ for {
+ select {
+ case stream := <-streams:
+ streamType := stream.Headers().Get(api.StreamType)
+ switch streamType {
+ case api.StreamTypeError:
+ ctx.writeStatus = v1WriteStatusFunc(stream)
+
+ // This defer statement shouldn't be here, but due to previous refactoring, it ended up in
+ // here. This is what 1.0.x kubelets do, so we're retaining that behavior. This is fixed in
+ // the v2ProtocolHandler.
+ // nolint:errcheck
+ defer stream.Reset()
+
+ go waitStreamReply(stream.replySent, replyChan, stop)
+ case api.StreamTypeStdin:
+ ctx.stdinStream = stream
+ go waitStreamReply(stream.replySent, replyChan, stop)
+ case api.StreamTypeStdout:
+ ctx.stdoutStream = stream
+ go waitStreamReply(stream.replySent, replyChan, stop)
+ case api.StreamTypeStderr:
+ ctx.stderrStream = stream
+ go waitStreamReply(stream.replySent, replyChan, stop)
+ default:
+ runtime.HandleError(fmt.Errorf("unexpected stream type: %q", streamType))
+ }
+ case <-replyChan:
+ receivedStreams++
+ if receivedStreams == expectedStreams {
+ break WaitForStreams
+ }
+ case <-expired:
+ // TODO find a way to return the error to the user. Maybe use a separate
+ // stream to report errors?
+ return nil, errors.New("timed out waiting for client to create streams")
+ }
+ }
+
+ if ctx.stdinStream != nil {
+ ctx.stdinStream.Close()
+ }
+
+ return ctx, nil
+}
+
+// supportsTerminalResizing returns false because v1ProtocolHandler doesn't support it.
+func (*v1ProtocolHandler) supportsTerminalResizing() bool { return false }
+
+func handleResizeEvents(stream io.Reader, channel chan<- remotecommand.TerminalSize) {
+ defer runtime.HandleCrash()
+ defer close(channel)
+
+ decoder := json.NewDecoder(stream)
+ for {
+ size := remotecommand.TerminalSize{}
+ if err := decoder.Decode(&size); err != nil {
+ break
+ }
+ channel <- size
+ }
+}
+
+func v1WriteStatusFunc(stream io.Writer) func(status *apierrors.StatusError) error {
+ return func(status *apierrors.StatusError) error {
+ if status.Status().Status == metav1.StatusSuccess {
+ return nil // send error messages
+ }
+ _, err := stream.Write([]byte(status.Error()))
+ return err
+ }
+}
+
+// v4WriteStatusFunc returns a WriteStatusFunc that marshals a given api Status
+// as json in the error channel.
+func v4WriteStatusFunc(stream io.Writer) func(status *apierrors.StatusError) error {
+ return func(status *apierrors.StatusError) error {
+ bs, err := json.Marshal(status.Status())
+ if err != nil {
+ return err
+ }
+ _, err = stream.Write(bs)
+ return err
+ }
+}
diff --git a/pkg/clustertree/cluster-manager/node-server/api/remotecommand/websocket.go b/pkg/clustertree/cluster-manager/node-server/api/remotecommand/websocket.go
new file mode 100644
index 000000000..ccde42a5c
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/node-server/api/remotecommand/websocket.go
@@ -0,0 +1,123 @@
+// This code is directly lifted from Kubernetes.
+// For reference:
+// https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/kubelet/pkg/cri/streaming/remotecommand/websocket.go
+
+package remotecommand
+
+import (
+ "fmt"
+ "net/http"
+ "time"
+
+ "k8s.io/apimachinery/pkg/util/runtime"
+ "k8s.io/apiserver/pkg/server/httplog"
+ "k8s.io/apiserver/pkg/util/wsstream"
+)
+
+const (
+ stdinChannel = iota
+ stdoutChannel
+ stderrChannel
+ errorChannel
+ resizeChannel
+
+ preV4BinaryWebsocketProtocol = wsstream.ChannelWebSocketProtocol
+ preV4Base64WebsocketProtocol = wsstream.Base64ChannelWebSocketProtocol
+ v4BinaryWebsocketProtocol = "v4." + wsstream.ChannelWebSocketProtocol
+ v4Base64WebsocketProtocol = "v4." + wsstream.Base64ChannelWebSocketProtocol
+)
+
+// createChannels returns the standard channel types for a shell connection (STDIN 0, STDOUT 1, STDERR 2)
+// along with the approximate duplex value. It also creates the error (3) and resize (4) channels.
+func createChannels(opts *Options) []wsstream.ChannelType {
+ // open the requested channels, and always open the error channel
+ channels := make([]wsstream.ChannelType, 5)
+ channels[stdinChannel] = readChannel(opts.Stdin)
+ channels[stdoutChannel] = writeChannel(opts.Stdout)
+ channels[stderrChannel] = writeChannel(opts.Stderr)
+ channels[errorChannel] = wsstream.WriteChannel
+ channels[resizeChannel] = wsstream.ReadChannel
+ return channels
+}
+
+// readChannel returns wsstream.ReadChannel if real is true, or wsstream.IgnoreChannel.
+func readChannel(real bool) wsstream.ChannelType {
+ if real {
+ return wsstream.ReadChannel
+ }
+ return wsstream.IgnoreChannel
+}
+
+// writeChannel returns wsstream.WriteChannel if real is true, or wsstream.IgnoreChannel.
+func writeChannel(real bool) wsstream.ChannelType {
+ if real {
+ return wsstream.WriteChannel
+ }
+ return wsstream.IgnoreChannel
+}
+
+// createWebSocketStreams returns a context containing the websocket connection and
+// streams needed to perform an exec or an attach.
+func createWebSocketStreams(req *http.Request, w http.ResponseWriter, opts *Options, idleTimeout time.Duration) (*context, bool) {
+ channels := createChannels(opts)
+ conn := wsstream.NewConn(map[string]wsstream.ChannelProtocolConfig{
+ "": {
+ Binary: true,
+ Channels: channels,
+ },
+ preV4BinaryWebsocketProtocol: {
+ Binary: true,
+ Channels: channels,
+ },
+ preV4Base64WebsocketProtocol: {
+ Binary: false,
+ Channels: channels,
+ },
+ v4BinaryWebsocketProtocol: {
+ Binary: true,
+ Channels: channels,
+ },
+ v4Base64WebsocketProtocol: {
+ Binary: false,
+ Channels: channels,
+ },
+ })
+ conn.SetIdleTimeout(idleTimeout)
+ negotiatedProtocol, streams, err := conn.Open(httplog.Unlogged(req, w), req)
+ if err != nil {
+ runtime.HandleError(fmt.Errorf("unable to upgrade websocket connection: %v", err))
+ return nil, false
+ }
+
+ // Send an empty message to the lowest writable channel to notify the client the connection is established
+ // TODO: make generic to SPDY and WebSockets and do it outside of this method?
+ switch {
+ case opts.Stdout:
+ // nolint:errcheck
+ streams[stdoutChannel].Write([]byte{})
+ case opts.Stderr:
+ // nolint:errcheck
+ streams[stderrChannel].Write([]byte{})
+ default:
+ // nolint:errcheck
+ streams[errorChannel].Write([]byte{})
+ }
+
+ ctx := &context{
+ conn: conn,
+ stdinStream: streams[stdinChannel],
+ stdoutStream: streams[stdoutChannel],
+ stderrStream: streams[stderrChannel],
+ tty: opts.TTY,
+ resizeStream: streams[resizeChannel],
+ }
+
+ switch negotiatedProtocol {
+ case v4BinaryWebsocketProtocol, v4Base64WebsocketProtocol:
+ ctx.writeStatus = v4WriteStatusFunc(streams[errorChannel])
+ default:
+ ctx.writeStatus = v1WriteStatusFunc(streams[errorChannel])
+ }
+
+ return ctx, true
+}
diff --git a/pkg/clustertree/cluster-manager/node-server/server.go b/pkg/clustertree/cluster-manager/node-server/server.go
new file mode 100644
index 000000000..928dbe5f9
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/node-server/server.go
@@ -0,0 +1,199 @@
+package nodeserver
+
+import (
+ "context"
+ "crypto/tls"
+ "crypto/x509"
+ "fmt"
+ "net/http"
+ "os"
+ "time"
+
+ "github.com/gorilla/mux"
+ "github.com/pkg/errors"
+ corev1 "k8s.io/api/core/v1"
+ "k8s.io/apimachinery/pkg/types"
+ "k8s.io/client-go/kubernetes"
+ "k8s.io/client-go/rest"
+ "k8s.io/klog/v2"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+
+ "github.com/kosmos.io/kosmos/cmd/clustertree/cluster-manager/app/options"
+ "github.com/kosmos.io/kosmos/pkg/cert"
+ "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/node-server/api"
+ leafUtils "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/utils"
+)
+
+func DefaultServerCiphers() []uint16 {
+ return []uint16{
+ tls.TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,
+ tls.TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,
+ tls.TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,
+ tls.TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,
+
+ tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
+ tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
+ tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
+ tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
+ }
+}
+
+type NodeServer struct {
+ RootClient client.Client
+ GlobalLeafManager leafUtils.LeafResourceManager
+}
+
+type HttpConfig struct {
+ listenAddr string
+ handler http.Handler
+ tlsConfig *tls.Config
+}
+
+func (n *NodeServer) getClient(ctx context.Context, namespace string, podName string) (kubernetes.Interface, *rest.Config, error) {
+ nsname := types.NamespacedName{
+ Namespace: namespace,
+ Name: podName,
+ }
+
+ rootPod := &corev1.Pod{}
+ if err := n.RootClient.Get(ctx, nsname, rootPod); err != nil {
+ return nil, nil, err
+ }
+
+ nodeName := rootPod.Spec.NodeName
+
+ lr, err := n.GlobalLeafManager.GetLeafResourceByNodeName(nodeName)
+ if err != nil {
+ return nil, nil, err
+ }
+
+ return lr.Clientset, lr.RestConfig, nil
+}
+
+func (s *NodeServer) RunHTTP(ctx context.Context, httpConfig HttpConfig) (func(), error) {
+ if httpConfig.tlsConfig == nil {
+ klog.Warning("TLS config not provided, not starting up http service")
+ return func() {}, nil
+ }
+ if httpConfig.handler == nil {
+ klog.Warning("No http handler, not starting up http service")
+ return func() {}, nil
+ }
+
+ l, err := tls.Listen("tcp", httpConfig.listenAddr, httpConfig.tlsConfig)
+ if err != nil {
+ return nil, errors.Wrap(err, "error starting http listener")
+ }
+
+ klog.V(4).Info("Started TLS listener")
+
+ srv := &http.Server{Handler: httpConfig.handler, TLSConfig: httpConfig.tlsConfig, ReadHeaderTimeout: 30 * time.Second}
+ // nolint:errcheck
+ go srv.Serve(l)
+	klog.V(4).Infof("HTTP server running, listening on: %s", httpConfig.listenAddr)
+
+ return func() {
+ srv.Close()
+ l.Close()
+ }, nil
+}
+
+func (s *NodeServer) AttachRoutes(m *http.ServeMux) {
+ r := mux.NewRouter()
+ r.StrictSlash(true)
+
+ r.HandleFunc(
+ "/containerLogs/{namespace}/{pod}/{container}",
+ api.ContainerLogsHandler(s.getClient),
+ ).Methods("GET")
+
+ r.HandleFunc(
+ "/exec/{namespace}/{pod}/{container}",
+ api.ContainerExecHandler(
+ api.ContainerExecOptions{
+ StreamIdleTimeout: 30 * time.Second,
+ StreamCreationTimeout: 30 * time.Second,
+ },
+ s.getClient,
+ ),
+ ).Methods("POST", "GET")
+
+ // append func here
+ // TODO: return node status, url: /stats/summary?only_cpu_and_memory=true
+
+ r.NotFoundHandler = http.HandlerFunc(api.NotFound)
+
+ m.Handle("/", r)
+}
+
+func loadKeyPair() (tls.Certificate, error) {
+ CertPath := os.Getenv("APISERVER_CERT_LOCATION")
+ KeyPath := os.Getenv("APISERVER_KEY_LOCATION")
+
+ if CertPath == "" || KeyPath == "" {
+ return tls.X509KeyPair(cert.GetCrt(), cert.GetKey())
+ }
+ return tls.LoadX509KeyPair(CertPath, KeyPath)
+}
+
+func (s *NodeServer) initTLSConfig() (*tls.Config, error) {
+ tlsCfg := &tls.Config{
+ MinVersion: tls.VersionTLS12,
+ PreferServerCipherSuites: true,
+ CipherSuites: DefaultServerCiphers(),
+ ClientAuth: tls.RequestClientCert,
+ }
+
+ cert, err := loadKeyPair()
+ if err != nil {
+ return nil, err
+ }
+ tlsCfg.Certificates = append(tlsCfg.Certificates, cert)
+
+ CACertPath := os.Getenv("APISERVER_CA_CERT_LOCATION")
+ if CACertPath != "" {
+ pem, err := os.ReadFile(CACertPath)
+ if err != nil {
+ return nil, fmt.Errorf("error reading ca cert pem: %w", err)
+ }
+ tlsCfg.ClientAuth = tls.RequireAndVerifyClientCert
+
+ if tlsCfg.ClientCAs == nil {
+ tlsCfg.ClientCAs = x509.NewCertPool()
+ }
+ if !tlsCfg.ClientCAs.AppendCertsFromPEM(pem) {
+ return nil, fmt.Errorf("could not parse ca cert pem")
+ }
+ }
+
+ return tlsCfg, nil
+}
+
+func (s *NodeServer) Start(ctx context.Context, opts *options.Options) error {
+ tlsConfig, err := s.initTLSConfig()
+
+ if err != nil {
+		klog.Errorf("Node http server start failed: %s", err)
+ return err
+ }
+
+ handler := http.NewServeMux()
+ s.AttachRoutes(handler)
+
+ cancelHTTP, err := s.RunHTTP(ctx, HttpConfig{
+ listenAddr: fmt.Sprintf(":%d", opts.ListenPort),
+ tlsConfig: tlsConfig,
+ handler: handler,
+ })
+
+ if err != nil {
+ return err
+ }
+ defer cancelHTTP()
+
+ <-ctx.Done()
+
+ klog.V(4).Infof("Stop node http proxy")
+
+ return nil
+}
diff --git a/pkg/clustertree/cluster-manager/utils/leaf_model_handler.go b/pkg/clustertree/cluster-manager/utils/leaf_model_handler.go
new file mode 100644
index 000000000..e33e60f42
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/utils/leaf_model_handler.go
@@ -0,0 +1,263 @@
+package utils
+
+import (
+ "context"
+ "fmt"
+ "reflect"
+
+ corev1 "k8s.io/api/core/v1"
+ "k8s.io/apimachinery/pkg/api/errors"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/client-go/kubernetes"
+ "k8s.io/client-go/util/retry"
+ "k8s.io/klog/v2"
+
+ kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ "github.com/kosmos.io/kosmos/pkg/utils"
+)
+
+// LeafModelHandler is the interface to handle the leafModel logic
+type LeafModelHandler interface {
+ // GetLeafMode returns the leafMode for a Cluster
+ GetLeafMode() LeafMode
+
+ // GetLeafNodes returns nodes in leaf cluster by the rootNode
+ GetLeafNodes(ctx context.Context, rootNode *corev1.Node, selector kosmosv1alpha1.NodeSelector) (*corev1.NodeList, error)
+
+ // GetLeafPods returns pods in leaf cluster by the rootNode
+ GetLeafPods(ctx context.Context, rootNode *corev1.Node, selector kosmosv1alpha1.NodeSelector) (*corev1.PodList, error)
+
+ // UpdateRootNodeStatus updates the node's status in root cluster
+ UpdateRootNodeStatus(ctx context.Context, node []*corev1.Node, leafNodeSelector map[string]kosmosv1alpha1.NodeSelector) error
+
+ // CreateRootNode creates the node in root cluster
+ CreateRootNode(ctx context.Context, listenPort int32, gitVersion string) ([]*corev1.Node, map[string]kosmosv1alpha1.NodeSelector, error)
+}
+
+// ClassificationHandler handles the Classification leaf model
+type ClassificationHandler struct {
+ leafMode LeafMode
+ Cluster *kosmosv1alpha1.Cluster
+ //LeafClient client.Client
+ //RootClient client.Client
+ RootClientset kubernetes.Interface
+ LeafClientset kubernetes.Interface
+}
+
+// GetLeafMode returns the leafMode for a Cluster
+func (h ClassificationHandler) GetLeafMode() LeafMode {
+ return h.leafMode
+}
+
+// GetLeafNodes returns nodes in leaf cluster by the rootNode
+func (h ClassificationHandler) GetLeafNodes(ctx context.Context, rootNode *corev1.Node, selector kosmosv1alpha1.NodeSelector) (nodesInLeaf *corev1.NodeList, err error) {
+ listOption := metav1.ListOptions{}
+ if h.leafMode == Party {
+ listOption.LabelSelector = metav1.FormatLabelSelector(selector.LabelSelector)
+ }
+
+ if h.leafMode == Node {
+ listOption.FieldSelector = fmt.Sprintf("metadata.name=%s", rootNode.Name)
+ }
+
+ nodesInLeaf, err = h.LeafClientset.CoreV1().Nodes().List(ctx, listOption)
+ if err != nil {
+ return nil, err
+ }
+ return nodesInLeaf, nil
+}
+
+// GetLeafPods returns pods in leaf cluster by the rootNode
+func (h ClassificationHandler) GetLeafPods(ctx context.Context, rootNode *corev1.Node, selector kosmosv1alpha1.NodeSelector) (pods *corev1.PodList, err error) {
+ if h.leafMode == Party {
+ pods, err = h.LeafClientset.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
+ if err != nil {
+ return nil, err
+ }
+ } else if h.leafMode == Node {
+ pods, err = h.LeafClientset.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{FieldSelector: fmt.Sprintf("spec.nodeName=%s", rootNode.Name)})
+ if err != nil {
+ return nil, err
+ }
+ } else {
+ nodesInLeafs, err := h.GetLeafNodes(ctx, rootNode, selector)
+ if err != nil {
+ return nil, err
+ }
+
+ for _, node := range nodesInLeafs.Items {
+ podsInNode, err := h.LeafClientset.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
+ FieldSelector: fmt.Sprintf("spec.nodeName=%s", node.Name),
+ })
+ if err != nil {
+ return nil, err
+ }
+ if pods == nil {
+ pods = podsInNode
+ } else {
+ pods.Items = append(pods.Items, podsInNode.Items...)
+ }
+ }
+ }
+ return pods, nil
+}
+
+// UpdateRootNodeStatus updates the node's status in root cluster
+func (h ClassificationHandler) UpdateRootNodeStatus(ctx context.Context, nodesInRoot []*corev1.Node, leafNodeSelector map[string]kosmosv1alpha1.NodeSelector) error {
+ for _, node := range nodesInRoot {
+ nodeNameInRoot := node.Name
+ listOptions := metav1.ListOptions{}
+ if h.leafMode == Party {
+ selector, ok := leafNodeSelector[nodeNameInRoot]
+ if !ok {
+				klog.Warningf("no nodeSelector found for the join node: %v", nodeNameInRoot)
+ continue
+ }
+ listOptions.LabelSelector = metav1.FormatLabelSelector(selector.LabelSelector)
+ }
+
+ err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
+ nodeInRoot, err := h.RootClientset.CoreV1().Nodes().Get(ctx, nodeNameInRoot, metav1.GetOptions{})
+ if err != nil {
+ // TODO: If a node is accidentally deleted, recreate it
+				return fmt.Errorf("cannot get node in root cluster while updating the status of join node %s, err: %v", nodeNameInRoot, err)
+ }
+
+ nodesInLeaf, err := h.LeafClientset.CoreV1().Nodes().List(ctx, listOptions)
+ if err != nil {
+ // TODO: If a node is accidentally deleted, recreate it
+				return fmt.Errorf("cannot get node in leaf cluster while updating the status of join node %s, err: %v", nodeNameInRoot, err)
+ }
+ if len(nodesInLeaf.Items) == 0 {
+ // TODO: If a node is accidentally deleted, recreate it
+				return fmt.Errorf("no node found in leaf cluster while updating the status of join node %s", nodeNameInRoot)
+ }
+
+ rootCopy := nodeInRoot.DeepCopy()
+
+ if h.leafMode == Node {
+ rootCopy.Status = *nodesInLeaf.Items[0].Status.DeepCopy()
+ } else {
+ rootCopy.Status.Conditions = utils.NodeConditions()
+
+				// Aggregate the resources of the leaf nodes
+ pods, err := h.GetLeafPods(ctx, rootCopy, leafNodeSelector[nodeNameInRoot])
+ if err != nil {
+					return fmt.Errorf("could not list pods in leaf cluster while updating the status of join node %s, err: %v", nodeNameInRoot, err)
+ }
+ clusterResources := utils.CalculateClusterResources(nodesInLeaf, pods)
+ rootCopy.Status.Allocatable = clusterResources
+ rootCopy.Status.Capacity = clusterResources
+ }
+
+ rootCopy.Status.Addresses, err = GetAddress(ctx, h.RootClientset, nodesInLeaf.Items[0].Status.Addresses)
+ if err != nil {
+ return err
+ }
+
+ patch, err := utils.CreateMergePatch(nodeInRoot, rootCopy)
+ if err != nil {
+				return fmt.Errorf("failed to CreateMergePatch while updating the status of join node %s, err: %v", nodeNameInRoot, err)
+ }
+
+ if _, err = h.RootClientset.CoreV1().Nodes().PatchStatus(ctx, node.Name, patch); err != nil {
+ return err
+ }
+ return nil
+ })
+ if err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func createNode(ctx context.Context, clientset kubernetes.Interface, clusterName, nodeName, gitVersion string, listenPort int32) (*corev1.Node, error) {
+ nodeInRoot, err := clientset.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
+ if err != nil {
+ if !errors.IsNotFound(err) {
+ return nil, err
+ }
+
+ nodeInRoot = utils.BuildNodeTemplate(nodeName)
+ nodeAnnotations := nodeInRoot.GetAnnotations()
+ if nodeAnnotations == nil {
+ nodeAnnotations = make(map[string]string, 1)
+ }
+ nodeAnnotations[utils.KosmosNodeOwnedByClusterAnnotations] = clusterName
+ nodeInRoot.SetAnnotations(nodeAnnotations)
+
+ nodeInRoot.Status.NodeInfo.KubeletVersion = gitVersion
+ nodeInRoot.Status.DaemonEndpoints = corev1.NodeDaemonEndpoints{
+ KubeletEndpoint: corev1.DaemonEndpoint{
+ Port: listenPort,
+ },
+ }
+
+ nodeInRoot, err = clientset.CoreV1().Nodes().Create(ctx, nodeInRoot, metav1.CreateOptions{})
+ if err != nil {
+ return nil, err
+ }
+ }
+ return nodeInRoot, nil
+}
+
+// CreateRootNode creates the node in root cluster
+func (h ClassificationHandler) CreateRootNode(ctx context.Context, listenPort int32, gitVersion string) ([]*corev1.Node, map[string]kosmosv1alpha1.NodeSelector, error) {
+ nodes := make([]*corev1.Node, 0)
+ leafNodeSelectors := make(map[string]kosmosv1alpha1.NodeSelector)
+ cluster := h.Cluster
+
+ if h.leafMode == ALL {
+ nodeNameInRoot := fmt.Sprintf("%s%s", utils.KosmosNodePrefix, cluster.Name)
+ nodeInRoot, err := createNode(ctx, h.RootClientset, cluster.Name, nodeNameInRoot, gitVersion, listenPort)
+ if err != nil {
+ return nil, nil, err
+ }
+ nodes = append(nodes, nodeInRoot)
+ leafNodeSelectors[nodeNameInRoot] = kosmosv1alpha1.NodeSelector{}
+ } else {
+ for i, leafModel := range cluster.Spec.ClusterTreeOptions.LeafModels {
+ var nodeNameInRoot string
+ if h.leafMode == Node {
+ nodeNameInRoot = leafModel.NodeSelector.NodeName
+ } else {
+ nodeNameInRoot = fmt.Sprintf("%v%v%v%v", utils.KosmosNodePrefix, leafModel.LeafNodeName, "-", i)
+ }
+ if len(nodeNameInRoot) > 63 {
+ nodeNameInRoot = nodeNameInRoot[:63]
+ }
+
+ nodeInRoot, err := createNode(ctx, h.RootClientset, cluster.Name, nodeNameInRoot, gitVersion, listenPort)
+ if err != nil {
+ return nil, nil, err
+ }
+ nodes = append(nodes, nodeInRoot)
+ leafNodeSelectors[nodeNameInRoot] = leafModel.NodeSelector
+ }
+ }
+
+ return nodes, leafNodeSelectors, nil
+}
+
+// NewLeafModelHandler create a LeafModelHandler for Cluster
+func NewLeafModelHandler(cluster *kosmosv1alpha1.Cluster, rootClientset, leafClientset kubernetes.Interface) LeafModelHandler {
+ classificationModel := &ClassificationHandler{
+ leafMode: ALL,
+ Cluster: cluster,
+ RootClientset: rootClientset,
+ LeafClientset: leafClientset,
+ }
+
+ leafModels := cluster.Spec.ClusterTreeOptions.LeafModels
+
+	if len(leafModels) > 0 && !reflect.DeepEqual(leafModels[0].NodeSelector, kosmosv1alpha1.NodeSelector{}) {
+ if leafModels[0].NodeSelector.LabelSelector != nil && !reflect.DeepEqual(leafModels[0].NodeSelector.LabelSelector, metav1.LabelSelector{}) {
+ // support nodeSelector mode
+ classificationModel.leafMode = Party
+ } else if leafModels[0].NodeSelector.NodeName != "" {
+ classificationModel.leafMode = Node
+ }
+ }
+ return classificationModel
+}
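`NewLeafModelHandler` picks the leaf mode by inspecting the first leaf model's selector: a non-empty label selector means `Party`, a node name means `Node`, and everything else stays `ALL`. A stdlib sketch of that decision with simplified stand-in types (not the kosmos API structs):

```go
package main

import (
	"fmt"
	"reflect"
)

// LabelSelector and NodeSelector are simplified stand-ins for the
// metav1/kosmos types consulted by NewLeafModelHandler.
type LabelSelector struct{ MatchLabels map[string]string }

type NodeSelector struct {
	NodeName      string
	LabelSelector *LabelSelector
}

// pickLeafMode reproduces the selection order above: a non-empty label
// selector wins (Party), then a node name (Node), otherwise ALL.
func pickLeafMode(sel NodeSelector) string {
	if reflect.DeepEqual(sel, NodeSelector{}) {
		return "ALL"
	}
	if sel.LabelSelector != nil && !reflect.DeepEqual(*sel.LabelSelector, LabelSelector{}) {
		return "Party"
	}
	if sel.NodeName != "" {
		return "Node"
	}
	return "ALL"
}

func main() {
	fmt.Println(pickLeafMode(NodeSelector{}))                     // ALL
	fmt.Println(pickLeafMode(NodeSelector{NodeName: "worker-1"})) // Node
	fmt.Println(pickLeafMode(NodeSelector{
		LabelSelector: &LabelSelector{MatchLabels: map[string]string{"zone": "a"}},
	})) // Party
}
```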
diff --git a/pkg/clustertree/cluster-manager/utils/leaf_resource_manager.go b/pkg/clustertree/cluster-manager/utils/leaf_resource_manager.go
new file mode 100644
index 000000000..781135568
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/utils/leaf_resource_manager.go
@@ -0,0 +1,230 @@
+package utils
+
+import (
+ "fmt"
+ "strings"
+ "sync"
+
+ corev1 "k8s.io/api/core/v1"
+ "k8s.io/client-go/dynamic"
+ "k8s.io/client-go/kubernetes"
+ "k8s.io/client-go/rest"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+
+ kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ kosmosversioned "github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned"
+ "github.com/kosmos.io/kosmos/pkg/utils"
+)
+
+var (
+ instance LeafResourceManager
+ once sync.Once
+)
+
+type LeafMode int
+
+const (
+ ALL LeafMode = iota
+ Node
+ Party
+)
+
+type ClusterNode struct {
+ NodeName string
+ LeafMode LeafMode
+ LeafNodeSelector kosmosv1alpha1.NodeSelector
+}
+
+type LeafResource struct {
+ Client client.Client
+ DynamicClient dynamic.Interface
+ Clientset kubernetes.Interface
+ KosmosClient kosmosversioned.Interface
+ ClusterName string
+ Namespace string
+ IgnoreLabels []string
+ EnableServiceAccount bool
+ Nodes []ClusterNode
+ RestConfig *rest.Config
+}
+
+type LeafResourceManager interface {
+ AddLeafResource(lr *LeafResource, cluster *kosmosv1alpha1.Cluster, node []*corev1.Node)
+ RemoveLeafResource(clusterName string)
+ // get leafresource by cluster name
+ GetLeafResource(clusterName string) (*LeafResource, error)
+ // get leafresource by node name
+ GetLeafResourceByNodeName(nodeName string) (*LeafResource, error)
+ // determine if the cluster is present in the map
+ HasCluster(clusterName string) bool
+ // determine if the node is present in the map
+ HasNode(nodeName string) bool
+	// list all node names
+ ListNodes() []string
+	// list all cluster names
+ ListClusters() []string
+ // get ClusterNode(struct) by node name
+ GetClusterNode(nodeName string) *ClusterNode
+}
+
+type leafResourceManager struct {
+ resourceMap map[string]*LeafResource
+ leafResourceManagersLock sync.Mutex
+}
+
+func trimNamePrefix(name string) string {
+ return strings.TrimPrefix(name, utils.KosmosNodePrefix)
+}
+
+func has(clusternodes []ClusterNode, target string) bool {
+ for _, v := range clusternodes {
+ if v.NodeName == target {
+ return true
+ }
+ }
+ return false
+}
+
+func getClusterNode(clusternodes []ClusterNode, target string) *ClusterNode {
+ for _, v := range clusternodes {
+ if v.NodeName == target {
+ return &v
+ }
+ }
+ return nil
+}
+
+func (l *leafResourceManager) AddLeafResource(lptr *LeafResource, cluster *kosmosv1alpha1.Cluster, nodes []*corev1.Node) {
+ l.leafResourceManagersLock.Lock()
+ defer l.leafResourceManagersLock.Unlock()
+
+ clusterName := cluster.Name
+
+ leafModels := cluster.Spec.ClusterTreeOptions.LeafModels
+
+ clusterNodes := []ClusterNode{}
+ for i, n := range nodes {
+ if leafModels != nil && leafModels[i].NodeSelector.LabelSelector != nil {
+ // TODO: support labelselector
+ clusterNodes = append(clusterNodes, ClusterNode{
+ NodeName: trimNamePrefix(n.Name),
+ LeafMode: Party,
+ LeafNodeSelector: leafModels[i].NodeSelector,
+ })
+ } else if leafModels != nil && len(leafModels[i].NodeSelector.NodeName) > 0 {
+ clusterNodes = append(clusterNodes, ClusterNode{
+ NodeName: n.Name,
+ LeafMode: Node,
+ })
+ } else {
+ clusterNodes = append(clusterNodes, ClusterNode{
+ NodeName: trimNamePrefix(n.Name),
+ LeafMode: ALL,
+ })
+ }
+ }
+ lptr.Nodes = clusterNodes
+ l.resourceMap[clusterName] = lptr
+}
+
+func (l *leafResourceManager) RemoveLeafResource(clusterName string) {
+ l.leafResourceManagersLock.Lock()
+ defer l.leafResourceManagersLock.Unlock()
+ delete(l.resourceMap, clusterName)
+}
+
+func (l *leafResourceManager) GetLeafResource(clusterName string) (*LeafResource, error) {
+ l.leafResourceManagersLock.Lock()
+ defer l.leafResourceManagersLock.Unlock()
+ if m, ok := l.resourceMap[clusterName]; ok {
+ return m, nil
+ } else {
+ return nil, fmt.Errorf("cannot get leaf resource, clusterName: %s", clusterName)
+ }
+}
+
+func (l *leafResourceManager) GetLeafResourceByNodeName(nodeName string) (*LeafResource, error) {
+ l.leafResourceManagersLock.Lock()
+ defer l.leafResourceManagersLock.Unlock()
+ nodeName = trimNamePrefix(nodeName)
+ for k := range l.resourceMap {
+ if has(l.resourceMap[k].Nodes, nodeName) {
+ return l.resourceMap[k], nil
+ }
+ }
+
+ return nil, fmt.Errorf("cannot get leaf resource, nodeName: %s", nodeName)
+}
+
+func (l *leafResourceManager) HasNode(nodeName string) bool {
+	l.leafResourceManagersLock.Lock()
+	defer l.leafResourceManagersLock.Unlock()
+	nodeName = trimNamePrefix(nodeName)
+ for k := range l.resourceMap {
+ if has(l.resourceMap[k].Nodes, nodeName) {
+ return true
+ }
+ }
+
+ return false
+}
+
+func (l *leafResourceManager) HasCluster(clustername string) bool {
+	l.leafResourceManagersLock.Lock()
+	defer l.leafResourceManagersLock.Unlock()
+ for k := range l.resourceMap {
+ if k == clustername {
+ return true
+ }
+ }
+
+ return false
+}
+
+func (l *leafResourceManager) GetClusterNode(nodeName string) *ClusterNode {
+	l.leafResourceManagersLock.Lock()
+	defer l.leafResourceManagersLock.Unlock()
+	nodeName = trimNamePrefix(nodeName)
+ for k := range l.resourceMap {
+ if clusterNode := getClusterNode(l.resourceMap[k].Nodes, nodeName); clusterNode != nil {
+ return clusterNode
+ }
+ }
+ return nil
+}
+
+func (l *leafResourceManager) ListClusters() []string {
+ l.leafResourceManagersLock.Lock()
+ defer l.leafResourceManagersLock.Unlock()
+ keys := make([]string, 0)
+ for k := range l.resourceMap {
+ if len(k) == 0 {
+ continue
+ }
+
+ keys = append(keys, k)
+ }
+ return keys
+}
+
+func (l *leafResourceManager) ListNodes() []string {
+ l.leafResourceManagersLock.Lock()
+ defer l.leafResourceManagersLock.Unlock()
+ keys := make([]string, 0)
+ for k := range l.resourceMap {
+ if len(k) == 0 {
+ continue
+ }
+ if len(l.resourceMap[k].Nodes) == 0 {
+ continue
+ }
+ for _, node := range l.resourceMap[k].Nodes {
+ keys = append(keys, node.NodeName)
+ }
+ }
+ return keys
+}
+
+func GetGlobalLeafResourceManager() LeafResourceManager {
+ once.Do(func() {
+ instance = &leafResourceManager{
+ resourceMap: make(map[string]*LeafResource),
+ }
+ })
+
+ return instance
+}
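`GetGlobalLeafResourceManager` is a lazily-initialized singleton: `sync.Once` guarantees the map is built exactly once even under concurrent first calls. A stdlib sketch of the same pattern with a plain map standing in for the manager:

```go
package main

import (
	"fmt"
	"sync"
)

var (
	instance map[string]int
	once     sync.Once
)

// getManager mirrors GetGlobalLeafResourceManager above: every caller,
// including concurrent ones, receives the same lazily-built instance
// because once.Do runs the initializer exactly once.
func getManager() map[string]int {
	once.Do(func() {
		instance = make(map[string]int)
	})
	return instance
}

func main() {
	var wg sync.WaitGroup
	results := make([]map[string]int, 8)
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			results[i] = getManager()
		}(i)
	}
	wg.Wait()

	// Maps are reference types, so identical instances share a pointer.
	same := true
	for _, r := range results {
		if fmt.Sprintf("%p", r) != fmt.Sprintf("%p", results[0]) {
			same = false
		}
	}
	fmt.Println("single instance:", same)
}
```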
diff --git a/pkg/clustertree/cluster-manager/utils/rootcluster.go b/pkg/clustertree/cluster-manager/utils/rootcluster.go
new file mode 100644
index 000000000..3707ffbee
--- /dev/null
+++ b/pkg/clustertree/cluster-manager/utils/rootcluster.go
@@ -0,0 +1,103 @@
+package utils
+
+import (
+ "context"
+ "fmt"
+ "os"
+ "sort"
+
+ corev1 "k8s.io/api/core/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/client-go/kubernetes"
+
+ kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ "github.com/kosmos.io/kosmos/pkg/utils"
+)
+
+const (
+ RootClusterAnnotationKey = "kosmos.io/cluster-role"
+ RootClusterAnnotationValue = "root"
+)
+
+// IsRootCluster checks if a cluster is root cluster
+func IsRootCluster(cluster *kosmosv1alpha1.Cluster) bool {
+ annotations := cluster.GetAnnotations()
+ if val, ok := annotations[RootClusterAnnotationKey]; ok {
+ return val == RootClusterAnnotationValue
+ }
+ return false
+}
+
+func GetAddress(ctx context.Context, rootClient kubernetes.Interface, originAddress []corev1.NodeAddress) ([]corev1.NodeAddress, error) {
+ preferredAddressType := corev1.NodeAddressType(os.Getenv("PREFERRED-ADDRESS-TYPE"))
+
+ if len(preferredAddressType) == 0 {
+ preferredAddressType = corev1.NodeInternalDNS
+ }
+
+ prefixAddress := []corev1.NodeAddress{
+ {Type: preferredAddressType, Address: os.Getenv("LEAF_NODE_IP")},
+ }
+
+ address, err := SortAddress(ctx, rootClient, originAddress)
+
+ if err != nil {
+ return nil, err
+ }
+
+ return append(prefixAddress, address...), nil
+}
+
+func SortAddress(ctx context.Context, rootClient kubernetes.Interface, originAddress []corev1.NodeAddress) ([]corev1.NodeAddress, error) {
+ rootnodes, err := rootClient.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
+ if err != nil {
+ return nil, fmt.Errorf("create node failed, cannot get node from root cluster, err: %v", err)
+ }
+
+ if len(rootnodes.Items) == 0 {
+		return nil, fmt.Errorf("create node failed, no nodes found in root cluster")
+ }
+
+ isIPv4First := true
+ for _, addr := range rootnodes.Items[0].Status.Addresses {
+ if addr.Type == corev1.NodeInternalIP {
+ if utils.IsIPv6(addr.Address) {
+ isIPv4First = false
+ }
+ break
+ }
+ }
+
+ address := []corev1.NodeAddress{}
+ otherAddress := []corev1.NodeAddress{}
+
+ for _, addr := range originAddress {
+ if addr.Type == corev1.NodeInternalIP {
+ address = append(address, corev1.NodeAddress{Type: corev1.NodeInternalIP, Address: addr.Address})
+ } else {
+ otherAddress = append(otherAddress, addr)
+ }
+ }
+
+	sort.Slice(address, func(i, j int) bool {
+		iIsIPv6 := utils.IsIPv6(address[i].Address)
+		jIsIPv6 := utils.IsIPv6(address[j].Address)
+		if iIsIPv6 == jIsIPv6 {
+			// Addresses of the same family compare equal; a less function
+			// must not report either as smaller, or the ordering is invalid.
+			return false
+		}
+		if isIPv4First {
+			return !iIsIPv6
+		}
+		return iIsIPv6
+	})
+
+ return append(address, otherAddress...), nil
+}
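The family-first ordering in `SortAddress` can be exercised with a stdlib-only sketch; `net.ParseIP` plus a nil `To4()` stands in for `utils.IsIPv6`, and `sort.SliceStable` keeps the incoming order within each family:

```go
package main

import (
	"fmt"
	"net"
	"sort"
)

// isIPv6 reports whether s parses as an IP with no IPv4 form; a stand-in
// for utils.IsIPv6 in the code above.
func isIPv6(s string) bool {
	ip := net.ParseIP(s)
	return ip != nil && ip.To4() == nil
}

// sortFamilyFirst moves the preferred family to the front while preserving
// the original order within each family -- the same intent as SortAddress.
func sortFamilyFirst(addrs []string, ipv4First bool) {
	sort.SliceStable(addrs, func(i, j int) bool {
		if isIPv6(addrs[i]) == isIPv6(addrs[j]) {
			return false // equal: neither element is less than the other
		}
		if ipv4First {
			return !isIPv6(addrs[i])
		}
		return isIPv6(addrs[i])
	})
}

func main() {
	addrs := []string{"fd00::1", "10.0.0.2", "2001:db8::5", "10.0.0.1"}
	sortFamilyFirst(addrs, true)
	fmt.Println(addrs) // [10.0.0.2 10.0.0.1 fd00::1 2001:db8::5]
}
```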
diff --git a/pkg/clustertree/cluster-manager/utils/util.go b/pkg/clustertree/cluster-manager/utils/util.go
deleted file mode 100644
index 86f864694..000000000
--- a/pkg/clustertree/cluster-manager/utils/util.go
+++ /dev/null
@@ -1,33 +0,0 @@
-package utils
-
-import (
- "k8s.io/client-go/rest"
- "k8s.io/client-go/tools/clientcmd"
-)
-
-type Handlers func(*rest.Config)
-
-func NewConfigFromBytes(kubeConfig []byte, handlers ...Handlers) (*rest.Config, error) {
- var (
- config *rest.Config
- err error
- )
-
- c, err := clientcmd.NewClientConfigFromBytes(kubeConfig)
- if err != nil {
- return nil, err
- }
- config, err = c.ClientConfig()
- if err != nil {
- return nil, err
- }
-
- for _, h := range handlers {
- if h == nil {
- continue
- }
- h(config)
- }
-
- return config, nil
-}
diff --git a/pkg/generated/clientset/versioned/typed/kosmos/v1alpha1/fake/fake_kosmos_client.go b/pkg/generated/clientset/versioned/typed/kosmos/v1alpha1/fake/fake_kosmos_client.go
index 552486a29..c74de6468 100644
--- a/pkg/generated/clientset/versioned/typed/kosmos/v1alpha1/fake/fake_kosmos_client.go
+++ b/pkg/generated/clientset/versioned/typed/kosmos/v1alpha1/fake/fake_kosmos_client.go
@@ -32,6 +32,10 @@ func (c *FakeKosmosV1alpha1) NodeConfigs() v1alpha1.NodeConfigInterface {
return &FakeNodeConfigs{c}
}
+func (c *FakeKosmosV1alpha1) PodConvertPolicies(namespace string) v1alpha1.PodConvertPolicyInterface {
+ return &FakePodConvertPolicies{c, namespace}
+}
+
func (c *FakeKosmosV1alpha1) ShadowDaemonSets(namespace string) v1alpha1.ShadowDaemonSetInterface {
return &FakeShadowDaemonSets{c, namespace}
}
diff --git a/pkg/generated/clientset/versioned/typed/kosmos/v1alpha1/fake/fake_podconvertpolicy.go b/pkg/generated/clientset/versioned/typed/kosmos/v1alpha1/fake/fake_podconvertpolicy.go
new file mode 100644
index 000000000..7e336cdb3
--- /dev/null
+++ b/pkg/generated/clientset/versioned/typed/kosmos/v1alpha1/fake/fake_podconvertpolicy.go
@@ -0,0 +1,114 @@
+// Code generated by client-gen. DO NOT EDIT.
+
+package fake
+
+import (
+ "context"
+
+ v1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ labels "k8s.io/apimachinery/pkg/labels"
+ schema "k8s.io/apimachinery/pkg/runtime/schema"
+ types "k8s.io/apimachinery/pkg/types"
+ watch "k8s.io/apimachinery/pkg/watch"
+ testing "k8s.io/client-go/testing"
+)
+
+// FakePodConvertPolicies implements PodConvertPolicyInterface
+type FakePodConvertPolicies struct {
+ Fake *FakeKosmosV1alpha1
+ ns string
+}
+
+var podconvertpoliciesResource = schema.GroupVersionResource{Group: "kosmos.io", Version: "v1alpha1", Resource: "podconvertpolicies"}
+
+var podconvertpoliciesKind = schema.GroupVersionKind{Group: "kosmos.io", Version: "v1alpha1", Kind: "PodConvertPolicy"}
+
+// Get takes name of the podConvertPolicy, and returns the corresponding podConvertPolicy object, and an error if there is any.
+func (c *FakePodConvertPolicies) Get(ctx context.Context, name string, options v1.GetOptions) (result *v1alpha1.PodConvertPolicy, err error) {
+ obj, err := c.Fake.
+ Invokes(testing.NewGetAction(podconvertpoliciesResource, c.ns, name), &v1alpha1.PodConvertPolicy{})
+
+ if obj == nil {
+ return nil, err
+ }
+ return obj.(*v1alpha1.PodConvertPolicy), err
+}
+
+// List takes label and field selectors, and returns the list of PodConvertPolicies that match those selectors.
+func (c *FakePodConvertPolicies) List(ctx context.Context, opts v1.ListOptions) (result *v1alpha1.PodConvertPolicyList, err error) {
+ obj, err := c.Fake.
+ Invokes(testing.NewListAction(podconvertpoliciesResource, podconvertpoliciesKind, c.ns, opts), &v1alpha1.PodConvertPolicyList{})
+
+ if obj == nil {
+ return nil, err
+ }
+
+ label, _, _ := testing.ExtractFromListOptions(opts)
+ if label == nil {
+ label = labels.Everything()
+ }
+ list := &v1alpha1.PodConvertPolicyList{ListMeta: obj.(*v1alpha1.PodConvertPolicyList).ListMeta}
+ for _, item := range obj.(*v1alpha1.PodConvertPolicyList).Items {
+ if label.Matches(labels.Set(item.Labels)) {
+ list.Items = append(list.Items, item)
+ }
+ }
+ return list, err
+}
+
+// Watch returns a watch.Interface that watches the requested podConvertPolicies.
+func (c *FakePodConvertPolicies) Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error) {
+ return c.Fake.
+ InvokesWatch(testing.NewWatchAction(podconvertpoliciesResource, c.ns, opts))
+
+}
+
+// Create takes the representation of a podConvertPolicy and creates it. Returns the server's representation of the podConvertPolicy, and an error, if there is any.
+func (c *FakePodConvertPolicies) Create(ctx context.Context, podConvertPolicy *v1alpha1.PodConvertPolicy, opts v1.CreateOptions) (result *v1alpha1.PodConvertPolicy, err error) {
+ obj, err := c.Fake.
+ Invokes(testing.NewCreateAction(podconvertpoliciesResource, c.ns, podConvertPolicy), &v1alpha1.PodConvertPolicy{})
+
+ if obj == nil {
+ return nil, err
+ }
+ return obj.(*v1alpha1.PodConvertPolicy), err
+}
+
+// Update takes the representation of a podConvertPolicy and updates it. Returns the server's representation of the podConvertPolicy, and an error, if there is any.
+func (c *FakePodConvertPolicies) Update(ctx context.Context, podConvertPolicy *v1alpha1.PodConvertPolicy, opts v1.UpdateOptions) (result *v1alpha1.PodConvertPolicy, err error) {
+ obj, err := c.Fake.
+ Invokes(testing.NewUpdateAction(podconvertpoliciesResource, c.ns, podConvertPolicy), &v1alpha1.PodConvertPolicy{})
+
+ if obj == nil {
+ return nil, err
+ }
+ return obj.(*v1alpha1.PodConvertPolicy), err
+}
+
+// Delete takes name of the podConvertPolicy and deletes it. Returns an error if one occurs.
+func (c *FakePodConvertPolicies) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error {
+ _, err := c.Fake.
+ Invokes(testing.NewDeleteActionWithOptions(podconvertpoliciesResource, c.ns, name, opts), &v1alpha1.PodConvertPolicy{})
+
+ return err
+}
+
+// DeleteCollection deletes a collection of objects.
+func (c *FakePodConvertPolicies) DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error {
+ action := testing.NewDeleteCollectionAction(podconvertpoliciesResource, c.ns, listOpts)
+
+ _, err := c.Fake.Invokes(action, &v1alpha1.PodConvertPolicyList{})
+ return err
+}
+
+// Patch applies the patch and returns the patched podConvertPolicy.
+func (c *FakePodConvertPolicies) Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1alpha1.PodConvertPolicy, err error) {
+ obj, err := c.Fake.
+ Invokes(testing.NewPatchSubresourceAction(podconvertpoliciesResource, c.ns, name, pt, data, subresources...), &v1alpha1.PodConvertPolicy{})
+
+ if obj == nil {
+ return nil, err
+ }
+ return obj.(*v1alpha1.PodConvertPolicy), err
+}
diff --git a/pkg/generated/clientset/versioned/typed/kosmos/v1alpha1/generated_expansion.go b/pkg/generated/clientset/versioned/typed/kosmos/v1alpha1/generated_expansion.go
index 123a25122..49973005a 100644
--- a/pkg/generated/clientset/versioned/typed/kosmos/v1alpha1/generated_expansion.go
+++ b/pkg/generated/clientset/versioned/typed/kosmos/v1alpha1/generated_expansion.go
@@ -12,4 +12,6 @@ type KnodeExpansion interface{}
type NodeConfigExpansion interface{}
+type PodConvertPolicyExpansion interface{}
+
type ShadowDaemonSetExpansion interface{}
diff --git a/pkg/generated/clientset/versioned/typed/kosmos/v1alpha1/kosmos_client.go b/pkg/generated/clientset/versioned/typed/kosmos/v1alpha1/kosmos_client.go
index a4a620648..6340661cb 100644
--- a/pkg/generated/clientset/versioned/typed/kosmos/v1alpha1/kosmos_client.go
+++ b/pkg/generated/clientset/versioned/typed/kosmos/v1alpha1/kosmos_client.go
@@ -17,6 +17,7 @@ type KosmosV1alpha1Interface interface {
DaemonSetsGetter
KnodesGetter
NodeConfigsGetter
+ PodConvertPoliciesGetter
ShadowDaemonSetsGetter
}
@@ -45,6 +46,10 @@ func (c *KosmosV1alpha1Client) NodeConfigs() NodeConfigInterface {
return newNodeConfigs(c)
}
+func (c *KosmosV1alpha1Client) PodConvertPolicies(namespace string) PodConvertPolicyInterface {
+ return newPodConvertPolicies(c, namespace)
+}
+
func (c *KosmosV1alpha1Client) ShadowDaemonSets(namespace string) ShadowDaemonSetInterface {
return newShadowDaemonSets(c, namespace)
}
diff --git a/pkg/generated/clientset/versioned/typed/kosmos/v1alpha1/podconvertpolicy.go b/pkg/generated/clientset/versioned/typed/kosmos/v1alpha1/podconvertpolicy.go
new file mode 100644
index 000000000..65765fbdf
--- /dev/null
+++ b/pkg/generated/clientset/versioned/typed/kosmos/v1alpha1/podconvertpolicy.go
@@ -0,0 +1,162 @@
+// Code generated by client-gen. DO NOT EDIT.
+
+package v1alpha1
+
+import (
+ "context"
+ "time"
+
+ v1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ scheme "github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned/scheme"
+ v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ types "k8s.io/apimachinery/pkg/types"
+ watch "k8s.io/apimachinery/pkg/watch"
+ rest "k8s.io/client-go/rest"
+)
+
+// PodConvertPoliciesGetter has a method to return a PodConvertPolicyInterface.
+// A group's client should implement this interface.
+type PodConvertPoliciesGetter interface {
+ PodConvertPolicies(namespace string) PodConvertPolicyInterface
+}
+
+// PodConvertPolicyInterface has methods to work with PodConvertPolicy resources.
+type PodConvertPolicyInterface interface {
+ Create(ctx context.Context, podConvertPolicy *v1alpha1.PodConvertPolicy, opts v1.CreateOptions) (*v1alpha1.PodConvertPolicy, error)
+ Update(ctx context.Context, podConvertPolicy *v1alpha1.PodConvertPolicy, opts v1.UpdateOptions) (*v1alpha1.PodConvertPolicy, error)
+ Delete(ctx context.Context, name string, opts v1.DeleteOptions) error
+ DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error
+ Get(ctx context.Context, name string, opts v1.GetOptions) (*v1alpha1.PodConvertPolicy, error)
+ List(ctx context.Context, opts v1.ListOptions) (*v1alpha1.PodConvertPolicyList, error)
+ Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error)
+ Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1alpha1.PodConvertPolicy, err error)
+ PodConvertPolicyExpansion
+}
+
+// podConvertPolicies implements PodConvertPolicyInterface
+type podConvertPolicies struct {
+ client rest.Interface
+ ns string
+}
+
+// newPodConvertPolicies returns a PodConvertPolicies
+func newPodConvertPolicies(c *KosmosV1alpha1Client, namespace string) *podConvertPolicies {
+ return &podConvertPolicies{
+ client: c.RESTClient(),
+ ns: namespace,
+ }
+}
+
+// Get takes name of the podConvertPolicy, and returns the corresponding podConvertPolicy object, and an error if there is any.
+func (c *podConvertPolicies) Get(ctx context.Context, name string, options v1.GetOptions) (result *v1alpha1.PodConvertPolicy, err error) {
+ result = &v1alpha1.PodConvertPolicy{}
+ err = c.client.Get().
+ Namespace(c.ns).
+ Resource("podconvertpolicies").
+ Name(name).
+ VersionedParams(&options, scheme.ParameterCodec).
+ Do(ctx).
+ Into(result)
+ return
+}
+
+// List takes label and field selectors, and returns the list of PodConvertPolicies that match those selectors.
+func (c *podConvertPolicies) List(ctx context.Context, opts v1.ListOptions) (result *v1alpha1.PodConvertPolicyList, err error) {
+ var timeout time.Duration
+ if opts.TimeoutSeconds != nil {
+ timeout = time.Duration(*opts.TimeoutSeconds) * time.Second
+ }
+ result = &v1alpha1.PodConvertPolicyList{}
+ err = c.client.Get().
+ Namespace(c.ns).
+ Resource("podconvertpolicies").
+ VersionedParams(&opts, scheme.ParameterCodec).
+ Timeout(timeout).
+ Do(ctx).
+ Into(result)
+ return
+}
+
+// Watch returns a watch.Interface that watches the requested podConvertPolicies.
+func (c *podConvertPolicies) Watch(ctx context.Context, opts v1.ListOptions) (watch.Interface, error) {
+ var timeout time.Duration
+ if opts.TimeoutSeconds != nil {
+ timeout = time.Duration(*opts.TimeoutSeconds) * time.Second
+ }
+ opts.Watch = true
+ return c.client.Get().
+ Namespace(c.ns).
+ Resource("podconvertpolicies").
+ VersionedParams(&opts, scheme.ParameterCodec).
+ Timeout(timeout).
+ Watch(ctx)
+}
+
+// Create takes the representation of a podConvertPolicy and creates it. Returns the server's representation of the podConvertPolicy, and an error, if there is any.
+func (c *podConvertPolicies) Create(ctx context.Context, podConvertPolicy *v1alpha1.PodConvertPolicy, opts v1.CreateOptions) (result *v1alpha1.PodConvertPolicy, err error) {
+ result = &v1alpha1.PodConvertPolicy{}
+ err = c.client.Post().
+ Namespace(c.ns).
+ Resource("podconvertpolicies").
+ VersionedParams(&opts, scheme.ParameterCodec).
+ Body(podConvertPolicy).
+ Do(ctx).
+ Into(result)
+ return
+}
+
+// Update takes the representation of a podConvertPolicy and updates it. Returns the server's representation of the podConvertPolicy, and an error, if there is any.
+func (c *podConvertPolicies) Update(ctx context.Context, podConvertPolicy *v1alpha1.PodConvertPolicy, opts v1.UpdateOptions) (result *v1alpha1.PodConvertPolicy, err error) {
+ result = &v1alpha1.PodConvertPolicy{}
+ err = c.client.Put().
+ Namespace(c.ns).
+ Resource("podconvertpolicies").
+ Name(podConvertPolicy.Name).
+ VersionedParams(&opts, scheme.ParameterCodec).
+ Body(podConvertPolicy).
+ Do(ctx).
+ Into(result)
+ return
+}
+
+// Delete takes name of the podConvertPolicy and deletes it. Returns an error if one occurs.
+func (c *podConvertPolicies) Delete(ctx context.Context, name string, opts v1.DeleteOptions) error {
+ return c.client.Delete().
+ Namespace(c.ns).
+ Resource("podconvertpolicies").
+ Name(name).
+ Body(&opts).
+ Do(ctx).
+ Error()
+}
+
+// DeleteCollection deletes a collection of objects.
+func (c *podConvertPolicies) DeleteCollection(ctx context.Context, opts v1.DeleteOptions, listOpts v1.ListOptions) error {
+ var timeout time.Duration
+ if listOpts.TimeoutSeconds != nil {
+ timeout = time.Duration(*listOpts.TimeoutSeconds) * time.Second
+ }
+ return c.client.Delete().
+ Namespace(c.ns).
+ Resource("podconvertpolicies").
+ VersionedParams(&listOpts, scheme.ParameterCodec).
+ Timeout(timeout).
+ Body(&opts).
+ Do(ctx).
+ Error()
+}
+
+// Patch applies the patch and returns the patched podConvertPolicy.
+func (c *podConvertPolicies) Patch(ctx context.Context, name string, pt types.PatchType, data []byte, opts v1.PatchOptions, subresources ...string) (result *v1alpha1.PodConvertPolicy, err error) {
+ result = &v1alpha1.PodConvertPolicy{}
+ err = c.client.Patch(pt).
+ Namespace(c.ns).
+ Resource("podconvertpolicies").
+ Name(name).
+ SubResource(subresources...).
+ VersionedParams(&opts, scheme.ParameterCodec).
+ Body(data).
+ Do(ctx).
+ Into(result)
+ return
+}
diff --git a/pkg/generated/informers/externalversions/generic.go b/pkg/generated/informers/externalversions/generic.go
index b7e59efd5..b24e17f81 100644
--- a/pkg/generated/informers/externalversions/generic.go
+++ b/pkg/generated/informers/externalversions/generic.go
@@ -48,6 +48,8 @@ func (f *sharedInformerFactory) ForResource(resource schema.GroupVersionResource
return &genericInformer{resource: resource.GroupResource(), informer: f.Kosmos().V1alpha1().Knodes().Informer()}, nil
case v1alpha1.SchemeGroupVersion.WithResource("nodeconfigs"):
return &genericInformer{resource: resource.GroupResource(), informer: f.Kosmos().V1alpha1().NodeConfigs().Informer()}, nil
+ case v1alpha1.SchemeGroupVersion.WithResource("podconvertpolicies"):
+ return &genericInformer{resource: resource.GroupResource(), informer: f.Kosmos().V1alpha1().PodConvertPolicies().Informer()}, nil
case v1alpha1.SchemeGroupVersion.WithResource("shadowdaemonsets"):
return &genericInformer{resource: resource.GroupResource(), informer: f.Kosmos().V1alpha1().ShadowDaemonSets().Informer()}, nil
diff --git a/pkg/generated/informers/externalversions/kosmos/v1alpha1/interface.go b/pkg/generated/informers/externalversions/kosmos/v1alpha1/interface.go
index eb39d77ff..4768655ab 100644
--- a/pkg/generated/informers/externalversions/kosmos/v1alpha1/interface.go
+++ b/pkg/generated/informers/externalversions/kosmos/v1alpha1/interface.go
@@ -18,6 +18,8 @@ type Interface interface {
Knodes() KnodeInformer
// NodeConfigs returns a NodeConfigInformer.
NodeConfigs() NodeConfigInformer
+ // PodConvertPolicies returns a PodConvertPolicyInformer.
+ PodConvertPolicies() PodConvertPolicyInformer
// ShadowDaemonSets returns a ShadowDaemonSetInformer.
ShadowDaemonSets() ShadowDaemonSetInformer
}
@@ -58,6 +60,11 @@ func (v *version) NodeConfigs() NodeConfigInformer {
return &nodeConfigInformer{factory: v.factory, tweakListOptions: v.tweakListOptions}
}
+// PodConvertPolicies returns a PodConvertPolicyInformer.
+func (v *version) PodConvertPolicies() PodConvertPolicyInformer {
+ return &podConvertPolicyInformer{factory: v.factory, namespace: v.namespace, tweakListOptions: v.tweakListOptions}
+}
+
// ShadowDaemonSets returns a ShadowDaemonSetInformer.
func (v *version) ShadowDaemonSets() ShadowDaemonSetInformer {
return &shadowDaemonSetInformer{factory: v.factory, namespace: v.namespace, tweakListOptions: v.tweakListOptions}
diff --git a/pkg/generated/informers/externalversions/kosmos/v1alpha1/podconvertpolicy.go b/pkg/generated/informers/externalversions/kosmos/v1alpha1/podconvertpolicy.go
new file mode 100644
index 000000000..213ca74ec
--- /dev/null
+++ b/pkg/generated/informers/externalversions/kosmos/v1alpha1/podconvertpolicy.go
@@ -0,0 +1,74 @@
+// Code generated by informer-gen. DO NOT EDIT.
+
+package v1alpha1
+
+import (
+ "context"
+ time "time"
+
+ kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ versioned "github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned"
+ internalinterfaces "github.com/kosmos.io/kosmos/pkg/generated/informers/externalversions/internalinterfaces"
+ v1alpha1 "github.com/kosmos.io/kosmos/pkg/generated/listers/kosmos/v1alpha1"
+ v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ runtime "k8s.io/apimachinery/pkg/runtime"
+ watch "k8s.io/apimachinery/pkg/watch"
+ cache "k8s.io/client-go/tools/cache"
+)
+
+// PodConvertPolicyInformer provides access to a shared informer and lister for
+// PodConvertPolicies.
+type PodConvertPolicyInformer interface {
+ Informer() cache.SharedIndexInformer
+ Lister() v1alpha1.PodConvertPolicyLister
+}
+
+type podConvertPolicyInformer struct {
+ factory internalinterfaces.SharedInformerFactory
+ tweakListOptions internalinterfaces.TweakListOptionsFunc
+ namespace string
+}
+
+// NewPodConvertPolicyInformer constructs a new informer for PodConvertPolicy type.
+// Always prefer using an informer factory to get a shared informer instead of getting an independent
+// one. This reduces memory footprint and number of connections to the server.
+func NewPodConvertPolicyInformer(client versioned.Interface, namespace string, resyncPeriod time.Duration, indexers cache.Indexers) cache.SharedIndexInformer {
+ return NewFilteredPodConvertPolicyInformer(client, namespace, resyncPeriod, indexers, nil)
+}
+
+// NewFilteredPodConvertPolicyInformer constructs a new informer for PodConvertPolicy type.
+// Always prefer using an informer factory to get a shared informer instead of getting an independent
+// one. This reduces memory footprint and number of connections to the server.
+func NewFilteredPodConvertPolicyInformer(client versioned.Interface, namespace string, resyncPeriod time.Duration, indexers cache.Indexers, tweakListOptions internalinterfaces.TweakListOptionsFunc) cache.SharedIndexInformer {
+ return cache.NewSharedIndexInformer(
+ &cache.ListWatch{
+ ListFunc: func(options v1.ListOptions) (runtime.Object, error) {
+ if tweakListOptions != nil {
+ tweakListOptions(&options)
+ }
+ return client.KosmosV1alpha1().PodConvertPolicies(namespace).List(context.TODO(), options)
+ },
+ WatchFunc: func(options v1.ListOptions) (watch.Interface, error) {
+ if tweakListOptions != nil {
+ tweakListOptions(&options)
+ }
+ return client.KosmosV1alpha1().PodConvertPolicies(namespace).Watch(context.TODO(), options)
+ },
+ },
+ &kosmosv1alpha1.PodConvertPolicy{},
+ resyncPeriod,
+ indexers,
+ )
+}
+
+func (f *podConvertPolicyInformer) defaultInformer(client versioned.Interface, resyncPeriod time.Duration) cache.SharedIndexInformer {
+ return NewFilteredPodConvertPolicyInformer(client, f.namespace, resyncPeriod, cache.Indexers{cache.NamespaceIndex: cache.MetaNamespaceIndexFunc}, f.tweakListOptions)
+}
+
+func (f *podConvertPolicyInformer) Informer() cache.SharedIndexInformer {
+ return f.factory.InformerFor(&kosmosv1alpha1.PodConvertPolicy{}, f.defaultInformer)
+}
+
+func (f *podConvertPolicyInformer) Lister() v1alpha1.PodConvertPolicyLister {
+ return v1alpha1.NewPodConvertPolicyLister(f.Informer().GetIndexer())
+}
diff --git a/pkg/generated/listers/kosmos/v1alpha1/expansion_generated.go b/pkg/generated/listers/kosmos/v1alpha1/expansion_generated.go
index 501b485cd..e588209cc 100644
--- a/pkg/generated/listers/kosmos/v1alpha1/expansion_generated.go
+++ b/pkg/generated/listers/kosmos/v1alpha1/expansion_generated.go
@@ -26,6 +26,14 @@ type KnodeListerExpansion interface{}
// NodeConfigLister.
type NodeConfigListerExpansion interface{}
+// PodConvertPolicyListerExpansion allows custom methods to be added to
+// PodConvertPolicyLister.
+type PodConvertPolicyListerExpansion interface{}
+
+// PodConvertPolicyNamespaceListerExpansion allows custom methods to be added to
+// PodConvertPolicyNamespaceLister.
+type PodConvertPolicyNamespaceListerExpansion interface{}
+
// ShadowDaemonSetListerExpansion allows custom methods to be added to
// ShadowDaemonSetLister.
type ShadowDaemonSetListerExpansion interface{}
diff --git a/pkg/generated/listers/kosmos/v1alpha1/podconvertpolicy.go b/pkg/generated/listers/kosmos/v1alpha1/podconvertpolicy.go
new file mode 100644
index 000000000..03c2adff2
--- /dev/null
+++ b/pkg/generated/listers/kosmos/v1alpha1/podconvertpolicy.go
@@ -0,0 +1,83 @@
+// Code generated by lister-gen. DO NOT EDIT.
+
+package v1alpha1
+
+import (
+ v1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ "k8s.io/apimachinery/pkg/api/errors"
+ "k8s.io/apimachinery/pkg/labels"
+ "k8s.io/client-go/tools/cache"
+)
+
+// PodConvertPolicyLister helps list PodConvertPolicies.
+// All objects returned here must be treated as read-only.
+type PodConvertPolicyLister interface {
+ // List lists all PodConvertPolicies in the indexer.
+ // Objects returned here must be treated as read-only.
+ List(selector labels.Selector) (ret []*v1alpha1.PodConvertPolicy, err error)
+ // PodConvertPolicies returns an object that can list and get PodConvertPolicies.
+ PodConvertPolicies(namespace string) PodConvertPolicyNamespaceLister
+ PodConvertPolicyListerExpansion
+}
+
+// podConvertPolicyLister implements the PodConvertPolicyLister interface.
+type podConvertPolicyLister struct {
+ indexer cache.Indexer
+}
+
+// NewPodConvertPolicyLister returns a new PodConvertPolicyLister.
+func NewPodConvertPolicyLister(indexer cache.Indexer) PodConvertPolicyLister {
+ return &podConvertPolicyLister{indexer: indexer}
+}
+
+// List lists all PodConvertPolicies in the indexer.
+func (s *podConvertPolicyLister) List(selector labels.Selector) (ret []*v1alpha1.PodConvertPolicy, err error) {
+ err = cache.ListAll(s.indexer, selector, func(m interface{}) {
+ ret = append(ret, m.(*v1alpha1.PodConvertPolicy))
+ })
+ return ret, err
+}
+
+// PodConvertPolicies returns an object that can list and get PodConvertPolicies.
+func (s *podConvertPolicyLister) PodConvertPolicies(namespace string) PodConvertPolicyNamespaceLister {
+ return podConvertPolicyNamespaceLister{indexer: s.indexer, namespace: namespace}
+}
+
+// PodConvertPolicyNamespaceLister helps list and get PodConvertPolicies.
+// All objects returned here must be treated as read-only.
+type PodConvertPolicyNamespaceLister interface {
+ // List lists all PodConvertPolicies in the indexer for a given namespace.
+ // Objects returned here must be treated as read-only.
+ List(selector labels.Selector) (ret []*v1alpha1.PodConvertPolicy, err error)
+ // Get retrieves the PodConvertPolicy from the indexer for a given namespace and name.
+ // Objects returned here must be treated as read-only.
+ Get(name string) (*v1alpha1.PodConvertPolicy, error)
+ PodConvertPolicyNamespaceListerExpansion
+}
+
+// podConvertPolicyNamespaceLister implements the PodConvertPolicyNamespaceLister
+// interface.
+type podConvertPolicyNamespaceLister struct {
+ indexer cache.Indexer
+ namespace string
+}
+
+// List lists all PodConvertPolicies in the indexer for a given namespace.
+func (s podConvertPolicyNamespaceLister) List(selector labels.Selector) (ret []*v1alpha1.PodConvertPolicy, err error) {
+ err = cache.ListAllByNamespace(s.indexer, s.namespace, selector, func(m interface{}) {
+ ret = append(ret, m.(*v1alpha1.PodConvertPolicy))
+ })
+ return ret, err
+}
+
+// Get retrieves the PodConvertPolicy from the indexer for a given namespace and name.
+func (s podConvertPolicyNamespaceLister) Get(name string) (*v1alpha1.PodConvertPolicy, error) {
+ obj, exists, err := s.indexer.GetByKey(s.namespace + "/" + name)
+ if err != nil {
+ return nil, err
+ }
+ if !exists {
+ return nil, errors.NewNotFound(v1alpha1.Resource("podconvertpolicy"), name)
+ }
+ return obj.(*v1alpha1.PodConvertPolicy), nil
+}
diff --git a/pkg/generated/openapi/zz_generated.openapi.go b/pkg/generated/openapi/zz_generated.openapi.go
index 61095e4c8..1b04c12b5 100644
--- a/pkg/generated/openapi/zz_generated.openapi.go
+++ b/pkg/generated/openapi/zz_generated.openapi.go
@@ -15,88 +15,133 @@ import (
func GetOpenAPIDefinitions(ref common.ReferenceCallback) map[string]common.OpenAPIDefinition {
return map[string]common.OpenAPIDefinition{
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.Arp": schema_pkg_apis_kosmos_v1alpha1_Arp(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.Cluster": schema_pkg_apis_kosmos_v1alpha1_Cluster(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterList": schema_pkg_apis_kosmos_v1alpha1_ClusterList(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterNode": schema_pkg_apis_kosmos_v1alpha1_ClusterNode(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterNodeList": schema_pkg_apis_kosmos_v1alpha1_ClusterNodeList(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterNodeSpec": schema_pkg_apis_kosmos_v1alpha1_ClusterNodeSpec(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterNodeStatus": schema_pkg_apis_kosmos_v1alpha1_ClusterNodeStatus(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterSpec": schema_pkg_apis_kosmos_v1alpha1_ClusterSpec(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterStatus": schema_pkg_apis_kosmos_v1alpha1_ClusterStatus(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.DaemonSet": schema_pkg_apis_kosmos_v1alpha1_DaemonSet(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.DaemonSetList": schema_pkg_apis_kosmos_v1alpha1_DaemonSetList(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.DaemonSetSpec": schema_pkg_apis_kosmos_v1alpha1_DaemonSetSpec(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.DaemonSetStatus": schema_pkg_apis_kosmos_v1alpha1_DaemonSetStatus(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.Device": schema_pkg_apis_kosmos_v1alpha1_Device(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.Fdb": schema_pkg_apis_kosmos_v1alpha1_Fdb(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.Iptables": schema_pkg_apis_kosmos_v1alpha1_Iptables(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.Knode": schema_pkg_apis_kosmos_v1alpha1_Knode(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.KnodeList": schema_pkg_apis_kosmos_v1alpha1_KnodeList(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.KnodeSpec": schema_pkg_apis_kosmos_v1alpha1_KnodeSpec(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.KnodeStatus": schema_pkg_apis_kosmos_v1alpha1_KnodeStatus(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.NICNodeNames": schema_pkg_apis_kosmos_v1alpha1_NICNodeNames(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.NodeConfig": schema_pkg_apis_kosmos_v1alpha1_NodeConfig(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.NodeConfigList": schema_pkg_apis_kosmos_v1alpha1_NodeConfigList(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.NodeConfigSpec": schema_pkg_apis_kosmos_v1alpha1_NodeConfigSpec(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.NodeConfigStatus": schema_pkg_apis_kosmos_v1alpha1_NodeConfigStatus(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.Proxy": schema_pkg_apis_kosmos_v1alpha1_Proxy(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.Route": schema_pkg_apis_kosmos_v1alpha1_Route(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ShadowDaemonSet": schema_pkg_apis_kosmos_v1alpha1_ShadowDaemonSet(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ShadowDaemonSetList": schema_pkg_apis_kosmos_v1alpha1_ShadowDaemonSetList(ref),
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.VxlanCIDRs": schema_pkg_apis_kosmos_v1alpha1_VxlanCIDRs(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.APIGroup": schema_pkg_apis_meta_v1_APIGroup(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.APIGroupList": schema_pkg_apis_meta_v1_APIGroupList(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.APIResource": schema_pkg_apis_meta_v1_APIResource(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.APIResourceList": schema_pkg_apis_meta_v1_APIResourceList(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.APIVersions": schema_pkg_apis_meta_v1_APIVersions(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.ApplyOptions": schema_pkg_apis_meta_v1_ApplyOptions(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.Condition": schema_pkg_apis_meta_v1_Condition(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.CreateOptions": schema_pkg_apis_meta_v1_CreateOptions(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.DeleteOptions": schema_pkg_apis_meta_v1_DeleteOptions(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.Duration": schema_pkg_apis_meta_v1_Duration(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.FieldsV1": schema_pkg_apis_meta_v1_FieldsV1(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.GetOptions": schema_pkg_apis_meta_v1_GetOptions(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.GroupKind": schema_pkg_apis_meta_v1_GroupKind(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.GroupResource": schema_pkg_apis_meta_v1_GroupResource(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.GroupVersion": schema_pkg_apis_meta_v1_GroupVersion(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.GroupVersionForDiscovery": schema_pkg_apis_meta_v1_GroupVersionForDiscovery(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.GroupVersionKind": schema_pkg_apis_meta_v1_GroupVersionKind(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.GroupVersionResource": schema_pkg_apis_meta_v1_GroupVersionResource(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.InternalEvent": schema_pkg_apis_meta_v1_InternalEvent(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.LabelSelector": schema_pkg_apis_meta_v1_LabelSelector(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.LabelSelectorRequirement": schema_pkg_apis_meta_v1_LabelSelectorRequirement(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.List": schema_pkg_apis_meta_v1_List(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.ListMeta": schema_pkg_apis_meta_v1_ListMeta(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.ListOptions": schema_pkg_apis_meta_v1_ListOptions(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.ManagedFieldsEntry": schema_pkg_apis_meta_v1_ManagedFieldsEntry(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.MicroTime": schema_pkg_apis_meta_v1_MicroTime(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.ObjectMeta": schema_pkg_apis_meta_v1_ObjectMeta(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.OwnerReference": schema_pkg_apis_meta_v1_OwnerReference(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.PartialObjectMetadata": schema_pkg_apis_meta_v1_PartialObjectMetadata(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.PartialObjectMetadataList": schema_pkg_apis_meta_v1_PartialObjectMetadataList(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.Patch": schema_pkg_apis_meta_v1_Patch(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.PatchOptions": schema_pkg_apis_meta_v1_PatchOptions(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.Preconditions": schema_pkg_apis_meta_v1_Preconditions(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.RootPaths": schema_pkg_apis_meta_v1_RootPaths(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.ServerAddressByClientCIDR": schema_pkg_apis_meta_v1_ServerAddressByClientCIDR(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.Status": schema_pkg_apis_meta_v1_Status(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.StatusCause": schema_pkg_apis_meta_v1_StatusCause(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.StatusDetails": schema_pkg_apis_meta_v1_StatusDetails(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.Table": schema_pkg_apis_meta_v1_Table(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.TableColumnDefinition": schema_pkg_apis_meta_v1_TableColumnDefinition(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.TableOptions": schema_pkg_apis_meta_v1_TableOptions(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.TableRow": schema_pkg_apis_meta_v1_TableRow(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.TableRowCondition": schema_pkg_apis_meta_v1_TableRowCondition(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.Time": schema_pkg_apis_meta_v1_Time(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.Timestamp": schema_pkg_apis_meta_v1_Timestamp(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.TypeMeta": schema_pkg_apis_meta_v1_TypeMeta(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.UpdateOptions": schema_pkg_apis_meta_v1_UpdateOptions(ref),
- "k8s.io/apimachinery/pkg/apis/meta/v1.WatchEvent": schema_pkg_apis_meta_v1_WatchEvent(ref),
- "k8s.io/apimachinery/pkg/runtime.RawExtension": schema_k8sio_apimachinery_pkg_runtime_RawExtension(ref),
- "k8s.io/apimachinery/pkg/runtime.TypeMeta": schema_k8sio_apimachinery_pkg_runtime_TypeMeta(ref),
- "k8s.io/apimachinery/pkg/runtime.Unknown": schema_k8sio_apimachinery_pkg_runtime_Unknown(ref),
- "k8s.io/apimachinery/pkg/version.Info": schema_k8sio_apimachinery_pkg_version_Info(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.AffinityConverter": schema_pkg_apis_kosmos_v1alpha1_AffinityConverter(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.Arp": schema_pkg_apis_kosmos_v1alpha1_Arp(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.Cluster": schema_pkg_apis_kosmos_v1alpha1_Cluster(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterLinkOptions": schema_pkg_apis_kosmos_v1alpha1_ClusterLinkOptions(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterLinkStatus": schema_pkg_apis_kosmos_v1alpha1_ClusterLinkStatus(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterList": schema_pkg_apis_kosmos_v1alpha1_ClusterList(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterNode": schema_pkg_apis_kosmos_v1alpha1_ClusterNode(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterNodeList": schema_pkg_apis_kosmos_v1alpha1_ClusterNodeList(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterNodeSpec": schema_pkg_apis_kosmos_v1alpha1_ClusterNodeSpec(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterNodeStatus": schema_pkg_apis_kosmos_v1alpha1_ClusterNodeStatus(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterSpec": schema_pkg_apis_kosmos_v1alpha1_ClusterSpec(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterStatus": schema_pkg_apis_kosmos_v1alpha1_ClusterStatus(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterTreeOptions": schema_pkg_apis_kosmos_v1alpha1_ClusterTreeOptions(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterTreeStatus": schema_pkg_apis_kosmos_v1alpha1_ClusterTreeStatus(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.Converters": schema_pkg_apis_kosmos_v1alpha1_Converters(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.DaemonSet": schema_pkg_apis_kosmos_v1alpha1_DaemonSet(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.DaemonSetList": schema_pkg_apis_kosmos_v1alpha1_DaemonSetList(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.DaemonSetSpec": schema_pkg_apis_kosmos_v1alpha1_DaemonSetSpec(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.DaemonSetStatus": schema_pkg_apis_kosmos_v1alpha1_DaemonSetStatus(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.Device": schema_pkg_apis_kosmos_v1alpha1_Device(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.Fdb": schema_pkg_apis_kosmos_v1alpha1_Fdb(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.Iptables": schema_pkg_apis_kosmos_v1alpha1_Iptables(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.Knode": schema_pkg_apis_kosmos_v1alpha1_Knode(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.KnodeList": schema_pkg_apis_kosmos_v1alpha1_KnodeList(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.KnodeSpec": schema_pkg_apis_kosmos_v1alpha1_KnodeSpec(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.KnodeStatus": schema_pkg_apis_kosmos_v1alpha1_KnodeStatus(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.LeafModel": schema_pkg_apis_kosmos_v1alpha1_LeafModel(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.LeafNodeItem": schema_pkg_apis_kosmos_v1alpha1_LeafNodeItem(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.NICNodeNames": schema_pkg_apis_kosmos_v1alpha1_NICNodeNames(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.NodeConfig": schema_pkg_apis_kosmos_v1alpha1_NodeConfig(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.NodeConfigList": schema_pkg_apis_kosmos_v1alpha1_NodeConfigList(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.NodeConfigSpec": schema_pkg_apis_kosmos_v1alpha1_NodeConfigSpec(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.NodeConfigStatus": schema_pkg_apis_kosmos_v1alpha1_NodeConfigStatus(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.NodeNameConverter": schema_pkg_apis_kosmos_v1alpha1_NodeNameConverter(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.NodeSelector": schema_pkg_apis_kosmos_v1alpha1_NodeSelector(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.NodeSelectorConverter": schema_pkg_apis_kosmos_v1alpha1_NodeSelectorConverter(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.PodConvertPolicy": schema_pkg_apis_kosmos_v1alpha1_PodConvertPolicy(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.PodConvertPolicyList": schema_pkg_apis_kosmos_v1alpha1_PodConvertPolicyList(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.PodConvertPolicySpec": schema_pkg_apis_kosmos_v1alpha1_PodConvertPolicySpec(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.Proxy": schema_pkg_apis_kosmos_v1alpha1_Proxy(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.Route": schema_pkg_apis_kosmos_v1alpha1_Route(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.SchedulerNameConverter": schema_pkg_apis_kosmos_v1alpha1_SchedulerNameConverter(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ShadowDaemonSet": schema_pkg_apis_kosmos_v1alpha1_ShadowDaemonSet(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ShadowDaemonSetList": schema_pkg_apis_kosmos_v1alpha1_ShadowDaemonSetList(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.TolerationConverter": schema_pkg_apis_kosmos_v1alpha1_TolerationConverter(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.TopologySpreadConstraintsConverter": schema_pkg_apis_kosmos_v1alpha1_TopologySpreadConstraintsConverter(ref),
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.VxlanCIDRs": schema_pkg_apis_kosmos_v1alpha1_VxlanCIDRs(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.APIGroup": schema_pkg_apis_meta_v1_APIGroup(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.APIGroupList": schema_pkg_apis_meta_v1_APIGroupList(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.APIResource": schema_pkg_apis_meta_v1_APIResource(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.APIResourceList": schema_pkg_apis_meta_v1_APIResourceList(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.APIVersions": schema_pkg_apis_meta_v1_APIVersions(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.ApplyOptions": schema_pkg_apis_meta_v1_ApplyOptions(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.Condition": schema_pkg_apis_meta_v1_Condition(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.CreateOptions": schema_pkg_apis_meta_v1_CreateOptions(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.DeleteOptions": schema_pkg_apis_meta_v1_DeleteOptions(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.Duration": schema_pkg_apis_meta_v1_Duration(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.FieldsV1": schema_pkg_apis_meta_v1_FieldsV1(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.GetOptions": schema_pkg_apis_meta_v1_GetOptions(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.GroupKind": schema_pkg_apis_meta_v1_GroupKind(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.GroupResource": schema_pkg_apis_meta_v1_GroupResource(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.GroupVersion": schema_pkg_apis_meta_v1_GroupVersion(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.GroupVersionForDiscovery": schema_pkg_apis_meta_v1_GroupVersionForDiscovery(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.GroupVersionKind": schema_pkg_apis_meta_v1_GroupVersionKind(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.GroupVersionResource": schema_pkg_apis_meta_v1_GroupVersionResource(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.InternalEvent": schema_pkg_apis_meta_v1_InternalEvent(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.LabelSelector": schema_pkg_apis_meta_v1_LabelSelector(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.LabelSelectorRequirement": schema_pkg_apis_meta_v1_LabelSelectorRequirement(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.List": schema_pkg_apis_meta_v1_List(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.ListMeta": schema_pkg_apis_meta_v1_ListMeta(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.ListOptions": schema_pkg_apis_meta_v1_ListOptions(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.ManagedFieldsEntry": schema_pkg_apis_meta_v1_ManagedFieldsEntry(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.MicroTime": schema_pkg_apis_meta_v1_MicroTime(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.ObjectMeta": schema_pkg_apis_meta_v1_ObjectMeta(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.OwnerReference": schema_pkg_apis_meta_v1_OwnerReference(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.PartialObjectMetadata": schema_pkg_apis_meta_v1_PartialObjectMetadata(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.PartialObjectMetadataList": schema_pkg_apis_meta_v1_PartialObjectMetadataList(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.Patch": schema_pkg_apis_meta_v1_Patch(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.PatchOptions": schema_pkg_apis_meta_v1_PatchOptions(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.Preconditions": schema_pkg_apis_meta_v1_Preconditions(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.RootPaths": schema_pkg_apis_meta_v1_RootPaths(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.ServerAddressByClientCIDR": schema_pkg_apis_meta_v1_ServerAddressByClientCIDR(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.Status": schema_pkg_apis_meta_v1_Status(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.StatusCause": schema_pkg_apis_meta_v1_StatusCause(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.StatusDetails": schema_pkg_apis_meta_v1_StatusDetails(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.Table": schema_pkg_apis_meta_v1_Table(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.TableColumnDefinition": schema_pkg_apis_meta_v1_TableColumnDefinition(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.TableOptions": schema_pkg_apis_meta_v1_TableOptions(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.TableRow": schema_pkg_apis_meta_v1_TableRow(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.TableRowCondition": schema_pkg_apis_meta_v1_TableRowCondition(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.Time": schema_pkg_apis_meta_v1_Time(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.Timestamp": schema_pkg_apis_meta_v1_Timestamp(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.TypeMeta": schema_pkg_apis_meta_v1_TypeMeta(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.UpdateOptions": schema_pkg_apis_meta_v1_UpdateOptions(ref),
+ "k8s.io/apimachinery/pkg/apis/meta/v1.WatchEvent": schema_pkg_apis_meta_v1_WatchEvent(ref),
+ "k8s.io/apimachinery/pkg/runtime.RawExtension": schema_k8sio_apimachinery_pkg_runtime_RawExtension(ref),
+ "k8s.io/apimachinery/pkg/runtime.TypeMeta": schema_k8sio_apimachinery_pkg_runtime_TypeMeta(ref),
+ "k8s.io/apimachinery/pkg/runtime.Unknown": schema_k8sio_apimachinery_pkg_runtime_Unknown(ref),
+ "k8s.io/apimachinery/pkg/version.Info": schema_k8sio_apimachinery_pkg_version_Info(ref),
+ }
+}
+
+func schema_pkg_apis_kosmos_v1alpha1_AffinityConverter(ref common.ReferenceCallback) common.OpenAPIDefinition {
+ return common.OpenAPIDefinition{
+ Schema: spec.Schema{
+ SchemaProps: spec.SchemaProps{
+				Description: "AffinityConverter is used to modify the pod's Affinity when the pod is synced to a leaf cluster",
+ Type: []string{"object"},
+ Properties: map[string]spec.Schema{
+ "convertType": {
+ SchemaProps: spec.SchemaProps{
+ Default: "",
+ Type: []string{"string"},
+ Format: "",
+ },
+ },
+ "affinity": {
+ SchemaProps: spec.SchemaProps{
+ Ref: ref("k8s.io/api/core/v1.Affinity"),
+ },
+ },
+ },
+ Required: []string{"convertType"},
+ },
+ },
+ Dependencies: []string{
+ "k8s.io/api/core/v1.Affinity"},
}
}
@@ -183,6 +228,140 @@ func schema_pkg_apis_kosmos_v1alpha1_Cluster(ref common.ReferenceCallback) commo
}
}
+func schema_pkg_apis_kosmos_v1alpha1_ClusterLinkOptions(ref common.ReferenceCallback) common.OpenAPIDefinition {
+ return common.OpenAPIDefinition{
+ Schema: spec.Schema{
+ SchemaProps: spec.SchemaProps{
+ Type: []string{"object"},
+ Properties: map[string]spec.Schema{
+ "enable": {
+ SchemaProps: spec.SchemaProps{
+ Default: false,
+ Type: []string{"boolean"},
+ Format: "",
+ },
+ },
+ "cni": {
+ SchemaProps: spec.SchemaProps{
+ Default: "",
+ Type: []string{"string"},
+ Format: "",
+ },
+ },
+ "networkType": {
+ SchemaProps: spec.SchemaProps{
+ Default: "",
+ Type: []string{"string"},
+ Format: "",
+ },
+ },
+ "ipFamily": {
+ SchemaProps: spec.SchemaProps{
+ Default: "",
+ Type: []string{"string"},
+ Format: "",
+ },
+ },
+ "useIPPool": {
+ SchemaProps: spec.SchemaProps{
+ Type: []string{"boolean"},
+ Format: "",
+ },
+ },
+ "localCIDRs": {
+ SchemaProps: spec.SchemaProps{
+ Default: map[string]interface{}{},
+ Ref: ref("github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.VxlanCIDRs"),
+ },
+ },
+ "bridgeCIDRs": {
+ SchemaProps: spec.SchemaProps{
+ Default: map[string]interface{}{},
+ Ref: ref("github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.VxlanCIDRs"),
+ },
+ },
+ "nicNodeNames": {
+ SchemaProps: spec.SchemaProps{
+ Type: []string{"array"},
+ Items: &spec.SchemaOrArray{
+ Schema: &spec.Schema{
+ SchemaProps: spec.SchemaProps{
+ Default: map[string]interface{}{},
+ Ref: ref("github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.NICNodeNames"),
+ },
+ },
+ },
+ },
+ },
+ "defaultNICName": {
+ SchemaProps: spec.SchemaProps{
+ Type: []string{"string"},
+ Format: "",
+ },
+ },
+ "globalCIDRsMap": {
+ SchemaProps: spec.SchemaProps{
+ Type: []string{"object"},
+ AdditionalProperties: &spec.SchemaOrBool{
+ Allows: true,
+ Schema: &spec.Schema{
+ SchemaProps: spec.SchemaProps{
+ Default: "",
+ Type: []string{"string"},
+ Format: "",
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ Dependencies: []string{
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.NICNodeNames", "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.VxlanCIDRs"},
+ }
+}
+
+func schema_pkg_apis_kosmos_v1alpha1_ClusterLinkStatus(ref common.ReferenceCallback) common.OpenAPIDefinition {
+ return common.OpenAPIDefinition{
+ Schema: spec.Schema{
+ SchemaProps: spec.SchemaProps{
+ Type: []string{"object"},
+ Properties: map[string]spec.Schema{
+ "podCIDRs": {
+ SchemaProps: spec.SchemaProps{
+ Type: []string{"array"},
+ Items: &spec.SchemaOrArray{
+ Schema: &spec.Schema{
+ SchemaProps: spec.SchemaProps{
+ Default: "",
+ Type: []string{"string"},
+ Format: "",
+ },
+ },
+ },
+ },
+ },
+ "serviceCIDRs": {
+ SchemaProps: spec.SchemaProps{
+ Type: []string{"array"},
+ Items: &spec.SchemaOrArray{
+ Schema: &spec.Schema{
+ SchemaProps: spec.SchemaProps{
+ Default: "",
+ Type: []string{"string"},
+ Format: "",
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ }
+}
+
func schema_pkg_apis_kosmos_v1alpha1_ClusterList(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
@@ -413,21 +592,13 @@ func schema_pkg_apis_kosmos_v1alpha1_ClusterSpec(ref common.ReferenceCallback) c
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
- "cni": {
- SchemaProps: spec.SchemaProps{
- Default: "",
- Type: []string{"string"},
- Format: "",
- },
- },
- "networkType": {
+ "kubeconfig": {
SchemaProps: spec.SchemaProps{
- Default: "",
- Type: []string{"string"},
- Format: "",
+ Type: []string{"string"},
+ Format: "byte",
},
},
- "ipFamily": {
+ "namespace": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
@@ -440,116 +611,152 @@ func schema_pkg_apis_kosmos_v1alpha1_ClusterSpec(ref common.ReferenceCallback) c
Format: "",
},
},
- "namespace": {
- SchemaProps: spec.SchemaProps{
- Default: "",
- Type: []string{"string"},
- Format: "",
- },
- },
- "useIPPool": {
+ "clusterLinkOptions": {
SchemaProps: spec.SchemaProps{
- Type: []string{"boolean"},
- Format: "",
+ Ref: ref("github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterLinkOptions"),
},
},
- "localCIDRs": {
+ "clusterTreeOptions": {
SchemaProps: spec.SchemaProps{
- Default: map[string]interface{}{},
- Ref: ref("github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.VxlanCIDRs"),
+ Ref: ref("github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterTreeOptions"),
},
},
- "bridgeCIDRs": {
+ },
+ },
+ },
+ Dependencies: []string{
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterLinkOptions", "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterTreeOptions"},
+ }
+}
+
+func schema_pkg_apis_kosmos_v1alpha1_ClusterStatus(ref common.ReferenceCallback) common.OpenAPIDefinition {
+ return common.OpenAPIDefinition{
+ Schema: spec.Schema{
+ SchemaProps: spec.SchemaProps{
+ Type: []string{"object"},
+ Properties: map[string]spec.Schema{
+ "clusterLinkStatus": {
SchemaProps: spec.SchemaProps{
- Default: map[string]interface{}{},
- Ref: ref("github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.VxlanCIDRs"),
+							Description: "ClusterLinkStatus contains the cluster network information",
+ Default: map[string]interface{}{},
+ Ref: ref("github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterLinkStatus"),
},
},
- "nicNodeNames": {
+ "clusterTreeStatus": {
SchemaProps: spec.SchemaProps{
- Type: []string{"array"},
- Items: &spec.SchemaOrArray{
- Schema: &spec.Schema{
- SchemaProps: spec.SchemaProps{
- Default: map[string]interface{}{},
- Ref: ref("github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.NICNodeNames"),
- },
- },
- },
+							Description: "ClusterTreeStatus contains the leafNode end status of the member cluster",
+ Default: map[string]interface{}{},
+ Ref: ref("github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterTreeStatus"),
},
},
- "defaultNICName": {
+ },
+ },
+ },
+ Dependencies: []string{
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterLinkStatus", "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.ClusterTreeStatus"},
+ }
+}
+
+func schema_pkg_apis_kosmos_v1alpha1_ClusterTreeOptions(ref common.ReferenceCallback) common.OpenAPIDefinition {
+ return common.OpenAPIDefinition{
+ Schema: spec.Schema{
+ SchemaProps: spec.SchemaProps{
+ Type: []string{"object"},
+ Properties: map[string]spec.Schema{
+ "enable": {
SchemaProps: spec.SchemaProps{
- Type: []string{"string"},
- Format: "",
+ Default: false,
+ Type: []string{"boolean"},
+ Format: "",
},
},
- "globalCIDRsMap": {
+ "leafModels": {
SchemaProps: spec.SchemaProps{
- Type: []string{"object"},
- AdditionalProperties: &spec.SchemaOrBool{
- Allows: true,
+							Description: "LeafModels provides an API to arrange the member cluster with rules to pretend one or more leaf nodes",
+ Type: []string{"array"},
+ Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
- Default: "",
- Type: []string{"string"},
- Format: "",
+ Default: map[string]interface{}{},
+ Ref: ref("github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.LeafModel"),
},
},
},
},
},
- "kubeconfig": {
- SchemaProps: spec.SchemaProps{
- Type: []string{"string"},
- Format: "byte",
- },
- },
},
},
},
Dependencies: []string{
- "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.NICNodeNames", "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.VxlanCIDRs"},
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.LeafModel"},
}
}
-func schema_pkg_apis_kosmos_v1alpha1_ClusterStatus(ref common.ReferenceCallback) common.OpenAPIDefinition {
+func schema_pkg_apis_kosmos_v1alpha1_ClusterTreeStatus(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
SchemaProps: spec.SchemaProps{
Type: []string{"object"},
Properties: map[string]spec.Schema{
- "podCIDRs": {
+ "leafNodeItems": {
SchemaProps: spec.SchemaProps{
- Type: []string{"array"},
+							Description: "LeafNodeItems represents the list of leaf node items calculated in each member cluster.",
+ Type: []string{"array"},
Items: &spec.SchemaOrArray{
Schema: &spec.Schema{
SchemaProps: spec.SchemaProps{
- Default: "",
- Type: []string{"string"},
- Format: "",
+ Default: map[string]interface{}{},
+ Ref: ref("github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.LeafNodeItem"),
},
},
},
},
},
- "serviceCIDRs": {
- SchemaProps: spec.SchemaProps{
- Type: []string{"array"},
- Items: &spec.SchemaOrArray{
- Schema: &spec.Schema{
- SchemaProps: spec.SchemaProps{
- Default: "",
- Type: []string{"string"},
- Format: "",
- },
- },
- },
+ },
+ },
+ },
+ Dependencies: []string{
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.LeafNodeItem"},
+ }
+}
+
+func schema_pkg_apis_kosmos_v1alpha1_Converters(ref common.ReferenceCallback) common.OpenAPIDefinition {
+ return common.OpenAPIDefinition{
+ Schema: spec.Schema{
+ SchemaProps: spec.SchemaProps{
+				Description: "Converters are converters applied to a pod so that it can be scheduled in a leaf cluster",
+ Type: []string{"object"},
+ Properties: map[string]spec.Schema{
+ "schedulerNameConverter": {
+ SchemaProps: spec.SchemaProps{
+ Ref: ref("github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.SchedulerNameConverter"),
+ },
+ },
+ "nodeNameConverter": {
+ SchemaProps: spec.SchemaProps{
+ Ref: ref("github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.NodeNameConverter"),
+ },
+ },
+ "nodeSelectorConverter": {
+ SchemaProps: spec.SchemaProps{
+ Ref: ref("github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.NodeSelectorConverter"),
+ },
+ },
+ "affinityConverter": {
+ SchemaProps: spec.SchemaProps{
+ Ref: ref("github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.AffinityConverter"),
+ },
+ },
+ "topologySpreadConstraintsConverter": {
+ SchemaProps: spec.SchemaProps{
+ Ref: ref("github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.TopologySpreadConstraintsConverter"),
},
},
},
},
},
+ Dependencies: []string{
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.AffinityConverter", "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.NodeNameConverter", "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.NodeSelectorConverter", "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.SchedulerNameConverter", "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.TopologySpreadConstraintsConverter"},
}
}
@@ -1043,12 +1250,6 @@ func schema_pkg_apis_kosmos_v1alpha1_KnodeSpec(ref common.ReferenceCallback) com
Format: "byte",
},
},
- "kubeAPIQPS": {
- SchemaProps: spec.SchemaProps{
- Type: []string{"number"},
- Format: "float",
- },
- },
"kubeAPIBurst": {
SchemaProps: spec.SchemaProps{
Type: []string{"integer"},
@@ -1118,6 +1319,85 @@ func schema_pkg_apis_kosmos_v1alpha1_KnodeStatus(ref common.ReferenceCallback) c
}
}
+func schema_pkg_apis_kosmos_v1alpha1_LeafModel(ref common.ReferenceCallback) common.OpenAPIDefinition {
+ return common.OpenAPIDefinition{
+ Schema: spec.Schema{
+ SchemaProps: spec.SchemaProps{
+ Type: []string{"object"},
+ Properties: map[string]spec.Schema{
+ "leafNodeName": {
+ SchemaProps: spec.SchemaProps{
+							Description: "LeafNodeName defines the leaf node name. If nil or empty, the leaf node name will be generated by the controller and filled into the cluster link status",
+ Type: []string{"string"},
+ Format: "",
+ },
+ },
+ "labels": {
+ SchemaProps: spec.SchemaProps{
+							Description: "Labels that will be set on the pretended Node",
+ Type: []string{"object"},
+ AdditionalProperties: &spec.SchemaOrBool{
+ Allows: true,
+ Schema: &spec.Schema{
+ SchemaProps: spec.SchemaProps{
+ Default: "",
+ Type: []string{"string"},
+ Format: "",
+ },
+ },
+ },
+ },
+ },
+ "taints": {
+ SchemaProps: spec.SchemaProps{
+							Description: "Taints attached to the pretended leaf Node. If nil or empty, the controller will set the default no-schedule taint",
+ Type: []string{"array"},
+ Items: &spec.SchemaOrArray{
+ Schema: &spec.Schema{
+ SchemaProps: spec.SchemaProps{
+ Default: map[string]interface{}{},
+ Ref: ref("k8s.io/api/core/v1.Taint"),
+ },
+ },
+ },
+ },
+ },
+ "nodeSelector": {
+ SchemaProps: spec.SchemaProps{
+ Description: "NodeSelector is a selector to select member cluster nodes to pretend a leaf node in clusterTree.",
+ Default: map[string]interface{}{},
+ Ref: ref("github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.NodeSelector"),
+ },
+ },
+ },
+ },
+ },
+ Dependencies: []string{
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.NodeSelector", "k8s.io/api/core/v1.Taint"},
+ }
+}
+
+func schema_pkg_apis_kosmos_v1alpha1_LeafNodeItem(ref common.ReferenceCallback) common.OpenAPIDefinition {
+ return common.OpenAPIDefinition{
+ Schema: spec.Schema{
+ SchemaProps: spec.SchemaProps{
+ Type: []string{"object"},
+ Properties: map[string]spec.Schema{
+ "leafNodeName": {
+ SchemaProps: spec.SchemaProps{
+							Description: "LeafNodeName represents the leaf node name generated by the controller. The suggested name format is cluster-shortLabel-number, e.g. member-az1-1",
+ Default: "",
+ Type: []string{"string"},
+ Format: "",
+ },
+ },
+ },
+ Required: []string{"leafNodeName"},
+ },
+ },
+ }
+}
+
func schema_pkg_apis_kosmos_v1alpha1_NICNodeNames(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
@@ -1352,6 +1632,220 @@ func schema_pkg_apis_kosmos_v1alpha1_NodeConfigStatus(ref common.ReferenceCallba
}
}
+func schema_pkg_apis_kosmos_v1alpha1_NodeNameConverter(ref common.ReferenceCallback) common.OpenAPIDefinition {
+ return common.OpenAPIDefinition{
+ Schema: spec.Schema{
+ SchemaProps: spec.SchemaProps{
+				Description: "NodeNameConverter is used to modify the pod's nodeName when the pod is synced to a leaf cluster",
+ Type: []string{"object"},
+ Properties: map[string]spec.Schema{
+ "convertType": {
+ SchemaProps: spec.SchemaProps{
+ Default: "",
+ Type: []string{"string"},
+ Format: "",
+ },
+ },
+ "nodeName": {
+ SchemaProps: spec.SchemaProps{
+ Type: []string{"string"},
+ Format: "",
+ },
+ },
+ },
+ Required: []string{"convertType"},
+ },
+ },
+ }
+}
+
+func schema_pkg_apis_kosmos_v1alpha1_NodeSelector(ref common.ReferenceCallback) common.OpenAPIDefinition {
+ return common.OpenAPIDefinition{
+ Schema: spec.Schema{
+ SchemaProps: spec.SchemaProps{
+ Type: []string{"object"},
+ Properties: map[string]spec.Schema{
+ "nodeName": {
+ SchemaProps: spec.SchemaProps{
+							Description: "NodeName is the origin node name in the member cluster",
+ Type: []string{"string"},
+ Format: "",
+ },
+ },
+ "labelSelector": {
+ SchemaProps: spec.SchemaProps{
+							Description: "LabelSelector is a filter that selects member cluster nodes, by labels, to pretend a leaf node in clusterTree. It takes effect in second-level scheduling when pods are created in member clusters.",
+ Ref: ref("k8s.io/apimachinery/pkg/apis/meta/v1.LabelSelector"),
+ },
+ },
+ },
+ },
+ },
+ Dependencies: []string{
+ "k8s.io/apimachinery/pkg/apis/meta/v1.LabelSelector"},
+ }
+}
+
+func schema_pkg_apis_kosmos_v1alpha1_NodeSelectorConverter(ref common.ReferenceCallback) common.OpenAPIDefinition {
+ return common.OpenAPIDefinition{
+ Schema: spec.Schema{
+ SchemaProps: spec.SchemaProps{
+				Description: "NodeSelectorConverter is used to modify the pod's NodeSelector when the pod is synced to a leaf cluster",
+ Type: []string{"object"},
+ Properties: map[string]spec.Schema{
+ "convertType": {
+ SchemaProps: spec.SchemaProps{
+ Default: "",
+ Type: []string{"string"},
+ Format: "",
+ },
+ },
+ "nodeSelector": {
+ SchemaProps: spec.SchemaProps{
+ Type: []string{"object"},
+ AdditionalProperties: &spec.SchemaOrBool{
+ Allows: true,
+ Schema: &spec.Schema{
+ SchemaProps: spec.SchemaProps{
+ Default: "",
+ Type: []string{"string"},
+ Format: "",
+ },
+ },
+ },
+ },
+ },
+ },
+ Required: []string{"convertType"},
+ },
+ },
+ }
+}
+
+func schema_pkg_apis_kosmos_v1alpha1_PodConvertPolicy(ref common.ReferenceCallback) common.OpenAPIDefinition {
+ return common.OpenAPIDefinition{
+ Schema: spec.Schema{
+ SchemaProps: spec.SchemaProps{
+ Type: []string{"object"},
+ Properties: map[string]spec.Schema{
+ "kind": {
+ SchemaProps: spec.SchemaProps{
+ Description: "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
+ Type: []string{"string"},
+ Format: "",
+ },
+ },
+ "apiVersion": {
+ SchemaProps: spec.SchemaProps{
+ Description: "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources",
+ Type: []string{"string"},
+ Format: "",
+ },
+ },
+ "metadata": {
+ SchemaProps: spec.SchemaProps{
+ Default: map[string]interface{}{},
+ Ref: ref("k8s.io/apimachinery/pkg/apis/meta/v1.ObjectMeta"),
+ },
+ },
+ "spec": {
+ SchemaProps: spec.SchemaProps{
+ Description: "Spec is the specification for the behaviour of the podConversion.",
+ Default: map[string]interface{}{},
+ Ref: ref("github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.PodConvertPolicySpec"),
+ },
+ },
+ },
+ Required: []string{"spec"},
+ },
+ },
+ Dependencies: []string{
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.PodConvertPolicySpec", "k8s.io/apimachinery/pkg/apis/meta/v1.ObjectMeta"},
+ }
+}
+
+func schema_pkg_apis_kosmos_v1alpha1_PodConvertPolicyList(ref common.ReferenceCallback) common.OpenAPIDefinition {
+ return common.OpenAPIDefinition{
+ Schema: spec.Schema{
+ SchemaProps: spec.SchemaProps{
+ Type: []string{"object"},
+ Properties: map[string]spec.Schema{
+ "kind": {
+ SchemaProps: spec.SchemaProps{
+ Description: "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
+ Type: []string{"string"},
+ Format: "",
+ },
+ },
+ "apiVersion": {
+ SchemaProps: spec.SchemaProps{
+ Description: "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources",
+ Type: []string{"string"},
+ Format: "",
+ },
+ },
+ "metadata": {
+ SchemaProps: spec.SchemaProps{
+ Default: map[string]interface{}{},
+ Ref: ref("k8s.io/apimachinery/pkg/apis/meta/v1.ListMeta"),
+ },
+ },
+ "items": {
+ SchemaProps: spec.SchemaProps{
+ Type: []string{"array"},
+ Items: &spec.SchemaOrArray{
+ Schema: &spec.Schema{
+ SchemaProps: spec.SchemaProps{
+ Default: map[string]interface{}{},
+ Ref: ref("github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.PodConvertPolicy"),
+ },
+ },
+ },
+ },
+ },
+ },
+ Required: []string{"metadata", "items"},
+ },
+ },
+ Dependencies: []string{
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.PodConvertPolicy", "k8s.io/apimachinery/pkg/apis/meta/v1.ListMeta"},
+ }
+}
+
+func schema_pkg_apis_kosmos_v1alpha1_PodConvertPolicySpec(ref common.ReferenceCallback) common.OpenAPIDefinition {
+ return common.OpenAPIDefinition{
+ Schema: spec.Schema{
+ SchemaProps: spec.SchemaProps{
+ Type: []string{"object"},
+ Properties: map[string]spec.Schema{
+ "labelSelector": {
+ SchemaProps: spec.SchemaProps{
+ Description: "A label query over a set of resources. If name is not empty, labelSelector will be ignored.",
+ Default: map[string]interface{}{},
+ Ref: ref("k8s.io/apimachinery/pkg/apis/meta/v1.LabelSelector"),
+ },
+ },
+ "leafNodeSelector": {
+ SchemaProps: spec.SchemaProps{
+ Description: "A label query over a set of resources. If name is not empty, LeafNodeSelector will be ignored.",
+ Ref: ref("k8s.io/apimachinery/pkg/apis/meta/v1.LabelSelector"),
+ },
+ },
+ "converters": {
+ SchemaProps: spec.SchemaProps{
+							Description: "Converters are the converters applied to a pod when it is synced from the root cluster to a leaf cluster; the pod uses these converters so it can be scheduled in the leaf cluster.",
+ Ref: ref("github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.Converters"),
+ },
+ },
+ },
+ Required: []string{"labelSelector"},
+ },
+ },
+ Dependencies: []string{
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.Converters", "k8s.io/apimachinery/pkg/apis/meta/v1.LabelSelector"},
+ }
+}
+
func schema_pkg_apis_kosmos_v1alpha1_Proxy(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
@@ -1407,6 +1901,33 @@ func schema_pkg_apis_kosmos_v1alpha1_Route(ref common.ReferenceCallback) common.
}
}
+func schema_pkg_apis_kosmos_v1alpha1_SchedulerNameConverter(ref common.ReferenceCallback) common.OpenAPIDefinition {
+ return common.OpenAPIDefinition{
+ Schema: spec.Schema{
+ SchemaProps: spec.SchemaProps{
+				Description: "SchedulerNameConverter is used to modify the pod's schedulerName when the pod is synced to a leaf cluster",
+ Type: []string{"object"},
+ Properties: map[string]spec.Schema{
+ "convertType": {
+ SchemaProps: spec.SchemaProps{
+ Default: "",
+ Type: []string{"string"},
+ Format: "",
+ },
+ },
+ "schedulerName": {
+ SchemaProps: spec.SchemaProps{
+ Type: []string{"string"},
+ Format: "",
+ },
+ },
+ },
+ Required: []string{"convertType"},
+ },
+ },
+ }
+}
+
func schema_pkg_apis_kosmos_v1alpha1_ShadowDaemonSet(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
@@ -1446,7 +1967,7 @@ func schema_pkg_apis_kosmos_v1alpha1_ShadowDaemonSet(ref common.ReferenceCallbac
Ref: ref("github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1.DaemonSetStatus"),
},
},
- "knode": {
+ "cluster": {
SchemaProps: spec.SchemaProps{
Default: "",
Type: []string{"string"},
@@ -1511,6 +2032,79 @@ func schema_pkg_apis_kosmos_v1alpha1_ShadowDaemonSetList(ref common.ReferenceCal
}
}
+func schema_pkg_apis_kosmos_v1alpha1_TolerationConverter(ref common.ReferenceCallback) common.OpenAPIDefinition {
+ return common.OpenAPIDefinition{
+ Schema: spec.Schema{
+ SchemaProps: spec.SchemaProps{
+				Description: "TolerationConverter is used to modify the pod's tolerations when the pod is synced to a leaf cluster",
+ Type: []string{"object"},
+ Properties: map[string]spec.Schema{
+ "convertType": {
+ SchemaProps: spec.SchemaProps{
+ Default: "",
+ Type: []string{"string"},
+ Format: "",
+ },
+ },
+ "tolerations": {
+ SchemaProps: spec.SchemaProps{
+ Type: []string{"array"},
+ Items: &spec.SchemaOrArray{
+ Schema: &spec.Schema{
+ SchemaProps: spec.SchemaProps{
+ Default: map[string]interface{}{},
+ Ref: ref("k8s.io/api/core/v1.Toleration"),
+ },
+ },
+ },
+ },
+ },
+ },
+ Required: []string{"convertType"},
+ },
+ },
+ Dependencies: []string{
+ "k8s.io/api/core/v1.Toleration"},
+ }
+}
+
+func schema_pkg_apis_kosmos_v1alpha1_TopologySpreadConstraintsConverter(ref common.ReferenceCallback) common.OpenAPIDefinition {
+ return common.OpenAPIDefinition{
+ Schema: spec.Schema{
+ SchemaProps: spec.SchemaProps{
+				Description: "TopologySpreadConstraintsConverter is used to modify the pod's topologySpreadConstraints when the pod is synced to a leaf cluster",
+ Type: []string{"object"},
+ Properties: map[string]spec.Schema{
+ "convertType": {
+ SchemaProps: spec.SchemaProps{
+ Default: "",
+ Type: []string{"string"},
+ Format: "",
+ },
+ },
+ "topologySpreadConstraints": {
+ SchemaProps: spec.SchemaProps{
+ Description: "TopologySpreadConstraints describes how a group of pods ought to spread across topology domains. Scheduler will schedule pods in a way which abides by the constraints. All topologySpreadConstraints are ANDed.",
+ Type: []string{"array"},
+ Items: &spec.SchemaOrArray{
+ Schema: &spec.Schema{
+ SchemaProps: spec.SchemaProps{
+ Default: map[string]interface{}{},
+ Ref: ref("k8s.io/api/core/v1.TopologySpreadConstraint"),
+ },
+ },
+ },
+ },
+ },
+ },
+ Required: []string{"convertType"},
+ },
+ },
+ Dependencies: []string{
+ "k8s.io/api/core/v1.TopologySpreadConstraint"},
+ }
+}
+
func schema_pkg_apis_kosmos_v1alpha1_VxlanCIDRs(ref common.ReferenceCallback) common.OpenAPIDefinition {
return common.OpenAPIDefinition{
Schema: spec.Schema{
diff --git a/pkg/kosmosctl/floater/analysis.go b/pkg/kosmosctl/floater/analysis.go
new file mode 100644
index 000000000..b43ff7481
--- /dev/null
+++ b/pkg/kosmosctl/floater/analysis.go
@@ -0,0 +1,241 @@
+package floater
+
+import (
+ "context"
+ "encoding/json"
+ "fmt"
+ "os"
+ "reflect"
+ "strconv"
+
+ "github.com/olekukonko/tablewriter"
+ "github.com/spf13/cobra"
+ apierrors "k8s.io/apimachinery/pkg/api/errors"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+ "k8s.io/client-go/dynamic"
+ ctlutil "k8s.io/kubectl/pkg/cmd/util"
+ "k8s.io/kubectl/pkg/util/i18n"
+ "k8s.io/kubectl/pkg/util/templates"
+
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ "github.com/kosmos.io/kosmos/pkg/kosmosctl/util"
+ "github.com/kosmos.io/kosmos/pkg/utils"
+ "github.com/kosmos.io/kosmos/pkg/version"
+)
+
+var analysisExample = templates.Examples(i18n.T(`
+	# Analyze the cluster network, e.g:
+ kosmosctl analysis cluster --name cluster-name --kubeconfig ~/kubeconfig/cluster-kubeconfig
+`))
+
+type CommandAnalysisOptions struct {
+ Namespace string
+ Name string
+ ImageRepository string
+ Version string
+ KubeConfig string
+
+ Port string
+ PodWaitTime int
+ GenGraph bool
+ GenPath string
+
+ Floater *Floater
+ DynamicClient *dynamic.DynamicClient
+
+ AnalysisResult []*PrintAnalysisData
+}
+
+type PrintAnalysisData struct {
+ ClusterName string
+ ClusterNodeName string
+ ParameterType string
+ AnalyzeResult string
+}
+
+func NewCmdAnalysis(f ctlutil.Factory) *cobra.Command {
+ o := &CommandAnalysisOptions{
+ Version: version.GetReleaseVersion().PatchRelease(),
+ }
+ cmd := &cobra.Command{
+ Use: "analysis",
+		Short:                 i18n.T("Analyze network connectivity between Kosmos clusters"),
+ Long: "",
+ Example: analysisExample,
+ SilenceUsage: true,
+ DisableFlagsInUseLine: true,
+ RunE: func(cmd *cobra.Command, args []string) error {
+ ctlutil.CheckErr(o.Complete(f))
+ ctlutil.CheckErr(o.Validate())
+ ctlutil.CheckErr(o.Run(args))
+ return nil
+ },
+ }
+
+ flags := cmd.Flags()
+ flags.StringVarP(&o.Namespace, "namespace", "n", utils.DefaultNamespace, "Kosmos namespace.")
+ flags.StringVarP(&o.ImageRepository, "image-repository", "r", utils.DefaultImageRepository, "Image repository.")
+	flags.StringVar(&o.Name, "name", "", "Specify the name of the resource to analyze.")
+ flags.StringVar(&o.KubeConfig, "kubeconfig", "", "Absolute path to the cluster kubeconfig file.")
+ flags.StringVar(&o.Port, "port", utils.DefaultPort, "Port used by floater.")
+	flags.IntVarP(&o.PodWaitTime, "pod-wait-time", "w", utils.DefaultWaitTime, "Time to wait for the floater pod to launch.")
+	flags.BoolVar(&o.GenGraph, "gen-graph", false, "Whether to generate a network analysis graph.")
+	flags.StringVar(&o.GenPath, "gen-path", "~/", "Save path for the generated network analysis graph.")
+
+ return cmd
+}
+
+func (o *CommandAnalysisOptions) Complete(f ctlutil.Factory) error {
+ c, err := f.ToRESTConfig()
+ if err != nil {
+ return fmt.Errorf("kosmosctl analysis complete error, generate rest config failed: %v", err)
+ }
+ o.DynamicClient, err = dynamic.NewForConfig(c)
+ if err != nil {
+		return fmt.Errorf("kosmosctl analysis complete error, generate dynamic client failed: %v", err)
+ }
+
+ af := NewAnalysisFloater(o)
+ if err = af.completeFromKubeConfigPath(o.KubeConfig); err != nil {
+ return err
+ }
+ o.Floater = af
+
+ return nil
+}
+
+func (o *CommandAnalysisOptions) Validate() error {
+ if len(o.Namespace) == 0 {
+ return fmt.Errorf("kosmosctl analysis validate error, namespace is not valid")
+ }
+
+ if len(o.Name) == 0 {
+ return fmt.Errorf("kosmosctl analysis validate error, name is not valid")
+ }
+
+ return nil
+}
+
+func (o *CommandAnalysisOptions) Run(args []string) error {
+	// Guard against an index-out-of-range panic when no resource type is given.
+	if len(args) == 0 {
+		return fmt.Errorf("kosmosctl analysis run error, resource type must be specified, e.g. cluster")
+	}
+
+	switch args[0] {
+	case "cluster":
+		err := o.runCluster()
+		if err != nil {
+			return err
+		}
+	}
+
+ return nil
+}
+
+func (o *CommandAnalysisOptions) runCluster() error {
+ if err := o.Floater.CreateFloater(); err != nil {
+ return err
+ }
+
+ sysNodeConfigs, err := o.Floater.GetSysNodeConfig()
+ if err != nil {
+ return fmt.Errorf("get cluster nodeConfigInfos failed: %s", err)
+ }
+
+	// List nodeconfigs once up front; the loop below only looks items up by name.
+	nodeConfigs, err := o.DynamicClient.Resource(util.NodeConfigGVR).List(context.TODO(), metav1.ListOptions{})
+	if err != nil && !apierrors.IsNotFound(err) {
+		return fmt.Errorf("get nodeconfig failed: %v", err)
+	}
+
+	for _, sysNodeConfig := range sysNodeConfigs {
+		var obj unstructured.Unstructured
+		var nodeConfig v1alpha1.NodeConfig
+ for _, n := range nodeConfigs.Items {
+ if sysNodeConfig.NodeName == n.GetName() {
+ obj = n
+ break
+ }
+ }
+ jsonData, err := obj.MarshalJSON()
+ if err != nil {
+ return fmt.Errorf("marshal nodeconfig failed: %v", err)
+ }
+ err = json.Unmarshal(jsonData, &nodeConfig)
+ if err != nil {
+ return fmt.Errorf("unmarshal nodeconfig failed: %v", err)
+ }
+ o.analysisNodeConfig(sysNodeConfig.NodeName, sysNodeConfig.NodeConfigSpec, nodeConfig.Spec)
+ }
+
+ o.PrintResult(o.AnalysisResult)
+
+ if err = o.Floater.RemoveFloater(); err != nil {
+ return err
+ }
+
+ return nil
+}
+
+func (o *CommandAnalysisOptions) analysisNodeConfig(nodeName string, nc1 v1alpha1.NodeConfigSpec, nc2 v1alpha1.NodeConfigSpec) {
+ analyzeType1 := reflect.TypeOf(nc1)
+ analyzeType2 := reflect.TypeOf(nc2)
+
+ for i := 0; i < analyzeType1.NumField(); i++ {
+ r := &PrintAnalysisData{
+ ClusterName: o.Name,
+ ClusterNodeName: nodeName,
+ }
+ field1 := analyzeType1.Field(i)
+ field2 := analyzeType2.Field(i)
+
+ if field1.Type != field2.Type {
+ r.ParameterType = field1.Name
+ r.AnalyzeResult = "false"
+ o.AnalysisResult = append(o.AnalysisResult, r)
+ continue
+ }
+
+ value1 := reflect.ValueOf(nc1).Field(i)
+ value2 := reflect.ValueOf(nc2).Field(i)
+
+ if !reflect.DeepEqual(value1.Interface(), value2.Interface()) {
+ r.ParameterType = field1.Name
+ r.AnalyzeResult = "false"
+ o.AnalysisResult = append(o.AnalysisResult, r)
+ continue
+ }
+
+ r.ParameterType = field1.Name
+ r.AnalyzeResult = "true"
+ o.AnalysisResult = append(o.AnalysisResult, r)
+ }
+}
+
+func (o *CommandAnalysisOptions) PrintResult(resultData []*PrintAnalysisData) {
+ table := tablewriter.NewWriter(os.Stdout)
+ table.SetHeader([]string{"S/N", "CLUSTER_NAME", "CLUSTER_NODE_NAME", "PARAMETER_TYPE", "ANALYZE_RESULT"})
+
+ tableException := tablewriter.NewWriter(os.Stdout)
+ tableException.SetHeader([]string{"S/N", "CLUSTER_NAME", "CLUSTER_NODE_NAME", "PARAMETER_TYPE", "ANALYZE_RESULT"})
+
+ for index, r := range resultData {
+ row := []string{strconv.Itoa(index + 1), r.ClusterName, r.ClusterNodeName, r.ParameterType, r.AnalyzeResult}
+ if r.AnalyzeResult == "false" {
+ tableException.Rich(row, []tablewriter.Colors{
+ {},
+ {tablewriter.Bold, tablewriter.FgHiRedColor},
+ {tablewriter.Bold, tablewriter.FgHiRedColor},
+ {tablewriter.Bold, tablewriter.FgHiRedColor},
+ {tablewriter.Bold, tablewriter.FgHiRedColor},
+ })
+ } else {
+ table.Rich(row, []tablewriter.Colors{
+ {},
+ {tablewriter.Bold, tablewriter.FgGreenColor},
+ {tablewriter.Bold, tablewriter.FgGreenColor},
+ {tablewriter.Bold, tablewriter.FgGreenColor},
+ {tablewriter.Bold, tablewriter.FgGreenColor},
+ })
+ }
+ }
+ fmt.Println("")
+ table.Render()
+ fmt.Println("")
+ tableException.Render()
+}
diff --git a/pkg/kosmosctl/floater/doctor.go b/pkg/kosmosctl/floater/check.go
similarity index 69%
rename from pkg/kosmosctl/floater/doctor.go
rename to pkg/kosmosctl/floater/check.go
index cb127a060..bc79898a0 100644
--- a/pkg/kosmosctl/floater/doctor.go
+++ b/pkg/kosmosctl/floater/check.go
@@ -7,9 +7,9 @@ import (
"github.com/olekukonko/tablewriter"
"github.com/spf13/cobra"
- "k8s.io/client-go/kubernetes"
- "k8s.io/client-go/tools/clientcmd"
ctlutil "k8s.io/kubectl/pkg/cmd/util"
+ "k8s.io/kubectl/pkg/util/i18n"
+ "k8s.io/kubectl/pkg/util/templates"
"github.com/kosmos.io/kosmos/pkg/kosmosctl/floater/command"
"github.com/kosmos.io/kosmos/pkg/kosmosctl/floater/netmap"
@@ -17,10 +17,21 @@ import (
"github.com/kosmos.io/kosmos/pkg/version"
)
-type CommandDoctorOptions struct {
+var checkExample = templates.Examples(i18n.T(`
+ # Check single cluster network connectivity, e.g:
+ kosmosctl check --src-kubeconfig ~/kubeconfig/src-kubeconfig
+
+ # Check across clusters network connectivity, e.g:
+ kosmosctl check --src-kubeconfig ~/kubeconfig/src-kubeconfig --dst-kubeconfig ~/kubeconfig/dst-kubeconfig
+
+ # Check cluster network connectivity, if you need to specify a special image repository, e.g:
+ kosmosctl check -r ghcr.io/kosmos-io
+`))
+
+type CommandCheckOptions struct {
Namespace string
ImageRepository string
- ImageRepositoryDst string
+ DstImageRepository string
Version string
Protocol string
@@ -28,38 +39,30 @@ type CommandDoctorOptions struct {
Port string
HostNetwork bool
- SrcKubeConfig string
- DstKubeConfig string
- HostKubeConfig string
+ KubeConfig string
+ SrcKubeConfig string
+ DstKubeConfig string
SrcFloater *Floater
DstFloater *Floater
}
-type Protocol string
-
-const (
- TCP Protocol = "tcp"
- UDP Protocol = "udp"
- IPv4 Protocol = "ipv4"
-)
-
-type PrintData struct {
+type PrintCheckData struct {
command.Result
SrcNodeName string
DstNodeName string
TargetIP string
}
-func NewCmdDoctor() *cobra.Command {
- o := &CommandDoctorOptions{
+func NewCmdCheck() *cobra.Command {
+ o := &CommandCheckOptions{
Version: version.GetReleaseVersion().PatchRelease(),
}
cmd := &cobra.Command{
- Use: "dr",
- Short: "Dr.link is an one-shot kubernetes network diagnose tool.",
+ Use: "check",
+ Short: i18n.T("Check network connectivity between Kosmos clusters"),
Long: "",
- Example: "",
+ Example: checkExample,
SilenceUsage: true,
DisableFlagsInUseLine: true,
RunE: func(cmd *cobra.Command, args []string) error {
@@ -80,12 +83,12 @@ func NewCmdDoctor() *cobra.Command {
flags := cmd.Flags()
flags.StringVarP(&o.Namespace, "namespace", "n", utils.DefaultNamespace, "Kosmos namespace.")
- flags.StringVarP(&o.ImageRepository, "image-repository", "r", "ghcr.io/kosmos-io", "Image repository.")
- flags.StringVarP(&o.ImageRepositoryDst, "image-repository-dst", "", "", "Image repository.")
- flags.StringVar(&o.HostKubeConfig, "host-kubeconfig", "", "Absolute path to the host kubeconfig file.")
+ flags.StringVarP(&o.ImageRepository, "image-repository", "r", utils.DefaultImageRepository, "Image repository.")
+ flags.StringVarP(&o.DstImageRepository, "dst-image-repository", "", "", "Destination cluster image repository.")
+ flags.StringVar(&o.KubeConfig, "kubeconfig", "", "Absolute path to the host kubeconfig file.")
flags.StringVar(&o.SrcKubeConfig, "src-kubeconfig", "", "Absolute path to the source cluster kubeconfig file.")
flags.StringVar(&o.DstKubeConfig, "dst-kubeconfig", "", "Absolute path to the destination cluster kubeconfig file.")
- flags.BoolVar(&o.HostNetwork, "host-network", false, "")
+	flags.BoolVar(&o.HostNetwork, "host-network", false, "Enable host network for the floater pods.")
flags.StringVar(&o.Port, "port", "8889", "Port used by floater.")
flags.IntVarP(&o.PodWaitTime, "pod-wait-time", "w", 30, "Time for wait pod(floater) launch.")
flags.StringVar(&o.Protocol, "protocol", string(TCP), "Protocol for the network problem.")
@@ -93,27 +96,29 @@ func NewCmdDoctor() *cobra.Command {
return cmd
}
-func (o *CommandDoctorOptions) Complete() error {
- if len(o.ImageRepositoryDst) == 0 {
- o.ImageRepositoryDst = o.ImageRepository
+func (o *CommandCheckOptions) Complete() error {
+ if len(o.DstImageRepository) == 0 {
+ o.DstImageRepository = o.ImageRepository
}
- srcFloater := newFloater(o)
+ srcFloater := NewCheckFloater(o, false)
if err := srcFloater.completeFromKubeConfigPath(o.SrcKubeConfig); err != nil {
return err
}
o.SrcFloater = srcFloater
- dstFloater := newFloater(o)
- if err := dstFloater.completeFromKubeConfigPath(o.DstKubeConfig); err != nil {
- return err
+ if o.DstKubeConfig != "" {
+ dstFloater := NewCheckFloater(o, true)
+ if err := dstFloater.completeFromKubeConfigPath(o.DstKubeConfig); err != nil {
+ return err
+ }
+ o.DstFloater = dstFloater
}
- o.DstFloater = dstFloater
return nil
}
-func (o *CommandDoctorOptions) Validate() error {
+func (o *CommandCheckOptions) Validate() error {
if len(o.Namespace) == 0 {
return fmt.Errorf("namespace must be specified")
}
@@ -121,41 +126,10 @@ func (o *CommandDoctorOptions) Validate() error {
return nil
}
-func newFloater(o *CommandDoctorOptions) *Floater {
- floater := &Floater{
- Namespace: o.Namespace,
- Name: DefaultFloaterName,
- ImageRepository: o.ImageRepositoryDst,
- Version: o.Version,
- PodWaitTime: o.PodWaitTime,
- Port: o.Port,
- EnableHostNetwork: false,
- }
- if o.HostNetwork {
- floater.EnableHostNetwork = true
- }
- return floater
-}
-
-func (f *Floater) completeFromKubeConfigPath(kubeConfigPath string) error {
- config, err := clientcmd.BuildConfigFromFlags("", kubeConfigPath)
- if err != nil {
- return fmt.Errorf("kosmosctl docter complete error, generate floater config failed: %v", err)
- }
- f.Config = config
-
- f.Client, err = kubernetes.NewForConfig(f.Config)
- if err != nil {
- return fmt.Errorf("kosmosctl docter complete error, generate floater client failed: %v", err)
- }
-
- return nil
-}
-
-func (o *CommandDoctorOptions) Run() error {
- var resultData []*PrintData
+func (o *CommandCheckOptions) Run() error {
+ var resultData []*PrintCheckData
- if err := o.SrcFloater.RunInit(); err != nil {
+ if err := o.SrcFloater.CreateFloater(); err != nil {
return err
}
@@ -166,7 +140,7 @@ func (o *CommandDoctorOptions) Run() error {
return fmt.Errorf("get src cluster nodeInfos failed: %s", err)
}
- if err = o.DstFloater.RunInit(); err != nil {
+ if err = o.DstFloater.CreateFloater(); err != nil {
return err
}
var dstNodeInfos []*FloatInfo
@@ -182,7 +156,7 @@ func (o *CommandDoctorOptions) Run() error {
return fmt.Errorf("get src cluster podInfos failed: %s", err)
}
- if err = o.DstFloater.RunInit(); err != nil {
+ if err = o.DstFloater.CreateFloater(); err != nil {
return err
}
var dstPodInfos []*FloatInfo
@@ -211,11 +185,21 @@ func (o *CommandDoctorOptions) Run() error {
o.PrintResult(resultData)
+ if err := o.SrcFloater.RemoveFloater(); err != nil {
+ return err
+ }
+
+ if o.DstKubeConfig != "" {
+ if err := o.DstFloater.RemoveFloater(); err != nil {
+ return err
+ }
+ }
+
return nil
}
-func (o *CommandDoctorOptions) RunRange(iPodInfos []*FloatInfo, jPodInfos []*FloatInfo) []*PrintData {
- var resultData []*PrintData
+func (o *CommandCheckOptions) RunRange(iPodInfos []*FloatInfo, jPodInfos []*FloatInfo) []*PrintCheckData {
+ var resultData []*PrintCheckData
if len(iPodInfos) > 0 && len(jPodInfos) > 0 {
for _, iPodInfo := range iPodInfos {
@@ -238,7 +222,7 @@ func (o *CommandDoctorOptions) RunRange(iPodInfos []*FloatInfo, jPodInfos []*Flo
}
cmdResult = o.SrcFloater.CommandExec(iPodInfo, cmdObj)
}
- resultData = append(resultData, &PrintData{
+ resultData = append(resultData, &PrintCheckData{
*cmdResult,
iPodInfo.NodeName, jPodInfo.NodeName, targetIP,
})
@@ -250,8 +234,8 @@ func (o *CommandDoctorOptions) RunRange(iPodInfos []*FloatInfo, jPodInfos []*Flo
return resultData
}
-func (o *CommandDoctorOptions) RunNative(iNodeInfos []*FloatInfo, jNodeInfos []*FloatInfo) []*PrintData {
- var resultData []*PrintData
+func (o *CommandCheckOptions) RunNative(iNodeInfos []*FloatInfo, jNodeInfos []*FloatInfo) []*PrintCheckData {
+ var resultData []*PrintCheckData
if len(iNodeInfos) > 0 && len(jNodeInfos) > 0 {
for _, iNodeInfo := range iNodeInfos {
@@ -262,7 +246,7 @@ func (o *CommandDoctorOptions) RunNative(iNodeInfos []*FloatInfo, jNodeInfos []*
TargetIP: ip,
}
cmdResult := o.SrcFloater.CommandExec(iNodeInfo, cmdObj)
- resultData = append(resultData, &PrintData{
+ resultData = append(resultData, &PrintCheckData{
*cmdResult,
iNodeInfo.NodeName, jNodeInfo.NodeName, ip,
})
@@ -274,7 +258,7 @@ func (o *CommandDoctorOptions) RunNative(iNodeInfos []*FloatInfo, jNodeInfos []*
return resultData
}
-func (o *CommandDoctorOptions) PrintResult(resultData []*PrintData) {
+func (o *CommandCheckOptions) PrintResult(resultData []*PrintCheckData) {
table := tablewriter.NewWriter(os.Stdout)
table.SetHeader([]string{"S/N", "SRC_NODE_NAME", "DST_NODE_NAME", "TARGET_IP", "RESULT"})
diff --git a/pkg/kosmosctl/floater/floater.go b/pkg/kosmosctl/floater/floater.go
index 3688922c7..6466d3983 100644
--- a/pkg/kosmosctl/floater/floater.go
+++ b/pkg/kosmosctl/floater/floater.go
@@ -3,6 +3,7 @@ package floater
import (
"bytes"
"context"
+ "encoding/json"
"fmt"
"strings"
"time"
@@ -10,14 +11,26 @@ import (
corev1 "k8s.io/api/core/v1"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/runtime"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
+ "k8s.io/client-go/tools/clientcmd"
"k8s.io/client-go/tools/remotecommand"
"k8s.io/klog/v2"
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
"github.com/kosmos.io/kosmos/pkg/kosmosctl/floater/command"
"github.com/kosmos.io/kosmos/pkg/kosmosctl/manifest"
"github.com/kosmos.io/kosmos/pkg/kosmosctl/util"
+ "github.com/kosmos.io/kosmos/pkg/utils"
+)
+
+type Protocol string
+
+const (
+ TCP Protocol = "tcp"
+ UDP Protocol = "udp"
+ IPv4 Protocol = "ipv4"
)
const (
@@ -25,8 +38,9 @@ const (
)
type FloatInfo struct {
- NodeName string
- NodeIPs []string
+ NodeName string
+ NodeIPs []string
+ NodeConfigSpec v1alpha1.NodeConfigSpec
PodName string
PodIPs []string
@@ -44,14 +58,66 @@ type Floater struct {
PodWaitTime int
Port string
EnableHostNetwork bool
+ EnableAnalysis bool
CIDRsMap map[string]string
- Client kubernetes.Interface
Config *rest.Config
+ Client kubernetes.Interface
+}
+
+func NewCheckFloater(o *CommandCheckOptions, isDst bool) *Floater {
+ imageRepository := o.ImageRepository
+ if isDst {
+ imageRepository = o.DstImageRepository
+ }
+ floater := &Floater{
+ Namespace: o.Namespace,
+ Name: DefaultFloaterName,
+ ImageRepository: imageRepository,
+ Version: o.Version,
+ PodWaitTime: o.PodWaitTime,
+ Port: o.Port,
+ EnableHostNetwork: false,
+ EnableAnalysis: false,
+ }
+ if o.HostNetwork {
+ floater.EnableHostNetwork = true
+ }
+ return floater
+}
+
+func NewAnalysisFloater(o *CommandAnalysisOptions) *Floater {
+ floater := &Floater{
+ Namespace: o.Namespace,
+ Name: DefaultFloaterName,
+ ImageRepository: o.ImageRepository,
+ Version: o.Version,
+ PodWaitTime: o.PodWaitTime,
+ Port: o.Port,
+ EnableHostNetwork: true,
+ EnableAnalysis: true,
+ }
+
+ return floater
+}
+
+func (f *Floater) completeFromKubeConfigPath(kubeConfigPath string) error {
+ config, err := clientcmd.BuildConfigFromFlags("", kubeConfigPath)
+ if err != nil {
+		return fmt.Errorf("kosmosctl floater complete error, generate floater config failed: %v", err)
+ }
+ f.Config = config
+
+ f.Client, err = kubernetes.NewForConfig(f.Config)
+ if err != nil {
+		return fmt.Errorf("kosmosctl floater complete error, generate floater client failed: %v", err)
+ }
+
+ return nil
}
-func (f *Floater) RunInit() error {
+func (f *Floater) CreateFloater() error {
klog.Infof("create Clusterlink floater, namespace: %s", f.Namespace)
namespace := &corev1.Namespace{}
namespace.Name = f.Namespace
@@ -74,7 +140,7 @@ func (f *Floater) RunInit() error {
}
klog.Infof("create Clusterlink floater, version: %s", f.Version)
- if err = f.initFloaterDaemonSet(); err != nil {
+ if err = f.applyDaemonSet(); err != nil {
return err
}
@@ -130,7 +196,7 @@ func (f *Floater) applyClusterRoleBinding() error {
return nil
}
-func (f *Floater) initFloaterDaemonSet() error {
+func (f *Floater) applyDaemonSet() error {
clusterlinkFloaterDaemonSet, err := util.GenerateDaemonSet(manifest.ClusterlinkFloaterDaemonSet, manifest.DaemonSetReplace{
Namespace: f.Namespace,
Name: f.Name,
@@ -138,6 +204,7 @@ func (f *Floater) initFloaterDaemonSet() error {
ImageRepository: f.ImageRepository,
Port: f.Port,
EnableHostNetwork: f.EnableHostNetwork,
+ EnableAnalysis: f.EnableAnalysis,
})
if err != nil {
return err
@@ -151,7 +218,7 @@ func (f *Floater) initFloaterDaemonSet() error {
floaterLabel := map[string]string{"app": f.Name}
if err = util.WaitPodReady(f.Client, f.Namespace, util.MapToString(floaterLabel), f.PodWaitTime); err != nil {
- return err
+		klog.Warningf("not all floater pods became ready before the timeout, error: %v", err)
}
return nil
@@ -159,7 +226,9 @@ func (f *Floater) initFloaterDaemonSet() error {
func (f *Floater) GetPodInfo() ([]*FloatInfo, error) {
selector := util.MapToString(map[string]string{"app": f.Name})
- pods, err := f.Client.CoreV1().Pods(f.Namespace).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
+ pods, err := f.Client.CoreV1().Pods(f.Namespace).List(context.TODO(), metav1.ListOptions{
+ LabelSelector: selector,
+ })
if err != nil {
return nil, err
}
@@ -173,7 +242,7 @@ func (f *Floater) GetPodInfo() ([]*FloatInfo, error) {
podInfo := &FloatInfo{
NodeName: pod.Spec.NodeName,
PodName: pod.GetObjectMeta().GetName(),
- PodIPs: PodIPToArray(pod.Status.PodIPs),
+ PodIPs: podIPToArray(pod.Status.PodIPs),
}
floaterInfos = append(floaterInfos, podInfo)
@@ -182,7 +251,7 @@ func (f *Floater) GetPodInfo() ([]*FloatInfo, error) {
return floaterInfos, nil
}
-func PodIPToArray(podIPs []corev1.PodIP) []string {
+func podIPToArray(podIPs []corev1.PodIP) []string {
var ret []string
for _, podIP := range podIPs {
@@ -194,7 +263,9 @@ func PodIPToArray(podIPs []corev1.PodIP) []string {
func (f *Floater) GetNodesInfo() ([]*FloatInfo, error) {
selector := util.MapToString(map[string]string{"app": f.Name})
- pods, err := f.Client.CoreV1().Pods(f.Namespace).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
+ pods, err := f.Client.CoreV1().Pods(f.Namespace).List(context.TODO(), metav1.ListOptions{
+ LabelSelector: selector,
+ })
if err != nil {
return nil, err
}
@@ -216,7 +287,7 @@ func (f *Floater) GetNodesInfo() ([]*FloatInfo, error) {
if pod.Spec.NodeName == node.Name {
nodeInfo := &FloatInfo{
NodeName: node.Name,
- NodeIPs: NodeIPToArray(node),
+ NodeIPs: nodeIPToArray(node),
PodName: pod.Name,
}
floaterInfos = append(floaterInfos, nodeInfo)
@@ -227,7 +298,7 @@ func (f *Floater) GetNodesInfo() ([]*FloatInfo, error) {
return floaterInfos, nil
}
-func NodeIPToArray(node corev1.Node) []string {
+func nodeIPToArray(node corev1.Node) []string {
var nodeIPs []string
for _, addr := range node.Status.Addresses {
@@ -239,6 +310,65 @@ func NodeIPToArray(node corev1.Node) []string {
return nodeIPs
}
+func (f *Floater) GetSysNodeConfig() ([]*FloatInfo, error) {
+ var floatInfos []*FloatInfo
+ selector := util.MapToString(map[string]string{"app": f.Name})
+ getNodeConfigCmd := []string{"cat", utils.NodeConfigFile}
+ fPods, err := f.Client.CoreV1().Pods(f.Namespace).List(context.TODO(), metav1.ListOptions{
+ LabelSelector: selector,
+ })
+ if err != nil {
+ return nil, err
+ }
+
+ for _, fPod := range fPods.Items {
+ var nodeConfigSpec v1alpha1.NodeConfigSpec
+ var floatInfo FloatInfo
+ containerName := fPod.Spec.Containers[0].Name
+ req := f.Client.CoreV1().RESTClient().Post().Resource("pods").
+ Namespace(fPod.Namespace).Name(fPod.Name).SubResource("exec")
+ scheme := runtime.NewScheme()
+ if err = corev1.AddToScheme(scheme); err != nil {
+			return nil, fmt.Errorf("add corev1 to scheme failed: %v", err)
+ }
+ parameterCodec := runtime.NewParameterCodec(scheme)
+ req.VersionedParams(
+ &corev1.PodExecOptions{
+ Stdin: false,
+ Stdout: true,
+ Stderr: true,
+ TTY: false,
+ Container: containerName,
+ Command: getNodeConfigCmd,
+ }, parameterCodec)
+
+ exec, err := remotecommand.NewSPDYExecutor(f.Config, "POST", req.URL())
+ if err != nil {
+			return nil, fmt.Errorf("create SPDY executor failed: %v", err)
+ }
+
+ var stdout, stderr bytes.Buffer
+ err = exec.StreamWithContext(context.TODO(), remotecommand.StreamOptions{
+ Stdin: nil,
+ Stdout: &stdout,
+ Stderr: &stderr,
+ Tty: false,
+ })
+		if err != nil {
+			klog.Warningf("exec in floater pod %s failed: %v", fPod.Name, err)
+			continue
+		}
+
+		if err = json.Unmarshal(stdout.Bytes(), &nodeConfigSpec); err != nil {
+			klog.Warningf("unmarshal node config from pod %s failed: %v", fPod.Name, err)
+			continue
+		}
+		// Record the node name so callers can match this spec to its NodeConfig.
+		floatInfo.NodeName = fPod.Spec.NodeName
+		floatInfo.NodeConfigSpec = nodeConfigSpec
+ floatInfos = append(floatInfos, &floatInfo)
+ }
+
+ return floatInfos, nil
+}
+
func (f *Floater) CommandExec(fInfo *FloatInfo, cmd command.Command) *command.Result {
req := f.Client.CoreV1().RESTClient().Post().Resource("pods").Namespace(f.Namespace).Name(fInfo.PodName).
SubResource("exec").
@@ -277,3 +407,77 @@ func (f *Floater) CommandExec(fInfo *FloatInfo, cmd command.Command) *command.Re
return cmd.ParseResult(outBuffer.String())
}
+
+func (f *Floater) RemoveFloater() error {
+ klog.Infof("remove Clusterlink floater, version: %s", f.Version)
+ if err := f.removeDaemonSet(); err != nil {
+ return err
+ }
+
+	klog.Info("remove Clusterlink floater, remove RBAC resources")
+ if err := f.removeClusterRoleBinding(); err != nil {
+ return err
+ }
+ if err := f.removeClusterRole(); err != nil {
+ return err
+ }
+ if err := f.removeServiceAccount(); err != nil {
+ return err
+ }
+
+ if f.Namespace != utils.DefaultNamespace {
+ klog.Infof("remove namespace specified when creating Clusterlink floater, namespace: %s", f.Namespace)
+ err := f.Client.CoreV1().Namespaces().Delete(context.TODO(), f.Namespace, metav1.DeleteOptions{})
+ if err != nil {
+ if !apierrors.IsNotFound(err) {
+				return fmt.Errorf("kosmosctl floater remove error, delete namespace failed: %v", err)
+ }
+ }
+ }
+
+ return nil
+}
+
+func (f *Floater) removeDaemonSet() error {
+ err := f.Client.AppsV1().DaemonSets(f.Namespace).Delete(context.Background(), f.Name, metav1.DeleteOptions{})
+ if err != nil {
+ if !apierrors.IsNotFound(err) {
+			return fmt.Errorf("kosmosctl floater remove error, delete daemonset failed: %v", err)
+ }
+ }
+
+ return nil
+}
+
+func (f *Floater) removeClusterRoleBinding() error {
+ err := f.Client.RbacV1().ClusterRoleBindings().Delete(context.Background(), f.Name, metav1.DeleteOptions{})
+ if err != nil {
+ if !apierrors.IsNotFound(err) {
+			return fmt.Errorf("kosmosctl floater remove error, delete clusterrolebinding failed: %v", err)
+ }
+ }
+
+ return nil
+}
+
+func (f *Floater) removeClusterRole() error {
+ err := f.Client.RbacV1().ClusterRoles().Delete(context.Background(), f.Name, metav1.DeleteOptions{})
+ if err != nil {
+ if !apierrors.IsNotFound(err) {
+			return fmt.Errorf("kosmosctl floater remove error, delete clusterrole failed: %v", err)
+ }
+ }
+
+ return nil
+}
+
+func (f *Floater) removeServiceAccount() error {
+ err := f.Client.CoreV1().ServiceAccounts(f.Namespace).Delete(context.Background(), f.Name, metav1.DeleteOptions{})
+ if err != nil {
+ if !apierrors.IsNotFound(err) {
+			return fmt.Errorf("kosmosctl floater remove error, delete serviceaccount failed: %v", err)
+ }
+ }
+
+ return nil
+}
diff --git a/pkg/kosmosctl/get/get.go b/pkg/kosmosctl/get/get.go
index 4f3a47534..92b462784 100644
--- a/pkg/kosmosctl/get/get.go
+++ b/pkg/kosmosctl/get/get.go
@@ -1,22 +1,31 @@
package get
import (
+ "context"
"fmt"
"strings"
"github.com/spf13/cobra"
+ authenticationv1 "k8s.io/api/authentication/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/cli-runtime/pkg/genericclioptions"
+ "k8s.io/client-go/kubernetes"
+ "k8s.io/client-go/tools/clientcmd"
ctlget "k8s.io/kubectl/pkg/cmd/get"
ctlutil "k8s.io/kubectl/pkg/cmd/util"
"k8s.io/kubectl/pkg/util/i18n"
+ "k8s.io/utils/pointer"
+ "github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned"
+ "github.com/kosmos.io/kosmos/pkg/kosmosctl/manifest"
+ "github.com/kosmos.io/kosmos/pkg/kosmosctl/util"
"github.com/kosmos.io/kosmos/pkg/utils"
)
const (
ClustersGroupResource = "clusters.kosmos.io"
ClusterNodesGroupResource = "clusternodes.kosmos.io"
- KnodesGroupResource = "knodes.kosmos.io"
+ NodeConfigsGroupResource = "nodeconfigs.kosmos.io"
)
type CommandGetOptions struct {
@@ -28,6 +37,8 @@ type CommandGetOptions struct {
GetOptions *ctlget.GetOptions
}
+var newF ctlutil.Factory
+
// NewCmdGet Display resources from the Kosmos control plane.
func NewCmdGet(f ctlutil.Factory, streams genericclioptions.IOStreams) *cobra.Command {
o := NewCommandGetOptions(streams)
@@ -47,9 +58,10 @@ func NewCmdGet(f ctlutil.Factory, streams genericclioptions.IOStreams) *cobra.Co
},
}
- o.GetOptions.PrintFlags.AddFlags(cmd)
flags := cmd.Flags()
flags.StringVarP(&o.Namespace, "namespace", "n", utils.DefaultNamespace, "If present, the namespace scope for this CLI request.")
+ flags.StringVar(&o.Cluster, "cluster", utils.DefaultClusterName, "Specify a cluster, the default is the control cluster.")
+ o.GetOptions.PrintFlags.AddFlags(cmd)
return cmd
}
@@ -63,13 +75,71 @@ func NewCommandGetOptions(streams genericclioptions.IOStreams) *CommandGetOption
}
func (o *CommandGetOptions) Complete(f ctlutil.Factory, cmd *cobra.Command, args []string) error {
- err := o.GetOptions.Complete(f, cmd, args)
- if err != nil {
- return fmt.Errorf("kosmosctl get complete error, options failed: %s", err)
+ if o.Cluster != utils.DefaultClusterName {
+ controlConfig, err := f.ToRESTConfig()
+ if err != nil {
+ return err
+ }
+
+ rootClient, err := versioned.NewForConfig(controlConfig)
+ if err != nil {
+ return err
+ }
+ cluster, err := rootClient.KosmosV1alpha1().Clusters().Get(context.TODO(), o.Cluster, metav1.GetOptions{})
+ if err != nil {
+ return err
+ }
+
+ leafConfig, err := clientcmd.RESTConfigFromKubeConfig(cluster.Spec.Kubeconfig)
+ if err != nil {
+ return fmt.Errorf("kosmosctl get complete error, load leaf cluster kubeconfig failed: %s", err)
+ }
+
+ leafClient, err := kubernetes.NewForConfig(leafConfig)
+ if err != nil {
+ return fmt.Errorf("kosmosctl get complete error, generate leaf cluster client failed: %s", err)
+ }
+
+ kosmosControlSA, err := util.GenerateServiceAccount(manifest.KosmosControlServiceAccount, manifest.ServiceAccountReplace{
+ Namespace: o.Namespace,
+ })
+ if err != nil {
+ return fmt.Errorf("kosmosctl get complete error, generate kosmos serviceaccount failed: %s", err)
+ }
+ expirationSeconds := int64(600)
+ leafToken, err := leafClient.CoreV1().ServiceAccounts(kosmosControlSA.Namespace).CreateToken(
+ context.TODO(), kosmosControlSA.Name, &authenticationv1.TokenRequest{
+ Spec: authenticationv1.TokenRequestSpec{
+ ExpirationSeconds: &expirationSeconds,
+ },
+ }, metav1.CreateOptions{})
+ if err != nil {
+ return fmt.Errorf("kosmosctl get complete error, create leaf cluster token failed: %s", err)
+ }
+
+ configFlags := genericclioptions.NewConfigFlags(false)
+ configFlags.APIServer = &leafConfig.Host
+ configFlags.BearerToken = &leafToken.Status.Token
+ configFlags.Insecure = pointer.Bool(true)
+ configFlags.Namespace = &o.Namespace
+
+ newF = ctlutil.NewFactory(configFlags)
+
+ err = o.GetOptions.Complete(newF, cmd, args)
+ if err != nil {
+ return fmt.Errorf("kosmosctl get complete error, options failed: %s", err)
+ }
+
+ o.GetOptions.Namespace = o.Namespace
+ } else {
+ err := o.GetOptions.Complete(f, cmd, args)
+ if err != nil {
+ return fmt.Errorf("kosmosctl get complete error, options failed: %s", err)
+ }
+
+ o.GetOptions.Namespace = o.Namespace
}
- o.GetOptions.Namespace = o.Namespace
-
return nil
}
@@ -88,13 +158,20 @@ func (o *CommandGetOptions) Run(f ctlutil.Factory, cmd *cobra.Command, args []st
args[0] = ClustersGroupResource
case "clusternode", "clusternodes":
args[0] = ClusterNodesGroupResource
- case "knode", "knodes":
- args[0] = KnodesGroupResource
+ case "nodeconfig", "nodeconfigs":
+ args[0] = NodeConfigsGroupResource
}
- err := o.GetOptions.Run(f, cmd, args)
- if err != nil {
- return fmt.Errorf("kosmosctl get run error, options failed: %s", err)
+ if o.Cluster != utils.DefaultClusterName {
+ err := o.GetOptions.Run(newF, cmd, args)
+ if err != nil {
+ return fmt.Errorf("kosmosctl get run error, options failed: %s", err)
+ }
+ } else {
+ err := o.GetOptions.Run(f, cmd, args)
+ if err != nil {
+ return fmt.Errorf("kosmosctl get run error, options failed: %s", err)
+ }
}
return nil
diff --git a/pkg/kosmosctl/image/image.go b/pkg/kosmosctl/image/image.go
new file mode 100644
index 000000000..cc4d22b06
--- /dev/null
+++ b/pkg/kosmosctl/image/image.go
@@ -0,0 +1,18 @@
+package image
+
+import (
+ "github.com/spf13/cobra"
+ "k8s.io/kubectl/pkg/util/i18n"
+)
+
+// NewCmdImage pull/push a kosmos offline installation package.
+func NewCmdImage() *cobra.Command {
+ cmd := &cobra.Command{
+ Use: "image",
+ Short: i18n.T("pull and push kosmos offline installation package. "),
+ }
+
+ cmd.AddCommand(NewCmdPull())
+ cmd.AddCommand(NewCmdPush())
+ return cmd
+}
diff --git a/pkg/kosmosctl/image/pull.go b/pkg/kosmosctl/image/pull.go
new file mode 100644
index 000000000..483b54dc1
--- /dev/null
+++ b/pkg/kosmosctl/image/pull.go
@@ -0,0 +1,295 @@
+package image
+
+import (
+ "bufio"
+ "context"
+ "fmt"
+ "io"
+ "os"
+
+ "github.com/containerd/containerd"
+ "github.com/containerd/containerd/images/archive"
+ "github.com/containerd/containerd/namespaces"
+ "github.com/docker/docker/api/types"
+ docker "github.com/docker/docker/client"
+ "github.com/spf13/cobra"
+ "k8s.io/klog"
+ ctlutil "k8s.io/kubectl/pkg/cmd/util"
+ "k8s.io/kubectl/pkg/util/i18n"
+ "k8s.io/kubectl/pkg/util/templates"
+
+ "github.com/kosmos.io/kosmos/pkg/utils"
+)
+
+var PullExample = templates.Examples(i18n.T(`
+ # Pull and save images with default config, e.g:
+ kosmosctl image pull
+
+ # Pull and save images with custom config, e.g:
+ kosmosctl image pull --kosmos-version=[kosmos-image-version] --coredns-version=[coredns-image-version] --eps-version=[eps-image-version] --output=[output-dir] --container-runtime=[container-runtime]
+`))
+
+type CommandPullOptions struct {
+ Output string
+ ImageList string
+ ContainerRuntime string
+ KosmosImageVersion string
+ CorednsImageVersion string
+ EpsImageVersion string
+ ContainerdNamespace string
+ Context context.Context
+ DockerClient *docker.Client
+ ContainerdClient *containerd.Client
+}
+
+func NewCmdPull() *cobra.Command {
+ o := &CommandPullOptions{}
+ cmd := &cobra.Command{
+ Use: "pull",
+ Short: i18n.T("pull a kosmos offline installation package. "),
+ Long: "",
+ Example: PullExample,
+ SilenceUsage: true,
+ DisableFlagsInUseLine: true,
+ RunE: func(cmd *cobra.Command, args []string) error {
+ ctlutil.CheckErr(o.Complete())
+ ctlutil.CheckErr(o.Validate())
+ ctlutil.CheckErr(o.Run())
+ return nil
+ },
+ }
+
+ flags := cmd.Flags()
+ flags.StringVarP(&o.ImageList, "image-list", "l", "", "Path of image-list.txt. ")
+ flags.StringVarP(&o.Output, "output", "o", "", "Path to the output directory, defaults to the current directory.")
+ flags.StringVarP(&o.ContainerRuntime, "container-runtime", "c", utils.DefaultContainerRuntime, "Type of container runtime (docker or containerd), docker is used by default.")
+ flags.StringVarP(&o.ContainerdNamespace, "containerd-namespace", "n", utils.DefaultContainerdNamespace, "Namespace of containerd. ")
+ flags.StringVarP(&o.KosmosImageVersion, "kosmos-version", "", utils.DefaultVersion, "Image version of kosmos. ")
+ flags.StringVarP(&o.CorednsImageVersion, "coredns-version", "", utils.DefaultVersion, "Image version of coredns. ")
+ flags.StringVarP(&o.EpsImageVersion, "eps-version", "", utils.DefaultVersion, "Image version of eps-probe-plugin. ")
+ return cmd
+}
+
+func (o *CommandPullOptions) Complete() (err error) {
+ if len(o.Output) == 0 {
+ currentPath, err := os.Getwd()
+ if err != nil {
+ return fmt.Errorf("get current directory failed: %s", err)
+ }
+ o.Output = currentPath
+ }
+
+ switch o.ContainerRuntime {
+ case utils.Containerd:
+ o.ContainerdClient, err = containerd.New(utils.DefaultContainerdSockAddress)
+ if err != nil {
+ return fmt.Errorf("init containerd client failed: %s", err)
+ }
+ default:
+ o.DockerClient, err = docker.NewClientWithOpts(docker.FromEnv, docker.WithAPIVersionNegotiation())
+ if err != nil {
+ return fmt.Errorf("init docker client failed: %s", err)
+ }
+ }
+
+ o.Context = namespaces.WithNamespace(context.TODO(), o.ContainerdNamespace)
+ return nil
+}
+
+func (o *CommandPullOptions) Validate() error {
+ return nil
+}
+
+func (o *CommandPullOptions) Run() error {
+ // 1. pull image from public registry
+ klog.V(4).Info("Start pulling images ...")
+ imageList, err := o.PullImage()
+ if err != nil {
+ klog.V(4).Infof("image pull failed: %s", err)
+ return err
+ }
+ klog.V(4).Info("kosmos images have been pulled successfully. ")
+
+ // 2. save image to *.tar.gz
+ klog.V(4).Info("Start saving images ...")
+ err = o.SaveImage(imageList)
+ if err != nil {
+ klog.V(4).Infof("image save failed: %s", err)
+ return err
+ }
+ klog.V(4).Info("kosmos-io.tar.gz has been saved successfully. ")
+ return nil
+}
+
+func (o *CommandPullOptions) PullImage() (imageList []string, err error) {
+ if len(o.ImageList) != 0 {
+ // pull images from image-list.txt
+ imageList, err = o.PullFromImageList()
+ if err != nil {
+ return nil, err
+ }
+ } else {
+ // pull images with specific version
+ imageList, err = o.PullWithSpecificVersion()
+ if err != nil {
+ return nil, err
+ }
+ }
+ return imageList, nil
+}
+
+func (o *CommandPullOptions) SaveImage(imageList []string) (err error) {
+ switch o.ContainerRuntime {
+ case utils.Containerd:
+ err = o.ContainerdExport(imageList)
+ if err != nil {
+ return err
+ }
+ default:
+ err = o.DockerSave(imageList)
+ if err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func (o *CommandPullOptions) PullFromImageList() (imageList []string, err error) {
+ var imageName string
+ file, err := os.Open(o.ImageList)
+ if err != nil {
+ return nil, fmt.Errorf("read image list failed: %v", err)
+ }
+ defer func() {
+ if err = file.Close(); err != nil {
+ klog.Errorf("file close failed: %s", err)
+ }
+ }()
+
+ scanner := bufio.NewScanner(file)
+ for scanner.Scan() {
+ imageName = scanner.Text()
+ err = o.PullCommand(imageName)
+ if err != nil {
+ return nil, err
+ }
+ imageList = append(imageList, imageName)
+ }
+ return imageList, nil
+}
+
+func (o *CommandPullOptions) PullWithSpecificVersion() (imageList []string, err error) {
+ var imageName string
+ for _, name := range utils.ImageList {
+ switch name {
+ case utils.Coredns:
+ imageName = fmt.Sprintf("%s:%s", name, o.CorednsImageVersion)
+ case utils.EpsProbePlugin:
+ imageName = fmt.Sprintf("%s:%s", name, o.EpsImageVersion)
+ default:
+ imageName = fmt.Sprintf("%s:%s", name, o.KosmosImageVersion)
+ }
+ err := o.PullCommand(imageName)
+ if err != nil {
+ return nil, err
+ }
+ imageList = append(imageList, imageName)
+ }
+ return imageList, nil
+}
+
+func (o *CommandPullOptions) PullCommand(imageName string) (err error) {
+ switch o.ContainerRuntime {
+ case utils.Containerd:
+ err = o.ContainerdPull(imageName)
+ if err != nil {
+ return err
+ }
+ default:
+ err = o.DockerPull(imageName)
+ if err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func (o *CommandPullOptions) DockerPull(imageName string) (err error) {
+ reader, err := o.DockerClient.ImagePull(context.Background(), imageName, types.ImagePullOptions{})
+ if err != nil {
+ return fmt.Errorf("docker pull %s failed: %s", imageName, err)
+ }
+ _, err = io.Copy(os.Stdout, reader)
+ if err != nil {
+ return err
+ }
+ klog.V(4).Infof("docker pull %s successfully.", imageName)
+ return nil
+}
+
+func (o *CommandPullOptions) DockerSave(imageList []string) (err error) {
+ outputPath := fmt.Sprintf("%s/%s", o.Output, utils.DefaultTarName)
+ file, err := os.OpenFile(outputPath, os.O_WRONLY|os.O_CREATE|os.O_EXCL, 0666)
+ if err != nil {
+ return fmt.Errorf("open file %s failed: %s", outputPath, err)
+ }
+ defer func() {
+ if err = file.Close(); err != nil {
+ klog.Errorf("file close failed: %s", err)
+ }
+ }()
+
+ saveResponse, err := o.DockerClient.ImageSave(context.Background(), imageList)
+ if err != nil {
+ return fmt.Errorf("docker save images failed: %s", err)
+ }
+
+ if _, err = io.Copy(file, saveResponse); err != nil {
+ return fmt.Errorf("io.Copy failed: %s", err)
+ }
+ return nil
+}
+
+func (o *CommandPullOptions) ContainerdPull(imageName string) (err error) {
+ opts := []containerd.RemoteOpt{
+ containerd.WithPullUnpack,
+ }
+ image, err := o.ContainerdClient.Pull(o.Context, imageName, opts...)
+ if err != nil {
+ return fmt.Errorf("ctr image pull %s failed: %s", imageName, err)
+ }
+ klog.V(4).Infof("ctr image pull %s successfully.", image.Name())
+ return nil
+}
+
+func (o *CommandPullOptions) ContainerdExport(imageList []string) (err error) {
+ outputPath := fmt.Sprintf("%s/%s", o.Output, utils.DefaultTarName)
+ file, err := os.OpenFile(outputPath, os.O_CREATE|os.O_WRONLY, 0666)
+ if err != nil {
+ return fmt.Errorf("open file %s failed: %s", outputPath, err)
+ }
+ defer func() {
+ if err = file.Close(); err != nil {
+ klog.Errorf("file close failed: %s", err)
+ }
+ }()
+
+ imageStore := o.ContainerdClient.ImageService()
+ var exportOpts []archive.ExportOpt
+ for _, imageName := range imageList {
+ if len(imageName) == 0 {
+ continue
+ }
+ klog.V(4).Infof("imageName: %s", imageName)
+ exportOpts = append(exportOpts, archive.WithImage(imageStore, imageName))
+ }
+
+ err = o.ContainerdClient.Export(o.Context, file, exportOpts...)
+ if err != nil && outputPath != "" {
+ if err1 := os.Remove(outputPath); err1 != nil {
+ return fmt.Errorf("os,Remove failed: %s", err1)
+ }
+ return fmt.Errorf("ctr image export failed: %s", err)
+ }
+ return nil
+}
diff --git a/pkg/kosmosctl/image/push.go b/pkg/kosmosctl/image/push.go
new file mode 100644
index 000000000..eaa27d1a3
--- /dev/null
+++ b/pkg/kosmosctl/image/push.go
@@ -0,0 +1,413 @@
+package image
+
+import (
+ "bufio"
+ "context"
+ "crypto/tls"
+ "encoding/base64"
+ "encoding/json"
+ "fmt"
+ "io"
+ "os"
+ "regexp"
+ "strings"
+
+ "github.com/containerd/console"
+ "github.com/containerd/containerd"
+ "github.com/containerd/containerd/errdefs"
+ "github.com/containerd/containerd/namespaces"
+ refdocker "github.com/containerd/containerd/reference/docker"
+ "github.com/containerd/containerd/remotes"
+ docker2 "github.com/containerd/containerd/remotes/docker"
+ "github.com/containerd/containerd/remotes/docker/config"
+ "github.com/docker/docker/api/types"
+ "github.com/docker/docker/api/types/registry"
+ docker "github.com/docker/docker/client"
+ "github.com/spf13/cobra"
+ "k8s.io/klog/v2"
+ ctlutil "k8s.io/kubectl/pkg/cmd/util"
+ "k8s.io/kubectl/pkg/util/i18n"
+ "k8s.io/kubectl/pkg/util/templates"
+
+ "github.com/kosmos.io/kosmos/pkg/utils"
+)
+
+var PushExample = templates.Examples(i18n.T(`
+	# Push images from ./*.tar.gz to a private registry, e.g:
+	kosmosctl image push --artifact=[*.tar.gz] --private-registry=[private-registry-name]
+
+	# Push images from ./*.tar.gz to a private registry that requires login, e.g:
+	kosmosctl image push --artifact=[*.tar.gz] --username=[registry-username] --private-registry=[private-registry-name]
+`))
+
+type CommandPushOptions struct {
+ UserName string
+ PassWord string
+ PrivateRegistry string
+ ContainerRuntime string
+ ContainerdNamespace string
+ ImageList string
+ Artifact string
+ Context context.Context
+ DockerClient *docker.Client
+ ContainerdClient *containerd.Client
+}
+
+func NewCmdPush() *cobra.Command {
+ o := &CommandPushOptions{}
+ cmd := &cobra.Command{
+ Use: "push",
+ Short: i18n.T("push images from *.tar.gz to private registry. "),
+ Long: "",
+ Example: PushExample,
+ SilenceUsage: true,
+ DisableFlagsInUseLine: true,
+ RunE: func(cmd *cobra.Command, args []string) error {
+ ctlutil.CheckErr(o.Complete())
+ ctlutil.CheckErr(o.Validate())
+ ctlutil.CheckErr(o.Run())
+ return nil
+ },
+ }
+ flags := cmd.Flags()
+ flags.StringVarP(&o.ImageList, "image-list", "d", "", "Path of image-list.txt. ")
+ flags.StringVarP(&o.Artifact, "artifact", "a", "", "Path of kosmos-io.tar.gz ")
+ flags.StringVarP(&o.UserName, "username", "u", "", "Username to private registry. ")
+ flags.StringVarP(&o.PrivateRegistry, "private-registry", "r", "", "Private image registry to push images to. ")
+ flags.StringVarP(&o.ContainerRuntime, "container-runtime", "c", utils.DefaultContainerRuntime, "Type of container runtime (docker or containerd), docker is used by default.")
+ flags.StringVarP(&o.ContainerdNamespace, "containerd-namespace", "n", utils.DefaultContainerdNamespace, "Namespace of containerd. ")
+ return cmd
+}
+
+func (o *CommandPushOptions) Complete() (err error) {
+ switch o.ContainerRuntime {
+ case utils.Containerd:
+ o.ContainerdClient, err = containerd.New(utils.DefaultContainerdSockAddress)
+ if err != nil {
+ return fmt.Errorf("init containerd client failed: %s", err)
+ }
+ default:
+ o.DockerClient, err = docker.NewClientWithOpts(docker.FromEnv, docker.WithAPIVersionNegotiation())
+ if err != nil {
+ return fmt.Errorf("init docker client failed: %s", err)
+ }
+ }
+
+ if len(o.UserName) != 0 {
+ klog.V(4).Info("Please enter the registry password: ")
+ o.PassWord, err = o.passwordPrompt()
+ if err != nil {
+ return fmt.Errorf("enter password failed: %s", err)
+ }
+ }
+
+ o.Context = namespaces.WithNamespace(context.TODO(), o.ContainerdNamespace)
+
+ return nil
+}
+
+func (o *CommandPushOptions) Validate() (err error) {
+ if len(o.Artifact) == 0 {
+ return fmt.Errorf("artifact path can not be empty")
+ }
+
+ if len(o.UserName) == 0 {
+ return fmt.Errorf("username of registry can not be empty")
+ }
+
+ if len(o.PrivateRegistry) == 0 {
+ return fmt.Errorf("private registry can not be empty")
+ }
+
+ return nil
+}
+
+func (o *CommandPushOptions) Run() error {
+ // 1. load image from *.tar.gz
+ klog.V(4).Info("Start loading images ...")
+ imageList, err := o.LoadImage()
+ if err != nil {
+ klog.Infof("image load failed: %s", err)
+ return err
+ }
+ klog.V(4).Info("kosmos images have been loaded successfully. ")
+
+ // 2. push image to private registry
+ klog.V(4).Info("Start pushing images ...")
+ err = o.PushImage(imageList)
+ if err != nil {
+ klog.V(4).Infof("image push failed: %s", err)
+ return err
+ }
+ klog.V(4).Info("kosmos images have been pushed successfully. ")
+ return nil
+}
+
+func (o *CommandPushOptions) LoadImage() (imageList []string, err error) {
+ switch o.ContainerRuntime {
+ case utils.Containerd:
+ imageList, err = o.ContainerdImport()
+ if err != nil {
+ return nil, err
+ }
+ default:
+ imageList, err = o.DockerLoad()
+ if err != nil {
+ return nil, err
+ }
+ }
+ return imageList, nil
+}
+
+func (o *CommandPushOptions) PushImage(imageList []string) error {
+ if len(o.ImageList) != 0 {
+ // push images from image-list.txt
+ file, err := os.Open(o.ImageList)
+ if err != nil {
+ return fmt.Errorf("read image list failed: %v", err)
+ }
+ defer func() {
+ if err := file.Close(); err != nil {
+ klog.Errorf("file close failed: %s", err)
+ }
+ }()
+
+ scanner := bufio.NewScanner(file)
+ for scanner.Scan() {
+ imageName := scanner.Text()
+ err = o.PushCommand(imageName)
+ if err != nil {
+ return err
+ }
+ }
+ } else {
+ // push images with specific version
+ for _, imageName := range imageList {
+ if len(imageName) == 0 {
+ continue
+ }
+ imageName = strings.TrimSpace(imageName)
+ err := o.PushCommand(imageName)
+ if err != nil {
+ return err
+ }
+ }
+ }
+ return nil
+}
+
+func (o *CommandPushOptions) PushCommand(imageName string) (err error) {
+ splits := strings.Split(imageName, "/")
+ imageTagName := fmt.Sprintf("%s/%s", o.PrivateRegistry, splits[len(splits)-1])
+ switch o.ContainerRuntime {
+ case utils.Containerd:
+ err = o.ContainerdTag(imageName, imageTagName)
+ if err != nil {
+ return err
+ }
+ err = o.ContainerdPush(imageTagName)
+ if err != nil {
+ return err
+ }
+ default:
+ err = o.DockerTag(imageName, imageTagName)
+ if err != nil {
+ return err
+ }
+ err = o.DockerPush(imageTagName)
+ if err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+func (o *CommandPushOptions) DockerLoad() (imageList []string, err error) {
+ file, err := os.Open(o.Artifact)
+ if err != nil {
+ return nil, fmt.Errorf("open %s failed: %s", o.Artifact, err)
+ }
+ defer file.Close()
+ imageLoadResponse, err := o.DockerClient.ImageLoad(context.Background(), file, true)
+ if err != nil {
+ return nil, fmt.Errorf("docker load failed: %s", err)
+ }
+
+ body, err := io.ReadAll(imageLoadResponse.Body)
+ if err != nil {
+ return nil, fmt.Errorf("io.Read failed: %s", err)
+ }
+
+ strSlice := strings.Split(string(body), "\n")
+ for _, str := range strSlice {
+ if len(str) == 0 {
+ continue
+ }
+ imageParts := strings.Split(str, ":")
+ imageVersion := imageParts[len(imageParts)-1]
+
+ var imageName string
+ if strings.Contains(imageVersion, utils.DefaultVersion) {
+ imageName = fmt.Sprintf("%s:%s", imageParts[len(imageParts)-2], utils.DefaultVersion)
+ } else if strings.Contains(imageVersion, "v") {
+ regex := regexp.MustCompile(`v\d+\.\d+\.\d+`)
+ imageVersionMatch := regex.FindString(imageVersion)
+ imageName = fmt.Sprintf("%s:%s", imageParts[len(imageParts)-2], imageVersionMatch)
+ }
+
+ if len(imageName) == 0 {
+ continue
+ }
+ imageList = append(imageList, imageName)
+ }
+ return imageList, nil
+}
+
+func (o *CommandPushOptions) DockerTag(imageSourceName, imageTargetName string) (err error) {
+ err = o.DockerClient.ImageTag(context.Background(), imageSourceName, imageTargetName)
+ if err != nil {
+ return fmt.Errorf("docker tag %s %s failed: %s", imageSourceName, imageTargetName, err)
+ }
+ return nil
+}
+
+func (o *CommandPushOptions) DockerPush(imageName string) (err error) {
+ var result io.ReadCloser
+
+ authConfig := registry.AuthConfig{
+ Username: o.UserName,
+ Password: o.PassWord,
+ }
+ encodedJSON, err := json.Marshal(authConfig)
+ if err != nil {
+ return fmt.Errorf("json marshal failed: %s", err)
+ }
+ authStr := base64.URLEncoding.EncodeToString(encodedJSON)
+
+ result, err = o.DockerClient.ImagePush(context.Background(), imageName, types.ImagePushOptions{RegistryAuth: authStr})
+ if err != nil {
+ return fmt.Errorf("docker push failed: %s", err)
+ }
+
+ body, err := io.ReadAll(result)
+ if err != nil {
+ klog.Info(err)
+ return fmt.Errorf("io.ReadAll failed: %s", err)
+ }
+ klog.V(4).Info(string(body))
+ klog.V(4).Infof("docker push %s successfully.", imageName)
+ return nil
+}
+
+func (o *CommandPushOptions) ContainerdImport() (imageList []string, err error) {
+ file, err := os.Open(o.Artifact)
+ if err != nil {
+ return nil, fmt.Errorf("open %s failed: %s", o.Artifact, err)
+ }
+ defer file.Close()
+
+ images, err := o.ContainerdClient.Import(o.Context, file)
+ if err != nil {
+ return nil, fmt.Errorf("ctr image import failed: %s", err)
+ }
+
+ for _, image := range images {
+ imageList = append(imageList, image.Name)
+ klog.V(4).Infof("ctr image import %s successfully.", image.Name)
+ }
+ return imageList, nil
+}
+
+func (o *CommandPushOptions) ContainerdTag(imageSourceName, imageTargetName string) (err error) {
+ target, err := refdocker.ParseDockerRef(imageTargetName)
+ if err != nil {
+ return fmt.Errorf("parse docker ref failed: %s", err)
+ }
+
+ ctx, done, err := o.ContainerdClient.WithLease(o.Context)
+ if err != nil {
+ return fmt.Errorf("with lease failed: %s", err)
+ }
+ defer func() {
+ if err = done(ctx); err != nil {
+ klog.Errorf("done failed: %s", err)
+ }
+ }()
+
+ imageService := o.ContainerdClient.ImageService()
+ image, err := imageService.Get(ctx, imageSourceName)
+ if err != nil {
+ return fmt.Errorf("imageService get image failed: %s", err)
+ }
+ image.Name = target.String()
+ if _, err = imageService.Create(ctx, image); err != nil {
+ if errdefs.IsAlreadyExists(err) {
+ if err = imageService.Delete(ctx, image.Name); err != nil {
+ return fmt.Errorf("imageService delete image failed: %s", err)
+ }
+ if _, err = imageService.Create(ctx, image); err != nil {
+ return fmt.Errorf("imageService create image failed: %s", err)
+ }
+ } else {
+ return fmt.Errorf("ctr image tag %s %s failed: %s", imageSourceName, imageTargetName, err)
+ }
+ }
+ return nil
+}
+
+func (o *CommandPushOptions) ContainerdPush(imageName string) (err error) {
+ image, err := o.ContainerdClient.GetImage(o.Context, imageName)
+ if err != nil {
+ return fmt.Errorf("get image failed: %s", err)
+ }
+ resolver, err := o.GetResolver()
+ if err != nil {
+ return fmt.Errorf("get resolver failed: %s", err)
+ }
+
+ options := []containerd.RemoteOpt{
+ containerd.WithResolver(resolver),
+ }
+ err = o.ContainerdClient.Push(o.Context, imageName, image.Target(), options...)
+ if err != nil {
+ return fmt.Errorf("ctr image push %s failed: %s", imageName, err)
+ }
+ klog.V(4).Infof("ctr image push %s successfully.", imageName)
+ return nil
+}
+
+func (o *CommandPushOptions) GetResolver() (remotes.Resolver, error) {
+ var PushTracker = docker2.NewInMemoryTracker()
+ options := docker2.ResolverOptions{
+ Tracker: PushTracker,
+ }
+
+ hostOptions := config.HostOptions{}
+ hostOptions.Credentials = func(host string) (string, string, error) {
+ return o.UserName, o.PassWord, nil
+ }
+
+ hostOptions.DefaultTLS = &tls.Config{MinVersion: tls.VersionTLS13}
+ options.Hosts = config.ConfigureHosts(o.Context, hostOptions)
+
+ return docker2.NewResolver(options), nil
+}
+
+func (o *CommandPushOptions) passwordPrompt() (string, error) {
+ c := console.Current()
+ defer func() {
+ if err := c.Reset(); err != nil {
+ klog.Errorf("c.Reset failed: %s", err)
+ }
+ }()
+
+ if err := c.DisableEcho(); err != nil {
+ return "", fmt.Errorf("failed to disable echo: %w", err)
+ }
+
+ line, _, err := bufio.NewReader(c).ReadLine()
+ if err != nil {
+ return "", fmt.Errorf("failed to read line: %w", err)
+ }
+ return string(line), nil
+}
diff --git a/pkg/kosmosctl/install/install.go b/pkg/kosmosctl/install/install.go
index 9d331f64f..6c7703e3b 100644
--- a/pkg/kosmosctl/install/install.go
+++ b/pkg/kosmosctl/install/install.go
@@ -21,6 +21,10 @@ import (
"k8s.io/kubectl/pkg/util/i18n"
"k8s.io/kubectl/pkg/util/templates"
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ "github.com/kosmos.io/kosmos/pkg/cert"
+ "github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned"
+ "github.com/kosmos.io/kosmos/pkg/kosmosctl/join"
"github.com/kosmos.io/kosmos/pkg/kosmosctl/manifest"
"github.com/kosmos.io/kosmos/pkg/kosmosctl/util"
"github.com/kosmos.io/kosmos/pkg/utils"
@@ -28,21 +32,20 @@ import (
)
var installExample = templates.Examples(i18n.T(`
- # Install all module to Kosmos control plane, e.g:
- kosmosctl install
-
- # Install Kosmos control plane, if you need to specify a special master cluster kubeconfig, e.g:
- kosmosctl install --host-kubeconfig=[host-kubeconfig]
-
- # Install clusterlink module to Kosmos control plane, e.g:
- kosmosctl install -m clusterlink
-
+ # Install all module to Kosmos control plane, e.g:
+ kosmosctl install --cni cni-name --default-nic nic-name
+
+ # Install Kosmos control plane, if you need to specify a special control plane cluster kubeconfig, e.g:
+ kosmosctl install --kubeconfig ~/kubeconfig/cluster-kubeconfig
+
# Install clustertree module to Kosmos control plane, e.g:
- kosmosctl install -m clustertree
-
+ kosmosctl install -m clustertree
+
+ # Install clusterlink module to Kosmos control plane and set the necessary parameters, e.g:
+ kosmosctl install -m clusterlink --cni cni-name --default-nic nic-name
+
# Install coredns module to Kosmos control plane, e.g:
- kosmosctl install -m coredns
-`))
+ kosmosctl install -m coredns`))
type CommandInstallOptions struct {
Namespace string
@@ -53,8 +56,18 @@ type CommandInstallOptions struct {
HostKubeConfigStream []byte
WaitTime int
- Client kubernetes.Interface
- ExtensionsClient extensionsclient.Interface
+ CNI string
+ DefaultNICName string
+ NetworkType string
+ IpFamily string
+ UseProxy string
+
+ KosmosClient versioned.Interface
+ K8sClient kubernetes.Interface
+ K8sExtensionsClient extensionsclient.Interface
+
+ CertEncode string
+ KeyEncode string
}
// NewCmdInstall Install the Kosmos control plane in a Kubernetes cluster.
@@ -79,9 +92,17 @@ func NewCmdInstall(f ctlutil.Factory) *cobra.Command {
flags := cmd.Flags()
flags.StringVarP(&o.Namespace, "namespace", "n", utils.DefaultNamespace, "Kosmos namespace.")
flags.StringVarP(&o.ImageRegistry, "private-image-registry", "", utils.DefaultImageRepository, "Private image registry where pull images from. If set, all required images will be downloaded from it, it would be useful in offline installation scenarios. In addition, you still can use --kube-image-registry to specify the registry for Kubernetes's images.")
- flags.StringVarP(&o.Module, "module", "m", utils.DefaultInstallModule, "Kosmos specify the module to install.")
- flags.StringVar(&o.HostKubeConfig, "host-kubeconfig", "", "Absolute path to the special host kubeconfig file.")
- flags.IntVarP(&o.WaitTime, "wait-time", "", 120, "Wait the specified time for the Kosmos install ready.")
+ flags.StringVarP(&o.Module, "module", "m", utils.All, "Kosmos specify the module to install.")
+ flags.StringVar(&o.HostKubeConfig, "kubeconfig", "", "Absolute path to the special kubeconfig file.")
+ flags.StringVar(&o.CNI, "cni", "", "The cluster is configured using cni and currently supports calico and flannel.")
+ flags.StringVar(&o.DefaultNICName, "default-nic", "", "Set default network interface card.")
+ flags.StringVar(&o.NetworkType, "network-type", utils.NetworkTypeGateway, "Set the cluster network connection mode, which supports gateway and p2p modes, gateway is used by default.")
+ flags.StringVar(&o.IpFamily, "ip-family", string(v1alpha1.IPFamilyTypeIPV4), "Specify the IP protocol version used by network devices, common IP families include IPv4 and IPv6.")
+ flags.StringVar(&o.UseProxy, "use-proxy", "false", "Set whether to enable proxy.")
+ flags.IntVarP(&o.WaitTime, "wait-time", "", utils.DefaultWaitTime, "Wait the specified time for the Kosmos install ready.")
+
+ flags.StringVar(&o.CertEncode, "cert-encode", cert.GetCrtEncode(), "cert base64 string for node server.")
+ flags.StringVar(&o.KeyEncode, "key-encode", cert.GetKeyEncode(), "key base64 string for node server.")
return cmd
}
@@ -110,14 +131,19 @@ func (o *CommandInstallOptions) Complete(f ctlutil.Factory) error {
}
}
- o.Client, err = kubernetes.NewForConfig(config)
+ o.KosmosClient, err = versioned.NewForConfig(config)
if err != nil {
- return fmt.Errorf("kosmosctl install complete error, generate basic client failed: %v", err)
+ return fmt.Errorf("kosmosctl install complete error, generate Kosmos client failed: %v", err)
}
- o.ExtensionsClient, err = extensionsclient.NewForConfig(config)
+ o.K8sClient, err = kubernetes.NewForConfig(config)
if err != nil {
- return fmt.Errorf("kosmosctl install complete error, generate extensions client failed: %v", err)
+ return fmt.Errorf("kosmosctl install complete error, generate K8s basic client failed: %v", err)
+ }
+
+ o.K8sExtensionsClient, err = extensionsclient.NewForConfig(config)
+ if err != nil {
+ return fmt.Errorf("kosmosctl install complete error, generate K8s extensions client failed: %v", err)
}
return nil
@@ -125,7 +151,7 @@ func (o *CommandInstallOptions) Complete(f ctlutil.Factory) error {
func (o *CommandInstallOptions) Validate() error {
if len(o.Namespace) == 0 {
- return fmt.Errorf("namespace must be specified")
+ return fmt.Errorf("kosmosctl install validate error, namespace is not valid")
}
return nil
@@ -134,25 +160,33 @@ func (o *CommandInstallOptions) Validate() error {
func (o *CommandInstallOptions) Run() error {
klog.Info("Kosmos starts installing.")
switch o.Module {
- case "coredns":
- err := o.runCoredns()
+ case utils.CoreDNS:
+ err := o.runCoreDNS()
if err != nil {
return err
}
- util.CheckInstall("coredns")
- case "clusterlink":
+ util.CheckInstall("CoreDNS")
+ case utils.ClusterLink:
err := o.runClusterlink()
if err != nil {
return err
}
+ err = o.createControlCluster()
+ if err != nil {
+ return err
+ }
util.CheckInstall("Clusterlink")
- case "clustertree":
+ case utils.ClusterTree:
err := o.runClustertree()
if err != nil {
return err
}
+ err = o.createControlCluster()
+ if err != nil {
+ return err
+ }
util.CheckInstall("Clustertree")
- case "all":
+ case utils.All:
err := o.runClusterlink()
if err != nil {
return err
@@ -161,6 +195,10 @@ func (o *CommandInstallOptions) Run() error {
if err != nil {
return err
}
+ err = o.createControlCluster()
+ if err != nil {
+ return err
+ }
util.CheckInstall("Clusterlink && Clustertree")
}
@@ -168,89 +206,90 @@ func (o *CommandInstallOptions) Run() error {
}
func (o *CommandInstallOptions) runClusterlink() error {
- klog.Info("Start creating kosmos-clusterlink...")
+ klog.Info("Start creating Kosmos-Clusterlink...")
namespace := &corev1.Namespace{}
namespace.Name = o.Namespace
- _, err := o.Client.CoreV1().Namespaces().Create(context.TODO(), namespace, metav1.CreateOptions{})
+ _, err := o.K8sClient.CoreV1().Namespaces().Create(context.TODO(), namespace, metav1.CreateOptions{})
if err != nil {
if !apierrors.IsAlreadyExists(err) {
return fmt.Errorf("kosmosctl install clusterlink run error, namespace options failed: %v", err)
}
}
- klog.Info("Namespace kosmos-system has been created.")
+ klog.Info("Namespace " + namespace.Name + " has been created.")
- klog.Info("Start creating kosmos-clusterlink ServiceAccount...")
- clusterlinkServiceAccount, err := util.GenerateServiceAccount(manifest.ClusterlinkNetworkManagerServiceAccount, manifest.ServiceAccountReplace{
+ klog.Info("Start creating Kosmos-Clusterlink network-manager RBAC...")
+ networkManagerSA, err := util.GenerateServiceAccount(manifest.ClusterlinkNetworkManagerServiceAccount, manifest.ServiceAccountReplace{
Namespace: o.Namespace,
})
if err != nil {
return err
}
- _, err = o.Client.CoreV1().ServiceAccounts(o.Namespace).Create(context.TODO(), clusterlinkServiceAccount, metav1.CreateOptions{})
+ _, err = o.K8sClient.CoreV1().ServiceAccounts(o.Namespace).Create(context.TODO(), networkManagerSA, metav1.CreateOptions{})
if err != nil {
if !apierrors.IsAlreadyExists(err) {
- return fmt.Errorf("kosmosctl install clusterlink run error, serviceaccount options failed: %v", err)
+ return fmt.Errorf("kosmosctl install clusterlink run error, network-manager serviceaccount options failed: %v", err)
}
}
- klog.Info("ServiceAccount clusterlink-network-manager has been created.")
+ klog.Info("ServiceAccount " + networkManagerSA.Name + " has been created.")
- klog.Info("Start creating kosmos-clusterlink ClusterRole...")
- clusterlinkClusterRole, err := util.GenerateClusterRole(manifest.ClusterlinkNetworkManagerClusterRole, nil)
+ networkManagerCR, err := util.GenerateClusterRole(manifest.ClusterlinkNetworkManagerClusterRole, nil)
if err != nil {
return err
}
- _, err = o.Client.RbacV1().ClusterRoles().Create(context.TODO(), clusterlinkClusterRole, metav1.CreateOptions{})
+ _, err = o.K8sClient.RbacV1().ClusterRoles().Create(context.TODO(), networkManagerCR, metav1.CreateOptions{})
if err != nil {
if !apierrors.IsAlreadyExists(err) {
- return fmt.Errorf("kosmosctl install clusterlink run error, clusterrole options failed: %v", err)
+ return fmt.Errorf("kosmosctl install clusterlink run error, network-manager clusterrole options failed: %v", err)
}
}
- klog.Info("ClusterRole clusterlink-network-manager has been created.")
+ klog.Info("ClusterRole " + networkManagerCR.Name + " has been created.")
- klog.Info("Start creating kosmos-clusterlink ClusterRoleBinding...")
- clusterlinkClusterRoleBinding, err := util.GenerateClusterRoleBinding(manifest.ClusterlinkNetworkManagerClusterRoleBinding, manifest.ClusterRoleBindingReplace{
+ networkManagerCRB, err := util.GenerateClusterRoleBinding(manifest.ClusterlinkNetworkManagerClusterRoleBinding, manifest.ClusterRoleBindingReplace{
Namespace: o.Namespace,
})
if err != nil {
return err
}
- _, err = o.Client.RbacV1().ClusterRoleBindings().Create(context.TODO(), clusterlinkClusterRoleBinding, metav1.CreateOptions{})
+ _, err = o.K8sClient.RbacV1().ClusterRoleBindings().Create(context.TODO(), networkManagerCRB, metav1.CreateOptions{})
if err != nil {
if !apierrors.IsAlreadyExists(err) {
- return fmt.Errorf("kosmosctl install clusterlink run error, clusterrolebinding options failed: %v", err)
+ return fmt.Errorf("kosmosctl install clusterlink run error, network-manager clusterrolebinding options failed: %v", err)
}
}
- klog.Info("ClusterRoleBinding clusterlink-network-manager has been created.")
+ klog.Info("ClusterRoleBinding " + networkManagerCRB.Name + " has been created.")
- klog.Info("Attempting to create clusterlink CRDs...")
+ klog.Info("Attempting to create Kosmos-Clusterlink CRDs...")
crds := apiextensionsv1.CustomResourceDefinitionList{}
- clusterlinkCluster, err := util.GenerateCustomResourceDefinition(manifest.ClusterlinkCluster, manifest.ClusterlinkReplace{
+ clusterlinkCluster, err := util.GenerateCustomResourceDefinition(manifest.Cluster, manifest.CRDReplace{
Namespace: o.Namespace,
})
if err != nil {
return err
}
- clusterlinkClusterNode, err := util.GenerateCustomResourceDefinition(manifest.ClusterlinkClusterNode, nil)
+ clusterlinkClusterNode, err := util.GenerateCustomResourceDefinition(manifest.ClusterNode, nil)
if err != nil {
return err
}
- clusterlinkNodeConfig, err := util.GenerateCustomResourceDefinition(manifest.ClusterlinkNodeConfig, nil)
+ clusterlinkNodeConfig, err := util.GenerateCustomResourceDefinition(manifest.NodeConfig, nil)
if err != nil {
return err
}
crds.Items = append(crds.Items, *clusterlinkCluster, *clusterlinkClusterNode, *clusterlinkNodeConfig)
for i := range crds.Items {
- _, err = o.ExtensionsClient.ApiextensionsV1().CustomResourceDefinitions().Create(context.Background(), &crds.Items[i], metav1.CreateOptions{})
+ _, err = o.K8sExtensionsClient.ApiextensionsV1().CustomResourceDefinitions().Create(context.Background(), &crds.Items[i], metav1.CreateOptions{})
if err != nil {
- if !apierrors.IsAlreadyExists(err) {
+ if apierrors.IsAlreadyExists(err) {
+ klog.Warningf("CRD %s already exists, skipping creation", crds.Items[i].Name)
+ continue
+ } else {
return fmt.Errorf("kosmosctl install clusterlink run error, crd options failed: %v", err)
}
}
klog.Info("Create CRD " + crds.Items[i].Name + " successful.")
}
- klog.Info("Start creating kosmos-clusterlink Deployment...")
- clusterlinkDeployment, err := util.GenerateDeployment(manifest.ClusterlinkNetworkManagerDeployment, manifest.DeploymentReplace{
+ klog.Info("Start creating Kosmos-Clusterlink network-manager Deployment...")
+ networkManagerDeploy, err := util.GenerateDeployment(manifest.ClusterlinkNetworkManagerDeployment, manifest.DeploymentReplace{
Namespace: o.Namespace,
ImageRepository: o.ImageRegistry,
Version: version.GetReleaseVersion().PatchRelease(),
@@ -258,17 +297,38 @@ func (o *CommandInstallOptions) runClusterlink() error {
if err != nil {
return err
}
- _, err = o.Client.AppsV1().Deployments(o.Namespace).Create(context.Background(), clusterlinkDeployment, metav1.CreateOptions{})
+ _, err = o.K8sClient.AppsV1().Deployments(o.Namespace).Create(context.Background(), networkManagerDeploy, metav1.CreateOptions{})
if err != nil {
if !apierrors.IsAlreadyExists(err) {
- return fmt.Errorf("kosmosctl install clusterlink run error, deployment options failed: %v", err)
+ return fmt.Errorf("kosmosctl install clusterlink run error, network-manager deployment options failed: %v", err)
}
}
- label := map[string]string{"app": clusterlinkDeployment.Labels["app"]}
- if err = util.WaitPodReady(o.Client, clusterlinkDeployment.Namespace, util.MapToString(label), o.WaitTime); err != nil {
- return fmt.Errorf("kosmosctl install clusterlink run error, deployment options failed: %v", err)
+ networkManagerLabel := map[string]string{"app": networkManagerDeploy.Labels["app"]}
+ if err = util.WaitPodReady(o.K8sClient, networkManagerDeploy.Namespace, util.MapToString(networkManagerLabel), o.WaitTime); err != nil {
+ return fmt.Errorf("kosmosctl install clusterlink run error, network-manager deployment options failed: %v", err)
} else {
- klog.Info("Deployment clusterlink-network-manager has been created.")
+ klog.Info("Deployment " + networkManagerDeploy.Name + " has been created.")
+ }
+
+ operatorDeploy, err := util.GenerateDeployment(manifest.KosmosOperatorDeployment, manifest.DeploymentReplace{
+ Namespace: o.Namespace,
+ Version: version.GetReleaseVersion().PatchRelease(),
+ UseProxy: o.UseProxy,
+ ImageRepository: o.ImageRegistry,
+ })
+ if err != nil {
+ return fmt.Errorf("kosmosctl install operator run error, generate deployment failed: %s", err)
+ }
+ _, err = o.K8sClient.AppsV1().Deployments(operatorDeploy.Namespace).Get(context.TODO(), operatorDeploy.Name, metav1.GetOptions{})
+ if err != nil {
+ if apierrors.IsNotFound(err) {
+ err = o.createOperator()
+ if err != nil {
+ return err
+ }
+ } else {
+ return fmt.Errorf("kosmosctl install operator run error, get operator deployment failed: %s", err)
+ }
}
return nil
@@ -278,69 +338,71 @@ func (o *CommandInstallOptions) runClustertree() error {
klog.Info("Start creating kosmos-clustertree...")
namespace := &corev1.Namespace{}
namespace.Name = o.Namespace
- _, err := o.Client.CoreV1().Namespaces().Create(context.TODO(), namespace, metav1.CreateOptions{})
+ _, err := o.K8sClient.CoreV1().Namespaces().Create(context.TODO(), namespace, metav1.CreateOptions{})
if err != nil {
if !apierrors.IsAlreadyExists(err) {
return fmt.Errorf("kosmosctl install clustertree run error, namespace options failed: %v", err)
}
}
- klog.Info("Namespace kosmos-system has been created.")
+ klog.Info("Namespace " + o.Namespace + " has been created.")
klog.Info("Start creating kosmos-clustertree ServiceAccount...")
- clustertreeServiceAccount, err := util.GenerateServiceAccount(manifest.ClusterTreeKnodeManagerServiceAccount, manifest.ServiceAccountReplace{
+ clustertreeSA, err := util.GenerateServiceAccount(manifest.ClusterTreeServiceAccount, manifest.ServiceAccountReplace{
Namespace: o.Namespace,
})
if err != nil {
return err
}
- _, err = o.Client.CoreV1().ServiceAccounts(o.Namespace).Create(context.TODO(), clustertreeServiceAccount, metav1.CreateOptions{})
+ _, err = o.K8sClient.CoreV1().ServiceAccounts(o.Namespace).Create(context.TODO(), clustertreeSA, metav1.CreateOptions{})
if err != nil {
if !apierrors.IsAlreadyExists(err) {
return fmt.Errorf("kosmosctl install clustertree run error, serviceaccount options failed: %v", err)
}
}
- klog.Info("ServiceAccount clustertree-cluster-manager has been created.")
+ klog.Info("ServiceAccount " + clustertreeSA.Name + " has been created.")
klog.Info("Start creating kosmos-clustertree ClusterRole...")
- clustertreeClusterRole, err := util.GenerateClusterRole(manifest.ClusterTreeKnodeManagerClusterRole, nil)
+ clustertreeCR, err := util.GenerateClusterRole(manifest.ClusterTreeClusterRole, nil)
if err != nil {
return err
}
- _, err = o.Client.RbacV1().ClusterRoles().Create(context.TODO(), clustertreeClusterRole, metav1.CreateOptions{})
+ _, err = o.K8sClient.RbacV1().ClusterRoles().Create(context.TODO(), clustertreeCR, metav1.CreateOptions{})
if err != nil {
if !apierrors.IsAlreadyExists(err) {
return fmt.Errorf("kosmosctl install clustertree run error, clusterrole options failed: %v", err)
}
}
- klog.Info("ClusterRole clustertree-knode has been created.")
+ klog.Info("ClusterRole " + clustertreeCR.Name + " has been created.")
klog.Info("Start creating kosmos-clustertree ClusterRoleBinding...")
- clustertreeClusterRoleBinding, err := util.GenerateClusterRoleBinding(manifest.ClusterTreeKnodeManagerClusterRoleBinding, manifest.ClusterRoleBindingReplace{
+ clustertreeCRB, err := util.GenerateClusterRoleBinding(manifest.ClusterTreeClusterRoleBinding, manifest.ClusterRoleBindingReplace{
Namespace: o.Namespace,
})
if err != nil {
return err
}
- _, err = o.Client.RbacV1().ClusterRoleBindings().Create(context.TODO(), clustertreeClusterRoleBinding, metav1.CreateOptions{})
+ _, err = o.K8sClient.RbacV1().ClusterRoleBindings().Create(context.TODO(), clustertreeCRB, metav1.CreateOptions{})
if err != nil {
if !apierrors.IsAlreadyExists(err) {
return fmt.Errorf("kosmosctl install clustertree run error, clusterrolebinding options failed: %v", err)
}
}
- klog.Info("ClusterRoleBinding clustertree-knode has been created.")
+ klog.Info("ClusterRoleBinding " + clustertreeCRB.Name + " has been created.")
- klog.Info("Attempting to create kosmos-clustertree knode CRDs...")
- clustertreeKnode, err := util.GenerateCustomResourceDefinition(manifest.ClusterTreeKnode, nil)
+ klog.Info("Attempting to create kosmos-clustertree CRDs...")
+ clustertreeCluster, err := util.GenerateCustomResourceDefinition(manifest.Cluster, nil)
if err != nil {
return err
}
- _, err = o.ExtensionsClient.ApiextensionsV1().CustomResourceDefinitions().Create(context.Background(), clustertreeKnode, metav1.CreateOptions{})
+ _, err = o.K8sExtensionsClient.ApiextensionsV1().CustomResourceDefinitions().Create(context.Background(), clustertreeCluster, metav1.CreateOptions{})
if err != nil {
- if !apierrors.IsAlreadyExists(err) {
+ if apierrors.IsAlreadyExists(err) {
+ klog.Warningf("CRD %s already exists, skipping creation", clustertreeCluster.Name)
+ } else {
return fmt.Errorf("kosmosctl install clustertree run error, crd options failed: %v", err)
}
}
- klog.Info("Create CRD " + clustertreeKnode.Name + " successful.")
+ klog.Info("Create CRD " + clustertreeCluster.Name + " successful.")
klog.Info("Start creating kosmos-clustertree ConfigMap...")
clustertreeConfigMap := &corev1.ConfigMap{
@@ -352,7 +414,7 @@ func (o *CommandInstallOptions) runClustertree() error {
"kubeconfig": string(o.HostKubeConfigStream),
},
}
- _, err = o.Client.CoreV1().ConfigMaps(o.Namespace).Create(context.TODO(), clustertreeConfigMap, metav1.CreateOptions{})
+ _, err = o.K8sClient.CoreV1().ConfigMaps(o.Namespace).Create(context.TODO(), clustertreeConfigMap, metav1.CreateOptions{})
if err != nil {
if !apierrors.IsAlreadyExists(err) {
return fmt.Errorf("kosmosctl install clustertree run error, configmap options failed: %v", err)
@@ -360,8 +422,25 @@ func (o *CommandInstallOptions) runClustertree() error {
}
klog.Info("ConfigMap host-kubeconfig has been created.")
+ klog.Info("Start creating kosmos-clustertree Secret...")
+ clustertreeSecret, err := util.GenerateSecret(manifest.ClusterTreeClusterManagerSecret, manifest.SecretReplace{
+ Namespace: o.Namespace,
+ Cert: o.CertEncode,
+ Key: o.KeyEncode,
+ })
+ if err != nil {
+ return err
+ }
+ _, err = o.K8sClient.CoreV1().Secrets(o.Namespace).Create(context.Background(), clustertreeSecret, metav1.CreateOptions{})
+ if err != nil {
+ if !apierrors.IsAlreadyExists(err) {
+ return fmt.Errorf("kosmosctl install clustertree run error, secret options failed: %v", err)
+ }
+ }
+ klog.Info("Secret " + clustertreeSecret.Name + " has been created.")
+
klog.Info("Start creating kosmos-clustertree Deployment...")
- clustertreeDeployment, err := util.GenerateDeployment(manifest.ClusterTreeKnodeManagerDeployment, manifest.DeploymentReplace{
+ clustertreeDeploy, err := util.GenerateDeployment(manifest.ClusterTreeClusterManagerDeployment, manifest.DeploymentReplace{
Namespace: o.Namespace,
ImageRepository: o.ImageRegistry,
Version: version.GetReleaseVersion().PatchRelease(),
@@ -369,27 +448,280 @@ func (o *CommandInstallOptions) runClustertree() error {
if err != nil {
return err
}
- _, err = o.Client.AppsV1().Deployments(o.Namespace).Create(context.Background(), clustertreeDeployment, metav1.CreateOptions{})
+ _, err = o.K8sClient.AppsV1().Deployments(o.Namespace).Create(context.Background(), clustertreeDeploy, metav1.CreateOptions{})
if err != nil {
if !apierrors.IsAlreadyExists(err) {
return fmt.Errorf("kosmosctl install clustertree run error, deployment options failed: %v", err)
}
}
- label := map[string]string{"app": clustertreeDeployment.Labels["app"]}
- if err = util.WaitPodReady(o.Client, clustertreeDeployment.Namespace, util.MapToString(label), o.WaitTime); err != nil {
+ label := map[string]string{"app": clustertreeDeploy.Labels["app"]}
+ if err = util.WaitPodReady(o.K8sClient, clustertreeDeploy.Namespace, util.MapToString(label), o.WaitTime); err != nil {
return fmt.Errorf("kosmosctl install clustertree run error, deployment options failed: %v", err)
} else {
klog.Info("Deployment clustertree-cluster-manager has been created.")
}
+ operatorDeploy, err := util.GenerateDeployment(manifest.KosmosOperatorDeployment, manifest.DeploymentReplace{
+ Namespace: o.Namespace,
+ Version: version.GetReleaseVersion().PatchRelease(),
+ UseProxy: o.UseProxy,
+ ImageRepository: o.ImageRegistry,
+ })
+ if err != nil {
+ return fmt.Errorf("kosmosctl install operator run error, operator generate deployment failed: %s", err)
+ }
+ _, err = o.K8sClient.AppsV1().Deployments(operatorDeploy.Namespace).Get(context.TODO(), operatorDeploy.Name, metav1.GetOptions{})
+ if err != nil {
+ if apierrors.IsNotFound(err) {
+ err = o.createOperator()
+ if err != nil {
+ return err
+ }
+ } else {
+ return fmt.Errorf("kosmosctl install operator run error, operator get deployment failed: %s", err)
+ }
+ }
+
return nil
}
-func (o *CommandInstallOptions) runCoredns() error {
+func (o *CommandInstallOptions) createOperator() error {
+ klog.Info("Start creating Kosmos-Operator...")
+ operatorDeploy, err := util.GenerateDeployment(manifest.KosmosOperatorDeployment, manifest.DeploymentReplace{
+ Namespace: o.Namespace,
+ Version: version.GetReleaseVersion().PatchRelease(),
+ UseProxy: o.UseProxy,
+ ImageRepository: o.ImageRegistry,
+ })
+ if err != nil {
+ return fmt.Errorf("kosmosctl install operator run error, operator generate deployment failed: %s", err)
+ }
+ _, err = o.K8sClient.AppsV1().Deployments(operatorDeploy.Namespace).Create(context.TODO(), operatorDeploy, metav1.CreateOptions{})
+ if err != nil && !apierrors.IsAlreadyExists(err) {
+ return fmt.Errorf("kosmosctl install operator run error, operator options deployment failed: %s", err)
+ }
+
+ operatorSecret := &corev1.Secret{
+ TypeMeta: metav1.TypeMeta{},
+ ObjectMeta: metav1.ObjectMeta{
+ Name: utils.ControlPanelSecretName,
+ Namespace: o.Namespace,
+ },
+ Data: map[string][]byte{
+ "kubeconfig": o.HostKubeConfigStream,
+ },
+ }
+ _, err = o.K8sClient.CoreV1().Secrets(operatorSecret.Namespace).Create(context.TODO(), operatorSecret, metav1.CreateOptions{})
+ if err != nil && !apierrors.IsAlreadyExists(err) {
+ return fmt.Errorf("kosmosctl install operator run error, operator options secret failed: %s", err)
+ }
+
+ operatorCR, err := util.GenerateClusterRole(manifest.KosmosClusterRole, nil)
+ if err != nil {
+ return fmt.Errorf("kosmosctl install operator run error, generate operator clusterrole failed: %s", err)
+ }
+ _, err = o.K8sClient.RbacV1().ClusterRoles().Create(context.TODO(), operatorCR, metav1.CreateOptions{})
+ if err != nil && !apierrors.IsAlreadyExists(err) {
+ return fmt.Errorf("kosmosctl install operator run error, operator options clusterrole failed: %s", err)
+ }
+
+ operatorCRB, err := util.GenerateClusterRoleBinding(manifest.KosmosClusterRoleBinding, manifest.ClusterRoleBindingReplace{
+ Namespace: o.Namespace,
+ })
+ if err != nil {
+ return fmt.Errorf("kosmosctl install operator run error, generate operator clusterrolebinding failed: %s", err)
+ }
+ _, err = o.K8sClient.RbacV1().ClusterRoleBindings().Create(context.TODO(), operatorCRB, metav1.CreateOptions{})
+ if err != nil && !apierrors.IsAlreadyExists(err) {
+ return fmt.Errorf("kosmosctl install operator run error, operator options clusterrolebinding failed: %s", err)
+ }
+
+ operatorSA, err := util.GenerateServiceAccount(manifest.KosmosOperatorServiceAccount, manifest.ServiceAccountReplace{
+ Namespace: o.Namespace,
+ })
+ if err != nil {
+ return fmt.Errorf("kosmosctl install operator run error, generate operator serviceaccount failed: %s", err)
+ }
+ _, err = o.K8sClient.CoreV1().ServiceAccounts(operatorSA.Namespace).Create(context.TODO(), operatorSA, metav1.CreateOptions{})
+ if err != nil && !apierrors.IsAlreadyExists(err) {
+ return fmt.Errorf("kosmosctl install operator run error, operator options serviceaccount failed: %s", err)
+ }
+
+ operatorLabel := map[string]string{"app": operatorDeploy.Labels["app"]}
+ if err = util.WaitPodReady(o.K8sClient, operatorDeploy.Namespace, util.MapToString(operatorLabel), o.WaitTime); err != nil {
+ return fmt.Errorf("kosmosctl install operator run error, operator options deployment failed: %s", err)
+ } else {
+ klog.Info("Operator deployment " + operatorDeploy.Name + " has been created.")
+ }
+
+ return nil
+}
+
+func (o *CommandInstallOptions) createControlCluster() error {
+ switch o.Module {
+ case utils.ClusterLink:
+ controlCluster, err := o.KosmosClient.KosmosV1alpha1().Clusters().Get(context.TODO(), utils.DefaultClusterName, metav1.GetOptions{})
+ if err != nil {
+ if apierrors.IsNotFound(err) {
+ clusterArgs := []string{"cluster"}
+ joinOptions := join.CommandJoinOptions{
+ Name: utils.DefaultClusterName,
+ Namespace: o.Namespace,
+ ImageRegistry: o.ImageRegistry,
+ KubeConfigStream: o.HostKubeConfigStream,
+ WaitTime: o.WaitTime,
+ KosmosClient: o.KosmosClient,
+ K8sClient: o.K8sClient,
+ K8sExtensionsClient: o.K8sExtensionsClient,
+ RootFlag: true,
+ EnableLink: true,
+ CNI: o.CNI,
+ DefaultNICName: o.DefaultNICName,
+ NetworkType: o.NetworkType,
+ IpFamily: o.IpFamily,
+ UseProxy: o.UseProxy,
+ }
+
+ err = joinOptions.Run(clusterArgs)
+ if err != nil {
+ return fmt.Errorf("kosmosctl install run error, join control plane cluster failed: %s", err)
+ }
+ } else {
+ return fmt.Errorf("kosmosctl install run error, get control plane cluster failed: %s", err)
+ }
+ }
+
+ if len(controlCluster.Name) > 0 {
+ if !controlCluster.Spec.ClusterLinkOptions.Enable {
+ controlCluster.Spec.ClusterLinkOptions.Enable = true
+ controlCluster.Spec.ClusterLinkOptions.CNI = o.CNI
+ controlCluster.Spec.ClusterLinkOptions.DefaultNICName = o.DefaultNICName
+ switch o.NetworkType {
+ case utils.NetworkTypeGateway:
+ controlCluster.Spec.ClusterLinkOptions.NetworkType = v1alpha1.NetWorkTypeGateWay
+ case utils.NetworkTypeP2P:
+ controlCluster.Spec.ClusterLinkOptions.NetworkType = v1alpha1.NetworkTypeP2P
+ }
+
+ switch o.IpFamily {
+ case utils.DefaultIPv4:
+ controlCluster.Spec.ClusterLinkOptions.IPFamily = v1alpha1.IPFamilyTypeIPV4
+ case utils.DefaultIPv6:
+ controlCluster.Spec.ClusterLinkOptions.IPFamily = v1alpha1.IPFamilyTypeIPV6
+ }
+ _, err = o.KosmosClient.KosmosV1alpha1().Clusters().Update(context.TODO(), controlCluster, metav1.UpdateOptions{})
+ if err != nil {
+ klog.Infof("ControlCluster-Link: %v", controlCluster)
+ return fmt.Errorf("kosmosctl install clusterlink run error, update control plane cluster failed: %s", err)
+ }
+ }
+ }
+ case utils.ClusterTree:
+ controlCluster, err := o.KosmosClient.KosmosV1alpha1().Clusters().Get(context.TODO(), utils.DefaultClusterName, metav1.GetOptions{})
+ if err != nil {
+ if apierrors.IsNotFound(err) {
+ clusterArgs := []string{"cluster"}
+ joinOptions := join.CommandJoinOptions{
+ Name: utils.DefaultClusterName,
+ Namespace: o.Namespace,
+ ImageRegistry: o.ImageRegistry,
+ KubeConfigStream: o.HostKubeConfigStream,
+ K8sExtensionsClient: o.K8sExtensionsClient,
+ WaitTime: o.WaitTime,
+ KosmosClient: o.KosmosClient,
+ K8sClient: o.K8sClient,
+ RootFlag: true,
+ EnableTree: true,
+ }
+
+ err = joinOptions.Run(clusterArgs)
+ if err != nil {
+ return fmt.Errorf("kosmosctl install run error, join control plane cluster failed: %s", err)
+ }
+ } else {
+ return fmt.Errorf("kosmosctl install run error, get control plane cluster failed: %s", err)
+ }
+ }
+
+ if len(controlCluster.Name) > 0 {
+ if !controlCluster.Spec.ClusterTreeOptions.Enable {
+ controlCluster.Spec.ClusterTreeOptions.Enable = true
+ _, err = o.KosmosClient.KosmosV1alpha1().Clusters().Update(context.TODO(), controlCluster, metav1.UpdateOptions{})
+ if err != nil {
+ klog.Infof("ControlCluster-Tree: %v", controlCluster)
+ return fmt.Errorf("kosmosctl install clustertree run error, update control plane cluster failed: %s", err)
+ }
+ }
+ }
+ case utils.All:
+ controlCluster, err := o.KosmosClient.KosmosV1alpha1().Clusters().Get(context.TODO(), utils.DefaultClusterName, metav1.GetOptions{})
+ if err != nil {
+ if apierrors.IsNotFound(err) {
+ clusterArgs := []string{"cluster"}
+ joinOptions := join.CommandJoinOptions{
+ Name: utils.DefaultClusterName,
+ Namespace: o.Namespace,
+ ImageRegistry: o.ImageRegistry,
+ KubeConfigStream: o.HostKubeConfigStream,
+ K8sExtensionsClient: o.K8sExtensionsClient,
+ WaitTime: o.WaitTime,
+ KosmosClient: o.KosmosClient,
+ K8sClient: o.K8sClient,
+ RootFlag: true,
+ EnableLink: true,
+ CNI: o.CNI,
+ DefaultNICName: o.DefaultNICName,
+ NetworkType: o.NetworkType,
+ IpFamily: o.IpFamily,
+ UseProxy: o.UseProxy,
+ EnableTree: true,
+ }
+
+ err = joinOptions.Run(clusterArgs)
+ if err != nil {
+ return fmt.Errorf("kosmosctl install run error, join control plane cluster failed: %s", err)
+ }
+ } else {
+ return fmt.Errorf("kosmosctl install run error, get control plane cluster failed: %s", err)
+ }
+ }
+
+ if len(controlCluster.Name) > 0 {
+ if !controlCluster.Spec.ClusterTreeOptions.Enable || !controlCluster.Spec.ClusterLinkOptions.Enable {
+ controlCluster.Spec.ClusterTreeOptions.Enable = true
+ controlCluster.Spec.ClusterLinkOptions.Enable = true
+ controlCluster.Spec.ClusterLinkOptions.CNI = o.CNI
+ controlCluster.Spec.ClusterLinkOptions.DefaultNICName = o.DefaultNICName
+ switch o.NetworkType {
+ case utils.NetworkTypeGateway:
+ controlCluster.Spec.ClusterLinkOptions.NetworkType = v1alpha1.NetWorkTypeGateWay
+ case utils.NetworkTypeP2P:
+ controlCluster.Spec.ClusterLinkOptions.NetworkType = v1alpha1.NetworkTypeP2P
+ }
+
+ switch o.IpFamily {
+ case utils.DefaultIPv4:
+ controlCluster.Spec.ClusterLinkOptions.IPFamily = v1alpha1.IPFamilyTypeIPV4
+ case utils.DefaultIPv6:
+ controlCluster.Spec.ClusterLinkOptions.IPFamily = v1alpha1.IPFamilyTypeIPV6
+ }
+ _, err = o.KosmosClient.KosmosV1alpha1().Clusters().Update(context.TODO(), controlCluster, metav1.UpdateOptions{})
+ if err != nil {
+ klog.Infof("ControlCluster-All: %v", controlCluster)
+ return fmt.Errorf("kosmosctl install run error, update control plane cluster failed: %s", err)
+ }
+ }
+ }
+ }
+
+ return nil
+}
+
+func (o *CommandInstallOptions) runCoreDNS() error {
klog.Info("Start creating kosmos-coredns...")
namespace := &corev1.Namespace{}
namespace.Name = o.Namespace
- _, err := o.Client.CoreV1().Namespaces().Create(context.TODO(), namespace, metav1.CreateOptions{})
+ _, err := o.K8sClient.CoreV1().Namespaces().Create(context.TODO(), namespace, metav1.CreateOptions{})
if err != nil {
if !apierrors.IsAlreadyExists(err) {
return fmt.Errorf("kosmosctl install coredns run error, namespace options failed: %v", err)
@@ -404,7 +736,7 @@ func (o *CommandInstallOptions) runCoredns() error {
if err != nil {
return err
}
- _, err = o.Client.CoreV1().ServiceAccounts(o.Namespace).Create(context.TODO(), sa, metav1.CreateOptions{})
+ _, err = o.K8sClient.CoreV1().ServiceAccounts(o.Namespace).Create(context.TODO(), sa, metav1.CreateOptions{})
if err != nil {
if !apierrors.IsAlreadyExists(err) {
return fmt.Errorf("kosmosctl install coredns run error, serviceaccount options failed: %v", err)
@@ -417,7 +749,7 @@ func (o *CommandInstallOptions) runCoredns() error {
if err != nil {
return err
}
- _, err = o.Client.RbacV1().ClusterRoles().Create(context.TODO(), cRole, metav1.CreateOptions{})
+ _, err = o.K8sClient.RbacV1().ClusterRoles().Create(context.TODO(), cRole, metav1.CreateOptions{})
if err != nil {
if !apierrors.IsAlreadyExists(err) {
return fmt.Errorf("kosmosctl install coredns run error, clusterrole options failed: %v", err)
@@ -432,7 +764,7 @@ func (o *CommandInstallOptions) runCoredns() error {
if err != nil {
return err
}
- _, err = o.Client.RbacV1().ClusterRoleBindings().Create(context.TODO(), crb, metav1.CreateOptions{})
+ _, err = o.K8sClient.RbacV1().ClusterRoleBindings().Create(context.TODO(), crb, metav1.CreateOptions{})
if err != nil {
if !apierrors.IsAlreadyExists(err) {
return fmt.Errorf("kosmosctl install coredns run error, clusterrolebinding options failed: %v", err)
@@ -447,13 +779,13 @@ func (o *CommandInstallOptions) runCoredns() error {
if err != nil {
return err
}
- _, err = o.Client.CoreV1().ConfigMaps(o.Namespace).Create(context.TODO(), coreFile, metav1.CreateOptions{})
+ _, err = o.K8sClient.CoreV1().ConfigMaps(o.Namespace).Create(context.TODO(), coreFile, metav1.CreateOptions{})
if err != nil {
if !apierrors.IsAlreadyExists(err) {
return fmt.Errorf("kosmosctl install coredns coreFile run error, configmap options failed: %v", err)
}
}
- klog.Info("ConfigMap corefile has been created.")
+ klog.Infof("ConfigMap %s has been created.", coreFile.Name)
customerHosts, err := util.GenerateConfigMap(manifest.CorednsCustomerHosts, manifest.ConfigmapReplace{
Namespace: o.Namespace,
@@ -461,24 +793,26 @@ func (o *CommandInstallOptions) runCoredns() error {
if err != nil {
return err
}
- _, err = o.Client.CoreV1().ConfigMaps(o.Namespace).Create(context.TODO(), customerHosts, metav1.CreateOptions{})
+ _, err = o.K8sClient.CoreV1().ConfigMaps(o.Namespace).Create(context.TODO(), customerHosts, metav1.CreateOptions{})
if err != nil {
if !apierrors.IsAlreadyExists(err) {
return fmt.Errorf("kosmosctl install coredns customerHosts run error, configmap options failed: %v", err)
}
}
- klog.Info("ConfigMap customerHosts has been created.")
+ klog.Infof("ConfigMap %s has been created.", customerHosts.Name)
klog.Info("Attempting to create coredns CRDs, coredns reuses clusterlink's cluster CRD")
- crd, err := util.GenerateCustomResourceDefinition(manifest.ClusterlinkCluster, manifest.ClusterlinkReplace{
+ crd, err := util.GenerateCustomResourceDefinition(manifest.Cluster, manifest.CRDReplace{
Namespace: o.Namespace,
})
if err != nil {
return err
}
- _, err = o.ExtensionsClient.ApiextensionsV1().CustomResourceDefinitions().Create(context.Background(), crd, metav1.CreateOptions{})
+ _, err = o.K8sExtensionsClient.ApiextensionsV1().CustomResourceDefinitions().Create(context.Background(), crd, metav1.CreateOptions{})
if err != nil {
- if !apierrors.IsAlreadyExists(err) {
+ if apierrors.IsAlreadyExists(err) {
+ klog.Warningf("CRD %s already exists, skipping creation", crd.Name)
+ } else {
return fmt.Errorf("kosmosctl install coredns run error, crd options failed: %v", err)
}
}
@@ -492,13 +826,13 @@ func (o *CommandInstallOptions) runCoredns() error {
if err != nil {
return err
}
- _, err = o.Client.AppsV1().Deployments(o.Namespace).Create(context.Background(), deploy, metav1.CreateOptions{})
+ _, err = o.K8sClient.AppsV1().Deployments(o.Namespace).Create(context.Background(), deploy, metav1.CreateOptions{})
if err != nil {
if !apierrors.IsAlreadyExists(err) {
return fmt.Errorf("kosmosctl install coredns run error, deployment options failed: %v", err)
}
}
- if err = util.WaitDeploymentReady(o.Client, deploy, o.WaitTime); err != nil {
+ if err = util.WaitDeploymentReady(o.K8sClient, deploy, o.WaitTime); err != nil {
return fmt.Errorf("kosmosctl install coredns run error, deployment options failed: %v", err)
} else {
klog.Info("Deployment coredns has been created.")
@@ -511,7 +845,7 @@ func (o *CommandInstallOptions) runCoredns() error {
if err != nil {
return err
}
- _, err = o.Client.CoreV1().Services(o.Namespace).Create(context.Background(), svc, metav1.CreateOptions{})
+ _, err = o.K8sClient.CoreV1().Services(o.Namespace).Create(context.Background(), svc, metav1.CreateOptions{})
if err != nil {
if !apierrors.IsAlreadyExists(err) {
return fmt.Errorf("kosmosctl install coredns run error, service options failed: %v", err)
diff --git a/pkg/kosmosctl/join/join.go b/pkg/kosmosctl/join/join.go
index 02273ed27..d8018a8e4 100644
--- a/pkg/kosmosctl/join/join.go
+++ b/pkg/kosmosctl/join/join.go
@@ -2,18 +2,15 @@ package join
import (
"context"
- "encoding/base64"
"fmt"
"os"
"path/filepath"
"github.com/spf13/cobra"
corev1 "k8s.io/api/core/v1"
+ extensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
- "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
- k8syaml "k8s.io/apimachinery/pkg/runtime/serializer/yaml"
- "k8s.io/client-go/dynamic"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
@@ -23,41 +20,52 @@ import (
"k8s.io/kubectl/pkg/util/i18n"
"k8s.io/kubectl/pkg/util/templates"
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ "github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned"
"github.com/kosmos.io/kosmos/pkg/kosmosctl/manifest"
"github.com/kosmos.io/kosmos/pkg/kosmosctl/util"
"github.com/kosmos.io/kosmos/pkg/utils"
- "github.com/kosmos.io/kosmos/pkg/version"
)
var joinExample = templates.Examples(i18n.T(`
- # Join cluster resource from a directory containing cluster.yaml, e.g:
- kosmosctl join cluster --name=[cluster-name] --master-kubeconfig=[master-kubeconfig] --cluster-kubeconfig=[cluster-kubeconfig]
-
- # Join cluster resource without master-kubeconfig, e.g:
- kosmosctl join cluster --name=[cluster-name] --cluster-kubeconfig=[cluster-kubeconfig]
+ # Join cluster resource, e.g:
+ kosmosctl join cluster --name cluster-name --kubeconfig ~/kubeconfig/cluster-kubeconfig
- # Join knode resource, e.g:
- kosmosctl join knode --name=[knode-name] --master-kubeconfig=[master-kubeconfig] --cluster-kubeconfig=[cluster-kubeconfig]
+ # Join cluster resource and turn on Clusterlink, e.g:
+ kosmosctl join cluster --name cluster-name --kubeconfig ~/kubeconfig/cluster-kubeconfig --enable-link
- # Join knode resource without master-kubeconfig, e.g:
- kosmosctl join knode --name=[knode-name] --cluster-kubeconfig=[cluster-kubeconfig]
+ # Join cluster resource and turn on Clustertree, e.g:
+ kosmosctl join cluster --name cluster-name --kubeconfig ~/kubeconfig/cluster-kubeconfig --enable-tree
+
+ # Join cluster resource using parameter values other than the defaults, e.g:
+ kosmosctl join cluster --name cluster-name --kubeconfig ~/kubeconfig/cluster-kubeconfig --cni cni-name --default-nic nic-name
`))
type CommandJoinOptions struct {
- MasterKubeConfig string
- MasterKubeConfigStream []byte
- ClusterKubeConfig string
-
- Name string
+ Name string
+ Namespace string
+ ImageRegistry string
+ KubeConfig string
+ KubeConfigStream []byte
+ HostKubeConfig string
+ HostKubeConfigStream []byte
+ WaitTime int
+ RootFlag bool
+ EnableAll bool
+
+ EnableLink bool
CNI string
DefaultNICName string
- ImageRegistry string
NetworkType string
+ IpFamily string
UseProxy string
- WaitTime int
- Client kubernetes.Interface
- DynamicClient *dynamic.DynamicClient
+ EnableTree bool
+ LeafModel string
+
+ KosmosClient versioned.Interface
+ K8sClient kubernetes.Interface
+ K8sExtensionsClient extensionsclient.Interface
}
// NewCmdJoin join resource to Kosmos control plane.
@@ -80,59 +88,78 @@ func NewCmdJoin(f ctlutil.Factory) *cobra.Command {
}
flags := cmd.Flags()
- flags.StringVar(&o.MasterKubeConfig, "master-kubeconfig", "", "Absolute path to the master kubeconfig file.")
- flags.StringVar(&o.ClusterKubeConfig, "cluster-kubeconfig", "", "Absolute path to the cluster kubeconfig file.")
flags.StringVar(&o.Name, "name", "", "Specify the name of the resource to join.")
+ flags.StringVarP(&o.Namespace, "namespace", "n", utils.DefaultNamespace, "Kosmos namespace.")
+ flags.StringVar(&o.KubeConfig, "kubeconfig", "", "Absolute path to the cluster kubeconfig file.")
+ flags.StringVar(&o.HostKubeConfig, "host-kubeconfig", "", "Absolute path to the special host kubeconfig file.")
+ flags.StringVar(&o.ImageRegistry, "private-image-registry", utils.DefaultImageRepository, "Private image registry to pull images from. If set, all required images will be downloaded from it, which is useful in offline installation scenarios.")
+ flags.BoolVar(&o.RootFlag, "root-flag", false, "Tag the cluster as the root control cluster.")
+ flags.BoolVar(&o.EnableAll, "enable-all", false, "Turn on all modules.")
+ flags.BoolVar(&o.EnableLink, "enable-link", false, "Turn on clusterlink.")
flags.StringVar(&o.CNI, "cni", "", "The cluster is configured using cni and currently supports calico and flannel.")
flags.StringVar(&o.DefaultNICName, "default-nic", "", "Set default network interface card.")
- flags.StringVar(&o.ImageRegistry, "private-image-registry", utils.DefaultImageRepository, "Private image registry where pull images from. If set, all required images will be downloaded from it, it would be useful in offline installation scenarios. In addition, you still can use --kube-image-registry to specify the registry for Kubernetes's images.")
- flags.StringVar(&o.NetworkType, "network-type", utils.NetworkTypeP2P, "Set the cluster network connection mode, which supports gateway and p2p modes, p2p is used by default.")
+ flags.StringVar(&o.NetworkType, "network-type", utils.NetworkTypeGateway, "Set the cluster network connection mode, which supports gateway and p2p modes, gateway is used by default.")
+ flags.StringVar(&o.IpFamily, "ip-family", utils.DefaultIPv4, "Specify the IP protocol version used by network devices, common IP families include IPv4 and IPv6.")
flags.StringVar(&o.UseProxy, "use-proxy", "false", "Set whether to enable proxy.")
- flags.IntVarP(&o.WaitTime, "wait-time", "", 120, "Wait the specified time for the Kosmos install ready.")
+ flags.BoolVar(&o.EnableTree, "enable-tree", false, "Turn on clustertree.")
+ flags.StringVar(&o.LeafModel, "leaf-model", "", "Set the leaf cluster model; currently only the one-to-one model is supported.")
+ flags.IntVarP(&o.WaitTime, "wait-time", "", utils.DefaultWaitTime, "Wait the specified time for the Kosmos install ready.")
return cmd
}
func (o *CommandJoinOptions) Complete(f ctlutil.Factory) error {
- var masterConfig *rest.Config
+ var hostConfig *rest.Config
var clusterConfig *rest.Config
var err error
- if len(o.MasterKubeConfig) > 0 {
- masterConfig, err = clientcmd.BuildConfigFromFlags("", o.MasterKubeConfig)
+ if len(o.HostKubeConfig) > 0 {
+ hostConfig, err = clientcmd.BuildConfigFromFlags("", o.HostKubeConfig)
if err != nil {
- return fmt.Errorf("kosmosctl join complete error, generate masterConfig failed: %s", err)
+ return fmt.Errorf("kosmosctl join complete error, generate hostConfig failed: %s", err)
}
- o.MasterKubeConfigStream, err = os.ReadFile(o.MasterKubeConfig)
+ o.HostKubeConfigStream, err = os.ReadFile(o.HostKubeConfig)
if err != nil {
- return fmt.Errorf("kosmosctl join complete error, read masterconfig failed: %s", err)
+ return fmt.Errorf("kosmosctl join complete error, read hostConfig failed: %s", err)
}
} else {
- masterConfig, err = f.ToRESTConfig()
+ hostConfig, err = f.ToRESTConfig()
if err != nil {
- return fmt.Errorf("kosmosctl join complete error, generate masterConfig failed: %s", err)
+ return fmt.Errorf("kosmosctl join complete error, generate hostConfig failed: %s", err)
}
- o.MasterKubeConfigStream, err = os.ReadFile(filepath.Join(homedir.HomeDir(), ".kube", "config"))
+ o.HostKubeConfigStream, err = os.ReadFile(filepath.Join(homedir.HomeDir(), ".kube", "config"))
if err != nil {
- return fmt.Errorf("kosmosctl join complete error, read masterconfig failed: %s", err)
+ return fmt.Errorf("kosmosctl join complete error, read hostConfig failed: %s", err)
}
}
- o.DynamicClient, err = dynamic.NewForConfig(masterConfig)
+ o.KosmosClient, err = versioned.NewForConfig(hostConfig)
if err != nil {
- return fmt.Errorf("kosmosctl join complete error, generate dynamic client failed: %s", err)
+ return fmt.Errorf("kosmosctl join complete error, generate Kosmos client failed: %v", err)
}
- if len(o.ClusterKubeConfig) > 0 {
- clusterConfig, err = clientcmd.BuildConfigFromFlags("", o.ClusterKubeConfig)
+ if len(o.KubeConfig) > 0 {
+ o.KubeConfigStream, err = os.ReadFile(o.KubeConfig)
+ if err != nil {
+ return fmt.Errorf("kosmosctl join complete error, read KubeConfigStream failed: %s", err)
+ }
+
+ clusterConfig, err = clientcmd.BuildConfigFromFlags("", o.KubeConfig)
if err != nil {
return fmt.Errorf("kosmosctl join complete error, generate clusterConfig failed: %s", err)
}
- o.Client, err = kubernetes.NewForConfig(clusterConfig)
+ o.K8sClient, err = kubernetes.NewForConfig(clusterConfig)
if err != nil {
- return fmt.Errorf("kosmosctl join complete error, generate basic client failed: %v", err)
+ return fmt.Errorf("kosmosctl join complete error, generate K8s basic client failed: %v", err)
}
+
+ o.K8sExtensionsClient, err = extensionsclient.NewForConfig(clusterConfig)
+ if err != nil {
+ return fmt.Errorf("kosmosctl join complete error, generate K8s extensions client failed: %v", err)
+ }
+ } else {
+ return fmt.Errorf("kosmosctl join complete error, arg KubeConfig is required")
}
return nil
@@ -140,24 +167,21 @@ func (o *CommandJoinOptions) Complete(f ctlutil.Factory) error {
func (o *CommandJoinOptions) Validate(args []string) error {
if len(o.Name) == 0 {
- return fmt.Errorf("kosmosctl join validate error, resource name is not valid")
+ return fmt.Errorf("kosmosctl join validate error, name is not valid")
+ }
+
+ if len(o.Namespace) == 0 {
+ return fmt.Errorf("kosmosctl join validate error, namespace is not valid")
}
switch args[0] {
case "cluster":
- _, err := o.DynamicClient.Resource(util.ClusterGVR).Get(context.TODO(), o.Name, metav1.GetOptions{})
+ _, err := o.KosmosClient.KosmosV1alpha1().Clusters().Get(context.TODO(), o.Name, metav1.GetOptions{})
if err != nil {
if apierrors.IsAlreadyExists(err) {
return fmt.Errorf("kosmosctl join validate error, cluster already exists: %s", err)
}
}
- case "knode":
- _, err := o.DynamicClient.Resource(util.KnodeGVR).Get(context.TODO(), o.Name, metav1.GetOptions{})
- if err != nil && apierrors.IsAlreadyExists(err) {
- if apierrors.IsAlreadyExists(err) {
- return fmt.Errorf("kosmosctl join validate error, knode already exists: %s", err)
- }
- }
}
return nil
@@ -170,11 +194,6 @@ func (o *CommandJoinOptions) Run(args []string) error {
if err != nil {
return err
}
- case "knode":
- err := o.runKnode()
- if err != nil {
- return err
- }
}
return nil
@@ -182,141 +201,203 @@ func (o *CommandJoinOptions) Run(args []string) error {
func (o *CommandJoinOptions) runCluster() error {
klog.Info("Start registering cluster to kosmos control plane...")
- // 1. create cluster in master
- clusterByte, err := util.GenerateCustomResource(manifest.ClusterCR, manifest.ClusterReplace{
- ClusterName: o.Name,
- CNI: o.CNI,
- DefaultNICName: o.DefaultNICName,
- ImageRepository: o.ImageRegistry,
- NetworkType: o.NetworkType,
- })
- if err != nil {
- return err
+ if o.EnableAll {
+ o.EnableLink = true
+ o.EnableTree = true
}
- decoder := k8syaml.NewDecodingSerializer(unstructured.UnstructuredJSONScheme)
- obj := &unstructured.Unstructured{}
- _, _, err = decoder.Decode(clusterByte, nil, obj)
- if err != nil {
- return fmt.Errorf("(cluster) kosmosctl join run error, decode cluster cr failed: %s", err)
+
+ // create cluster in control panel
+ cluster := v1alpha1.Cluster{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: o.Name,
+ },
+ Spec: v1alpha1.ClusterSpec{
+ Kubeconfig: o.KubeConfigStream,
+ Namespace: o.Namespace,
+ ImageRepository: o.ImageRegistry,
+ ClusterLinkOptions: &v1alpha1.ClusterLinkOptions{
+ Enable: o.EnableLink,
+ BridgeCIDRs: v1alpha1.VxlanCIDRs{
+ IP: "220.0.0.0/8",
+ IP6: "9480::0/16",
+ },
+ LocalCIDRs: v1alpha1.VxlanCIDRs{
+ IP: "210.0.0.0/8",
+ IP6: "9470::0/16",
+ },
+ NetworkType: v1alpha1.NetWorkTypeGateWay,
+ IPFamily: v1alpha1.IPFamilyTypeIPV4,
+ },
+ ClusterTreeOptions: &v1alpha1.ClusterTreeOptions{
+ Enable: o.EnableTree,
+ },
+ },
+ }
+
+ if o.EnableLink {
+ switch o.NetworkType {
+ case utils.NetworkTypeP2P:
+ cluster.Spec.ClusterLinkOptions.NetworkType = v1alpha1.NetworkTypeP2P
+ default:
+ cluster.Spec.ClusterLinkOptions.NetworkType = v1alpha1.NetWorkTypeGateWay
+ }
+
+ switch o.IpFamily {
+ case utils.DefaultIPv4:
+ cluster.Spec.ClusterLinkOptions.IPFamily = v1alpha1.IPFamilyTypeIPV4
+ case utils.DefaultIPv6:
+ cluster.Spec.ClusterLinkOptions.IPFamily = v1alpha1.IPFamilyTypeIPV6
+ default:
+ cluster.Spec.ClusterLinkOptions.IPFamily = v1alpha1.IPFamilyTypeALL
+ }
+
+ cluster.Spec.ClusterLinkOptions.DefaultNICName = o.DefaultNICName
+ cluster.Spec.ClusterLinkOptions.CNI = o.CNI
+ }
+
+ if o.EnableTree {
+ serviceExport, err := util.GenerateCustomResourceDefinition(manifest.ServiceExport, nil)
+ if err != nil {
+ return err
+ }
+ _, err = o.K8sExtensionsClient.ApiextensionsV1().CustomResourceDefinitions().Create(context.Background(), serviceExport, metav1.CreateOptions{})
+ if err != nil {
+ if !apierrors.IsAlreadyExists(err) {
+ return fmt.Errorf("kosmosctl join run error, crd options failed: %v", err)
+ }
+ }
+ klog.Info("CRD " + serviceExport.Name + " has been created.")
+
+ serviceImport, err := util.GenerateCustomResourceDefinition(manifest.ServiceImport, nil)
+ if err != nil {
+ return err
+ }
+ _, err = o.K8sExtensionsClient.ApiextensionsV1().CustomResourceDefinitions().Create(context.Background(), serviceImport, metav1.CreateOptions{})
+ if err != nil {
+ if !apierrors.IsAlreadyExists(err) {
+ return fmt.Errorf("kosmosctl join run error, crd options failed: %v", err)
+ }
+ }
+ klog.Info("CRD " + serviceImport.Name + " has been created.")
+
+ if len(o.LeafModel) > 0 {
+ switch o.LeafModel {
+ case "one-to-one":
+ // ToDo Perform follow-up query based on the leaf cluster label
+ nodes, err := o.K8sClient.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{
+ LabelSelector: utils.KosmosNodeJoinLabel + "=" + utils.KosmosNodeJoinValue,
+ })
+ if err != nil {
+ return fmt.Errorf("kosmosctl join run error, list cluster node failed: %v", err)
+ }
+ var leafModels []v1alpha1.LeafModel
+ for _, n := range nodes.Items {
+ leafModel := v1alpha1.LeafModel{
+ LeafNodeName: n.Name,
+ Taints: []corev1.Taint{
+ {
+ Effect: utils.KosmosNodeTaintEffect,
+ Key: utils.KosmosNodeTaintKey,
+ Value: utils.KosmosNodeValue,
+ },
+ },
+ NodeSelector: v1alpha1.NodeSelector{
+ NodeName: n.Name,
+ },
+ }
+ leafModels = append(leafModels, leafModel)
+ }
+ cluster.Spec.ClusterTreeOptions.LeafModels = leafModels
+ }
+ }
}
- _, err = o.DynamicClient.Resource(util.ClusterGVR).Namespace("").Create(context.TODO(), obj, metav1.CreateOptions{})
+
+ if o.RootFlag {
+ cluster.Annotations = map[string]string{
+ utils.RootClusterAnnotationKey: utils.RootClusterAnnotationValue,
+ }
+ }
+
+ _, err := o.KosmosClient.KosmosV1alpha1().Clusters().Create(context.TODO(), &cluster, metav1.CreateOptions{})
if err != nil {
return fmt.Errorf("kosmosctl join run error, create cluster failed: %s", err)
}
- klog.Info("Cluster: " + o.Name + " has been created.")
+ klog.Info("Cluster " + o.Name + " has been created.")
- // 2. create namespace in member
- namespace := &corev1.Namespace{}
- namespace.Name = utils.DefaultNamespace
- _, err = o.Client.CoreV1().Namespaces().Create(context.TODO(), namespace, metav1.CreateOptions{})
+ // create ns if it does not exist
+ kosmosNS := &corev1.Namespace{}
+ kosmosNS.Name = o.Namespace
+ _, err = o.K8sClient.CoreV1().Namespaces().Create(context.TODO(), kosmosNS, metav1.CreateOptions{})
if err != nil && !apierrors.IsAlreadyExists(err) {
return fmt.Errorf("kosmosctl join run error, create namespace failed: %s", err)
}
- // 3. create secret in member
- secret := &corev1.Secret{
+ // create rbac
+ kosmosControlSA, err := util.GenerateServiceAccount(manifest.KosmosControlServiceAccount, manifest.ServiceAccountReplace{
+ Namespace: o.Namespace,
+ })
+ if err != nil {
+ return fmt.Errorf("kosmosctl join run error, generate kosmos serviceaccount failed: %s", err)
+ }
+ _, err = o.K8sClient.CoreV1().ServiceAccounts(kosmosControlSA.Namespace).Create(context.TODO(), kosmosControlSA, metav1.CreateOptions{})
+ if err != nil && !apierrors.IsAlreadyExists(err) {
+ return fmt.Errorf("kosmosctl join run error, create kosmos serviceaccount failed: %s", err)
+ }
+ klog.Info("ServiceAccount " + kosmosControlSA.Name + " has been created.")
+
+ controlPanelSecret := &corev1.Secret{
TypeMeta: metav1.TypeMeta{},
ObjectMeta: metav1.ObjectMeta{
Name: utils.ControlPanelSecretName,
- Namespace: utils.DefaultNamespace,
+ Namespace: o.Namespace,
},
Data: map[string][]byte{
- "kubeconfig": o.MasterKubeConfigStream,
+ "kubeconfig": o.HostKubeConfigStream,
},
}
- _, err = o.Client.CoreV1().Secrets(secret.Namespace).Create(context.TODO(), secret, metav1.CreateOptions{})
+ _, err = o.K8sClient.CoreV1().Secrets(controlPanelSecret.Namespace).Create(context.TODO(), controlPanelSecret, metav1.CreateOptions{})
if err != nil && !apierrors.IsAlreadyExists(err) {
return fmt.Errorf("kosmosctl join run error, create secret failed: %s", err)
}
- klog.Info("Secret: " + secret.Name + " has been created.")
+ klog.Info("Secret " + controlPanelSecret.Name + " has been created.")
- // 4. create rbac in member
- clusterRole, err := util.GenerateClusterRole(manifest.ClusterlinkClusterRole, nil)
+ kosmosCR, err := util.GenerateClusterRole(manifest.KosmosClusterRole, nil)
if err != nil {
return fmt.Errorf("kosmosctl join run error, generate clusterrole failed: %s", err)
}
- _, err = o.Client.RbacV1().ClusterRoles().Create(context.TODO(), clusterRole, metav1.CreateOptions{})
+ _, err = o.K8sClient.RbacV1().ClusterRoles().Create(context.TODO(), kosmosCR, metav1.CreateOptions{})
if err != nil && !apierrors.IsAlreadyExists(err) {
return fmt.Errorf("kosmosctl join run error, create clusterrole failed: %s", err)
}
- klog.Info("ClusterRole: " + clusterRole.Name + " has been created.")
+ klog.Info("ClusterRole " + kosmosCR.Name + " has been created.")
- clusterRoleBinding, err := util.GenerateClusterRoleBinding(manifest.ClusterlinkClusterRoleBinding, manifest.ClusterRoleBindingReplace{
- Namespace: utils.DefaultNamespace,
+ kosmosCRB, err := util.GenerateClusterRoleBinding(manifest.KosmosClusterRoleBinding, manifest.ClusterRoleBindingReplace{
+ Namespace: o.Namespace,
})
if err != nil {
return fmt.Errorf("kosmosctl join run error, generate clusterrolebinding failed: %s", err)
}
- _, err = o.Client.RbacV1().ClusterRoleBindings().Create(context.TODO(), clusterRoleBinding, metav1.CreateOptions{})
+ _, err = o.K8sClient.RbacV1().ClusterRoleBindings().Create(context.TODO(), kosmosCRB, metav1.CreateOptions{})
if err != nil && !apierrors.IsAlreadyExists(err) {
return fmt.Errorf("kosmosctl join run error, create clusterrolebinding failed: %s", err)
}
- klog.Info("ClusterRoleBinding: " + clusterRoleBinding.Name + " has been created.")
+ klog.Info("ClusterRoleBinding " + kosmosCRB.Name + " has been created.")
- // 5. create operator in member
- serviceAccount, err := util.GenerateServiceAccount(manifest.ClusterlinkOperatorServiceAccount, manifest.ServiceAccountReplace{
- Namespace: utils.DefaultNamespace,
+ kosmosOperatorSA, err := util.GenerateServiceAccount(manifest.KosmosOperatorServiceAccount, manifest.ServiceAccountReplace{
+ Namespace: o.Namespace,
})
if err != nil {
return fmt.Errorf("kosmosctl join run error, generate serviceaccount failed: %s", err)
}
- _, err = o.Client.CoreV1().ServiceAccounts(serviceAccount.Namespace).Create(context.TODO(), serviceAccount, metav1.CreateOptions{})
+ _, err = o.K8sClient.CoreV1().ServiceAccounts(kosmosOperatorSA.Namespace).Create(context.TODO(), kosmosOperatorSA, metav1.CreateOptions{})
if err != nil && !apierrors.IsAlreadyExists(err) {
return fmt.Errorf("kosmosctl join run error, create serviceaccount failed: %s", err)
}
- klog.Info("ServiceAccount: " + serviceAccount.Name + " has been created.")
-
- deployment, err := util.GenerateDeployment(manifest.ClusterlinkOperatorDeployment, manifest.ClusterlinkDeploymentReplace{
- Namespace: utils.DefaultNamespace,
- Version: version.GetReleaseVersion().PatchRelease(),
- ClusterName: o.Name,
- UseProxy: o.UseProxy,
- ImageRepository: o.ImageRegistry,
- })
- if err != nil {
- return fmt.Errorf("kosmosctl join run error, generate deployment failed: %s", err)
- }
- _, err = o.Client.AppsV1().Deployments(deployment.Namespace).Create(context.TODO(), deployment, metav1.CreateOptions{})
- if err != nil && !apierrors.IsAlreadyExists(err) {
- return fmt.Errorf("kosmosctl join run error, create deployment failed: %s", err)
- }
- label := map[string]string{"app": deployment.Labels["app"]}
- if err = util.WaitPodReady(o.Client, deployment.Namespace, util.MapToString(label), o.WaitTime); err != nil {
- return fmt.Errorf("kosmosctl join run error, create deployment failed: %s", err)
- } else {
- klog.Info("Deployment: " + deployment.Name + " has been created.")
- klog.Info("Cluster [" + o.Name + "] registration successful.")
- }
+ klog.Info("ServiceAccount " + kosmosOperatorSA.Name + " has been created.")
- return nil
-}
+ // ToDo: Wait for all services to be running
-func (o *CommandJoinOptions) runKnode() error {
- klog.Info("Start registering knode to kosmos control plane...")
- clusterKubeConfigByte, err := os.ReadFile(o.ClusterKubeConfig)
- if err != nil {
- return fmt.Errorf("kosmosctl join run error, decode knode cr failed: %s", err)
- }
- base64ClusterKubeConfig := base64.StdEncoding.EncodeToString(clusterKubeConfigByte)
- knodeByte, err := util.GenerateCustomResource(manifest.KnodeCR, manifest.KnodeReplace{
- KnodeName: o.Name,
- KnodeKubeConfig: base64ClusterKubeConfig,
- })
- if err != nil {
- return err
- }
- decoder := k8syaml.NewDecodingSerializer(unstructured.UnstructuredJSONScheme)
- obj := &unstructured.Unstructured{}
- _, _, err = decoder.Decode(knodeByte, nil, obj)
- if err != nil {
- return fmt.Errorf("kosmosctl join run error, decode knode cr failed: %s", err)
- }
- _, err = o.DynamicClient.Resource(util.KnodeGVR).Namespace("").Create(context.TODO(), obj, metav1.CreateOptions{})
- if err != nil && !apierrors.IsAlreadyExists(err) {
- return fmt.Errorf("kosmosctl join run error, create knode failed: %s", err)
- }
- klog.Info("Knode: " + obj.GetName() + " has been created.")
- klog.Info("Knode [" + obj.GetName() + "] registration successful.")
+ klog.Info("Cluster [" + o.Name + "] registration successful.")
return nil
}
diff --git a/pkg/kosmosctl/kosmosctl.go b/pkg/kosmosctl/kosmosctl.go
index 87b1d58cb..b28a09630 100644
--- a/pkg/kosmosctl/kosmosctl.go
+++ b/pkg/kosmosctl/kosmosctl.go
@@ -15,8 +15,11 @@ import (
"github.com/kosmos.io/kosmos/pkg/kosmosctl/floater"
"github.com/kosmos.io/kosmos/pkg/kosmosctl/get"
+ "github.com/kosmos.io/kosmos/pkg/kosmosctl/image"
"github.com/kosmos.io/kosmos/pkg/kosmosctl/install"
"github.com/kosmos.io/kosmos/pkg/kosmosctl/join"
+ "github.com/kosmos.io/kosmos/pkg/kosmosctl/logs"
+ "github.com/kosmos.io/kosmos/pkg/kosmosctl/rsmigrate"
"github.com/kosmos.io/kosmos/pkg/kosmosctl/uninstall"
"github.com/kosmos.io/kosmos/pkg/kosmosctl/unjoin"
)
@@ -68,9 +71,23 @@ func NewKosmosCtlCommand() *cobra.Command {
},
},
{
- Message: "Cluster Doctor/Floater Commands:",
+ Message: "Troubleshooting and Debugging Commands:",
Commands: []*cobra.Command{
- floater.NewCmdDoctor(),
+ logs.NewCmdLogs(f, ioStreams),
+ floater.NewCmdCheck(),
+ floater.NewCmdAnalysis(f),
+ },
+ }, {
+ Message: "Cluster Resource Import/Export Commands:",
+ Commands: []*cobra.Command{
+ rsmigrate.NewCmdImport(f),
+ rsmigrate.NewCmdExport(f),
+ },
+ },
+ {
+ Message: "Image Pull/Push Commands:",
+ Commands: []*cobra.Command{
+ image.NewCmdImage(),
},
},
}
diff --git a/pkg/kosmosctl/logs/logs.go b/pkg/kosmosctl/logs/logs.go
new file mode 100644
index 000000000..a06b567c3
--- /dev/null
+++ b/pkg/kosmosctl/logs/logs.go
@@ -0,0 +1,154 @@
+package logs
+
+import (
+ "context"
+ "fmt"
+
+ "github.com/spf13/cobra"
+ authenticationv1 "k8s.io/api/authentication/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/cli-runtime/pkg/genericclioptions"
+ "k8s.io/client-go/kubernetes"
+ "k8s.io/client-go/tools/clientcmd"
+ ctllogs "k8s.io/kubectl/pkg/cmd/logs"
+ ctlutil "k8s.io/kubectl/pkg/cmd/util"
+ "k8s.io/kubectl/pkg/util/i18n"
+ "k8s.io/kubectl/pkg/util/templates"
+ "k8s.io/utils/pointer"
+
+ "github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned"
+ "github.com/kosmos.io/kosmos/pkg/kosmosctl/manifest"
+ "github.com/kosmos.io/kosmos/pkg/kosmosctl/util"
+ "github.com/kosmos.io/kosmos/pkg/utils"
+)
+
+var (
+ logsLong = templates.LongDesc(i18n.T(`
+ Print the logs for a container in a pod or specified resource from the specified cluster.
+ If the pod has only one container, the container name is optional.`))
+
+ logsExample = templates.Examples(i18n.T(`
+ # Return logs from a pod, e.g:
+ kosmosctl logs pod-name --cluster cluster-name
+
+ # Return logs from pod of special container, e.g:
+ kosmosctl logs pod-name --cluster cluster-name -c container-name`))
+)
+
+type CommandLogsOptions struct {
+ Cluster string
+
+ Namespace string
+
+ LogsOptions *ctllogs.LogsOptions
+}
+
+func NewCmdLogs(f ctlutil.Factory, streams genericclioptions.IOStreams) *cobra.Command {
+ o := NewCommandLogsOptions(streams)
+
+ cmd := &cobra.Command{
+ Use: "logs [-f] [-p] (POD | TYPE/NAME) [-c CONTAINER] (--cluster CLUSTER_NAME)",
+ Short: i18n.T("Print the logs for a container in a pod from the specified cluster"),
+ Long: logsLong,
+ Example: logsExample,
+ SilenceUsage: true,
+ DisableFlagsInUseLine: true,
+ RunE: func(cmd *cobra.Command, args []string) error {
+ ctlutil.CheckErr(o.Complete(f, cmd, args))
+ ctlutil.CheckErr(o.Validate())
+ ctlutil.CheckErr(o.Run())
+ return nil
+ },
+ }
+
+ flags := cmd.Flags()
+ flags.StringVarP(&o.Namespace, "namespace", "n", utils.DefaultNamespace, "If present, the namespace scope for this CLI request.")
+ flags.StringVar(&o.Cluster, "cluster", utils.DefaultClusterName, "Specify a cluster; the default is the control cluster.")
+ o.LogsOptions.AddFlags(cmd)
+
+ return cmd
+}
+
+func NewCommandLogsOptions(streams genericclioptions.IOStreams) *CommandLogsOptions {
+ logsOptions := ctllogs.NewLogsOptions(streams, false)
+ return &CommandLogsOptions{
+ LogsOptions: logsOptions,
+ }
+}
+
+func (o *CommandLogsOptions) Complete(f ctlutil.Factory, cmd *cobra.Command, args []string) error {
+ controlConfig, err := f.ToRESTConfig()
+ if err != nil {
+ return err
+ }
+
+ rootClient, err := versioned.NewForConfig(controlConfig)
+ if err != nil {
+ return err
+ }
+ cluster, err := rootClient.KosmosV1alpha1().Clusters().Get(context.TODO(), o.Cluster, metav1.GetOptions{})
+ if err != nil {
+ return err
+ }
+
+ leafConfig, err := clientcmd.RESTConfigFromKubeConfig(cluster.Spec.Kubeconfig)
+ if err != nil {
+ return fmt.Errorf("kosmosctl logs complete error, load leaf cluster kubeconfig failed: %s", err)
+ }
+
+ leafClient, err := kubernetes.NewForConfig(leafConfig)
+ if err != nil {
+ return fmt.Errorf("kosmosctl logs complete error, generate leaf cluster client failed: %s", err)
+ }
+
+ kosmosControlSA, err := util.GenerateServiceAccount(manifest.KosmosControlServiceAccount, manifest.ServiceAccountReplace{
+ Namespace: o.Namespace,
+ })
+ if err != nil {
+ return fmt.Errorf("kosmosctl logs complete error, generate kosmos serviceaccount failed: %s", err)
+ }
+ expirationSeconds := int64(600)
+ leafToken, err := leafClient.CoreV1().ServiceAccounts(kosmosControlSA.Namespace).CreateToken(
+ context.TODO(), kosmosControlSA.Name, &authenticationv1.TokenRequest{
+ Spec: authenticationv1.TokenRequestSpec{
+ ExpirationSeconds: &expirationSeconds,
+ },
+ }, metav1.CreateOptions{})
+ if err != nil {
+ return fmt.Errorf("kosmosctl logs complete error, create leaf cluster serviceaccount token failed: %s", err)
+ }
+
+ configFlags := genericclioptions.NewConfigFlags(false)
+ configFlags.APIServer = &leafConfig.Host
+ configFlags.BearerToken = &leafToken.Status.Token
+ configFlags.Insecure = pointer.Bool(true)
+ configFlags.Namespace = &o.Namespace
+
+ o.LogsOptions.Namespace = o.Namespace
+
+ newF := ctlutil.NewFactory(configFlags)
+ err = o.LogsOptions.Complete(newF, cmd, args)
+ if err != nil {
+ return fmt.Errorf("kosmosctl logs complete error, options failed: %s", err)
+ }
+
+ return nil
+}
+
+func (o *CommandLogsOptions) Validate() error {
+ err := o.LogsOptions.Validate()
+ if err != nil {
+ return fmt.Errorf("kosmosctl logs validate error, options failed: %s", err)
+ }
+
+ return nil
+}
+
+func (o *CommandLogsOptions) Run() error {
+ err := o.LogsOptions.RunLogs()
+ if err != nil {
+ return fmt.Errorf("kosmosctl logs run error, options failed: %s", err)
+ }
+
+ return nil
+}
diff --git a/pkg/kosmosctl/manifest/manifest_clusterrolebindings.go b/pkg/kosmosctl/manifest/manifest_clusterrolebindings.go
index 8055be215..1c000f674 100644
--- a/pkg/kosmosctl/manifest/manifest_clusterrolebindings.go
+++ b/pkg/kosmosctl/manifest/manifest_clusterrolebindings.go
@@ -30,37 +30,41 @@ subjects:
name: clusterlink-floater
namespace: {{ .Namespace }}
`
- ClusterlinkClusterRoleBinding = `
+
+ KosmosClusterRoleBinding = `
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
- name: clusterlink
+ name: kosmos
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
- name: clusterlink
-subjects:
+ name: kosmos
+subjects:
+ - kind: ServiceAccount
+ name: kosmos-control
+ namespace: {{ .Namespace }}
- kind: ServiceAccount
name: clusterlink-controller-manager
namespace: {{ .Namespace }}
- kind: ServiceAccount
- name: clusterlink-operator
+ name: kosmos-operator
namespace: {{ .Namespace }}
`
- ClusterTreeKnodeManagerClusterRoleBinding = `
+ ClusterTreeClusterRoleBinding = `
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
- name: clustertree-cluster-manager
+ name: clustertree
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
- name: clustertree-cluster-manager
+ name: clustertree
subjects:
- kind: ServiceAccount
- name: clustertree-cluster-manager
+ name: clustertree
namespace: {{ .Namespace }}
`
diff --git a/pkg/kosmosctl/manifest/manifest_clusterroles.go b/pkg/kosmosctl/manifest/manifest_clusterroles.go
index 2e67f4e14..7e4d55c36 100644
--- a/pkg/kosmosctl/manifest/manifest_clusterroles.go
+++ b/pkg/kosmosctl/manifest/manifest_clusterroles.go
@@ -27,24 +27,24 @@ rules:
verbs: ["get"]
`
- ClusterlinkClusterRole = `
+ KosmosClusterRole = `
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
- name: clusterlink
+ name: kosmos
rules:
- apiGroups: ['*']
resources: ['*']
verbs: ["*"]
- nonResourceURLs: ['*']
- verbs: ["get"]
+ verbs: ["*"]
`
- ClusterTreeKnodeManagerClusterRole = `
+ ClusterTreeClusterRole = `
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
- name: clustertree-cluster-manager
+ name: clustertree
rules:
- apiGroups: ['*']
resources: ['*']
diff --git a/pkg/kosmosctl/manifest/manifest_cr.go b/pkg/kosmosctl/manifest/manifest_cr.go
deleted file mode 100644
index 8d1d2ba18..000000000
--- a/pkg/kosmosctl/manifest/manifest_cr.go
+++ /dev/null
@@ -1,39 +0,0 @@
-package manifest
-
-const (
- ClusterCR = `
-apiVersion: kosmos.io/v1alpha1
-kind: Cluster
-metadata:
- name: {{ .ClusterName }}
-spec:
- cni: {{ .CNI }}
- defaultNICName: {{ .DefaultNICName }}
- imageRepository: {{ .ImageRepository }}
- networkType: {{ .NetworkType }}
-`
-
- KnodeCR = `
-apiVersion: kosmos.io/v1alpha1
-kind: Knode
-metadata:
- name: {{ .KnodeName }}
-spec:
- nodeName: {{ .KnodeName }}
- type: k8s
- kubeconfig: {{ .KnodeKubeConfig }}
-`
-)
-
-type ClusterReplace struct {
- ClusterName string
- CNI string
- DefaultNICName string
- ImageRepository string
- NetworkType string
-}
-
-type KnodeReplace struct {
- KnodeName string
- KnodeKubeConfig string
-}
diff --git a/pkg/kosmosctl/manifest/manifest_crds.go b/pkg/kosmosctl/manifest/manifest_crds.go
index 6216a0f55..1174c1550 100644
--- a/pkg/kosmosctl/manifest/manifest_crds.go
+++ b/pkg/kosmosctl/manifest/manifest_crds.go
@@ -1,7 +1,306 @@
package manifest
const (
- ClusterlinkClusterNode = `---
+ ServiceImport = `# Copyright 2020 The Kubernetes Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+ name: serviceimports.multicluster.x-k8s.io
+spec:
+ group: multicluster.x-k8s.io
+ scope: Namespaced
+ names:
+ plural: serviceimports
+ singular: serviceimport
+ kind: ServiceImport
+ shortNames:
+ - svcim
+ versions:
+ - name: v1alpha1
+ served: true
+ storage: true
+ subresources:
+ status: {}
+ additionalPrinterColumns:
+ - name: Type
+ type: string
+ description: The type of this ServiceImport
+ jsonPath: .spec.type
+ - name: IP
+ type: string
+ description: The VIP for this ServiceImport
+ jsonPath: .spec.ips
+ - name: Age
+ type: date
+ jsonPath: .metadata.creationTimestamp
+ "schema":
+ "openAPIV3Schema":
+ description: ServiceImport describes a service imported from clusters in a
+ ClusterSet.
+ type: object
+ properties:
+ apiVersion:
+ description: 'APIVersion defines the versioned schema of this representation
+ of an object. Servers should convert recognized schemas to the latest
+ internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
+ type: string
+ kind:
+ description: 'Kind is a string value representing the REST resource this
+ object represents. Servers may infer this from the endpoint the client
+ submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
+ type: string
+ metadata:
+ type: object
+ spec:
+ description: spec defines the behavior of a ServiceImport.
+ type: object
+ required:
+ - ports
+ - type
+ properties:
+ ips:
+ description: ip will be used as the VIP for this service when type
+ is ClusterSetIP.
+ type: array
+ maxItems: 1
+ items:
+ type: string
+ ports:
+ type: array
+ items:
+ description: ServicePort represents the port on which the service
+ is exposed
+ type: object
+ required:
+ - port
+ properties:
+ appProtocol:
+ description: The application protocol for this port. This field
+ follows standard Kubernetes label syntax. Un-prefixed names
+ are reserved for IANA standard service names (as per RFC-6335
+ and http://www.iana.org/assignments/service-names). Non-standard
+ protocols should use prefixed names such as mycompany.com/my-custom-protocol.
+ Field can be enabled with ServiceAppProtocol feature gate.
+ type: string
+ name:
+ description: The name of this port within the service. This
+ must be a DNS_LABEL. All ports within a ServiceSpec must have
+ unique names. When considering the endpoints for a Service,
+ this must match the 'name' field in the EndpointPort. Optional
+ if only one ServicePort is defined on this service.
+ type: string
+ port:
+ description: The port that will be exposed by this service.
+ type: integer
+ format: int32
+ protocol:
+ description: The IP protocol for this port. Supports "TCP",
+ "UDP", and "SCTP". Default is TCP.
+ type: string
+ x-kubernetes-list-type: atomic
+ sessionAffinity:
+ description: 'Supports "ClientIP" and "None". Used to maintain session
+ affinity. Enable client IP based session affinity. Must be ClientIP
+ or None. Defaults to None. Ignored when type is Headless. More info:
+ https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies'
+ type: string
+ sessionAffinityConfig:
+ description: sessionAffinityConfig contains session affinity configuration.
+ type: object
+ properties:
+ clientIP:
+ description: clientIP contains the configurations of Client IP
+ based session affinity.
+ type: object
+ properties:
+ timeoutSeconds:
+ description: timeoutSeconds specifies the seconds of ClientIP
+ type session sticky time. The value must be >0 && <=86400(for
+ 1 day) if ServiceAffinity == "ClientIP". Default value is
+ 10800(for 3 hours).
+ type: integer
+ format: int32
+ type:
+ description: type defines the type of this service. Must be ClusterSetIP
+ or Headless.
+ type: string
+ enum:
+ - ClusterSetIP
+ - Headless
+ status:
+ description: status contains information about the exported services that
+ form the multi-cluster service referenced by this ServiceImport.
+ type: object
+ properties:
+ clusters:
+ description: clusters is the list of exporting clusters from which
+ this service was derived.
+ type: array
+ items:
+ description: ClusterStatus contains service configuration mapped
+ to a specific source cluster
+ type: object
+ required:
+ - cluster
+ properties:
+ cluster:
+ description: cluster is the name of the exporting cluster. Must
+ be a valid RFC-1123 DNS label.
+ type: string
+ x-kubernetes-list-map-keys:
+ - cluster
+ x-kubernetes-list-type: map
+`
+
+ ServiceExport = `# Copyright 2020 The Kubernetes Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+ name: serviceexports.multicluster.x-k8s.io
+spec:
+ group: multicluster.x-k8s.io
+ scope: Namespaced
+ names:
+ plural: serviceexports
+ singular: serviceexport
+ kind: ServiceExport
+ shortNames:
+ - svcex
+ versions:
+ - name: v1alpha1
+ served: true
+ storage: true
+ subresources:
+ status: {}
+ additionalPrinterColumns:
+ - name: Age
+ type: date
+ jsonPath: .metadata.creationTimestamp
+ "schema":
+ "openAPIV3Schema":
+ description: ServiceExport declares that the Service with the same name and
+ namespace as this export should be consumable from other clusters.
+ type: object
+ properties:
+ apiVersion:
+ description: 'APIVersion defines the versioned schema of this representation
+ of an object. Servers should convert recognized schemas to the latest
+ internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
+ type: string
+ kind:
+ description: 'Kind is a string value representing the REST resource this
+ object represents. Servers may infer this from the endpoint the client
+ submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
+ type: string
+ metadata:
+ type: object
+ status:
+ description: status describes the current state of an exported service.
+ Service configuration comes from the Service that had the same name
+ and namespace as this ServiceExport. Populated by the multi-cluster
+ service implementation's controller.
+ type: object
+ properties:
+ conditions:
+ type: array
+ items:
+ description: "Condition contains details for one aspect of the current
+ state of this API Resource. --- This struct is intended for direct
+ use as an array at the field path .status.conditions. For example,
+ type FooStatus struct{ // Represents the observations of a
+ foo's current state. // Known .status.conditions.type are:
+ \"Available\", \"Progressing\", and \"Degraded\" // +patchMergeKey=type
+ \ // +patchStrategy=merge // +listType=map // +listMapKey=type
+ \ Conditions []metav1.Condition ` + "`" + `json:\"conditions,omitempty\"
+ patchStrategy:\"merge\" patchMergeKey:\"type\" protobuf:\"bytes,1,rep,name=conditions\"` + "`" + `
+ \n // other fields }"
+ type: object
+ required:
+ - lastTransitionTime
+ - message
+ - reason
+ - status
+ - type
+ properties:
+ lastTransitionTime:
+ description: lastTransitionTime is the last time the condition
+ transitioned from one status to another. This should be when
+ the underlying condition changed. If that is not known, then
+ using the time when the API field changed is acceptable.
+ type: string
+ format: date-time
+ message:
+ description: message is a human readable message indicating
+ details about the transition. This may be an empty string.
+ type: string
+ maxLength: 32768
+ observedGeneration:
+ description: observedGeneration represents the .metadata.generation
+ that the condition was set based upon. For instance, if .metadata.generation
+ is currently 12, but the .status.conditions[x].observedGeneration
+ is 9, the condition is out of date with respect to the current
+ state of the instance.
+ type: integer
+ format: int64
+ minimum: 0
+ reason:
+ description: reason contains a programmatic identifier indicating
+ the reason for the condition's last transition. Producers
+ of specific condition types may define expected values and
+ meanings for this field, and whether the values are considered
+ a guaranteed API. The value should be a CamelCase string.
+ This field may not be empty.
+ type: string
+ maxLength: 1024
+ minLength: 1
+ pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$
+ status:
+ description: status of the condition, one of True, False, Unknown.
+ type: string
+ enum:
+ - "True"
+ - "False"
+ - Unknown
+ type:
+ description: type of condition in CamelCase or in foo.example.com/CamelCase.
+ --- Many .condition.type values are consistent across resources
+ like Available, but because arbitrary conditions can be useful
+ (see .node.status.conditions), the ability to deconflict is
+ important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt)
+ type: string
+ maxLength: 316
+ pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
+ x-kubernetes-list-map-keys:
+ - type
+ x-kubernetes-list-type: map
+`
+)
+
+const ClusterNode = `---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
@@ -76,7 +375,7 @@ spec:
status: {}
`
- ClusterlinkCluster = `---
+const Cluster = `---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
@@ -94,10 +393,10 @@ spec:
scope: Cluster
versions:
- additionalPrinterColumns:
- - jsonPath: .spec.networkType
+ - jsonPath: .spec.clusterLinkOptions.networkType
name: NETWORK_TYPE
type: string
- - jsonPath: .spec.ipFamily
+ - jsonPath: .spec.clusterLinkOptions.ipFamily
name: IP_FAMILY
type: string
name: v1alpha1
@@ -119,88 +418,232 @@ spec:
spec:
description: Spec is the specification for the behaviour of the cluster.
properties:
- bridgeCIDRs:
- default:
- ip: 220.0.0.0/8
- ip6: 9470::/16
+ clusterLinkOptions:
properties:
- ip:
+ bridgeCIDRs:
+ default:
+ ip: 220.0.0.0/8
+ ip6: 9470::/16
+ properties:
+ ip:
+ type: string
+ ip6:
+ type: string
+ required:
+ - ip
+ - ip6
+ type: object
+ cni:
+ default: calico
+ type: string
+ defaultNICName:
+ default: '*'
+ type: string
+ enable:
+ default: true
+ type: boolean
+ globalCIDRsMap:
+ additionalProperties:
+ type: string
+ type: object
+ ipFamily:
+ default: all
type: string
- ip6:
+ localCIDRs:
+ default:
+ ip: 210.0.0.0/8
+ ip6: 9480::/16
+ properties:
+ ip:
+ type: string
+ ip6:
+ type: string
+ required:
+ - ip
+ - ip6
+ type: object
+ networkType:
+ default: p2p
+ enum:
+ - p2p
+ - gateway
type: string
- required:
- - ip
- - ip6
+ nicNodeNames:
+ items:
+ properties:
+ interfaceName:
+ type: string
+ nodeName:
+ items:
+ type: string
+ type: array
+ required:
+ - interfaceName
+ - nodeName
+ type: object
+ type: array
+ useIPPool:
+ default: false
+ type: boolean
type: object
- cni:
- default: calico
- type: string
- defaultNICName:
- default: '*'
- type: string
- globalCIDRsMap:
- additionalProperties:
- type: string
+ clusterTreeOptions:
+ properties:
+ enable:
+ default: true
+ type: boolean
+ leafModels:
+ description: LeafModels provides an API to arrange the member cluster
+ with rules so that it presents one or more leaf nodes
+ items:
+ properties:
+ labels:
+ additionalProperties:
+ type: string
+ description: Labels that will be set on the pretended
+ Node
+ type: object
+ leafNodeName:
+ description: LeafNodeName defines the leaf node name. If
+ nil or empty, the leaf node name will be generated by the
+ controller and filled in the cluster link status
+ type: string
+ nodeSelector:
+ description: NodeSelector is a selector to select member
+ cluster nodes that act as a leaf node in the clusterTree.
+ properties:
+ labelSelector:
+ description: LabelSelector is a filter to select member
+ cluster nodes that act as a leaf node in the clusterTree
+ by labels. It takes effect during second-level scheduling
+ when pods are created in member clusters.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list of label
+ selector requirements. The requirements are ANDed.
+ items:
+ description: A label selector requirement is a
+ selector that contains values, a key, and an
+ operator that relates the key and values.
+ properties:
+ key:
+ description: key is the label key that the
+ selector applies to.
+ type: string
+ operator:
+ description: operator represents a key's relationship
+ to a set of values. Valid operators are
+ In, NotIn, Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array of string
+ values. If the operator is In or NotIn,
+ the values array must be non-empty. If the
+ operator is Exists or DoesNotExist, the
+ values array must be empty. This array is
+ replaced during a strategic merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator is "In",
+ and the values array contains only "value". The
+ requirements are ANDed.
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ nodeName:
+ description: NodeName is the origin node name in the
+ member cluster
+ type: string
+ type: object
+ taints:
+ description: Taints attached to the pretended leaf Node.
+ If nil or empty, the controller will set the default no-schedule
+ taint
+ items:
+ description: The node this Taint is attached to has the
+ "effect" on any pod that does not tolerate the Taint.
+ properties:
+ effect:
+ description: Required. The effect of the taint on
+ pods that do not tolerate the taint. Valid effects
+ are NoSchedule, PreferNoSchedule and NoExecute.
+ type: string
+ key:
+ description: Required. The taint key to be applied
+ to a node.
+ type: string
+ timeAdded:
+ description: TimeAdded represents the time at which
+ the taint was added. It is only written for NoExecute
+ taints.
+ format: date-time
+ type: string
+ value:
+ description: The taint value corresponding to the
+ taint key.
+ type: string
+ required:
+ - effect
+ - key
+ type: object
+ type: array
+ type: object
+ type: array
type: object
imageRepository:
type: string
- ipFamily:
- default: all
- type: string
kubeconfig:
format: byte
type: string
- localCIDRs:
- default:
- ip: 210.0.0.0/8
- ip6: 9480::/16
- properties:
- ip:
- type: string
- ip6:
- type: string
- required:
- - ip
- - ip6
- type: object
namespace:
- default: {{ .Namespace }}
- type: string
- networkType:
- default: p2p
- enum:
- - p2p
- - gateway
+ default: kosmos-system
type: string
- nicNodeNames:
- items:
- properties:
- interfaceName:
- type: string
- nodeName:
- items:
- type: string
- type: array
- required:
- - interfaceName
- - nodeName
- type: object
- type: array
- useIPPool:
- default: false
- type: boolean
type: object
status:
description: Status describes the current status of a cluster.
properties:
- podCIDRs:
- items:
- type: string
- type: array
- serviceCIDRs:
- items:
- type: string
- type: array
+ clusterLinkStatus:
+ description: ClusterLinkStatus contains the cluster network information
+ properties:
+ podCIDRs:
+ items:
+ type: string
+ type: array
+ serviceCIDRs:
+ items:
+ type: string
+ type: array
+ type: object
+ clusterTreeStatus:
+ description: ClusterTreeStatus contains the member cluster
+ leafNode status
+ properties:
+ leafNodeItems:
+ description: LeafNodeItems represents the list of leaf node
+ items calculated in each member cluster.
+ items:
+ properties:
+ leafNodeName:
+ description: LeafNodeName represents the leaf node name
+ generated by the controller. The suggested name format is
+ cluster-shortLabel-number, e.g. member-az1-1
+ type: string
+ required:
+ - leafNodeName
+ type: object
+ type: array
+ type: object
type: object
required:
- spec
@@ -210,7 +653,7 @@ spec:
subresources: {}
`
- ClusterlinkNodeConfig = `---
+const NodeConfig = `---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
@@ -353,26 +796,68 @@ spec:
status: {}
`
- ClusterTreeKnode = `---
+const DaemonSet = `---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
annotations:
controller-gen.kubebuilder.io/version: v0.11.0
creationTimestamp: null
- name: knodes.kosmos.io
+ name: daemonsets.kosmos.io
spec:
group: kosmos.io
names:
- kind: Knode
- listKind: KnodeList
- plural: knodes
- singular: knode
- scope: Cluster
+ kind: DaemonSet
+ listKind: DaemonSetList
+ plural: daemonsets
+ shortNames:
+ - kdaemon
+ - kds
+ singular: daemonset
+ scope: Namespaced
versions:
- - name: v1alpha1
+ - additionalPrinterColumns:
+ - description: The desired number of pods.
+ jsonPath: .status.desiredNumberScheduled
+ name: DESIRED
+ type: integer
+ - description: The current number of pods.
+ jsonPath: .status.currentNumberScheduled
+ name: CURRENT
+ type: integer
+ - description: The ready number of pods.
+ jsonPath: .status.numberReady
+ name: READY
+ type: integer
+ - description: The updated number of pods.
+ jsonPath: .status.updatedNumberScheduled
+ name: UP-TO-DATE
+ type: integer
+ - description: The available number of pods.
+ jsonPath: .status.numberAvailable
+ name: AVAILABLE
+ type: integer
+ - description: CreationTimestamp is a timestamp representing the server time when
+ this object was created. It is not guaranteed to be set in happens-before
+ order across separate operations. Clients may not set this value. It is represented
+ in RFC3339 form and is in UTC.
+ jsonPath: .metadata.creationTimestamp
+ name: AGE
+ type: date
+ - description: The containers of the current daemonset.
+ jsonPath: .spec.template.spec.containers[*].name
+ name: CONTAINERS
+ priority: 1
+ type: string
+ - description: The images of the current daemonset.
+ jsonPath: .spec.template.spec.containers[*].image
+ name: IMAGES
+ priority: 1
+ type: string
+ name: v1alpha1
schema:
openAPIV3Schema:
+ description: DaemonSet represents the configuration of a daemon set.
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation
@@ -387,93 +872,549 @@ spec:
metadata:
type: object
spec:
+ description: 'The desired behavior of this daemon set. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status'
properties:
- disableTaint:
- type: boolean
- kubeconfig:
- format: byte
- type: string
- nodeName:
- type: string
- type:
- default: k8s
- type: string
+ minReadySeconds:
+ description: The minimum number of seconds for which a newly created
+ DaemonSet pod should be ready without any of its container crashing,
+ for it to be considered available. Defaults to 0 (pod will be considered
+ available as soon as it is ready).
+ format: int32
+ type: integer
+ revisionHistoryLimit:
+ description: The number of old history to retain to allow rollback.
+ This is a pointer to distinguish between explicit zero and not specified.
+ Defaults to 10.
+ format: int32
+ type: integer
+ selector:
+ description: 'A label query over pods that are managed by the daemon
+ set. Must match in order to be controlled. It must match the pod
+ template''s labels. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors'
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list of label selector requirements.
+ The requirements are ANDed.
+ items:
+ description: A label selector requirement is a selector that
+ contains values, a key, and an operator that relates the key
+ and values.
+ properties:
+ key:
+ description: key is the label key that the selector applies
+ to.
+ type: string
+ operator:
+ description: operator represents a key's relationship to
+ a set of values. Valid operators are In, NotIn, Exists
+ and DoesNotExist.
+ type: string
+ values:
+ description: values is an array of string values. If the
+ operator is In or NotIn, the values array must be non-empty.
+ If the operator is Exists or DoesNotExist, the values
+ array must be empty. This array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value} pairs. A single
+ {key,value} in the matchLabels map is equivalent to an element
+ of matchExpressions, whose key field is "key", the operator
+ is "In", and the values array contains only "value". The requirements
+ are ANDed.
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ template:
+ description: 'An object that describes the pod that will be created.
+ The DaemonSet will create exactly one copy of this pod on every
+ node that matches the template''s node selector (or on every node
+ if no node selector is specified). More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#pod-template'
+ x-kubernetes-preserve-unknown-fields: true
+ updateStrategy:
+ description: An update strategy to replace existing DaemonSet pods
+ with new pods.
+ properties:
+ rollingUpdate:
+ description: 'Rolling update config params. Present only if type
+ = "RollingUpdate". --- TODO: Update this to follow our convention
+ for oneOf, whatever we decide it to be. Same as Deployment ` + "`" + `strategy.rollingUpdate` + "`" + `.
+ See https://github.com/kubernetes/kubernetes/issues/35345'
+ properties:
+ maxSurge:
+ anyOf:
+ - type: integer
+ - type: string
+ description: 'The maximum number of nodes with an existing
+ available DaemonSet pod that can have an updated DaemonSet
+ pod during an update. Value can be an absolute number
+ (ex: 5) or a percentage of desired pods (ex: 10%). This
+ can not be 0 if MaxUnavailable is 0. Absolute number is
+ calculated from percentage by rounding up to a minimum of
+ 1. Default value is 0. Example: when this is set to 30%,
+ at most 30% of the total number of nodes that should be
+ running the daemon pod (i.e. status.desiredNumberScheduled)
+ can have a new pod created before the old pod is marked
+ as deleted. The update starts by launching new pods on 30%
+ of nodes. Once an updated pod is available (Ready for at
+ least minReadySeconds) the old DaemonSet pod on that node
+ is marked deleted. If the old pod becomes unavailable for
+ any reason (Ready transitions to false, is evicted, or is
+ drained) an updated pod is immediately created on that
+ node without considering surge limits. Allowing surge implies
+ the possibility that the resources consumed by the daemonset
+ on any given node can double if the readiness check fails,
+ and so resource intensive daemonsets should take into account
+ that they may cause evictions during disruption.'
+ x-kubernetes-int-or-string: true
+ maxUnavailable:
+ anyOf:
+ - type: integer
+ - type: string
+ description: 'The maximum number of DaemonSet pods that can
+ be unavailable during the update. Value can be an absolute
+ number (ex: 5) or a percentage of total number of DaemonSet
+ pods at the start of the update (ex: 10%). Absolute number
+ is calculated from percentage by rounding up. This cannot
+ be 0 if MaxSurge is 0. Default value is 1. Example: when
+ this is set to 30%, at most 30% of the total number of nodes
+ that should be running the daemon pod (i.e. status.desiredNumberScheduled)
+ can have their pods stopped for an update at any given time.
+ The update starts by stopping at most 30% of those DaemonSet
+ pods and then brings up new DaemonSet pods in their place.
+ Once the new pods are available, it then proceeds onto other
+ DaemonSet pods, thus ensuring that at least 70% of original
+ number of DaemonSet pods are available at all times during
+ the update.'
+ x-kubernetes-int-or-string: true
+ type: object
+ type:
+ description: Type of daemon set update. Can be "RollingUpdate"
+ or "OnDelete". Default is RollingUpdate.
+ type: string
+ type: object
+ required:
+ - selector
+ - template
type: object
status:
+ description: 'The current status of this daemon set. This data may be
+ out of date by some window of time. Populated by the system. Read-only.
+ More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status'
properties:
- apiserver:
- type: string
+ collisionCount:
+ description: Count of hash collisions for the DaemonSet. The DaemonSet
+ controller uses this field as a collision avoidance mechanism when
+ it needs to create the name for the newest ControllerRevision.
+ format: int32
+ type: integer
conditions:
+ description: Represents the latest available observations of a DaemonSet's
+ current state.
items:
- description: "Condition contains details for one aspect of the current
- state of this API Resource. --- This struct is intended for direct
- use as an array at the field path .status.conditions."
+ description: DaemonSetCondition describes the state of a DaemonSet
+ at a certain point.
properties:
lastTransitionTime:
- description: lastTransitionTime is the last time the condition
- transitioned from one status to another. This should be when
- the underlying condition changed. If that is not known, then
- using the time when the API field changed is acceptable.
+ description: Last time the condition transitioned from one status
+ to another.
format: date-time
type: string
message:
- description: message is a human readable message indicating
- details about the transition. This may be an empty string.
- maxLength: 32768
+ description: A human readable message indicating details about
+ the transition.
type: string
- observedGeneration:
- description: observedGeneration represents the .metadata.generation
- that the condition was set based upon. For instance, if .metadata.generation
- is currently 12, but the .status.conditions[x].observedGeneration
- is 9, the condition is out of date with respect to the current
- state of the instance.
- format: int64
- minimum: 0
- type: integer
reason:
- description: reason contains a programmatic identifier indicating
- the reason for the condition's last transition. Producers
- of specific condition types may define expected values and
- meanings for this field, and whether the values are considered
- a guaranteed API. The value should be a CamelCase string.
- This field may not be empty.
- maxLength: 1024
- minLength: 1
- pattern: ^[A-Za-z]([A-Za-z0-9_,:]*[A-Za-z0-9_])?$
+ description: The reason for the condition's last transition.
type: string
status:
- description: status of the condition, one of True, False, Unknown.
- enum:
- - "True"
- - "False"
- - Unknown
+ description: Status of the condition, one of True, False, Unknown.
type: string
type:
- description: type of condition in CamelCase or in foo.example.com/CamelCase.
- --- Many .condition.type values are consistent across resources
- like Available, but because arbitrary conditions can be useful
- (see .node.status.conditions), the ability to deconflict is
- important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt)
- maxLength: 316
- pattern: ^([a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*/)?(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])$
+ description: Type of DaemonSet condition.
type: string
required:
- - lastTransitionTime
- - message
- - reason
- status
- type
type: object
type: array
- version:
- type: string
+ currentNumberScheduled:
+ description: 'The number of nodes that are running at least 1 daemon
+ pod and are supposed to run the daemon pod. More info: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/'
+ format: int32
+ type: integer
+ desiredNumberScheduled:
+ description: 'The total number of nodes that should be running the
+ daemon pod (including nodes correctly running the daemon pod). More
+ info: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/'
+ format: int32
+ type: integer
+ numberAvailable:
+ description: The number of nodes that should be running the daemon
+ pod and have one or more of the daemon pod running and available
+ (ready for at least spec.minReadySeconds)
+ format: int32
+ type: integer
+ numberMisscheduled:
+ description: 'The number of nodes that are running the daemon pod,
+ but are not supposed to run the daemon pod. More info: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/'
+ format: int32
+ type: integer
+ numberReady:
+ description: numberReady is the number of nodes that should be running
+ the daemon pod and have one or more of the daemon pod running with
+ a Ready Condition.
+ format: int32
+ type: integer
+ numberUnavailable:
+ description: The number of nodes that should be running the daemon
+ pod and have none of the daemon pod running and available (ready
+ for at least spec.minReadySeconds)
+ format: int32
+ type: integer
+ observedGeneration:
+ description: The most recent generation observed by the daemon set
+ controller.
+ format: int64
+ type: integer
+ updatedNumberScheduled:
+ description: The total number of nodes that are running updated daemon
+ pod
+ format: int32
+ type: integer
+ required:
+ - currentNumberScheduled
+ - desiredNumberScheduled
+ - numberMisscheduled
+ - numberReady
type: object
type: object
served: true
storage: true
+ subresources:
+ status: {}
+`
+
+const ShadowDaemonSet = `---
+apiVersion: apiextensions.k8s.io/v1
+kind: CustomResourceDefinition
+metadata:
+ annotations:
+ controller-gen.kubebuilder.io/version: v0.11.0
+ creationTimestamp: null
+ name: shadowdaemonsets.kosmos.io
+spec:
+ group: kosmos.io
+ names:
+ kind: ShadowDaemonSet
+ listKind: ShadowDaemonSetList
+ plural: shadowdaemonsets
+ shortNames:
+ - ksds
+ singular: shadowdaemonset
+ scope: Namespaced
+ versions:
+ - additionalPrinterColumns:
+ - description: The desired number of pods.
+ jsonPath: .status.desiredNumberScheduled
+ name: DESIRED
+ type: integer
+ - description: The current number of pods.
+ jsonPath: .status.currentNumberScheduled
+ name: CURRENT
+ type: integer
+ - description: The ready number of pods.
+ jsonPath: .status.numberReady
+ name: READY
+ type: integer
+ - description: The updated number of pods.
+ jsonPath: .status.updatedNumberScheduled
+ name: UP-TO-DATE
+ type: integer
+ - description: The available number of pods.
+ jsonPath: .status.numberAvailable
+ name: AVAILABLE
+ type: integer
+ - description: CreationTimestamp is a timestamp representing the server time when
+ this object was created. It is not guaranteed to be set in happens-before
+ order across separate operations. Clients may not set this value. It is represented
+ in RFC3339 form and is in UTC.
+ jsonPath: .metadata.creationTimestamp
+ name: AGE
+ type: date
+ - description: The containers of the current daemonset.
+ jsonPath: .daemonSetSpec.template.spec.containers[*].name
+ name: CONTAINERS
+ priority: 1
+ type: string
+ - description: The images of the current daemonset.
+ jsonPath: .daemonSetSpec.template.spec.containers[*].image
+ name: IMAGES
+ priority: 1
+ type: string
+ name: v1alpha1
+ schema:
+ openAPIV3Schema:
+ properties:
+ apiVersion:
+ description: 'APIVersion defines the versioned schema of this representation
+ of an object. Servers should convert recognized schemas to the latest
+ internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
+ type: string
+ cluster:
+ type: string
+ daemonSetSpec:
+ description: DaemonSetSpec is the specification of a daemon set.
+ properties:
+ minReadySeconds:
+ description: The minimum number of seconds for which a newly created
+ DaemonSet pod should be ready without any of its container crashing,
+ for it to be considered available. Defaults to 0 (pod will be considered
+ available as soon as it is ready).
+ format: int32
+ type: integer
+ revisionHistoryLimit:
+ description: The number of old history to retain to allow rollback.
+ This is a pointer to distinguish between explicit zero and not specified.
+ Defaults to 10.
+ format: int32
+ type: integer
+ selector:
+ description: 'A label query over pods that are managed by the daemon
+ set. Must match in order to be controlled. It must match the pod
+ template''s labels. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors'
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list of label selector requirements.
+ The requirements are ANDed.
+ items:
+ description: A label selector requirement is a selector that
+ contains values, a key, and an operator that relates the key
+ and values.
+ properties:
+ key:
+ description: key is the label key that the selector applies
+ to.
+ type: string
+ operator:
+ description: operator represents a key's relationship to
+ a set of values. Valid operators are In, NotIn, Exists
+ and DoesNotExist.
+ type: string
+ values:
+ description: values is an array of string values. If the
+ operator is In or NotIn, the values array must be non-empty.
+ If the operator is Exists or DoesNotExist, the values
+ array must be empty. This array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value} pairs. A single
+ {key,value} in the matchLabels map is equivalent to an element
+ of matchExpressions, whose key field is "key", the operator
+ is "In", and the values array contains only "value". The requirements
+ are ANDed.
+ type: object
+ type: object
+ x-kubernetes-map-type: atomic
+ template:
+ description: 'An object that describes the pod that will be created.
+ The DaemonSet will create exactly one copy of this pod on every
+ node that matches the template''s node selector (or on every node
+ if no node selector is specified). More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#pod-template'
+ x-kubernetes-preserve-unknown-fields: true
+ updateStrategy:
+ description: An update strategy to replace existing DaemonSet pods
+ with new pods.
+ properties:
+ rollingUpdate:
+ description: 'Rolling update config params. Present only if type
+ = "RollingUpdate". --- TODO: Update this to follow our convention
+ for oneOf, whatever we decide it to be. Same as Deployment ` + "`" + `strategy.rollingUpdate` + "`" + `.
+ See https://github.com/kubernetes/kubernetes/issues/35345'
+ properties:
+ maxSurge:
+ anyOf:
+ - type: integer
+ - type: string
+ description: 'The maximum number of nodes with an existing
+ available DaemonSet pod that can have an updated DaemonSet
+ pod during an update. Value can be an absolute number
+ (ex: 5) or a percentage of desired pods (ex: 10%). This
+ can not be 0 if MaxUnavailable is 0. Absolute number is
+ calculated from percentage by rounding up to a minimum of
+ 1. Default value is 0. Example: when this is set to 30%,
+ at most 30% of the total number of nodes that should be
+ running the daemon pod (i.e. status.desiredNumberScheduled)
+ can have a new pod created before the old pod is marked
+ as deleted. The update starts by launching new pods on 30%
+ of nodes. Once an updated pod is available (Ready for at
+ least minReadySeconds) the old DaemonSet pod on that node
+ is marked deleted. If the old pod becomes unavailable for
+ any reason (Ready transitions to false, is evicted, or is
+ drained) an updated pod is immediately created on that
+ node without considering surge limits. Allowing surge implies
+ the possibility that the resources consumed by the daemonset
+ on any given node can double if the readiness check fails,
+ and so resource intensive daemonsets should take into account
+ that they may cause evictions during disruption.'
+ x-kubernetes-int-or-string: true
+ maxUnavailable:
+ anyOf:
+ - type: integer
+ - type: string
+ description: 'The maximum number of DaemonSet pods that can
+ be unavailable during the update. Value can be an absolute
+ number (ex: 5) or a percentage of total number of DaemonSet
+ pods at the start of the update (ex: 10%). Absolute number
+ is calculated from percentage by rounding up. This cannot
+ be 0 if MaxSurge is 0. Default value is 1. Example: when
+ this is set to 30%, at most 30% of the total number of nodes
+ that should be running the daemon pod (i.e. status.desiredNumberScheduled)
+ can have their pods stopped for an update at any given time.
+ The update starts by stopping at most 30% of those DaemonSet
+ pods and then brings up new DaemonSet pods in their place.
+ Once the new pods are available, it then proceeds onto other
+ DaemonSet pods, thus ensuring that at least 70% of original
+ number of DaemonSet pods are available at all times during
+ the update.'
+ x-kubernetes-int-or-string: true
+ type: object
+ type:
+ description: Type of daemon set update. Can be "RollingUpdate"
+ or "OnDelete". Default is RollingUpdate.
+ type: string
+ type: object
+ required:
+ - selector
+ - template
+ type: object
+ kind:
+ description: 'Kind is a string value representing the REST resource this
+ object represents. Servers may infer this from the endpoint the client
+ submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
+ type: string
+ metadata:
+ type: object
+ refType:
+ type: string
+ status:
+ description: DaemonSetStatus represents the current status of a daemon
+ set.
+ properties:
+ collisionCount:
+ description: Count of hash collisions for the DaemonSet. The DaemonSet
+ controller uses this field as a collision avoidance mechanism when
+ it needs to create the name for the newest ControllerRevision.
+ format: int32
+ type: integer
+ conditions:
+ description: Represents the latest available observations of a DaemonSet's
+ current state.
+ items:
+ description: DaemonSetCondition describes the state of a DaemonSet
+ at a certain point.
+ properties:
+ lastTransitionTime:
+ description: Last time the condition transitioned from one status
+ to another.
+ format: date-time
+ type: string
+ message:
+ description: A human readable message indicating details about
+ the transition.
+ type: string
+ reason:
+ description: The reason for the condition's last transition.
+ type: string
+ status:
+ description: Status of the condition, one of True, False, Unknown.
+ type: string
+ type:
+ description: Type of DaemonSet condition.
+ type: string
+ required:
+ - status
+ - type
+ type: object
+ type: array
+ currentNumberScheduled:
+ description: 'The number of nodes that are running at least 1 daemon
+ pod and are supposed to run the daemon pod. More info: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/'
+ format: int32
+ type: integer
+ desiredNumberScheduled:
+ description: 'The total number of nodes that should be running the
+ daemon pod (including nodes correctly running the daemon pod). More
+ info: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/'
+ format: int32
+ type: integer
+ numberAvailable:
+ description: The number of nodes that should be running the daemon
+ pod and have one or more of the daemon pod running and available
+ (ready for at least spec.minReadySeconds)
+ format: int32
+ type: integer
+ numberMisscheduled:
+ description: 'The number of nodes that are running the daemon pod,
+ but are not supposed to run the daemon pod. More info: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/'
+ format: int32
+ type: integer
+ numberReady:
+ description: numberReady is the number of nodes that should be running
+ the daemon pod and have one or more of the daemon pod running with
+ a Ready Condition.
+ format: int32
+ type: integer
+ numberUnavailable:
+ description: The number of nodes that should be running the daemon
+ pod and have none of the daemon pod running and available (ready
+ for at least spec.minReadySeconds)
+ format: int32
+ type: integer
+ observedGeneration:
+ description: The most recent generation observed by the daemon set
+ controller.
+ format: int64
+ type: integer
+ updatedNumberScheduled:
+ description: The total number of nodes that are running updated daemon
+ pods
+ format: int32
+ type: integer
+ required:
+ - currentNumberScheduled
+ - desiredNumberScheduled
+ - numberMisscheduled
+ - numberReady
+ type: object
+ required:
+ - refType
+ type: object
+ served: true
+ storage: true
+ subresources:
+ status: {}
`
-)
-type ClusterlinkReplace struct {
+type CRDReplace struct {
Namespace string
}
diff --git a/pkg/kosmosctl/manifest/manifest_daemonsets.go b/pkg/kosmosctl/manifest/manifest_daemonsets.go
index e7716a29c..0fec93712 100644
--- a/pkg/kosmosctl/manifest/manifest_daemonsets.go
+++ b/pkg/kosmosctl/manifest/manifest_daemonsets.go
@@ -34,9 +34,13 @@ spec:
imagePullPolicy: IfNotPresent
command:
- clusterlink-floater
+ securityContext:
+ privileged: true
env:
- name: "PORT"
value: "{{ .Port }}"
+ - name: "ENABLE_ANALYSIS"
+ value: "{{ .EnableAnalysis }}"
tolerations:
- effect: NoSchedule
operator: Exists
@@ -55,4 +59,5 @@ type DaemonSetReplace struct {
Port string
EnableHostNetwork bool `default:"false"`
+ EnableAnalysis bool `default:"false"`
}
diff --git a/pkg/kosmosctl/manifest/manifest_deployments.go b/pkg/kosmosctl/manifest/manifest_deployments.go
index 2681a61de..f888726b4 100644
--- a/pkg/kosmosctl/manifest/manifest_deployments.go
+++ b/pkg/kosmosctl/manifest/manifest_deployments.go
@@ -26,7 +26,7 @@ spec:
imagePullPolicy: IfNotPresent
command:
- clusterlink-network-manager
- - v=4
+ - --v=4
resources:
limits:
memory: 500Mi
@@ -36,11 +36,11 @@ spec:
memory: 500Mi
`
- ClusterlinkOperatorDeployment = `
+ KosmosOperatorDeployment = `
apiVersion: apps/v1
kind: Deployment
metadata:
- name: clusterlink-operator
+ name: kosmos-operator
namespace: {{ .Namespace }}
labels:
app: operator
@@ -54,7 +54,7 @@ spec:
labels:
app: operator
spec:
- serviceAccountName: clusterlink-operator
+ serviceAccountName: kosmos-operator
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
@@ -69,11 +69,11 @@ spec:
topologyKey: kubernetes.io/hostname
containers:
- name: operator
- image: {{ .ImageRepository }}/clusterlink-operator:v{{ .Version }}
+ image: {{ .ImageRepository }}/kosmos-operator:v{{ .Version }}
imagePullPolicy: IfNotPresent
command:
- - clusterlink-operator
- - --controlpanelconfig=/etc/clusterlink/kubeconfig
+ - kosmos-operator
+ - --controlpanelconfig=/etc/kosmos-operator/kubeconfig
resources:
limits:
memory: 200Mi
@@ -84,12 +84,10 @@ spec:
env:
- name: VERSION
value: v{{ .Version }}
- - name: CLUSTER_NAME
- value: {{ .ClusterName }}
- name: USE_PROXY
value: "{{ .UseProxy }}"
volumeMounts:
- - mountPath: /etc/clusterlink
+ - mountPath: /etc/kosmos-operator
name: proxy-config
readOnly: true
volumes:
@@ -99,7 +97,7 @@ spec:
`
- ClusterTreeKnodeManagerDeployment = `---
+ ClusterTreeClusterManagerDeployment = `---
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -117,29 +115,32 @@ spec:
labels:
app: clustertree-cluster-manager
spec:
- serviceAccountName: clustertree-cluster-manager
+ serviceAccountName: clustertree
containers:
- name: manager
image: {{ .ImageRepository }}/clustertree-cluster-manager:v{{ .Version }}
imagePullPolicy: IfNotPresent
- command:
- - clustertree-cluster-manager
- - --kube-api-qps=500
- - --kube-api-burst=1000
- - --kubeconfig=/etc/kube/config
- - --leader-elect=false
+ env:
+ - name: APISERVER_CERT_LOCATION
+ value: /etc/cluster-tree/cert/cert.pem
+ - name: APISERVER_KEY_LOCATION
+ value: /etc/cluster-tree/cert/key.pem
+ - name: LEAF_NODE_IP
+ valueFrom:
+ fieldRef:
+ fieldPath: status.podIP
volumeMounts:
- - mountPath: /etc/kube
- name: config-volume
+ - name: credentials
+ mountPath: "/etc/cluster-tree/cert"
readOnly: true
+ command:
+ - clustertree-cluster-manager
+ - --multi-cluster-service=true
+ - --v=4
volumes:
- - configMap:
- defaultMode: 420
- items:
- - key: kubeconfig
- path: config
- name: host-kubeconfig
- name: config-volume
+ - name: credentials
+ secret:
+ secretName: clustertree-cluster-manager
`
CorednsDeployment = `
@@ -266,12 +267,6 @@ type DeploymentReplace struct {
Namespace string
ImageRepository string
Version string
-}
-type ClusterlinkDeploymentReplace struct {
- Namespace string
- Version string
- ClusterName string
- UseProxy string
- ImageRepository string
+ UseProxy string
}
diff --git a/pkg/kosmosctl/manifest/manifest_secrets.go b/pkg/kosmosctl/manifest/manifest_secrets.go
new file mode 100644
index 000000000..6d7aac0f0
--- /dev/null
+++ b/pkg/kosmosctl/manifest/manifest_secrets.go
@@ -0,0 +1,21 @@
+package manifest
+
+const (
+ ClusterTreeClusterManagerSecret = `---
+apiVersion: v1
+kind: Secret
+metadata:
+ name: clustertree-cluster-manager
+ namespace: {{ .Namespace }}
+type: Opaque
+data:
+ cert.pem: {{ .Cert }}
+ key.pem: {{ .Key }}
+`
+)
+
+type SecretReplace struct {
+ Namespace string
+ Cert string
+ Key string
+}
diff --git a/pkg/kosmosctl/manifest/manifest_serviceaccounts.go b/pkg/kosmosctl/manifest/manifest_serviceaccounts.go
index 53ba83077..a23318507 100644
--- a/pkg/kosmosctl/manifest/manifest_serviceaccounts.go
+++ b/pkg/kosmosctl/manifest/manifest_serviceaccounts.go
@@ -1,35 +1,43 @@
package manifest
const (
- ClusterlinkNetworkManagerServiceAccount = `
+ KosmosControlServiceAccount = `
apiVersion: v1
kind: ServiceAccount
metadata:
- name: clusterlink-network-manager
+ name: kosmos-control
namespace: {{ .Namespace }}
`
- ClusterlinkFloaterServiceAccount = `
+ KosmosOperatorServiceAccount = `
apiVersion: v1
kind: ServiceAccount
metadata:
- name: clusterlink-floater
+ name: kosmos-operator
+ namespace: {{ .Namespace }}
+`
+
+ ClusterlinkNetworkManagerServiceAccount = `
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: clusterlink-network-manager
namespace: {{ .Namespace }}
`
- ClusterlinkOperatorServiceAccount = `
+ ClusterlinkFloaterServiceAccount = `
apiVersion: v1
kind: ServiceAccount
metadata:
- name: clusterlink-operator
+ name: clusterlink-floater
namespace: {{ .Namespace }}
`
- ClusterTreeKnodeManagerServiceAccount = `
+ ClusterTreeServiceAccount = `
apiVersion: v1
kind: ServiceAccount
metadata:
- name: clustertree-cluster-manager
+ name: clustertree
namespace: {{ .Namespace }}
`
diff --git a/pkg/kosmosctl/rsmigrate/common.go b/pkg/kosmosctl/rsmigrate/common.go
new file mode 100644
index 000000000..70c022c7b
--- /dev/null
+++ b/pkg/kosmosctl/rsmigrate/common.go
@@ -0,0 +1,52 @@
+package rsmigrate
+
+import (
+ "context"
+ "fmt"
+
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/client-go/kubernetes"
+
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ kosmosversioned "github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned"
+ "github.com/kosmos.io/kosmos/pkg/utils"
+)
+
+func getClientFromLeafCluster(leafCluster *v1alpha1.Cluster) (kubernetes.Interface, kosmosversioned.Interface, error) {
+ // generate clientsets from the leafCluster kubeconfig
+ leafClusterKubeconfig := leafCluster.Spec.Kubeconfig
+ if len(leafClusterKubeconfig) == 0 {
+ return nil, nil, fmt.Errorf("leafcluster's kubeconfig is empty, unable to create clientsets")
+ }
+ k8sClient, err := utils.NewClientFromBytes(leafClusterKubeconfig)
+ if err != nil {
+ return nil, nil, fmt.Errorf("create kubernetes clientset error: %s", err)
+ }
+
+ kosmosClient, err := utils.NewKosmosClientFromBytes(leafClusterKubeconfig)
+ if err != nil {
+ return nil, nil, fmt.Errorf("create kosmos clientset for leafcluster error: %s", err)
+ }
+
+ return k8sClient, kosmosClient, nil
+}
+
+func completeLeafClusterOptions(leafClusterOptions *LeafClusterOptions, masterClient kosmosversioned.Interface) error {
+ // complete leafClusterOptions from the leafCluster name
+ if leafClusterOptions.LeafClusterName == "" {
+ return fmt.Errorf("get leafcluster error: leafcluster name can't be empty")
+ }
+ leafCluster, err := masterClient.KosmosV1alpha1().Clusters().Get(context.TODO(), leafClusterOptions.LeafClusterName, metav1.GetOptions{})
+ if err != nil {
+ return fmt.Errorf("get leafcluster error: %s", err)
+ }
+ leafClusterOptions.LeafCluster = leafCluster
+ k8sClient, kosmosClient, err := getClientFromLeafCluster(leafClusterOptions.LeafCluster)
+ if err != nil {
+ return fmt.Errorf("get leafcluster clientset error: %s", err)
+ }
+ leafClusterOptions.LeafClusterNativeClient = k8sClient
+ leafClusterOptions.LeafClusterKosmosClient = kosmosClient
+
+ return nil
+}
diff --git a/pkg/kosmosctl/rsmigrate/options.go b/pkg/kosmosctl/rsmigrate/options.go
new file mode 100644
index 000000000..58656f3a4
--- /dev/null
+++ b/pkg/kosmosctl/rsmigrate/options.go
@@ -0,0 +1,74 @@
+package rsmigrate
+
+import (
+ "fmt"
+ "os"
+ "path/filepath"
+
+ "github.com/spf13/cobra"
+ "k8s.io/client-go/kubernetes"
+ "k8s.io/client-go/util/homedir"
+ ctlutil "k8s.io/kubectl/pkg/cmd/util"
+
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ kosmosversioned "github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned"
+ "github.com/kosmos.io/kosmos/pkg/utils"
+)
+
+type LeafClusterOptions struct {
+ LeafClusterName string
+ LeafCluster *v1alpha1.Cluster
+ LeafClusterNativeClient kubernetes.Interface
+ // clientset for operating leafCluster-related Kosmos resources
+ LeafClusterKosmosClient kosmosversioned.Interface
+}
+
+type CommandOptions struct {
+ MasterKubeConfig string
+ MasterClient kubernetes.Interface
+ // clientset for operating Kosmos resources in the control plane
+ MasterKosmosClient kosmosversioned.Interface
+
+ SrcLeafClusterOptions *LeafClusterOptions
+ Namespace string
+}
+
+func (o *CommandOptions) Validate(cmd *cobra.Command) error {
+ return nil
+}
+
+func (o *CommandOptions) Complete(f ctlutil.Factory, cmd *cobra.Command) error {
+ var err error
+ var kubeConfigStream []byte
+ // get master kubernetes clientset
+ if len(o.MasterKubeConfig) > 0 {
+ kubeConfigStream, err = os.ReadFile(o.MasterKubeConfig)
+ } else {
+ kubeConfigStream, err = os.ReadFile(filepath.Join(homedir.HomeDir(), ".kube", "config"))
+ }
+ if err != nil {
+ return fmt.Errorf("get master kubeconfig failed: %s", err)
+ }
+
+ masterClient, err := utils.NewClientFromBytes(kubeConfigStream)
+ if err != nil {
+ return fmt.Errorf("create master clientset error: %s", err)
+ }
+ o.MasterClient = masterClient
+
+ kosmosClient, err := utils.NewKosmosClientFromBytes(kubeConfigStream)
+ if err != nil {
+ return fmt.Errorf("create master kosmos clientset error: %s", err)
+ }
+
+ o.MasterKosmosClient = kosmosClient
+
+ // get src leafCluster options
+ if cmd.Flags().Changed("leafcluster") {
+ err := completeLeafClusterOptions(o.SrcLeafClusterOptions, o.MasterKosmosClient)
+ if err != nil {
+ return err
+ }
+ }
+ return nil
+}
diff --git a/pkg/kosmosctl/rsmigrate/serviceexport.go b/pkg/kosmosctl/rsmigrate/serviceexport.go
new file mode 100644
index 000000000..edc67ef2e
--- /dev/null
+++ b/pkg/kosmosctl/rsmigrate/serviceexport.go
@@ -0,0 +1,97 @@
+package rsmigrate
+
+import (
+ "context"
+ "fmt"
+
+ "github.com/spf13/cobra"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ ctlutil "k8s.io/kubectl/pkg/cmd/util"
+ "k8s.io/kubectl/pkg/util/i18n"
+ "k8s.io/kubectl/pkg/util/templates"
+ mcsv1alpha1 "sigs.k8s.io/mcs-api/pkg/apis/v1alpha1"
+)
+
+var exportExample = templates.Examples(i18n.T(`
+ # Export service in control plane
+ kosmosctl export service foo -n namespacefoo --kubeconfig=[control plane kubeconfig]
+`))
+
+var exportErr = "kosmosctl export error"
+
+type CommandExportOptions struct {
+ *CommandOptions
+}
+
+// NewCmdExport exports a resource to the control plane.
+func NewCmdExport(f ctlutil.Factory) *cobra.Command {
+ o := &CommandExportOptions{CommandOptions: &CommandOptions{SrcLeafClusterOptions: &LeafClusterOptions{}}}
+
+ cmd := &cobra.Command{
+ Use: "export",
+ Short: i18n.T("Export resource to control plane data storage center"),
+ Long: "",
+ Example: exportExample,
+ SilenceUsage: true,
+ DisableFlagsInUseLine: true,
+ RunE: func(cmd *cobra.Command, args []string) error {
+ ctlutil.CheckErr(o.Complete(f, cmd))
+ ctlutil.CheckErr(o.Validate(cmd))
+ ctlutil.CheckErr(o.Run(cmd, args))
+ return nil
+ },
+ }
+
+ cmd.Flags().StringVarP(&o.MasterKubeConfig, "kubeconfig", "", "", "Absolute path to the master kubeconfig file.")
+ cmd.Flags().StringVarP(&o.Namespace, "namespace", "n", "default", "The namespace scope for this CLI request")
+
+ return cmd
+}
+
+func (o *CommandExportOptions) Complete(f ctlutil.Factory, cmd *cobra.Command) error {
+ err := o.CommandOptions.Complete(f, cmd)
+ if err != nil {
+ return err
+ }
+ return nil
+}
+
+func (o *CommandExportOptions) Validate(cmd *cobra.Command) error {
+ err := o.CommandOptions.Validate(cmd)
+ if err != nil {
+ return fmt.Errorf("%s, valid args error: %s", exportErr, err)
+ }
+
+ return nil
+}
+
+func (o *CommandExportOptions) Run(cmd *cobra.Command, args []string) error {
+ if len(args) == 0 {
+ return fmt.Errorf("args is empty, resource type must be specified")
+ }
+
+ switch args[0] {
+ case "svc", "services", "service":
+ if len(args[1:]) != 1 {
+ return fmt.Errorf("%s, exactly one NAME is required, got %d", exportErr, len(args[1:]))
+ }
+
+ var err error
+ serviceExport := &mcsv1alpha1.ServiceExport{}
+ serviceExport.Namespace = o.Namespace
+ serviceExport.Kind = "ServiceExport"
+ serviceExport.Name = args[1]
+
+ // Create the ServiceExport; if it already exists, return an error instead of updating it
+ _, err = o.MasterKosmosClient.MulticlusterV1alpha1().ServiceExports(o.Namespace).
+ Create(context.TODO(), serviceExport, metav1.CreateOptions{})
+ if err != nil {
+ return fmt.Errorf("%s, create %s %s/%s %s: %s", exportErr, serviceExport.Kind, o.Namespace, args[1], args[0], err)
+ }
+
+ fmt.Printf("Create %s %s/%s successfully!\n", serviceExport.Kind, o.Namespace, args[1])
+ default:
+ return fmt.Errorf("%s, unsupported export resource %s", exportErr, args[0])
+ }
+ return nil
+}
diff --git a/pkg/kosmosctl/rsmigrate/serviceimport.go b/pkg/kosmosctl/rsmigrate/serviceimport.go
new file mode 100644
index 000000000..b647ef4d4
--- /dev/null
+++ b/pkg/kosmosctl/rsmigrate/serviceimport.go
@@ -0,0 +1,144 @@
+package rsmigrate
+
+import (
+ "context"
+ "fmt"
+
+ "github.com/spf13/cobra"
+ v1 "k8s.io/api/core/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ ctlutil "k8s.io/kubectl/pkg/cmd/util"
+ "k8s.io/kubectl/pkg/util/i18n"
+ "k8s.io/kubectl/pkg/util/templates"
+ mcsv1alpha1 "sigs.k8s.io/mcs-api/pkg/apis/v1alpha1"
+)
+
+var importExample = templates.Examples(i18n.T(`
+ # Import service from control plane to leafcluster
+ kosmosctl import service foo -n namespacefoo --kubeconfig=[control plane kubeconfig] --to-leafcluster leafclusterfoo
+`))
+
+var importErr = "kosmosctl import error"
+
+type CommandImportOptions struct {
+ *CommandOptions
+ DstLeafClusterOptions *LeafClusterOptions
+}
+
+// NewCmdImport imports a resource into a leafCluster.
+func NewCmdImport(f ctlutil.Factory) *cobra.Command {
+ o := &CommandImportOptions{
+ CommandOptions: &CommandOptions{SrcLeafClusterOptions: &LeafClusterOptions{}},
+ DstLeafClusterOptions: &LeafClusterOptions{},
+ }
+
+ cmd := &cobra.Command{
+ Use: "import",
+ Short: i18n.T("Import resource to leafCluster"),
+ Long: "",
+ Example: importExample,
+ SilenceUsage: true,
+ DisableFlagsInUseLine: true,
+ RunE: func(cmd *cobra.Command, args []string) error {
+ ctlutil.CheckErr(o.Complete(f, cmd))
+ ctlutil.CheckErr(o.Validate(cmd))
+ ctlutil.CheckErr(o.Run(f, cmd, args))
+ return nil
+ },
+ }
+ cmd.Flags().StringVarP(&o.MasterKubeConfig, "kubeconfig", "", "", "Absolute path to the master kubeconfig file.")
+ cmd.Flags().StringVarP(&o.Namespace, "namespace", "n", "default", "The namespace scope for this CLI request")
+ cmd.Flags().StringVar(&o.DstLeafClusterOptions.LeafClusterName, "to-leafcluster", "", "Import resource to this destination leafcluster")
+
+ return cmd
+}
+
+func (o *CommandImportOptions) Complete(f ctlutil.Factory, cmd *cobra.Command) error {
+ err := o.CommandOptions.Complete(f, cmd)
+ if err != nil {
+ return err
+ }
+
+ // get dst leafCluster options
+ if cmd.Flags().Changed("to-leafcluster") {
+ err := completeLeafClusterOptions(o.DstLeafClusterOptions, o.MasterKosmosClient)
+ if err != nil {
+ return err
+ }
+ }
+
+ return nil
+}
+
+func (o *CommandImportOptions) Validate(cmd *cobra.Command) error {
+ err := o.CommandOptions.Validate(cmd)
+ if err != nil {
+ return fmt.Errorf("%s, valid args error: %s", importErr, err)
+ }
+
+ if !cmd.Flags().Changed("to-leafcluster") {
+ return fmt.Errorf("%s, required flag(s) 'to-leafcluster' not set", importErr)
+ }
+ return nil
+}
+
+func (o *CommandImportOptions) Run(f ctlutil.Factory, cmd *cobra.Command, args []string) error {
+ if len(args) == 0 {
+ return fmt.Errorf("args is empty, resource type must be specified")
+ }
+
+ switch args[0] {
+ case "svc", "services", "service":
+ if len(args[1:]) != 1 {
+ return fmt.Errorf("%s, exactly one NAME is required, got %d", importErr, len(args[1:]))
+ }
+
+ var srcService *v1.Service
+ var err error
+ if o.SrcLeafClusterOptions.LeafClusterName != "" {
+ srcService, err = o.SrcLeafClusterOptions.LeafClusterNativeClient.CoreV1().Services(o.Namespace).Get(context.TODO(), args[1], metav1.GetOptions{})
+ } else {
+ srcService, err = o.MasterClient.CoreV1().Services(o.Namespace).Get(context.TODO(), args[1], metav1.GetOptions{})
+ }
+ if err != nil {
+ return fmt.Errorf("%s, get source service %s/%s error: %s", importErr, o.Namespace, args[1], err)
+ }
+
+ serviceImport := &mcsv1alpha1.ServiceImport{}
+ serviceImport.Kind = "ServiceImport"
+ serviceImport.Namespace = o.Namespace
+ serviceImport.Name = args[1]
+ serviceImport.Spec.Type = "ClusterSetIP"
+ if srcService.Spec.ClusterIP == "None" || len(srcService.Spec.ClusterIP) == 0 {
+ serviceImport.Spec.Type = "Headless"
+ }
+
+ serviceImport.Spec.Ports = make([]mcsv1alpha1.ServicePort, len(srcService.Spec.Ports))
+ for portIndex, svcPort := range srcService.Spec.Ports {
+ serviceImport.Spec.Ports[portIndex] = mcsv1alpha1.ServicePort{
+ Name: svcPort.Name,
+ Protocol: svcPort.Protocol,
+ AppProtocol: svcPort.AppProtocol,
+ Port: svcPort.Port,
+ }
+ }
+
+ // Create the ServiceImport; if it already exists, return an error instead of updating it
+ if len(o.DstLeafClusterOptions.LeafClusterName) != 0 {
+ _, err = o.DstLeafClusterOptions.LeafClusterKosmosClient.MulticlusterV1alpha1().ServiceImports(o.Namespace).
+ Create(context.TODO(), serviceImport, metav1.CreateOptions{})
+ } else {
+ _, err = o.MasterKosmosClient.MulticlusterV1alpha1().ServiceImports(o.Namespace).
+ Create(context.TODO(), serviceImport, metav1.CreateOptions{})
+ }
+ if err != nil {
+ return fmt.Errorf("%s, create %s %s/%s %s: %s", importErr, serviceImport.Kind, o.Namespace, args[1], args[0], err)
+ }
+
+ fmt.Printf("Create %s %s/%s successfully!\n", serviceImport.Kind, o.Namespace, args[1])
+ default:
+ return fmt.Errorf("%s, unsupported import resource %s", importErr, args[0])
+ }
+
+ return nil
+}
diff --git a/pkg/kosmosctl/uninstall/uninstall.go b/pkg/kosmosctl/uninstall/uninstall.go
index 5dcb13b16..97930452b 100644
--- a/pkg/kosmosctl/uninstall/uninstall.go
+++ b/pkg/kosmosctl/uninstall/uninstall.go
@@ -8,7 +8,6 @@ import (
extensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
- "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/client-go/dynamic"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
@@ -16,20 +15,40 @@ import (
"k8s.io/klog/v2"
ctlutil "k8s.io/kubectl/pkg/cmd/util"
"k8s.io/kubectl/pkg/util/i18n"
+ "k8s.io/kubectl/pkg/util/templates"
+ "github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned"
"github.com/kosmos.io/kosmos/pkg/kosmosctl/manifest"
"github.com/kosmos.io/kosmos/pkg/kosmosctl/util"
"github.com/kosmos.io/kosmos/pkg/utils"
)
-type CommandUninstallOptions struct {
- Namespace string
- Module string
- HostKubeConfig string
+var uninstallExample = templates.Examples(i18n.T(`
+ # Uninstall all modules from the Kosmos control plane, e.g:
+ kosmosctl uninstall
+
+ # Uninstall the Kosmos control plane; specify a dedicated control plane cluster kubeconfig if needed, e.g:
+ kosmosctl uninstall --kubeconfig ~/kubeconfig/cluster-kubeconfig
+
+ # Uninstall the clusterlink module from the Kosmos control plane, e.g:
+ kosmosctl uninstall -m clusterlink
- Client kubernetes.Interface
- DynamicClient *dynamic.DynamicClient
- ExtensionsClient extensionsclient.Interface
+ # Uninstall the clustertree module from the Kosmos control plane, e.g:
+ kosmosctl uninstall -m clustertree
+
+ # Uninstall coredns module from Kosmos control plane, e.g:
+ kosmosctl uninstall -m coredns
+`))
+
+type CommandUninstallOptions struct {
+ Namespace string
+ Module string
+ KubeConfig string
+
+ KosmosClient versioned.Interface
+ K8sClient kubernetes.Interface
+ K8sDynamicClient *dynamic.DynamicClient
+ K8sExtensionsClient extensionsclient.Interface
}
// NewCmdUninstall Uninstall the Kosmos control plane in a Kubernetes cluster.
@@ -40,7 +59,7 @@ func NewCmdUninstall(f ctlutil.Factory) *cobra.Command {
Use: "uninstall",
Short: i18n.T("Uninstall the Kosmos control plane in a Kubernetes cluster"),
Long: "",
- Example: "",
+ Example: uninstallExample,
SilenceUsage: true,
DisableFlagsInUseLine: true,
RunE: func(cmd *cobra.Command, args []string) error {
@@ -53,8 +72,8 @@ func NewCmdUninstall(f ctlutil.Factory) *cobra.Command {
flags := cmd.Flags()
flags.StringVarP(&o.Namespace, "namespace", "n", utils.DefaultNamespace, "Kosmos namespace.")
- flags.StringVarP(&o.Module, "module", "m", utils.DefaultInstallModule, "Kosmos specify the module to uninstall.")
- flags.StringVar(&o.HostKubeConfig, "host-kubeconfig", "", "Absolute path to the special host kubeconfig file.")
+ flags.StringVarP(&o.Module, "module", "m", utils.All, "Kosmos specify the module to uninstall.")
+ flags.StringVar(&o.KubeConfig, "kubeconfig", "", "Absolute path to the special kubeconfig file.")
return cmd
}
@@ -63,10 +82,10 @@ func (o *CommandUninstallOptions) Complete(f ctlutil.Factory) error {
var config *rest.Config
var err error
- if len(o.HostKubeConfig) > 0 {
- config, err = clientcmd.BuildConfigFromFlags("", o.HostKubeConfig)
+ if len(o.KubeConfig) > 0 {
+ config, err = clientcmd.BuildConfigFromFlags("", o.KubeConfig)
if err != nil {
- return fmt.Errorf("kosmosctl uninstall complete error, generate host config failed: %s", err)
+ return fmt.Errorf("kosmosctl uninstall complete error, generate config failed: %s", err)
}
} else {
config, err = f.ToRESTConfig()
@@ -75,17 +94,22 @@ func (o *CommandUninstallOptions) Complete(f ctlutil.Factory) error {
}
}
- o.Client, err = kubernetes.NewForConfig(config)
+ o.KosmosClient, err = versioned.NewForConfig(config)
+ if err != nil {
+ return fmt.Errorf("kosmosctl uninstall complete error, generate Kosmos client failed: %v", err)
+ }
+
+ o.K8sClient, err = kubernetes.NewForConfig(config)
if err != nil {
return fmt.Errorf("kosmosctl uninstall complete error, generate basic client failed: %v", err)
}
- o.DynamicClient, err = dynamic.NewForConfig(config)
+ o.K8sDynamicClient, err = dynamic.NewForConfig(config)
if err != nil {
return fmt.Errorf("kosmosctl join complete error, generate dynamic client failed: %s", err)
}
- o.ExtensionsClient, err = extensionsclient.NewForConfig(config)
+ o.K8sExtensionsClient, err = extensionsclient.NewForConfig(config)
if err != nil {
return fmt.Errorf("kosmosctl uninstall complete error, generate extensions client failed: %v", err)
}
@@ -128,8 +152,8 @@ func (o *CommandUninstallOptions) Run() error {
if err != nil {
return err
}
- err = o.Client.CoreV1().Namespaces().Delete(context.TODO(), o.Namespace, metav1.DeleteOptions{})
- if err != nil {
+ err = o.K8sClient.CoreV1().Namespaces().Delete(context.TODO(), o.Namespace, metav1.DeleteOptions{})
+ if err != nil && !apierrors.IsNotFound(err) {
return fmt.Errorf("kosmosctl uninstall all module run error, namespace options failed: %v", err)
}
}
@@ -139,169 +163,274 @@ func (o *CommandUninstallOptions) Run() error {
func (o *CommandUninstallOptions) runClusterlink() error {
klog.Info("Start uninstalling clusterlink from kosmos control plane...")
- clusterlinkDeployment, err := util.GenerateDeployment(manifest.ClusterlinkNetworkManagerDeployment, nil)
+ clusterlinkDeploy, err := util.GenerateDeployment(manifest.ClusterlinkNetworkManagerDeployment, manifest.DeploymentReplace{
+ Namespace: o.Namespace,
+ })
if err != nil {
return err
}
- err = o.Client.AppsV1().Deployments(o.Namespace).Delete(context.Background(), clusterlinkDeployment.Name, metav1.DeleteOptions{})
- if err != nil && !apierrors.IsNotFound(err) {
- return fmt.Errorf("kosmosctl uninstall clusterlink run error, deployment options failed: %v", err)
+ err = o.K8sClient.AppsV1().Deployments(o.Namespace).Delete(context.Background(), clusterlinkDeploy.Name, metav1.DeleteOptions{})
+ if err != nil {
+ if !apierrors.IsNotFound(err) {
+ return fmt.Errorf("kosmosctl uninstall clusterlink run error, deployment options failed: %v", err)
+ }
+ } else {
+ klog.Info("Deployment " + clusterlinkDeploy.Name + " is deleted.")
}
- klog.Info("Deployment " + clusterlinkDeployment.Name + " is deleted.")
- var clusters, clusternodes, nodeconfigs *unstructured.UnstructuredList
- clusters, err = o.DynamicClient.Resource(util.ClusterGVR).List(context.TODO(), metav1.ListOptions{})
- if err != nil && !apierrors.IsNotFound(err) {
- return fmt.Errorf("kosmosctl uninstall clusterlink run error, list cluster failed: %v", err)
+ clusters, err := o.KosmosClient.KosmosV1alpha1().Clusters().List(context.TODO(), metav1.ListOptions{})
+ if err != nil {
+ if !apierrors.IsNotFound(err) {
+ return fmt.Errorf("kosmosctl uninstall clusterlink run error, list cluster failed: %v", err)
+ }
} else if clusters != nil && len(clusters.Items) > 0 {
klog.Info("kosmosctl uninstall warning, skip removing cluster crd because cr instance exists")
} else {
- clusterCRD, _ := util.GenerateCustomResourceDefinition(manifest.ClusterlinkCluster, nil)
+ clusterCRD, err := util.GenerateCustomResourceDefinition(manifest.Cluster, nil)
if err != nil {
return err
}
- err = o.ExtensionsClient.ApiextensionsV1().CustomResourceDefinitions().Delete(context.Background(), clusterCRD.Name, metav1.DeleteOptions{})
+ err = o.K8sExtensionsClient.ApiextensionsV1().CustomResourceDefinitions().Delete(context.Background(), clusterCRD.Name, metav1.DeleteOptions{})
if err != nil && !apierrors.IsNotFound(err) {
return fmt.Errorf("kosmosctl uninstall clusterlink run error, cluster crd delete failed: %v", err)
}
klog.Info("CRD " + clusterCRD.Name + " is deleted.")
}
- clusternodes, err = o.DynamicClient.Resource(util.ClusterNodeGVR).List(context.TODO(), metav1.ListOptions{})
- if err != nil && !apierrors.IsNotFound(err) {
- return fmt.Errorf("kosmosctl uninstall clusterlink run error, list clusternode failed: %v", err)
+ clusternodes, err := o.KosmosClient.KosmosV1alpha1().ClusterNodes().List(context.TODO(), metav1.ListOptions{})
+ if err != nil {
+ if !apierrors.IsNotFound(err) {
+ return fmt.Errorf("kosmosctl uninstall clusterlink run error, list clusternode failed: %v", err)
+ }
} else if clusternodes != nil && len(clusternodes.Items) > 0 {
klog.Info("kosmosctl uninstall warning, skip removing clusternode crd because cr instance exists")
} else {
- clusternodeCRD, _ := util.GenerateCustomResourceDefinition(manifest.ClusterlinkClusterNode, nil)
+ clusternodeCRD, err := util.GenerateCustomResourceDefinition(manifest.ClusterNode, nil)
if err != nil {
return err
}
- err = o.ExtensionsClient.ApiextensionsV1().CustomResourceDefinitions().Delete(context.Background(), clusternodeCRD.Name, metav1.DeleteOptions{})
+ err = o.K8sExtensionsClient.ApiextensionsV1().CustomResourceDefinitions().Delete(context.Background(), clusternodeCRD.Name, metav1.DeleteOptions{})
if err != nil && !apierrors.IsNotFound(err) {
return fmt.Errorf("kosmosctl uninstall clusterlink run error, clusternode crd delete failed: %v", err)
}
klog.Info("CRD " + clusternodeCRD.Name + " is deleted.")
}
- nodeconfigs, err = o.DynamicClient.Resource(util.NodeConfigGVR).List(context.TODO(), metav1.ListOptions{})
- if err != nil && !apierrors.IsNotFound(err) {
- return fmt.Errorf("kosmosctl uninstall clusterlink run error, list nodeconfig failed: %v", err)
+ nodeconfigs, err := o.KosmosClient.KosmosV1alpha1().NodeConfigs().List(context.TODO(), metav1.ListOptions{})
+ if err != nil {
+ if !apierrors.IsNotFound(err) {
+ return fmt.Errorf("kosmosctl uninstall clusterlink run error, list nodeconfig failed: %v", err)
+ }
} else if nodeconfigs != nil && len(nodeconfigs.Items) > 0 {
klog.Info("kosmosctl uninstall warning, skip removing nodeconfig crd because cr instance exists")
} else {
- nodeConfigCRD, _ := util.GenerateCustomResourceDefinition(manifest.ClusterlinkNodeConfig, nil)
+ nodeConfigCRD, err := util.GenerateCustomResourceDefinition(manifest.NodeConfig, nil)
if err != nil {
return err
}
- err = o.ExtensionsClient.ApiextensionsV1().CustomResourceDefinitions().Delete(context.Background(), nodeConfigCRD.Name, metav1.DeleteOptions{})
+ err = o.K8sExtensionsClient.ApiextensionsV1().CustomResourceDefinitions().Delete(context.Background(), nodeConfigCRD.Name, metav1.DeleteOptions{})
if err != nil && !apierrors.IsNotFound(err) {
return fmt.Errorf("kosmosctl uninstall clusterlink run error, clusternode crd delete failed: %v", err)
}
klog.Info("CRD " + nodeConfigCRD.Name + " is deleted.")
}
- clusterlinkClusterRoleBinding, err := util.GenerateClusterRoleBinding(manifest.ClusterlinkNetworkManagerClusterRoleBinding, nil)
+ clusterlinkCRB, err := util.GenerateClusterRoleBinding(manifest.ClusterlinkNetworkManagerClusterRoleBinding, nil)
if err != nil {
return err
}
- err = o.Client.RbacV1().ClusterRoleBindings().Delete(context.TODO(), clusterlinkClusterRoleBinding.Name, metav1.DeleteOptions{})
- if err != nil && !apierrors.IsNotFound(err) {
- return fmt.Errorf("kosmosctl uninstall clusterlink run error, clusterrolebinding options failed: %v", err)
+ err = o.K8sClient.RbacV1().ClusterRoleBindings().Delete(context.TODO(), clusterlinkCRB.Name, metav1.DeleteOptions{})
+ if err != nil {
+ if !apierrors.IsNotFound(err) {
+ return fmt.Errorf("kosmosctl uninstall clusterlink run error, clusterrolebinding options failed: %v", err)
+ }
+ } else {
+ klog.Info("ClusterRoleBinding " + clusterlinkCRB.Name + " is deleted.")
}
- klog.Info("ClusterRoleBinding " + clusterlinkClusterRoleBinding.Name + " is deleted.")
- clusterlinkClusterRole, err := util.GenerateClusterRole(manifest.ClusterlinkNetworkManagerClusterRole, nil)
+ clusterlinkCR, err := util.GenerateClusterRole(manifest.ClusterlinkNetworkManagerClusterRole, nil)
if err != nil {
return err
}
- err = o.Client.RbacV1().ClusterRoles().Delete(context.TODO(), clusterlinkClusterRole.Name, metav1.DeleteOptions{})
- if err != nil && !apierrors.IsNotFound(err) {
- return fmt.Errorf("kosmosctl install clusterlink run error, clusterrole options failed: %v", err)
+ err = o.K8sClient.RbacV1().ClusterRoles().Delete(context.TODO(), clusterlinkCR.Name, metav1.DeleteOptions{})
+ if err != nil {
+ if !apierrors.IsNotFound(err) {
+ return fmt.Errorf("kosmosctl uninstall clusterlink run error, clusterrole options failed: %v", err)
+ }
+ } else {
+ klog.Info("ClusterRole " + clusterlinkCR.Name + " is deleted.")
}
- klog.Info("ClusterRole " + clusterlinkClusterRole.Name + " is deleted.")
- clusterlinkServiceAccount, err := util.GenerateServiceAccount(manifest.ClusterlinkNetworkManagerServiceAccount, nil)
+ clusterlinkSA, err := util.GenerateServiceAccount(manifest.ClusterlinkNetworkManagerServiceAccount, nil)
if err != nil {
return err
}
- err = o.Client.CoreV1().ServiceAccounts(o.Namespace).Delete(context.TODO(), clusterlinkServiceAccount.Name, metav1.DeleteOptions{})
- if err != nil && !apierrors.IsNotFound(err) {
- return fmt.Errorf("kosmosctl uninstall clusterlink run error, serviceaccount options failed: %v", err)
+ err = o.K8sClient.CoreV1().ServiceAccounts(o.Namespace).Delete(context.TODO(), clusterlinkSA.Name, metav1.DeleteOptions{})
+ if err != nil {
+ if !apierrors.IsNotFound(err) {
+ return fmt.Errorf("kosmosctl uninstall clusterlink run error, serviceaccount options failed: %v", err)
+ }
+ } else {
+ klog.Info("ServiceAccount " + clusterlinkSA.Name + " is deleted.")
}
- klog.Info("ServiceAccount " + clusterlinkServiceAccount.Name + " is deleted.")
- klog.Info("Clusterlink was uninstalled.")
+ clustertreeDeploy, err := util.GenerateDeployment(manifest.ClusterTreeClusterManagerDeployment, nil)
+ if err != nil {
+ return err
+ }
+ _, err = o.K8sClient.AppsV1().Deployments(o.Namespace).Get(context.Background(), clustertreeDeploy.Name, metav1.GetOptions{})
+ if err != nil {
+ if apierrors.IsNotFound(err) {
+ operatorDeploy, err := util.GenerateDeployment(manifest.KosmosOperatorDeployment, manifest.DeploymentReplace{
+ Namespace: o.Namespace,
+ })
+ if err != nil {
+ return fmt.Errorf("kosmosctl uninstall clusterlink run error, generate operator deployment failed: %s", err)
+ }
+ err = o.K8sClient.AppsV1().Deployments(o.Namespace).Delete(context.Background(), operatorDeploy.Name, metav1.DeleteOptions{})
+ if err != nil {
+ if !apierrors.IsNotFound(err) {
+ return fmt.Errorf("kosmosctl uninstall clusterlink run error, operator deployment options failed: %v", err)
+ }
+ } else {
+ klog.Info("Deployment " + operatorDeploy.Name + " is deleted.")
+ }
+ }
+ } else {
+ klog.Info("Kosmos-Clustertree is still running, skipping uninstall of Kosmos-Operator.")
+ }
+
+ klog.Info("Clusterlink uninstalled.")
return nil
}
func (o *CommandUninstallOptions) runClustertree() error {
klog.Info("Start uninstalling clustertree from kosmos control plane...")
- clustertreeDeployment, err := util.GenerateDeployment(manifest.ClusterTreeKnodeManagerDeployment, nil)
+ clustertreeDeploy, err := util.GenerateDeployment(manifest.ClusterTreeClusterManagerDeployment, manifest.DeploymentReplace{
+ Namespace: o.Namespace,
+ })
if err != nil {
return err
}
- err = o.Client.AppsV1().Deployments(o.Namespace).Delete(context.Background(), clustertreeDeployment.Name, metav1.DeleteOptions{})
- if err != nil && !apierrors.IsNotFound(err) {
- return fmt.Errorf("kosmosctl uninstall clustertree run error, deployment options failed: %v", err)
+ err = o.K8sClient.AppsV1().Deployments(o.Namespace).Delete(context.Background(), clustertreeDeploy.Name, metav1.DeleteOptions{})
+ if err != nil {
+ if !apierrors.IsNotFound(err) {
+ return fmt.Errorf("kosmosctl uninstall clustertree run error, deployment options failed: %v", err)
+ }
+ } else {
+ klog.Info("Deployment " + clustertreeDeploy.Name + " is deleted.")
+ clustertreeSecret, err := util.GenerateSecret(manifest.ClusterTreeClusterManagerSecret, manifest.SecretReplace{
+ Namespace: o.Namespace,
+ Cert: "",
+ Key: "",
+ })
+ if err != nil {
+ return err
+ }
+ err = o.K8sClient.CoreV1().Secrets(o.Namespace).Delete(context.Background(), clustertreeSecret.Name, metav1.DeleteOptions{})
+ if err != nil {
+ if !apierrors.IsNotFound(err) {
+ return fmt.Errorf("kosmosctl uninstall clustertree run error, secret options failed: %v", err)
+ }
+ } else {
+ klog.Info("Secret " + clustertreeSecret.Name + " is deleted.")
+ }
}
- klog.Info("Deployment " + clustertreeDeployment.Name + " is deleted.")
- var knodes *unstructured.UnstructuredList
- knodes, err = o.DynamicClient.Resource(util.KnodeGVR).List(context.TODO(), metav1.ListOptions{})
- if err != nil && !apierrors.IsNotFound(err) {
- return fmt.Errorf("kosmosctl uninstall clustertree run error, list knode failed: %v", err)
- } else if knodes != nil && len(knodes.Items) > 0 {
- klog.Info("kosmosctl uninstall warning, skip removing knode crd because cr instance exists")
+ clusters, err := o.KosmosClient.KosmosV1alpha1().Clusters().List(context.TODO(), metav1.ListOptions{})
+ if err != nil {
+ if !apierrors.IsNotFound(err) {
+ return fmt.Errorf("kosmosctl uninstall clustertree run error, list cluster failed: %v", err)
+ }
+ } else if clusters != nil && len(clusters.Items) > 0 {
+ klog.Info("kosmosctl uninstall warning, skip removing cluster crd because cr instance exists")
} else {
- knodeCRD, _ := util.GenerateCustomResourceDefinition(manifest.ClusterTreeKnode, nil)
+ clusterCRD, err := util.GenerateCustomResourceDefinition(manifest.Cluster, nil)
if err != nil {
return err
}
- err = o.ExtensionsClient.ApiextensionsV1().CustomResourceDefinitions().Delete(context.Background(), knodeCRD.Name, metav1.DeleteOptions{})
+ err = o.K8sExtensionsClient.ApiextensionsV1().CustomResourceDefinitions().Delete(context.Background(), clusterCRD.Name, metav1.DeleteOptions{})
if err != nil && !apierrors.IsNotFound(err) {
- return fmt.Errorf("kosmosctl uninstall clustertree run error, knode crd delete failed: %v", err)
+ return fmt.Errorf("kosmosctl uninstall clustertree run error, cluster crd delete failed: %v", err)
}
- klog.Info("CRD " + knodeCRD.Name + " is deleted.")
+ klog.Info("CRD " + clusterCRD.Name + " is deleted.")
}
- err = o.Client.CoreV1().ConfigMaps(utils.DefaultNamespace).Delete(context.TODO(), utils.HostKubeConfigName, metav1.DeleteOptions{})
- if err != nil && !apierrors.IsNotFound(err) {
- return fmt.Errorf("kosmosctl uninstall clustertree run error, configmap options failed: %v", err)
+ err = o.K8sClient.CoreV1().ConfigMaps(o.Namespace).Delete(context.TODO(), utils.HostKubeConfigName, metav1.DeleteOptions{})
+ if err != nil {
+ if !apierrors.IsNotFound(err) {
+ return fmt.Errorf("kosmosctl uninstall clustertree run error, configmap options failed: %v", err)
+ }
+ } else {
+ klog.Info("ConfigMap " + utils.HostKubeConfigName + " is deleted.")
}
- klog.Info("ConfigMap " + utils.HostKubeConfigName + " is deleted.")
- clustertreeClusterRoleBinding, err := util.GenerateClusterRoleBinding(manifest.ClusterTreeKnodeManagerClusterRoleBinding, nil)
+ clustertreeCRB, err := util.GenerateClusterRoleBinding(manifest.ClusterTreeClusterRoleBinding, nil)
if err != nil {
return err
}
- err = o.Client.RbacV1().ClusterRoleBindings().Delete(context.TODO(), clustertreeClusterRoleBinding.Name, metav1.DeleteOptions{})
- if err != nil && !apierrors.IsNotFound(err) {
- return fmt.Errorf("kosmosctl uninstall clustertree run error, clusterrolebinding options failed: %v", err)
+ err = o.K8sClient.RbacV1().ClusterRoleBindings().Delete(context.TODO(), clustertreeCRB.Name, metav1.DeleteOptions{})
+ if err != nil {
+ if !apierrors.IsNotFound(err) {
+ return fmt.Errorf("kosmosctl uninstall clustertree run error, clusterrolebinding options failed: %v", err)
+ }
+ } else {
+ klog.Info("ClusterRoleBinding " + clustertreeCRB.Name + " is deleted.")
}
- klog.Info("ClusterRoleBinding " + clustertreeClusterRoleBinding.Name + " is deleted.")
- clustertreeClusterRole, err := util.GenerateClusterRole(manifest.ClusterTreeKnodeManagerClusterRole, nil)
+ clustertreeCR, err := util.GenerateClusterRole(manifest.ClusterTreeClusterRole, nil)
if err != nil {
return err
}
- err = o.Client.RbacV1().ClusterRoles().Delete(context.TODO(), clustertreeClusterRole.Name, metav1.DeleteOptions{})
- if err != nil && !apierrors.IsNotFound(err) {
- return fmt.Errorf("kosmosctl uninstall clustertree run error, clusterrole options failed: %v", err)
+ err = o.K8sClient.RbacV1().ClusterRoles().Delete(context.TODO(), clustertreeCR.Name, metav1.DeleteOptions{})
+ if err != nil {
+ if !apierrors.IsNotFound(err) {
+ return fmt.Errorf("kosmosctl uninstall clustertree run error, clusterrole options failed: %v", err)
+ }
+ } else {
+ klog.Info("ClusterRole " + clustertreeCR.Name + " is deleted.")
}
- klog.Info("ClusterRole " + clustertreeClusterRole.Name + " is deleted.")
- clustertreeServiceAccount, err := util.GenerateServiceAccount(manifest.ClusterTreeKnodeManagerServiceAccount, nil)
+ clustertreeSA, err := util.GenerateServiceAccount(manifest.ClusterTreeServiceAccount, nil)
if err != nil {
return err
}
- err = o.Client.CoreV1().ServiceAccounts(o.Namespace).Delete(context.TODO(), clustertreeServiceAccount.Name, metav1.DeleteOptions{})
- if err != nil && !apierrors.IsNotFound(err) {
- return fmt.Errorf("kosmosctl install clustertree run error, serviceaccount options failed: %v", err)
+ err = o.K8sClient.CoreV1().ServiceAccounts(o.Namespace).Delete(context.TODO(), clustertreeSA.Name, metav1.DeleteOptions{})
+ if err != nil {
+ if !apierrors.IsNotFound(err) {
+ return fmt.Errorf("kosmosctl uninstall clustertree run error, serviceaccount options failed: %v", err)
+ }
+ } else {
+ klog.Info("ServiceAccount " + clustertreeSA.Name + " is deleted.")
+ }
+
+ clusterlinkDeploy, err := util.GenerateDeployment(manifest.ClusterlinkNetworkManagerDeployment, nil)
+ if err != nil {
+ return err
+ }
+ _, err = o.K8sClient.AppsV1().Deployments(o.Namespace).Get(context.Background(), clusterlinkDeploy.Name, metav1.GetOptions{})
+ if err != nil {
+ if apierrors.IsNotFound(err) {
+ operatorDeploy, err := util.GenerateDeployment(manifest.KosmosOperatorDeployment, manifest.DeploymentReplace{
+ Namespace: o.Namespace,
+ })
+ if err != nil {
+ return fmt.Errorf("kosmosctl uninstall clustertree run error, generate operator deployment failed: %s", err)
+ }
+ err = o.K8sClient.AppsV1().Deployments(o.Namespace).Delete(context.Background(), operatorDeploy.Name, metav1.DeleteOptions{})
+ if err != nil {
+ if !apierrors.IsNotFound(err) {
+ return fmt.Errorf("kosmosctl uninstall clustertree run error, operator deployment options failed: %v", err)
+ }
+ } else {
+ klog.Info("Deployment " + operatorDeploy.Name + " is deleted.")
+ }
+ }
+ } else {
+ klog.Info("Kosmos-Clusterlink is still running, skipping uninstall of Kosmos-Operator.")
}
- klog.Info("ServiceAccount " + clustertreeServiceAccount.Name + " is deleted.")
- klog.Info("Clustertree was uninstalled.")
+ klog.Info("Clustertree uninstalled.")
return nil
}
@@ -311,7 +440,7 @@ func (o *CommandUninstallOptions) runCoredns() error {
if err != nil {
return err
}
- err = o.Client.AppsV1().Deployments(o.Namespace).Delete(context.Background(), deploy.Name, metav1.DeleteOptions{})
+ err = o.K8sClient.AppsV1().Deployments(o.Namespace).Delete(context.Background(), deploy.Name, metav1.DeleteOptions{})
if err != nil && !apierrors.IsNotFound(err) {
return fmt.Errorf("kosmosctl uninstall coredns run error, deployment options failed: %v", err)
}
@@ -321,7 +450,7 @@ func (o *CommandUninstallOptions) runCoredns() error {
if err != nil {
return err
}
- err = o.Client.CoreV1().Services(o.Namespace).Delete(context.TODO(), svc.Name, metav1.DeleteOptions{})
+ err = o.K8sClient.CoreV1().Services(o.Namespace).Delete(context.TODO(), svc.Name, metav1.DeleteOptions{})
if err != nil && !apierrors.IsNotFound(err) {
return fmt.Errorf("kosmosctl uninstall coredns run error, service options failed: %v", err)
}
@@ -331,7 +460,7 @@ func (o *CommandUninstallOptions) runCoredns() error {
if err != nil {
return err
}
- err = o.Client.CoreV1().ConfigMaps(o.Namespace).Delete(context.TODO(), coreFile.Name, metav1.DeleteOptions{})
+ err = o.K8sClient.CoreV1().ConfigMaps(o.Namespace).Delete(context.TODO(), coreFile.Name, metav1.DeleteOptions{})
if err != nil && !apierrors.IsNotFound(err) {
return fmt.Errorf("kosmosctl uninstall coredns run error, configmap options failed: %v", err)
}
@@ -341,23 +470,23 @@ func (o *CommandUninstallOptions) runCoredns() error {
if err != nil {
return err
}
- err = o.Client.CoreV1().ConfigMaps(o.Namespace).Delete(context.TODO(), customerHosts.Name, metav1.DeleteOptions{})
+ err = o.K8sClient.CoreV1().ConfigMaps(o.Namespace).Delete(context.TODO(), customerHosts.Name, metav1.DeleteOptions{})
if err != nil && !apierrors.IsNotFound(err) {
return fmt.Errorf("kosmosctl uninstall coredns run error, configmap options failed: %v", err)
}
klog.Info("Configmap " + svc.Name + " is deleted.")
- clusters, err := o.DynamicClient.Resource(util.ClusterGVR).List(context.TODO(), metav1.ListOptions{})
+ clusters, err := o.K8sDynamicClient.Resource(util.ClusterGVR).List(context.TODO(), metav1.ListOptions{})
if err != nil && !apierrors.IsNotFound(err) {
return fmt.Errorf("kosmosctl uninstall coredns run error, list cluster failed: %v", err)
} else if len(clusters.Items) > 0 {
klog.Info("kosmosctl uninstall warning, skip removing cluster crd because cr instance exists")
} else {
- clusterCRD, _ := util.GenerateCustomResourceDefinition(manifest.ClusterlinkCluster, nil)
+ clusterCRD, err := util.GenerateCustomResourceDefinition(manifest.Cluster, nil)
if err != nil {
return err
}
- err = o.ExtensionsClient.ApiextensionsV1().CustomResourceDefinitions().Delete(context.Background(), clusterCRD.Name, metav1.DeleteOptions{})
+ err = o.K8sExtensionsClient.ApiextensionsV1().CustomResourceDefinitions().Delete(context.Background(), clusterCRD.Name, metav1.DeleteOptions{})
if err != nil && !apierrors.IsNotFound(err) {
return fmt.Errorf("kosmosctl uninstall coredns run error, cluster crd delete failed: %v", err)
}
@@ -368,7 +497,7 @@ func (o *CommandUninstallOptions) runCoredns() error {
if err != nil {
return err
}
- err = o.Client.RbacV1().ClusterRoleBindings().Delete(context.TODO(), crb.Name, metav1.DeleteOptions{})
+ err = o.K8sClient.RbacV1().ClusterRoleBindings().Delete(context.TODO(), crb.Name, metav1.DeleteOptions{})
if err != nil && !apierrors.IsNotFound(err) {
return fmt.Errorf("kosmosctl uninstall coredns run error, clusterrolebinding options failed: %v", err)
}
@@ -378,7 +507,7 @@ func (o *CommandUninstallOptions) runCoredns() error {
if err != nil {
return err
}
- err = o.Client.RbacV1().ClusterRoles().Delete(context.TODO(), cRole.Name, metav1.DeleteOptions{})
+ err = o.K8sClient.RbacV1().ClusterRoles().Delete(context.TODO(), cRole.Name, metav1.DeleteOptions{})
if err != nil && !apierrors.IsNotFound(err) {
return fmt.Errorf("kosmosctl install coredns run error, clusterrole options failed: %v", err)
}
@@ -388,7 +517,7 @@ func (o *CommandUninstallOptions) runCoredns() error {
if err != nil {
return err
}
- err = o.Client.CoreV1().ServiceAccounts(o.Namespace).Delete(context.TODO(), sa.Name, metav1.DeleteOptions{})
+ err = o.K8sClient.CoreV1().ServiceAccounts(o.Namespace).Delete(context.TODO(), sa.Name, metav1.DeleteOptions{})
if err != nil && !apierrors.IsNotFound(err) {
return fmt.Errorf("kosmosctl uninstall coredns run error, serviceaccount options failed: %v", err)
}
diff --git a/pkg/kosmosctl/unjoin/unjoin.go b/pkg/kosmosctl/unjoin/unjoin.go
index 56ac298a9..24ed7fcfc 100644
--- a/pkg/kosmosctl/unjoin/unjoin.go
+++ b/pkg/kosmosctl/unjoin/unjoin.go
@@ -6,9 +6,9 @@ import (
"time"
"github.com/spf13/cobra"
+ extensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
apierrors "k8s.io/apimachinery/pkg/api/errors"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
- "k8s.io/client-go/dynamic"
"k8s.io/client-go/kubernetes"
restclient "k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
@@ -17,33 +17,29 @@ import (
"k8s.io/kubectl/pkg/util/i18n"
"k8s.io/kubectl/pkg/util/templates"
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ "github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned"
"github.com/kosmos.io/kosmos/pkg/kosmosctl/manifest"
"github.com/kosmos.io/kosmos/pkg/kosmosctl/util"
"github.com/kosmos.io/kosmos/pkg/utils"
)
var unjoinExample = templates.Examples(i18n.T(`
- # Unjoin cluster from Kosmos control plane in any cluster, e.g:
- kosmosctl unjoin cluster --name=[cluster-name] --cluster-kubeconfig=[member-kubeconfig] --master-kubeconfig=[master-kubeconfig]
-
- # Unjoin cluster from Kosmos control plane in master cluster, e.g:
- kosmosctl unjoin cluster --name=[cluster-name] --cluster-kubeconfig=[member-kubeconfig]
-
- # Unjoin knode from Kosmos control plane in any cluster, e.g:
- kosmosctl unjoin knode --name=[knode-name] --master-kubeconfig=[master-kubeconfig]
-
- # Unjoin knode from Kosmos control plane in master cluster, e.g:
- kosmosctl unjoin knode --name=[knode-name]
-`))
+ # Unjoin cluster from Kosmos control plane, e.g:
+ kosmosctl unjoin cluster --name cluster-name
+
+ # Unjoin cluster from Kosmos control plane, if you need to specify a special cluster kubeconfig, e.g:
+ kosmosctl unjoin cluster --name cluster-name --kubeconfig ~/kubeconfig/cluster-kubeconfig`))
type CommandUnJoinOptions struct {
- MasterKubeConfig string
- ClusterKubeConfig string
-
- Name string
-
- Client kubernetes.Interface
- DynamicClient *dynamic.DynamicClient
+ Name string
+ Namespace string
+ KubeConfig string
+ HostKubeConfig string
+
+ KosmosClient versioned.Interface
+ K8sClient kubernetes.Interface
+ K8sExtensionsClient extensionsclient.Interface
}
// NewCmdUnJoin Delete resource in Kosmos control plane.
@@ -52,7 +48,7 @@ func NewCmdUnJoin(f ctlutil.Factory) *cobra.Command {
cmd := &cobra.Command{
Use: "unjoin",
- Short: i18n.T("Unjoin resource in kosmos control plane"),
+ Short: i18n.T("Unjoin resource from Kosmos control plane"),
Long: "",
Example: unjoinExample,
SilenceUsage: true,
@@ -65,41 +61,60 @@ func NewCmdUnJoin(f ctlutil.Factory) *cobra.Command {
},
}
- cmd.Flags().StringVarP(&o.MasterKubeConfig, "master-kubeconfig", "", "", "Absolute path to the master kubeconfig file.")
- cmd.Flags().StringVarP(&o.ClusterKubeConfig, "cluster-kubeconfig", "", "", "Absolute path to the cluster kubeconfig file.")
cmd.Flags().StringVar(&o.Name, "name", "", "Specify the name of the resource to unjoin.")
+ cmd.Flags().StringVarP(&o.Namespace, "namespace", "n", utils.DefaultNamespace, "Kosmos namespace.")
+ cmd.Flags().StringVar(&o.KubeConfig, "kubeconfig", "", "Absolute path to the cluster kubeconfig file.")
+ cmd.Flags().StringVar(&o.HostKubeConfig, "host-kubeconfig", "", "Absolute path to the host kubeconfig file.")
return cmd
}
func (o *CommandUnJoinOptions) Complete(f ctlutil.Factory) error {
- var masterConfig *restclient.Config
+ var hostConfig *restclient.Config
+ var clusterConfig *restclient.Config
var err error
- if o.MasterKubeConfig != "" {
- masterConfig, err = clientcmd.BuildConfigFromFlags("", o.MasterKubeConfig)
+ if o.HostKubeConfig != "" {
+ hostConfig, err = clientcmd.BuildConfigFromFlags("", o.HostKubeConfig)
if err != nil {
return fmt.Errorf("kosmosctl unjoin complete error, generate masterConfig failed: %s", err)
}
} else {
- masterConfig, err = f.ToRESTConfig()
+ hostConfig, err = f.ToRESTConfig()
if err != nil {
return fmt.Errorf("kosmosctl unjoin complete error, get current masterConfig failed: %s", err)
}
}
- clusterConfig, err := clientcmd.BuildConfigFromFlags("", o.ClusterKubeConfig)
+ o.KosmosClient, err = versioned.NewForConfig(hostConfig)
if err != nil {
- return fmt.Errorf("kosmosctl unjoin complete error, generate memberConfig failed: %s", err)
+ return fmt.Errorf("kosmosctl unjoin complete error, generate Kosmos client failed: %v", err)
}
- o.Client, err = kubernetes.NewForConfig(clusterConfig)
+ if o.KubeConfig != "" {
+ clusterConfig, err = clientcmd.BuildConfigFromFlags("", o.KubeConfig)
+ if err != nil {
+ return fmt.Errorf("kosmosctl unjoin complete error, generate clusterConfig failed: %s", err)
+ }
+ } else {
+ var cluster *v1alpha1.Cluster
+ cluster, err = o.KosmosClient.KosmosV1alpha1().Clusters().Get(context.TODO(), o.Name, metav1.GetOptions{})
+ if err != nil {
+ return fmt.Errorf("kosmosctl unjoin complete error, get cluster failed: %s", err)
+ }
+ clusterConfig, err = clientcmd.RESTConfigFromKubeConfig(cluster.Spec.Kubeconfig)
+ if err != nil {
+ return fmt.Errorf("kosmosctl unjoin complete error, generate clusterConfig failed: %s", err)
+ }
+ }
+
+ o.K8sClient, err = kubernetes.NewForConfig(clusterConfig)
if err != nil {
- return fmt.Errorf("kosmosctl join complete error, generate basic client failed: %v", err)
+ return fmt.Errorf("kosmosctl unjoin complete error, generate K8s basic client failed: %v", err)
}
- o.DynamicClient, err = dynamic.NewForConfig(masterConfig)
+ o.K8sExtensionsClient, err = extensionsclient.NewForConfig(clusterConfig)
if err != nil {
- return fmt.Errorf("kosmosctl unjoin complete error, generate dynamic client failed: %s", err)
+ return fmt.Errorf("kosmosctl unjoin complete error, generate K8s extensions client failed: %v", err)
}
return nil
@@ -107,26 +122,7 @@ func (o *CommandUnJoinOptions) Complete(f ctlutil.Factory) error {
func (o *CommandUnJoinOptions) Validate(args []string) error {
if len(o.Name) == 0 {
- return fmt.Errorf("kosmosctl unjoin validate error, resource name is not valid")
- }
-
- switch args[0] {
- case "cluster":
- _, err := o.DynamicClient.Resource(util.ClusterGVR).Get(context.TODO(), o.Name, metav1.GetOptions{})
- if err != nil {
- if apierrors.IsNotFound(err) {
- return fmt.Errorf("kosmosctl unjoin validate warning, clsuter is not found: %s", err)
- }
- return fmt.Errorf("kosmosctl unjoin validate error, get cluster failed: %s", err)
- }
- case "knode":
- _, err := o.DynamicClient.Resource(util.KnodeGVR).Get(context.TODO(), o.Name, metav1.GetOptions{})
- if err != nil {
- if apierrors.IsNotFound(err) {
- return fmt.Errorf("kosmosctl unjoin validate warning, knode is not found: %s", err)
- }
- return fmt.Errorf("kosmosctl unjoin validate error, get knode failed: %s", err)
- }
+ return fmt.Errorf("kosmosctl unjoin validate error, name is not valid")
}
return nil
@@ -139,11 +135,6 @@ func (o *CommandUnJoinOptions) Run(args []string) error {
if err != nil {
return err
}
- case "knode":
- err := o.runKnode()
- if err != nil {
- return err
- }
}
return nil
@@ -151,9 +142,9 @@ func (o *CommandUnJoinOptions) Run(args []string) error {
func (o *CommandUnJoinOptions) runCluster() error {
klog.Info("Start removing cluster from kosmos control plane...")
- // 1. delete cluster
+ // delete cluster
for {
- err := o.DynamicClient.Resource(util.ClusterGVR).Namespace("").Delete(context.TODO(), o.Name, metav1.DeleteOptions{})
+ err := o.KosmosClient.KosmosV1alpha1().Clusters().Delete(context.TODO(), o.Name, metav1.DeleteOptions{})
if err != nil {
if apierrors.IsNotFound(err) {
break
@@ -164,55 +155,97 @@ func (o *CommandUnJoinOptions) runCluster() error {
}
klog.Info("Cluster: " + o.Name + " has been deleted.")
- // 2. delete operator
- clusterlinkOperatorDeployment, err := util.GenerateDeployment(manifest.ClusterlinkOperatorDeployment, nil)
+ // delete crd
+ serviceExport, err := util.GenerateCustomResourceDefinition(manifest.ServiceExport, nil)
if err != nil {
return err
}
- err = o.Client.AppsV1().Deployments(utils.DefaultNamespace).Delete(context.TODO(), clusterlinkOperatorDeployment.Name, metav1.DeleteOptions{})
- if err != nil && !apierrors.IsNotFound(err) {
- return fmt.Errorf("kosmosctl unjoin run error, delete deployment failed: %s", err)
+ err = o.K8sExtensionsClient.ApiextensionsV1().CustomResourceDefinitions().Delete(context.Background(), serviceExport.Name, metav1.DeleteOptions{})
+ if err != nil {
+ if !apierrors.IsNotFound(err) {
+ return fmt.Errorf("kosmosctl unjoin run error, crd options failed: %v", err)
+ }
}
- klog.Info("Deployment: " + clusterlinkOperatorDeployment.Name + " has been deleted.")
+ klog.Info("CRD: " + serviceExport.Name + " has been deleted.")
- // 3. delete secret
- err = o.Client.CoreV1().Secrets(utils.DefaultNamespace).Delete(context.TODO(), utils.ControlPanelSecretName, metav1.DeleteOptions{})
+ serviceImport, err := util.GenerateCustomResourceDefinition(manifest.ServiceImport, nil)
+ if err != nil {
+ return err
+ }
+ err = o.K8sExtensionsClient.ApiextensionsV1().CustomResourceDefinitions().Delete(context.Background(), serviceImport.Name, metav1.DeleteOptions{})
+ if err != nil {
+ if !apierrors.IsNotFound(err) {
+ return fmt.Errorf("kosmosctl unjoin run error, delete crd failed: %v", err)
+ }
+ }
+ klog.Info("CRD: " + serviceImport.Name + " has been deleted.")
+
+ // delete rbac
+ err = o.K8sClient.CoreV1().Secrets(o.Namespace).Delete(context.TODO(), utils.ControlPanelSecretName, metav1.DeleteOptions{})
if err != nil && !apierrors.IsNotFound(err) {
return fmt.Errorf("kosmosctl unjoin run error, delete secret failed: %s", err)
}
klog.Info("Secret: " + utils.ControlPanelSecretName + " has been deleted.")
- // 4. delete rbac
- err = o.Client.RbacV1().ClusterRoleBindings().Delete(context.TODO(), utils.ExternalIPPoolNamePrefix, metav1.DeleteOptions{})
+ err = o.K8sClient.RbacV1().ClusterRoleBindings().Delete(context.TODO(), utils.ExternalIPPoolNamePrefix, metav1.DeleteOptions{})
if err != nil && !apierrors.IsNotFound(err) {
return fmt.Errorf("kosmosctl unjoin run error, delete clusterrolebinding failed: %s", err)
}
klog.Info("ClusterRoleBinding: " + utils.ExternalIPPoolNamePrefix + " has been deleted.")
- err = o.Client.RbacV1().ClusterRoles().Delete(context.TODO(), utils.ExternalIPPoolNamePrefix, metav1.DeleteOptions{})
+ err = o.K8sClient.RbacV1().ClusterRoles().Delete(context.TODO(), utils.ExternalIPPoolNamePrefix, metav1.DeleteOptions{})
if err != nil && !apierrors.IsNotFound(err) {
return fmt.Errorf("kosmosctl unjoin run error, delete clusterrole failed: %s", err)
}
klog.Info("ClusterRole: " + utils.ExternalIPPoolNamePrefix + " has been deleted.")
- clusterlinkOperatorServiceAccount, err := util.GenerateServiceAccount(manifest.ClusterlinkOperatorServiceAccount, nil)
+ kosmosCR, err := util.GenerateClusterRole(manifest.KosmosClusterRole, nil)
+ if err != nil {
+ return fmt.Errorf("kosmosctl unjoin run error, generate clusterrole failed: %s", err)
+ }
+ err = o.K8sClient.RbacV1().ClusterRoles().Delete(context.TODO(), kosmosCR.Name, metav1.DeleteOptions{})
+ if err != nil && !apierrors.IsNotFound(err) {
+ return fmt.Errorf("kosmosctl unjoin run error, delete clusterrole failed: %s", err)
+ }
+ klog.Info("ClusterRole: " + kosmosCR.Name + " has been deleted.")
+
+ kosmosCRB, err := util.GenerateClusterRoleBinding(manifest.KosmosClusterRoleBinding, manifest.ClusterRoleBindingReplace{
+ Namespace: o.Namespace,
+ })
+ if err != nil {
+ return fmt.Errorf("kosmosctl unjoin run error, generate clusterrolebinding failed: %s", err)
+ }
+ err = o.K8sClient.RbacV1().ClusterRoleBindings().Delete(context.TODO(), kosmosCRB.Name, metav1.DeleteOptions{})
+ if err != nil && !apierrors.IsNotFound(err) {
+ return fmt.Errorf("kosmosctl unjoin run error, delete clusterrolebinding failed: %s", err)
+ }
+ klog.Info("ClusterRoleBinding: " + kosmosCRB.Name + " has been deleted.")
+
+ kosmosOperatorSA, err := util.GenerateServiceAccount(manifest.KosmosOperatorServiceAccount, nil)
if err != nil {
return err
}
- err = o.Client.CoreV1().ServiceAccounts(utils.DefaultNamespace).Delete(context.TODO(), clusterlinkOperatorServiceAccount.Name, metav1.DeleteOptions{})
+ err = o.K8sClient.CoreV1().ServiceAccounts(o.Namespace).Delete(context.TODO(), kosmosOperatorSA.Name, metav1.DeleteOptions{})
if err != nil && !apierrors.IsNotFound(err) {
return fmt.Errorf("kosmosctl unjoin run error, delete serviceaccout failed: %s", err)
}
- klog.Info("ServiceAccount: " + clusterlinkOperatorServiceAccount.Name + " has been deleted.")
+ klog.Info("ServiceAccount: " + kosmosOperatorSA.Name + " has been deleted.")
- // 5. If cluster is not the master, delete namespace
- clusterlinkNetworkManagerDeployment, err := util.GenerateDeployment(manifest.ClusterlinkNetworkManagerDeployment, nil)
+ kosmosControlSA, err := util.GenerateServiceAccount(manifest.KosmosControlServiceAccount, manifest.ServiceAccountReplace{
+ Namespace: o.Namespace,
+ })
if err != nil {
- return err
+ return fmt.Errorf("kosmosctl unjoin run error, generate serviceaccount failed: %s", err)
+ }
+ err = o.K8sClient.CoreV1().ServiceAccounts(kosmosControlSA.Namespace).Delete(context.TODO(), kosmosControlSA.Name, metav1.DeleteOptions{})
+ if err != nil && !apierrors.IsNotFound(err) {
+ return fmt.Errorf("kosmosctl unjoin run error, delete serviceaccount failed: %s", err)
}
- _, err = o.Client.AppsV1().Deployments(utils.DefaultNamespace).Get(context.TODO(), clusterlinkNetworkManagerDeployment.Name, metav1.GetOptions{})
- if err != nil && apierrors.IsNotFound(err) {
- err = o.Client.CoreV1().Namespaces().Delete(context.TODO(), utils.DefaultNamespace, metav1.DeleteOptions{})
+ klog.Info("ServiceAccount: " + kosmosControlSA.Name + " has been deleted.")
+
+ // if cluster is not the master, delete namespace
+ if o.Name != utils.DefaultClusterName {
+ err = o.K8sClient.CoreV1().Namespaces().Delete(context.TODO(), o.Namespace, metav1.DeleteOptions{})
if err != nil && !apierrors.IsNotFound(err) {
return fmt.Errorf("kosmosctl unjoin run error, delete namespace failed: %s", err)
}
@@ -221,20 +254,3 @@ func (o *CommandUnJoinOptions) runCluster() error {
klog.Info("Cluster [" + o.Name + "] is removed.")
return nil
}
-
-func (o *CommandUnJoinOptions) runKnode() error {
- klog.Info("Start removing knode from kosmos control plane...")
- for {
- err := o.DynamicClient.Resource(util.KnodeGVR).Namespace("").Delete(context.TODO(), o.Name, metav1.DeleteOptions{})
- if err != nil {
- if apierrors.IsNotFound(err) {
- break
- }
- return fmt.Errorf("kosmosctl unjoin run error, delete knode failed: %s", err)
- }
- time.Sleep(3 * time.Second)
- }
-
- klog.Info("Knode [" + o.Name + "] is removed.")
- return nil
-}
diff --git a/pkg/kosmosctl/util/builder.go b/pkg/kosmosctl/util/builder.go
index 7c5647c91..db8c20fcf 100644
--- a/pkg/kosmosctl/util/builder.go
+++ b/pkg/kosmosctl/util/builder.go
@@ -18,7 +18,6 @@ var (
ClusterGVR = schema.GroupVersionResource{Group: "kosmos.io", Version: "v1alpha1", Resource: "clusters"}
ClusterNodeGVR = schema.GroupVersionResource{Group: "kosmos.io", Version: "v1alpha1", Resource: "clusternodes"}
NodeConfigGVR = schema.GroupVersionResource{Group: "kosmos.io", Version: "v1alpha1", Resource: "nodeconfigs"}
- KnodeGVR = schema.GroupVersionResource{Group: "kosmos.io", Version: "v1alpha1", Resource: "knodes"}
)
func GenerateDeployment(deployTemplate string, obj interface{}) (*appsv1.Deployment, error) {
@@ -179,3 +178,20 @@ func GenerateService(template string, obj interface{}) (*corev1.Service, error)
}
return o, nil
}
+
+func GenerateSecret(template string, obj interface{}) (*corev1.Secret, error) {
+ bs, err := parseTemplate(template, obj)
+ if err != nil {
+ return nil, fmt.Errorf("kosmosctl parsing secret template exception, error: %v", err)
+ } else if bs == nil {
+ return nil, fmt.Errorf("kosmosctl get secret template exception, value is empty")
+ }
+
+ o := &corev1.Secret{}
+
+ if err = runtime.DecodeInto(scheme.Codecs.UniversalDecoder(), bs, o); err != nil {
+ return nil, fmt.Errorf("kosmosctl decode secret bytes error: %v", err)
+ }
+
+ return o, nil
+}
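`GenerateSecret` follows the same render-then-decode shape as the other builders in this file: execute a Go template against a replacement struct, then decode the resulting bytes into a typed object. A stdlib sketch of that two-step pattern (using `encoding/json` in place of the Kubernetes scheme codecs, with a hypothetical template):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

type secret struct {
	Name      string `json:"name"`
	Namespace string `json:"namespace"`
}

const secretTemplate = `{"name": "{{ .Name }}", "namespace": "{{ .Namespace }}"}`

// generateSecret renders the template with obj, then decodes the result
// into a typed struct -- the same shape as util.GenerateSecret, which
// renders YAML and decodes via scheme.Codecs.UniversalDecoder().
func generateSecret(tmpl string, obj interface{}) (*secret, error) {
	t, err := template.New("secret").Parse(tmpl)
	if err != nil {
		return nil, fmt.Errorf("parsing secret template exception, error: %v", err)
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, obj); err != nil {
		return nil, fmt.Errorf("executing secret template exception, error: %v", err)
	}
	s := &secret{}
	if err := json.Unmarshal(buf.Bytes(), s); err != nil {
		return nil, fmt.Errorf("decode secret bytes error: %v", err)
	}
	return s, nil
}

func main() {
	s, err := generateSecret(secretTemplate, secret{Name: "kosmos-control", Namespace: "kosmos-system"})
	fmt.Println(err == nil, s.Name, s.Namespace)
}
```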
diff --git a/pkg/kosmosctl/util/check.go b/pkg/kosmosctl/util/verify.go
similarity index 100%
rename from pkg/kosmosctl/util/check.go
rename to pkg/kosmosctl/util/verify.go
diff --git a/pkg/clusterlink/operator/addons/agent/agent.go b/pkg/operator/clusterlink/agent/agent.go
similarity index 90%
rename from pkg/clusterlink/operator/addons/agent/agent.go
rename to pkg/operator/clusterlink/agent/agent.go
index 6b61be91f..226324f52 100644
--- a/pkg/clusterlink/operator/addons/agent/agent.go
+++ b/pkg/operator/clusterlink/agent/agent.go
@@ -15,9 +15,8 @@ import (
bootstrapapi "k8s.io/cluster-bootstrap/token/api"
"k8s.io/klog/v2"
- "github.com/kosmos.io/kosmos/pkg/clusterlink/operator/addons/option"
- "github.com/kosmos.io/kosmos/pkg/clusterlink/operator/addons/utils"
- cmdutil "github.com/kosmos.io/kosmos/pkg/clusterlink/operator/util"
+ "github.com/kosmos.io/kosmos/pkg/operator/clusterlink/option"
+ "github.com/kosmos.io/kosmos/pkg/operator/clusterlink/utils"
utils2 "github.com/kosmos.io/kosmos/pkg/utils"
)
@@ -57,7 +56,7 @@ func applyDaemonSet(opt *option.AddonOption) error {
return fmt.Errorf("decode agent daemonset error: %v", err)
}
- if err := cmdutil.CreateOrUpdateDaemonSet(opt.KubeClientSet, clAgentDaemonSet); err != nil {
+ if err := utils.CreateOrUpdateDaemonSet(opt.KubeClientSet, clAgentDaemonSet); err != nil {
return fmt.Errorf("create clusterlink agent daemonset error: %v", err)
}
@@ -92,7 +91,7 @@ func applySecret(opt *option.AddonOption) error {
// Create or update the Secret in the kube-public namespace
klog.Infof("[bootstrap-token] creating/updating Secret in kube-public namespace")
- return cmdutil.CreateOrUpdateSecret(opt.KubeClientSet, &corev1.Secret{
+ return utils.CreateOrUpdateSecret(opt.KubeClientSet, &corev1.Secret{
ObjectMeta: metav1.ObjectMeta{
Name: utils2.ProxySecretName,
Namespace: opt.GetSpecNamespace(),
diff --git a/pkg/clusterlink/operator/addons/agent/manifests.go b/pkg/operator/clusterlink/agent/manifests.go
similarity index 84%
rename from pkg/clusterlink/operator/addons/agent/manifests.go
rename to pkg/operator/clusterlink/agent/manifests.go
index d8545423c..ff3e33f7d 100644
--- a/pkg/clusterlink/operator/addons/agent/manifests.go
+++ b/pkg/operator/clusterlink/agent/manifests.go
@@ -39,6 +39,7 @@ spec:
command:
- clusterlink-agent
- --kubeconfig=/etc/clusterlink/kubeconfig
+ - --v=4
env:
- name: CLUSTER_NAME
value: "{{ .ClusterName }}"
@@ -56,13 +57,25 @@ spec:
- mountPath: /etc/clusterlink
name: proxy-config
readOnly: true
+ - mountPath: /run/xtables.lock
+ name: iptableslock
+ readOnly: false
+ - mountPath: /lib/modules
+ name: lib-modules
+ readOnly: true
terminationGracePeriodSeconds: 30
hostNetwork: true
volumes:
- name: proxy-config
secret:
secretName: {{ .ProxyConfigMapName }}
-
+ - hostPath:
+ path: /run/xtables.lock
+ type: FileOrCreate
+ name: iptableslock
+ - name: lib-modules
+ hostPath:
+ path: /lib/modules
`
// DaemonSetReplace is a struct to help to concrete
diff --git a/pkg/clusterlink/operator/addons/elector/elector.go b/pkg/operator/clusterlink/elector/elector.go
similarity index 90%
rename from pkg/clusterlink/operator/addons/elector/elector.go
rename to pkg/operator/clusterlink/elector/elector.go
index 5e0d207b7..d62650337 100644
--- a/pkg/clusterlink/operator/addons/elector/elector.go
+++ b/pkg/operator/clusterlink/elector/elector.go
@@ -13,9 +13,8 @@ import (
clientsetscheme "k8s.io/client-go/kubernetes/scheme"
"k8s.io/klog/v2"
- "github.com/kosmos.io/kosmos/pkg/clusterlink/operator/addons/option"
- "github.com/kosmos.io/kosmos/pkg/clusterlink/operator/addons/utils"
- cmdutil "github.com/kosmos.io/kosmos/pkg/clusterlink/operator/util"
+ "github.com/kosmos.io/kosmos/pkg/operator/clusterlink/option"
+ "github.com/kosmos.io/kosmos/pkg/operator/clusterlink/utils"
utils2 "github.com/kosmos.io/kosmos/pkg/utils"
)
@@ -49,7 +48,7 @@ func applyServiceAccount(opt *option.AddonOption) error {
return fmt.Errorf("decode elector serviceaccount error: %v", err)
}
- if err := cmdutil.CreateOrUpdateServiceAccount(opt.KubeClientSet, clElectorServiceAccount); err != nil {
+ if err := utils.CreateOrUpdateServiceAccount(opt.KubeClientSet, clElectorServiceAccount); err != nil {
return fmt.Errorf("create clusterlink agent serviceaccount error: %v", err)
}
@@ -82,7 +81,7 @@ func applyDeployment(opt *option.AddonOption) error {
return fmt.Errorf("decode elector deployment error: %v", err)
}
- if err := cmdutil.CreateOrUpdateDeployment(opt.KubeClientSet, clElectorDeployment); err != nil {
+ if err := utils.CreateOrUpdateDeployment(opt.KubeClientSet, clElectorDeployment); err != nil {
return fmt.Errorf("create clusterlink elector deployment error: %v", err)
}
@@ -110,7 +109,7 @@ func applyClusterRole(opt *option.AddonOption) error {
return fmt.Errorf("decode elector clusterrole error: %v", err)
}
- if err := cmdutil.CreateOrUpdateClusterRole(opt.KubeClientSet, clElectorClusterRole); err != nil {
+ if err := utils.CreateOrUpdateClusterRole(opt.KubeClientSet, clElectorClusterRole); err != nil {
return fmt.Errorf("create clusterlink elector clusterrole error: %v", err)
}
@@ -139,7 +138,7 @@ func applyClusterRoleBinding(opt *option.AddonOption) error {
return fmt.Errorf("decode elector clusterrolebinding error: %v", err)
}
- if err := cmdutil.CreateOrUpdateClusterRoleBinding(opt.KubeClientSet, clElectorClusterRoleBinding); err != nil {
+ if err := utils.CreateOrUpdateClusterRoleBinding(opt.KubeClientSet, clElectorClusterRoleBinding); err != nil {
return fmt.Errorf("create clusterlink elector clusterrolebinding error: %v", err)
}
diff --git a/pkg/clusterlink/operator/addons/elector/manifests.go b/pkg/operator/clusterlink/elector/manifests.go
similarity index 93%
rename from pkg/clusterlink/operator/addons/elector/manifests.go
rename to pkg/operator/clusterlink/elector/manifests.go
index e386f95cf..7f17b4e97 100644
--- a/pkg/clusterlink/operator/addons/elector/manifests.go
+++ b/pkg/operator/clusterlink/elector/manifests.go
@@ -33,6 +33,12 @@ spec:
spec:
serviceAccountName: {{ .Name }}
affinity:
+ nodeAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: kosmos.io/exclude
+ operator: DoesNotExist
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
diff --git a/pkg/clusterlink/operator/addons/global/global.go b/pkg/operator/clusterlink/global/global.go
similarity index 83%
rename from pkg/clusterlink/operator/addons/global/global.go
rename to pkg/operator/clusterlink/global/global.go
index 47dac5eb7..a4137cb0b 100644
--- a/pkg/clusterlink/operator/addons/global/global.go
+++ b/pkg/operator/clusterlink/global/global.go
@@ -8,9 +8,8 @@ import (
clientsetscheme "k8s.io/client-go/kubernetes/scheme"
"k8s.io/klog/v2"
- "github.com/kosmos.io/kosmos/pkg/clusterlink/operator/addons/option"
- "github.com/kosmos.io/kosmos/pkg/clusterlink/operator/addons/utils"
- cmdutil "github.com/kosmos.io/kosmos/pkg/clusterlink/operator/util"
+ "github.com/kosmos.io/kosmos/pkg/operator/clusterlink/option"
+ "github.com/kosmos.io/kosmos/pkg/operator/clusterlink/utils"
)
type Installer struct {
@@ -39,7 +38,7 @@ func (i *Installer) Install(opt *option.AddonOption) error {
return fmt.Errorf("decode namespace error: %v", err)
}
- if err := cmdutil.CreateOrUpdateNamespace(opt.KubeClientSet, clNamespace); err != nil {
+ if err := utils.CreateOrUpdateNamespace(opt.KubeClientSet, clNamespace); err != nil {
return fmt.Errorf("create clusterlink namespace error: %v", err)
}
diff --git a/pkg/clusterlink/operator/addons/global/manifests.go b/pkg/operator/clusterlink/global/manifests.go
similarity index 100%
rename from pkg/clusterlink/operator/addons/global/manifests.go
rename to pkg/operator/clusterlink/global/manifests.go
diff --git a/pkg/clusterlink/operator/addons/install.go b/pkg/operator/clusterlink/install.go
similarity index 57%
rename from pkg/clusterlink/operator/addons/install.go
rename to pkg/operator/clusterlink/install.go
index 3deada288..315df7d41 100644
--- a/pkg/clusterlink/operator/addons/install.go
+++ b/pkg/operator/clusterlink/install.go
@@ -1,14 +1,14 @@
-package addons
+package clusterlink
import (
"k8s.io/klog/v2"
- "github.com/kosmos.io/kosmos/pkg/clusterlink/operator/addons/agent"
- "github.com/kosmos.io/kosmos/pkg/clusterlink/operator/addons/elector"
- "github.com/kosmos.io/kosmos/pkg/clusterlink/operator/addons/global"
- "github.com/kosmos.io/kosmos/pkg/clusterlink/operator/addons/manager"
- "github.com/kosmos.io/kosmos/pkg/clusterlink/operator/addons/option"
- "github.com/kosmos.io/kosmos/pkg/clusterlink/operator/addons/proxy"
+ "github.com/kosmos.io/kosmos/pkg/operator/clusterlink/agent"
+ "github.com/kosmos.io/kosmos/pkg/operator/clusterlink/elector"
+ "github.com/kosmos.io/kosmos/pkg/operator/clusterlink/global"
+ "github.com/kosmos.io/kosmos/pkg/operator/clusterlink/manager"
+ "github.com/kosmos.io/kosmos/pkg/operator/clusterlink/option"
+ "github.com/kosmos.io/kosmos/pkg/operator/clusterlink/proxy"
)
type AddonInstaller interface {
@@ -21,7 +21,7 @@ var (
)
func Install(opt *option.AddonOption) error {
- klog.Infof("install addons")
+ klog.Infof("install clusterlink")
for _, ins := range installers {
if err := ins.Install(opt); err != nil {
return err
@@ -32,7 +32,7 @@ func Install(opt *option.AddonOption) error {
}
func Uninstall(opt *option.AddonOption) error {
- klog.Infof("uninstall addons")
+ klog.Infof("uninstall clusterlink")
i := len(installers)
for i > 0 {
i--
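`Install` walks the registered installers in order while `Uninstall` walks the same slice in reverse, so teardown undoes dependencies last-in-first-out (the namespace created first is removed last). A minimal sketch of that ordering contract (the `step` type and logging are illustrative, not the project's types):

```go
package main

import "fmt"

// addonInstaller mirrors the AddonInstaller interface shape.
type addonInstaller interface {
	Install(log *[]string) error
	Uninstall(log *[]string) error
}

type step struct{ name string }

func (s step) Install(log *[]string) error   { *log = append(*log, "install:"+s.name); return nil }
func (s step) Uninstall(log *[]string) error { *log = append(*log, "uninstall:"+s.name); return nil }

var installers = []addonInstaller{step{"global"}, step{"manager"}, step{"agent"}}

// install runs forward; the first failure aborts the walk.
func install(log *[]string) error {
	for _, ins := range installers {
		if err := ins.Install(log); err != nil {
			return err
		}
	}
	return nil
}

// uninstall runs the same list backward, matching the i-- loop above.
func uninstall(log *[]string) error {
	for i := len(installers) - 1; i >= 0; i-- {
		if err := installers[i].Uninstall(log); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	var log []string
	_ = install(&log)
	_ = uninstall(&log)
	fmt.Println(log)
}
```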
diff --git a/pkg/clusterlink/operator/addons/manager/manager.go b/pkg/operator/clusterlink/manager/manager.go
similarity index 89%
rename from pkg/clusterlink/operator/addons/manager/manager.go
rename to pkg/operator/clusterlink/manager/manager.go
index 7d4ddfd9d..cf54dae52 100644
--- a/pkg/clusterlink/operator/addons/manager/manager.go
+++ b/pkg/operator/clusterlink/manager/manager.go
@@ -13,9 +13,8 @@ import (
clientsetscheme "k8s.io/client-go/kubernetes/scheme"
"k8s.io/klog/v2"
- "github.com/kosmos.io/kosmos/pkg/clusterlink/operator/addons/option"
- "github.com/kosmos.io/kosmos/pkg/clusterlink/operator/addons/utils"
- cmdutil "github.com/kosmos.io/kosmos/pkg/clusterlink/operator/util"
+ "github.com/kosmos.io/kosmos/pkg/operator/clusterlink/option"
+ "github.com/kosmos.io/kosmos/pkg/operator/clusterlink/utils"
utils2 "github.com/kosmos.io/kosmos/pkg/utils"
)
@@ -45,7 +44,7 @@ func applyServiceAccount(opt *option.AddonOption) error {
return fmt.Errorf("decode controller-manager serviceaccount error: %v", err)
}
- if err := cmdutil.CreateOrUpdateServiceAccount(opt.KubeClientSet, clCtrManagerServiceAccount); err != nil {
+ if err := utils.CreateOrUpdateServiceAccount(opt.KubeClientSet, clCtrManagerServiceAccount); err != nil {
return fmt.Errorf("create clusterlink agent serviceaccount error: %v", err)
}
@@ -78,7 +77,7 @@ func applyDeployment(opt *option.AddonOption) error {
return fmt.Errorf("decode controller-manager deployment error: %v", err)
}
- if err := cmdutil.CreateOrUpdateDeployment(opt.KubeClientSet, clCtrManagerDeployment); err != nil {
+ if err := utils.CreateOrUpdateDeployment(opt.KubeClientSet, clCtrManagerDeployment); err != nil {
return fmt.Errorf("create clusterlink controller-manager deployment error: %v", err)
}
@@ -106,7 +105,7 @@ func applyClusterRole(opt *option.AddonOption) error {
return fmt.Errorf("decode controller-manager clusterrole error: %v", err)
}
- if err := cmdutil.CreateOrUpdateClusterRole(opt.KubeClientSet, clCtrManagerClusterRole); err != nil {
+ if err := utils.CreateOrUpdateClusterRole(opt.KubeClientSet, clCtrManagerClusterRole); err != nil {
return fmt.Errorf("create clusterlink controller-manager clusterrole error: %v", err)
}
@@ -135,7 +134,7 @@ func applyClusterRoleBinding(opt *option.AddonOption) error {
return fmt.Errorf("decode controller-manager clusterrolebinding error: %v", err)
}
- if err := cmdutil.CreateOrUpdateClusterRoleBinding(opt.KubeClientSet, clCtrManagerClusterRoleBinding); err != nil {
+ if err := utils.CreateOrUpdateClusterRoleBinding(opt.KubeClientSet, clCtrManagerClusterRoleBinding); err != nil {
return fmt.Errorf("create clusterlink controller-manager clusterrolebinding error: %v", err)
}
diff --git a/pkg/clusterlink/operator/addons/manager/manifests.go b/pkg/operator/clusterlink/manager/manifests.go
similarity index 100%
rename from pkg/clusterlink/operator/addons/manager/manifests.go
rename to pkg/operator/clusterlink/manager/manifests.go
diff --git a/pkg/clusterlink/operator/addons/option/option.go b/pkg/operator/clusterlink/option/option.go
similarity index 54%
rename from pkg/clusterlink/operator/addons/option/option.go
rename to pkg/operator/clusterlink/option/option.go
index 70921864b..b3c36e4f6 100644
--- a/pkg/clusterlink/operator/addons/option/option.go
+++ b/pkg/operator/clusterlink/option/option.go
@@ -5,27 +5,29 @@ import (
"os"
"k8s.io/client-go/kubernetes"
- "k8s.io/client-go/tools/clientcmd"
clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
- cmdOptions "github.com/kosmos.io/kosmos/cmd/clusterlink/operator/app/options"
- clusterlinkv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ "github.com/kosmos.io/kosmos/pkg/utils"
"github.com/kosmos.io/kosmos/pkg/version"
)
// AddonOption for cluster
type AddonOption struct {
- clusterlinkv1alpha1.Cluster
+ kosmosv1alpha1.Cluster
+
+ Version string
+ UseProxy bool
+
+ KubeConfigByte []byte
KubeClientSet *kubernetes.Clientset
ControlPanelKubeConfig *clientcmdapi.Config
- Version string
- UseProxy bool
}
-func (o *AddonOption) buildClusterConfig(opts *cmdOptions.Options) error {
- restConfig, err := clientcmd.BuildConfigFromFlags("", opts.KubeConfig)
+func (o *AddonOption) buildClusterConfig() error {
+ restConfig, err := utils.NewConfigFromBytes(o.KubeConfigByte)
if err != nil {
- return fmt.Errorf("error building kubeconfig: %s", err.Error())
+ return fmt.Errorf("error building restConfig: %s", err.Error())
}
clusterClientSet, err := kubernetes.NewForConfig(restConfig)
@@ -42,12 +44,12 @@ func (o *AddonOption) buildClusterConfig(opts *cmdOptions.Options) error {
return nil
}
-// preparation for option
-func (o *AddonOption) Complete(opts *cmdOptions.Options) error {
- return o.buildClusterConfig(opts)
+// Complete preparation for option
+func (o *AddonOption) Complete() error {
+ return o.buildClusterConfig()
}
-// return spec.namespace
+// GetSpecNamespace returns spec.namespace
func (o *AddonOption) GetSpecNamespace() string {
return o.Spec.Namespace
}
@@ -57,5 +59,5 @@ func (o *AddonOption) GetImageRepository() string {
}
func (o *AddonOption) GetIPFamily() string {
- return string(o.Spec.IPFamily)
+ return string(o.Spec.ClusterLinkOptions.IPFamily)
}
diff --git a/pkg/clusterlink/operator/addons/proxy/manifests.go b/pkg/operator/clusterlink/proxy/manifests.go
similarity index 100%
rename from pkg/clusterlink/operator/addons/proxy/manifests.go
rename to pkg/operator/clusterlink/proxy/manifests.go
diff --git a/pkg/clusterlink/operator/addons/proxy/proxy.go b/pkg/operator/clusterlink/proxy/proxy.go
similarity index 89%
rename from pkg/clusterlink/operator/addons/proxy/proxy.go
rename to pkg/operator/clusterlink/proxy/proxy.go
index 1d658342a..b4acfa946 100644
--- a/pkg/clusterlink/operator/addons/proxy/proxy.go
+++ b/pkg/operator/clusterlink/proxy/proxy.go
@@ -14,9 +14,8 @@ import (
"k8s.io/client-go/tools/clientcmd"
"k8s.io/klog/v2"
- "github.com/kosmos.io/kosmos/pkg/clusterlink/operator/addons/option"
- "github.com/kosmos.io/kosmos/pkg/clusterlink/operator/addons/utils"
- cmdutil "github.com/kosmos.io/kosmos/pkg/clusterlink/operator/util"
+ "github.com/kosmos.io/kosmos/pkg/operator/clusterlink/option"
+ "github.com/kosmos.io/kosmos/pkg/operator/clusterlink/utils"
utils2 "github.com/kosmos.io/kosmos/pkg/utils"
)
@@ -57,7 +56,7 @@ func applySecret(opt *option.AddonOption) error {
},
}
- if err := cmdutil.CreateOrUpdateSecret(opt.KubeClientSet, secret); err != nil {
+ if err := utils.CreateOrUpdateSecret(opt.KubeClientSet, secret); err != nil {
return fmt.Errorf("create clusterlink agent secret error: %v", err)
}
@@ -79,7 +78,7 @@ func applyService(opt *option.AddonOption) error {
return fmt.Errorf("decode controller-proxy service error: %v", err)
}
- if err := cmdutil.CreateOrUpdateService(opt.KubeClientSet, proxyService); err != nil {
+ if err := utils.CreateOrUpdateService(opt.KubeClientSet, proxyService); err != nil {
return fmt.Errorf("create clusterlink-proxy service error: %v", err)
}
@@ -107,7 +106,7 @@ func applyDeployment(opt *option.AddonOption) error {
return fmt.Errorf("decode clusterlink-proxy deployment error: %v", err)
}
- if err := cmdutil.CreateOrUpdateDeployment(opt.KubeClientSet, proxyDeployment); err != nil {
+ if err := utils.CreateOrUpdateDeployment(opt.KubeClientSet, proxyDeployment); err != nil {
return fmt.Errorf("create controller-proxy deployment error: %v", err)
}
diff --git a/pkg/clusterlink/operator/util/idempotency.go b/pkg/operator/clusterlink/utils/idempotency.go
similarity index 99%
rename from pkg/clusterlink/operator/util/idempotency.go
rename to pkg/operator/clusterlink/utils/idempotency.go
index 016759d84..b8dc7ed70 100644
--- a/pkg/clusterlink/operator/util/idempotency.go
+++ b/pkg/operator/clusterlink/utils/idempotency.go
@@ -1,5 +1,5 @@
// nolint:dupl
-package util
+package utils
import (
"context"
diff --git a/pkg/clusterlink/operator/addons/utils/template.go b/pkg/operator/clusterlink/utils/template.go
similarity index 100%
rename from pkg/clusterlink/operator/addons/utils/template.go
rename to pkg/operator/clusterlink/utils/template.go
diff --git a/pkg/clusterlink/operator/operator_controller.go b/pkg/operator/operator_controller.go
similarity index 71%
rename from pkg/clusterlink/operator/operator_controller.go
rename to pkg/operator/operator_controller.go
index c7399506c..7d6364378 100644
--- a/pkg/clusterlink/operator/operator_controller.go
+++ b/pkg/operator/operator_controller.go
@@ -16,11 +16,11 @@ import (
"sigs.k8s.io/controller-runtime/pkg/manager"
"sigs.k8s.io/controller-runtime/pkg/reconcile"
- cmdOptions "github.com/kosmos.io/kosmos/cmd/clusterlink/operator/app/options"
- clusterlinkv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
- "github.com/kosmos.io/kosmos/pkg/clusterlink/operator/addons"
- "github.com/kosmos.io/kosmos/pkg/clusterlink/operator/addons/option"
- clusterlinkv1alpha1lister "github.com/kosmos.io/kosmos/pkg/generated/listers/kosmos/v1alpha1"
+ cmdOptions "github.com/kosmos.io/kosmos/cmd/operator/app/options"
+ "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ lister "github.com/kosmos.io/kosmos/pkg/generated/listers/kosmos/v1alpha1"
+ "github.com/kosmos.io/kosmos/pkg/operator/clusterlink"
+ "github.com/kosmos.io/kosmos/pkg/operator/clusterlink/option"
"github.com/kosmos.io/kosmos/pkg/utils"
)
@@ -33,7 +33,7 @@ const (
type Reconciler struct {
client.Client
Scheme *runtime.Scheme
- ClusterLister clusterlinkv1alpha1lister.ClusterLister
+ ClusterLister lister.ClusterLister
ControlPanelKubeConfig *clientcmdapi.Config
ClusterName string
Options *cmdOptions.Options
@@ -48,11 +48,11 @@ func (r *Reconciler) SetupWithManager(mgr manager.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
Named(controllerName).
WithOptions(controller.Options{}).
- For(&clusterlinkv1alpha1.Cluster{}).
+ For(&v1alpha1.Cluster{}).
Complete(r)
}
-func (r *Reconciler) syncCluster(cluster *clusterlinkv1alpha1.Cluster) (reconcile.Result, error) {
+func (r *Reconciler) syncCluster(cluster *v1alpha1.Cluster) (reconcile.Result, error) {
klog.Infof("install agent")
useProxy := r.Options.UseProxy
if value, exist := os.LookupEnv(utils.EnvUseProxy); exist {
@@ -64,33 +64,35 @@ func (r *Reconciler) syncCluster(cluster *clusterlinkv1alpha1.Cluster) (reconcil
}
opt := &option.AddonOption{
Cluster: *cluster,
+ KubeConfigByte: cluster.Spec.Kubeconfig,
ControlPanelKubeConfig: r.ControlPanelKubeConfig,
UseProxy: useProxy,
}
- if err := opt.Complete(r.Options); err != nil {
+ if err := opt.Complete(); err != nil {
klog.Error(err)
return reconcile.Result{Requeue: true}, err
}
- if err := addons.Install(opt); err != nil {
+ if err := clusterlink.Install(opt); err != nil {
klog.Error(err)
return reconcile.Result{Requeue: true}, err
}
return r.ensureFinalizer(cluster)
}
-func (r *Reconciler) removeCluster(cluster *clusterlinkv1alpha1.Cluster) (reconcile.Result, error) {
+func (r *Reconciler) removeCluster(cluster *v1alpha1.Cluster) (reconcile.Result, error) {
klog.Infof("uninstall agent")
opt := &option.AddonOption{
Cluster: *cluster,
+ KubeConfigByte: cluster.Spec.Kubeconfig,
ControlPanelKubeConfig: r.ControlPanelKubeConfig,
}
- if err := opt.Complete(r.Options); err != nil {
+ if err := opt.Complete(); err != nil {
klog.Error(err)
return reconcile.Result{Requeue: true}, err
}
- if err := addons.Uninstall(opt); err != nil {
+ if err := clusterlink.Uninstall(opt); err != nil {
klog.Error(err)
return reconcile.Result{Requeue: true}, err
}
@@ -98,7 +100,7 @@ func (r *Reconciler) removeCluster(cluster *clusterlinkv1alpha1.Cluster) (reconc
return r.removeFinalizer(cluster)
}
-func (r *Reconciler) ensureFinalizer(cluster *clusterlinkv1alpha1.Cluster) (reconcile.Result, error) {
+func (r *Reconciler) ensureFinalizer(cluster *v1alpha1.Cluster) (reconcile.Result, error) {
if controllerutil.ContainsFinalizer(cluster, ClusterControllerFinalizer) {
return reconcile.Result{}, nil
}
@@ -112,7 +114,7 @@ func (r *Reconciler) ensureFinalizer(cluster *clusterlinkv1alpha1.Cluster) (reco
return reconcile.Result{}, nil
}
-func (r *Reconciler) removeFinalizer(cluster *clusterlinkv1alpha1.Cluster) (reconcile.Result, error) {
+func (r *Reconciler) removeFinalizer(cluster *v1alpha1.Cluster) (reconcile.Result, error) {
if !controllerutil.ContainsFinalizer(cluster, ClusterControllerFinalizer) {
return reconcile.Result{}, nil
}
@@ -128,14 +130,14 @@ func (r *Reconciler) removeFinalizer(cluster *clusterlinkv1alpha1.Cluster) (reco
// Reconcile is for controller reconcile
func (r *Reconciler) Reconcile(ctx context.Context, request reconcile.Request) (reconcile.Result, error) {
- klog.Infof("Reconciling cluster %s, operator cluster name %s", request.NamespacedName.Name, r.ClusterName)
+ klog.Infof("Reconciling cluster %s", request.NamespacedName.Name)
- if request.NamespacedName.Name != r.ClusterName {
- klog.Infof("Skip this event")
- return reconcile.Result{}, nil
- }
+ //if request.NamespacedName.Name != r.ClusterName {
+ // klog.Infof("Skip this event")
+ // return reconcile.Result{}, nil
+ //}
- cluster := &clusterlinkv1alpha1.Cluster{}
+ cluster := &v1alpha1.Cluster{}
if err := r.Client.Get(ctx, request.NamespacedName, cluster); err != nil {
// The resource may no longer exist, in which case we stop processing.
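The `ensureFinalizer`/`removeFinalizer` pair above is the standard controller finalizer dance: add the finalizer before doing work, remove it only after cleanup succeeds, so deletion is blocked in between. A stdlib sketch of the contains/add/remove helpers (controller-runtime's `controllerutil` provides the real ones; the finalizer string here is illustrative):

```go
package main

import "fmt"

const clusterControllerFinalizer = "kosmos.io/cluster-controller"

func containsFinalizer(finalizers []string, f string) bool {
	for _, item := range finalizers {
		if item == f {
			return true
		}
	}
	return false
}

// addFinalizer is idempotent, like controllerutil.AddFinalizer.
func addFinalizer(finalizers []string, f string) []string {
	if containsFinalizer(finalizers, f) {
		return finalizers
	}
	return append(finalizers, f)
}

// removeFinalizer drops f, keeping the remaining entries in order.
func removeFinalizer(finalizers []string, f string) []string {
	out := make([]string, 0, len(finalizers))
	for _, item := range finalizers {
		if item != f {
			out = append(out, item)
		}
	}
	return out
}

func main() {
	fs := addFinalizer(nil, clusterControllerFinalizer)
	fs = addFinalizer(fs, clusterControllerFinalizer) // no duplicate
	fmt.Println(len(fs), containsFinalizer(fs, clusterControllerFinalizer))
	fs = removeFinalizer(fs, clusterControllerFinalizer)
	fmt.Println(len(fs))
}
```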
diff --git a/pkg/scheduler/lifted/plugins/knodevolumebinding/knode_volume_binding.go b/pkg/scheduler/lifted/plugins/knodevolumebinding/knode_volume_binding.go
index f105a3a2a..11129db74 100644
--- a/pkg/scheduler/lifted/plugins/knodevolumebinding/knode_volume_binding.go
+++ b/pkg/scheduler/lifted/plugins/knodevolumebinding/knode_volume_binding.go
@@ -69,6 +69,7 @@ func (d *stateData) Clone() framework.StateData {
type VolumeBinding struct {
Binder scheduling.SchedulerVolumeBinder
PVCLister corelisters.PersistentVolumeClaimLister
+ NodeLister corelisters.NodeLister
frameworkHandler framework.Handle
}
@@ -232,6 +233,15 @@ func (pl *VolumeBinding) Filter(_ context.Context, cs *framework.CycleState, pod
// Reserve reserves volumes of pod and saves binding status in cycle state.
func (pl *VolumeBinding) Reserve(_ context.Context, cs *framework.CycleState, pod *corev1.Pod, nodeName string) *framework.Status {
+ node, err := pl.NodeLister.Get(nodeName)
+ if err != nil {
+ return framework.AsStatus(err)
+ }
+
+ if helpers.HasKnodeTaint(node) {
+ return nil
+ }
+
state, err := getStateData(cs)
if err != nil {
return framework.AsStatus(err)
@@ -257,6 +267,15 @@ func (pl *VolumeBinding) Reserve(_ context.Context, cs *framework.CycleState, po
// If binding errors, times out or gets undone, then an error will be returned to
// retry scheduling.
func (pl *VolumeBinding) PreBind(ctx context.Context, cs *framework.CycleState, pod *corev1.Pod, nodeName string) *framework.Status {
+ node, err := pl.NodeLister.Get(nodeName)
+ if err != nil {
+ return framework.AsStatus(err)
+ }
+
+ if helpers.HasKnodeTaint(node) {
+ return nil
+ }
+
s, err := getStateData(cs)
if err != nil {
return framework.AsStatus(err)
@@ -283,6 +302,15 @@ func (pl *VolumeBinding) PreBind(ctx context.Context, cs *framework.CycleState,
// Unreserve clears assumed PV and PVC cache.
// It's idempotent, and does nothing if no cache found for the given pod.
func (pl *VolumeBinding) Unreserve(_ context.Context, cs *framework.CycleState, _ *corev1.Pod, nodeName string) {
+ node, err := pl.NodeLister.Get(nodeName)
+ if err != nil {
+ return
+ }
+
+ if helpers.HasKnodeTaint(node) {
+ return
+ }
+
s, err := getStateData(cs)
if err != nil {
return
@@ -317,6 +345,7 @@ func New(plArgs runtime.Object, fh framework.Handle) (framework.Plugin, error) {
return &VolumeBinding{
Binder: binder,
PVCLister: pvcInformer.Lister(),
+ NodeLister: nodeInformer.Lister(),
frameworkHandler: fh,
}, nil
}
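The Reserve/PreBind/Unreserve hooks above bail out early on knode-tainted nodes via `helpers.HasKnodeTaint`, whose definition is not part of this diff. A self-contained sketch of what such a check plausibly looks like, using stand-in structs instead of `*corev1.Node` and assuming it matches the `kosmos.io/node` / `NoSchedule` taint declared in `pkg/utils/constants.go`:

```go
package main

import "fmt"

// Taint and Node are minimal stand-ins for the corev1 types, so the
// example compiles without Kubernetes dependencies.
type Taint struct {
	Key    string
	Effect string
}

type Node struct {
	Name   string
	Taints []Taint
}

// HasKnodeTaint reports whether the node carries the kosmos.io/node:NoSchedule
// taint, which marks it as a virtual (knode) node whose volume binding is
// handled by the member cluster rather than this scheduler.
func HasKnodeTaint(node *Node) bool {
	for _, t := range node.Taints {
		if t.Key == "kosmos.io/node" && t.Effect == "NoSchedule" {
			return true
		}
	}
	return false
}

func main() {
	knode := &Node{Name: "kosmos-member1", Taints: []Taint{{Key: "kosmos.io/node", Effect: "NoSchedule"}}}
	fmt.Println(HasKnodeTaint(knode), HasKnodeTaint(&Node{Name: "worker-1"}))
}
```

Skipping the volume-binding state machinery for such nodes is safe because no state was reserved for them in the first place.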
diff --git a/pkg/utils/constants.go b/pkg/utils/constants.go
index 5cf238b74..da08a63db 100644
--- a/pkg/utils/constants.go
+++ b/pkg/utils/constants.go
@@ -1,22 +1,80 @@
package utils
+import (
+ "time"
+
+ corev1 "k8s.io/api/core/v1"
+ "k8s.io/apimachinery/pkg/runtime/schema"
+)
+
+var (
+ ImageList = []string{
+ ClusterTreeClusterManager,
+ ClusterLinkOperator,
+ ClusterLinkAgent,
+ ClusterLinkNetworkManager,
+ ClusterLinkControllerManager,
+ ClusterLinkProxy,
+ ClusterLinkElector,
+ ClusterLinkFloater,
+ KosmosOperator,
+ Coredns,
+ EpsProbePlugin}
+)
+
const (
- DefaultNamespace = "kosmos-system"
- DefaultImageRepository = "ghcr.io/kosmos-io"
- DefaultInstallModule = "all"
+ ClusterTreeClusterManager = "ghcr.io/kosmos-io/clustertree-cluster-manager"
+ ClusterLinkOperator = "ghcr.io/kosmos-io/clusterlink-operator"
+ ClusterLinkAgent = "ghcr.io/kosmos-io/clusterlink-agent"
+ ClusterLinkNetworkManager = "ghcr.io/kosmos-io/clusterlink-network-manager"
+ ClusterLinkControllerManager = "ghcr.io/kosmos-io/clusterlink-controller-manager"
+ ClusterLinkProxy = "ghcr.io/kosmos-io/clusterlink-proxy"
+ ClusterLinkElector = "ghcr.io/kosmos-io/clusterlink-elector"
+ ClusterLinkFloater = "ghcr.io/kosmos-io/clusterlink-floater"
+ Coredns = "ghcr.io/kosmos-io/coredns"
+ EpsProbePlugin = "ghcr.io/kosmos-io/eps-probe-plugin"
+ KosmosOperator = "ghcr.io/kosmos-io/kosmos-operator"
+ Containerd = "containerd"
+ DefaultContainerRuntime = "docker"
+ DefaultContainerdNamespace = "default"
+ DefaultContainerdSockAddress = "/run/containerd/containerd.sock"
+ DefaultVersion = "latest"
+ DefaultTarName = "kosmos-io.tar.gz"
+)
+
+const (
+ DefaultNamespace = "kosmos-system"
+ DefaultClusterName = "kosmos-control-cluster"
+ DefaultImageRepository = "ghcr.io/kosmos-io"
+ DefaultWaitTime = 120
+ RootClusterAnnotationKey = "kosmos.io/cluster-role"
+ RootClusterAnnotationValue = "root"
+ KosmosSchedulerName = "kosmos-scheduler"
+)
+
+const (
+ All = "all"
+ ClusterLink = "clusterlink"
+ ClusterTree = "clustertree"
+ CoreDNS = "coredns"
)
const ExternalIPPoolNamePrefix = "clusterlink"
const (
- CNITypeCalico = "calico"
- NetworkTypeP2P = "p2p"
+ CNITypeCalico = "calico"
+ NetworkTypeP2P = "p2p"
+ NetworkTypeGateway = "gateway"
+ DefaultIPv4 = "ipv4"
+ DefaultIPv6 = "ipv6"
+ DefaultPort = "8889"
)
const (
ProxySecretName = "clusterlink-agent-proxy"
ControlPanelSecretName = "controlpanel-config"
HostKubeConfigName = "host-kubeconfig"
+ NodeConfigFile = "~/nodeconfig.json"
)
const (
@@ -26,3 +84,104 @@ const (
)
const ClusterStartControllerFinalizer = "kosmos.io/cluster-start-finazlizer"
+
+// mcs
+const (
+ ServiceKey = "kubernetes.io/service-name"
+ ServiceExportLabelKey = "kosmos.io/service-export"
+ ServiceImportLabelKey = "kosmos.io/service-import"
+	MCSLabelValue            = "true"
+ ServiceEndpointsKey = "kosmos.io/address"
+ DisconnectedEndpointsKey = "kosmos.io/disconnected-address"
+ AutoCreateMCSAnnotation = "kosmos.io/auto-create-mcs"
+)
+
+// cluster node
+const (
+ KosmosNodePrefix = "kosmos-"
+ KosmosNodeLabel = "kosmos.io/node"
+ KosmosNodeValue = "true"
+ KosmosNodeJoinLabel = "kosmos.io/join"
+ KosmosNodeJoinValue = "true"
+ KosmosNodeTaintKey = "kosmos.io/node"
+ KosmosNodeTaintValue = "true"
+ KosmosNodeTaintEffect = "NoSchedule"
+ KosmosPodLabel = "kosmos-io/pod"
+ KosmosGlobalLabel = "kosmos.io/global"
+ KosmosSelectorKey = "kosmos.io/cluster-selector"
+ KosmosTrippedLabels = "kosmos-io/tripped"
+ KosmosPvcLabelSelector = "kosmos-io/label-selector"
+ KosmosExcludeNodeLabel = "kosmos.io/exclude"
+ KosmosExcludeNodeValue = "true"
+
+	// on a resource (pv, configmap, secret), records which clusters the resource belongs to
+ KosmosResourceOwnersAnnotations = "kosmos-io/cluster-owners"
+	// on a node, records which cluster the node belongs to
+ KosmosNodeOwnedByClusterAnnotations = "kosmos-io/owned-by-cluster"
+
+ KosmosDaemonsetAllowAnnotations = "kosmos-io/daemonset-allow"
+
+ NodeRoleLabel = "kubernetes.io/role"
+ NodeRoleValue = "agent"
+ NodeOSLabelBeta = "beta.kubernetes.io/os"
+ NodeHostnameValue = corev1.LabelHostname
+ NodeHostnameValueBeta = "beta.kubernetes.io/hostname"
+ NodeOSLabelStable = corev1.LabelOSStable
+ NodeArchLabelStable = corev1.LabelArchStable
+ PVCSelectedNodeKey = "volume.kubernetes.io/selected-node"
+
+ DefaultK8sOS = "linux"
+ DefaultK8sArch = "amd64"
+
+ DefaultInformerResyncPeriod = 1 * time.Minute
+ DefaultListenPort = 10250
+ DefaultPodSyncWorkers = 10
+ DefaultWorkers = 5
+ DefaultKubeNamespace = corev1.NamespaceAll
+
+ DefaultTaintEffect = string(corev1.TaintEffectNoSchedule)
+ DefaultTaintKey = "kosmos-node.io/plugin"
+
+ DefaultLeafKubeQPS = 40.0
+ DefaultLeafKubeBurst = 60
+
+ // LabelNodeRoleControlPlane specifies that a node hosts control-plane components
+ LabelNodeRoleControlPlane = "node-role.kubernetes.io/control-plane"
+
+ // LabelNodeRoleOldControlPlane specifies that a node hosts control-plane components
+ LabelNodeRoleOldControlPlane = "node-role.kubernetes.io/master"
+
+ // LabelNodeRoleNode specifies that a node hosts node components
+ LabelNodeRoleNode = "node-role.kubernetes.io/node"
+)
+
+const (
+ ReservedNS = "kube-system"
+ RooTCAConfigMapName = "kube-root-ca.crt"
+ SATokenPrefix = "kube-api-access"
+ MasterRooTCAName = "master-root-ca.crt"
+)
+
+var GVR_CONFIGMAP = schema.GroupVersionResource{
+ Group: "",
+ Version: "v1",
+ Resource: "configmaps",
+}
+
+var GVR_PVC = schema.GroupVersionResource{
+ Group: "",
+ Version: "v1",
+ Resource: "persistentvolumeclaims",
+}
+
+var GVR_SECRET = schema.GroupVersionResource{
+ Group: "",
+ Version: "v1",
+ Resource: "secrets",
+}
+
+var GVR_SERVICE = schema.GroupVersionResource{
+ Group: "",
+ Version: "v1",
+ Resource: "services",
+}
diff --git a/pkg/utils/controllers/controller_util.go b/pkg/utils/controllers/controller_util.go
new file mode 100644
index 000000000..c762c764c
--- /dev/null
+++ b/pkg/utils/controllers/controller_util.go
@@ -0,0 +1,164 @@
+package controllers
+
+import (
+ "fmt"
+ "time"
+
+ "k8s.io/apimachinery/pkg/runtime"
+ utilruntime "k8s.io/apimachinery/pkg/util/runtime"
+ "k8s.io/apimachinery/pkg/util/wait"
+ "k8s.io/client-go/tools/cache"
+ "k8s.io/client-go/util/workqueue"
+ "k8s.io/klog/v2"
+)
+
+const (
+ // maxRetries is the number of times a runtime object will be retried before it is dropped out of the queue.
+ // With the current rate-limiter in use (5ms*2^(maxRetries-1)) the following numbers represent the times
+ // an object is going to be requeued:
+ //
+ // 5ms, 10ms, 20ms, 40ms, 80ms, 160ms, 320ms, 640ms, 1.3s, 2.6s, 5.1s, 10.2s, 20.4s, 41s, 82s
+ maxRetries = 15
+)
+
+type Reconcile func(string) error
+
+type Worker interface {
+	// Enqueue enqueues an event object key into the queue without blocking
+ Enqueue(obj runtime.Object)
+
+	// EnqueueRateLimited enqueues an event object key into the queue once the rate limiter allows it
+ EnqueueRateLimited(obj runtime.Object)
+
+	// EnqueueAfter enqueues an event object key into the queue after the indicated duration has passed
+ EnqueueAfter(obj runtime.Object, after time.Duration)
+
+	// Forget removes an event object key from the queue
+ Forget(obj runtime.Object)
+
+	// GetFirst pops the first key from the queue; exposed for tests
+ GetFirst() (string, error)
+
+ // Run will not return until stopChan is closed.
+ Run(concurrency int, stopChan <-chan struct{})
+
+ // SplitKey returns the namespace and name that
+ // MetaNamespaceKeyFunc encoded into key.
+ SplitKey(key string) (namespace, name string, err error)
+}
+
+type worker struct {
+	// queue holds the runtime object keys that need to be synced
+ queue workqueue.RateLimitingInterface
+
+ // reconcile function to handle keys
+ reconcile Reconcile
+
+	// keyFunc encodes an object into a string key
+ keyFunc func(obj interface{}) (string, error)
+}
+
+// NewWorker returns a concurrent informer worker that can process resource events.
+func NewWorker(reconcile Reconcile, name string) Worker {
+ return &worker{
+ queue: workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), name),
+ reconcile: reconcile,
+ keyFunc: cache.DeletionHandlingMetaNamespaceKeyFunc,
+ }
+}
+
+func (c *worker) Enqueue(obj runtime.Object) {
+ key, err := c.keyFunc(obj)
+ if err != nil {
+ utilruntime.HandleError(fmt.Errorf("couldn't get key for object %#v: %v", obj, err))
+ return
+ }
+
+ c.queue.Add(key)
+}
+
+func (c *worker) EnqueueRateLimited(obj runtime.Object) {
+ key, err := c.keyFunc(obj)
+ if err != nil {
+ utilruntime.HandleError(fmt.Errorf("couldn't get key for object %#v: %v", obj, err))
+ return
+ }
+
+ c.queue.AddRateLimited(key)
+}
+
+func (c *worker) EnqueueAfter(obj runtime.Object, after time.Duration) {
+ key, err := c.keyFunc(obj)
+ if err != nil {
+ utilruntime.HandleError(fmt.Errorf("couldn't get key for object %#v: %v", obj, err))
+ return
+ }
+
+ c.queue.AddAfter(key, after)
+}
+
+func (c *worker) GetFirst() (string, error) {
+ item, _ := c.queue.Get()
+ return item.(string), nil
+}
+
+func (c *worker) Forget(obj runtime.Object) {
+ key, err := c.keyFunc(obj)
+ if err != nil {
+ utilruntime.HandleError(fmt.Errorf("couldn't get key for object %#v: %v", obj, err))
+ return
+ }
+
+ c.queue.Forget(key)
+}
+
+func (c *worker) Run(workerNumber int, stopChan <-chan struct{}) {
+ defer c.queue.ShutDown()
+
+ for i := 0; i < workerNumber; i++ {
+ go wait.Until(c.worker, time.Second, stopChan)
+ }
+
+ <-stopChan
+}
+
+func (c *worker) SplitKey(key string) (namespace, name string, err error) {
+ return cache.SplitMetaNamespaceKey(key)
+}
+
+// worker runs a worker thread that just dequeues items, processes them, and
+// marks them done. You may run as many of these in parallel as you wish; the
+// queue guarantees that they will not end up processing the same runtime object
+// at the same time.
+func (c *worker) worker() {
+ for c.processNextItem() {
+ }
+}
+
+func (c *worker) processNextItem() bool {
+ key, quit := c.queue.Get()
+ if quit {
+ return false
+ }
+
+ defer c.queue.Done(key)
+
+ err := c.reconcile(key.(string))
+ c.handleErr(err, key)
+ return true
+}
+
+func (c *worker) handleErr(err error, key interface{}) {
+ if err == nil {
+ c.queue.Forget(key)
+ return
+ }
+
+ if c.queue.NumRequeues(key) < maxRetries {
+ c.queue.AddRateLimited(key)
+ return
+ }
+
+ klog.V(2).Infof("Dropping resource %q out of the queue: %v", key, err)
+ c.queue.Forget(key)
+}
diff --git a/pkg/utils/controllers/controller_util_test.go b/pkg/utils/controllers/controller_util_test.go
new file mode 100644
index 000000000..a55e72d07
--- /dev/null
+++ b/pkg/utils/controllers/controller_util_test.go
@@ -0,0 +1,28 @@
+package controllers
+
+import (
+ "testing"
+
+ corev1 "k8s.io/api/core/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+func Test_Enqueue(t *testing.T) {
+ const name = "public"
+
+ worker := NewWorker(nil, "namespace")
+
+ ns := &corev1.Namespace{
+ ObjectMeta: metav1.ObjectMeta{Name: name},
+ }
+
+ worker.Enqueue(ns)
+
+ first, _ := worker.GetFirst()
+
+ _, metaName, _ := worker.SplitKey(first)
+
+ if name != metaName {
+		t.Errorf("Added NS: %v, want: %v, got: %v", first, name, metaName)
+ }
+}
diff --git a/pkg/utils/helper/mcs.go b/pkg/utils/helper/mcs.go
index ef81c27cc..694d59693 100644
--- a/pkg/utils/helper/mcs.go
+++ b/pkg/utils/helper/mcs.go
@@ -3,6 +3,7 @@ package helper
import (
discoveryv1 "k8s.io/api/discovery/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ mcsv1alpha1 "sigs.k8s.io/mcs-api/pkg/apis/v1alpha1"
)
// AddEndpointSliceAnnotation adds annotation for the given endpointSlice.
@@ -15,6 +16,16 @@ func AddEndpointSliceAnnotation(eps *discoveryv1.EndpointSlice, annotationKey st
eps.SetAnnotations(epsAnnotations)
}
+// AddServiceImportAnnotation adds an annotation to the given serviceImport.
+func AddServiceImportAnnotation(serviceImport *mcsv1alpha1.ServiceImport, annotationKey string, annotationValue string) {
+ importAnnotations := serviceImport.GetAnnotations()
+ if importAnnotations == nil {
+ importAnnotations = make(map[string]string, 1)
+ }
+ importAnnotations[annotationKey] = annotationValue
+ serviceImport.SetAnnotations(importAnnotations)
+}
+
// AddEndpointSliceLabel adds label for the given endpointSlice.
func AddEndpointSliceLabel(eps *discoveryv1.EndpointSlice, labelKey string, labelValue string) {
epsLabel := eps.GetLabels()
@@ -53,3 +64,16 @@ func HasAnnotation(m metav1.ObjectMeta, key string) bool {
return false
}
}
+
+// GetAnnotationValue returns the value of the given annotation key on the ObjectMeta, and whether it was found.
+func GetAnnotationValue(m metav1.ObjectMeta, key string) (annotationValue string, found bool) {
+ annotations := m.GetAnnotations()
+ if annotations == nil {
+ return "", false
+ }
+ if value, exists := annotations[key]; exists {
+ return value, true
+ } else {
+ return "", false
+ }
+}
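`GetAnnotationValue` guards against a nil annotations map, but in Go that guard is optional: indexing a nil map safely yields the zero value. A minimal stdlib-only sketch of the same contract on plain maps (the helper name here is hypothetical):

```go
package main

import "fmt"

// getAnnotationValue mirrors the helper's contract using plain maps:
// the two-value map read covers both the nil-map and missing-key cases.
func getAnnotationValue(annotations map[string]string, key string) (string, bool) {
	v, ok := annotations[key] // safe even when annotations == nil
	return v, ok
}

func main() {
	v, ok := getAnnotationValue(nil, "kosmos.io/auto-create-mcs")
	fmt.Printf("%q %v\n", v, ok)
	v, ok = getAnnotationValue(map[string]string{"kosmos.io/auto-create-mcs": "true"}, "kosmos.io/auto-create-mcs")
	fmt.Printf("%q %v\n", v, ok)
}
```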
diff --git a/pkg/utils/k8s.go b/pkg/utils/k8s.go
new file mode 100644
index 000000000..f22024a25
--- /dev/null
+++ b/pkg/utils/k8s.go
@@ -0,0 +1,344 @@
+package utils
+
+import (
+ "encoding/json"
+ "fmt"
+ "strings"
+
+ jsonpatch "github.com/evanphx/json-patch"
+ corev1 "k8s.io/api/core/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+ "k8s.io/apimachinery/pkg/runtime"
+ "k8s.io/client-go/kubernetes"
+ "k8s.io/client-go/rest"
+ "k8s.io/client-go/tools/clientcmd"
+ "k8s.io/metrics/pkg/client/clientset/versioned"
+
+ kosmosversioned "github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned"
+)
+
+type ClustersNodeSelection struct {
+ NodeSelector map[string]string `json:"nodeSelector,omitempty"`
+ Affinity *corev1.Affinity `json:"affinity,omitempty"`
+ Tolerations []corev1.Toleration `json:"tolerations,omitempty"`
+ TopologySpreadConstraints []corev1.TopologySpreadConstraint `json:"topologySpreadConstraints,omitempty"`
+}
+
+type EnvResourceManager interface {
+ GetConfigMap(name, namespace string) (*corev1.ConfigMap, error)
+ GetSecret(name, namespace string) (*corev1.Secret, error)
+ ListServices() ([]*corev1.Service, error)
+}
+
+func CreateMergePatch(original, new interface{}) ([]byte, error) {
+ originBytes, err := json.Marshal(original)
+ if err != nil {
+ return nil, err
+ }
+ cloneBytes, err := json.Marshal(new)
+ if err != nil {
+ return nil, err
+ }
+ patch, err := jsonpatch.CreateMergePatch(originBytes, cloneBytes)
+ if err != nil {
+ return nil, err
+ }
+ return patch, nil
+}
+
+type Opts func(*rest.Config)
+
+func NewConfigFromBytes(kubeConfig []byte, opts ...Opts) (*rest.Config, error) {
+ var (
+ config *rest.Config
+ err error
+ )
+
+ c, err := clientcmd.NewClientConfigFromBytes(kubeConfig)
+ if err != nil {
+ return nil, err
+ }
+ config, err = c.ClientConfig()
+ if err != nil {
+ return nil, err
+ }
+
+ for _, h := range opts {
+ if h == nil {
+ continue
+ }
+ h(config)
+ }
+
+ return config, nil
+}
+
+func NewClientFromConfigPath(configPath string, opts ...Opts) (kubernetes.Interface, error) {
+ var (
+ config *rest.Config
+ err error
+ )
+ config, err = clientcmd.BuildConfigFromFlags("", configPath)
+ if err != nil {
+ return nil, fmt.Errorf("failed to build config from configpath: %v", err)
+ }
+
+ for _, opt := range opts {
+ if opt == nil {
+ continue
+ }
+ opt(config)
+ }
+
+ client, err := kubernetes.NewForConfig(config)
+ if err != nil {
+ return nil, fmt.Errorf("could not create clientset: %v", err)
+ }
+ return client, nil
+}
+
+func NewKosmosClientFromConfigPath(configPath string, opts ...Opts) (kosmosversioned.Interface, error) {
+ var (
+ config *rest.Config
+ err error
+ )
+ config, err = clientcmd.BuildConfigFromFlags("", configPath)
+ if err != nil {
+ return nil, fmt.Errorf("failed to build config from configpath: %v", err)
+ }
+
+ for _, opt := range opts {
+ if opt == nil {
+ continue
+ }
+ opt(config)
+ }
+
+ client, err := kosmosversioned.NewForConfig(config)
+ if err != nil {
+ return nil, fmt.Errorf("could not create clientset: %v", err)
+ }
+ return client, nil
+}
+
+func NewClientFromBytes(kubeConfig []byte, opts ...Opts) (kubernetes.Interface, error) {
+ var (
+ config *rest.Config
+ err error
+ )
+
+ clientConfig, err := clientcmd.NewClientConfigFromBytes(kubeConfig)
+ if err != nil {
+ return nil, err
+ }
+ config, err = clientConfig.ClientConfig()
+ if err != nil {
+ return nil, err
+ }
+
+ for _, opt := range opts {
+ if opt == nil {
+ continue
+ }
+ opt(config)
+ }
+
+ client, err := kubernetes.NewForConfig(config)
+ if err != nil {
+ return nil, fmt.Errorf("create client failed: %v", err)
+ }
+ return client, nil
+}
+
+func NewKosmosClientFromBytes(kubeConfig []byte, opts ...Opts) (kosmosversioned.Interface, error) {
+ var (
+ config *rest.Config
+ err error
+ )
+
+ clientConfig, err := clientcmd.NewClientConfigFromBytes(kubeConfig)
+ if err != nil {
+ return nil, err
+ }
+ config, err = clientConfig.ClientConfig()
+ if err != nil {
+ return nil, err
+ }
+
+ for _, opt := range opts {
+ if opt == nil {
+ continue
+ }
+ opt(config)
+ }
+
+ client, err := kosmosversioned.NewForConfig(config)
+ if err != nil {
+ return nil, fmt.Errorf("create client failed: %v", err)
+ }
+ return client, nil
+}
+
+func NewMetricClient(configPath string, opts ...Opts) (versioned.Interface, error) {
+ var (
+ config *rest.Config
+ err error
+ )
+ config, err = clientcmd.BuildConfigFromFlags("", configPath)
+ if err != nil {
+ config, err = rest.InClusterConfig()
+ if err != nil {
+ return nil, fmt.Errorf("could not read config file for cluster: %v", err)
+ }
+ }
+
+ for _, opt := range opts {
+ if opt == nil {
+ continue
+ }
+ opt(config)
+ }
+
+ metricClient, err := versioned.NewForConfig(config)
+ if err != nil {
+ return nil, fmt.Errorf("could not create client for root cluster: %v", err)
+ }
+ return metricClient, nil
+}
+
+func IsKosmosNode(node *corev1.Node) bool {
+ if node == nil {
+ return false
+ }
+ labelVal, exist := node.ObjectMeta.Labels[KosmosNodeLabel]
+ if !exist {
+ return false
+ }
+ return labelVal == KosmosNodeValue
+}
+
+func IsExcludeNode(node *corev1.Node) bool {
+ if node == nil {
+ return false
+ }
+ labelVal, exist := node.ObjectMeta.Labels[KosmosExcludeNodeLabel]
+ if !exist {
+ return false
+ }
+ return labelVal == KosmosExcludeNodeValue
+}
+
+func IsVirtualPod(pod *corev1.Pod) bool {
+ if pod.Labels != nil && pod.Labels[KosmosPodLabel] == "true" {
+ return true
+ }
+ return false
+}
+
+func UpdateConfigMap(old, new *corev1.ConfigMap) {
+ old.Labels = new.Labels
+ old.Data = new.Data
+ old.BinaryData = new.BinaryData
+}
+
+func UpdateSecret(old, new *corev1.Secret) {
+ old.Labels = new.Labels
+ old.Data = new.Data
+ old.StringData = new.StringData
+ old.Type = new.Type
+}
+
+func UpdateUnstructured[T *corev1.ConfigMap | *corev1.Secret](old, new *unstructured.Unstructured, oldObj T, newObj T, update func(old, new T)) (*unstructured.Unstructured, error) {
+ if err := runtime.DefaultUnstructuredConverter.FromUnstructured(old.UnstructuredContent(), &oldObj); err != nil {
+ return nil, err
+ }
+
+ if err := runtime.DefaultUnstructuredConverter.FromUnstructured(new.UnstructuredContent(), &newObj); err != nil {
+ return nil, err
+ }
+
+ update(oldObj, newObj)
+
+ if retObj, err := runtime.DefaultUnstructuredConverter.ToUnstructured(oldObj); err == nil {
+ return &unstructured.Unstructured{
+ Object: retObj,
+ }, nil
+ } else {
+ return nil, err
+ }
+}
+
+func IsObjectGlobal(obj *metav1.ObjectMeta) bool {
+ if obj.Annotations == nil {
+ return false
+ }
+
+ if obj.Annotations[KosmosGlobalLabel] == "true" {
+ return true
+ }
+
+ return false
+}
+
+func IsObjectUnstructuredGlobal(obj map[string]string) bool {
+ if obj == nil {
+ return false
+ }
+
+ if obj[KosmosGlobalLabel] == "true" {
+ return true
+ }
+
+ return false
+}
+
+func AddResourceClusters(anno map[string]string, clusterName string) map[string]string {
+ if anno == nil {
+ anno = map[string]string{}
+ }
+ owners := strings.Split(anno[KosmosResourceOwnersAnnotations], ",")
+ newowners := make([]string, 0)
+
+ flag := false
+ for _, v := range owners {
+ if len(v) == 0 {
+ continue
+ }
+ newowners = append(newowners, v)
+ if v == clusterName {
+ // already existed
+ flag = true
+ }
+ }
+
+ if !flag {
+ newowners = append(newowners, clusterName)
+ }
+
+ anno[KosmosResourceOwnersAnnotations] = strings.Join(newowners, ",")
+ return anno
+}
+
+func HasResourceClusters(anno map[string]string, clusterName string) bool {
+ if anno == nil {
+ anno = map[string]string{}
+ }
+ owners := strings.Split(anno[KosmosResourceOwnersAnnotations], ",")
+
+ for _, v := range owners {
+ if v == clusterName {
+ // already existed
+ return true
+ }
+ }
+ return false
+}
+
+func ListResourceClusters(anno map[string]string) []string {
+ if anno == nil || anno[KosmosResourceOwnersAnnotations] == "" {
+ return []string{}
+ }
+ owners := strings.Split(anno[KosmosResourceOwnersAnnotations], ",")
+ return owners
+}
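`AddResourceClusters` / `HasResourceClusters` treat the owners annotation as a comma-separated set. A stdlib-only sketch of that round-trip, using a literal key in place of `KosmosResourceOwnersAnnotations` and a hypothetical `addOwner` name:

```go
package main

import (
	"fmt"
	"strings"
)

const ownersKey = "kosmos-io/cluster-owners"

// addOwner appends clusterName to the comma-separated owners annotation,
// skipping empty entries and duplicates — the same shape as AddResourceClusters.
func addOwner(anno map[string]string, clusterName string) map[string]string {
	if anno == nil {
		anno = map[string]string{}
	}
	seen := false
	owners := make([]string, 0)
	for _, v := range strings.Split(anno[ownersKey], ",") {
		if v == "" {
			continue
		}
		owners = append(owners, v)
		if v == clusterName {
			seen = true
		}
	}
	if !seen {
		owners = append(owners, clusterName)
	}
	anno[ownersKey] = strings.Join(owners, ",")
	return anno
}

func main() {
	anno := addOwner(nil, "cluster-a")
	anno = addOwner(anno, "cluster-b")
	anno = addOwner(anno, "cluster-a") // idempotent: no duplicate entry
	fmt.Println(anno[ownersKey])       // cluster-a,cluster-b
}
```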
diff --git a/pkg/utils/keys/keys.go b/pkg/utils/keys/keys.go
index 3cb1fbd6a..cb7209a4c 100755
--- a/pkg/utils/keys/keys.go
+++ b/pkg/utils/keys/keys.go
@@ -88,6 +88,20 @@ func ClusterWideKeyFunc(obj interface{}) (ClusterWideKey, error) {
return key, nil
}
+// NamespaceWideKeyFunc generates a namespace/name key for the object, returned as a ClusterWideKey with no group/version/kind set.
+func NamespaceWideKeyFunc(obj interface{}) (ClusterWideKey, error) {
+ key := ClusterWideKey{}
+ metaInfo, err := meta.Accessor(obj)
+ if err != nil { // should not happen
+ return key, fmt.Errorf("object has no meta: %v", err)
+ }
+
+ key.Namespace = metaInfo.GetNamespace()
+ key.Name = metaInfo.GetName()
+
+ return key, nil
+}
+
// FederatedKey is the object key which is a unique identifier across all clusters in federation.
type FederatedKey struct {
// Cluster is the cluster name of the referencing object.
diff --git a/pkg/utils/lifted/autodetection/filtered.go b/pkg/utils/lifted/autodetection/filtered.go
new file mode 100644
index 000000000..d2f6a3742
--- /dev/null
+++ b/pkg/utils/lifted/autodetection/filtered.go
@@ -0,0 +1,71 @@
+// This code is directly lifted from the Calico project.
+
+// For reference:
+// https://github.com/projectcalico/calico/blob/master/node/pkg/lifecycle/startup/autodetection/filtered.go
+
+// Copyright (c) 2016 Tigera, Inc. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+package autodetection
+
+import (
+ "errors"
+ "fmt"
+ gonet "net"
+
+ "github.com/projectcalico/calico/libcalico-go/lib/net"
+ log "github.com/sirupsen/logrus"
+)
+
+// FilteredEnumeration performs basic IP and IPNetwork discovery by enumerating
+// all interfaces and filtering in/out based on the supplied filter regex.
+//
+// The incl and excl slice of regex strings may be nil.
+func FilteredEnumeration(incl, excl []string, cidrs []net.IPNet, version int) (*Interface, *net.IPNet, error) {
+ interfaces, err := GetInterfaces(gonet.Interfaces, incl, excl, version)
+ if err != nil {
+ return nil, nil, err
+ }
+ if len(interfaces) == 0 {
+ return nil, nil, errors.New("no valid host interfaces found")
+ }
+
+ // Find the first interface with a valid matching IP address and network.
+ // We initialise the IP with the first valid IP that we find just in
+ // case we don't find an IP *and* network.
+ for _, i := range interfaces {
+ log.WithField("Name", i.Name).Debug("Check interface")
+ for _, c := range i.Cidrs {
+ log.WithField("CIDR", c).Debug("Check address")
+ if c.IP.IsGlobalUnicast() && matchCIDRs(c.IP, cidrs) {
+ return &i, &c, nil
+ }
+ }
+ }
+
+ return nil, nil, fmt.Errorf("no valid IPv%d addresses found on the host interfaces", version)
+}
+
+// matchCIDRs matches an IP address against a list of cidrs.
+// If the list is empty, it always matches.
+func matchCIDRs(ip gonet.IP, cidrs []net.IPNet) bool {
+ if len(cidrs) == 0 {
+ return true
+ }
+ for _, cidr := range cidrs {
+ if cidr.Contains(ip) {
+ return true
+ }
+ }
+ return false
+}
diff --git a/pkg/utils/lifted/autodetection/interfaces.go b/pkg/utils/lifted/autodetection/interfaces.go
new file mode 100644
index 000000000..a2e0cd0f5
--- /dev/null
+++ b/pkg/utils/lifted/autodetection/interfaces.go
@@ -0,0 +1,109 @@
+// This code is directly lifted from the Calico project.
+
+// For reference:
+// https://github.com/projectcalico/calico/blob/master/node/pkg/lifecycle/startup/autodetection/interfaces.go
+
+// Copyright (c) 2016 Tigera, Inc. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+package autodetection
+
+import (
+ "net"
+ "regexp"
+ "strings"
+
+ cnet "github.com/projectcalico/calico/libcalico-go/lib/net"
+ log "github.com/sirupsen/logrus"
+)
+
+// Interface contains details about an interface on the host.
+type Interface struct {
+ Name string
+ Cidrs []cnet.IPNet
+}
+
+// GetInterfaces returns a list of all interfaces, skipping any interfaces whose
+// name matches any of the exclusion list regexes, and including those on the
+// inclusion list.
+func GetInterfaces(getSystemInterfaces func() ([]net.Interface, error), includeRegexes []string, excludeRegexes []string, version int) ([]Interface, error) {
+ netIfaces, err := getSystemInterfaces()
+ if err != nil {
+ log.WithError(err).Warnf("Failed to enumerate interfaces")
+ return nil, err
+ }
+
+ var filteredIfaces []Interface
+ var includeRegexp *regexp.Regexp
+ var excludeRegexp *regexp.Regexp
+
+ // Create single include and exclude regexes to perform the interface
+ // check.
+ if len(includeRegexes) > 0 {
+ if includeRegexp, err = regexp.Compile("(" + strings.Join(includeRegexes, ")|(") + ")"); err != nil {
+ log.WithError(err).Warnf("Invalid interface regex")
+ return nil, err
+ }
+ }
+ if len(excludeRegexes) > 0 {
+ if excludeRegexp, err = regexp.Compile("(" + strings.Join(excludeRegexes, ")|(") + ")"); err != nil {
+ log.WithError(err).Warnf("Invalid interface regex")
+ return nil, err
+ }
+ }
+
+ // Loop through interfaces filtering on the regexes. Loop in reverse
+ // order to maintain behavior with older versions.
+ for idx := len(netIfaces) - 1; idx >= 0; idx-- {
+ iface := netIfaces[idx]
+ include := (includeRegexp == nil) || includeRegexp.MatchString(iface.Name)
+ exclude := (excludeRegexp != nil) && excludeRegexp.MatchString(iface.Name)
+ if include && !exclude {
+ if i, err := convertInterface(&iface, version); err == nil {
+ filteredIfaces = append(filteredIfaces, *i)
+ }
+ }
+ }
+ return filteredIfaces, nil
+}
+
+// convertInterface converts a net.Interface to our Interface type (which has
+// converted address types).
+func convertInterface(i *net.Interface, version int) (*Interface, error) {
+ log.WithField("Interface", i.Name).Debug("Querying interface addresses")
+ addrs, err := i.Addrs()
+ if err != nil {
+ log.Warnf("Cannot get interface address(es): %v", err)
+ return nil, err
+ }
+
+ iface := &Interface{Name: i.Name}
+ for _, addr := range addrs {
+ addrStr := addr.String()
+ ip, ipNet, err := cnet.ParseCIDR(addrStr)
+ if err != nil {
+ log.WithError(err).WithField("Address", addrStr).Warning("Failed to parse CIDR")
+ continue
+ }
+
+ if ip.Version() == version {
+ // Switch out the IP address in the network with the
+ // interface IP to get the CIDR (IP + network).
+ ipNet.IP = ip.IP
+ log.WithField("CIDR", ipNet).Debug("Found valid IP address and network")
+ iface.Cidrs = append(iface.Cidrs, *ipNet)
+ }
+ }
+
+ return iface, nil
+}
diff --git a/pkg/utils/lifted/autodetection/reachaddr.go b/pkg/utils/lifted/autodetection/reachaddr.go
new file mode 100644
index 000000000..fadf2a6cc
--- /dev/null
+++ b/pkg/utils/lifted/autodetection/reachaddr.go
@@ -0,0 +1,71 @@
+// This code is directly lifted from the Calico project.
+
+// For reference:
+// https://github.com/projectcalico/calico/blob/master/node/pkg/lifecycle/startup/autodetection/reachaddr.go
+
+// Copyright (c) 2017 Tigera, Inc. All rights reserved.
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+package autodetection
+
+import (
+ "fmt"
+ gonet "net"
+
+ "github.com/projectcalico/calico/libcalico-go/lib/net"
+ log "github.com/sirupsen/logrus"
+)
+
+// ReachDestination auto-detects the interface Network by setting up a UDP
+// connection to a "reach" destination.
+func ReachDestination(dest string, version int) (*Interface, *net.IPNet, error) {
+ log.Debugf("Auto-detecting IPv%d CIDR by reaching destination %s", version, dest)
+
+ // Open a UDP connection to determine which external IP address is
+ // used to access the supplied destination.
+ protocol := fmt.Sprintf("udp%d", version)
+ address := fmt.Sprintf("[%s]:80", dest)
+ conn, err := gonet.Dial(protocol, address)
+ if err != nil {
+ return nil, nil, err
+ }
+ defer conn.Close() // nolint: errcheck
+
+ // Get the local address as a golang IP and use that to find the matching
+ // interface CIDR.
+ addr := conn.LocalAddr()
+ if addr == nil {
+ return nil, nil, fmt.Errorf("no address detected by connecting to %s", dest)
+ }
+ udpAddr := addr.(*gonet.UDPAddr)
+ log.WithFields(log.Fields{"IP": udpAddr.IP, "Destination": dest}).Info("Auto-detected address by connecting to remote")
+
+ // Get a full list of interface and IPs and find the CIDR matching the
+ // found IP.
+ ifaces, err := GetInterfaces(gonet.Interfaces, nil, nil, version)
+ if err != nil {
+ return nil, nil, err
+ }
+ for _, iface := range ifaces {
+ log.WithField("Name", iface.Name).Debug("Checking interface CIDRs")
+ for _, cidr := range iface.Cidrs {
+ log.WithField("CIDR", cidr.String()).Debug("Checking CIDR")
+ if cidr.IP.Equal(udpAddr.IP) {
+ log.WithField("CIDR", cidr.String()).Debug("Found matching interface CIDR")
+ return &iface, &cidr, nil
+ }
+ }
+ }
+
+ return nil, nil, fmt.Errorf("autodetected IPv%d address does not match any addresses found on local interfaces: %s", version, udpAddr.IP.String())
+}
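`ReachDestination` relies on the fact that `Dial` on a UDP "connection" sends no packets; it only asks the kernel which source address it would route from, which is then read back via `LocalAddr`. A stdlib-only sketch of that trick (IPv4 only, hypothetical helper name; the result depends on the host's routing table):

```go
package main

import (
	"fmt"
	"net"
)

// localAddrFor returns the source IPv4 address the kernel would use to
// reach dest. Dialing UDP is side-effect free: no traffic is sent, the
// socket is merely bound to the outbound interface for that route.
func localAddrFor(dest string) (net.IP, error) {
	conn, err := net.Dial("udp4", net.JoinHostPort(dest, "80"))
	if err != nil {
		return nil, err
	}
	defer conn.Close()
	return conn.LocalAddr().(*net.UDPAddr).IP, nil
}

func main() {
	ip, err := localAddrFor("8.8.8.8")
	if err != nil {
		fmt.Println("no route:", err)
		return
	}
	fmt.Println("outbound IPv4:", ip)
}
```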
diff --git a/pkg/utils/manager/resource.go b/pkg/utils/manager/resource.go
new file mode 100644
index 000000000..17dd6851c
--- /dev/null
+++ b/pkg/utils/manager/resource.go
@@ -0,0 +1,46 @@
+package manager
+
+import (
+ v1 "k8s.io/api/core/v1"
+ "k8s.io/apimachinery/pkg/labels"
+ corev1listers "k8s.io/client-go/listers/core/v1"
+ "k8s.io/klog/v2"
+)
+
+type ResourceManager struct {
+ podLister corev1listers.PodLister
+ secretLister corev1listers.SecretLister
+ configMapLister corev1listers.ConfigMapLister
+ serviceLister corev1listers.ServiceLister
+}
+
+func NewResourceManager(podLister corev1listers.PodLister, secretLister corev1listers.SecretLister, configMapLister corev1listers.ConfigMapLister, serviceLister corev1listers.ServiceLister) (*ResourceManager, error) {
+ rm := ResourceManager{
+ podLister: podLister,
+ secretLister: secretLister,
+ configMapLister: configMapLister,
+ serviceLister: serviceLister,
+ }
+ return &rm, nil
+}
+
+func (rm *ResourceManager) GetPods() []*v1.Pod {
+	l, err := rm.podLister.List(labels.Everything())
+	if err != nil {
+		klog.Errorf("failed to fetch pods from lister: %v", err)
+		return make([]*v1.Pod, 0)
+	}
+	return l
+}
+
+func (rm *ResourceManager) GetConfigMap(name, namespace string) (*v1.ConfigMap, error) {
+ return rm.configMapLister.ConfigMaps(namespace).Get(name)
+}
+
+func (rm *ResourceManager) GetSecret(name, namespace string) (*v1.Secret, error) {
+ return rm.secretLister.Secrets(namespace).Get(name)
+}
+
+func (rm *ResourceManager) ListServices() ([]*v1.Service, error) {
+ return rm.serviceLister.List(labels.Everything())
+}
diff --git a/pkg/utils/node.go b/pkg/utils/node.go
new file mode 100644
index 000000000..f7cc169c4
--- /dev/null
+++ b/pkg/utils/node.go
@@ -0,0 +1,104 @@
+package utils
+
+import (
+ corev1 "k8s.io/api/core/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+)
+
+func BuildNodeTemplate(name string) *corev1.Node {
+ node := &corev1.Node{
+ ObjectMeta: metav1.ObjectMeta{
+ Name: name,
+ Labels: map[string]string{
+ KosmosNodeLabel: KosmosNodeValue,
+ NodeRoleLabel: NodeRoleValue,
+ NodeHostnameValue: name,
+ NodeArchLabelStable: DefaultK8sArch,
+ NodeOSLabelStable: DefaultK8sOS,
+ NodeOSLabelBeta: DefaultK8sOS,
+ },
+ },
+ Spec: corev1.NodeSpec{},
+ Status: corev1.NodeStatus{
+ NodeInfo: corev1.NodeSystemInfo{
+ OperatingSystem: DefaultK8sOS,
+ Architecture: DefaultK8sArch,
+ },
+ },
+ }
+
+	// TODO: support custom taints from the Cluster CR
+ node.Spec.Taints = append(node.Spec.Taints, corev1.Taint{
+ Key: KosmosNodeTaintKey,
+ Value: KosmosNodeTaintValue,
+ Effect: KosmosNodeTaintEffect,
+ })
+
+ node.Status.Conditions = NodeConditions()
+
+ return node
+}
+
+func NodeConditions() []corev1.NodeCondition {
+ return []corev1.NodeCondition{
+ {
+			Type:               corev1.NodeReady,
+ Status: corev1.ConditionTrue,
+ LastHeartbeatTime: metav1.Now(),
+ LastTransitionTime: metav1.Now(),
+ Reason: "KubeletReady",
+ Message: "kubelet is posting ready status",
+ },
+ {
+			Type:               corev1.NodeNetworkUnavailable,
+ Status: corev1.ConditionFalse,
+ LastHeartbeatTime: metav1.Now(),
+ LastTransitionTime: metav1.Now(),
+ Reason: "RouteCreated",
+ Message: "RouteController created a route",
+ },
+ {
+			Type:               corev1.NodeMemoryPressure,
+ Status: corev1.ConditionFalse,
+ LastHeartbeatTime: metav1.Now(),
+ LastTransitionTime: metav1.Now(),
+ Reason: "KubeletHasSufficientMemory",
+ Message: "kubelet has sufficient memory available",
+ },
+ {
+			Type:               corev1.NodeDiskPressure,
+ Status: corev1.ConditionFalse,
+ LastHeartbeatTime: metav1.Now(),
+ LastTransitionTime: metav1.Now(),
+ Reason: "KubeletHasNoDiskPressure",
+ Message: "kubelet has no disk pressure",
+ },
+ {
+			Type:               corev1.NodePIDPressure,
+ Status: corev1.ConditionFalse,
+ LastHeartbeatTime: metav1.Now(),
+ LastTransitionTime: metav1.Now(),
+ Reason: "KubeletHasSufficientPID",
+ Message: "kubelet has sufficient PID available",
+ },
+ }
+}
+
+func NodeReady(node *corev1.Node) bool {
+ for _, condition := range node.Status.Conditions {
+ if condition.Type != corev1.NodeReady {
+ continue
+ }
+ if condition.Status == corev1.ConditionTrue {
+ return true
+ }
+ }
+ return false
+}
+
+//func UpdateLastHeartbeatTime(n *corev1.Node) {
+// now := metav1.NewTime(time.Now())
+// for i := range n.Status.Conditions {
+// n.Status.Conditions[i].LastHeartbeatTime = now
+// }
+//}
diff --git a/pkg/utils/node_client.go b/pkg/utils/node_client.go
new file mode 100644
index 000000000..a6dd7d65c
--- /dev/null
+++ b/pkg/utils/node_client.go
@@ -0,0 +1,85 @@
+package utils
+
+import (
+ "context"
+
+ "k8s.io/apimachinery/pkg/types"
+ "k8s.io/client-go/dynamic"
+ "k8s.io/client-go/kubernetes"
+ "k8s.io/client-go/rest"
+ "sigs.k8s.io/controller-runtime/pkg/client"
+
+ kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ kosmosversioned "github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned"
+)
+
+type ClusterKubeClient struct {
+ KubeClient kubernetes.Interface
+ ClusterName string
+}
+
+type ClusterKosmosClient struct {
+ KosmosClient kosmosversioned.Interface
+ ClusterName string
+}
+
+type ClusterDynamicClient struct {
+ DynamicClient dynamic.Interface
+ ClusterName string
+}
+
+// NewClusterKubeClient creates a kube client for a member cluster
+func NewClusterKubeClient(config *rest.Config, ClusterName string) (*ClusterKubeClient, error) {
+ kubeClient, err := kubernetes.NewForConfig(config)
+ if err != nil {
+ return nil, err
+ }
+
+ return &ClusterKubeClient{
+ KubeClient: kubeClient,
+ ClusterName: ClusterName,
+ }, nil
+}
+
+// NewClusterKosmosClient creates a kosmos CRD client for a member cluster
+func NewClusterKosmosClient(config *rest.Config, ClusterName string) (*ClusterKosmosClient, error) {
+ kosmosClient, err := kosmosversioned.NewForConfig(config)
+ if err != nil {
+ return nil, err
+ }
+
+ return &ClusterKosmosClient{
+ KosmosClient: kosmosClient,
+ ClusterName: ClusterName,
+ }, nil
+}
+
+// NewClusterDynamicClient creates a dynamic client for a member cluster
+func NewClusterDynamicClient(config *rest.Config, ClusterName string) (*ClusterDynamicClient, error) {
+ dynamicClient, err := dynamic.NewForConfig(config)
+ if err != nil {
+ return nil, err
+ }
+
+ return &ClusterDynamicClient{
+ DynamicClient: dynamicClient,
+ ClusterName: ClusterName,
+ }, nil
+}
+
+func BuildConfig(cluster *kosmosv1alpha1.Cluster, opts Opts) (*rest.Config, error) {
+ config, err := NewConfigFromBytes(cluster.Spec.Kubeconfig, opts)
+ if err != nil {
+ return nil, err
+ }
+ return config, nil
+}
+
+// GetCluster returns the member cluster
+func GetCluster(hostClient client.Client, clusterName string) (*kosmosv1alpha1.Cluster, error) {
+ cluster := &kosmosv1alpha1.Cluster{}
+ if err := hostClient.Get(context.TODO(), types.NamespacedName{Name: clusterName}, cluster); err != nil {
+ return nil, err
+ }
+ return cluster, nil
+}
diff --git a/pkg/utils/podutils/env.go b/pkg/utils/podutils/env.go
new file mode 100644
index 000000000..58a42ab4d
--- /dev/null
+++ b/pkg/utils/podutils/env.go
@@ -0,0 +1,702 @@
+// This code is directly lifted from virtual-kubelet.
+// For reference:
+// https://github.com/virtual-kubelet/virtual-kubelet/blob/master/internal/podutils/env.go
+
+package podutils
+
+import (
+ "context"
+ "fmt"
+ "net"
+ "sort"
+ "strconv"
+ "strings"
+
+ corev1 "k8s.io/api/core/v1"
+ "k8s.io/apimachinery/pkg/api/errors"
+ "k8s.io/apimachinery/pkg/api/meta"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/util/sets"
+ apivalidation "k8s.io/apimachinery/pkg/util/validation"
+	"k8s.io/klog/v2"
+ "k8s.io/utils/pointer"
+
+ "github.com/kosmos.io/kosmos/pkg/utils"
+)
+
+const (
+ // ReasonOptionalConfigMapNotFound is the reason used in events emitted when an optional configmap is not found.
+ ReasonOptionalConfigMapNotFound = "OptionalConfigMapNotFound"
+ // ReasonOptionalConfigMapKeyNotFound is the reason used in events emitted when an optional configmap key is not found.
+ ReasonOptionalConfigMapKeyNotFound = "OptionalConfigMapKeyNotFound"
+ // ReasonFailedToReadOptionalConfigMap is the reason used in events emitted when an optional configmap could not be read.
+ ReasonFailedToReadOptionalConfigMap = "FailedToReadOptionalConfigMap"
+
+ // ReasonOptionalSecretNotFound is the reason used in events emitted when an optional secret is not found.
+ ReasonOptionalSecretNotFound = "OptionalSecretNotFound"
+ // ReasonOptionalSecretKeyNotFound is the reason used in events emitted when an optional secret key is not found.
+ ReasonOptionalSecretKeyNotFound = "OptionalSecretKeyNotFound"
+ // ReasonFailedToReadOptionalSecret is the reason used in events emitted when an optional secret could not be read.
+ ReasonFailedToReadOptionalSecret = "FailedToReadOptionalSecret"
+
+ // ReasonMandatoryConfigMapNotFound is the reason used in events emitted when a mandatory configmap is not found.
+ ReasonMandatoryConfigMapNotFound = "MandatoryConfigMapNotFound"
+ // ReasonMandatoryConfigMapKeyNotFound is the reason used in events emitted when a mandatory configmap key is not found.
+ ReasonMandatoryConfigMapKeyNotFound = "MandatoryConfigMapKeyNotFound"
+ // ReasonFailedToReadMandatoryConfigMap is the reason used in events emitted when a mandatory configmap could not be read.
+ ReasonFailedToReadMandatoryConfigMap = "FailedToReadMandatoryConfigMap"
+
+ // ReasonMandatorySecretNotFound is the reason used in events emitted when a mandatory secret is not found.
+ ReasonMandatorySecretNotFound = "MandatorySecretNotFound"
+ // ReasonMandatorySecretKeyNotFound is the reason used in events emitted when a mandatory secret key is not found.
+ ReasonMandatorySecretKeyNotFound = "MandatorySecretKeyNotFound"
+ // ReasonFailedToReadMandatorySecret is the reason used in events emitted when a mandatory secret could not be read.
+ ReasonFailedToReadMandatorySecret = "FailedToReadMandatorySecret"
+
+ // ReasonInvalidEnvironmentVariableNames is the reason used in events emitted when a configmap/secret referenced in a ".spec.containers[*].envFrom" field contains invalid environment variable names.
+ ReasonInvalidEnvironmentVariableNames = "InvalidEnvironmentVariableNames"
+)
+
+var masterServices = sets.NewString("kubernetes")
+
+// PopulateEnvironmentVariables populates the environment of each container (and init container) in the specified pod.
+func PopulateEnvironmentVariables(ctx context.Context, pod *corev1.Pod, rm utils.EnvResourceManager) error {
+ // Populate each init container's environment.
+ for idx := range pod.Spec.InitContainers {
+ if err := populateContainerEnvironment(ctx, pod, &pod.Spec.InitContainers[idx], rm); err != nil {
+ return err
+ }
+ }
+ // Populate each container's environment.
+ for idx := range pod.Spec.Containers {
+ if err := populateContainerEnvironment(ctx, pod, &pod.Spec.Containers[idx], rm); err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+// populateContainerEnvironment populates the environment of a single container in the specified pod.
+func populateContainerEnvironment(ctx context.Context, pod *corev1.Pod, container *corev1.Container, rm utils.EnvResourceManager) error {
+ // Create an "environment map" based on the value of the specified container's ".envFrom" field.
+ tmpEnv, err := makeEnvironmentMapBasedOnEnvFrom(ctx, pod, container, rm)
+ if err != nil {
+ return err
+ }
+ // Create the final "environment map" for the container using the ".env" and ".envFrom" field
+ // and service environment variables.
+ envs, err := makeEnvironmentMap(ctx, pod, container, rm, tmpEnv)
+ if err != nil && len(envs) == 0 {
+ return err
+ }
+ // Empty the container's ".envFrom" field and replace its ".env" field with the final, merged environment.
+ // Values in "env" (sourced from ".env") will override any values with the same key defined in "envFrom" (sourced from ".envFrom").
+ // This is in accordance with what the Kubelet itself does.
+ // https://github.com/kubernetes/kubernetes/blob/v1.13.1/pkg/kubelet/kubelet_pods.go#L557-L558
+ container.EnvFrom = []corev1.EnvFromSource{}
+
+ res := make([]corev1.EnvVar, 0, len(tmpEnv))
+
+ for key, val := range tmpEnv {
+ res = append(res, corev1.EnvVar{
+ Name: key,
+ Value: val,
+ })
+ }
+ res = append(res, envs...)
+ container.Env = res
+
+ return nil
+}
+
+// getServiceEnvVarMap makes a map[string]string of env vars for services a
+// pod in namespace ns should see.
+// Based on getServiceEnvVarMap in kubelet_pods.go.
+func getServiceEnvVarMap(rm utils.EnvResourceManager, ns string, enableServiceLinks bool) (map[string]string, error) {
+ var (
+ serviceMap = make(map[string]*corev1.Service)
+ m = make(map[string]string)
+ )
+
+ services, err := rm.ListServices()
+ if err != nil {
+ return nil, err
+ }
+
+ // project the services in namespace ns onto the master services
+ for i := range services {
+ service := services[i]
+ // ignore services where ClusterIP is "None" or empty
+ if !IsServiceIPSet(service) {
+ continue
+ }
+ serviceName := service.Name
+
+ // We always want to add environment variables for master kubernetes service
+ // from the default namespace, even if enableServiceLinks is false.
+ // We also add environment variables for other services in the same
+ // namespace, if enableServiceLinks is true.
+ if service.Namespace == metav1.NamespaceDefault && masterServices.Has(serviceName) {
+ if _, exists := serviceMap[serviceName]; !exists {
+ serviceMap[serviceName] = service
+ }
+ } else if service.Namespace == ns && enableServiceLinks {
+ serviceMap[serviceName] = service
+ }
+ }
+
+ mappedServices := make([]*corev1.Service, 0, len(serviceMap))
+ for key := range serviceMap {
+ mappedServices = append(mappedServices, serviceMap[key])
+ }
+
+ for _, e := range FromServices(mappedServices) {
+ m[e.Name] = e.Value
+ }
+ return m, nil
+}
+
+// makeEnvironmentMapBasedOnEnvFrom returns a map representing the resolved environment of the specified container after being populated from the entries in the ".envFrom" field.
+func makeEnvironmentMapBasedOnEnvFrom(ctx context.Context, pod *corev1.Pod, container *corev1.Container, rm utils.EnvResourceManager) (map[string]string, error) {
+ // Create a map to hold the resulting environment.
+ res := make(map[string]string)
+ // Iterate over "envFrom" references in order to populate the environment.
+loop:
+ for _, envFrom := range container.EnvFrom {
+ switch {
+ // Handle population from a configmap.
+ case envFrom.ConfigMapRef != nil:
+ ef := envFrom.ConfigMapRef
+ // Check whether the configmap reference is optional.
+ // This will control whether we fail when unable to read the configmap.
+ optional := ef.Optional != nil && *ef.Optional
+ // Try to grab the referenced configmap.
+ m, err := rm.GetConfigMap(ef.Name, pod.Namespace)
+ if err != nil {
+ // We couldn't fetch the configmap.
+ // However, if the configmap reference is optional we should not fail.
+ if optional {
+ if errors.IsNotFound(err) {
+ klog.Warningf("configmap %q not found", ef.Name)
+ } else {
+ klog.Warningf("failed to read configmap %q: %v", ef.Name, err)
+ }
+ // Continue on to the next reference.
+ continue loop
+ }
+ // At this point we know the configmap reference is mandatory.
+ // Hence, we should return a meaningful error.
+ if errors.IsNotFound(err) {
+ klog.Warningf("configmap %q not found", ef.Name)
+ return nil, fmt.Errorf("configmap %q not found", ef.Name)
+ }
+ klog.Warningf("failed to read configmap %q", ef.Name)
+ return nil, fmt.Errorf("failed to fetch configmap %q: %v", ef.Name, err)
+ }
+ // At this point we have successfully fetched the target configmap.
+ // Iterate over the keys defined in the configmap and populate the environment accordingly.
+ // https://github.com/kubernetes/kubernetes/blob/v1.13.1/pkg/kubelet/kubelet_pods.go#L581-L595
+ invalidKeys := make([]string, 0)
+ mKeys:
+ for key, val := range m.Data {
+ // If a prefix has been defined, prepend it to the environment variable's name.
+ if len(envFrom.Prefix) > 0 {
+ key = envFrom.Prefix + key
+ }
+ // Make sure that the resulting key is a valid environment variable name.
+ // If it isn't, it should be appended to the list of invalid keys and skipped.
+ if errMsgs := apivalidation.IsEnvVarName(key); len(errMsgs) != 0 {
+ invalidKeys = append(invalidKeys, key)
+ continue mKeys
+ }
+ // Add the key and its value to the environment.
+ res[key] = val
+ }
+ // Report any invalid keys.
+ if len(invalidKeys) > 0 {
+ sort.Strings(invalidKeys)
+ klog.Warningf("keys [%s] from configmap %s/%s were skipped since they are invalid as environment variable names", strings.Join(invalidKeys, ", "), m.Namespace, m.Name)
+ }
+ // Handle population from a secret.
+ case envFrom.SecretRef != nil:
+ ef := envFrom.SecretRef
+ // Check whether the secret reference is optional.
+ // This will control whether we fail when unable to read the secret.
+ optional := ef.Optional != nil && *ef.Optional
+ // Try to grab the referenced secret.
+ s, err := rm.GetSecret(ef.Name, pod.Namespace)
+ if err != nil {
+ // We couldn't fetch the secret.
+ // However, if the secret reference is optional we should not fail.
+ if optional {
+ if errors.IsNotFound(err) {
+ klog.Warningf("secret %q not found", ef.Name)
+ } else {
+ klog.Warningf("failed to read secret %q: %v", ef.Name, err)
+ }
+ // Continue on to the next reference.
+ continue loop
+ }
+ // At this point we know the secret reference is mandatory.
+ // Hence, we should return a meaningful error.
+ if errors.IsNotFound(err) {
+ klog.Warningf("secret %q not found", ef.Name)
+ return nil, fmt.Errorf("secret %q not found", ef.Name)
+ }
+ klog.Warningf("failed to read secret %q", ef.Name)
+ return nil, fmt.Errorf("failed to fetch secret %q: %v", ef.Name, err)
+ }
+ // At this point we have successfully fetched the target secret.
+ // Iterate over the keys defined in the secret and populate the environment accordingly.
+ // https://github.com/kubernetes/kubernetes/blob/v1.13.1/pkg/kubelet/kubelet_pods.go#L581-L595
+ invalidKeys := make([]string, 0)
+ sKeys:
+ for key, val := range s.Data {
+ // If a prefix has been defined, prepend it to the environment variable's name.
+ if len(envFrom.Prefix) > 0 {
+ key = envFrom.Prefix + key
+ }
+ // Make sure that the resulting key is a valid environment variable name.
+ // If it isn't, it should be appended to the list of invalid keys and skipped.
+ if errMsgs := apivalidation.IsEnvVarName(key); len(errMsgs) != 0 {
+ invalidKeys = append(invalidKeys, key)
+ continue sKeys
+ }
+ // Add the key and its value to the environment.
+ res[key] = string(val)
+ }
+ // Report any invalid keys.
+ if len(invalidKeys) > 0 {
+ sort.Strings(invalidKeys)
+ klog.Warningf("keys [%s] from secret %s/%s were skipped since they are invalid as environment variable names", strings.Join(invalidKeys, ", "), s.Namespace, s.Name)
+ }
+ }
+ }
+ // Return the populated environment.
+ return res, nil
+}
+
+// makeEnvironmentMap returns a map representing the resolved environment of the specified container after being populated from the entries in the ".env" and ".envFrom" field.
+func makeEnvironmentMap(ctx context.Context, pod *corev1.Pod, container *corev1.Container, rm utils.EnvResourceManager, res map[string]string) ([]corev1.EnvVar, error) {
+ // TODO If pod.Spec.EnableServiceLinks is nil then fail as per 1.14 kubelet.
+ enableServiceLinks := corev1.DefaultEnableServiceLinks
+ if pod.Spec.EnableServiceLinks != nil {
+ enableServiceLinks = *pod.Spec.EnableServiceLinks
+ }
+
+ // Note that there is a race between Kubelet seeing the pod and kubelet seeing the service.
+// To avoid this users can: (1) wait between starting a service and starting the pods that use it; or (2) detect
+ // missing service env var and exit and be restarted; or (3) use DNS instead of env vars
+ // and keep trying to resolve the DNS name of the service (recommended).
+ svcEnv, err := getServiceEnvVarMap(rm, pod.Namespace, enableServiceLinks)
+ if err != nil {
+ return nil, err
+ }
+
+ // If the variable's Value is set, expand the `$(var)` references to other
+ // variables in the .Value field; the sources of variables are the declared
+ // variables of the container and the service environment variables.
+ mappingFunc := MappingFuncFor(res, svcEnv)
+
+ // Iterate over environment variables in order to populate the map.
+ var keys []corev1.EnvVar
+ for _, env := range container.Env {
+ envptr := env
+ val, err := getEnvironmentVariableValue(ctx, &envptr, mappingFunc, pod, container, rm)
+ if err != nil {
+ keys = append(keys, env)
+ }
+ if val != nil {
+ res[env.Name] = *val
+ }
+ }
+
+ // Append service env vars.
+ for k, v := range svcEnv {
+ if _, present := res[k]; !present {
+ res[k] = v
+ }
+ }
+
+ return keys, nil
+}
+
+func getEnvironmentVariableValue(ctx context.Context, env *corev1.EnvVar, mappingFunc func(string) string, pod *corev1.Pod, container *corev1.Container, rm utils.EnvResourceManager) (*string, error) {
+ if env.ValueFrom != nil {
+ return getEnvironmentVariableValueWithValueFrom(ctx, env, pod, container, rm)
+ }
+ // Handle values that have been directly provided after expanding variable references.
+ return pointer.String(Expand(env.Value, mappingFunc)), nil
+}
+
+func getEnvironmentVariableValueWithValueFrom(ctx context.Context, env *corev1.EnvVar, pod *corev1.Pod, container *corev1.Container, rm utils.EnvResourceManager) (*string, error) {
+ // Handle population from a configmap key.
+ if env.ValueFrom.ConfigMapKeyRef != nil {
+ return getEnvironmentVariableValueWithValueFromConfigMapKeyRef(ctx, env, pod, container, rm)
+ }
+
+ // Handle population from a secret key.
+ if env.ValueFrom.SecretKeyRef != nil {
+ return getEnvironmentVariableValueWithValueFromSecretKeyRef(ctx, env, pod, container, rm)
+ }
+
+ // Handle population from a field (downward API).
+ if env.ValueFrom.FieldRef != nil {
+ return getEnvironmentVariableValueWithValueFromFieldRef(ctx, env, pod, container, rm)
+ }
+ if env.ValueFrom.ResourceFieldRef != nil {
+ // TODO Implement populating resource requests.
+ return nil, nil
+ }
+
+ klog.Error("Unhandled environment variable with non-nil env.ValueFrom, do not know how to populate")
+ return nil, nil
+}
+
+func getEnvironmentVariableValueWithValueFromConfigMapKeyRef(ctx context.Context, env *corev1.EnvVar, pod *corev1.Pod, container *corev1.Container, rm utils.EnvResourceManager) (*string, error) {
+ // The environment variable must be set from a configmap.
+ vf := env.ValueFrom.ConfigMapKeyRef
+ // Check whether the key reference is optional.
+ // This will control whether we fail when unable to read the requested key.
+ optional := vf != nil && vf.Optional != nil && *vf.Optional
+ // Try to grab the referenced configmap.
+ m, err := rm.GetConfigMap(vf.Name, pod.Namespace)
+ if err != nil {
+ // We couldn't fetch the configmap.
+ // However, if the key reference is optional we should not fail.
+ if optional {
+ if errors.IsNotFound(err) {
+ klog.Warningf("skipping optional envvar %q: configmap %q not found", env.Name, vf.Name)
+ } else {
+ klog.Warningf("failed to read configmap %q: %v", vf.Name, err)
+ }
+ // Continue on to the next reference.
+ return nil, nil
+ }
+ // At this point we know the key reference is mandatory.
+ // Hence, we should return a meaningful error.
+ if errors.IsNotFound(err) {
+ klog.Warningf("configmap %q not found", vf.Name)
+ return nil, fmt.Errorf("configmap %q not found", vf.Name)
+ }
+ klog.Warningf("failed to read configmap %q", vf.Name)
+ return nil, fmt.Errorf("failed to read configmap %q: %v", vf.Name, err)
+ }
+ // At this point we have successfully fetched the target configmap.
+ // We must now try to grab the requested key.
+ var (
+ keyExists bool
+ keyValue string
+ )
+ if keyValue, keyExists = m.Data[vf.Key]; !keyExists {
+ // The requested key does not exist.
+ // However, we should not fail if the key reference is optional.
+ if optional {
+ // Continue on to the next reference.
+ klog.Warningf("skipping optional envvar %q: key %q does not exist in configmap %q", env.Name, vf.Key, vf.Name)
+ return nil, nil
+ }
+ // At this point we know the key reference is mandatory.
+ // Hence, we should fail.
+ klog.Warningf("key %q does not exist in configmap %q", vf.Key, vf.Name)
+ return nil, fmt.Errorf("configmap %q doesn't contain the %q key required by pod %s", vf.Name, vf.Key, pod.Name)
+ }
+ // Populate the environment variable and continue on to the next reference.
+ return &keyValue, nil
+}
+
+func getEnvironmentVariableValueWithValueFromSecretKeyRef(ctx context.Context, env *corev1.EnvVar, pod *corev1.Pod, container *corev1.Container, rm utils.EnvResourceManager) (*string, error) {
+ vf := env.ValueFrom.SecretKeyRef
+ // Check whether the key reference is optional.
+ // This will control whether we fail when unable to read the requested key.
+ optional := vf != nil && vf.Optional != nil && *vf.Optional
+ // Try to grab the referenced secret.
+ s, err := rm.GetSecret(vf.Name, pod.Namespace)
+ if err != nil {
+ // We couldn't fetch the secret.
+ // However, if the key reference is optional we should not fail.
+ if optional {
+ if errors.IsNotFound(err) {
+ klog.Warningf("skipping optional envvar %q: secret %q not found", env.Name, vf.Name)
+ } else {
+ klog.Warningf("failed to read secret %q: %v", vf.Name, err)
+ klog.Warningf("skipping optional envvar %q: failed to read secret %q", env.Name, vf.Name)
+ }
+ // Continue on to the next reference.
+ return nil, nil
+ }
+ // At this point we know the key reference is mandatory.
+ // Hence, we should return a meaningful error.
+ if errors.IsNotFound(err) {
+ klog.Warningf("secret %q not found", vf.Name)
+ return nil, fmt.Errorf("secret %q not found", vf.Name)
+ }
+ klog.Warningf("failed to read secret %q", vf.Name)
+ return nil, fmt.Errorf("failed to read secret %q: %v", vf.Name, err)
+ }
+ // At this point we have successfully fetched the target secret.
+ // We must now try to grab the requested key.
+ var (
+ keyExists bool
+ keyValue []byte
+ )
+ if keyValue, keyExists = s.Data[vf.Key]; !keyExists {
+ // The requested key does not exist.
+ // However, we should not fail if the key reference is optional.
+ if optional {
+ // Continue on to the next reference.
+ klog.Warningf("skipping optional envvar %q: key %q does not exist in secret %q", env.Name, vf.Key, vf.Name)
+ return nil, nil
+ }
+ // At this point we know the key reference is mandatory.
+ // Hence, we should fail.
+ klog.Warningf("key %q does not exist in secret %q", vf.Key, vf.Name)
+ return nil, fmt.Errorf("secret %q doesn't contain the %q key required by pod %s", vf.Name, vf.Key, pod.Name)
+ }
+ // Populate the environment variable and continue on to the next reference.
+ ret := string(keyValue)
+ return &ret, nil
+}
+
+// Handle population from a field (downward API).
+func getEnvironmentVariableValueWithValueFromFieldRef(ctx context.Context, env *corev1.EnvVar, pod *corev1.Pod, container *corev1.Container, rm utils.EnvResourceManager) (*string, error) {
+ // https://github.com/virtual-kubelet/virtual-kubelet/issues/123
+ vf := env.ValueFrom.FieldRef
+
+ runtimeVal, err := podFieldSelectorRuntimeValue(vf, pod)
+ if err != nil {
+ return nil, err
+ }
+
+ return &runtimeVal, nil
+}
+
+// podFieldSelectorRuntimeValue returns the runtime value of the given
+// selector for a pod.
+func podFieldSelectorRuntimeValue(fs *corev1.ObjectFieldSelector, pod *corev1.Pod) (string, error) {
+ internalFieldPath, _, err := ConvertDownwardAPIFieldLabel(fs.APIVersion, fs.FieldPath, "")
+ if err != nil {
+ return "", err
+ }
+ switch internalFieldPath {
+ case "spec.nodeName":
+ return pod.Spec.NodeName, nil
+ case "spec.serviceAccountName":
+ return pod.Spec.ServiceAccountName, nil
+ }
+ return ExtractFieldPathAsString(pod, internalFieldPath)
+}
+
+// ExtractFieldPathAsString extracts the field from the given object
+// and returns it as a string. The object must be a pointer to an
+// API type.
+func ExtractFieldPathAsString(obj interface{}, fieldPath string) (string, error) {
+ accessor, err := meta.Accessor(obj)
+ if err != nil {
+ return "", err
+ }
+
+ if path, subscript, ok := SplitMaybeSubscriptedPath(fieldPath); ok {
+ switch path {
+ case "metadata.annotations":
+ if errs := apivalidation.IsQualifiedName(strings.ToLower(subscript)); len(errs) != 0 {
+ return "", fmt.Errorf("invalid key subscript in %s: %s", fieldPath, strings.Join(errs, ";"))
+ }
+ return accessor.GetAnnotations()[subscript], nil
+ case "metadata.labels":
+ if errs := apivalidation.IsQualifiedName(subscript); len(errs) != 0 {
+ return "", fmt.Errorf("invalid key subscript in %s: %s", fieldPath, strings.Join(errs, ";"))
+ }
+ return accessor.GetLabels()[subscript], nil
+ default:
+ return "", fmt.Errorf("fieldPath %q does not support subscript", fieldPath)
+ }
+ }
+
+ switch fieldPath {
+ case "metadata.annotations":
+ return FormatMap(accessor.GetAnnotations()), nil
+ case "metadata.labels":
+ return FormatMap(accessor.GetLabels()), nil
+ case "metadata.name":
+ return accessor.GetName(), nil
+ case "metadata.namespace":
+ return accessor.GetNamespace(), nil
+ case "metadata.uid":
+ return string(accessor.GetUID()), nil
+ }
+
+ return "", fmt.Errorf("unsupported fieldPath: %v", fieldPath)
+}
+
+// FormatMap formats map[string]string to a string.
+func FormatMap(m map[string]string) (fmtStr string) {
+ // output with keys in sorted order to provide stable output
+ keys := sets.NewString()
+ for key := range m {
+ keys.Insert(key)
+ }
+ for _, key := range keys.List() {
+ fmtStr += fmt.Sprintf("%v=%q\n", key, m[key])
+ }
+ fmtStr = strings.TrimSuffix(fmtStr, "\n")
+
+ return
+}
+
+// SplitMaybeSubscriptedPath checks whether the specified fieldPath is
+// subscripted, and
+// - if yes, this function splits the fieldPath into path and subscript, and
+// returns (path, subscript, true).
+// - if no, this function returns (fieldPath, "", false).
+//
+// Example inputs and outputs:
+// - "metadata.annotations['myKey']" --> ("metadata.annotations", "myKey", true)
+// - "metadata.annotations['a[b]c']" --> ("metadata.annotations", "a[b]c", true)
+// - "metadata.labels['']" --> ("metadata.labels", "", true)
+// - "metadata.labels" --> ("metadata.labels", "", false)
+func SplitMaybeSubscriptedPath(fieldPath string) (string, string, bool) {
+ if !strings.HasSuffix(fieldPath, "']") {
+ return fieldPath, "", false
+ }
+ s := strings.TrimSuffix(fieldPath, "']")
+ parts := strings.SplitN(s, "['", 2)
+ if len(parts) < 2 {
+ return fieldPath, "", false
+ }
+ if len(parts[0]) == 0 {
+ return fieldPath, "", false
+ }
+ return parts[0], parts[1], true
+}
+
+// ConvertDownwardAPIFieldLabel converts the specified downward API field label
+// and its value in the pod of the specified version to the internal version,
+// and returns the converted label and value. This function returns an error if
+// the conversion fails.
+func ConvertDownwardAPIFieldLabel(version, label, value string) (string, string, error) {
+ if version != "v1" {
+ return "", "", fmt.Errorf("unsupported pod version: %s", version)
+ }
+
+ if path, _, ok := SplitMaybeSubscriptedPath(label); ok {
+ switch path {
+ case "metadata.annotations", "metadata.labels":
+ return label, value, nil
+ default:
+ return "", "", fmt.Errorf("field label does not support subscript: %s", label)
+ }
+ }
+
+ switch label {
+ case "metadata.annotations",
+ "metadata.labels",
+ "metadata.name",
+ "metadata.namespace",
+ "metadata.uid",
+ "spec.nodeName",
+ "spec.restartPolicy",
+ "spec.serviceAccountName",
+ "spec.schedulerName",
+ "status.phase",
+ "status.hostIP",
+ "status.podIP",
+ "status.podIPs":
+ return label, value, nil
+ // This is for backwards compatibility with old v1 clients which send spec.host
+ case "spec.host":
+ return "spec.nodeName", value, nil
+ default:
+ return "", "", fmt.Errorf("field label not supported: %s", label)
+ }
+}
+
+// IsServiceIPSet checks whether the service's ClusterIP is set;
+// it does not perform any validation of the value.
+func IsServiceIPSet(service *corev1.Service) bool {
+ return service.Spec.ClusterIP != corev1.ClusterIPNone && service.Spec.ClusterIP != ""
+}
+
+// FromServices builds environment variables that a container is started with,
+// which tell the container where to find the services it may need, which are
+// provided as an argument.
+func FromServices(services []*corev1.Service) []corev1.EnvVar {
+ var result []corev1.EnvVar
+ for i := range services {
+ service := services[i]
+
+ // ignore services where ClusterIP is "None" or empty
+ // the services passed to this method should be pre-filtered
+ // only services that have the cluster IP set should be included here
+ if !IsServiceIPSet(service) {
+ continue
+ }
+
+ // Host
+ name := makeEnvVariableName(service.Name) + "_SERVICE_HOST"
+ result = append(result, corev1.EnvVar{Name: name, Value: service.Spec.ClusterIP})
+ // First port - give it the backwards-compatible name
+ name = makeEnvVariableName(service.Name) + "_SERVICE_PORT"
+ result = append(result, corev1.EnvVar{Name: name, Value: strconv.Itoa(int(service.Spec.Ports[0].Port))})
+ // All named ports (only the first may be unnamed, checked in validation)
+ for i := range service.Spec.Ports {
+ sp := &service.Spec.Ports[i]
+ if sp.Name != "" {
+ pn := name + "_" + makeEnvVariableName(sp.Name)
+ result = append(result, corev1.EnvVar{Name: pn, Value: strconv.Itoa(int(sp.Port))})
+ }
+ }
+ // Docker-compatible vars.
+ result = append(result, makeLinkVariables(service)...)
+ }
+ return result
+}
+
+func makeEnvVariableName(str string) string {
+ // TODO: If we simplify to "all names are DNS1123Subdomains" this
+ // will need two tweaks:
+ // 1) Handle leading digits
+ // 2) Handle dots
+ return strings.ToUpper(strings.Replace(str, "-", "_", -1))
+}
+
+func makeLinkVariables(service *corev1.Service) []corev1.EnvVar {
+ prefix := makeEnvVariableName(service.Name)
+ all := []corev1.EnvVar{}
+ for i := range service.Spec.Ports {
+ sp := &service.Spec.Ports[i]
+
+ protocol := string(corev1.ProtocolTCP)
+ if sp.Protocol != "" {
+ protocol = string(sp.Protocol)
+ }
+
+ hostPort := net.JoinHostPort(service.Spec.ClusterIP, strconv.Itoa(int(sp.Port)))
+
+ if i == 0 {
+ // Docker special-cases the first port.
+ all = append(all, corev1.EnvVar{
+ Name: prefix + "_PORT",
+ Value: fmt.Sprintf("%s://%s", strings.ToLower(protocol), hostPort),
+ })
+ }
+ portPrefix := fmt.Sprintf("%s_PORT_%d_%s", prefix, sp.Port, strings.ToUpper(protocol))
+ all = append(all, []corev1.EnvVar{
+ {
+ Name: portPrefix,
+ Value: fmt.Sprintf("%s://%s", strings.ToLower(protocol), hostPort),
+ },
+ {
+ Name: portPrefix + "_PROTO",
+ Value: strings.ToLower(protocol),
+ },
+ {
+ Name: portPrefix + "_PORT",
+ Value: strconv.Itoa(int(sp.Port)),
+ },
+ {
+ Name: portPrefix + "_ADDR",
+ Value: service.Spec.ClusterIP,
+ },
+ }...)
+ }
+ return all
+}
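For reference, the Docker-compatible link-variable naming that the two helpers above produce can be sketched standalone. Plain strings stand in for the corev1 types, and `envName`/`linkVars` are illustrative names, not part of the package:

```go
package main

import (
	"fmt"
	"net"
	"strconv"
	"strings"
)

// envName mirrors makeEnvVariableName: upper-case and map "-" to "_".
func envName(s string) string {
	return strings.ToUpper(strings.ReplaceAll(s, "-", "_"))
}

// linkVars sketches the Docker-compatible variables emitted for one port.
func linkVars(svcName, clusterIP, proto string, port int) []string {
	prefix := envName(svcName)
	hostPort := net.JoinHostPort(clusterIP, strconv.Itoa(port))
	pp := fmt.Sprintf("%s_PORT_%d_%s", prefix, port, strings.ToUpper(proto))
	return []string{
		fmt.Sprintf("%s=%s://%s", pp, strings.ToLower(proto), hostPort),
		fmt.Sprintf("%s_PROTO=%s", pp, strings.ToLower(proto)),
		fmt.Sprintf("%s_PORT=%d", pp, port),
		fmt.Sprintf("%s_ADDR=%s", pp, clusterIP),
	}
}

func main() {
	// A service "redis-master" with ClusterIP 10.0.0.11 on TCP 6379
	// yields REDIS_MASTER_PORT_6379_TCP and its _PROTO/_PORT/_ADDR variants.
	for _, v := range linkVars("redis-master", "10.0.0.11", "TCP", 6379) {
		fmt.Println(v)
	}
}
```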
diff --git a/pkg/utils/podutils/expand.go b/pkg/utils/podutils/expand.go
new file mode 100644
index 000000000..5f9bab3ed
--- /dev/null
+++ b/pkg/utils/podutils/expand.go
@@ -0,0 +1,107 @@
+// Copied from
+// https://github.com/kubernetes/kubernetes/tree/master/third_party/forked/golang/expansion .
+//
+// This is to eliminate a direct dependency on kubernetes/kubernetes.
+
+package podutils
+
+import (
+ "bytes"
+)
+
+const (
+ operator = '$'
+ referenceOpener = '('
+ referenceCloser = ')'
+)
+
+// syntaxWrap returns the input string wrapped by the expansion syntax.
+func syntaxWrap(input string) string {
+ return string(operator) + string(referenceOpener) + input + string(referenceCloser)
+}
+
+// MappingFuncFor returns a mapping function for use with Expand that
+// implements the expansion semantics defined in the expansion spec; it
+// returns the input string wrapped in the expansion syntax if no mapping
+// for the input is found.
+func MappingFuncFor(context ...map[string]string) func(string) string {
+ return func(input string) string {
+ for _, vars := range context {
+ val, ok := vars[input]
+ if ok {
+ return val
+ }
+ }
+
+ return syntaxWrap(input)
+ }
+}
+
+// Expand replaces variable references in the input string according to
+// the expansion spec using the given mapping function to resolve the
+// values of variables.
+func Expand(input string, mapping func(string) string) string {
+ var buf bytes.Buffer
+ checkpoint := 0
+ for cursor := 0; cursor < len(input); cursor++ {
+ if input[cursor] == operator && cursor+1 < len(input) {
+ // Copy the portion of the input string since the last
+ // checkpoint into the buffer
+ buf.WriteString(input[checkpoint:cursor])
+
+ // Attempt to read the variable name as defined by the
+ // syntax from the input string
+ read, isVar, advance := tryReadVariableName(input[cursor+1:])
+
+ if isVar {
+ // We were able to read a variable name correctly;
+ // apply the mapping to the variable name and copy the
+ // bytes into the buffer
+ buf.WriteString(mapping(read))
+ } else {
+ // Not a variable name; copy the read bytes into the buffer
+ buf.WriteString(read)
+ }
+
+ // Advance the cursor in the input string to account for
+ // bytes consumed to read the variable name expression
+ cursor += advance
+
+ // Advance the checkpoint in the input string
+ checkpoint = cursor + 1
+ }
+ }
+
+ // Return the buffer and any remaining unwritten bytes in the
+ // input string.
+ return buf.String() + input[checkpoint:]
+}
+
+// tryReadVariableName attempts to read a variable name from the input
+// string and returns the content read from the input, whether that content
+// represents a variable name to perform mapping on, and the number of bytes
+// consumed in the input string.
+//
+// The input string is assumed not to contain the initial operator.
+func tryReadVariableName(input string) (string, bool, int) {
+ switch input[0] {
+ case operator:
+ // Escaped operator; return it.
+ return input[0:1], false, 1
+ case referenceOpener:
+ // Scan to expression closer
+ for i := 1; i < len(input); i++ {
+ if input[i] == referenceCloser {
+ return input[1:i], true, i + 1
+ }
+ }
+
+ // Incomplete reference; return it.
+ return string(operator) + string(referenceOpener), false, 1
+ default:
+		// Not the beginning of an expression, i.e., an operator
+		// that doesn't begin an expression. Return the operator
+		// and the first byte of the string.
+ return (string(operator) + string(input[0])), false, 1
+ }
+}
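The expansion semantics above ($(VAR) references resolved through a mapping, $$ as an escaped operator, unknown references left intact) can be sketched in a condensed standalone form. `expand` here is a simplified restatement of Expand/MappingFuncFor for illustration, not the package API:

```go
package main

import (
	"fmt"
	"strings"
)

// expand replaces $(VAR) references using mapping, turns the escaped
// form $$ into a literal $, and copies everything else through.
func expand(input string, mapping func(string) string) string {
	var b strings.Builder
	for i := 0; i < len(input); i++ {
		if input[i] == '$' && i+1 < len(input) {
			switch {
			case input[i+1] == '$': // escaped operator
				b.WriteByte('$')
				i++
				continue
			case input[i+1] == '(':
				if j := strings.IndexByte(input[i+2:], ')'); j >= 0 {
					b.WriteString(mapping(input[i+2 : i+2+j]))
					i += 2 + j
					continue
				}
				// incomplete reference: fall through and copy bytes as-is
			}
		}
		b.WriteByte(input[i])
	}
	return b.String()
}

func main() {
	vars := map[string]string{"NAME": "kosmos"}
	mapping := func(k string) string {
		if v, ok := vars[k]; ok {
			return v
		}
		return "$(" + k + ")" // unknown variable: keep the reference as-is
	}
	fmt.Println(expand("hello $(NAME), cost: $$5, missing: $(OTHER)", mapping))
}
```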
diff --git a/pkg/utils/podutils/pod.go b/pkg/utils/podutils/pod.go
new file mode 100644
index 000000000..3044d3387
--- /dev/null
+++ b/pkg/utils/podutils/pod.go
@@ -0,0 +1,392 @@
+package podutils
+
+import (
+ "encoding/json"
+ "strings"
+
+ "github.com/google/go-cmp/cmp"
+ corev1 "k8s.io/api/core/v1"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+ "k8s.io/klog"
+
+ kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ clustertreeutil "github.com/kosmos.io/kosmos/pkg/clustertree/cluster-manager/utils"
+ "github.com/kosmos.io/kosmos/pkg/utils"
+)
+
+func GetSecrets(pod *corev1.Pod) ([]string, []string) {
+ secretNames := []string{}
+ imagePullSecrets := []string{}
+ for _, v := range pod.Spec.Volumes {
+ switch {
+ case v.Secret != nil:
+ if strings.HasPrefix(v.Name, "default-token") {
+ continue
+ }
+ klog.Infof("pod %s depends on secret %s", pod.Name, v.Secret.SecretName)
+ secretNames = append(secretNames, v.Secret.SecretName)
+
+ case v.CephFS != nil:
+ klog.Infof("pod %s depends on secret %s", pod.Name, v.CephFS.SecretRef.Name)
+ secretNames = append(secretNames, v.CephFS.SecretRef.Name)
+ case v.Cinder != nil:
+ klog.Infof("pod %s depends on secret %s", pod.Name, v.Cinder.SecretRef.Name)
+ secretNames = append(secretNames, v.Cinder.SecretRef.Name)
+ case v.RBD != nil:
+ klog.Infof("pod %s depends on secret %s", pod.Name, v.RBD.SecretRef.Name)
+ secretNames = append(secretNames, v.RBD.SecretRef.Name)
+ default:
+			klog.Warning("Skipping volumes of other types")
+ }
+ }
+ if pod.Spec.ImagePullSecrets != nil {
+ for _, s := range pod.Spec.ImagePullSecrets {
+ imagePullSecrets = append(imagePullSecrets, s.Name)
+ }
+ }
+ klog.Infof("pod %s depends on secrets %s, imagePullSecrets %s", pod.Name, secretNames, imagePullSecrets)
+ return secretNames, imagePullSecrets
+}
+
+func GetConfigmaps(pod *corev1.Pod) []string {
+ cmNames := []string{}
+ for _, v := range pod.Spec.Volumes {
+ if v.ConfigMap == nil {
+ continue
+ }
+ cmNames = append(cmNames, v.ConfigMap.Name)
+ }
+ klog.Infof("pod %s depends on configMap %s", pod.Name, cmNames)
+ return cmNames
+}
+
+func GetPVCs(pod *corev1.Pod) []string {
+	pvcNames := []string{}
+	for _, v := range pod.Spec.Volumes {
+		if v.PersistentVolumeClaim == nil {
+			continue
+		}
+		pvcNames = append(pvcNames, v.PersistentVolumeClaim.ClaimName)
+	}
+	klog.Infof("pod %s depends on pvc %v", pod.Name, pvcNames)
+	return pvcNames
+}
+
+func SetObjectGlobal(obj *metav1.ObjectMeta) {
+ if obj.Annotations == nil {
+ obj.Annotations = map[string]string{}
+ }
+ obj.Annotations[utils.KosmosGlobalLabel] = "true"
+}
+
+func SetUnstructuredObjGlobal(unstructuredObj *unstructured.Unstructured) {
+ annotationsMap := unstructuredObj.GetAnnotations()
+ if annotationsMap == nil {
+ annotationsMap = map[string]string{}
+ }
+ annotationsMap[utils.KosmosGlobalLabel] = "true"
+
+ unstructuredObj.SetAnnotations(annotationsMap)
+}
+
+func DeleteGraceTimeEqual(oldGrace, newGrace *int64) bool {
+	if oldGrace == nil && newGrace == nil {
+		return true
+	}
+	if oldGrace != nil && newGrace != nil {
+		return *oldGrace == *newGrace
+	}
+	return false
+}
+
+func IsEqual(pod1, pod2 *corev1.Pod) bool {
+	return cmp.Equal(pod1.Spec.Containers, pod2.Spec.Containers) &&
+		cmp.Equal(pod1.Spec.InitContainers, pod2.Spec.InitContainers) &&
+		cmp.Equal(pod1.Spec.ActiveDeadlineSeconds, pod2.Spec.ActiveDeadlineSeconds) &&
+		cmp.Equal(pod1.Spec.Tolerations, pod2.Spec.Tolerations) &&
+		cmp.Equal(pod1.Labels, pod2.Labels) &&
+		cmp.Equal(pod1.Annotations, pod2.Annotations)
+}
+
+func ShouldEnqueue(oldPod, newPod *corev1.Pod) bool {
+ if !IsEqual(oldPod, newPod) {
+ return true
+ }
+ if !DeleteGraceTimeEqual(oldPod.DeletionGracePeriodSeconds, newPod.DeletionGracePeriodSeconds) {
+ return true
+ }
+ if !oldPod.DeletionTimestamp.Equal(newPod.DeletionTimestamp) {
+ return true
+ }
+ return false
+}
+
+func FitObjectMeta(meta *metav1.ObjectMeta) {
+ meta.UID = ""
+ meta.ResourceVersion = ""
+ // meta.SelfLink = ""
+ meta.OwnerReferences = nil
+}
+
+func FitUnstructuredObjMeta(unstructuredObj *unstructured.Unstructured) {
+ unstructuredObj.SetUID("")
+ unstructuredObj.SetResourceVersion("")
+ unstructuredObj.SetOwnerReferences(nil)
+ anno := unstructuredObj.GetAnnotations()
+ if anno == nil {
+ return
+ }
+ if len(anno[utils.PVCSelectedNodeKey]) != 0 {
+ delete(anno, utils.PVCSelectedNodeKey)
+ unstructuredObj.SetAnnotations(anno)
+ }
+}
+
+func fitNodeAffinity(affinity *corev1.Affinity, nodeSelector kosmosv1alpha1.NodeSelector) (cpAffinity *corev1.Affinity) {
+ nodeSelectorTerms := make([]corev1.NodeSelectorTerm, 0)
+ nodeSelectorTerm := corev1.NodeSelectorTerm{
+ MatchExpressions: make([]corev1.NodeSelectorRequirement, 0),
+ }
+ if nodeSelector.LabelSelector.MatchLabels != nil {
+ for key, value := range nodeSelector.LabelSelector.MatchLabels {
+ selector := corev1.NodeSelectorRequirement{
+ Key: key,
+ Operator: corev1.NodeSelectorOpIn,
+ Values: []string{value},
+ }
+ nodeSelectorTerm.MatchExpressions = append(nodeSelectorTerm.MatchExpressions, selector)
+ }
+ }
+
+ if nodeSelector.LabelSelector.MatchExpressions != nil {
+ for _, item := range nodeSelector.LabelSelector.MatchExpressions {
+ selector := corev1.NodeSelectorRequirement{
+ Key: item.Key,
+ Operator: corev1.NodeSelectorOperator(item.Operator),
+ Values: item.Values,
+ }
+ nodeSelectorTerm.MatchExpressions = append(nodeSelectorTerm.MatchExpressions, selector)
+ }
+ }
+ nodeSelectorTerms = append(nodeSelectorTerms, nodeSelectorTerm)
+
+ if affinity == nil {
+ cpAffinity = &corev1.Affinity{
+ NodeAffinity: &corev1.NodeAffinity{
+ RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
+ NodeSelectorTerms: nodeSelectorTerms,
+ },
+ },
+ }
+ } else {
+ cpAffinity = affinity.DeepCopy()
+ if cpAffinity.NodeAffinity == nil {
+ cpAffinity.NodeAffinity = &corev1.NodeAffinity{
+ RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
+ NodeSelectorTerms: nodeSelectorTerms,
+ },
+ }
+ } else if cpAffinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution == nil {
+ cpAffinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution = &corev1.NodeSelector{
+ NodeSelectorTerms: nodeSelectorTerms,
+ }
+		} else {
+			// In all remaining cases the computed selector terms replace any existing ones.
+			cpAffinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution.NodeSelectorTerms = nodeSelectorTerms
+		}
+ }
+ return cpAffinity
+}
+
+func FitPod(pod *corev1.Pod, ignoreLabels []string, leafMode clustertreeutil.LeafMode, nodeSelector kosmosv1alpha1.NodeSelector) *corev1.Pod {
+ vols := []corev1.Volume{}
+ for _, v := range pod.Spec.Volumes {
+ if strings.HasPrefix(v.Name, "default-token") {
+ continue
+ }
+ vols = append(vols, v)
+ }
+
+ podCopy := pod.DeepCopy()
+ FitObjectMeta(&podCopy.ObjectMeta)
+ if podCopy.Labels == nil {
+ podCopy.Labels = make(map[string]string)
+ }
+ if podCopy.Annotations == nil {
+ podCopy.Annotations = make(map[string]string)
+ }
+ podCopy.Labels[utils.KosmosPodLabel] = "true"
+ cns := ConvertAnnotations(pod.Annotations)
+ recoverSelectors(podCopy, cns)
+ podCopy.Spec.Containers = fitContainers(pod.Spec.Containers)
+ podCopy.Spec.InitContainers = fitContainers(pod.Spec.InitContainers)
+ podCopy.Spec.Volumes = vols
+ podCopy.Status = corev1.PodStatus{}
+
+ if podCopy.Spec.SchedulerName == utils.KosmosSchedulerName {
+ podCopy.Spec.SchedulerName = ""
+ }
+
+ if leafMode != clustertreeutil.Node {
+ podCopy.Spec.NodeName = ""
+ }
+
+ if leafMode == clustertreeutil.Party {
+ podCopy.Spec.Affinity = fitNodeAffinity(pod.Spec.Affinity, nodeSelector)
+ }
+
+ tripped := FitLabels(podCopy.ObjectMeta.Labels, ignoreLabels)
+ if tripped != nil {
+ trippedStr, err := json.Marshal(tripped)
+ if err != nil {
+ return podCopy
+ }
+ podCopy.Annotations[utils.KosmosTrippedLabels] = string(trippedStr)
+ }
+
+ return podCopy
+}
+
+func fitContainers(containers []corev1.Container) []corev1.Container {
+ var newContainers []corev1.Container
+
+ for _, c := range containers {
+ var volMounts []corev1.VolumeMount
+ for _, v := range c.VolumeMounts {
+ if strings.HasPrefix(v.Name, "default-token") {
+ continue
+ }
+ volMounts = append(volMounts, v)
+ }
+ c.VolumeMounts = volMounts
+ newContainers = append(newContainers, c)
+ }
+
+ return newContainers
+}
+
+func IsKosmosPod(pod *corev1.Pod) bool {
+	// Indexing a nil map is safe in Go and yields the zero value.
+	return pod.Labels[utils.KosmosPodLabel] == "true"
+}
+
+func RecoverLabels(labels map[string]string, annotations map[string]string) {
+ trippedLabels := annotations[utils.KosmosTrippedLabels]
+ if trippedLabels == "" {
+ return
+ }
+ trippedLabelsMap := make(map[string]string)
+ if err := json.Unmarshal([]byte(trippedLabels), &trippedLabelsMap); err != nil {
+ return
+ }
+ for k, v := range trippedLabelsMap {
+ labels[k] = v
+ }
+}
+
+func FitLabels(labels map[string]string, ignoreLabels []string) map[string]string {
+ if ignoreLabels == nil {
+ return nil
+ }
+ trippedLabels := make(map[string]string, len(ignoreLabels))
+ for _, key := range ignoreLabels {
+ if labels[key] == "" {
+ continue
+ }
+ trippedLabels[key] = labels[key]
+ delete(labels, key)
+ }
+ return trippedLabels
+}
+
+func GetUpdatedPod(orig, update *corev1.Pod, ignoreLabels []string, leafMode clustertreeutil.LeafMode, nodeSelector kosmosv1alpha1.NodeSelector) {
+ for i := range orig.Spec.InitContainers {
+ orig.Spec.InitContainers[i].Image = update.Spec.InitContainers[i].Image
+ }
+ for i := range orig.Spec.Containers {
+ orig.Spec.Containers[i].Image = update.Spec.Containers[i].Image
+ }
+ if update.Annotations == nil {
+ update.Annotations = make(map[string]string)
+ }
+ if orig.Annotations[utils.KosmosSelectorKey] != update.Annotations[utils.KosmosSelectorKey] {
+ if cns := ConvertAnnotations(update.Annotations); cns != nil {
+ orig.Spec.Tolerations = cns.Tolerations
+ }
+ }
+ orig.Labels = update.Labels
+ if orig.Labels == nil {
+ orig.Labels = make(map[string]string)
+ }
+ orig.Labels[utils.KosmosPodLabel] = "true"
+ orig.Annotations = update.Annotations
+ orig.Spec.ActiveDeadlineSeconds = update.Spec.ActiveDeadlineSeconds
+ if orig.Labels != nil {
+ FitLabels(orig.ObjectMeta.Labels, ignoreLabels)
+ }
+
+ if leafMode == clustertreeutil.Party {
+ orig.Spec.Affinity = fitNodeAffinity(update.Spec.Affinity, nodeSelector)
+ }
+}
+
+func ConvertAnnotations(annotation map[string]string) *utils.ClustersNodeSelection {
+ if annotation == nil {
+ return nil
+ }
+ val := annotation[utils.KosmosSelectorKey]
+ if len(val) == 0 {
+ return nil
+ }
+
+ var cns utils.ClustersNodeSelection
+ err := json.Unmarshal([]byte(val), &cns)
+ if err != nil {
+ return nil
+ }
+ return &cns
+}
+
+func recoverSelectors(pod *corev1.Pod, cns *utils.ClustersNodeSelection) {
+ if cns != nil {
+ pod.Spec.NodeSelector = cns.NodeSelector
+ pod.Spec.Tolerations = cns.Tolerations
+ pod.Spec.TopologySpreadConstraints = cns.TopologySpreadConstraints
+ if pod.Spec.Affinity == nil {
+ pod.Spec.Affinity = cns.Affinity
+ } else {
+ if cns.Affinity != nil && cns.Affinity.NodeAffinity != nil {
+ if pod.Spec.Affinity.NodeAffinity != nil {
+ pod.Spec.Affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution = cns.Affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution
+ } else {
+ pod.Spec.Affinity.NodeAffinity = cns.Affinity.NodeAffinity
+ }
+ } else {
+ pod.Spec.Affinity.NodeAffinity = nil
+ }
+ }
+ } else {
+ pod.Spec.NodeSelector = nil
+ pod.Spec.Tolerations = nil
+ pod.Spec.TopologySpreadConstraints = nil
+ if pod.Spec.Affinity != nil && pod.Spec.Affinity.NodeAffinity != nil {
+ pod.Spec.Affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution = nil
+ }
+ }
+ if pod.Spec.Affinity != nil {
+ if pod.Spec.Affinity.NodeAffinity != nil {
+ if pod.Spec.Affinity.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution == nil &&
+ pod.Spec.Affinity.NodeAffinity.PreferredDuringSchedulingIgnoredDuringExecution == nil {
+ pod.Spec.Affinity.NodeAffinity = nil
+ }
+ }
+ if pod.Spec.Affinity.NodeAffinity == nil && pod.Spec.Affinity.PodAffinity == nil &&
+ pod.Spec.Affinity.PodAntiAffinity == nil {
+ pod.Spec.Affinity = nil
+ }
+ }
+}
diff --git a/pkg/utils/pvpvc.go b/pkg/utils/pvpvc.go
new file mode 100644
index 000000000..9ceaebbf4
--- /dev/null
+++ b/pkg/utils/pvpvc.go
@@ -0,0 +1,51 @@
+package utils
+
+import (
+ "fmt"
+ "reflect"
+
+ v1 "k8s.io/api/core/v1"
+
+ kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+)
+
+func IsPVEqual(pv *v1.PersistentVolume, clone *v1.PersistentVolume) bool {
+	return reflect.DeepEqual(pv.Annotations, clone.Annotations) &&
+		reflect.DeepEqual(pv.Spec, clone.Spec) &&
+		reflect.DeepEqual(pv.Status, clone.Status)
+}
+
+func IsOne2OneMode(cluster *kosmosv1alpha1.Cluster) bool {
+ return cluster.Spec.ClusterTreeOptions.LeafModels != nil
+}
+
+func NodeAffinity4RootPV(pv *v1.PersistentVolume, isOne2OneMode bool, clusterName string) string {
+ node4RootPV := fmt.Sprintf("%s%s", KosmosNodePrefix, clusterName)
+ if isOne2OneMode {
+ for _, v := range pv.Spec.NodeAffinity.Required.NodeSelectorTerms {
+ for _, val := range v.MatchFields {
+ if val.Key == NodeHostnameValue || val.Key == NodeHostnameValueBeta {
+ node4RootPV = val.Values[0]
+ }
+ }
+ for _, val := range v.MatchExpressions {
+ if val.Key == NodeHostnameValue || val.Key == NodeHostnameValueBeta {
+ node4RootPV = val.Values[0]
+ }
+ }
+ }
+ }
+ return node4RootPV
+}
+
+func IsPVCEqual(pvc *v1.PersistentVolumeClaim, clone *v1.PersistentVolumeClaim) bool {
+	return reflect.DeepEqual(pvc.Annotations, clone.Annotations) &&
+		reflect.DeepEqual(pvc.Spec, clone.Spec) &&
+		reflect.DeepEqual(pvc.Status, clone.Status)
+}
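NodeAffinity4RootPV falls back to the kosmos node name for the cluster unless one-to-one mode is enabled, in which case the real hostname recorded in the PV's node affinity wins. A standalone sketch with simplified stand-in types (the `kosmos-` prefix is an assumption mirroring KosmosNodePrefix, and the struct names are illustrative):

```go
package main

import "fmt"

// Simplified stand-ins for the corev1 node-affinity types.
type requirement struct {
	Key    string
	Values []string
}

type term struct {
	MatchFields      []requirement
	MatchExpressions []requirement
}

const hostnameKey = "kubernetes.io/hostname"

// nodeForRootPV mirrors the lookup: default to the kosmos node name for the
// cluster; in one-to-one mode prefer a hostname found in the selector terms.
func nodeForRootPV(terms []term, one2one bool, clusterName string) string {
	node := "kosmos-" + clusterName // assumed prefix
	if !one2one {
		return node
	}
	for _, t := range terms {
		for _, r := range append(t.MatchFields, t.MatchExpressions...) {
			if r.Key == hostnameKey && len(r.Values) > 0 {
				node = r.Values[0]
			}
		}
	}
	return node
}

func main() {
	terms := []term{{MatchExpressions: []requirement{{Key: hostnameKey, Values: []string{"worker-1"}}}}}
	fmt.Println(nodeForRootPV(terms, true, "leaf"))  // hostname from the PV wins
	fmt.Println(nodeForRootPV(terms, false, "leaf")) // kosmos node name fallback
}
```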
diff --git a/pkg/utils/resource_manager.go b/pkg/utils/resource_manager.go
new file mode 100644
index 000000000..f7112498d
--- /dev/null
+++ b/pkg/utils/resource_manager.go
@@ -0,0 +1,37 @@
+package utils
+
+import (
+ kubeinformers "k8s.io/client-go/informers"
+ "k8s.io/client-go/kubernetes"
+ corev1listers "k8s.io/client-go/listers/core/v1"
+ discoverylisterv1 "k8s.io/client-go/listers/discovery/v1"
+ "k8s.io/client-go/tools/cache"
+
+ "github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned"
+ "github.com/kosmos.io/kosmos/pkg/generated/informers/externalversions"
+)
+
+type ResourceManager struct {
+ InformerFactory kubeinformers.SharedInformerFactory
+ KosmosInformerFactory externalversions.SharedInformerFactory
+ EndpointSliceInformer cache.SharedInformer
+ EndpointSliceLister discoverylisterv1.EndpointSliceLister
+ ServiceInformer cache.SharedInformer
+ ServiceLister corev1listers.ServiceLister
+}
+
+// NewResourceManager builds the informer factories and listers for the master cluster.
+func NewResourceManager(kubeClient kubernetes.Interface, kosmosClient versioned.Interface) *ResourceManager {
+ informerFactory := kubeinformers.NewSharedInformerFactory(kubeClient, DefaultInformerResyncPeriod)
+ kosmosInformerFactory := externalversions.NewSharedInformerFactory(kosmosClient, DefaultInformerResyncPeriod)
+ endpointSliceInformer := informerFactory.Discovery().V1().EndpointSlices()
+ serviceInformer := informerFactory.Core().V1().Services()
+ return &ResourceManager{
+ InformerFactory: informerFactory,
+ KosmosInformerFactory: kosmosInformerFactory,
+ EndpointSliceInformer: endpointSliceInformer.Informer(),
+ EndpointSliceLister: endpointSliceInformer.Lister(),
+ ServiceInformer: serviceInformer.Informer(),
+ ServiceLister: serviceInformer.Lister(),
+ }
+}
diff --git a/pkg/utils/resources.go b/pkg/utils/resources.go
new file mode 100644
index 000000000..ea53b1d65
--- /dev/null
+++ b/pkg/utils/resources.go
@@ -0,0 +1,104 @@
+package utils
+
+import (
+ corev1 "k8s.io/api/core/v1"
+ "k8s.io/apimachinery/pkg/api/resource"
+ v1resource "k8s.io/kubernetes/pkg/api/v1/resource"
+)
+
+const (
+ podResourceName corev1.ResourceName = "pods"
+)
+
+func CalculateClusterResources(nodes *corev1.NodeList, pods *corev1.PodList) corev1.ResourceList {
+ base := GetNodesTotalResources(nodes)
+ reqs, _ := GetPodsTotalRequestsAndLimits(pods)
+ podNums := GetUsedPodNums(pods, nodes)
+ SubResourceList(base, reqs)
+ SubResourceList(base, podNums)
+ return base
+}
+
+func GetNodesTotalResources(nodes *corev1.NodeList) (total corev1.ResourceList) {
+ total = corev1.ResourceList{}
+ for i, n := range nodes.Items {
+ if n.Spec.Unschedulable || !NodeReady(&nodes.Items[i]) {
+ continue
+ }
+ for key, val := range n.Status.Allocatable {
+ if value, ok := total[key]; !ok {
+ total[key] = val.DeepCopy()
+ } else {
+ value.Add(val)
+ total[key] = value
+ }
+ }
+ }
+ return
+}
+
+func SubResourceList(base, list corev1.ResourceList) {
+ for name, quantity := range list {
+ value, ok := base[name]
+ if ok {
+ q := value.DeepCopy()
+ q.Sub(quantity)
+ base[name] = q
+ }
+ }
+}
+
+// GetPodsTotalRequestsAndLimits sums the resource requests and limits of all
+// non-virtual pods in the list.
+// Lifted from https://github.com/kubernetes/kubernetes/blob/v1.21.8/staging/src/k8s.io/kubectl/pkg/describe/describe.go#L4051
+func GetPodsTotalRequestsAndLimits(podList *corev1.PodList) (reqs corev1.ResourceList, limits corev1.ResourceList) {
+ reqs, limits = corev1.ResourceList{}, corev1.ResourceList{}
+ if podList.Items != nil {
+ for _, p := range podList.Items {
+ pod := p
+ if IsVirtualPod(&pod) {
+ continue
+ }
+ podReqs, podLimits := v1resource.PodRequestsAndLimits(&pod)
+ for podReqName, podReqValue := range podReqs {
+ if value, ok := reqs[podReqName]; !ok {
+ reqs[podReqName] = podReqValue.DeepCopy()
+ } else {
+ value.Add(podReqValue)
+ reqs[podReqName] = value
+ }
+ }
+ for podLimitName, podLimitValue := range podLimits {
+ if value, ok := limits[podLimitName]; !ok {
+ limits[podLimitName] = podLimitValue.DeepCopy()
+ } else {
+ value.Add(podLimitValue)
+ limits[podLimitName] = value
+ }
+ }
+ }
+ }
+ return
+}
+
+func GetUsedPodNums(podList *corev1.PodList, nodes *corev1.NodeList) (res corev1.ResourceList) {
+ podQuantity := resource.Quantity{}
+ res = corev1.ResourceList{}
+ nodeMap := map[string]corev1.Node{}
+ for _, item := range nodes.Items {
+ nodeMap[item.Name] = item
+ }
+ for _, p := range podList.Items {
+ pod := p
+ if IsVirtualPod(&pod) {
+ continue
+ }
+ node, exists := nodeMap[pod.Spec.NodeName]
+ if !exists || node.Spec.Unschedulable || !NodeReady(&node) {
+ continue
+ }
+ q := resource.MustParse("1")
+ podQuantity.Add(q)
+ }
+ res[podResourceName] = podQuantity
+ return
+}
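CalculateClusterResources computes free capacity as total allocatable minus pod requests minus a one-per-pod `pods` count, with SubResourceList only touching keys already present in the base list. A simplified sketch with int64 quantities standing in for resource.Quantity:

```go
package main

import "fmt"

// subtract mirrors SubResourceList on simplified int64 quantities:
// subtract only the keys that already exist in base.
func subtract(base, sub map[string]int64) {
	for k, v := range sub {
		if _, ok := base[k]; ok {
			base[k] -= v
		}
	}
}

func main() {
	// Total allocatable across ready, schedulable nodes (cpu in millicores).
	allocatable := map[string]int64{"cpu": 8000, "memory": 16 << 30, "pods": 220}
	// Sum of requests of non-virtual pods, plus one "pods" unit per pod.
	requests := map[string]int64{"cpu": 1500, "memory": 2 << 30}
	usedPods := map[string]int64{"pods": 35}

	subtract(allocatable, requests)
	subtract(allocatable, usedPods)
	fmt.Println(allocatable["cpu"], allocatable["pods"]) // remaining capacity
}
```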
diff --git a/pkg/utils/utils.go b/pkg/utils/utils.go
index 555070269..28e89b7ff 100644
--- a/pkg/utils/utils.go
+++ b/pkg/utils/utils.go
@@ -1,16 +1,9 @@
package utils
import (
- "fmt"
"strings"
-
- "k8s.io/client-go/dynamic"
- "k8s.io/client-go/kubernetes"
- "k8s.io/client-go/rest"
- "k8s.io/client-go/tools/clientcmd"
)
-// nolint
func ContainsString(arr []string, s string) bool {
for _, str := range arr {
if strings.Contains(str, s) {
@@ -20,67 +13,6 @@ func ContainsString(arr []string, s string) bool {
return false
}
-func CompareStringArrays(a, b []string) bool {
- if len(a) != len(b) {
- return false
- }
- for i := 0; i < len(a); i++ {
- if a[i] != b[i] {
- return false
- }
- }
- return true
-}
-
-func BuildClusterConfig(configBytes []byte) (*rest.Config, error) {
- if len(configBytes) == 0 {
- return nil, fmt.Errorf("config bytes is nil")
- }
- clientConfig, err := clientcmd.NewClientConfigFromBytes(configBytes)
- if err != nil {
- return nil, err
- }
- return clientConfig.ClientConfig()
-}
-
-func BuildClusterClient(configBytes []byte) (*kubernetes.Clientset, error) {
- if len(configBytes) == 0 {
- return nil, fmt.Errorf("config bytes is nil")
- }
- clientConfig, err := clientcmd.NewClientConfigFromBytes(configBytes)
- if err != nil {
- return nil, err
- }
- config, err := clientConfig.ClientConfig()
- if err != nil {
- return nil, err
- }
- clusterClientSet, err := kubernetes.NewForConfig(config)
- if err != nil {
- return nil, err
- }
- return clusterClientSet, nil
-}
-
-func BuildDynamicClient(configBytes []byte) (dynamic.Interface, error) {
- if len(configBytes) == 0 {
- return nil, fmt.Errorf("config bytes is nil")
- }
- clientConfig, err := clientcmd.NewClientConfigFromBytes(configBytes)
- if err != nil {
- return nil, err
- }
- config, err := clientConfig.ClientConfig()
- if err != nil {
- return nil, err
- }
- client, err := dynamic.NewForConfig(config)
- if err != nil {
- return nil, err
- }
- return client, nil
-}
-
func IsIPv6(s string) bool {
// 0.234.63.0 and 0.234.63.0/22
for i := 0; i < len(s); i++ {
diff --git a/test/OWNERS b/test/OWNERS
new file mode 100644
index 000000000..c6b629db9
--- /dev/null
+++ b/test/OWNERS
@@ -0,0 +1,6 @@
+approvers:
+ - wuyingjun-lucky
+ - duanmengkk
+reviewers:
+ - wuyingjun-lucky
+ - duanmengkk
diff --git a/test/e2e/deploy/cr/cluster-cr.yaml b/test/e2e/deploy/cr/cluster-cr.yaml
new file mode 100644
index 000000000..e723c16ae
--- /dev/null
+++ b/test/e2e/deploy/cr/cluster-cr.yaml
@@ -0,0 +1,198 @@
+apiVersion: mysql.presslabs.org/v1alpha1
+kind: MysqlCluster
+metadata:
+ name: mysql-cluster-e2e
+ namespace: kosmos-e2e
+spec:
+ replicas: 2
+ secretName: my-secret
+
+  ## For setting a custom docker image or specifying the mysql version.
+  ## The image field has priority over mysqlVersion.
+ image: docker.io/percona:5.7
+ mysqlVersion: "5.7"
+
+ # initBucketURL: gs://bucket_name/backup.xtrabackup.gz
+ # initBucketSecretName:
+
+ ## PodDisruptionBudget
+ # minAvailable: 1
+
+  ## For recurring backups, set backupSchedule to a cron expression that includes seconds
+ # backupSchedule:
+ # backupURL: s3://bucket_name/
+ # backupSecretName:
+ # backupScheduleJobsHistoryLimit:
+ # backupRemoteDeletePolicy:
+ # backupCredentials:
+ # use s3 https://rclone.org/s3/
+ # S3_PROVIDER: ? # like: AWS, Minio, Ceph, and so on
+ # S3_ENDPOINT: ?
+ # AWS_ACCESS_KEY_ID: ?
+ # AWS_SECRET_ACCESS_KEY: ?
+ # AWS_REGION: ?
+ # AWS_ACL: ?
+ # AWS_STORAGE_CLASS: ?
+ # AWS_SESSION_TOKEN: ?
+
+ # use google cloud storage https://rclone.org/googlecloudstorage/
+ # GCS_SERVICE_ACCOUNT_JSON_KEY: ?
+ # GCS_PROJECT_ID: ?
+ # GCS_OBJECT_ACL: ?
+ # GCS_BUCKET_ACL: ?
+ # GCS_LOCATION: ?
+ # GCS_STORAGE_CLASS: MULTI_REGIONAL
+
+ # use http https://rclone.org/http/
+ # HTTP_URL: ?
+
+ # use google drive https://rclone.org/drive/
+ # GDRIVE_CLIENT_ID: ?
+ # GDRIVE_ROOT_FOLDER_ID: ?
+ # GDRIVE_IMPERSONATOR: ?
+
+ # use azure https://rclone.org/azureblob/
+ # AZUREBLOB_ACCOUNT: ?
+ # AZUREBLOB_KEY: ?
+
+ ## Custom Server ID Offset for replication
+ # serverIDOffset: 100
+
+ ## Configs that will be added to my.cnf for cluster
+ mysqlConf:
+ # innodb-buffer-size: 128M
+
+
+ ## Specify additional pod specification
+ podSpec:
+ tolerations:
+ - key: "kosmos.io/node"
+ operator: "Equal"
+ value: "true"
+ effect: "NoSchedule"
+ affinity:
+ nodeAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: kubernetes.io/hostname
+ operator: NotIn
+ values:
+ - cluster-host-control-plane
+ - cluster-host-worker
+ podAntiAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ - labelSelector:
+ matchLabels:
+ mysql.presslabs.org/cluster: mysql-cluster-e2e
+ topologyKey: kubernetes.io/hostname
+
+ # imagePullSecrets: []
+ # labels: {}
+ # annotations: {}
+ # affinity:
+ # podAntiAffinity:
+ # preferredDuringSchedulingIgnoredDuringExecution:
+ # weight: 100
+ # podAffinityTerm:
+ # topologyKey: "kubernetes.io/hostname"
+ # labelSelector:
+ # matchlabels:
+ # backupAffinity: {}
+ # backupNodeSelector: {}
+ # backupPriorityClassName:
+ # backupTolerations: []
+ # # Override the default preStop hook with a custom command/script
+ # mysqlLifecycle:
+ # preStop:
+ # exec:
+ # command:
+ # - /scripts/demote-if-master
+ # nodeSelector:
+ # nexus: "true"
+ # resources:
+ # requests:
+ # memory: 1G
+ # cpu: 200m
+ # tolerations: []
+ # priorityClassName:
+ # serviceAccountName: default
+  # # Use an initContainer to fix the permissions of a hostPath volume.
+ # initContainers:
+ # - name: volume-permissions
+ # image: busybox
+ # securityContext:
+ # runAsUser: 0
+ # command:
+ # - sh
+ # - -c
+ # - chmod 750 /data/mysql; chown 999:999 /data/mysql
+ # volumeMounts:
+ # - name: data
+ # mountPath: /data/mysql
+
+ ## Specify additional volume specification
+ volumeSpec:
+ persistentVolumeClaim:
+ storageClassName: standard
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 0.5Gi
+
+  ## Specify service level objectives.
+  ## If those SLOs are not fulfilled by a cluster node, that node is
+  ## removed from the scheme.
+ # targetSLO:
+ # maxSlaveLatency: 10s
+
+  ## You can use a custom volume for the /tmp partition if needed.
+  ## It is disabled by default.
+ # tmpfsSize: 1Gi
+
+ ## Set cluster in read only
+ # readOnly: false
+
+ ## Use `pigz` for parallel compression/decompression of backups
+ ## Or specify any arbitrary compress/decompress commands with args
+ # backupCompressCommand:
+ # - pigz
+ # - --stdout
+ #
+ # backupDecompressCommand:
+ # - pigz
+ # - --decompress
+
+ ## Add metrics exporter extra arguments
+ # metricsExporterExtraArgs:
+ # - --collect.info_schema.userstats
+ # - --collect.perf_schema.file_events
+
+ ## Add extra arguments to rclone
+ # rcloneExtraArgs:
+ # - --buffer-size=1G
+ # - --multi-thread-streams=8
+ # - --retries-sleep=10s
+ # - --retries=10
+ # - --transfers=8
+ # - --s3-force-path-style=false # when use Alibaba OSS
+
+ ## Add extra arguments to xbstream
+ # xbstreamExtraArgs:
+ # - --parallel=8
+
+ ## Add extra arguments to xtrabackup
+ # xtrabackupExtraArgs:
+ # - --parallel=8
+
+ ## Add extra arguments to xtrabackup during --prepare
+ # xtrabackupPrepareExtraArgs:
+ # - --use-memory=5G
+
+ ## Set xtrabackup target directory (the directory needs to exist)
+ # xtrabackupTargetDir: /var/lib/mysql/.tmp/xtrabackup/
+
+ # Add additional SQL commands to run during init of mysql
+ # initFileExtraSQL:
+ # - "CREATE USER test@localhost"
diff --git a/test/e2e/deploy/cr/secrets.yaml b/test/e2e/deploy/cr/secrets.yaml
new file mode 100644
index 000000000..14ccb2176
--- /dev/null
+++ b/test/e2e/deploy/cr/secrets.yaml
@@ -0,0 +1,9 @@
+apiVersion: v1
+data:
+ ROOT_PASSWORD: TWduU0hpQHRvSTElCg==
+kind: Secret
+metadata:
+ name: my-secret
+ namespace: kosmos-e2e
+type: Opaque
+---
diff --git a/test/e2e/deploy/mysql-operator/mysql-crd.yaml b/test/e2e/deploy/mysql-operator/mysql-crd.yaml
new file mode 100644
index 000000000..a94cc4c25
--- /dev/null
+++ b/test/e2e/deploy/mysql-operator/mysql-crd.yaml
@@ -0,0 +1,7127 @@
+apiVersion: v1
+items:
+- apiVersion: apiextensions.k8s.io/v1
+ kind: CustomResourceDefinition
+ metadata:
+ annotations:
+ controller-gen.kubebuilder.io/version: v0.7.0
+ creationTimestamp: "2023-11-01T09:02:25Z"
+ generation: 1
+ labels:
+ app.kubernetes.io/name: mysql-operator
+ name: mysqlbackups.mysql.presslabs.org
+ resourceVersion: "1064320"
+ uid: bcc2bde5-c42e-4342-b051-7ab443a87c46
+ spec:
+ conversion:
+ strategy: None
+ group: mysql.presslabs.org
+ names:
+ kind: MysqlBackup
+ listKind: MysqlBackupList
+ plural: mysqlbackups
+ singular: mysqlbackup
+ scope: Namespaced
+ versions:
+ - name: v1alpha1
+ schema:
+ openAPIV3Schema:
+ description: MysqlBackup is the Schema for the mysqlbackups API
+ properties:
+ apiVersion:
+ description: 'APIVersion defines the versioned schema of this representation
+ of an object. Servers should convert recognized schemas to the latest
+ internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
+ type: string
+ kind:
+ description: 'Kind is a string value representing the REST resource
+ this object represents. Servers may infer this from the endpoint the
+ client submits requests to. Cannot be updated. In CamelCase. More
+ info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
+ type: string
+ metadata:
+ type: object
+ spec:
+ description: MysqlBackupSpec defines the desired state of MysqlBackup
+ properties:
+ backupSecretName:
+                description: BackupSecretName is the name of the secret that contains
+                  the credentials to access the bucket. Defaults to the secret
+                  specified in the cluster.
+ type: string
+ backupURL:
+                description: BackupURL represents the URL of the backup location;
+                  it can be partially specified. Defaults to the one specified
+                  in the cluster.
+ type: string
+ clusterName:
+                description: ClusterName represents the cluster for which to take
+                  a backup
+ type: string
+ remoteDeletePolicy:
+                description: RemoteDeletePolicy is the deletion policy that specifies
+                  how to treat the data in remote storage. Defaults to
+                  softDelete.
+ type: string
+ required:
+ - clusterName
+ type: object
+ status:
+ description: MysqlBackupStatus defines the observed state of MysqlBackup
+ properties:
+ completed:
+ description: Completed indicates whether the backup is in a final
+                  state, regardless of whether its corresponding job failed or succeeded
+ type: boolean
+ conditions:
+ description: Conditions represents the backup resource conditions
+ list.
+ items:
+ description: BackupCondition defines condition struct for backup
+ resource
+ properties:
+ lastTransitionTime:
+ description: LastTransitionTime
+ format: date-time
+ type: string
+ message:
+ description: Message
+ type: string
+ reason:
+ description: Reason
+ type: string
+ status:
+                      description: Status of the condition, one of ("True", "False",
+                        "Unknown")
+ type: string
+ type:
+                      description: type of cluster condition, values in ("Ready")
+ type: string
+ required:
+ - lastTransitionTime
+ - message
+ - reason
+ - status
+ - type
+ type: object
+ type: array
+ type: object
+ type: object
+ served: true
+ storage: true
+ status:
+ acceptedNames:
+ kind: MysqlBackup
+ listKind: MysqlBackupList
+ plural: mysqlbackups
+ singular: mysqlbackup
+ conditions:
+ - lastTransitionTime: "2023-11-01T09:02:25Z"
+ message: no conflicts found
+ reason: NoConflicts
+ status: "True"
+ type: NamesAccepted
+ - lastTransitionTime: "2023-11-01T09:02:25Z"
+ message: the initial names have been accepted
+ reason: InitialNamesAccepted
+ status: "True"
+ type: Established
+ storedVersions:
+ - v1alpha1
+- apiVersion: apiextensions.k8s.io/v1
+ kind: CustomResourceDefinition
+ metadata:
+ annotations:
+ controller-gen.kubebuilder.io/version: v0.7.0
+ creationTimestamp: "2023-11-01T09:02:32Z"
+ generation: 1
+ labels:
+ app.kubernetes.io/name: mysql-operator
+ name: mysqlclusters.mysql.presslabs.org
+ resourceVersion: "1064337"
+ uid: 9f366565-62f3-4fbe-99a7-b037a828fafd
+ spec:
+ conversion:
+ strategy: None
+ group: mysql.presslabs.org
+ names:
+ kind: MysqlCluster
+ listKind: MysqlClusterList
+ plural: mysqlclusters
+ shortNames:
+ - mysql
+ singular: mysqlcluster
+ scope: Namespaced
+ versions:
+ - additionalPrinterColumns:
+ - description: The cluster status
+ jsonPath: .status.conditions[?(@.type == 'Ready')].status
+ name: Ready
+ type: string
+ - description: The number of desired nodes
+ jsonPath: .spec.replicas
+ name: Replicas
+ type: integer
+ - jsonPath: .metadata.creationTimestamp
+ name: Age
+ type: date
+ name: v1alpha1
+ schema:
+ openAPIV3Schema:
+ description: MysqlCluster is the Schema for the mysqlclusters API
+ properties:
+ apiVersion:
+ description: 'APIVersion defines the versioned schema of this representation
+ of an object. Servers should convert recognized schemas to the latest
+ internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
+ type: string
+ kind:
+ description: 'Kind is a string value representing the REST resource
+ this object represents. Servers may infer this from the endpoint the
+ client submits requests to. Cannot be updated. In CamelCase. More
+ info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
+ type: string
+ metadata:
+ type: object
+ spec:
+ description: 'MysqlClusterSpec defines the desired state of MysqlCluster
+ nolint: maligned'
+ properties:
+ backupCompressCommand:
+ description: BackupCompressCommand is a command to use for compressing
+ the backup.
+ items:
+ type: string
+ type: array
+ backupDecompressCommand:
+ description: BackupDecompressCommand is a command to use for decompressing
+ the backup.
+ items:
+ type: string
+ type: array
+ backupRemoteDeletePolicy:
+                description: BackupRemoteDeletePolicy is the deletion policy that specifies
+                  how to treat the data in remote storage. Defaults to
+                  softDelete.
+ type: string
+ backupSchedule:
+                description: Specify, in crontab format, the interval at which to take backups;
+                  leave it empty to deactivate the backup process. Defaults to ""
+ type: string
+ backupScheduleJobsHistoryLimit:
+                description: If set, keeps the last BackupScheduleJobsHistoryLimit backups
+ type: integer
+ backupSecretName:
+ description: Represents the name of the secret that contains credentials
+ to connect to the storage provider to store backups.
+ type: string
+ backupURL:
+                description: Represents a URL of the location where backups are stored.
+ type: string
+ image:
+                description: Specifies the image that will be used for the MySQL server
+                  container. If this is specified, then mysqlVersion is used as
+                  the source of the MySQL server version.
+ type: string
+ initBucketSecretName:
+ type: string
+ initBucketURI:
+ description: Same as InitBucketURL but is DEPRECATED
+ type: string
+ initBucketURL:
+                description: A bucket URL that contains an xtrabackup to initialize
+                  the MySQL database.
+ type: string
+ initFileExtraSQL:
+                description: InitFileExtraSQL is a list of extra SQL commands to
+ append to init_file.
+ items:
+ type: string
+ type: array
+ maxSlaveLatency:
+ description: MaxSlaveLatency represents the allowed latency for
+                  a slave node in seconds. If set, then a node with latency greater
+                  than this is removed from service.
+ format: int64
+ type: integer
+ metricsExporterExtraArgs:
+ description: MetricsExporterExtraArgs is a list of extra command
+ line arguments to pass to MySQL metrics exporter. See https://github.com/prometheus/mysqld_exporter
+ for the list of available flags.
+ items:
+ type: string
+ type: array
+ minAvailable:
+ description: The number of pods from that set that must still be
+ available after the eviction, even in the absence of the evicted
+                  pod. Defaults to 50%
+ type: string
+ mysqlConf:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ x-kubernetes-int-or-string: true
+                description: A map[string]string that will be passed to the my.cnf file.
+ type: object
+ mysqlVersion:
+ description: 'Represents the MySQL version that will be run. The
+ available version can be found here: https://github.com/bitpoke/mysql-operator/blob/0fd4641ce4f756a0aab9d31c8b1f1c44ee10fcb2/pkg/util/constants/constants.go#L87
+ This field should be set even if the Image is set to let the operator
+ know which mysql version is running. Based on this version the
+ operator can take decisions which features can be used. Defaults
+ to 5.7'
+ type: string
+ podSpec:
+ description: Pod extra specification
+ properties:
+ affinity:
+ description: Affinity is a group of affinity scheduling rules.
+ properties:
+ nodeAffinity:
+ description: Describes node affinity scheduling rules for
+ the pod.
+ properties:
+ preferredDuringSchedulingIgnoredDuringExecution:
+ description: The scheduler will prefer to schedule pods
+ to nodes that satisfy the affinity expressions specified
+ by this field, but it may choose a node that violates
+ one or more of the expressions. The node that is most
+ preferred is the one with the greatest sum of weights,
+ i.e. for each node that meets all of the scheduling
+ requirements (resource request, requiredDuringScheduling
+ affinity expressions, etc.), compute a sum by iterating
+ through the elements of this field and adding "weight"
+ to the sum if the node matches the corresponding matchExpressions;
+ the node(s) with the highest sum are the most preferred.
+ items:
+ description: An empty preferred scheduling term matches
+ all objects with implicit weight 0 (i.e. it's a
+ no-op). A null preferred scheduling term matches
+ no objects (i.e. is also a no-op).
+ properties:
+ preference:
+ description: A node selector term, associated
+ with the corresponding weight.
+ properties:
+ matchExpressions:
+ description: A list of node selector requirements
+ by node's labels.
+ items:
+ description: A node selector requirement
+ is a selector that contains values, a
+ key, and an operator that relates the
+ key and values.
+ properties:
+ key:
+ description: The label key that the
+ selector applies to.
+ type: string
+ operator:
+ description: Represents a key's relationship
+ to a set of values. Valid operators
+                                      are In, NotIn, Exists, DoesNotExist,
+ Gt, and Lt.
+ type: string
+ values:
+ description: An array of string values.
+ If the operator is In or NotIn, the
+ values array must be non-empty. If
+ the operator is Exists or DoesNotExist,
+ the values array must be empty. If
+ the operator is Gt or Lt, the values
+ array must have a single element,
+ which will be interpreted as an integer.
+ This array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchFields:
+ description: A list of node selector requirements
+ by node's fields.
+ items:
+ description: A node selector requirement
+ is a selector that contains values, a
+ key, and an operator that relates the
+ key and values.
+ properties:
+ key:
+ description: The label key that the
+ selector applies to.
+ type: string
+ operator:
+ description: Represents a key's relationship
+ to a set of values. Valid operators
+                                      are In, NotIn, Exists, DoesNotExist,
+ Gt, and Lt.
+ type: string
+ values:
+ description: An array of string values.
+ If the operator is In or NotIn, the
+ values array must be non-empty. If
+ the operator is Exists or DoesNotExist,
+ the values array must be empty. If
+ the operator is Gt or Lt, the values
+ array must have a single element,
+ which will be interpreted as an integer.
+ This array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ type: object
+ weight:
+ description: Weight associated with matching the
+ corresponding nodeSelectorTerm, in the range
+ 1-100.
+ format: int32
+ type: integer
+ required:
+ - preference
+ - weight
+ type: object
+ type: array
+ requiredDuringSchedulingIgnoredDuringExecution:
+ description: If the affinity requirements specified
+ by this field are not met at scheduling time, the
+ pod will not be scheduled onto the node. If the affinity
+ requirements specified by this field cease to be met
+ at some point during pod execution (e.g. due to an
+ update), the system may or may not try to eventually
+ evict the pod from its node.
+ properties:
+ nodeSelectorTerms:
+ description: Required. A list of node selector terms.
+ The terms are ORed.
+ items:
+ description: A null or empty node selector term
+ matches no objects. The requirements of them
+ are ANDed. The TopologySelectorTerm type implements
+ a subset of the NodeSelectorTerm.
+ properties:
+ matchExpressions:
+ description: A list of node selector requirements
+ by node's labels.
+ items:
+ description: A node selector requirement
+ is a selector that contains values, a
+ key, and an operator that relates the
+ key and values.
+ properties:
+ key:
+ description: The label key that the
+ selector applies to.
+ type: string
+ operator:
+ description: Represents a key's relationship
+ to a set of values. Valid operators
+                                      are In, NotIn, Exists, DoesNotExist,
+ Gt, and Lt.
+ type: string
+ values:
+ description: An array of string values.
+ If the operator is In or NotIn, the
+ values array must be non-empty. If
+ the operator is Exists or DoesNotExist,
+ the values array must be empty. If
+ the operator is Gt or Lt, the values
+ array must have a single element,
+ which will be interpreted as an integer.
+ This array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchFields:
+ description: A list of node selector requirements
+ by node's fields.
+ items:
+ description: A node selector requirement
+ is a selector that contains values, a
+ key, and an operator that relates the
+ key and values.
+ properties:
+ key:
+ description: The label key that the
+ selector applies to.
+ type: string
+ operator:
+ description: Represents a key's relationship
+ to a set of values. Valid operators
+                                      are In, NotIn, Exists, DoesNotExist,
+ Gt, and Lt.
+ type: string
+ values:
+ description: An array of string values.
+ If the operator is In or NotIn, the
+ values array must be non-empty. If
+ the operator is Exists or DoesNotExist,
+ the values array must be empty. If
+ the operator is Gt or Lt, the values
+ array must have a single element,
+ which will be interpreted as an integer.
+ This array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ type: object
+ type: array
+ required:
+ - nodeSelectorTerms
+ type: object
+ type: object
+ podAffinity:
+ description: Describes pod affinity scheduling rules (e.g.
+ co-locate this pod in the same node, zone, etc. as some
+ other pod(s)).
+ properties:
+ preferredDuringSchedulingIgnoredDuringExecution:
+ description: The scheduler will prefer to schedule pods
+ to nodes that satisfy the affinity expressions specified
+ by this field, but it may choose a node that violates
+ one or more of the expressions. The node that is most
+ preferred is the one with the greatest sum of weights,
+ i.e. for each node that meets all of the scheduling
+ requirements (resource request, requiredDuringScheduling
+ affinity expressions, etc.), compute a sum by iterating
+ through the elements of this field and adding "weight"
+ to the sum if the node has pods which matches the
+ corresponding podAffinityTerm; the node(s) with the
+ highest sum are the most preferred.
+ items:
+ description: The weights of all of the matched WeightedPodAffinityTerm
+ fields are added per-node to find the most preferred
+ node(s)
+ properties:
+ podAffinityTerm:
+ description: Required. A pod affinity term, associated
+ with the corresponding weight.
+ properties:
+ labelSelector:
+ description: A label query over a set of resources,
+ in this case pods.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list
+ of label selector requirements. The
+ requirements are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label key
+ that the selector applies to.
+ type: string
+ operator:
+ description: operator represents
+ a key's relationship to a set
+ of values. Valid operators are
+ In, NotIn, Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array
+ of string values. If the operator
+ is In or NotIn, the values array
+ must be non-empty. If the operator
+ is Exists or DoesNotExist, the
+ values array must be empty. This
+ array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator
+ is "In", and the values array contains
+ only "value". The requirements are ANDed.
+ type: object
+ type: object
+ namespaceSelector:
+ description: A label query over the set of
+ namespaces that the term applies to. The
+ term is applied to the union of the namespaces
+ selected by this field and the ones listed
+ in the namespaces field. null selector and
+ null or empty namespaces list means "this
+ pod's namespace". An empty selector ({})
+ matches all namespaces. This field is alpha-level
+ and is only honored when PodAffinityNamespaceSelector
+ feature is enabled.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list
+ of label selector requirements. The
+ requirements are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label key
+ that the selector applies to.
+ type: string
+ operator:
+ description: operator represents
+ a key's relationship to a set
+ of values. Valid operators are
+ In, NotIn, Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array
+ of string values. If the operator
+ is In or NotIn, the values array
+ must be non-empty. If the operator
+ is Exists or DoesNotExist, the
+ values array must be empty. This
+ array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator
+ is "In", and the values array contains
+ only "value". The requirements are ANDed.
+ type: object
+ type: object
+ namespaces:
+ description: namespaces specifies a static
+ list of namespace names that the term applies
+ to. The term is applied to the union of
+ the namespaces listed in this field and
+ the ones selected by namespaceSelector.
+ null or empty namespaces list and null namespaceSelector
+ means "this pod's namespace"
+ items:
+ type: string
+ type: array
+ topologyKey:
+ description: This pod should be co-located
+ (affinity) or not co-located (anti-affinity)
+ with the pods matching the labelSelector
+ in the specified namespaces, where co-located
+ is defined as running on a node whose value
+ of the label with key topologyKey matches
+ that of any node on which any of the selected
+ pods is running. Empty topologyKey is not
+ allowed.
+ type: string
+ required:
+ - topologyKey
+ type: object
+ weight:
+ description: weight associated with matching the
+ corresponding podAffinityTerm, in the range
+ 1-100.
+ format: int32
+ type: integer
+ required:
+ - podAffinityTerm
+ - weight
+ type: object
+ type: array
+ requiredDuringSchedulingIgnoredDuringExecution:
+ description: If the affinity requirements specified
+ by this field are not met at scheduling time, the
+ pod will not be scheduled onto the node. If the affinity
+ requirements specified by this field cease to be met
+ at some point during pod execution (e.g. due to a
+ pod label update), the system may or may not try to
+ eventually evict the pod from its node. When there
+ are multiple elements, the lists of nodes corresponding
+ to each podAffinityTerm are intersected, i.e. all
+ terms must be satisfied.
+ items:
+ description: Defines a set of pods (namely those matching
+ the labelSelector relative to the given namespace(s))
+ that this pod should be co-located (affinity) or
+ not co-located (anti-affinity) with, where co-located
+ is defined as running on a node whose value of the
+                          label with key <topologyKey> matches that of any
+ node on which a pod of the set of pods is running
+ properties:
+ labelSelector:
+ description: A label query over a set of resources,
+ in this case pods.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list of
+ label selector requirements. The requirements
+ are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values, a
+ key, and an operator that relates the
+ key and values.
+ properties:
+ key:
+ description: key is the label key that
+ the selector applies to.
+ type: string
+ operator:
+ description: operator represents a key's
+ relationship to a set of values. Valid
+ operators are In, NotIn, Exists and
+ DoesNotExist.
+ type: string
+ values:
+ description: values is an array of string
+ values. If the operator is In or NotIn,
+ the values array must be non-empty.
+ If the operator is Exists or DoesNotExist,
+ the values array must be empty. This
+ array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator is
+ "In", and the values array contains only
+ "value". The requirements are ANDed.
+ type: object
+ type: object
+ namespaceSelector:
+ description: A label query over the set of namespaces
+ that the term applies to. The term is applied
+ to the union of the namespaces selected by this
+ field and the ones listed in the namespaces
+ field. null selector and null or empty namespaces
+ list means "this pod's namespace". An empty
+ selector ({}) matches all namespaces. This field
+ is alpha-level and is only honored when PodAffinityNamespaceSelector
+ feature is enabled.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list of
+ label selector requirements. The requirements
+ are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values, a
+ key, and an operator that relates the
+ key and values.
+ properties:
+ key:
+ description: key is the label key that
+ the selector applies to.
+ type: string
+ operator:
+ description: operator represents a key's
+ relationship to a set of values. Valid
+ operators are In, NotIn, Exists and
+ DoesNotExist.
+ type: string
+ values:
+ description: values is an array of string
+ values. If the operator is In or NotIn,
+ the values array must be non-empty.
+ If the operator is Exists or DoesNotExist,
+ the values array must be empty. This
+ array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator is
+ "In", and the values array contains only
+ "value". The requirements are ANDed.
+ type: object
+ type: object
+ namespaces:
+ description: namespaces specifies a static list
+ of namespace names that the term applies to.
+ The term is applied to the union of the namespaces
+ listed in this field and the ones selected by
+ namespaceSelector. null or empty namespaces
+ list and null namespaceSelector means "this
+ pod's namespace"
+ items:
+ type: string
+ type: array
+ topologyKey:
+ description: This pod should be co-located (affinity)
+ or not co-located (anti-affinity) with the pods
+ matching the labelSelector in the specified
+ namespaces, where co-located is defined as running
+ on a node whose value of the label with key
+ topologyKey matches that of any node on which
+ any of the selected pods is running. Empty topologyKey
+ is not allowed.
+ type: string
+ required:
+ - topologyKey
+ type: object
+ type: array
+ type: object
+ podAntiAffinity:
+ description: Describes pod anti-affinity scheduling rules
+ (e.g. avoid putting this pod in the same node, zone, etc.
+ as some other pod(s)).
+ properties:
+ preferredDuringSchedulingIgnoredDuringExecution:
+ description: The scheduler will prefer to schedule pods
+ to nodes that satisfy the anti-affinity expressions
+ specified by this field, but it may choose a node
+ that violates one or more of the expressions. The
+ node that is most preferred is the one with the greatest
+ sum of weights, i.e. for each node that meets all
+ of the scheduling requirements (resource request,
+ requiredDuringScheduling anti-affinity expressions,
+ etc.), compute a sum by iterating through the elements
+ of this field and adding "weight" to the sum if the
+ node has pods which matches the corresponding podAffinityTerm;
+ the node(s) with the highest sum are the most preferred.
+ items:
+ description: The weights of all of the matched WeightedPodAffinityTerm
+ fields are added per-node to find the most preferred
+ node(s)
+ properties:
+ podAffinityTerm:
+ description: Required. A pod affinity term, associated
+ with the corresponding weight.
+ properties:
+ labelSelector:
+ description: A label query over a set of resources,
+ in this case pods.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list
+ of label selector requirements. The
+ requirements are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label key
+ that the selector applies to.
+ type: string
+ operator:
+ description: operator represents
+ a key's relationship to a set
+ of values. Valid operators are
+ In, NotIn, Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array
+ of string values. If the operator
+ is In or NotIn, the values array
+ must be non-empty. If the operator
+ is Exists or DoesNotExist, the
+ values array must be empty. This
+ array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator
+ is "In", and the values array contains
+ only "value". The requirements are ANDed.
+ type: object
+ type: object
+ namespaceSelector:
+ description: A label query over the set of
+ namespaces that the term applies to. The
+ term is applied to the union of the namespaces
+ selected by this field and the ones listed
+ in the namespaces field. null selector and
+ null or empty namespaces list means "this
+ pod's namespace". An empty selector ({})
+ matches all namespaces. This field is alpha-level
+ and is only honored when PodAffinityNamespaceSelector
+ feature is enabled.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list
+ of label selector requirements. The
+ requirements are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label key
+ that the selector applies to.
+ type: string
+ operator:
+ description: operator represents
+ a key's relationship to a set
+ of values. Valid operators are
+ In, NotIn, Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array
+ of string values. If the operator
+ is In or NotIn, the values array
+ must be non-empty. If the operator
+ is Exists or DoesNotExist, the
+ values array must be empty. This
+ array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator
+ is "In", and the values array contains
+ only "value". The requirements are ANDed.
+ type: object
+ type: object
+ namespaces:
+ description: namespaces specifies a static
+ list of namespace names that the term applies
+ to. The term is applied to the union of
+ the namespaces listed in this field and
+ the ones selected by namespaceSelector.
+ null or empty namespaces list and null namespaceSelector
+ means "this pod's namespace"
+ items:
+ type: string
+ type: array
+ topologyKey:
+ description: This pod should be co-located
+ (affinity) or not co-located (anti-affinity)
+ with the pods matching the labelSelector
+ in the specified namespaces, where co-located
+ is defined as running on a node whose value
+ of the label with key topologyKey matches
+ that of any node on which any of the selected
+ pods is running. Empty topologyKey is not
+ allowed.
+ type: string
+ required:
+ - topologyKey
+ type: object
+ weight:
+ description: weight associated with matching the
+ corresponding podAffinityTerm, in the range
+ 1-100.
+ format: int32
+ type: integer
+ required:
+ - podAffinityTerm
+ - weight
+ type: object
+ type: array
+ requiredDuringSchedulingIgnoredDuringExecution:
+ description: If the anti-affinity requirements specified
+ by this field are not met at scheduling time, the
+ pod will not be scheduled onto the node. If the anti-affinity
+ requirements specified by this field cease to be met
+ at some point during pod execution (e.g. due to a
+ pod label update), the system may or may not try to
+ eventually evict the pod from its node. When there
+ are multiple elements, the lists of nodes corresponding
+ to each podAffinityTerm are intersected, i.e. all
+ terms must be satisfied.
+ items:
+ description: Defines a set of pods (namely those matching
+ the labelSelector relative to the given namespace(s))
+ that this pod should be co-located (affinity) or
+ not co-located (anti-affinity) with, where co-located
+ is defined as running on a node whose value of the
+                          label with key <topologyKey> matches that of any
+ node on which a pod of the set of pods is running
+ properties:
+ labelSelector:
+ description: A label query over a set of resources,
+ in this case pods.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list of
+ label selector requirements. The requirements
+ are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values, a
+ key, and an operator that relates the
+ key and values.
+ properties:
+ key:
+ description: key is the label key that
+ the selector applies to.
+ type: string
+ operator:
+ description: operator represents a key's
+ relationship to a set of values. Valid
+ operators are In, NotIn, Exists and
+ DoesNotExist.
+ type: string
+ values:
+ description: values is an array of string
+ values. If the operator is In or NotIn,
+ the values array must be non-empty.
+ If the operator is Exists or DoesNotExist,
+ the values array must be empty. This
+ array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator is
+ "In", and the values array contains only
+ "value". The requirements are ANDed.
+ type: object
+ type: object
+ namespaceSelector:
+ description: A label query over the set of namespaces
+ that the term applies to. The term is applied
+ to the union of the namespaces selected by this
+ field and the ones listed in the namespaces
+ field. null selector and null or empty namespaces
+ list means "this pod's namespace". An empty
+ selector ({}) matches all namespaces. This field
+ is alpha-level and is only honored when PodAffinityNamespaceSelector
+ feature is enabled.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list of
+ label selector requirements. The requirements
+ are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values, a
+ key, and an operator that relates the
+ key and values.
+ properties:
+ key:
+ description: key is the label key that
+ the selector applies to.
+ type: string
+ operator:
+ description: operator represents a key's
+ relationship to a set of values. Valid
+ operators are In, NotIn, Exists and
+ DoesNotExist.
+ type: string
+ values:
+ description: values is an array of string
+ values. If the operator is In or NotIn,
+ the values array must be non-empty.
+ If the operator is Exists or DoesNotExist,
+ the values array must be empty. This
+ array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator is
+ "In", and the values array contains only
+ "value". The requirements are ANDed.
+ type: object
+ type: object
+ namespaces:
+ description: namespaces specifies a static list
+ of namespace names that the term applies to.
+ The term is applied to the union of the namespaces
+ listed in this field and the ones selected by
+ namespaceSelector. null or empty namespaces
+ list and null namespaceSelector means "this
+ pod's namespace"
+ items:
+ type: string
+ type: array
+ topologyKey:
+ description: This pod should be co-located (affinity)
+ or not co-located (anti-affinity) with the pods
+ matching the labelSelector in the specified
+ namespaces, where co-located is defined as running
+ on a node whose value of the label with key
+ topologyKey matches that of any node on which
+ any of the selected pods is running. Empty topologyKey
+ is not allowed.
+ type: string
+ required:
+ - topologyKey
+ type: object
+ type: array
+ type: object
+ type: object
+ annotations:
+ additionalProperties:
+ type: string
+ type: object
+ backupAffinity:
+ description: Affinity is a group of affinity scheduling rules.
+ properties:
+ nodeAffinity:
+ description: Describes node affinity scheduling rules for
+ the pod.
+ properties:
+ preferredDuringSchedulingIgnoredDuringExecution:
+ description: The scheduler will prefer to schedule pods
+ to nodes that satisfy the affinity expressions specified
+ by this field, but it may choose a node that violates
+ one or more of the expressions. The node that is most
+ preferred is the one with the greatest sum of weights,
+ i.e. for each node that meets all of the scheduling
+ requirements (resource request, requiredDuringScheduling
+ affinity expressions, etc.), compute a sum by iterating
+ through the elements of this field and adding "weight"
+ to the sum if the node matches the corresponding matchExpressions;
+ the node(s) with the highest sum are the most preferred.
+ items:
+ description: An empty preferred scheduling term matches
+ all objects with implicit weight 0 (i.e. it's a
+ no-op). A null preferred scheduling term matches
+ no objects (i.e. is also a no-op).
+ properties:
+ preference:
+ description: A node selector term, associated
+ with the corresponding weight.
+ properties:
+ matchExpressions:
+ description: A list of node selector requirements
+ by node's labels.
+ items:
+ description: A node selector requirement
+ is a selector that contains values, a
+ key, and an operator that relates the
+ key and values.
+ properties:
+ key:
+ description: The label key that the
+ selector applies to.
+ type: string
+ operator:
+ description: Represents a key's relationship
+ to a set of values. Valid operators
+                                    are In, NotIn, Exists, DoesNotExist,
+                                    Gt, and Lt.
+ type: string
+ values:
+ description: An array of string values.
+ If the operator is In or NotIn, the
+ values array must be non-empty. If
+ the operator is Exists or DoesNotExist,
+ the values array must be empty. If
+ the operator is Gt or Lt, the values
+ array must have a single element,
+ which will be interpreted as an integer.
+ This array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchFields:
+ description: A list of node selector requirements
+ by node's fields.
+ items:
+ description: A node selector requirement
+ is a selector that contains values, a
+ key, and an operator that relates the
+ key and values.
+ properties:
+ key:
+ description: The label key that the
+ selector applies to.
+ type: string
+ operator:
+ description: Represents a key's relationship
+ to a set of values. Valid operators
+                                    are In, NotIn, Exists, DoesNotExist,
+                                    Gt, and Lt.
+ type: string
+ values:
+ description: An array of string values.
+ If the operator is In or NotIn, the
+ values array must be non-empty. If
+ the operator is Exists or DoesNotExist,
+ the values array must be empty. If
+ the operator is Gt or Lt, the values
+ array must have a single element,
+ which will be interpreted as an integer.
+ This array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ type: object
+ weight:
+ description: Weight associated with matching the
+ corresponding nodeSelectorTerm, in the range
+ 1-100.
+ format: int32
+ type: integer
+ required:
+ - preference
+ - weight
+ type: object
+ type: array
+ requiredDuringSchedulingIgnoredDuringExecution:
+ description: If the affinity requirements specified
+ by this field are not met at scheduling time, the
+ pod will not be scheduled onto the node. If the affinity
+ requirements specified by this field cease to be met
+ at some point during pod execution (e.g. due to an
+ update), the system may or may not try to eventually
+ evict the pod from its node.
+ properties:
+ nodeSelectorTerms:
+ description: Required. A list of node selector terms.
+ The terms are ORed.
+ items:
+ description: A null or empty node selector term
+ matches no objects. The requirements of them
+ are ANDed. The TopologySelectorTerm type implements
+ a subset of the NodeSelectorTerm.
+ properties:
+ matchExpressions:
+ description: A list of node selector requirements
+ by node's labels.
+ items:
+ description: A node selector requirement
+ is a selector that contains values, a
+ key, and an operator that relates the
+ key and values.
+ properties:
+ key:
+ description: The label key that the
+ selector applies to.
+ type: string
+ operator:
+ description: Represents a key's relationship
+ to a set of values. Valid operators
+                                    are In, NotIn, Exists, DoesNotExist,
+                                    Gt, and Lt.
+ type: string
+ values:
+ description: An array of string values.
+ If the operator is In or NotIn, the
+ values array must be non-empty. If
+ the operator is Exists or DoesNotExist,
+ the values array must be empty. If
+ the operator is Gt or Lt, the values
+ array must have a single element,
+ which will be interpreted as an integer.
+ This array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchFields:
+ description: A list of node selector requirements
+ by node's fields.
+ items:
+ description: A node selector requirement
+ is a selector that contains values, a
+ key, and an operator that relates the
+ key and values.
+ properties:
+ key:
+ description: The label key that the
+ selector applies to.
+ type: string
+ operator:
+ description: Represents a key's relationship
+ to a set of values. Valid operators
+                                    are In, NotIn, Exists, DoesNotExist,
+                                    Gt, and Lt.
+ type: string
+ values:
+ description: An array of string values.
+ If the operator is In or NotIn, the
+ values array must be non-empty. If
+ the operator is Exists or DoesNotExist,
+ the values array must be empty. If
+ the operator is Gt or Lt, the values
+ array must have a single element,
+ which will be interpreted as an integer.
+ This array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ type: object
+ type: array
+ required:
+ - nodeSelectorTerms
+ type: object
+ type: object
+ podAffinity:
+ description: Describes pod affinity scheduling rules (e.g.
+ co-locate this pod in the same node, zone, etc. as some
+ other pod(s)).
+ properties:
+ preferredDuringSchedulingIgnoredDuringExecution:
+ description: The scheduler will prefer to schedule pods
+ to nodes that satisfy the affinity expressions specified
+ by this field, but it may choose a node that violates
+ one or more of the expressions. The node that is most
+ preferred is the one with the greatest sum of weights,
+ i.e. for each node that meets all of the scheduling
+ requirements (resource request, requiredDuringScheduling
+ affinity expressions, etc.), compute a sum by iterating
+ through the elements of this field and adding "weight"
+ to the sum if the node has pods which matches the
+ corresponding podAffinityTerm; the node(s) with the
+ highest sum are the most preferred.
+ items:
+ description: The weights of all of the matched WeightedPodAffinityTerm
+ fields are added per-node to find the most preferred
+ node(s)
+ properties:
+ podAffinityTerm:
+ description: Required. A pod affinity term, associated
+ with the corresponding weight.
+ properties:
+ labelSelector:
+ description: A label query over a set of resources,
+ in this case pods.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list
+ of label selector requirements. The
+ requirements are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label key
+ that the selector applies to.
+ type: string
+ operator:
+ description: operator represents
+ a key's relationship to a set
+ of values. Valid operators are
+ In, NotIn, Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array
+ of string values. If the operator
+ is In or NotIn, the values array
+ must be non-empty. If the operator
+ is Exists or DoesNotExist, the
+ values array must be empty. This
+ array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator
+ is "In", and the values array contains
+ only "value". The requirements are ANDed.
+ type: object
+ type: object
+ namespaceSelector:
+ description: A label query over the set of
+ namespaces that the term applies to. The
+ term is applied to the union of the namespaces
+ selected by this field and the ones listed
+ in the namespaces field. null selector and
+ null or empty namespaces list means "this
+ pod's namespace". An empty selector ({})
+ matches all namespaces. This field is alpha-level
+ and is only honored when PodAffinityNamespaceSelector
+ feature is enabled.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list
+ of label selector requirements. The
+ requirements are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label key
+ that the selector applies to.
+ type: string
+ operator:
+ description: operator represents
+ a key's relationship to a set
+ of values. Valid operators are
+ In, NotIn, Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array
+ of string values. If the operator
+ is In or NotIn, the values array
+ must be non-empty. If the operator
+ is Exists or DoesNotExist, the
+ values array must be empty. This
+ array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator
+ is "In", and the values array contains
+ only "value". The requirements are ANDed.
+ type: object
+ type: object
+ namespaces:
+ description: namespaces specifies a static
+ list of namespace names that the term applies
+ to. The term is applied to the union of
+ the namespaces listed in this field and
+ the ones selected by namespaceSelector.
+ null or empty namespaces list and null namespaceSelector
+ means "this pod's namespace"
+ items:
+ type: string
+ type: array
+ topologyKey:
+ description: This pod should be co-located
+ (affinity) or not co-located (anti-affinity)
+ with the pods matching the labelSelector
+ in the specified namespaces, where co-located
+ is defined as running on a node whose value
+ of the label with key topologyKey matches
+ that of any node on which any of the selected
+ pods is running. Empty topologyKey is not
+ allowed.
+ type: string
+ required:
+ - topologyKey
+ type: object
+ weight:
+ description: weight associated with matching the
+ corresponding podAffinityTerm, in the range
+ 1-100.
+ format: int32
+ type: integer
+ required:
+ - podAffinityTerm
+ - weight
+ type: object
+ type: array
+ requiredDuringSchedulingIgnoredDuringExecution:
+ description: If the affinity requirements specified
+ by this field are not met at scheduling time, the
+ pod will not be scheduled onto the node. If the affinity
+ requirements specified by this field cease to be met
+ at some point during pod execution (e.g. due to a
+ pod label update), the system may or may not try to
+ eventually evict the pod from its node. When there
+ are multiple elements, the lists of nodes corresponding
+ to each podAffinityTerm are intersected, i.e. all
+ terms must be satisfied.
+ items:
+ description: Defines a set of pods (namely those matching
+ the labelSelector relative to the given namespace(s))
+ that this pod should be co-located (affinity) or
+ not co-located (anti-affinity) with, where co-located
+ is defined as running on a node whose value of the
+                              label with key <topologyKey> matches that of any
+ node on which a pod of the set of pods is running
+ properties:
+ labelSelector:
+ description: A label query over a set of resources,
+ in this case pods.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list of
+ label selector requirements. The requirements
+ are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values, a
+ key, and an operator that relates the
+ key and values.
+ properties:
+ key:
+ description: key is the label key that
+ the selector applies to.
+ type: string
+ operator:
+ description: operator represents a key's
+ relationship to a set of values. Valid
+ operators are In, NotIn, Exists and
+ DoesNotExist.
+ type: string
+ values:
+ description: values is an array of string
+ values. If the operator is In or NotIn,
+ the values array must be non-empty.
+ If the operator is Exists or DoesNotExist,
+ the values array must be empty. This
+ array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator is
+ "In", and the values array contains only
+ "value". The requirements are ANDed.
+ type: object
+ type: object
+ namespaceSelector:
+ description: A label query over the set of namespaces
+ that the term applies to. The term is applied
+ to the union of the namespaces selected by this
+ field and the ones listed in the namespaces
+ field. null selector and null or empty namespaces
+ list means "this pod's namespace". An empty
+ selector ({}) matches all namespaces. This field
+ is alpha-level and is only honored when PodAffinityNamespaceSelector
+ feature is enabled.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list of
+ label selector requirements. The requirements
+ are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values, a
+ key, and an operator that relates the
+ key and values.
+ properties:
+ key:
+ description: key is the label key that
+ the selector applies to.
+ type: string
+ operator:
+ description: operator represents a key's
+ relationship to a set of values. Valid
+ operators are In, NotIn, Exists and
+ DoesNotExist.
+ type: string
+ values:
+ description: values is an array of string
+ values. If the operator is In or NotIn,
+ the values array must be non-empty.
+ If the operator is Exists or DoesNotExist,
+ the values array must be empty. This
+ array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator is
+ "In", and the values array contains only
+ "value". The requirements are ANDed.
+ type: object
+ type: object
+ namespaces:
+ description: namespaces specifies a static list
+ of namespace names that the term applies to.
+ The term is applied to the union of the namespaces
+ listed in this field and the ones selected by
+ namespaceSelector. null or empty namespaces
+ list and null namespaceSelector means "this
+ pod's namespace"
+ items:
+ type: string
+ type: array
+ topologyKey:
+ description: This pod should be co-located (affinity)
+ or not co-located (anti-affinity) with the pods
+ matching the labelSelector in the specified
+ namespaces, where co-located is defined as running
+ on a node whose value of the label with key
+ topologyKey matches that of any node on which
+ any of the selected pods is running. Empty topologyKey
+ is not allowed.
+ type: string
+ required:
+ - topologyKey
+ type: object
+ type: array
+ type: object
+ podAntiAffinity:
+ description: Describes pod anti-affinity scheduling rules
+ (e.g. avoid putting this pod in the same node, zone, etc.
+ as some other pod(s)).
+ properties:
+ preferredDuringSchedulingIgnoredDuringExecution:
+ description: The scheduler will prefer to schedule pods
+ to nodes that satisfy the anti-affinity expressions
+ specified by this field, but it may choose a node
+ that violates one or more of the expressions. The
+ node that is most preferred is the one with the greatest
+ sum of weights, i.e. for each node that meets all
+ of the scheduling requirements (resource request,
+ requiredDuringScheduling anti-affinity expressions,
+ etc.), compute a sum by iterating through the elements
+ of this field and adding "weight" to the sum if the
+ node has pods which matches the corresponding podAffinityTerm;
+ the node(s) with the highest sum are the most preferred.
+ items:
+ description: The weights of all of the matched WeightedPodAffinityTerm
+ fields are added per-node to find the most preferred
+ node(s)
+ properties:
+ podAffinityTerm:
+ description: Required. A pod affinity term, associated
+ with the corresponding weight.
+ properties:
+ labelSelector:
+ description: A label query over a set of resources,
+ in this case pods.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list
+ of label selector requirements. The
+ requirements are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label key
+ that the selector applies to.
+ type: string
+ operator:
+ description: operator represents
+ a key's relationship to a set
+ of values. Valid operators are
+ In, NotIn, Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array
+ of string values. If the operator
+ is In or NotIn, the values array
+ must be non-empty. If the operator
+ is Exists or DoesNotExist, the
+ values array must be empty. This
+ array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator
+ is "In", and the values array contains
+ only "value". The requirements are ANDed.
+ type: object
+ type: object
+ namespaceSelector:
+ description: A label query over the set of
+ namespaces that the term applies to. The
+ term is applied to the union of the namespaces
+ selected by this field and the ones listed
+ in the namespaces field. null selector and
+ null or empty namespaces list means "this
+ pod's namespace". An empty selector ({})
+ matches all namespaces. This field is alpha-level
+ and is only honored when PodAffinityNamespaceSelector
+ feature is enabled.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list
+ of label selector requirements. The
+ requirements are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label key
+ that the selector applies to.
+ type: string
+ operator:
+ description: operator represents
+ a key's relationship to a set
+ of values. Valid operators are
+ In, NotIn, Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array
+ of string values. If the operator
+ is In or NotIn, the values array
+ must be non-empty. If the operator
+ is Exists or DoesNotExist, the
+ values array must be empty. This
+ array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator
+ is "In", and the values array contains
+ only "value". The requirements are ANDed.
+ type: object
+ type: object
+ namespaces:
+ description: namespaces specifies a static
+ list of namespace names that the term applies
+ to. The term is applied to the union of
+ the namespaces listed in this field and
+ the ones selected by namespaceSelector.
+ null or empty namespaces list and null namespaceSelector
+ means "this pod's namespace"
+ items:
+ type: string
+ type: array
+ topologyKey:
+ description: This pod should be co-located
+ (affinity) or not co-located (anti-affinity)
+ with the pods matching the labelSelector
+ in the specified namespaces, where co-located
+ is defined as running on a node whose value
+ of the label with key topologyKey matches
+ that of any node on which any of the selected
+ pods is running. Empty topologyKey is not
+ allowed.
+ type: string
+ required:
+ - topologyKey
+ type: object
+ weight:
+ description: weight associated with matching the
+ corresponding podAffinityTerm, in the range
+ 1-100.
+ format: int32
+ type: integer
+ required:
+ - podAffinityTerm
+ - weight
+ type: object
+ type: array
+ requiredDuringSchedulingIgnoredDuringExecution:
+ description: If the anti-affinity requirements specified
+ by this field are not met at scheduling time, the
+ pod will not be scheduled onto the node. If the anti-affinity
+ requirements specified by this field cease to be met
+ at some point during pod execution (e.g. due to a
+ pod label update), the system may or may not try to
+ eventually evict the pod from its node. When there
+ are multiple elements, the lists of nodes corresponding
+ to each podAffinityTerm are intersected, i.e. all
+ terms must be satisfied.
+ items:
+ description: Defines a set of pods (namely those matching
+ the labelSelector relative to the given namespace(s))
+ that this pod should be co-located (affinity) or
+ not co-located (anti-affinity) with, where co-located
+ is defined as running on a node whose value of the
+                              label with key <topologyKey> matches that of any
+ node on which a pod of the set of pods is running
+ properties:
+ labelSelector:
+ description: A label query over a set of resources,
+ in this case pods.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list of
+ label selector requirements. The requirements
+ are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values, a
+ key, and an operator that relates the
+ key and values.
+ properties:
+ key:
+ description: key is the label key that
+ the selector applies to.
+ type: string
+ operator:
+ description: operator represents a key's
+ relationship to a set of values. Valid
+ operators are In, NotIn, Exists and
+ DoesNotExist.
+ type: string
+ values:
+ description: values is an array of string
+ values. If the operator is In or NotIn,
+ the values array must be non-empty.
+ If the operator is Exists or DoesNotExist,
+ the values array must be empty. This
+ array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator is
+ "In", and the values array contains only
+ "value". The requirements are ANDed.
+ type: object
+ type: object
+ namespaceSelector:
+ description: A label query over the set of namespaces
+ that the term applies to. The term is applied
+ to the union of the namespaces selected by this
+ field and the ones listed in the namespaces
+ field. null selector and null or empty namespaces
+ list means "this pod's namespace". An empty
+ selector ({}) matches all namespaces. This field
+ is alpha-level and is only honored when PodAffinityNamespaceSelector
+ feature is enabled.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list of
+ label selector requirements. The requirements
+ are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values, a
+ key, and an operator that relates the
+ key and values.
+ properties:
+ key:
+ description: key is the label key that
+ the selector applies to.
+ type: string
+ operator:
+ description: operator represents a key's
+ relationship to a set of values. Valid
+ operators are In, NotIn, Exists and
+ DoesNotExist.
+ type: string
+ values:
+ description: values is an array of string
+ values. If the operator is In or NotIn,
+ the values array must be non-empty.
+ If the operator is Exists or DoesNotExist,
+ the values array must be empty. This
+ array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator is
+ "In", and the values array contains only
+ "value". The requirements are ANDed.
+ type: object
+ type: object
+ namespaces:
+ description: namespaces specifies a static list
+ of namespace names that the term applies to.
+ The term is applied to the union of the namespaces
+ listed in this field and the ones selected by
+ namespaceSelector. null or empty namespaces
+ list and null namespaceSelector means "this
+ pod's namespace"
+ items:
+ type: string
+ type: array
+ topologyKey:
+ description: This pod should be co-located (affinity)
+ or not co-located (anti-affinity) with the pods
+ matching the labelSelector in the specified
+ namespaces, where co-located is defined as running
+ on a node whose value of the label with key
+ topologyKey matches that of any node on which
+ any of the selected pods is running. Empty topologyKey
+ is not allowed.
+ type: string
+ required:
+ - topologyKey
+ type: object
+ type: array
+ type: object
+ type: object
+ backupNodeSelector:
+ additionalProperties:
+ type: string
+ type: object
+ backupPriorityClassName:
+ type: string
+ backupTolerations:
+ items:
+ description: The pod this Toleration is attached to tolerates
+                    any taint that matches the triple <key,value,effect> using
+                    the matching operator <operator>.
+ properties:
+ effect:
+ description: Effect indicates the taint effect to match.
+ Empty means match all taint effects. When specified,
+ allowed values are NoSchedule, PreferNoSchedule and
+ NoExecute.
+ type: string
+ key:
+ description: Key is the taint key that the toleration
+ applies to. Empty means match all taint keys. If the
+ key is empty, operator must be Exists; this combination
+ means to match all values and all keys.
+ type: string
+ operator:
+ description: Operator represents a key's relationship
+ to the value. Valid operators are Exists and Equal.
+ Defaults to Equal. Exists is equivalent to wildcard
+ for value, so that a pod can tolerate all taints of
+ a particular category.
+ type: string
+ tolerationSeconds:
+ description: TolerationSeconds represents the period of
+ time the toleration (which must be of effect NoExecute,
+ otherwise this field is ignored) tolerates the taint.
+ By default, it is not set, which means tolerate the
+ taint forever (do not evict). Zero and negative values
+ will be treated as 0 (evict immediately) by the system.
+ format: int64
+ type: integer
+ value:
+ description: Value is the taint value the toleration matches
+ to. If the operator is Exists, the value should be empty,
+ otherwise just a regular string.
+ type: string
+ type: object
+ type: array
+ containers:
+                description: Containers allows the user to specify extra sidecar
+                containers to run along with mysql
+ items:
+ description: A single application container that you want
+ to run within a pod.
+ properties:
+ args:
+ description: 'Arguments to the entrypoint. The docker
+ image''s CMD is used if this is not provided. Variable
+ references $(VAR_NAME) are expanded using the container''s
+ environment. If a variable cannot be resolved, the reference
+ in the input string will be unchanged. The $(VAR_NAME)
+ syntax can be escaped with a double $$, ie: $$(VAR_NAME).
+ Escaped references will never be expanded, regardless
+ of whether the variable exists or not. Cannot be updated.
+ More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell'
+ items:
+ type: string
+ type: array
+ command:
+ description: 'Entrypoint array. Not executed within a
+ shell. The docker image''s ENTRYPOINT is used if this
+ is not provided. Variable references $(VAR_NAME) are
+ expanded using the container''s environment. If a variable
+ cannot be resolved, the reference in the input string
+ will be unchanged. The $(VAR_NAME) syntax can be escaped
+ with a double $$, ie: $$(VAR_NAME). Escaped references
+ will never be expanded, regardless of whether the variable
+ exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell'
+ items:
+ type: string
+ type: array
+ env:
+ description: List of environment variables to set in the
+ container. Cannot be updated.
+ items:
+ description: EnvVar represents an environment variable
+ present in a Container.
+ properties:
+ name:
+ description: Name of the environment variable. Must
+ be a C_IDENTIFIER.
+ type: string
+ value:
+ description: 'Variable references $(VAR_NAME) are
+ expanded using the previous defined environment
+ variables in the container and any service environment
+ variables. If a variable cannot be resolved, the
+ reference in the input string will be unchanged.
+ The $(VAR_NAME) syntax can be escaped with a double
+ $$, ie: $$(VAR_NAME). Escaped references will
+ never be expanded, regardless of whether the variable
+ exists or not. Defaults to "".'
+ type: string
+ valueFrom:
+ description: Source for the environment variable's
+ value. Cannot be used if value is not empty.
+ properties:
+ configMapKeyRef:
+ description: Selects a key of a ConfigMap.
+ properties:
+ key:
+ description: The key to select.
+ type: string
+ name:
+ description: 'Name of the referent. More
+ info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+ TODO: Add other useful fields. apiVersion,
+ kind, uid?'
+ type: string
+ optional:
+ description: Specify whether the ConfigMap
+ or its key must be defined
+ type: boolean
+ required:
+ - key
+ type: object
+ fieldRef:
+ description: 'Selects a field of the pod: supports
+ metadata.name, metadata.namespace, `metadata.labels['''']`,
+ `metadata.annotations['''']`, spec.nodeName,
+ spec.serviceAccountName, status.hostIP, status.podIP,
+ status.podIPs.'
+ properties:
+ apiVersion:
+ description: Version of the schema the FieldPath
+ is written in terms of, defaults to "v1".
+ type: string
+ fieldPath:
+ description: Path of the field to select
+ in the specified API version.
+ type: string
+ required:
+ - fieldPath
+ type: object
+ resourceFieldRef:
+ description: 'Selects a resource of the container:
+ only resources limits and requests (limits.cpu,
+ limits.memory, limits.ephemeral-storage, requests.cpu,
+ requests.memory and requests.ephemeral-storage)
+ are currently supported.'
+ properties:
+ containerName:
+ description: 'Container name: required for
+ volumes, optional for env vars'
+ type: string
+ divisor:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Specifies the output format
+ of the exposed resources, defaults to
+ "1"
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ resource:
+ description: 'Required: resource to select'
+ type: string
+ required:
+ - resource
+ type: object
+ secretKeyRef:
+ description: Selects a key of a secret in the
+ pod's namespace
+ properties:
+ key:
+ description: The key of the secret to select
+ from. Must be a valid secret key.
+ type: string
+ name:
+ description: 'Name of the referent. More
+ info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+ TODO: Add other useful fields. apiVersion,
+ kind, uid?'
+ type: string
+ optional:
+ description: Specify whether the Secret
+ or its key must be defined
+ type: boolean
+ required:
+ - key
+ type: object
+ type: object
+ required:
+ - name
+ type: object
+ type: array
+ envFrom:
+ description: List of sources to populate environment variables
+ in the container. The keys defined within a source must
+ be a C_IDENTIFIER. All invalid keys will be reported
+ as an event when the container is starting. When a key
+ exists in multiple sources, the value associated with
+ the last source will take precedence. Values defined
+ by an Env with a duplicate key will take precedence.
+ Cannot be updated.
+ items:
+ description: EnvFromSource represents the source of
+ a set of ConfigMaps
+ properties:
+ configMapRef:
+ description: The ConfigMap to select from
+ properties:
+ name:
+ description: 'Name of the referent. More info:
+ https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+ TODO: Add other useful fields. apiVersion,
+ kind, uid?'
+ type: string
+ optional:
+ description: Specify whether the ConfigMap must
+ be defined
+ type: boolean
+ type: object
+ prefix:
+ description: An optional identifier to prepend to
+ each key in the ConfigMap. Must be a C_IDENTIFIER.
+ type: string
+ secretRef:
+ description: The Secret to select from
+ properties:
+ name:
+ description: 'Name of the referent. More info:
+ https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+ TODO: Add other useful fields. apiVersion,
+ kind, uid?'
+ type: string
+ optional:
+ description: Specify whether the Secret must
+ be defined
+ type: boolean
+ type: object
+ type: object
+ type: array
+ image:
+ description: 'Docker image name. More info: https://kubernetes.io/docs/concepts/containers/images
+ This field is optional to allow higher level config
+ management to default or override container images in
+ workload controllers like Deployments and StatefulSets.'
+ type: string
+ imagePullPolicy:
+ description: 'Image pull policy. One of Always, Never,
+ IfNotPresent. Defaults to Always if :latest tag is specified,
+ or IfNotPresent otherwise. Cannot be updated. More info:
+ https://kubernetes.io/docs/concepts/containers/images#updating-images'
+ type: string
+ lifecycle:
+ description: Actions that the management system should
+ take in response to container lifecycle events. Cannot
+ be updated.
+ properties:
+ postStart:
+ description: 'PostStart is called immediately after
+ a container is created. If the handler fails, the
+ container is terminated and restarted according
+ to its restart policy. Other management of the container
+ blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks'
+ properties:
+ exec:
+ description: One and only one of the following
+ should be specified. Exec specifies the action
+ to take.
+ properties:
+ command:
+ description: Command is the command line to
+ execute inside the container, the working
+ directory for the command is root ('/')
+ in the container's filesystem. The command
+ is simply exec'd, it is not run inside a
+ shell, so traditional shell instructions
+ ('|', etc) won't work. To use a shell, you
+ need to explicitly call out to that shell.
+ Exit status of 0 is treated as live/healthy
+ and non-zero is unhealthy.
+ items:
+ type: string
+ type: array
+ type: object
+ httpGet:
+ description: HTTPGet specifies the http request
+ to perform.
+ properties:
+ host:
+ description: Host name to connect to, defaults
+ to the pod IP. You probably want to set
+ "Host" in httpHeaders instead.
+ type: string
+ httpHeaders:
+ description: Custom headers to set in the
+ request. HTTP allows repeated headers.
+ items:
+ description: HTTPHeader describes a custom
+ header to be used in HTTP probes
+ properties:
+ name:
+ description: The header field name
+ type: string
+ value:
+ description: The header field value
+ type: string
+ required:
+ - name
+ - value
+ type: object
+ type: array
+ path:
+ description: Path to access on the HTTP server.
+ type: string
+ port:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Name or number of the port to
+ access on the container. Number must be
+ in the range 1 to 65535. Name must be an
+ IANA_SVC_NAME.
+ x-kubernetes-int-or-string: true
+ scheme:
+ description: Scheme to use for connecting
+ to the host. Defaults to HTTP.
+ type: string
+ required:
+ - port
+ type: object
+ tcpSocket:
+ description: 'TCPSocket specifies an action involving
+ a TCP port. TCP hooks not yet supported TODO:
+ implement a realistic TCP lifecycle hook'
+ properties:
+ host:
+ description: 'Optional: Host name to connect
+ to, defaults to the pod IP.'
+ type: string
+ port:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Number or name of the port to
+ access on the container. Number must be
+ in the range 1 to 65535. Name must be an
+ IANA_SVC_NAME.
+ x-kubernetes-int-or-string: true
+ required:
+ - port
+ type: object
+ type: object
+ preStop:
+ description: 'PreStop is called immediately before
+ a container is terminated due to an API request
+ or management event such as liveness/startup probe
+ failure, preemption, resource contention, etc. The
+ handler is not called if the container crashes or
+ exits. The reason for termination is passed to the
+ handler. The Pod''s termination grace period countdown
+                       begins before the PreStop hook is executed. Regardless
+ of the outcome of the handler, the container will
+ eventually terminate within the Pod''s termination
+ grace period. Other management of the container
+ blocks until the hook completes or until the termination
+ grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks'
+ properties:
+ exec:
+ description: One and only one of the following
+ should be specified. Exec specifies the action
+ to take.
+ properties:
+ command:
+ description: Command is the command line to
+ execute inside the container, the working
+ directory for the command is root ('/')
+ in the container's filesystem. The command
+ is simply exec'd, it is not run inside a
+ shell, so traditional shell instructions
+ ('|', etc) won't work. To use a shell, you
+ need to explicitly call out to that shell.
+ Exit status of 0 is treated as live/healthy
+ and non-zero is unhealthy.
+ items:
+ type: string
+ type: array
+ type: object
+ httpGet:
+ description: HTTPGet specifies the http request
+ to perform.
+ properties:
+ host:
+ description: Host name to connect to, defaults
+ to the pod IP. You probably want to set
+ "Host" in httpHeaders instead.
+ type: string
+ httpHeaders:
+ description: Custom headers to set in the
+ request. HTTP allows repeated headers.
+ items:
+ description: HTTPHeader describes a custom
+ header to be used in HTTP probes
+ properties:
+ name:
+ description: The header field name
+ type: string
+ value:
+ description: The header field value
+ type: string
+ required:
+ - name
+ - value
+ type: object
+ type: array
+ path:
+ description: Path to access on the HTTP server.
+ type: string
+ port:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Name or number of the port to
+ access on the container. Number must be
+ in the range 1 to 65535. Name must be an
+ IANA_SVC_NAME.
+ x-kubernetes-int-or-string: true
+ scheme:
+ description: Scheme to use for connecting
+ to the host. Defaults to HTTP.
+ type: string
+ required:
+ - port
+ type: object
+ tcpSocket:
+ description: 'TCPSocket specifies an action involving
+ a TCP port. TCP hooks not yet supported TODO:
+ implement a realistic TCP lifecycle hook'
+ properties:
+ host:
+ description: 'Optional: Host name to connect
+ to, defaults to the pod IP.'
+ type: string
+ port:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Number or name of the port to
+ access on the container. Number must be
+ in the range 1 to 65535. Name must be an
+ IANA_SVC_NAME.
+ x-kubernetes-int-or-string: true
+ required:
+ - port
+ type: object
+ type: object
+ type: object
+ livenessProbe:
+ description: 'Periodic probe of container liveness. Container
+ will be restarted if the probe fails. Cannot be updated.
+ More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes'
+ properties:
+ exec:
+ description: One and only one of the following should
+ be specified. Exec specifies the action to take.
+ properties:
+ command:
+ description: Command is the command line to execute
+ inside the container, the working directory
+ for the command is root ('/') in the container's
+ filesystem. The command is simply exec'd, it
+ is not run inside a shell, so traditional shell
+ instructions ('|', etc) won't work. To use a
+ shell, you need to explicitly call out to that
+ shell. Exit status of 0 is treated as live/healthy
+ and non-zero is unhealthy.
+ items:
+ type: string
+ type: array
+ type: object
+ failureThreshold:
+ description: Minimum consecutive failures for the
+ probe to be considered failed after having succeeded.
+ Defaults to 3. Minimum value is 1.
+ format: int32
+ type: integer
+ httpGet:
+ description: HTTPGet specifies the http request to
+ perform.
+ properties:
+ host:
+ description: Host name to connect to, defaults
+ to the pod IP. You probably want to set "Host"
+ in httpHeaders instead.
+ type: string
+ httpHeaders:
+ description: Custom headers to set in the request.
+ HTTP allows repeated headers.
+ items:
+ description: HTTPHeader describes a custom header
+ to be used in HTTP probes
+ properties:
+ name:
+ description: The header field name
+ type: string
+ value:
+ description: The header field value
+ type: string
+ required:
+ - name
+ - value
+ type: object
+ type: array
+ path:
+ description: Path to access on the HTTP server.
+ type: string
+ port:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Name or number of the port to access
+ on the container. Number must be in the range
+ 1 to 65535. Name must be an IANA_SVC_NAME.
+ x-kubernetes-int-or-string: true
+ scheme:
+ description: Scheme to use for connecting to the
+ host. Defaults to HTTP.
+ type: string
+ required:
+ - port
+ type: object
+ initialDelaySeconds:
+ description: 'Number of seconds after the container
+ has started before liveness probes are initiated.
+ More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes'
+ format: int32
+ type: integer
+ periodSeconds:
+ description: How often (in seconds) to perform the
+ probe. Default to 10 seconds. Minimum value is 1.
+ format: int32
+ type: integer
+ successThreshold:
+ description: Minimum consecutive successes for the
+ probe to be considered successful after having failed.
+ Defaults to 1. Must be 1 for liveness and startup.
+ Minimum value is 1.
+ format: int32
+ type: integer
+ tcpSocket:
+ description: 'TCPSocket specifies an action involving
+ a TCP port. TCP hooks not yet supported TODO: implement
+ a realistic TCP lifecycle hook'
+ properties:
+ host:
+ description: 'Optional: Host name to connect to,
+ defaults to the pod IP.'
+ type: string
+ port:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Number or name of the port to access
+ on the container. Number must be in the range
+ 1 to 65535. Name must be an IANA_SVC_NAME.
+ x-kubernetes-int-or-string: true
+ required:
+ - port
+ type: object
+ terminationGracePeriodSeconds:
+ description: Optional duration in seconds the pod
+ needs to terminate gracefully upon probe failure.
+ The grace period is the duration in seconds after
+ the processes running in the pod are sent a termination
+ signal and the time when the processes are forcibly
+ halted with a kill signal. Set this value longer
+ than the expected cleanup time for your process.
+ If this value is nil, the pod's terminationGracePeriodSeconds
+ will be used. Otherwise, this value overrides the
+ value provided by the pod spec. Value must be non-negative
+ integer. The value zero indicates stop immediately
+ via the kill signal (no opportunity to shut down).
+ This is an alpha field and requires enabling ProbeTerminationGracePeriod
+ feature gate.
+ format: int64
+ type: integer
+ timeoutSeconds:
+ description: 'Number of seconds after which the probe
+ times out. Defaults to 1 second. Minimum value is
+ 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes'
+ format: int32
+ type: integer
+ type: object
+ name:
+ description: Name of the container specified as a DNS_LABEL.
+ Each container in a pod must have a unique name (DNS_LABEL).
+ Cannot be updated.
+ type: string
+ ports:
+ description: List of ports to expose from the container.
+ Exposing a port here gives the system additional information
+ about the network connections a container uses, but
+ is primarily informational. Not specifying a port here
+ DOES NOT prevent that port from being exposed. Any port
+ which is listening on the default "0.0.0.0" address
+ inside a container will be accessible from the network.
+ Cannot be updated.
+ items:
+ description: ContainerPort represents a network port
+ in a single container.
+ properties:
+ containerPort:
+ description: Number of port to expose on the pod's
+ IP address. This must be a valid port number,
+ 0 < x < 65536.
+ format: int32
+ type: integer
+ hostIP:
+ description: What host IP to bind the external port
+ to.
+ type: string
+ hostPort:
+ description: Number of port to expose on the host.
+ If specified, this must be a valid port number,
+ 0 < x < 65536. If HostNetwork is specified, this
+ must match ContainerPort. Most containers do not
+ need this.
+ format: int32
+ type: integer
+ name:
+ description: If specified, this must be an IANA_SVC_NAME
+ and unique within the pod. Each named port in
+ a pod must have a unique name. Name for the port
+ that can be referred to by services.
+ type: string
+ protocol:
+ default: TCP
+ description: Protocol for port. Must be UDP, TCP,
+ or SCTP. Defaults to "TCP".
+ type: string
+ required:
+ - containerPort
+ type: object
+ type: array
+ x-kubernetes-list-map-keys:
+ - containerPort
+ - protocol
+ x-kubernetes-list-type: map
+ readinessProbe:
+ description: 'Periodic probe of container service readiness.
+ Container will be removed from service endpoints if
+ the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes'
+ properties:
+ exec:
+ description: One and only one of the following should
+ be specified. Exec specifies the action to take.
+ properties:
+ command:
+ description: Command is the command line to execute
+ inside the container, the working directory
+ for the command is root ('/') in the container's
+ filesystem. The command is simply exec'd, it
+ is not run inside a shell, so traditional shell
+ instructions ('|', etc) won't work. To use a
+ shell, you need to explicitly call out to that
+ shell. Exit status of 0 is treated as live/healthy
+ and non-zero is unhealthy.
+ items:
+ type: string
+ type: array
+ type: object
+ failureThreshold:
+ description: Minimum consecutive failures for the
+ probe to be considered failed after having succeeded.
+ Defaults to 3. Minimum value is 1.
+ format: int32
+ type: integer
+ httpGet:
+ description: HTTPGet specifies the http request to
+ perform.
+ properties:
+ host:
+ description: Host name to connect to, defaults
+ to the pod IP. You probably want to set "Host"
+ in httpHeaders instead.
+ type: string
+ httpHeaders:
+ description: Custom headers to set in the request.
+ HTTP allows repeated headers.
+ items:
+ description: HTTPHeader describes a custom header
+ to be used in HTTP probes
+ properties:
+ name:
+ description: The header field name
+ type: string
+ value:
+ description: The header field value
+ type: string
+ required:
+ - name
+ - value
+ type: object
+ type: array
+ path:
+ description: Path to access on the HTTP server.
+ type: string
+ port:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Name or number of the port to access
+ on the container. Number must be in the range
+ 1 to 65535. Name must be an IANA_SVC_NAME.
+ x-kubernetes-int-or-string: true
+ scheme:
+ description: Scheme to use for connecting to the
+ host. Defaults to HTTP.
+ type: string
+ required:
+ - port
+ type: object
+ initialDelaySeconds:
+ description: 'Number of seconds after the container
+ has started before liveness probes are initiated.
+ More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes'
+ format: int32
+ type: integer
+ periodSeconds:
+ description: How often (in seconds) to perform the
+ probe. Default to 10 seconds. Minimum value is 1.
+ format: int32
+ type: integer
+ successThreshold:
+ description: Minimum consecutive successes for the
+ probe to be considered successful after having failed.
+ Defaults to 1. Must be 1 for liveness and startup.
+ Minimum value is 1.
+ format: int32
+ type: integer
+ tcpSocket:
+ description: 'TCPSocket specifies an action involving
+ a TCP port. TCP hooks not yet supported TODO: implement
+ a realistic TCP lifecycle hook'
+ properties:
+ host:
+ description: 'Optional: Host name to connect to,
+ defaults to the pod IP.'
+ type: string
+ port:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Number or name of the port to access
+ on the container. Number must be in the range
+ 1 to 65535. Name must be an IANA_SVC_NAME.
+ x-kubernetes-int-or-string: true
+ required:
+ - port
+ type: object
+ terminationGracePeriodSeconds:
+ description: Optional duration in seconds the pod
+ needs to terminate gracefully upon probe failure.
+ The grace period is the duration in seconds after
+ the processes running in the pod are sent a termination
+ signal and the time when the processes are forcibly
+ halted with a kill signal. Set this value longer
+ than the expected cleanup time for your process.
+ If this value is nil, the pod's terminationGracePeriodSeconds
+ will be used. Otherwise, this value overrides the
+ value provided by the pod spec. Value must be non-negative
+ integer. The value zero indicates stop immediately
+ via the kill signal (no opportunity to shut down).
+ This is an alpha field and requires enabling ProbeTerminationGracePeriod
+ feature gate.
+ format: int64
+ type: integer
+ timeoutSeconds:
+ description: 'Number of seconds after which the probe
+ times out. Defaults to 1 second. Minimum value is
+ 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes'
+ format: int32
+ type: integer
+ type: object
+ resources:
+ description: 'Compute Resources required by this container.
+ Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ description: 'Limits describes the maximum amount
+ of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ description: 'Requests describes the minimum amount
+ of compute resources required. If Requests is omitted
+ for a container, it defaults to Limits if that is
+ explicitly specified, otherwise to an implementation-defined
+ value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
+ type: object
+ type: object
+ securityContext:
+ description: 'Security options the pod should run with.
+ More info: https://kubernetes.io/docs/concepts/policy/security-context/
+ More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/'
+ properties:
+ allowPrivilegeEscalation:
+ description: 'AllowPrivilegeEscalation controls whether
+ a process can gain more privileges than its parent
+ process. This bool directly controls if the no_new_privs
+ flag will be set on the container process. AllowPrivilegeEscalation
+ is true always when the container is: 1) run as
+ Privileged 2) has CAP_SYS_ADMIN'
+ type: boolean
+ capabilities:
+ description: The capabilities to add/drop when running
+ containers. Defaults to the default set of capabilities
+ granted by the container runtime.
+ properties:
+ add:
+ description: Added capabilities
+ items:
+ description: Capability represent POSIX capabilities
+ type
+ type: string
+ type: array
+ drop:
+ description: Removed capabilities
+ items:
+ description: Capability represent POSIX capabilities
+ type
+ type: string
+ type: array
+ type: object
+ privileged:
+ description: Run container in privileged mode. Processes
+ in privileged containers are essentially equivalent
+ to root on the host. Defaults to false.
+ type: boolean
+ procMount:
+ description: procMount denotes the type of proc mount
+ to use for the containers. The default is DefaultProcMount
+ which uses the container runtime defaults for readonly
+ paths and masked paths. This requires the ProcMountType
+ feature flag to be enabled.
+ type: string
+ readOnlyRootFilesystem:
+ description: Whether this container has a read-only
+ root filesystem. Default is false.
+ type: boolean
+ runAsGroup:
+ description: The GID to run the entrypoint of the
+ container process. Uses runtime default if unset.
+ May also be set in PodSecurityContext. If set in
+ both SecurityContext and PodSecurityContext, the
+ value specified in SecurityContext takes precedence.
+ format: int64
+ type: integer
+ runAsNonRoot:
+ description: Indicates that the container must run
+ as a non-root user. If true, the Kubelet will validate
+ the image at runtime to ensure that it does not
+ run as UID 0 (root) and fail to start the container
+ if it does. If unset or false, no such validation
+ will be performed. May also be set in PodSecurityContext. If
+ set in both SecurityContext and PodSecurityContext,
+ the value specified in SecurityContext takes precedence.
+ type: boolean
+ runAsUser:
+ description: The UID to run the entrypoint of the
+ container process. Defaults to user specified in
+ image metadata if unspecified. May also be set in
+ PodSecurityContext. If set in both SecurityContext
+ and PodSecurityContext, the value specified in SecurityContext
+ takes precedence.
+ format: int64
+ type: integer
+ seLinuxOptions:
+ description: The SELinux context to be applied to
+ the container. If unspecified, the container runtime
+ will allocate a random SELinux context for each
+ container. May also be set in PodSecurityContext. If
+ set in both SecurityContext and PodSecurityContext,
+ the value specified in SecurityContext takes precedence.
+ properties:
+ level:
+ description: Level is SELinux level label that
+ applies to the container.
+ type: string
+ role:
+ description: Role is a SELinux role label that
+ applies to the container.
+ type: string
+ type:
+ description: Type is a SELinux type label that
+ applies to the container.
+ type: string
+ user:
+ description: User is a SELinux user label that
+ applies to the container.
+ type: string
+ type: object
+ seccompProfile:
+ description: The seccomp options to use by this container.
+ If seccomp options are provided at both the pod
+ & container level, the container options override
+ the pod options.
+ properties:
+ localhostProfile:
+ description: localhostProfile indicates a profile
+ defined in a file on the node should be used.
+ The profile must be preconfigured on the node
+ to work. Must be a descending path, relative
+ to the kubelet's configured seccomp profile
+ location. Must only be set if type is "Localhost".
+ type: string
+ type:
+ description: "type indicates which kind of seccomp
+ profile will be applied. Valid options are:
+ \n Localhost - a profile defined in a file on
+ the node should be used. RuntimeDefault - the
+ container runtime default profile should be
+ used. Unconfined - no profile should be applied."
+ type: string
+ required:
+ - type
+ type: object
+ windowsOptions:
+ description: The Windows specific settings applied
+ to all containers. If unspecified, the options from
+ the PodSecurityContext will be used. If set in both
+ SecurityContext and PodSecurityContext, the value
+ specified in SecurityContext takes precedence.
+ properties:
+ gmsaCredentialSpec:
+ description: GMSACredentialSpec is where the GMSA
+ admission webhook (https://github.com/kubernetes-sigs/windows-gmsa)
+ inlines the contents of the GMSA credential
+ spec named by the GMSACredentialSpecName field.
+ type: string
+ gmsaCredentialSpecName:
+ description: GMSACredentialSpecName is the name
+ of the GMSA credential spec to use.
+ type: string
+ runAsUserName:
+ description: The UserName in Windows to run the
+ entrypoint of the container process. Defaults
+ to the user specified in image metadata if unspecified.
+ May also be set in PodSecurityContext. If set
+ in both SecurityContext and PodSecurityContext,
+ the value specified in SecurityContext takes
+ precedence.
+ type: string
+ type: object
+ type: object
+ startupProbe:
+ description: 'StartupProbe indicates that the Pod has
+ successfully initialized. If specified, no other probes
+ are executed until this completes successfully. If this
+ probe fails, the Pod will be restarted, just as if the
+ livenessProbe failed. This can be used to provide different
+ probe parameters at the beginning of a Pod''s lifecycle,
+ when it might take a long time to load data or warm
+ a cache, than during steady-state operation. This cannot
+ be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes'
+ properties:
+ exec:
+ description: One and only one of the following should
+ be specified. Exec specifies the action to take.
+ properties:
+ command:
+ description: Command is the command line to execute
+ inside the container, the working directory
+ for the command is root ('/') in the container's
+ filesystem. The command is simply exec'd, it
+ is not run inside a shell, so traditional shell
+ instructions ('|', etc) won't work. To use a
+ shell, you need to explicitly call out to that
+ shell. Exit status of 0 is treated as live/healthy
+ and non-zero is unhealthy.
+ items:
+ type: string
+ type: array
+ type: object
+ failureThreshold:
+ description: Minimum consecutive failures for the
+ probe to be considered failed after having succeeded.
+ Defaults to 3. Minimum value is 1.
+ format: int32
+ type: integer
+ httpGet:
+ description: HTTPGet specifies the http request to
+ perform.
+ properties:
+ host:
+ description: Host name to connect to, defaults
+ to the pod IP. You probably want to set "Host"
+ in httpHeaders instead.
+ type: string
+ httpHeaders:
+ description: Custom headers to set in the request.
+ HTTP allows repeated headers.
+ items:
+ description: HTTPHeader describes a custom header
+ to be used in HTTP probes
+ properties:
+ name:
+ description: The header field name
+ type: string
+ value:
+ description: The header field value
+ type: string
+ required:
+ - name
+ - value
+ type: object
+ type: array
+ path:
+ description: Path to access on the HTTP server.
+ type: string
+ port:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Name or number of the port to access
+ on the container. Number must be in the range
+ 1 to 65535. Name must be an IANA_SVC_NAME.
+ x-kubernetes-int-or-string: true
+ scheme:
+ description: Scheme to use for connecting to the
+ host. Defaults to HTTP.
+ type: string
+ required:
+ - port
+ type: object
+ initialDelaySeconds:
+ description: 'Number of seconds after the container
+ has started before liveness probes are initiated.
+ More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes'
+ format: int32
+ type: integer
+ periodSeconds:
+ description: How often (in seconds) to perform the
+                        probe. Defaults to 10 seconds. Minimum value is 1.
+ format: int32
+ type: integer
+ successThreshold:
+ description: Minimum consecutive successes for the
+ probe to be considered successful after having failed.
+ Defaults to 1. Must be 1 for liveness and startup.
+ Minimum value is 1.
+ format: int32
+ type: integer
+ tcpSocket:
+ description: 'TCPSocket specifies an action involving
+ a TCP port. TCP hooks not yet supported TODO: implement
+ a realistic TCP lifecycle hook'
+ properties:
+ host:
+ description: 'Optional: Host name to connect to,
+ defaults to the pod IP.'
+ type: string
+ port:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Number or name of the port to access
+ on the container. Number must be in the range
+ 1 to 65535. Name must be an IANA_SVC_NAME.
+ x-kubernetes-int-or-string: true
+ required:
+ - port
+ type: object
+ terminationGracePeriodSeconds:
+ description: Optional duration in seconds the pod
+ needs to terminate gracefully upon probe failure.
+ The grace period is the duration in seconds after
+ the processes running in the pod are sent a termination
+ signal and the time when the processes are forcibly
+ halted with a kill signal. Set this value longer
+ than the expected cleanup time for your process.
+ If this value is nil, the pod's terminationGracePeriodSeconds
+ will be used. Otherwise, this value overrides the
+                        value provided by the pod spec. Value must be a non-negative
+                        integer. The value zero indicates stop immediately
+ via the kill signal (no opportunity to shut down).
+ This is an alpha field and requires enabling ProbeTerminationGracePeriod
+ feature gate.
+ format: int64
+ type: integer
+ timeoutSeconds:
+ description: 'Number of seconds after which the probe
+ times out. Defaults to 1 second. Minimum value is
+ 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes'
+ format: int32
+ type: integer
+ type: object
+ stdin:
+ description: Whether this container should allocate a
+ buffer for stdin in the container runtime. If this is
+ not set, reads from stdin in the container will always
+ result in EOF. Default is false.
+ type: boolean
+ stdinOnce:
+ description: Whether the container runtime should close
+ the stdin channel after it has been opened by a single
+ attach. When stdin is true the stdin stream will remain
+ open across multiple attach sessions. If stdinOnce is
+ set to true, stdin is opened on container start, is
+ empty until the first client attaches to stdin, and
+ then remains open and accepts data until the client
+ disconnects, at which time stdin is closed and remains
+ closed until the container is restarted. If this flag
+                    is false, a container process that reads from stdin
+                    will never receive an EOF. Default is false.
+ type: boolean
+ terminationMessagePath:
+ description: 'Optional: Path at which the file to which
+ the container''s termination message will be written
+ is mounted into the container''s filesystem. Message
+ written is intended to be brief final status, such as
+ an assertion failure message. Will be truncated by the
+ node if greater than 4096 bytes. The total message length
+ across all containers will be limited to 12kb. Defaults
+ to /dev/termination-log. Cannot be updated.'
+ type: string
+ terminationMessagePolicy:
+ description: Indicate how the termination message should
+ be populated. File will use the contents of terminationMessagePath
+ to populate the container status message on both success
+ and failure. FallbackToLogsOnError will use the last
+ chunk of container log output if the termination message
+ file is empty and the container exited with an error.
+ The log output is limited to 2048 bytes or 80 lines,
+ whichever is smaller. Defaults to File. Cannot be updated.
+ type: string
+ tty:
+ description: Whether this container should allocate a
+ TTY for itself, also requires 'stdin' to be true. Default
+ is false.
+ type: boolean
+ volumeDevices:
+ description: volumeDevices is the list of block devices
+ to be used by the container.
+ items:
+ description: volumeDevice describes a mapping of a raw
+ block device within a container.
+ properties:
+ devicePath:
+ description: devicePath is the path inside of the
+ container that the device will be mapped to.
+ type: string
+ name:
+ description: name must match the name of a persistentVolumeClaim
+ in the pod
+ type: string
+ required:
+ - devicePath
+ - name
+ type: object
+ type: array
+ volumeMounts:
+ description: Pod volumes to mount into the container's
+ filesystem. Cannot be updated.
+ items:
+ description: VolumeMount describes a mounting of a Volume
+ within a container.
+ properties:
+ mountPath:
+ description: Path within the container at which
+ the volume should be mounted. Must not contain
+ ':'.
+ type: string
+ mountPropagation:
+ description: mountPropagation determines how mounts
+ are propagated from the host to container and
+ the other way around. When not set, MountPropagationNone
+ is used. This field is beta in 1.10.
+ type: string
+ name:
+ description: This must match the Name of a Volume.
+ type: string
+ readOnly:
+ description: Mounted read-only if true, read-write
+ otherwise (false or unspecified). Defaults to
+ false.
+ type: boolean
+ subPath:
+ description: Path within the volume from which the
+ container's volume should be mounted. Defaults
+ to "" (volume's root).
+ type: string
+ subPathExpr:
+ description: Expanded path within the volume from
+ which the container's volume should be mounted.
+ Behaves similarly to SubPath but environment variable
+ references $(VAR_NAME) are expanded using the
+ container's environment. Defaults to "" (volume's
+ root). SubPathExpr and SubPath are mutually exclusive.
+ type: string
+ required:
+ - mountPath
+ - name
+ type: object
+ type: array
+ workingDir:
+ description: Container's working directory. If not specified,
+ the container runtime's default will be used, which
+ might be configured in the container image. Cannot be
+ updated.
+ type: string
+ required:
+ - name
+ type: object
+ type: array
+ imagePullPolicy:
+ description: PullPolicy describes a policy for if/when to pull
+ a container image
+ type: string
+ imagePullSecrets:
+ items:
+ description: LocalObjectReference contains enough information
+ to let you locate the referenced object inside the same
+ namespace.
+ properties:
+ name:
+ description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+ TODO: Add other useful fields. apiVersion, kind, uid?'
+ type: string
+ type: object
+ type: array
+ initContainers:
+ description: InitContainers allows the user to specify extra
+ init containers
+ items:
+ description: A single application container that you want
+ to run within a pod.
+ properties:
+ args:
+ description: 'Arguments to the entrypoint. The docker
+ image''s CMD is used if this is not provided. Variable
+ references $(VAR_NAME) are expanded using the container''s
+ environment. If a variable cannot be resolved, the reference
+ in the input string will be unchanged. The $(VAR_NAME)
+ syntax can be escaped with a double $$, ie: $$(VAR_NAME).
+ Escaped references will never be expanded, regardless
+ of whether the variable exists or not. Cannot be updated.
+ More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell'
+ items:
+ type: string
+ type: array
+ command:
+ description: 'Entrypoint array. Not executed within a
+ shell. The docker image''s ENTRYPOINT is used if this
+ is not provided. Variable references $(VAR_NAME) are
+ expanded using the container''s environment. If a variable
+ cannot be resolved, the reference in the input string
+ will be unchanged. The $(VAR_NAME) syntax can be escaped
+ with a double $$, ie: $$(VAR_NAME). Escaped references
+ will never be expanded, regardless of whether the variable
+ exists or not. Cannot be updated. More info: https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#running-a-command-in-a-shell'
+ items:
+ type: string
+ type: array
+ env:
+ description: List of environment variables to set in the
+ container. Cannot be updated.
+ items:
+ description: EnvVar represents an environment variable
+ present in a Container.
+ properties:
+ name:
+ description: Name of the environment variable. Must
+ be a C_IDENTIFIER.
+ type: string
+ value:
+ description: 'Variable references $(VAR_NAME) are
+ expanded using the previous defined environment
+ variables in the container and any service environment
+ variables. If a variable cannot be resolved, the
+ reference in the input string will be unchanged.
+ The $(VAR_NAME) syntax can be escaped with a double
+ $$, ie: $$(VAR_NAME). Escaped references will
+ never be expanded, regardless of whether the variable
+ exists or not. Defaults to "".'
+ type: string
+ valueFrom:
+ description: Source for the environment variable's
+ value. Cannot be used if value is not empty.
+ properties:
+ configMapKeyRef:
+ description: Selects a key of a ConfigMap.
+ properties:
+ key:
+ description: The key to select.
+ type: string
+ name:
+ description: 'Name of the referent. More
+ info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+ TODO: Add other useful fields. apiVersion,
+ kind, uid?'
+ type: string
+ optional:
+ description: Specify whether the ConfigMap
+ or its key must be defined
+ type: boolean
+ required:
+ - key
+ type: object
+ fieldRef:
+ description: 'Selects a field of the pod: supports
+ metadata.name, metadata.namespace, `metadata.labels['''']`,
+ `metadata.annotations['''']`, spec.nodeName,
+ spec.serviceAccountName, status.hostIP, status.podIP,
+ status.podIPs.'
+ properties:
+ apiVersion:
+ description: Version of the schema the FieldPath
+ is written in terms of, defaults to "v1".
+ type: string
+ fieldPath:
+ description: Path of the field to select
+ in the specified API version.
+ type: string
+ required:
+ - fieldPath
+ type: object
+ resourceFieldRef:
+ description: 'Selects a resource of the container:
+ only resources limits and requests (limits.cpu,
+ limits.memory, limits.ephemeral-storage, requests.cpu,
+ requests.memory and requests.ephemeral-storage)
+ are currently supported.'
+ properties:
+ containerName:
+ description: 'Container name: required for
+ volumes, optional for env vars'
+ type: string
+ divisor:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Specifies the output format
+ of the exposed resources, defaults to
+ "1"
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ resource:
+ description: 'Required: resource to select'
+ type: string
+ required:
+ - resource
+ type: object
+ secretKeyRef:
+ description: Selects a key of a secret in the
+ pod's namespace
+ properties:
+ key:
+ description: The key of the secret to select
+ from. Must be a valid secret key.
+ type: string
+ name:
+ description: 'Name of the referent. More
+ info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+ TODO: Add other useful fields. apiVersion,
+ kind, uid?'
+ type: string
+ optional:
+ description: Specify whether the Secret
+ or its key must be defined
+ type: boolean
+ required:
+ - key
+ type: object
+ type: object
+ required:
+ - name
+ type: object
+ type: array
+ envFrom:
+ description: List of sources to populate environment variables
+ in the container. The keys defined within a source must
+ be a C_IDENTIFIER. All invalid keys will be reported
+ as an event when the container is starting. When a key
+ exists in multiple sources, the value associated with
+ the last source will take precedence. Values defined
+ by an Env with a duplicate key will take precedence.
+ Cannot be updated.
+ items:
+ description: EnvFromSource represents the source of
+ a set of ConfigMaps
+ properties:
+ configMapRef:
+ description: The ConfigMap to select from
+ properties:
+ name:
+ description: 'Name of the referent. More info:
+ https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+ TODO: Add other useful fields. apiVersion,
+ kind, uid?'
+ type: string
+ optional:
+ description: Specify whether the ConfigMap must
+ be defined
+ type: boolean
+ type: object
+ prefix:
+ description: An optional identifier to prepend to
+ each key in the ConfigMap. Must be a C_IDENTIFIER.
+ type: string
+ secretRef:
+ description: The Secret to select from
+ properties:
+ name:
+ description: 'Name of the referent. More info:
+ https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+ TODO: Add other useful fields. apiVersion,
+ kind, uid?'
+ type: string
+ optional:
+ description: Specify whether the Secret must
+ be defined
+ type: boolean
+ type: object
+ type: object
+ type: array
+ image:
+ description: 'Docker image name. More info: https://kubernetes.io/docs/concepts/containers/images
+ This field is optional to allow higher level config
+ management to default or override container images in
+ workload controllers like Deployments and StatefulSets.'
+ type: string
+ imagePullPolicy:
+ description: 'Image pull policy. One of Always, Never,
+ IfNotPresent. Defaults to Always if :latest tag is specified,
+ or IfNotPresent otherwise. Cannot be updated. More info:
+ https://kubernetes.io/docs/concepts/containers/images#updating-images'
+ type: string
+ lifecycle:
+ description: Actions that the management system should
+ take in response to container lifecycle events. Cannot
+ be updated.
+ properties:
+ postStart:
+ description: 'PostStart is called immediately after
+ a container is created. If the handler fails, the
+ container is terminated and restarted according
+ to its restart policy. Other management of the container
+ blocks until the hook completes. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks'
+ properties:
+ exec:
+ description: One and only one of the following
+ should be specified. Exec specifies the action
+ to take.
+ properties:
+ command:
+ description: Command is the command line to
+ execute inside the container, the working
+ directory for the command is root ('/')
+ in the container's filesystem. The command
+ is simply exec'd, it is not run inside a
+ shell, so traditional shell instructions
+ ('|', etc) won't work. To use a shell, you
+ need to explicitly call out to that shell.
+ Exit status of 0 is treated as live/healthy
+ and non-zero is unhealthy.
+ items:
+ type: string
+ type: array
+ type: object
+ httpGet:
+ description: HTTPGet specifies the http request
+ to perform.
+ properties:
+ host:
+ description: Host name to connect to, defaults
+ to the pod IP. You probably want to set
+ "Host" in httpHeaders instead.
+ type: string
+ httpHeaders:
+ description: Custom headers to set in the
+ request. HTTP allows repeated headers.
+ items:
+ description: HTTPHeader describes a custom
+ header to be used in HTTP probes
+ properties:
+ name:
+ description: The header field name
+ type: string
+ value:
+ description: The header field value
+ type: string
+ required:
+ - name
+ - value
+ type: object
+ type: array
+ path:
+ description: Path to access on the HTTP server.
+ type: string
+ port:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Name or number of the port to
+ access on the container. Number must be
+ in the range 1 to 65535. Name must be an
+ IANA_SVC_NAME.
+ x-kubernetes-int-or-string: true
+ scheme:
+ description: Scheme to use for connecting
+ to the host. Defaults to HTTP.
+ type: string
+ required:
+ - port
+ type: object
+ tcpSocket:
+ description: 'TCPSocket specifies an action involving
+ a TCP port. TCP hooks not yet supported TODO:
+ implement a realistic TCP lifecycle hook'
+ properties:
+ host:
+ description: 'Optional: Host name to connect
+ to, defaults to the pod IP.'
+ type: string
+ port:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Number or name of the port to
+ access on the container. Number must be
+ in the range 1 to 65535. Name must be an
+ IANA_SVC_NAME.
+ x-kubernetes-int-or-string: true
+ required:
+ - port
+ type: object
+ type: object
+ preStop:
+ description: 'PreStop is called immediately before
+ a container is terminated due to an API request
+ or management event such as liveness/startup probe
+ failure, preemption, resource contention, etc. The
+ handler is not called if the container crashes or
+ exits. The reason for termination is passed to the
+ handler. The Pod''s termination grace period countdown
+                    begins before the PreStop hook is executed. Regardless
+ of the outcome of the handler, the container will
+ eventually terminate within the Pod''s termination
+ grace period. Other management of the container
+ blocks until the hook completes or until the termination
+ grace period is reached. More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks'
+ properties:
+ exec:
+ description: One and only one of the following
+ should be specified. Exec specifies the action
+ to take.
+ properties:
+ command:
+ description: Command is the command line to
+ execute inside the container, the working
+ directory for the command is root ('/')
+ in the container's filesystem. The command
+ is simply exec'd, it is not run inside a
+ shell, so traditional shell instructions
+ ('|', etc) won't work. To use a shell, you
+ need to explicitly call out to that shell.
+ Exit status of 0 is treated as live/healthy
+ and non-zero is unhealthy.
+ items:
+ type: string
+ type: array
+ type: object
+ httpGet:
+ description: HTTPGet specifies the http request
+ to perform.
+ properties:
+ host:
+ description: Host name to connect to, defaults
+ to the pod IP. You probably want to set
+ "Host" in httpHeaders instead.
+ type: string
+ httpHeaders:
+ description: Custom headers to set in the
+ request. HTTP allows repeated headers.
+ items:
+ description: HTTPHeader describes a custom
+ header to be used in HTTP probes
+ properties:
+ name:
+ description: The header field name
+ type: string
+ value:
+ description: The header field value
+ type: string
+ required:
+ - name
+ - value
+ type: object
+ type: array
+ path:
+ description: Path to access on the HTTP server.
+ type: string
+ port:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Name or number of the port to
+ access on the container. Number must be
+ in the range 1 to 65535. Name must be an
+ IANA_SVC_NAME.
+ x-kubernetes-int-or-string: true
+ scheme:
+ description: Scheme to use for connecting
+ to the host. Defaults to HTTP.
+ type: string
+ required:
+ - port
+ type: object
+ tcpSocket:
+ description: 'TCPSocket specifies an action involving
+ a TCP port. TCP hooks not yet supported TODO:
+ implement a realistic TCP lifecycle hook'
+ properties:
+ host:
+ description: 'Optional: Host name to connect
+ to, defaults to the pod IP.'
+ type: string
+ port:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Number or name of the port to
+ access on the container. Number must be
+ in the range 1 to 65535. Name must be an
+ IANA_SVC_NAME.
+ x-kubernetes-int-or-string: true
+ required:
+ - port
+ type: object
+ type: object
+ type: object
+ livenessProbe:
+ description: 'Periodic probe of container liveness. Container
+ will be restarted if the probe fails. Cannot be updated.
+ More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes'
+ properties:
+ exec:
+ description: One and only one of the following should
+ be specified. Exec specifies the action to take.
+ properties:
+ command:
+ description: Command is the command line to execute
+ inside the container, the working directory
+ for the command is root ('/') in the container's
+ filesystem. The command is simply exec'd, it
+ is not run inside a shell, so traditional shell
+ instructions ('|', etc) won't work. To use a
+ shell, you need to explicitly call out to that
+ shell. Exit status of 0 is treated as live/healthy
+ and non-zero is unhealthy.
+ items:
+ type: string
+ type: array
+ type: object
+ failureThreshold:
+ description: Minimum consecutive failures for the
+ probe to be considered failed after having succeeded.
+ Defaults to 3. Minimum value is 1.
+ format: int32
+ type: integer
+ httpGet:
+ description: HTTPGet specifies the http request to
+ perform.
+ properties:
+ host:
+ description: Host name to connect to, defaults
+ to the pod IP. You probably want to set "Host"
+ in httpHeaders instead.
+ type: string
+ httpHeaders:
+ description: Custom headers to set in the request.
+ HTTP allows repeated headers.
+ items:
+ description: HTTPHeader describes a custom header
+ to be used in HTTP probes
+ properties:
+ name:
+ description: The header field name
+ type: string
+ value:
+ description: The header field value
+ type: string
+ required:
+ - name
+ - value
+ type: object
+ type: array
+ path:
+ description: Path to access on the HTTP server.
+ type: string
+ port:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Name or number of the port to access
+ on the container. Number must be in the range
+ 1 to 65535. Name must be an IANA_SVC_NAME.
+ x-kubernetes-int-or-string: true
+ scheme:
+ description: Scheme to use for connecting to the
+ host. Defaults to HTTP.
+ type: string
+ required:
+ - port
+ type: object
+ initialDelaySeconds:
+ description: 'Number of seconds after the container
+ has started before liveness probes are initiated.
+ More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes'
+ format: int32
+ type: integer
+ periodSeconds:
+ description: How often (in seconds) to perform the
+                    probe. Defaults to 10 seconds. Minimum value is 1.
+ format: int32
+ type: integer
+ successThreshold:
+ description: Minimum consecutive successes for the
+ probe to be considered successful after having failed.
+ Defaults to 1. Must be 1 for liveness and startup.
+ Minimum value is 1.
+ format: int32
+ type: integer
+ tcpSocket:
+ description: 'TCPSocket specifies an action involving
+ a TCP port. TCP hooks not yet supported TODO: implement
+ a realistic TCP lifecycle hook'
+ properties:
+ host:
+ description: 'Optional: Host name to connect to,
+ defaults to the pod IP.'
+ type: string
+ port:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Number or name of the port to access
+ on the container. Number must be in the range
+ 1 to 65535. Name must be an IANA_SVC_NAME.
+ x-kubernetes-int-or-string: true
+ required:
+ - port
+ type: object
+ terminationGracePeriodSeconds:
+ description: Optional duration in seconds the pod
+ needs to terminate gracefully upon probe failure.
+ The grace period is the duration in seconds after
+ the processes running in the pod are sent a termination
+ signal and the time when the processes are forcibly
+ halted with a kill signal. Set this value longer
+ than the expected cleanup time for your process.
+ If this value is nil, the pod's terminationGracePeriodSeconds
+ will be used. Otherwise, this value overrides the
+                    value provided by the pod spec. Value must be a non-negative
+                    integer. The value zero indicates stop immediately
+ via the kill signal (no opportunity to shut down).
+ This is an alpha field and requires enabling ProbeTerminationGracePeriod
+ feature gate.
+ format: int64
+ type: integer
+ timeoutSeconds:
+ description: 'Number of seconds after which the probe
+ times out. Defaults to 1 second. Minimum value is
+ 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes'
+ format: int32
+ type: integer
+ type: object
+ name:
+ description: Name of the container specified as a DNS_LABEL.
+ Each container in a pod must have a unique name (DNS_LABEL).
+ Cannot be updated.
+ type: string
+ ports:
+ description: List of ports to expose from the container.
+ Exposing a port here gives the system additional information
+ about the network connections a container uses, but
+ is primarily informational. Not specifying a port here
+ DOES NOT prevent that port from being exposed. Any port
+ which is listening on the default "0.0.0.0" address
+ inside a container will be accessible from the network.
+ Cannot be updated.
+ items:
+ description: ContainerPort represents a network port
+ in a single container.
+ properties:
+ containerPort:
+ description: Number of port to expose on the pod's
+ IP address. This must be a valid port number,
+ 0 < x < 65536.
+ format: int32
+ type: integer
+ hostIP:
+ description: What host IP to bind the external port
+ to.
+ type: string
+ hostPort:
+ description: Number of port to expose on the host.
+ If specified, this must be a valid port number,
+ 0 < x < 65536. If HostNetwork is specified, this
+ must match ContainerPort. Most containers do not
+ need this.
+ format: int32
+ type: integer
+ name:
+ description: If specified, this must be an IANA_SVC_NAME
+ and unique within the pod. Each named port in
+ a pod must have a unique name. Name for the port
+ that can be referred to by services.
+ type: string
+ protocol:
+ default: TCP
+ description: Protocol for port. Must be UDP, TCP,
+ or SCTP. Defaults to "TCP".
+ type: string
+ required:
+ - containerPort
+ type: object
+ type: array
+ x-kubernetes-list-map-keys:
+ - containerPort
+ - protocol
+ x-kubernetes-list-type: map
+ readinessProbe:
+ description: 'Periodic probe of container service readiness.
+ Container will be removed from service endpoints if
+ the probe fails. Cannot be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes'
+ properties:
+ exec:
+ description: One and only one of the following should
+ be specified. Exec specifies the action to take.
+ properties:
+ command:
+ description: Command is the command line to execute
+ inside the container, the working directory
+ for the command is root ('/') in the container's
+ filesystem. The command is simply exec'd, it
+ is not run inside a shell, so traditional shell
+ instructions ('|', etc) won't work. To use a
+ shell, you need to explicitly call out to that
+ shell. Exit status of 0 is treated as live/healthy
+ and non-zero is unhealthy.
+ items:
+ type: string
+ type: array
+ type: object
+ failureThreshold:
+ description: Minimum consecutive failures for the
+ probe to be considered failed after having succeeded.
+ Defaults to 3. Minimum value is 1.
+ format: int32
+ type: integer
+ httpGet:
+ description: HTTPGet specifies the http request to
+ perform.
+ properties:
+ host:
+ description: Host name to connect to, defaults
+ to the pod IP. You probably want to set "Host"
+ in httpHeaders instead.
+ type: string
+ httpHeaders:
+ description: Custom headers to set in the request.
+ HTTP allows repeated headers.
+ items:
+ description: HTTPHeader describes a custom header
+ to be used in HTTP probes
+ properties:
+ name:
+ description: The header field name
+ type: string
+ value:
+ description: The header field value
+ type: string
+ required:
+ - name
+ - value
+ type: object
+ type: array
+ path:
+ description: Path to access on the HTTP server.
+ type: string
+ port:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Name or number of the port to access
+ on the container. Number must be in the range
+ 1 to 65535. Name must be an IANA_SVC_NAME.
+ x-kubernetes-int-or-string: true
+ scheme:
+ description: Scheme to use for connecting to the
+ host. Defaults to HTTP.
+ type: string
+ required:
+ - port
+ type: object
+ initialDelaySeconds:
+ description: 'Number of seconds after the container
+ has started before liveness probes are initiated.
+ More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes'
+ format: int32
+ type: integer
+ periodSeconds:
+ description: How often (in seconds) to perform the
+                    probe. Defaults to 10 seconds. Minimum value is 1.
+ format: int32
+ type: integer
+ successThreshold:
+ description: Minimum consecutive successes for the
+ probe to be considered successful after having failed.
+ Defaults to 1. Must be 1 for liveness and startup.
+ Minimum value is 1.
+ format: int32
+ type: integer
+ tcpSocket:
+ description: 'TCPSocket specifies an action involving
+ a TCP port. TCP hooks not yet supported TODO: implement
+ a realistic TCP lifecycle hook'
+ properties:
+ host:
+ description: 'Optional: Host name to connect to,
+ defaults to the pod IP.'
+ type: string
+ port:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Number or name of the port to access
+ on the container. Number must be in the range
+ 1 to 65535. Name must be an IANA_SVC_NAME.
+ x-kubernetes-int-or-string: true
+ required:
+ - port
+ type: object
+ terminationGracePeriodSeconds:
+ description: Optional duration in seconds the pod
+ needs to terminate gracefully upon probe failure.
+ The grace period is the duration in seconds after
+ the processes running in the pod are sent a termination
+ signal and the time when the processes are forcibly
+ halted with a kill signal. Set this value longer
+ than the expected cleanup time for your process.
+ If this value is nil, the pod's terminationGracePeriodSeconds
+ will be used. Otherwise, this value overrides the
+                    value provided by the pod spec. Value must be a non-negative
+                    integer. The value zero indicates stop immediately
+ via the kill signal (no opportunity to shut down).
+ This is an alpha field and requires enabling ProbeTerminationGracePeriod
+ feature gate.
+ format: int64
+ type: integer
+ timeoutSeconds:
+ description: 'Number of seconds after which the probe
+ times out. Defaults to 1 second. Minimum value is
+ 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes'
+ format: int32
+ type: integer
+ type: object
+ resources:
+ description: 'Compute Resources required by this container.
+ Cannot be updated. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ description: 'Limits describes the maximum amount
+ of compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ description: 'Requests describes the minimum amount
+ of compute resources required. If Requests is omitted
+ for a container, it defaults to Limits if that is
+ explicitly specified, otherwise to an implementation-defined
+ value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
+ type: object
+ type: object
+ securityContext:
+ description: 'Security options the pod should run with.
+ More info: https://kubernetes.io/docs/concepts/policy/security-context/
+ More info: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/'
+ properties:
+ allowPrivilegeEscalation:
+ description: 'AllowPrivilegeEscalation controls whether
+ a process can gain more privileges than its parent
+ process. This bool directly controls if the no_new_privs
+ flag will be set on the container process. AllowPrivilegeEscalation
+ is true always when the container is: 1) run as
+ Privileged 2) has CAP_SYS_ADMIN'
+ type: boolean
+ capabilities:
+ description: The capabilities to add/drop when running
+ containers. Defaults to the default set of capabilities
+ granted by the container runtime.
+ properties:
+ add:
+ description: Added capabilities
+ items:
+ description: Capability represents a POSIX capabilities
+ type
+ type: string
+ type: array
+ drop:
+ description: Removed capabilities
+ items:
+ description: Capability represents a POSIX capabilities
+ type
+ type: string
+ type: array
+ type: object
+ privileged:
+ description: Run container in privileged mode. Processes
+ in privileged containers are essentially equivalent
+ to root on the host. Defaults to false.
+ type: boolean
+ procMount:
+ description: procMount denotes the type of proc mount
+ to use for the containers. The default is DefaultProcMount
+ which uses the container runtime defaults for readonly
+ paths and masked paths. This requires the ProcMountType
+ feature flag to be enabled.
+ type: string
+ readOnlyRootFilesystem:
+ description: Whether this container has a read-only
+ root filesystem. Default is false.
+ type: boolean
+ runAsGroup:
+ description: The GID to run the entrypoint of the
+ container process. Uses runtime default if unset.
+ May also be set in PodSecurityContext. If set in
+ both SecurityContext and PodSecurityContext, the
+ value specified in SecurityContext takes precedence.
+ format: int64
+ type: integer
+ runAsNonRoot:
+ description: Indicates that the container must run
+ as a non-root user. If true, the Kubelet will validate
+ the image at runtime to ensure that it does not
+ run as UID 0 (root) and fail to start the container
+ if it does. If unset or false, no such validation
+ will be performed. May also be set in PodSecurityContext. If
+ set in both SecurityContext and PodSecurityContext,
+ the value specified in SecurityContext takes precedence.
+ type: boolean
+ runAsUser:
+ description: The UID to run the entrypoint of the
+ container process. Defaults to user specified in
+ image metadata if unspecified. May also be set in
+ PodSecurityContext. If set in both SecurityContext
+ and PodSecurityContext, the value specified in SecurityContext
+ takes precedence.
+ format: int64
+ type: integer
+ seLinuxOptions:
+ description: The SELinux context to be applied to
+ the container. If unspecified, the container runtime
+ will allocate a random SELinux context for each
+ container. May also be set in PodSecurityContext. If
+ set in both SecurityContext and PodSecurityContext,
+ the value specified in SecurityContext takes precedence.
+ properties:
+ level:
+ description: Level is SELinux level label that
+ applies to the container.
+ type: string
+ role:
+ description: Role is a SELinux role label that
+ applies to the container.
+ type: string
+ type:
+ description: Type is a SELinux type label that
+ applies to the container.
+ type: string
+ user:
+ description: User is a SELinux user label that
+ applies to the container.
+ type: string
+ type: object
+ seccompProfile:
+ description: The seccomp options to use by this container.
+ If seccomp options are provided at both the pod
+ & container level, the container options override
+ the pod options.
+ properties:
+ localhostProfile:
+ description: localhostProfile indicates a profile
+ defined in a file on the node should be used.
+ The profile must be preconfigured on the node
+ to work. Must be a descending path, relative
+ to the kubelet's configured seccomp profile
+ location. Must only be set if type is "Localhost".
+ type: string
+ type:
+ description: "type indicates which kind of seccomp
+ profile will be applied. Valid options are:
+ \n Localhost - a profile defined in a file on
+ the node should be used. RuntimeDefault - the
+ container runtime default profile should be
+ used. Unconfined - no profile should be applied."
+ type: string
+ required:
+ - type
+ type: object
+ windowsOptions:
+ description: The Windows specific settings applied
+ to all containers. If unspecified, the options from
+ the PodSecurityContext will be used. If set in both
+ SecurityContext and PodSecurityContext, the value
+ specified in SecurityContext takes precedence.
+ properties:
+ gmsaCredentialSpec:
+ description: GMSACredentialSpec is where the GMSA
+ admission webhook (https://github.com/kubernetes-sigs/windows-gmsa)
+ inlines the contents of the GMSA credential
+ spec named by the GMSACredentialSpecName field.
+ type: string
+ gmsaCredentialSpecName:
+ description: GMSACredentialSpecName is the name
+ of the GMSA credential spec to use.
+ type: string
+ runAsUserName:
+ description: The UserName in Windows to run the
+ entrypoint of the container process. Defaults
+ to the user specified in image metadata if unspecified.
+ May also be set in PodSecurityContext. If set
+ in both SecurityContext and PodSecurityContext,
+ the value specified in SecurityContext takes
+ precedence.
+ type: string
+ type: object
+ type: object
+ startupProbe:
+ description: 'StartupProbe indicates that the Pod has
+ successfully initialized. If specified, no other probes
+ are executed until this completes successfully. If this
+ probe fails, the Pod will be restarted, just as if the
+ livenessProbe failed. This can be used to provide different
+ probe parameters at the beginning of a Pod''s lifecycle,
+ when it might take a long time to load data or warm
+ a cache, than during steady-state operation. This cannot
+ be updated. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes'
+ properties:
+ exec:
+ description: One and only one of the following should
+ be specified. Exec specifies the action to take.
+ properties:
+ command:
+ description: Command is the command line to execute
+ inside the container, the working directory
+ for the command is root ('/') in the container's
+ filesystem. The command is simply exec'd, it
+ is not run inside a shell, so traditional shell
+ instructions ('|', etc) won't work. To use a
+ shell, you need to explicitly call out to that
+ shell. Exit status of 0 is treated as live/healthy
+ and non-zero is unhealthy.
+ items:
+ type: string
+ type: array
+ type: object
+ failureThreshold:
+ description: Minimum consecutive failures for the
+ probe to be considered failed after having succeeded.
+ Defaults to 3. Minimum value is 1.
+ format: int32
+ type: integer
+ httpGet:
+ description: HTTPGet specifies the http request to
+ perform.
+ properties:
+ host:
+ description: Host name to connect to, defaults
+ to the pod IP. You probably want to set "Host"
+ in httpHeaders instead.
+ type: string
+ httpHeaders:
+ description: Custom headers to set in the request.
+ HTTP allows repeated headers.
+ items:
+ description: HTTPHeader describes a custom header
+ to be used in HTTP probes
+ properties:
+ name:
+ description: The header field name
+ type: string
+ value:
+ description: The header field value
+ type: string
+ required:
+ - name
+ - value
+ type: object
+ type: array
+ path:
+ description: Path to access on the HTTP server.
+ type: string
+ port:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Name or number of the port to access
+ on the container. Number must be in the range
+ 1 to 65535. Name must be an IANA_SVC_NAME.
+ x-kubernetes-int-or-string: true
+ scheme:
+ description: Scheme to use for connecting to the
+ host. Defaults to HTTP.
+ type: string
+ required:
+ - port
+ type: object
+ initialDelaySeconds:
+ description: 'Number of seconds after the container
+ has started before liveness probes are initiated.
+ More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes'
+ format: int32
+ type: integer
+ periodSeconds:
+ description: How often (in seconds) to perform the
+ probe. Defaults to 10 seconds. Minimum value is 1.
+ format: int32
+ type: integer
+ successThreshold:
+ description: Minimum consecutive successes for the
+ probe to be considered successful after having failed.
+ Defaults to 1. Must be 1 for liveness and startup.
+ Minimum value is 1.
+ format: int32
+ type: integer
+ tcpSocket:
+ description: 'TCPSocket specifies an action involving
+ a TCP port. TCP hooks not yet supported TODO: implement
+ a realistic TCP lifecycle hook'
+ properties:
+ host:
+ description: 'Optional: Host name to connect to,
+ defaults to the pod IP.'
+ type: string
+ port:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Number or name of the port to access
+ on the container. Number must be in the range
+ 1 to 65535. Name must be an IANA_SVC_NAME.
+ x-kubernetes-int-or-string: true
+ required:
+ - port
+ type: object
+ terminationGracePeriodSeconds:
+ description: Optional duration in seconds the pod
+ needs to terminate gracefully upon probe failure.
+ The grace period is the duration in seconds after
+ the processes running in the pod are sent a termination
+ signal and the time when the processes are forcibly
+ halted with a kill signal. Set this value longer
+ than the expected cleanup time for your process.
+ If this value is nil, the pod's terminationGracePeriodSeconds
+ will be used. Otherwise, this value overrides the
+ value provided by the pod spec. Value must be non-negative
+ integer. The value zero indicates stop immediately
+ via the kill signal (no opportunity to shut down).
+ This is an alpha field and requires enabling ProbeTerminationGracePeriod
+ feature gate.
+ format: int64
+ type: integer
+ timeoutSeconds:
+ description: 'Number of seconds after which the probe
+ times out. Defaults to 1 second. Minimum value is
+ 1. More info: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle#container-probes'
+ format: int32
+ type: integer
+ type: object
+ stdin:
+ description: Whether this container should allocate a
+ buffer for stdin in the container runtime. If this is
+ not set, reads from stdin in the container will always
+ result in EOF. Default is false.
+ type: boolean
+ stdinOnce:
+ description: Whether the container runtime should close
+ the stdin channel after it has been opened by a single
+ attach. When stdin is true the stdin stream will remain
+ open across multiple attach sessions. If stdinOnce is
+ set to true, stdin is opened on container start, is
+ empty until the first client attaches to stdin, and
+ then remains open and accepts data until the client
+ disconnects, at which time stdin is closed and remains
+ closed until the container is restarted. If this flag
+ is false, a container process that reads from stdin
+ will never receive an EOF. Default is false.
+ type: boolean
+ terminationMessagePath:
+ description: 'Optional: Path at which the file to which
+ the container''s termination message will be written
+ is mounted into the container''s filesystem. Message
+ written is intended to be brief final status, such as
+ an assertion failure message. Will be truncated by the
+ node if greater than 4096 bytes. The total message length
+ across all containers will be limited to 12kb. Defaults
+ to /dev/termination-log. Cannot be updated.'
+ type: string
+ terminationMessagePolicy:
+ description: Indicate how the termination message should
+ be populated. File will use the contents of terminationMessagePath
+ to populate the container status message on both success
+ and failure. FallbackToLogsOnError will use the last
+ chunk of container log output if the termination message
+ file is empty and the container exited with an error.
+ The log output is limited to 2048 bytes or 80 lines,
+ whichever is smaller. Defaults to File. Cannot be updated.
+ type: string
+ tty:
+ description: Whether this container should allocate a
+ TTY for itself, also requires 'stdin' to be true. Default
+ is false.
+ type: boolean
+ volumeDevices:
+ description: volumeDevices is the list of block devices
+ to be used by the container.
+ items:
+ description: volumeDevice describes a mapping of a raw
+ block device within a container.
+ properties:
+ devicePath:
+ description: devicePath is the path inside of the
+ container that the device will be mapped to.
+ type: string
+ name:
+ description: name must match the name of a persistentVolumeClaim
+ in the pod
+ type: string
+ required:
+ - devicePath
+ - name
+ type: object
+ type: array
+ volumeMounts:
+ description: Pod volumes to mount into the container's
+ filesystem. Cannot be updated.
+ items:
+ description: VolumeMount describes a mounting of a Volume
+ within a container.
+ properties:
+ mountPath:
+ description: Path within the container at which
+ the volume should be mounted. Must not contain
+ ':'.
+ type: string
+ mountPropagation:
+ description: mountPropagation determines how mounts
+ are propagated from the host to container and
+ the other way around. When not set, MountPropagationNone
+ is used. This field is beta in 1.10.
+ type: string
+ name:
+ description: This must match the Name of a Volume.
+ type: string
+ readOnly:
+ description: Mounted read-only if true, read-write
+ otherwise (false or unspecified). Defaults to
+ false.
+ type: boolean
+ subPath:
+ description: Path within the volume from which the
+ container's volume should be mounted. Defaults
+ to "" (volume's root).
+ type: string
+ subPathExpr:
+ description: Expanded path within the volume from
+ which the container's volume should be mounted.
+ Behaves similarly to SubPath but environment variable
+ references $(VAR_NAME) are expanded using the
+ container's environment. Defaults to "" (volume's
+ root). SubPathExpr and SubPath are mutually exclusive.
+ type: string
+ required:
+ - mountPath
+ - name
+ type: object
+ type: array
+ workingDir:
+ description: Container's working directory. If not specified,
+ the container runtime's default will be used, which
+ might be configured in the container image. Cannot be
+ updated.
+ type: string
+ required:
+ - name
+ type: object
+ type: array
+ labels:
+ additionalProperties:
+ type: string
+ type: object
+ metricsExporterResources:
+ description: MetricsExporterResources allows you to specify
+ resources for the metrics exporter container
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ description: 'Limits describes the maximum amount of compute
+ resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ description: 'Requests describes the minimum amount of compute
+ resources required. If Requests is omitted for a container,
+ it defaults to Limits if that is explicitly specified,
+ otherwise to an implementation-defined value. More info:
+ https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
+ type: object
+ type: object
+ mysqlLifecycle:
+ description: Lifecycle describes actions that the management
+ system should take in response to container lifecycle events.
+ For the PostStart and PreStop lifecycle handlers, management
+ of the container blocks until the action is complete, unless
+ the container process fails, in which case the handler is
+ aborted.
+ properties:
+ postStart:
+ description: 'PostStart is called immediately after a container
+ is created. If the handler fails, the container is terminated
+ and restarted according to its restart policy. Other management
+ of the container blocks until the hook completes. More
+ info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks'
+ properties:
+ exec:
+ description: One and only one of the following should
+ be specified. Exec specifies the action to take.
+ properties:
+ command:
+ description: Command is the command line to execute
+ inside the container, the working directory for
+ the command is root ('/') in the container's
+ filesystem. The command is simply exec'd, it is
+ not run inside a shell, so traditional shell instructions
+ ('|', etc) won't work. To use a shell, you need
+ to explicitly call out to that shell. Exit status
+ of 0 is treated as live/healthy and non-zero is
+ unhealthy.
+ items:
+ type: string
+ type: array
+ type: object
+ httpGet:
+ description: HTTPGet specifies the http request to perform.
+ properties:
+ host:
+ description: Host name to connect to, defaults to
+ the pod IP. You probably want to set "Host" in
+ httpHeaders instead.
+ type: string
+ httpHeaders:
+ description: Custom headers to set in the request.
+ HTTP allows repeated headers.
+ items:
+ description: HTTPHeader describes a custom header
+ to be used in HTTP probes
+ properties:
+ name:
+ description: The header field name
+ type: string
+ value:
+ description: The header field value
+ type: string
+ required:
+ - name
+ - value
+ type: object
+ type: array
+ path:
+ description: Path to access on the HTTP server.
+ type: string
+ port:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Name or number of the port to access
+ on the container. Number must be in the range
+ 1 to 65535. Name must be an IANA_SVC_NAME.
+ x-kubernetes-int-or-string: true
+ scheme:
+ description: Scheme to use for connecting to the
+ host. Defaults to HTTP.
+ type: string
+ required:
+ - port
+ type: object
+ tcpSocket:
+ description: 'TCPSocket specifies an action involving
+ a TCP port. TCP hooks not yet supported TODO: implement
+ a realistic TCP lifecycle hook'
+ properties:
+ host:
+ description: 'Optional: Host name to connect to,
+ defaults to the pod IP.'
+ type: string
+ port:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Number or name of the port to access
+ on the container. Number must be in the range
+ 1 to 65535. Name must be an IANA_SVC_NAME.
+ x-kubernetes-int-or-string: true
+ required:
+ - port
+ type: object
+ type: object
+ preStop:
+ description: 'PreStop is called immediately before a container
+ is terminated due to an API request or management event
+ such as liveness/startup probe failure, preemption, resource
+ contention, etc. The handler is not called if the container
+ crashes or exits. The reason for termination is passed
+ to the handler. The Pod''s termination grace period countdown
+ begins before the PreStop hook is executed. Regardless
+ of the outcome of the handler, the container will eventually
+ terminate within the Pod''s termination grace period.
+ Other management of the container blocks until the hook
+ completes or until the termination grace period is reached.
+ More info: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/#container-hooks'
+ properties:
+ exec:
+ description: One and only one of the following should
+ be specified. Exec specifies the action to take.
+ properties:
+ command:
+ description: Command is the command line to execute
+ inside the container, the working directory for
+ the command is root ('/') in the container's
+ filesystem. The command is simply exec'd, it is
+ not run inside a shell, so traditional shell instructions
+ ('|', etc) won't work. To use a shell, you need
+ to explicitly call out to that shell. Exit status
+ of 0 is treated as live/healthy and non-zero is
+ unhealthy.
+ items:
+ type: string
+ type: array
+ type: object
+ httpGet:
+ description: HTTPGet specifies the http request to perform.
+ properties:
+ host:
+ description: Host name to connect to, defaults to
+ the pod IP. You probably want to set "Host" in
+ httpHeaders instead.
+ type: string
+ httpHeaders:
+ description: Custom headers to set in the request.
+ HTTP allows repeated headers.
+ items:
+ description: HTTPHeader describes a custom header
+ to be used in HTTP probes
+ properties:
+ name:
+ description: The header field name
+ type: string
+ value:
+ description: The header field value
+ type: string
+ required:
+ - name
+ - value
+ type: object
+ type: array
+ path:
+ description: Path to access on the HTTP server.
+ type: string
+ port:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Name or number of the port to access
+ on the container. Number must be in the range
+ 1 to 65535. Name must be an IANA_SVC_NAME.
+ x-kubernetes-int-or-string: true
+ scheme:
+ description: Scheme to use for connecting to the
+ host. Defaults to HTTP.
+ type: string
+ required:
+ - port
+ type: object
+ tcpSocket:
+ description: 'TCPSocket specifies an action involving
+ a TCP port. TCP hooks not yet supported TODO: implement
+ a realistic TCP lifecycle hook'
+ properties:
+ host:
+ description: 'Optional: Host name to connect to,
+ defaults to the pod IP.'
+ type: string
+ port:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Number or name of the port to access
+ on the container. Number must be in the range
+ 1 to 65535. Name must be an IANA_SVC_NAME.
+ x-kubernetes-int-or-string: true
+ required:
+ - port
+ type: object
+ type: object
+ type: object
+ mysqlOperatorSidecarResources:
+ description: MySQLOperatorSidecarResources allows you to specify
+ resources for the sidecar container used to take backups with
+ xtrabackup
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ description: 'Limits describes the maximum amount of compute
+ resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ description: 'Requests describes the minimum amount of compute
+ resources required. If Requests is omitted for a container,
+ it defaults to Limits if that is explicitly specified,
+ otherwise to an implementation-defined value. More info:
+ https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
+ type: object
+ type: object
+ nodeSelector:
+ additionalProperties:
+ type: string
+ type: object
+ priorityClassName:
+ type: string
+ ptHeartbeatResources:
+ description: PtHeartbeatResources allows you to specify resources
+ for the pt-heartbeat container
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ description: 'Limits describes the maximum amount of compute
+ resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ description: 'Requests describes the minimum amount of compute
+ resources required. If Requests is omitted for a container,
+ it defaults to Limits if that is explicitly specified,
+ otherwise to an implementation-defined value. More info:
+ https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
+ type: object
+ type: object
+ resources:
+ description: ResourceRequirements describes the compute resource
+ requirements.
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ description: 'Limits describes the maximum amount of compute
+ resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ description: 'Requests describes the minimum amount of compute
+ resources required. If Requests is omitted for a container,
+ it defaults to Limits if that is explicitly specified,
+ otherwise to an implementation-defined value. More info:
+ https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
+ type: object
+ type: object
+ serviceAccountName:
+ type: string
+ tolerations:
+ items:
+ description: The pod this Toleration is attached to tolerates
+ any taint that matches the triple <key,value,effect> using
+ the matching operator <operator>.
+ properties:
+ effect:
+ description: Effect indicates the taint effect to match.
+ Empty means match all taint effects. When specified,
+ allowed values are NoSchedule, PreferNoSchedule and
+ NoExecute.
+ type: string
+ key:
+ description: Key is the taint key that the toleration
+ applies to. Empty means match all taint keys. If the
+ key is empty, operator must be Exists; this combination
+ means to match all values and all keys.
+ type: string
+ operator:
+ description: Operator represents a key's relationship
+ to the value. Valid operators are Exists and Equal.
+ Defaults to Equal. Exists is equivalent to wildcard
+ for value, so that a pod can tolerate all taints of
+ a particular category.
+ type: string
+ tolerationSeconds:
+ description: TolerationSeconds represents the period of
+ time the toleration (which must be of effect NoExecute,
+ otherwise this field is ignored) tolerates the taint.
+ By default, it is not set, which means tolerate the
+ taint forever (do not evict). Zero and negative values
+ will be treated as 0 (evict immediately) by the system.
+ format: int64
+ type: integer
+ value:
+ description: Value is the taint value the toleration matches
+ to. If the operator is Exists, the value should be empty,
+ otherwise just a regular string.
+ type: string
+ type: object
+ type: array
+ volumeMounts:
+ description: VolumeMounts allows mounting extra volumes to
+ the mysql container
+ items:
+ description: VolumeMount describes a mounting of a Volume
+ within a container.
+ properties:
+ mountPath:
+ description: Path within the container at which the volume
+ should be mounted. Must not contain ':'.
+ type: string
+ mountPropagation:
+ description: mountPropagation determines how mounts are
+ propagated from the host to container and the other
+ way around. When not set, MountPropagationNone is used.
+ This field is beta in 1.10.
+ type: string
+ name:
+ description: This must match the Name of a Volume.
+ type: string
+ readOnly:
+ description: Mounted read-only if true, read-write otherwise
+ (false or unspecified). Defaults to false.
+ type: boolean
+ subPath:
+ description: Path within the volume from which the container's
+ volume should be mounted. Defaults to "" (volume's root).
+ type: string
+ subPathExpr:
+ description: Expanded path within the volume from which
+ the container's volume should be mounted. Behaves similarly
+ to SubPath but environment variable references $(VAR_NAME)
+ are expanded using the container's environment. Defaults
+ to "" (volume's root). SubPathExpr and SubPath are mutually
+ exclusive.
+ type: string
+ required:
+ - mountPath
+ - name
+ type: object
+ type: array
+ volumes:
+ description: Volumes allows adding extra volumes to the statefulset
+ items:
+ description: Volume represents a named volume in a pod that
+ may be accessed by any container in the pod.
+ properties:
+ awsElasticBlockStore:
+ description: 'AWSElasticBlockStore represents an AWS Disk
+ resource that is attached to a kubelet''s host machine
+ and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore'
+ properties:
+ fsType:
+ description: 'Filesystem type of the volume that you
+ want to mount. Tip: Ensure that the filesystem type
+ is supported by the host operating system. Examples:
+ "ext4", "xfs", "ntfs". Implicitly inferred to be
+ "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore
+ TODO: how do we prevent errors in the filesystem
+ from compromising the machine'
+ type: string
+ partition:
+ description: 'The partition in the volume that you
+ want to mount. If omitted, the default is to mount
+ by volume name. Examples: For volume /dev/sda1,
+ you specify the partition as "1". Similarly, the
+ volume partition for /dev/sda is "0" (or you can
+ leave the property empty).'
+ format: int32
+ type: integer
+ readOnly:
+ description: 'Specify "true" to force and set the
+ ReadOnly property in VolumeMounts to "true". If
+ omitted, the default is "false". More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore'
+ type: boolean
+ volumeID:
+ description: 'Unique ID of the persistent disk resource
+ in AWS (Amazon EBS volume). More info: https://kubernetes.io/docs/concepts/storage/volumes#awselasticblockstore'
+ type: string
+ required:
+ - volumeID
+ type: object
+ azureDisk:
+ description: AzureDisk represents an Azure Data Disk mount
+ on the host and bind mount to the pod.
+ properties:
+ cachingMode:
+ description: 'Host Caching mode: None, Read Only,
+ Read Write.'
+ type: string
+ diskName:
+ description: The Name of the data disk in the blob
+ storage
+ type: string
+ diskURI:
+ description: The URI the data disk in the blob storage
+ type: string
+ fsType:
+ description: Filesystem type to mount. Must be a filesystem
+ type supported by the host operating system. Ex.
+ "ext4", "xfs", "ntfs". Implicitly inferred to be
+ "ext4" if unspecified.
+ type: string
+ kind:
+ description: 'Expected values Shared: multiple blob
+ disks per storage account Dedicated: single blob
+ disk per storage account Managed: azure managed
+ data disk (only in managed availability set). defaults
+ to shared'
+ type: string
+ readOnly:
+ description: Defaults to false (read/write). ReadOnly
+ here will force the ReadOnly setting in VolumeMounts.
+ type: boolean
+ required:
+ - diskName
+ - diskURI
+ type: object
+ azureFile:
+ description: AzureFile represents an Azure File Service
+ mount on the host and bind mount to the pod.
+ properties:
+ readOnly:
+ description: Defaults to false (read/write). ReadOnly
+ here will force the ReadOnly setting in VolumeMounts.
+ type: boolean
+ secretName:
+ description: the name of secret that contains Azure
+ Storage Account Name and Key
+ type: string
+ shareName:
+ description: Share Name
+ type: string
+ required:
+ - secretName
+ - shareName
+ type: object
+ cephfs:
+ description: CephFS represents a Ceph FS mount on the
+ host that shares a pod's lifetime
+ properties:
+ monitors:
+ description: 'Required: Monitors is a collection of
+ Ceph monitors More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it'
+ items:
+ type: string
+ type: array
+ path:
+ description: 'Optional: Used as the mounted root,
+ rather than the full Ceph tree, default is /'
+ type: string
+ readOnly:
+ description: 'Optional: Defaults to false (read/write).
+ ReadOnly here will force the ReadOnly setting in
+ VolumeMounts. More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it'
+ type: boolean
+ secretFile:
+ description: 'Optional: SecretFile is the path to
+ key ring for User, default is /etc/ceph/user.secret
+ More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it'
+ type: string
+ secretRef:
+ description: 'Optional: SecretRef is reference to
+ the authentication secret for User, default is empty.
+ More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it'
+ properties:
+ name:
+ description: 'Name of the referent. More info:
+ https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+ TODO: Add other useful fields. apiVersion, kind,
+ uid?'
+ type: string
+ type: object
+ user:
+ description: 'Optional: User is the rados user name,
+ default is admin More info: https://examples.k8s.io/volumes/cephfs/README.md#how-to-use-it'
+ type: string
+ required:
+ - monitors
+ type: object
+ cinder:
+ description: 'Cinder represents a cinder volume attached
+ and mounted on kubelets host machine. More info: https://examples.k8s.io/mysql-cinder-pd/README.md'
+ properties:
+ fsType:
+ description: 'Filesystem type to mount. Must be a
+ filesystem type supported by the host operating
+ system. Examples: "ext4", "xfs", "ntfs". Implicitly
+ inferred to be "ext4" if unspecified. More info:
+ https://examples.k8s.io/mysql-cinder-pd/README.md'
+ type: string
+ readOnly:
+ description: 'Optional: Defaults to false (read/write).
+ ReadOnly here will force the ReadOnly setting in
+ VolumeMounts. More info: https://examples.k8s.io/mysql-cinder-pd/README.md'
+ type: boolean
+ secretRef:
+ description: 'Optional: points to a secret object
+ containing parameters used to connect to OpenStack.'
+ properties:
+ name:
+ description: 'Name of the referent. More info:
+ https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+ TODO: Add other useful fields. apiVersion, kind,
+ uid?'
+ type: string
+ type: object
+ volumeID:
+ description: 'volume id used to identify the volume
+ in cinder. More info: https://examples.k8s.io/mysql-cinder-pd/README.md'
+ type: string
+ required:
+ - volumeID
+ type: object
+ configMap:
+ description: ConfigMap represents a configMap that should
+ populate this volume
+ properties:
+ defaultMode:
+ description: 'Optional: mode bits used to set permissions
+ on created files by default. Must be an octal value
+ between 0000 and 0777 or a decimal value between
+ 0 and 511. YAML accepts both octal and decimal values,
+ JSON requires decimal values for mode bits. Defaults
+ to 0644. Directories within the path are not affected
+ by this setting. This might be in conflict with
+ other options that affect the file mode, like fsGroup,
+ and the result can be other mode bits set.'
+ format: int32
+ type: integer
+ items:
+ description: If unspecified, each key-value pair in
+ the Data field of the referenced ConfigMap will
+ be projected into the volume as a file whose name
+ is the key and content is the value. If specified,
+ the listed keys will be projected into the specified
+ paths, and unlisted keys will not be present. If
+ a key is specified which is not present in the ConfigMap,
+ the volume setup will error unless it is marked
+ optional. Paths must be relative and may not contain
+ the '..' path or start with '..'.
+ items:
+ description: Maps a string key to a path within
+ a volume.
+ properties:
+ key:
+ description: The key to project.
+ type: string
+ mode:
+ description: 'Optional: mode bits used to set
+ permissions on this file. Must be an octal
+ value between 0000 and 0777 or a decimal value
+ between 0 and 511. YAML accepts both octal
+ and decimal values, JSON requires decimal
+ values for mode bits. If not specified, the
+ volume defaultMode will be used. This might
+ be in conflict with other options that affect
+ the file mode, like fsGroup, and the result
+ can be other mode bits set.'
+ format: int32
+ type: integer
+ path:
+ description: The relative path of the file to
+ map the key to. May not be an absolute path.
+ May not contain the path element '..'. May
+ not start with the string '..'.
+ type: string
+ required:
+ - key
+ - path
+ type: object
+ type: array
+ name:
+ description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+ TODO: Add other useful fields. apiVersion, kind,
+ uid?'
+ type: string
+ optional:
+ description: Specify whether the ConfigMap or its
+ keys must be defined
+ type: boolean
+ type: object
+ csi:
+ description: CSI (Container Storage Interface) represents
+ ephemeral storage that is handled by certain external
+ CSI drivers (Beta feature).
+ properties:
+ driver:
+ description: Driver is the name of the CSI driver
+ that handles this volume. Consult with your admin
+ for the correct name as registered in the cluster.
+ type: string
+ fsType:
+ description: Filesystem type to mount. Ex. "ext4",
+ "xfs", "ntfs". If not provided, the empty value
+ is passed to the associated CSI driver which will
+ determine the default filesystem to apply.
+ type: string
+ nodePublishSecretRef:
+ description: NodePublishSecretRef is a reference to
+ the secret object containing sensitive information
+ to pass to the CSI driver to complete the CSI NodePublishVolume
+ and NodeUnpublishVolume calls. This field is optional,
+ and may be empty if no secret is required. If the
+ secret object contains more than one secret, all
+ secret references are passed.
+ properties:
+ name:
+ description: 'Name of the referent. More info:
+ https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+ TODO: Add other useful fields. apiVersion, kind,
+ uid?'
+ type: string
+ type: object
+ readOnly:
+ description: Specifies a read-only configuration for
+ the volume. Defaults to false (read/write).
+ type: boolean
+ volumeAttributes:
+ additionalProperties:
+ type: string
+ description: VolumeAttributes stores driver-specific
+ properties that are passed to the CSI driver. Consult
+ your driver's documentation for supported values.
+ type: object
+ required:
+ - driver
+ type: object
+ downwardAPI:
+ description: DownwardAPI represents downward API about
+ the pod that should populate this volume
+ properties:
+ defaultMode:
+                        description: 'Optional: mode bits used to set permissions
+                          on created files by default. Must be an octal value
+                          between 0000 and 0777 or a decimal value between 0
+                          and 511. YAML accepts both octal and decimal values,
+                          JSON requires decimal values for mode bits. Defaults
+                          to 0644. Directories within the path are not affected
+                          by this setting. This might be in conflict with other
+                          options that affect the file mode, like fsGroup, and
+                          the result can be other mode bits set.'
+ format: int32
+ type: integer
+ items:
+ description: Items is a list of downward API volume
+ file
+ items:
+ description: DownwardAPIVolumeFile represents information
+ to create the file containing the pod field
+ properties:
+ fieldRef:
+ description: 'Required: Selects a field of the
+ pod: only annotations, labels, name and namespace
+ are supported.'
+ properties:
+ apiVersion:
+ description: Version of the schema the FieldPath
+ is written in terms of, defaults to "v1".
+ type: string
+ fieldPath:
+ description: Path of the field to select
+ in the specified API version.
+ type: string
+ required:
+ - fieldPath
+ type: object
+ mode:
+ description: 'Optional: mode bits used to set
+ permissions on this file, must be an octal
+ value between 0000 and 0777 or a decimal value
+ between 0 and 511. YAML accepts both octal
+ and decimal values, JSON requires decimal
+ values for mode bits. If not specified, the
+ volume defaultMode will be used. This might
+ be in conflict with other options that affect
+ the file mode, like fsGroup, and the result
+ can be other mode bits set.'
+ format: int32
+ type: integer
+ path:
+ description: 'Required: Path is the relative
+ path name of the file to be created. Must
+ not be absolute or contain the ''..'' path.
+ Must be utf-8 encoded. The first item of the
+ relative path must not start with ''..'''
+ type: string
+ resourceFieldRef:
+ description: 'Selects a resource of the container:
+ only resources limits and requests (limits.cpu,
+ limits.memory, requests.cpu and requests.memory)
+ are currently supported.'
+ properties:
+ containerName:
+ description: 'Container name: required for
+ volumes, optional for env vars'
+ type: string
+ divisor:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Specifies the output format
+ of the exposed resources, defaults to
+ "1"
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ resource:
+ description: 'Required: resource to select'
+ type: string
+ required:
+ - resource
+ type: object
+ required:
+ - path
+ type: object
+ type: array
+ type: object
+ emptyDir:
+ description: 'EmptyDir represents a temporary directory
+ that shares a pod''s lifetime. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir'
+ properties:
+ medium:
+ description: 'What type of storage medium should back
+ this directory. The default is "" which means to
+ use the node''s default medium. Must be an empty
+ string (default) or Memory. More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir'
+ type: string
+ sizeLimit:
+ anyOf:
+ - type: integer
+ - type: string
+ description: 'Total amount of local storage required
+ for this EmptyDir volume. The size limit is also
+ applicable for memory medium. The maximum usage
+ on memory medium EmptyDir would be the minimum value
+ between the SizeLimit specified here and the sum
+ of memory limits of all containers in a pod. The
+ default is nil which means that the limit is undefined.
+ More info: http://kubernetes.io/docs/user-guide/volumes#emptydir'
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ ephemeral:
+ description: "Ephemeral represents a volume that is handled
+ by a cluster storage driver. The volume's lifecycle
+ is tied to the pod that defines it - it will be created
+ before the pod starts, and deleted when the pod is removed.
+ \n Use this if: a) the volume is only needed while the
+ pod runs, b) features of normal volumes like restoring
+ from snapshot or capacity tracking are needed, c)
+ the storage driver is specified through a storage class,
+ and d) the storage driver supports dynamic volume provisioning
+ through a PersistentVolumeClaim (see EphemeralVolumeSource
+ for more information on the connection between this
+ volume type and PersistentVolumeClaim). \n Use PersistentVolumeClaim
+ or one of the vendor-specific APIs for volumes that
+ persist for longer than the lifecycle of an individual
+ pod. \n Use CSI for light-weight local ephemeral volumes
+ if the CSI driver is meant to be used that way - see
+ the documentation of the driver for more information.
+ \n A pod can use both types of ephemeral volumes and
+ persistent volumes at the same time. \n This is a beta
+ feature and only available when the GenericEphemeralVolume
+ feature gate is enabled."
+ properties:
+ volumeClaimTemplate:
+ description: "Will be used to create a stand-alone
+ PVC to provision the volume. The pod in which this
+ EphemeralVolumeSource is embedded will be the owner
+ of the PVC, i.e. the PVC will be deleted together
+                      with the pod. The name of the PVC will be `<pod name>-<volume
+                      name>` where `<volume name>` is the name from the `PodSpec.Volumes`
+                      array entry. Pod
+ validation will reject the pod if the concatenated
+ name is not valid for a PVC (for example, too long).
+ \n An existing PVC with that name that is not owned
+ by the pod will *not* be used for the pod to avoid
+ using an unrelated volume by mistake. Starting the
+ pod is then blocked until the unrelated PVC is removed.
+ If such a pre-created PVC is meant to be used by
+                      the pod, the PVC has to be updated with an owner reference
+ to the pod once the pod exists. Normally this should
+ not be necessary, but it may be useful when manually
+ reconstructing a broken cluster. \n This field is
+ read-only and no changes will be made by Kubernetes
+ to the PVC after it has been created. \n Required,
+ must not be nil."
+ properties:
+ metadata:
+ description: May contain labels and annotations
+ that will be copied into the PVC when creating
+ it. No other fields are allowed and will be
+ rejected during validation.
+ type: object
+ spec:
+ description: The specification for the PersistentVolumeClaim.
+ The entire content is copied unchanged into
+ the PVC that gets created from this template.
+ The same fields as in a PersistentVolumeClaim
+ are also valid here.
+ properties:
+ accessModes:
+ description: 'AccessModes contains the desired
+ access modes the volume should have. More
+ info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
+ items:
+ type: string
+ type: array
+ dataSource:
+ description: 'This field can be used to specify
+ either: * An existing VolumeSnapshot object
+ (snapshot.storage.k8s.io/VolumeSnapshot)
+ * An existing PVC (PersistentVolumeClaim)
+ * An existing custom resource that implements
+ data population (Alpha) In order to use
+ custom resource types that implement data
+ population, the AnyVolumeDataSource feature
+ gate must be enabled. If the provisioner
+ or an external controller can support the
+ specified data source, it will create a
+ new volume based on the contents of the
+ specified data source.'
+ properties:
+ apiGroup:
+ description: APIGroup is the group for
+ the resource being referenced. If APIGroup
+ is not specified, the specified Kind
+ must be in the core API group. For any
+ other third-party types, APIGroup is
+ required.
+ type: string
+ kind:
+ description: Kind is the type of resource
+ being referenced
+ type: string
+ name:
+ description: Name is the name of resource
+ being referenced
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ resources:
+ description: 'Resources represents the minimum
+ resources the volume should have. More info:
+ https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources'
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ description: 'Limits describes the maximum
+ amount of compute resources allowed.
+ More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ description: 'Requests describes the minimum
+ amount of compute resources required.
+ If Requests is omitted for a container,
+ it defaults to Limits if that is explicitly
+ specified, otherwise to an implementation-defined
+ value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
+ type: object
+ type: object
+ selector:
+ description: A label query over volumes to
+ consider for binding.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list
+ of label selector requirements. The
+ requirements are ANDed.
+ items:
+ description: A label selector requirement
+ is a selector that contains values,
+ a key, and an operator that relates
+ the key and values.
+ properties:
+ key:
+ description: key is the label key
+ that the selector applies to.
+ type: string
+ operator:
+ description: operator represents
+ a key's relationship to a set
+ of values. Valid operators are
+ In, NotIn, Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array
+ of string values. If the operator
+ is In or NotIn, the values array
+ must be non-empty. If the operator
+ is Exists or DoesNotExist, the
+ values array must be empty. This
+ array is replaced during a strategic
+ merge patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value}
+ pairs. A single {key,value} in the matchLabels
+ map is equivalent to an element of matchExpressions,
+ whose key field is "key", the operator
+ is "In", and the values array contains
+ only "value". The requirements are ANDed.
+ type: object
+ type: object
+ storageClassName:
+ description: 'Name of the StorageClass required
+ by the claim. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1'
+ type: string
+ volumeMode:
+ description: volumeMode defines what type
+ of volume is required by the claim. Value
+ of Filesystem is implied when not included
+ in claim spec.
+ type: string
+ volumeName:
+ description: VolumeName is the binding reference
+ to the PersistentVolume backing this claim.
+ type: string
+ type: object
+ required:
+ - spec
+ type: object
+ type: object
+ fc:
+ description: FC represents a Fibre Channel resource that
+ is attached to a kubelet's host machine and then exposed
+ to the pod.
+ properties:
+ fsType:
+ description: 'Filesystem type to mount. Must be a
+ filesystem type supported by the host operating
+ system. Ex. "ext4", "xfs", "ntfs". Implicitly inferred
+ to be "ext4" if unspecified. TODO: how do we prevent
+ errors in the filesystem from compromising the machine'
+ type: string
+ lun:
+ description: 'Optional: FC target lun number'
+ format: int32
+ type: integer
+ readOnly:
+ description: 'Optional: Defaults to false (read/write).
+ ReadOnly here will force the ReadOnly setting in
+ VolumeMounts.'
+ type: boolean
+ targetWWNs:
+ description: 'Optional: FC target worldwide names
+ (WWNs)'
+ items:
+ type: string
+ type: array
+ wwids:
+ description: 'Optional: FC volume world wide identifiers
+ (wwids) Either wwids or combination of targetWWNs
+ and lun must be set, but not both simultaneously.'
+ items:
+ type: string
+ type: array
+ type: object
+ flexVolume:
+ description: FlexVolume represents a generic volume resource
+ that is provisioned/attached using an exec based plugin.
+ properties:
+ driver:
+ description: Driver is the name of the driver to use
+ for this volume.
+ type: string
+ fsType:
+ description: Filesystem type to mount. Must be a filesystem
+ type supported by the host operating system. Ex.
+ "ext4", "xfs", "ntfs". The default filesystem depends
+ on FlexVolume script.
+ type: string
+ options:
+ additionalProperties:
+ type: string
+ description: 'Optional: Extra command options if any.'
+ type: object
+ readOnly:
+ description: 'Optional: Defaults to false (read/write).
+ ReadOnly here will force the ReadOnly setting in
+ VolumeMounts.'
+ type: boolean
+ secretRef:
+ description: 'Optional: SecretRef is reference to
+ the secret object containing sensitive information
+ to pass to the plugin scripts. This may be empty
+ if no secret object is specified. If the secret
+ object contains more than one secret, all secrets
+ are passed to the plugin scripts.'
+ properties:
+ name:
+ description: 'Name of the referent. More info:
+ https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+ TODO: Add other useful fields. apiVersion, kind,
+ uid?'
+ type: string
+ type: object
+ required:
+ - driver
+ type: object
+ flocker:
+ description: Flocker represents a Flocker volume attached
+ to a kubelet's host machine. This depends on the Flocker
+ control service being running
+ properties:
+ datasetName:
+ description: Name of the dataset stored as metadata
+ -> name on the dataset for Flocker should be considered
+ as deprecated
+ type: string
+ datasetUUID:
+ description: UUID of the dataset. This is unique identifier
+ of a Flocker dataset
+ type: string
+ type: object
+ gcePersistentDisk:
+ description: 'GCEPersistentDisk represents a GCE Disk
+ resource that is attached to a kubelet''s host machine
+ and then exposed to the pod. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk'
+ properties:
+ fsType:
+ description: 'Filesystem type of the volume that you
+ want to mount. Tip: Ensure that the filesystem type
+ is supported by the host operating system. Examples:
+ "ext4", "xfs", "ntfs". Implicitly inferred to be
+ "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk
+ TODO: how do we prevent errors in the filesystem
+ from compromising the machine'
+ type: string
+ partition:
+ description: 'The partition in the volume that you
+ want to mount. If omitted, the default is to mount
+ by volume name. Examples: For volume /dev/sda1,
+ you specify the partition as "1". Similarly, the
+ volume partition for /dev/sda is "0" (or you can
+ leave the property empty). More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk'
+ format: int32
+ type: integer
+ pdName:
+ description: 'Unique name of the PD resource in GCE.
+ Used to identify the disk in GCE. More info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk'
+ type: string
+ readOnly:
+ description: 'ReadOnly here will force the ReadOnly
+ setting in VolumeMounts. Defaults to false. More
+ info: https://kubernetes.io/docs/concepts/storage/volumes#gcepersistentdisk'
+ type: boolean
+ required:
+ - pdName
+ type: object
+ gitRepo:
+ description: 'GitRepo represents a git repository at a
+ particular revision. DEPRECATED: GitRepo is deprecated.
+ To provision a container with a git repo, mount an EmptyDir
+ into an InitContainer that clones the repo using git,
+ then mount the EmptyDir into the Pod''s container.'
+ properties:
+ directory:
+ description: Target directory name. Must not contain
+ or start with '..'. If '.' is supplied, the volume
+ directory will be the git repository. Otherwise,
+ if specified, the volume will contain the git repository
+ in the subdirectory with the given name.
+ type: string
+ repository:
+ description: Repository URL
+ type: string
+ revision:
+ description: Commit hash for the specified revision.
+ type: string
+ required:
+ - repository
+ type: object
+ glusterfs:
+ description: 'Glusterfs represents a Glusterfs mount on
+ the host that shares a pod''s lifetime. More info: https://examples.k8s.io/volumes/glusterfs/README.md'
+ properties:
+ endpoints:
+ description: 'EndpointsName is the endpoint name that
+ details Glusterfs topology. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod'
+ type: string
+ path:
+ description: 'Path is the Glusterfs volume path. More
+ info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod'
+ type: string
+ readOnly:
+ description: 'ReadOnly here will force the Glusterfs
+ volume to be mounted with read-only permissions.
+ Defaults to false. More info: https://examples.k8s.io/volumes/glusterfs/README.md#create-a-pod'
+ type: boolean
+ required:
+ - endpoints
+ - path
+ type: object
+ hostPath:
+ description: 'HostPath represents a pre-existing file
+ or directory on the host machine that is directly exposed
+ to the container. This is generally used for system
+ agents or other privileged things that are allowed to
+ see the host machine. Most containers will NOT need
+ this. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath
+ --- TODO(jonesdl) We need to restrict who can use host
+ directory mounts and who can/can not mount host directories
+ as read/write.'
+ properties:
+ path:
+ description: 'Path of the directory on the host. If
+ the path is a symlink, it will follow the link to
+ the real path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath'
+ type: string
+ type:
+ description: 'Type for HostPath Volume Defaults to
+ "" More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath'
+ type: string
+ required:
+ - path
+ type: object
+ iscsi:
+ description: 'ISCSI represents an ISCSI Disk resource
+ that is attached to a kubelet''s host machine and then
+ exposed to the pod. More info: https://examples.k8s.io/volumes/iscsi/README.md'
+ properties:
+ chapAuthDiscovery:
+ description: whether support iSCSI Discovery CHAP
+ authentication
+ type: boolean
+ chapAuthSession:
+ description: whether support iSCSI Session CHAP authentication
+ type: boolean
+ fsType:
+ description: 'Filesystem type of the volume that you
+ want to mount. Tip: Ensure that the filesystem type
+ is supported by the host operating system. Examples:
+ "ext4", "xfs", "ntfs". Implicitly inferred to be
+ "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#iscsi
+ TODO: how do we prevent errors in the filesystem
+ from compromising the machine'
+ type: string
+ initiatorName:
+ description: Custom iSCSI Initiator Name. If initiatorName
+ is specified with iscsiInterface simultaneously,
+                        new iSCSI interface <target portal>:<volume name>
+ will be created for the connection.
+ type: string
+ iqn:
+ description: Target iSCSI Qualified Name.
+ type: string
+ iscsiInterface:
+ description: iSCSI Interface Name that uses an iSCSI
+ transport. Defaults to 'default' (tcp).
+ type: string
+ lun:
+ description: iSCSI Target Lun number.
+ format: int32
+ type: integer
+ portals:
+ description: iSCSI Target Portal List. The portal
+ is either an IP or ip_addr:port if the port is other
+ than default (typically TCP ports 860 and 3260).
+ items:
+ type: string
+ type: array
+ readOnly:
+ description: ReadOnly here will force the ReadOnly
+ setting in VolumeMounts. Defaults to false.
+ type: boolean
+ secretRef:
+ description: CHAP Secret for iSCSI target and initiator
+ authentication
+ properties:
+ name:
+ description: 'Name of the referent. More info:
+ https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+ TODO: Add other useful fields. apiVersion, kind,
+ uid?'
+ type: string
+ type: object
+ targetPortal:
+ description: iSCSI Target Portal. The Portal is either
+ an IP or ip_addr:port if the port is other than
+ default (typically TCP ports 860 and 3260).
+ type: string
+ required:
+ - iqn
+ - lun
+ - targetPortal
+ type: object
+ name:
+ description: 'Volume''s name. Must be a DNS_LABEL and
+ unique within the pod. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names'
+ type: string
+ nfs:
+ description: 'NFS represents an NFS mount on the host
+ that shares a pod''s lifetime More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs'
+ properties:
+ path:
+ description: 'Path that is exported by the NFS server.
+ More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs'
+ type: string
+ readOnly:
+ description: 'ReadOnly here will force the NFS export
+ to be mounted with read-only permissions. Defaults
+ to false. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs'
+ type: boolean
+ server:
+ description: 'Server is the hostname or IP address
+ of the NFS server. More info: https://kubernetes.io/docs/concepts/storage/volumes#nfs'
+ type: string
+ required:
+ - path
+ - server
+ type: object
+ persistentVolumeClaim:
+ description: 'PersistentVolumeClaimVolumeSource represents
+ a reference to a PersistentVolumeClaim in the same namespace.
+ More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
+ properties:
+ claimName:
+ description: 'ClaimName is the name of a PersistentVolumeClaim
+ in the same namespace as the pod using this volume.
+ More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#persistentvolumeclaims'
+ type: string
+ readOnly:
+ description: Will force the ReadOnly setting in VolumeMounts.
+ Default false.
+ type: boolean
+ required:
+ - claimName
+ type: object
+ photonPersistentDisk:
+ description: PhotonPersistentDisk represents a PhotonController
+ persistent disk attached and mounted on kubelets host
+ machine
+ properties:
+ fsType:
+ description: Filesystem type to mount. Must be a filesystem
+ type supported by the host operating system. Ex.
+ "ext4", "xfs", "ntfs". Implicitly inferred to be
+ "ext4" if unspecified.
+ type: string
+ pdID:
+ description: ID that identifies Photon Controller
+ persistent disk
+ type: string
+ required:
+ - pdID
+ type: object
+ portworxVolume:
+ description: PortworxVolume represents a portworx volume
+ attached and mounted on kubelets host machine
+ properties:
+ fsType:
+ description: FSType represents the filesystem type
+ to mount Must be a filesystem type supported by
+ the host operating system. Ex. "ext4", "xfs". Implicitly
+ inferred to be "ext4" if unspecified.
+ type: string
+ readOnly:
+ description: Defaults to false (read/write). ReadOnly
+ here will force the ReadOnly setting in VolumeMounts.
+ type: boolean
+ volumeID:
+ description: VolumeID uniquely identifies a Portworx
+ volume
+ type: string
+ required:
+ - volumeID
+ type: object
+ projected:
+ description: Items for all in one resources secrets, configmaps,
+ and downward API
+ properties:
+ defaultMode:
+ description: Mode bits used to set permissions on
+ created files by default. Must be an octal value
+ between 0000 and 0777 or a decimal value between
+ 0 and 511. YAML accepts both octal and decimal values,
+ JSON requires decimal values for mode bits. Directories
+ within the path are not affected by this setting.
+ This might be in conflict with other options that
+ affect the file mode, like fsGroup, and the result
+ can be other mode bits set.
+ format: int32
+ type: integer
+ sources:
+ description: list of volume projections
+ items:
+ description: Projection that may be projected along
+ with other supported volume types
+ properties:
+ configMap:
+ description: information about the configMap
+ data to project
+ properties:
+ items:
+ description: If unspecified, each key-value
+ pair in the Data field of the referenced
+ ConfigMap will be projected into the volume
+ as a file whose name is the key and content
+ is the value. If specified, the listed
+ keys will be projected into the specified
+ paths, and unlisted keys will not be present.
+ If a key is specified which is not present
+ in the ConfigMap, the volume setup will
+ error unless it is marked optional. Paths
+ must be relative and may not contain the
+ '..' path or start with '..'.
+ items:
+ description: Maps a string key to a path
+ within a volume.
+ properties:
+ key:
+ description: The key to project.
+ type: string
+ mode:
+ description: 'Optional: mode bits
+ used to set permissions on this
+ file. Must be an octal value between
+ 0000 and 0777 or a decimal value
+ between 0 and 511. YAML accepts
+ both octal and decimal values, JSON
+ requires decimal values for mode
+ bits. If not specified, the volume
+ defaultMode will be used. This might
+ be in conflict with other options
+ that affect the file mode, like
+ fsGroup, and the result can be other
+ mode bits set.'
+ format: int32
+ type: integer
+ path:
+ description: The relative path of
+ the file to map the key to. May
+ not be an absolute path. May not
+ contain the path element '..'. May
+ not start with the string '..'.
+ type: string
+ required:
+ - key
+ - path
+ type: object
+ type: array
+ name:
+ description: 'Name of the referent. More
+ info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+ TODO: Add other useful fields. apiVersion,
+ kind, uid?'
+ type: string
+ optional:
+ description: Specify whether the ConfigMap
+ or its keys must be defined
+ type: boolean
+ type: object
+ downwardAPI:
+ description: information about the downwardAPI
+ data to project
+ properties:
+ items:
+ description: Items is a list of DownwardAPIVolume
+ file
+ items:
+ description: DownwardAPIVolumeFile represents
+ information to create the file containing
+ the pod field
+ properties:
+ fieldRef:
+ description: 'Required: Selects a
+ field of the pod: only annotations,
+ labels, name and namespace are supported.'
+ properties:
+ apiVersion:
+ description: Version of the schema
+ the FieldPath is written in
+ terms of, defaults to "v1".
+ type: string
+ fieldPath:
+ description: Path of the field
+ to select in the specified API
+ version.
+ type: string
+ required:
+ - fieldPath
+ type: object
+ mode:
+ description: 'Optional: mode bits
+ used to set permissions on this
+ file, must be an octal value between
+ 0000 and 0777 or a decimal value
+ between 0 and 511. YAML accepts
+ both octal and decimal values, JSON
+ requires decimal values for mode
+ bits. If not specified, the volume
+ defaultMode will be used. This might
+ be in conflict with other options
+ that affect the file mode, like
+ fsGroup, and the result can be other
+ mode bits set.'
+ format: int32
+ type: integer
+ path:
+ description: 'Required: Path is the
+ relative path name of the file to
+ be created. Must not be absolute
+ or contain the ''..'' path. Must
+ be utf-8 encoded. The first item
+ of the relative path must not start
+ with ''..'''
+ type: string
+ resourceFieldRef:
+ description: 'Selects a resource of
+ the container: only resources limits
+ and requests (limits.cpu, limits.memory,
+ requests.cpu and requests.memory)
+ are currently supported.'
+ properties:
+ containerName:
+ description: 'Container name:
+ required for volumes, optional
+ for env vars'
+ type: string
+ divisor:
+ anyOf:
+ - type: integer
+ - type: string
+ description: Specifies the output
+ format of the exposed resources,
+ defaults to "1"
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ resource:
+ description: 'Required: resource
+ to select'
+ type: string
+ required:
+ - resource
+ type: object
+ required:
+ - path
+ type: object
+ type: array
+ type: object
+ secret:
+ description: information about the secret data
+ to project
+ properties:
+ items:
+ description: If unspecified, each key-value
+ pair in the Data field of the referenced
+ Secret will be projected into the volume
+ as a file whose name is the key and content
+ is the value. If specified, the listed
+ keys will be projected into the specified
+ paths, and unlisted keys will not be present.
+ If a key is specified which is not present
+ in the Secret, the volume setup will error
+ unless it is marked optional. Paths must
+ be relative and may not contain the '..'
+ path or start with '..'.
+ items:
+ description: Maps a string key to a path
+ within a volume.
+ properties:
+ key:
+ description: The key to project.
+ type: string
+ mode:
+ description: 'Optional: mode bits
+ used to set permissions on this
+ file. Must be an octal value between
+ 0000 and 0777 or a decimal value
+ between 0 and 511. YAML accepts
+ both octal and decimal values, JSON
+ requires decimal values for mode
+ bits. If not specified, the volume
+ defaultMode will be used. This might
+ be in conflict with other options
+ that affect the file mode, like
+ fsGroup, and the result can be other
+ mode bits set.'
+ format: int32
+ type: integer
+ path:
+ description: The relative path of
+ the file to map the key to. May
+ not be an absolute path. May not
+ contain the path element '..'. May
+ not start with the string '..'.
+ type: string
+ required:
+ - key
+ - path
+ type: object
+ type: array
+ name:
+ description: 'Name of the referent. More
+ info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+ TODO: Add other useful fields. apiVersion,
+ kind, uid?'
+ type: string
+ optional:
+ description: Specify whether the Secret
+ or its key must be defined
+ type: boolean
+ type: object
+ serviceAccountToken:
+ description: information about the serviceAccountToken
+ data to project
+ properties:
+ audience:
+ description: Audience is the intended audience
+ of the token. A recipient of a token must
+ identify itself with an identifier specified
+ in the audience of the token, and otherwise
+ should reject the token. The audience
+ defaults to the identifier of the apiserver.
+ type: string
+ expirationSeconds:
+ description: ExpirationSeconds is the requested
+ duration of validity of the service account
+ token. As the token approaches expiration,
+ the kubelet volume plugin will proactively
+ rotate the service account token. The
+ kubelet will start trying to rotate the
+ token if the token is older than 80 percent
+ of its time to live or if the token is
+                             older than 24 hours. Defaults to 1 hour
+ and must be at least 10 minutes.
+ format: int64
+ type: integer
+ path:
+ description: Path is the path relative to
+ the mount point of the file to project
+ the token into.
+ type: string
+ required:
+ - path
+ type: object
+ type: object
+ type: array
+ type: object
+ quobyte:
+ description: Quobyte represents a Quobyte mount on the
+ host that shares a pod's lifetime
+ properties:
+ group:
+                       description: Group to map volume access to. Default
+                         is no group
+ type: string
+ readOnly:
+ description: ReadOnly here will force the Quobyte
+ volume to be mounted with read-only permissions.
+ Defaults to false.
+ type: boolean
+ registry:
+ description: Registry represents a single or multiple
+ Quobyte Registry services specified as a string
+ as host:port pair (multiple entries are separated
+ with commas) which acts as the central registry
+ for volumes
+ type: string
+ tenant:
+ description: Tenant owning the given Quobyte volume
+                         in the Backend. Used with dynamically provisioned
+ Quobyte volumes, value is set by the plugin
+ type: string
+ user:
+                       description: User to map volume access to. Defaults
+                         to serviceaccount user
+ type: string
+ volume:
+ description: Volume is a string that references an
+ already created Quobyte volume by name.
+ type: string
+ required:
+ - registry
+ - volume
+ type: object
+ rbd:
+ description: 'RBD represents a Rados Block Device mount
+ on the host that shares a pod''s lifetime. More info:
+ https://examples.k8s.io/volumes/rbd/README.md'
+ properties:
+ fsType:
+ description: 'Filesystem type of the volume that you
+ want to mount. Tip: Ensure that the filesystem type
+ is supported by the host operating system. Examples:
+ "ext4", "xfs", "ntfs". Implicitly inferred to be
+ "ext4" if unspecified. More info: https://kubernetes.io/docs/concepts/storage/volumes#rbd
+ TODO: how do we prevent errors in the filesystem
+ from compromising the machine'
+ type: string
+ image:
+ description: 'The rados image name. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it'
+ type: string
+ keyring:
+ description: 'Keyring is the path to key ring for
+ RBDUser. Default is /etc/ceph/keyring. More info:
+ https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it'
+ type: string
+ monitors:
+ description: 'A collection of Ceph monitors. More
+ info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it'
+ items:
+ type: string
+ type: array
+ pool:
+ description: 'The rados pool name. Default is rbd.
+ More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it'
+ type: string
+ readOnly:
+ description: 'ReadOnly here will force the ReadOnly
+ setting in VolumeMounts. Defaults to false. More
+ info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it'
+ type: boolean
+ secretRef:
+ description: 'SecretRef is name of the authentication
+ secret for RBDUser. If provided overrides keyring.
+ Default is nil. More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it'
+ properties:
+ name:
+ description: 'Name of the referent. More info:
+ https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+ TODO: Add other useful fields. apiVersion, kind,
+ uid?'
+ type: string
+ type: object
+ user:
+ description: 'The rados user name. Default is admin.
+ More info: https://examples.k8s.io/volumes/rbd/README.md#how-to-use-it'
+ type: string
+ required:
+ - image
+ - monitors
+ type: object
+ scaleIO:
+ description: ScaleIO represents a ScaleIO persistent volume
+ attached and mounted on Kubernetes nodes.
+ properties:
+ fsType:
+ description: Filesystem type to mount. Must be a filesystem
+ type supported by the host operating system. Ex.
+ "ext4", "xfs", "ntfs". Default is "xfs".
+ type: string
+ gateway:
+ description: The host address of the ScaleIO API Gateway.
+ type: string
+ protectionDomain:
+ description: The name of the ScaleIO Protection Domain
+ for the configured storage.
+ type: string
+ readOnly:
+ description: Defaults to false (read/write). ReadOnly
+ here will force the ReadOnly setting in VolumeMounts.
+ type: boolean
+ secretRef:
+ description: SecretRef references to the secret for
+ ScaleIO user and other sensitive information. If
+ this is not provided, Login operation will fail.
+ properties:
+ name:
+ description: 'Name of the referent. More info:
+ https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+ TODO: Add other useful fields. apiVersion, kind,
+ uid?'
+ type: string
+ type: object
+ sslEnabled:
+ description: Flag to enable/disable SSL communication
+ with Gateway, default false
+ type: boolean
+ storageMode:
+ description: Indicates whether the storage for a volume
+ should be ThickProvisioned or ThinProvisioned. Default
+ is ThinProvisioned.
+ type: string
+ storagePool:
+ description: The ScaleIO Storage Pool associated with
+ the protection domain.
+ type: string
+ system:
+ description: The name of the storage system as configured
+ in ScaleIO.
+ type: string
+ volumeName:
+ description: The name of a volume already created
+ in the ScaleIO system that is associated with this
+ volume source.
+ type: string
+ required:
+ - gateway
+ - secretRef
+ - system
+ type: object
+ secret:
+ description: 'Secret represents a secret that should populate
+ this volume. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret'
+ properties:
+ defaultMode:
+ description: 'Optional: mode bits used to set permissions
+ on created files by default. Must be an octal value
+ between 0000 and 0777 or a decimal value between
+ 0 and 511. YAML accepts both octal and decimal values,
+ JSON requires decimal values for mode bits. Defaults
+ to 0644. Directories within the path are not affected
+ by this setting. This might be in conflict with
+ other options that affect the file mode, like fsGroup,
+ and the result can be other mode bits set.'
+ format: int32
+ type: integer
+ items:
+ description: If unspecified, each key-value pair in
+ the Data field of the referenced Secret will be
+ projected into the volume as a file whose name is
+ the key and content is the value. If specified,
+ the listed keys will be projected into the specified
+ paths, and unlisted keys will not be present. If
+ a key is specified which is not present in the Secret,
+ the volume setup will error unless it is marked
+ optional. Paths must be relative and may not contain
+ the '..' path or start with '..'.
+ items:
+ description: Maps a string key to a path within
+ a volume.
+ properties:
+ key:
+ description: The key to project.
+ type: string
+ mode:
+ description: 'Optional: mode bits used to set
+ permissions on this file. Must be an octal
+ value between 0000 and 0777 or a decimal value
+ between 0 and 511. YAML accepts both octal
+ and decimal values, JSON requires decimal
+ values for mode bits. If not specified, the
+ volume defaultMode will be used. This might
+ be in conflict with other options that affect
+ the file mode, like fsGroup, and the result
+ can be other mode bits set.'
+ format: int32
+ type: integer
+ path:
+ description: The relative path of the file to
+ map the key to. May not be an absolute path.
+ May not contain the path element '..'. May
+ not start with the string '..'.
+ type: string
+ required:
+ - key
+ - path
+ type: object
+ type: array
+ optional:
+ description: Specify whether the Secret or its keys
+ must be defined
+ type: boolean
+ secretName:
+ description: 'Name of the secret in the pod''s namespace
+ to use. More info: https://kubernetes.io/docs/concepts/storage/volumes#secret'
+ type: string
+ type: object
+ storageos:
+ description: StorageOS represents a StorageOS volume attached
+ and mounted on Kubernetes nodes.
+ properties:
+ fsType:
+ description: Filesystem type to mount. Must be a filesystem
+ type supported by the host operating system. Ex.
+ "ext4", "xfs", "ntfs". Implicitly inferred to be
+ "ext4" if unspecified.
+ type: string
+ readOnly:
+ description: Defaults to false (read/write). ReadOnly
+ here will force the ReadOnly setting in VolumeMounts.
+ type: boolean
+ secretRef:
+ description: SecretRef specifies the secret to use
+ for obtaining the StorageOS API credentials. If
+ not specified, default values will be attempted.
+ properties:
+ name:
+ description: 'Name of the referent. More info:
+ https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+ TODO: Add other useful fields. apiVersion, kind,
+ uid?'
+ type: string
+ type: object
+ volumeName:
+ description: VolumeName is the human-readable name
+ of the StorageOS volume. Volume names are only
+ unique within a namespace.
+ type: string
+ volumeNamespace:
+ description: VolumeNamespace specifies the scope of
+ the volume within StorageOS. If no namespace is
+ specified then the Pod's namespace will be used. This
+ allows the Kubernetes name scoping to be mirrored
+ within StorageOS for tighter integration. Set VolumeName
+ to any name to override the default behaviour. Set
+ to "default" if you are not using namespaces within
+ StorageOS. Namespaces that do not pre-exist within
+ StorageOS will be created.
+ type: string
+ type: object
+ vsphereVolume:
+ description: VsphereVolume represents a vSphere volume
+ attached and mounted on kubelets host machine
+ properties:
+ fsType:
+ description: Filesystem type to mount. Must be a filesystem
+ type supported by the host operating system. Ex.
+ "ext4", "xfs", "ntfs". Implicitly inferred to be
+ "ext4" if unspecified.
+ type: string
+ storagePolicyID:
+ description: Storage Policy Based Management (SPBM)
+ profile ID associated with the StoragePolicyName.
+ type: string
+ storagePolicyName:
+ description: Storage Policy Based Management (SPBM)
+ profile name.
+ type: string
+ volumePath:
+ description: Path that identifies vSphere volume vmdk
+ type: string
+ required:
+ - volumePath
+ type: object
+ required:
+ - name
+ type: object
+ type: array
+ type: object
+ queryLimits:
+ description: QueryLimits represents limits for a query
+ properties:
+ ignoreCommands:
+                     description: IgnoreCommands is the list of commands to be ignored.
+ items:
+ type: string
+ type: array
+ ignoreDb:
+                     description: IgnoreDb is the list of databases that are ignored
+ by pt-kill (--ignore-db flag).
+ items:
+ type: string
+ type: array
+ ignoreUser:
+                     description: IgnoreUser is the list of users to be ignored.
+ items:
+ type: string
+ type: array
+ kill:
+                     description: Kill represents the mode in which the matching
+                       queries in each class will be killed (the --victims flag).
+                       Can be one of oldest|all|all-but-oldest. By default, the matching
+                       query with the highest Time value is killed (the oldest query).
+ type: string
+ killMode:
+                     description: 'KillMode can be `connection` or `query`. With
+                       `connection`, when a query is matched its connection is killed
+                       (using the --kill flag); with `query`, only the query itself
+                       is killed (using the --kill-query flag)'
+ type: string
+ maxIdleTime:
+ description: MaxIdleTime match queries that have been idle for
+                       longer than this time, in seconds. (--idle-time flag)
+ type: integer
+ maxQueryTime:
+ description: MaxQueryTime match queries that have been running
+                       for longer than this time, in seconds. This field is required.
+ (--busy-time flag)
+ type: integer
+ required:
+ - maxQueryTime
+ type: object
+ rcloneExtraArgs:
+ description: RcloneExtraArgs is a list of extra command line arguments
+ to pass to rclone.
+ items:
+ type: string
+ type: array
+ readOnly:
+                 description: Makes the cluster READ ONLY. This is not a strong
+                   guarantee; in case of a failover the cluster will be writable
+                   for at least a few seconds.
+ type: boolean
+ replicas:
+                 description: The number of pods. This updates the replicas field.
+                   Defaults to 0
+ format: int32
+ type: integer
+ secretName:
+ description: The secret name that contains connection information
+ to initialize database, like USER, PASSWORD, ROOT_PASSWORD and
+                   so on. This secret will be updated with DB_CONNECT_URL and some
+ more configs. Can be specified partially
+ maxLength: 63
+ minLength: 1
+ type: string
+ serverIDOffset:
+ description: Set a custom offset for Server IDs. ServerID for each
+ node will be the index of the statefulset, plus offset
+ type: integer
+ tmpfsSize:
+ anyOf:
+ - type: integer
+ - type: string
+                 description: 'TmpfsSize, if specified, mounts a tmpfs of this size
+                   into /tmp. DEPRECATED: use PodSpec.Volumes and PodSpec.VolumeMounts instead'
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ volumeSpec:
+                 description: PVC extra specification
+ properties:
+ emptyDir:
+ description: EmptyDir to use as data volume for mysql. EmptyDir
+ represents a temporary directory that shares a pod's lifetime.
+ properties:
+ medium:
+ description: 'What type of storage medium should back this
+ directory. The default is "" which means to use the node''s
+ default medium. Must be an empty string (default) or Memory.
+ More info: https://kubernetes.io/docs/concepts/storage/volumes#emptydir'
+ type: string
+ sizeLimit:
+ anyOf:
+ - type: integer
+ - type: string
+ description: 'Total amount of local storage required for
+ this EmptyDir volume. The size limit is also applicable
+ for memory medium. The maximum usage on memory medium
+ EmptyDir would be the minimum value between the SizeLimit
+ specified here and the sum of memory limits of all containers
+ in a pod. The default is nil which means that the limit
+ is undefined. More info: http://kubernetes.io/docs/user-guide/volumes#emptydir'
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ type: object
+ hostPath:
+ description: HostPath to use as data volume for mysql. HostPath
+ represents a pre-existing file or directory on the host machine
+ that is directly exposed to the container.
+ properties:
+ path:
+ description: 'Path of the directory on the host. If the
+ path is a symlink, it will follow the link to the real
+ path. More info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath'
+ type: string
+ type:
+                       description: 'Type for HostPath Volume. Defaults to "". More
+ info: https://kubernetes.io/docs/concepts/storage/volumes#hostpath'
+ type: string
+ required:
+ - path
+ type: object
+ keepAfterDelete:
+ description: KeepAfterDelete specifies whether the PVC should
+ be kept after the MysqlCluster is deleted.
+ type: boolean
+ persistentVolumeClaim:
+ description: PersistentVolumeClaim to specify PVC spec for the
+ volume for mysql data. It has the highest level of precedence,
+ followed by HostPath and EmptyDir. And represents the PVC
+ specification.
+ properties:
+ accessModes:
+ description: 'AccessModes contains the desired access modes
+ the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#access-modes-1'
+ items:
+ type: string
+ type: array
+ dataSource:
+ description: 'This field can be used to specify either:
+ * An existing VolumeSnapshot object (snapshot.storage.k8s.io/VolumeSnapshot)
+ * An existing PVC (PersistentVolumeClaim) * An existing
+ custom resource that implements data population (Alpha)
+ In order to use custom resource types that implement data
+ population, the AnyVolumeDataSource feature gate must
+ be enabled. If the provisioner or an external controller
+ can support the specified data source, it will create
+ a new volume based on the contents of the specified data
+ source.'
+ properties:
+ apiGroup:
+ description: APIGroup is the group for the resource
+ being referenced. If APIGroup is not specified, the
+ specified Kind must be in the core API group. For
+ any other third-party types, APIGroup is required.
+ type: string
+ kind:
+ description: Kind is the type of resource being referenced
+ type: string
+ name:
+ description: Name is the name of resource being referenced
+ type: string
+ required:
+ - kind
+ - name
+ type: object
+ resources:
+ description: 'Resources represents the minimum resources
+ the volume should have. More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#resources'
+ properties:
+ limits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ description: 'Limits describes the maximum amount of
+ compute resources allowed. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
+ type: object
+ requests:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+ description: 'Requests describes the minimum amount
+ of compute resources required. If Requests is omitted
+ for a container, it defaults to Limits if that is
+ explicitly specified, otherwise to an implementation-defined
+ value. More info: https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/'
+ type: object
+ type: object
+ selector:
+ description: A label query over volumes to consider for
+ binding.
+ properties:
+ matchExpressions:
+ description: matchExpressions is a list of label selector
+ requirements. The requirements are ANDed.
+ items:
+ description: A label selector requirement is a selector
+ that contains values, a key, and an operator that
+ relates the key and values.
+ properties:
+ key:
+ description: key is the label key that the selector
+ applies to.
+ type: string
+ operator:
+ description: operator represents a key's relationship
+ to a set of values. Valid operators are In,
+ NotIn, Exists and DoesNotExist.
+ type: string
+ values:
+ description: values is an array of string values.
+ If the operator is In or NotIn, the values array
+ must be non-empty. If the operator is Exists
+ or DoesNotExist, the values array must be empty.
+ This array is replaced during a strategic merge
+ patch.
+ items:
+ type: string
+ type: array
+ required:
+ - key
+ - operator
+ type: object
+ type: array
+ matchLabels:
+ additionalProperties:
+ type: string
+ description: matchLabels is a map of {key,value} pairs.
+ A single {key,value} in the matchLabels map is equivalent
+ to an element of matchExpressions, whose key field
+ is "key", the operator is "In", and the values array
+ contains only "value". The requirements are ANDed.
+ type: object
+ type: object
+ storageClassName:
+ description: 'Name of the StorageClass required by the claim.
+ More info: https://kubernetes.io/docs/concepts/storage/persistent-volumes#class-1'
+ type: string
+ volumeMode:
+ description: volumeMode defines what type of volume is required
+ by the claim. Value of Filesystem is implied when not
+ included in claim spec.
+ type: string
+ volumeName:
+ description: VolumeName is the binding reference to the
+ PersistentVolume backing this claim.
+ type: string
+ type: object
+ type: object
+ xbstreamExtraArgs:
+ description: XbstreamExtraArgs is a list of extra command line arguments
+ to pass to xbstream.
+ items:
+ type: string
+ type: array
+ xtrabackupExtraArgs:
+ description: XtrabackupExtraArgs is a list of extra command line
+ arguments to pass to xtrabackup.
+ items:
+ type: string
+ type: array
+ xtrabackupPrepareExtraArgs:
+ description: XtrabackupPrepareExtraArgs is a list of extra command
+ line arguments to pass to xtrabackup during --prepare.
+ items:
+ type: string
+ type: array
+ xtrabackupTargetDir:
+ description: XtrabackupTargetDir is a backup destination directory
+ for xtrabackup.
+ type: string
+ required:
+ - secretName
+ type: object
+ status:
+ description: MysqlClusterStatus defines the observed state of MysqlCluster
+ properties:
+ conditions:
+ description: Conditions contains the list of the cluster conditions
+ fulfilled
+ items:
+ description: ClusterCondition defines type for cluster conditions.
+ properties:
+ lastTransitionTime:
+ description: LastTransitionTime
+ format: date-time
+ type: string
+ message:
+ description: Message
+ type: string
+ reason:
+ description: Reason
+ type: string
+ status:
+                       description: Status of the condition, one of ("True", "False",
+                         "Unknown")
+ type: string
+ type:
+                       description: Type of cluster condition, values in ("Ready")
+ type: string
+ required:
+ - lastTransitionTime
+ - message
+ - reason
+ - status
+ - type
+ type: object
+ type: array
+ nodes:
+                 description: Nodes contains information from the orchestrator
+ items:
+ description: NodeStatus defines type for status of a node into
+ cluster.
+ properties:
+ conditions:
+ items:
+ description: NodeCondition defines type for representing
+ node conditions.
+ properties:
+ lastTransitionTime:
+ format: date-time
+ type: string
+ status:
+ type: string
+ type:
+ description: NodeConditionType defines type for node
+ condition type.
+ type: string
+ required:
+ - lastTransitionTime
+ - status
+ - type
+ type: object
+ type: array
+ name:
+ type: string
+ required:
+ - name
+ type: object
+ type: array
+ readyNodes:
+ description: ReadyNodes represents number of the nodes that are
+ in ready state
+ type: integer
+ type: object
+ type: object
+ served: true
+ storage: true
+ subresources:
+ scale:
+ specReplicasPath: .spec.replicas
+ statusReplicasPath: .status.readyNodes
+ status: {}
+ status:
+ acceptedNames:
+ kind: MysqlCluster
+ listKind: MysqlClusterList
+ plural: mysqlclusters
+ shortNames:
+ - mysql
+ singular: mysqlcluster
+ conditions:
+ - lastTransitionTime: "2023-11-01T09:02:32Z"
+ message: no conflicts found
+ reason: NoConflicts
+ status: "True"
+ type: NamesAccepted
+ - lastTransitionTime: "2023-11-01T09:02:32Z"
+ message: the initial names have been accepted
+ reason: InitialNamesAccepted
+ status: "True"
+ type: Established
+ storedVersions:
+ - v1alpha1
+- apiVersion: apiextensions.k8s.io/v1
+ kind: CustomResourceDefinition
+ metadata:
+ annotations:
+ controller-gen.kubebuilder.io/version: v0.7.0
+ creationTimestamp: "2023-11-01T09:02:35Z"
+ generation: 1
+ labels:
+ app.kubernetes.io/name: mysql-operator
+ name: mysqldatabases.mysql.presslabs.org
+ resourceVersion: "1064347"
+ uid: 41d409d3-13f7-4f0d-a248-fde6b96752ac
+ spec:
+ conversion:
+ strategy: None
+ group: mysql.presslabs.org
+ names:
+ kind: MysqlDatabase
+ listKind: MysqlDatabaseList
+ plural: mysqldatabases
+ singular: mysqldatabase
+ scope: Namespaced
+ versions:
+ - additionalPrinterColumns:
+ - description: The database status
+ jsonPath: .status.conditions[?(@.type == 'Ready')].status
+ name: Ready
+ type: string
+ - jsonPath: .spec.clusterRef.name
+ name: Cluster
+ type: string
+ - jsonPath: .spec.database
+ name: Database
+ type: string
+ - jsonPath: .metadata.creationTimestamp
+ name: Age
+ type: date
+ name: v1alpha1
+ schema:
+ openAPIV3Schema:
+ description: MysqlDatabase is the Schema for the MySQL database API
+ properties:
+ apiVersion:
+ description: 'APIVersion defines the versioned schema of this representation
+ of an object. Servers should convert recognized schemas to the latest
+ internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
+ type: string
+ kind:
+ description: 'Kind is a string value representing the REST resource
+ this object represents. Servers may infer this from the endpoint the
+ client submits requests to. Cannot be updated. In CamelCase. More
+ info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
+ type: string
+ metadata:
+ type: object
+ spec:
+             description: MysqlDatabaseSpec defines the desired state of MysqlDatabase
+ properties:
+ characterSet:
+ description: CharacterSet represents the charset name used when
+ database is created
+ type: string
+ clusterRef:
+ description: ClusterRef represents a reference to the MySQL cluster.
+ This field should be immutable.
+ properties:
+ name:
+ description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+ TODO: Add other useful fields. apiVersion, kind, uid?'
+ type: string
+ namespace:
+ description: Namespace the MySQL cluster namespace
+ type: string
+ type: object
+ collation:
+ description: Collation represents the collation name used as default
+ database collation
+ type: string
+ database:
+ description: Database represents the database name which will be
+ created. This field should be immutable.
+ type: string
+ required:
+ - clusterRef
+ - database
+ type: object
+ status:
+ description: MysqlDatabaseStatus defines the observed state of MysqlDatabase
+ properties:
+ conditions:
+ description: Conditions represents the MysqlDatabase resource conditions
+ list.
+ items:
+ description: MysqlDatabaseCondition defines the condition struct
+ for a MysqlDatabase resource
+ properties:
+ lastTransitionTime:
+ description: Last time the condition transitioned from one
+ status to another.
+ format: date-time
+ type: string
+ lastUpdateTime:
+ description: The last time this condition was updated.
+ format: date-time
+ type: string
+ message:
+ description: A human readable message indicating details about
+ the transition.
+ type: string
+ reason:
+ description: The reason for the condition's last transition.
+ type: string
+ status:
+ description: Status of the condition, one of True, False,
+ Unknown.
+ type: string
+ type:
+ description: Type of MysqlDatabase condition.
+ type: string
+ required:
+ - lastTransitionTime
+ - message
+ - reason
+ - status
+ - type
+ type: object
+ type: array
+ type: object
+ type: object
+ served: true
+ storage: true
+ subresources:
+ status: {}
+ status:
+ acceptedNames:
+ kind: MysqlDatabase
+ listKind: MysqlDatabaseList
+ plural: mysqldatabases
+ singular: mysqldatabase
+ conditions:
+ - lastTransitionTime: "2023-11-01T09:02:35Z"
+ message: no conflicts found
+ reason: NoConflicts
+ status: "True"
+ type: NamesAccepted
+ - lastTransitionTime: "2023-11-01T09:02:35Z"
+ message: the initial names have been accepted
+ reason: InitialNamesAccepted
+ status: "True"
+ type: Established
+ storedVersions:
+ - v1alpha1
+- apiVersion: apiextensions.k8s.io/v1
+ kind: CustomResourceDefinition
+ metadata:
+ annotations:
+ controller-gen.kubebuilder.io/version: v0.7.0
+ creationTimestamp: "2023-11-01T09:02:38Z"
+ generation: 1
+ labels:
+ app.kubernetes.io/name: mysql-operator
+ name: mysqlusers.mysql.presslabs.org
+ resourceVersion: "1064355"
+ uid: 46c02726-ba83-4be0-8551-0173ebf3bb6a
+ spec:
+ conversion:
+ strategy: None
+ group: mysql.presslabs.org
+ names:
+ kind: MysqlUser
+ listKind: MysqlUserList
+ plural: mysqlusers
+ singular: mysqluser
+ scope: Namespaced
+ versions:
+ - additionalPrinterColumns:
+ - description: The user status
+ jsonPath: .status.conditions[?(@.type == 'Ready')].status
+ name: Ready
+ type: string
+ - jsonPath: .spec.clusterRef.name
+ name: Cluster
+ type: string
+ - jsonPath: .spec.user
+ name: UserName
+ type: string
+ - jsonPath: .metadata.creationTimestamp
+ name: Age
+ type: date
+ name: v1alpha1
+ schema:
+ openAPIV3Schema:
+ description: MysqlUser is the Schema for the MySQL User API
+ properties:
+ apiVersion:
+ description: 'APIVersion defines the versioned schema of this representation
+ of an object. Servers should convert recognized schemas to the latest
+ internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
+ type: string
+ kind:
+ description: 'Kind is a string value representing the REST resource
+ this object represents. Servers may infer this from the endpoint the
+ client submits requests to. Cannot be updated. In CamelCase. More
+ info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
+ type: string
+ metadata:
+ type: object
+ spec:
+              description: MysqlUserSpec defines the desired state of MysqlUser
+ properties:
+ allowedHosts:
+ description: AllowedHosts is the allowed host to connect from.
+ items:
+ type: string
+ type: array
+ clusterRef:
+ description: ClusterRef represents a reference to the MySQL cluster.
+ This field should be immutable.
+ properties:
+ name:
+ description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+ TODO: Add other useful fields. apiVersion, kind, uid?'
+ type: string
+ namespace:
+                      description: Namespace is the MySQL cluster namespace
+ type: string
+ type: object
+ password:
+ description: Password is the password for the user.
+ properties:
+ key:
+ description: The key of the secret to select from. Must be
+ a valid secret key.
+ type: string
+ name:
+ description: 'Name of the referent. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/names/#names
+ TODO: Add other useful fields. apiVersion, kind, uid?'
+ type: string
+ optional:
+ description: Specify whether the Secret or its key must be defined
+ type: boolean
+ required:
+ - key
+ type: object
+ permissions:
+                  description: Permissions is the list of roles that the user has in the
+ specified database.
+ items:
+ description: MysqlPermission defines a MySQL schema permission
+ properties:
+ permissions:
+ description: Permissions represents the permissions granted
+ on the schema/tables
+ items:
+ type: string
+ type: array
+ schema:
+ description: Schema represents the schema to which the permission
+ applies
+ type: string
+ tables:
+ description: Tables represents the tables inside the schema
+ to which the permission applies
+ items:
+ type: string
+ type: array
+ required:
+ - permissions
+ - schema
+ - tables
+ type: object
+ type: array
+ resourceLimits:
+ additionalProperties:
+ anyOf:
+ - type: integer
+ - type: string
+ pattern: ^(\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))(([KMGTPE]i)|[numkMGTPE]|([eE](\+|-)?(([0-9]+(\.[0-9]*)?)|(\.[0-9]+))))?$
+ x-kubernetes-int-or-string: true
+                  description: 'ResourceLimits allows setting limits per mysql user
+ as defined here: https://dev.mysql.com/doc/refman/5.7/en/user-resources.html'
+ type: object
+ user:
+                  description: User is the name of the user that will be created and
+ will access the specified database. This field should be immutable.
+ type: string
+ required:
+ - allowedHosts
+ - clusterRef
+ - password
+ - user
+ type: object
+ status:
+ description: MysqlUserStatus defines the observed state of MysqlUser
+ properties:
+ allowedHosts:
+ description: AllowedHosts contains the list of hosts that the user
+ is allowed to connect from.
+ items:
+ type: string
+ type: array
+ conditions:
+ description: Conditions represents the MysqlUser resource conditions
+ list.
+ items:
+ description: MySQLUserCondition defines the condition struct for
+ a MysqlUser resource
+ properties:
+ lastTransitionTime:
+ description: Last time the condition transitioned from one
+ status to another.
+ format: date-time
+ type: string
+ lastUpdateTime:
+ description: The last time this condition was updated.
+ format: date-time
+ type: string
+ message:
+ description: A human readable message indicating details about
+ the transition.
+ type: string
+ reason:
+ description: The reason for the condition's last transition.
+ type: string
+ status:
+ description: Status of the condition, one of True, False,
+ Unknown.
+ type: string
+ type:
+ description: Type of MysqlUser condition.
+ type: string
+ required:
+ - lastTransitionTime
+ - message
+ - reason
+ - status
+ - type
+ type: object
+ type: array
+ type: object
+ type: object
+ served: true
+ storage: true
+ subresources:
+ status: {}
+ status:
+ acceptedNames:
+ kind: MysqlUser
+ listKind: MysqlUserList
+ plural: mysqlusers
+ singular: mysqluser
+ conditions:
+ - lastTransitionTime: "2023-11-01T09:02:38Z"
+ message: no conflicts found
+ reason: NoConflicts
+ status: "True"
+ type: NamesAccepted
+ - lastTransitionTime: "2023-11-01T09:02:38Z"
+ message: the initial names have been accepted
+ reason: InitialNamesAccepted
+ status: "True"
+ type: Established
+ storedVersions:
+ - v1alpha1
+kind: List
+metadata:
+ resourceVersion: ""
diff --git a/test/e2e/deploy/mysql-operator/mysqll-operator.yaml b/test/e2e/deploy/mysql-operator/mysqll-operator.yaml
new file mode 100644
index 000000000..7f6475440
--- /dev/null
+++ b/test/e2e/deploy/mysql-operator/mysqll-operator.yaml
@@ -0,0 +1,281 @@
+apiVersion: v1
+kind: Namespace
+metadata:
+ name: mysql-operator
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+ name: mysql-operator
+ namespace: mysql-operator
+spec:
+ selector:
+ matchLabels:
+ app.kubernetes.io/instance: mysql-operator
+ app.kubernetes.io/name: mysql-operator
+ serviceName: mysql-operator-orc
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/instance: mysql-operator
+ app.kubernetes.io/name: mysql-operator
+ spec:
+ containers:
+ - args:
+ - --leader-election-namespace=mysql-operator
+ - --orchestrator-uri=http://mysql-operator.mysql-operator/api
+ - --sidecar-image=docker.io/bitpoke/mysql-operator-sidecar-5.7:v0.6.3
+ - --sidecar-mysql8-image=docker.io/bitpoke/mysql-operator-sidecar-8.0:v0.6.3
+ - --metrics-exporter-image=docker.io/prom/mysqld-exporter:v0.13.0
+ - --failover-before-shutdown=true
+ env:
+ - name: ORC_TOPOLOGY_USER
+ valueFrom:
+ secretKeyRef:
+ key: TOPOLOGY_USER
+ name: mysql-operator-orc
+ - name: ORC_TOPOLOGY_PASSWORD
+ valueFrom:
+ secretKeyRef:
+ key: TOPOLOGY_PASSWORD
+ name: mysql-operator-orc
+ image: docker.io/bitpoke/mysql-operator:v0.6.3
+ imagePullPolicy: IfNotPresent
+ livenessProbe:
+ failureThreshold: 3
+ httpGet:
+ path: /healthz
+ port: 8081
+ scheme: HTTP
+ periodSeconds: 10
+ successThreshold: 1
+ timeoutSeconds: 1
+ name: operator
+ ports:
+ - containerPort: 8080
+ name: prometheus
+ protocol: TCP
+ readinessProbe:
+ failureThreshold: 3
+ httpGet:
+ path: /readyz
+ port: 8081
+ scheme: HTTP
+ periodSeconds: 10
+ successThreshold: 1
+ timeoutSeconds: 1
+ - env:
+ - name: POD_IP
+ valueFrom:
+ fieldRef:
+ apiVersion: v1
+ fieldPath: status.podIP
+ envFrom:
+ - prefix: ORC_
+ secretRef:
+ name: mysql-operator-orc
+ image: docker.io/bitpoke/mysql-operator-orchestrator:v0.6.3
+ imagePullPolicy: IfNotPresent
+ livenessProbe:
+ failureThreshold: 3
+ httpGet:
+ path: /api/lb-check
+ port: 3000
+ scheme: HTTP
+ initialDelaySeconds: 200
+ periodSeconds: 10
+ successThreshold: 1
+ timeoutSeconds: 10
+ name: orchestrator
+ ports:
+ - containerPort: 3000
+ name: http
+ protocol: TCP
+ - containerPort: 10008
+ name: raft
+ protocol: TCP
+ readinessProbe:
+ failureThreshold: 3
+ httpGet:
+ path: /api/raft-health
+ port: 3000
+ scheme: HTTP
+ periodSeconds: 10
+ successThreshold: 1
+ timeoutSeconds: 10
+ volumeMounts:
+ - mountPath: /var/lib/orchestrator
+ name: data
+ - mountPath: /usr/local/share/orchestrator/templates
+ name: config
+ restartPolicy: Always
+ securityContext:
+ fsGroup: 65532
+ runAsGroup: 65532
+ runAsNonRoot: true
+ runAsUser: 65532
+ serviceAccount: mysql-operator
+ serviceAccountName: mysql-operator
+ volumes:
+ - configMap:
+ defaultMode: 420
+ name: mysql-operator-orc
+ name: config
+        - emptyDir: {}
+ name: data
+ updateStrategy:
+ rollingUpdate:
+ partition: 0
+ type: RollingUpdate
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: mysql-operator
+ namespace: mysql-operator
+spec:
+ ipFamilies:
+ - IPv4
+ ipFamilyPolicy: SingleStack
+ ports:
+ - name: http
+ port: 80
+ protocol: TCP
+ targetPort: http
+ - name: prometheus
+ port: 9125
+ protocol: TCP
+ targetPort: prometheus
+ selector:
+ app.kubernetes.io/instance: mysql-operator
+ app.kubernetes.io/name: mysql-operator
+ type: ClusterIP
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: mysql-operator-0-orc-svc
+ namespace: mysql-operator
+spec:
+ ipFamilies:
+ - IPv4
+ ipFamilyPolicy: SingleStack
+ ports:
+ - name: http
+ port: 80
+ protocol: TCP
+ targetPort: 3000
+ - name: raft
+ port: 10008
+ protocol: TCP
+ targetPort: 10008
+ publishNotReadyAddresses: true
+ selector:
+ statefulset.kubernetes.io/pod-name: mysql-operator-0
+ type: ClusterIP
+---
+apiVersion: v1
+data:
+ TOPOLOGY_PASSWORD: S2ZISEdUOHNOdQ==
+ TOPOLOGY_USER: b3JjaGVzdHJhdG9y
+kind: Secret
+metadata:
+ name: mysql-operator-orc
+ namespace: mysql-operator
+type: Opaque
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+ name: mysql-operator
+ namespace: mysql-operator
+---
+apiVersion: v1
+data:
+ orc-topology.cnf: |
+ [client]
+ user = {{ .Env.ORC_TOPOLOGY_USER }}
+ password = {{ .Env.ORC_TOPOLOGY_PASSWORD }}
+ orchestrator.conf.json: |-
+ {
+ "ApplyMySQLPromotionAfterMasterFailover": true,
+ "BackendDB": "sqlite",
+ "Debug": false,
+ "DetachLostReplicasAfterMasterFailover": true,
+ "DetectClusterAliasQuery": "SELECT CONCAT(SUBSTRING(@@hostname, 1, LENGTH(@@hostname) - 1 - LENGTH(SUBSTRING_INDEX(@@hostname,'-',-2))),'.',SUBSTRING_INDEX(@@report_host,'.',-1))",
+ "DetectInstanceAliasQuery": "SELECT @@hostname",
+ "DiscoverByShowSlaveHosts": false,
+ "FailMasterPromotionIfSQLThreadNotUpToDate": true,
+ "HTTPAdvertise": "http://{{ .Env.HOSTNAME }}-orc-svc:80",
+ "HostnameResolveMethod": "none",
+ "InstancePollSeconds": 5,
+ "ListenAddress": ":3000",
+ "MasterFailoverLostInstancesDowntimeMinutes": 10,
+ "MySQLHostnameResolveMethod": "@@report_host",
+ "MySQLTopologyCredentialsConfigFile": "/etc/orchestrator/orc-topology.cnf",
+ "OnFailureDetectionProcesses": [
+        "/usr/local/bin/orc-helper event -w '{failureClusterAlias}' 'OrcFailureDetection' 'Failure: {failureType}, failed host: {failedHost}, lost replicas: {lostReplicas}' || true",
+ "/usr/local/bin/orc-helper failover-in-progress '{failureClusterAlias}' '{failureDescription}' || true"
+ ],
+ "PostIntermediateMasterFailoverProcesses": [
+ "/usr/local/bin/orc-helper event '{failureClusterAlias}' 'OrcPostIntermediateMasterFailover' 'Failure type: {failureType}, failed hosts: {failedHost}, slaves: {countSlaves}' || true"
+ ],
+ "PostMasterFailoverProcesses": [
+ "/usr/local/bin/orc-helper event '{failureClusterAlias}' 'OrcPostMasterFailover' 'Failure type: {failureType}, new master: {successorHost}, slaves: {slaveHosts}' || true"
+ ],
+ "PostUnsuccessfulFailoverProcesses": [
+ "/usr/local/bin/orc-helper event -w '{failureClusterAlias}' 'OrcPostUnsuccessfulFailover' 'Failure: {failureType}, failed host: {failedHost} with {countSlaves} slaves' || true"
+ ],
+ "PreFailoverProcesses": [
+ "/usr/local/bin/orc-helper failover-in-progress '{failureClusterAlias}' '{failureDescription}' || true"
+ ],
+ "ProcessesShellCommand": "sh",
+ "RaftAdvertise": "{{ .Env.HOSTNAME }}-orc-svc",
+ "RaftBind": "{{ .Env.HOSTNAME }}",
+ "RaftDataDir": "/var/lib/orchestrator",
+ "RaftEnabled": true,
+ "RaftNodes": [],
+ "RecoverIntermediateMasterClusterFilters": [
+ ".*"
+ ],
+ "RecoverMasterClusterFilters": [
+ ".*"
+ ],
+ "RecoveryIgnoreHostnameFilters": [],
+ "RecoveryPeriodBlockSeconds": 300,
+ "RemoveTextFromHostnameDisplay": ":3306",
+ "SQLite3DataFile": "/var/lib/orchestrator/orc.db",
+ "SlaveLagQuery": "SELECT TIMESTAMPDIFF(SECOND,ts,UTC_TIMESTAMP()) as drift FROM sys_operator.heartbeat ORDER BY drift ASC LIMIT 1",
+ "UnseenInstanceForgetHours": 1
+ }
+kind: ConfigMap
+metadata:
+ name: mysql-operator-orc
+ namespace: mysql-operator
diff --git a/test/e2e/deploy/mysql-operator/rbac.yaml b/test/e2e/deploy/mysql-operator/rbac.yaml
new file mode 100644
index 000000000..9ef882fd4
--- /dev/null
+++ b/test/e2e/deploy/mysql-operator/rbac.yaml
@@ -0,0 +1,152 @@
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+ name: mysql-operator
+rules:
+ - apiGroups:
+ - apps
+ resources:
+ - statefulsets
+ verbs:
+ - create
+ - delete
+ - get
+ - list
+ - patch
+ - update
+ - watch
+ - apiGroups:
+ - batch
+ resources:
+ - jobs
+ verbs:
+ - create
+ - delete
+ - get
+ - list
+ - patch
+ - update
+ - watch
+ - apiGroups:
+ - coordination.k8s.io
+ resources:
+ - leases
+ verbs:
+ - create
+ - delete
+ - get
+ - list
+ - patch
+ - update
+ - watch
+ - apiGroups:
+ - ""
+ resources:
+ - configmaps
+ - events
+ - jobs
+ - persistentvolumeclaims
+ - pods
+ - secrets
+ - services
+ verbs:
+ - create
+ - delete
+ - get
+ - list
+ - patch
+ - update
+ - watch
+ - apiGroups:
+ - ""
+ resources:
+ - pods/status
+ verbs:
+ - create
+ - delete
+ - get
+ - list
+ - patch
+ - update
+ - watch
+ - apiGroups:
+ - mysql.presslabs.org
+ resources:
+ - mysqlbackups
+ - mysqlbackups/finalizers
+ - mysqlbackups/status
+ verbs:
+ - create
+ - delete
+ - get
+ - list
+ - patch
+ - update
+ - watch
+ - apiGroups:
+ - mysql.presslabs.org
+ resources:
+ - mysqlclusters
+ - mysqlclusters/finalizers
+ - mysqlclusters/status
+ verbs:
+ - create
+ - delete
+ - get
+ - list
+ - patch
+ - update
+ - watch
+ - apiGroups:
+ - mysql.presslabs.org
+ resources:
+ - mysqldatabases
+ - mysqldatabases/finalizers
+ - mysqldatabases/status
+ verbs:
+ - create
+ - delete
+ - get
+ - list
+ - patch
+ - update
+ - watch
+ - apiGroups:
+ - mysql.presslabs.org
+ resources:
+ - mysqlusers
+ - mysqlusers/status
+ verbs:
+ - create
+ - delete
+ - get
+ - list
+ - patch
+ - update
+ - watch
+ - apiGroups:
+ - policy
+ resources:
+ - poddisruptionbudgets
+ verbs:
+ - create
+ - delete
+ - get
+ - list
+ - patch
+ - update
+ - watch
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+ name: mysql-operator
+roleRef:
+ apiGroup: rbac.authorization.k8s.io
+ kind: ClusterRole
+ name: mysql-operator
+subjects:
+ - kind: ServiceAccount
+ name: mysql-operator
+ namespace: mysql-operator
diff --git a/test/e2e/deploy/nginx/nginx.yaml b/test/e2e/deploy/nginx/nginx.yaml
new file mode 100644
index 000000000..5c3c8a1ac
--- /dev/null
+++ b/test/e2e/deploy/nginx/nginx.yaml
@@ -0,0 +1,67 @@
+apiVersion: v1
+kind: Namespace
+metadata:
+ name: kosmos-e2e
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: nginx-deployment
+ namespace: kosmos-e2e
+spec:
+ replicas: 2
+ selector:
+ matchLabels:
+ app: nginx
+ template:
+ metadata:
+ labels:
+ app: nginx
+ spec:
+ tolerations:
+ - key: "kosmos.io/node"
+ operator: "Equal"
+ value: "true"
+ effect: "NoSchedule"
+ affinity:
+ nodeAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: kubernetes.io/hostname
+ operator: NotIn
+ values:
+ - cluster-host-control-plane
+ - cluster-host-worker
+ podAntiAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ - labelSelector:
+ matchLabels:
+ app: nginx
+ topologyKey: kubernetes.io/hostname
+ containers:
+ - name: nginx
+ image: nginx:latest
+ imagePullPolicy: IfNotPresent
+ ports:
+ - containerPort: 80
+---
+apiVersion: v1
+kind: Service
+metadata:
+ name: nginx-service
+ namespace: kosmos-e2e
+ annotations:
+ kosmos.io/auto-create-mcs: "true"
+spec:
+ ipFamilies:
+ - IPv4
+ ipFamilyPolicy: SingleStack
+ selector:
+ app: nginx
+ ports:
+ - protocol: TCP
+ port: 80
+ targetPort: 80
+ nodePort: 31443
+ type: NodePort
diff --git a/test/e2e/elector_test.go b/test/e2e/elector_test.go
index 9c34e626f..eb35cbc48 100644
--- a/test/e2e/elector_test.go
+++ b/test/e2e/elector_test.go
@@ -15,7 +15,7 @@ var _ = ginkgo.Describe("elector testing", func() {
ginkgo.Context("gateway role add test", func() {
ginkgo.It("Check if gateway role gateway role is set correctly", func() {
gomega.Eventually(func(g gomega.Gomega) (bool, error) {
- clusterNodes, err := clusterLinkClient.KosmosV1alpha1().ClusterNodes().List(context.TODO(), metav1.ListOptions{})
+ clusterNodes, err := hostClusterLinkClient.KosmosV1alpha1().ClusterNodes().List(context.TODO(), metav1.ListOptions{})
if err != nil {
return false, err
}
diff --git a/test/e2e/framework/cluster.go b/test/e2e/framework/cluster.go
index 4a5a8b6ab..da11c1d20 100644
--- a/test/e2e/framework/cluster.go
+++ b/test/e2e/framework/cluster.go
@@ -7,22 +7,78 @@ import (
"log"
"os/exec"
+ "github.com/onsi/ginkgo/v2"
+ "github.com/onsi/gomega"
+ corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/client-go/kubernetes"
"k8s.io/client-go/rest"
"k8s.io/client-go/tools/clientcmd"
"k8s.io/klog/v2"
"github.com/kosmos.io/kosmos/hack/projectpath"
- clusterlinkv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
"github.com/kosmos.io/kosmos/pkg/generated/clientset/versioned"
)
-func FetchClusters(client versioned.Interface) ([]clusterlinkv1alpha1.Cluster, error) {
+func FetchClusters(client versioned.Interface) ([]kosmosv1alpha1.Cluster, error) {
clusters, err := client.KosmosV1alpha1().Clusters().List(context.TODO(), metav1.ListOptions{})
if err != nil {
return nil, err
}
- return clusters.Items, nil
+ return clusters.DeepCopy().Items, nil
+}
+
+func CreateClusters(client versioned.Interface, cluster *kosmosv1alpha1.Cluster) (err error) {
+ _, err = client.KosmosV1alpha1().Clusters().Create(context.TODO(), cluster, metav1.CreateOptions{})
+ if err != nil {
+ return err
+ }
+ return nil
+}
+
+func DeleteClusters(client versioned.Interface, clustername string) (err error) {
+ err = client.KosmosV1alpha1().Clusters().Delete(context.TODO(), clustername, metav1.DeleteOptions{})
+ if err != nil {
+ return err
+ }
+ return nil
+}
+
+func FetchNodes(client kubernetes.Interface) ([]corev1.Node, error) {
+ nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
+ if err != nil {
+ return nil, err
+ }
+ return nodes.DeepCopy().Items, nil
+}
+
+func DeleteNode(client kubernetes.Interface, node string) (err error) {
+ err = client.CoreV1().Nodes().Delete(context.TODO(), node, metav1.DeleteOptions{})
+ if err != nil {
+ return err
+ }
+ return nil
+}
+
+func UpdateNodeLabels(client kubernetes.Interface, node corev1.Node) (err error) {
+ _, err = client.CoreV1().Nodes().Update(context.TODO(), &node, metav1.UpdateOptions{})
+ if err != nil {
+ return err
+ }
+ return nil
+}
+
+func WaitNodePresentOnCluster(client kubernetes.Interface, node string) {
+ ginkgo.By(fmt.Sprintf("Waiting for node(%v) on host cluster", node), func() {
+ gomega.Eventually(func() bool {
+ _, err := client.CoreV1().Nodes().Get(context.TODO(), node, metav1.GetOptions{})
+ if err != nil {
+				klog.Errorf("Failed to get node(%v) on host cluster, err: %v", node, err)
+ return false
+ }
+ return true
+ }, PollTimeout, PollInterval).Should(gomega.Equal(true))
+ })
}
func LoadRESTClientConfig(kubeconfig string, context string) (*rest.Config, error) {
diff --git a/test/e2e/framework/deployment_sample.go b/test/e2e/framework/deployment_sample.go
new file mode 100644
index 000000000..37034f0c7
--- /dev/null
+++ b/test/e2e/framework/deployment_sample.go
@@ -0,0 +1,169 @@
+package framework
+
+import (
+ "context"
+ "fmt"
+ "time"
+
+ "github.com/onsi/ginkgo/v2"
+ "github.com/onsi/gomega"
+ appsv1 "k8s.io/api/apps/v1"
+ corev1 "k8s.io/api/core/v1"
+ apierrors "k8s.io/apimachinery/pkg/api/errors"
+ "k8s.io/apimachinery/pkg/api/resource"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/client-go/kubernetes"
+ "k8s.io/klog/v2"
+)
+
+const (
+ // PollInterval defines the interval time for a poll operation.
+ PollInterval = 15 * time.Second
+
+ // PollTimeout defines the time after which the poll operation times out.
+ PollTimeout = 180 * time.Second
+)
+
+func NewDeployment(namespace, name string, replicas *int32, nodes []string) *appsv1.Deployment {
+ return &appsv1.Deployment{
+ TypeMeta: metav1.TypeMeta{
+ APIVersion: "apps/v1",
+ Kind: "Deployment",
+ },
+
+ ObjectMeta: metav1.ObjectMeta{
+ Namespace: namespace,
+ Name: name,
+ },
+
+ Spec: appsv1.DeploymentSpec{
+ Replicas: replicas,
+
+ Selector: &metav1.LabelSelector{
+ MatchLabels: map[string]string{
+ "app": name,
+ },
+ },
+
+ Template: corev1.PodTemplateSpec{
+ ObjectMeta: metav1.ObjectMeta{
+ Labels: map[string]string{
+ "app": name,
+ },
+ },
+
+ Spec: corev1.PodSpec{
+ Tolerations: []corev1.Toleration{
+ {
+ Key: "kosmos.io/node",
+ Operator: corev1.TolerationOpEqual,
+ Value: "true",
+ Effect: corev1.TaintEffectNoSchedule,
+ },
+ },
+
+ HostNetwork: true,
+
+ Affinity: &corev1.Affinity{
+ NodeAffinity: &corev1.NodeAffinity{
+ RequiredDuringSchedulingIgnoredDuringExecution: &corev1.NodeSelector{
+ NodeSelectorTerms: []corev1.NodeSelectorTerm{
+ {
+ MatchExpressions: []corev1.NodeSelectorRequirement{
+ {
+ Key: "kubernetes.io/hostname",
+ Operator: corev1.NodeSelectorOpIn,
+ Values: nodes,
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+
+ Containers: []corev1.Container{
+ {
+ Name: "nginx-container",
+ Image: "registry.paas/cmss/nginx:1.14.2",
+
+ Ports: []corev1.ContainerPort{
+ {
+ ContainerPort: 80,
+ Protocol: "TCP",
+ },
+ },
+
+ Resources: corev1.ResourceRequirements{
+ Limits: map[corev1.ResourceName]resource.Quantity{
+ corev1.ResourceCPU: resource.MustParse("100m"),
+ },
+ },
+ },
+ },
+ },
+ },
+ },
+ }
+}
+
+func CreateDeployment(client kubernetes.Interface, deployment *appsv1.Deployment) {
+ ginkgo.By(fmt.Sprintf("Creating Deployment(%s/%s)", deployment.Namespace, deployment.Name), func() {
+ _, err := client.AppsV1().Deployments(deployment.Namespace).Create(context.TODO(), deployment, metav1.CreateOptions{})
+ if err != nil {
+			klog.Errorf("Failed to create deployment, err: %v", err)
+ gomega.Expect(apierrors.IsAlreadyExists(err)).Should(gomega.Equal(true))
+ }
+ })
+}
+
+func WaitDeploymentPresentOnCluster(client kubernetes.Interface, namespace, name, cluster string) {
+ ginkgo.By(fmt.Sprintf("Waiting for deployment(%v/%v) on cluster(%v)", namespace, name, cluster), func() {
+ gomega.Eventually(func() bool {
+ _, err := client.AppsV1().Deployments(namespace).Get(context.TODO(), name, metav1.GetOptions{})
+ if err != nil {
+ klog.Errorf("Failed to get deployment(%s/%s) on cluster(%s), err: %v", namespace, name, cluster, err)
+ return false
+ }
+ return true
+ }, PollTimeout, PollInterval).Should(gomega.Equal(true))
+ })
+}
+
+func RemoveDeploymentOnCluster(client kubernetes.Interface, namespace, name string) {
+ ginkgo.By(fmt.Sprintf("Removing Deployment(%s/%s)", namespace, name), func() {
+ err := client.AppsV1().Deployments(namespace).Delete(context.TODO(), name, metav1.DeleteOptions{})
+ if err == nil || apierrors.IsNotFound(err) {
+ return
+ }
+ gomega.Expect(err).ShouldNot(gomega.HaveOccurred())
+ })
+}
+
+func HasElement(str string, strs []string) bool {
+ for _, e := range strs {
+ if e == str {
+ return true
+ }
+ }
+ return false
+}
+
+func WaitPodPresentOnCluster(client kubernetes.Interface, namespace, cluster string, nodes []string, opt metav1.ListOptions) {
+ ginkgo.By(fmt.Sprintf("Waiting for pod of the deployment on cluster(%v)", cluster), func() {
+ gomega.Eventually(func() bool {
+ pods, err := client.CoreV1().Pods(namespace).List(context.TODO(), opt)
+ if err != nil {
+ klog.Errorf("Failed to get pod on cluster(%s), err: %v", cluster, err)
+ return false
+ }
+
+ for _, pod := range pods.Items {
+ if HasElement(pod.Spec.NodeName, nodes) {
+ return true
+ }
+ }
+ return false
+ }, PollTimeout, PollInterval).Should(gomega.Equal(true))
+ })
+}
diff --git a/test/e2e/leaf_node_test.go b/test/e2e/leaf_node_test.go
new file mode 100644
index 000000000..2e6c5c12b
--- /dev/null
+++ b/test/e2e/leaf_node_test.go
@@ -0,0 +1,269 @@
+// nolint:dupl
+package e2e
+
+import (
+ "fmt"
+ "reflect"
+
+ "github.com/onsi/ginkgo/v2"
+ "github.com/onsi/gomega"
+ appsv1 "k8s.io/api/apps/v1"
+ corev1 "k8s.io/api/core/v1"
+ apierrors "k8s.io/apimachinery/pkg/api/errors"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+
+ kosmosv1alpha1 "github.com/kosmos.io/kosmos/pkg/apis/kosmos/v1alpha1"
+ "github.com/kosmos.io/kosmos/pkg/utils"
+ "github.com/kosmos.io/kosmos/test/e2e/framework"
+)
+
+const (
+ ONE2CLUSTER = "-one2cluster"
+ ONE2NODE = "-one2node"
+ ONE2PARTY = "-one2party"
+)
+
+var (
+ one2Cluster *kosmosv1alpha1.Cluster
+ one2Node *kosmosv1alpha1.Cluster
+ one2Party *kosmosv1alpha1.Cluster
+ partyNodeNames []string
+ memberNodeNames []string
+)
+
+var _ = ginkgo.Describe("Test leaf node mode -- one2cluster, one2node, one2party", func() {
+ ginkgo.BeforeEach(func() {
+ clusters, err := framework.FetchClusters(hostClusterLinkClient)
+ gomega.Expect(err).ShouldNot(gomega.HaveOccurred())
+ gomega.Expect(clusters).ShouldNot(gomega.BeEmpty())
+ partyNodeNames = make([]string, 0)
+ memberNodeNames = make([]string, 0)
+
+ for _, cluster := range clusters {
+ if cluster.Name == "cluster-member1" {
+ nodes, err := framework.FetchNodes(firstKubeClient)
+ gomega.Expect(err).ShouldNot(gomega.HaveOccurred())
+ cluster.ResourceVersion = ""
+
+ one2Cluster = cluster.DeepCopy()
+ one2Cluster.Name += ONE2CLUSTER
+ one2Cluster.Spec.ClusterTreeOptions.Enable = true
+ one2Cluster.Spec.ClusterTreeOptions.LeafModels = nil
+
+ one2Node = cluster.DeepCopy()
+ one2Node.Name += ONE2NODE
+ one2Node.Spec.ClusterTreeOptions.Enable = true
+
+ one2Party = cluster.DeepCopy()
+ one2Party.Name += ONE2PARTY
+
+ nodeLeafModels := make([]kosmosv1alpha1.LeafModel, 0)
+ for i, node := range nodes {
+ if i < 2 {
+ nodeLabels := node.Labels
+ if nodeLabels == nil {
+							nodeLabels = make(map[string]string)
+ }
+ nodeLabels["test-leaf-party-mode"] = "yes"
+ node.SetLabels(nodeLabels)
+ node.ResourceVersion = ""
+ err = framework.UpdateNodeLabels(firstKubeClient, node)
+ gomega.Expect(err).ShouldNot(gomega.HaveOccurred())
+ }
+
+ nodeLeaf := kosmosv1alpha1.LeafModel{
+ LeafNodeName: one2Node.Name,
+ Taints: []corev1.Taint{
+ {
+ Effect: utils.KosmosNodeTaintEffect,
+ Key: utils.KosmosNodeTaintKey,
+ Value: utils.KosmosNodeValue,
+ },
+ },
+ NodeSelector: kosmosv1alpha1.NodeSelector{
+ NodeName: node.Name,
+ LabelSelector: nil,
+ },
+ }
+ nodeLeafModels = append(nodeLeafModels, nodeLeaf)
+ memberNodeNames = append(memberNodeNames, node.Name)
+
+ }
+ one2Node.Spec.ClusterTreeOptions.LeafModels = nodeLeafModels
+
+ partyLeaf := kosmosv1alpha1.LeafModel{
+ LeafNodeName: one2Party.Name,
+ Taints: []corev1.Taint{
+ {
+ Effect: utils.KosmosNodeTaintEffect,
+ Key: utils.KosmosNodeTaintKey,
+ Value: utils.KosmosNodeValue,
+ },
+ },
+ NodeSelector: kosmosv1alpha1.NodeSelector{
+ LabelSelector: &metav1.LabelSelector{
+ MatchLabels: map[string]string{
+ "test-leaf-party-mode": "yes",
+ },
+ },
+ },
+ }
+
+ partyNodeNames = append(partyNodeNames, fmt.Sprintf("%v%v%v", utils.KosmosNodePrefix, partyLeaf.LeafNodeName, "-0"))
+ one2Party.Spec.ClusterTreeOptions.LeafModels = []kosmosv1alpha1.LeafModel{partyLeaf}
+
+ break
+ }
+ }
+ })
+
+ ginkgo.Context("Test one2cluster mode", func() {
+ var deploy *appsv1.Deployment
+ ginkgo.BeforeEach(func() {
+ err := framework.DeleteClusters(hostClusterLinkClient, one2Cluster.Name)
+ if err != nil && !apierrors.IsNotFound(err) {
+ gomega.Expect(err).ShouldNot(gomega.HaveOccurred())
+ }
+ err = framework.CreateClusters(hostClusterLinkClient, one2Cluster)
+ gomega.Expect(err).ShouldNot(gomega.HaveOccurred())
+
+ framework.WaitNodePresentOnCluster(hostKubeClient, utils.KosmosNodePrefix+one2Cluster.GetName())
+ })
+
+ ginkgo.It("Test one2cluster mode", func() {
+ ginkgo.By("Test one2cluster mode", func() {
+ nodeNameInRoot := utils.KosmosNodePrefix + one2Cluster.GetName()
+ nodes := []string{nodeNameInRoot}
+ deployName := one2Cluster.GetName() + "-nginx"
+ rp := int32(1)
+ deploy = framework.NewDeployment(corev1.NamespaceDefault, deployName, &rp, nodes)
+ framework.RemoveDeploymentOnCluster(hostKubeClient, deploy.Namespace, deploy.Name)
+ framework.CreateDeployment(hostKubeClient, deploy)
+
+ framework.WaitDeploymentPresentOnCluster(hostKubeClient, deploy.Namespace, deploy.Name, one2Cluster.Name)
+
+ opt := metav1.ListOptions{
+ LabelSelector: fmt.Sprintf("app=%v", deployName),
+ }
+ framework.WaitPodPresentOnCluster(hostKubeClient, deploy.Namespace, one2Cluster.Name, nodes, opt)
+ framework.WaitPodPresentOnCluster(firstKubeClient, deploy.Namespace, one2Cluster.Name, memberNodeNames, opt)
+ })
+ })
+ ginkgo.AfterEach(func() {
+			if deploy != nil && !reflect.DeepEqual(*deploy, appsv1.Deployment{}) {
+ framework.RemoveDeploymentOnCluster(hostKubeClient, deploy.Namespace, deploy.Name)
+ }
+
+ err := framework.DeleteClusters(hostClusterLinkClient, one2Cluster.Name)
+ gomega.Expect(err).ShouldNot(gomega.HaveOccurred())
+
+ err = framework.DeleteNode(hostKubeClient, utils.KosmosNodePrefix+one2Cluster.GetName())
+ if err != nil && !apierrors.IsNotFound(err) {
+ gomega.Expect(err).ShouldNot(gomega.HaveOccurred())
+ }
+ })
+ })
+
+ ginkgo.Context("Test one2node mode", func() {
+ var deploy *appsv1.Deployment
+ ginkgo.BeforeEach(func() {
+ err := framework.DeleteClusters(hostClusterLinkClient, one2Node.Name)
+ if err != nil && !apierrors.IsNotFound(err) {
+ gomega.Expect(err).ShouldNot(gomega.HaveOccurred())
+ }
+ err = framework.CreateClusters(hostClusterLinkClient, one2Node)
+ gomega.Expect(err).ShouldNot(gomega.HaveOccurred())
+ if len(memberNodeNames) > 0 {
+ framework.WaitNodePresentOnCluster(hostKubeClient, memberNodeNames[0])
+ }
+ })
+
+ ginkgo.It("Test one2node mode", func() {
+			ginkgo.By("Test one2node mode", func() {
+ deployName := one2Node.GetName() + "-nginx"
+ rp := int32(1)
+ deploy = framework.NewDeployment(corev1.NamespaceDefault, deployName, &rp, memberNodeNames)
+ framework.RemoveDeploymentOnCluster(hostKubeClient, deploy.Namespace, deploy.Name)
+ framework.CreateDeployment(hostKubeClient, deploy)
+
+ framework.WaitDeploymentPresentOnCluster(hostKubeClient, deploy.Namespace, deploy.Name, one2Node.Name)
+
+ opt := metav1.ListOptions{
+ LabelSelector: fmt.Sprintf("app=%v", deployName),
+ }
+ framework.WaitPodPresentOnCluster(hostKubeClient, deploy.Namespace, one2Node.Name, memberNodeNames, opt)
+ framework.WaitPodPresentOnCluster(firstKubeClient, deploy.Namespace, one2Node.Name, memberNodeNames, opt)
+ })
+ })
+
+ ginkgo.AfterEach(func() {
+			if deploy != nil && !reflect.DeepEqual(*deploy, appsv1.Deployment{}) {
+ framework.RemoveDeploymentOnCluster(hostKubeClient, deploy.Namespace, deploy.Name)
+ }
+
+ err := framework.DeleteClusters(hostClusterLinkClient, one2Node.Name)
+ gomega.Expect(err).ShouldNot(gomega.HaveOccurred())
+
+ if len(memberNodeNames) > 0 {
+ for _, node := range memberNodeNames {
+ err = framework.DeleteNode(hostKubeClient, node)
+ if err != nil && !apierrors.IsNotFound(err) {
+ gomega.Expect(err).ShouldNot(gomega.HaveOccurred())
+ }
+ }
+ }
+ })
+ })
+
+ ginkgo.Context("Test one2party mode", func() {
+ var deploy *appsv1.Deployment
+ ginkgo.BeforeEach(func() {
+ err := framework.DeleteClusters(hostClusterLinkClient, one2Party.Name)
+ if err != nil && !apierrors.IsNotFound(err) {
+ gomega.Expect(err).ShouldNot(gomega.HaveOccurred())
+ }
+ err = framework.CreateClusters(hostClusterLinkClient, one2Party)
+ gomega.Expect(err).ShouldNot(gomega.HaveOccurred())
+
+ if len(partyNodeNames) > 0 {
+ framework.WaitNodePresentOnCluster(hostKubeClient, partyNodeNames[0])
+ }
+ })
+
+ ginkgo.It("Test one2party mode", func() {
+ ginkgo.By("Test one2party mode", func() {
+ deployName := one2Party.GetName() + "-nginx"
+ rp := int32(1)
+ deploy = framework.NewDeployment(corev1.NamespaceDefault, deployName, &rp, partyNodeNames)
+ framework.RemoveDeploymentOnCluster(hostKubeClient, deploy.Namespace, deploy.Name)
+ framework.CreateDeployment(hostKubeClient, deploy)
+
+ framework.WaitDeploymentPresentOnCluster(hostKubeClient, deploy.Namespace, deploy.Name, one2Party.Name)
+
+ opt := metav1.ListOptions{
+ LabelSelector: fmt.Sprintf("app=%v", deployName),
+ }
+ framework.WaitPodPresentOnCluster(hostKubeClient, deploy.Namespace, one2Party.Name, partyNodeNames, opt)
+				framework.WaitPodPresentOnCluster(firstKubeClient, deploy.Namespace, one2Party.Name, partyNodeNames, opt)
+ })
+ })
+ ginkgo.AfterEach(func() {
+ if deploy != nil && !reflect.DeepEqual(deploy, appsv1.Deployment{}) {
+ framework.RemoveDeploymentOnCluster(hostKubeClient, deploy.Namespace, deploy.Name)
+ }
+
+ err := framework.DeleteClusters(hostClusterLinkClient, one2Party.Name)
+ gomega.Expect(err).ShouldNot(gomega.HaveOccurred())
+
+ if len(partyNodeNames) > 0 {
+ for _, node := range partyNodeNames {
+ err = framework.DeleteNode(hostKubeClient, node)
+ if err != nil && !apierrors.IsNotFound(err) {
+ gomega.Expect(err).ShouldNot(gomega.HaveOccurred())
+ }
+ }
+ }
+ })
+ })
+})
diff --git a/test/e2e/suit_test.go b/test/e2e/suit_test.go
index 3f5fc191a..02cc95cd7 100644
--- a/test/e2e/suit_test.go
+++ b/test/e2e/suit_test.go
@@ -21,15 +21,20 @@ var (
pollInterval time.Duration
// pollTimeout defines the time after which the poll operation times out.
pollTimeout time.Duration
-)
-var (
- kubeconfig string
- hostContext string
- restConfig *rest.Config
- kubeClient kubernetes.Interface
- dynamicClient dynamic.Interface
- clusterLinkClient versioned.Interface
+ kubeconfig = os.Getenv("KUBECONFIG")
+
+ // host clusters
+ hostContext string
+ hostKubeClient kubernetes.Interface
+ hostDynamicClient dynamic.Interface
+ hostClusterLinkClient versioned.Interface
+
+ // first-cluster
+ firstContext string
+ firstRestConfig *rest.Config
+ firstKubeClient kubernetes.Interface
+ firstDynamicClient dynamic.Interface
)
const (
@@ -43,6 +48,7 @@ func init() {
flag.DurationVar(&pollInterval, "poll-interval", 5*time.Second, "poll-interval defines the interval time for a poll operation")
flag.DurationVar(&pollTimeout, "poll-timeout", 300*time.Second, "poll-timeout defines the time which the poll operation times out")
flag.StringVar(&hostContext, "host-context", "kind-cluster-host", "name of the host cluster context in kubeconfig file.")
+ flag.StringVar(&firstContext, "first-context", "kind-cluster-member1", "name of the first member cluster context in kubeconfig file.")
}
func TestE2E(t *testing.T) {
@@ -53,15 +59,23 @@ func TestE2E(t *testing.T) {
var _ = ginkgo.SynchronizedBeforeSuite(func() []byte {
return nil
}, func(bytes []byte) {
- kubeconfig = os.Getenv("KUBECONFIG")
+	// Initialize the clients connecting to the host and first member clusters
gomega.Expect(kubeconfig).ShouldNot(gomega.BeEmpty())
- config, err := framework.LoadRESTClientConfig(kubeconfig, hostContext)
+ hostRestConfig, err := framework.LoadRESTClientConfig(kubeconfig, hostContext)
gomega.Expect(err).ShouldNot(gomega.HaveOccurred())
- restConfig = config
- kubeClient, err = kubernetes.NewForConfig(restConfig)
+ hostKubeClient, err = kubernetes.NewForConfig(hostRestConfig)
gomega.Expect(err).ShouldNot(gomega.HaveOccurred())
- clusterLinkClient, err = versioned.NewForConfig(restConfig)
+ hostDynamicClient, err = dynamic.NewForConfig(hostRestConfig)
gomega.Expect(err).ShouldNot(gomega.HaveOccurred())
- dynamicClient, err = dynamic.NewForConfig(restConfig)
+ hostClusterLinkClient, err = versioned.NewForConfig(hostRestConfig)
gomega.Expect(err).ShouldNot(gomega.HaveOccurred())
+
+	// clients for the first member cluster
+ firstRestConfig, err = framework.LoadRESTClientConfig(kubeconfig, firstContext)
+ gomega.Expect(err).ShouldNot(gomega.HaveOccurred())
+ firstKubeClient, err = kubernetes.NewForConfig(firstRestConfig)
+ gomega.Expect(err).ShouldNot(gomega.HaveOccurred())
+ firstDynamicClient, err = dynamic.NewForConfig(firstRestConfig)
+ gomega.Expect(err).ShouldNot(gomega.HaveOccurred())
+
})
diff --git a/vendor/github.com/Microsoft/go-winio/.gitattributes b/vendor/github.com/Microsoft/go-winio/.gitattributes
new file mode 100644
index 000000000..94f480de9
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/.gitattributes
@@ -0,0 +1 @@
+* text=auto eol=lf
\ No newline at end of file
diff --git a/vendor/github.com/Microsoft/go-winio/.gitignore b/vendor/github.com/Microsoft/go-winio/.gitignore
new file mode 100644
index 000000000..815e20660
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/.gitignore
@@ -0,0 +1,10 @@
+.vscode/
+
+*.exe
+
+# testing
+testdata
+
+# go workspaces
+go.work
+go.work.sum
diff --git a/vendor/github.com/Microsoft/go-winio/.golangci.yml b/vendor/github.com/Microsoft/go-winio/.golangci.yml
new file mode 100644
index 000000000..af403bb13
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/.golangci.yml
@@ -0,0 +1,144 @@
+run:
+ skip-dirs:
+ - pkg/etw/sample
+
+linters:
+ enable:
+ # style
+ - containedctx # struct contains a context
+ - dupl # duplicate code
+    - errname # errors are named correctly
+ - goconst # strings that should be constants
+ - godot # comments end in a period
+ - misspell
+ - nolintlint # "//nolint" directives are properly explained
+ - revive # golint replacement
+ - stylecheck # golint replacement, less configurable than revive
+ - unconvert # unnecessary conversions
+ - wastedassign
+
+ # bugs, performance, unused, etc ...
+ - contextcheck # function uses a non-inherited context
+ - errorlint # errors not wrapped for 1.13
+ - exhaustive # check exhaustiveness of enum switch statements
+ - gofmt # files are gofmt'ed
+ - gosec # security
+ - nestif # deeply nested ifs
+ - nilerr # returns nil even with non-nil error
+ - prealloc # slices that can be pre-allocated
+ - structcheck # unused struct fields
+ - unparam # unused function params
+
+issues:
+ exclude-rules:
+ # err is very often shadowed in nested scopes
+ - linters:
+ - govet
+ text: '^shadow: declaration of "err" shadows declaration'
+
+ # ignore long lines for skip autogen directives
+ - linters:
+ - revive
+ text: "^line-length-limit: "
+ source: "^//(go:generate|sys) "
+
+ # allow unjustified ignores of error checks in defer statements
+ - linters:
+ - nolintlint
+ text: "^directive `//nolint:errcheck` should provide explanation"
+ source: '^\s*defer '
+
+ # allow unjustified ignores of error lints for io.EOF
+ - linters:
+ - nolintlint
+ text: "^directive `//nolint:errorlint` should provide explanation"
+ source: '[=|!]= io.EOF'
+
+
+linters-settings:
+ govet:
+ enable-all: true
+ disable:
+ # struct order is often for Win32 compat
+ # also, ignore pointer bytes/GC issues for now until performance becomes an issue
+ - fieldalignment
+ check-shadowing: true
+ nolintlint:
+ allow-leading-space: false
+ require-explanation: true
+ require-specific: true
+ revive:
+    # revive is more configurable than staticcheck, so likely the preferred alternative to staticcheck
+ # (once the perf issue is solved: https://github.com/golangci/golangci-lint/issues/2997)
+ enable-all-rules:
+ true
+ # https://github.com/mgechev/revive/blob/master/RULES_DESCRIPTIONS.md
+ rules:
+ # rules with required arguments
+ - name: argument-limit
+ disabled: true
+ - name: banned-characters
+ disabled: true
+ - name: cognitive-complexity
+ disabled: true
+ - name: cyclomatic
+ disabled: true
+ - name: file-header
+ disabled: true
+ - name: function-length
+ disabled: true
+ - name: function-result-limit
+ disabled: true
+ - name: max-public-structs
+ disabled: true
+      # generally annoying rules
+ - name: add-constant # complains about any and all strings and integers
+ disabled: true
+ - name: confusing-naming # we frequently use "Foo()" and "foo()" together
+ disabled: true
+ - name: flag-parameter # excessive, and a common idiom we use
+ disabled: true
+ # general config
+ - name: line-length-limit
+ arguments:
+ - 140
+ - name: var-naming
+ arguments:
+ - []
+ - - CID
+ - CRI
+ - CTRD
+ - DACL
+ - DLL
+ - DOS
+ - ETW
+ - FSCTL
+ - GCS
+ - GMSA
+ - HCS
+ - HV
+ - IO
+ - LCOW
+ - LDAP
+ - LPAC
+ - LTSC
+ - MMIO
+ - NT
+ - OCI
+ - PMEM
+ - PWSH
+ - RX
+          - SACL
+ - SID
+ - SMB
+ - TX
+ - VHD
+ - VHDX
+ - VMID
+ - VPCI
+ - WCOW
+ - WIM
+ stylecheck:
+ checks:
+ - "all"
+ - "-ST1003" # use revive's var naming
diff --git a/vendor/github.com/Microsoft/go-winio/CODEOWNERS b/vendor/github.com/Microsoft/go-winio/CODEOWNERS
new file mode 100644
index 000000000..ae1b4942b
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/CODEOWNERS
@@ -0,0 +1 @@
+ * @microsoft/containerplat
diff --git a/vendor/github.com/Microsoft/go-winio/LICENSE b/vendor/github.com/Microsoft/go-winio/LICENSE
new file mode 100644
index 000000000..b8b569d77
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/LICENSE
@@ -0,0 +1,22 @@
+The MIT License (MIT)
+
+Copyright (c) 2015 Microsoft
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
diff --git a/vendor/github.com/Microsoft/go-winio/README.md b/vendor/github.com/Microsoft/go-winio/README.md
new file mode 100644
index 000000000..7474b4f0b
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/README.md
@@ -0,0 +1,89 @@
+# go-winio [![Build Status](https://github.com/microsoft/go-winio/actions/workflows/ci.yml/badge.svg)](https://github.com/microsoft/go-winio/actions/workflows/ci.yml)
+
+This repository contains utilities for efficiently performing Win32 IO operations in
+Go. Currently, this is focused on accessing named pipes and other file handles, and
+for using named pipes as a net transport.
+
+This code relies on IO completion ports to avoid blocking IO on system threads, allowing Go
+to reuse the thread to schedule another goroutine. This limits support to Windows Vista and
+newer operating systems. This is similar to the implementation of network sockets in Go's net
+package.
+
+Please see the LICENSE file for licensing information.
+
+## Contributing
+
+This project welcomes contributions and suggestions.
+Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that
+you have the right to, and actually do, grant us the rights to use your contribution.
+For details, visit [Microsoft CLA](https://cla.microsoft.com).
+
+When you submit a pull request, a CLA-bot will automatically determine whether you need to
+provide a CLA and decorate the PR appropriately (e.g., label, comment).
+Simply follow the instructions provided by the bot.
+You will only need to do this once across all repos using our CLA.
+
+Additionally, the pull request pipeline requires the following steps to be performed before
+merging.
+
+### Code Sign-Off
+
+We require that contributors sign their commits using [`git commit --signoff`][git-commit-s]
+to certify they either authored the work themselves or otherwise have permission to use it in this project.
+
+A range of commits can be signed off using [`git rebase --signoff`][git-rebase-s].
+
+Please see [the developer certificate](https://developercertificate.org) for more info,
+as well as to make sure that you can attest to the rules listed.
+Our CI uses the DCO Github app to ensure that all commits in a given PR are signed-off.
+
+### Linting
+
+Code must pass a linting stage, which uses [`golangci-lint`][lint].
+The linting settings are stored in [`.golangci.yml`](./.golangci.yml), and can be run
+automatically with VSCode by adding the following to your workspace or folder settings:
+
+```json
+ "go.lintTool": "golangci-lint",
+ "go.lintOnSave": "package",
+```
+
+Additional editor [integrations options are also available][lint-ide].
+
+Alternatively, `golangci-lint` can be [installed locally][lint-install] and run from the repo root:
+
+```shell
+# use . or specify a path to only lint a package
+# to show all lint errors, use flags "--max-issues-per-linter=0 --max-same-issues=0"
+> golangci-lint run ./...
+```
+
+### Go Generate
+
+The pipeline checks that auto-generated code, via `go generate`, is up to date.
+
+This can be done for the entire repo:
+
+```shell
+> go generate ./...
+```
+
+## Code of Conduct
+
+This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
+For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
+contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
+
+## Special Thanks
+
+Thanks to [natefinch][natefinch] for the inspiration for this library.
+See [npipe](https://github.com/natefinch/npipe) for another named pipe implementation.
+
+[lint]: https://golangci-lint.run/
+[lint-ide]: https://golangci-lint.run/usage/integrations/#editor-integration
+[lint-install]: https://golangci-lint.run/usage/install/#local-installation
+
+[git-commit-s]: https://git-scm.com/docs/git-commit#Documentation/git-commit.txt--s
+[git-rebase-s]: https://git-scm.com/docs/git-rebase#Documentation/git-rebase.txt---signoff
+
+[natefinch]: https://github.com/natefinch
diff --git a/vendor/github.com/Microsoft/go-winio/SECURITY.md b/vendor/github.com/Microsoft/go-winio/SECURITY.md
new file mode 100644
index 000000000..869fdfe2b
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/SECURITY.md
@@ -0,0 +1,41 @@
+
+
+## Security
+
+Microsoft takes the security of our software products and services seriously, which includes all source code repositories managed through our GitHub organizations, which include [Microsoft](https://github.com/Microsoft), [Azure](https://github.com/Azure), [DotNet](https://github.com/dotnet), [AspNet](https://github.com/aspnet), [Xamarin](https://github.com/xamarin), and [our GitHub organizations](https://opensource.microsoft.com/).
+
+If you believe you have found a security vulnerability in any Microsoft-owned repository that meets [Microsoft's definition of a security vulnerability](https://aka.ms/opensource/security/definition), please report it to us as described below.
+
+## Reporting Security Issues
+
+**Please do not report security vulnerabilities through public GitHub issues.**
+
+Instead, please report them to the Microsoft Security Response Center (MSRC) at [https://msrc.microsoft.com/create-report](https://aka.ms/opensource/security/create-report).
+
+If you prefer to submit without logging in, send email to [secure@microsoft.com](mailto:secure@microsoft.com). If possible, encrypt your message with our PGP key; please download it from the [Microsoft Security Response Center PGP Key page](https://aka.ms/opensource/security/pgpkey).
+
+You should receive a response within 24 hours. If for some reason you do not, please follow up via email to ensure we received your original message. Additional information can be found at [microsoft.com/msrc](https://aka.ms/opensource/security/msrc).
+
+Please include the requested information listed below (as much as you can provide) to help us better understand the nature and scope of the possible issue:
+
+ * Type of issue (e.g. buffer overflow, SQL injection, cross-site scripting, etc.)
+ * Full paths of source file(s) related to the manifestation of the issue
+ * The location of the affected source code (tag/branch/commit or direct URL)
+ * Any special configuration required to reproduce the issue
+ * Step-by-step instructions to reproduce the issue
+ * Proof-of-concept or exploit code (if possible)
+ * Impact of the issue, including how an attacker might exploit the issue
+
+This information will help us triage your report more quickly.
+
+If you are reporting for a bug bounty, more complete reports can contribute to a higher bounty award. Please visit our [Microsoft Bug Bounty Program](https://aka.ms/opensource/security/bounty) page for more details about our active programs.
+
+## Preferred Languages
+
+We prefer all communications to be in English.
+
+## Policy
+
+Microsoft follows the principle of [Coordinated Vulnerability Disclosure](https://aka.ms/opensource/security/cvd).
+
+
diff --git a/vendor/github.com/Microsoft/go-winio/backup.go b/vendor/github.com/Microsoft/go-winio/backup.go
new file mode 100644
index 000000000..09621c884
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/backup.go
@@ -0,0 +1,290 @@
+//go:build windows
+// +build windows
+
+package winio
+
+import (
+ "encoding/binary"
+ "errors"
+ "fmt"
+ "io"
+ "os"
+ "runtime"
+ "syscall"
+ "unicode/utf16"
+
+ "golang.org/x/sys/windows"
+)
+
+//sys backupRead(h syscall.Handle, b []byte, bytesRead *uint32, abort bool, processSecurity bool, context *uintptr) (err error) = BackupRead
+//sys backupWrite(h syscall.Handle, b []byte, bytesWritten *uint32, abort bool, processSecurity bool, context *uintptr) (err error) = BackupWrite
+
+const (
+ BackupData = uint32(iota + 1)
+ BackupEaData
+ BackupSecurity
+ BackupAlternateData
+ BackupLink
+ BackupPropertyData
+ BackupObjectId //revive:disable-line:var-naming ID, not Id
+ BackupReparseData
+ BackupSparseBlock
+ BackupTxfsData
+)
+
+const (
+ StreamSparseAttributes = uint32(8)
+)
+
+//nolint:revive // var-naming: ALL_CAPS
+const (
+ WRITE_DAC = windows.WRITE_DAC
+ WRITE_OWNER = windows.WRITE_OWNER
+ ACCESS_SYSTEM_SECURITY = windows.ACCESS_SYSTEM_SECURITY
+)
+
+// BackupHeader represents a backup stream of a file.
+type BackupHeader struct {
+ //revive:disable-next-line:var-naming ID, not Id
+ Id uint32 // The backup stream ID
+ Attributes uint32 // Stream attributes
+ Size int64 // The size of the stream in bytes
+ Name string // The name of the stream (for BackupAlternateData only).
+ Offset int64 // The offset of the stream in the file (for BackupSparseBlock only).
+}
+
+type win32StreamID struct {
+ StreamID uint32
+ Attributes uint32
+ Size uint64
+ NameSize uint32
+}
+
+// BackupStreamReader reads from a stream produced by the BackupRead Win32 API and produces a series
+// of BackupHeader values.
+type BackupStreamReader struct {
+ r io.Reader
+ bytesLeft int64
+}
+
+// NewBackupStreamReader produces a BackupStreamReader from any io.Reader.
+func NewBackupStreamReader(r io.Reader) *BackupStreamReader {
+ return &BackupStreamReader{r, 0}
+}
+
+// Next returns the next backup stream and prepares for calls to Read(). It skips the remainder of the current stream if
+// it was not completely read.
+func (r *BackupStreamReader) Next() (*BackupHeader, error) {
+ if r.bytesLeft > 0 { //nolint:nestif // todo: flatten this
+ if s, ok := r.r.(io.Seeker); ok {
+ // Make sure Seek on io.SeekCurrent sometimes succeeds
+ // before trying the actual seek.
+ if _, err := s.Seek(0, io.SeekCurrent); err == nil {
+ if _, err = s.Seek(r.bytesLeft, io.SeekCurrent); err != nil {
+ return nil, err
+ }
+ r.bytesLeft = 0
+ }
+ }
+ if _, err := io.Copy(io.Discard, r); err != nil {
+ return nil, err
+ }
+ }
+ var wsi win32StreamID
+ if err := binary.Read(r.r, binary.LittleEndian, &wsi); err != nil {
+ return nil, err
+ }
+ hdr := &BackupHeader{
+ Id: wsi.StreamID,
+ Attributes: wsi.Attributes,
+ Size: int64(wsi.Size),
+ }
+ if wsi.NameSize != 0 {
+ name := make([]uint16, int(wsi.NameSize/2))
+ if err := binary.Read(r.r, binary.LittleEndian, name); err != nil {
+ return nil, err
+ }
+ hdr.Name = syscall.UTF16ToString(name)
+ }
+ if wsi.StreamID == BackupSparseBlock {
+ if err := binary.Read(r.r, binary.LittleEndian, &hdr.Offset); err != nil {
+ return nil, err
+ }
+ hdr.Size -= 8
+ }
+ r.bytesLeft = hdr.Size
+ return hdr, nil
+}
+
+// Read reads from the current backup stream.
+func (r *BackupStreamReader) Read(b []byte) (int, error) {
+ if r.bytesLeft == 0 {
+ return 0, io.EOF
+ }
+ if int64(len(b)) > r.bytesLeft {
+ b = b[:r.bytesLeft]
+ }
+ n, err := r.r.Read(b)
+ r.bytesLeft -= int64(n)
+ if err == io.EOF {
+ err = io.ErrUnexpectedEOF
+ } else if r.bytesLeft == 0 && err == nil {
+ err = io.EOF
+ }
+ return n, err
+}
+
+// BackupStreamWriter writes a stream compatible with the BackupWrite Win32 API.
+type BackupStreamWriter struct {
+ w io.Writer
+ bytesLeft int64
+}
+
+// NewBackupStreamWriter produces a BackupStreamWriter on top of an io.Writer.
+func NewBackupStreamWriter(w io.Writer) *BackupStreamWriter {
+ return &BackupStreamWriter{w, 0}
+}
+
+// WriteHeader writes the next backup stream header and prepares for calls to Write().
+func (w *BackupStreamWriter) WriteHeader(hdr *BackupHeader) error {
+ if w.bytesLeft != 0 {
+ return fmt.Errorf("missing %d bytes", w.bytesLeft)
+ }
+ name := utf16.Encode([]rune(hdr.Name))
+ wsi := win32StreamID{
+ StreamID: hdr.Id,
+ Attributes: hdr.Attributes,
+ Size: uint64(hdr.Size),
+ NameSize: uint32(len(name) * 2),
+ }
+ if hdr.Id == BackupSparseBlock {
+ // Include space for the int64 block offset
+ wsi.Size += 8
+ }
+ if err := binary.Write(w.w, binary.LittleEndian, &wsi); err != nil {
+ return err
+ }
+ if len(name) != 0 {
+ if err := binary.Write(w.w, binary.LittleEndian, name); err != nil {
+ return err
+ }
+ }
+ if hdr.Id == BackupSparseBlock {
+ if err := binary.Write(w.w, binary.LittleEndian, hdr.Offset); err != nil {
+ return err
+ }
+ }
+ w.bytesLeft = hdr.Size
+ return nil
+}
+
+// Write writes to the current backup stream.
+func (w *BackupStreamWriter) Write(b []byte) (int, error) {
+ if w.bytesLeft < int64(len(b)) {
+ return 0, fmt.Errorf("too many bytes by %d", int64(len(b))-w.bytesLeft)
+ }
+ n, err := w.w.Write(b)
+ w.bytesLeft -= int64(n)
+ return n, err
+}
+
+// BackupFileReader provides an io.ReadCloser interface on top of the BackupRead Win32 API.
+type BackupFileReader struct {
+ f *os.File
+ includeSecurity bool
+ ctx uintptr
+}
+
+// NewBackupFileReader returns a new BackupFileReader from a file handle. If includeSecurity is true,
+// Read will attempt to read the security descriptor of the file.
+func NewBackupFileReader(f *os.File, includeSecurity bool) *BackupFileReader {
+ r := &BackupFileReader{f, includeSecurity, 0}
+ return r
+}
+
+// Read reads a backup stream from the file by calling the Win32 API BackupRead().
+func (r *BackupFileReader) Read(b []byte) (int, error) {
+ var bytesRead uint32
+ err := backupRead(syscall.Handle(r.f.Fd()), b, &bytesRead, false, r.includeSecurity, &r.ctx)
+ if err != nil {
+ return 0, &os.PathError{Op: "BackupRead", Path: r.f.Name(), Err: err}
+ }
+ runtime.KeepAlive(r.f)
+ if bytesRead == 0 {
+ return 0, io.EOF
+ }
+ return int(bytesRead), nil
+}
+
+// Close frees Win32 resources associated with the BackupFileReader. It does not close
+// the underlying file.
+func (r *BackupFileReader) Close() error {
+ if r.ctx != 0 {
+ _ = backupRead(syscall.Handle(r.f.Fd()), nil, nil, true, false, &r.ctx)
+ runtime.KeepAlive(r.f)
+ r.ctx = 0
+ }
+ return nil
+}
+
+// BackupFileWriter provides an io.WriteCloser interface on top of the BackupWrite Win32 API.
+type BackupFileWriter struct {
+ f *os.File
+ includeSecurity bool
+ ctx uintptr
+}
+
+// NewBackupFileWriter returns a new BackupFileWriter from a file handle. If includeSecurity is true,
+// Write() will attempt to restore the security descriptor from the stream.
+func NewBackupFileWriter(f *os.File, includeSecurity bool) *BackupFileWriter {
+ w := &BackupFileWriter{f, includeSecurity, 0}
+ return w
+}
+
+// Write restores a portion of the file using the provided backup stream.
+func (w *BackupFileWriter) Write(b []byte) (int, error) {
+ var bytesWritten uint32
+ err := backupWrite(syscall.Handle(w.f.Fd()), b, &bytesWritten, false, w.includeSecurity, &w.ctx)
+ if err != nil {
+ return 0, &os.PathError{Op: "BackupWrite", Path: w.f.Name(), Err: err}
+ }
+ runtime.KeepAlive(w.f)
+ if int(bytesWritten) != len(b) {
+ return int(bytesWritten), errors.New("not all bytes could be written")
+ }
+ return len(b), nil
+}
+
+// Close frees Win32 resources associated with the BackupFileWriter. It does not
+// close the underlying file.
+func (w *BackupFileWriter) Close() error {
+ if w.ctx != 0 {
+ _ = backupWrite(syscall.Handle(w.f.Fd()), nil, nil, true, false, &w.ctx)
+ runtime.KeepAlive(w.f)
+ w.ctx = 0
+ }
+ return nil
+}
+
+// OpenForBackup opens a file or directory, potentially skipping access checks if the backup
+// or restore privileges have been acquired.
+//
+// If the file opened was a directory, it cannot be used with Readdir().
+func OpenForBackup(path string, access uint32, share uint32, createmode uint32) (*os.File, error) {
+ winPath, err := syscall.UTF16FromString(path)
+ if err != nil {
+ return nil, err
+ }
+ h, err := syscall.CreateFile(&winPath[0],
+ access,
+ share,
+ nil,
+ createmode,
+ syscall.FILE_FLAG_BACKUP_SEMANTICS|syscall.FILE_FLAG_OPEN_REPARSE_POINT,
+ 0)
+ if err != nil {
+ err = &os.PathError{Op: "open", Path: path, Err: err}
+ return nil, err
+ }
+ return os.NewFile(uintptr(h), path), nil
+}
diff --git a/vendor/github.com/Microsoft/go-winio/backuptar/doc.go b/vendor/github.com/Microsoft/go-winio/backuptar/doc.go
new file mode 100644
index 000000000..965d52ab0
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/backuptar/doc.go
@@ -0,0 +1,3 @@
+// This file only exists to allow go get on non-Windows platforms.
+
+package backuptar
diff --git a/vendor/github.com/Microsoft/go-winio/backuptar/strconv.go b/vendor/github.com/Microsoft/go-winio/backuptar/strconv.go
new file mode 100644
index 000000000..455fd798e
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/backuptar/strconv.go
@@ -0,0 +1,70 @@
+//go:build windows
+
+package backuptar
+
+import (
+ "archive/tar"
+ "fmt"
+ "strconv"
+ "strings"
+ "time"
+)
+
+// Functions copied from https://github.com/golang/go/blob/master/src/archive/tar/strconv.go
+// as we need to manage the LIBARCHIVE.creationtime PAXRecord manually.
+// Idea taken from containerd which did the same thing.
+
+// parsePAXTime takes a string of the form %d.%d as described in the PAX
+// specification. Note that this implementation allows for negative timestamps,
+// which is allowed for by the PAX specification, but not always portable.
+func parsePAXTime(s string) (time.Time, error) {
+ const maxNanoSecondDigits = 9
+
+ // Split string into seconds and sub-seconds parts.
+ ss, sn := s, ""
+ if pos := strings.IndexByte(s, '.'); pos >= 0 {
+ ss, sn = s[:pos], s[pos+1:]
+ }
+
+ // Parse the seconds.
+ secs, err := strconv.ParseInt(ss, 10, 64)
+ if err != nil {
+ return time.Time{}, tar.ErrHeader
+ }
+ if len(sn) == 0 {
+ return time.Unix(secs, 0), nil // No sub-second values
+ }
+
+ // Parse the nanoseconds.
+ if strings.Trim(sn, "0123456789") != "" {
+ return time.Time{}, tar.ErrHeader
+ }
+ if len(sn) < maxNanoSecondDigits {
+ sn += strings.Repeat("0", maxNanoSecondDigits-len(sn)) // Right pad
+ } else {
+ sn = sn[:maxNanoSecondDigits] // Right truncate
+ }
+ nsecs, _ := strconv.ParseInt(sn, 10, 64) // Must succeed
+ if len(ss) > 0 && ss[0] == '-' {
+ return time.Unix(secs, -1*nsecs), nil // Negative correction
+ }
+ return time.Unix(secs, nsecs), nil
+}
+
+// formatPAXTime converts ts into a time of the form %d.%d as described in the
+// PAX specification. This function is capable of negative timestamps.
+func formatPAXTime(ts time.Time) (s string) {
+ secs, nsecs := ts.Unix(), ts.Nanosecond()
+ if nsecs == 0 {
+ return strconv.FormatInt(secs, 10)
+ }
+
+ // If seconds is negative, then perform correction.
+ sign := ""
+ if secs < 0 {
+ sign = "-" // Remember sign
+ secs = -(secs + 1) // Add a second to secs
+ nsecs = -(nsecs - 1e9) // Take that second away from nsecs
+ }
+ return strings.TrimRight(fmt.Sprintf("%s%d.%09d", sign, secs, nsecs), "0")
+}
diff --git a/vendor/github.com/Microsoft/go-winio/backuptar/tar.go b/vendor/github.com/Microsoft/go-winio/backuptar/tar.go
new file mode 100644
index 000000000..6b3b0cd51
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/backuptar/tar.go
@@ -0,0 +1,509 @@
+//go:build windows
+// +build windows
+
+package backuptar
+
+import (
+ "archive/tar"
+ "encoding/base64"
+ "fmt"
+ "io"
+ "path/filepath"
+ "strconv"
+ "strings"
+ "syscall"
+ "time"
+
+ "github.com/Microsoft/go-winio"
+ "golang.org/x/sys/windows"
+)
+
+//nolint:deadcode,varcheck // keep unused constants for potential future use
+const (
+ cISUID = 0004000 // Set uid
+ cISGID = 0002000 // Set gid
+ cISVTX = 0001000 // Save text (sticky bit)
+ cISDIR = 0040000 // Directory
+ cISFIFO = 0010000 // FIFO
+ cISREG = 0100000 // Regular file
+ cISLNK = 0120000 // Symbolic link
+ cISBLK = 0060000 // Block special file
+ cISCHR = 0020000 // Character special file
+ cISSOCK = 0140000 // Socket
+)
+
+const (
+ hdrFileAttributes = "MSWINDOWS.fileattr"
+ hdrSecurityDescriptor = "MSWINDOWS.sd"
+ hdrRawSecurityDescriptor = "MSWINDOWS.rawsd"
+ hdrMountPoint = "MSWINDOWS.mountpoint"
+ hdrEaPrefix = "MSWINDOWS.xattr."
+
+ hdrCreationTime = "LIBARCHIVE.creationtime"
+)
+
+// zeroReader is an io.Reader that always returns 0s.
+type zeroReader struct{}
+
+func (zeroReader) Read(b []byte) (int, error) {
+ for i := range b {
+ b[i] = 0
+ }
+ return len(b), nil
+}
+
+func copySparse(t *tar.Writer, br *winio.BackupStreamReader) error {
+ curOffset := int64(0)
+ for {
+ bhdr, err := br.Next()
+ if err == io.EOF { //nolint:errorlint
+ err = io.ErrUnexpectedEOF
+ }
+ if err != nil {
+ return err
+ }
+ if bhdr.Id != winio.BackupSparseBlock {
+ return fmt.Errorf("unexpected stream %d", bhdr.Id)
+ }
+
+ // We can't seek backwards, since we have already written that data to the tar.Writer.
+ if bhdr.Offset < curOffset {
+ return fmt.Errorf("cannot seek back from %d to %d", curOffset, bhdr.Offset)
+ }
+ // archive/tar does not support writing sparse files
+ // so just write zeroes to catch up to the current offset.
+ if _, err = io.CopyN(t, zeroReader{}, bhdr.Offset-curOffset); err != nil {
+ return fmt.Errorf("seek to offset %d: %w", bhdr.Offset, err)
+ }
+ if bhdr.Size == 0 {
+ // A sparse block with size = 0 is used to mark the end of the sparse blocks.
+ break
+ }
+ n, err := io.Copy(t, br)
+ if err != nil {
+ return err
+ }
+ if n != bhdr.Size {
+ return fmt.Errorf("copied %d bytes instead of %d at offset %d", n, bhdr.Size, bhdr.Offset)
+ }
+ curOffset = bhdr.Offset + n
+ }
+ return nil
+}
+
+// BasicInfoHeader creates a tar header from basic file information.
+func BasicInfoHeader(name string, size int64, fileInfo *winio.FileBasicInfo) *tar.Header {
+ hdr := &tar.Header{
+ Format: tar.FormatPAX,
+ Name: filepath.ToSlash(name),
+ Size: size,
+ Typeflag: tar.TypeReg,
+ ModTime: time.Unix(0, fileInfo.LastWriteTime.Nanoseconds()),
+ ChangeTime: time.Unix(0, fileInfo.ChangeTime.Nanoseconds()),
+ AccessTime: time.Unix(0, fileInfo.LastAccessTime.Nanoseconds()),
+ PAXRecords: make(map[string]string),
+ }
+ hdr.PAXRecords[hdrFileAttributes] = fmt.Sprintf("%d", fileInfo.FileAttributes)
+ hdr.PAXRecords[hdrCreationTime] = formatPAXTime(time.Unix(0, fileInfo.CreationTime.Nanoseconds()))
+
+ if (fileInfo.FileAttributes & syscall.FILE_ATTRIBUTE_DIRECTORY) != 0 {
+ hdr.Mode |= cISDIR
+ hdr.Size = 0
+ hdr.Typeflag = tar.TypeDir
+ }
+ return hdr
+}
+
+// SecurityDescriptorFromTarHeader reads the security descriptor associated with
+// the current file from the tar header and returns it as a byte slice.
+func SecurityDescriptorFromTarHeader(hdr *tar.Header) ([]byte, error) {
+ if sdraw, ok := hdr.PAXRecords[hdrRawSecurityDescriptor]; ok {
+ sd, err := base64.StdEncoding.DecodeString(sdraw)
+ if err != nil {
+ // Not returning sd as-is in the error-case, as base64.DecodeString
+ // may return partially decoded data (not nil or empty slice) in case
+ // of a failure: https://github.com/golang/go/blob/go1.17.7/src/encoding/base64/base64.go#L382-L387
+ return nil, err
+ }
+ return sd, nil
+ }
+ // Maintaining old SDDL-based behavior for backward compatibility. All new
+ // tar headers written by this library will have raw binary for the security
+ // descriptor.
+ if sddl, ok := hdr.PAXRecords[hdrSecurityDescriptor]; ok {
+ return winio.SddlToSecurityDescriptor(sddl)
+ }
+ return nil, nil
+}
+
+// ExtendedAttributesFromTarHeader reads the EAs associated with the
+// current file from the tar header and returns them as an encoded byte slice.
+func ExtendedAttributesFromTarHeader(hdr *tar.Header) ([]byte, error) {
+ var eas []winio.ExtendedAttribute //nolint:prealloc // len(eas) <= len(hdr.PAXRecords); prealloc is wasteful
+ for k, v := range hdr.PAXRecords {
+ if !strings.HasPrefix(k, hdrEaPrefix) {
+ continue
+ }
+ data, err := base64.StdEncoding.DecodeString(v)
+ if err != nil {
+ return nil, err
+ }
+ eas = append(eas, winio.ExtendedAttribute{
+ Name: k[len(hdrEaPrefix):],
+ Value: data,
+ })
+ }
+ var eaData []byte
+ var err error
+ if len(eas) != 0 {
+ eaData, err = winio.EncodeExtendedAttributes(eas)
+ if err != nil {
+ return nil, err
+ }
+ }
+ return eaData, nil
+}
+
+// EncodeReparsePointFromTarHeader reads the ReparsePoint structure from the tar header
+// and encodes it into a byte slice. The file for which this function is called must be a
+// symlink.
+func EncodeReparsePointFromTarHeader(hdr *tar.Header) []byte {
+ _, isMountPoint := hdr.PAXRecords[hdrMountPoint]
+ rp := winio.ReparsePoint{
+ Target: filepath.FromSlash(hdr.Linkname),
+ IsMountPoint: isMountPoint,
+ }
+ return winio.EncodeReparsePoint(&rp)
+}
+
+// WriteTarFileFromBackupStream writes a file to a tar writer using data from a Win32 backup stream.
+//
+// This encodes Win32 metadata as tar pax vendor extensions starting with MSWINDOWS.
+//
+// The additional Win32 metadata is:
+//
+// - MSWINDOWS.fileattr: The Win32 file attributes, as a decimal value
+// - MSWINDOWS.rawsd: The Win32 security descriptor, in raw binary format
+// - MSWINDOWS.mountpoint: If present, this is a mount point and not a symlink, even though the type is '2' (symlink)
+func WriteTarFileFromBackupStream(t *tar.Writer, r io.Reader, name string, size int64, fileInfo *winio.FileBasicInfo) error {
+ name = filepath.ToSlash(name)
+ hdr := BasicInfoHeader(name, size, fileInfo)
+
+	// If r is seekable, then this function is two-pass: pass 1 collects the
+	// tar header data, and pass 2 copies the data stream. If r is not
+	// seekable, then some header data (in particular EAs) will be silently lost.
+ var (
+ restartPos int64
+ err error
+ )
+ sr, readTwice := r.(io.Seeker)
+ if readTwice {
+ if restartPos, err = sr.Seek(0, io.SeekCurrent); err != nil {
+ readTwice = false
+ }
+ }
+
+ br := winio.NewBackupStreamReader(r)
+ var dataHdr *winio.BackupHeader
+ for dataHdr == nil {
+ bhdr, err := br.Next()
+ if err == io.EOF { //nolint:errorlint
+ break
+ }
+ if err != nil {
+ return err
+ }
+ switch bhdr.Id {
+ case winio.BackupData:
+ hdr.Mode |= cISREG
+ if !readTwice {
+ dataHdr = bhdr
+ }
+ case winio.BackupSecurity:
+ sd, err := io.ReadAll(br)
+ if err != nil {
+ return err
+ }
+ hdr.PAXRecords[hdrRawSecurityDescriptor] = base64.StdEncoding.EncodeToString(sd)
+
+ case winio.BackupReparseData:
+ hdr.Mode |= cISLNK
+ hdr.Typeflag = tar.TypeSymlink
+ reparseBuffer, _ := io.ReadAll(br)
+ rp, err := winio.DecodeReparsePoint(reparseBuffer)
+ if err != nil {
+ return err
+ }
+ if rp.IsMountPoint {
+ hdr.PAXRecords[hdrMountPoint] = "1"
+ }
+ hdr.Linkname = rp.Target
+
+ case winio.BackupEaData:
+ eab, err := io.ReadAll(br)
+ if err != nil {
+ return err
+ }
+ eas, err := winio.DecodeExtendedAttributes(eab)
+ if err != nil {
+ return err
+ }
+ for _, ea := range eas {
+ // Use base64 encoding for the binary value. Note that there
+ // is no way to encode the EA's flags, since their use doesn't
+ // make any sense for persisted EAs.
+ hdr.PAXRecords[hdrEaPrefix+ea.Name] = base64.StdEncoding.EncodeToString(ea.Value)
+ }
+
+ case winio.BackupAlternateData, winio.BackupLink, winio.BackupPropertyData, winio.BackupObjectId, winio.BackupTxfsData:
+ // ignore these streams
+ default:
+ return fmt.Errorf("%s: unknown stream ID %d", name, bhdr.Id)
+ }
+ }
+
+ err = t.WriteHeader(hdr)
+ if err != nil {
+ return err
+ }
+
+ if readTwice {
+ // Get back to the data stream.
+ if _, err = sr.Seek(restartPos, io.SeekStart); err != nil {
+ return err
+ }
+ for dataHdr == nil {
+ bhdr, err := br.Next()
+ if err == io.EOF { //nolint:errorlint
+ break
+ }
+ if err != nil {
+ return err
+ }
+ if bhdr.Id == winio.BackupData {
+ dataHdr = bhdr
+ }
+ }
+ }
+
+ // The logic for copying file contents is fairly complicated due to the need for handling sparse files,
+ // and the weird ways they are represented by BackupRead. A normal file will always either have a data stream
+ // with size and content, or no data stream at all (if empty). However, for a sparse file, the content can also
+ // be represented using a series of sparse block streams following the data stream. Additionally, the way sparse
+ // files are handled by BackupRead has changed in the OS recently. The specifics of the representation are described
+ // in the list at the bottom of this block comment.
+ //
+ // Sparse files can be represented in four different ways, based on the specifics of the file.
+ // - Size = 0:
+ // Previously: BackupRead yields no data stream and no sparse block streams.
+ // Recently: BackupRead yields a data stream with size = 0. There are no following sparse block streams.
+ // - Size > 0, no allocated ranges:
+ // BackupRead yields a data stream with size = 0. Following is a single sparse block stream with
+	//     size = 0 and offset = <file size>.
+ // - Size > 0, one allocated range:
+	//     BackupRead yields a data stream with size = <file size> containing the file contents. There are no
+ // sparse block streams. This is the case if you take a normal file with contents and simply set the
+ // sparse flag on it.
+ // - Size > 0, multiple allocated ranges:
+ // BackupRead yields a data stream with size = 0. Following are sparse block streams for each allocated
+ // range of the file containing the range contents. Finally there is a sparse block stream with
+	//     size = 0 and offset = <file size>.
+
+ if dataHdr != nil { //nolint:nestif // todo: reduce nesting complexity
+ // A data stream was found. Copy the data.
+ // We assume that we will either have a data stream size > 0 XOR have sparse block streams.
+ if dataHdr.Size > 0 || (dataHdr.Attributes&winio.StreamSparseAttributes) == 0 {
+ if size != dataHdr.Size {
+ return fmt.Errorf("%s: mismatch between file size %d and header size %d", name, size, dataHdr.Size)
+ }
+ if _, err = io.Copy(t, br); err != nil {
+ return fmt.Errorf("%s: copying contents from data stream: %w", name, err)
+ }
+ } else if size > 0 {
+ // As of a recent OS change, BackupRead now returns a data stream for empty sparse files.
+ // These files have no sparse block streams, so skip the copySparse call if file size = 0.
+ if err = copySparse(t, br); err != nil {
+ return fmt.Errorf("%s: copying contents from sparse block stream: %w", name, err)
+ }
+ }
+ }
+
+ // Look for streams after the data stream. The only ones we handle are alternate data streams.
+ // Other streams may have metadata that could be serialized, but the tar header has already
+ // been written. In practice, this means that we don't get EA or TXF metadata.
+ for {
+ bhdr, err := br.Next()
+ if err == io.EOF { //nolint:errorlint
+ break
+ }
+ if err != nil {
+ return err
+ }
+ switch bhdr.Id {
+ case winio.BackupAlternateData:
+ if (bhdr.Attributes & winio.StreamSparseAttributes) != 0 {
+ // Unsupported for now, since the size of the alternate stream is not present
+ // in the backup stream until after the data has been read.
+ return fmt.Errorf("%s: tar of sparse alternate data streams is unsupported", name)
+ }
+ altName := strings.TrimSuffix(bhdr.Name, ":$DATA")
+ hdr = &tar.Header{
+ Format: hdr.Format,
+ Name: name + altName,
+ Mode: hdr.Mode,
+ Typeflag: tar.TypeReg,
+ Size: bhdr.Size,
+ ModTime: hdr.ModTime,
+ AccessTime: hdr.AccessTime,
+ ChangeTime: hdr.ChangeTime,
+ }
+ err = t.WriteHeader(hdr)
+ if err != nil {
+ return err
+ }
+ _, err = io.Copy(t, br)
+ if err != nil {
+ return err
+ }
+ case winio.BackupEaData, winio.BackupLink, winio.BackupPropertyData, winio.BackupObjectId, winio.BackupTxfsData:
+ // ignore these streams
+ default:
+ return fmt.Errorf("%s: unknown stream ID %d after data", name, bhdr.Id)
+ }
+ }
+ return nil
+}
+
+// FileInfoFromHeader retrieves basic Win32 file information from a tar header, using the additional metadata written by
+// WriteTarFileFromBackupStream.
+func FileInfoFromHeader(hdr *tar.Header) (name string, size int64, fileInfo *winio.FileBasicInfo, err error) {
+ name = hdr.Name
+ if hdr.Typeflag == tar.TypeReg || hdr.Typeflag == tar.TypeRegA {
+ size = hdr.Size
+ }
+ fileInfo = &winio.FileBasicInfo{
+ LastAccessTime: windows.NsecToFiletime(hdr.AccessTime.UnixNano()),
+ LastWriteTime: windows.NsecToFiletime(hdr.ModTime.UnixNano()),
+ ChangeTime: windows.NsecToFiletime(hdr.ChangeTime.UnixNano()),
+ // Default to ModTime, we'll pull hdrCreationTime below if present
+ CreationTime: windows.NsecToFiletime(hdr.ModTime.UnixNano()),
+ }
+ if attrStr, ok := hdr.PAXRecords[hdrFileAttributes]; ok {
+ attr, err := strconv.ParseUint(attrStr, 10, 32)
+ if err != nil {
+ return "", 0, nil, err
+ }
+ fileInfo.FileAttributes = uint32(attr)
+ } else {
+ if hdr.Typeflag == tar.TypeDir {
+ fileInfo.FileAttributes |= syscall.FILE_ATTRIBUTE_DIRECTORY
+ }
+ }
+ if creationTimeStr, ok := hdr.PAXRecords[hdrCreationTime]; ok {
+ creationTime, err := parsePAXTime(creationTimeStr)
+ if err != nil {
+ return "", 0, nil, err
+ }
+ fileInfo.CreationTime = windows.NsecToFiletime(creationTime.UnixNano())
+ }
+ return name, size, fileInfo, err
+}
+
+// WriteBackupStreamFromTarFile writes a Win32 backup stream from the current tar file. Since this function may process multiple
+// tar file entries in order to collect all the alternate data streams for the file, it returns the next
+// tar file that was not processed, or io.EOF if there are no more.
+func WriteBackupStreamFromTarFile(w io.Writer, t *tar.Reader, hdr *tar.Header) (*tar.Header, error) {
+ bw := winio.NewBackupStreamWriter(w)
+
+ sd, err := SecurityDescriptorFromTarHeader(hdr)
+ if err != nil {
+ return nil, err
+ }
+ if len(sd) != 0 {
+ bhdr := winio.BackupHeader{
+ Id: winio.BackupSecurity,
+ Size: int64(len(sd)),
+ }
+ err := bw.WriteHeader(&bhdr)
+ if err != nil {
+ return nil, err
+ }
+ _, err = bw.Write(sd)
+ if err != nil {
+ return nil, err
+ }
+ }
+
+ eadata, err := ExtendedAttributesFromTarHeader(hdr)
+ if err != nil {
+ return nil, err
+ }
+ if len(eadata) != 0 {
+ bhdr := winio.BackupHeader{
+ Id: winio.BackupEaData,
+ Size: int64(len(eadata)),
+ }
+ err = bw.WriteHeader(&bhdr)
+ if err != nil {
+ return nil, err
+ }
+ _, err = bw.Write(eadata)
+ if err != nil {
+ return nil, err
+ }
+ }
+
+ if hdr.Typeflag == tar.TypeSymlink {
+ reparse := EncodeReparsePointFromTarHeader(hdr)
+ bhdr := winio.BackupHeader{
+ Id: winio.BackupReparseData,
+ Size: int64(len(reparse)),
+ }
+ err := bw.WriteHeader(&bhdr)
+ if err != nil {
+ return nil, err
+ }
+ _, err = bw.Write(reparse)
+ if err != nil {
+ return nil, err
+ }
+ }
+
+ if hdr.Typeflag == tar.TypeReg || hdr.Typeflag == tar.TypeRegA {
+ bhdr := winio.BackupHeader{
+ Id: winio.BackupData,
+ Size: hdr.Size,
+ }
+ err := bw.WriteHeader(&bhdr)
+ if err != nil {
+ return nil, err
+ }
+ _, err = io.Copy(bw, t)
+ if err != nil {
+ return nil, err
+ }
+ }
+ // Copy all the alternate data streams and return the next non-ADS header.
+ for {
+ ahdr, err := t.Next()
+ if err != nil {
+ return nil, err
+ }
+ if ahdr.Typeflag != tar.TypeReg || !strings.HasPrefix(ahdr.Name, hdr.Name+":") {
+ return ahdr, nil
+ }
+ bhdr := winio.BackupHeader{
+ Id: winio.BackupAlternateData,
+ Size: ahdr.Size,
+ Name: ahdr.Name[len(hdr.Name):] + ":$DATA",
+ }
+ err = bw.WriteHeader(&bhdr)
+ if err != nil {
+ return nil, err
+ }
+ _, err = io.Copy(bw, t)
+ if err != nil {
+ return nil, err
+ }
+ }
+}
diff --git a/vendor/github.com/Microsoft/go-winio/doc.go b/vendor/github.com/Microsoft/go-winio/doc.go
new file mode 100644
index 000000000..1f5bfe2d5
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/doc.go
@@ -0,0 +1,22 @@
+// This package provides utilities for efficiently performing Win32 IO operations in Go.
+// Currently, this package provides support for general IO and management of
+// - named pipes
+// - files
+// - [Hyper-V sockets]
+//
+// This code is similar to Go's [net] package, and uses IO completion ports to avoid
+// blocking IO on system threads, allowing Go to reuse the thread to schedule other goroutines.
+//
+// This limits support to Windows Vista and newer operating systems.
+//
+// Additionally, this package provides support for:
+// - creating and managing GUIDs
+// - writing to [ETW]
+// - opening and managing VHDs
+// - parsing [Windows Image files]
+// - auto-generating Win32 API code
+//
+// [Hyper-V sockets]: https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/make-integration-service
+// [ETW]: https://docs.microsoft.com/en-us/windows-hardware/drivers/devtest/event-tracing-for-windows--etw-
+// [Windows Image files]: https://docs.microsoft.com/en-us/windows-hardware/manufacture/desktop/work-with-windows-images
+package winio
diff --git a/vendor/github.com/Microsoft/go-winio/ea.go b/vendor/github.com/Microsoft/go-winio/ea.go
new file mode 100644
index 000000000..e104dbdfd
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/ea.go
@@ -0,0 +1,137 @@
+package winio
+
+import (
+ "bytes"
+ "encoding/binary"
+ "errors"
+)
+
+type fileFullEaInformation struct {
+ NextEntryOffset uint32
+ Flags uint8
+ NameLength uint8
+ ValueLength uint16
+}
+
+var (
+ fileFullEaInformationSize = binary.Size(&fileFullEaInformation{})
+
+ errInvalidEaBuffer = errors.New("invalid extended attribute buffer")
+ errEaNameTooLarge = errors.New("extended attribute name too large")
+ errEaValueTooLarge = errors.New("extended attribute value too large")
+)
+
+// ExtendedAttribute represents a single Windows EA.
+type ExtendedAttribute struct {
+ Name string
+ Value []byte
+ Flags uint8
+}
+
+func parseEa(b []byte) (ea ExtendedAttribute, nb []byte, err error) {
+ var info fileFullEaInformation
+ err = binary.Read(bytes.NewReader(b), binary.LittleEndian, &info)
+ if err != nil {
+ err = errInvalidEaBuffer
+ return ea, nb, err
+ }
+
+ nameOffset := fileFullEaInformationSize
+ nameLen := int(info.NameLength)
+ valueOffset := nameOffset + int(info.NameLength) + 1
+ valueLen := int(info.ValueLength)
+ nextOffset := int(info.NextEntryOffset)
+ if valueLen+valueOffset > len(b) || nextOffset < 0 || nextOffset > len(b) {
+ err = errInvalidEaBuffer
+ return ea, nb, err
+ }
+
+ ea.Name = string(b[nameOffset : nameOffset+nameLen])
+ ea.Value = b[valueOffset : valueOffset+valueLen]
+ ea.Flags = info.Flags
+ if info.NextEntryOffset != 0 {
+ nb = b[info.NextEntryOffset:]
+ }
+ return ea, nb, err
+}
+
+// DecodeExtendedAttributes decodes a list of EAs from a FILE_FULL_EA_INFORMATION
+// buffer retrieved from BackupRead, ZwQueryEaFile, etc.
+func DecodeExtendedAttributes(b []byte) (eas []ExtendedAttribute, err error) {
+ for len(b) != 0 {
+ ea, nb, err := parseEa(b)
+ if err != nil {
+ return nil, err
+ }
+
+ eas = append(eas, ea)
+ b = nb
+ }
+ return eas, err
+}
+
+func writeEa(buf *bytes.Buffer, ea *ExtendedAttribute, last bool) error {
+ if int(uint8(len(ea.Name))) != len(ea.Name) {
+ return errEaNameTooLarge
+ }
+ if int(uint16(len(ea.Value))) != len(ea.Value) {
+ return errEaValueTooLarge
+ }
+ entrySize := uint32(fileFullEaInformationSize + len(ea.Name) + 1 + len(ea.Value))
+ withPadding := (entrySize + 3) &^ 3
+ nextOffset := uint32(0)
+ if !last {
+ nextOffset = withPadding
+ }
+ info := fileFullEaInformation{
+ NextEntryOffset: nextOffset,
+ Flags: ea.Flags,
+ NameLength: uint8(len(ea.Name)),
+ ValueLength: uint16(len(ea.Value)),
+ }
+
+ err := binary.Write(buf, binary.LittleEndian, &info)
+ if err != nil {
+ return err
+ }
+
+ _, err = buf.Write([]byte(ea.Name))
+ if err != nil {
+ return err
+ }
+
+ err = buf.WriteByte(0)
+ if err != nil {
+ return err
+ }
+
+ _, err = buf.Write(ea.Value)
+ if err != nil {
+ return err
+ }
+
+ _, err = buf.Write([]byte{0, 0, 0}[0 : withPadding-entrySize])
+ if err != nil {
+ return err
+ }
+
+ return nil
+}
+
+// EncodeExtendedAttributes encodes a list of EAs into a FILE_FULL_EA_INFORMATION
+// buffer for use with BackupWrite, ZwSetEaFile, etc.
+func EncodeExtendedAttributes(eas []ExtendedAttribute) ([]byte, error) {
+ var buf bytes.Buffer
+ for i := range eas {
+ last := false
+ if i == len(eas)-1 {
+ last = true
+ }
+
+ err := writeEa(&buf, &eas[i], last)
+ if err != nil {
+ return nil, err
+ }
+ }
+ return buf.Bytes(), nil
+}
diff --git a/vendor/github.com/Microsoft/go-winio/file.go b/vendor/github.com/Microsoft/go-winio/file.go
new file mode 100644
index 000000000..175a99d3f
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/file.go
@@ -0,0 +1,331 @@
+//go:build windows
+// +build windows
+
+package winio
+
+import (
+ "errors"
+ "io"
+ "runtime"
+ "sync"
+ "sync/atomic"
+ "syscall"
+ "time"
+
+ "golang.org/x/sys/windows"
+)
+
+//sys cancelIoEx(file syscall.Handle, o *syscall.Overlapped) (err error) = CancelIoEx
+//sys createIoCompletionPort(file syscall.Handle, port syscall.Handle, key uintptr, threadCount uint32) (newport syscall.Handle, err error) = CreateIoCompletionPort
+//sys getQueuedCompletionStatus(port syscall.Handle, bytes *uint32, key *uintptr, o **ioOperation, timeout uint32) (err error) = GetQueuedCompletionStatus
+//sys setFileCompletionNotificationModes(h syscall.Handle, flags uint8) (err error) = SetFileCompletionNotificationModes
+//sys wsaGetOverlappedResult(h syscall.Handle, o *syscall.Overlapped, bytes *uint32, wait bool, flags *uint32) (err error) = ws2_32.WSAGetOverlappedResult
+
+type atomicBool int32
+
+func (b *atomicBool) isSet() bool { return atomic.LoadInt32((*int32)(b)) != 0 }
+func (b *atomicBool) setFalse() { atomic.StoreInt32((*int32)(b), 0) }
+func (b *atomicBool) setTrue() { atomic.StoreInt32((*int32)(b), 1) }
+
+//revive:disable-next-line:predeclared Keep "new" to maintain consistency with "atomic" pkg
+func (b *atomicBool) swap(new bool) bool {
+ var newInt int32
+ if new {
+ newInt = 1
+ }
+ return atomic.SwapInt32((*int32)(b), newInt) == 1
+}
+
+var (
+ ErrFileClosed = errors.New("file has already been closed")
+ ErrTimeout = &timeoutError{}
+)
+
+type timeoutError struct{}
+
+func (*timeoutError) Error() string { return "i/o timeout" }
+func (*timeoutError) Timeout() bool { return true }
+func (*timeoutError) Temporary() bool { return true }
+
+type timeoutChan chan struct{}
+
+var ioInitOnce sync.Once
+var ioCompletionPort syscall.Handle
+
+// ioResult contains the result of an asynchronous IO operation.
+type ioResult struct {
+ bytes uint32
+ err error
+}
+
+// ioOperation represents an outstanding asynchronous Win32 IO.
+type ioOperation struct {
+ o syscall.Overlapped
+ ch chan ioResult
+}
+
+func initIO() {
+ h, err := createIoCompletionPort(syscall.InvalidHandle, 0, 0, 0xffffffff)
+ if err != nil {
+ panic(err)
+ }
+ ioCompletionPort = h
+ go ioCompletionProcessor(h)
+}
+
+// win32File implements Reader, Writer, and Closer on a Win32 handle without blocking in a syscall.
+// It takes ownership of this handle and will close it if it is garbage collected.
+type win32File struct {
+ handle syscall.Handle
+ wg sync.WaitGroup
+ wgLock sync.RWMutex
+ closing atomicBool
+ socket bool
+ readDeadline deadlineHandler
+ writeDeadline deadlineHandler
+}
+
+type deadlineHandler struct {
+ setLock sync.Mutex
+ channel timeoutChan
+ channelLock sync.RWMutex
+ timer *time.Timer
+ timedout atomicBool
+}
+
+// makeWin32File makes a new win32File from an existing file handle.
+func makeWin32File(h syscall.Handle) (*win32File, error) {
+ f := &win32File{handle: h}
+ ioInitOnce.Do(initIO)
+ _, err := createIoCompletionPort(h, ioCompletionPort, 0, 0xffffffff)
+ if err != nil {
+ return nil, err
+ }
+ err = setFileCompletionNotificationModes(h, windows.FILE_SKIP_COMPLETION_PORT_ON_SUCCESS|windows.FILE_SKIP_SET_EVENT_ON_HANDLE)
+ if err != nil {
+ return nil, err
+ }
+ f.readDeadline.channel = make(timeoutChan)
+ f.writeDeadline.channel = make(timeoutChan)
+ return f, nil
+}
+
+func MakeOpenFile(h syscall.Handle) (io.ReadWriteCloser, error) {
+ // If we return the result of makeWin32File directly, it can result in an
+ // interface-wrapped nil, rather than a nil interface value.
+ f, err := makeWin32File(h)
+ if err != nil {
+ return nil, err
+ }
+ return f, nil
+}
+
+// closeHandle closes the resources associated with a Win32 handle.
+func (f *win32File) closeHandle() {
+ f.wgLock.Lock()
+ // Atomically set that we are closing, releasing the resources only once.
+ if !f.closing.swap(true) {
+ f.wgLock.Unlock()
+ // cancel all IO and wait for it to complete
+ _ = cancelIoEx(f.handle, nil)
+ f.wg.Wait()
+ // at this point, no new IO can start
+ syscall.Close(f.handle)
+ f.handle = 0
+ } else {
+ f.wgLock.Unlock()
+ }
+}
+
+// Close closes a win32File.
+func (f *win32File) Close() error {
+ f.closeHandle()
+ return nil
+}
+
+// IsClosed checks if the file has been closed.
+func (f *win32File) IsClosed() bool {
+ return f.closing.isSet()
+}
+
+// prepareIO prepares for a new IO operation.
+// The caller must call f.wg.Done() when the IO is finished, prior to Close() returning.
+func (f *win32File) prepareIO() (*ioOperation, error) {
+ f.wgLock.RLock()
+ if f.closing.isSet() {
+ f.wgLock.RUnlock()
+ return nil, ErrFileClosed
+ }
+ f.wg.Add(1)
+ f.wgLock.RUnlock()
+ c := &ioOperation{}
+ c.ch = make(chan ioResult)
+ return c, nil
+}
+
+// ioCompletionProcessor processes completed async IOs forever.
+func ioCompletionProcessor(h syscall.Handle) {
+ for {
+ var bytes uint32
+ var key uintptr
+ var op *ioOperation
+ err := getQueuedCompletionStatus(h, &bytes, &key, &op, syscall.INFINITE)
+ if op == nil {
+ panic(err)
+ }
+ op.ch <- ioResult{bytes, err}
+ }
+}
+
+// todo: helsaawy - create an asyncIO version that takes a context
+
+// asyncIO processes the return value from ReadFile or WriteFile, blocking until
+// the operation has actually completed.
+func (f *win32File) asyncIO(c *ioOperation, d *deadlineHandler, bytes uint32, err error) (int, error) {
+ if err != syscall.ERROR_IO_PENDING { //nolint:errorlint // err is Errno
+ return int(bytes), err
+ }
+
+ if f.closing.isSet() {
+ _ = cancelIoEx(f.handle, &c.o)
+ }
+
+ var timeout timeoutChan
+ if d != nil {
+ d.channelLock.Lock()
+ timeout = d.channel
+ d.channelLock.Unlock()
+ }
+
+ var r ioResult
+ select {
+ case r = <-c.ch:
+ err = r.err
+ if err == syscall.ERROR_OPERATION_ABORTED { //nolint:errorlint // err is Errno
+ if f.closing.isSet() {
+ err = ErrFileClosed
+ }
+ } else if err != nil && f.socket {
+ // err is from Win32. Query the overlapped structure to get the winsock error.
+ var bytes, flags uint32
+ err = wsaGetOverlappedResult(f.handle, &c.o, &bytes, false, &flags)
+ }
+ case <-timeout:
+ _ = cancelIoEx(f.handle, &c.o)
+ r = <-c.ch
+ err = r.err
+ if err == syscall.ERROR_OPERATION_ABORTED { //nolint:errorlint // err is Errno
+ err = ErrTimeout
+ }
+ }
+
+ // runtime.KeepAlive is needed, as c is passed via native
+ // code to ioCompletionProcessor, c must remain alive
+ // until the channel read is complete.
+ // todo: (de)allocate *ioOperation via win32 heap functions, instead of needing to KeepAlive?
+ runtime.KeepAlive(c)
+ return int(r.bytes), err
+}
+
+// Read reads from a file handle.
+func (f *win32File) Read(b []byte) (int, error) {
+ c, err := f.prepareIO()
+ if err != nil {
+ return 0, err
+ }
+ defer f.wg.Done()
+
+ if f.readDeadline.timedout.isSet() {
+ return 0, ErrTimeout
+ }
+
+ var bytes uint32
+ err = syscall.ReadFile(f.handle, b, &bytes, &c.o)
+ n, err := f.asyncIO(c, &f.readDeadline, bytes, err)
+ runtime.KeepAlive(b)
+
+ // Handle EOF conditions.
+ if err == nil && n == 0 && len(b) != 0 {
+ return 0, io.EOF
+ } else if err == syscall.ERROR_BROKEN_PIPE { //nolint:errorlint // err is Errno
+ return 0, io.EOF
+ } else {
+ return n, err
+ }
+}
+
+// Write writes to a file handle.
+func (f *win32File) Write(b []byte) (int, error) {
+ c, err := f.prepareIO()
+ if err != nil {
+ return 0, err
+ }
+ defer f.wg.Done()
+
+ if f.writeDeadline.timedout.isSet() {
+ return 0, ErrTimeout
+ }
+
+ var bytes uint32
+ err = syscall.WriteFile(f.handle, b, &bytes, &c.o)
+ n, err := f.asyncIO(c, &f.writeDeadline, bytes, err)
+ runtime.KeepAlive(b)
+ return n, err
+}
+
+func (f *win32File) SetReadDeadline(deadline time.Time) error {
+ return f.readDeadline.set(deadline)
+}
+
+func (f *win32File) SetWriteDeadline(deadline time.Time) error {
+ return f.writeDeadline.set(deadline)
+}
+
+func (f *win32File) Flush() error {
+ return syscall.FlushFileBuffers(f.handle)
+}
+
+func (f *win32File) Fd() uintptr {
+ return uintptr(f.handle)
+}
+
+func (d *deadlineHandler) set(deadline time.Time) error {
+ d.setLock.Lock()
+ defer d.setLock.Unlock()
+
+ if d.timer != nil {
+ if !d.timer.Stop() {
+ <-d.channel
+ }
+ d.timer = nil
+ }
+ d.timedout.setFalse()
+
+ select {
+ case <-d.channel:
+ d.channelLock.Lock()
+ d.channel = make(chan struct{})
+ d.channelLock.Unlock()
+ default:
+ }
+
+ if deadline.IsZero() {
+ return nil
+ }
+
+ timeoutIO := func() {
+ d.timedout.setTrue()
+ close(d.channel)
+ }
+
+ now := time.Now()
+ duration := deadline.Sub(now)
+ if deadline.After(now) {
+ // Deadline is in the future, set a timer to wait
+ d.timer = time.AfterFunc(duration, timeoutIO)
+ } else {
+ // Deadline is in the past. Cancel all pending IO now.
+ timeoutIO()
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/go-winio/fileinfo.go b/vendor/github.com/Microsoft/go-winio/fileinfo.go
new file mode 100644
index 000000000..702950e72
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/fileinfo.go
@@ -0,0 +1,92 @@
+//go:build windows
+// +build windows
+
+package winio
+
+import (
+ "os"
+ "runtime"
+ "unsafe"
+
+ "golang.org/x/sys/windows"
+)
+
+// FileBasicInfo contains file access time and file attributes information.
+type FileBasicInfo struct {
+ CreationTime, LastAccessTime, LastWriteTime, ChangeTime windows.Filetime
+ FileAttributes uint32
+ _ uint32 // padding
+}
+
+// GetFileBasicInfo retrieves times and attributes for a file.
+func GetFileBasicInfo(f *os.File) (*FileBasicInfo, error) {
+ bi := &FileBasicInfo{}
+ if err := windows.GetFileInformationByHandleEx(
+ windows.Handle(f.Fd()),
+ windows.FileBasicInfo,
+ (*byte)(unsafe.Pointer(bi)),
+ uint32(unsafe.Sizeof(*bi)),
+ ); err != nil {
+ return nil, &os.PathError{Op: "GetFileInformationByHandleEx", Path: f.Name(), Err: err}
+ }
+ runtime.KeepAlive(f)
+ return bi, nil
+}
+
+// SetFileBasicInfo sets times and attributes for a file.
+func SetFileBasicInfo(f *os.File, bi *FileBasicInfo) error {
+ if err := windows.SetFileInformationByHandle(
+ windows.Handle(f.Fd()),
+ windows.FileBasicInfo,
+ (*byte)(unsafe.Pointer(bi)),
+ uint32(unsafe.Sizeof(*bi)),
+ ); err != nil {
+ return &os.PathError{Op: "SetFileInformationByHandle", Path: f.Name(), Err: err}
+ }
+ runtime.KeepAlive(f)
+ return nil
+}
+
+// FileStandardInfo contains extended information for the file.
+// FILE_STANDARD_INFO in WinBase.h
+// https://docs.microsoft.com/en-us/windows/win32/api/winbase/ns-winbase-file_standard_info
+type FileStandardInfo struct {
+ AllocationSize, EndOfFile int64
+ NumberOfLinks uint32
+ DeletePending, Directory bool
+}
+
+// GetFileStandardInfo retrieves extended information for the file.
+func GetFileStandardInfo(f *os.File) (*FileStandardInfo, error) {
+ si := &FileStandardInfo{}
+ if err := windows.GetFileInformationByHandleEx(windows.Handle(f.Fd()),
+ windows.FileStandardInfo,
+ (*byte)(unsafe.Pointer(si)),
+ uint32(unsafe.Sizeof(*si))); err != nil {
+ return nil, &os.PathError{Op: "GetFileInformationByHandleEx", Path: f.Name(), Err: err}
+ }
+ runtime.KeepAlive(f)
+ return si, nil
+}
+
+// FileIDInfo contains the volume serial number and file ID for a file. This pair should be
+// unique on a system.
+type FileIDInfo struct {
+ VolumeSerialNumber uint64
+ FileID [16]byte
+}
+
+// GetFileID retrieves the unique (volume, file ID) pair for a file.
+func GetFileID(f *os.File) (*FileIDInfo, error) {
+ fileID := &FileIDInfo{}
+ if err := windows.GetFileInformationByHandleEx(
+ windows.Handle(f.Fd()),
+ windows.FileIdInfo,
+ (*byte)(unsafe.Pointer(fileID)),
+ uint32(unsafe.Sizeof(*fileID)),
+ ); err != nil {
+ return nil, &os.PathError{Op: "GetFileInformationByHandleEx", Path: f.Name(), Err: err}
+ }
+ runtime.KeepAlive(f)
+ return fileID, nil
+}
diff --git a/vendor/github.com/Microsoft/go-winio/hvsock.go b/vendor/github.com/Microsoft/go-winio/hvsock.go
new file mode 100644
index 000000000..52f1c280f
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/hvsock.go
@@ -0,0 +1,575 @@
+//go:build windows
+// +build windows
+
+package winio
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "io"
+ "net"
+ "os"
+ "syscall"
+ "time"
+ "unsafe"
+
+ "golang.org/x/sys/windows"
+
+ "github.com/Microsoft/go-winio/internal/socket"
+ "github.com/Microsoft/go-winio/pkg/guid"
+)
+
+const afHVSock = 34 // AF_HYPERV
+
+// Well known Service and VM IDs
+// https://docs.microsoft.com/en-us/virtualization/hyper-v-on-windows/user-guide/make-integration-service#vmid-wildcards
+
+// HvsockGUIDWildcard is the wildcard VmId for accepting connections from all partitions.
+func HvsockGUIDWildcard() guid.GUID { // 00000000-0000-0000-0000-000000000000
+ return guid.GUID{}
+}
+
+// HvsockGUIDBroadcast is the wildcard VmId for broadcasting sends to all partitions.
+func HvsockGUIDBroadcast() guid.GUID { //ffffffff-ffff-ffff-ffff-ffffffffffff
+ return guid.GUID{
+ Data1: 0xffffffff,
+ Data2: 0xffff,
+ Data3: 0xffff,
+ Data4: [8]uint8{0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff},
+ }
+}
+
+// HvsockGUIDLoopback is the Loopback VmId for accepting connections to the same partition as the connector.
+func HvsockGUIDLoopback() guid.GUID { // e0e16197-dd56-4a10-9195-5ee7a155a838
+ return guid.GUID{
+ Data1: 0xe0e16197,
+ Data2: 0xdd56,
+ Data3: 0x4a10,
+ Data4: [8]uint8{0x91, 0x95, 0x5e, 0xe7, 0xa1, 0x55, 0xa8, 0x38},
+ }
+}
+
+// HvsockGUIDSiloHost is the address of a silo's host partition:
+// - The silo host of a hosted silo is the utility VM.
+// - The silo host of a silo on a physical host is the physical host.
+func HvsockGUIDSiloHost() guid.GUID { // 36bd0c5c-7276-4223-88ba-7d03b654c568
+ return guid.GUID{
+ Data1: 0x36bd0c5c,
+ Data2: 0x7276,
+ Data3: 0x4223,
+ Data4: [8]byte{0x88, 0xba, 0x7d, 0x03, 0xb6, 0x54, 0xc5, 0x68},
+ }
+}
+
+// HvsockGUIDChildren is the wildcard VmId for accepting connections from the connector's child partitions.
+func HvsockGUIDChildren() guid.GUID { // 90db8b89-0d35-4f79-8ce9-49ea0ac8b7cd
+ return guid.GUID{
+ Data1: 0x90db8b89,
+ Data2: 0xd35,
+ Data3: 0x4f79,
+ Data4: [8]uint8{0x8c, 0xe9, 0x49, 0xea, 0xa, 0xc8, 0xb7, 0xcd},
+ }
+}
+
+// HvsockGUIDParent is the wildcard VmId for accepting connections from the connector's parent partition.
+// Listening on this VmId accepts connection from:
+// - Inside silos: silo host partition.
+// - Inside hosted silo: host of the VM.
+// - Inside VM: VM host.
+// - Physical host: Not supported.
+func HvsockGUIDParent() guid.GUID { // a42e7cda-d03f-480c-9cc2-a4de20abb878
+ return guid.GUID{
+ Data1: 0xa42e7cda,
+ Data2: 0xd03f,
+ Data3: 0x480c,
+ Data4: [8]uint8{0x9c, 0xc2, 0xa4, 0xde, 0x20, 0xab, 0xb8, 0x78},
+ }
+}
+
+// hvsockVsockServiceTemplate is the Service GUID used for the VSOCK protocol.
+func hvsockVsockServiceTemplate() guid.GUID { // 00000000-facb-11e6-bd58-64006a7986d3
+ return guid.GUID{
+ Data2: 0xfacb,
+ Data3: 0x11e6,
+ Data4: [8]uint8{0xbd, 0x58, 0x64, 0x00, 0x6a, 0x79, 0x86, 0xd3},
+ }
+}
+
+// An HvsockAddr is an address for an AF_HYPERV socket.
+type HvsockAddr struct {
+ VMID guid.GUID
+ ServiceID guid.GUID
+}
+
+type rawHvsockAddr struct {
+ Family uint16
+ _ uint16
+ VMID guid.GUID
+ ServiceID guid.GUID
+}
+
+var _ socket.RawSockaddr = &rawHvsockAddr{}
+
+// Network returns the address's network name, "hvsock".
+func (*HvsockAddr) Network() string {
+ return "hvsock"
+}
+
+func (addr *HvsockAddr) String() string {
+ return fmt.Sprintf("%s:%s", &addr.VMID, &addr.ServiceID)
+}
+
+// VsockServiceID returns an hvsock service ID corresponding to the specified AF_VSOCK port.
+func VsockServiceID(port uint32) guid.GUID {
+ g := hvsockVsockServiceTemplate() // make a copy
+ g.Data1 = port
+ return g
+}
+
+func (addr *HvsockAddr) raw() rawHvsockAddr {
+ return rawHvsockAddr{
+ Family: afHVSock,
+ VMID: addr.VMID,
+ ServiceID: addr.ServiceID,
+ }
+}
+
+func (addr *HvsockAddr) fromRaw(raw *rawHvsockAddr) {
+ addr.VMID = raw.VMID
+ addr.ServiceID = raw.ServiceID
+}
+
+// Sockaddr returns a pointer to and the size of this struct.
+//
+// Implements the [socket.RawSockaddr] interface, and allows use in
+// [socket.Bind] and [socket.ConnectEx].
+func (r *rawHvsockAddr) Sockaddr() (unsafe.Pointer, int32, error) {
+ return unsafe.Pointer(r), int32(unsafe.Sizeof(rawHvsockAddr{})), nil
+}
+
+// FromBytes populates the address from the raw bytes of a sockaddr buffer,
+// validating that the buffer is large enough and the address family matches.
+func (r *rawHvsockAddr) FromBytes(b []byte) error {
+ n := int(unsafe.Sizeof(rawHvsockAddr{}))
+
+ if len(b) < n {
+ return fmt.Errorf("got %d, want %d: %w", len(b), n, socket.ErrBufferSize)
+ }
+
+ copy(unsafe.Slice((*byte)(unsafe.Pointer(r)), n), b[:n])
+ if r.Family != afHVSock {
+ return fmt.Errorf("got %d, want %d: %w", r.Family, afHVSock, socket.ErrAddrFamily)
+ }
+
+ return nil
+}
+
+// HvsockListener is a socket listener for the AF_HYPERV address family.
+type HvsockListener struct {
+ sock *win32File
+ addr HvsockAddr
+}
+
+var _ net.Listener = &HvsockListener{}
+
+// HvsockConn is a connected socket of the AF_HYPERV address family.
+type HvsockConn struct {
+ sock *win32File
+ local, remote HvsockAddr
+}
+
+var _ net.Conn = &HvsockConn{}
+
+func newHVSocket() (*win32File, error) {
+ fd, err := syscall.Socket(afHVSock, syscall.SOCK_STREAM, 1)
+ if err != nil {
+ return nil, os.NewSyscallError("socket", err)
+ }
+ f, err := makeWin32File(fd)
+ if err != nil {
+ syscall.Close(fd)
+ return nil, err
+ }
+ f.socket = true
+ return f, nil
+}
+
+// ListenHvsock listens for connections on the specified hvsock address.
+func ListenHvsock(addr *HvsockAddr) (_ *HvsockListener, err error) {
+ l := &HvsockListener{addr: *addr}
+ sock, err := newHVSocket()
+ if err != nil {
+ return nil, l.opErr("listen", err)
+ }
+ sa := addr.raw()
+ err = socket.Bind(windows.Handle(sock.handle), &sa)
+ if err != nil {
+ return nil, l.opErr("listen", os.NewSyscallError("bind", err))
+ }
+ err = syscall.Listen(sock.handle, 16)
+ if err != nil {
+ return nil, l.opErr("listen", os.NewSyscallError("listen", err))
+ }
+ return &HvsockListener{sock: sock, addr: *addr}, nil
+}
+
+func (l *HvsockListener) opErr(op string, err error) error {
+ return &net.OpError{Op: op, Net: "hvsock", Addr: &l.addr, Err: err}
+}
+
+// Addr returns the listener's network address.
+func (l *HvsockListener) Addr() net.Addr {
+ return &l.addr
+}
+
+// Accept waits for the next connection and returns it.
+func (l *HvsockListener) Accept() (_ net.Conn, err error) {
+ sock, err := newHVSocket()
+ if err != nil {
+ return nil, l.opErr("accept", err)
+ }
+ defer func() {
+ if sock != nil {
+ sock.Close()
+ }
+ }()
+ c, err := l.sock.prepareIO()
+ if err != nil {
+ return nil, l.opErr("accept", err)
+ }
+ defer l.sock.wg.Done()
+
+ // AcceptEx, per documentation, requires an extra 16 bytes per address.
+ //
+ // https://docs.microsoft.com/en-us/windows/win32/api/mswsock/nf-mswsock-acceptex
+ const addrlen = uint32(16 + unsafe.Sizeof(rawHvsockAddr{}))
+ var addrbuf [addrlen * 2]byte
+
+ var bytes uint32
+ err = syscall.AcceptEx(l.sock.handle, sock.handle, &addrbuf[0], 0 /*rxdatalen*/, addrlen, addrlen, &bytes, &c.o)
+ if _, err = l.sock.asyncIO(c, nil, bytes, err); err != nil {
+ return nil, l.opErr("accept", os.NewSyscallError("acceptex", err))
+ }
+
+ conn := &HvsockConn{
+ sock: sock,
+ }
+ // The local address returned in the AcceptEx buffer is the same as the listener socket's
+ // address. However, the service GUID reported by GetSockName differs from the listener
+ // socket's, and is sometimes the same as the local address of the socket that dialed the
+ // address, with the service GUID.Data1 incremented, but at other times it is different.
+ // todo: does the local address matter? is the listener's address or the actual address appropriate?
+ conn.local.fromRaw((*rawHvsockAddr)(unsafe.Pointer(&addrbuf[0])))
+ conn.remote.fromRaw((*rawHvsockAddr)(unsafe.Pointer(&addrbuf[addrlen])))
+
+ // initialize the accepted socket and update its properties with those of the listening socket
+ if err = windows.Setsockopt(windows.Handle(sock.handle),
+ windows.SOL_SOCKET, windows.SO_UPDATE_ACCEPT_CONTEXT,
+ (*byte)(unsafe.Pointer(&l.sock.handle)), int32(unsafe.Sizeof(l.sock.handle))); err != nil {
+ return nil, conn.opErr("accept", os.NewSyscallError("setsockopt", err))
+ }
+
+ sock = nil
+ return conn, nil
+}
+
+// Close closes the listener, causing any pending Accept calls to fail.
+func (l *HvsockListener) Close() error {
+ return l.sock.Close()
+}
+
+// HvsockDialer configures and dials a Hyper-V socket (i.e., an [HvsockConn]).
+type HvsockDialer struct {
+ // Deadline is the time by which the Dial operation must connect before failing.
+ Deadline time.Time
+
+ // Retries is the number of additional connects to try if the connection times out, is refused,
+ // or the host is unreachable.
+ Retries uint
+
+ // RetryWait is the time to wait after a connection error before retrying.
+ RetryWait time.Duration
+
+ rt *time.Timer // redial wait timer
+}
+
+// Dial the Hyper-V socket at addr.
+//
+// See [HvsockDialer.Dial] for more information.
+func Dial(ctx context.Context, addr *HvsockAddr) (conn *HvsockConn, err error) {
+ return (&HvsockDialer{}).Dial(ctx, addr)
+}
+
+// Dial attempts to connect to the Hyper-V socket at addr, and returns a connection if successful.
+// It will retry up to (HvsockDialer).Retries additional times if dialing fails, waiting
+// (HvsockDialer).RetryWait between retries.
+//
+// Dialing can be cancelled either by providing (HvsockDialer).Deadline, or cancelling ctx.
+func (d *HvsockDialer) Dial(ctx context.Context, addr *HvsockAddr) (conn *HvsockConn, err error) {
+ op := "dial"
+ // create the conn early to use opErr()
+ conn = &HvsockConn{
+ remote: *addr,
+ }
+
+ if !d.Deadline.IsZero() {
+ var cancel context.CancelFunc
+ ctx, cancel = context.WithDeadline(ctx, d.Deadline)
+ defer cancel()
+ }
+
+ // preemptive timeout/cancellation check
+ if err = ctx.Err(); err != nil {
+ return nil, conn.opErr(op, err)
+ }
+
+ sock, err := newHVSocket()
+ if err != nil {
+ return nil, conn.opErr(op, err)
+ }
+ defer func() {
+ if sock != nil {
+ sock.Close()
+ }
+ }()
+
+ sa := addr.raw()
+ err = socket.Bind(windows.Handle(sock.handle), &sa)
+ if err != nil {
+ return nil, conn.opErr(op, os.NewSyscallError("bind", err))
+ }
+
+ c, err := sock.prepareIO()
+ if err != nil {
+ return nil, conn.opErr(op, err)
+ }
+ defer sock.wg.Done()
+ var bytes uint32
+ for i := uint(0); i <= d.Retries; i++ {
+ err = socket.ConnectEx(
+ windows.Handle(sock.handle),
+ &sa,
+ nil, // sendBuf
+ 0, // sendDataLen
+ &bytes,
+ (*windows.Overlapped)(unsafe.Pointer(&c.o)))
+ _, err = sock.asyncIO(c, nil, bytes, err)
+ if i < d.Retries && canRedial(err) {
+ if err = d.redialWait(ctx); err == nil {
+ continue
+ }
+ }
+ break
+ }
+ if err != nil {
+ return nil, conn.opErr(op, os.NewSyscallError("connectex", err))
+ }
+
+ // update the connection properties, so shutdown can be used
+ if err = windows.Setsockopt(
+ windows.Handle(sock.handle),
+ windows.SOL_SOCKET,
+ windows.SO_UPDATE_CONNECT_CONTEXT,
+ nil, // optvalue
+ 0, // optlen
+ ); err != nil {
+ return nil, conn.opErr(op, os.NewSyscallError("setsockopt", err))
+ }
+
+ // get the local name
+ var sal rawHvsockAddr
+ err = socket.GetSockName(windows.Handle(sock.handle), &sal)
+ if err != nil {
+ return nil, conn.opErr(op, os.NewSyscallError("getsockname", err))
+ }
+ conn.local.fromRaw(&sal)
+
+ // one last check for timeout, since asyncIO doesn't check the context
+ if err = ctx.Err(); err != nil {
+ return nil, conn.opErr(op, err)
+ }
+
+ conn.sock = sock
+ sock = nil
+
+ return conn, nil
+}
+
+// redialWait waits before attempting to redial, resetting the timer as appropriate.
+func (d *HvsockDialer) redialWait(ctx context.Context) (err error) {
+ if d.RetryWait == 0 {
+ return nil
+ }
+
+ if d.rt == nil {
+ d.rt = time.NewTimer(d.RetryWait)
+ } else {
+ // should already be stopped and drained
+ d.rt.Reset(d.RetryWait)
+ }
+
+ select {
+ case <-ctx.Done():
+ case <-d.rt.C:
+ return nil
+ }
+
+ // stop and drain the timer
+ if !d.rt.Stop() {
+ <-d.rt.C
+ }
+ return ctx.Err()
+}
+
+// canRedial assumes err is a plain, unwrapped syscall.Errno, as returned by a direct syscall.
+func canRedial(err error) bool {
+ //nolint:errorlint // guaranteed to be an Errno
+ switch err {
+ case windows.WSAECONNREFUSED, windows.WSAENETUNREACH, windows.WSAETIMEDOUT,
+ windows.ERROR_CONNECTION_REFUSED, windows.ERROR_CONNECTION_UNAVAIL:
+ return true
+ default:
+ return false
+ }
+}
+
+func (conn *HvsockConn) opErr(op string, err error) error {
+ // translate from "file closed" to "socket closed"
+ if errors.Is(err, ErrFileClosed) {
+ err = socket.ErrSocketClosed
+ }
+ return &net.OpError{Op: op, Net: "hvsock", Source: &conn.local, Addr: &conn.remote, Err: err}
+}
+
+func (conn *HvsockConn) Read(b []byte) (int, error) {
+ c, err := conn.sock.prepareIO()
+ if err != nil {
+ return 0, conn.opErr("read", err)
+ }
+ defer conn.sock.wg.Done()
+ buf := syscall.WSABuf{Buf: &b[0], Len: uint32(len(b))}
+ var flags, bytes uint32
+ err = syscall.WSARecv(conn.sock.handle, &buf, 1, &bytes, &flags, &c.o, nil)
+ n, err := conn.sock.asyncIO(c, &conn.sock.readDeadline, bytes, err)
+ if err != nil {
+ var eno windows.Errno
+ if errors.As(err, &eno) {
+ err = os.NewSyscallError("wsarecv", eno)
+ }
+ return 0, conn.opErr("read", err)
+ } else if n == 0 {
+ err = io.EOF
+ }
+ return n, err
+}
+
+func (conn *HvsockConn) Write(b []byte) (int, error) {
+ t := 0
+ for len(b) != 0 {
+ n, err := conn.write(b)
+ if err != nil {
+ return t + n, err
+ }
+ t += n
+ b = b[n:]
+ }
+ return t, nil
+}
+
+func (conn *HvsockConn) write(b []byte) (int, error) {
+ c, err := conn.sock.prepareIO()
+ if err != nil {
+ return 0, conn.opErr("write", err)
+ }
+ defer conn.sock.wg.Done()
+ buf := syscall.WSABuf{Buf: &b[0], Len: uint32(len(b))}
+ var bytes uint32
+ err = syscall.WSASend(conn.sock.handle, &buf, 1, &bytes, 0, &c.o, nil)
+ n, err := conn.sock.asyncIO(c, &conn.sock.writeDeadline, bytes, err)
+ if err != nil {
+ var eno windows.Errno
+ if errors.As(err, &eno) {
+ err = os.NewSyscallError("wsasend", eno)
+ }
+ return 0, conn.opErr("write", err)
+ }
+ return n, err
+}
+
+// Close closes the socket connection, failing any pending read or write calls.
+func (conn *HvsockConn) Close() error {
+ return conn.sock.Close()
+}
+
+// IsClosed reports whether the connection has been closed.
+func (conn *HvsockConn) IsClosed() bool {
+ return conn.sock.IsClosed()
+}
+
+// shutdown disables sending or receiving on a socket.
+func (conn *HvsockConn) shutdown(how int) error {
+ if conn.IsClosed() {
+ return socket.ErrSocketClosed
+ }
+
+ err := syscall.Shutdown(conn.sock.handle, how)
+ if err != nil {
+ // If the connection was closed, shutdowns fail with "not connected"
+ if errors.Is(err, windows.WSAENOTCONN) ||
+ errors.Is(err, windows.WSAESHUTDOWN) {
+ err = socket.ErrSocketClosed
+ }
+ return os.NewSyscallError("shutdown", err)
+ }
+ return nil
+}
+
+// CloseRead shuts down the read end of the socket, preventing future read operations.
+func (conn *HvsockConn) CloseRead() error {
+ err := conn.shutdown(syscall.SHUT_RD)
+ if err != nil {
+ return conn.opErr("closeread", err)
+ }
+ return nil
+}
+
+// CloseWrite shuts down the write end of the socket, preventing future write operations and
+// notifying the other endpoint that no more data will be written.
+func (conn *HvsockConn) CloseWrite() error {
+ err := conn.shutdown(syscall.SHUT_WR)
+ if err != nil {
+ return conn.opErr("closewrite", err)
+ }
+ return nil
+}
+
+// LocalAddr returns the local address of the connection.
+func (conn *HvsockConn) LocalAddr() net.Addr {
+ return &conn.local
+}
+
+// RemoteAddr returns the remote address of the connection.
+func (conn *HvsockConn) RemoteAddr() net.Addr {
+ return &conn.remote
+}
+
+// SetDeadline implements the net.Conn SetDeadline method.
+func (conn *HvsockConn) SetDeadline(t time.Time) error {
+ // todo: implement `SetDeadline` for `win32File`
+ if err := conn.SetReadDeadline(t); err != nil {
+ return fmt.Errorf("set read deadline: %w", err)
+ }
+ if err := conn.SetWriteDeadline(t); err != nil {
+ return fmt.Errorf("set write deadline: %w", err)
+ }
+ return nil
+}
+
+// SetReadDeadline implements the net.Conn SetReadDeadline method.
+func (conn *HvsockConn) SetReadDeadline(t time.Time) error {
+ return conn.sock.SetReadDeadline(t)
+}
+
+// SetWriteDeadline implements the net.Conn SetWriteDeadline method.
+func (conn *HvsockConn) SetWriteDeadline(t time.Time) error {
+ return conn.sock.SetWriteDeadline(t)
+}
diff --git a/vendor/github.com/Microsoft/go-winio/internal/socket/rawaddr.go b/vendor/github.com/Microsoft/go-winio/internal/socket/rawaddr.go
new file mode 100644
index 000000000..7e82f9afa
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/internal/socket/rawaddr.go
@@ -0,0 +1,20 @@
+package socket
+
+import (
+ "unsafe"
+)
+
+// RawSockaddr allows structs to be used with [Bind] and [ConnectEx]. The
+// struct must meet the Win32 sockaddr requirements specified here:
+// https://docs.microsoft.com/en-us/windows/win32/winsock/sockaddr-2
+//
+// Specifically, the struct must be at least as large as an int16 (unsigned short),
+// which holds the address family.
+type RawSockaddr interface {
+ // Sockaddr returns a pointer to the RawSockaddr and its struct size, allowing
+ // for the RawSockaddr's data to be overwritten by syscalls (if necessary).
+ //
+ // It is the caller's responsibility to validate that the values are valid; invalid
+ // pointers or size can cause a panic.
+ Sockaddr() (unsafe.Pointer, int32, error)
+}
diff --git a/vendor/github.com/Microsoft/go-winio/internal/socket/socket.go b/vendor/github.com/Microsoft/go-winio/internal/socket/socket.go
new file mode 100644
index 000000000..39e8c05f8
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/internal/socket/socket.go
@@ -0,0 +1,179 @@
+//go:build windows
+
+package socket
+
+import (
+ "errors"
+ "fmt"
+ "net"
+ "sync"
+ "syscall"
+ "unsafe"
+
+ "github.com/Microsoft/go-winio/pkg/guid"
+ "golang.org/x/sys/windows"
+)
+
+//go:generate go run github.com/Microsoft/go-winio/tools/mkwinsyscall -output zsyscall_windows.go socket.go
+
+//sys getsockname(s windows.Handle, name unsafe.Pointer, namelen *int32) (err error) [failretval==socketError] = ws2_32.getsockname
+//sys getpeername(s windows.Handle, name unsafe.Pointer, namelen *int32) (err error) [failretval==socketError] = ws2_32.getpeername
+//sys bind(s windows.Handle, name unsafe.Pointer, namelen int32) (err error) [failretval==socketError] = ws2_32.bind
+
+const socketError = uintptr(^uint32(0))
+
+var (
+ // todo(helsaawy): create custom error types to store the desired vs actual size and addr family?
+
+ ErrBufferSize = errors.New("buffer size")
+ ErrAddrFamily = errors.New("address family")
+ ErrInvalidPointer = errors.New("invalid pointer")
+ ErrSocketClosed = fmt.Errorf("socket closed: %w", net.ErrClosed)
+)
+
+// todo(helsaawy): replace these with generics, ie: GetSockName[S RawSockaddr](s windows.Handle) (S, error)
+
+// GetSockName writes the local address of socket s to the [RawSockaddr] rsa.
+// If rsa is not large enough, [windows.WSAEFAULT] is returned.
+func GetSockName(s windows.Handle, rsa RawSockaddr) error {
+ ptr, l, err := rsa.Sockaddr()
+ if err != nil {
+ return fmt.Errorf("could not retrieve socket pointer and size: %w", err)
+ }
+
+ // Although getsockname returns WSAEFAULT if the buffer is too small, it does not update
+ // l to the required size, so, apart from repeatedly doubling the buffer, there is no remedy.
+ return getsockname(s, ptr, &l)
+}
+
+// GetPeerName returns the remote address the socket is connected to.
+//
+// See [GetSockName] for more information.
+func GetPeerName(s windows.Handle, rsa RawSockaddr) error {
+ ptr, l, err := rsa.Sockaddr()
+ if err != nil {
+ return fmt.Errorf("could not retrieve socket pointer and size: %w", err)
+ }
+
+ return getpeername(s, ptr, &l)
+}
+
+// Bind assigns the address in rsa to the socket s.
+func Bind(s windows.Handle, rsa RawSockaddr) (err error) {
+ ptr, l, err := rsa.Sockaddr()
+ if err != nil {
+ return fmt.Errorf("could not retrieve socket pointer and size: %w", err)
+ }
+
+ return bind(s, ptr, l)
+}
+
+// "golang.org/x/sys/windows".ConnectEx and .Bind only accept internal implementations of
+// their sockaddr interface, so they cannot be used with HvsockAddr.
+// The functionality is replicated here from
+// https://cs.opensource.google/go/x/sys/+/master:windows/syscall_windows.go
+
+// The function pointers to `AcceptEx`, `ConnectEx` and `GetAcceptExSockaddrs` must be loaded at
+// runtime via a WSAIoctl call:
+// https://docs.microsoft.com/en-us/windows/win32/api/Mswsock/nc-mswsock-lpfn_connectex#remarks
+
+type runtimeFunc struct {
+ id guid.GUID
+ once sync.Once
+ addr uintptr
+ err error
+}
+
+func (f *runtimeFunc) Load() error {
+ f.once.Do(func() {
+ var s windows.Handle
+ s, f.err = windows.Socket(windows.AF_INET, windows.SOCK_STREAM, windows.IPPROTO_TCP)
+ if f.err != nil {
+ return
+ }
+ defer windows.CloseHandle(s) //nolint:errcheck
+
+ var n uint32
+ f.err = windows.WSAIoctl(s,
+ windows.SIO_GET_EXTENSION_FUNCTION_POINTER,
+ (*byte)(unsafe.Pointer(&f.id)),
+ uint32(unsafe.Sizeof(f.id)),
+ (*byte)(unsafe.Pointer(&f.addr)),
+ uint32(unsafe.Sizeof(f.addr)),
+ &n,
+ nil, //overlapped
+ 0, //completionRoutine
+ )
+ })
+ return f.err
+}
+
+var (
+ // todo: add `AcceptEx` and `GetAcceptExSockaddrs`
+ WSAID_CONNECTEX = guid.GUID{ //revive:disable-line:var-naming ALL_CAPS
+ Data1: 0x25a207b9,
+ Data2: 0xddf3,
+ Data3: 0x4660,
+ Data4: [8]byte{0x8e, 0xe9, 0x76, 0xe5, 0x8c, 0x74, 0x06, 0x3e},
+ }
+
+ connectExFunc = runtimeFunc{id: WSAID_CONNECTEX}
+)
+
+// ConnectEx connects socket fd to the address in rsa using the Winsock
+// ConnectEx extension function, loading its pointer on first use.
+func ConnectEx(
+ fd windows.Handle,
+ rsa RawSockaddr,
+ sendBuf *byte,
+ sendDataLen uint32,
+ bytesSent *uint32,
+ overlapped *windows.Overlapped,
+) error {
+ if err := connectExFunc.Load(); err != nil {
+ return fmt.Errorf("failed to load ConnectEx function pointer: %w", err)
+ }
+ ptr, n, err := rsa.Sockaddr()
+ if err != nil {
+ return err
+ }
+ return connectEx(fd, ptr, n, sendBuf, sendDataLen, bytesSent, overlapped)
+}
+
+// BOOL LpfnConnectex(
+// [in] SOCKET s,
+// [in] const sockaddr *name,
+// [in] int namelen,
+// [in, optional] PVOID lpSendBuffer,
+// [in] DWORD dwSendDataLength,
+// [out] LPDWORD lpdwBytesSent,
+// [in] LPOVERLAPPED lpOverlapped
+// )
+
+func connectEx(
+ s windows.Handle,
+ name unsafe.Pointer,
+ namelen int32,
+ sendBuf *byte,
+ sendDataLen uint32,
+ bytesSent *uint32,
+ overlapped *windows.Overlapped,
+) (err error) {
+ // todo: after upgrading to 1.18, switch from syscall.Syscall9 to syscall.SyscallN
+ r1, _, e1 := syscall.Syscall9(connectExFunc.addr,
+ 7,
+ uintptr(s),
+ uintptr(name),
+ uintptr(namelen),
+ uintptr(unsafe.Pointer(sendBuf)),
+ uintptr(sendDataLen),
+ uintptr(unsafe.Pointer(bytesSent)),
+ uintptr(unsafe.Pointer(overlapped)),
+ 0,
+ 0)
+ if r1 == 0 {
+ if e1 != 0 {
+ err = error(e1)
+ } else {
+ err = syscall.EINVAL
+ }
+ }
+ return err
+}
diff --git a/vendor/github.com/Microsoft/go-winio/internal/socket/zsyscall_windows.go b/vendor/github.com/Microsoft/go-winio/internal/socket/zsyscall_windows.go
new file mode 100644
index 000000000..6d2e1a9e4
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/internal/socket/zsyscall_windows.go
@@ -0,0 +1,72 @@
+//go:build windows
+
+// Code generated by 'go generate' using "github.com/Microsoft/go-winio/tools/mkwinsyscall"; DO NOT EDIT.
+
+package socket
+
+import (
+ "syscall"
+ "unsafe"
+
+ "golang.org/x/sys/windows"
+)
+
+var _ unsafe.Pointer
+
+// Do the interface allocations only once for common
+// Errno values.
+const (
+ errnoERROR_IO_PENDING = 997
+)
+
+var (
+ errERROR_IO_PENDING error = syscall.Errno(errnoERROR_IO_PENDING)
+ errERROR_EINVAL error = syscall.EINVAL
+)
+
+// errnoErr returns common boxed Errno values, to prevent
+// allocations at runtime.
+func errnoErr(e syscall.Errno) error {
+ switch e {
+ case 0:
+ return errERROR_EINVAL
+ case errnoERROR_IO_PENDING:
+ return errERROR_IO_PENDING
+ }
+ // TODO: add more here, after collecting data on the common
+ // error values see on Windows. (perhaps when running
+ // all.bat?)
+ return e
+}
+
+var (
+ modws2_32 = windows.NewLazySystemDLL("ws2_32.dll")
+
+ procbind = modws2_32.NewProc("bind")
+ procgetpeername = modws2_32.NewProc("getpeername")
+ procgetsockname = modws2_32.NewProc("getsockname")
+)
+
+func bind(s windows.Handle, name unsafe.Pointer, namelen int32) (err error) {
+ r1, _, e1 := syscall.Syscall(procbind.Addr(), 3, uintptr(s), uintptr(name), uintptr(namelen))
+ if r1 == socketError {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func getpeername(s windows.Handle, name unsafe.Pointer, namelen *int32) (err error) {
+ r1, _, e1 := syscall.Syscall(procgetpeername.Addr(), 3, uintptr(s), uintptr(name), uintptr(unsafe.Pointer(namelen)))
+ if r1 == socketError {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func getsockname(s windows.Handle, name unsafe.Pointer, namelen *int32) (err error) {
+ r1, _, e1 := syscall.Syscall(procgetsockname.Addr(), 3, uintptr(s), uintptr(name), uintptr(unsafe.Pointer(namelen)))
+ if r1 == socketError {
+ err = errnoErr(e1)
+ }
+ return
+}
diff --git a/vendor/github.com/Microsoft/go-winio/pipe.go b/vendor/github.com/Microsoft/go-winio/pipe.go
new file mode 100644
index 000000000..ca6e38fc0
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/pipe.go
@@ -0,0 +1,521 @@
+//go:build windows
+// +build windows
+
+package winio
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "io"
+ "net"
+ "os"
+ "runtime"
+ "syscall"
+ "time"
+ "unsafe"
+
+ "golang.org/x/sys/windows"
+)
+
+//sys connectNamedPipe(pipe syscall.Handle, o *syscall.Overlapped) (err error) = ConnectNamedPipe
+//sys createNamedPipe(name string, flags uint32, pipeMode uint32, maxInstances uint32, outSize uint32, inSize uint32, defaultTimeout uint32, sa *syscall.SecurityAttributes) (handle syscall.Handle, err error) [failretval==syscall.InvalidHandle] = CreateNamedPipeW
+//sys createFile(name string, access uint32, mode uint32, sa *syscall.SecurityAttributes, createmode uint32, attrs uint32, templatefile syscall.Handle) (handle syscall.Handle, err error) [failretval==syscall.InvalidHandle] = CreateFileW
+//sys getNamedPipeInfo(pipe syscall.Handle, flags *uint32, outSize *uint32, inSize *uint32, maxInstances *uint32) (err error) = GetNamedPipeInfo
+//sys getNamedPipeHandleState(pipe syscall.Handle, state *uint32, curInstances *uint32, maxCollectionCount *uint32, collectDataTimeout *uint32, userName *uint16, maxUserNameSize uint32) (err error) = GetNamedPipeHandleStateW
+//sys localAlloc(uFlags uint32, length uint32) (ptr uintptr) = LocalAlloc
+//sys ntCreateNamedPipeFile(pipe *syscall.Handle, access uint32, oa *objectAttributes, iosb *ioStatusBlock, share uint32, disposition uint32, options uint32, typ uint32, readMode uint32, completionMode uint32, maxInstances uint32, inboundQuota uint32, outputQuota uint32, timeout *int64) (status ntStatus) = ntdll.NtCreateNamedPipeFile
+//sys rtlNtStatusToDosError(status ntStatus) (winerr error) = ntdll.RtlNtStatusToDosErrorNoTeb
+//sys rtlDosPathNameToNtPathName(name *uint16, ntName *unicodeString, filePart uintptr, reserved uintptr) (status ntStatus) = ntdll.RtlDosPathNameToNtPathName_U
+//sys rtlDefaultNpAcl(dacl *uintptr) (status ntStatus) = ntdll.RtlDefaultNpAcl
+
+type ioStatusBlock struct {
+ Status, Information uintptr
+}
+
+type objectAttributes struct {
+ Length uintptr
+ RootDirectory uintptr
+ ObjectName *unicodeString
+ Attributes uintptr
+ SecurityDescriptor *securityDescriptor
+ SecurityQoS uintptr
+}
+
+type unicodeString struct {
+ Length uint16
+ MaximumLength uint16
+ Buffer uintptr
+}
+
+type securityDescriptor struct {
+ Revision byte
+ Sbz1 byte
+ Control uint16
+ Owner uintptr
+ Group uintptr
+ Sacl uintptr //revive:disable-line:var-naming SACL, not Sacl
+ Dacl uintptr //revive:disable-line:var-naming DACL, not Dacl
+}
+
+type ntStatus int32
+
+func (status ntStatus) Err() error {
+ if status >= 0 {
+ return nil
+ }
+ return rtlNtStatusToDosError(status)
+}
+
+var (
+ // ErrPipeListenerClosed is returned for pipe operations on listeners that have been closed.
+ ErrPipeListenerClosed = net.ErrClosed
+
+ errPipeWriteClosed = errors.New("pipe has been closed for write")
+)
+
+type win32Pipe struct {
+ *win32File
+ path string
+}
+
+type win32MessageBytePipe struct {
+ win32Pipe
+ writeClosed bool
+ readEOF bool
+}
+
+type pipeAddress string
+
+func (f *win32Pipe) LocalAddr() net.Addr {
+ return pipeAddress(f.path)
+}
+
+func (f *win32Pipe) RemoteAddr() net.Addr {
+ return pipeAddress(f.path)
+}
+
+func (f *win32Pipe) SetDeadline(t time.Time) error {
+ if err := f.SetReadDeadline(t); err != nil {
+ return err
+ }
+ return f.SetWriteDeadline(t)
+}
+
+// CloseWrite closes the write side of a message pipe in byte mode.
+func (f *win32MessageBytePipe) CloseWrite() error {
+ if f.writeClosed {
+ return errPipeWriteClosed
+ }
+ err := f.win32File.Flush()
+ if err != nil {
+ return err
+ }
+ _, err = f.win32File.Write(nil)
+ if err != nil {
+ return err
+ }
+ f.writeClosed = true
+ return nil
+}
+
+// Write writes bytes to a message pipe in byte mode. Zero-byte writes are ignored, since
+// they are used to implement CloseWrite().
+func (f *win32MessageBytePipe) Write(b []byte) (int, error) {
+ if f.writeClosed {
+ return 0, errPipeWriteClosed
+ }
+ if len(b) == 0 {
+ return 0, nil
+ }
+ return f.win32File.Write(b)
+}
+
+// Read reads bytes from a message pipe in byte mode. A read of a zero-byte message on a message
+// mode pipe will return io.EOF, as will all subsequent reads.
+func (f *win32MessageBytePipe) Read(b []byte) (int, error) {
+ if f.readEOF {
+ return 0, io.EOF
+ }
+ n, err := f.win32File.Read(b)
+ if err == io.EOF { //nolint:errorlint
+ // If this was the result of a zero-byte read, then
+ // it is possible that the read was due to a zero-size
+ // message. Since we are simulating CloseWrite with a
+ // zero-byte message, ensure that all future Read() calls
+ // also return EOF.
+ f.readEOF = true
+ } else if err == syscall.ERROR_MORE_DATA { //nolint:errorlint // err is Errno
+ // ERROR_MORE_DATA indicates that the pipe's read mode is message mode
+ // and the message still has more bytes. Treat this as a success, since
+ // this package presents all named pipes as byte streams.
+ err = nil
+ }
+ return n, err
+}
+
+func (pipeAddress) Network() string {
+ return "pipe"
+}
+
+func (s pipeAddress) String() string {
+ return string(s)
+}
+
+// tryDialPipe attempts to dial the pipe at `path` until `ctx` cancellation or timeout.
+func tryDialPipe(ctx context.Context, path *string, access uint32) (syscall.Handle, error) {
+ for {
+ select {
+ case <-ctx.Done():
+ return syscall.Handle(0), ctx.Err()
+ default:
+ h, err := createFile(*path,
+ access,
+ 0,
+ nil,
+ syscall.OPEN_EXISTING,
+ windows.FILE_FLAG_OVERLAPPED|windows.SECURITY_SQOS_PRESENT|windows.SECURITY_ANONYMOUS,
+ 0)
+ if err == nil {
+ return h, nil
+ }
+ if err != windows.ERROR_PIPE_BUSY { //nolint:errorlint // err is Errno
+ return h, &os.PathError{Err: err, Op: "open", Path: *path}
+ }
+ // Wait 10 msec and try again. This is a rather simplistic approach,
+ // as we always retry every 10 milliseconds.
+ time.Sleep(10 * time.Millisecond)
+ }
+ }
+}
+
+// DialPipe connects to a named pipe by path, timing out if the connection
+// takes longer than the specified duration. If timeout is nil, then we use
+// a default timeout of 2 seconds. (We do not use WaitNamedPipe.)
+func DialPipe(path string, timeout *time.Duration) (net.Conn, error) {
+ var absTimeout time.Time
+ if timeout != nil {
+ absTimeout = time.Now().Add(*timeout)
+ } else {
+ absTimeout = time.Now().Add(2 * time.Second)
+ }
+ ctx, cancel := context.WithDeadline(context.Background(), absTimeout)
+ defer cancel()
+ conn, err := DialPipeContext(ctx, path)
+ if errors.Is(err, context.DeadlineExceeded) {
+ return nil, ErrTimeout
+ }
+ return conn, err
+}
+
+// DialPipeContext attempts to connect to a named pipe by `path` until `ctx`
+// cancellation or timeout.
+func DialPipeContext(ctx context.Context, path string) (net.Conn, error) {
+ return DialPipeAccess(ctx, path, syscall.GENERIC_READ|syscall.GENERIC_WRITE)
+}
+
+// DialPipeAccess attempts to connect to a named pipe by `path` with `access` until `ctx`
+// cancellation or timeout.
+func DialPipeAccess(ctx context.Context, path string, access uint32) (net.Conn, error) {
+ var err error
+ var h syscall.Handle
+ h, err = tryDialPipe(ctx, &path, access)
+ if err != nil {
+ return nil, err
+ }
+
+ var flags uint32
+ err = getNamedPipeInfo(h, &flags, nil, nil, nil)
+ if err != nil {
+ return nil, err
+ }
+
+ f, err := makeWin32File(h)
+ if err != nil {
+ syscall.Close(h)
+ return nil, err
+ }
+
+ // If the pipe is in message mode, return a message byte pipe, which
+ // supports CloseWrite().
+ if flags&windows.PIPE_TYPE_MESSAGE != 0 {
+ return &win32MessageBytePipe{
+ win32Pipe: win32Pipe{win32File: f, path: path},
+ }, nil
+ }
+ return &win32Pipe{win32File: f, path: path}, nil
+}
+
+type acceptResponse struct {
+ f *win32File
+ err error
+}
+
+type win32PipeListener struct {
+ firstHandle syscall.Handle
+ path string
+ config PipeConfig
+ acceptCh chan (chan acceptResponse)
+ closeCh chan int
+ doneCh chan int
+}
+
+func makeServerPipeHandle(path string, sd []byte, c *PipeConfig, first bool) (syscall.Handle, error) {
+ path16, err := syscall.UTF16FromString(path)
+ if err != nil {
+ return 0, &os.PathError{Op: "open", Path: path, Err: err}
+ }
+
+ var oa objectAttributes
+ oa.Length = unsafe.Sizeof(oa)
+
+ var ntPath unicodeString
+ if err := rtlDosPathNameToNtPathName(&path16[0],
+ &ntPath,
+ 0,
+ 0,
+ ).Err(); err != nil {
+ return 0, &os.PathError{Op: "open", Path: path, Err: err}
+ }
+ defer localFree(ntPath.Buffer)
+ oa.ObjectName = &ntPath
+
+ // The security descriptor is only needed for the first pipe.
+ if first {
+ if sd != nil {
+ l := uint32(len(sd))
+ sdb := localAlloc(0, l)
+ defer localFree(sdb)
+ copy((*[0xffff]byte)(unsafe.Pointer(sdb))[:], sd)
+ oa.SecurityDescriptor = (*securityDescriptor)(unsafe.Pointer(sdb))
+ } else {
+ // Construct the default named pipe security descriptor.
+ var dacl uintptr
+ if err := rtlDefaultNpAcl(&dacl).Err(); err != nil {
+ return 0, fmt.Errorf("getting default named pipe ACL: %w", err)
+ }
+ defer localFree(dacl)
+
+ sdb := &securityDescriptor{
+ Revision: 1,
+ Control: windows.SE_DACL_PRESENT,
+ Dacl: dacl,
+ }
+ oa.SecurityDescriptor = sdb
+ }
+ }
+
+ typ := uint32(windows.FILE_PIPE_REJECT_REMOTE_CLIENTS)
+ if c.MessageMode {
+ typ |= windows.FILE_PIPE_MESSAGE_TYPE
+ }
+
+ disposition := uint32(windows.FILE_OPEN)
+ access := uint32(syscall.GENERIC_READ | syscall.GENERIC_WRITE | syscall.SYNCHRONIZE)
+ if first {
+ disposition = windows.FILE_CREATE
+ // By not asking for read or write access, the named pipe file system
+ // will put this pipe into an initially disconnected state, blocking
+ // client connections until the next call with first == false.
+ access = syscall.SYNCHRONIZE
+ }
+
+ timeout := int64(-50 * 10000) // 50ms
+
+ var (
+ h syscall.Handle
+ iosb ioStatusBlock
+ )
+ err = ntCreateNamedPipeFile(&h,
+ access,
+ &oa,
+ &iosb,
+ syscall.FILE_SHARE_READ|syscall.FILE_SHARE_WRITE,
+ disposition,
+ 0,
+ typ,
+ 0,
+ 0,
+ 0xffffffff,
+ uint32(c.InputBufferSize),
+ uint32(c.OutputBufferSize),
+ &timeout).Err()
+ if err != nil {
+ return 0, &os.PathError{Op: "open", Path: path, Err: err}
+ }
+
+ runtime.KeepAlive(ntPath)
+ return h, nil
+}
+
+func (l *win32PipeListener) makeServerPipe() (*win32File, error) {
+ h, err := makeServerPipeHandle(l.path, nil, &l.config, false)
+ if err != nil {
+ return nil, err
+ }
+ f, err := makeWin32File(h)
+ if err != nil {
+ syscall.Close(h)
+ return nil, err
+ }
+ return f, nil
+}
+
+func (l *win32PipeListener) makeConnectedServerPipe() (*win32File, error) {
+ p, err := l.makeServerPipe()
+ if err != nil {
+ return nil, err
+ }
+
+ // Wait for the client to connect.
+ ch := make(chan error)
+ go func(p *win32File) {
+ ch <- connectPipe(p)
+ }(p)
+
+ select {
+ case err = <-ch:
+ if err != nil {
+ p.Close()
+ p = nil
+ }
+ case <-l.closeCh:
+ // Abort the connect request by closing the handle.
+ p.Close()
+ p = nil
+ err = <-ch
+ if err == nil || err == ErrFileClosed { //nolint:errorlint // err is Errno
+ err = ErrPipeListenerClosed
+ }
+ }
+ return p, err
+}
+
+func (l *win32PipeListener) listenerRoutine() {
+ closed := false
+ for !closed {
+ select {
+ case <-l.closeCh:
+ closed = true
+ case responseCh := <-l.acceptCh:
+ var (
+ p *win32File
+ err error
+ )
+ for {
+ p, err = l.makeConnectedServerPipe()
+ // If the connection was immediately closed by the client, try
+ // again.
+ if err != windows.ERROR_NO_DATA { //nolint:errorlint // err is Errno
+ break
+ }
+ }
+ responseCh <- acceptResponse{p, err}
+ closed = err == ErrPipeListenerClosed //nolint:errorlint // err is Errno
+ }
+ }
+ syscall.Close(l.firstHandle)
+ l.firstHandle = 0
+ // Notify Close() and Accept() callers that the handle has been closed.
+ close(l.doneCh)
+}
+
+// PipeConfig contains the configuration for the pipe listener.
+type PipeConfig struct {
+ // SecurityDescriptor contains a Windows security descriptor in SDDL format.
+ SecurityDescriptor string
+
+ // MessageMode determines whether the pipe is in byte or message mode. In either
+ // case the pipe is read in byte mode by default. The only practical difference in
+ // this implementation is that CloseWrite() is only supported for message mode pipes;
+ // CloseWrite() is implemented as a zero-byte write, but zero-byte writes are only
+ // transferred to the reader (and returned as io.EOF in this implementation)
+ // when the pipe is in message mode.
+ MessageMode bool
+
+ // InputBufferSize specifies the size of the input buffer, in bytes.
+ InputBufferSize int32
+
+ // OutputBufferSize specifies the size of the output buffer, in bytes.
+ OutputBufferSize int32
+}
+
+// ListenPipe creates a listener on a Windows named pipe path, e.g. \\.\pipe\mypipe.
+// The pipe must not already exist.
+func ListenPipe(path string, c *PipeConfig) (net.Listener, error) {
+ var (
+ sd []byte
+ err error
+ )
+ if c == nil {
+ c = &PipeConfig{}
+ }
+ if c.SecurityDescriptor != "" {
+ sd, err = SddlToSecurityDescriptor(c.SecurityDescriptor)
+ if err != nil {
+ return nil, err
+ }
+ }
+ h, err := makeServerPipeHandle(path, sd, c, true)
+ if err != nil {
+ return nil, err
+ }
+ l := &win32PipeListener{
+ firstHandle: h,
+ path: path,
+ config: *c,
+ acceptCh: make(chan (chan acceptResponse)),
+ closeCh: make(chan int),
+ doneCh: make(chan int),
+ }
+ go l.listenerRoutine()
+ return l, nil
+}
+
+func connectPipe(p *win32File) error {
+ c, err := p.prepareIO()
+ if err != nil {
+ return err
+ }
+ defer p.wg.Done()
+
+ err = connectNamedPipe(p.handle, &c.o)
+ _, err = p.asyncIO(c, nil, 0, err)
+ if err != nil && err != windows.ERROR_PIPE_CONNECTED { //nolint:errorlint // err is Errno
+ return err
+ }
+ return nil
+}
+
+func (l *win32PipeListener) Accept() (net.Conn, error) {
+ ch := make(chan acceptResponse)
+ select {
+ case l.acceptCh <- ch:
+ response := <-ch
+ err := response.err
+ if err != nil {
+ return nil, err
+ }
+ if l.config.MessageMode {
+ return &win32MessageBytePipe{
+ win32Pipe: win32Pipe{win32File: response.f, path: l.path},
+ }, nil
+ }
+ return &win32Pipe{win32File: response.f, path: l.path}, nil
+ case <-l.doneCh:
+ return nil, ErrPipeListenerClosed
+ }
+}
+
+func (l *win32PipeListener) Close() error {
+ select {
+ case l.closeCh <- 1:
+ <-l.doneCh
+ case <-l.doneCh:
+ }
+ return nil
+}
+
+func (l *win32PipeListener) Addr() net.Addr {
+ return pipeAddress(l.path)
+}
diff --git a/vendor/github.com/Microsoft/go-winio/pkg/guid/guid.go b/vendor/github.com/Microsoft/go-winio/pkg/guid/guid.go
new file mode 100644
index 000000000..48ce4e924
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/pkg/guid/guid.go
@@ -0,0 +1,232 @@
+// Package guid provides a GUID type. The backing structure for a GUID is
+// identical to that used by the golang.org/x/sys/windows GUID type.
+// There are two main binary encodings used for a GUID, the big-endian encoding,
+// and the Windows (mixed-endian) encoding. See here for details:
+// https://en.wikipedia.org/wiki/Universally_unique_identifier#Encoding
+package guid
+
+import (
+ "crypto/rand"
+ "crypto/sha1" //nolint:gosec // not used for secure application
+ "encoding"
+ "encoding/binary"
+ "fmt"
+ "strconv"
+)
+
+//go:generate go run golang.org/x/tools/cmd/stringer -type=Variant -trimprefix=Variant -linecomment
+
+// Variant specifies which GUID variant (or "type") the GUID is. It determines
+// how the entirety of the rest of the GUID is interpreted.
+type Variant uint8
+
+// The variants specified by RFC 4122 section 4.1.1.
+const (
+ // VariantUnknown specifies a GUID variant which does not conform to one of
+ // the variant encodings specified in RFC 4122.
+ VariantUnknown Variant = iota
+ VariantNCS
+ VariantRFC4122 // RFC 4122
+ VariantMicrosoft
+ VariantFuture
+)
+
+// Version specifies how the bits in the GUID were generated. For instance, a
+// version 4 GUID is randomly generated, and a version 5 is generated from the
+// hash of an input string.
+type Version uint8
+
+func (v Version) String() string {
+ return strconv.FormatUint(uint64(v), 10)
+}
+
+var _ = (encoding.TextMarshaler)(GUID{})
+var _ = (encoding.TextUnmarshaler)(&GUID{})
+
+// NewV4 returns a new version 4 (pseudorandom) GUID, as defined by RFC 4122.
+func NewV4() (GUID, error) {
+ var b [16]byte
+ if _, err := rand.Read(b[:]); err != nil {
+ return GUID{}, err
+ }
+
+ g := FromArray(b)
+ g.setVersion(4) // Version 4 means randomly generated.
+ g.setVariant(VariantRFC4122)
+
+ return g, nil
+}
+
+// NewV5 returns a new version 5 (generated from a string via SHA-1 hashing)
+// GUID, as defined by RFC 4122. The RFC is unclear on the encoding of the name,
+// and the sample code treats it as a series of bytes, so we do the same here.
+//
+// Some implementations, such as those found on Windows, treat the name as a
+// big-endian UTF16 stream of bytes. If that is desired, the string can be
+// encoded as such before being passed to this function.
+func NewV5(namespace GUID, name []byte) (GUID, error) {
+ b := sha1.New() //nolint:gosec // not used for secure application
+ namespaceBytes := namespace.ToArray()
+ b.Write(namespaceBytes[:])
+ b.Write(name)
+
+ a := [16]byte{}
+ copy(a[:], b.Sum(nil))
+
+ g := FromArray(a)
+ g.setVersion(5) // Version 5 means generated from a string.
+ g.setVariant(VariantRFC4122)
+
+ return g, nil
+}
+
+func fromArray(b [16]byte, order binary.ByteOrder) GUID {
+ var g GUID
+ g.Data1 = order.Uint32(b[0:4])
+ g.Data2 = order.Uint16(b[4:6])
+ g.Data3 = order.Uint16(b[6:8])
+ copy(g.Data4[:], b[8:16])
+ return g
+}
+
+func (g GUID) toArray(order binary.ByteOrder) [16]byte {
+ b := [16]byte{}
+ order.PutUint32(b[0:4], g.Data1)
+ order.PutUint16(b[4:6], g.Data2)
+ order.PutUint16(b[6:8], g.Data3)
+ copy(b[8:16], g.Data4[:])
+ return b
+}
+
+// FromArray constructs a GUID from a big-endian encoding array of 16 bytes.
+func FromArray(b [16]byte) GUID {
+ return fromArray(b, binary.BigEndian)
+}
+
+// ToArray returns an array of 16 bytes representing the GUID in big-endian
+// encoding.
+func (g GUID) ToArray() [16]byte {
+ return g.toArray(binary.BigEndian)
+}
+
+// FromWindowsArray constructs a GUID from a Windows encoding array of bytes.
+func FromWindowsArray(b [16]byte) GUID {
+ return fromArray(b, binary.LittleEndian)
+}
+
+// ToWindowsArray returns an array of 16 bytes representing the GUID in Windows
+// encoding.
+func (g GUID) ToWindowsArray() [16]byte {
+ return g.toArray(binary.LittleEndian)
+}
+
+func (g GUID) String() string {
+ return fmt.Sprintf(
+ "%08x-%04x-%04x-%04x-%012x",
+ g.Data1,
+ g.Data2,
+ g.Data3,
+ g.Data4[:2],
+ g.Data4[2:])
+}
+
+// FromString parses a string containing a GUID and returns the GUID. The only
+// format currently supported is the `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`
+// format.
+func FromString(s string) (GUID, error) {
+ if len(s) != 36 {
+ return GUID{}, fmt.Errorf("invalid GUID %q", s)
+ }
+ if s[8] != '-' || s[13] != '-' || s[18] != '-' || s[23] != '-' {
+ return GUID{}, fmt.Errorf("invalid GUID %q", s)
+ }
+
+ var g GUID
+
+ data1, err := strconv.ParseUint(s[0:8], 16, 32)
+ if err != nil {
+ return GUID{}, fmt.Errorf("invalid GUID %q", s)
+ }
+ g.Data1 = uint32(data1)
+
+ data2, err := strconv.ParseUint(s[9:13], 16, 16)
+ if err != nil {
+ return GUID{}, fmt.Errorf("invalid GUID %q", s)
+ }
+ g.Data2 = uint16(data2)
+
+ data3, err := strconv.ParseUint(s[14:18], 16, 16)
+ if err != nil {
+ return GUID{}, fmt.Errorf("invalid GUID %q", s)
+ }
+ g.Data3 = uint16(data3)
+
+ for i, x := range []int{19, 21, 24, 26, 28, 30, 32, 34} {
+ v, err := strconv.ParseUint(s[x:x+2], 16, 8)
+ if err != nil {
+ return GUID{}, fmt.Errorf("invalid GUID %q", s)
+ }
+ g.Data4[i] = uint8(v)
+ }
+
+ return g, nil
+}
+
+func (g *GUID) setVariant(v Variant) {
+ d := g.Data4[0]
+ switch v {
+ case VariantNCS:
+ d = (d & 0x7f)
+ case VariantRFC4122:
+ d = (d & 0x3f) | 0x80
+ case VariantMicrosoft:
+ d = (d & 0x1f) | 0xc0
+ case VariantFuture:
+ d = (d & 0x0f) | 0xe0
+ case VariantUnknown:
+ fallthrough
+ default:
+ panic(fmt.Sprintf("invalid variant: %d", v))
+ }
+ g.Data4[0] = d
+}
+
+// Variant returns the GUID variant, as defined in RFC 4122.
+func (g GUID) Variant() Variant {
+ b := g.Data4[0]
+ if b&0x80 == 0 {
+ return VariantNCS
+ } else if b&0xc0 == 0x80 {
+ return VariantRFC4122
+ } else if b&0xe0 == 0xc0 {
+ return VariantMicrosoft
+ } else if b&0xe0 == 0xe0 {
+ return VariantFuture
+ }
+ return VariantUnknown
+}
+
+func (g *GUID) setVersion(v Version) {
+ g.Data3 = (g.Data3 & 0x0fff) | (uint16(v) << 12)
+}
+
+// Version returns the GUID version, as defined in RFC 4122.
+func (g GUID) Version() Version {
+ return Version((g.Data3 & 0xF000) >> 12)
+}
+
+// MarshalText returns the textual representation of the GUID.
+func (g GUID) MarshalText() ([]byte, error) {
+ return []byte(g.String()), nil
+}
+
+// UnmarshalText takes the textual representation of a GUID, and unmarshals it
+// into this GUID.
+func (g *GUID) UnmarshalText(text []byte) error {
+ g2, err := FromString(string(text))
+ if err != nil {
+ return err
+ }
+ *g = g2
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/go-winio/pkg/guid/guid_nonwindows.go b/vendor/github.com/Microsoft/go-winio/pkg/guid/guid_nonwindows.go
new file mode 100644
index 000000000..805bd3548
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/pkg/guid/guid_nonwindows.go
@@ -0,0 +1,16 @@
+//go:build !windows
+// +build !windows
+
+package guid
+
+// GUID represents a GUID/UUID. It has the same structure as
+// golang.org/x/sys/windows.GUID so that it can be used with functions expecting
+// that type. It is defined as its own type as that is only available to builds
+// targeted at `windows`. The representation matches that used by native Windows
+// code.
+type GUID struct {
+ Data1 uint32
+ Data2 uint16
+ Data3 uint16
+ Data4 [8]byte
+}
diff --git a/vendor/github.com/Microsoft/go-winio/pkg/guid/guid_windows.go b/vendor/github.com/Microsoft/go-winio/pkg/guid/guid_windows.go
new file mode 100644
index 000000000..27e45ee5c
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/pkg/guid/guid_windows.go
@@ -0,0 +1,13 @@
+//go:build windows
+// +build windows
+
+package guid
+
+import "golang.org/x/sys/windows"
+
+// GUID represents a GUID/UUID. It has the same structure as
+// golang.org/x/sys/windows.GUID so that it can be used with functions expecting
+// that type. It is defined as its own type so that stringification and
+// marshaling can be supported. The representation matches that used by native
+// Windows code.
+type GUID windows.GUID
diff --git a/vendor/github.com/Microsoft/go-winio/pkg/guid/variant_string.go b/vendor/github.com/Microsoft/go-winio/pkg/guid/variant_string.go
new file mode 100644
index 000000000..4076d3132
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/pkg/guid/variant_string.go
@@ -0,0 +1,27 @@
+// Code generated by "stringer -type=Variant -trimprefix=Variant -linecomment"; DO NOT EDIT.
+
+package guid
+
+import "strconv"
+
+func _() {
+ // An "invalid array index" compiler error signifies that the constant values have changed.
+ // Re-run the stringer command to generate them again.
+ var x [1]struct{}
+ _ = x[VariantUnknown-0]
+ _ = x[VariantNCS-1]
+ _ = x[VariantRFC4122-2]
+ _ = x[VariantMicrosoft-3]
+ _ = x[VariantFuture-4]
+}
+
+const _Variant_name = "UnknownNCSRFC 4122MicrosoftFuture"
+
+var _Variant_index = [...]uint8{0, 7, 10, 18, 27, 33}
+
+func (i Variant) String() string {
+ if i >= Variant(len(_Variant_index)-1) {
+ return "Variant(" + strconv.FormatInt(int64(i), 10) + ")"
+ }
+ return _Variant_name[_Variant_index[i]:_Variant_index[i+1]]
+}
diff --git a/vendor/github.com/Microsoft/go-winio/pkg/security/grantvmgroupaccess.go b/vendor/github.com/Microsoft/go-winio/pkg/security/grantvmgroupaccess.go
new file mode 100644
index 000000000..6df87b749
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/pkg/security/grantvmgroupaccess.go
@@ -0,0 +1,169 @@
+//go:build windows
+// +build windows
+
+package security
+
+import (
+ "fmt"
+ "os"
+ "syscall"
+ "unsafe"
+)
+
+type (
+ accessMask uint32
+ accessMode uint32
+ desiredAccess uint32
+ inheritMode uint32
+ objectType uint32
+ shareMode uint32
+ securityInformation uint32
+ trusteeForm uint32
+ trusteeType uint32
+
+ //nolint:structcheck // structcheck thinks fields are unused, but they are used to pass data to OS
+ explicitAccess struct {
+ accessPermissions accessMask
+ accessMode accessMode
+ inheritance inheritMode
+ trustee trustee
+ }
+
+ //nolint:structcheck,unused // structcheck thinks fields are unused, but they are used to pass data to OS
+ trustee struct {
+ multipleTrustee *trustee
+ multipleTrusteeOperation int32
+ trusteeForm trusteeForm
+ trusteeType trusteeType
+ name uintptr
+ }
+)
+
+const (
+ accessMaskDesiredPermission accessMask = 1 << 31 // GENERIC_READ
+
+ accessModeGrant accessMode = 1
+
+ desiredAccessReadControl desiredAccess = 0x20000
+ desiredAccessWriteDac desiredAccess = 0x40000
+
+ //cspell:disable-next-line
+ gvmga = "GrantVmGroupAccess:"
+
+ inheritModeNoInheritance inheritMode = 0x0
+ inheritModeSubContainersAndObjectsInherit inheritMode = 0x3
+
+ objectTypeFileObject objectType = 0x1
+
+ securityInformationDACL securityInformation = 0x4
+
+ shareModeRead shareMode = 0x1
+ shareModeWrite shareMode = 0x2
+
+ sidVMGroup = "S-1-5-83-0"
+
+ trusteeFormIsSID trusteeForm = 0
+
+ trusteeTypeWellKnownGroup trusteeType = 5
+)
+
+// GrantVMGroupAccess sets the DACL for a specified file or directory to
+// include Grant ACE entries for the VM Group SID. This is a golang re-
+// implementation of the same function in vmcompute, just not exported in
+// RS5. Which kind of sucks. Sucks a lot :/
+//
+//revive:disable-next-line:var-naming VM, not Vm
+func GrantVmGroupAccess(name string) error {
+ // Stat (to determine if `name` is a directory).
+ s, err := os.Stat(name)
+ if err != nil {
+ return fmt.Errorf("%s os.Stat %s: %w", gvmga, name, err)
+ }
+
+ // Get a handle to the file/directory. Must defer Close on success.
+ fd, err := createFile(name, s.IsDir())
+ if err != nil {
+ return err // Already wrapped
+ }
+ defer syscall.CloseHandle(fd) //nolint:errcheck
+
+ // Get the current DACL and Security Descriptor. Must defer LocalFree on success.
+ ot := objectTypeFileObject
+ si := securityInformationDACL
+ sd := uintptr(0)
+ origDACL := uintptr(0)
+ if err := getSecurityInfo(fd, uint32(ot), uint32(si), nil, nil, &origDACL, nil, &sd); err != nil {
+ return fmt.Errorf("%s GetSecurityInfo %s: %w", gvmga, name, err)
+ }
+ defer syscall.LocalFree((syscall.Handle)(unsafe.Pointer(sd))) //nolint:errcheck
+
+ // Generate a new DACL which is the current DACL with the required ACEs added.
+ // Must defer LocalFree on success.
+ newDACL, err := generateDACLWithAcesAdded(name, s.IsDir(), origDACL)
+ if err != nil {
+ return err // Already wrapped
+ }
+ defer syscall.LocalFree((syscall.Handle)(unsafe.Pointer(newDACL))) //nolint:errcheck
+
+ // And finally use SetSecurityInfo to apply the updated DACL.
+ if err := setSecurityInfo(fd, uint32(ot), uint32(si), uintptr(0), uintptr(0), newDACL, uintptr(0)); err != nil {
+ return fmt.Errorf("%s SetSecurityInfo %s: %w", gvmga, name, err)
+ }
+
+ return nil
+}
+
+// createFile is a helper function to call [Nt]CreateFile to get a handle to
+// the file or directory.
+func createFile(name string, isDir bool) (syscall.Handle, error) {
+ namep, err := syscall.UTF16FromString(name)
+ if err != nil {
+ return syscall.InvalidHandle, fmt.Errorf("could not convert name to UTF-16: %w", err)
+ }
+ da := uint32(desiredAccessReadControl | desiredAccessWriteDac)
+ sm := uint32(shareModeRead | shareModeWrite)
+ fa := uint32(syscall.FILE_ATTRIBUTE_NORMAL)
+ if isDir {
+ fa |= syscall.FILE_FLAG_BACKUP_SEMANTICS
+ }
+ fd, err := syscall.CreateFile(&namep[0], da, sm, nil, syscall.OPEN_EXISTING, fa, 0)
+ if err != nil {
+ return syscall.InvalidHandle, fmt.Errorf("%s syscall.CreateFile %s: %w", gvmga, name, err)
+ }
+ return fd, nil
+}
+
+// generateDACLWithAcesAdded generates a new DACL with the two needed ACEs added.
+// The caller is responsible for LocalFree of the returned DACL on success.
+func generateDACLWithAcesAdded(name string, isDir bool, origDACL uintptr) (uintptr, error) {
+ // Generate pointers to the SIDs based on the string SIDs
+ sid, err := syscall.StringToSid(sidVMGroup)
+ if err != nil {
+ return 0, fmt.Errorf("%s syscall.StringToSid %s %s: %w", gvmga, name, sidVMGroup, err)
+ }
+
+ inheritance := inheritModeNoInheritance
+ if isDir {
+ inheritance = inheritModeSubContainersAndObjectsInherit
+ }
+
+ eaArray := []explicitAccess{
+ {
+ accessPermissions: accessMaskDesiredPermission,
+ accessMode: accessModeGrant,
+ inheritance: inheritance,
+ trustee: trustee{
+ trusteeForm: trusteeFormIsSID,
+ trusteeType: trusteeTypeWellKnownGroup,
+ name: uintptr(unsafe.Pointer(sid)),
+ },
+ },
+ }
+
+ modifiedDACL := uintptr(0)
+ if err := setEntriesInAcl(uintptr(uint32(1)), uintptr(unsafe.Pointer(&eaArray[0])), origDACL, &modifiedDACL); err != nil {
+ return 0, fmt.Errorf("%s SetEntriesInAcl %s: %w", gvmga, name, err)
+ }
+
+ return modifiedDACL, nil
+}
diff --git a/vendor/github.com/Microsoft/go-winio/pkg/security/syscall_windows.go b/vendor/github.com/Microsoft/go-winio/pkg/security/syscall_windows.go
new file mode 100644
index 000000000..71326e4e4
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/pkg/security/syscall_windows.go
@@ -0,0 +1,7 @@
+package security
+
+//go:generate go run github.com/Microsoft/go-winio/tools/mkwinsyscall -output zsyscall_windows.go syscall_windows.go
+
+//sys getSecurityInfo(handle syscall.Handle, objectType uint32, si uint32, ppsidOwner **uintptr, ppsidGroup **uintptr, ppDacl *uintptr, ppSacl *uintptr, ppSecurityDescriptor *uintptr) (win32err error) = advapi32.GetSecurityInfo
+//sys setSecurityInfo(handle syscall.Handle, objectType uint32, si uint32, psidOwner uintptr, psidGroup uintptr, pDacl uintptr, pSacl uintptr) (win32err error) = advapi32.SetSecurityInfo
+//sys setEntriesInAcl(count uintptr, pListOfEEs uintptr, oldAcl uintptr, newAcl *uintptr) (win32err error) = advapi32.SetEntriesInAclW
diff --git a/vendor/github.com/Microsoft/go-winio/pkg/security/zsyscall_windows.go b/vendor/github.com/Microsoft/go-winio/pkg/security/zsyscall_windows.go
new file mode 100644
index 000000000..26c986b88
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/pkg/security/zsyscall_windows.go
@@ -0,0 +1,72 @@
+//go:build windows
+
+// Code generated by 'go generate' using "github.com/Microsoft/go-winio/tools/mkwinsyscall"; DO NOT EDIT.
+
+package security
+
+import (
+ "syscall"
+ "unsafe"
+
+ "golang.org/x/sys/windows"
+)
+
+var _ unsafe.Pointer
+
+// Do the interface allocations only once for common
+// Errno values.
+const (
+ errnoERROR_IO_PENDING = 997
+)
+
+var (
+ errERROR_IO_PENDING error = syscall.Errno(errnoERROR_IO_PENDING)
+ errERROR_EINVAL error = syscall.EINVAL
+)
+
+// errnoErr returns common boxed Errno values, to prevent
+// allocations at runtime.
+func errnoErr(e syscall.Errno) error {
+ switch e {
+ case 0:
+ return errERROR_EINVAL
+ case errnoERROR_IO_PENDING:
+ return errERROR_IO_PENDING
+ }
+ // TODO: add more here, after collecting data on the common
+ // error values seen on Windows. (perhaps when running
+ // all.bat?)
+ return e
+}
+
+var (
+ modadvapi32 = windows.NewLazySystemDLL("advapi32.dll")
+
+ procGetSecurityInfo = modadvapi32.NewProc("GetSecurityInfo")
+ procSetEntriesInAclW = modadvapi32.NewProc("SetEntriesInAclW")
+ procSetSecurityInfo = modadvapi32.NewProc("SetSecurityInfo")
+)
+
+func getSecurityInfo(handle syscall.Handle, objectType uint32, si uint32, ppsidOwner **uintptr, ppsidGroup **uintptr, ppDacl *uintptr, ppSacl *uintptr, ppSecurityDescriptor *uintptr) (win32err error) {
+ r0, _, _ := syscall.Syscall9(procGetSecurityInfo.Addr(), 8, uintptr(handle), uintptr(objectType), uintptr(si), uintptr(unsafe.Pointer(ppsidOwner)), uintptr(unsafe.Pointer(ppsidGroup)), uintptr(unsafe.Pointer(ppDacl)), uintptr(unsafe.Pointer(ppSacl)), uintptr(unsafe.Pointer(ppSecurityDescriptor)), 0)
+ if r0 != 0 {
+ win32err = syscall.Errno(r0)
+ }
+ return
+}
+
+func setEntriesInAcl(count uintptr, pListOfEEs uintptr, oldAcl uintptr, newAcl *uintptr) (win32err error) {
+ r0, _, _ := syscall.Syscall6(procSetEntriesInAclW.Addr(), 4, uintptr(count), uintptr(pListOfEEs), uintptr(oldAcl), uintptr(unsafe.Pointer(newAcl)), 0, 0)
+ if r0 != 0 {
+ win32err = syscall.Errno(r0)
+ }
+ return
+}
+
+func setSecurityInfo(handle syscall.Handle, objectType uint32, si uint32, psidOwner uintptr, psidGroup uintptr, pDacl uintptr, pSacl uintptr) (win32err error) {
+ r0, _, _ := syscall.Syscall9(procSetSecurityInfo.Addr(), 7, uintptr(handle), uintptr(objectType), uintptr(si), uintptr(psidOwner), uintptr(psidGroup), uintptr(pDacl), uintptr(pSacl), 0, 0)
+ if r0 != 0 {
+ win32err = syscall.Errno(r0)
+ }
+ return
+}
diff --git a/vendor/github.com/Microsoft/go-winio/privilege.go b/vendor/github.com/Microsoft/go-winio/privilege.go
new file mode 100644
index 000000000..0ff9dac90
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/privilege.go
@@ -0,0 +1,197 @@
+//go:build windows
+// +build windows
+
+package winio
+
+import (
+ "bytes"
+ "encoding/binary"
+ "fmt"
+ "runtime"
+ "sync"
+ "syscall"
+ "unicode/utf16"
+
+ "golang.org/x/sys/windows"
+)
+
+//sys adjustTokenPrivileges(token windows.Token, releaseAll bool, input *byte, outputSize uint32, output *byte, requiredSize *uint32) (success bool, err error) [true] = advapi32.AdjustTokenPrivileges
+//sys impersonateSelf(level uint32) (err error) = advapi32.ImpersonateSelf
+//sys revertToSelf() (err error) = advapi32.RevertToSelf
+//sys openThreadToken(thread syscall.Handle, accessMask uint32, openAsSelf bool, token *windows.Token) (err error) = advapi32.OpenThreadToken
+//sys getCurrentThread() (h syscall.Handle) = GetCurrentThread
+//sys lookupPrivilegeValue(systemName string, name string, luid *uint64) (err error) = advapi32.LookupPrivilegeValueW
+//sys lookupPrivilegeName(systemName string, luid *uint64, buffer *uint16, size *uint32) (err error) = advapi32.LookupPrivilegeNameW
+//sys lookupPrivilegeDisplayName(systemName string, name *uint16, buffer *uint16, size *uint32, languageId *uint32) (err error) = advapi32.LookupPrivilegeDisplayNameW
+
+const (
+ //revive:disable-next-line:var-naming ALL_CAPS
+ SE_PRIVILEGE_ENABLED = windows.SE_PRIVILEGE_ENABLED
+
+ //revive:disable-next-line:var-naming ALL_CAPS
+ ERROR_NOT_ALL_ASSIGNED syscall.Errno = windows.ERROR_NOT_ALL_ASSIGNED
+
+ SeBackupPrivilege = "SeBackupPrivilege"
+ SeRestorePrivilege = "SeRestorePrivilege"
+ SeSecurityPrivilege = "SeSecurityPrivilege"
+)
+
+var (
+ privNames = make(map[string]uint64)
+ privNameMutex sync.Mutex
+)
+
+// PrivilegeError represents an error enabling privileges.
+type PrivilegeError struct {
+ privileges []uint64
+}
+
+func (e *PrivilegeError) Error() string {
+ s := "Could not enable privilege "
+ if len(e.privileges) > 1 {
+ s = "Could not enable privileges "
+ }
+ for i, p := range e.privileges {
+ if i != 0 {
+ s += ", "
+ }
+ s += `"`
+ s += getPrivilegeName(p)
+ s += `"`
+ }
+ return s
+}
+
+// RunWithPrivilege enables a single privilege for a function call.
+func RunWithPrivilege(name string, fn func() error) error {
+ return RunWithPrivileges([]string{name}, fn)
+}
+
+// RunWithPrivileges enables privileges for a function call.
+func RunWithPrivileges(names []string, fn func() error) error {
+ privileges, err := mapPrivileges(names)
+ if err != nil {
+ return err
+ }
+ runtime.LockOSThread()
+ defer runtime.UnlockOSThread()
+ token, err := newThreadToken()
+ if err != nil {
+ return err
+ }
+ defer releaseThreadToken(token)
+ err = adjustPrivileges(token, privileges, SE_PRIVILEGE_ENABLED)
+ if err != nil {
+ return err
+ }
+ return fn()
+}
+
+func mapPrivileges(names []string) ([]uint64, error) {
+ privileges := make([]uint64, 0, len(names))
+ privNameMutex.Lock()
+ defer privNameMutex.Unlock()
+ for _, name := range names {
+ p, ok := privNames[name]
+ if !ok {
+ err := lookupPrivilegeValue("", name, &p)
+ if err != nil {
+ return nil, err
+ }
+ privNames[name] = p
+ }
+ privileges = append(privileges, p)
+ }
+ return privileges, nil
+}
+
+// EnableProcessPrivileges enables privileges globally for the process.
+func EnableProcessPrivileges(names []string) error {
+ return enableDisableProcessPrivilege(names, SE_PRIVILEGE_ENABLED)
+}
+
+// DisableProcessPrivileges disables privileges globally for the process.
+func DisableProcessPrivileges(names []string) error {
+ return enableDisableProcessPrivilege(names, 0)
+}
+
+func enableDisableProcessPrivilege(names []string, action uint32) error {
+ privileges, err := mapPrivileges(names)
+ if err != nil {
+ return err
+ }
+
+ p := windows.CurrentProcess()
+ var token windows.Token
+ err = windows.OpenProcessToken(p, windows.TOKEN_ADJUST_PRIVILEGES|windows.TOKEN_QUERY, &token)
+ if err != nil {
+ return err
+ }
+
+ defer token.Close()
+ return adjustPrivileges(token, privileges, action)
+}
+
+func adjustPrivileges(token windows.Token, privileges []uint64, action uint32) error {
+ var b bytes.Buffer
+ _ = binary.Write(&b, binary.LittleEndian, uint32(len(privileges)))
+ for _, p := range privileges {
+ _ = binary.Write(&b, binary.LittleEndian, p)
+ _ = binary.Write(&b, binary.LittleEndian, action)
+ }
+ prevState := make([]byte, b.Len())
+ reqSize := uint32(0)
+ success, err := adjustTokenPrivileges(token, false, &b.Bytes()[0], uint32(len(prevState)), &prevState[0], &reqSize)
+ if !success {
+ return err
+ }
+ if err == ERROR_NOT_ALL_ASSIGNED { //nolint:errorlint // err is Errno
+ return &PrivilegeError{privileges}
+ }
+ return nil
+}
+
+func getPrivilegeName(luid uint64) string {
+ var nameBuffer [256]uint16
+ bufSize := uint32(len(nameBuffer))
+ err := lookupPrivilegeName("", &luid, &nameBuffer[0], &bufSize)
+ if err != nil {
+ return fmt.Sprintf("<unknown privilege %d>", luid)
+ }
+
+ var displayNameBuffer [256]uint16
+ displayBufSize := uint32(len(displayNameBuffer))
+ var langID uint32
+ err = lookupPrivilegeDisplayName("", &nameBuffer[0], &displayNameBuffer[0], &displayBufSize, &langID)
+ if err != nil {
+ return fmt.Sprintf("<unknown privilege %s>", string(utf16.Decode(nameBuffer[:bufSize])))
+ }
+
+ return string(utf16.Decode(displayNameBuffer[:displayBufSize]))
+}
+
+func newThreadToken() (windows.Token, error) {
+ err := impersonateSelf(windows.SecurityImpersonation)
+ if err != nil {
+ return 0, err
+ }
+
+ var token windows.Token
+ err = openThreadToken(getCurrentThread(), syscall.TOKEN_ADJUST_PRIVILEGES|syscall.TOKEN_QUERY, false, &token)
+ if err != nil {
+ rerr := revertToSelf()
+ if rerr != nil {
+ panic(rerr)
+ }
+ return 0, err
+ }
+ return token, nil
+}
+
+func releaseThreadToken(h windows.Token) {
+ err := revertToSelf()
+ if err != nil {
+ panic(err)
+ }
+ h.Close()
+}
diff --git a/vendor/github.com/Microsoft/go-winio/reparse.go b/vendor/github.com/Microsoft/go-winio/reparse.go
new file mode 100644
index 000000000..67d1a104a
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/reparse.go
@@ -0,0 +1,131 @@
+//go:build windows
+// +build windows
+
+package winio
+
+import (
+ "bytes"
+ "encoding/binary"
+ "fmt"
+ "strings"
+ "unicode/utf16"
+ "unsafe"
+)
+
+const (
+ reparseTagMountPoint = 0xA0000003
+ reparseTagSymlink = 0xA000000C
+)
+
+type reparseDataBuffer struct {
+ ReparseTag uint32
+ ReparseDataLength uint16
+ Reserved uint16
+ SubstituteNameOffset uint16
+ SubstituteNameLength uint16
+ PrintNameOffset uint16
+ PrintNameLength uint16
+}
+
+// ReparsePoint describes a Win32 symlink or mount point.
+type ReparsePoint struct {
+ Target string
+ IsMountPoint bool
+}
+
+// UnsupportedReparsePointError is returned when trying to decode a non-symlink or
+// mount point reparse point.
+type UnsupportedReparsePointError struct {
+ Tag uint32
+}
+
+func (e *UnsupportedReparsePointError) Error() string {
+ return fmt.Sprintf("unsupported reparse point %x", e.Tag)
+}
+
+// DecodeReparsePoint decodes a Win32 REPARSE_DATA_BUFFER structure containing either a symlink
+// or a mount point.
+func DecodeReparsePoint(b []byte) (*ReparsePoint, error) {
+ tag := binary.LittleEndian.Uint32(b[0:4])
+ return DecodeReparsePointData(tag, b[8:])
+}
+
+func DecodeReparsePointData(tag uint32, b []byte) (*ReparsePoint, error) {
+ isMountPoint := false
+ switch tag {
+ case reparseTagMountPoint:
+ isMountPoint = true
+ case reparseTagSymlink:
+ default:
+ return nil, &UnsupportedReparsePointError{tag}
+ }
+ nameOffset := 8 + binary.LittleEndian.Uint16(b[4:6])
+ if !isMountPoint {
+ nameOffset += 4
+ }
+ nameLength := binary.LittleEndian.Uint16(b[6:8])
+ name := make([]uint16, nameLength/2)
+ err := binary.Read(bytes.NewReader(b[nameOffset:nameOffset+nameLength]), binary.LittleEndian, &name)
+ if err != nil {
+ return nil, err
+ }
+ return &ReparsePoint{string(utf16.Decode(name)), isMountPoint}, nil
+}
+
+func isDriveLetter(c byte) bool {
+ return (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')
+}
+
+// EncodeReparsePoint encodes a Win32 REPARSE_DATA_BUFFER structure describing a symlink or
+// mount point.
+func EncodeReparsePoint(rp *ReparsePoint) []byte {
+ // Generate an NT path and determine if this is a relative path.
+ var ntTarget string
+ relative := false
+ if strings.HasPrefix(rp.Target, `\\?\`) {
+ ntTarget = `\??\` + rp.Target[4:]
+ } else if strings.HasPrefix(rp.Target, `\\`) {
+ ntTarget = `\??\UNC\` + rp.Target[2:]
+ } else if len(rp.Target) >= 2 && isDriveLetter(rp.Target[0]) && rp.Target[1] == ':' {
+ ntTarget = `\??\` + rp.Target
+ } else {
+ ntTarget = rp.Target
+ relative = true
+ }
+
+ // The paths must be NUL-terminated even though they are counted strings.
+ target16 := utf16.Encode([]rune(rp.Target + "\x00"))
+ ntTarget16 := utf16.Encode([]rune(ntTarget + "\x00"))
+
+ size := int(unsafe.Sizeof(reparseDataBuffer{})) - 8
+ size += len(ntTarget16)*2 + len(target16)*2
+
+ tag := uint32(reparseTagMountPoint)
+ if !rp.IsMountPoint {
+ tag = reparseTagSymlink
+ size += 4 // Add room for symlink flags
+ }
+
+ data := reparseDataBuffer{
+ ReparseTag: tag,
+ ReparseDataLength: uint16(size),
+ SubstituteNameOffset: 0,
+ SubstituteNameLength: uint16((len(ntTarget16) - 1) * 2),
+ PrintNameOffset: uint16(len(ntTarget16) * 2),
+ PrintNameLength: uint16((len(target16) - 1) * 2),
+ }
+
+ var b bytes.Buffer
+ _ = binary.Write(&b, binary.LittleEndian, &data)
+ if !rp.IsMountPoint {
+ flags := uint32(0)
+ if relative {
+ flags |= 1
+ }
+ _ = binary.Write(&b, binary.LittleEndian, flags)
+ }
+
+ _ = binary.Write(&b, binary.LittleEndian, ntTarget16)
+ _ = binary.Write(&b, binary.LittleEndian, target16)
+ return b.Bytes()
+}
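The decode path above seeks past the header fields (and, for symlinks, a 4-byte flags word) to the UTF-16 print name. The following is a portable sketch of that layout, not part of the package: the tag constants mirror the Windows `IO_REPARSE_TAG_*` values, `buildSymlinkData` and `decodePrintName` are illustrative names, and the buffer modeled here is the portion that follows the 8-byte `REPARSE_DATA_BUFFER` header, matching what `DecodeReparsePointData` receives.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"unicode/utf16"
)

// Windows reparse tag values (IO_REPARSE_TAG_MOUNT_POINT / _SYMLINK).
const (
	tagMountPoint = 0xA0000003
	tagSymlink    = 0xA000000C
)

// buildSymlinkData builds the data that follows the 8-byte reparse
// header for a symlink whose substitute and print names are both target:
// four uint16 offset/length fields, a flags word, then the path buffer.
func buildSymlinkData(target string) []byte {
	name := utf16.Encode([]rune(target))
	nb := make([]byte, len(name)*2)
	for i, c := range name {
		binary.LittleEndian.PutUint16(nb[i*2:], c)
	}
	b := make([]byte, 12)
	binary.LittleEndian.PutUint16(b[0:2], 0)               // SubstituteNameOffset
	binary.LittleEndian.PutUint16(b[2:4], uint16(len(nb))) // SubstituteNameLength
	binary.LittleEndian.PutUint16(b[4:6], uint16(len(nb))) // PrintNameOffset
	binary.LittleEndian.PutUint16(b[6:8], uint16(len(nb))) // PrintNameLength
	binary.LittleEndian.PutUint32(b[8:12], 1)              // SYMLINK_FLAG_RELATIVE
	b = append(b, nb...)                                   // substitute name
	b = append(b, nb...)                                   // print name
	return b
}

// decodePrintName mirrors DecodeReparsePointData: skip the four uint16
// header fields (8 bytes), plus the flags word for symlinks, then read
// PrintNameLength bytes at PrintNameOffset and decode them as UTF-16.
func decodePrintName(tag uint32, b []byte) string {
	off := 8 + binary.LittleEndian.Uint16(b[4:6])
	if tag == tagSymlink {
		off += 4 // symlink buffers carry a flags field before the paths
	}
	n := binary.LittleEndian.Uint16(b[6:8])
	u := make([]uint16, n/2)
	for i := range u {
		u[i] = binary.LittleEndian.Uint16(b[int(off)+i*2:])
	}
	return string(utf16.Decode(u))
}

func main() {
	data := buildSymlinkData(`..\target`)
	fmt.Println(decodePrintName(tagSymlink, data)) // ..\target
}
```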
diff --git a/vendor/github.com/Microsoft/go-winio/sd.go b/vendor/github.com/Microsoft/go-winio/sd.go
new file mode 100644
index 000000000..5550ef6b6
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/sd.go
@@ -0,0 +1,144 @@
+//go:build windows
+// +build windows
+
+package winio
+
+import (
+ "errors"
+ "syscall"
+ "unsafe"
+
+ "golang.org/x/sys/windows"
+)
+
+//sys lookupAccountName(systemName *uint16, accountName string, sid *byte, sidSize *uint32, refDomain *uint16, refDomainSize *uint32, sidNameUse *uint32) (err error) = advapi32.LookupAccountNameW
+//sys lookupAccountSid(systemName *uint16, sid *byte, name *uint16, nameSize *uint32, refDomain *uint16, refDomainSize *uint32, sidNameUse *uint32) (err error) = advapi32.LookupAccountSidW
+//sys convertSidToStringSid(sid *byte, str **uint16) (err error) = advapi32.ConvertSidToStringSidW
+//sys convertStringSidToSid(str *uint16, sid **byte) (err error) = advapi32.ConvertStringSidToSidW
+//sys convertStringSecurityDescriptorToSecurityDescriptor(str string, revision uint32, sd *uintptr, size *uint32) (err error) = advapi32.ConvertStringSecurityDescriptorToSecurityDescriptorW
+//sys convertSecurityDescriptorToStringSecurityDescriptor(sd *byte, revision uint32, secInfo uint32, sddl **uint16, sddlSize *uint32) (err error) = advapi32.ConvertSecurityDescriptorToStringSecurityDescriptorW
+//sys localFree(mem uintptr) = LocalFree
+//sys getSecurityDescriptorLength(sd uintptr) (len uint32) = advapi32.GetSecurityDescriptorLength
+
+type AccountLookupError struct {
+ Name string
+ Err error
+}
+
+func (e *AccountLookupError) Error() string {
+ if e.Name == "" {
+ return "lookup account: empty account name specified"
+ }
+ var s string
+ switch {
+ case errors.Is(e.Err, windows.ERROR_INVALID_SID):
+ s = "the security ID structure is invalid"
+ case errors.Is(e.Err, windows.ERROR_NONE_MAPPED):
+ s = "not found"
+ default:
+ s = e.Err.Error()
+ }
+ return "lookup account " + e.Name + ": " + s
+}
+
+func (e *AccountLookupError) Unwrap() error { return e.Err }
+
+type SddlConversionError struct {
+ Sddl string
+ Err error
+}
+
+func (e *SddlConversionError) Error() string {
+ return "convert " + e.Sddl + ": " + e.Err.Error()
+}
+
+func (e *SddlConversionError) Unwrap() error { return e.Err }
+
+// LookupSidByName looks up the SID of an account by name
+//
+//revive:disable-next-line:var-naming SID, not Sid
+func LookupSidByName(name string) (sid string, err error) {
+ if name == "" {
+ return "", &AccountLookupError{name, windows.ERROR_NONE_MAPPED}
+ }
+
+ var sidSize, sidNameUse, refDomainSize uint32
+ err = lookupAccountName(nil, name, nil, &sidSize, nil, &refDomainSize, &sidNameUse)
+ if err != nil && err != syscall.ERROR_INSUFFICIENT_BUFFER { //nolint:errorlint // err is Errno
+ return "", &AccountLookupError{name, err}
+ }
+ sidBuffer := make([]byte, sidSize)
+ refDomainBuffer := make([]uint16, refDomainSize)
+ err = lookupAccountName(nil, name, &sidBuffer[0], &sidSize, &refDomainBuffer[0], &refDomainSize, &sidNameUse)
+ if err != nil {
+ return "", &AccountLookupError{name, err}
+ }
+ var strBuffer *uint16
+ err = convertSidToStringSid(&sidBuffer[0], &strBuffer)
+ if err != nil {
+ return "", &AccountLookupError{name, err}
+ }
+ sid = syscall.UTF16ToString((*[0xffff]uint16)(unsafe.Pointer(strBuffer))[:])
+ localFree(uintptr(unsafe.Pointer(strBuffer)))
+ return sid, nil
+}
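`LookupSidByName` uses the classic Win32 two-call sizing idiom: the first `lookupAccountName` call is expected to fail with `ERROR_INSUFFICIENT_BUFFER` while reporting the required sizes, then buffers are allocated and the call is repeated. A portable sketch of that idiom, with `fillName` as a hypothetical API standing in for the Win32 call and a plain error value standing in for `ERROR_INSUFFICIENT_BUFFER`:

```go
package main

import "fmt"

// errInsufficientBuffer stands in for ERROR_INSUFFICIENT_BUFFER.
var errInsufficientBuffer = fmt.Errorf("insufficient buffer")

// fillName is a hypothetical API with Win32-style sizing semantics:
// too-small buffers fail but report the required size via *size.
func fillName(buf []byte, size *uint32) error {
	const want = "S-1-5-18"
	if uint32(len(buf)) < uint32(len(want)) {
		*size = uint32(len(want))
		return errInsufficientBuffer
	}
	copy(buf, want)
	*size = uint32(len(want))
	return nil
}

// lookup probes for the size, allocates, then calls again — the same
// shape as LookupSidByName's two lookupAccountName calls.
func lookup() (string, error) {
	var size uint32
	if err := fillName(nil, &size); err != nil && err != errInsufficientBuffer {
		return "", err
	}
	buf := make([]byte, size)
	if err := fillName(buf, &size); err != nil {
		return "", err
	}
	return string(buf[:size]), nil
}

func main() {
	s, _ := lookup()
	fmt.Println(s) // S-1-5-18
}
```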
+
+// LookupNameBySid looks up the name of an account by SID
+//
+//revive:disable-next-line:var-naming SID, not Sid
+func LookupNameBySid(sid string) (name string, err error) {
+ if sid == "" {
+ return "", &AccountLookupError{sid, windows.ERROR_NONE_MAPPED}
+ }
+
+ sidBuffer, err := windows.UTF16PtrFromString(sid)
+ if err != nil {
+ return "", &AccountLookupError{sid, err}
+ }
+
+ var sidPtr *byte
+ if err = convertStringSidToSid(sidBuffer, &sidPtr); err != nil {
+ return "", &AccountLookupError{sid, err}
+ }
+ defer localFree(uintptr(unsafe.Pointer(sidPtr)))
+
+ var nameSize, refDomainSize, sidNameUse uint32
+ err = lookupAccountSid(nil, sidPtr, nil, &nameSize, nil, &refDomainSize, &sidNameUse)
+ if err != nil && err != windows.ERROR_INSUFFICIENT_BUFFER { //nolint:errorlint // err is Errno
+ return "", &AccountLookupError{sid, err}
+ }
+
+ nameBuffer := make([]uint16, nameSize)
+ refDomainBuffer := make([]uint16, refDomainSize)
+ err = lookupAccountSid(nil, sidPtr, &nameBuffer[0], &nameSize, &refDomainBuffer[0], &refDomainSize, &sidNameUse)
+ if err != nil {
+ return "", &AccountLookupError{sid, err}
+ }
+
+ name = windows.UTF16ToString(nameBuffer)
+ return name, nil
+}
+
+func SddlToSecurityDescriptor(sddl string) ([]byte, error) {
+ var sdBuffer uintptr
+ err := convertStringSecurityDescriptorToSecurityDescriptor(sddl, 1, &sdBuffer, nil)
+ if err != nil {
+ return nil, &SddlConversionError{sddl, err}
+ }
+ defer localFree(sdBuffer)
+ sd := make([]byte, getSecurityDescriptorLength(sdBuffer))
+ copy(sd, (*[0xffff]byte)(unsafe.Pointer(sdBuffer))[:len(sd)])
+ return sd, nil
+}
+
+func SecurityDescriptorToSddl(sd []byte) (string, error) {
+ var sddl *uint16
+ // The returned string length seems to include an arbitrary number of terminating NULs.
+ // Don't use it.
+ err := convertSecurityDescriptorToStringSecurityDescriptor(&sd[0], 1, 0xff, &sddl, nil)
+ if err != nil {
+ return "", err
+ }
+ defer localFree(uintptr(unsafe.Pointer(sddl)))
+ return syscall.UTF16ToString((*[0xffff]uint16)(unsafe.Pointer(sddl))[:]), nil
+}
diff --git a/vendor/github.com/Microsoft/go-winio/syscall.go b/vendor/github.com/Microsoft/go-winio/syscall.go
new file mode 100644
index 000000000..a6ca111b3
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/syscall.go
@@ -0,0 +1,5 @@
+//go:build windows
+
+package winio
+
+//go:generate go run github.com/Microsoft/go-winio/tools/mkwinsyscall -output zsyscall_windows.go ./*.go
diff --git a/vendor/github.com/Microsoft/go-winio/tools.go b/vendor/github.com/Microsoft/go-winio/tools.go
new file mode 100644
index 000000000..2aa045843
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/tools.go
@@ -0,0 +1,5 @@
+//go:build tools
+
+package winio
+
+import _ "golang.org/x/tools/cmd/stringer"
diff --git a/vendor/github.com/Microsoft/go-winio/vhd/vhd.go b/vendor/github.com/Microsoft/go-winio/vhd/vhd.go
new file mode 100644
index 000000000..b54cad112
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/vhd/vhd.go
@@ -0,0 +1,377 @@
+//go:build windows
+// +build windows
+
+package vhd
+
+import (
+ "fmt"
+ "syscall"
+
+ "github.com/Microsoft/go-winio/pkg/guid"
+ "golang.org/x/sys/windows"
+)
+
+//go:generate go run github.com/Microsoft/go-winio/tools/mkwinsyscall -output zvhd_windows.go vhd.go
+
+//sys createVirtualDisk(virtualStorageType *VirtualStorageType, path string, virtualDiskAccessMask uint32, securityDescriptor *uintptr, createVirtualDiskFlags uint32, providerSpecificFlags uint32, parameters *CreateVirtualDiskParameters, overlapped *syscall.Overlapped, handle *syscall.Handle) (win32err error) = virtdisk.CreateVirtualDisk
+//sys openVirtualDisk(virtualStorageType *VirtualStorageType, path string, virtualDiskAccessMask uint32, openVirtualDiskFlags uint32, parameters *openVirtualDiskParameters, handle *syscall.Handle) (win32err error) = virtdisk.OpenVirtualDisk
+//sys attachVirtualDisk(handle syscall.Handle, securityDescriptor *uintptr, attachVirtualDiskFlag uint32, providerSpecificFlags uint32, parameters *AttachVirtualDiskParameters, overlapped *syscall.Overlapped) (win32err error) = virtdisk.AttachVirtualDisk
+//sys detachVirtualDisk(handle syscall.Handle, detachVirtualDiskFlags uint32, providerSpecificFlags uint32) (win32err error) = virtdisk.DetachVirtualDisk
+//sys getVirtualDiskPhysicalPath(handle syscall.Handle, diskPathSizeInBytes *uint32, buffer *uint16) (win32err error) = virtdisk.GetVirtualDiskPhysicalPath
+
+type (
+ CreateVirtualDiskFlag uint32
+ VirtualDiskFlag uint32
+ AttachVirtualDiskFlag uint32
+ DetachVirtualDiskFlag uint32
+ VirtualDiskAccessMask uint32
+)
+
+type VirtualStorageType struct {
+ DeviceID uint32
+ VendorID guid.GUID
+}
+
+type CreateVersion2 struct {
+ UniqueID guid.GUID
+ MaximumSize uint64
+ BlockSizeInBytes uint32
+ SectorSizeInBytes uint32
+ PhysicalSectorSizeInByte uint32
+ ParentPath *uint16 // string
+ SourcePath *uint16 // string
+ OpenFlags uint32
+ ParentVirtualStorageType VirtualStorageType
+ SourceVirtualStorageType VirtualStorageType
+ ResiliencyGUID guid.GUID
+}
+
+type CreateVirtualDiskParameters struct {
+ Version uint32 // Must always be set to 2
+ Version2 CreateVersion2
+}
+
+type OpenVersion2 struct {
+ GetInfoOnly bool
+ ReadOnly bool
+ ResiliencyGUID guid.GUID
+}
+
+type OpenVirtualDiskParameters struct {
+ Version uint32 // Must always be set to 2
+ Version2 OpenVersion2
+}
+
+// The higher-level `OpenVersion2` struct uses `bool`s for `GetInfoOnly` and `ReadOnly` for ease of use. However,
+// the internal Windows structure uses `BOOL`s (i.e. int32s) for these fields. `openVersion2` is used for translating
+// `OpenVersion2` fields to the correct internal Windows field types in the `Open____` methods.
+type openVersion2 struct {
+ getInfoOnly int32
+ readOnly int32
+ resiliencyGUID guid.GUID
+}
+
+type openVirtualDiskParameters struct {
+ version uint32
+ version2 openVersion2
+}
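The bool-to-`BOOL` translation the comment above describes is just a 1/0 mapping, performed field by field before the struct crosses the syscall boundary. A minimal sketch (the function name is illustrative, not part of the package):

```go
package main

import "fmt"

// boolToWin32Bool maps a Go bool onto a Win32 BOOL (int32), the same
// translation OpenVirtualDiskWithParameters performs for the
// GetInfoOnly and ReadOnly fields of OpenVersion2.
func boolToWin32Bool(b bool) int32 {
	if b {
		return 1
	}
	return 0
}

func main() {
	fmt.Println(boolToWin32Bool(true), boolToWin32Bool(false)) // 1 0
}
```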
+
+type AttachVersion2 struct {
+ RestrictedOffset uint64
+ RestrictedLength uint64
+}
+
+type AttachVirtualDiskParameters struct {
+ Version uint32
+ Version2 AttachVersion2
+}
+
+const (
+ //revive:disable-next-line:var-naming ALL_CAPS
+ VIRTUAL_STORAGE_TYPE_DEVICE_VHDX = 0x3
+
+ // Access Mask for opening a VHD.
+ VirtualDiskAccessNone VirtualDiskAccessMask = 0x00000000
+ VirtualDiskAccessAttachRO VirtualDiskAccessMask = 0x00010000
+ VirtualDiskAccessAttachRW VirtualDiskAccessMask = 0x00020000
+ VirtualDiskAccessDetach VirtualDiskAccessMask = 0x00040000
+ VirtualDiskAccessGetInfo VirtualDiskAccessMask = 0x00080000
+ VirtualDiskAccessCreate VirtualDiskAccessMask = 0x00100000
+ VirtualDiskAccessMetaOps VirtualDiskAccessMask = 0x00200000
+ VirtualDiskAccessRead VirtualDiskAccessMask = 0x000d0000
+ VirtualDiskAccessAll VirtualDiskAccessMask = 0x003f0000
+ VirtualDiskAccessWritable VirtualDiskAccessMask = 0x00320000
+
+ // Flags for creating a VHD.
+ CreateVirtualDiskFlagNone CreateVirtualDiskFlag = 0x0
+ CreateVirtualDiskFlagFullPhysicalAllocation CreateVirtualDiskFlag = 0x1
+ CreateVirtualDiskFlagPreventWritesToSourceDisk CreateVirtualDiskFlag = 0x2
+ CreateVirtualDiskFlagDoNotCopyMetadataFromParent CreateVirtualDiskFlag = 0x4
+ CreateVirtualDiskFlagCreateBackingStorage CreateVirtualDiskFlag = 0x8
+ CreateVirtualDiskFlagUseChangeTrackingSourceLimit CreateVirtualDiskFlag = 0x10
+ CreateVirtualDiskFlagPreserveParentChangeTrackingState CreateVirtualDiskFlag = 0x20
+ CreateVirtualDiskFlagVhdSetUseOriginalBackingStorage CreateVirtualDiskFlag = 0x40 //revive:disable-line:var-naming VHD, not Vhd
+ CreateVirtualDiskFlagSparseFile CreateVirtualDiskFlag = 0x80
+ CreateVirtualDiskFlagPmemCompatible CreateVirtualDiskFlag = 0x100 //revive:disable-line:var-naming PMEM, not Pmem
+ CreateVirtualDiskFlagSupportCompressedVolumes CreateVirtualDiskFlag = 0x200
+
+ // Flags for opening a VHD.
+ OpenVirtualDiskFlagNone VirtualDiskFlag = 0x00000000
+ OpenVirtualDiskFlagNoParents VirtualDiskFlag = 0x00000001
+ OpenVirtualDiskFlagBlankFile VirtualDiskFlag = 0x00000002
+ OpenVirtualDiskFlagBootDrive VirtualDiskFlag = 0x00000004
+ OpenVirtualDiskFlagCachedIO VirtualDiskFlag = 0x00000008
+ OpenVirtualDiskFlagCustomDiffChain VirtualDiskFlag = 0x00000010
+ OpenVirtualDiskFlagParentCachedIO VirtualDiskFlag = 0x00000020
+ OpenVirtualDiskFlagVhdsetFileOnly VirtualDiskFlag = 0x00000040
+ OpenVirtualDiskFlagIgnoreRelativeParentLocator VirtualDiskFlag = 0x00000080
+ OpenVirtualDiskFlagNoWriteHardening VirtualDiskFlag = 0x00000100
+ OpenVirtualDiskFlagSupportCompressedVolumes VirtualDiskFlag = 0x00000200
+
+ // Flags for attaching a VHD.
+ AttachVirtualDiskFlagNone AttachVirtualDiskFlag = 0x00000000
+ AttachVirtualDiskFlagReadOnly AttachVirtualDiskFlag = 0x00000001
+ AttachVirtualDiskFlagNoDriveLetter AttachVirtualDiskFlag = 0x00000002
+ AttachVirtualDiskFlagPermanentLifetime AttachVirtualDiskFlag = 0x00000004
+ AttachVirtualDiskFlagNoLocalHost AttachVirtualDiskFlag = 0x00000008
+ AttachVirtualDiskFlagNoSecurityDescriptor AttachVirtualDiskFlag = 0x00000010
+ AttachVirtualDiskFlagBypassDefaultEncryptionPolicy AttachVirtualDiskFlag = 0x00000020
+ AttachVirtualDiskFlagNonPnp AttachVirtualDiskFlag = 0x00000040
+ AttachVirtualDiskFlagRestrictedRange AttachVirtualDiskFlag = 0x00000080
+ AttachVirtualDiskFlagSinglePartition AttachVirtualDiskFlag = 0x00000100
+ AttachVirtualDiskFlagRegisterVolume AttachVirtualDiskFlag = 0x00000200
+
+ // Flags for detaching a VHD.
+ DetachVirtualDiskFlagNone DetachVirtualDiskFlag = 0x0
+)
+
+// CreateVhdx is a helper function to create a simple vhdx file at the given path using
+// default values.
+//
+//revive:disable-next-line:var-naming VHDX, not Vhdx
+func CreateVhdx(path string, maxSizeInGb, blockSizeInMb uint32) error {
+ params := CreateVirtualDiskParameters{
+ Version: 2,
+ Version2: CreateVersion2{
+ MaximumSize: uint64(maxSizeInGb) * 1024 * 1024 * 1024,
+ BlockSizeInBytes: blockSizeInMb * 1024 * 1024,
+ },
+ }
+
+ handle, err := CreateVirtualDisk(path, VirtualDiskAccessNone, CreateVirtualDiskFlagNone, ¶ms)
+ if err != nil {
+ return err
+ }
+
+ return syscall.CloseHandle(handle)
+}
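The size arithmetic in `CreateVhdx` is worth noting: `maxSizeInGb` is widened to `uint64` before multiplying, since a disk over 4 GiB would overflow a `uint32` product, while the block size stays `uint32`. A small sketch of just those conversions (`vhdxSizes` is an illustrative name):

```go
package main

import "fmt"

// vhdxSizes performs the same unit conversions as CreateVhdx:
// GiB → bytes widened to uint64, MiB → bytes kept as uint32.
func vhdxSizes(maxSizeInGb, blockSizeInMb uint32) (maxBytes uint64, blockBytes uint32) {
	maxBytes = uint64(maxSizeInGb) * 1024 * 1024 * 1024
	blockBytes = blockSizeInMb * 1024 * 1024
	return
}

func main() {
	m, b := vhdxSizes(64, 1)
	fmt.Println(m, b) // 68719476736 1048576
}
```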
+
+// DetachVirtualDisk detaches a virtual hard disk by handle.
+func DetachVirtualDisk(handle syscall.Handle) (err error) {
+ if err := detachVirtualDisk(handle, 0, 0); err != nil {
+ return fmt.Errorf("failed to detach virtual disk: %w", err)
+ }
+ return nil
+}
+
+// DetachVhd detaches a vhd found at `path`.
+//
+//revive:disable-next-line:var-naming VHD, not Vhd
+func DetachVhd(path string) error {
+ handle, err := OpenVirtualDisk(
+ path,
+ VirtualDiskAccessNone,
+ OpenVirtualDiskFlagCachedIO|OpenVirtualDiskFlagIgnoreRelativeParentLocator,
+ )
+ if err != nil {
+ return err
+ }
+ defer syscall.CloseHandle(handle) //nolint:errcheck
+ return DetachVirtualDisk(handle)
+}
+
+// AttachVirtualDisk attaches a virtual hard disk for use.
+func AttachVirtualDisk(
+ handle syscall.Handle,
+ attachVirtualDiskFlag AttachVirtualDiskFlag,
+ parameters *AttachVirtualDiskParameters,
+) (err error) {
+ // Supports both version 1 and 2 of the attach parameters as version 2 wasn't present in RS5.
+ if err := attachVirtualDisk(
+ handle,
+ nil,
+ uint32(attachVirtualDiskFlag),
+ 0,
+ parameters,
+ nil,
+ ); err != nil {
+ return fmt.Errorf("failed to attach virtual disk: %w", err)
+ }
+ return nil
+}
+
+// AttachVhd attaches a virtual hard disk at `path` for use. Attaches using version 2
+// of the ATTACH_VIRTUAL_DISK_PARAMETERS.
+//
+//revive:disable-next-line:var-naming VHD, not Vhd
+func AttachVhd(path string) (err error) {
+ handle, err := OpenVirtualDisk(
+ path,
+ VirtualDiskAccessNone,
+ OpenVirtualDiskFlagCachedIO|OpenVirtualDiskFlagIgnoreRelativeParentLocator,
+ )
+ if err != nil {
+ return err
+ }
+
+ defer syscall.CloseHandle(handle) //nolint:errcheck
+ params := AttachVirtualDiskParameters{Version: 2}
+ if err := AttachVirtualDisk(
+ handle,
+ AttachVirtualDiskFlagNone,
+ ¶ms,
+ ); err != nil {
+ return fmt.Errorf("failed to attach virtual disk: %w", err)
+ }
+ return nil
+}
+
+// OpenVirtualDisk obtains a handle to a VHD opened with supplied access mask and flags.
+func OpenVirtualDisk(
+ vhdPath string,
+ virtualDiskAccessMask VirtualDiskAccessMask,
+ openVirtualDiskFlags VirtualDiskFlag,
+) (syscall.Handle, error) {
+ parameters := OpenVirtualDiskParameters{Version: 2}
+ handle, err := OpenVirtualDiskWithParameters(
+ vhdPath,
+ virtualDiskAccessMask,
+ openVirtualDiskFlags,
+ ¶meters,
+ )
+ if err != nil {
+ return 0, err
+ }
+ return handle, nil
+}
+
+// OpenVirtualDiskWithParameters obtains a handle to a VHD opened with supplied access mask, flags and parameters.
+func OpenVirtualDiskWithParameters(
+ vhdPath string,
+ virtualDiskAccessMask VirtualDiskAccessMask,
+ openVirtualDiskFlags VirtualDiskFlag,
+ parameters *OpenVirtualDiskParameters,
+) (syscall.Handle, error) {
+ var (
+ handle syscall.Handle
+ defaultType VirtualStorageType
+ getInfoOnly int32
+ readOnly int32
+ )
+ if parameters.Version != 2 {
+ return handle, fmt.Errorf("only version 2 VHDs are supported, found version: %d", parameters.Version)
+ }
+ if parameters.Version2.GetInfoOnly {
+ getInfoOnly = 1
+ }
+ if parameters.Version2.ReadOnly {
+ readOnly = 1
+ }
+ params := &openVirtualDiskParameters{
+ version: parameters.Version,
+ version2: openVersion2{
+ getInfoOnly,
+ readOnly,
+ parameters.Version2.ResiliencyGUID,
+ },
+ }
+ if err := openVirtualDisk(
+ &defaultType,
+ vhdPath,
+ uint32(virtualDiskAccessMask),
+ uint32(openVirtualDiskFlags),
+ params,
+ &handle,
+ ); err != nil {
+ return 0, fmt.Errorf("failed to open virtual disk: %w", err)
+ }
+ return handle, nil
+}
+
+// CreateVirtualDisk creates a virtual hard disk and returns a handle to the disk.
+func CreateVirtualDisk(
+ path string,
+ virtualDiskAccessMask VirtualDiskAccessMask,
+ createVirtualDiskFlags CreateVirtualDiskFlag,
+ parameters *CreateVirtualDiskParameters,
+) (syscall.Handle, error) {
+ var (
+ handle syscall.Handle
+ defaultType VirtualStorageType
+ )
+ if parameters.Version != 2 {
+ return handle, fmt.Errorf("only version 2 VHDs are supported, found version: %d", parameters.Version)
+ }
+
+ if err := createVirtualDisk(
+ &defaultType,
+ path,
+ uint32(virtualDiskAccessMask),
+ nil,
+ uint32(createVirtualDiskFlags),
+ 0,
+ parameters,
+ nil,
+ &handle,
+ ); err != nil {
+ return handle, fmt.Errorf("failed to create virtual disk: %w", err)
+ }
+ return handle, nil
+}
+
+// GetVirtualDiskPhysicalPath takes a handle to a virtual hard disk and returns the physical
+// path of the disk on the machine. This path is in the form \\.\PhysicalDriveX where X is an integer
+// that represents the particular enumeration of the physical disk on the caller's system.
+func GetVirtualDiskPhysicalPath(handle syscall.Handle) (_ string, err error) {
+ var (
+ diskPathSizeInBytes uint32 = 256 * 2 // max path length 256 wide chars
+ diskPhysicalPathBuf [256]uint16
+ )
+ if err := getVirtualDiskPhysicalPath(
+ handle,
+ &diskPathSizeInBytes,
+ &diskPhysicalPathBuf[0],
+ ); err != nil {
+ return "", fmt.Errorf("failed to get disk physical path: %w", err)
+ }
+ return windows.UTF16ToString(diskPhysicalPathBuf[:]), nil
+}
+
+// CreateDiffVhd is a helper function to create a differencing virtual disk.
+//
+//revive:disable-next-line:var-naming VHD, not Vhd
+func CreateDiffVhd(diffVhdPath, baseVhdPath string, blockSizeInMB uint32) error {
+ // Setting `ParentPath` is how to signal to create a differencing disk.
+ createParams := &CreateVirtualDiskParameters{
+ Version: 2,
+ Version2: CreateVersion2{
+ ParentPath: windows.StringToUTF16Ptr(baseVhdPath),
+ BlockSizeInBytes: blockSizeInMB * 1024 * 1024,
+ OpenFlags: uint32(OpenVirtualDiskFlagCachedIO),
+ },
+ }
+
+ vhdHandle, err := CreateVirtualDisk(
+ diffVhdPath,
+ VirtualDiskAccessNone,
+ CreateVirtualDiskFlagNone,
+ createParams,
+ )
+ if err != nil {
+ return fmt.Errorf("failed to create differencing vhd: %w", err)
+ }
+ if err := syscall.CloseHandle(vhdHandle); err != nil {
+ return fmt.Errorf("failed to close differencing vhd handle: %w", err)
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/go-winio/vhd/zvhd_windows.go b/vendor/github.com/Microsoft/go-winio/vhd/zvhd_windows.go
new file mode 100644
index 000000000..d0e917d2b
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/vhd/zvhd_windows.go
@@ -0,0 +1,108 @@
+//go:build windows
+
+// Code generated by 'go generate' using "github.com/Microsoft/go-winio/tools/mkwinsyscall"; DO NOT EDIT.
+
+package vhd
+
+import (
+ "syscall"
+ "unsafe"
+
+ "golang.org/x/sys/windows"
+)
+
+var _ unsafe.Pointer
+
+// Do the interface allocations only once for common
+// Errno values.
+const (
+ errnoERROR_IO_PENDING = 997
+)
+
+var (
+ errERROR_IO_PENDING error = syscall.Errno(errnoERROR_IO_PENDING)
+ errERROR_EINVAL error = syscall.EINVAL
+)
+
+// errnoErr returns common boxed Errno values, to prevent
+// allocations at runtime.
+func errnoErr(e syscall.Errno) error {
+ switch e {
+ case 0:
+ return errERROR_EINVAL
+ case errnoERROR_IO_PENDING:
+ return errERROR_IO_PENDING
+ }
+	// TODO: add more here, after collecting data on the common
+	// error values seen on Windows. (perhaps when running
+	// all.bat?)
+ return e
+}
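The `errnoErr` helper above exists to avoid allocating: converting a `syscall.Errno` (a `uintptr`) to the `error` interface generally boxes the value on each call, so common errno values are boxed once into package-level variables and reused. A portable sketch of the same pattern using `syscall.EINVAL`, which exists on all platforms:

```go
package main

import (
	"fmt"
	"syscall"
)

// Pre-boxed error value: returning this shared variable for a hot
// errno avoids re-boxing the Errno into an interface on every call.
var errEINVAL error = syscall.EINVAL

// errnoErr mirrors the generated helper: known-common values return
// the shared boxed error, everything else is boxed on demand.
func errnoErr(e syscall.Errno) error {
	if e == 0 {
		// mkwinsyscall maps a zero errno to the EINVAL sentinel.
		return errEINVAL
	}
	return e
}

func main() {
	// Both calls return the identical boxed value, not fresh allocations.
	fmt.Println(errnoErr(0) == errnoErr(0)) // true
}
```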
+
+var (
+ modvirtdisk = windows.NewLazySystemDLL("virtdisk.dll")
+
+ procAttachVirtualDisk = modvirtdisk.NewProc("AttachVirtualDisk")
+ procCreateVirtualDisk = modvirtdisk.NewProc("CreateVirtualDisk")
+ procDetachVirtualDisk = modvirtdisk.NewProc("DetachVirtualDisk")
+ procGetVirtualDiskPhysicalPath = modvirtdisk.NewProc("GetVirtualDiskPhysicalPath")
+ procOpenVirtualDisk = modvirtdisk.NewProc("OpenVirtualDisk")
+)
+
+func attachVirtualDisk(handle syscall.Handle, securityDescriptor *uintptr, attachVirtualDiskFlag uint32, providerSpecificFlags uint32, parameters *AttachVirtualDiskParameters, overlapped *syscall.Overlapped) (win32err error) {
+ r0, _, _ := syscall.Syscall6(procAttachVirtualDisk.Addr(), 6, uintptr(handle), uintptr(unsafe.Pointer(securityDescriptor)), uintptr(attachVirtualDiskFlag), uintptr(providerSpecificFlags), uintptr(unsafe.Pointer(parameters)), uintptr(unsafe.Pointer(overlapped)))
+ if r0 != 0 {
+ win32err = syscall.Errno(r0)
+ }
+ return
+}
+
+func createVirtualDisk(virtualStorageType *VirtualStorageType, path string, virtualDiskAccessMask uint32, securityDescriptor *uintptr, createVirtualDiskFlags uint32, providerSpecificFlags uint32, parameters *CreateVirtualDiskParameters, overlapped *syscall.Overlapped, handle *syscall.Handle) (win32err error) {
+ var _p0 *uint16
+ _p0, win32err = syscall.UTF16PtrFromString(path)
+ if win32err != nil {
+ return
+ }
+ return _createVirtualDisk(virtualStorageType, _p0, virtualDiskAccessMask, securityDescriptor, createVirtualDiskFlags, providerSpecificFlags, parameters, overlapped, handle)
+}
+
+func _createVirtualDisk(virtualStorageType *VirtualStorageType, path *uint16, virtualDiskAccessMask uint32, securityDescriptor *uintptr, createVirtualDiskFlags uint32, providerSpecificFlags uint32, parameters *CreateVirtualDiskParameters, overlapped *syscall.Overlapped, handle *syscall.Handle) (win32err error) {
+ r0, _, _ := syscall.Syscall9(procCreateVirtualDisk.Addr(), 9, uintptr(unsafe.Pointer(virtualStorageType)), uintptr(unsafe.Pointer(path)), uintptr(virtualDiskAccessMask), uintptr(unsafe.Pointer(securityDescriptor)), uintptr(createVirtualDiskFlags), uintptr(providerSpecificFlags), uintptr(unsafe.Pointer(parameters)), uintptr(unsafe.Pointer(overlapped)), uintptr(unsafe.Pointer(handle)))
+ if r0 != 0 {
+ win32err = syscall.Errno(r0)
+ }
+ return
+}
+
+func detachVirtualDisk(handle syscall.Handle, detachVirtualDiskFlags uint32, providerSpecificFlags uint32) (win32err error) {
+ r0, _, _ := syscall.Syscall(procDetachVirtualDisk.Addr(), 3, uintptr(handle), uintptr(detachVirtualDiskFlags), uintptr(providerSpecificFlags))
+ if r0 != 0 {
+ win32err = syscall.Errno(r0)
+ }
+ return
+}
+
+func getVirtualDiskPhysicalPath(handle syscall.Handle, diskPathSizeInBytes *uint32, buffer *uint16) (win32err error) {
+ r0, _, _ := syscall.Syscall(procGetVirtualDiskPhysicalPath.Addr(), 3, uintptr(handle), uintptr(unsafe.Pointer(diskPathSizeInBytes)), uintptr(unsafe.Pointer(buffer)))
+ if r0 != 0 {
+ win32err = syscall.Errno(r0)
+ }
+ return
+}
+
+func openVirtualDisk(virtualStorageType *VirtualStorageType, path string, virtualDiskAccessMask uint32, openVirtualDiskFlags uint32, parameters *openVirtualDiskParameters, handle *syscall.Handle) (win32err error) {
+ var _p0 *uint16
+ _p0, win32err = syscall.UTF16PtrFromString(path)
+ if win32err != nil {
+ return
+ }
+ return _openVirtualDisk(virtualStorageType, _p0, virtualDiskAccessMask, openVirtualDiskFlags, parameters, handle)
+}
+
+func _openVirtualDisk(virtualStorageType *VirtualStorageType, path *uint16, virtualDiskAccessMask uint32, openVirtualDiskFlags uint32, parameters *openVirtualDiskParameters, handle *syscall.Handle) (win32err error) {
+ r0, _, _ := syscall.Syscall6(procOpenVirtualDisk.Addr(), 6, uintptr(unsafe.Pointer(virtualStorageType)), uintptr(unsafe.Pointer(path)), uintptr(virtualDiskAccessMask), uintptr(openVirtualDiskFlags), uintptr(unsafe.Pointer(parameters)), uintptr(unsafe.Pointer(handle)))
+ if r0 != 0 {
+ win32err = syscall.Errno(r0)
+ }
+ return
+}
diff --git a/vendor/github.com/Microsoft/go-winio/zsyscall_windows.go b/vendor/github.com/Microsoft/go-winio/zsyscall_windows.go
new file mode 100644
index 000000000..83f45a135
--- /dev/null
+++ b/vendor/github.com/Microsoft/go-winio/zsyscall_windows.go
@@ -0,0 +1,438 @@
+//go:build windows
+
+// Code generated by 'go generate' using "github.com/Microsoft/go-winio/tools/mkwinsyscall"; DO NOT EDIT.
+
+package winio
+
+import (
+ "syscall"
+ "unsafe"
+
+ "golang.org/x/sys/windows"
+)
+
+var _ unsafe.Pointer
+
+// Do the interface allocations only once for common
+// Errno values.
+const (
+ errnoERROR_IO_PENDING = 997
+)
+
+var (
+ errERROR_IO_PENDING error = syscall.Errno(errnoERROR_IO_PENDING)
+ errERROR_EINVAL error = syscall.EINVAL
+)
+
+// errnoErr returns common boxed Errno values, to prevent
+// allocations at runtime.
+func errnoErr(e syscall.Errno) error {
+ switch e {
+ case 0:
+ return errERROR_EINVAL
+ case errnoERROR_IO_PENDING:
+ return errERROR_IO_PENDING
+ }
+	// TODO: add more here, after collecting data on the common
+	// error values seen on Windows. (perhaps when running
+	// all.bat?)
+ return e
+}
+
+var (
+ modadvapi32 = windows.NewLazySystemDLL("advapi32.dll")
+ modkernel32 = windows.NewLazySystemDLL("kernel32.dll")
+ modntdll = windows.NewLazySystemDLL("ntdll.dll")
+ modws2_32 = windows.NewLazySystemDLL("ws2_32.dll")
+
+ procAdjustTokenPrivileges = modadvapi32.NewProc("AdjustTokenPrivileges")
+ procConvertSecurityDescriptorToStringSecurityDescriptorW = modadvapi32.NewProc("ConvertSecurityDescriptorToStringSecurityDescriptorW")
+ procConvertSidToStringSidW = modadvapi32.NewProc("ConvertSidToStringSidW")
+ procConvertStringSecurityDescriptorToSecurityDescriptorW = modadvapi32.NewProc("ConvertStringSecurityDescriptorToSecurityDescriptorW")
+ procConvertStringSidToSidW = modadvapi32.NewProc("ConvertStringSidToSidW")
+ procGetSecurityDescriptorLength = modadvapi32.NewProc("GetSecurityDescriptorLength")
+ procImpersonateSelf = modadvapi32.NewProc("ImpersonateSelf")
+ procLookupAccountNameW = modadvapi32.NewProc("LookupAccountNameW")
+ procLookupAccountSidW = modadvapi32.NewProc("LookupAccountSidW")
+ procLookupPrivilegeDisplayNameW = modadvapi32.NewProc("LookupPrivilegeDisplayNameW")
+ procLookupPrivilegeNameW = modadvapi32.NewProc("LookupPrivilegeNameW")
+ procLookupPrivilegeValueW = modadvapi32.NewProc("LookupPrivilegeValueW")
+ procOpenThreadToken = modadvapi32.NewProc("OpenThreadToken")
+ procRevertToSelf = modadvapi32.NewProc("RevertToSelf")
+ procBackupRead = modkernel32.NewProc("BackupRead")
+ procBackupWrite = modkernel32.NewProc("BackupWrite")
+ procCancelIoEx = modkernel32.NewProc("CancelIoEx")
+ procConnectNamedPipe = modkernel32.NewProc("ConnectNamedPipe")
+ procCreateFileW = modkernel32.NewProc("CreateFileW")
+ procCreateIoCompletionPort = modkernel32.NewProc("CreateIoCompletionPort")
+ procCreateNamedPipeW = modkernel32.NewProc("CreateNamedPipeW")
+ procGetCurrentThread = modkernel32.NewProc("GetCurrentThread")
+ procGetNamedPipeHandleStateW = modkernel32.NewProc("GetNamedPipeHandleStateW")
+ procGetNamedPipeInfo = modkernel32.NewProc("GetNamedPipeInfo")
+ procGetQueuedCompletionStatus = modkernel32.NewProc("GetQueuedCompletionStatus")
+ procLocalAlloc = modkernel32.NewProc("LocalAlloc")
+ procLocalFree = modkernel32.NewProc("LocalFree")
+ procSetFileCompletionNotificationModes = modkernel32.NewProc("SetFileCompletionNotificationModes")
+ procNtCreateNamedPipeFile = modntdll.NewProc("NtCreateNamedPipeFile")
+ procRtlDefaultNpAcl = modntdll.NewProc("RtlDefaultNpAcl")
+ procRtlDosPathNameToNtPathName_U = modntdll.NewProc("RtlDosPathNameToNtPathName_U")
+ procRtlNtStatusToDosErrorNoTeb = modntdll.NewProc("RtlNtStatusToDosErrorNoTeb")
+ procWSAGetOverlappedResult = modws2_32.NewProc("WSAGetOverlappedResult")
+)
+
+func adjustTokenPrivileges(token windows.Token, releaseAll bool, input *byte, outputSize uint32, output *byte, requiredSize *uint32) (success bool, err error) {
+ var _p0 uint32
+ if releaseAll {
+ _p0 = 1
+ }
+ r0, _, e1 := syscall.Syscall6(procAdjustTokenPrivileges.Addr(), 6, uintptr(token), uintptr(_p0), uintptr(unsafe.Pointer(input)), uintptr(outputSize), uintptr(unsafe.Pointer(output)), uintptr(unsafe.Pointer(requiredSize)))
+ success = r0 != 0
+ if true {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func convertSecurityDescriptorToStringSecurityDescriptor(sd *byte, revision uint32, secInfo uint32, sddl **uint16, sddlSize *uint32) (err error) {
+ r1, _, e1 := syscall.Syscall6(procConvertSecurityDescriptorToStringSecurityDescriptorW.Addr(), 5, uintptr(unsafe.Pointer(sd)), uintptr(revision), uintptr(secInfo), uintptr(unsafe.Pointer(sddl)), uintptr(unsafe.Pointer(sddlSize)), 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func convertSidToStringSid(sid *byte, str **uint16) (err error) {
+ r1, _, e1 := syscall.Syscall(procConvertSidToStringSidW.Addr(), 2, uintptr(unsafe.Pointer(sid)), uintptr(unsafe.Pointer(str)), 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func convertStringSecurityDescriptorToSecurityDescriptor(str string, revision uint32, sd *uintptr, size *uint32) (err error) {
+ var _p0 *uint16
+ _p0, err = syscall.UTF16PtrFromString(str)
+ if err != nil {
+ return
+ }
+ return _convertStringSecurityDescriptorToSecurityDescriptor(_p0, revision, sd, size)
+}
+
+func _convertStringSecurityDescriptorToSecurityDescriptor(str *uint16, revision uint32, sd *uintptr, size *uint32) (err error) {
+ r1, _, e1 := syscall.Syscall6(procConvertStringSecurityDescriptorToSecurityDescriptorW.Addr(), 4, uintptr(unsafe.Pointer(str)), uintptr(revision), uintptr(unsafe.Pointer(sd)), uintptr(unsafe.Pointer(size)), 0, 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func convertStringSidToSid(str *uint16, sid **byte) (err error) {
+ r1, _, e1 := syscall.Syscall(procConvertStringSidToSidW.Addr(), 2, uintptr(unsafe.Pointer(str)), uintptr(unsafe.Pointer(sid)), 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func getSecurityDescriptorLength(sd uintptr) (len uint32) {
+ r0, _, _ := syscall.Syscall(procGetSecurityDescriptorLength.Addr(), 1, uintptr(sd), 0, 0)
+ len = uint32(r0)
+ return
+}
+
+func impersonateSelf(level uint32) (err error) {
+ r1, _, e1 := syscall.Syscall(procImpersonateSelf.Addr(), 1, uintptr(level), 0, 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func lookupAccountName(systemName *uint16, accountName string, sid *byte, sidSize *uint32, refDomain *uint16, refDomainSize *uint32, sidNameUse *uint32) (err error) {
+ var _p0 *uint16
+ _p0, err = syscall.UTF16PtrFromString(accountName)
+ if err != nil {
+ return
+ }
+ return _lookupAccountName(systemName, _p0, sid, sidSize, refDomain, refDomainSize, sidNameUse)
+}
+
+func _lookupAccountName(systemName *uint16, accountName *uint16, sid *byte, sidSize *uint32, refDomain *uint16, refDomainSize *uint32, sidNameUse *uint32) (err error) {
+ r1, _, e1 := syscall.Syscall9(procLookupAccountNameW.Addr(), 7, uintptr(unsafe.Pointer(systemName)), uintptr(unsafe.Pointer(accountName)), uintptr(unsafe.Pointer(sid)), uintptr(unsafe.Pointer(sidSize)), uintptr(unsafe.Pointer(refDomain)), uintptr(unsafe.Pointer(refDomainSize)), uintptr(unsafe.Pointer(sidNameUse)), 0, 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func lookupAccountSid(systemName *uint16, sid *byte, name *uint16, nameSize *uint32, refDomain *uint16, refDomainSize *uint32, sidNameUse *uint32) (err error) {
+ r1, _, e1 := syscall.Syscall9(procLookupAccountSidW.Addr(), 7, uintptr(unsafe.Pointer(systemName)), uintptr(unsafe.Pointer(sid)), uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(nameSize)), uintptr(unsafe.Pointer(refDomain)), uintptr(unsafe.Pointer(refDomainSize)), uintptr(unsafe.Pointer(sidNameUse)), 0, 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func lookupPrivilegeDisplayName(systemName string, name *uint16, buffer *uint16, size *uint32, languageId *uint32) (err error) {
+ var _p0 *uint16
+ _p0, err = syscall.UTF16PtrFromString(systemName)
+ if err != nil {
+ return
+ }
+ return _lookupPrivilegeDisplayName(_p0, name, buffer, size, languageId)
+}
+
+func _lookupPrivilegeDisplayName(systemName *uint16, name *uint16, buffer *uint16, size *uint32, languageId *uint32) (err error) {
+ r1, _, e1 := syscall.Syscall6(procLookupPrivilegeDisplayNameW.Addr(), 5, uintptr(unsafe.Pointer(systemName)), uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(buffer)), uintptr(unsafe.Pointer(size)), uintptr(unsafe.Pointer(languageId)), 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func lookupPrivilegeName(systemName string, luid *uint64, buffer *uint16, size *uint32) (err error) {
+ var _p0 *uint16
+ _p0, err = syscall.UTF16PtrFromString(systemName)
+ if err != nil {
+ return
+ }
+ return _lookupPrivilegeName(_p0, luid, buffer, size)
+}
+
+func _lookupPrivilegeName(systemName *uint16, luid *uint64, buffer *uint16, size *uint32) (err error) {
+ r1, _, e1 := syscall.Syscall6(procLookupPrivilegeNameW.Addr(), 4, uintptr(unsafe.Pointer(systemName)), uintptr(unsafe.Pointer(luid)), uintptr(unsafe.Pointer(buffer)), uintptr(unsafe.Pointer(size)), 0, 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func lookupPrivilegeValue(systemName string, name string, luid *uint64) (err error) {
+ var _p0 *uint16
+ _p0, err = syscall.UTF16PtrFromString(systemName)
+ if err != nil {
+ return
+ }
+ var _p1 *uint16
+ _p1, err = syscall.UTF16PtrFromString(name)
+ if err != nil {
+ return
+ }
+ return _lookupPrivilegeValue(_p0, _p1, luid)
+}
+
+func _lookupPrivilegeValue(systemName *uint16, name *uint16, luid *uint64) (err error) {
+ r1, _, e1 := syscall.Syscall(procLookupPrivilegeValueW.Addr(), 3, uintptr(unsafe.Pointer(systemName)), uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(luid)))
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func openThreadToken(thread syscall.Handle, accessMask uint32, openAsSelf bool, token *windows.Token) (err error) {
+ var _p0 uint32
+ if openAsSelf {
+ _p0 = 1
+ }
+ r1, _, e1 := syscall.Syscall6(procOpenThreadToken.Addr(), 4, uintptr(thread), uintptr(accessMask), uintptr(_p0), uintptr(unsafe.Pointer(token)), 0, 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func revertToSelf() (err error) {
+ r1, _, e1 := syscall.Syscall(procRevertToSelf.Addr(), 0, 0, 0, 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func backupRead(h syscall.Handle, b []byte, bytesRead *uint32, abort bool, processSecurity bool, context *uintptr) (err error) {
+ var _p0 *byte
+ if len(b) > 0 {
+ _p0 = &b[0]
+ }
+ var _p1 uint32
+ if abort {
+ _p1 = 1
+ }
+ var _p2 uint32
+ if processSecurity {
+ _p2 = 1
+ }
+ r1, _, e1 := syscall.Syscall9(procBackupRead.Addr(), 7, uintptr(h), uintptr(unsafe.Pointer(_p0)), uintptr(len(b)), uintptr(unsafe.Pointer(bytesRead)), uintptr(_p1), uintptr(_p2), uintptr(unsafe.Pointer(context)), 0, 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func backupWrite(h syscall.Handle, b []byte, bytesWritten *uint32, abort bool, processSecurity bool, context *uintptr) (err error) {
+ var _p0 *byte
+ if len(b) > 0 {
+ _p0 = &b[0]
+ }
+ var _p1 uint32
+ if abort {
+ _p1 = 1
+ }
+ var _p2 uint32
+ if processSecurity {
+ _p2 = 1
+ }
+ r1, _, e1 := syscall.Syscall9(procBackupWrite.Addr(), 7, uintptr(h), uintptr(unsafe.Pointer(_p0)), uintptr(len(b)), uintptr(unsafe.Pointer(bytesWritten)), uintptr(_p1), uintptr(_p2), uintptr(unsafe.Pointer(context)), 0, 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func cancelIoEx(file syscall.Handle, o *syscall.Overlapped) (err error) {
+ r1, _, e1 := syscall.Syscall(procCancelIoEx.Addr(), 2, uintptr(file), uintptr(unsafe.Pointer(o)), 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func connectNamedPipe(pipe syscall.Handle, o *syscall.Overlapped) (err error) {
+ r1, _, e1 := syscall.Syscall(procConnectNamedPipe.Addr(), 2, uintptr(pipe), uintptr(unsafe.Pointer(o)), 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func createFile(name string, access uint32, mode uint32, sa *syscall.SecurityAttributes, createmode uint32, attrs uint32, templatefile syscall.Handle) (handle syscall.Handle, err error) {
+ var _p0 *uint16
+ _p0, err = syscall.UTF16PtrFromString(name)
+ if err != nil {
+ return
+ }
+ return _createFile(_p0, access, mode, sa, createmode, attrs, templatefile)
+}
+
+func _createFile(name *uint16, access uint32, mode uint32, sa *syscall.SecurityAttributes, createmode uint32, attrs uint32, templatefile syscall.Handle) (handle syscall.Handle, err error) {
+ r0, _, e1 := syscall.Syscall9(procCreateFileW.Addr(), 7, uintptr(unsafe.Pointer(name)), uintptr(access), uintptr(mode), uintptr(unsafe.Pointer(sa)), uintptr(createmode), uintptr(attrs), uintptr(templatefile), 0, 0)
+ handle = syscall.Handle(r0)
+ if handle == syscall.InvalidHandle {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func createIoCompletionPort(file syscall.Handle, port syscall.Handle, key uintptr, threadCount uint32) (newport syscall.Handle, err error) {
+ r0, _, e1 := syscall.Syscall6(procCreateIoCompletionPort.Addr(), 4, uintptr(file), uintptr(port), uintptr(key), uintptr(threadCount), 0, 0)
+ newport = syscall.Handle(r0)
+ if newport == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func createNamedPipe(name string, flags uint32, pipeMode uint32, maxInstances uint32, outSize uint32, inSize uint32, defaultTimeout uint32, sa *syscall.SecurityAttributes) (handle syscall.Handle, err error) {
+ var _p0 *uint16
+ _p0, err = syscall.UTF16PtrFromString(name)
+ if err != nil {
+ return
+ }
+ return _createNamedPipe(_p0, flags, pipeMode, maxInstances, outSize, inSize, defaultTimeout, sa)
+}
+
+func _createNamedPipe(name *uint16, flags uint32, pipeMode uint32, maxInstances uint32, outSize uint32, inSize uint32, defaultTimeout uint32, sa *syscall.SecurityAttributes) (handle syscall.Handle, err error) {
+ r0, _, e1 := syscall.Syscall9(procCreateNamedPipeW.Addr(), 8, uintptr(unsafe.Pointer(name)), uintptr(flags), uintptr(pipeMode), uintptr(maxInstances), uintptr(outSize), uintptr(inSize), uintptr(defaultTimeout), uintptr(unsafe.Pointer(sa)), 0)
+ handle = syscall.Handle(r0)
+ if handle == syscall.InvalidHandle {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func getCurrentThread() (h syscall.Handle) {
+ r0, _, _ := syscall.Syscall(procGetCurrentThread.Addr(), 0, 0, 0, 0)
+ h = syscall.Handle(r0)
+ return
+}
+
+func getNamedPipeHandleState(pipe syscall.Handle, state *uint32, curInstances *uint32, maxCollectionCount *uint32, collectDataTimeout *uint32, userName *uint16, maxUserNameSize uint32) (err error) {
+ r1, _, e1 := syscall.Syscall9(procGetNamedPipeHandleStateW.Addr(), 7, uintptr(pipe), uintptr(unsafe.Pointer(state)), uintptr(unsafe.Pointer(curInstances)), uintptr(unsafe.Pointer(maxCollectionCount)), uintptr(unsafe.Pointer(collectDataTimeout)), uintptr(unsafe.Pointer(userName)), uintptr(maxUserNameSize), 0, 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func getNamedPipeInfo(pipe syscall.Handle, flags *uint32, outSize *uint32, inSize *uint32, maxInstances *uint32) (err error) {
+ r1, _, e1 := syscall.Syscall6(procGetNamedPipeInfo.Addr(), 5, uintptr(pipe), uintptr(unsafe.Pointer(flags)), uintptr(unsafe.Pointer(outSize)), uintptr(unsafe.Pointer(inSize)), uintptr(unsafe.Pointer(maxInstances)), 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func getQueuedCompletionStatus(port syscall.Handle, bytes *uint32, key *uintptr, o **ioOperation, timeout uint32) (err error) {
+ r1, _, e1 := syscall.Syscall6(procGetQueuedCompletionStatus.Addr(), 5, uintptr(port), uintptr(unsafe.Pointer(bytes)), uintptr(unsafe.Pointer(key)), uintptr(unsafe.Pointer(o)), uintptr(timeout), 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func localAlloc(uFlags uint32, length uint32) (ptr uintptr) {
+ r0, _, _ := syscall.Syscall(procLocalAlloc.Addr(), 2, uintptr(uFlags), uintptr(length), 0)
+ ptr = uintptr(r0)
+ return
+}
+
+func localFree(mem uintptr) {
+ syscall.Syscall(procLocalFree.Addr(), 1, uintptr(mem), 0, 0)
+ return
+}
+
+func setFileCompletionNotificationModes(h syscall.Handle, flags uint8) (err error) {
+ r1, _, e1 := syscall.Syscall(procSetFileCompletionNotificationModes.Addr(), 2, uintptr(h), uintptr(flags), 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
+
+func ntCreateNamedPipeFile(pipe *syscall.Handle, access uint32, oa *objectAttributes, iosb *ioStatusBlock, share uint32, disposition uint32, options uint32, typ uint32, readMode uint32, completionMode uint32, maxInstances uint32, inboundQuota uint32, outputQuota uint32, timeout *int64) (status ntStatus) {
+ r0, _, _ := syscall.Syscall15(procNtCreateNamedPipeFile.Addr(), 14, uintptr(unsafe.Pointer(pipe)), uintptr(access), uintptr(unsafe.Pointer(oa)), uintptr(unsafe.Pointer(iosb)), uintptr(share), uintptr(disposition), uintptr(options), uintptr(typ), uintptr(readMode), uintptr(completionMode), uintptr(maxInstances), uintptr(inboundQuota), uintptr(outputQuota), uintptr(unsafe.Pointer(timeout)), 0)
+ status = ntStatus(r0)
+ return
+}
+
+func rtlDefaultNpAcl(dacl *uintptr) (status ntStatus) {
+ r0, _, _ := syscall.Syscall(procRtlDefaultNpAcl.Addr(), 1, uintptr(unsafe.Pointer(dacl)), 0, 0)
+ status = ntStatus(r0)
+ return
+}
+
+func rtlDosPathNameToNtPathName(name *uint16, ntName *unicodeString, filePart uintptr, reserved uintptr) (status ntStatus) {
+ r0, _, _ := syscall.Syscall6(procRtlDosPathNameToNtPathName_U.Addr(), 4, uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(ntName)), uintptr(filePart), uintptr(reserved), 0, 0)
+ status = ntStatus(r0)
+ return
+}
+
+func rtlNtStatusToDosError(status ntStatus) (winerr error) {
+ r0, _, _ := syscall.Syscall(procRtlNtStatusToDosErrorNoTeb.Addr(), 1, uintptr(status), 0, 0)
+ if r0 != 0 {
+ winerr = syscall.Errno(r0)
+ }
+ return
+}
+
+func wsaGetOverlappedResult(h syscall.Handle, o *syscall.Overlapped, bytes *uint32, wait bool, flags *uint32) (err error) {
+ var _p0 uint32
+ if wait {
+ _p0 = 1
+ }
+ r1, _, e1 := syscall.Syscall6(procWSAGetOverlappedResult.Addr(), 5, uintptr(h), uintptr(unsafe.Pointer(o)), uintptr(unsafe.Pointer(bytes)), uintptr(_p0), uintptr(unsafe.Pointer(flags)), 0)
+ if r1 == 0 {
+ err = errnoErr(e1)
+ }
+ return
+}
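The generated wrappers above all follow the same two conventions: Go `bool` arguments are widened to `uint32` because Win32 `BOOL` is a 32-bit integer, and Go strings are converted to NUL-terminated UTF-16 before their pointer is passed. A platform-neutral sketch of those two conversions (the helper names here are illustrative, not part of the generated file):

```go
package main

import (
	"fmt"
	"unicode/utf16"
)

// boolToBOOL mirrors how the generated wrappers pass Go bools to Win32
// BOOL parameters: false -> 0, true -> 1.
func boolToBOOL(v bool) uint32 {
	if v {
		return 1
	}
	return 0
}

// utf16Ptr mirrors the shape of syscall.UTF16PtrFromString: encode the
// string as UTF-16, append a NUL terminator, and return the address of
// the first element. (Simplified: the real helper also returns an error
// for strings containing interior NUL bytes.)
func utf16Ptr(s string) *uint16 {
	u := utf16.Encode([]rune(s + "\x00"))
	return &u[0]
}

func main() {
	fmt.Println(boolToBOOL(true), boolToBOOL(false)) // 1 0
	fmt.Println(*utf16Ptr("hi"))                     // 104, the UTF-16 unit for 'h'
}
```

The `r1 == 0` check after each `Syscall` reflects the Win32 convention that a zero return signals failure, at which point the wrapper converts the accompanying errno into a Go error via `errnoErr`.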
diff --git a/vendor/github.com/Microsoft/hcsshim/.gitattributes b/vendor/github.com/Microsoft/hcsshim/.gitattributes
new file mode 100644
index 000000000..94f480de9
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/.gitattributes
@@ -0,0 +1 @@
+* text=auto eol=lf
\ No newline at end of file
diff --git a/vendor/github.com/Microsoft/hcsshim/.gitignore b/vendor/github.com/Microsoft/hcsshim/.gitignore
new file mode 100644
index 000000000..54ed6f06c
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/.gitignore
@@ -0,0 +1,38 @@
+# Binaries for programs and plugins
+*.exe
+*.dll
+*.so
+*.dylib
+
+# Ignore vscode setting files
+.vscode/
+
+# Test binary, build with `go test -c`
+*.test
+
+# Output of the go coverage tool, specifically when used with LiteIDE
+*.out
+
+# Project-local glide cache, RE: https://github.com/Masterminds/glide/issues/736
+.glide/
+
+# Ignore gcs bin directory
+service/bin/
+service/pkg/
+
+*.img
+*.vhd
+*.tar.gz
+
+# Make stuff
+.rootfs-done
+bin/*
+rootfs/*
+*.o
+/build/
+
+deps/*
+out/*
+
+.idea/
+.vscode/
\ No newline at end of file
diff --git a/vendor/github.com/Microsoft/hcsshim/.golangci.yml b/vendor/github.com/Microsoft/hcsshim/.golangci.yml
new file mode 100644
index 000000000..2400e7f1e
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/.golangci.yml
@@ -0,0 +1,99 @@
+run:
+ timeout: 8m
+
+linters:
+ enable:
+ - stylecheck
+
+linters-settings:
+ stylecheck:
+ # https://staticcheck.io/docs/checks
+ checks: ["all"]
+
+
+issues:
+ # This repo has a LOT of generated schema files, operating system bindings, and other things that ST1003 from stylecheck won't like
+ # (screaming case Windows api constants for example). There's also some structs that we *could* change the initialisms to be Go
+ # friendly (Id -> ID) but they're exported and it would be a breaking change. This makes it so that most new code, code that isn't
+ # supposed to be a pretty faithful mapping to an OS call/constants, or non-generated code still checks if we're following idioms,
+ # while ignoring the things that are just noise or would be more of a hassle than it'd be worth to change.
+ exclude-rules:
+ - path: layer.go
+ linters:
+ - stylecheck
+ Text: "ST1003:"
+
+ - path: hcsshim.go
+ linters:
+ - stylecheck
+ Text: "ST1003:"
+
+ - path: internal\\hcs\\schema2\\
+ linters:
+ - stylecheck
+ Text: "ST1003:"
+
+ - path: internal\\wclayer\\
+ linters:
+ - stylecheck
+ Text: "ST1003:"
+
+ - path: hcn\\
+ linters:
+ - stylecheck
+ Text: "ST1003:"
+
+ - path: internal\\hcs\\schema1\\
+ linters:
+ - stylecheck
+ Text: "ST1003:"
+
+ - path: internal\\hns\\
+ linters:
+ - stylecheck
+ Text: "ST1003:"
+
+ - path: ext4\\internal\\compactext4\\
+ linters:
+ - stylecheck
+ Text: "ST1003:"
+
+ - path: ext4\\internal\\format\\
+ linters:
+ - stylecheck
+ Text: "ST1003:"
+
+ - path: internal\\guestrequest\\
+ linters:
+ - stylecheck
+ Text: "ST1003:"
+
+ - path: internal\\guest\\prot\\
+ linters:
+ - stylecheck
+ Text: "ST1003:"
+
+ - path: internal\\windevice\\
+ linters:
+ - stylecheck
+ Text: "ST1003:"
+
+ - path: internal\\winapi\\
+ linters:
+ - stylecheck
+ Text: "ST1003:"
+
+ - path: internal\\vmcompute\\
+ linters:
+ - stylecheck
+ Text: "ST1003:"
+
+ - path: internal\\regstate\\
+ linters:
+ - stylecheck
+ Text: "ST1003:"
+
+ - path: internal\\hcserror\\
+ linters:
+ - stylecheck
+ Text: "ST1003:"
\ No newline at end of file
diff --git a/vendor/github.com/Microsoft/hcsshim/CODEOWNERS b/vendor/github.com/Microsoft/hcsshim/CODEOWNERS
new file mode 100644
index 000000000..f4c5a07d1
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/CODEOWNERS
@@ -0,0 +1 @@
+* @microsoft/containerplat
\ No newline at end of file
diff --git a/vendor/github.com/Microsoft/hcsshim/LICENSE b/vendor/github.com/Microsoft/hcsshim/LICENSE
new file mode 100644
index 000000000..49d21669a
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/LICENSE
@@ -0,0 +1,21 @@
+The MIT License (MIT)
+
+Copyright (c) 2015 Microsoft
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
\ No newline at end of file
diff --git a/vendor/github.com/Microsoft/hcsshim/Makefile b/vendor/github.com/Microsoft/hcsshim/Makefile
new file mode 100644
index 000000000..a8f5516cd
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/Makefile
@@ -0,0 +1,87 @@
+BASE:=base.tar.gz
+
+GO:=go
+GO_FLAGS:=-ldflags "-s -w" # strip Go binaries
+CGO_ENABLED:=0
+GOMODVENDOR:=
+
+CFLAGS:=-O2 -Wall
+LDFLAGS:=-static -s # strip C binaries
+
+GO_FLAGS_EXTRA:=
+ifeq "$(GOMODVENDOR)" "1"
+GO_FLAGS_EXTRA += -mod=vendor
+endif
+GO_BUILD:=CGO_ENABLED=$(CGO_ENABLED) $(GO) build $(GO_FLAGS) $(GO_FLAGS_EXTRA)
+
+SRCROOT=$(dir $(abspath $(firstword $(MAKEFILE_LIST))))
+
+# The link aliases for gcstools
+GCS_TOOLS=\
+ generichook
+
+.PHONY: all always rootfs test
+
+all: out/initrd.img out/rootfs.tar.gz
+
+clean:
+ find -name '*.o' -print0 | xargs -0 -r rm
+ rm -rf bin deps rootfs out
+
+test:
+ cd $(SRCROOT) && go test -v ./internal/guest/...
+
+out/delta.tar.gz: bin/init bin/vsockexec bin/cmd/gcs bin/cmd/gcstools Makefile
+ @mkdir -p out
+ rm -rf rootfs
+ mkdir -p rootfs/bin/
+ cp bin/init rootfs/
+ cp bin/vsockexec rootfs/bin/
+ cp bin/cmd/gcs rootfs/bin/
+ cp bin/cmd/gcstools rootfs/bin/
+ for tool in $(GCS_TOOLS); do ln -s gcstools rootfs/bin/$$tool; done
+ git -C $(SRCROOT) rev-parse HEAD > rootfs/gcs.commit && \
+ git -C $(SRCROOT) rev-parse --abbrev-ref HEAD > rootfs/gcs.branch
+ tar -zcf $@ -C rootfs .
+ rm -rf rootfs
+
+out/rootfs.tar.gz: out/initrd.img
+ rm -rf rootfs-conv
+ mkdir rootfs-conv
+ gunzip -c out/initrd.img | (cd rootfs-conv && cpio -imd)
+ tar -zcf $@ -C rootfs-conv .
+ rm -rf rootfs-conv
+
+out/initrd.img: $(BASE) out/delta.tar.gz $(SRCROOT)/hack/catcpio.sh
+ $(SRCROOT)/hack/catcpio.sh "$(BASE)" out/delta.tar.gz > out/initrd.img.uncompressed
+ gzip -c out/initrd.img.uncompressed > $@
+ rm out/initrd.img.uncompressed
+
+-include deps/cmd/gcs.gomake
+-include deps/cmd/gcstools.gomake
+
+# Implicit rule for includes that define Go targets.
+%.gomake: $(SRCROOT)/Makefile
+ @mkdir -p $(dir $@)
+ @/bin/echo $(@:deps/%.gomake=bin/%): $(SRCROOT)/hack/gomakedeps.sh > $@.new
+ @/bin/echo -e '\t@mkdir -p $$(dir $$@) $(dir $@)' >> $@.new
+ @/bin/echo -e '\t$$(GO_BUILD) -o $$@.new $$(SRCROOT)/$$(@:bin/%=%)' >> $@.new
+ @/bin/echo -e '\tGO="$(GO)" $$(SRCROOT)/hack/gomakedeps.sh $$@ $$(SRCROOT)/$$(@:bin/%=%) $$(GO_FLAGS) $$(GO_FLAGS_EXTRA) > $(@:%.gomake=%.godeps).new' >> $@.new
+ @/bin/echo -e '\tmv $(@:%.gomake=%.godeps).new $(@:%.gomake=%.godeps)' >> $@.new
+ @/bin/echo -e '\tmv $$@.new $$@' >> $@.new
+ @/bin/echo -e '-include $(@:%.gomake=%.godeps)' >> $@.new
+ mv $@.new $@
+
+VPATH=$(SRCROOT)
+
+bin/vsockexec: vsockexec/vsockexec.o vsockexec/vsock.o
+ @mkdir -p bin
+ $(CC) $(LDFLAGS) -o $@ $^
+
+bin/init: init/init.o vsockexec/vsock.o
+ @mkdir -p bin
+ $(CC) $(LDFLAGS) -o $@ $^
+
+%.o: %.c
+ @mkdir -p $(dir $@)
+ $(CC) $(CFLAGS) $(CPPFLAGS) -c -o $@ $<
diff --git a/vendor/github.com/Microsoft/hcsshim/Protobuild.toml b/vendor/github.com/Microsoft/hcsshim/Protobuild.toml
new file mode 100644
index 000000000..ee18671aa
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/Protobuild.toml
@@ -0,0 +1,49 @@
+version = "unstable"
+generator = "gogoctrd"
+plugins = ["grpc", "fieldpath"]
+
+# Control protoc include paths. Below are usually some good defaults, but feel
+# free to try it without them if it works for your project.
+[includes]
+ # Include paths that will be added before all others. Typically, you want to
+ # treat the root of the project as an include, but this may not be necessary.
+ before = ["./protobuf"]
+
+ # Paths that should be treated as include roots in relation to the vendor
+ # directory. These will be calculated with the vendor directory nearest the
+ # target package.
+ packages = ["github.com/gogo/protobuf"]
+
+ # Paths that will be added untouched to the end of the includes. We use
+ # `/usr/local/include` to pickup the common install location of protobuf.
+ # This is the default.
+ after = ["/usr/local/include"]
+
+# This section maps protobuf imports to Go packages. These will become
+# `-M` directives in the call to the go protobuf generator.
+[packages]
+ "gogoproto/gogo.proto" = "github.com/gogo/protobuf/gogoproto"
+ "google/protobuf/any.proto" = "github.com/gogo/protobuf/types"
+ "google/protobuf/empty.proto" = "github.com/gogo/protobuf/types"
+ "google/protobuf/struct.proto" = "github.com/gogo/protobuf/types"
+ "google/protobuf/descriptor.proto" = "github.com/gogo/protobuf/protoc-gen-gogo/descriptor"
+ "google/protobuf/field_mask.proto" = "github.com/gogo/protobuf/types"
+ "google/protobuf/timestamp.proto" = "github.com/gogo/protobuf/types"
+ "google/protobuf/duration.proto" = "github.com/gogo/protobuf/types"
+ "github/containerd/cgroups/stats/v1/metrics.proto" = "github.com/containerd/cgroups/stats/v1"
+
+[[overrides]]
+prefixes = ["github.com/Microsoft/hcsshim/internal/shimdiag"]
+plugins = ["ttrpc"]
+
+[[overrides]]
+prefixes = ["github.com/Microsoft/hcsshim/internal/computeagent"]
+plugins = ["ttrpc"]
+
+[[overrides]]
+prefixes = ["github.com/Microsoft/hcsshim/internal/ncproxyttrpc"]
+plugins = ["ttrpc"]
+
+[[overrides]]
+prefixes = ["github.com/Microsoft/hcsshim/internal/vmservice"]
+plugins = ["ttrpc"]
\ No newline at end of file
diff --git a/vendor/github.com/Microsoft/hcsshim/README.md b/vendor/github.com/Microsoft/hcsshim/README.md
new file mode 100644
index 000000000..b8ca926a9
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/README.md
@@ -0,0 +1,120 @@
+# hcsshim
+
+[![Build status](https://github.com/microsoft/hcsshim/actions/workflows/ci.yml/badge.svg?branch=master)](https://github.com/microsoft/hcsshim/actions?query=branch%3Amaster)
+
+This package contains the Golang interface for using the Windows [Host Compute Service](https://techcommunity.microsoft.com/t5/containers/introducing-the-host-compute-service-hcs/ba-p/382332) (HCS) to launch and manage [Windows Containers](https://docs.microsoft.com/en-us/virtualization/windowscontainers/about/). It also contains other helpers and functions for managing Windows Containers such as the Golang interface for the Host Network Service (HNS), as well as code for the [guest agent](./internal/guest/README.md) (commonly referred to as the GCS or Guest Compute Service in the codebase) used to support running Linux Hyper-V containers.
+
+It is primarily used in the [Moby](https://github.com/moby/moby) and [Containerd](https://github.com/containerd/containerd) projects, but it can be freely used by other projects as well.
+
+## Building
+
+While this repository can be used as a library to call the HCS APIs, a couple of binaries are built out of the repository as well: the Linux guest agent, and an implementation of the [runtime v2 containerd shim api](https://github.com/containerd/containerd/blob/master/runtime/v2/README.md).
+
+### Linux Hyper-V Container Guest Agent
+
+To build the Linux guest agent itself, all that's needed is to set your GOOS to "linux" and build out of ./cmd/gcs.
+```powershell
+C:\> $env:GOOS="linux"
+C:\> go build .\cmd\gcs\
+```
+
+or on a Linux machine
+```sh
+> go build ./cmd/gcs
+```
+
+If you want the agent packaged inside a rootfs to boot with, alongside all of the other tools, you'll need to provide a rootfs it can be packaged into. An easy way is to export the rootfs of a container.
+
+```sh
+docker pull busybox
+docker run --name base_image_container busybox
+docker export base_image_container | gzip > base.tar.gz
+BASE=./base.tar.gz
+make all
+```
+
+If the build is successful, in the `./out` folder you should see:
+```sh
+> ls ./out/
+delta.tar.gz initrd.img rootfs.tar.gz
+```
+
+### Containerd Shim
+For info on the Runtime V2 API: https://github.com/containerd/containerd/blob/master/runtime/v2/README.md.
+
+Contrary to the typical Linux architecture of shim -> runc, the runhcs shim is used both to launch and manage the lifetime of containers.
+
+```powershell
+C:\> $env:GOOS="windows"
+C:\> go build .\cmd\containerd-shim-runhcs-v1
+```
+
+Then place the binary in the same directory that Containerd is located in for your environment. A default Containerd configuration file can be generated by running:
+```powershell
+.\containerd.exe config default | Out-File "C:\Program Files\containerd\config.toml" -Encoding ascii
+```
+
+This config file will already have the shim set as the default runtime for CRI interactions.
+
+To try the shim out with ctr.exe:
+```powershell
+C:\> ctr.exe run --runtime io.containerd.runhcs.v1 --rm mcr.microsoft.com/windows/nanoserver:2004 windows-test cmd /c "echo Hello World!"
+```
+
+## Contributing
+
+This project welcomes contributions and suggestions. Most contributions require you to agree to a
+Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
+the rights to use your contribution. For details, visit https://cla.microsoft.com.
+
+When you submit a pull request, a CLA-bot will automatically determine whether you need to provide
+a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions
+provided by the bot. You will only need to do this once across all repos using our CLA.
+
+We also require that contributors [sign their commits](https://git-scm.com/docs/git-commit) using `git commit -s` or `git commit --signoff` to
+certify they either authored the work themselves or otherwise have permission to use it in this project. Please see https://developercertificate.org/ for
+more info, as well as to make sure that you can attest to the rules listed. Our CI uses the [DCO Github app](https://github.com/apps/dco) to ensure
+that all commits in a given PR are signed-off.
+
+### Test Directory (Important to note)
+
+This project has tried to trim some dependencies from the root Go modules file that would be cumbersome to get transitively included if this
+project is being vendored/used as a library. Some of these dependencies were only being used for tests, so the /test directory in this project also has
+its own go.mod file where these are now included to get around this issue. Our tests rely on the code in this project to run, so the test Go modules file
+has a relative path replace directive to pull in the latest hcsshim code that the tests actually touch from this project
+(which is the repo itself on your disk).
+
+```
+replace (
+ github.com/Microsoft/hcsshim => ../
+)
+```
+
+Because of this, for most code changes you may need to run `go mod vendor` + `go mod tidy` in the /test directory in this repository, as the
+CI in this project will check if the files are out of date and will fail if this is true.
+
+
+## Code of Conduct
+
+This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
+For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
+contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
+
+## Dependencies
+
+This project requires Golang 1.9 or newer to build.
+
+For system requirements to run this project, see the Microsoft docs on [Windows Container requirements](https://docs.microsoft.com/en-us/virtualization/windowscontainers/deploy-containers/system-requirements).
+
+## Reporting Security Issues
+
+Security issues and bugs should be reported privately, via email, to the Microsoft Security
+Response Center (MSRC) at [secure@microsoft.com](mailto:secure@microsoft.com). You should
+receive a response within 24 hours. If for some reason you do not, please follow up via
+email to ensure we received your original message. Further information, including the
+[MSRC PGP](https://technet.microsoft.com/en-us/security/dn606155) key, can be found in
+the [Security TechCenter](https://technet.microsoft.com/en-us/security/default).
+
+For additional details, see [Report a Computer Security Vulnerability](https://technet.microsoft.com/en-us/security/ff852094.aspx) on TechNet.
+
+---------------
+Copyright (c) 2018 Microsoft Corp. All rights reserved.
diff --git a/vendor/github.com/Microsoft/hcsshim/computestorage/attach.go b/vendor/github.com/Microsoft/hcsshim/computestorage/attach.go
new file mode 100644
index 000000000..7f1f2823d
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/computestorage/attach.go
@@ -0,0 +1,38 @@
+package computestorage
+
+import (
+ "context"
+ "encoding/json"
+
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "github.com/pkg/errors"
+ "go.opencensus.io/trace"
+)
+
+// AttachLayerStorageFilter sets up the layer storage filter on a writable
+// container layer.
+//
+// `layerPath` is a path to the directory where the writable layer is mounted. If the
+// path does not end in a `\` the platform will append it automatically.
+//
+// `layerData` is the parent read-only layer data.
+func AttachLayerStorageFilter(ctx context.Context, layerPath string, layerData LayerData) (err error) {
+ title := "hcsshim.AttachLayerStorageFilter"
+ ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(
+ trace.StringAttribute("layerPath", layerPath),
+ )
+
+ bytes, err := json.Marshal(layerData)
+ if err != nil {
+ return err
+ }
+
+ err = hcsAttachLayerStorageFilter(layerPath, string(bytes))
+ if err != nil {
+ return errors.Wrap(err, "failed to attach layer storage filter")
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/computestorage/destroy.go b/vendor/github.com/Microsoft/hcsshim/computestorage/destroy.go
new file mode 100644
index 000000000..8e28e6c50
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/computestorage/destroy.go
@@ -0,0 +1,26 @@
+package computestorage
+
+import (
+ "context"
+
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "github.com/pkg/errors"
+ "go.opencensus.io/trace"
+)
+
+// DestroyLayer deletes a container layer.
+//
+// `layerPath` is a path to a directory containing the layer to destroy.
+func DestroyLayer(ctx context.Context, layerPath string) (err error) {
+ title := "hcsshim.DestroyLayer"
+ ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(trace.StringAttribute("layerPath", layerPath))
+
+ err = hcsDestroyLayer(layerPath)
+ if err != nil {
+ return errors.Wrap(err, "failed to destroy layer")
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/computestorage/detach.go b/vendor/github.com/Microsoft/hcsshim/computestorage/detach.go
new file mode 100644
index 000000000..435473257
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/computestorage/detach.go
@@ -0,0 +1,26 @@
+package computestorage
+
+import (
+ "context"
+
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "github.com/pkg/errors"
+ "go.opencensus.io/trace"
+)
+
+// DetachLayerStorageFilter detaches the layer storage filter on a writable container layer.
+//
+// `layerPath` is a path to a directory containing the layer to detach the storage filter from.
+func DetachLayerStorageFilter(ctx context.Context, layerPath string) (err error) {
+ title := "hcsshim.DetachLayerStorageFilter"
+ ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(trace.StringAttribute("layerPath", layerPath))
+
+ err = hcsDetachLayerStorageFilter(layerPath)
+ if err != nil {
+ return errors.Wrap(err, "failed to detach layer storage filter")
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/computestorage/export.go b/vendor/github.com/Microsoft/hcsshim/computestorage/export.go
new file mode 100644
index 000000000..a1b12dd12
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/computestorage/export.go
@@ -0,0 +1,46 @@
+package computestorage
+
+import (
+ "context"
+ "encoding/json"
+
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "github.com/pkg/errors"
+ "go.opencensus.io/trace"
+)
+
+// ExportLayer exports a container layer.
+//
+// `layerPath` is a path to a directory containing the layer to export.
+//
+// `exportFolderPath` is a pre-existing folder to export the layer to.
+//
+// `layerData` is the parent layer data.
+//
+// `options` are the export options applied to the exported layer.
+func ExportLayer(ctx context.Context, layerPath, exportFolderPath string, layerData LayerData, options ExportLayerOptions) (err error) {
+ title := "hcsshim.ExportLayer"
+ ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(
+ trace.StringAttribute("layerPath", layerPath),
+ trace.StringAttribute("exportFolderPath", exportFolderPath),
+ )
+
+ ldbytes, err := json.Marshal(layerData)
+ if err != nil {
+ return err
+ }
+
+ obytes, err := json.Marshal(options)
+ if err != nil {
+ return err
+ }
+
+ err = hcsExportLayer(layerPath, exportFolderPath, string(ldbytes), string(obytes))
+ if err != nil {
+ return errors.Wrap(err, "failed to export layer")
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/computestorage/format.go b/vendor/github.com/Microsoft/hcsshim/computestorage/format.go
new file mode 100644
index 000000000..83c0fa33f
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/computestorage/format.go
@@ -0,0 +1,26 @@
+package computestorage
+
+import (
+ "context"
+
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "github.com/pkg/errors"
+ "go.opencensus.io/trace"
+ "golang.org/x/sys/windows"
+)
+
+// FormatWritableLayerVhd formats a virtual disk for use as a writable container layer.
+//
+// If the VHD is not mounted it will be temporarily mounted.
+func FormatWritableLayerVhd(ctx context.Context, vhdHandle windows.Handle) (err error) {
+ title := "hcsshim.FormatWritableLayerVhd"
+ ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+
+ err = hcsFormatWritableLayerVhd(vhdHandle)
+ if err != nil {
+ return errors.Wrap(err, "failed to format writable layer vhd")
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/computestorage/helpers.go b/vendor/github.com/Microsoft/hcsshim/computestorage/helpers.go
new file mode 100644
index 000000000..87fee452c
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/computestorage/helpers.go
@@ -0,0 +1,193 @@
+package computestorage
+
+import (
+ "context"
+ "os"
+ "path/filepath"
+ "syscall"
+
+ "github.com/Microsoft/go-winio/pkg/security"
+ "github.com/Microsoft/go-winio/vhd"
+ "github.com/pkg/errors"
+ "golang.org/x/sys/windows"
+)
+
+const defaultVHDXBlockSizeInMB = 1
+
+// SetupContainerBaseLayer is a helper to set up a container's scratch space. It
+// creates and formats the vhdx's inside, and their size is configurable with the
+// sizeInGB parameter.
+//
+// `layerPath` is the path to the base container layer on disk.
+//
+// `baseVhdPath` is the path to where the base vhdx for the base layer should be created.
+//
+// `diffVhdPath` is the path where the differencing disk for the base layer should be created.
+//
+// `sizeInGB` is the size in gigabytes to make the base vhdx.
+func SetupContainerBaseLayer(ctx context.Context, layerPath, baseVhdPath, diffVhdPath string, sizeInGB uint64) (err error) {
+ var (
+ hivesPath = filepath.Join(layerPath, "Hives")
+ layoutPath = filepath.Join(layerPath, "Layout")
+ )
+
+ // We need to remove the hives directory and layout file as `SetupBaseOSLayer` fails if these files
+ // already exist. `SetupBaseOSLayer` will create these files internally. We also remove the base and
+ // differencing disks if they exist in case we're asking for a different size.
+ if _, err := os.Stat(hivesPath); err == nil {
+ if err := os.RemoveAll(hivesPath); err != nil {
+ return errors.Wrap(err, "failed to remove preexisting hives directory")
+ }
+ }
+ if _, err := os.Stat(layoutPath); err == nil {
+ if err := os.RemoveAll(layoutPath); err != nil {
+ return errors.Wrap(err, "failed to remove preexisting layout file")
+ }
+ }
+
+ if _, err := os.Stat(baseVhdPath); err == nil {
+ if err := os.RemoveAll(baseVhdPath); err != nil {
+ return errors.Wrap(err, "failed to remove base vhdx path")
+ }
+ }
+ if _, err := os.Stat(diffVhdPath); err == nil {
+ if err := os.RemoveAll(diffVhdPath); err != nil {
+ return errors.Wrap(err, "failed to remove differencing vhdx")
+ }
+ }
+
+ createParams := &vhd.CreateVirtualDiskParameters{
+ Version: 2,
+ Version2: vhd.CreateVersion2{
+ MaximumSize: sizeInGB * 1024 * 1024 * 1024,
+ BlockSizeInBytes: defaultVHDXBlockSizeInMB * 1024 * 1024,
+ },
+ }
+ handle, err := vhd.CreateVirtualDisk(baseVhdPath, vhd.VirtualDiskAccessNone, vhd.CreateVirtualDiskFlagNone, createParams)
+ if err != nil {
+ return errors.Wrap(err, "failed to create vhdx")
+ }
+
+ defer func() {
+ if err != nil {
+ _ = syscall.CloseHandle(handle)
+ os.RemoveAll(baseVhdPath)
+ os.RemoveAll(diffVhdPath)
+ }
+ }()
+
+ if err = FormatWritableLayerVhd(ctx, windows.Handle(handle)); err != nil {
+ return err
+ }
+ // The base vhd handle must be closed before calling SetupBaseOSLayer for a container layer.
+ if err = syscall.CloseHandle(handle); err != nil {
+ return errors.Wrap(err, "failed to close vhdx handle")
+ }
+
+ options := OsLayerOptions{
+ Type: OsLayerTypeContainer,
+ }
+
+ // SetupBaseOSLayer expects an empty vhd handle for a container layer and will
+ // error out otherwise.
+ if err = SetupBaseOSLayer(ctx, layerPath, 0, options); err != nil {
+ return err
+ }
+ // Create the differencing disk that will be what's copied for the final rw layer
+ // for a container.
+ if err = vhd.CreateDiffVhd(diffVhdPath, baseVhdPath, defaultVHDXBlockSizeInMB); err != nil {
+ return errors.Wrap(err, "failed to create differencing disk")
+ }
+
+ if err = security.GrantVmGroupAccess(baseVhdPath); err != nil {
+ return errors.Wrapf(err, "failed to grant vm group access to %s", baseVhdPath)
+ }
+ if err = security.GrantVmGroupAccess(diffVhdPath); err != nil {
+ return errors.Wrapf(err, "failed to grant vm group access to %s", diffVhdPath)
+ }
+ return nil
+}
+
+// SetupUtilityVMBaseLayer is a helper to set up a UVM's scratch space. It creates and
+// formats the vhdx inside, and its size is configurable by the sizeInGB parameter.
+//
+// `uvmPath` is the path to the UtilityVM filesystem.
+//
+// `baseVhdPath` is the path to where the base vhdx for the UVM should be created.
+//
+// `diffVhdPath` is the path where the differencing disk for the UVM should be created.
+//
+// `sizeInGB` specifies the size in gigabytes to make the base vhdx.
+func SetupUtilityVMBaseLayer(ctx context.Context, uvmPath, baseVhdPath, diffVhdPath string, sizeInGB uint64) (err error) {
+ // Remove the base and differencing disks if they exist in case we're asking for a different size.
+ if _, err := os.Stat(baseVhdPath); err == nil {
+ if err := os.RemoveAll(baseVhdPath); err != nil {
+ return errors.Wrap(err, "failed to remove base vhdx")
+ }
+ }
+ if _, err := os.Stat(diffVhdPath); err == nil {
+ if err := os.RemoveAll(diffVhdPath); err != nil {
+ return errors.Wrap(err, "failed to remove differencing vhdx")
+ }
+ }
+
+ // Just create the vhdx for utilityVM layer, no need to format it.
+ createParams := &vhd.CreateVirtualDiskParameters{
+ Version: 2,
+ Version2: vhd.CreateVersion2{
+ MaximumSize: sizeInGB * 1024 * 1024 * 1024,
+ BlockSizeInBytes: defaultVHDXBlockSizeInMB * 1024 * 1024,
+ },
+ }
+ handle, err := vhd.CreateVirtualDisk(baseVhdPath, vhd.VirtualDiskAccessNone, vhd.CreateVirtualDiskFlagNone, createParams)
+ if err != nil {
+ return errors.Wrap(err, "failed to create vhdx")
+ }
+
+ defer func() {
+ if err != nil {
+ _ = syscall.CloseHandle(handle)
+ os.RemoveAll(baseVhdPath)
+ os.RemoveAll(diffVhdPath)
+ }
+ }()
+
+ // If it is a UtilityVM layer then the base vhdx must be attached when calling
+ // `SetupBaseOSLayer`
+ attachParams := &vhd.AttachVirtualDiskParameters{
+ Version: 2,
+ }
+ if err := vhd.AttachVirtualDisk(handle, vhd.AttachVirtualDiskFlagNone, attachParams); err != nil {
+ return errors.Wrap(err, "failed to attach virtual disk")
+ }
+
+ options := OsLayerOptions{
+ Type: OsLayerTypeVM,
+ }
+ if err := SetupBaseOSLayer(ctx, uvmPath, windows.Handle(handle), options); err != nil {
+ return err
+ }
+
+ // Detach and close the handle after setting up the layer as we don't need the handle
+ // for anything else and we no longer need to be attached either.
+ if err = vhd.DetachVirtualDisk(handle); err != nil {
+ return errors.Wrap(err, "failed to detach vhdx")
+ }
+ if err = syscall.CloseHandle(handle); err != nil {
+ return errors.Wrap(err, "failed to close vhdx handle")
+ }
+
+ // Create the differencing disk that will be what's copied for the final rw layer
+ // for a container.
+ if err = vhd.CreateDiffVhd(diffVhdPath, baseVhdPath, defaultVHDXBlockSizeInMB); err != nil {
+ return errors.Wrap(err, "failed to create differencing disk")
+ }
+
+ if err := security.GrantVmGroupAccess(baseVhdPath); err != nil {
+ return errors.Wrapf(err, "failed to grant vm group access to %s", baseVhdPath)
+ }
+ if err := security.GrantVmGroupAccess(diffVhdPath); err != nil {
+ return errors.Wrapf(err, "failed to grant vm group access to %s", diffVhdPath)
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/computestorage/import.go b/vendor/github.com/Microsoft/hcsshim/computestorage/import.go
new file mode 100644
index 000000000..0c61dab32
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/computestorage/import.go
@@ -0,0 +1,41 @@
+package computestorage
+
+import (
+ "context"
+ "encoding/json"
+
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "github.com/pkg/errors"
+ "go.opencensus.io/trace"
+)
+
+// ImportLayer imports a container layer.
+//
+// `layerPath` is a path to a directory to import the layer to. If the directory
+// does not exist it will be automatically created.
+//
+// `sourceFolderPath` is a pre-existing folder that contains the layer to
+// import.
+//
+// `layerData` is the parent layer data.
+func ImportLayer(ctx context.Context, layerPath, sourceFolderPath string, layerData LayerData) (err error) {
+ title := "hcsshim.ImportLayer"
+ ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(
+ trace.StringAttribute("layerPath", layerPath),
+ trace.StringAttribute("sourceFolderPath", sourceFolderPath),
+ )
+
+ bytes, err := json.Marshal(layerData)
+ if err != nil {
+ return err
+ }
+
+ err = hcsImportLayer(layerPath, sourceFolderPath, string(bytes))
+ if err != nil {
+ return errors.Wrap(err, "failed to import layer")
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/computestorage/initialize.go b/vendor/github.com/Microsoft/hcsshim/computestorage/initialize.go
new file mode 100644
index 000000000..53ed8ea6e
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/computestorage/initialize.go
@@ -0,0 +1,38 @@
+package computestorage
+
+import (
+ "context"
+ "encoding/json"
+
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "github.com/pkg/errors"
+ "go.opencensus.io/trace"
+)
+
+// InitializeWritableLayer initializes a writable layer for a container.
+//
+// `layerPath` is a path to the directory where the layer is mounted. If the
+// path does not end in a `\` the platform will append it automatically.
+//
+// `layerData` is the parent read-only layer data.
+func InitializeWritableLayer(ctx context.Context, layerPath string, layerData LayerData) (err error) {
+ title := "hcsshim.InitializeWritableLayer"
+ ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(
+ trace.StringAttribute("layerPath", layerPath),
+ )
+
+ bytes, err := json.Marshal(layerData)
+ if err != nil {
+ return err
+ }
+
+ // Options are not used in the platform as of RS5
+ err = hcsInitializeWritableLayer(layerPath, string(bytes), "")
+ if err != nil {
+ return errors.Wrap(err, "failed to initialize container layer")
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/computestorage/mount.go b/vendor/github.com/Microsoft/hcsshim/computestorage/mount.go
new file mode 100644
index 000000000..fcdbbef81
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/computestorage/mount.go
@@ -0,0 +1,27 @@
+package computestorage
+
+import (
+ "context"
+
+ "github.com/Microsoft/hcsshim/internal/interop"
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "github.com/pkg/errors"
+ "go.opencensus.io/trace"
+ "golang.org/x/sys/windows"
+)
+
+// GetLayerVhdMountPath returns the volume path for a virtual disk of a writable container layer.
+func GetLayerVhdMountPath(ctx context.Context, vhdHandle windows.Handle) (path string, err error) {
+ title := "hcsshim.GetLayerVhdMountPath"
+ ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+
+ var mountPath *uint16
+ err = hcsGetLayerVhdMountPath(vhdHandle, &mountPath)
+ if err != nil {
+ return "", errors.Wrap(err, "failed to get vhd mount path")
+ }
+ path = interop.ConvertAndFreeCoTaskMemString(mountPath)
+ return path, nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/computestorage/setup.go b/vendor/github.com/Microsoft/hcsshim/computestorage/setup.go
new file mode 100644
index 000000000..06aaf841e
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/computestorage/setup.go
@@ -0,0 +1,74 @@
+package computestorage
+
+import (
+ "context"
+ "encoding/json"
+
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "github.com/Microsoft/hcsshim/osversion"
+ "github.com/pkg/errors"
+ "go.opencensus.io/trace"
+ "golang.org/x/sys/windows"
+)
+
+// SetupBaseOSLayer sets up a layer that contains a base OS for a container.
+//
+// `layerPath` is a path to a directory containing the layer.
+//
+// `vhdHandle` is an empty file handle if `options.Type == OsLayerTypeContainer`,
+// or a file handle to the 'SystemTemplateBase.vhdx' if `options.Type ==
+// OsLayerTypeVM`.
+//
+// `options` are the options applied while processing the layer.
+func SetupBaseOSLayer(ctx context.Context, layerPath string, vhdHandle windows.Handle, options OsLayerOptions) (err error) {
+ title := "hcsshim.SetupBaseOSLayer"
+ ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(
+ trace.StringAttribute("layerPath", layerPath),
+ )
+
+ bytes, err := json.Marshal(options)
+ if err != nil {
+ return err
+ }
+
+ err = hcsSetupBaseOSLayer(layerPath, vhdHandle, string(bytes))
+ if err != nil {
+ return errors.Wrap(err, "failed to setup base OS layer")
+ }
+ return nil
+}
+
+// SetupBaseOSVolume sets up a volume that contains a base OS for a container.
+//
+// `layerPath` is a path to a directory containing the layer.
+//
+// `volumePath` is the path to the volume to be used for setup.
+//
+// `options` are the options applied while processing the layer.
+func SetupBaseOSVolume(ctx context.Context, layerPath, volumePath string, options OsLayerOptions) (err error) {
+ if osversion.Build() < 19645 {
+ return errors.New("SetupBaseOSVolume is not present on builds older than 19645")
+ }
+ title := "hcsshim.SetupBaseOSVolume"
+ ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(
+ trace.StringAttribute("layerPath", layerPath),
+ trace.StringAttribute("volumePath", volumePath),
+ )
+
+ bytes, err := json.Marshal(options)
+ if err != nil {
+ return err
+ }
+
+ err = hcsSetupBaseOSVolume(layerPath, volumePath, string(bytes))
+ if err != nil {
+ return errors.Wrap(err, "failed to setup base OS volume")
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/computestorage/storage.go b/vendor/github.com/Microsoft/hcsshim/computestorage/storage.go
new file mode 100644
index 000000000..95aff9c18
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/computestorage/storage.go
@@ -0,0 +1,50 @@
+// Package computestorage is a wrapper around the HCS storage APIs. These are new storage APIs,
+// introduced separately from the original graphdriver calls, intended to give more freedom around creating
+// and managing container layers and scratch spaces.
+package computestorage
+
+import (
+ hcsschema "github.com/Microsoft/hcsshim/internal/hcs/schema2"
+)
+
+//go:generate go run ../mksyscall_windows.go -output zsyscall_windows.go storage.go
+
+//sys hcsImportLayer(layerPath string, sourceFolderPath string, layerData string) (hr error) = computestorage.HcsImportLayer?
+//sys hcsExportLayer(layerPath string, exportFolderPath string, layerData string, options string) (hr error) = computestorage.HcsExportLayer?
+//sys hcsDestroyLayer(layerPath string) (hr error) = computestorage.HcsDestroyLayer?
+//sys hcsSetupBaseOSLayer(layerPath string, handle windows.Handle, options string) (hr error) = computestorage.HcsSetupBaseOSLayer?
+//sys hcsInitializeWritableLayer(writableLayerPath string, layerData string, options string) (hr error) = computestorage.HcsInitializeWritableLayer?
+//sys hcsAttachLayerStorageFilter(layerPath string, layerData string) (hr error) = computestorage.HcsAttachLayerStorageFilter?
+//sys hcsDetachLayerStorageFilter(layerPath string) (hr error) = computestorage.HcsDetachLayerStorageFilter?
+//sys hcsFormatWritableLayerVhd(handle windows.Handle) (hr error) = computestorage.HcsFormatWritableLayerVhd?
+//sys hcsGetLayerVhdMountPath(vhdHandle windows.Handle, mountPath **uint16) (hr error) = computestorage.HcsGetLayerVhdMountPath?
+//sys hcsSetupBaseOSVolume(layerPath string, volumePath string, options string) (hr error) = computestorage.HcsSetupBaseOSVolume?
+
+// LayerData is the data used to describe parent layer information.
+type LayerData struct {
+ SchemaVersion hcsschema.Version `json:"SchemaVersion,omitempty"`
+ Layers []hcsschema.Layer `json:"Layers,omitempty"`
+}
+
+// ExportLayerOptions are the set of options that are used with the `computestorage.HcsExportLayer` syscall.
+type ExportLayerOptions struct {
+ IsWritableLayer bool `json:"IsWritableLayer,omitempty"`
+}
+
+// OsLayerType is the type of layer being operated on.
+type OsLayerType string
+
+const (
+ // OsLayerTypeContainer is a container layer.
+ OsLayerTypeContainer OsLayerType = "Container"
+ // OsLayerTypeVM is a virtual machine layer.
+ OsLayerTypeVM OsLayerType = "Vm"
+)
+
+// OsLayerOptions are the set of options that are used with the `SetupBaseOSLayer` and
+// `SetupBaseOSVolume` calls.
+type OsLayerOptions struct {
+ Type OsLayerType `json:"Type,omitempty"`
+ DisableCiCacheOptimization bool `json:"DisableCiCacheOptimization,omitempty"`
+ SkipUpdateBcdForBoot bool `json:"SkipUpdateBcdForBoot,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/computestorage/zsyscall_windows.go b/vendor/github.com/Microsoft/hcsshim/computestorage/zsyscall_windows.go
new file mode 100644
index 000000000..4f9518067
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/computestorage/zsyscall_windows.go
@@ -0,0 +1,319 @@
+// Code generated mksyscall_windows.exe DO NOT EDIT
+
+package computestorage
+
+import (
+ "syscall"
+ "unsafe"
+
+ "golang.org/x/sys/windows"
+)
+
+var _ unsafe.Pointer
+
+// Do the interface allocations only once for common
+// Errno values.
+const (
+ errnoERROR_IO_PENDING = 997
+)
+
+var (
+ errERROR_IO_PENDING error = syscall.Errno(errnoERROR_IO_PENDING)
+)
+
+// errnoErr returns common boxed Errno values, to prevent
+// allocations at runtime.
+func errnoErr(e syscall.Errno) error {
+ switch e {
+ case 0:
+ return nil
+ case errnoERROR_IO_PENDING:
+ return errERROR_IO_PENDING
+ }
+ // TODO: add more here, after collecting data on the common
+ // error values see on Windows. (perhaps when running
+ // all.bat?)
+ return e
+}
+
+var (
+ modcomputestorage = windows.NewLazySystemDLL("computestorage.dll")
+
+ procHcsImportLayer = modcomputestorage.NewProc("HcsImportLayer")
+ procHcsExportLayer = modcomputestorage.NewProc("HcsExportLayer")
+ procHcsDestoryLayer = modcomputestorage.NewProc("HcsDestroyLayer")
+ procHcsSetupBaseOSLayer = modcomputestorage.NewProc("HcsSetupBaseOSLayer")
+ procHcsInitializeWritableLayer = modcomputestorage.NewProc("HcsInitializeWritableLayer")
+ procHcsAttachLayerStorageFilter = modcomputestorage.NewProc("HcsAttachLayerStorageFilter")
+ procHcsDetachLayerStorageFilter = modcomputestorage.NewProc("HcsDetachLayerStorageFilter")
+ procHcsFormatWritableLayerVhd = modcomputestorage.NewProc("HcsFormatWritableLayerVhd")
+ procHcsGetLayerVhdMountPath = modcomputestorage.NewProc("HcsGetLayerVhdMountPath")
+ procHcsSetupBaseOSVolume = modcomputestorage.NewProc("HcsSetupBaseOSVolume")
+)
+
+func hcsImportLayer(layerPath string, sourceFolderPath string, layerData string) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(layerPath)
+ if hr != nil {
+ return
+ }
+ var _p1 *uint16
+ _p1, hr = syscall.UTF16PtrFromString(sourceFolderPath)
+ if hr != nil {
+ return
+ }
+ var _p2 *uint16
+ _p2, hr = syscall.UTF16PtrFromString(layerData)
+ if hr != nil {
+ return
+ }
+ return _hcsImportLayer(_p0, _p1, _p2)
+}
+
+func _hcsImportLayer(layerPath *uint16, sourceFolderPath *uint16, layerData *uint16) (hr error) {
+ if hr = procHcsImportLayer.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsImportLayer.Addr(), 3, uintptr(unsafe.Pointer(layerPath)), uintptr(unsafe.Pointer(sourceFolderPath)), uintptr(unsafe.Pointer(layerData)))
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsExportLayer(layerPath string, exportFolderPath string, layerData string, options string) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(layerPath)
+ if hr != nil {
+ return
+ }
+ var _p1 *uint16
+ _p1, hr = syscall.UTF16PtrFromString(exportFolderPath)
+ if hr != nil {
+ return
+ }
+ var _p2 *uint16
+ _p2, hr = syscall.UTF16PtrFromString(layerData)
+ if hr != nil {
+ return
+ }
+ var _p3 *uint16
+ _p3, hr = syscall.UTF16PtrFromString(options)
+ if hr != nil {
+ return
+ }
+ return _hcsExportLayer(_p0, _p1, _p2, _p3)
+}
+
+func _hcsExportLayer(layerPath *uint16, exportFolderPath *uint16, layerData *uint16, options *uint16) (hr error) {
+ if hr = procHcsExportLayer.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall6(procHcsExportLayer.Addr(), 4, uintptr(unsafe.Pointer(layerPath)), uintptr(unsafe.Pointer(exportFolderPath)), uintptr(unsafe.Pointer(layerData)), uintptr(unsafe.Pointer(options)), 0, 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsDestroyLayer(layerPath string) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(layerPath)
+ if hr != nil {
+ return
+ }
+ return _hcsDestroyLayer(_p0)
+}
+
+func _hcsDestroyLayer(layerPath *uint16) (hr error) {
+ if hr = procHcsDestoryLayer.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsDestoryLayer.Addr(), 1, uintptr(unsafe.Pointer(layerPath)), 0, 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsSetupBaseOSLayer(layerPath string, handle windows.Handle, options string) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(layerPath)
+ if hr != nil {
+ return
+ }
+ var _p1 *uint16
+ _p1, hr = syscall.UTF16PtrFromString(options)
+ if hr != nil {
+ return
+ }
+ return _hcsSetupBaseOSLayer(_p0, handle, _p1)
+}
+
+func _hcsSetupBaseOSLayer(layerPath *uint16, handle windows.Handle, options *uint16) (hr error) {
+ if hr = procHcsSetupBaseOSLayer.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsSetupBaseOSLayer.Addr(), 3, uintptr(unsafe.Pointer(layerPath)), uintptr(handle), uintptr(unsafe.Pointer(options)))
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsInitializeWritableLayer(writableLayerPath string, layerData string, options string) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(writableLayerPath)
+ if hr != nil {
+ return
+ }
+ var _p1 *uint16
+ _p1, hr = syscall.UTF16PtrFromString(layerData)
+ if hr != nil {
+ return
+ }
+ var _p2 *uint16
+ _p2, hr = syscall.UTF16PtrFromString(options)
+ if hr != nil {
+ return
+ }
+ return _hcsInitializeWritableLayer(_p0, _p1, _p2)
+}
+
+func _hcsInitializeWritableLayer(writableLayerPath *uint16, layerData *uint16, options *uint16) (hr error) {
+ if hr = procHcsInitializeWritableLayer.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsInitializeWritableLayer.Addr(), 3, uintptr(unsafe.Pointer(writableLayerPath)), uintptr(unsafe.Pointer(layerData)), uintptr(unsafe.Pointer(options)))
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsAttachLayerStorageFilter(layerPath string, layerData string) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(layerPath)
+ if hr != nil {
+ return
+ }
+ var _p1 *uint16
+ _p1, hr = syscall.UTF16PtrFromString(layerData)
+ if hr != nil {
+ return
+ }
+ return _hcsAttachLayerStorageFilter(_p0, _p1)
+}
+
+func _hcsAttachLayerStorageFilter(layerPath *uint16, layerData *uint16) (hr error) {
+ if hr = procHcsAttachLayerStorageFilter.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsAttachLayerStorageFilter.Addr(), 2, uintptr(unsafe.Pointer(layerPath)), uintptr(unsafe.Pointer(layerData)), 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsDetachLayerStorageFilter(layerPath string) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(layerPath)
+ if hr != nil {
+ return
+ }
+ return _hcsDetachLayerStorageFilter(_p0)
+}
+
+func _hcsDetachLayerStorageFilter(layerPath *uint16) (hr error) {
+ if hr = procHcsDetachLayerStorageFilter.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsDetachLayerStorageFilter.Addr(), 1, uintptr(unsafe.Pointer(layerPath)), 0, 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsFormatWritableLayerVhd(handle windows.Handle) (hr error) {
+ if hr = procHcsFormatWritableLayerVhd.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsFormatWritableLayerVhd.Addr(), 1, uintptr(handle), 0, 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsGetLayerVhdMountPath(vhdHandle windows.Handle, mountPath **uint16) (hr error) {
+ if hr = procHcsGetLayerVhdMountPath.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsGetLayerVhdMountPath.Addr(), 2, uintptr(vhdHandle), uintptr(unsafe.Pointer(mountPath)), 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsSetupBaseOSVolume(layerPath string, volumePath string, options string) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(layerPath)
+ if hr != nil {
+ return
+ }
+ var _p1 *uint16
+ _p1, hr = syscall.UTF16PtrFromString(volumePath)
+ if hr != nil {
+ return
+ }
+ var _p2 *uint16
+ _p2, hr = syscall.UTF16PtrFromString(options)
+ if hr != nil {
+ return
+ }
+ return _hcsSetupBaseOSVolume(_p0, _p1, _p2)
+}
+
+func _hcsSetupBaseOSVolume(layerPath *uint16, volumePath *uint16, options *uint16) (hr error) {
+ if hr = procHcsSetupBaseOSVolume.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsSetupBaseOSVolume.Addr(), 3, uintptr(unsafe.Pointer(layerPath)), uintptr(unsafe.Pointer(volumePath)), uintptr(unsafe.Pointer(options)))
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/container.go b/vendor/github.com/Microsoft/hcsshim/container.go
new file mode 100644
index 000000000..bfd722898
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/container.go
@@ -0,0 +1,223 @@
+package hcsshim
+
+import (
+ "context"
+ "fmt"
+ "os"
+ "sync"
+ "time"
+
+ "github.com/Microsoft/hcsshim/internal/hcs"
+ "github.com/Microsoft/hcsshim/internal/hcs/schema1"
+ "github.com/Microsoft/hcsshim/internal/mergemaps"
+)
+
+// ContainerProperties holds the properties for a container and the processes running in that container
+type ContainerProperties = schema1.ContainerProperties
+
+// MemoryStats holds the memory statistics for a container
+type MemoryStats = schema1.MemoryStats
+
+// ProcessorStats holds the processor statistics for a container
+type ProcessorStats = schema1.ProcessorStats
+
+// StorageStats holds the storage statistics for a container
+type StorageStats = schema1.StorageStats
+
+// NetworkStats holds the network statistics for a container
+type NetworkStats = schema1.NetworkStats
+
+// Statistics is the structure returned by a statistics call on a container
+type Statistics = schema1.Statistics
+
+// ProcessListItem is the structure of an item returned by a ProcessList call on a container
+type ProcessListItem = schema1.ProcessListItem
+
+// MappedVirtualDiskController is the structure of an item returned by a MappedVirtualDiskList call on a container
+type MappedVirtualDiskController = schema1.MappedVirtualDiskController
+
+// RequestType is the type of request supported in ModifySystem
+type RequestType = schema1.RequestType
+
+// ResourceType is the type of resource supported in ModifySystem
+type ResourceType = schema1.ResourceType
+
+// RequestType const
+const (
+ Add = schema1.Add
+ Remove = schema1.Remove
+ Network = schema1.Network
+)
+
+// ResourceModificationRequestResponse is the structure used to send request to the container to modify the system
+// Supported resource types are Network and Request Types are Add/Remove
+type ResourceModificationRequestResponse = schema1.ResourceModificationRequestResponse
+
+type container struct {
+ system *hcs.System
+ waitOnce sync.Once
+ waitErr error
+ waitCh chan struct{}
+}
+
+// createContainerAdditionalJSON is read from the environment at initialisation
+// time. It allows an environment variable to define additional JSON which
+// is merged in the CreateComputeSystem call to HCS.
+var createContainerAdditionalJSON []byte
+
+func init() {
+ createContainerAdditionalJSON = ([]byte)(os.Getenv("HCSSHIM_CREATECONTAINER_ADDITIONALJSON"))
+}
+
+// CreateContainer creates a new container with the given configuration but does not start it.
+func CreateContainer(id string, c *ContainerConfig) (Container, error) {
+ fullConfig, err := mergemaps.MergeJSON(c, createContainerAdditionalJSON)
+ if err != nil {
+ return nil, fmt.Errorf("failed to merge additional JSON '%s': %s", createContainerAdditionalJSON, err)
+ }
+
+ system, err := hcs.CreateComputeSystem(context.Background(), id, fullConfig)
+ if err != nil {
+ return nil, err
+ }
+ return &container{system: system}, err
+}
+
+// OpenContainer opens an existing container by ID.
+func OpenContainer(id string) (Container, error) {
+ system, err := hcs.OpenComputeSystem(context.Background(), id)
+ if err != nil {
+ return nil, err
+ }
+ return &container{system: system}, err
+}
+
+// GetContainers gets a list of the containers on the system that match the query
+func GetContainers(q ComputeSystemQuery) ([]ContainerProperties, error) {
+ return hcs.GetComputeSystems(context.Background(), q)
+}
+
+// Start synchronously starts the container.
+func (container *container) Start() error {
+ return convertSystemError(container.system.Start(context.Background()), container)
+}
+
+// Shutdown requests a container shutdown, but it may not actually be shutdown until Wait() succeeds.
+func (container *container) Shutdown() error {
+ err := container.system.Shutdown(context.Background())
+ if err != nil {
+ return convertSystemError(err, container)
+ }
+ return &ContainerError{Container: container, Err: ErrVmcomputeOperationPending, Operation: "hcsshim::ComputeSystem::Shutdown"}
+}
+
+// Terminate requests a container terminate, but it may not actually be terminated until Wait() succeeds.
+func (container *container) Terminate() error {
+ err := container.system.Terminate(context.Background())
+ if err != nil {
+ return convertSystemError(err, container)
+ }
+ return &ContainerError{Container: container, Err: ErrVmcomputeOperationPending, Operation: "hcsshim::ComputeSystem::Terminate"}
+}
+
+// Wait synchronously waits for the container to shutdown or terminate.
+func (container *container) Wait() error {
+ err := container.system.Wait()
+ if err == nil {
+ err = container.system.ExitError()
+ }
+ return convertSystemError(err, container)
+}
+
+// WaitTimeout synchronously waits for the container to terminate or the duration to elapse. It
+// returns a ContainerError wrapping ErrTimeout if the timeout elapses.
+func (container *container) WaitTimeout(timeout time.Duration) error {
+ container.waitOnce.Do(func() {
+ container.waitCh = make(chan struct{})
+ go func() {
+ container.waitErr = container.Wait()
+ close(container.waitCh)
+ }()
+ })
+ t := time.NewTimer(timeout)
+ defer t.Stop()
+ select {
+ case <-t.C:
+ return &ContainerError{Container: container, Err: ErrTimeout, Operation: "hcsshim::ComputeSystem::Wait"}
+ case <-container.waitCh:
+ return container.waitErr
+ }
+}
+
+// Pause pauses the execution of a container.
+func (container *container) Pause() error {
+ return convertSystemError(container.system.Pause(context.Background()), container)
+}
+
+// Resume resumes the execution of a container.
+func (container *container) Resume() error {
+ return convertSystemError(container.system.Resume(context.Background()), container)
+}
+
+// HasPendingUpdates returns true if the container has updates pending to install
+func (container *container) HasPendingUpdates() (bool, error) {
+ return false, nil
+}
+
+// Statistics returns statistics for the container. This is a legacy v1 call
+func (container *container) Statistics() (Statistics, error) {
+ properties, err := container.system.Properties(context.Background(), schema1.PropertyTypeStatistics)
+ if err != nil {
+ return Statistics{}, convertSystemError(err, container)
+ }
+
+ return properties.Statistics, nil
+}
+
+// ProcessList returns an array of ProcessListItems for the container. This is a legacy v1 call
+func (container *container) ProcessList() ([]ProcessListItem, error) {
+ properties, err := container.system.Properties(context.Background(), schema1.PropertyTypeProcessList)
+ if err != nil {
+ return nil, convertSystemError(err, container)
+ }
+
+ return properties.ProcessList, nil
+}
+
+// MappedVirtualDisks returns virtual disks mapped to the container, indexed by controller. This is a legacy v1 call
+func (container *container) MappedVirtualDisks() (map[int]MappedVirtualDiskController, error) {
+ properties, err := container.system.Properties(context.Background(), schema1.PropertyTypeMappedVirtualDisk)
+ if err != nil {
+ return nil, convertSystemError(err, container)
+ }
+
+ return properties.MappedVirtualDiskControllers, nil
+}
+
+// CreateProcess launches a new process within the container.
+func (container *container) CreateProcess(c *ProcessConfig) (Process, error) {
+ p, err := container.system.CreateProcess(context.Background(), c)
+ if err != nil {
+ return nil, convertSystemError(err, container)
+ }
+ return &process{p: p.(*hcs.Process)}, nil
+}
+
+// OpenProcess gets an interface to an existing process within the container.
+func (container *container) OpenProcess(pid int) (Process, error) {
+ p, err := container.system.OpenProcess(context.Background(), pid)
+ if err != nil {
+ return nil, convertSystemError(err, container)
+ }
+ return &process{p: p}, nil
+}
+
+// Close cleans up any state associated with the container but does not terminate or wait for it.
+func (container *container) Close() error {
+ return convertSystemError(container.system.Close(), container)
+}
+
+// Modify sends a resource modification request to the container
+func (container *container) Modify(config *ResourceModificationRequestResponse) error {
+ return convertSystemError(container.system.Modify(context.Background(), config), container)
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/errors.go b/vendor/github.com/Microsoft/hcsshim/errors.go
new file mode 100644
index 000000000..f367022e7
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/errors.go
@@ -0,0 +1,245 @@
+package hcsshim
+
+import (
+ "fmt"
+ "syscall"
+
+ "github.com/Microsoft/hcsshim/internal/hns"
+
+ "github.com/Microsoft/hcsshim/internal/hcs"
+ "github.com/Microsoft/hcsshim/internal/hcserror"
+)
+
+var (
+	// ErrComputeSystemDoesNotExist is an error encountered when the container being operated on no longer exists
+ ErrComputeSystemDoesNotExist = hcs.ErrComputeSystemDoesNotExist
+
+ // ErrElementNotFound is an error encountered when the object being referenced does not exist
+ ErrElementNotFound = hcs.ErrElementNotFound
+
+	// ErrNotSupported is an error encountered when the requested operation is not supported
+ ErrNotSupported = hcs.ErrNotSupported
+
+ // ErrInvalidData is an error encountered when the request being sent to hcs is invalid/unsupported
+ // decimal -2147024883 / hex 0x8007000d
+ ErrInvalidData = hcs.ErrInvalidData
+
+ // ErrHandleClose is an error encountered when the handle generating the notification being waited on has been closed
+ ErrHandleClose = hcs.ErrHandleClose
+
+ // ErrAlreadyClosed is an error encountered when using a handle that has been closed by the Close method
+ ErrAlreadyClosed = hcs.ErrAlreadyClosed
+
+ // ErrInvalidNotificationType is an error encountered when an invalid notification type is used
+ ErrInvalidNotificationType = hcs.ErrInvalidNotificationType
+
+ // ErrInvalidProcessState is an error encountered when the process is not in a valid state for the requested operation
+ ErrInvalidProcessState = hcs.ErrInvalidProcessState
+
+ // ErrTimeout is an error encountered when waiting on a notification times out
+ ErrTimeout = hcs.ErrTimeout
+
+ // ErrUnexpectedContainerExit is the error encountered when a container exits while waiting for
+ // a different expected notification
+ ErrUnexpectedContainerExit = hcs.ErrUnexpectedContainerExit
+
+ // ErrUnexpectedProcessAbort is the error encountered when communication with the compute service
+ // is lost while waiting for a notification
+ ErrUnexpectedProcessAbort = hcs.ErrUnexpectedProcessAbort
+
+ // ErrUnexpectedValue is an error encountered when hcs returns an invalid value
+ ErrUnexpectedValue = hcs.ErrUnexpectedValue
+
+ // ErrVmcomputeAlreadyStopped is an error encountered when a shutdown or terminate request is made on a stopped container
+ ErrVmcomputeAlreadyStopped = hcs.ErrVmcomputeAlreadyStopped
+
+ // ErrVmcomputeOperationPending is an error encountered when the operation is being completed asynchronously
+ ErrVmcomputeOperationPending = hcs.ErrVmcomputeOperationPending
+
+ // ErrVmcomputeOperationInvalidState is an error encountered when the compute system is not in a valid state for the requested operation
+ ErrVmcomputeOperationInvalidState = hcs.ErrVmcomputeOperationInvalidState
+
+ // ErrProcNotFound is an error encountered when a procedure look up fails.
+ ErrProcNotFound = hcs.ErrProcNotFound
+
+ // ErrVmcomputeOperationAccessIsDenied is an error which can be encountered when enumerating compute systems in RS1/RS2
+ // builds when the underlying silo might be in the process of terminating. HCS was fixed in RS3.
+ ErrVmcomputeOperationAccessIsDenied = hcs.ErrVmcomputeOperationAccessIsDenied
+
+ // ErrVmcomputeInvalidJSON is an error encountered when the compute system does not support/understand the messages sent by management
+ ErrVmcomputeInvalidJSON = hcs.ErrVmcomputeInvalidJSON
+
+	// ErrVmcomputeUnknownMessage is an error encountered when the guest compute system doesn't support the message
+ ErrVmcomputeUnknownMessage = hcs.ErrVmcomputeUnknownMessage
+
+	// ErrPlatformNotSupported is an error encountered when hcs doesn't support the request
+ ErrPlatformNotSupported = hcs.ErrPlatformNotSupported
+)
+
+type EndpointNotFoundError = hns.EndpointNotFoundError
+type NetworkNotFoundError = hns.NetworkNotFoundError
+
+// ProcessError is an error encountered in HCS during an operation on a Process object
+type ProcessError struct {
+ Process *process
+ Operation string
+ Err error
+ Events []hcs.ErrorEvent
+}
+
+// ContainerError is an error encountered in HCS during an operation on a Container object
+type ContainerError struct {
+ Container *container
+ Operation string
+ Err error
+ Events []hcs.ErrorEvent
+}
+
+func (e *ContainerError) Error() string {
+ if e == nil {
+ return ""
+ }
+
+ if e.Container == nil {
+ return "unexpected nil container for error: " + e.Err.Error()
+ }
+
+ s := "container " + e.Container.system.ID()
+
+ if e.Operation != "" {
+ s += " encountered an error during " + e.Operation
+ }
+
+ switch e.Err.(type) {
+ case nil:
+ break
+ case syscall.Errno:
+ s += fmt.Sprintf(": failure in a Windows system call: %s (0x%x)", e.Err, hcserror.Win32FromError(e.Err))
+ default:
+ s += fmt.Sprintf(": %s", e.Err.Error())
+ }
+
+ for _, ev := range e.Events {
+ s += "\n" + ev.String()
+ }
+
+ return s
+}
+
+func (e *ProcessError) Error() string {
+ if e == nil {
+ return ""
+ }
+
+ if e.Process == nil {
+ return "Unexpected nil process for error: " + e.Err.Error()
+ }
+
+ s := fmt.Sprintf("process %d in container %s", e.Process.p.Pid(), e.Process.p.SystemID())
+ if e.Operation != "" {
+ s += " encountered an error during " + e.Operation
+ }
+
+ switch e.Err.(type) {
+ case nil:
+ break
+ case syscall.Errno:
+ s += fmt.Sprintf(": failure in a Windows system call: %s (0x%x)", e.Err, hcserror.Win32FromError(e.Err))
+ default:
+ s += fmt.Sprintf(": %s", e.Err.Error())
+ }
+
+ for _, ev := range e.Events {
+ s += "\n" + ev.String()
+ }
+
+ return s
+}
+
+// IsNotExist checks if an error is caused by the Container or Process not existing.
+// Note: Currently, ErrElementNotFound can mean that a Process has either
+// already exited, or does not exist. Both IsAlreadyStopped and IsNotExist
+// will currently return true when the error is ErrElementNotFound.
+func IsNotExist(err error) bool {
+ if _, ok := err.(EndpointNotFoundError); ok {
+ return true
+ }
+ if _, ok := err.(NetworkNotFoundError); ok {
+ return true
+ }
+ return hcs.IsNotExist(getInnerError(err))
+}
+
+// IsAlreadyClosed checks if an error is caused by the Container or Process having been
+// already closed by a call to the Close() method.
+func IsAlreadyClosed(err error) bool {
+ return hcs.IsAlreadyClosed(getInnerError(err))
+}
+
+// IsPending returns a boolean indicating whether the error is that
+// the requested operation is being completed in the background.
+func IsPending(err error) bool {
+ return hcs.IsPending(getInnerError(err))
+}
+
+// IsTimeout returns a boolean indicating whether the error is caused by
+// a timeout waiting for the operation to complete.
+func IsTimeout(err error) bool {
+ return hcs.IsTimeout(getInnerError(err))
+}
+
+// IsAlreadyStopped returns a boolean indicating whether the error is caused by
+// a Container or Process being already stopped.
+// Note: Currently, ErrElementNotFound can mean that a Process has either
+// already exited, or does not exist. Both IsAlreadyStopped and IsNotExist
+// will currently return true when the error is ErrElementNotFound.
+func IsAlreadyStopped(err error) bool {
+ return hcs.IsAlreadyStopped(getInnerError(err))
+}
+
+// IsNotSupported returns a boolean indicating whether the error is caused by
+// unsupported platform requests
+// Note: Currently an unsupported platform request can mean that any of
+// ErrVmcomputeInvalidJSON, ErrInvalidData, ErrNotSupported or ErrVmcomputeUnknownMessage
+// was returned by the platform
+func IsNotSupported(err error) bool {
+ return hcs.IsNotSupported(getInnerError(err))
+}
+
+// IsOperationInvalidState returns true when err is caused by
+// `ErrVmcomputeOperationInvalidState`.
+func IsOperationInvalidState(err error) bool {
+ return hcs.IsOperationInvalidState(getInnerError(err))
+}
+
+// IsAccessIsDenied returns true when err is caused by
+// `ErrVmcomputeOperationAccessIsDenied`.
+func IsAccessIsDenied(err error) bool {
+ return hcs.IsAccessIsDenied(getInnerError(err))
+}
+
+func getInnerError(err error) error {
+ switch pe := err.(type) {
+ case nil:
+ return nil
+ case *ContainerError:
+ err = pe.Err
+ case *ProcessError:
+ err = pe.Err
+ }
+ return err
+}
+
+func convertSystemError(err error, c *container) error {
+ if serr, ok := err.(*hcs.SystemError); ok {
+ return &ContainerError{Container: c, Operation: serr.Op, Err: serr.Err, Events: serr.Events}
+ }
+ return err
+}
+
+func convertProcessError(err error, p *process) error {
+ if perr, ok := err.(*hcs.ProcessError); ok {
+ return &ProcessError{Process: p, Operation: perr.Op, Err: perr.Err, Events: perr.Events}
+ }
+ return err
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/functional_tests.ps1 b/vendor/github.com/Microsoft/hcsshim/functional_tests.ps1
new file mode 100644
index 000000000..ce6edbcf3
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/functional_tests.ps1
@@ -0,0 +1,12 @@
+# Requirements so far:
+# dockerd running
+# - image microsoft/nanoserver (matching host base image) docker load -i c:\baseimages\nanoserver.tar
+# - image alpine (linux) docker pull --platform=linux alpine
+
+
+# TODO: Add this as a parameter for debugging, i.e. "functional-tests -debug=$true"
+#$env:HCSSHIM_FUNCTIONAL_TESTS_DEBUG="yes please"
+
+#pushd uvm
+go test -v -tags "functional uvmcreate uvmscratch uvmscsi uvmvpmem uvmvsmb uvmp9" ./...
+#popd
\ No newline at end of file
diff --git a/vendor/github.com/Microsoft/hcsshim/hcsshim.go b/vendor/github.com/Microsoft/hcsshim/hcsshim.go
new file mode 100644
index 000000000..ceb3ac85e
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/hcsshim.go
@@ -0,0 +1,28 @@
+// Shim for the Host Compute Service (HCS) to manage Windows Server
+// containers and Hyper-V containers.
+
+package hcsshim
+
+import (
+ "syscall"
+
+ "github.com/Microsoft/hcsshim/internal/hcserror"
+)
+
+//go:generate go run mksyscall_windows.go -output zsyscall_windows.go hcsshim.go
+
+//sys SetCurrentThreadCompartmentId(compartmentId uint32) (hr error) = iphlpapi.SetCurrentThreadCompartmentId
+
+const (
+ // Specific user-visible exit codes
+ WaitErrExecFailed = 32767
+
+ ERROR_GEN_FAILURE = hcserror.ERROR_GEN_FAILURE
+ ERROR_SHUTDOWN_IN_PROGRESS = syscall.Errno(1115)
+ WSAEINVAL = syscall.Errno(10022)
+
+ // Timeout on wait calls
+ TimeoutInfinite = 0xFFFFFFFF
+)
+
+type HcsError = hcserror.HcsError
diff --git a/vendor/github.com/Microsoft/hcsshim/hnsendpoint.go b/vendor/github.com/Microsoft/hcsshim/hnsendpoint.go
new file mode 100644
index 000000000..9e0059447
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/hnsendpoint.go
@@ -0,0 +1,118 @@
+package hcsshim
+
+import (
+ "github.com/Microsoft/hcsshim/internal/hns"
+)
+
+// HNSEndpoint represents a network endpoint in HNS
+type HNSEndpoint = hns.HNSEndpoint
+
+// HNSEndpointStats represents the stats for a network endpoint in HNS
+type HNSEndpointStats = hns.EndpointStats
+
+// Namespace represents a Compartment.
+type Namespace = hns.Namespace
+
+// SystemType represents the type of the system on which actions are done
+type SystemType string
+
+// SystemType const
+const (
+ ContainerType SystemType = "Container"
+ VirtualMachineType SystemType = "VirtualMachine"
+ HostType SystemType = "Host"
+)
+
+// EndpointAttachDetachRequest is the structure used to send a request to the container to modify the system.
+// Supported resource types are Network and request types are Add/Remove
+type EndpointAttachDetachRequest = hns.EndpointAttachDetachRequest
+
+// EndpointResquestResponse is the object used to get the endpoint request response
+type EndpointResquestResponse = hns.EndpointResquestResponse
+
+// HNSEndpointRequest makes an HNS call to modify/query a network endpoint
+func HNSEndpointRequest(method, path, request string) (*HNSEndpoint, error) {
+ return hns.HNSEndpointRequest(method, path, request)
+}
+
+// HNSListEndpointRequest makes an HNS call to query the list of available endpoints
+func HNSListEndpointRequest() ([]HNSEndpoint, error) {
+ return hns.HNSListEndpointRequest()
+}
+
+// HotAttachEndpoint makes an HCS call to attach the endpoint to the container
+func HotAttachEndpoint(containerID string, endpointID string) error {
+ endpoint, err := GetHNSEndpointByID(endpointID)
+ if err != nil {
+ return err
+ }
+ isAttached, err := endpoint.IsAttached(containerID)
+ if isAttached {
+ return err
+ }
+ return modifyNetworkEndpoint(containerID, endpointID, Add)
+}
+
+// HotDetachEndpoint makes an HCS call to detach the endpoint from the container
+func HotDetachEndpoint(containerID string, endpointID string) error {
+ endpoint, err := GetHNSEndpointByID(endpointID)
+ if err != nil {
+ return err
+ }
+ isAttached, err := endpoint.IsAttached(containerID)
+ if !isAttached {
+ return err
+ }
+ return modifyNetworkEndpoint(containerID, endpointID, Remove)
+}
+
+// modifyContainer modifies the container corresponding to the container id by sending a request
+func modifyContainer(id string, request *ResourceModificationRequestResponse) error {
+ container, err := OpenContainer(id)
+ if err != nil {
+ if IsNotExist(err) {
+ return ErrComputeSystemDoesNotExist
+ }
+ return getInnerError(err)
+ }
+ defer container.Close()
+ err = container.Modify(request)
+ if err != nil {
+ if IsNotSupported(err) {
+ return ErrPlatformNotSupported
+ }
+ return getInnerError(err)
+ }
+
+ return nil
+}
+
+func modifyNetworkEndpoint(containerID string, endpointID string, request RequestType) error {
+ requestMessage := &ResourceModificationRequestResponse{
+ Resource: Network,
+ Request: request,
+ Data: endpointID,
+ }
+ err := modifyContainer(containerID, requestMessage)
+
+ if err != nil {
+ return err
+ }
+
+ return nil
+}
+
+// GetHNSEndpointByID get the Endpoint by ID
+func GetHNSEndpointByID(endpointID string) (*HNSEndpoint, error) {
+ return hns.GetHNSEndpointByID(endpointID)
+}
+
+// GetHNSEndpointByName gets the endpoint filtered by Name
+func GetHNSEndpointByName(endpointName string) (*HNSEndpoint, error) {
+ return hns.GetHNSEndpointByName(endpointName)
+}
+
+// GetHNSEndpointStats gets the endpoint stats by ID
+func GetHNSEndpointStats(endpointName string) (*HNSEndpointStats, error) {
+ return hns.GetHNSEndpointStats(endpointName)
+}
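`modifyNetworkEndpoint` above shows that hot attach/detach is just a `Network`-resource modify request whose `Data` is the endpoint ID. The sketch below isolates that request construction; every type here is a local stand-in for the schema1 aliases in this package, not the real hcsshim definitions.

```go
package main

import "fmt"

// RequestType and ResourceType are local stand-ins for the schema1 aliases.
type RequestType string
type ResourceType string

const (
	Add     RequestType  = "Add"
	Remove  RequestType  = "Remove"
	Network ResourceType = "Network"
)

// ResourceModificationRequestResponse mirrors the document shape sent to HCS:
// a resource type, a request verb, and resource-specific data.
type ResourceModificationRequestResponse struct {
	Resource ResourceType
	Request  RequestType
	Data     interface{}
}

// buildNetworkModify mirrors modifyNetworkEndpoint: attach/detach is a
// Network-resource modify request whose Data is the endpoint ID.
func buildNetworkModify(endpointID string, request RequestType) *ResourceModificationRequestResponse {
	return &ResourceModificationRequestResponse{
		Resource: Network,
		Request:  request,
		Data:     endpointID,
	}
}

func main() {
	req := buildNetworkModify("ep-1234", Add) // "ep-1234" is a made-up endpoint ID
	fmt.Printf("%s %s %v\n", req.Request, req.Resource, req.Data)
}
```

In the real path this request is then handed to `container.Modify`, which is why `HotAttachEndpoint` only needs an endpoint ID and a container ID.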
diff --git a/vendor/github.com/Microsoft/hcsshim/hnsglobals.go b/vendor/github.com/Microsoft/hcsshim/hnsglobals.go
new file mode 100644
index 000000000..2b5381904
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/hnsglobals.go
@@ -0,0 +1,16 @@
+package hcsshim
+
+import (
+ "github.com/Microsoft/hcsshim/internal/hns"
+)
+
+type HNSGlobals = hns.HNSGlobals
+type HNSVersion = hns.HNSVersion
+
+var (
+ HNSVersion1803 = hns.HNSVersion1803
+)
+
+func GetHNSGlobals() (*HNSGlobals, error) {
+ return hns.GetHNSGlobals()
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/hnsnetwork.go b/vendor/github.com/Microsoft/hcsshim/hnsnetwork.go
new file mode 100644
index 000000000..f775fa1d0
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/hnsnetwork.go
@@ -0,0 +1,36 @@
+package hcsshim
+
+import (
+ "github.com/Microsoft/hcsshim/internal/hns"
+)
+
+// Subnet is associated with a network and represents a list
+// of subnets available to the network
+type Subnet = hns.Subnet
+
+// MacPool is associated with a network and represents a list
+// of MAC addresses available to the network
+type MacPool = hns.MacPool
+
+// HNSNetwork represents a network in HNS
+type HNSNetwork = hns.HNSNetwork
+
+// HNSNetworkRequest makes a call into HNS to update/query a single network
+func HNSNetworkRequest(method, path, request string) (*HNSNetwork, error) {
+ return hns.HNSNetworkRequest(method, path, request)
+}
+
+// HNSListNetworkRequest makes an HNS call to query the list of available networks
+func HNSListNetworkRequest(method, path, request string) ([]HNSNetwork, error) {
+ return hns.HNSListNetworkRequest(method, path, request)
+}
+
+// GetHNSNetworkByID gets the network filtered by ID
+func GetHNSNetworkByID(networkID string) (*HNSNetwork, error) {
+ return hns.GetHNSNetworkByID(networkID)
+}
+
+// GetHNSNetworkByName gets the network filtered by Name
+func GetHNSNetworkByName(networkName string) (*HNSNetwork, error) {
+ return hns.GetHNSNetworkByName(networkName)
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/hnspolicy.go b/vendor/github.com/Microsoft/hcsshim/hnspolicy.go
new file mode 100644
index 000000000..00ab26364
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/hnspolicy.go
@@ -0,0 +1,60 @@
+package hcsshim
+
+import (
+ "github.com/Microsoft/hcsshim/internal/hns"
+)
+
+// PolicyType is the type of policy supported in HNS requests
+type PolicyType = hns.PolicyType
+
+// PolicyType const
+const (
+ Nat = hns.Nat
+ ACL = hns.ACL
+ PA = hns.PA
+ VLAN = hns.VLAN
+ VSID = hns.VSID
+ VNet = hns.VNet
+ L2Driver = hns.L2Driver
+ Isolation = hns.Isolation
+ QOS = hns.QOS
+ OutboundNat = hns.OutboundNat
+ ExternalLoadBalancer = hns.ExternalLoadBalancer
+ Route = hns.Route
+ Proxy = hns.Proxy
+)
+
+type ProxyPolicy = hns.ProxyPolicy
+
+type NatPolicy = hns.NatPolicy
+
+type QosPolicy = hns.QosPolicy
+
+type IsolationPolicy = hns.IsolationPolicy
+
+type VlanPolicy = hns.VlanPolicy
+
+type VsidPolicy = hns.VsidPolicy
+
+type PaPolicy = hns.PaPolicy
+
+type OutboundNatPolicy = hns.OutboundNatPolicy
+
+type ActionType = hns.ActionType
+type DirectionType = hns.DirectionType
+type RuleType = hns.RuleType
+
+const (
+ Allow = hns.Allow
+ Block = hns.Block
+
+ In = hns.In
+ Out = hns.Out
+
+ Host = hns.Host
+ Switch = hns.Switch
+)
+
+type ACLPolicy = hns.ACLPolicy
+
+type Policy = hns.Policy
diff --git a/vendor/github.com/Microsoft/hcsshim/hnspolicylist.go b/vendor/github.com/Microsoft/hcsshim/hnspolicylist.go
new file mode 100644
index 000000000..55aaa4a50
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/hnspolicylist.go
@@ -0,0 +1,47 @@
+package hcsshim
+
+import (
+ "github.com/Microsoft/hcsshim/internal/hns"
+)
+
+// RoutePolicy is a structure defining schema for Route based Policy
+type RoutePolicy = hns.RoutePolicy
+
+// ELBPolicy is a structure defining schema for ELB LoadBalancing based Policy
+type ELBPolicy = hns.ELBPolicy
+
+// LBPolicy is a structure defining schema for LoadBalancing based Policy
+type LBPolicy = hns.LBPolicy
+
+// PolicyList is a structure defining schema for Policy list request
+type PolicyList = hns.PolicyList
+
+// HNSPolicyListRequest makes a call into HNS to update/query a single network
+func HNSPolicyListRequest(method, path, request string) (*PolicyList, error) {
+ return hns.HNSPolicyListRequest(method, path, request)
+}
+
+// HNSListPolicyListRequest gets all the policy list
+func HNSListPolicyListRequest() ([]PolicyList, error) {
+ return hns.HNSListPolicyListRequest()
+}
+
+// PolicyListRequest makes an HNS call to modify/query a network policy list
+func PolicyListRequest(method, path, request string) (*PolicyList, error) {
+ return hns.PolicyListRequest(method, path, request)
+}
+
+// GetPolicyListByID get the policy list by ID
+func GetPolicyListByID(policyListID string) (*PolicyList, error) {
+ return hns.GetPolicyListByID(policyListID)
+}
+
+// AddLoadBalancer adds a load balancer policy list for the specified endpoints
+func AddLoadBalancer(endpoints []HNSEndpoint, isILB bool, sourceVIP, vip string, protocol uint16, internalPort uint16, externalPort uint16) (*PolicyList, error) {
+ return hns.AddLoadBalancer(endpoints, isILB, sourceVIP, vip, protocol, internalPort, externalPort)
+}
+
+// AddRoute adds a route policy list for the specified endpoints
+func AddRoute(endpoints []HNSEndpoint, destinationPrefix string, nextHop string, encapEnabled bool) (*PolicyList, error) {
+ return hns.AddRoute(endpoints, destinationPrefix, nextHop, encapEnabled)
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/hnssupport.go b/vendor/github.com/Microsoft/hcsshim/hnssupport.go
new file mode 100644
index 000000000..69405244b
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/hnssupport.go
@@ -0,0 +1,13 @@
+package hcsshim
+
+import (
+ "github.com/Microsoft/hcsshim/internal/hns"
+)
+
+type HNSSupportedFeatures = hns.HNSSupportedFeatures
+
+type HNSAclFeatures = hns.HNSAclFeatures
+
+func GetHNSSupportedFeatures() HNSSupportedFeatures {
+ return hns.GetHNSSupportedFeatures()
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/interface.go b/vendor/github.com/Microsoft/hcsshim/interface.go
new file mode 100644
index 000000000..300eb5996
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/interface.go
@@ -0,0 +1,114 @@
+package hcsshim
+
+import (
+ "io"
+ "time"
+
+ "github.com/Microsoft/hcsshim/internal/hcs/schema1"
+)
+
+// ProcessConfig is used as both the input of Container.CreateProcess
+// and to convert the parameters to JSON for passing onto the HCS
+type ProcessConfig = schema1.ProcessConfig
+
+type Layer = schema1.Layer
+type MappedDir = schema1.MappedDir
+type MappedPipe = schema1.MappedPipe
+type HvRuntime = schema1.HvRuntime
+type MappedVirtualDisk = schema1.MappedVirtualDisk
+
+// AssignedDevice represents a device that has been directly assigned to a container
+//
+// NOTE: Support added in RS5
+type AssignedDevice = schema1.AssignedDevice
+
+// ContainerConfig is used as both the input of CreateContainer
+// and to convert the parameters to JSON for passing onto the HCS
+type ContainerConfig = schema1.ContainerConfig
+
+type ComputeSystemQuery = schema1.ComputeSystemQuery
+
+// Container represents a created (but not necessarily running) container.
+type Container interface {
+ // Start synchronously starts the container.
+ Start() error
+
+ // Shutdown requests a container shutdown, but it may not actually be shutdown until Wait() succeeds.
+ Shutdown() error
+
+ // Terminate requests a container terminate, but it may not actually be terminated until Wait() succeeds.
+ Terminate() error
+
+	// Wait synchronously waits for the container to shutdown or terminate.
+ Wait() error
+
+	// WaitTimeout synchronously waits for the container to terminate or the duration to elapse. It
+	// returns an error if the timeout elapses.
+ WaitTimeout(time.Duration) error
+
+ // Pause pauses the execution of a container.
+ Pause() error
+
+ // Resume resumes the execution of a container.
+ Resume() error
+
+ // HasPendingUpdates returns true if the container has updates pending to install.
+ HasPendingUpdates() (bool, error)
+
+ // Statistics returns statistics for a container.
+ Statistics() (Statistics, error)
+
+ // ProcessList returns details for the processes in a container.
+ ProcessList() ([]ProcessListItem, error)
+
+ // MappedVirtualDisks returns virtual disks mapped to a utility VM, indexed by controller
+ MappedVirtualDisks() (map[int]MappedVirtualDiskController, error)
+
+ // CreateProcess launches a new process within the container.
+ CreateProcess(c *ProcessConfig) (Process, error)
+
+ // OpenProcess gets an interface to an existing process within the container.
+ OpenProcess(pid int) (Process, error)
+
+ // Close cleans up any state associated with the container but does not terminate or wait for it.
+ Close() error
+
+	// Modify sends a resource modification request to the container
+ Modify(config *ResourceModificationRequestResponse) error
+}
+
+// Process represents a running or exited process.
+type Process interface {
+ // Pid returns the process ID of the process within the container.
+ Pid() int
+
+ // Kill signals the process to terminate but does not wait for it to finish terminating.
+ Kill() error
+
+ // Wait waits for the process to exit.
+ Wait() error
+
+	// WaitTimeout waits for the process to exit or the duration to elapse. It returns
+	// an error if the timeout elapses.
+ WaitTimeout(time.Duration) error
+
+ // ExitCode returns the exit code of the process. The process must have
+ // already terminated.
+ ExitCode() (int, error)
+
+ // ResizeConsole resizes the console of the process.
+ ResizeConsole(width, height uint16) error
+
+ // Stdio returns the stdin, stdout, and stderr pipes, respectively. Closing
+ // these pipes does not close the underlying pipes; it should be possible to
+ // call this multiple times to get multiple interfaces.
+ Stdio() (io.WriteCloser, io.ReadCloser, io.ReadCloser, error)
+
+ // CloseStdin closes the write side of the stdin pipe so that the process is
+ // notified on the read side that there is no more data in stdin.
+ CloseStdin() error
+
+ // Close cleans up any state associated with the process but does not kill
+ // or wait on it.
+ Close() error
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/cow/cow.go b/vendor/github.com/Microsoft/hcsshim/internal/cow/cow.go
new file mode 100644
index 000000000..f46af33bb
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/cow/cow.go
@@ -0,0 +1,97 @@
+package cow
+
+import (
+ "context"
+ "io"
+
+ "github.com/Microsoft/hcsshim/internal/hcs/schema1"
+ hcsschema "github.com/Microsoft/hcsshim/internal/hcs/schema2"
+)
+
+// Process is the interface for an OS process running in a container or utility VM.
+type Process interface {
+ // Close releases resources associated with the process and closes the
+ // writer and readers returned by Stdio. Depending on the implementation,
+ // this may also terminate the process.
+ Close() error
+ // CloseStdin causes the process's stdin handle to receive EOF/EPIPE/whatever
+ // is appropriate to indicate that no more data is available.
+ CloseStdin(ctx context.Context) error
+ // CloseStdout closes the stdout connection to the process. It is used to indicate
+ // that we are done receiving output on the shim side.
+ CloseStdout(ctx context.Context) error
+ // CloseStderr closes the stderr connection to the process. It is used to indicate
+ // that we are done receiving output on the shim side.
+ CloseStderr(ctx context.Context) error
+ // Pid returns the process ID.
+ Pid() int
+ // Stdio returns the stdio streams for a process. These may be nil if a stream
+ // was not requested during CreateProcess.
+ Stdio() (_ io.Writer, _ io.Reader, _ io.Reader)
+ // ResizeConsole resizes the virtual terminal associated with the process.
+ ResizeConsole(ctx context.Context, width, height uint16) error
+ // Kill sends a SIGKILL or equivalent signal to the process and returns whether
+ // the signal was delivered. It does not wait for the process to terminate.
+ Kill(ctx context.Context) (bool, error)
+ // Signal sends a signal to the process and returns whether the signal was
+ // delivered. The input is OS specific (either
+ // guestrequest.SignalProcessOptionsWCOW or
+ // guestrequest.SignalProcessOptionsLCOW). It does not wait for the process
+ // to terminate.
+ Signal(ctx context.Context, options interface{}) (bool, error)
+ // Wait waits for the process to complete, or for a connection to the process to be
+ // terminated by some error condition (including calling Close).
+ Wait() error
+ // ExitCode returns the exit code of the process. Returns an error if the process is
+ // not running.
+ ExitCode() (int, error)
+}
+
+// ProcessHost is the interface for creating processes.
+type ProcessHost interface {
+ // CreateProcess creates a process. The configuration is host specific
+ // (either hcsschema.ProcessParameters or lcow.ProcessParameters).
+ CreateProcess(ctx context.Context, config interface{}) (Process, error)
+ // OS returns the host's operating system, "linux" or "windows".
+ OS() string
+ // IsOCI specifies whether this is an OCI-compliant process host. If true,
+ // then the configuration passed to CreateProcess should have an OCI process
+ // spec (or nil if this is the initial process in an OCI container).
+ // Otherwise, it should have the HCS-specific process parameters.
+ IsOCI() bool
+}
+
+// Container is the interface for container objects, either running on the host or
+// in a utility VM.
+type Container interface {
+ ProcessHost
+ // Close releases the resources associated with the container. Depending on
+ // the implementation, this may also terminate the container.
+ Close() error
+ // ID returns the container ID.
+ ID() string
+ // Properties returns the requested container properties targeting a V1 schema container.
+ Properties(ctx context.Context, types ...schema1.PropertyType) (*schema1.ContainerProperties, error)
+ // PropertiesV2 returns the requested container properties targeting a V2 schema container.
+ PropertiesV2(ctx context.Context, types ...hcsschema.PropertyType) (*hcsschema.Properties, error)
+ // Start starts a container.
+ Start(ctx context.Context) error
+ // Shutdown sends a shutdown request to the container (but does not wait for
+ // the shutdown to complete).
+ Shutdown(ctx context.Context) error
+ // Terminate sends a terminate request to the container (but does not wait
+ // for the terminate to complete).
+ Terminate(ctx context.Context) error
+ // Wait waits for the container to terminate, or for the connection to the
+ // container to be terminated by some error condition (including calling
+ // Close).
+ Wait() error
+ // WaitChannel returns the wait channel of the container.
+ WaitChannel() <-chan struct{}
+ // WaitError returns the container termination error.
+ // This function should only be called after the channel in WaitChannel()
+ // is closed. Otherwise it is not thread safe.
+ WaitError() error
+ // Modify sends a request to modify container resources.
+ Modify(ctx context.Context, config interface{}) error
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/callback.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/callback.go
new file mode 100644
index 000000000..d13772b03
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/callback.go
@@ -0,0 +1,161 @@
+package hcs
+
+import (
+ "fmt"
+ "sync"
+ "syscall"
+
+ "github.com/Microsoft/hcsshim/internal/interop"
+ "github.com/Microsoft/hcsshim/internal/logfields"
+ "github.com/Microsoft/hcsshim/internal/vmcompute"
+ "github.com/sirupsen/logrus"
+)
+
+var (
+ nextCallback uintptr
+ callbackMap = map[uintptr]*notificationWatcherContext{}
+ callbackMapLock = sync.RWMutex{}
+
+ notificationWatcherCallback = syscall.NewCallback(notificationWatcher)
+
+ // Notifications for HCS_SYSTEM handles
+ hcsNotificationSystemExited hcsNotification = 0x00000001
+ hcsNotificationSystemCreateCompleted hcsNotification = 0x00000002
+ hcsNotificationSystemStartCompleted hcsNotification = 0x00000003
+ hcsNotificationSystemPauseCompleted hcsNotification = 0x00000004
+ hcsNotificationSystemResumeCompleted hcsNotification = 0x00000005
+ hcsNotificationSystemCrashReport hcsNotification = 0x00000006
+ hcsNotificationSystemSiloJobCreated hcsNotification = 0x00000007
+ hcsNotificationSystemSaveCompleted hcsNotification = 0x00000008
+ hcsNotificationSystemRdpEnhancedModeStateChanged hcsNotification = 0x00000009
+ hcsNotificationSystemShutdownFailed hcsNotification = 0x0000000A
+ hcsNotificationSystemGetPropertiesCompleted hcsNotification = 0x0000000B
+ hcsNotificationSystemModifyCompleted hcsNotification = 0x0000000C
+ hcsNotificationSystemCrashInitiated hcsNotification = 0x0000000D
+ hcsNotificationSystemGuestConnectionClosed hcsNotification = 0x0000000E
+
+ // Notifications for HCS_PROCESS handles
+ hcsNotificationProcessExited hcsNotification = 0x00010000
+
+ // Common notifications
+ hcsNotificationInvalid hcsNotification = 0x00000000
+ hcsNotificationServiceDisconnect hcsNotification = 0x01000000
+)
+
+type hcsNotification uint32
+
+func (hn hcsNotification) String() string {
+ switch hn {
+ case hcsNotificationSystemExited:
+ return "SystemExited"
+ case hcsNotificationSystemCreateCompleted:
+ return "SystemCreateCompleted"
+ case hcsNotificationSystemStartCompleted:
+ return "SystemStartCompleted"
+ case hcsNotificationSystemPauseCompleted:
+ return "SystemPauseCompleted"
+ case hcsNotificationSystemResumeCompleted:
+ return "SystemResumeCompleted"
+ case hcsNotificationSystemCrashReport:
+ return "SystemCrashReport"
+ case hcsNotificationSystemSiloJobCreated:
+ return "SystemSiloJobCreated"
+ case hcsNotificationSystemSaveCompleted:
+ return "SystemSaveCompleted"
+ case hcsNotificationSystemRdpEnhancedModeStateChanged:
+ return "SystemRdpEnhancedModeStateChanged"
+ case hcsNotificationSystemShutdownFailed:
+ return "SystemShutdownFailed"
+ case hcsNotificationSystemGetPropertiesCompleted:
+ return "SystemGetPropertiesCompleted"
+ case hcsNotificationSystemModifyCompleted:
+ return "SystemModifyCompleted"
+ case hcsNotificationSystemCrashInitiated:
+ return "SystemCrashInitiated"
+ case hcsNotificationSystemGuestConnectionClosed:
+ return "SystemGuestConnectionClosed"
+ case hcsNotificationProcessExited:
+ return "ProcessExited"
+ case hcsNotificationInvalid:
+ return "Invalid"
+ case hcsNotificationServiceDisconnect:
+ return "ServiceDisconnect"
+ default:
+ return fmt.Sprintf("Unknown: %d", hn)
+ }
+}
+
+type notificationChannel chan error
+
+type notificationWatcherContext struct {
+ channels notificationChannels
+ handle vmcompute.HcsCallback
+
+ systemID string
+ processID int
+}
+
+type notificationChannels map[hcsNotification]notificationChannel
+
+func newSystemChannels() notificationChannels {
+ channels := make(notificationChannels)
+ for _, notif := range []hcsNotification{
+ hcsNotificationServiceDisconnect,
+ hcsNotificationSystemExited,
+ hcsNotificationSystemCreateCompleted,
+ hcsNotificationSystemStartCompleted,
+ hcsNotificationSystemPauseCompleted,
+ hcsNotificationSystemResumeCompleted,
+ hcsNotificationSystemSaveCompleted,
+ } {
+ channels[notif] = make(notificationChannel, 1)
+ }
+ return channels
+}
+
+func newProcessChannels() notificationChannels {
+ channels := make(notificationChannels)
+ for _, notif := range []hcsNotification{
+ hcsNotificationServiceDisconnect,
+ hcsNotificationProcessExited,
+ } {
+ channels[notif] = make(notificationChannel, 1)
+ }
+ return channels
+}
+
+func closeChannels(channels notificationChannels) {
+ for _, c := range channels {
+ close(c)
+ }
+}
+
+func notificationWatcher(notificationType hcsNotification, callbackNumber uintptr, notificationStatus uintptr, notificationData *uint16) uintptr {
+ var result error
+ if int32(notificationStatus) < 0 {
+ result = interop.Win32FromHresult(notificationStatus)
+ }
+
+ callbackMapLock.RLock()
+ context := callbackMap[callbackNumber]
+ callbackMapLock.RUnlock()
+
+ if context == nil {
+ return 0
+ }
+
+ log := logrus.WithFields(logrus.Fields{
+ "notification-type": notificationType.String(),
+ "system-id": context.systemID,
+ })
+ if context.processID != 0 {
+ log.Data[logfields.ProcessID] = context.processID
+ }
+ log.Debug("HCS notification")
+
+ if channel, ok := context.channels[notificationType]; ok {
+ channel <- result
+ }
+
+ return 0
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/errors.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/errors.go
new file mode 100644
index 000000000..295d4b849
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/errors.go
@@ -0,0 +1,343 @@
+package hcs
+
+import (
+ "context"
+ "encoding/json"
+ "errors"
+ "fmt"
+ "net"
+ "syscall"
+
+ "github.com/Microsoft/hcsshim/internal/log"
+)
+
+var (
+ // ErrComputeSystemDoesNotExist is an error encountered when the container being operated on no longer exists
+ ErrComputeSystemDoesNotExist = syscall.Errno(0xc037010e)
+
+ // ErrElementNotFound is an error encountered when the object being referenced does not exist
+ ErrElementNotFound = syscall.Errno(0x490)
+
+ // ErrNotSupported is an error encountered when the requested operation is not supported
+ ErrNotSupported = syscall.Errno(0x32)
+
+ // ErrInvalidData is an error encountered when the request being sent to hcs is invalid/unsupported
+ // decimal -2147024883 / hex 0x8007000d
+ ErrInvalidData = syscall.Errno(0xd)
+
+ // ErrHandleClose is an error encountered when the handle generating the notification being waited on has been closed
+ ErrHandleClose = errors.New("hcsshim: the handle generating this notification has been closed")
+
+ // ErrAlreadyClosed is an error encountered when using a handle that has been closed by the Close method
+ ErrAlreadyClosed = errors.New("hcsshim: the handle has already been closed")
+
+ // ErrInvalidNotificationType is an error encountered when an invalid notification type is used
+ ErrInvalidNotificationType = errors.New("hcsshim: invalid notification type")
+
+ // ErrInvalidProcessState is an error encountered when the process is not in a valid state for the requested operation
+ ErrInvalidProcessState = errors.New("the process is in an invalid state for the attempted operation")
+
+ // ErrTimeout is an error encountered when waiting on a notification times out
+ ErrTimeout = errors.New("hcsshim: timeout waiting for notification")
+
+ // ErrUnexpectedContainerExit is the error encountered when a container exits while waiting for
+ // a different expected notification
+ ErrUnexpectedContainerExit = errors.New("unexpected container exit")
+
+ // ErrUnexpectedProcessAbort is the error encountered when communication with the compute service
+ // is lost while waiting for a notification
+ ErrUnexpectedProcessAbort = errors.New("lost communication with compute service")
+
+ // ErrUnexpectedValue is an error encountered when hcs returns an invalid value
+ ErrUnexpectedValue = errors.New("unexpected value returned from hcs")
+
+ // ErrVmcomputeAlreadyStopped is an error encountered when a shutdown or terminate request is made on a stopped container
+ ErrVmcomputeAlreadyStopped = syscall.Errno(0xc0370110)
+
+ // ErrVmcomputeOperationPending is an error encountered when the operation is being completed asynchronously
+ ErrVmcomputeOperationPending = syscall.Errno(0xC0370103)
+
+ // ErrVmcomputeOperationInvalidState is an error encountered when the compute system is not in a valid state for the requested operation
+ ErrVmcomputeOperationInvalidState = syscall.Errno(0xc0370105)
+
+ // ErrProcNotFound is an error encountered when a procedure look up fails.
+ ErrProcNotFound = syscall.Errno(0x7f)
+
+ // ErrVmcomputeOperationAccessIsDenied is an error which can be encountered when enumerating compute systems in RS1/RS2
+ // builds when the underlying silo might be in the process of terminating. HCS was fixed in RS3.
+ ErrVmcomputeOperationAccessIsDenied = syscall.Errno(0x5)
+
+ // ErrVmcomputeInvalidJSON is an error encountered when the compute system does not support/understand the messages sent by management
+ ErrVmcomputeInvalidJSON = syscall.Errno(0xc037010d)
+
+ // ErrVmcomputeUnknownMessage is an error encountered when the guest compute system doesn't support the message
+ ErrVmcomputeUnknownMessage = syscall.Errno(0xc037010b)
+
+ // ErrVmcomputeUnexpectedExit is an error encountered when the compute system terminates unexpectedly
+ ErrVmcomputeUnexpectedExit = syscall.Errno(0xC0370106)
+
+ // ErrPlatformNotSupported is an error encountered when hcs doesn't support the request
+ ErrPlatformNotSupported = errors.New("unsupported platform request")
+
+ // ErrProcessAlreadyStopped is returned by hcs if the process we're trying to kill has already been stopped.
+ ErrProcessAlreadyStopped = syscall.Errno(0x8037011f)
+
+ // ErrInvalidHandle is an error that can be encountered when querying the properties of a compute system when the handle to that
+ // compute system has already been closed.
+ ErrInvalidHandle = syscall.Errno(0x6)
+)
+
+type ErrorEvent struct {
+ Message string `json:"Message,omitempty"` // Fully formatted error message
+ StackTrace string `json:"StackTrace,omitempty"` // Stack trace in string form
+ Provider string `json:"Provider,omitempty"`
+ EventID uint16 `json:"EventId,omitempty"`
+ Flags uint32 `json:"Flags,omitempty"`
+ Source string `json:"Source,omitempty"`
+ //Data []EventData `json:"Data,omitempty"` // Omit this as HCS doesn't encode this well. It's more confusing to include. It is however logged in debug mode (see processHcsResult function)
+}
+
+type hcsResult struct {
+ Error int32
+ ErrorMessage string
+ ErrorEvents []ErrorEvent `json:"ErrorEvents,omitempty"`
+}
+
+func (ev *ErrorEvent) String() string {
+ evs := "[Event Detail: " + ev.Message
+ if ev.StackTrace != "" {
+ evs += " Stack Trace: " + ev.StackTrace
+ }
+ if ev.Provider != "" {
+ evs += " Provider: " + ev.Provider
+ }
+ if ev.EventID != 0 {
+ evs = fmt.Sprintf("%s EventID: %d", evs, ev.EventID)
+ }
+ if ev.Flags != 0 {
+ evs = fmt.Sprintf("%s flags: %d", evs, ev.Flags)
+ }
+ if ev.Source != "" {
+ evs += " Source: " + ev.Source
+ }
+ evs += "]"
+ return evs
+}
+
+func processHcsResult(ctx context.Context, resultJSON string) []ErrorEvent {
+ if resultJSON != "" {
+ result := &hcsResult{}
+ if err := json.Unmarshal([]byte(resultJSON), result); err != nil {
+ log.G(ctx).WithError(err).Warning("Could not unmarshal HCS result")
+ return nil
+ }
+ return result.ErrorEvents
+ }
+ return nil
+}
+
+type HcsError struct {
+ Op string
+ Err error
+ Events []ErrorEvent
+}
+
+var _ net.Error = &HcsError{}
+
+func (e *HcsError) Error() string {
+ s := e.Op + ": " + e.Err.Error()
+ for _, ev := range e.Events {
+ s += "\n" + ev.String()
+ }
+ return s
+}
+
+func (e *HcsError) Temporary() bool {
+ err, ok := e.Err.(net.Error)
+ return ok && err.Temporary() //nolint:staticcheck
+}
+
+func (e *HcsError) Timeout() bool {
+ err, ok := e.Err.(net.Error)
+ return ok && err.Timeout()
+}
+
+// ProcessError is an error encountered in HCS during an operation on a Process object
+type ProcessError struct {
+ SystemID string
+ Pid int
+ Op string
+ Err error
+ Events []ErrorEvent
+}
+
+var _ net.Error = &ProcessError{}
+
+// SystemError is an error encountered in HCS during an operation on a Container object
+type SystemError struct {
+ ID string
+ Op string
+ Err error
+ Events []ErrorEvent
+}
+
+var _ net.Error = &SystemError{}
+
+func (e *SystemError) Error() string {
+ s := e.Op + " " + e.ID + ": " + e.Err.Error()
+ for _, ev := range e.Events {
+ s += "\n" + ev.String()
+ }
+ return s
+}
+
+func (e *SystemError) Temporary() bool {
+ err, ok := e.Err.(net.Error)
+ return ok && err.Temporary() //nolint:staticcheck
+}
+
+func (e *SystemError) Timeout() bool {
+ err, ok := e.Err.(net.Error)
+ return ok && err.Timeout()
+}
+
+func makeSystemError(system *System, op string, err error, events []ErrorEvent) error {
+ // Don't double wrap errors
+ if _, ok := err.(*SystemError); ok {
+ return err
+ }
+ return &SystemError{
+ ID: system.ID(),
+ Op: op,
+ Err: err,
+ Events: events,
+ }
+}
+
+func (e *ProcessError) Error() string {
+ s := fmt.Sprintf("%s %s:%d: %s", e.Op, e.SystemID, e.Pid, e.Err.Error())
+ for _, ev := range e.Events {
+ s += "\n" + ev.String()
+ }
+ return s
+}
+
+func (e *ProcessError) Temporary() bool {
+ err, ok := e.Err.(net.Error)
+ return ok && err.Temporary() //nolint:staticcheck
+}
+
+func (e *ProcessError) Timeout() bool {
+ err, ok := e.Err.(net.Error)
+ return ok && err.Timeout()
+}
+
+func makeProcessError(process *Process, op string, err error, events []ErrorEvent) error {
+ // Don't double wrap errors
+ if _, ok := err.(*ProcessError); ok {
+ return err
+ }
+ return &ProcessError{
+ Pid: process.Pid(),
+ SystemID: process.SystemID(),
+ Op: op,
+ Err: err,
+ Events: events,
+ }
+}
+
+// IsNotExist checks if an error is caused by the Container or Process not existing.
+// Note: Currently, ErrElementNotFound can mean that a Process has either
+// already exited, or does not exist. Both IsAlreadyStopped and IsNotExist
+// will currently return true when the error is ErrElementNotFound.
+func IsNotExist(err error) bool {
+ err = getInnerError(err)
+ return err == ErrComputeSystemDoesNotExist ||
+ err == ErrElementNotFound
+}
+
+// IsErrorInvalidHandle checks whether the error is the result of an operation carried
+// out on a handle that is invalid/closed. This error popped up while trying to query
+// stats on a container in the process of being stopped.
+func IsErrorInvalidHandle(err error) bool {
+ err = getInnerError(err)
+ return err == ErrInvalidHandle
+}
+
+// IsAlreadyClosed checks if an error is caused by the Container or Process having been
+// already closed by a call to the Close() method.
+func IsAlreadyClosed(err error) bool {
+ err = getInnerError(err)
+ return err == ErrAlreadyClosed
+}
+
+// IsPending returns a boolean indicating whether the error is that
+// the requested operation is being completed in the background.
+func IsPending(err error) bool {
+ err = getInnerError(err)
+ return err == ErrVmcomputeOperationPending
+}
+
+// IsTimeout returns a boolean indicating whether the error is caused by
+// a timeout waiting for the operation to complete.
+func IsTimeout(err error) bool {
+ if err, ok := err.(net.Error); ok && err.Timeout() {
+ return true
+ }
+ err = getInnerError(err)
+ return err == ErrTimeout
+}
+
+// IsAlreadyStopped returns a boolean indicating whether the error is caused by
+// a Container or Process being already stopped.
+// Note: Currently, ErrElementNotFound can mean that a Process has either
+// already exited, or does not exist. Both IsAlreadyStopped and IsNotExist
+// will currently return true when the error is ErrElementNotFound.
+func IsAlreadyStopped(err error) bool {
+ err = getInnerError(err)
+ return err == ErrVmcomputeAlreadyStopped ||
+ err == ErrProcessAlreadyStopped ||
+ err == ErrElementNotFound
+}
+
+// IsNotSupported returns a boolean indicating whether the error is caused by
+// unsupported platform requests
+ // Note: Currently an unsupported platform request can mean any of
+ // ErrVmcomputeInvalidJSON, ErrInvalidData, ErrNotSupported or ErrVmcomputeUnknownMessage
+ // returned from the platform
+func IsNotSupported(err error) bool {
+ err = getInnerError(err)
+ // If Platform doesn't recognize or support the request sent, below errors are seen
+ return err == ErrVmcomputeInvalidJSON ||
+ err == ErrInvalidData ||
+ err == ErrNotSupported ||
+ err == ErrVmcomputeUnknownMessage
+}
+
+// IsOperationInvalidState returns true when err is caused by
+// `ErrVmcomputeOperationInvalidState`.
+func IsOperationInvalidState(err error) bool {
+ err = getInnerError(err)
+ return err == ErrVmcomputeOperationInvalidState
+}
+
+// IsAccessIsDenied returns true when err is caused by
+// `ErrVmcomputeOperationAccessIsDenied`.
+func IsAccessIsDenied(err error) bool {
+ err = getInnerError(err)
+ return err == ErrVmcomputeOperationAccessIsDenied
+}
+
+func getInnerError(err error) error {
+ switch pe := err.(type) {
+ case nil:
+ return nil
+ case *HcsError:
+ err = pe.Err
+ case *SystemError:
+ err = pe.Err
+ case *ProcessError:
+ err = pe.Err
+ }
+ return err
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/process.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/process.go
new file mode 100644
index 000000000..f4605922a
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/process.go
@@ -0,0 +1,557 @@
+package hcs
+
+import (
+ "context"
+ "encoding/json"
+ "errors"
+ "io"
+ "os"
+ "sync"
+ "syscall"
+ "time"
+
+ "github.com/Microsoft/hcsshim/internal/log"
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "github.com/Microsoft/hcsshim/internal/vmcompute"
+ "go.opencensus.io/trace"
+)
+
+// Process represents a process running in a compute system.
+type Process struct {
+ handleLock sync.RWMutex
+ handle vmcompute.HcsProcess
+ processID int
+ system *System
+ hasCachedStdio bool
+ stdioLock sync.Mutex
+ stdin io.WriteCloser
+ stdout io.ReadCloser
+ stderr io.ReadCloser
+ callbackNumber uintptr
+ killSignalDelivered bool
+
+ closedWaitOnce sync.Once
+ waitBlock chan struct{}
+ exitCode int
+ waitError error
+}
+
+func newProcess(process vmcompute.HcsProcess, processID int, computeSystem *System) *Process {
+ return &Process{
+ handle: process,
+ processID: processID,
+ system: computeSystem,
+ waitBlock: make(chan struct{}),
+ }
+}
+
+type processModifyRequest struct {
+ Operation string
+ ConsoleSize *consoleSize `json:",omitempty"`
+ CloseHandle *closeHandle `json:",omitempty"`
+}
+
+type consoleSize struct {
+ Height uint16
+ Width uint16
+}
+
+type closeHandle struct {
+ Handle string
+}
+
+type processStatus struct {
+ ProcessID uint32
+ Exited bool
+ ExitCode uint32
+ LastWaitResult int32
+}
+
+const stdIn string = "StdIn"
+
+const (
+ modifyConsoleSize string = "ConsoleSize"
+ modifyCloseHandle string = "CloseHandle"
+)
+
+// Pid returns the process ID of the process within the container.
+func (process *Process) Pid() int {
+ return process.processID
+}
+
+// SystemID returns the ID of the process's compute system.
+func (process *Process) SystemID() string {
+ return process.system.ID()
+}
+
+func (process *Process) processSignalResult(ctx context.Context, err error) (bool, error) {
+ switch err {
+ case nil:
+ return true, nil
+ case ErrVmcomputeOperationInvalidState, ErrComputeSystemDoesNotExist, ErrElementNotFound:
+ select {
+ case <-process.waitBlock:
+ // The process exit notification has already arrived.
+ default:
+ // The process should be gone, but we have not received the notification.
+ // After a second, force unblock the process wait to work around a possible
+ // deadlock in the HCS.
+ go func() {
+ time.Sleep(time.Second)
+ process.closedWaitOnce.Do(func() {
+ log.G(ctx).WithError(err).Warn("force unblocking process waits")
+ process.exitCode = -1
+ process.waitError = err
+ close(process.waitBlock)
+ })
+ }()
+ }
+ return false, nil
+ default:
+ return false, err
+ }
+}
+
+// Signal signals the process with `options`.
+//
+// For LCOW `guestrequest.SignalProcessOptionsLCOW`.
+//
+// For WCOW `guestrequest.SignalProcessOptionsWCOW`.
+func (process *Process) Signal(ctx context.Context, options interface{}) (bool, error) {
+ process.handleLock.RLock()
+ defer process.handleLock.RUnlock()
+
+ operation := "hcs::Process::Signal"
+
+ if process.handle == 0 {
+ return false, makeProcessError(process, operation, ErrAlreadyClosed, nil)
+ }
+
+ optionsb, err := json.Marshal(options)
+ if err != nil {
+ return false, err
+ }
+
+ resultJSON, err := vmcompute.HcsSignalProcess(ctx, process.handle, string(optionsb))
+ events := processHcsResult(ctx, resultJSON)
+ delivered, err := process.processSignalResult(ctx, err)
+ if err != nil {
+ err = makeProcessError(process, operation, err, events)
+ }
+ return delivered, err
+}
+
+// Kill signals the process to terminate but does not wait for it to finish terminating.
+func (process *Process) Kill(ctx context.Context) (bool, error) {
+ process.handleLock.RLock()
+ defer process.handleLock.RUnlock()
+
+ operation := "hcs::Process::Kill"
+
+ if process.handle == 0 {
+ return false, makeProcessError(process, operation, ErrAlreadyClosed, nil)
+ }
+
+ if process.killSignalDelivered {
+ // A kill signal has already been sent to this process. Sending a second
+ // one offers no real benefit, as processes cannot stop themselves from
+ // being terminated, once a TerminateProcess has been issued. Sending a
+ // second kill may result in a number of errors (two of which are detailed below)
+ // and which we can avoid handling.
+ return true, nil
+ }
+
+ resultJSON, err := vmcompute.HcsTerminateProcess(ctx, process.handle)
+ if err != nil {
+ // We still need to check these two cases, as processes may still be killed by an
+ // external actor (human operator, OOM, random script etc).
+ if errors.Is(err, os.ErrPermission) || IsAlreadyStopped(err) {
+ // There are two cases where it should be safe to ignore an error returned
+ // by HcsTerminateProcess. The first one is caused by the fact that
+ // HcsTerminateProcess ends up calling TerminateProcess in the context
+ // of a container. According to the TerminateProcess documentation:
+ // https://docs.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-terminateprocess#remarks
+ // After a process has terminated, calls to TerminateProcess with open
+ // handles to the process fail with the ERROR_ACCESS_DENIED (5) error code.
+ // It's safe to ignore this error here. HCS should always have permissions
+ // to kill processes inside any container. So an ERROR_ACCESS_DENIED
+ // is unlikely to be anything else than what the ending remarks in the
+ // documentation states.
+ //
+ // The second case is generated by hcs itself, if for any reason HcsTerminateProcess
+ // is called twice in a very short amount of time. In such cases, hcs may return
+ // HCS_E_PROCESS_ALREADY_STOPPED.
+ return true, nil
+ }
+ }
+ events := processHcsResult(ctx, resultJSON)
+ delivered, err := process.processSignalResult(ctx, err)
+ if err != nil {
+ err = makeProcessError(process, operation, err, events)
+ }
+
+ process.killSignalDelivered = delivered
+ return delivered, err
+}
+
+// waitBackground waits for the process exit notification. Once received sets
+// `process.waitError` (if any) and unblocks all `Wait` calls.
+//
+// This MUST be called exactly once per `process.handle` but `Wait` is safe to
+// call multiple times.
+func (process *Process) waitBackground() {
+ operation := "hcs::Process::waitBackground"
+ ctx, span := trace.StartSpan(context.Background(), operation)
+ defer span.End()
+ span.AddAttributes(
+ trace.StringAttribute("cid", process.SystemID()),
+ trace.Int64Attribute("pid", int64(process.processID)))
+
+ var (
+ err error
+ exitCode = -1
+ propertiesJSON string
+ resultJSON string
+ )
+
+ err = waitForNotification(ctx, process.callbackNumber, hcsNotificationProcessExited, nil)
+ if err != nil {
+ err = makeProcessError(process, operation, err, nil)
+ log.G(ctx).WithError(err).Error("failed wait")
+ } else {
+ process.handleLock.RLock()
+ defer process.handleLock.RUnlock()
+
+ // Make sure we didn't race with Close() here
+ if process.handle != 0 {
+ propertiesJSON, resultJSON, err = vmcompute.HcsGetProcessProperties(ctx, process.handle)
+ events := processHcsResult(ctx, resultJSON)
+ if err != nil {
+ err = makeProcessError(process, operation, err, events) //nolint:ineffassign
+ } else {
+ properties := &processStatus{}
+ err = json.Unmarshal([]byte(propertiesJSON), properties)
+ if err != nil {
+ err = makeProcessError(process, operation, err, nil) //nolint:ineffassign
+ } else {
+ if properties.LastWaitResult != 0 {
+ log.G(ctx).WithField("wait-result", properties.LastWaitResult).Warning("non-zero last wait result")
+ } else {
+ exitCode = int(properties.ExitCode)
+ }
+ }
+ }
+ }
+ }
+ log.G(ctx).WithField("exitCode", exitCode).Debug("process exited")
+
+ process.closedWaitOnce.Do(func() {
+ process.exitCode = exitCode
+ process.waitError = err
+ close(process.waitBlock)
+ })
+ oc.SetSpanStatus(span, err)
+}
+
+// Wait waits for the process to exit. If the process has already exited, it
+// returns the previous error (if any).
+func (process *Process) Wait() error {
+ <-process.waitBlock
+ return process.waitError
+}
+
+// ResizeConsole resizes the console of the process.
+func (process *Process) ResizeConsole(ctx context.Context, width, height uint16) error {
+ process.handleLock.RLock()
+ defer process.handleLock.RUnlock()
+
+ operation := "hcs::Process::ResizeConsole"
+
+ if process.handle == 0 {
+ return makeProcessError(process, operation, ErrAlreadyClosed, nil)
+ }
+
+ modifyRequest := processModifyRequest{
+ Operation: modifyConsoleSize,
+ ConsoleSize: &consoleSize{
+ Height: height,
+ Width: width,
+ },
+ }
+
+ modifyRequestb, err := json.Marshal(modifyRequest)
+ if err != nil {
+ return err
+ }
+
+ resultJSON, err := vmcompute.HcsModifyProcess(ctx, process.handle, string(modifyRequestb))
+ events := processHcsResult(ctx, resultJSON)
+ if err != nil {
+ return makeProcessError(process, operation, err, events)
+ }
+
+ return nil
+}
+
+// ExitCode returns the exit code of the process. The process must have
+// already terminated.
+func (process *Process) ExitCode() (int, error) {
+ select {
+ case <-process.waitBlock:
+ if process.waitError != nil {
+ return -1, process.waitError
+ }
+ return process.exitCode, nil
+ default:
+ return -1, makeProcessError(process, "hcs::Process::ExitCode", ErrInvalidProcessState, nil)
+ }
+}
+
+// StdioLegacy returns the stdin, stdout, and stderr pipes, respectively. Closing
+// these pipes does not close the underlying pipes. Once returned, these pipes
+// are the responsibility of the caller to close.
+func (process *Process) StdioLegacy() (_ io.WriteCloser, _ io.ReadCloser, _ io.ReadCloser, err error) {
+ operation := "hcs::Process::StdioLegacy"
+ ctx, span := trace.StartSpan(context.Background(), operation)
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(
+ trace.StringAttribute("cid", process.SystemID()),
+ trace.Int64Attribute("pid", int64(process.processID)))
+
+ process.handleLock.RLock()
+ defer process.handleLock.RUnlock()
+
+ if process.handle == 0 {
+ return nil, nil, nil, makeProcessError(process, operation, ErrAlreadyClosed, nil)
+ }
+
+ process.stdioLock.Lock()
+ defer process.stdioLock.Unlock()
+ if process.hasCachedStdio {
+ stdin, stdout, stderr := process.stdin, process.stdout, process.stderr
+ process.stdin, process.stdout, process.stderr = nil, nil, nil
+ process.hasCachedStdio = false
+ return stdin, stdout, stderr, nil
+ }
+
+ processInfo, resultJSON, err := vmcompute.HcsGetProcessInfo(ctx, process.handle)
+ events := processHcsResult(ctx, resultJSON)
+ if err != nil {
+ return nil, nil, nil, makeProcessError(process, operation, err, events)
+ }
+
+ pipes, err := makeOpenFiles([]syscall.Handle{processInfo.StdInput, processInfo.StdOutput, processInfo.StdError})
+ if err != nil {
+ return nil, nil, nil, makeProcessError(process, operation, err, nil)
+ }
+
+ return pipes[0], pipes[1], pipes[2], nil
+}
+
+// Stdio returns the stdin, stdout, and stderr pipes, respectively.
+// To close them, close the process handle.
+func (process *Process) Stdio() (stdin io.Writer, stdout, stderr io.Reader) {
+ process.stdioLock.Lock()
+ defer process.stdioLock.Unlock()
+ return process.stdin, process.stdout, process.stderr
+}
+
+// CloseStdin closes the write side of the stdin pipe so that the process is
+// notified on the read side that there is no more data in stdin.
+func (process *Process) CloseStdin(ctx context.Context) error {
+ process.handleLock.RLock()
+ defer process.handleLock.RUnlock()
+
+ operation := "hcs::Process::CloseStdin"
+
+ if process.handle == 0 {
+ return makeProcessError(process, operation, ErrAlreadyClosed, nil)
+ }
+
+ modifyRequest := processModifyRequest{
+ Operation: modifyCloseHandle,
+ CloseHandle: &closeHandle{
+ Handle: stdIn,
+ },
+ }
+
+ modifyRequestb, err := json.Marshal(modifyRequest)
+ if err != nil {
+ return err
+ }
+
+ resultJSON, err := vmcompute.HcsModifyProcess(ctx, process.handle, string(modifyRequestb))
+ events := processHcsResult(ctx, resultJSON)
+ if err != nil {
+ return makeProcessError(process, operation, err, events)
+ }
+
+ process.stdioLock.Lock()
+ if process.stdin != nil {
+ process.stdin.Close()
+ process.stdin = nil
+ }
+ process.stdioLock.Unlock()
+
+ return nil
+}
+
+func (process *Process) CloseStdout(ctx context.Context) (err error) {
+ ctx, span := trace.StartSpan(ctx, "hcs::Process::CloseStdout") //nolint:ineffassign,staticcheck
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(
+ trace.StringAttribute("cid", process.SystemID()),
+ trace.Int64Attribute("pid", int64(process.processID)))
+
+ process.handleLock.Lock()
+ defer process.handleLock.Unlock()
+
+ if process.handle == 0 {
+ return nil
+ }
+
+ process.stdioLock.Lock()
+ defer process.stdioLock.Unlock()
+ if process.stdout != nil {
+ process.stdout.Close()
+ process.stdout = nil
+ }
+ return nil
+}
+
+// CloseStderr closes the stderr pipe of the process.
+func (process *Process) CloseStderr(ctx context.Context) (err error) {
+ ctx, span := trace.StartSpan(ctx, "hcs::Process::CloseStderr") //nolint:ineffassign,staticcheck
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(
+ trace.StringAttribute("cid", process.SystemID()),
+ trace.Int64Attribute("pid", int64(process.processID)))
+
+ process.handleLock.Lock()
+ defer process.handleLock.Unlock()
+
+ if process.handle == 0 {
+ return nil
+ }
+
+ process.stdioLock.Lock()
+ defer process.stdioLock.Unlock()
+ if process.stderr != nil {
+ process.stderr.Close()
+ process.stderr = nil
+ }
+ return nil
+}
+
+// Close cleans up any state associated with the process but does not kill
+// or wait on it.
+func (process *Process) Close() (err error) {
+ operation := "hcs::Process::Close"
+ ctx, span := trace.StartSpan(context.Background(), operation)
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(
+ trace.StringAttribute("cid", process.SystemID()),
+ trace.Int64Attribute("pid", int64(process.processID)))
+
+ process.handleLock.Lock()
+ defer process.handleLock.Unlock()
+
+ // Don't double free this
+ if process.handle == 0 {
+ return nil
+ }
+
+ process.stdioLock.Lock()
+ if process.stdin != nil {
+ process.stdin.Close()
+ process.stdin = nil
+ }
+ if process.stdout != nil {
+ process.stdout.Close()
+ process.stdout = nil
+ }
+ if process.stderr != nil {
+ process.stderr.Close()
+ process.stderr = nil
+ }
+ process.stdioLock.Unlock()
+
+ if err = process.unregisterCallback(ctx); err != nil {
+ return makeProcessError(process, operation, err, nil)
+ }
+
+ if err = vmcompute.HcsCloseProcess(ctx, process.handle); err != nil {
+ return makeProcessError(process, operation, err, nil)
+ }
+
+ process.handle = 0
+ process.closedWaitOnce.Do(func() {
+ process.exitCode = -1
+ process.waitError = ErrAlreadyClosed
+ close(process.waitBlock)
+ })
+
+ return nil
+}
+
+func (process *Process) registerCallback(ctx context.Context) error {
+ callbackContext := &notificationWatcherContext{
+ channels: newProcessChannels(),
+ systemID: process.SystemID(),
+ processID: process.processID,
+ }
+
+ callbackMapLock.Lock()
+ callbackNumber := nextCallback
+ nextCallback++
+ callbackMap[callbackNumber] = callbackContext
+ callbackMapLock.Unlock()
+
+ callbackHandle, err := vmcompute.HcsRegisterProcessCallback(ctx, process.handle, notificationWatcherCallback, callbackNumber)
+ if err != nil {
+ return err
+ }
+ callbackContext.handle = callbackHandle
+ process.callbackNumber = callbackNumber
+
+ return nil
+}
+
+func (process *Process) unregisterCallback(ctx context.Context) error {
+ callbackNumber := process.callbackNumber
+
+ callbackMapLock.RLock()
+ callbackContext := callbackMap[callbackNumber]
+ callbackMapLock.RUnlock()
+
+ if callbackContext == nil {
+ return nil
+ }
+
+ handle := callbackContext.handle
+
+ if handle == 0 {
+ return nil
+ }
+
+ // vmcompute.HcsUnregisterProcessCallback has its own synchronization to
+ // wait for all callbacks to complete. We must NOT hold the callbackMapLock.
+ err := vmcompute.HcsUnregisterProcessCallback(ctx, handle)
+ if err != nil {
+ return err
+ }
+
+ closeChannels(callbackContext.channels)
+
+ callbackMapLock.Lock()
+ delete(callbackMap, callbackNumber)
+ callbackMapLock.Unlock()
+
+ handle = 0 //nolint:ineffassign
+
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema1/schema1.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema1/schema1.go
new file mode 100644
index 000000000..b621c5593
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema1/schema1.go
@@ -0,0 +1,250 @@
+package schema1
+
+import (
+ "encoding/json"
+ "time"
+
+ "github.com/Microsoft/go-winio/pkg/guid"
+ hcsschema "github.com/Microsoft/hcsshim/internal/hcs/schema2"
+)
+
+// ProcessConfig is used as both the input of Container.CreateProcess
+// and to convert the parameters to JSON for passing onto the HCS
+type ProcessConfig struct {
+ ApplicationName string `json:",omitempty"`
+ CommandLine string `json:",omitempty"`
+ CommandArgs []string `json:",omitempty"` // Used by Linux Containers on Windows
+ User string `json:",omitempty"`
+ WorkingDirectory string `json:",omitempty"`
+ Environment map[string]string `json:",omitempty"`
+ EmulateConsole bool `json:",omitempty"`
+ CreateStdInPipe bool `json:",omitempty"`
+ CreateStdOutPipe bool `json:",omitempty"`
+ CreateStdErrPipe bool `json:",omitempty"`
+ ConsoleSize [2]uint `json:",omitempty"`
+ CreateInUtilityVm bool `json:",omitempty"` // Used by Linux Containers on Windows
+ OCISpecification *json.RawMessage `json:",omitempty"` // Used by Linux Containers on Windows
+}
+
+type Layer struct {
+ ID string
+ Path string
+}
+
+type MappedDir struct {
+ HostPath string
+ ContainerPath string
+ ReadOnly bool
+ BandwidthMaximum uint64
+ IOPSMaximum uint64
+ CreateInUtilityVM bool
+ // LinuxMetadata - Support added in 1803/RS4+.
+ LinuxMetadata bool `json:",omitempty"`
+}
+
+type MappedPipe struct {
+ HostPath string
+ ContainerPipeName string
+}
+
+type HvRuntime struct {
+ ImagePath string `json:",omitempty"`
+ SkipTemplate bool `json:",omitempty"`
+ LinuxInitrdFile string `json:",omitempty"` // File under ImagePath on host containing an initrd image for starting a Linux utility VM
+ LinuxKernelFile string `json:",omitempty"` // File under ImagePath on host containing a kernel for starting a Linux utility VM
+ LinuxBootParameters string `json:",omitempty"` // Additional boot parameters for starting a Linux Utility VM in initrd mode
+ BootSource string `json:",omitempty"` // "Vhd" for Linux Utility VM booting from VHD
+ WritableBootSource bool `json:",omitempty"` // Linux Utility VM booting from VHD
+}
+
+type MappedVirtualDisk struct {
+ HostPath string `json:",omitempty"` // Path to VHD on the host
+ ContainerPath string // Platform-specific mount point path in the container
+ CreateInUtilityVM bool `json:",omitempty"`
+ ReadOnly bool `json:",omitempty"`
+ Cache string `json:",omitempty"` // "" (Unspecified); "Disabled"; "Enabled"; "Private"; "PrivateAllowSharing"
+ AttachOnly bool `json:",omitempty"`
+}
+
+// AssignedDevice represents a device that has been directly assigned to a container
+//
+// NOTE: Support added in RS5
+type AssignedDevice struct {
+ // InterfaceClassGUID of the device to assign to container.
+ InterfaceClassGUID string `json:"InterfaceClassGuid,omitempty"`
+}
+
+// ContainerConfig is used as both the input of CreateContainer
+// and to convert the parameters to JSON for passing onto the HCS
+type ContainerConfig struct {
+ SystemType string // HCS requires this to be hard-coded to "Container"
+ Name string // Name of the container. We use the docker ID.
+ Owner string `json:",omitempty"` // The management platform that created this container
+ VolumePath string `json:",omitempty"` // Windows volume path for scratch space. Used by Windows Server Containers only. Format \\?\\Volume{GUID}
+ IgnoreFlushesDuringBoot bool `json:",omitempty"` // Optimization hint for container startup in Windows
+ LayerFolderPath string `json:",omitempty"` // Where the layer folders are located. Used by Windows Server Containers only. Format %root%\windowsfilter\containerID
+ Layers []Layer // List of storage layers. Required for Windows Server and Hyper-V Containers. Format ID=GUID;Path=%root%\windowsfilter\layerID
+ Credentials string `json:",omitempty"` // Credentials information
+ ProcessorCount uint32 `json:",omitempty"` // Number of processors to assign to the container.
+ ProcessorWeight uint64 `json:",omitempty"` // CPU shares (relative weight to other containers with cpu shares). Range is from 1 to 10000. A value of 0 results in default shares.
+ ProcessorMaximum int64 `json:",omitempty"` // Specifies the portion of processor cycles that this container can use as a percentage times 100. Range is from 1 to 10000. A value of 0 results in no limit.
+ StorageIOPSMaximum uint64 `json:",omitempty"` // Maximum Storage IOPS
+ StorageBandwidthMaximum uint64 `json:",omitempty"` // Maximum Storage Bandwidth in bytes per second
+ StorageSandboxSize uint64 `json:",omitempty"` // Size in bytes that the container system drive should be expanded to if smaller
+ MemoryMaximumInMB int64 `json:",omitempty"` // Maximum memory available to the container in Megabytes
+ HostName string `json:",omitempty"` // Hostname
+ MappedDirectories []MappedDir `json:",omitempty"` // List of mapped directories (volumes/mounts)
+ MappedPipes []MappedPipe `json:",omitempty"` // List of mapped Windows named pipes
+ HvPartition bool // True if it is a Hyper-V Container
+ NetworkSharedContainerName string `json:",omitempty"` // Name (ID) of the container that we will share the network stack with.
+ EndpointList []string `json:",omitempty"` // List of networking endpoints to be attached to container
+ HvRuntime *HvRuntime `json:",omitempty"` // Hyper-V container settings. Used by Hyper-V containers only. Format ImagePath=%root%\BaseLayerID\UtilityVM
+ Servicing bool `json:",omitempty"` // True if this container is for servicing
+ AllowUnqualifiedDNSQuery bool `json:",omitempty"` // True to allow unqualified DNS name resolution
+ DNSSearchList string `json:",omitempty"` // Comma-separated list of DNS suffixes to use for name resolution
+ ContainerType string `json:",omitempty"` // "Linux" for Linux containers on Windows. Omitted otherwise.
+ TerminateOnLastHandleClosed bool `json:",omitempty"` // Should HCS terminate the container once all handles have been closed
+ MappedVirtualDisks []MappedVirtualDisk `json:",omitempty"` // Array of virtual disks to mount at start
+ AssignedDevices []AssignedDevice `json:",omitempty"` // Array of devices to assign. NOTE: Support added in RS5
+}
+
+type ComputeSystemQuery struct {
+ IDs []string `json:"Ids,omitempty"`
+ Types []string `json:",omitempty"`
+ Names []string `json:",omitempty"`
+ Owners []string `json:",omitempty"`
+}
+
+type PropertyType string
+
+const (
+ PropertyTypeStatistics PropertyType = "Statistics" // V1 and V2
+ PropertyTypeProcessList PropertyType = "ProcessList" // V1 and V2
+ PropertyTypeMappedVirtualDisk PropertyType = "MappedVirtualDisk" // Not supported in V2 schema call
+ PropertyTypeGuestConnection PropertyType = "GuestConnection" // V1 and V2. Nil return from HCS before RS5
+)
+
+type PropertyQuery struct {
+ PropertyTypes []PropertyType `json:",omitempty"`
+}
+
+// ContainerProperties holds the properties for a container and the processes running in that container
+type ContainerProperties struct {
+ ID string `json:"Id"`
+ State string
+ Name string
+ SystemType string
+ RuntimeOSType string `json:"RuntimeOsType,omitempty"`
+ Owner string
+ SiloGUID string `json:"SiloGuid,omitempty"`
+ RuntimeID guid.GUID `json:"RuntimeId,omitempty"`
+ IsRuntimeTemplate bool `json:",omitempty"`
+ RuntimeImagePath string `json:",omitempty"`
+ Stopped bool `json:",omitempty"`
+ ExitType string `json:",omitempty"`
+ AreUpdatesPending bool `json:",omitempty"`
+ ObRoot string `json:",omitempty"`
+ Statistics Statistics `json:",omitempty"`
+ ProcessList []ProcessListItem `json:",omitempty"`
+ MappedVirtualDiskControllers map[int]MappedVirtualDiskController `json:",omitempty"`
+ GuestConnectionInfo GuestConnectionInfo `json:",omitempty"`
+}
+
+// MemoryStats holds the memory statistics for a container
+type MemoryStats struct {
+ UsageCommitBytes uint64 `json:"MemoryUsageCommitBytes,omitempty"`
+ UsageCommitPeakBytes uint64 `json:"MemoryUsageCommitPeakBytes,omitempty"`
+ UsagePrivateWorkingSetBytes uint64 `json:"MemoryUsagePrivateWorkingSetBytes,omitempty"`
+}
+
+// ProcessorStats holds the processor statistics for a container
+type ProcessorStats struct {
+ TotalRuntime100ns uint64 `json:",omitempty"`
+ RuntimeUser100ns uint64 `json:",omitempty"`
+ RuntimeKernel100ns uint64 `json:",omitempty"`
+}
+
+// StorageStats holds the storage statistics for a container
+type StorageStats struct {
+ ReadCountNormalized uint64 `json:",omitempty"`
+ ReadSizeBytes uint64 `json:",omitempty"`
+ WriteCountNormalized uint64 `json:",omitempty"`
+ WriteSizeBytes uint64 `json:",omitempty"`
+}
+
+// NetworkStats holds the network statistics for a container
+type NetworkStats struct {
+ BytesReceived uint64 `json:",omitempty"`
+ BytesSent uint64 `json:",omitempty"`
+ PacketsReceived uint64 `json:",omitempty"`
+ PacketsSent uint64 `json:",omitempty"`
+ DroppedPacketsIncoming uint64 `json:",omitempty"`
+ DroppedPacketsOutgoing uint64 `json:",omitempty"`
+ EndpointId string `json:",omitempty"`
+ InstanceId string `json:",omitempty"`
+}
+
+// Statistics is the structure returned by a statistics call on a container
+type Statistics struct {
+ Timestamp time.Time `json:",omitempty"`
+ ContainerStartTime time.Time `json:",omitempty"`
+ Uptime100ns uint64 `json:",omitempty"`
+ Memory MemoryStats `json:",omitempty"`
+ Processor ProcessorStats `json:",omitempty"`
+ Storage StorageStats `json:",omitempty"`
+ Network []NetworkStats `json:",omitempty"`
+}
+
+// ProcessList is the structure of an item returned by a ProcessList call on a container
+type ProcessListItem struct {
+ CreateTimestamp time.Time `json:",omitempty"`
+ ImageName string `json:",omitempty"`
+ KernelTime100ns uint64 `json:",omitempty"`
+ MemoryCommitBytes uint64 `json:",omitempty"`
+ MemoryWorkingSetPrivateBytes uint64 `json:",omitempty"`
+ MemoryWorkingSetSharedBytes uint64 `json:",omitempty"`
+ ProcessId uint32 `json:",omitempty"`
+ UserTime100ns uint64 `json:",omitempty"`
+}
+
+// MappedVirtualDiskController is the structure of an item returned by a MappedVirtualDiskList call on a container
+type MappedVirtualDiskController struct {
+ MappedVirtualDisks map[int]MappedVirtualDisk `json:",omitempty"`
+}
+
+// GuestDefinedCapabilities is part of the GuestConnectionInfo returned by a GuestConnection call on a utility VM
+type GuestDefinedCapabilities struct {
+ NamespaceAddRequestSupported bool `json:",omitempty"`
+ SignalProcessSupported bool `json:",omitempty"`
+ DumpStacksSupported bool `json:",omitempty"`
+ DeleteContainerStateSupported bool `json:",omitempty"`
+ UpdateContainerSupported bool `json:",omitempty"`
+}
+
+// GuestConnectionInfo is the structure of an item returned by a GuestConnection call on a utility VM
+type GuestConnectionInfo struct {
+ SupportedSchemaVersions []hcsschema.Version `json:",omitempty"`
+ ProtocolVersion uint32 `json:",omitempty"`
+ GuestDefinedCapabilities GuestDefinedCapabilities `json:",omitempty"`
+}
+
+// RequestType is the type of request supported in ModifySystem
+type RequestType string
+
+// ResourceType is the type of resource supported in ModifySystem
+type ResourceType string
+
+// RequestType const
+const (
+ Add RequestType = "Add"
+ Remove RequestType = "Remove"
+ Network ResourceType = "Network"
+)
+
+// ResourceModificationRequestResponse is the structure used to send request to the container to modify the system
+// Supported resource types are Network and Request Types are Add/Remove
+type ResourceModificationRequestResponse struct {
+ Resource ResourceType `json:"ResourceType"`
+ Data interface{} `json:"Settings"`
+ Request RequestType `json:"RequestType,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/attachment.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/attachment.go
new file mode 100644
index 000000000..70884aad7
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/attachment.go
@@ -0,0 +1,36 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type Attachment struct {
+ Type_ string `json:"Type,omitempty"`
+
+ Path string `json:"Path,omitempty"`
+
+ IgnoreFlushes bool `json:"IgnoreFlushes,omitempty"`
+
+ CachingMode string `json:"CachingMode,omitempty"`
+
+ NoWriteHardening bool `json:"NoWriteHardening,omitempty"`
+
+ DisableExpansionOptimization bool `json:"DisableExpansionOptimization,omitempty"`
+
+ IgnoreRelativeLocator bool `json:"IgnoreRelativeLocator,omitempty"`
+
+ CaptureIoAttributionContext bool `json:"CaptureIoAttributionContext,omitempty"`
+
+ ReadOnly bool `json:"ReadOnly,omitempty"`
+
+ SupportCompressedVolumes bool `json:"SupportCompressedVolumes,omitempty"`
+
+ AlwaysAllowSparseFiles bool `json:"AlwaysAllowSparseFiles,omitempty"`
+
+ ExtensibleVirtualDiskType string `json:"ExtensibleVirtualDiskType,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/battery.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/battery.go
new file mode 100644
index 000000000..ecbbed4c2
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/battery.go
@@ -0,0 +1,13 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type Battery struct {
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/cache_query_stats_response.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/cache_query_stats_response.go
new file mode 100644
index 000000000..c1ea3953b
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/cache_query_stats_response.go
@@ -0,0 +1,18 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type CacheQueryStatsResponse struct {
+ L3OccupancyBytes int32 `json:"L3OccupancyBytes,omitempty"`
+
+ L3TotalBwBytes int32 `json:"L3TotalBwBytes,omitempty"`
+
+ L3LocalBwBytes int32 `json:"L3LocalBwBytes,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/chipset.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/chipset.go
new file mode 100644
index 000000000..ca75277a3
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/chipset.go
@@ -0,0 +1,27 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type Chipset struct {
+ Uefi *Uefi `json:"Uefi,omitempty"`
+
+ IsNumLockDisabled bool `json:"IsNumLockDisabled,omitempty"`
+
+ BaseBoardSerialNumber string `json:"BaseBoardSerialNumber,omitempty"`
+
+ ChassisSerialNumber string `json:"ChassisSerialNumber,omitempty"`
+
+ ChassisAssetTag string `json:"ChassisAssetTag,omitempty"`
+
+ UseUtc bool `json:"UseUtc,omitempty"`
+
+ // LinuxKernelDirect - Added in v2.2 Builds >=181117
+ LinuxKernelDirect *LinuxKernelDirect `json:"LinuxKernelDirect,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/close_handle.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/close_handle.go
new file mode 100644
index 000000000..b4f9c315b
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/close_handle.go
@@ -0,0 +1,14 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type CloseHandle struct {
+ Handle string `json:"Handle,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/com_port.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/com_port.go
new file mode 100644
index 000000000..8bf8cab60
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/com_port.go
@@ -0,0 +1,17 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+// ComPort specifies the named pipe that will be used for the port, with empty string indicating a disconnected port.
+type ComPort struct {
+ NamedPipe string `json:"NamedPipe,omitempty"`
+
+ OptimizeForDebugger bool `json:"OptimizeForDebugger,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/compute_system.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/compute_system.go
new file mode 100644
index 000000000..10cea67e0
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/compute_system.go
@@ -0,0 +1,26 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type ComputeSystem struct {
+ Owner string `json:"Owner,omitempty"`
+
+ SchemaVersion *Version `json:"SchemaVersion,omitempty"`
+
+ HostingSystemId string `json:"HostingSystemId,omitempty"`
+
+ HostedSystem interface{} `json:"HostedSystem,omitempty"`
+
+ Container *Container `json:"Container,omitempty"`
+
+ VirtualMachine *VirtualMachine `json:"VirtualMachine,omitempty"`
+
+ ShouldTerminateOnLastHandleClosed bool `json:"ShouldTerminateOnLastHandleClosed,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/configuration.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/configuration.go
new file mode 100644
index 000000000..1d5dfe68a
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/configuration.go
@@ -0,0 +1,72 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+import (
+ "net/http"
+)
+
+// contextKeys are used to identify the type of value in the context.
+// Since these are string, it is possible to get a short description of the
+// context key for logging and debugging using key.String().
+
+type contextKey string
+
+func (c contextKey) String() string {
+ return "auth " + string(c)
+}
+
+var (
+ // ContextOAuth2 takes a oauth2.TokenSource as authentication for the request.
+ ContextOAuth2 = contextKey("token")
+
+ // ContextBasicAuth takes BasicAuth as authentication for the request.
+ ContextBasicAuth = contextKey("basic")
+
+ // ContextAccessToken takes a string oauth2 access token as authentication for the request.
+ ContextAccessToken = contextKey("accesstoken")
+
+ // ContextAPIKey takes an APIKey as authentication for the request
+ ContextAPIKey = contextKey("apikey")
+)
+
+// BasicAuth provides basic http authentication to a request passed via context using ContextBasicAuth
+type BasicAuth struct {
+ UserName string `json:"userName,omitempty"`
+ Password string `json:"password,omitempty"`
+}
+
+// APIKey provides API key based authentication to a request passed via context using ContextAPIKey
+type APIKey struct {
+ Key string
+ Prefix string
+}
+
+type Configuration struct {
+ BasePath string `json:"basePath,omitempty"`
+ Host string `json:"host,omitempty"`
+ Scheme string `json:"scheme,omitempty"`
+ DefaultHeader map[string]string `json:"defaultHeader,omitempty"`
+ UserAgent string `json:"userAgent,omitempty"`
+ HTTPClient *http.Client
+}
+
+func NewConfiguration() *Configuration {
+ cfg := &Configuration{
+ BasePath: "https://localhost",
+ DefaultHeader: make(map[string]string),
+ UserAgent: "Swagger-Codegen/2.1.0/go",
+ }
+ return cfg
+}
+
+func (c *Configuration) AddDefaultHeader(key string, value string) {
+ c.DefaultHeader[key] = value
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/console_size.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/console_size.go
new file mode 100644
index 000000000..68aa04a57
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/console_size.go
@@ -0,0 +1,16 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type ConsoleSize struct {
+ Height int32 `json:"Height,omitempty"`
+
+ Width int32 `json:"Width,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container.go
new file mode 100644
index 000000000..39a54432c
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container.go
@@ -0,0 +1,36 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type Container struct {
+ GuestOs *GuestOs `json:"GuestOs,omitempty"`
+
+ Storage *Storage `json:"Storage,omitempty"`
+
+ MappedDirectories []MappedDirectory `json:"MappedDirectories,omitempty"`
+
+ MappedPipes []MappedPipe `json:"MappedPipes,omitempty"`
+
+ Memory *Memory `json:"Memory,omitempty"`
+
+ Processor *Processor `json:"Processor,omitempty"`
+
+ Networking *Networking `json:"Networking,omitempty"`
+
+ HvSocket *HvSocket `json:"HvSocket,omitempty"`
+
+ ContainerCredentialGuard *ContainerCredentialGuardState `json:"ContainerCredentialGuard,omitempty"`
+
+ RegistryChanges *RegistryChanges `json:"RegistryChanges,omitempty"`
+
+ AssignedDevices []Device `json:"AssignedDevices,omitempty"`
+
+ AdditionalDeviceNamespace *ContainerDefinitionDevice `json:"AdditionalDeviceNamespace,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_credential_guard_add_instance_request.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_credential_guard_add_instance_request.go
new file mode 100644
index 000000000..495c6ebc8
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_credential_guard_add_instance_request.go
@@ -0,0 +1,16 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type ContainerCredentialGuardAddInstanceRequest struct {
+ Id string `json:"Id,omitempty"`
+ CredentialSpec string `json:"CredentialSpec,omitempty"`
+ Transport string `json:"Transport,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_credential_guard_hv_socket_service_config.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_credential_guard_hv_socket_service_config.go
new file mode 100644
index 000000000..1ed4c008f
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_credential_guard_hv_socket_service_config.go
@@ -0,0 +1,15 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type ContainerCredentialGuardHvSocketServiceConfig struct {
+ ServiceId string `json:"ServiceId,omitempty"`
+ ServiceConfig *HvSocketServiceConfig `json:"ServiceConfig,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_credential_guard_instance.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_credential_guard_instance.go
new file mode 100644
index 000000000..d7ebd0fcc
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_credential_guard_instance.go
@@ -0,0 +1,16 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type ContainerCredentialGuardInstance struct {
+ Id string `json:"Id,omitempty"`
+ CredentialGuard *ContainerCredentialGuardState `json:"CredentialGuard,omitempty"`
+ HvSocketConfig *ContainerCredentialGuardHvSocketServiceConfig `json:"HvSocketConfig,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_credential_guard_modify_operation.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_credential_guard_modify_operation.go
new file mode 100644
index 000000000..71005b090
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_credential_guard_modify_operation.go
@@ -0,0 +1,17 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type ContainerCredentialGuardModifyOperation string
+
+const (
+ AddInstance ContainerCredentialGuardModifyOperation = "AddInstance"
+ RemoveInstance ContainerCredentialGuardModifyOperation = "RemoveInstance"
+)
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_credential_guard_operation_request.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_credential_guard_operation_request.go
new file mode 100644
index 000000000..952cda496
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_credential_guard_operation_request.go
@@ -0,0 +1,15 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type ContainerCredentialGuardOperationRequest struct {
+ Operation ContainerCredentialGuardModifyOperation `json:"Operation,omitempty"`
+ OperationDetails interface{} `json:"OperationDetails,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_credential_guard_remove_instance_request.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_credential_guard_remove_instance_request.go
new file mode 100644
index 000000000..32e5a3bee
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_credential_guard_remove_instance_request.go
@@ -0,0 +1,14 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type ContainerCredentialGuardRemoveInstanceRequest struct {
+ Id string `json:"Id,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_credential_guard_state.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_credential_guard_state.go
new file mode 100644
index 000000000..0f8f64437
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_credential_guard_state.go
@@ -0,0 +1,25 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type ContainerCredentialGuardState struct {
+
+ // Authentication cookie for calls to a Container Credential Guard instance.
+ Cookie string `json:"Cookie,omitempty"`
+
+ // Name of the RPC endpoint of the Container Credential Guard instance.
+ RpcEndpoint string `json:"RpcEndpoint,omitempty"`
+
+ // Transport used for the configured Container Credential Guard instance.
+ Transport string `json:"Transport,omitempty"`
+
+ // Credential spec used for the configured Container Credential Guard instance.
+ CredentialSpec string `json:"CredentialSpec,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_credential_guard_system_info.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_credential_guard_system_info.go
new file mode 100644
index 000000000..ea306fa21
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_credential_guard_system_info.go
@@ -0,0 +1,14 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type ContainerCredentialGuardSystemInfo struct {
+ Instances []ContainerCredentialGuardInstance `json:"Instances,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_memory_information.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_memory_information.go
new file mode 100644
index 000000000..1fd7ca5d5
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/container_memory_information.go
@@ -0,0 +1,25 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+// memory usage as viewed from within the container
+type ContainerMemoryInformation struct {
+ TotalPhysicalBytes int32 `json:"TotalPhysicalBytes,omitempty"`
+
+ TotalUsage int32 `json:"TotalUsage,omitempty"`
+
+ CommittedBytes int32 `json:"CommittedBytes,omitempty"`
+
+ SharedCommittedBytes int32 `json:"SharedCommittedBytes,omitempty"`
+
+ CommitLimitBytes int32 `json:"CommitLimitBytes,omitempty"`
+
+ PeakCommitmentBytes int32 `json:"PeakCommitmentBytes,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/cpu_group.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/cpu_group.go
new file mode 100644
index 000000000..90332a519
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/cpu_group.go
@@ -0,0 +1,15 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+// CPU groups allow Hyper-V administrators to better manage and allocate the host's CPU resources across guest virtual machines
+type CpuGroup struct {
+ Id string `json:"Id,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/cpu_group_affinity.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/cpu_group_affinity.go
new file mode 100644
index 000000000..8794961bf
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/cpu_group_affinity.go
@@ -0,0 +1,15 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type CpuGroupAffinity struct {
+ LogicalProcessorCount int32 `json:"LogicalProcessorCount,omitempty"`
+ LogicalProcessors []int32 `json:"LogicalProcessors,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/cpu_group_config.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/cpu_group_config.go
new file mode 100644
index 000000000..0be0475d4
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/cpu_group_config.go
@@ -0,0 +1,18 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type CpuGroupConfig struct {
+ GroupId string `json:"GroupId,omitempty"`
+ Affinity *CpuGroupAffinity `json:"Affinity,omitempty"`
+ GroupProperties []CpuGroupProperty `json:"GroupProperties,omitempty"`
+ // Hypervisor CPU group IDs exposed to clients
+ HypervisorGroupId uint64 `json:"HypervisorGroupId,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/cpu_group_configurations.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/cpu_group_configurations.go
new file mode 100644
index 000000000..3ace0ccc3
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/cpu_group_configurations.go
@@ -0,0 +1,15 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+// Structure used to return cpu groups for a Service property query
+type CpuGroupConfigurations struct {
+ CpuGroups []CpuGroupConfig `json:"CpuGroups,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/cpu_group_operations.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/cpu_group_operations.go
new file mode 100644
index 000000000..7d8978070
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/cpu_group_operations.go
@@ -0,0 +1,18 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type CPUGroupOperation string
+
+const (
+ CreateGroup CPUGroupOperation = "CreateGroup"
+ DeleteGroup CPUGroupOperation = "DeleteGroup"
+ SetProperty CPUGroupOperation = "SetProperty"
+)
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/cpu_group_property.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/cpu_group_property.go
new file mode 100644
index 000000000..bbad6a2c4
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/cpu_group_property.go
@@ -0,0 +1,15 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type CpuGroupProperty struct {
+ PropertyCode uint32 `json:"PropertyCode,omitempty"`
+ PropertyValue uint32 `json:"PropertyValue,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/create_group_operation.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/create_group_operation.go
new file mode 100644
index 000000000..91a8278fe
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/create_group_operation.go
@@ -0,0 +1,17 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+// Create group operation settings
+type CreateGroupOperation struct {
+ GroupId string `json:"GroupId,omitempty"`
+ LogicalProcessorCount uint32 `json:"LogicalProcessorCount,omitempty"`
+ LogicalProcessors []uint32 `json:"LogicalProcessors,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/delete_group_operation.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/delete_group_operation.go
new file mode 100644
index 000000000..134bd9881
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/delete_group_operation.go
@@ -0,0 +1,15 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+// Delete group operation settings
+type DeleteGroupOperation struct {
+ GroupId string `json:"GroupId,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/device.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/device.go
new file mode 100644
index 000000000..31c4538af
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/device.go
@@ -0,0 +1,27 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type DeviceType string
+
+const (
+ ClassGUID DeviceType = "ClassGuid"
+ DeviceInstanceID DeviceType = "DeviceInstance"
+ GPUMirror DeviceType = "GpuMirror"
+)
+
+type Device struct {
+ // The type of device to assign to the container.
+ Type DeviceType `json:"Type,omitempty"`
+ // The interface class guid of the device interfaces to assign to the container. Only used when Type is ClassGuid.
+ InterfaceClassGuid string `json:"InterfaceClassGuid,omitempty"`
+ // The location path of the device to assign to the container. Only used when Type is DeviceInstanceID.
+ LocationPath string `json:"LocationPath,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/devices.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/devices.go
new file mode 100644
index 000000000..e985d96d2
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/devices.go
@@ -0,0 +1,46 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type Devices struct {
+ ComPorts map[string]ComPort `json:"ComPorts,omitempty"`
+
+ Scsi map[string]Scsi `json:"Scsi,omitempty"`
+
+ VirtualPMem *VirtualPMemController `json:"VirtualPMem,omitempty"`
+
+ NetworkAdapters map[string]NetworkAdapter `json:"NetworkAdapters,omitempty"`
+
+ VideoMonitor *VideoMonitor `json:"VideoMonitor,omitempty"`
+
+ Keyboard *Keyboard `json:"Keyboard,omitempty"`
+
+ Mouse *Mouse `json:"Mouse,omitempty"`
+
+ HvSocket *HvSocket2 `json:"HvSocket,omitempty"`
+
+ EnhancedModeVideo *EnhancedModeVideo `json:"EnhancedModeVideo,omitempty"`
+
+ GuestCrashReporting *GuestCrashReporting `json:"GuestCrashReporting,omitempty"`
+
+ VirtualSmb *VirtualSmb `json:"VirtualSmb,omitempty"`
+
+ Plan9 *Plan9 `json:"Plan9,omitempty"`
+
+ Battery *Battery `json:"Battery,omitempty"`
+
+ FlexibleIov map[string]FlexibleIoDevice `json:"FlexibleIov,omitempty"`
+
+ SharedMemory *SharedMemoryConfiguration `json:"SharedMemory,omitempty"`
+
+ // TODO: This is pre-release support in schema 2.3. Need to add build number
+ // docs when a public build with this is out.
+ VirtualPci map[string]VirtualPciDevice `json:",omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/enhanced_mode_video.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/enhanced_mode_video.go
new file mode 100644
index 000000000..85450c41e
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/enhanced_mode_video.go
@@ -0,0 +1,14 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type EnhancedModeVideo struct {
+ ConnectionOptions *RdpConnectionOptions `json:"ConnectionOptions,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/flexible_io_device.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/flexible_io_device.go
new file mode 100644
index 000000000..fe86cab65
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/flexible_io_device.go
@@ -0,0 +1,18 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type FlexibleIoDevice struct {
+ EmulatorId string `json:"EmulatorId,omitempty"`
+
+ HostingModel string `json:"HostingModel,omitempty"`
+
+ Configuration []string `json:"Configuration,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/guest_connection.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/guest_connection.go
new file mode 100644
index 000000000..7db29495b
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/guest_connection.go
@@ -0,0 +1,19 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type GuestConnection struct {
+
+ // Use Vsock rather than Hyper-V sockets to communicate with the guest service.
+ UseVsock bool `json:"UseVsock,omitempty"`
+
+ // Don't disconnect the guest connection when pausing the virtual machine.
+ UseConnectedSuspend bool `json:"UseConnectedSuspend,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/guest_connection_info.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/guest_connection_info.go
new file mode 100644
index 000000000..8a369bab7
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/guest_connection_info.go
@@ -0,0 +1,21 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+// Information about the guest.
+type GuestConnectionInfo struct {
+
+ // Each schema version x.y stands for the range of versions a.b where a==x and b<=y. This list comes from the SupportedSchemaVersions field in GcsCapabilities.
+ SupportedSchemaVersions []Version `json:"SupportedSchemaVersions,omitempty"`
+
+ ProtocolVersion int32 `json:"ProtocolVersion,omitempty"`
+
+ GuestDefinedCapabilities *interface{} `json:"GuestDefinedCapabilities,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/guest_crash_reporting.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/guest_crash_reporting.go
new file mode 100644
index 000000000..af8280048
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/guest_crash_reporting.go
@@ -0,0 +1,14 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type GuestCrashReporting struct {
+ WindowsCrashSettings *WindowsCrashReporting `json:"WindowsCrashSettings,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/guest_os.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/guest_os.go
new file mode 100644
index 000000000..8838519a3
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/guest_os.go
@@ -0,0 +1,14 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type GuestOs struct {
+ HostName string `json:"HostName,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/guest_state.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/guest_state.go
new file mode 100644
index 000000000..ef1eec886
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/guest_state.go
@@ -0,0 +1,22 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type GuestState struct {
+
+ // The path to an existing file uses for persistent guest state storage. An empty string indicates the system should initialize new transient, in-memory guest state.
+ GuestStateFilePath string `json:"GuestStateFilePath,omitempty"`
+
+ // The path to an existing file for persistent runtime state storage. An empty string indicates the system should initialize new transient, in-memory runtime state.
+ RuntimeStateFilePath string `json:"RuntimeStateFilePath,omitempty"`
+
+ // If true, the guest state and runtime state files will be used as templates to populate transient, in-memory state instead of using the files as persistent backing store.
+ ForceTransientState bool `json:"ForceTransientState,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/host_processor_modify_request.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/host_processor_modify_request.go
new file mode 100644
index 000000000..2238ce530
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/host_processor_modify_request.go
@@ -0,0 +1,16 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+// Structure used to request a service processor modification
+type HostProcessorModificationRequest struct {
+ Operation CPUGroupOperation `json:"Operation,omitempty"`
+ OperationDetails interface{} `json:"OperationDetails,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/hosted_system.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/hosted_system.go
new file mode 100644
index 000000000..ea3084bca
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/hosted_system.go
@@ -0,0 +1,16 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type HostedSystem struct {
+ SchemaVersion *Version `json:"SchemaVersion,omitempty"`
+
+ Container *Container `json:"Container,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/hv_socket.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/hv_socket.go
new file mode 100644
index 000000000..23b2ee9e7
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/hv_socket.go
@@ -0,0 +1,16 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type HvSocket struct {
+ Config *HvSocketSystemConfig `json:"Config,omitempty"`
+
+ EnablePowerShellDirect bool `json:"EnablePowerShellDirect,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/hv_socket_2.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/hv_socket_2.go
new file mode 100644
index 000000000..a017691f0
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/hv_socket_2.go
@@ -0,0 +1,15 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+// HvSocket configuration for a VM
+type HvSocket2 struct {
+ HvSocketConfig *HvSocketSystemConfig `json:"HvSocketConfig,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/hv_socket_address.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/hv_socket_address.go
new file mode 100644
index 000000000..84c11b93e
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/hv_socket_address.go
@@ -0,0 +1,17 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+// This class defines address settings applied to a VM
+// by the GCS every time a VM starts or restores.
+type HvSocketAddress struct {
+ LocalAddress string `json:"LocalAddress,omitempty"`
+ ParentAddress string `json:"ParentAddress,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/hv_socket_service_config.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/hv_socket_service_config.go
new file mode 100644
index 000000000..ecd9f7fba
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/hv_socket_service_config.go
@@ -0,0 +1,28 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type HvSocketServiceConfig struct {
+
+ // SDDL string that HvSocket will check before allowing a host process to bind to this specific service. If not specified, defaults to the system DefaultBindSecurityDescriptor, defined in HvSocketSystemWpConfig in V1.
+ BindSecurityDescriptor string `json:"BindSecurityDescriptor,omitempty"`
+
+ // SDDL string that HvSocket will check before allowing a host process to connect to this specific service. If not specified, defaults to the system DefaultConnectSecurityDescriptor, defined in HvSocketSystemWpConfig in V1.
+ ConnectSecurityDescriptor string `json:"ConnectSecurityDescriptor,omitempty"`
+
+ // If true, HvSocket will process wildcard binds for this service/system combination. Wildcard binds are secured in the registry at SOFTWARE/Microsoft/Windows NT/CurrentVersion/Virtualization/HvSocket/WildcardDescriptors
+ AllowWildcardBinds bool `json:"AllowWildcardBinds,omitempty"`
+
+ // Disabled controls whether the HvSocket service is accepting connection requests.
+ // This set to true will make the service refuse all incoming connections as well as cancel
+ // any connections already established. The service itself will still be active however
+ // and can be re-enabled at a future time.
+ Disabled bool `json:"Disabled,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/hv_socket_system_config.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/hv_socket_system_config.go
new file mode 100644
index 000000000..69f4f9d39
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/hv_socket_system_config.go
@@ -0,0 +1,22 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+// This is the HCS Schema version of the HvSocket configuration. The VMWP version is located in Config.Devices.IC in V1.
+type HvSocketSystemConfig struct {
+
+ // SDDL string that HvSocket will check before allowing a host process to bind to an unlisted service for this specific container/VM (not wildcard binds).
+ DefaultBindSecurityDescriptor string `json:"DefaultBindSecurityDescriptor,omitempty"`
+
+ // SDDL string that HvSocket will check before allowing a host process to connect to an unlisted service in the VM/container.
+ DefaultConnectSecurityDescriptor string `json:"DefaultConnectSecurityDescriptor,omitempty"`
+
+ ServiceTable map[string]HvSocketServiceConfig `json:"ServiceTable,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/interrupt_moderation_mode.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/interrupt_moderation_mode.go
new file mode 100644
index 000000000..a614d63bd
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/interrupt_moderation_mode.go
@@ -0,0 +1,42 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type InterruptModerationName string
+
+// The valid interrupt moderation modes for I/O virtualization (IOV) offloading.
+const (
+ DefaultName InterruptModerationName = "Default"
+ AdaptiveName InterruptModerationName = "Adaptive"
+ OffName InterruptModerationName = "Off"
+ LowName InterruptModerationName = "Low"
+ MediumName InterruptModerationName = "Medium"
+ HighName InterruptModerationName = "High"
+)
+
+type InterruptModerationValue uint32
+
+const (
+ DefaultValue InterruptModerationValue = iota
+ AdaptiveValue
+ OffValue
+ LowValue InterruptModerationValue = 100
+ MediumValue InterruptModerationValue = 200
+ HighValue InterruptModerationValue = 300
+)
+
+var InterruptModerationValueToName = map[InterruptModerationValue]InterruptModerationName{
+ DefaultValue: DefaultName,
+ AdaptiveValue: AdaptiveName,
+ OffValue: OffName,
+ LowValue: LowName,
+ MediumValue: MediumName,
+ HighValue: HighName,
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/iov_settings.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/iov_settings.go
new file mode 100644
index 000000000..2a55cc37c
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/iov_settings.go
@@ -0,0 +1,22 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type IovSettings struct {
+ // The weight assigned to this port for I/O virtualization (IOV) offloading.
+ // Setting this to 0 disables IOV offloading.
+ OffloadWeight *uint32 `json:"OffloadWeight,omitempty"`
+
+ // The number of queue pairs requested for this port for I/O virtualization (IOV) offloading.
+ QueuePairsRequested *uint32 `json:"QueuePairsRequested,omitempty"`
+
+ // The interrupt moderation mode for I/O virtualization (IOV) offloading.
+ InterruptModeration *InterruptModerationName `json:"InterruptModeration,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/keyboard.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/keyboard.go
new file mode 100644
index 000000000..3d3fa3b1c
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/keyboard.go
@@ -0,0 +1,13 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type Keyboard struct {
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/layer.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/layer.go
new file mode 100644
index 000000000..176c49d49
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/layer.go
@@ -0,0 +1,21 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type Layer struct {
+ Id string `json:"Id,omitempty"`
+
+ Path string `json:"Path,omitempty"`
+
+ PathType string `json:"PathType,omitempty"`
+
+ // Unspecified defaults to Enabled
+ Cache string `json:"Cache,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/linux_kernel_direct.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/linux_kernel_direct.go
new file mode 100644
index 000000000..0ab6c280f
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/linux_kernel_direct.go
@@ -0,0 +1,18 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.2
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type LinuxKernelDirect struct {
+ KernelFilePath string `json:"KernelFilePath,omitempty"`
+
+ InitRdPath string `json:"InitRdPath,omitempty"`
+
+ KernelCmdLine string `json:"KernelCmdLine,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/logical_processor.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/logical_processor.go
new file mode 100644
index 000000000..2e3aa5e17
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/logical_processor.go
@@ -0,0 +1,18 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type LogicalProcessor struct {
+ LpIndex uint32 `json:"LpIndex,omitempty"`
+ NodeNumber uint8 `json:"NodeNumber,omitempty"`
+ PackageId uint32 `json:"PackageId,omitempty"`
+ CoreId uint32 `json:"CoreId,omitempty"`
+ RootVpIndex int32 `json:"RootVpIndex,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/mapped_directory.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/mapped_directory.go
new file mode 100644
index 000000000..9b86a4045
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/mapped_directory.go
@@ -0,0 +1,20 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type MappedDirectory struct {
+ HostPath string `json:"HostPath,omitempty"`
+
+ HostPathType string `json:"HostPathType,omitempty"`
+
+ ContainerPath string `json:"ContainerPath,omitempty"`
+
+ ReadOnly bool `json:"ReadOnly,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/mapped_pipe.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/mapped_pipe.go
new file mode 100644
index 000000000..208074e9a
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/mapped_pipe.go
@@ -0,0 +1,18 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type MappedPipe struct {
+ ContainerPipeName string `json:"ContainerPipeName,omitempty"`
+
+ HostPath string `json:"HostPath,omitempty"`
+
+ HostPathType string `json:"HostPathType,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/memory.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/memory.go
new file mode 100644
index 000000000..30749c672
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/memory.go
@@ -0,0 +1,14 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type Memory struct {
+ SizeInMB uint64 `json:"SizeInMB,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/memory_2.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/memory_2.go
new file mode 100644
index 000000000..71224c75b
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/memory_2.go
@@ -0,0 +1,49 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type Memory2 struct {
+ SizeInMB uint64 `json:"SizeInMB,omitempty"`
+
+ AllowOvercommit bool `json:"AllowOvercommit,omitempty"`
+
+ EnableHotHint bool `json:"EnableHotHint,omitempty"`
+
+ EnableColdHint bool `json:"EnableColdHint,omitempty"`
+
+ EnableEpf bool `json:"EnableEpf,omitempty"`
+
+ // EnableDeferredCommit is private in the schema. If regenerated need to add back.
+ EnableDeferredCommit bool `json:"EnableDeferredCommit,omitempty"`
+
+ // EnableColdDiscardHint if enabled, then the memory cold discard hint feature is exposed
+ // to the VM, allowing it to trim non-zeroed pages from the working set (if supported by
+ // the guest operating system).
+ EnableColdDiscardHint bool `json:"EnableColdDiscardHint,omitempty"`
+
+ // LowMmioGapInMB is the low MMIO region allocated below 4GB.
+ //
+ // TODO: This is pre-release support in schema 2.3. Need to add build number
+ // docs when a public build with this is out.
+ LowMMIOGapInMB uint64 `json:"LowMmioGapInMB,omitempty"`
+
+ // HighMmioBaseInMB is the high MMIO region allocated above 4GB (base and
+ // size).
+ //
+ // TODO: This is pre-release support in schema 2.3. Need to add build number
+ // docs when a public build with this is out.
+ HighMMIOBaseInMB uint64 `json:"HighMmioBaseInMB,omitempty"`
+
+ // HighMmioGapInMB is the high MMIO region.
+ //
+ // TODO: This is pre-release support in schema 2.3. Need to add build number
+ // docs when a public build with this is out.
+ HighMMIOGapInMB uint64 `json:"HighMmioGapInMB,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/memory_information_for_vm.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/memory_information_for_vm.go
new file mode 100644
index 000000000..811779b04
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/memory_information_for_vm.go
@@ -0,0 +1,18 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type MemoryInformationForVm struct {
+ VirtualNodeCount uint32 `json:"VirtualNodeCount,omitempty"`
+
+ VirtualMachineMemory *VmMemory `json:"VirtualMachineMemory,omitempty"`
+
+ VirtualNodes []VirtualNodeInfo `json:"VirtualNodes,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/memory_stats.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/memory_stats.go
new file mode 100644
index 000000000..906ba597f
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/memory_stats.go
@@ -0,0 +1,19 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+// Memory runtime statistics
+type MemoryStats struct {
+ MemoryUsageCommitBytes uint64 `json:"MemoryUsageCommitBytes,omitempty"`
+
+ MemoryUsageCommitPeakBytes uint64 `json:"MemoryUsageCommitPeakBytes,omitempty"`
+
+ MemoryUsagePrivateWorkingSetBytes uint64 `json:"MemoryUsagePrivateWorkingSetBytes,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_container_definition_device.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_container_definition_device.go
new file mode 100644
index 000000000..8dbe40b3b
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_container_definition_device.go
@@ -0,0 +1,14 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type ContainerDefinitionDevice struct {
+ DeviceExtension []DeviceExtension `json:"device_extension,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_device_category.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_device_category.go
new file mode 100644
index 000000000..8fe89f927
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_device_category.go
@@ -0,0 +1,15 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type DeviceCategory struct {
+ Name string `json:"name,omitempty"`
+ InterfaceClass []InterfaceClass `json:"interface_class,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_device_extension.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_device_extension.go
new file mode 100644
index 000000000..a62568d89
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_device_extension.go
@@ -0,0 +1,15 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type DeviceExtension struct {
+ DeviceCategory *DeviceCategory `json:"device_category,omitempty"`
+ Namespace *DeviceExtensionNamespace `json:"namespace,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_device_instance.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_device_instance.go
new file mode 100644
index 000000000..a7410febd
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_device_instance.go
@@ -0,0 +1,17 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type DeviceInstance struct {
+ Id string `json:"id,omitempty"`
+ LocationPath string `json:"location_path,omitempty"`
+ PortName string `json:"port_name,omitempty"`
+ InterfaceClass []InterfaceClass `json:"interface_class,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_device_namespace.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_device_namespace.go
new file mode 100644
index 000000000..355364064
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_device_namespace.go
@@ -0,0 +1,16 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type DeviceNamespace struct {
+ RequiresDriverstore bool `json:"requires_driverstore,omitempty"`
+ DeviceCategory []DeviceCategory `json:"device_category,omitempty"`
+ DeviceInstance []DeviceInstance `json:"device_instance,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_interface_class.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_interface_class.go
new file mode 100644
index 000000000..7be98b541
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_interface_class.go
@@ -0,0 +1,16 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type InterfaceClass struct {
+ Type_ string `json:"type,omitempty"`
+ Identifier string `json:"identifier,omitempty"`
+ Recurse bool `json:"recurse,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_namespace.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_namespace.go
new file mode 100644
index 000000000..3ab9cf1ec
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_namespace.go
@@ -0,0 +1,15 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type DeviceExtensionNamespace struct {
+ Ob *ObjectNamespace `json:"ob,omitempty"`
+ Device *DeviceNamespace `json:"device,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_object_directory.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_object_directory.go
new file mode 100644
index 000000000..d2f51b3b5
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_object_directory.go
@@ -0,0 +1,18 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type ObjectDirectory struct {
+ Name string `json:"name,omitempty"`
+ Clonesd string `json:"clonesd,omitempty"`
+ Shadow string `json:"shadow,omitempty"`
+ Symlink []ObjectSymlink `json:"symlink,omitempty"`
+ Objdir []ObjectDirectory `json:"objdir,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_object_namespace.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_object_namespace.go
new file mode 100644
index 000000000..47dfb55bf
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_object_namespace.go
@@ -0,0 +1,16 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type ObjectNamespace struct {
+ Shadow string `json:"shadow,omitempty"`
+ Symlink []ObjectSymlink `json:"symlink,omitempty"`
+ Objdir []ObjectDirectory `json:"objdir,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_object_symlink.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_object_symlink.go
new file mode 100644
index 000000000..8867ebe5f
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/model_object_symlink.go
@@ -0,0 +1,18 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type ObjectSymlink struct {
+ Name string `json:"name,omitempty"`
+ Path string `json:"path,omitempty"`
+ Scope string `json:"scope,omitempty"`
+ Pathtoclone string `json:"pathtoclone,omitempty"`
+ AccessMask int32 `json:"access_mask,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/modification_request.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/modification_request.go
new file mode 100644
index 000000000..1384ed888
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/modification_request.go
@@ -0,0 +1,15 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type ModificationRequest struct {
+ PropertyType PropertyType `json:"PropertyType,omitempty"`
+ Settings interface{} `json:"Settings,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/modify_setting_request.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/modify_setting_request.go
new file mode 100644
index 000000000..d29455a3e
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/modify_setting_request.go
@@ -0,0 +1,20 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type ModifySettingRequest struct {
+ ResourcePath string `json:"ResourcePath,omitempty"`
+
+ RequestType string `json:"RequestType,omitempty"`
+
+ Settings interface{} `json:"Settings,omitempty"` // NOTE: Swagger generated as *interface{}. Locally updated
+
+ GuestRequest interface{} `json:"GuestRequest,omitempty"` // NOTE: Swagger generated as *interface{}. Locally updated
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/mouse.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/mouse.go
new file mode 100644
index 000000000..ccf8b938f
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/mouse.go
@@ -0,0 +1,13 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type Mouse struct {
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/network_adapter.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/network_adapter.go
new file mode 100644
index 000000000..7408abd31
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/network_adapter.go
@@ -0,0 +1,17 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type NetworkAdapter struct {
+ EndpointId string `json:"EndpointId,omitempty"`
+ MacAddress string `json:"MacAddress,omitempty"`
+ // The I/O virtualization (IOV) offloading configuration.
+ IovSettings *IovSettings `json:"IovSettings,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/networking.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/networking.go
new file mode 100644
index 000000000..e5ea187a2
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/networking.go
@@ -0,0 +1,23 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type Networking struct {
+ AllowUnqualifiedDnsQuery bool `json:"AllowUnqualifiedDnsQuery,omitempty"`
+
+ DnsSearchList string `json:"DnsSearchList,omitempty"`
+
+ NetworkSharedContainerName string `json:"NetworkSharedContainerName,omitempty"`
+
+ // Guid in windows; string in linux
+ Namespace string `json:"Namespace,omitempty"`
+
+ NetworkAdapters []string `json:"NetworkAdapters,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/pause_notification.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/pause_notification.go
new file mode 100644
index 000000000..d96c9501f
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/pause_notification.go
@@ -0,0 +1,15 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+// Notification data that is indicated to components running in the Virtual Machine.
+type PauseNotification struct {
+ Reason string `json:"Reason,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/pause_options.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/pause_options.go
new file mode 100644
index 000000000..21707a88e
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/pause_options.go
@@ -0,0 +1,17 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+// Options for HcsPauseComputeSystem
+type PauseOptions struct {
+ SuspensionLevel string `json:"SuspensionLevel,omitempty"`
+
+ HostedNotification *PauseNotification `json:"HostedNotification,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/plan9.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/plan9.go
new file mode 100644
index 000000000..29d8c8012
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/plan9.go
@@ -0,0 +1,14 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type Plan9 struct {
+ Shares []Plan9Share `json:"Shares,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/plan9_share.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/plan9_share.go
new file mode 100644
index 000000000..41f8fdea0
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/plan9_share.go
@@ -0,0 +1,34 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type Plan9Share struct {
+ Name string `json:"Name,omitempty"`
+
+	// The name by which the guest operating system can access this share, via the aname parameter in the Plan9 protocol.
+ AccessName string `json:"AccessName,omitempty"`
+
+ Path string `json:"Path,omitempty"`
+
+ Port int32 `json:"Port,omitempty"`
+
+ // Flags are marked private. Until they are exported correctly
+ //
+ // ReadOnly 0x00000001
+ // LinuxMetadata 0x00000004
+ // CaseSensitive 0x00000008
+ Flags int32 `json:"Flags,omitempty"`
+
+ ReadOnly bool `json:"ReadOnly,omitempty"`
+
+ UseShareRootIdentity bool `json:"UseShareRootIdentity,omitempty"`
+
+ AllowedFiles []string `json:"AllowedFiles,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/process_details.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/process_details.go
new file mode 100644
index 000000000..e9a662dd5
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/process_details.go
@@ -0,0 +1,33 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+import (
+ "time"
+)
+
+// Information about a process running in a container
+type ProcessDetails struct {
+ ProcessId int32 `json:"ProcessId,omitempty"`
+
+ ImageName string `json:"ImageName,omitempty"`
+
+ CreateTimestamp time.Time `json:"CreateTimestamp,omitempty"`
+
+ UserTime100ns int32 `json:"UserTime100ns,omitempty"`
+
+ KernelTime100ns int32 `json:"KernelTime100ns,omitempty"`
+
+ MemoryCommitBytes int32 `json:"MemoryCommitBytes,omitempty"`
+
+ MemoryWorkingSetPrivateBytes int32 `json:"MemoryWorkingSetPrivateBytes,omitempty"`
+
+ MemoryWorkingSetSharedBytes int32 `json:"MemoryWorkingSetSharedBytes,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/process_modify_request.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/process_modify_request.go
new file mode 100644
index 000000000..e4ed095c7
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/process_modify_request.go
@@ -0,0 +1,19 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+// Passed to HcsRpc_ModifyProcess
+type ProcessModifyRequest struct {
+ Operation string `json:"Operation,omitempty"`
+
+ ConsoleSize *ConsoleSize `json:"ConsoleSize,omitempty"`
+
+ CloseHandle *CloseHandle `json:"CloseHandle,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/process_parameters.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/process_parameters.go
new file mode 100644
index 000000000..82b0d0532
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/process_parameters.go
@@ -0,0 +1,46 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type ProcessParameters struct {
+ ApplicationName string `json:"ApplicationName,omitempty"`
+
+ CommandLine string `json:"CommandLine,omitempty"`
+
+ // optional alternative to CommandLine, currently only supported by Linux GCS
+ CommandArgs []string `json:"CommandArgs,omitempty"`
+
+ User string `json:"User,omitempty"`
+
+ WorkingDirectory string `json:"WorkingDirectory,omitempty"`
+
+ Environment map[string]string `json:"Environment,omitempty"`
+
+ // if set, will run as low-privilege process
+ RestrictedToken bool `json:"RestrictedToken,omitempty"`
+
+ // if set, ignore StdErrPipe
+ EmulateConsole bool `json:"EmulateConsole,omitempty"`
+
+ CreateStdInPipe bool `json:"CreateStdInPipe,omitempty"`
+
+ CreateStdOutPipe bool `json:"CreateStdOutPipe,omitempty"`
+
+ CreateStdErrPipe bool `json:"CreateStdErrPipe,omitempty"`
+
+ // height then width
+ ConsoleSize []int32 `json:"ConsoleSize,omitempty"`
+
+ // if set, find an existing session for the user and create the process in it
+ UseExistingLogin bool `json:"UseExistingLogin,omitempty"`
+
+ // if set, use the legacy console instead of conhost
+ UseLegacyConsole bool `json:"UseLegacyConsole,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/process_status.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/process_status.go
new file mode 100644
index 000000000..ad9a4fa9a
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/process_status.go
@@ -0,0 +1,21 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+// Status of a process running in a container
+type ProcessStatus struct {
+ ProcessId int32 `json:"ProcessId,omitempty"`
+
+ Exited bool `json:"Exited,omitempty"`
+
+ ExitCode int32 `json:"ExitCode,omitempty"`
+
+ LastWaitResult int32 `json:"LastWaitResult,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/processor.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/processor.go
new file mode 100644
index 000000000..bb24e88da
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/processor.go
@@ -0,0 +1,18 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type Processor struct {
+ Count int32 `json:"Count,omitempty"`
+
+ Maximum int32 `json:"Maximum,omitempty"`
+
+ Weight int32 `json:"Weight,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/processor_2.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/processor_2.go
new file mode 100644
index 000000000..c64f335ec
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/processor_2.go
@@ -0,0 +1,23 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.5
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type Processor2 struct {
+ Count int32 `json:"Count,omitempty"`
+
+ Limit int32 `json:"Limit,omitempty"`
+
+ Weight int32 `json:"Weight,omitempty"`
+
+ ExposeVirtualizationExtensions bool `json:"ExposeVirtualizationExtensions,omitempty"`
+
+	// An optional object that configures the CPU Group to which a Virtual Machine is going to bind.
+ CpuGroup *CpuGroup `json:"CpuGroup,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/processor_stats.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/processor_stats.go
new file mode 100644
index 000000000..6157e2522
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/processor_stats.go
@@ -0,0 +1,19 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+// CPU runtime statistics
+type ProcessorStats struct {
+ TotalRuntime100ns uint64 `json:"TotalRuntime100ns,omitempty"`
+
+ RuntimeUser100ns uint64 `json:"RuntimeUser100ns,omitempty"`
+
+ RuntimeKernel100ns uint64 `json:"RuntimeKernel100ns,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/processor_topology.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/processor_topology.go
new file mode 100644
index 000000000..885156e77
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/processor_topology.go
@@ -0,0 +1,15 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type ProcessorTopology struct {
+ LogicalProcessorCount uint32 `json:"LogicalProcessorCount,omitempty"`
+ LogicalProcessors []LogicalProcessor `json:"LogicalProcessors,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/properties.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/properties.go
new file mode 100644
index 000000000..17558cba0
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/properties.go
@@ -0,0 +1,54 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+import (
+ v1 "github.com/containerd/cgroups/stats/v1"
+)
+
+type Properties struct {
+ Id string `json:"Id,omitempty"`
+
+ SystemType string `json:"SystemType,omitempty"`
+
+ RuntimeOsType string `json:"RuntimeOsType,omitempty"`
+
+ Name string `json:"Name,omitempty"`
+
+ Owner string `json:"Owner,omitempty"`
+
+ RuntimeId string `json:"RuntimeId,omitempty"`
+
+ RuntimeTemplateId string `json:"RuntimeTemplateId,omitempty"`
+
+ State string `json:"State,omitempty"`
+
+ Stopped bool `json:"Stopped,omitempty"`
+
+ ExitType string `json:"ExitType,omitempty"`
+
+ Memory *MemoryInformationForVm `json:"Memory,omitempty"`
+
+ Statistics *Statistics `json:"Statistics,omitempty"`
+
+ ProcessList []ProcessDetails `json:"ProcessList,omitempty"`
+
+ TerminateOnLastHandleClosed bool `json:"TerminateOnLastHandleClosed,omitempty"`
+
+ HostingSystemId string `json:"HostingSystemId,omitempty"`
+
+ SharedMemoryRegionInfo []SharedMemoryRegionInfo `json:"SharedMemoryRegionInfo,omitempty"`
+
+ GuestConnectionInfo *GuestConnectionInfo `json:"GuestConnectionInfo,omitempty"`
+
+ // Metrics is not part of the API for HCS but this is used for LCOW v2 to
+ // return the full cgroup metrics from the guest.
+ Metrics *v1.Metrics `json:"LCOWMetrics,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/property_query.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/property_query.go
new file mode 100644
index 000000000..d6d80df13
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/property_query.go
@@ -0,0 +1,15 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+// By default the basic properties will be returned. This query provides a way to request specific properties.
+type PropertyQuery struct {
+ PropertyTypes []PropertyType `json:"PropertyTypes,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/property_type.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/property_type.go
new file mode 100644
index 000000000..98f2c96ed
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/property_type.go
@@ -0,0 +1,26 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type PropertyType string
+
+const (
+ PTMemory PropertyType = "Memory"
+ PTGuestMemory PropertyType = "GuestMemory"
+ PTStatistics PropertyType = "Statistics"
+ PTProcessList PropertyType = "ProcessList"
+ PTTerminateOnLastHandleClosed PropertyType = "TerminateOnLastHandleClosed"
+ PTSharedMemoryRegion PropertyType = "SharedMemoryRegion"
+ PTContainerCredentialGuard PropertyType = "ContainerCredentialGuard" // This field is not generated by swagger. This was added manually.
+ PTGuestConnection PropertyType = "GuestConnection"
+ PTICHeartbeatStatus PropertyType = "ICHeartbeatStatus"
+ PTProcessorTopology PropertyType = "ProcessorTopology"
+ PTCPUGroup PropertyType = "CpuGroup"
+)
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/rdp_connection_options.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/rdp_connection_options.go
new file mode 100644
index 000000000..8d5f5c171
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/rdp_connection_options.go
@@ -0,0 +1,16 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type RdpConnectionOptions struct {
+ AccessSids []string `json:"AccessSids,omitempty"`
+
+ NamedPipe string `json:"NamedPipe,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/registry_changes.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/registry_changes.go
new file mode 100644
index 000000000..006906f6e
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/registry_changes.go
@@ -0,0 +1,16 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type RegistryChanges struct {
+ AddValues []RegistryValue `json:"AddValues,omitempty"`
+
+ DeleteKeys []RegistryKey `json:"DeleteKeys,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/registry_key.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/registry_key.go
new file mode 100644
index 000000000..26fde99c7
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/registry_key.go
@@ -0,0 +1,18 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type RegistryKey struct {
+ Hive string `json:"Hive,omitempty"`
+
+ Name string `json:"Name,omitempty"`
+
+ Volatile bool `json:"Volatile,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/registry_value.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/registry_value.go
new file mode 100644
index 000000000..3f203176c
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/registry_value.go
@@ -0,0 +1,30 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type RegistryValue struct {
+ Key *RegistryKey `json:"Key,omitempty"`
+
+ Name string `json:"Name,omitempty"`
+
+ Type_ string `json:"Type,omitempty"`
+
+ // One and only one value type must be set.
+ StringValue string `json:"StringValue,omitempty"`
+
+ BinaryValue string `json:"BinaryValue,omitempty"`
+
+ DWordValue int32 `json:"DWordValue,omitempty"`
+
+ QWordValue int32 `json:"QWordValue,omitempty"`
+
+	// Only used if RegistryValueType is CustomType. The data is in BinaryValue.
+ CustomType int32 `json:"CustomType,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/restore_state.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/restore_state.go
new file mode 100644
index 000000000..778ff5873
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/restore_state.go
@@ -0,0 +1,19 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type RestoreState struct {
+
+ // The path to the save state file to restore the system from.
+ SaveStateFilePath string `json:"SaveStateFilePath,omitempty"`
+
+ // The ID of the template system to clone this new system off of. An empty string indicates the system should not be cloned from a template.
+ TemplateSystemId string `json:"TemplateSystemId,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/save_options.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/save_options.go
new file mode 100644
index 000000000..e55fa1d98
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/save_options.go
@@ -0,0 +1,19 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type SaveOptions struct {
+
+ // The type of save operation to be performed.
+ SaveType string `json:"SaveType,omitempty"`
+
+	// The path to the file that will contain the saved state.
+ SaveStateFilePath string `json:"SaveStateFilePath,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/scsi.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/scsi.go
new file mode 100644
index 000000000..bf253a470
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/scsi.go
@@ -0,0 +1,16 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type Scsi struct {
+
+ // Map of attachments, where the key is the integer LUN number on the controller.
+ Attachments map[string]Attachment `json:"Attachments,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/service_properties.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/service_properties.go
new file mode 100644
index 000000000..b8142ca6a
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/service_properties.go
@@ -0,0 +1,18 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+import "encoding/json"
+
+type ServiceProperties struct {
+ // Changed Properties field to []json.RawMessage from []interface{} to avoid having to
+ // remarshal sp.Properties[n] and unmarshal into the type(s) we want.
+ Properties []json.RawMessage `json:"Properties,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/shared_memory_configuration.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/shared_memory_configuration.go
new file mode 100644
index 000000000..df9baa921
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/shared_memory_configuration.go
@@ -0,0 +1,14 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type SharedMemoryConfiguration struct {
+ Regions []SharedMemoryRegion `json:"Regions,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/shared_memory_region.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/shared_memory_region.go
new file mode 100644
index 000000000..825b71865
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/shared_memory_region.go
@@ -0,0 +1,22 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type SharedMemoryRegion struct {
+ SectionName string `json:"SectionName,omitempty"`
+
+ StartOffset int32 `json:"StartOffset,omitempty"`
+
+ Length int32 `json:"Length,omitempty"`
+
+ AllowGuestWrite bool `json:"AllowGuestWrite,omitempty"`
+
+ HiddenFromGuest bool `json:"HiddenFromGuest,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/shared_memory_region_info.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/shared_memory_region_info.go
new file mode 100644
index 000000000..f67b08eb5
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/shared_memory_region_info.go
@@ -0,0 +1,16 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type SharedMemoryRegionInfo struct {
+ SectionName string `json:"SectionName,omitempty"`
+
+ GuestPhysicalAddress int32 `json:"GuestPhysicalAddress,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/silo_properties.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/silo_properties.go
new file mode 100644
index 000000000..5eaf6a7f4
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/silo_properties.go
@@ -0,0 +1,17 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+// Silo job information
+type SiloProperties struct {
+ Enabled bool `json:"Enabled,omitempty"`
+
+ JobName string `json:"JobName,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/statistics.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/statistics.go
new file mode 100644
index 000000000..ba7a6b396
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/statistics.go
@@ -0,0 +1,29 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+import (
+ "time"
+)
+
+// Runtime statistics for a container
+type Statistics struct {
+ Timestamp time.Time `json:"Timestamp,omitempty"`
+
+ ContainerStartTime time.Time `json:"ContainerStartTime,omitempty"`
+
+ Uptime100ns uint64 `json:"Uptime100ns,omitempty"`
+
+ Processor *ProcessorStats `json:"Processor,omitempty"`
+
+ Memory *MemoryStats `json:"Memory,omitempty"`
+
+ Storage *StorageStats `json:"Storage,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/storage.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/storage.go
new file mode 100644
index 000000000..2627af913
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/storage.go
@@ -0,0 +1,21 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type Storage struct {
+
+ // List of layers that describe the parent hierarchy for a container's storage. These layers combined together, presented as a disposable and/or committable working storage, are used by the container to record all changes done to the parent layers.
+ Layers []Layer `json:"Layers,omitempty"`
+
+ // Path that points to the scratch space of a container, where parent layers are combined together to present a new disposable and/or committable layer with the changes done during its runtime.
+ Path string `json:"Path,omitempty"`
+
+ QoS *StorageQoS `json:"QoS,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/storage_qo_s.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/storage_qo_s.go
new file mode 100644
index 000000000..9c5e6eb53
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/storage_qo_s.go
@@ -0,0 +1,16 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type StorageQoS struct {
+ IopsMaximum int32 `json:"IopsMaximum,omitempty"`
+
+ BandwidthMaximum int32 `json:"BandwidthMaximum,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/storage_stats.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/storage_stats.go
new file mode 100644
index 000000000..4f042ffd9
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/storage_stats.go
@@ -0,0 +1,21 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+// Storage runtime statistics
+type StorageStats struct {
+ ReadCountNormalized uint64 `json:"ReadCountNormalized,omitempty"`
+
+ ReadSizeBytes uint64 `json:"ReadSizeBytes,omitempty"`
+
+ WriteCountNormalized uint64 `json:"WriteCountNormalized,omitempty"`
+
+ WriteSizeBytes uint64 `json:"WriteSizeBytes,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/topology.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/topology.go
new file mode 100644
index 000000000..834869940
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/topology.go
@@ -0,0 +1,16 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type Topology struct {
+ Memory *Memory2 `json:"Memory,omitempty"`
+
+ Processor *Processor2 `json:"Processor,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/uefi.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/uefi.go
new file mode 100644
index 000000000..0e48ece50
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/uefi.go
@@ -0,0 +1,20 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type Uefi struct {
+ EnableDebugger bool `json:"EnableDebugger,omitempty"`
+
+ SecureBootTemplateId string `json:"SecureBootTemplateId,omitempty"`
+
+ BootThis *UefiBootEntry `json:"BootThis,omitempty"`
+
+ Console string `json:"Console,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/uefi_boot_entry.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/uefi_boot_entry.go
new file mode 100644
index 000000000..3ab409d82
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/uefi_boot_entry.go
@@ -0,0 +1,22 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type UefiBootEntry struct {
+ DeviceType string `json:"DeviceType,omitempty"`
+
+ DevicePath string `json:"DevicePath,omitempty"`
+
+ DiskNumber int32 `json:"DiskNumber,omitempty"`
+
+ OptionalData string `json:"OptionalData,omitempty"`
+
+ VmbFsRootPath string `json:"VmbFsRootPath,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/version.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/version.go
new file mode 100644
index 000000000..2abfccca3
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/version.go
@@ -0,0 +1,16 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type Version struct {
+ Major int32 `json:"Major,omitempty"`
+
+ Minor int32 `json:"Minor,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/video_monitor.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/video_monitor.go
new file mode 100644
index 000000000..ec5d0fb93
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/video_monitor.go
@@ -0,0 +1,18 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type VideoMonitor struct {
+ HorizontalResolution int32 `json:"HorizontalResolution,omitempty"`
+
+ VerticalResolution int32 `json:"VerticalResolution,omitempty"`
+
+ ConnectionOptions *RdpConnectionOptions `json:"ConnectionOptions,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_machine.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_machine.go
new file mode 100644
index 000000000..2d22b1bcb
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_machine.go
@@ -0,0 +1,32 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type VirtualMachine struct {
+
+ // StopOnReset is private in the schema. If regenerated need to put back.
+ StopOnReset bool `json:"StopOnReset,omitempty"`
+
+ Chipset *Chipset `json:"Chipset,omitempty"`
+
+ ComputeTopology *Topology `json:"ComputeTopology,omitempty"`
+
+ Devices *Devices `json:"Devices,omitempty"`
+
+ GuestState *GuestState `json:"GuestState,omitempty"`
+
+ RestoreState *RestoreState `json:"RestoreState,omitempty"`
+
+ RegistryChanges *RegistryChanges `json:"RegistryChanges,omitempty"`
+
+ StorageQoS *StorageQoS `json:"StorageQoS,omitempty"`
+
+ GuestConnection *GuestConnection `json:"GuestConnection,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_node_info.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_node_info.go
new file mode 100644
index 000000000..91a3c83d4
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_node_info.go
@@ -0,0 +1,20 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type VirtualNodeInfo struct {
+ VirtualNodeIndex int32 `json:"VirtualNodeIndex,omitempty"`
+
+ PhysicalNodeNumber int32 `json:"PhysicalNodeNumber,omitempty"`
+
+ VirtualProcessorCount int32 `json:"VirtualProcessorCount,omitempty"`
+
+ MemoryUsageInPages int32 `json:"MemoryUsageInPages,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_p_mem_controller.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_p_mem_controller.go
new file mode 100644
index 000000000..f5b7f3e38
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_p_mem_controller.go
@@ -0,0 +1,20 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type VirtualPMemController struct {
+ Devices map[string]VirtualPMemDevice `json:"Devices,omitempty"`
+
+ MaximumCount uint32 `json:"MaximumCount,omitempty"`
+
+ MaximumSizeBytes uint64 `json:"MaximumSizeBytes,omitempty"`
+
+ Backing string `json:"Backing,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_p_mem_device.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_p_mem_device.go
new file mode 100644
index 000000000..70cf2d90d
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_p_mem_device.go
@@ -0,0 +1,18 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type VirtualPMemDevice struct {
+ HostPath string `json:"HostPath,omitempty"`
+
+ ReadOnly bool `json:"ReadOnly,omitempty"`
+
+ ImageFormat string `json:"ImageFormat,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_p_mem_mapping.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_p_mem_mapping.go
new file mode 100644
index 000000000..9ef322f61
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_p_mem_mapping.go
@@ -0,0 +1,15 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type VirtualPMemMapping struct {
+ HostPath string `json:"HostPath,omitempty"`
+ ImageFormat string `json:"ImageFormat,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_pci_device.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_pci_device.go
new file mode 100644
index 000000000..f5e05903c
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_pci_device.go
@@ -0,0 +1,16 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.3
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+// TODO: This is pre-release support in schema 2.3. Need to add build number
+// docs when a public build with this is out.
+type VirtualPciDevice struct {
+ Functions []VirtualPciFunction `json:",omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_pci_function.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_pci_function.go
new file mode 100644
index 000000000..cedb7d18b
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_pci_function.go
@@ -0,0 +1,18 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.3
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+// TODO: This is pre-release support in schema 2.3. Need to add build number
+// docs when a public build with this is out.
+type VirtualPciFunction struct {
+ DeviceInstancePath string `json:",omitempty"`
+
+ VirtualFunction uint16 `json:",omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_smb.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_smb.go
new file mode 100644
index 000000000..362df363e
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_smb.go
@@ -0,0 +1,16 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type VirtualSmb struct {
+ Shares []VirtualSmbShare `json:"Shares,omitempty"`
+
+ DirectFileMappingInMB int64 `json:"DirectFileMappingInMB,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_smb_share.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_smb_share.go
new file mode 100644
index 000000000..915e9b638
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_smb_share.go
@@ -0,0 +1,20 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type VirtualSmbShare struct {
+ Name string `json:"Name,omitempty"`
+
+ Path string `json:"Path,omitempty"`
+
+ AllowedFiles []string `json:"AllowedFiles,omitempty"`
+
+ Options *VirtualSmbShareOptions `json:"Options,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_smb_share_options.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_smb_share_options.go
new file mode 100644
index 000000000..75196bd8c
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/virtual_smb_share_options.go
@@ -0,0 +1,62 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type VirtualSmbShareOptions struct {
+ ReadOnly bool `json:"ReadOnly,omitempty"`
+
+ // convert exclusive access to shared read access
+ ShareRead bool `json:"ShareRead,omitempty"`
+
+ // all opens will use cached I/O
+ CacheIo bool `json:"CacheIo,omitempty"`
+
+ // disable oplock support
+ NoOplocks bool `json:"NoOplocks,omitempty"`
+
+ // Acquire the backup privilege when attempting to open
+ TakeBackupPrivilege bool `json:"TakeBackupPrivilege,omitempty"`
+
+ // Use the identity of the share root when opening
+ UseShareRootIdentity bool `json:"UseShareRootIdentity,omitempty"`
+
+ // disable Direct Mapping
+ NoDirectmap bool `json:"NoDirectmap,omitempty"`
+
+	// disable byte-range locks
+ NoLocks bool `json:"NoLocks,omitempty"`
+
+	// disable Directory Change Notifications
+ NoDirnotify bool `json:"NoDirnotify,omitempty"`
+
+ // share is use for VM shared memory
+ VmSharedMemory bool `json:"VmSharedMemory,omitempty"`
+
+ // allow access only to the files specified in AllowedFiles
+ RestrictFileAccess bool `json:"RestrictFileAccess,omitempty"`
+
+ // disable all oplocks except Level II
+ ForceLevelIIOplocks bool `json:"ForceLevelIIOplocks,omitempty"`
+
+ // Allow the host to reparse this base layer
+ ReparseBaseLayer bool `json:"ReparseBaseLayer,omitempty"`
+
+ // Enable pseudo-oplocks
+ PseudoOplocks bool `json:"PseudoOplocks,omitempty"`
+
+ // All opens will use non-cached IO
+ NonCacheIo bool `json:"NonCacheIo,omitempty"`
+
+ // Enable pseudo directory change notifications
+ PseudoDirnotify bool `json:"PseudoDirnotify,omitempty"`
+
+ // Block directory enumeration, renames, and deletes.
+ SingleFileMapping bool `json:"SingleFileMapping,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/vm_memory.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/vm_memory.go
new file mode 100644
index 000000000..8e1836dd6
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/vm_memory.go
@@ -0,0 +1,26 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type VmMemory struct {
+ AvailableMemory int32 `json:"AvailableMemory,omitempty"`
+
+ AvailableMemoryBuffer int32 `json:"AvailableMemoryBuffer,omitempty"`
+
+ ReservedMemory uint64 `json:"ReservedMemory,omitempty"`
+
+ AssignedMemory uint64 `json:"AssignedMemory,omitempty"`
+
+ SlpActive bool `json:"SlpActive,omitempty"`
+
+ BalancingEnabled bool `json:"BalancingEnabled,omitempty"`
+
+ DmOperationInProgress bool `json:"DmOperationInProgress,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/vm_processor_limits.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/vm_processor_limits.go
new file mode 100644
index 000000000..de1b9cf1a
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/vm_processor_limits.go
@@ -0,0 +1,22 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.4
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+// ProcessorLimits is used when modifying processor scheduling limits of a virtual machine.
+type ProcessorLimits struct {
+ // Maximum amount of host CPU resources that the virtual machine can use.
+ Limit uint64 `json:"Limit,omitempty"`
+ // Value describing the relative priority of this virtual machine compared to other virtual machines.
+ Weight uint64 `json:"Weight,omitempty"`
+ // Minimum amount of host CPU resources that the virtual machine is guaranteed.
+ Reservation uint64 `json:"Reservation,omitempty"`
+ // Provides the target maximum CPU frequency, in MHz, for a virtual machine.
+ MaximumFrequencyMHz uint32 `json:"MaximumFrequencyMHz,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/windows_crash_reporting.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/windows_crash_reporting.go
new file mode 100644
index 000000000..8ed7e566d
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/schema2/windows_crash_reporting.go
@@ -0,0 +1,16 @@
+/*
+ * HCS API
+ *
+ * No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
+ *
+ * API version: 2.1
+ * Generated by: Swagger Codegen (https://github.com/swagger-api/swagger-codegen.git)
+ */
+
+package hcsschema
+
+type WindowsCrashReporting struct {
+ DumpFileName string `json:"DumpFileName,omitempty"`
+
+ MaxDumpSize int64 `json:"MaxDumpSize,omitempty"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/service.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/service.go
new file mode 100644
index 000000000..a634dfc15
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/service.go
@@ -0,0 +1,49 @@
+package hcs
+
+import (
+ "context"
+ "encoding/json"
+
+ hcsschema "github.com/Microsoft/hcsshim/internal/hcs/schema2"
+ "github.com/Microsoft/hcsshim/internal/vmcompute"
+)
+
+// GetServiceProperties returns properties of the host compute service.
+func GetServiceProperties(ctx context.Context, q hcsschema.PropertyQuery) (*hcsschema.ServiceProperties, error) {
+ operation := "hcs::GetServiceProperties"
+
+ queryb, err := json.Marshal(q)
+ if err != nil {
+ return nil, err
+ }
+ propertiesJSON, resultJSON, err := vmcompute.HcsGetServiceProperties(ctx, string(queryb))
+ events := processHcsResult(ctx, resultJSON)
+ if err != nil {
+ return nil, &HcsError{Op: operation, Err: err, Events: events}
+ }
+
+ if propertiesJSON == "" {
+ return nil, ErrUnexpectedValue
+ }
+ properties := &hcsschema.ServiceProperties{}
+ if err := json.Unmarshal([]byte(propertiesJSON), properties); err != nil {
+ return nil, err
+ }
+ return properties, nil
+}
+
+// ModifyServiceSettings modifies settings of the host compute service.
+func ModifyServiceSettings(ctx context.Context, settings hcsschema.ModificationRequest) error {
+ operation := "hcs::ModifyServiceSettings"
+
+ settingsJSON, err := json.Marshal(settings)
+ if err != nil {
+ return err
+ }
+ resultJSON, err := vmcompute.HcsModifyServiceSettings(ctx, string(settingsJSON))
+ events := processHcsResult(ctx, resultJSON)
+ if err != nil {
+ return &HcsError{Op: operation, Err: err, Events: events}
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/system.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/system.go
new file mode 100644
index 000000000..a76f6b253
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/system.go
@@ -0,0 +1,815 @@
+package hcs
+
+import (
+ "context"
+ "encoding/json"
+ "errors"
+ "fmt"
+ "strings"
+ "sync"
+ "syscall"
+ "time"
+
+ "github.com/Microsoft/hcsshim/internal/cow"
+ "github.com/Microsoft/hcsshim/internal/hcs/schema1"
+ hcsschema "github.com/Microsoft/hcsshim/internal/hcs/schema2"
+ "github.com/Microsoft/hcsshim/internal/jobobject"
+ "github.com/Microsoft/hcsshim/internal/log"
+ "github.com/Microsoft/hcsshim/internal/logfields"
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "github.com/Microsoft/hcsshim/internal/timeout"
+ "github.com/Microsoft/hcsshim/internal/vmcompute"
+ "github.com/sirupsen/logrus"
+ "go.opencensus.io/trace"
+)
+
+type System struct {
+ handleLock sync.RWMutex
+ handle vmcompute.HcsSystem
+ id string
+ callbackNumber uintptr
+
+ closedWaitOnce sync.Once
+ waitBlock chan struct{}
+ waitError error
+ exitError error
+ os, typ, owner string
+ startTime time.Time
+}
+
+func newSystem(id string) *System {
+ return &System{
+ id: id,
+ waitBlock: make(chan struct{}),
+ }
+}
+
+// Implementation detail for silo naming; this should NOT be relied upon very heavily.
+func siloNameFmt(containerID string) string {
+ return fmt.Sprintf(`\Container_%s`, containerID)
+}
+
+// CreateComputeSystem creates a new compute system with the given configuration but does not start it.
+func CreateComputeSystem(ctx context.Context, id string, hcsDocumentInterface interface{}) (_ *System, err error) {
+ operation := "hcs::CreateComputeSystem"
+
+ // hcsCreateComputeSystemContext is an async operation. Start the outer span
+ // here to measure the full create time.
+ ctx, span := trace.StartSpan(ctx, operation)
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(trace.StringAttribute("cid", id))
+
+ computeSystem := newSystem(id)
+
+ hcsDocumentB, err := json.Marshal(hcsDocumentInterface)
+ if err != nil {
+ return nil, err
+ }
+
+ hcsDocument := string(hcsDocumentB)
+
+ var (
+ identity syscall.Handle
+ resultJSON string
+ createError error
+ )
+ computeSystem.handle, resultJSON, createError = vmcompute.HcsCreateComputeSystem(ctx, id, hcsDocument, identity)
+ if createError == nil || IsPending(createError) {
+ defer func() {
+ if err != nil {
+ computeSystem.Close()
+ }
+ }()
+ if err = computeSystem.registerCallback(ctx); err != nil {
+ // Terminate the compute system if it still exists. We're okay to
+ // ignore a failure here.
+ _ = computeSystem.Terminate(ctx)
+ return nil, makeSystemError(computeSystem, operation, err, nil)
+ }
+ }
+
+ events, err := processAsyncHcsResult(ctx, createError, resultJSON, computeSystem.callbackNumber, hcsNotificationSystemCreateCompleted, &timeout.SystemCreate)
+ if err != nil {
+ if err == ErrTimeout {
+ // Terminate the compute system if it still exists. We're okay to
+ // ignore a failure here.
+ _ = computeSystem.Terminate(ctx)
+ }
+ return nil, makeSystemError(computeSystem, operation, err, events)
+ }
+ go computeSystem.waitBackground()
+ if err = computeSystem.getCachedProperties(ctx); err != nil {
+ return nil, err
+ }
+ return computeSystem, nil
+}
+
+// OpenComputeSystem opens an existing compute system by ID.
+func OpenComputeSystem(ctx context.Context, id string) (*System, error) {
+ operation := "hcs::OpenComputeSystem"
+
+ computeSystem := newSystem(id)
+ handle, resultJSON, err := vmcompute.HcsOpenComputeSystem(ctx, id)
+ events := processHcsResult(ctx, resultJSON)
+ if err != nil {
+ return nil, makeSystemError(computeSystem, operation, err, events)
+ }
+ computeSystem.handle = handle
+ defer func() {
+ if err != nil {
+ computeSystem.Close()
+ }
+ }()
+ if err = computeSystem.registerCallback(ctx); err != nil {
+ return nil, makeSystemError(computeSystem, operation, err, nil)
+ }
+ go computeSystem.waitBackground()
+ if err = computeSystem.getCachedProperties(ctx); err != nil {
+ return nil, err
+ }
+ return computeSystem, nil
+}
+
+func (computeSystem *System) getCachedProperties(ctx context.Context) error {
+ props, err := computeSystem.Properties(ctx)
+ if err != nil {
+ return err
+ }
+ computeSystem.typ = strings.ToLower(props.SystemType)
+ computeSystem.os = strings.ToLower(props.RuntimeOSType)
+ computeSystem.owner = strings.ToLower(props.Owner)
+ if computeSystem.os == "" && computeSystem.typ == "container" {
+ // Pre-RS5 HCS did not return the OS, but it only supported containers
+ // that ran Windows.
+ computeSystem.os = "windows"
+ }
+ return nil
+}
+
+// OS returns the operating system of the compute system, "linux" or "windows".
+func (computeSystem *System) OS() string {
+ return computeSystem.os
+}
+
+// IsOCI returns whether processes in the compute system should be created via
+// OCI.
+func (computeSystem *System) IsOCI() bool {
+ return computeSystem.os == "linux" && computeSystem.typ == "container"
+}
+
+// GetComputeSystems gets a list of the compute systems on the system that match the query
+func GetComputeSystems(ctx context.Context, q schema1.ComputeSystemQuery) ([]schema1.ContainerProperties, error) {
+ operation := "hcs::GetComputeSystems"
+
+ queryb, err := json.Marshal(q)
+ if err != nil {
+ return nil, err
+ }
+
+ computeSystemsJSON, resultJSON, err := vmcompute.HcsEnumerateComputeSystems(ctx, string(queryb))
+ events := processHcsResult(ctx, resultJSON)
+ if err != nil {
+ return nil, &HcsError{Op: operation, Err: err, Events: events}
+ }
+
+ if computeSystemsJSON == "" {
+ return nil, ErrUnexpectedValue
+ }
+ computeSystems := []schema1.ContainerProperties{}
+ if err = json.Unmarshal([]byte(computeSystemsJSON), &computeSystems); err != nil {
+ return nil, err
+ }
+
+ return computeSystems, nil
+}
+
+// Start synchronously starts the computeSystem.
+func (computeSystem *System) Start(ctx context.Context) (err error) {
+ operation := "hcs::System::Start"
+
+ // hcsStartComputeSystemContext is an async operation. Start the outer span
+ // here to measure the full start time.
+ ctx, span := trace.StartSpan(ctx, operation)
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(trace.StringAttribute("cid", computeSystem.id))
+
+ computeSystem.handleLock.RLock()
+ defer computeSystem.handleLock.RUnlock()
+
+ if computeSystem.handle == 0 {
+ return makeSystemError(computeSystem, operation, ErrAlreadyClosed, nil)
+ }
+
+ resultJSON, err := vmcompute.HcsStartComputeSystem(ctx, computeSystem.handle, "")
+ events, err := processAsyncHcsResult(ctx, err, resultJSON, computeSystem.callbackNumber, hcsNotificationSystemStartCompleted, &timeout.SystemStart)
+ if err != nil {
+ return makeSystemError(computeSystem, operation, err, events)
+ }
+ computeSystem.startTime = time.Now()
+ return nil
+}
+
+// ID returns the compute system's identifier.
+func (computeSystem *System) ID() string {
+ return computeSystem.id
+}
+
+// Shutdown requests a compute system shutdown.
+func (computeSystem *System) Shutdown(ctx context.Context) error {
+ computeSystem.handleLock.RLock()
+ defer computeSystem.handleLock.RUnlock()
+
+ operation := "hcs::System::Shutdown"
+
+ if computeSystem.handle == 0 {
+ return nil
+ }
+
+ resultJSON, err := vmcompute.HcsShutdownComputeSystem(ctx, computeSystem.handle, "")
+ events := processHcsResult(ctx, resultJSON)
+ switch err {
+ case nil, ErrVmcomputeAlreadyStopped, ErrComputeSystemDoesNotExist, ErrVmcomputeOperationPending:
+ default:
+ return makeSystemError(computeSystem, operation, err, events)
+ }
+ return nil
+}
+
+// Terminate requests a compute system terminate.
+func (computeSystem *System) Terminate(ctx context.Context) error {
+ computeSystem.handleLock.RLock()
+ defer computeSystem.handleLock.RUnlock()
+
+ operation := "hcs::System::Terminate"
+
+ if computeSystem.handle == 0 {
+ return nil
+ }
+
+ resultJSON, err := vmcompute.HcsTerminateComputeSystem(ctx, computeSystem.handle, "")
+ events := processHcsResult(ctx, resultJSON)
+ switch err {
+ case nil, ErrVmcomputeAlreadyStopped, ErrComputeSystemDoesNotExist, ErrVmcomputeOperationPending:
+ default:
+ return makeSystemError(computeSystem, operation, err, events)
+ }
+ return nil
+}
+
+// waitBackground waits for the compute system exit notification. Once received,
+// it sets `computeSystem.waitError` (if any) and unblocks all `Wait` calls.
+//
+// This MUST be called exactly once per `computeSystem.handle` but `Wait` is
+// safe to call multiple times.
+func (computeSystem *System) waitBackground() {
+ operation := "hcs::System::waitBackground"
+ ctx, span := trace.StartSpan(context.Background(), operation)
+ defer span.End()
+ span.AddAttributes(trace.StringAttribute("cid", computeSystem.id))
+
+ err := waitForNotification(ctx, computeSystem.callbackNumber, hcsNotificationSystemExited, nil)
+ switch err {
+ case nil:
+ log.G(ctx).Debug("system exited")
+ case ErrVmcomputeUnexpectedExit:
+ log.G(ctx).Debug("unexpected system exit")
+ computeSystem.exitError = makeSystemError(computeSystem, operation, err, nil)
+ err = nil
+ default:
+ err = makeSystemError(computeSystem, operation, err, nil)
+ }
+ computeSystem.closedWaitOnce.Do(func() {
+ computeSystem.waitError = err
+ close(computeSystem.waitBlock)
+ })
+ oc.SetSpanStatus(span, err)
+}
+
+func (computeSystem *System) WaitChannel() <-chan struct{} {
+ return computeSystem.waitBlock
+}
+
+func (computeSystem *System) WaitError() error {
+ return computeSystem.waitError
+}
+
+// Wait synchronously waits for the compute system to shut down or terminate. If
+// the compute system has already exited, Wait returns the previous error (if any).
+func (computeSystem *System) Wait() error {
+ <-computeSystem.WaitChannel()
+ return computeSystem.WaitError()
+}
+
+// ExitError returns an error describing the reason the compute system terminated.
+func (computeSystem *System) ExitError() error {
+ select {
+ case <-computeSystem.waitBlock:
+ if computeSystem.waitError != nil {
+ return computeSystem.waitError
+ }
+ return computeSystem.exitError
+ default:
+ return errors.New("container not exited")
+ }
+}
+
+// Properties returns the requested container properties targeting a V1 schema container.
+func (computeSystem *System) Properties(ctx context.Context, types ...schema1.PropertyType) (*schema1.ContainerProperties, error) {
+ computeSystem.handleLock.RLock()
+ defer computeSystem.handleLock.RUnlock()
+
+ operation := "hcs::System::Properties"
+
+ queryBytes, err := json.Marshal(schema1.PropertyQuery{PropertyTypes: types})
+ if err != nil {
+ return nil, makeSystemError(computeSystem, operation, err, nil)
+ }
+
+ propertiesJSON, resultJSON, err := vmcompute.HcsGetComputeSystemProperties(ctx, computeSystem.handle, string(queryBytes))
+ events := processHcsResult(ctx, resultJSON)
+ if err != nil {
+ return nil, makeSystemError(computeSystem, operation, err, events)
+ }
+
+ if propertiesJSON == "" {
+ return nil, ErrUnexpectedValue
+ }
+ properties := &schema1.ContainerProperties{}
+ if err := json.Unmarshal([]byte(propertiesJSON), properties); err != nil {
+ return nil, makeSystemError(computeSystem, operation, err, nil)
+ }
+
+ return properties, nil
+}
+
+// queryInProc handles querying for container properties without reaching out to HCS. `props`
+// will be updated to contain any data returned from the queries present in `types`. If any properties
+// failed to be queried they will be tallied up and returned as the first return value. Failures on
+// query are NOT considered errors; the only failure case for this method is if the container's job object
+// cannot be opened.
+func (computeSystem *System) queryInProc(ctx context.Context, props *hcsschema.Properties, types []hcsschema.PropertyType) ([]hcsschema.PropertyType, error) {
+ // In the future we can make use of some new functionality in the HCS that allows you
+ // to pass a job object for HCS to use for the container. Currently, the only way we'll
+ // be able to open the job/silo is if we're running as SYSTEM.
+ jobOptions := &jobobject.Options{
+ UseNTVariant: true,
+ Name: siloNameFmt(computeSystem.id),
+ }
+ job, err := jobobject.Open(ctx, jobOptions)
+ if err != nil {
+ return nil, err
+ }
+ defer job.Close()
+
+ var fallbackQueryTypes []hcsschema.PropertyType
+ for _, propType := range types {
+ switch propType {
+ case hcsschema.PTStatistics:
+ // Handle a bad caller asking for the same type twice. No use in re-querying if this is
+ // filled in already.
+ if props.Statistics == nil {
+ props.Statistics, err = computeSystem.statisticsInProc(job)
+ if err != nil {
+ log.G(ctx).WithError(err).Warn("failed to get statistics in-proc")
+
+ fallbackQueryTypes = append(fallbackQueryTypes, propType)
+ }
+ }
+ default:
+ fallbackQueryTypes = append(fallbackQueryTypes, propType)
+ }
+ }
+
+ return fallbackQueryTypes, nil
+}
+
+// statisticsInProc emulates what HCS does to grab statistics for a given container with a small
+// change to make grabbing the private working set total much more efficient.
+func (computeSystem *System) statisticsInProc(job *jobobject.JobObject) (*hcsschema.Statistics, error) {
+	// Take the timestamp for these stats before we grab them, to match HCS
+ timestamp := time.Now()
+
+ memInfo, err := job.QueryMemoryStats()
+ if err != nil {
+ return nil, err
+ }
+
+ processorInfo, err := job.QueryProcessorStats()
+ if err != nil {
+ return nil, err
+ }
+
+ storageInfo, err := job.QueryStorageStats()
+ if err != nil {
+ return nil, err
+ }
+
+ // This calculates the private working set more efficiently than HCS does. HCS calls NtQuerySystemInformation
+ // with the class SystemProcessInformation which returns an array containing system information for *every*
+ // process running on the machine. They then grab the pids that are running in the container and filter down
+ // the entries in the array to only what's running in that silo and start tallying up the total. This doesn't
+	// work well as performance should get worse if more processes are running on the machine in general and not
+ // just in the container. All of the additional information besides the WorkingSetPrivateSize field is ignored
+ // as well which isn't great and is wasted work to fetch.
+ //
+	// HCS only lets you grab statistics in an all or nothing fashion, so we can't just grab the private
+	// working set ourselves and ask for everything else separately. The optimization we can make here is
+ // to open the silo ourselves and do the same queries for the rest of the info, as well as calculating
+ // the private working set in a more efficient manner by:
+ //
+ // 1. Find the pids running in the silo
+ // 2. Get a process handle for every process (only need PROCESS_QUERY_LIMITED_INFORMATION access)
+ // 3. Call NtQueryInformationProcess on each process with the class ProcessVmCounters
+ // 4. Tally up the total using the field PrivateWorkingSetSize in VM_COUNTERS_EX2.
+ privateWorkingSet, err := job.QueryPrivateWorkingSet()
+ if err != nil {
+ return nil, err
+ }
+
+ return &hcsschema.Statistics{
+ Timestamp: timestamp,
+ ContainerStartTime: computeSystem.startTime,
+ Uptime100ns: uint64(time.Since(computeSystem.startTime).Nanoseconds()) / 100,
+ Memory: &hcsschema.MemoryStats{
+ MemoryUsageCommitBytes: memInfo.JobMemory,
+ MemoryUsageCommitPeakBytes: memInfo.PeakJobMemoryUsed,
+ MemoryUsagePrivateWorkingSetBytes: privateWorkingSet,
+ },
+ Processor: &hcsschema.ProcessorStats{
+ RuntimeKernel100ns: uint64(processorInfo.TotalKernelTime),
+ RuntimeUser100ns: uint64(processorInfo.TotalUserTime),
+ TotalRuntime100ns: uint64(processorInfo.TotalKernelTime + processorInfo.TotalUserTime),
+ },
+ Storage: &hcsschema.StorageStats{
+ ReadCountNormalized: uint64(storageInfo.ReadStats.IoCount),
+ ReadSizeBytes: storageInfo.ReadStats.TotalSize,
+ WriteCountNormalized: uint64(storageInfo.WriteStats.IoCount),
+ WriteSizeBytes: storageInfo.WriteStats.TotalSize,
+ },
+ }, nil
+}
+
+// hcsPropertiesV2Query is a helper to make a HcsGetComputeSystemProperties call using the V2 schema property types.
+func (computeSystem *System) hcsPropertiesV2Query(ctx context.Context, types []hcsschema.PropertyType) (*hcsschema.Properties, error) {
+ operation := "hcs::System::PropertiesV2"
+
+ queryBytes, err := json.Marshal(hcsschema.PropertyQuery{PropertyTypes: types})
+ if err != nil {
+ return nil, makeSystemError(computeSystem, operation, err, nil)
+ }
+
+ propertiesJSON, resultJSON, err := vmcompute.HcsGetComputeSystemProperties(ctx, computeSystem.handle, string(queryBytes))
+ events := processHcsResult(ctx, resultJSON)
+ if err != nil {
+ return nil, makeSystemError(computeSystem, operation, err, events)
+ }
+
+ if propertiesJSON == "" {
+ return nil, ErrUnexpectedValue
+ }
+ props := &hcsschema.Properties{}
+ if err := json.Unmarshal([]byte(propertiesJSON), props); err != nil {
+ return nil, makeSystemError(computeSystem, operation, err, nil)
+ }
+
+ return props, nil
+}
+
+// PropertiesV2 returns the requested compute systems properties targeting a V2 schema compute system.
+func (computeSystem *System) PropertiesV2(ctx context.Context, types ...hcsschema.PropertyType) (_ *hcsschema.Properties, err error) {
+ computeSystem.handleLock.RLock()
+ defer computeSystem.handleLock.RUnlock()
+
+	// Let HCS tally up the total for VM-based queries instead of querying ourselves.
+ if computeSystem.typ != "container" {
+ return computeSystem.hcsPropertiesV2Query(ctx, types)
+ }
+
+ // Define a starter Properties struct with the default fields returned from every
+ // query. Owner is only returned from Statistics but it's harmless to include.
+ properties := &hcsschema.Properties{
+ Id: computeSystem.id,
+ SystemType: computeSystem.typ,
+ RuntimeOsType: computeSystem.os,
+ Owner: computeSystem.owner,
+ }
+
+ logEntry := log.G(ctx)
+	// First let's try to query ourselves without reaching out to HCS. If any of the queries fail
+	// we'll take note and fall back to querying HCS for any of the failed types.
+ fallbackTypes, err := computeSystem.queryInProc(ctx, properties, types)
+ if err == nil && len(fallbackTypes) == 0 {
+ return properties, nil
+ } else if err != nil {
+ logEntry.WithError(fmt.Errorf("failed to query compute system properties in-proc: %w", err))
+ fallbackTypes = types
+ }
+
+ logEntry.WithFields(logrus.Fields{
+ logfields.ContainerID: computeSystem.id,
+ "propertyTypes": fallbackTypes,
+ }).Info("falling back to HCS for property type queries")
+
+ hcsProperties, err := computeSystem.hcsPropertiesV2Query(ctx, fallbackTypes)
+ if err != nil {
+ return nil, err
+ }
+
+ // Now add in anything that we might have successfully queried in process.
+ if properties.Statistics != nil {
+ hcsProperties.Statistics = properties.Statistics
+ hcsProperties.Owner = properties.Owner
+ }
+
+ // For future support for querying processlist in-proc as well.
+ if properties.ProcessList != nil {
+ hcsProperties.ProcessList = properties.ProcessList
+ }
+
+ return hcsProperties, nil
+}
+
+// Pause pauses the execution of the computeSystem. This feature is not enabled in TP5.
+func (computeSystem *System) Pause(ctx context.Context) (err error) {
+ operation := "hcs::System::Pause"
+
+	// hcsPauseComputeSystemContext is an async operation. Start the outer span
+ // here to measure the full pause time.
+ ctx, span := trace.StartSpan(ctx, operation)
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(trace.StringAttribute("cid", computeSystem.id))
+
+ computeSystem.handleLock.RLock()
+ defer computeSystem.handleLock.RUnlock()
+
+ if computeSystem.handle == 0 {
+ return makeSystemError(computeSystem, operation, ErrAlreadyClosed, nil)
+ }
+
+ resultJSON, err := vmcompute.HcsPauseComputeSystem(ctx, computeSystem.handle, "")
+ events, err := processAsyncHcsResult(ctx, err, resultJSON, computeSystem.callbackNumber, hcsNotificationSystemPauseCompleted, &timeout.SystemPause)
+ if err != nil {
+ return makeSystemError(computeSystem, operation, err, events)
+ }
+
+ return nil
+}
+
+// Resume resumes the execution of the computeSystem. This feature is not enabled in TP5.
+func (computeSystem *System) Resume(ctx context.Context) (err error) {
+ operation := "hcs::System::Resume"
+
+ // hcsResumeComputeSystemContext is an async operation. Start the outer span
+ // here to measure the full restore time.
+ ctx, span := trace.StartSpan(ctx, operation)
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(trace.StringAttribute("cid", computeSystem.id))
+
+ computeSystem.handleLock.RLock()
+ defer computeSystem.handleLock.RUnlock()
+
+ if computeSystem.handle == 0 {
+ return makeSystemError(computeSystem, operation, ErrAlreadyClosed, nil)
+ }
+
+ resultJSON, err := vmcompute.HcsResumeComputeSystem(ctx, computeSystem.handle, "")
+ events, err := processAsyncHcsResult(ctx, err, resultJSON, computeSystem.callbackNumber, hcsNotificationSystemResumeCompleted, &timeout.SystemResume)
+ if err != nil {
+ return makeSystemError(computeSystem, operation, err, events)
+ }
+
+ return nil
+}
+
+// Save the compute system
+func (computeSystem *System) Save(ctx context.Context, options interface{}) (err error) {
+ operation := "hcs::System::Save"
+
+	// hcsSaveComputeSystemContext is an async operation. Start the outer span
+ // here to measure the full save time.
+ ctx, span := trace.StartSpan(ctx, operation)
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(trace.StringAttribute("cid", computeSystem.id))
+
+ saveOptions, err := json.Marshal(options)
+ if err != nil {
+ return err
+ }
+
+ computeSystem.handleLock.RLock()
+ defer computeSystem.handleLock.RUnlock()
+
+ if computeSystem.handle == 0 {
+ return makeSystemError(computeSystem, operation, ErrAlreadyClosed, nil)
+ }
+
+ result, err := vmcompute.HcsSaveComputeSystem(ctx, computeSystem.handle, string(saveOptions))
+ events, err := processAsyncHcsResult(ctx, err, result, computeSystem.callbackNumber, hcsNotificationSystemSaveCompleted, &timeout.SystemSave)
+ if err != nil {
+ return makeSystemError(computeSystem, operation, err, events)
+ }
+
+ return nil
+}
+
+func (computeSystem *System) createProcess(ctx context.Context, operation string, c interface{}) (*Process, *vmcompute.HcsProcessInformation, error) {
+ computeSystem.handleLock.RLock()
+ defer computeSystem.handleLock.RUnlock()
+
+ if computeSystem.handle == 0 {
+ return nil, nil, makeSystemError(computeSystem, operation, ErrAlreadyClosed, nil)
+ }
+
+ configurationb, err := json.Marshal(c)
+ if err != nil {
+ return nil, nil, makeSystemError(computeSystem, operation, err, nil)
+ }
+
+ configuration := string(configurationb)
+ processInfo, processHandle, resultJSON, err := vmcompute.HcsCreateProcess(ctx, computeSystem.handle, configuration)
+ events := processHcsResult(ctx, resultJSON)
+ if err != nil {
+ return nil, nil, makeSystemError(computeSystem, operation, err, events)
+ }
+
+ log.G(ctx).WithField("pid", processInfo.ProcessId).Debug("created process pid")
+ return newProcess(processHandle, int(processInfo.ProcessId), computeSystem), &processInfo, nil
+}
+
+// CreateProcess launches a new process within the computeSystem.
+func (computeSystem *System) CreateProcess(ctx context.Context, c interface{}) (cow.Process, error) {
+ operation := "hcs::System::CreateProcess"
+ process, processInfo, err := computeSystem.createProcess(ctx, operation, c)
+ if err != nil {
+ return nil, err
+ }
+ defer func() {
+ if err != nil {
+ process.Close()
+ }
+ }()
+
+ pipes, err := makeOpenFiles([]syscall.Handle{processInfo.StdInput, processInfo.StdOutput, processInfo.StdError})
+ if err != nil {
+ return nil, makeSystemError(computeSystem, operation, err, nil)
+ }
+ process.stdin = pipes[0]
+ process.stdout = pipes[1]
+ process.stderr = pipes[2]
+ process.hasCachedStdio = true
+
+ if err = process.registerCallback(ctx); err != nil {
+ return nil, makeSystemError(computeSystem, operation, err, nil)
+ }
+ go process.waitBackground()
+
+ return process, nil
+}
+
+// OpenProcess gets an interface to an existing process within the computeSystem.
+func (computeSystem *System) OpenProcess(ctx context.Context, pid int) (*Process, error) {
+ computeSystem.handleLock.RLock()
+ defer computeSystem.handleLock.RUnlock()
+
+ operation := "hcs::System::OpenProcess"
+
+ if computeSystem.handle == 0 {
+ return nil, makeSystemError(computeSystem, operation, ErrAlreadyClosed, nil)
+ }
+
+ processHandle, resultJSON, err := vmcompute.HcsOpenProcess(ctx, computeSystem.handle, uint32(pid))
+ events := processHcsResult(ctx, resultJSON)
+ if err != nil {
+ return nil, makeSystemError(computeSystem, operation, err, events)
+ }
+
+ process := newProcess(processHandle, pid, computeSystem)
+ if err = process.registerCallback(ctx); err != nil {
+ return nil, makeSystemError(computeSystem, operation, err, nil)
+ }
+ go process.waitBackground()
+
+ return process, nil
+}
+
+// Close cleans up any state associated with the compute system but does not terminate or wait for it.
+func (computeSystem *System) Close() (err error) {
+ operation := "hcs::System::Close"
+ ctx, span := trace.StartSpan(context.Background(), operation)
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(trace.StringAttribute("cid", computeSystem.id))
+
+ computeSystem.handleLock.Lock()
+ defer computeSystem.handleLock.Unlock()
+
+ // Don't double free this
+ if computeSystem.handle == 0 {
+ return nil
+ }
+
+ if err = computeSystem.unregisterCallback(ctx); err != nil {
+ return makeSystemError(computeSystem, operation, err, nil)
+ }
+
+ err = vmcompute.HcsCloseComputeSystem(ctx, computeSystem.handle)
+ if err != nil {
+ return makeSystemError(computeSystem, operation, err, nil)
+ }
+
+ computeSystem.handle = 0
+ computeSystem.closedWaitOnce.Do(func() {
+ computeSystem.waitError = ErrAlreadyClosed
+ close(computeSystem.waitBlock)
+ })
+
+ return nil
+}
+
+func (computeSystem *System) registerCallback(ctx context.Context) error {
+	callbackContext := &notificationWatcherContext{
+ channels: newSystemChannels(),
+ systemID: computeSystem.id,
+ }
+
+ callbackMapLock.Lock()
+ callbackNumber := nextCallback
+ nextCallback++
+ callbackMap[callbackNumber] = callbackContext
+ callbackMapLock.Unlock()
+
+ callbackHandle, err := vmcompute.HcsRegisterComputeSystemCallback(ctx, computeSystem.handle, notificationWatcherCallback, callbackNumber)
+ if err != nil {
+ return err
+ }
+ callbackContext.handle = callbackHandle
+ computeSystem.callbackNumber = callbackNumber
+
+ return nil
+}
+
+func (computeSystem *System) unregisterCallback(ctx context.Context) error {
+ callbackNumber := computeSystem.callbackNumber
+
+ callbackMapLock.RLock()
+ callbackContext := callbackMap[callbackNumber]
+ callbackMapLock.RUnlock()
+
+ if callbackContext == nil {
+ return nil
+ }
+
+ handle := callbackContext.handle
+
+ if handle == 0 {
+ return nil
+ }
+
+	// hcsUnregisterComputeSystemCallback has its own synchronization
+	// to wait for all callbacks to complete. We must NOT hold the callbackMapLock.
+ err := vmcompute.HcsUnregisterComputeSystemCallback(ctx, handle)
+ if err != nil {
+ return err
+ }
+
+ closeChannels(callbackContext.channels)
+
+ callbackMapLock.Lock()
+ delete(callbackMap, callbackNumber)
+ callbackMapLock.Unlock()
+
+ handle = 0 //nolint:ineffassign
+
+ return nil
+}
+
+// Modify the System by sending a request to HCS
+func (computeSystem *System) Modify(ctx context.Context, config interface{}) error {
+ computeSystem.handleLock.RLock()
+ defer computeSystem.handleLock.RUnlock()
+
+ operation := "hcs::System::Modify"
+
+ if computeSystem.handle == 0 {
+ return makeSystemError(computeSystem, operation, ErrAlreadyClosed, nil)
+ }
+
+ requestBytes, err := json.Marshal(config)
+ if err != nil {
+ return err
+ }
+
+ requestJSON := string(requestBytes)
+ resultJSON, err := vmcompute.HcsModifyComputeSystem(ctx, computeSystem.handle, requestJSON)
+ events := processHcsResult(ctx, resultJSON)
+ if err != nil {
+ return makeSystemError(computeSystem, operation, err, events)
+ }
+
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/utils.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/utils.go
new file mode 100644
index 000000000..3342e5bb9
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/utils.go
@@ -0,0 +1,62 @@
+package hcs
+
+import (
+ "context"
+ "io"
+ "syscall"
+
+ "github.com/Microsoft/go-winio"
+ diskutil "github.com/Microsoft/go-winio/vhd"
+ "github.com/Microsoft/hcsshim/computestorage"
+ "github.com/pkg/errors"
+ "golang.org/x/sys/windows"
+)
+
+// makeOpenFiles calls winio.MakeOpenFile for each handle in a slice but closes all the handles
+// if there is an error.
+func makeOpenFiles(hs []syscall.Handle) (_ []io.ReadWriteCloser, err error) {
+ fs := make([]io.ReadWriteCloser, len(hs))
+ for i, h := range hs {
+ if h != syscall.Handle(0) {
+ if err == nil {
+ fs[i], err = winio.MakeOpenFile(h)
+ }
+ if err != nil {
+ syscall.Close(h)
+ }
+ }
+ }
+ if err != nil {
+ for _, f := range fs {
+ if f != nil {
+ f.Close()
+ }
+ }
+ return nil, err
+ }
+ return fs, nil
+}
+
+// CreateNTFSVHD creates a VHD formatted with NTFS of size `sizeGB` at the given `vhdPath`.
+func CreateNTFSVHD(ctx context.Context, vhdPath string, sizeGB uint32) (err error) {
+ if err := diskutil.CreateVhdx(vhdPath, sizeGB, 1); err != nil {
+ return errors.Wrap(err, "failed to create VHD")
+ }
+
+ vhd, err := diskutil.OpenVirtualDisk(vhdPath, diskutil.VirtualDiskAccessNone, diskutil.OpenVirtualDiskFlagNone)
+ if err != nil {
+ return errors.Wrap(err, "failed to open VHD")
+ }
+ defer func() {
+ err2 := windows.CloseHandle(windows.Handle(vhd))
+ if err == nil {
+ err = errors.Wrap(err2, "failed to close VHD")
+ }
+ }()
+
+ if err := computestorage.FormatWritableLayerVhd(ctx, windows.Handle(vhd)); err != nil {
+ return errors.Wrap(err, "failed to format VHD")
+ }
+
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcs/waithelper.go b/vendor/github.com/Microsoft/hcsshim/internal/hcs/waithelper.go
new file mode 100644
index 000000000..db4e14fdf
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcs/waithelper.go
@@ -0,0 +1,68 @@
+package hcs
+
+import (
+ "context"
+ "time"
+
+ "github.com/Microsoft/hcsshim/internal/log"
+)
+
+func processAsyncHcsResult(ctx context.Context, err error, resultJSON string, callbackNumber uintptr, expectedNotification hcsNotification, timeout *time.Duration) ([]ErrorEvent, error) {
+ events := processHcsResult(ctx, resultJSON)
+ if IsPending(err) {
+ return nil, waitForNotification(ctx, callbackNumber, expectedNotification, timeout)
+ }
+
+ return events, err
+}
+
+func waitForNotification(ctx context.Context, callbackNumber uintptr, expectedNotification hcsNotification, timeout *time.Duration) error {
+ callbackMapLock.RLock()
+ if _, ok := callbackMap[callbackNumber]; !ok {
+ callbackMapLock.RUnlock()
+ log.G(ctx).WithField("callbackNumber", callbackNumber).Error("failed to waitForNotification: callbackNumber does not exist in callbackMap")
+ return ErrHandleClose
+ }
+ channels := callbackMap[callbackNumber].channels
+ callbackMapLock.RUnlock()
+
+ expectedChannel := channels[expectedNotification]
+ if expectedChannel == nil {
+ log.G(ctx).WithField("type", expectedNotification).Error("unknown notification type in waitForNotification")
+ return ErrInvalidNotificationType
+ }
+
+ var c <-chan time.Time
+ if timeout != nil {
+ timer := time.NewTimer(*timeout)
+ c = timer.C
+ defer timer.Stop()
+ }
+
+ select {
+ case err, ok := <-expectedChannel:
+ if !ok {
+ return ErrHandleClose
+ }
+ return err
+ case err, ok := <-channels[hcsNotificationSystemExited]:
+ if !ok {
+ return ErrHandleClose
+ }
+		// If the expected notification is hcsNotificationSystemExited, which of
+		// the two select cases fires is random. Return the raw error if
+		// hcsNotificationSystemExited was the expected notification.
+		if channels[hcsNotificationSystemExited] == expectedChannel {
+ return err
+ }
+ return ErrUnexpectedContainerExit
+ case _, ok := <-channels[hcsNotificationServiceDisconnect]:
+ if !ok {
+ return ErrHandleClose
+ }
+		// hcsNotificationServiceDisconnect should never be an expected notification;
+		// it does not need the same handling as hcsNotificationSystemExited.
+ return ErrUnexpectedProcessAbort
+ case <-c:
+ return ErrTimeout
+ }
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hcserror/hcserror.go b/vendor/github.com/Microsoft/hcsshim/internal/hcserror/hcserror.go
new file mode 100644
index 000000000..921c2c855
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hcserror/hcserror.go
@@ -0,0 +1,47 @@
+package hcserror
+
+import (
+ "fmt"
+ "syscall"
+)
+
+const ERROR_GEN_FAILURE = syscall.Errno(31)
+
+type HcsError struct {
+ title string
+ rest string
+ Err error
+}
+
+func (e *HcsError) Error() string {
+ s := e.title
+ if len(s) > 0 && s[len(s)-1] != ' ' {
+ s += " "
+ }
+ s += fmt.Sprintf("failed in Win32: %s (0x%x)", e.Err, Win32FromError(e.Err))
+ if e.rest != "" {
+ if e.rest[0] != ' ' {
+ s += " "
+ }
+ s += e.rest
+ }
+ return s
+}
+
+func New(err error, title, rest string) error {
+ // Pass through DLL errors directly since they do not originate from HCS.
+ if _, ok := err.(*syscall.DLLError); ok {
+ return err
+ }
+ return &HcsError{title, rest, err}
+}
+
+func Win32FromError(err error) uint32 {
+ if herr, ok := err.(*HcsError); ok {
+ return Win32FromError(herr.Err)
+ }
+ if code, ok := err.(syscall.Errno); ok {
+ return uint32(code)
+ }
+ return uint32(ERROR_GEN_FAILURE)
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hns/hns.go b/vendor/github.com/Microsoft/hcsshim/internal/hns/hns.go
new file mode 100644
index 000000000..b2e475f53
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hns/hns.go
@@ -0,0 +1,23 @@
+package hns
+
+import "fmt"
+
+//go:generate go run ../../mksyscall_windows.go -output zsyscall_windows.go hns.go
+
+//sys _hnsCall(method string, path string, object string, response **uint16) (hr error) = vmcompute.HNSCall?
+
+type EndpointNotFoundError struct {
+ EndpointName string
+}
+
+func (e EndpointNotFoundError) Error() string {
+ return fmt.Sprintf("Endpoint %s not found", e.EndpointName)
+}
+
+type NetworkNotFoundError struct {
+ NetworkName string
+}
+
+func (e NetworkNotFoundError) Error() string {
+ return fmt.Sprintf("Network %s not found", e.NetworkName)
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hns/hnsendpoint.go b/vendor/github.com/Microsoft/hcsshim/internal/hns/hnsendpoint.go
new file mode 100644
index 000000000..7cf954c7b
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hns/hnsendpoint.go
@@ -0,0 +1,338 @@
+package hns
+
+import (
+ "encoding/json"
+ "net"
+ "strings"
+
+ "github.com/sirupsen/logrus"
+)
+
+// HNSEndpoint represents a network endpoint in HNS
+type HNSEndpoint struct {
+ Id string `json:"ID,omitempty"`
+ Name string `json:",omitempty"`
+ VirtualNetwork string `json:",omitempty"`
+ VirtualNetworkName string `json:",omitempty"`
+ Policies []json.RawMessage `json:",omitempty"`
+ MacAddress string `json:",omitempty"`
+ IPAddress net.IP `json:",omitempty"`
+ IPv6Address net.IP `json:",omitempty"`
+ DNSSuffix string `json:",omitempty"`
+ DNSServerList string `json:",omitempty"`
+ DNSDomain string `json:",omitempty"`
+ GatewayAddress string `json:",omitempty"`
+ GatewayAddressV6 string `json:",omitempty"`
+ EnableInternalDNS bool `json:",omitempty"`
+ DisableICC bool `json:",omitempty"`
+ PrefixLength uint8 `json:",omitempty"`
+ IPv6PrefixLength uint8 `json:",omitempty"`
+ IsRemoteEndpoint bool `json:",omitempty"`
+ EnableLowMetric bool `json:",omitempty"`
+ Namespace *Namespace `json:",omitempty"`
+ EncapOverhead uint16 `json:",omitempty"`
+ SharedContainers []string `json:",omitempty"`
+}
+
+// SystemType represents the type of the system on which actions are done
+type SystemType string
+
+// SystemType const
+const (
+ ContainerType SystemType = "Container"
+ VirtualMachineType SystemType = "VirtualMachine"
+ HostType SystemType = "Host"
+)
+
+// EndpointAttachDetachRequest is the structure used to send a request to the container to modify the system.
+// The supported resource type is Network; the supported request types are Add and Remove.
+type EndpointAttachDetachRequest struct {
+ ContainerID string `json:"ContainerId,omitempty"`
+ SystemType SystemType `json:"SystemType"`
+ CompartmentID uint16 `json:"CompartmentId,omitempty"`
+ VirtualNICName string `json:"VirtualNicName,omitempty"`
+}
+
+// EndpointResquestResponse is the object that captures the response to an endpoint request
+type EndpointResquestResponse struct {
+ Success bool
+ Error string
+}
+
+// EndpointStats is the object that has stats for a given endpoint
+type EndpointStats struct {
+ BytesReceived uint64 `json:"BytesReceived"`
+ BytesSent uint64 `json:"BytesSent"`
+ DroppedPacketsIncoming uint64 `json:"DroppedPacketsIncoming"`
+ DroppedPacketsOutgoing uint64 `json:"DroppedPacketsOutgoing"`
+ EndpointID string `json:"EndpointId"`
+ InstanceID string `json:"InstanceId"`
+ PacketsReceived uint64 `json:"PacketsReceived"`
+ PacketsSent uint64 `json:"PacketsSent"`
+}
+
+// HNSEndpointRequest makes a HNS call to modify/query a network endpoint
+func HNSEndpointRequest(method, path, request string) (*HNSEndpoint, error) {
+ endpoint := &HNSEndpoint{}
+ err := hnsCall(method, "/endpoints/"+path, request, &endpoint)
+ if err != nil {
+ return nil, err
+ }
+
+ return endpoint, nil
+}
+
+// HNSListEndpointRequest makes a HNS call to query the list of available endpoints
+func HNSListEndpointRequest() ([]HNSEndpoint, error) {
+ var endpoint []HNSEndpoint
+ err := hnsCall("GET", "/endpoints/", "", &endpoint)
+ if err != nil {
+ return nil, err
+ }
+
+ return endpoint, nil
+}
+
+// hnsEndpointStatsRequest makes a HNS call to query the stats for a given endpoint ID
+func hnsEndpointStatsRequest(id string) (*EndpointStats, error) {
+ var stats EndpointStats
+ err := hnsCall("GET", "/endpointstats/"+id, "", &stats)
+ if err != nil {
+ return nil, err
+ }
+
+ return &stats, nil
+}
+
+// GetHNSEndpointByID gets the endpoint by ID
+func GetHNSEndpointByID(endpointID string) (*HNSEndpoint, error) {
+ return HNSEndpointRequest("GET", endpointID, "")
+}
+
+// GetHNSEndpointStats gets the stats for an endpoint by ID
+func GetHNSEndpointStats(endpointID string) (*EndpointStats, error) {
+ return hnsEndpointStatsRequest(endpointID)
+}
+
+// GetHNSEndpointByName gets the endpoint filtered by Name
+func GetHNSEndpointByName(endpointName string) (*HNSEndpoint, error) {
+ hnsResponse, err := HNSListEndpointRequest()
+ if err != nil {
+ return nil, err
+ }
+ for _, hnsEndpoint := range hnsResponse {
+ if hnsEndpoint.Name == endpointName {
+ return &hnsEndpoint, nil
+ }
+ }
+ return nil, EndpointNotFoundError{EndpointName: endpointName}
+}
+
+type endpointAttachInfo struct {
+ SharedContainers json.RawMessage `json:",omitempty"`
+}
+
+func (endpoint *HNSEndpoint) IsAttached(vID string) (bool, error) {
+ attachInfo := endpointAttachInfo{}
+ err := hnsCall("GET", "/endpoints/"+endpoint.Id, "", &attachInfo)
+
+	// Returning false here lets us simply return the error
+ if err != nil {
+ return false, err
+ }
+
+ if strings.Contains(strings.ToLower(string(attachInfo.SharedContainers)), strings.ToLower(vID)) {
+ return true, nil
+ }
+
+ return false, nil
+
+}
+
+// Create Endpoint by sending EndpointRequest to HNS. TODO: Create a separate HNS interface to place all these methods
+func (endpoint *HNSEndpoint) Create() (*HNSEndpoint, error) {
+ operation := "Create"
+ title := "hcsshim::HNSEndpoint::" + operation
+ logrus.Debugf(title+" id=%s", endpoint.Id)
+
+ jsonString, err := json.Marshal(endpoint)
+ if err != nil {
+ return nil, err
+ }
+ return HNSEndpointRequest("POST", "", string(jsonString))
+}
+
+// Delete Endpoint by sending EndpointRequest to HNS
+func (endpoint *HNSEndpoint) Delete() (*HNSEndpoint, error) {
+ operation := "Delete"
+ title := "hcsshim::HNSEndpoint::" + operation
+ logrus.Debugf(title+" id=%s", endpoint.Id)
+
+ return HNSEndpointRequest("DELETE", endpoint.Id, "")
+}
+
+// Update Endpoint
+func (endpoint *HNSEndpoint) Update() (*HNSEndpoint, error) {
+ operation := "Update"
+ title := "hcsshim::HNSEndpoint::" + operation
+ logrus.Debugf(title+" id=%s", endpoint.Id)
+ jsonString, err := json.Marshal(endpoint)
+ if err != nil {
+ return nil, err
+ }
+ err = hnsCall("POST", "/endpoints/"+endpoint.Id, string(jsonString), &endpoint)
+
+ return endpoint, err
+}
+
+// ApplyACLPolicy applies a set of ACL Policies on the Endpoint
+func (endpoint *HNSEndpoint) ApplyACLPolicy(policies ...*ACLPolicy) error {
+ operation := "ApplyACLPolicy"
+ title := "hcsshim::HNSEndpoint::" + operation
+ logrus.Debugf(title+" id=%s", endpoint.Id)
+
+ for _, policy := range policies {
+ if policy == nil {
+ continue
+ }
+ jsonString, err := json.Marshal(policy)
+ if err != nil {
+ return err
+ }
+ endpoint.Policies = append(endpoint.Policies, jsonString)
+ }
+
+ _, err := endpoint.Update()
+ return err
+}
+
+// ApplyProxyPolicy applies a set of Proxy Policies on the Endpoint
+func (endpoint *HNSEndpoint) ApplyProxyPolicy(policies ...*ProxyPolicy) error {
+ operation := "ApplyProxyPolicy"
+ title := "hcsshim::HNSEndpoint::" + operation
+ logrus.Debugf(title+" id=%s", endpoint.Id)
+
+ for _, policy := range policies {
+ if policy == nil {
+ continue
+ }
+ jsonString, err := json.Marshal(policy)
+ if err != nil {
+ return err
+ }
+ endpoint.Policies = append(endpoint.Policies, jsonString)
+ }
+
+ _, err := endpoint.Update()
+ return err
+}
+
+// ContainerAttach attaches an endpoint to a container
+func (endpoint *HNSEndpoint) ContainerAttach(containerID string, compartmentID uint16) error {
+ operation := "ContainerAttach"
+ title := "hcsshim::HNSEndpoint::" + operation
+ logrus.Debugf(title+" id=%s", endpoint.Id)
+
+ requestMessage := &EndpointAttachDetachRequest{
+ ContainerID: containerID,
+ CompartmentID: compartmentID,
+ SystemType: ContainerType,
+ }
+ response := &EndpointResquestResponse{}
+ jsonString, err := json.Marshal(requestMessage)
+ if err != nil {
+ return err
+ }
+ return hnsCall("POST", "/endpoints/"+endpoint.Id+"/attach", string(jsonString), &response)
+}
+
+// ContainerDetach detaches an endpoint from a container
+func (endpoint *HNSEndpoint) ContainerDetach(containerID string) error {
+ operation := "ContainerDetach"
+ title := "hcsshim::HNSEndpoint::" + operation
+ logrus.Debugf(title+" id=%s", endpoint.Id)
+
+ requestMessage := &EndpointAttachDetachRequest{
+ ContainerID: containerID,
+ SystemType: ContainerType,
+ }
+ response := &EndpointResquestResponse{}
+
+ jsonString, err := json.Marshal(requestMessage)
+ if err != nil {
+ return err
+ }
+ return hnsCall("POST", "/endpoints/"+endpoint.Id+"/detach", string(jsonString), &response)
+}
+
+// HostAttach attaches a nic on the host
+func (endpoint *HNSEndpoint) HostAttach(compartmentID uint16) error {
+ operation := "HostAttach"
+ title := "hcsshim::HNSEndpoint::" + operation
+ logrus.Debugf(title+" id=%s", endpoint.Id)
+ requestMessage := &EndpointAttachDetachRequest{
+ CompartmentID: compartmentID,
+ SystemType: HostType,
+ }
+ response := &EndpointResquestResponse{}
+
+ jsonString, err := json.Marshal(requestMessage)
+ if err != nil {
+ return err
+ }
+ return hnsCall("POST", "/endpoints/"+endpoint.Id+"/attach", string(jsonString), &response)
+
+}
+
+// HostDetach detaches a nic on the host
+func (endpoint *HNSEndpoint) HostDetach() error {
+ operation := "HostDetach"
+ title := "hcsshim::HNSEndpoint::" + operation
+ logrus.Debugf(title+" id=%s", endpoint.Id)
+ requestMessage := &EndpointAttachDetachRequest{
+ SystemType: HostType,
+ }
+ response := &EndpointResquestResponse{}
+
+ jsonString, err := json.Marshal(requestMessage)
+ if err != nil {
+ return err
+ }
+ return hnsCall("POST", "/endpoints/"+endpoint.Id+"/detach", string(jsonString), &response)
+}
+
+// VirtualMachineNICAttach attaches an endpoint to a virtual machine
+func (endpoint *HNSEndpoint) VirtualMachineNICAttach(virtualMachineNICName string) error {
+ operation := "VirtualMachineNicAttach"
+ title := "hcsshim::HNSEndpoint::" + operation
+ logrus.Debugf(title+" id=%s", endpoint.Id)
+ requestMessage := &EndpointAttachDetachRequest{
+ VirtualNICName: virtualMachineNICName,
+ SystemType: VirtualMachineType,
+ }
+ response := &EndpointResquestResponse{}
+
+ jsonString, err := json.Marshal(requestMessage)
+ if err != nil {
+ return err
+ }
+ return hnsCall("POST", "/endpoints/"+endpoint.Id+"/attach", string(jsonString), &response)
+}
+
+// VirtualMachineNICDetach detaches an endpoint from a virtual machine
+func (endpoint *HNSEndpoint) VirtualMachineNICDetach() error {
+ operation := "VirtualMachineNicDetach"
+ title := "hcsshim::HNSEndpoint::" + operation
+ logrus.Debugf(title+" id=%s", endpoint.Id)
+
+ requestMessage := &EndpointAttachDetachRequest{
+ SystemType: VirtualMachineType,
+ }
+ response := &EndpointResquestResponse{}
+
+ jsonString, err := json.Marshal(requestMessage)
+ if err != nil {
+ return err
+ }
+ return hnsCall("POST", "/endpoints/"+endpoint.Id+"/detach", string(jsonString), &response)
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hns/hnsfuncs.go b/vendor/github.com/Microsoft/hcsshim/internal/hns/hnsfuncs.go
new file mode 100644
index 000000000..2df4a57f5
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hns/hnsfuncs.go
@@ -0,0 +1,49 @@
+package hns
+
+import (
+ "encoding/json"
+ "fmt"
+
+ "github.com/Microsoft/hcsshim/internal/hcserror"
+ "github.com/Microsoft/hcsshim/internal/interop"
+ "github.com/sirupsen/logrus"
+)
+
+func hnsCallRawResponse(method, path, request string) (*hnsResponse, error) {
+ var responseBuffer *uint16
+ logrus.Debugf("[%s]=>[%s] Request : %s", method, path, request)
+
+ err := _hnsCall(method, path, request, &responseBuffer)
+ if err != nil {
+ return nil, hcserror.New(err, "hnsCall ", "")
+ }
+ response := interop.ConvertAndFreeCoTaskMemString(responseBuffer)
+
+ hnsresponse := &hnsResponse{}
+ if err = json.Unmarshal([]byte(response), &hnsresponse); err != nil {
+ return nil, err
+ }
+ return hnsresponse, nil
+}
+
+func hnsCall(method, path, request string, returnResponse interface{}) error {
+ hnsresponse, err := hnsCallRawResponse(method, path, request)
+ if err != nil {
+ return fmt.Errorf("failed during hnsCallRawResponse: %v", err)
+ }
+ if !hnsresponse.Success {
+ return fmt.Errorf("hns failed with error : %s", hnsresponse.Error)
+ }
+
+ if len(hnsresponse.Output) == 0 {
+ return nil
+ }
+
+ logrus.Debugf("Network Response : %s", hnsresponse.Output)
+ err = json.Unmarshal(hnsresponse.Output, returnResponse)
+ if err != nil {
+ return err
+ }
+
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hns/hnsglobals.go b/vendor/github.com/Microsoft/hcsshim/internal/hns/hnsglobals.go
new file mode 100644
index 000000000..a8d8cc56a
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hns/hnsglobals.go
@@ -0,0 +1,28 @@
+package hns
+
+type HNSGlobals struct {
+ Version HNSVersion `json:"Version"`
+}
+
+type HNSVersion struct {
+ Major int `json:"Major"`
+ Minor int `json:"Minor"`
+}
+
+var (
+ HNSVersion1803 = HNSVersion{Major: 7, Minor: 2}
+)
+
+func GetHNSGlobals() (*HNSGlobals, error) {
+ var version HNSVersion
+ err := hnsCall("GET", "/globals/version", "", &version)
+ if err != nil {
+ return nil, err
+ }
+
+ globals := &HNSGlobals{
+ Version: version,
+ }
+
+ return globals, nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hns/hnsnetwork.go b/vendor/github.com/Microsoft/hcsshim/internal/hns/hnsnetwork.go
new file mode 100644
index 000000000..f12d3ab04
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hns/hnsnetwork.go
@@ -0,0 +1,141 @@
+package hns
+
+import (
+	"encoding/json"
+	"errors"
+	"net"
+
+	"github.com/sirupsen/logrus"
+)
+
+// Subnet is associated with a network and represents a list
+// of subnets available to the network
+type Subnet struct {
+ AddressPrefix string `json:",omitempty"`
+ GatewayAddress string `json:",omitempty"`
+ Policies []json.RawMessage `json:",omitempty"`
+}
+
+// MacPool is associated with a network and represents a list
+// of MAC addresses available to the network
+type MacPool struct {
+ StartMacAddress string `json:",omitempty"`
+ EndMacAddress string `json:",omitempty"`
+}
+
+// HNSNetwork represents a network in HNS
+type HNSNetwork struct {
+ Id string `json:"ID,omitempty"`
+ Name string `json:",omitempty"`
+ Type string `json:",omitempty"`
+ NetworkAdapterName string `json:",omitempty"`
+ SourceMac string `json:",omitempty"`
+ Policies []json.RawMessage `json:",omitempty"`
+ MacPools []MacPool `json:",omitempty"`
+ Subnets []Subnet `json:",omitempty"`
+ DNSSuffix string `json:",omitempty"`
+ DNSServerList string `json:",omitempty"`
+ DNSServerCompartment uint32 `json:",omitempty"`
+ ManagementIP string `json:",omitempty"`
+ AutomaticDNS bool `json:",omitempty"`
+}
+
+type hnsResponse struct {
+ Success bool
+ Error string
+ Output json.RawMessage
+}
+
+// HNSNetworkRequest makes a call into HNS to update/query a single network
+func HNSNetworkRequest(method, path, request string) (*HNSNetwork, error) {
+ var network HNSNetwork
+ err := hnsCall(method, "/networks/"+path, request, &network)
+ if err != nil {
+ return nil, err
+ }
+
+ return &network, nil
+}
+
+// HNSListNetworkRequest makes a HNS call to query the list of available networks
+func HNSListNetworkRequest(method, path, request string) ([]HNSNetwork, error) {
+ var network []HNSNetwork
+ err := hnsCall(method, "/networks/"+path, request, &network)
+ if err != nil {
+ return nil, err
+ }
+
+ return network, nil
+}
+
+// GetHNSNetworkByID gets the network by ID
+func GetHNSNetworkByID(networkID string) (*HNSNetwork, error) {
+ return HNSNetworkRequest("GET", networkID, "")
+}
+
+// GetHNSNetworkByName gets the network filtered by name
+func GetHNSNetworkByName(networkName string) (*HNSNetwork, error) {
+ hsnnetworks, err := HNSListNetworkRequest("GET", "", "")
+ if err != nil {
+ return nil, err
+ }
+ for _, hnsnetwork := range hsnnetworks {
+ if hnsnetwork.Name == networkName {
+ return &hnsnetwork, nil
+ }
+ }
+ return nil, NetworkNotFoundError{NetworkName: networkName}
+}
+
+// Create Network by sending NetworkRequest to HNS.
+func (network *HNSNetwork) Create() (*HNSNetwork, error) {
+ operation := "Create"
+ title := "hcsshim::HNSNetwork::" + operation
+ logrus.Debugf(title+" id=%s", network.Id)
+
+ for _, subnet := range network.Subnets {
+ if (subnet.AddressPrefix != "") && (subnet.GatewayAddress == "") {
+ return nil, errors.New("network create error, subnet has address prefix but no gateway specified")
+ }
+ }
+
+ jsonString, err := json.Marshal(network)
+ if err != nil {
+ return nil, err
+ }
+ return HNSNetworkRequest("POST", "", string(jsonString))
+}
+
+// Delete Network by sending NetworkRequest to HNS
+func (network *HNSNetwork) Delete() (*HNSNetwork, error) {
+ operation := "Delete"
+ title := "hcsshim::HNSNetwork::" + operation
+ logrus.Debugf(title+" id=%s", network.Id)
+
+ return HNSNetworkRequest("DELETE", network.Id, "")
+}
+
+// NewEndpoint creates an endpoint on the network.
+func (network *HNSNetwork) NewEndpoint(ipAddress net.IP, macAddress net.HardwareAddr) *HNSEndpoint {
+ return &HNSEndpoint{
+ VirtualNetwork: network.Id,
+ IPAddress: ipAddress,
+ MacAddress: string(macAddress),
+ }
+}
+
+func (network *HNSNetwork) CreateEndpoint(endpoint *HNSEndpoint) (*HNSEndpoint, error) {
+ operation := "CreateEndpoint"
+ title := "hcsshim::HNSNetwork::" + operation
+ logrus.Debugf(title+" id=%s, endpointId=%s", network.Id, endpoint.Id)
+
+ endpoint.VirtualNetwork = network.Id
+ return endpoint.Create()
+}
+
+func (network *HNSNetwork) CreateRemoteEndpoint(endpoint *HNSEndpoint) (*HNSEndpoint, error) {
+ operation := "CreateRemoteEndpoint"
+ title := "hcsshim::HNSNetwork::" + operation
+ logrus.Debugf(title+" id=%s", network.Id)
+ endpoint.IsRemoteEndpoint = true
+ return network.CreateEndpoint(endpoint)
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hns/hnspolicy.go b/vendor/github.com/Microsoft/hcsshim/internal/hns/hnspolicy.go
new file mode 100644
index 000000000..84b368218
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hns/hnspolicy.go
@@ -0,0 +1,110 @@
+package hns
+
+// PolicyType is the type of policy request supported in ModifySystem
+type PolicyType string
+
+// PolicyType const
+const (
+ Nat PolicyType = "NAT"
+ ACL PolicyType = "ACL"
+ PA PolicyType = "PA"
+ VLAN PolicyType = "VLAN"
+ VSID PolicyType = "VSID"
+ VNet PolicyType = "VNET"
+ L2Driver PolicyType = "L2Driver"
+ Isolation PolicyType = "Isolation"
+ QOS PolicyType = "QOS"
+ OutboundNat PolicyType = "OutBoundNAT"
+ ExternalLoadBalancer PolicyType = "ELB"
+ Route PolicyType = "ROUTE"
+ Proxy PolicyType = "PROXY"
+)
+
+type NatPolicy struct {
+ Type PolicyType `json:"Type"`
+ Protocol string `json:",omitempty"`
+ InternalPort uint16 `json:",omitempty"`
+ ExternalPort uint16 `json:",omitempty"`
+ ExternalPortReserved bool `json:",omitempty"`
+}
+
+type QosPolicy struct {
+ Type PolicyType `json:"Type"`
+ MaximumOutgoingBandwidthInBytes uint64
+}
+
+type IsolationPolicy struct {
+ Type PolicyType `json:"Type"`
+ VLAN uint
+ VSID uint
+ InDefaultIsolation bool
+}
+
+type VlanPolicy struct {
+ Type PolicyType `json:"Type"`
+ VLAN uint
+}
+
+type VsidPolicy struct {
+ Type PolicyType `json:"Type"`
+ VSID uint
+}
+
+type PaPolicy struct {
+ Type PolicyType `json:"Type"`
+ PA string `json:"PA"`
+}
+
+type OutboundNatPolicy struct {
+ Policy
+ VIP string `json:"VIP,omitempty"`
+ Exceptions []string `json:"ExceptionList,omitempty"`
+ Destinations []string `json:",omitempty"`
+}
+
+type ProxyPolicy struct {
+ Type PolicyType `json:"Type"`
+ IP string `json:",omitempty"`
+ Port string `json:",omitempty"`
+ ExceptionList []string `json:",omitempty"`
+ Destination string `json:",omitempty"`
+ OutboundNat bool `json:",omitempty"`
+}
+
+type ActionType string
+type DirectionType string
+type RuleType string
+
+const (
+ Allow ActionType = "Allow"
+ Block ActionType = "Block"
+
+ In DirectionType = "In"
+ Out DirectionType = "Out"
+
+ Host RuleType = "Host"
+ Switch RuleType = "Switch"
+)
+
+type ACLPolicy struct {
+ Type PolicyType `json:"Type"`
+ Id string `json:"Id,omitempty"`
+ Protocol uint16 `json:",omitempty"`
+ Protocols string `json:"Protocols,omitempty"`
+ InternalPort uint16 `json:",omitempty"`
+ Action ActionType
+ Direction DirectionType
+ LocalAddresses string `json:",omitempty"`
+ RemoteAddresses string `json:",omitempty"`
+ LocalPorts string `json:"LocalPorts,omitempty"`
+ LocalPort uint16 `json:",omitempty"`
+ RemotePorts string `json:"RemotePorts,omitempty"`
+ RemotePort uint16 `json:",omitempty"`
+ RuleType RuleType `json:"RuleType,omitempty"`
+ Priority uint16 `json:",omitempty"`
+ ServiceName string `json:",omitempty"`
+}
+
+type Policy struct {
+ Type PolicyType `json:"Type"`
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hns/hnspolicylist.go b/vendor/github.com/Microsoft/hcsshim/internal/hns/hnspolicylist.go
new file mode 100644
index 000000000..31322a681
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hns/hnspolicylist.go
@@ -0,0 +1,201 @@
+package hns
+
+import (
+ "encoding/json"
+
+ "github.com/sirupsen/logrus"
+)
+
+// RoutePolicy is a structure defining schema for Route based Policy
+type RoutePolicy struct {
+ Policy
+ DestinationPrefix string `json:"DestinationPrefix,omitempty"`
+ NextHop string `json:"NextHop,omitempty"`
+ EncapEnabled bool `json:"NeedEncap,omitempty"`
+}
+
+// ELBPolicy is a structure defining schema for ELB LoadBalancing based Policy
+type ELBPolicy struct {
+ LBPolicy
+ SourceVIP string `json:"SourceVIP,omitempty"`
+ VIPs []string `json:"VIPs,omitempty"`
+ ILB bool `json:"ILB,omitempty"`
+ DSR bool `json:"IsDSR,omitempty"`
+}
+
+// LBPolicy is a structure defining schema for LoadBalancing based Policy
+type LBPolicy struct {
+ Policy
+ Protocol uint16 `json:"Protocol,omitempty"`
+ InternalPort uint16
+ ExternalPort uint16
+}
+
+// PolicyList is a structure defining schema for Policy list request
+type PolicyList struct {
+ ID string `json:"ID,omitempty"`
+ EndpointReferences []string `json:"References,omitempty"`
+ Policies []json.RawMessage `json:"Policies,omitempty"`
+}
+
+// HNSPolicyListRequest makes a call into HNS to update/query a single network
+func HNSPolicyListRequest(method, path, request string) (*PolicyList, error) {
+ var policy PolicyList
+ err := hnsCall(method, "/policylists/"+path, request, &policy)
+ if err != nil {
+ return nil, err
+ }
+
+ return &policy, nil
+}
+
+// HNSListPolicyListRequest gets all the policy lists
+func HNSListPolicyListRequest() ([]PolicyList, error) {
+ var plist []PolicyList
+ err := hnsCall("GET", "/policylists/", "", &plist)
+ if err != nil {
+ return nil, err
+ }
+
+ return plist, nil
+}
+
+// PolicyListRequest makes a HNS call to modify/query a network policy list
+func PolicyListRequest(method, path, request string) (*PolicyList, error) {
+ policylist := &PolicyList{}
+ err := hnsCall(method, "/policylists/"+path, request, &policylist)
+ if err != nil {
+ return nil, err
+ }
+
+ return policylist, nil
+}
+
+// GetPolicyListByID get the policy list by ID
+func GetPolicyListByID(policyListID string) (*PolicyList, error) {
+ return PolicyListRequest("GET", policyListID, "")
+}
+
+// Create PolicyList by sending PolicyListRequest to HNS.
+func (policylist *PolicyList) Create() (*PolicyList, error) {
+ operation := "Create"
+ title := "hcsshim::PolicyList::" + operation
+ logrus.Debugf(title+" id=%s", policylist.ID)
+ jsonString, err := json.Marshal(policylist)
+ if err != nil {
+ return nil, err
+ }
+ return PolicyListRequest("POST", "", string(jsonString))
+}
+
+// Delete deletes PolicyList
+func (policylist *PolicyList) Delete() (*PolicyList, error) {
+ operation := "Delete"
+ title := "hcsshim::PolicyList::" + operation
+ logrus.Debugf(title+" id=%s", policylist.ID)
+
+ return PolicyListRequest("DELETE", policylist.ID, "")
+}
+
+// AddEndpoint adds an endpoint to a policy list
+func (policylist *PolicyList) AddEndpoint(endpoint *HNSEndpoint) (*PolicyList, error) {
+ operation := "AddEndpoint"
+ title := "hcsshim::PolicyList::" + operation
+ logrus.Debugf(title+" id=%s, endpointId:%s", policylist.ID, endpoint.Id)
+
+ _, err := policylist.Delete()
+ if err != nil {
+ return nil, err
+ }
+
+ // Add Endpoint to the Existing List
+ policylist.EndpointReferences = append(policylist.EndpointReferences, "/endpoints/"+endpoint.Id)
+
+ return policylist.Create()
+}
+
+// RemoveEndpoint removes an endpoint from the Policy List
+func (policylist *PolicyList) RemoveEndpoint(endpoint *HNSEndpoint) (*PolicyList, error) {
+ operation := "RemoveEndpoint"
+ title := "hcsshim::PolicyList::" + operation
+ logrus.Debugf(title+" id=%s, endpointId:%s", policylist.ID, endpoint.Id)
+
+ _, err := policylist.Delete()
+ if err != nil {
+ return nil, err
+ }
+
+ elementToRemove := "/endpoints/" + endpoint.Id
+
+ var references []string
+
+ for _, endpointReference := range policylist.EndpointReferences {
+ if endpointReference == elementToRemove {
+ continue
+ }
+ references = append(references, endpointReference)
+ }
+ policylist.EndpointReferences = references
+ return policylist.Create()
+}
+
+// AddLoadBalancer adds a load balancer policy list for the specified endpoints
+func AddLoadBalancer(endpoints []HNSEndpoint, isILB bool, sourceVIP, vip string, protocol uint16, internalPort uint16, externalPort uint16) (*PolicyList, error) {
+ operation := "AddLoadBalancer"
+ title := "hcsshim::PolicyList::" + operation
+ logrus.Debugf(title+" endpointId=%v, isILB=%v, sourceVIP=%s, vip=%s, protocol=%v, internalPort=%v, externalPort=%v", endpoints, isILB, sourceVIP, vip, protocol, internalPort, externalPort)
+
+ policylist := &PolicyList{}
+
+ elbPolicy := &ELBPolicy{
+ SourceVIP: sourceVIP,
+ ILB: isILB,
+ }
+
+ if len(vip) > 0 {
+ elbPolicy.VIPs = []string{vip}
+ }
+ elbPolicy.Type = ExternalLoadBalancer
+ elbPolicy.Protocol = protocol
+ elbPolicy.InternalPort = internalPort
+ elbPolicy.ExternalPort = externalPort
+
+ for _, endpoint := range endpoints {
+ policylist.EndpointReferences = append(policylist.EndpointReferences, "/endpoints/"+endpoint.Id)
+ }
+
+ jsonString, err := json.Marshal(elbPolicy)
+ if err != nil {
+ return nil, err
+ }
+ policylist.Policies = append(policylist.Policies, jsonString)
+ return policylist.Create()
+}
+
+// AddRoute adds route policy list for the specified endpoints
+func AddRoute(endpoints []HNSEndpoint, destinationPrefix string, nextHop string, encapEnabled bool) (*PolicyList, error) {
+ operation := "AddRoute"
+ title := "hcsshim::PolicyList::" + operation
+ logrus.Debugf(title+" destinationPrefix:%s", destinationPrefix)
+
+ policylist := &PolicyList{}
+
+ rPolicy := &RoutePolicy{
+ DestinationPrefix: destinationPrefix,
+ NextHop: nextHop,
+ EncapEnabled: encapEnabled,
+ }
+ rPolicy.Type = Route
+
+ for _, endpoint := range endpoints {
+ policylist.EndpointReferences = append(policylist.EndpointReferences, "/endpoints/"+endpoint.Id)
+ }
+
+ jsonString, err := json.Marshal(rPolicy)
+ if err != nil {
+ return nil, err
+ }
+
+ policylist.Policies = append(policylist.Policies, jsonString)
+ return policylist.Create()
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hns/hnssupport.go b/vendor/github.com/Microsoft/hcsshim/internal/hns/hnssupport.go
new file mode 100644
index 000000000..d5efba7f2
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hns/hnssupport.go
@@ -0,0 +1,49 @@
+package hns
+
+import (
+ "github.com/sirupsen/logrus"
+)
+
+type HNSSupportedFeatures struct {
+ Acl HNSAclFeatures `json:"ACL"`
+}
+
+type HNSAclFeatures struct {
+ AclAddressLists bool `json:"AclAddressLists"`
+ AclNoHostRulePriority bool `json:"AclHostRulePriority"`
+ AclPortRanges bool `json:"AclPortRanges"`
+ AclRuleId bool `json:"AclRuleId"`
+}
+
+func GetHNSSupportedFeatures() HNSSupportedFeatures {
+ var hnsFeatures HNSSupportedFeatures
+
+ globals, err := GetHNSGlobals()
+ if err != nil {
+ // Expected on pre-1803 builds, all features will be false/unsupported
+ logrus.Debugf("Unable to obtain HNS globals: %s", err)
+ return hnsFeatures
+ }
+
+ hnsFeatures.Acl = HNSAclFeatures{
+ AclAddressLists: isHNSFeatureSupported(globals.Version, HNSVersion1803),
+ AclNoHostRulePriority: isHNSFeatureSupported(globals.Version, HNSVersion1803),
+ AclPortRanges: isHNSFeatureSupported(globals.Version, HNSVersion1803),
+ AclRuleId: isHNSFeatureSupported(globals.Version, HNSVersion1803),
+ }
+
+ return hnsFeatures
+}
+
+func isHNSFeatureSupported(currentVersion HNSVersion, minVersionSupported HNSVersion) bool {
+ if currentVersion.Major < minVersionSupported.Major {
+ return false
+ }
+ if currentVersion.Major > minVersionSupported.Major {
+ return true
+ }
+ if currentVersion.Minor < minVersionSupported.Minor {
+ return false
+ }
+ return true
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hns/namespace.go b/vendor/github.com/Microsoft/hcsshim/internal/hns/namespace.go
new file mode 100644
index 000000000..d3b04eefe
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hns/namespace.go
@@ -0,0 +1,111 @@
+package hns
+
+import (
+ "encoding/json"
+ "fmt"
+ "os"
+ "path"
+ "strings"
+)
+
+type namespaceRequest struct {
+ IsDefault bool `json:",omitempty"`
+}
+
+type namespaceEndpointRequest struct {
+ ID string `json:"Id"`
+}
+
+type NamespaceResource struct {
+ Type string
+ Data json.RawMessage
+}
+
+type namespaceResourceRequest struct {
+ Type string
+ Data interface{}
+}
+
+type Namespace struct {
+ ID string
+ IsDefault bool `json:",omitempty"`
+ ResourceList []NamespaceResource `json:",omitempty"`
+ CompartmentId uint32 `json:",omitempty"`
+}
+
+func issueNamespaceRequest(id *string, method, subpath string, request interface{}) (*Namespace, error) {
+ var err error
+ hnspath := "/namespaces/"
+ if id != nil {
+ hnspath = path.Join(hnspath, *id)
+ }
+ if subpath != "" {
+ hnspath = path.Join(hnspath, subpath)
+ }
+ var reqJSON []byte
+ if request != nil {
+ if reqJSON, err = json.Marshal(request); err != nil {
+ return nil, err
+ }
+ }
+ var ns Namespace
+ err = hnsCall(method, hnspath, string(reqJSON), &ns)
+ if err != nil {
+ if strings.Contains(err.Error(), "Element not found.") {
+ return nil, os.ErrNotExist
+ }
+ return nil, fmt.Errorf("%s %s: %s", method, hnspath, err)
+ }
+ return &ns, err
+}
+
+func CreateNamespace() (string, error) {
+ req := namespaceRequest{}
+ ns, err := issueNamespaceRequest(nil, "POST", "", &req)
+ if err != nil {
+ return "", err
+ }
+ return ns.ID, nil
+}
+
+func RemoveNamespace(id string) error {
+ _, err := issueNamespaceRequest(&id, "DELETE", "", nil)
+ return err
+}
+
+func GetNamespaceEndpoints(id string) ([]string, error) {
+ ns, err := issueNamespaceRequest(&id, "GET", "", nil)
+ if err != nil {
+ return nil, err
+ }
+ var endpoints []string
+ for _, rsrc := range ns.ResourceList {
+ if rsrc.Type == "Endpoint" {
+ var endpoint namespaceEndpointRequest
+ err = json.Unmarshal(rsrc.Data, &endpoint)
+ if err != nil {
+ return nil, fmt.Errorf("unmarshal endpoint: %s", err)
+ }
+ endpoints = append(endpoints, endpoint.ID)
+ }
+ }
+ return endpoints, nil
+}
+
+func AddNamespaceEndpoint(id string, endpointID string) error {
+ resource := namespaceResourceRequest{
+ Type: "Endpoint",
+ Data: namespaceEndpointRequest{endpointID},
+ }
+ _, err := issueNamespaceRequest(&id, "POST", "addresource", &resource)
+ return err
+}
+
+func RemoveNamespaceEndpoint(id string, endpointID string) error {
+ resource := namespaceResourceRequest{
+ Type: "Endpoint",
+ Data: namespaceEndpointRequest{endpointID},
+ }
+ _, err := issueNamespaceRequest(&id, "POST", "removeresource", &resource)
+ return err
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/hns/zsyscall_windows.go b/vendor/github.com/Microsoft/hcsshim/internal/hns/zsyscall_windows.go
new file mode 100644
index 000000000..204633a48
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/hns/zsyscall_windows.go
@@ -0,0 +1,76 @@
+// Code generated mksyscall_windows.exe DO NOT EDIT
+
+package hns
+
+import (
+ "syscall"
+ "unsafe"
+
+ "golang.org/x/sys/windows"
+)
+
+var _ unsafe.Pointer
+
+// Do the interface allocations only once for common
+// Errno values.
+const (
+ errnoERROR_IO_PENDING = 997
+)
+
+var (
+ errERROR_IO_PENDING error = syscall.Errno(errnoERROR_IO_PENDING)
+)
+
+// errnoErr returns common boxed Errno values, to prevent
+// allocations at runtime.
+func errnoErr(e syscall.Errno) error {
+ switch e {
+ case 0:
+ return nil
+ case errnoERROR_IO_PENDING:
+ return errERROR_IO_PENDING
+ }
+ // TODO: add more here, after collecting data on the common
+// error values seen on Windows. (perhaps when running
+ // all.bat?)
+ return e
+}
+
+var (
+ modvmcompute = windows.NewLazySystemDLL("vmcompute.dll")
+
+ procHNSCall = modvmcompute.NewProc("HNSCall")
+)
+
+func _hnsCall(method string, path string, object string, response **uint16) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(method)
+ if hr != nil {
+ return
+ }
+ var _p1 *uint16
+ _p1, hr = syscall.UTF16PtrFromString(path)
+ if hr != nil {
+ return
+ }
+ var _p2 *uint16
+ _p2, hr = syscall.UTF16PtrFromString(object)
+ if hr != nil {
+ return
+ }
+ return __hnsCall(_p0, _p1, _p2, response)
+}
+
+func __hnsCall(method *uint16, path *uint16, object *uint16, response **uint16) (hr error) {
+ if hr = procHNSCall.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall6(procHNSCall.Addr(), 4, uintptr(unsafe.Pointer(method)), uintptr(unsafe.Pointer(path)), uintptr(unsafe.Pointer(object)), uintptr(unsafe.Pointer(response)), 0, 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/interop/interop.go b/vendor/github.com/Microsoft/hcsshim/internal/interop/interop.go
new file mode 100644
index 000000000..922f7c679
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/interop/interop.go
@@ -0,0 +1,23 @@
+package interop
+
+import (
+ "syscall"
+ "unsafe"
+)
+
+//go:generate go run ../../mksyscall_windows.go -output zsyscall_windows.go interop.go
+
+//sys coTaskMemFree(buffer unsafe.Pointer) = api_ms_win_core_com_l1_1_0.CoTaskMemFree
+
+func ConvertAndFreeCoTaskMemString(buffer *uint16) string {
+ str := syscall.UTF16ToString((*[1 << 29]uint16)(unsafe.Pointer(buffer))[:])
+ coTaskMemFree(unsafe.Pointer(buffer))
+ return str
+}
+
+func Win32FromHresult(hr uintptr) syscall.Errno {
+ if hr&0x1fff0000 == 0x00070000 {
+ return syscall.Errno(hr & 0xffff)
+ }
+ return syscall.Errno(hr)
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/interop/zsyscall_windows.go b/vendor/github.com/Microsoft/hcsshim/internal/interop/zsyscall_windows.go
new file mode 100644
index 000000000..12b0c71c5
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/interop/zsyscall_windows.go
@@ -0,0 +1,48 @@
+// Code generated mksyscall_windows.exe DO NOT EDIT
+
+package interop
+
+import (
+ "syscall"
+ "unsafe"
+
+ "golang.org/x/sys/windows"
+)
+
+var _ unsafe.Pointer
+
+// Do the interface allocations only once for common
+// Errno values.
+const (
+ errnoERROR_IO_PENDING = 997
+)
+
+var (
+ errERROR_IO_PENDING error = syscall.Errno(errnoERROR_IO_PENDING)
+)
+
+// errnoErr returns common boxed Errno values, to prevent
+// allocations at runtime.
+func errnoErr(e syscall.Errno) error {
+ switch e {
+ case 0:
+ return nil
+ case errnoERROR_IO_PENDING:
+ return errERROR_IO_PENDING
+ }
+ // TODO: add more here, after collecting data on the common
+ // error values seen on Windows. (perhaps when running
+ // all.bat?)
+ return e
+}
+
+var (
+ modapi_ms_win_core_com_l1_1_0 = windows.NewLazySystemDLL("api-ms-win-core-com-l1-1-0.dll")
+
+ procCoTaskMemFree = modapi_ms_win_core_com_l1_1_0.NewProc("CoTaskMemFree")
+)
+
+func coTaskMemFree(buffer unsafe.Pointer) {
+ syscall.Syscall(procCoTaskMemFree.Addr(), 1, uintptr(buffer), 0, 0)
+ return
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/jobobject/iocp.go b/vendor/github.com/Microsoft/hcsshim/internal/jobobject/iocp.go
new file mode 100644
index 000000000..5d6acd69e
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/jobobject/iocp.go
@@ -0,0 +1,111 @@
+package jobobject
+
+import (
+ "context"
+ "fmt"
+ "sync"
+ "unsafe"
+
+ "github.com/Microsoft/hcsshim/internal/log"
+ "github.com/Microsoft/hcsshim/internal/queue"
+ "github.com/Microsoft/hcsshim/internal/winapi"
+ "github.com/sirupsen/logrus"
+ "golang.org/x/sys/windows"
+)
+
+var (
+ ioInitOnce sync.Once
+ initIOErr error
+ // Global iocp handle that will be re-used for every job object
+ ioCompletionPort windows.Handle
+ // Mapping of job handle to queue to place notifications in.
+ jobMap sync.Map
+)
+
+// MsgAllProcessesExited is a type representing a message that every process in a job has exited.
+type MsgAllProcessesExited struct{}
+
+// MsgUnimplemented represents a message that we are aware of, but that isn't implemented currently.
+// This should not be treated as an error.
+type MsgUnimplemented struct{}
+
+// pollIOCP polls the io completion port forever.
+func pollIOCP(ctx context.Context, iocpHandle windows.Handle) {
+ var (
+ overlapped uintptr
+ code uint32
+ key uintptr
+ )
+
+ for {
+ err := windows.GetQueuedCompletionStatus(iocpHandle, &code, &key, (**windows.Overlapped)(unsafe.Pointer(&overlapped)), windows.INFINITE)
+ if err != nil {
+ log.G(ctx).WithError(err).Error("failed to poll for job object message")
+ continue
+ }
+ if val, ok := jobMap.Load(key); ok {
+ msq, ok := val.(*queue.MessageQueue)
+ if !ok {
+ log.G(ctx).WithField("value", msq).Warn("encountered non queue type in job map")
+ continue
+ }
+ notification, err := parseMessage(code, overlapped)
+ if err != nil {
+ log.G(ctx).WithFields(logrus.Fields{
+ "code": code,
+ "overlapped": overlapped,
+ }).Warn("failed to parse job object message")
+ continue
+ }
+ if err := msq.Enqueue(notification); err == queue.ErrQueueClosed {
+ // Write will only return an error when the queue is closed.
+ // The only time a queue would ever be closed is when we call `Close` on
+ // the job it belongs to which also removes it from the jobMap, so something
+ // went wrong here. We can't return as this is reading messages for all jobs
+ // so just log it and move on.
+ log.G(ctx).WithFields(logrus.Fields{
+ "code": code,
+ "overlapped": overlapped,
+ }).Warn("tried to write to a closed queue")
+ continue
+ }
+ } else {
+ log.G(ctx).Warn("received a message for a job not present in the mapping")
+ }
+ }
+}
+
+func parseMessage(code uint32, overlapped uintptr) (interface{}, error) {
+ // Check code and parse out relevant information related to that notification
+ // that we care about. For now all we handle is the message that all processes
+ // in the job have exited.
+ switch code {
+ case winapi.JOB_OBJECT_MSG_ACTIVE_PROCESS_ZERO:
+ return MsgAllProcessesExited{}, nil
+ // Other messages are listed for completeness, and to ensure that falling
+ // into the default case means the code is one we don't know how to handle.
+ case winapi.JOB_OBJECT_MSG_END_OF_JOB_TIME:
+ case winapi.JOB_OBJECT_MSG_END_OF_PROCESS_TIME:
+ case winapi.JOB_OBJECT_MSG_ACTIVE_PROCESS_LIMIT:
+ case winapi.JOB_OBJECT_MSG_NEW_PROCESS:
+ case winapi.JOB_OBJECT_MSG_EXIT_PROCESS:
+ case winapi.JOB_OBJECT_MSG_ABNORMAL_EXIT_PROCESS:
+ case winapi.JOB_OBJECT_MSG_PROCESS_MEMORY_LIMIT:
+ case winapi.JOB_OBJECT_MSG_JOB_MEMORY_LIMIT:
+ case winapi.JOB_OBJECT_MSG_NOTIFICATION_LIMIT:
+ default:
+ return nil, fmt.Errorf("unknown job notification type: %d", code)
+ }
+ return MsgUnimplemented{}, nil
+}
+
+// Assigns an IO completion port to get notified of events for the registered job
+// object.
+func attachIOCP(job windows.Handle, iocp windows.Handle) error {
+ info := winapi.JOBOBJECT_ASSOCIATE_COMPLETION_PORT{
+ CompletionKey: job,
+ CompletionPort: iocp,
+ }
+ _, err := windows.SetInformationJobObject(job, windows.JobObjectAssociateCompletionPortInformation, uintptr(unsafe.Pointer(&info)), uint32(unsafe.Sizeof(info)))
+ return err
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/jobobject/jobobject.go b/vendor/github.com/Microsoft/hcsshim/internal/jobobject/jobobject.go
new file mode 100644
index 000000000..c9fdd921a
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/jobobject/jobobject.go
@@ -0,0 +1,538 @@
+package jobobject
+
+import (
+ "context"
+ "errors"
+ "fmt"
+ "sync"
+ "unsafe"
+
+ "github.com/Microsoft/hcsshim/internal/queue"
+ "github.com/Microsoft/hcsshim/internal/winapi"
+ "golang.org/x/sys/windows"
+)
+
+// This file provides higher level constructs for the win32 job object API.
+// Most of the core creation and management functions are already present in "golang.org/x/sys/windows"
+// (CreateJobObject, AssignProcessToJobObject, etc.) as well as most of the limit information
+// structs and associated limit flags. Whatever is not present from the job object API
+// in golang.org/x/sys/windows is located in /internal/winapi.
+//
+// https://docs.microsoft.com/en-us/windows/win32/procthread/job-objects
+
+// JobObject is a high level wrapper around a Windows job object. Holds a handle to
+// the job, a queue to receive iocp notifications about the lifecycle
+// of the job and a mutex for synchronized handle access.
+type JobObject struct {
+ handle windows.Handle
+ mq *queue.MessageQueue
+ handleLock sync.RWMutex
+}
+
+// JobLimits represents the resource constraints that can be applied to a job object.
+type JobLimits struct {
+ CPULimit uint32
+ CPUWeight uint32
+ MemoryLimitInBytes uint64
+ MaxIOPS int64
+ MaxBandwidth int64
+}
+
+type CPURateControlType uint32
+
+const (
+ WeightBased CPURateControlType = iota
+ RateBased
+)
+
+// Processor resource controls
+const (
+ cpuLimitMin = 1
+ cpuLimitMax = 10000
+ cpuWeightMin = 1
+ cpuWeightMax = 9
+)
+
+var (
+ ErrAlreadyClosed = errors.New("the handle has already been closed")
+ ErrNotRegistered = errors.New("job is not registered to receive notifications")
+)
+
+// Options represents the set of configurable options when making or opening a job object.
+type Options struct {
+ // `Name` specifies the name of the job object if a named job object is desired.
+ Name string
+ // `Notifications` specifies if the job will be registered to receive notifications.
+ // Defaults to false.
+ Notifications bool
+ // `UseNTVariant` specifies if we should use the `Nt` variant of Open/CreateJobObject.
+ // Defaults to false.
+ UseNTVariant bool
+ // `EnableIOTracking` enables tracking I/O statistics on the job object. More specifically this
+ // calls SetInformationJobObject with the JobObjectIoAttribution class.
+ EnableIOTracking bool
+}
+
+// Create creates a job object.
+//
+// If options.Name is an empty string, the job will not be assigned a name.
+//
+// If options.Notifications is not enabled, `PollNotification` will return immediately with error `ErrNotRegistered`.
+//
+// If `options` is nil, use default option values.
+//
+// Returns a JobObject structure and an error if there is one.
+func Create(ctx context.Context, options *Options) (_ *JobObject, err error) {
+ if options == nil {
+ options = &Options{}
+ }
+
+ var jobName *winapi.UnicodeString
+ if options.Name != "" {
+ jobName, err = winapi.NewUnicodeString(options.Name)
+ if err != nil {
+ return nil, err
+ }
+ }
+
+ var jobHandle windows.Handle
+ if options.UseNTVariant {
+ oa := winapi.ObjectAttributes{
+ Length: unsafe.Sizeof(winapi.ObjectAttributes{}),
+ ObjectName: jobName,
+ Attributes: 0,
+ }
+ status := winapi.NtCreateJobObject(&jobHandle, winapi.JOB_OBJECT_ALL_ACCESS, &oa)
+ if status != 0 {
+ return nil, winapi.RtlNtStatusToDosError(status)
+ }
+ } else {
+ var jobNameBuf *uint16
+ if jobName != nil && jobName.Buffer != nil {
+ jobNameBuf = jobName.Buffer
+ }
+ jobHandle, err = windows.CreateJobObject(nil, jobNameBuf)
+ if err != nil {
+ return nil, err
+ }
+ }
+
+ defer func() {
+ if err != nil {
+ windows.Close(jobHandle)
+ }
+ }()
+
+ job := &JobObject{
+ handle: jobHandle,
+ }
+
+ // If the IOCP we'll be using to receive messages for all jobs hasn't been
+ // created, create it and start polling.
+ if options.Notifications {
+ mq, err := setupNotifications(ctx, job)
+ if err != nil {
+ return nil, err
+ }
+ job.mq = mq
+ }
+
+ if options.EnableIOTracking {
+ if err := enableIOTracking(jobHandle); err != nil {
+ return nil, err
+ }
+ }
+
+ return job, nil
+}
+
+// Open opens an existing job object with name provided in `options`. If no name is provided
+// return an error since we need to know what job object to open.
+//
+// If options.Notifications is false, `PollNotification` will return immediately with error `ErrNotRegistered`.
+//
+// Returns a JobObject structure and an error if there is one.
+func Open(ctx context.Context, options *Options) (_ *JobObject, err error) {
+ if options == nil || (options != nil && options.Name == "") {
+ return nil, errors.New("no job object name specified to open")
+ }
+
+ unicodeJobName, err := winapi.NewUnicodeString(options.Name)
+ if err != nil {
+ return nil, err
+ }
+
+ var jobHandle windows.Handle
+ if options != nil && options.UseNTVariant {
+ oa := winapi.ObjectAttributes{
+ Length: unsafe.Sizeof(winapi.ObjectAttributes{}),
+ ObjectName: unicodeJobName,
+ Attributes: 0,
+ }
+ status := winapi.NtOpenJobObject(&jobHandle, winapi.JOB_OBJECT_ALL_ACCESS, &oa)
+ if status != 0 {
+ return nil, winapi.RtlNtStatusToDosError(status)
+ }
+ } else {
+ jobHandle, err = winapi.OpenJobObject(winapi.JOB_OBJECT_ALL_ACCESS, false, unicodeJobName.Buffer)
+ if err != nil {
+ return nil, err
+ }
+ }
+
+ defer func() {
+ if err != nil {
+ windows.Close(jobHandle)
+ }
+ }()
+
+ job := &JobObject{
+ handle: jobHandle,
+ }
+
+ // If the IOCP we'll be using to receive messages for all jobs hasn't been
+ // created, create it and start polling.
+ if options != nil && options.Notifications {
+ mq, err := setupNotifications(ctx, job)
+ if err != nil {
+ return nil, err
+ }
+ job.mq = mq
+ }
+
+ return job, nil
+}
+
+// helper function to setup notifications for creating/opening a job object
+func setupNotifications(ctx context.Context, job *JobObject) (*queue.MessageQueue, error) {
+ job.handleLock.RLock()
+ defer job.handleLock.RUnlock()
+
+ if job.handle == 0 {
+ return nil, ErrAlreadyClosed
+ }
+
+ ioInitOnce.Do(func() {
+ h, err := windows.CreateIoCompletionPort(windows.InvalidHandle, 0, 0, 0xffffffff)
+ if err != nil {
+ initIOErr = err
+ return
+ }
+ ioCompletionPort = h
+ go pollIOCP(ctx, h)
+ })
+
+ if initIOErr != nil {
+ return nil, initIOErr
+ }
+
+ mq := queue.NewMessageQueue()
+ jobMap.Store(uintptr(job.handle), mq)
+ if err := attachIOCP(job.handle, ioCompletionPort); err != nil {
+ jobMap.Delete(uintptr(job.handle))
+ return nil, fmt.Errorf("failed to attach job to IO completion port: %w", err)
+ }
+ return mq, nil
+}
+
+// PollNotification will poll for a job object notification. This call should only be called once
+// per job (ideally in a goroutine loop) and will block if there is not a notification ready.
+// This call will return immediately with error `ErrNotRegistered` if the job was not registered
+// to receive notifications during `Create`. Internally, messages will be queued and there
+// is no worry of messages being dropped.
+func (job *JobObject) PollNotification() (interface{}, error) {
+ if job.mq == nil {
+ return nil, ErrNotRegistered
+ }
+ return job.mq.Dequeue()
+}
+
+// UpdateProcThreadAttribute updates the passed in ProcThreadAttributeList to contain what is necessary to
+// launch a process in a job at creation time. This can be used to avoid having to call Assign() after a process
+// has already started running.
+func (job *JobObject) UpdateProcThreadAttribute(attrList *windows.ProcThreadAttributeListContainer) error {
+ job.handleLock.RLock()
+ defer job.handleLock.RUnlock()
+
+ if job.handle == 0 {
+ return ErrAlreadyClosed
+ }
+
+ if err := attrList.Update(
+ winapi.PROC_THREAD_ATTRIBUTE_JOB_LIST,
+ unsafe.Pointer(&job.handle),
+ unsafe.Sizeof(job.handle),
+ ); err != nil {
+ return fmt.Errorf("failed to update proc thread attributes for job object: %w", err)
+ }
+
+ return nil
+}
+
+// Close closes the job object handle.
+func (job *JobObject) Close() error {
+ job.handleLock.Lock()
+ defer job.handleLock.Unlock()
+
+ if job.handle == 0 {
+ return ErrAlreadyClosed
+ }
+
+ if err := windows.Close(job.handle); err != nil {
+ return err
+ }
+
+ if job.mq != nil {
+ job.mq.Close()
+ }
+ // The handle is now invalid, so if the map entry used to receive notifications for this
+ // job still exists, remove it so we stop receiving notifications.
+ if _, ok := jobMap.Load(uintptr(job.handle)); ok {
+ jobMap.Delete(uintptr(job.handle))
+ }
+
+ job.handle = 0
+ return nil
+}
+
+// Assign assigns a process to the job object.
+func (job *JobObject) Assign(pid uint32) error {
+ job.handleLock.RLock()
+ defer job.handleLock.RUnlock()
+
+ if job.handle == 0 {
+ return ErrAlreadyClosed
+ }
+
+ if pid == 0 {
+ return errors.New("invalid pid: 0")
+ }
+ hProc, err := windows.OpenProcess(winapi.PROCESS_ALL_ACCESS, true, pid)
+ if err != nil {
+ return err
+ }
+ defer windows.Close(hProc)
+ return windows.AssignProcessToJobObject(job.handle, hProc)
+}
+
+// Terminate terminates the job, essentially calls TerminateProcess on every process in the
+// job.
+func (job *JobObject) Terminate(exitCode uint32) error {
+ job.handleLock.RLock()
+ defer job.handleLock.RUnlock()
+ if job.handle == 0 {
+ return ErrAlreadyClosed
+ }
+ return windows.TerminateJobObject(job.handle, exitCode)
+}
+
+// Pids returns all of the process IDs in the job object.
+func (job *JobObject) Pids() ([]uint32, error) {
+ job.handleLock.RLock()
+ defer job.handleLock.RUnlock()
+
+ if job.handle == 0 {
+ return nil, ErrAlreadyClosed
+ }
+
+ info := winapi.JOBOBJECT_BASIC_PROCESS_ID_LIST{}
+ err := winapi.QueryInformationJobObject(
+ job.handle,
+ winapi.JobObjectBasicProcessIdList,
+ unsafe.Pointer(&info),
+ uint32(unsafe.Sizeof(info)),
+ nil,
+ )
+
+ // This is either the case where there is only one process or no processes in
+ // the job. Any other case will result in ERROR_MORE_DATA. Check if info.NumberOfProcessIdsInList
+ // is 1 and just return this, otherwise return an empty slice.
+ if err == nil {
+ if info.NumberOfProcessIdsInList == 1 {
+ return []uint32{uint32(info.ProcessIdList[0])}, nil
+ }
+ // Return empty slice instead of nil to play well with the caller of this.
+ // Do not return an error if no processes are running inside the job
+ return []uint32{}, nil
+ }
+
+ if err != winapi.ERROR_MORE_DATA {
+ return nil, fmt.Errorf("failed initial query for PIDs in job object: %w", err)
+ }
+
+ jobBasicProcessIDListSize := unsafe.Sizeof(info) + (unsafe.Sizeof(info.ProcessIdList[0]) * uintptr(info.NumberOfAssignedProcesses-1))
+ buf := make([]byte, jobBasicProcessIDListSize)
+ if err = winapi.QueryInformationJobObject(
+ job.handle,
+ winapi.JobObjectBasicProcessIdList,
+ unsafe.Pointer(&buf[0]),
+ uint32(len(buf)),
+ nil,
+ ); err != nil {
+ return nil, fmt.Errorf("failed to query for PIDs in job object: %w", err)
+ }
+
+ bufInfo := (*winapi.JOBOBJECT_BASIC_PROCESS_ID_LIST)(unsafe.Pointer(&buf[0]))
+ pids := make([]uint32, bufInfo.NumberOfProcessIdsInList)
+ for i, bufPid := range bufInfo.AllPids() {
+ pids[i] = uint32(bufPid)
+ }
+ return pids, nil
+}
+
+// QueryMemoryStats gets the memory stats for the job object.
+func (job *JobObject) QueryMemoryStats() (*winapi.JOBOBJECT_MEMORY_USAGE_INFORMATION, error) {
+ job.handleLock.RLock()
+ defer job.handleLock.RUnlock()
+
+ if job.handle == 0 {
+ return nil, ErrAlreadyClosed
+ }
+
+ info := winapi.JOBOBJECT_MEMORY_USAGE_INFORMATION{}
+ if err := winapi.QueryInformationJobObject(
+ job.handle,
+ winapi.JobObjectMemoryUsageInformation,
+ unsafe.Pointer(&info),
+ uint32(unsafe.Sizeof(info)),
+ nil,
+ ); err != nil {
+ return nil, fmt.Errorf("failed to query for job object memory stats: %w", err)
+ }
+ return &info, nil
+}
+
+// QueryProcessorStats gets the processor stats for the job object.
+func (job *JobObject) QueryProcessorStats() (*winapi.JOBOBJECT_BASIC_ACCOUNTING_INFORMATION, error) {
+ job.handleLock.RLock()
+ defer job.handleLock.RUnlock()
+
+ if job.handle == 0 {
+ return nil, ErrAlreadyClosed
+ }
+
+ info := winapi.JOBOBJECT_BASIC_ACCOUNTING_INFORMATION{}
+ if err := winapi.QueryInformationJobObject(
+ job.handle,
+ winapi.JobObjectBasicAccountingInformation,
+ unsafe.Pointer(&info),
+ uint32(unsafe.Sizeof(info)),
+ nil,
+ ); err != nil {
+ return nil, fmt.Errorf("failed to query for job object process stats: %w", err)
+ }
+ return &info, nil
+}
+
+// QueryStorageStats gets the storage (I/O) stats for the job object. This call will error
+// if either `EnableIOTracking` wasn't set to true on creation of the job, or SetIOTracking()
+// hasn't been called since creation of the job.
+func (job *JobObject) QueryStorageStats() (*winapi.JOBOBJECT_IO_ATTRIBUTION_INFORMATION, error) {
+ job.handleLock.RLock()
+ defer job.handleLock.RUnlock()
+
+ if job.handle == 0 {
+ return nil, ErrAlreadyClosed
+ }
+
+ info := winapi.JOBOBJECT_IO_ATTRIBUTION_INFORMATION{
+ ControlFlags: winapi.JOBOBJECT_IO_ATTRIBUTION_CONTROL_ENABLE,
+ }
+ if err := winapi.QueryInformationJobObject(
+ job.handle,
+ winapi.JobObjectIoAttribution,
+ unsafe.Pointer(&info),
+ uint32(unsafe.Sizeof(info)),
+ nil,
+ ); err != nil {
+ return nil, fmt.Errorf("failed to query for job object storage stats: %w", err)
+ }
+ return &info, nil
+}
+
+// QueryPrivateWorkingSet returns the private working set size for the job. This is calculated by adding up the
+// private working set for every process running in the job.
+func (job *JobObject) QueryPrivateWorkingSet() (uint64, error) {
+ pids, err := job.Pids()
+ if err != nil {
+ return 0, err
+ }
+
+ openAndQueryWorkingSet := func(pid uint32) (uint64, error) {
+ h, err := windows.OpenProcess(windows.PROCESS_QUERY_LIMITED_INFORMATION, false, pid)
+ if err != nil {
+ // Skip this pid if OpenProcess fails to return a valid handle. This handles a
+ // case where one of the pids in the job exited before we open.
+ return 0, nil
+ }
+ defer func() {
+ _ = windows.Close(h)
+ }()
+ // Check if the process is actually running in the job still. There's a small chance
+ // that the process could have exited and had its pid re-used between grabbing the pids
+ // in the job and opening the handle to it above.
+ var inJob int32
+ if err := winapi.IsProcessInJob(h, job.handle, &inJob); err != nil {
+ // This shouldn't fail unless we have incorrect access rights which we control
+ // here so probably best to error out if this failed.
+ return 0, err
+ }
+ // Don't report stats for this process as it's not running in the job. This shouldn't be
+ // an error condition though.
+ if inJob == 0 {
+ return 0, nil
+ }
+
+ var vmCounters winapi.VM_COUNTERS_EX2
+ status := winapi.NtQueryInformationProcess(
+ h,
+ winapi.ProcessVmCounters,
+ unsafe.Pointer(&vmCounters),
+ uint32(unsafe.Sizeof(vmCounters)),
+ nil,
+ )
+ if !winapi.NTSuccess(status) {
+ return 0, fmt.Errorf("failed to query information for process: %w", winapi.RtlNtStatusToDosError(status))
+ }
+ return uint64(vmCounters.PrivateWorkingSetSize), nil
+ }
+
+ var jobWorkingSetSize uint64
+ for _, pid := range pids {
+ workingSet, err := openAndQueryWorkingSet(pid)
+ if err != nil {
+ return 0, err
+ }
+ jobWorkingSetSize += workingSet
+ }
+
+ return jobWorkingSetSize, nil
+}
+
+// SetIOTracking enables IO tracking for processes in the job object.
+// This enables use of the QueryStorageStats method.
+func (job *JobObject) SetIOTracking() error {
+ job.handleLock.RLock()
+ defer job.handleLock.RUnlock()
+
+ if job.handle == 0 {
+ return ErrAlreadyClosed
+ }
+
+ return enableIOTracking(job.handle)
+}
+
+func enableIOTracking(job windows.Handle) error {
+ info := winapi.JOBOBJECT_IO_ATTRIBUTION_INFORMATION{
+ ControlFlags: winapi.JOBOBJECT_IO_ATTRIBUTION_CONTROL_ENABLE,
+ }
+ if _, err := windows.SetInformationJobObject(
+ job,
+ winapi.JobObjectIoAttribution,
+ uintptr(unsafe.Pointer(&info)),
+ uint32(unsafe.Sizeof(info)),
+ ); err != nil {
+ return fmt.Errorf("failed to enable IO tracking on job object: %w", err)
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/jobobject/limits.go b/vendor/github.com/Microsoft/hcsshim/internal/jobobject/limits.go
new file mode 100644
index 000000000..4efde292c
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/jobobject/limits.go
@@ -0,0 +1,315 @@
+package jobobject
+
+import (
+ "errors"
+ "fmt"
+ "unsafe"
+
+ "github.com/Microsoft/hcsshim/internal/winapi"
+ "golang.org/x/sys/windows"
+)
+
+const (
+ memoryLimitMax uint64 = 0xffffffffffffffff
+)
+
+func isFlagSet(flag, controlFlags uint32) bool {
+ return (flag & controlFlags) == flag
+}
+
+// SetResourceLimits sets resource limits on the job object (cpu, memory, storage).
+func (job *JobObject) SetResourceLimits(limits *JobLimits) error {
+ // Go through and check what limits were specified and apply them to the job.
+ if limits.MemoryLimitInBytes != 0 {
+ if err := job.SetMemoryLimit(limits.MemoryLimitInBytes); err != nil {
+ return fmt.Errorf("failed to set job object memory limit: %w", err)
+ }
+ }
+
+ if limits.CPULimit != 0 {
+ if err := job.SetCPULimit(RateBased, limits.CPULimit); err != nil {
+ return fmt.Errorf("failed to set job object cpu limit: %w", err)
+ }
+ } else if limits.CPUWeight != 0 {
+ if err := job.SetCPULimit(WeightBased, limits.CPUWeight); err != nil {
+ return fmt.Errorf("failed to set job object cpu limit: %w", err)
+ }
+ }
+
+ if limits.MaxBandwidth != 0 || limits.MaxIOPS != 0 {
+ if err := job.SetIOLimit(limits.MaxBandwidth, limits.MaxIOPS); err != nil {
+ return fmt.Errorf("failed to set io limit on job object: %w", err)
+ }
+ }
+ return nil
+}
+
+// SetTerminateOnLastHandleClose sets the job object flag that specifies that the job should terminate
+// all processes in the job on the last open handle being closed.
+func (job *JobObject) SetTerminateOnLastHandleClose() error {
+ info, err := job.getExtendedInformation()
+ if err != nil {
+ return err
+ }
+ info.BasicLimitInformation.LimitFlags |= windows.JOB_OBJECT_LIMIT_KILL_ON_JOB_CLOSE
+ return job.setExtendedInformation(info)
+}
+
+// SetMemoryLimit sets the memory limit of the job object based on the given `memoryLimitInBytes`.
+func (job *JobObject) SetMemoryLimit(memoryLimitInBytes uint64) error {
+ if memoryLimitInBytes >= memoryLimitMax {
+ return errors.New("memory limit specified exceeds the max size")
+ }
+
+ info, err := job.getExtendedInformation()
+ if err != nil {
+ return err
+ }
+
+ info.JobMemoryLimit = uintptr(memoryLimitInBytes)
+ info.BasicLimitInformation.LimitFlags |= windows.JOB_OBJECT_LIMIT_JOB_MEMORY
+ return job.setExtendedInformation(info)
+}
+
+// GetMemoryLimit gets the memory limit in bytes of the job object.
+func (job *JobObject) GetMemoryLimit() (uint64, error) {
+ info, err := job.getExtendedInformation()
+ if err != nil {
+ return 0, err
+ }
+ return uint64(info.JobMemoryLimit), nil
+}
+
+// SetCPULimit sets the CPU limit depending on the specified `CPURateControlType` to
+// `rateControlValue` for the job object.
+func (job *JobObject) SetCPULimit(rateControlType CPURateControlType, rateControlValue uint32) error {
+ cpuInfo, err := job.getCPURateControlInformation()
+ if err != nil {
+ return err
+ }
+ switch rateControlType {
+ case WeightBased:
+ if rateControlValue < cpuWeightMin || rateControlValue > cpuWeightMax {
+ return fmt.Errorf("processor weight value of `%d` is invalid", rateControlValue)
+ }
+ cpuInfo.ControlFlags |= winapi.JOB_OBJECT_CPU_RATE_CONTROL_ENABLE | winapi.JOB_OBJECT_CPU_RATE_CONTROL_WEIGHT_BASED
+ cpuInfo.Value = rateControlValue
+ case RateBased:
+ if rateControlValue < cpuLimitMin || rateControlValue > cpuLimitMax {
+ return fmt.Errorf("processor rate of `%d` is invalid", rateControlValue)
+ }
+ cpuInfo.ControlFlags |= winapi.JOB_OBJECT_CPU_RATE_CONTROL_ENABLE | winapi.JOB_OBJECT_CPU_RATE_CONTROL_HARD_CAP
+ cpuInfo.Value = rateControlValue
+ default:
+ return errors.New("invalid job object cpu rate control type")
+ }
+ return job.setCPURateControlInfo(cpuInfo)
+}
+
+// GetCPULimit gets the cpu limits for the job object.
+// `rateControlType` is used to indicate what type of cpu limit to query for.
+func (job *JobObject) GetCPULimit(rateControlType CPURateControlType) (uint32, error) {
+ info, err := job.getCPURateControlInformation()
+ if err != nil {
+ return 0, err
+ }
+
+ if !isFlagSet(winapi.JOB_OBJECT_CPU_RATE_CONTROL_ENABLE, info.ControlFlags) {
+ return 0, errors.New("the job does not have cpu rate control enabled")
+ }
+
+ switch rateControlType {
+ case WeightBased:
+ if !isFlagSet(winapi.JOB_OBJECT_CPU_RATE_CONTROL_WEIGHT_BASED, info.ControlFlags) {
+ return 0, errors.New("cannot get cpu weight for job object without cpu weight option set")
+ }
+ case RateBased:
+ if !isFlagSet(winapi.JOB_OBJECT_CPU_RATE_CONTROL_HARD_CAP, info.ControlFlags) {
+ return 0, errors.New("cannot get cpu rate hard cap for job object without cpu rate hard cap option set")
+ }
+ default:
+ return 0, errors.New("invalid job object cpu rate control type")
+ }
+ return info.Value, nil
+}
+
+// SetCPUAffinity sets the processor affinity for the job object.
+// The affinity is passed in as a bitmask.
+func (job *JobObject) SetCPUAffinity(affinityBitMask uint64) error {
+ info, err := job.getExtendedInformation()
+ if err != nil {
+ return err
+ }
+ info.BasicLimitInformation.LimitFlags |= uint32(windows.JOB_OBJECT_LIMIT_AFFINITY)
+ info.BasicLimitInformation.Affinity = uintptr(affinityBitMask)
+ return job.setExtendedInformation(info)
+}
+
+// GetCPUAffinity gets the processor affinity for the job object.
+// The returned affinity is a bitmask.
+func (job *JobObject) GetCPUAffinity() (uint64, error) {
+ info, err := job.getExtendedInformation()
+ if err != nil {
+ return 0, err
+ }
+ return uint64(info.BasicLimitInformation.Affinity), nil
+}
+
+// SetIOLimit sets the IO limits specified on the job object.
+func (job *JobObject) SetIOLimit(maxBandwidth, maxIOPS int64) error {
+ ioInfo, err := job.getIOLimit()
+ if err != nil {
+ return err
+ }
+ ioInfo.ControlFlags |= winapi.JOB_OBJECT_IO_RATE_CONTROL_ENABLE
+ if maxBandwidth != 0 {
+ ioInfo.MaxBandwidth = maxBandwidth
+ }
+ if maxIOPS != 0 {
+ ioInfo.MaxIops = maxIOPS
+ }
+ return job.setIORateControlInfo(ioInfo)
+}
+
+// GetIOMaxBandwidthLimit gets the max bandwidth for the job object.
+func (job *JobObject) GetIOMaxBandwidthLimit() (int64, error) {
+ info, err := job.getIOLimit()
+ if err != nil {
+ return 0, err
+ }
+ return info.MaxBandwidth, nil
+}
+
+// GetIOMaxIopsLimit gets the max iops for the job object.
+func (job *JobObject) GetIOMaxIopsLimit() (int64, error) {
+ info, err := job.getIOLimit()
+ if err != nil {
+ return 0, err
+ }
+ return info.MaxIops, nil
+}
+
+// Helper function for getting a job object's extended information.
+func (job *JobObject) getExtendedInformation() (*windows.JOBOBJECT_EXTENDED_LIMIT_INFORMATION, error) {
+ job.handleLock.RLock()
+ defer job.handleLock.RUnlock()
+
+ if job.handle == 0 {
+ return nil, ErrAlreadyClosed
+ }
+
+ info := windows.JOBOBJECT_EXTENDED_LIMIT_INFORMATION{}
+ if err := winapi.QueryInformationJobObject(
+ job.handle,
+ windows.JobObjectExtendedLimitInformation,
+ unsafe.Pointer(&info),
+ uint32(unsafe.Sizeof(info)),
+ nil,
+ ); err != nil {
+ return nil, fmt.Errorf("query %v returned error: %w", info, err)
+ }
+ return &info, nil
+}
+
+// Helper function for getting a job object's CPU rate control information.
+func (job *JobObject) getCPURateControlInformation() (*winapi.JOBOBJECT_CPU_RATE_CONTROL_INFORMATION, error) {
+ job.handleLock.RLock()
+ defer job.handleLock.RUnlock()
+
+ if job.handle == 0 {
+ return nil, ErrAlreadyClosed
+ }
+
+ info := winapi.JOBOBJECT_CPU_RATE_CONTROL_INFORMATION{}
+ if err := winapi.QueryInformationJobObject(
+ job.handle,
+ windows.JobObjectCpuRateControlInformation,
+ unsafe.Pointer(&info),
+ uint32(unsafe.Sizeof(info)),
+ nil,
+ ); err != nil {
+ return nil, fmt.Errorf("query %v returned error: %w", info, err)
+ }
+ return &info, nil
+}
+
+// Helper function for setting a job object's extended information.
+func (job *JobObject) setExtendedInformation(info *windows.JOBOBJECT_EXTENDED_LIMIT_INFORMATION) error {
+ job.handleLock.RLock()
+ defer job.handleLock.RUnlock()
+
+ if job.handle == 0 {
+ return ErrAlreadyClosed
+ }
+
+ if _, err := windows.SetInformationJobObject(
+ job.handle,
+ windows.JobObjectExtendedLimitInformation,
+ uintptr(unsafe.Pointer(info)),
+ uint32(unsafe.Sizeof(*info)),
+ ); err != nil {
+ return fmt.Errorf("failed to set Extended info %v on job object: %w", info, err)
+ }
+ return nil
+}
+
+// Helper function for querying job handle for IO limit information.
+func (job *JobObject) getIOLimit() (*winapi.JOBOBJECT_IO_RATE_CONTROL_INFORMATION, error) {
+ job.handleLock.RLock()
+ defer job.handleLock.RUnlock()
+
+ if job.handle == 0 {
+ return nil, ErrAlreadyClosed
+ }
+
+ ioInfo := &winapi.JOBOBJECT_IO_RATE_CONTROL_INFORMATION{}
+ var blockCount uint32 = 1
+
+ if _, err := winapi.QueryIoRateControlInformationJobObject(
+ job.handle,
+ nil,
+ &ioInfo,
+ &blockCount,
+ ); err != nil {
+ return nil, fmt.Errorf("query %v returned error: %w", ioInfo, err)
+ }
+
+ if !isFlagSet(winapi.JOB_OBJECT_IO_RATE_CONTROL_ENABLE, ioInfo.ControlFlags) {
+ return nil, fmt.Errorf("query %v cannot get IO limits for job object without IO rate control option set", ioInfo)
+ }
+ return ioInfo, nil
+}
+
+// Helper function for setting a job object's IO rate control information.
+func (job *JobObject) setIORateControlInfo(ioInfo *winapi.JOBOBJECT_IO_RATE_CONTROL_INFORMATION) error {
+ job.handleLock.RLock()
+ defer job.handleLock.RUnlock()
+
+ if job.handle == 0 {
+ return ErrAlreadyClosed
+ }
+
+ if _, err := winapi.SetIoRateControlInformationJobObject(job.handle, ioInfo); err != nil {
+ return fmt.Errorf("failed to set IO limit info %v on job object: %w", ioInfo, err)
+ }
+ return nil
+}
+
+// Helper function for setting a job object's CPU rate control information.
+func (job *JobObject) setCPURateControlInfo(cpuInfo *winapi.JOBOBJECT_CPU_RATE_CONTROL_INFORMATION) error {
+ job.handleLock.RLock()
+ defer job.handleLock.RUnlock()
+
+ if job.handle == 0 {
+ return ErrAlreadyClosed
+ }
+ if _, err := windows.SetInformationJobObject(
+ job.handle,
+ windows.JobObjectCpuRateControlInformation,
+ uintptr(unsafe.Pointer(cpuInfo)),
+		uint32(unsafe.Sizeof(*cpuInfo)),
+ ); err != nil {
+ return fmt.Errorf("failed to set cpu limit info %v on job object: %w", cpuInfo, err)
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/log/g.go b/vendor/github.com/Microsoft/hcsshim/internal/log/g.go
new file mode 100644
index 000000000..ba6b1a4a5
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/log/g.go
@@ -0,0 +1,23 @@
+package log
+
+import (
+ "context"
+
+ "github.com/sirupsen/logrus"
+ "go.opencensus.io/trace"
+)
+
+// G returns a `logrus.Entry` with the `TraceID, SpanID` from `ctx` if `ctx`
+// contains an OpenCensus `trace.Span`.
+func G(ctx context.Context) *logrus.Entry {
+ span := trace.FromContext(ctx)
+ if span != nil {
+ sctx := span.SpanContext()
+ return logrus.WithFields(logrus.Fields{
+ "traceID": sctx.TraceID.String(),
+ "spanID": sctx.SpanID.String(),
+ // "parentSpanID": TODO: JTERRY75 - Try to convince OC to export this?
+ })
+ }
+ return logrus.NewEntry(logrus.StandardLogger())
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/logfields/fields.go b/vendor/github.com/Microsoft/hcsshim/internal/logfields/fields.go
new file mode 100644
index 000000000..cf2c166d9
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/logfields/fields.go
@@ -0,0 +1,32 @@
+package logfields
+
+const (
+ // Identifiers
+
+ ContainerID = "cid"
+ UVMID = "uvm-id"
+ ProcessID = "pid"
+
+ // Common Misc
+
+ // Timeout represents an operation timeout.
+ Timeout = "timeout"
+ JSON = "json"
+
+ // Keys/values
+
+ Field = "field"
+ OCIAnnotation = "oci-annotation"
+ Value = "value"
+
+	// Go types
+
+ ExpectedType = "expected-type"
+ Bool = "bool"
+ Uint32 = "uint32"
+ Uint64 = "uint64"
+
+ // runhcs
+
+ VMShimOperation = "vmshim-op"
+)
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/longpath/longpath.go b/vendor/github.com/Microsoft/hcsshim/internal/longpath/longpath.go
new file mode 100644
index 000000000..e5b8b85e0
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/longpath/longpath.go
@@ -0,0 +1,24 @@
+package longpath
+
+import (
+ "path/filepath"
+ "strings"
+)
+
+// LongAbs makes a path absolute and returns it in NT long path form.
+func LongAbs(path string) (string, error) {
+ if strings.HasPrefix(path, `\\?\`) || strings.HasPrefix(path, `\\.\`) {
+ return path, nil
+ }
+ if !filepath.IsAbs(path) {
+ absPath, err := filepath.Abs(path)
+ if err != nil {
+ return "", err
+ }
+ path = absPath
+ }
+ if strings.HasPrefix(path, `\\`) {
+ return `\\?\UNC\` + path[2:], nil
+ }
+ return `\\?\` + path, nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/mergemaps/merge.go b/vendor/github.com/Microsoft/hcsshim/internal/mergemaps/merge.go
new file mode 100644
index 000000000..7e95efb30
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/mergemaps/merge.go
@@ -0,0 +1,52 @@
+package mergemaps
+
+import "encoding/json"
+
+// Merge recursively merges map `fromMap` into map `ToMap`. On conflicting
+// keys the values from fromMap win; keys present only in ToMap are carried over.
+// From http://stackoverflow.com/questions/40491438/merging-two-json-strings-in-golang
+func Merge(fromMap, ToMap interface{}) interface{} {
+ switch fromMap := fromMap.(type) {
+ case map[string]interface{}:
+ ToMap, ok := ToMap.(map[string]interface{})
+ if !ok {
+ return fromMap
+ }
+ for keyToMap, valueToMap := range ToMap {
+ if valueFromMap, ok := fromMap[keyToMap]; ok {
+ fromMap[keyToMap] = Merge(valueFromMap, valueToMap)
+ } else {
+ fromMap[keyToMap] = valueToMap
+ }
+ }
+ case nil:
+ // merge(nil, map[string]interface{...}) -> map[string]interface{...}
+ ToMap, ok := ToMap.(map[string]interface{})
+ if ok {
+ return ToMap
+ }
+ }
+ return fromMap
+}
+
+// MergeJSON merges the contents of a JSON string into an object representation,
+// returning a new object suitable for translating to JSON.
+func MergeJSON(object interface{}, additionalJSON []byte) (interface{}, error) {
+ if len(additionalJSON) == 0 {
+ return object, nil
+ }
+ objectJSON, err := json.Marshal(object)
+ if err != nil {
+ return nil, err
+ }
+ var objectMap, newMap map[string]interface{}
+ err = json.Unmarshal(objectJSON, &objectMap)
+ if err != nil {
+ return nil, err
+ }
+ err = json.Unmarshal(additionalJSON, &newMap)
+ if err != nil {
+ return nil, err
+ }
+ return Merge(newMap, objectMap), nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/oc/exporter.go b/vendor/github.com/Microsoft/hcsshim/internal/oc/exporter.go
new file mode 100644
index 000000000..f428bdaf7
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/oc/exporter.go
@@ -0,0 +1,43 @@
+package oc
+
+import (
+ "github.com/sirupsen/logrus"
+ "go.opencensus.io/trace"
+)
+
+var _ = (trace.Exporter)(&LogrusExporter{})
+
+// LogrusExporter is an OpenCensus `trace.Exporter` that exports
+// `trace.SpanData` to logrus output.
+type LogrusExporter struct {
+}
+
+// ExportSpan exports `s` based on the following rules:
+//
+// 1. All output will contain `s.Attributes`, `s.TraceID`, `s.SpanID`,
+// `s.ParentSpanID` for correlation
+//
+// 2. Any calls to .Annotate will not be supported.
+//
+// 3. The span itself will be written at `logrus.InfoLevel` unless
+// `s.Status.Code != 0` in which case it will be written at `logrus.ErrorLevel`
+// providing `s.Status.Message` as the error value.
+func (le *LogrusExporter) ExportSpan(s *trace.SpanData) {
+ // Combine all span annotations with traceID, spanID, parentSpanID
+ baseEntry := logrus.WithFields(logrus.Fields(s.Attributes))
+ baseEntry.Data["traceID"] = s.TraceID.String()
+ baseEntry.Data["spanID"] = s.SpanID.String()
+ baseEntry.Data["parentSpanID"] = s.ParentSpanID.String()
+ baseEntry.Data["startTime"] = s.StartTime
+ baseEntry.Data["endTime"] = s.EndTime
+ baseEntry.Data["duration"] = s.EndTime.Sub(s.StartTime).String()
+ baseEntry.Data["name"] = s.Name
+ baseEntry.Time = s.StartTime
+
+ level := logrus.InfoLevel
+ if s.Status.Code != 0 {
+ level = logrus.ErrorLevel
+ baseEntry.Data[logrus.ErrorKey] = s.Status.Message
+ }
+ baseEntry.Log(level, "Span")
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/oc/span.go b/vendor/github.com/Microsoft/hcsshim/internal/oc/span.go
new file mode 100644
index 000000000..fee4765cb
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/oc/span.go
@@ -0,0 +1,17 @@
+package oc
+
+import (
+ "go.opencensus.io/trace"
+)
+
+// SetSpanStatus sets the status of `span` based on `err`. If `err` is `nil`,
+// the status defaults to `trace.StatusCodeOK`.
+func SetSpanStatus(span *trace.Span, err error) {
+ status := trace.Status{}
+ if err != nil {
+ // TODO: JTERRY75 - Handle errors in a non-generic way
+ status.Code = trace.StatusCodeUnknown
+ status.Message = err.Error()
+ }
+ span.SetStatus(status)
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/queue/mq.go b/vendor/github.com/Microsoft/hcsshim/internal/queue/mq.go
new file mode 100644
index 000000000..4eb9bb9f1
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/queue/mq.go
@@ -0,0 +1,92 @@
+package queue
+
+import (
+ "errors"
+ "sync"
+)
+
+var ErrQueueClosed = errors.New("the queue is closed for reading and writing")
+
+// MessageQueue is a threadsafe queue for writing and retrieving messages.
+type MessageQueue struct {
+ m *sync.RWMutex
+ c *sync.Cond
+ messages []interface{}
+ closed bool
+}
+
+// NewMessageQueue returns a new MessageQueue.
+func NewMessageQueue() *MessageQueue {
+ m := &sync.RWMutex{}
+ return &MessageQueue{
+ m: m,
+ c: sync.NewCond(m),
+ messages: []interface{}{},
+ }
+}
+
+// Enqueue writes `msg` to the queue.
+func (mq *MessageQueue) Enqueue(msg interface{}) error {
+ mq.m.Lock()
+ defer mq.m.Unlock()
+
+ if mq.closed {
+ return ErrQueueClosed
+ }
+ mq.messages = append(mq.messages, msg)
+ // Signal a waiter that there is now a value available in the queue.
+ mq.c.Signal()
+ return nil
+}
+
+// Dequeue will read a value from the queue and remove it. If the queue
+// is empty, this will block until the queue is closed or a value gets enqueued.
+func (mq *MessageQueue) Dequeue() (interface{}, error) {
+ mq.m.Lock()
+ defer mq.m.Unlock()
+
+ for !mq.closed && mq.size() == 0 {
+ mq.c.Wait()
+ }
+
+ // We got woken up, check if it's because the queue got closed.
+ if mq.closed {
+ return nil, ErrQueueClosed
+ }
+
+ val := mq.messages[0]
+ mq.messages[0] = nil
+ mq.messages = mq.messages[1:]
+ return val, nil
+}
+
+// Size returns the size of the queue.
+func (mq *MessageQueue) Size() int {
+ mq.m.RLock()
+ defer mq.m.RUnlock()
+ return mq.size()
+}
+
+// size reports the queue length; for use inside functions that already hold the lock.
+func (mq *MessageQueue) size() int {
+ return len(mq.messages)
+}
+
+// Close closes the queue for future writes or reads. Any attempts to read or write from the
+// queue after close will return ErrQueueClosed. This is safe to call multiple times.
+func (mq *MessageQueue) Close() {
+ mq.m.Lock()
+ defer mq.m.Unlock()
+
+ // Already closed, noop
+ if mq.closed {
+ return
+ }
+
+ mq.messages = nil
+ mq.closed = true
+ // If there's anybody currently waiting on a value from Dequeue, we need to
+ // broadcast so the read(s) can return ErrQueueClosed.
+ mq.c.Broadcast()
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/safefile/safeopen.go b/vendor/github.com/Microsoft/hcsshim/internal/safefile/safeopen.go
new file mode 100644
index 000000000..66b8d7e03
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/safefile/safeopen.go
@@ -0,0 +1,375 @@
+package safefile
+
+import (
+ "errors"
+ "io"
+ "os"
+ "path/filepath"
+ "strings"
+ "syscall"
+ "unicode/utf16"
+ "unsafe"
+
+ "github.com/Microsoft/hcsshim/internal/longpath"
+ "github.com/Microsoft/hcsshim/internal/winapi"
+
+ winio "github.com/Microsoft/go-winio"
+)
+
+func OpenRoot(path string) (*os.File, error) {
+ longpath, err := longpath.LongAbs(path)
+ if err != nil {
+ return nil, err
+ }
+ return winio.OpenForBackup(longpath, syscall.GENERIC_READ, syscall.FILE_SHARE_READ|syscall.FILE_SHARE_WRITE|syscall.FILE_SHARE_DELETE, syscall.OPEN_EXISTING)
+}
+
+func cleanGoStringRelativePath(path string) (string, error) {
+ path = filepath.Clean(path)
+ if strings.Contains(path, ":") {
+ // Since alternate data streams must follow the file they
+ // are attached to, finding one here (out of order) is invalid.
+ return "", errors.New("path contains invalid character `:`")
+ }
+ fspath := filepath.FromSlash(path)
+ if len(fspath) > 0 && fspath[0] == '\\' {
+ return "", errors.New("expected relative path")
+ }
+ return fspath, nil
+}
+
+func ntRelativePath(path string) ([]uint16, error) {
+ fspath, err := cleanGoStringRelativePath(path)
+ if err != nil {
+ return nil, err
+ }
+
+ path16 := utf16.Encode(([]rune)(fspath))
+ if len(path16) > 32767 {
+ return nil, syscall.ENAMETOOLONG
+ }
+
+ return path16, nil
+}
+
+// openRelativeInternal opens a relative path from the given root, failing if
+// any of the intermediate path components are reparse points.
+func openRelativeInternal(path string, root *os.File, accessMask uint32, shareFlags uint32, createDisposition uint32, flags uint32) (*os.File, error) {
+ var (
+ h uintptr
+ iosb winapi.IOStatusBlock
+ oa winapi.ObjectAttributes
+ )
+
+ cleanRelativePath, err := cleanGoStringRelativePath(path)
+ if err != nil {
+ return nil, err
+ }
+
+ if root == nil || root.Fd() == 0 {
+ return nil, errors.New("missing root directory")
+ }
+
+ pathUnicode, err := winapi.NewUnicodeString(cleanRelativePath)
+ if err != nil {
+ return nil, err
+ }
+
+ oa.Length = unsafe.Sizeof(oa)
+ oa.ObjectName = pathUnicode
+ oa.RootDirectory = uintptr(root.Fd())
+ oa.Attributes = winapi.OBJ_DONT_REPARSE
+ status := winapi.NtCreateFile(
+ &h,
+ accessMask|syscall.SYNCHRONIZE,
+ &oa,
+ &iosb,
+ nil,
+ 0,
+ shareFlags,
+ createDisposition,
+ winapi.FILE_OPEN_FOR_BACKUP_INTENT|winapi.FILE_SYNCHRONOUS_IO_NONALERT|flags,
+ nil,
+ 0,
+ )
+ if status != 0 {
+ return nil, winapi.RtlNtStatusToDosError(status)
+ }
+
+ fullPath, err := longpath.LongAbs(filepath.Join(root.Name(), path))
+ if err != nil {
+ syscall.Close(syscall.Handle(h))
+ return nil, err
+ }
+
+ return os.NewFile(h, fullPath), nil
+}
+
+// OpenRelative opens a relative path from the given root, failing if
+// any of the intermediate path components are reparse points.
+func OpenRelative(path string, root *os.File, accessMask uint32, shareFlags uint32, createDisposition uint32, flags uint32) (*os.File, error) {
+ f, err := openRelativeInternal(path, root, accessMask, shareFlags, createDisposition, flags)
+ if err != nil {
+ err = &os.PathError{Op: "open", Path: filepath.Join(root.Name(), path), Err: err}
+ }
+ return f, err
+}
+
+// LinkRelative creates a hard link from oldname to newname (relative to oldroot
+// and newroot), failing if any of the intermediate path components are reparse
+// points.
+func LinkRelative(oldname string, oldroot *os.File, newname string, newroot *os.File) error {
+ // Open the old file.
+ oldf, err := openRelativeInternal(
+ oldname,
+ oldroot,
+ syscall.FILE_WRITE_ATTRIBUTES,
+ syscall.FILE_SHARE_READ|syscall.FILE_SHARE_WRITE|syscall.FILE_SHARE_DELETE,
+ winapi.FILE_OPEN,
+ 0,
+ )
+ if err != nil {
+ return &os.LinkError{Op: "link", Old: filepath.Join(oldroot.Name(), oldname), New: filepath.Join(newroot.Name(), newname), Err: err}
+ }
+ defer oldf.Close()
+
+ // Open the parent of the new file.
+ var parent *os.File
+ parentPath := filepath.Dir(newname)
+ if parentPath != "." {
+ parent, err = openRelativeInternal(
+ parentPath,
+ newroot,
+ syscall.GENERIC_READ,
+ syscall.FILE_SHARE_READ|syscall.FILE_SHARE_WRITE|syscall.FILE_SHARE_DELETE,
+ winapi.FILE_OPEN,
+ winapi.FILE_DIRECTORY_FILE)
+ if err != nil {
+ return &os.LinkError{Op: "link", Old: oldf.Name(), New: filepath.Join(newroot.Name(), newname), Err: err}
+ }
+ defer parent.Close()
+
+ fi, err := winio.GetFileBasicInfo(parent)
+ if err != nil {
+ return err
+ }
+ if (fi.FileAttributes & syscall.FILE_ATTRIBUTE_REPARSE_POINT) != 0 {
+ return &os.LinkError{Op: "link", Old: oldf.Name(), New: filepath.Join(newroot.Name(), newname), Err: winapi.RtlNtStatusToDosError(winapi.STATUS_REPARSE_POINT_ENCOUNTERED)}
+ }
+
+ } else {
+ parent = newroot
+ }
+
+ // Issue an NT call to create the link. This will be safe because NT will
+ // not open any more directories to create the link, so it cannot walk any
+ // more reparse points.
+ newbase := filepath.Base(newname)
+ newbase16, err := ntRelativePath(newbase)
+ if err != nil {
+ return err
+ }
+
+ size := int(unsafe.Offsetof(winapi.FileLinkInformation{}.FileName)) + len(newbase16)*2
+ linkinfoBuffer := winapi.LocalAlloc(0, size)
+ defer winapi.LocalFree(linkinfoBuffer)
+
+ linkinfo := (*winapi.FileLinkInformation)(unsafe.Pointer(linkinfoBuffer))
+ linkinfo.RootDirectory = parent.Fd()
+ linkinfo.FileNameLength = uint32(len(newbase16) * 2)
+ copy(winapi.Uint16BufferToSlice(&linkinfo.FileName[0], len(newbase16)), newbase16)
+
+ var iosb winapi.IOStatusBlock
+ status := winapi.NtSetInformationFile(
+ oldf.Fd(),
+ &iosb,
+ linkinfoBuffer,
+ uint32(size),
+ winapi.FileLinkInformationClass,
+ )
+ if status != 0 {
+ return &os.LinkError{Op: "link", Old: oldf.Name(), New: filepath.Join(parent.Name(), newbase), Err: winapi.RtlNtStatusToDosError(status)}
+ }
+
+ return nil
+}
+
+// deleteOnClose marks a file to be deleted when the handle is closed.
+func deleteOnClose(f *os.File) error {
+ disposition := winapi.FileDispositionInformationEx{Flags: winapi.FILE_DISPOSITION_DELETE}
+ var iosb winapi.IOStatusBlock
+ status := winapi.NtSetInformationFile(
+ f.Fd(),
+ &iosb,
+ uintptr(unsafe.Pointer(&disposition)),
+ uint32(unsafe.Sizeof(disposition)),
+ winapi.FileDispositionInformationExClass,
+ )
+ if status != 0 {
+ return winapi.RtlNtStatusToDosError(status)
+ }
+ return nil
+}
+
+// clearReadOnly clears the readonly attribute on a file.
+func clearReadOnly(f *os.File) error {
+ bi, err := winio.GetFileBasicInfo(f)
+ if err != nil {
+ return err
+ }
+ if bi.FileAttributes&syscall.FILE_ATTRIBUTE_READONLY == 0 {
+ return nil
+ }
+ sbi := winio.FileBasicInfo{
+ FileAttributes: bi.FileAttributes &^ syscall.FILE_ATTRIBUTE_READONLY,
+ }
+ if sbi.FileAttributes == 0 {
+ sbi.FileAttributes = syscall.FILE_ATTRIBUTE_NORMAL
+ }
+ return winio.SetFileBasicInfo(f, &sbi)
+}
+
+// RemoveRelative removes a file or directory relative to a root, failing if any
+// intermediate path components are reparse points.
+func RemoveRelative(path string, root *os.File) error {
+ f, err := openRelativeInternal(
+ path,
+ root,
+ winapi.FILE_READ_ATTRIBUTES|winapi.FILE_WRITE_ATTRIBUTES|winapi.DELETE,
+ syscall.FILE_SHARE_READ|syscall.FILE_SHARE_WRITE|syscall.FILE_SHARE_DELETE,
+ winapi.FILE_OPEN,
+ winapi.FILE_OPEN_REPARSE_POINT)
+ if err == nil {
+ defer f.Close()
+ err = deleteOnClose(f)
+ if err == syscall.ERROR_ACCESS_DENIED {
+ // Maybe the file is marked readonly. Clear the bit and retry.
+ _ = clearReadOnly(f)
+ err = deleteOnClose(f)
+ }
+ }
+ if err != nil {
+ return &os.PathError{Op: "remove", Path: filepath.Join(root.Name(), path), Err: err}
+ }
+ return nil
+}
+
+// RemoveAllRelative removes a directory tree relative to a root, failing if any
+// intermediate path components are reparse points.
+func RemoveAllRelative(path string, root *os.File) error {
+ fi, err := LstatRelative(path, root)
+ if err != nil {
+ if os.IsNotExist(err) {
+ return nil
+ }
+ return err
+ }
+ fileAttributes := fi.Sys().(*syscall.Win32FileAttributeData).FileAttributes
+ if fileAttributes&syscall.FILE_ATTRIBUTE_DIRECTORY == 0 || fileAttributes&syscall.FILE_ATTRIBUTE_REPARSE_POINT != 0 {
+ // If this is a reparse point, it can't have children. Simple remove will do.
+ err := RemoveRelative(path, root)
+ if err == nil || os.IsNotExist(err) {
+ return nil
+ }
+ return err
+ }
+
+ // It is necessary to use os.Open as Readdirnames does not work with
+// OpenRelative. This is safe because the above LstatRelative fails
+ // if the target is outside the root, and we know this is not a
+ // symlink from the above FILE_ATTRIBUTE_REPARSE_POINT check.
+ fd, err := os.Open(filepath.Join(root.Name(), path))
+ if err != nil {
+ if os.IsNotExist(err) {
+ // Race. It was deleted between the Lstat and Open.
+ // Return nil per RemoveAll's docs.
+ return nil
+ }
+ return err
+ }
+
+ // Remove contents & return first error.
+ for {
+ names, err1 := fd.Readdirnames(100)
+ for _, name := range names {
+ err1 := RemoveAllRelative(path+string(os.PathSeparator)+name, root)
+ if err == nil {
+ err = err1
+ }
+ }
+ if err1 == io.EOF {
+ break
+ }
+ // If Readdirnames returned an error, use it.
+ if err == nil {
+ err = err1
+ }
+ if len(names) == 0 {
+ break
+ }
+ }
+ fd.Close()
+
+ // Remove directory.
+ err1 := RemoveRelative(path, root)
+ if err1 == nil || os.IsNotExist(err1) {
+ return nil
+ }
+ if err == nil {
+ err = err1
+ }
+ return err
+}
+
+// MkdirRelative creates a directory relative to a root, failing if any
+// intermediate path components are reparse points.
+func MkdirRelative(path string, root *os.File) error {
+ f, err := openRelativeInternal(
+ path,
+ root,
+ 0,
+ syscall.FILE_SHARE_READ|syscall.FILE_SHARE_WRITE|syscall.FILE_SHARE_DELETE,
+ winapi.FILE_CREATE,
+ winapi.FILE_DIRECTORY_FILE)
+ if err == nil {
+ f.Close()
+ } else {
+ err = &os.PathError{Op: "mkdir", Path: filepath.Join(root.Name(), path), Err: err}
+ }
+ return err
+}
+
+// LstatRelative performs a stat operation on a file relative to a root, failing
+// if any intermediate path components are reparse points.
+func LstatRelative(path string, root *os.File) (os.FileInfo, error) {
+ f, err := openRelativeInternal(
+ path,
+ root,
+ winapi.FILE_READ_ATTRIBUTES,
+ syscall.FILE_SHARE_READ|syscall.FILE_SHARE_WRITE|syscall.FILE_SHARE_DELETE,
+ winapi.FILE_OPEN,
+ winapi.FILE_OPEN_REPARSE_POINT)
+ if err != nil {
+ return nil, &os.PathError{Op: "stat", Path: filepath.Join(root.Name(), path), Err: err}
+ }
+ defer f.Close()
+ return f.Stat()
+}
+
+// EnsureNotReparsePointRelative validates that a given file (relative to a
+// root) and all intermediate path components are not a reparse points.
+func EnsureNotReparsePointRelative(path string, root *os.File) error {
+ // Perform an open with OBJ_DONT_REPARSE but without specifying FILE_OPEN_REPARSE_POINT.
+ f, err := OpenRelative(
+ path,
+ root,
+ 0,
+ syscall.FILE_SHARE_READ|syscall.FILE_SHARE_WRITE|syscall.FILE_SHARE_DELETE,
+ winapi.FILE_OPEN,
+ 0)
+ if err != nil {
+ return err
+ }
+ f.Close()
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/timeout/timeout.go b/vendor/github.com/Microsoft/hcsshim/internal/timeout/timeout.go
new file mode 100644
index 000000000..eaf39fa51
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/timeout/timeout.go
@@ -0,0 +1,74 @@
+package timeout
+
+import (
+ "os"
+ "strconv"
+ "time"
+)
+
+var (
+ // defaultTimeout is the timeout for most operations that is not overridden.
+ defaultTimeout = 4 * time.Minute
+
+	// defaultTimeoutTestdRetry is the timeout for the testd retry loop that
+	// waits for a disk to come online in LCOW.
+ defaultTimeoutTestdRetry = 5 * time.Second
+)
+
+// External variables for HCSShim consumers to use.
+var (
+ // SystemCreate is the timeout for creating a compute system
+ SystemCreate time.Duration = defaultTimeout
+
+ // SystemStart is the timeout for starting a compute system
+ SystemStart time.Duration = defaultTimeout
+
+ // SystemPause is the timeout for pausing a compute system
+ SystemPause time.Duration = defaultTimeout
+
+ // SystemResume is the timeout for resuming a compute system
+ SystemResume time.Duration = defaultTimeout
+
+ // SystemSave is the timeout for saving a compute system
+ SystemSave time.Duration = defaultTimeout
+
+ // SyscallWatcher is the timeout before warning of a potential stuck platform syscall.
+ SyscallWatcher time.Duration = defaultTimeout
+
+ // Tar2VHD is the timeout for the tar2vhd operation to complete
+ Tar2VHD time.Duration = defaultTimeout
+
+ // ExternalCommandToStart is the timeout for external commands to start
+ ExternalCommandToStart = defaultTimeout
+
+ // ExternalCommandToComplete is the timeout for external commands to complete.
+ // Generally this means copying data from their stdio pipes.
+ ExternalCommandToComplete = defaultTimeout
+
+ // TestDRetryLoop is the timeout for testd retry loop when onlining a SCSI disk in LCOW
+ TestDRetryLoop = defaultTimeoutTestdRetry
+)
+
+func init() {
+ SystemCreate = durationFromEnvironment("HCSSHIM_TIMEOUT_SYSTEMCREATE", SystemCreate)
+ SystemStart = durationFromEnvironment("HCSSHIM_TIMEOUT_SYSTEMSTART", SystemStart)
+ SystemPause = durationFromEnvironment("HCSSHIM_TIMEOUT_SYSTEMPAUSE", SystemPause)
+ SystemResume = durationFromEnvironment("HCSSHIM_TIMEOUT_SYSTEMRESUME", SystemResume)
+ SystemSave = durationFromEnvironment("HCSSHIM_TIMEOUT_SYSTEMSAVE", SystemSave)
+ SyscallWatcher = durationFromEnvironment("HCSSHIM_TIMEOUT_SYSCALLWATCHER", SyscallWatcher)
+ Tar2VHD = durationFromEnvironment("HCSSHIM_TIMEOUT_TAR2VHD", Tar2VHD)
+ ExternalCommandToStart = durationFromEnvironment("HCSSHIM_TIMEOUT_EXTERNALCOMMANDSTART", ExternalCommandToStart)
+ ExternalCommandToComplete = durationFromEnvironment("HCSSHIM_TIMEOUT_EXTERNALCOMMANDCOMPLETE", ExternalCommandToComplete)
+ TestDRetryLoop = durationFromEnvironment("HCSSHIM_TIMEOUT_TESTDRETRYLOOP", TestDRetryLoop)
+}
+
+func durationFromEnvironment(env string, defaultValue time.Duration) time.Duration {
+ envTimeout := os.Getenv(env)
+ if len(envTimeout) > 0 {
+ e, err := strconv.Atoi(envTimeout)
+ if err == nil && e > 0 {
+ return time.Second * time.Duration(e)
+ }
+ }
+ return defaultValue
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/vmcompute/vmcompute.go b/vendor/github.com/Microsoft/hcsshim/internal/vmcompute/vmcompute.go
new file mode 100644
index 000000000..e7f114b67
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/vmcompute/vmcompute.go
@@ -0,0 +1,610 @@
+package vmcompute
+
+import (
+ gcontext "context"
+ "syscall"
+ "time"
+
+ "github.com/Microsoft/hcsshim/internal/interop"
+ "github.com/Microsoft/hcsshim/internal/log"
+ "github.com/Microsoft/hcsshim/internal/logfields"
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "github.com/Microsoft/hcsshim/internal/timeout"
+ "go.opencensus.io/trace"
+)
+
+//go:generate go run ../../mksyscall_windows.go -output zsyscall_windows.go vmcompute.go
+
+//sys hcsEnumerateComputeSystems(query string, computeSystems **uint16, result **uint16) (hr error) = vmcompute.HcsEnumerateComputeSystems?
+//sys hcsCreateComputeSystem(id string, configuration string, identity syscall.Handle, computeSystem *HcsSystem, result **uint16) (hr error) = vmcompute.HcsCreateComputeSystem?
+//sys hcsOpenComputeSystem(id string, computeSystem *HcsSystem, result **uint16) (hr error) = vmcompute.HcsOpenComputeSystem?
+//sys hcsCloseComputeSystem(computeSystem HcsSystem) (hr error) = vmcompute.HcsCloseComputeSystem?
+//sys hcsStartComputeSystem(computeSystem HcsSystem, options string, result **uint16) (hr error) = vmcompute.HcsStartComputeSystem?
+//sys hcsShutdownComputeSystem(computeSystem HcsSystem, options string, result **uint16) (hr error) = vmcompute.HcsShutdownComputeSystem?
+//sys hcsTerminateComputeSystem(computeSystem HcsSystem, options string, result **uint16) (hr error) = vmcompute.HcsTerminateComputeSystem?
+//sys hcsPauseComputeSystem(computeSystem HcsSystem, options string, result **uint16) (hr error) = vmcompute.HcsPauseComputeSystem?
+//sys hcsResumeComputeSystem(computeSystem HcsSystem, options string, result **uint16) (hr error) = vmcompute.HcsResumeComputeSystem?
+//sys hcsGetComputeSystemProperties(computeSystem HcsSystem, propertyQuery string, properties **uint16, result **uint16) (hr error) = vmcompute.HcsGetComputeSystemProperties?
+//sys hcsModifyComputeSystem(computeSystem HcsSystem, configuration string, result **uint16) (hr error) = vmcompute.HcsModifyComputeSystem?
+//sys hcsModifyServiceSettings(settings string, result **uint16) (hr error) = vmcompute.HcsModifyServiceSettings?
+//sys hcsRegisterComputeSystemCallback(computeSystem HcsSystem, callback uintptr, context uintptr, callbackHandle *HcsCallback) (hr error) = vmcompute.HcsRegisterComputeSystemCallback?
+//sys hcsUnregisterComputeSystemCallback(callbackHandle HcsCallback) (hr error) = vmcompute.HcsUnregisterComputeSystemCallback?
+//sys hcsSaveComputeSystem(computeSystem HcsSystem, options string, result **uint16) (hr error) = vmcompute.HcsSaveComputeSystem?
+
+//sys hcsCreateProcess(computeSystem HcsSystem, processParameters string, processInformation *HcsProcessInformation, process *HcsProcess, result **uint16) (hr error) = vmcompute.HcsCreateProcess?
+//sys hcsOpenProcess(computeSystem HcsSystem, pid uint32, process *HcsProcess, result **uint16) (hr error) = vmcompute.HcsOpenProcess?
+//sys hcsCloseProcess(process HcsProcess) (hr error) = vmcompute.HcsCloseProcess?
+//sys hcsTerminateProcess(process HcsProcess, result **uint16) (hr error) = vmcompute.HcsTerminateProcess?
+//sys hcsSignalProcess(process HcsProcess, options string, result **uint16) (hr error) = vmcompute.HcsSignalProcess?
+//sys hcsGetProcessInfo(process HcsProcess, processInformation *HcsProcessInformation, result **uint16) (hr error) = vmcompute.HcsGetProcessInfo?
+//sys hcsGetProcessProperties(process HcsProcess, processProperties **uint16, result **uint16) (hr error) = vmcompute.HcsGetProcessProperties?
+//sys hcsModifyProcess(process HcsProcess, settings string, result **uint16) (hr error) = vmcompute.HcsModifyProcess?
+//sys hcsGetServiceProperties(propertyQuery string, properties **uint16, result **uint16) (hr error) = vmcompute.HcsGetServiceProperties?
+//sys hcsRegisterProcessCallback(process HcsProcess, callback uintptr, context uintptr, callbackHandle *HcsCallback) (hr error) = vmcompute.HcsRegisterProcessCallback?
+//sys hcsUnregisterProcessCallback(callbackHandle HcsCallback) (hr error) = vmcompute.HcsUnregisterProcessCallback?
+
+// errVmcomputeOperationPending is an error encountered when the operation is being completed asynchronously
+const errVmcomputeOperationPending = syscall.Errno(0xC0370103)
+
+// HcsSystem is the handle associated with a created compute system.
+type HcsSystem syscall.Handle
+
+// HcsProcess is the handle associated with a created process in a compute
+// system.
+type HcsProcess syscall.Handle
+
+// HcsCallback is the handle associated with the function to call when events
+// occur.
+type HcsCallback syscall.Handle
+
+// HcsProcessInformation is the structure used when creating or getting process
+// info.
+type HcsProcessInformation struct {
+ // ProcessId is the pid of the created process.
+ ProcessId uint32
+ reserved uint32 //nolint:structcheck
+ // StdInput is the handle associated with the stdin of the process.
+ StdInput syscall.Handle
+ // StdOutput is the handle associated with the stdout of the process.
+ StdOutput syscall.Handle
+ // StdError is the handle associated with the stderr of the process.
+ StdError syscall.Handle
+}
+
+func execute(ctx gcontext.Context, timeout time.Duration, f func() error) error {
+ if timeout > 0 {
+ var cancel gcontext.CancelFunc
+ ctx, cancel = gcontext.WithTimeout(ctx, timeout)
+ defer cancel()
+ }
+
+ done := make(chan error, 1)
+ go func() {
+ done <- f()
+ }()
+ select {
+ case <-ctx.Done():
+ if ctx.Err() == gcontext.DeadlineExceeded {
+ log.G(ctx).WithField(logfields.Timeout, timeout).
+ Warning("Syscall did not complete within operation timeout. This may indicate a platform issue. If it appears to be making no forward progress, obtain the stacks and see if there is a syscall stuck in the platform API for a significant length of time.")
+ }
+ return ctx.Err()
+ case err := <-done:
+ return err
+ }
+}
+
+func HcsEnumerateComputeSystems(ctx gcontext.Context, query string) (computeSystems, result string, hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsEnumerateComputeSystems")
+ defer span.End()
+ defer func() {
+ if result != "" {
+ span.AddAttributes(trace.StringAttribute("result", result))
+ }
+ oc.SetSpanStatus(span, hr)
+ }()
+ span.AddAttributes(trace.StringAttribute("query", query))
+
+ return computeSystems, result, execute(ctx, timeout.SyscallWatcher, func() error {
+ var (
+ computeSystemsp *uint16
+ resultp *uint16
+ )
+ err := hcsEnumerateComputeSystems(query, &computeSystemsp, &resultp)
+ if computeSystemsp != nil {
+ computeSystems = interop.ConvertAndFreeCoTaskMemString(computeSystemsp)
+ }
+ if resultp != nil {
+ result = interop.ConvertAndFreeCoTaskMemString(resultp)
+ }
+ return err
+ })
+}
+
+func HcsCreateComputeSystem(ctx gcontext.Context, id string, configuration string, identity syscall.Handle) (computeSystem HcsSystem, result string, hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsCreateComputeSystem")
+ defer span.End()
+ defer func() {
+ if result != "" {
+ span.AddAttributes(trace.StringAttribute("result", result))
+ }
+ if hr != errVmcomputeOperationPending {
+ oc.SetSpanStatus(span, hr)
+ }
+ }()
+ span.AddAttributes(
+ trace.StringAttribute("id", id),
+ trace.StringAttribute("configuration", configuration))
+
+ return computeSystem, result, execute(ctx, timeout.SystemCreate, func() error {
+ var resultp *uint16
+ err := hcsCreateComputeSystem(id, configuration, identity, &computeSystem, &resultp)
+ if resultp != nil {
+ result = interop.ConvertAndFreeCoTaskMemString(resultp)
+ }
+ return err
+ })
+}
+
+func HcsOpenComputeSystem(ctx gcontext.Context, id string) (computeSystem HcsSystem, result string, hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsOpenComputeSystem")
+ defer span.End()
+ defer func() {
+ if result != "" {
+ span.AddAttributes(trace.StringAttribute("result", result))
+ }
+ oc.SetSpanStatus(span, hr)
+ }()
+
+ return computeSystem, result, execute(ctx, timeout.SyscallWatcher, func() error {
+ var resultp *uint16
+ err := hcsOpenComputeSystem(id, &computeSystem, &resultp)
+ if resultp != nil {
+ result = interop.ConvertAndFreeCoTaskMemString(resultp)
+ }
+ return err
+ })
+}
+
+func HcsCloseComputeSystem(ctx gcontext.Context, computeSystem HcsSystem) (hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsCloseComputeSystem")
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, hr) }()
+
+ return execute(ctx, timeout.SyscallWatcher, func() error {
+ return hcsCloseComputeSystem(computeSystem)
+ })
+}
+
+func HcsStartComputeSystem(ctx gcontext.Context, computeSystem HcsSystem, options string) (result string, hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsStartComputeSystem")
+ defer span.End()
+ defer func() {
+ if result != "" {
+ span.AddAttributes(trace.StringAttribute("result", result))
+ }
+ if hr != errVmcomputeOperationPending {
+ oc.SetSpanStatus(span, hr)
+ }
+ }()
+ span.AddAttributes(trace.StringAttribute("options", options))
+
+ return result, execute(ctx, timeout.SystemStart, func() error {
+ var resultp *uint16
+ err := hcsStartComputeSystem(computeSystem, options, &resultp)
+ if resultp != nil {
+ result = interop.ConvertAndFreeCoTaskMemString(resultp)
+ }
+ return err
+ })
+}
+
+func HcsShutdownComputeSystem(ctx gcontext.Context, computeSystem HcsSystem, options string) (result string, hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsShutdownComputeSystem")
+ defer span.End()
+ defer func() {
+ if result != "" {
+ span.AddAttributes(trace.StringAttribute("result", result))
+ }
+ if hr != errVmcomputeOperationPending {
+ oc.SetSpanStatus(span, hr)
+ }
+ }()
+ span.AddAttributes(trace.StringAttribute("options", options))
+
+ return result, execute(ctx, timeout.SyscallWatcher, func() error {
+ var resultp *uint16
+ err := hcsShutdownComputeSystem(computeSystem, options, &resultp)
+ if resultp != nil {
+ result = interop.ConvertAndFreeCoTaskMemString(resultp)
+ }
+ return err
+ })
+}
+
+func HcsTerminateComputeSystem(ctx gcontext.Context, computeSystem HcsSystem, options string) (result string, hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsTerminateComputeSystem")
+ defer span.End()
+ defer func() {
+ if result != "" {
+ span.AddAttributes(trace.StringAttribute("result", result))
+ }
+ if hr != errVmcomputeOperationPending {
+ oc.SetSpanStatus(span, hr)
+ }
+ }()
+ span.AddAttributes(trace.StringAttribute("options", options))
+
+ return result, execute(ctx, timeout.SyscallWatcher, func() error {
+ var resultp *uint16
+ err := hcsTerminateComputeSystem(computeSystem, options, &resultp)
+ if resultp != nil {
+ result = interop.ConvertAndFreeCoTaskMemString(resultp)
+ }
+ return err
+ })
+}
+
+func HcsPauseComputeSystem(ctx gcontext.Context, computeSystem HcsSystem, options string) (result string, hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsPauseComputeSystem")
+ defer span.End()
+ defer func() {
+ if result != "" {
+ span.AddAttributes(trace.StringAttribute("result", result))
+ }
+ if hr != errVmcomputeOperationPending {
+ oc.SetSpanStatus(span, hr)
+ }
+ }()
+ span.AddAttributes(trace.StringAttribute("options", options))
+
+ return result, execute(ctx, timeout.SystemPause, func() error {
+ var resultp *uint16
+ err := hcsPauseComputeSystem(computeSystem, options, &resultp)
+ if resultp != nil {
+ result = interop.ConvertAndFreeCoTaskMemString(resultp)
+ }
+ return err
+ })
+}
+
+func HcsResumeComputeSystem(ctx gcontext.Context, computeSystem HcsSystem, options string) (result string, hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsResumeComputeSystem")
+ defer span.End()
+ defer func() {
+ if result != "" {
+ span.AddAttributes(trace.StringAttribute("result", result))
+ }
+ if hr != errVmcomputeOperationPending {
+ oc.SetSpanStatus(span, hr)
+ }
+ }()
+ span.AddAttributes(trace.StringAttribute("options", options))
+
+ return result, execute(ctx, timeout.SystemResume, func() error {
+ var resultp *uint16
+ err := hcsResumeComputeSystem(computeSystem, options, &resultp)
+ if resultp != nil {
+ result = interop.ConvertAndFreeCoTaskMemString(resultp)
+ }
+ return err
+ })
+}
+
+func HcsGetComputeSystemProperties(ctx gcontext.Context, computeSystem HcsSystem, propertyQuery string) (properties, result string, hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsGetComputeSystemProperties")
+ defer span.End()
+ defer func() {
+ if result != "" {
+ span.AddAttributes(trace.StringAttribute("result", result))
+ }
+ oc.SetSpanStatus(span, hr)
+ }()
+ span.AddAttributes(trace.StringAttribute("propertyQuery", propertyQuery))
+
+ return properties, result, execute(ctx, timeout.SyscallWatcher, func() error {
+ var (
+ propertiesp *uint16
+ resultp *uint16
+ )
+ err := hcsGetComputeSystemProperties(computeSystem, propertyQuery, &propertiesp, &resultp)
+ if propertiesp != nil {
+ properties = interop.ConvertAndFreeCoTaskMemString(propertiesp)
+ }
+ if resultp != nil {
+ result = interop.ConvertAndFreeCoTaskMemString(resultp)
+ }
+ return err
+ })
+}
+
+func HcsModifyComputeSystem(ctx gcontext.Context, computeSystem HcsSystem, configuration string) (result string, hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsModifyComputeSystem")
+ defer span.End()
+ defer func() {
+ if result != "" {
+ span.AddAttributes(trace.StringAttribute("result", result))
+ }
+ oc.SetSpanStatus(span, hr)
+ }()
+ span.AddAttributes(trace.StringAttribute("configuration", configuration))
+
+ return result, execute(ctx, timeout.SyscallWatcher, func() error {
+ var resultp *uint16
+ err := hcsModifyComputeSystem(computeSystem, configuration, &resultp)
+ if resultp != nil {
+ result = interop.ConvertAndFreeCoTaskMemString(resultp)
+ }
+ return err
+ })
+}
+
+func HcsModifyServiceSettings(ctx gcontext.Context, settings string) (result string, hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsModifyServiceSettings")
+ defer span.End()
+ defer func() {
+ if result != "" {
+ span.AddAttributes(trace.StringAttribute("result", result))
+ }
+ oc.SetSpanStatus(span, hr)
+ }()
+ span.AddAttributes(trace.StringAttribute("settings", settings))
+
+ return result, execute(ctx, timeout.SyscallWatcher, func() error {
+ var resultp *uint16
+ err := hcsModifyServiceSettings(settings, &resultp)
+ if resultp != nil {
+ result = interop.ConvertAndFreeCoTaskMemString(resultp)
+ }
+ return err
+ })
+}
+
+func HcsRegisterComputeSystemCallback(ctx gcontext.Context, computeSystem HcsSystem, callback uintptr, context uintptr) (callbackHandle HcsCallback, hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsRegisterComputeSystemCallback")
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, hr) }()
+
+ return callbackHandle, execute(ctx, timeout.SyscallWatcher, func() error {
+ return hcsRegisterComputeSystemCallback(computeSystem, callback, context, &callbackHandle)
+ })
+}
+
+func HcsUnregisterComputeSystemCallback(ctx gcontext.Context, callbackHandle HcsCallback) (hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsUnregisterComputeSystemCallback")
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, hr) }()
+
+ return execute(ctx, timeout.SyscallWatcher, func() error {
+ return hcsUnregisterComputeSystemCallback(callbackHandle)
+ })
+}
+
+func HcsCreateProcess(ctx gcontext.Context, computeSystem HcsSystem, processParameters string) (processInformation HcsProcessInformation, process HcsProcess, result string, hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsCreateProcess")
+ defer span.End()
+ defer func() {
+ if result != "" {
+ span.AddAttributes(trace.StringAttribute("result", result))
+ }
+ oc.SetSpanStatus(span, hr)
+ }()
+ span.AddAttributes(trace.StringAttribute("processParameters", processParameters))
+
+ return processInformation, process, result, execute(ctx, timeout.SyscallWatcher, func() error {
+ var resultp *uint16
+ err := hcsCreateProcess(computeSystem, processParameters, &processInformation, &process, &resultp)
+ if resultp != nil {
+ result = interop.ConvertAndFreeCoTaskMemString(resultp)
+ }
+ return err
+ })
+}
+
+func HcsOpenProcess(ctx gcontext.Context, computeSystem HcsSystem, pid uint32) (process HcsProcess, result string, hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsOpenProcess")
+ defer span.End()
+ defer func() {
+ if result != "" {
+ span.AddAttributes(trace.StringAttribute("result", result))
+ }
+ oc.SetSpanStatus(span, hr)
+ }()
+ span.AddAttributes(trace.Int64Attribute("pid", int64(pid)))
+
+ return process, result, execute(ctx, timeout.SyscallWatcher, func() error {
+ var resultp *uint16
+ err := hcsOpenProcess(computeSystem, pid, &process, &resultp)
+ if resultp != nil {
+ result = interop.ConvertAndFreeCoTaskMemString(resultp)
+ }
+ return err
+ })
+}
+
+func HcsCloseProcess(ctx gcontext.Context, process HcsProcess) (hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsCloseProcess")
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, hr) }()
+
+ return execute(ctx, timeout.SyscallWatcher, func() error {
+ return hcsCloseProcess(process)
+ })
+}
+
+func HcsTerminateProcess(ctx gcontext.Context, process HcsProcess) (result string, hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsTerminateProcess")
+ defer span.End()
+ defer func() {
+ if result != "" {
+ span.AddAttributes(trace.StringAttribute("result", result))
+ }
+ oc.SetSpanStatus(span, hr)
+ }()
+
+ return result, execute(ctx, timeout.SyscallWatcher, func() error {
+ var resultp *uint16
+ err := hcsTerminateProcess(process, &resultp)
+ if resultp != nil {
+ result = interop.ConvertAndFreeCoTaskMemString(resultp)
+ }
+ return err
+ })
+}
+
+func HcsSignalProcess(ctx gcontext.Context, process HcsProcess, options string) (result string, hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsSignalProcess")
+ defer span.End()
+ defer func() {
+ if result != "" {
+ span.AddAttributes(trace.StringAttribute("result", result))
+ }
+ oc.SetSpanStatus(span, hr)
+ }()
+ span.AddAttributes(trace.StringAttribute("options", options))
+
+ return result, execute(ctx, timeout.SyscallWatcher, func() error {
+ var resultp *uint16
+ err := hcsSignalProcess(process, options, &resultp)
+ if resultp != nil {
+ result = interop.ConvertAndFreeCoTaskMemString(resultp)
+ }
+ return err
+ })
+}
+
+func HcsGetProcessInfo(ctx gcontext.Context, process HcsProcess) (processInformation HcsProcessInformation, result string, hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsGetProcessInfo")
+ defer span.End()
+ defer func() {
+ if result != "" {
+ span.AddAttributes(trace.StringAttribute("result", result))
+ }
+ oc.SetSpanStatus(span, hr)
+ }()
+
+ return processInformation, result, execute(ctx, timeout.SyscallWatcher, func() error {
+ var resultp *uint16
+ err := hcsGetProcessInfo(process, &processInformation, &resultp)
+ if resultp != nil {
+ result = interop.ConvertAndFreeCoTaskMemString(resultp)
+ }
+ return err
+ })
+}
+
+func HcsGetProcessProperties(ctx gcontext.Context, process HcsProcess) (processProperties, result string, hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsGetProcessProperties")
+ defer span.End()
+ defer func() {
+ if result != "" {
+ span.AddAttributes(trace.StringAttribute("result", result))
+ }
+ oc.SetSpanStatus(span, hr)
+ }()
+
+ return processProperties, result, execute(ctx, timeout.SyscallWatcher, func() error {
+ var (
+ processPropertiesp *uint16
+ resultp *uint16
+ )
+ err := hcsGetProcessProperties(process, &processPropertiesp, &resultp)
+ if processPropertiesp != nil {
+ processProperties = interop.ConvertAndFreeCoTaskMemString(processPropertiesp)
+ }
+ if resultp != nil {
+ result = interop.ConvertAndFreeCoTaskMemString(resultp)
+ }
+ return err
+ })
+}
+
+func HcsModifyProcess(ctx gcontext.Context, process HcsProcess, settings string) (result string, hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsModifyProcess")
+ defer span.End()
+ defer func() {
+ if result != "" {
+ span.AddAttributes(trace.StringAttribute("result", result))
+ }
+ oc.SetSpanStatus(span, hr)
+ }()
+ span.AddAttributes(trace.StringAttribute("settings", settings))
+
+ return result, execute(ctx, timeout.SyscallWatcher, func() error {
+ var resultp *uint16
+ err := hcsModifyProcess(process, settings, &resultp)
+ if resultp != nil {
+ result = interop.ConvertAndFreeCoTaskMemString(resultp)
+ }
+ return err
+ })
+}
+
+func HcsGetServiceProperties(ctx gcontext.Context, propertyQuery string) (properties, result string, hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsGetServiceProperties")
+ defer span.End()
+ defer func() {
+ if result != "" {
+ span.AddAttributes(trace.StringAttribute("result", result))
+ }
+ oc.SetSpanStatus(span, hr)
+ }()
+ span.AddAttributes(trace.StringAttribute("propertyQuery", propertyQuery))
+
+ return properties, result, execute(ctx, timeout.SyscallWatcher, func() error {
+ var (
+ propertiesp *uint16
+ resultp *uint16
+ )
+ err := hcsGetServiceProperties(propertyQuery, &propertiesp, &resultp)
+ if propertiesp != nil {
+ properties = interop.ConvertAndFreeCoTaskMemString(propertiesp)
+ }
+ if resultp != nil {
+ result = interop.ConvertAndFreeCoTaskMemString(resultp)
+ }
+ return err
+ })
+}
+
+func HcsRegisterProcessCallback(ctx gcontext.Context, process HcsProcess, callback uintptr, context uintptr) (callbackHandle HcsCallback, hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsRegisterProcessCallback")
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, hr) }()
+
+ return callbackHandle, execute(ctx, timeout.SyscallWatcher, func() error {
+ return hcsRegisterProcessCallback(process, callback, context, &callbackHandle)
+ })
+}
+
+func HcsUnregisterProcessCallback(ctx gcontext.Context, callbackHandle HcsCallback) (hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsUnregisterProcessCallback")
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, hr) }()
+
+ return execute(ctx, timeout.SyscallWatcher, func() error {
+ return hcsUnregisterProcessCallback(callbackHandle)
+ })
+}
+
+func HcsSaveComputeSystem(ctx gcontext.Context, computeSystem HcsSystem, options string) (result string, hr error) {
+ ctx, span := trace.StartSpan(ctx, "HcsSaveComputeSystem")
+ defer span.End()
+ defer func() {
+ if result != "" {
+ span.AddAttributes(trace.StringAttribute("result", result))
+ }
+ if hr != errVmcomputeOperationPending {
+ oc.SetSpanStatus(span, hr)
+ }
+ }()
+
+ return result, execute(ctx, timeout.SyscallWatcher, func() error {
+ var resultp *uint16
+ err := hcsSaveComputeSystem(computeSystem, options, &resultp)
+ if resultp != nil {
+ result = interop.ConvertAndFreeCoTaskMemString(resultp)
+ }
+ return err
+ })
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/vmcompute/zsyscall_windows.go b/vendor/github.com/Microsoft/hcsshim/internal/vmcompute/zsyscall_windows.go
new file mode 100644
index 000000000..cae55058d
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/vmcompute/zsyscall_windows.go
@@ -0,0 +1,581 @@
+// Code generated mksyscall_windows.exe DO NOT EDIT
+
+package vmcompute
+
+import (
+ "syscall"
+ "unsafe"
+
+ "golang.org/x/sys/windows"
+)
+
+var _ unsafe.Pointer
+
+// Do the interface allocations only once for common
+// Errno values.
+const (
+ errnoERROR_IO_PENDING = 997
+)
+
+var (
+ errERROR_IO_PENDING error = syscall.Errno(errnoERROR_IO_PENDING)
+)
+
+// errnoErr returns common boxed Errno values, to prevent
+// allocations at runtime.
+func errnoErr(e syscall.Errno) error {
+ switch e {
+ case 0:
+ return nil
+ case errnoERROR_IO_PENDING:
+ return errERROR_IO_PENDING
+ }
+	// TODO: add more here, after collecting data on the common
+	// error values seen on Windows. (perhaps when running
+	// all.bat?)
+ return e
+}
+
+var (
+ modvmcompute = windows.NewLazySystemDLL("vmcompute.dll")
+
+ procHcsEnumerateComputeSystems = modvmcompute.NewProc("HcsEnumerateComputeSystems")
+ procHcsCreateComputeSystem = modvmcompute.NewProc("HcsCreateComputeSystem")
+ procHcsOpenComputeSystem = modvmcompute.NewProc("HcsOpenComputeSystem")
+ procHcsCloseComputeSystem = modvmcompute.NewProc("HcsCloseComputeSystem")
+ procHcsStartComputeSystem = modvmcompute.NewProc("HcsStartComputeSystem")
+ procHcsShutdownComputeSystem = modvmcompute.NewProc("HcsShutdownComputeSystem")
+ procHcsTerminateComputeSystem = modvmcompute.NewProc("HcsTerminateComputeSystem")
+ procHcsPauseComputeSystem = modvmcompute.NewProc("HcsPauseComputeSystem")
+ procHcsResumeComputeSystem = modvmcompute.NewProc("HcsResumeComputeSystem")
+ procHcsGetComputeSystemProperties = modvmcompute.NewProc("HcsGetComputeSystemProperties")
+ procHcsModifyComputeSystem = modvmcompute.NewProc("HcsModifyComputeSystem")
+ procHcsModifyServiceSettings = modvmcompute.NewProc("HcsModifyServiceSettings")
+ procHcsRegisterComputeSystemCallback = modvmcompute.NewProc("HcsRegisterComputeSystemCallback")
+ procHcsUnregisterComputeSystemCallback = modvmcompute.NewProc("HcsUnregisterComputeSystemCallback")
+ procHcsSaveComputeSystem = modvmcompute.NewProc("HcsSaveComputeSystem")
+ procHcsCreateProcess = modvmcompute.NewProc("HcsCreateProcess")
+ procHcsOpenProcess = modvmcompute.NewProc("HcsOpenProcess")
+ procHcsCloseProcess = modvmcompute.NewProc("HcsCloseProcess")
+ procHcsTerminateProcess = modvmcompute.NewProc("HcsTerminateProcess")
+ procHcsSignalProcess = modvmcompute.NewProc("HcsSignalProcess")
+ procHcsGetProcessInfo = modvmcompute.NewProc("HcsGetProcessInfo")
+ procHcsGetProcessProperties = modvmcompute.NewProc("HcsGetProcessProperties")
+ procHcsModifyProcess = modvmcompute.NewProc("HcsModifyProcess")
+ procHcsGetServiceProperties = modvmcompute.NewProc("HcsGetServiceProperties")
+ procHcsRegisterProcessCallback = modvmcompute.NewProc("HcsRegisterProcessCallback")
+ procHcsUnregisterProcessCallback = modvmcompute.NewProc("HcsUnregisterProcessCallback")
+)
+
+func hcsEnumerateComputeSystems(query string, computeSystems **uint16, result **uint16) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(query)
+ if hr != nil {
+ return
+ }
+ return _hcsEnumerateComputeSystems(_p0, computeSystems, result)
+}
+
+func _hcsEnumerateComputeSystems(query *uint16, computeSystems **uint16, result **uint16) (hr error) {
+ if hr = procHcsEnumerateComputeSystems.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsEnumerateComputeSystems.Addr(), 3, uintptr(unsafe.Pointer(query)), uintptr(unsafe.Pointer(computeSystems)), uintptr(unsafe.Pointer(result)))
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsCreateComputeSystem(id string, configuration string, identity syscall.Handle, computeSystem *HcsSystem, result **uint16) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(id)
+ if hr != nil {
+ return
+ }
+ var _p1 *uint16
+ _p1, hr = syscall.UTF16PtrFromString(configuration)
+ if hr != nil {
+ return
+ }
+ return _hcsCreateComputeSystem(_p0, _p1, identity, computeSystem, result)
+}
+
+func _hcsCreateComputeSystem(id *uint16, configuration *uint16, identity syscall.Handle, computeSystem *HcsSystem, result **uint16) (hr error) {
+ if hr = procHcsCreateComputeSystem.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall6(procHcsCreateComputeSystem.Addr(), 5, uintptr(unsafe.Pointer(id)), uintptr(unsafe.Pointer(configuration)), uintptr(identity), uintptr(unsafe.Pointer(computeSystem)), uintptr(unsafe.Pointer(result)), 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsOpenComputeSystem(id string, computeSystem *HcsSystem, result **uint16) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(id)
+ if hr != nil {
+ return
+ }
+ return _hcsOpenComputeSystem(_p0, computeSystem, result)
+}
+
+func _hcsOpenComputeSystem(id *uint16, computeSystem *HcsSystem, result **uint16) (hr error) {
+ if hr = procHcsOpenComputeSystem.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsOpenComputeSystem.Addr(), 3, uintptr(unsafe.Pointer(id)), uintptr(unsafe.Pointer(computeSystem)), uintptr(unsafe.Pointer(result)))
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsCloseComputeSystem(computeSystem HcsSystem) (hr error) {
+ if hr = procHcsCloseComputeSystem.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsCloseComputeSystem.Addr(), 1, uintptr(computeSystem), 0, 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsStartComputeSystem(computeSystem HcsSystem, options string, result **uint16) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(options)
+ if hr != nil {
+ return
+ }
+ return _hcsStartComputeSystem(computeSystem, _p0, result)
+}
+
+func _hcsStartComputeSystem(computeSystem HcsSystem, options *uint16, result **uint16) (hr error) {
+ if hr = procHcsStartComputeSystem.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsStartComputeSystem.Addr(), 3, uintptr(computeSystem), uintptr(unsafe.Pointer(options)), uintptr(unsafe.Pointer(result)))
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsShutdownComputeSystem(computeSystem HcsSystem, options string, result **uint16) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(options)
+ if hr != nil {
+ return
+ }
+ return _hcsShutdownComputeSystem(computeSystem, _p0, result)
+}
+
+func _hcsShutdownComputeSystem(computeSystem HcsSystem, options *uint16, result **uint16) (hr error) {
+ if hr = procHcsShutdownComputeSystem.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsShutdownComputeSystem.Addr(), 3, uintptr(computeSystem), uintptr(unsafe.Pointer(options)), uintptr(unsafe.Pointer(result)))
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsTerminateComputeSystem(computeSystem HcsSystem, options string, result **uint16) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(options)
+ if hr != nil {
+ return
+ }
+ return _hcsTerminateComputeSystem(computeSystem, _p0, result)
+}
+
+func _hcsTerminateComputeSystem(computeSystem HcsSystem, options *uint16, result **uint16) (hr error) {
+ if hr = procHcsTerminateComputeSystem.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsTerminateComputeSystem.Addr(), 3, uintptr(computeSystem), uintptr(unsafe.Pointer(options)), uintptr(unsafe.Pointer(result)))
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsPauseComputeSystem(computeSystem HcsSystem, options string, result **uint16) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(options)
+ if hr != nil {
+ return
+ }
+ return _hcsPauseComputeSystem(computeSystem, _p0, result)
+}
+
+func _hcsPauseComputeSystem(computeSystem HcsSystem, options *uint16, result **uint16) (hr error) {
+ if hr = procHcsPauseComputeSystem.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsPauseComputeSystem.Addr(), 3, uintptr(computeSystem), uintptr(unsafe.Pointer(options)), uintptr(unsafe.Pointer(result)))
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsResumeComputeSystem(computeSystem HcsSystem, options string, result **uint16) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(options)
+ if hr != nil {
+ return
+ }
+ return _hcsResumeComputeSystem(computeSystem, _p0, result)
+}
+
+func _hcsResumeComputeSystem(computeSystem HcsSystem, options *uint16, result **uint16) (hr error) {
+ if hr = procHcsResumeComputeSystem.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsResumeComputeSystem.Addr(), 3, uintptr(computeSystem), uintptr(unsafe.Pointer(options)), uintptr(unsafe.Pointer(result)))
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsGetComputeSystemProperties(computeSystem HcsSystem, propertyQuery string, properties **uint16, result **uint16) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(propertyQuery)
+ if hr != nil {
+ return
+ }
+ return _hcsGetComputeSystemProperties(computeSystem, _p0, properties, result)
+}
+
+func _hcsGetComputeSystemProperties(computeSystem HcsSystem, propertyQuery *uint16, properties **uint16, result **uint16) (hr error) {
+ if hr = procHcsGetComputeSystemProperties.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall6(procHcsGetComputeSystemProperties.Addr(), 4, uintptr(computeSystem), uintptr(unsafe.Pointer(propertyQuery)), uintptr(unsafe.Pointer(properties)), uintptr(unsafe.Pointer(result)), 0, 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsModifyComputeSystem(computeSystem HcsSystem, configuration string, result **uint16) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(configuration)
+ if hr != nil {
+ return
+ }
+ return _hcsModifyComputeSystem(computeSystem, _p0, result)
+}
+
+func _hcsModifyComputeSystem(computeSystem HcsSystem, configuration *uint16, result **uint16) (hr error) {
+ if hr = procHcsModifyComputeSystem.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsModifyComputeSystem.Addr(), 3, uintptr(computeSystem), uintptr(unsafe.Pointer(configuration)), uintptr(unsafe.Pointer(result)))
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsModifyServiceSettings(settings string, result **uint16) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(settings)
+ if hr != nil {
+ return
+ }
+ return _hcsModifyServiceSettings(_p0, result)
+}
+
+func _hcsModifyServiceSettings(settings *uint16, result **uint16) (hr error) {
+ if hr = procHcsModifyServiceSettings.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsModifyServiceSettings.Addr(), 2, uintptr(unsafe.Pointer(settings)), uintptr(unsafe.Pointer(result)), 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsRegisterComputeSystemCallback(computeSystem HcsSystem, callback uintptr, context uintptr, callbackHandle *HcsCallback) (hr error) {
+ if hr = procHcsRegisterComputeSystemCallback.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall6(procHcsRegisterComputeSystemCallback.Addr(), 4, uintptr(computeSystem), uintptr(callback), uintptr(context), uintptr(unsafe.Pointer(callbackHandle)), 0, 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsUnregisterComputeSystemCallback(callbackHandle HcsCallback) (hr error) {
+ if hr = procHcsUnregisterComputeSystemCallback.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsUnregisterComputeSystemCallback.Addr(), 1, uintptr(callbackHandle), 0, 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsSaveComputeSystem(computeSystem HcsSystem, options string, result **uint16) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(options)
+ if hr != nil {
+ return
+ }
+ return _hcsSaveComputeSystem(computeSystem, _p0, result)
+}
+
+func _hcsSaveComputeSystem(computeSystem HcsSystem, options *uint16, result **uint16) (hr error) {
+ if hr = procHcsSaveComputeSystem.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsSaveComputeSystem.Addr(), 3, uintptr(computeSystem), uintptr(unsafe.Pointer(options)), uintptr(unsafe.Pointer(result)))
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsCreateProcess(computeSystem HcsSystem, processParameters string, processInformation *HcsProcessInformation, process *HcsProcess, result **uint16) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(processParameters)
+ if hr != nil {
+ return
+ }
+ return _hcsCreateProcess(computeSystem, _p0, processInformation, process, result)
+}
+
+func _hcsCreateProcess(computeSystem HcsSystem, processParameters *uint16, processInformation *HcsProcessInformation, process *HcsProcess, result **uint16) (hr error) {
+ if hr = procHcsCreateProcess.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall6(procHcsCreateProcess.Addr(), 5, uintptr(computeSystem), uintptr(unsafe.Pointer(processParameters)), uintptr(unsafe.Pointer(processInformation)), uintptr(unsafe.Pointer(process)), uintptr(unsafe.Pointer(result)), 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsOpenProcess(computeSystem HcsSystem, pid uint32, process *HcsProcess, result **uint16) (hr error) {
+ if hr = procHcsOpenProcess.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall6(procHcsOpenProcess.Addr(), 4, uintptr(computeSystem), uintptr(pid), uintptr(unsafe.Pointer(process)), uintptr(unsafe.Pointer(result)), 0, 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsCloseProcess(process HcsProcess) (hr error) {
+ if hr = procHcsCloseProcess.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsCloseProcess.Addr(), 1, uintptr(process), 0, 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsTerminateProcess(process HcsProcess, result **uint16) (hr error) {
+ if hr = procHcsTerminateProcess.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsTerminateProcess.Addr(), 2, uintptr(process), uintptr(unsafe.Pointer(result)), 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsSignalProcess(process HcsProcess, options string, result **uint16) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(options)
+ if hr != nil {
+ return
+ }
+ return _hcsSignalProcess(process, _p0, result)
+}
+
+func _hcsSignalProcess(process HcsProcess, options *uint16, result **uint16) (hr error) {
+ if hr = procHcsSignalProcess.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsSignalProcess.Addr(), 3, uintptr(process), uintptr(unsafe.Pointer(options)), uintptr(unsafe.Pointer(result)))
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsGetProcessInfo(process HcsProcess, processInformation *HcsProcessInformation, result **uint16) (hr error) {
+ if hr = procHcsGetProcessInfo.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsGetProcessInfo.Addr(), 3, uintptr(process), uintptr(unsafe.Pointer(processInformation)), uintptr(unsafe.Pointer(result)))
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsGetProcessProperties(process HcsProcess, processProperties **uint16, result **uint16) (hr error) {
+ if hr = procHcsGetProcessProperties.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsGetProcessProperties.Addr(), 3, uintptr(process), uintptr(unsafe.Pointer(processProperties)), uintptr(unsafe.Pointer(result)))
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsModifyProcess(process HcsProcess, settings string, result **uint16) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(settings)
+ if hr != nil {
+ return
+ }
+ return _hcsModifyProcess(process, _p0, result)
+}
+
+func _hcsModifyProcess(process HcsProcess, settings *uint16, result **uint16) (hr error) {
+ if hr = procHcsModifyProcess.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsModifyProcess.Addr(), 3, uintptr(process), uintptr(unsafe.Pointer(settings)), uintptr(unsafe.Pointer(result)))
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsGetServiceProperties(propertyQuery string, properties **uint16, result **uint16) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(propertyQuery)
+ if hr != nil {
+ return
+ }
+ return _hcsGetServiceProperties(_p0, properties, result)
+}
+
+func _hcsGetServiceProperties(propertyQuery *uint16, properties **uint16, result **uint16) (hr error) {
+ if hr = procHcsGetServiceProperties.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsGetServiceProperties.Addr(), 3, uintptr(unsafe.Pointer(propertyQuery)), uintptr(unsafe.Pointer(properties)), uintptr(unsafe.Pointer(result)))
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsRegisterProcessCallback(process HcsProcess, callback uintptr, context uintptr, callbackHandle *HcsCallback) (hr error) {
+ if hr = procHcsRegisterProcessCallback.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall6(procHcsRegisterProcessCallback.Addr(), 4, uintptr(process), uintptr(callback), uintptr(context), uintptr(unsafe.Pointer(callbackHandle)), 0, 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func hcsUnregisterProcessCallback(callbackHandle HcsCallback) (hr error) {
+ if hr = procHcsUnregisterProcessCallback.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procHcsUnregisterProcessCallback.Addr(), 1, uintptr(callbackHandle), 0, 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/wclayer/activatelayer.go b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/activatelayer.go
new file mode 100644
index 000000000..5debe974d
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/activatelayer.go
@@ -0,0 +1,27 @@
+package wclayer
+
+import (
+ "context"
+
+ "github.com/Microsoft/hcsshim/internal/hcserror"
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "go.opencensus.io/trace"
+)
+
+// ActivateLayer will find the layer with the given id and mount its filesystem.
+// For a read/write layer, the mounted filesystem will appear as a volume on the
+// host, while a read-only layer is generally expected to be a no-op.
+// An activated layer must later be deactivated via DeactivateLayer.
+func ActivateLayer(ctx context.Context, path string) (err error) {
+ title := "hcsshim::ActivateLayer"
+ ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(trace.StringAttribute("path", path))
+
+ err = activateLayer(&stdDriverInfo, path)
+ if err != nil {
+ return hcserror.New(err, title, "")
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/wclayer/baselayer.go b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/baselayer.go
new file mode 100644
index 000000000..3ec708d1e
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/baselayer.go
@@ -0,0 +1,182 @@
+package wclayer
+
+import (
+ "context"
+ "errors"
+ "os"
+ "path/filepath"
+ "syscall"
+
+ "github.com/Microsoft/go-winio"
+ "github.com/Microsoft/hcsshim/internal/hcserror"
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "github.com/Microsoft/hcsshim/internal/safefile"
+ "github.com/Microsoft/hcsshim/internal/winapi"
+ "go.opencensus.io/trace"
+)
+
+type baseLayerWriter struct {
+ ctx context.Context
+ s *trace.Span
+
+ root *os.File
+ f *os.File
+ bw *winio.BackupFileWriter
+ err error
+ hasUtilityVM bool
+ dirInfo []dirInfo
+}
+
+type dirInfo struct {
+ path string
+ fileInfo winio.FileBasicInfo
+}
+
+// reapplyDirectoryTimes reapplies directory modification, creation, etc. times
+// after processing of the directory tree has completed. The times are expected
+// to be ordered such that parent directories come before child directories.
+func reapplyDirectoryTimes(root *os.File, dis []dirInfo) error {
+ for i := range dis {
+ di := &dis[len(dis)-i-1] // reverse order: process child directories first
+ f, err := safefile.OpenRelative(di.path, root, syscall.GENERIC_READ|syscall.GENERIC_WRITE, syscall.FILE_SHARE_READ, winapi.FILE_OPEN, winapi.FILE_DIRECTORY_FILE|syscall.FILE_FLAG_OPEN_REPARSE_POINT)
+ if err != nil {
+ return err
+ }
+
+ err = winio.SetFileBasicInfo(f, &di.fileInfo)
+ f.Close()
+ if err != nil {
+ return err
+ }
+
+ }
+ return nil
+}
+
+func (w *baseLayerWriter) closeCurrentFile() error {
+ if w.f != nil {
+ err := w.bw.Close()
+ err2 := w.f.Close()
+ w.f = nil
+ w.bw = nil
+ if err != nil {
+ return err
+ }
+ if err2 != nil {
+ return err2
+ }
+ }
+ return nil
+}
+
+func (w *baseLayerWriter) Add(name string, fileInfo *winio.FileBasicInfo) (err error) {
+ defer func() {
+ if err != nil {
+ w.err = err
+ }
+ }()
+
+ err = w.closeCurrentFile()
+ if err != nil {
+ return err
+ }
+
+ if filepath.ToSlash(name) == `UtilityVM/Files` {
+ w.hasUtilityVM = true
+ }
+
+ var f *os.File
+ defer func() {
+ if f != nil {
+ f.Close()
+ }
+ }()
+
+ extraFlags := uint32(0)
+ if fileInfo.FileAttributes&syscall.FILE_ATTRIBUTE_DIRECTORY != 0 {
+ extraFlags |= winapi.FILE_DIRECTORY_FILE
+ w.dirInfo = append(w.dirInfo, dirInfo{name, *fileInfo})
+ }
+
+ mode := uint32(syscall.GENERIC_READ | syscall.GENERIC_WRITE | winio.WRITE_DAC | winio.WRITE_OWNER | winio.ACCESS_SYSTEM_SECURITY)
+ f, err = safefile.OpenRelative(name, w.root, mode, syscall.FILE_SHARE_READ, winapi.FILE_CREATE, extraFlags)
+ if err != nil {
+ return hcserror.New(err, "Failed to safefile.OpenRelative", name)
+ }
+
+ err = winio.SetFileBasicInfo(f, fileInfo)
+ if err != nil {
+ return hcserror.New(err, "Failed to SetFileBasicInfo", name)
+ }
+
+ w.f = f
+ w.bw = winio.NewBackupFileWriter(f, true)
+ f = nil
+ return nil
+}
+
+func (w *baseLayerWriter) AddLink(name string, target string) (err error) {
+ defer func() {
+ if err != nil {
+ w.err = err
+ }
+ }()
+
+ err = w.closeCurrentFile()
+ if err != nil {
+ return err
+ }
+
+ return safefile.LinkRelative(target, w.root, name, w.root)
+}
+
+func (w *baseLayerWriter) Remove(name string) error {
+ return errors.New("base layer cannot have tombstones")
+}
+
+func (w *baseLayerWriter) Write(b []byte) (int, error) {
+ n, err := w.bw.Write(b)
+ if err != nil {
+ w.err = err
+ }
+ return n, err
+}
+
+func (w *baseLayerWriter) Close() (err error) {
+ defer w.s.End()
+ defer func() { oc.SetSpanStatus(w.s, err) }()
+ defer func() {
+ w.root.Close()
+ w.root = nil
+ }()
+
+ err = w.closeCurrentFile()
+ if err != nil {
+ return err
+ }
+ if w.err == nil {
+ // Restore the file times of all the directories, since they may have
+ // been modified by creating child directories.
+ err = reapplyDirectoryTimes(w.root, w.dirInfo)
+ if err != nil {
+ return err
+ }
+
+ err = ProcessBaseLayer(w.ctx, w.root.Name())
+ if err != nil {
+ return err
+ }
+
+ if w.hasUtilityVM {
+ err := safefile.EnsureNotReparsePointRelative("UtilityVM", w.root)
+ if err != nil {
+ return err
+ }
+ err = ProcessUtilityVMImage(w.ctx, filepath.Join(w.root.Name(), "UtilityVM"))
+ if err != nil {
+ return err
+ }
+ }
+ }
+ return w.err
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/wclayer/createlayer.go b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/createlayer.go
new file mode 100644
index 000000000..480aee872
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/createlayer.go
@@ -0,0 +1,27 @@
+package wclayer
+
+import (
+ "context"
+
+ "github.com/Microsoft/hcsshim/internal/hcserror"
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "go.opencensus.io/trace"
+)
+
+// CreateLayer creates a new, empty, read-only layer on the filesystem based on
+// the parent layer provided.
+func CreateLayer(ctx context.Context, path, parent string) (err error) {
+ title := "hcsshim::CreateLayer"
+ ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(
+ trace.StringAttribute("path", path),
+ trace.StringAttribute("parent", parent))
+
+ err = createLayer(&stdDriverInfo, path, parent)
+ if err != nil {
+ return hcserror.New(err, title, "")
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/wclayer/createscratchlayer.go b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/createscratchlayer.go
new file mode 100644
index 000000000..131aa94f1
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/createscratchlayer.go
@@ -0,0 +1,34 @@
+package wclayer
+
+import (
+ "context"
+ "strings"
+
+ "github.com/Microsoft/hcsshim/internal/hcserror"
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "go.opencensus.io/trace"
+)
+
+// CreateScratchLayer creates and populates a new read-write layer for use by a container.
+// This requires the full list of paths to all parent layers up to the base layer.
+func CreateScratchLayer(ctx context.Context, path string, parentLayerPaths []string) (err error) {
+ title := "hcsshim::CreateScratchLayer"
+ ctx, span := trace.StartSpan(ctx, title)
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(
+ trace.StringAttribute("path", path),
+ trace.StringAttribute("parentLayerPaths", strings.Join(parentLayerPaths, ", ")))
+
+ // Generate layer descriptors
+ layers, err := layerPathsToDescriptors(ctx, parentLayerPaths)
+ if err != nil {
+ return err
+ }
+
+ err = createSandboxLayer(&stdDriverInfo, path, 0, layers)
+ if err != nil {
+ return hcserror.New(err, title, "")
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/wclayer/deactivatelayer.go b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/deactivatelayer.go
new file mode 100644
index 000000000..d5bf2f5bd
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/deactivatelayer.go
@@ -0,0 +1,24 @@
+package wclayer
+
+import (
+ "context"
+
+ "github.com/Microsoft/hcsshim/internal/hcserror"
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "go.opencensus.io/trace"
+)
+
+// DeactivateLayer will dismount a layer that was mounted via ActivateLayer.
+func DeactivateLayer(ctx context.Context, path string) (err error) {
+ title := "hcsshim::DeactivateLayer"
+ ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(trace.StringAttribute("path", path))
+
+ err = deactivateLayer(&stdDriverInfo, path)
+ if err != nil {
+ return hcserror.New(err, title+"- failed", "")
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/wclayer/destroylayer.go b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/destroylayer.go
new file mode 100644
index 000000000..424467ac3
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/destroylayer.go
@@ -0,0 +1,25 @@
+package wclayer
+
+import (
+ "context"
+
+ "github.com/Microsoft/hcsshim/internal/hcserror"
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "go.opencensus.io/trace"
+)
+
+// DestroyLayer will remove the on-disk files representing the layer with the given
+// path, including that layer's containing folder, if any.
+func DestroyLayer(ctx context.Context, path string) (err error) {
+ title := "hcsshim::DestroyLayer"
+ ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(trace.StringAttribute("path", path))
+
+ err = destroyLayer(&stdDriverInfo, path)
+ if err != nil {
+ return hcserror.New(err, title, "")
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/wclayer/expandscratchsize.go b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/expandscratchsize.go
new file mode 100644
index 000000000..035c9041e
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/expandscratchsize.go
@@ -0,0 +1,140 @@
+package wclayer
+
+import (
+ "context"
+ "os"
+ "path/filepath"
+ "syscall"
+ "unsafe"
+
+ "github.com/Microsoft/hcsshim/internal/hcserror"
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "github.com/Microsoft/hcsshim/osversion"
+ "go.opencensus.io/trace"
+)
+
+// ExpandScratchSize expands the size of a layer to at least size bytes.
+func ExpandScratchSize(ctx context.Context, path string, size uint64) (err error) {
+ title := "hcsshim::ExpandScratchSize"
+ ctx, span := trace.StartSpan(ctx, title)
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(
+ trace.StringAttribute("path", path),
+ trace.Int64Attribute("size", int64(size)))
+
+ err = expandSandboxSize(&stdDriverInfo, path, size)
+ if err != nil {
+ return hcserror.New(err, title, "")
+ }
+
+	// Manually expand the volume now in order to work around bugs in 19H1 and
+	// prerelease versions of Vb. Remove this once it is fixed in Windows.
+ if build := osversion.Build(); build >= osversion.V19H1 && build < 19020 {
+ err = expandSandboxVolume(ctx, path)
+ if err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+type virtualStorageType struct {
+ DeviceID uint32
+ VendorID [16]byte
+}
+
+type openVersion2 struct {
+ GetInfoOnly int32 // bool but 4-byte aligned
+ ReadOnly int32 // bool but 4-byte aligned
+ ResiliencyGUID [16]byte // GUID
+}
+
+type openVirtualDiskParameters struct {
+ Version uint32 // Must always be set to 2
+ Version2 openVersion2
+}
+
+func attachVhd(path string) (syscall.Handle, error) {
+ var (
+ defaultType virtualStorageType
+ handle syscall.Handle
+ )
+ parameters := openVirtualDiskParameters{Version: 2}
+ err := openVirtualDisk(
+ &defaultType,
+ path,
+ 0,
+ 0,
+ ¶meters,
+ &handle)
+ if err != nil {
+ return 0, &os.PathError{Op: "OpenVirtualDisk", Path: path, Err: err}
+ }
+ err = attachVirtualDisk(handle, 0, 0, 0, 0, 0)
+ if err != nil {
+ syscall.Close(handle)
+ return 0, &os.PathError{Op: "AttachVirtualDisk", Path: path, Err: err}
+ }
+ return handle, nil
+}
+
+func expandSandboxVolume(ctx context.Context, path string) error {
+ // Mount the sandbox VHD temporarily.
+ vhdPath := filepath.Join(path, "sandbox.vhdx")
+ vhd, err := attachVhd(vhdPath)
+ if err != nil {
+ return &os.PathError{Op: "OpenVirtualDisk", Path: vhdPath, Err: err}
+ }
+ defer syscall.Close(vhd)
+
+ // Open the volume.
+ volumePath, err := GetLayerMountPath(ctx, path)
+ if err != nil {
+ return err
+ }
+ if volumePath[len(volumePath)-1] == '\\' {
+ volumePath = volumePath[:len(volumePath)-1]
+ }
+ volume, err := os.OpenFile(volumePath, os.O_RDWR, 0)
+ if err != nil {
+ return err
+ }
+ defer volume.Close()
+
+ // Get the volume's underlying partition size in NTFS clusters.
+ var (
+ partitionSize int64
+ bytes uint32
+ )
+ const _IOCTL_DISK_GET_LENGTH_INFO = 0x0007405C
+ err = syscall.DeviceIoControl(syscall.Handle(volume.Fd()), _IOCTL_DISK_GET_LENGTH_INFO, nil, 0, (*byte)(unsafe.Pointer(&partitionSize)), 8, &bytes, nil)
+ if err != nil {
+ return &os.PathError{Op: "IOCTL_DISK_GET_LENGTH_INFO", Path: volume.Name(), Err: err}
+ }
+ const (
+ clusterSize = 4096
+ sectorSize = 512
+ )
+ targetClusters := partitionSize / clusterSize
+
+ // Get the volume's current size in NTFS clusters.
+ var volumeSize int64
+ err = getDiskFreeSpaceEx(volume.Name()+"\\", nil, &volumeSize, nil)
+ if err != nil {
+ return &os.PathError{Op: "GetDiskFreeSpaceEx", Path: volume.Name(), Err: err}
+ }
+ volumeClusters := volumeSize / clusterSize
+
+ // Only resize the volume if there is space to grow, otherwise this will
+ // fail with invalid parameter. NTFS reserves one cluster.
+ if volumeClusters+1 < targetClusters {
+ targetSectors := targetClusters * (clusterSize / sectorSize)
+ const _FSCTL_EXTEND_VOLUME = 0x000900F0
+ err = syscall.DeviceIoControl(syscall.Handle(volume.Fd()), _FSCTL_EXTEND_VOLUME, (*byte)(unsafe.Pointer(&targetSectors)), 8, nil, 0, &bytes, nil)
+ if err != nil {
+ return &os.PathError{Op: "FSCTL_EXTEND_VOLUME", Path: volume.Name(), Err: err}
+ }
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/wclayer/exportlayer.go b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/exportlayer.go
new file mode 100644
index 000000000..97b27eb7d
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/exportlayer.go
@@ -0,0 +1,94 @@
+package wclayer
+
+import (
+ "context"
+ "io/ioutil"
+ "os"
+ "strings"
+
+ "github.com/Microsoft/go-winio"
+ "github.com/Microsoft/hcsshim/internal/hcserror"
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "go.opencensus.io/trace"
+)
+
+// ExportLayer will create a folder at exportFolderPath and fill that folder with
+// the transport format version of the layer identified by layerId. This transport
+// format includes any metadata required for later importing the layer (using
+// ImportLayer), and requires the full list of parent layer paths in order to
+// perform the export.
+func ExportLayer(ctx context.Context, path string, exportFolderPath string, parentLayerPaths []string) (err error) {
+ title := "hcsshim::ExportLayer"
+ ctx, span := trace.StartSpan(ctx, title)
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(
+ trace.StringAttribute("path", path),
+ trace.StringAttribute("exportFolderPath", exportFolderPath),
+ trace.StringAttribute("parentLayerPaths", strings.Join(parentLayerPaths, ", ")))
+
+ // Generate layer descriptors
+ layers, err := layerPathsToDescriptors(ctx, parentLayerPaths)
+ if err != nil {
+ return err
+ }
+
+ err = exportLayer(&stdDriverInfo, path, exportFolderPath, layers)
+ if err != nil {
+ return hcserror.New(err, title, "")
+ }
+ return nil
+}
+
+type LayerReader interface {
+ Next() (string, int64, *winio.FileBasicInfo, error)
+ Read(b []byte) (int, error)
+ Close() error
+}
+
+// NewLayerReader returns a new layer reader for reading the contents of an on-disk layer.
+// The caller must have taken the SeBackupPrivilege privilege
+// to call this and any methods on the resulting LayerReader.
+func NewLayerReader(ctx context.Context, path string, parentLayerPaths []string) (_ LayerReader, err error) {
+ ctx, span := trace.StartSpan(ctx, "hcsshim::NewLayerReader")
+ defer func() {
+ if err != nil {
+ oc.SetSpanStatus(span, err)
+ span.End()
+ }
+ }()
+ span.AddAttributes(
+ trace.StringAttribute("path", path),
+ trace.StringAttribute("parentLayerPaths", strings.Join(parentLayerPaths, ", ")))
+
+ exportPath, err := ioutil.TempDir("", "hcs")
+ if err != nil {
+ return nil, err
+ }
+ err = ExportLayer(ctx, path, exportPath, parentLayerPaths)
+ if err != nil {
+ os.RemoveAll(exportPath)
+ return nil, err
+ }
+ return &legacyLayerReaderWrapper{
+ ctx: ctx,
+ s: span,
+ legacyLayerReader: newLegacyLayerReader(exportPath),
+ }, nil
+}
+
+type legacyLayerReaderWrapper struct {
+ ctx context.Context
+ s *trace.Span
+
+ *legacyLayerReader
+}
+
+func (r *legacyLayerReaderWrapper) Close() (err error) {
+ defer r.s.End()
+ defer func() { oc.SetSpanStatus(r.s, err) }()
+
+ err = r.legacyLayerReader.Close()
+ os.RemoveAll(r.root)
+ return err
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/wclayer/getlayermountpath.go b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/getlayermountpath.go
new file mode 100644
index 000000000..8d213f587
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/getlayermountpath.go
@@ -0,0 +1,50 @@
+package wclayer
+
+import (
+ "context"
+ "syscall"
+
+ "github.com/Microsoft/hcsshim/internal/hcserror"
+ "github.com/Microsoft/hcsshim/internal/log"
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "go.opencensus.io/trace"
+)
+
+// GetLayerMountPath will look for a mounted layer with the given path and return
+// the path at which that layer can be accessed. This path may be a volume path
+// if the layer is a mounted read-write layer, otherwise it is expected to be the
+// folder path at which the layer is stored.
+func GetLayerMountPath(ctx context.Context, path string) (_ string, err error) {
+ title := "hcsshim::GetLayerMountPath"
+ ctx, span := trace.StartSpan(ctx, title)
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(trace.StringAttribute("path", path))
+
+ var mountPathLength uintptr = 0
+
+ // Call the procedure itself.
+ log.G(ctx).Debug("Calling proc (1)")
+ err = getLayerMountPath(&stdDriverInfo, path, &mountPathLength, nil)
+ if err != nil {
+ return "", hcserror.New(err, title, "(first call)")
+ }
+
+ // Allocate a mount path of the returned length.
+ if mountPathLength == 0 {
+ return "", nil
+ }
+ mountPathp := make([]uint16, mountPathLength)
+ mountPathp[0] = 0
+
+ // Call the procedure again
+ log.G(ctx).Debug("Calling proc (2)")
+ err = getLayerMountPath(&stdDriverInfo, path, &mountPathLength, &mountPathp[0])
+ if err != nil {
+ return "", hcserror.New(err, title, "(second call)")
+ }
+
+ mountPath := syscall.UTF16ToString(mountPathp[0:])
+ span.AddAttributes(trace.StringAttribute("mountPath", mountPath))
+ return mountPath, nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/wclayer/getsharedbaseimages.go b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/getsharedbaseimages.go
new file mode 100644
index 000000000..ae1fff840
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/getsharedbaseimages.go
@@ -0,0 +1,29 @@
+package wclayer
+
+import (
+ "context"
+
+ "github.com/Microsoft/hcsshim/internal/hcserror"
+ "github.com/Microsoft/hcsshim/internal/interop"
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "go.opencensus.io/trace"
+)
+
+// GetSharedBaseImages will enumerate the images stored in the common central
+// image store and return descriptive info about those images for the purpose
+// of registering them with the graphdriver, graph, and tagstore.
+func GetSharedBaseImages(ctx context.Context) (_ string, err error) {
+ title := "hcsshim::GetSharedBaseImages"
+ ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+
+ var buffer *uint16
+ err = getBaseImages(&buffer)
+ if err != nil {
+ return "", hcserror.New(err, title, "")
+ }
+ imageData := interop.ConvertAndFreeCoTaskMemString(buffer)
+ span.AddAttributes(trace.StringAttribute("imageData", imageData))
+ return imageData, nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/wclayer/grantvmaccess.go b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/grantvmaccess.go
new file mode 100644
index 000000000..4b282fef9
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/grantvmaccess.go
@@ -0,0 +1,26 @@
+package wclayer
+
+import (
+ "context"
+
+ "github.com/Microsoft/hcsshim/internal/hcserror"
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "go.opencensus.io/trace"
+)
+
+// GrantVmAccess adds access to a file for a given VM
+func GrantVmAccess(ctx context.Context, vmid string, filepath string) (err error) {
+ title := "hcsshim::GrantVmAccess"
+ ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(
+ trace.StringAttribute("vm-id", vmid),
+ trace.StringAttribute("path", filepath))
+
+ err = grantVmAccess(vmid, filepath)
+ if err != nil {
+ return hcserror.New(err, title, "")
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/wclayer/importlayer.go b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/importlayer.go
new file mode 100644
index 000000000..687550f0b
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/importlayer.go
@@ -0,0 +1,166 @@
+package wclayer
+
+import (
+ "context"
+ "io/ioutil"
+ "os"
+ "path/filepath"
+ "strings"
+
+ "github.com/Microsoft/go-winio"
+ "github.com/Microsoft/hcsshim/internal/hcserror"
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "github.com/Microsoft/hcsshim/internal/safefile"
+ "go.opencensus.io/trace"
+)
+
+// ImportLayer will take the contents of the folder at importFolderPath and import
+// that into a layer with the id layerId. Note that in order to correctly populate
+// the layer and interpret the transport format, all parent layers must already
+// be present on the system at the paths provided in parentLayerPaths.
+func ImportLayer(ctx context.Context, path string, importFolderPath string, parentLayerPaths []string) (err error) {
+ title := "hcsshim::ImportLayer"
+ ctx, span := trace.StartSpan(ctx, title)
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(
+ trace.StringAttribute("path", path),
+ trace.StringAttribute("importFolderPath", importFolderPath),
+ trace.StringAttribute("parentLayerPaths", strings.Join(parentLayerPaths, ", ")))
+
+ // Generate layer descriptors
+ layers, err := layerPathsToDescriptors(ctx, parentLayerPaths)
+ if err != nil {
+ return err
+ }
+
+ err = importLayer(&stdDriverInfo, path, importFolderPath, layers)
+ if err != nil {
+ return hcserror.New(err, title, "")
+ }
+ return nil
+}
+
+// LayerWriter is an interface that supports writing a new container image layer.
+type LayerWriter interface {
+ // Add adds a file to the layer with given metadata.
+ Add(name string, fileInfo *winio.FileBasicInfo) error
+ // AddLink adds a hard link to the layer. The target must already have been added.
+ AddLink(name string, target string) error
+ // Remove removes a file that was present in a parent layer from the layer.
+ Remove(name string) error
+ // Write writes data to the current file. The data must be in the format of a Win32
+ // backup stream.
+ Write(b []byte) (int, error)
+ // Close finishes the layer writing process and releases any resources.
+ Close() error
+}
+
+type legacyLayerWriterWrapper struct {
+ ctx context.Context
+ s *trace.Span
+
+ *legacyLayerWriter
+ path string
+ parentLayerPaths []string
+}
+
+func (r *legacyLayerWriterWrapper) Close() (err error) {
+ defer r.s.End()
+ defer func() { oc.SetSpanStatus(r.s, err) }()
+ defer os.RemoveAll(r.root.Name())
+ defer r.legacyLayerWriter.CloseRoots()
+
+ err = r.legacyLayerWriter.Close()
+ if err != nil {
+ return err
+ }
+
+ if err = ImportLayer(r.ctx, r.destRoot.Name(), r.path, r.parentLayerPaths); err != nil {
+ return err
+ }
+ for _, name := range r.Tombstones {
+ if err = safefile.RemoveRelative(name, r.destRoot); err != nil && !os.IsNotExist(err) {
+ return err
+ }
+ }
+ // Add any hard links that were collected.
+ for _, lnk := range r.PendingLinks {
+ if err = safefile.RemoveRelative(lnk.Path, r.destRoot); err != nil && !os.IsNotExist(err) {
+ return err
+ }
+ if err = safefile.LinkRelative(lnk.Target, lnk.TargetRoot, lnk.Path, r.destRoot); err != nil {
+ return err
+ }
+ }
+
+	// reapplyDirectoryTimes must be called AFTER we are done with tombstone
+	// deletion and hard link creation, because both of those operations update
+	// the directory last-write timestamps and would overwrite the timestamps
+	// added by the `Add` call. Some container applications depend on the
+	// correctness of these timestamps, so we restore them to their original
+	// values (i.e. the values provided in the Add call) after this processing
+	// is done.
+ err = reapplyDirectoryTimes(r.destRoot, r.changedDi)
+ if err != nil {
+ return err
+ }
+
+ // Prepare the utility VM for use if one is present in the layer.
+ if r.HasUtilityVM {
+ err := safefile.EnsureNotReparsePointRelative("UtilityVM", r.destRoot)
+ if err != nil {
+ return err
+ }
+ err = ProcessUtilityVMImage(r.ctx, filepath.Join(r.destRoot.Name(), "UtilityVM"))
+ if err != nil {
+ return err
+ }
+ }
+ return nil
+}
+
+// NewLayerWriter returns a new layer writer for creating a layer on disk.
+// The caller must have taken the SeBackupPrivilege and SeRestorePrivilege privileges
+// to call this and any methods on the resulting LayerWriter.
+func NewLayerWriter(ctx context.Context, path string, parentLayerPaths []string) (_ LayerWriter, err error) {
+ ctx, span := trace.StartSpan(ctx, "hcsshim::NewLayerWriter")
+ defer func() {
+ if err != nil {
+ oc.SetSpanStatus(span, err)
+ span.End()
+ }
+ }()
+ span.AddAttributes(
+ trace.StringAttribute("path", path),
+ trace.StringAttribute("parentLayerPaths", strings.Join(parentLayerPaths, ", ")))
+
+ if len(parentLayerPaths) == 0 {
+ // This is a base layer. It gets imported differently.
+ f, err := safefile.OpenRoot(path)
+ if err != nil {
+ return nil, err
+ }
+ return &baseLayerWriter{
+ ctx: ctx,
+ s: span,
+ root: f,
+ }, nil
+ }
+
+ importPath, err := ioutil.TempDir("", "hcs")
+ if err != nil {
+ return nil, err
+ }
+ w, err := newLegacyLayerWriter(importPath, parentLayerPaths, path)
+ if err != nil {
+ return nil, err
+ }
+ return &legacyLayerWriterWrapper{
+ ctx: ctx,
+ s: span,
+ legacyLayerWriter: w,
+ path: importPath,
+ parentLayerPaths: parentLayerPaths,
+ }, nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/wclayer/layerexists.go b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/layerexists.go
new file mode 100644
index 000000000..01e672339
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/layerexists.go
@@ -0,0 +1,28 @@
+package wclayer
+
+import (
+ "context"
+
+ "github.com/Microsoft/hcsshim/internal/hcserror"
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "go.opencensus.io/trace"
+)
+
+// LayerExists will return true if a layer at the given path exists and is
+// known to the system.
+func LayerExists(ctx context.Context, path string) (_ bool, err error) {
+ title := "hcsshim::LayerExists"
+ ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(trace.StringAttribute("path", path))
+
+ // Call the procedure itself.
+ var exists uint32
+ err = layerExists(&stdDriverInfo, path, &exists)
+ if err != nil {
+ return false, hcserror.New(err, title, "")
+ }
+ span.AddAttributes(trace.BoolAttribute("layer-exists", exists != 0))
+ return exists != 0, nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/wclayer/layerid.go b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/layerid.go
new file mode 100644
index 000000000..0ce34a30f
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/layerid.go
@@ -0,0 +1,22 @@
+package wclayer
+
+import (
+ "context"
+ "path/filepath"
+
+ "github.com/Microsoft/go-winio/pkg/guid"
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "go.opencensus.io/trace"
+)
+
+// LayerID returns the layer ID of a layer on disk.
+func LayerID(ctx context.Context, path string) (_ guid.GUID, err error) {
+ title := "hcsshim::LayerID"
+ ctx, span := trace.StartSpan(ctx, title)
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(trace.StringAttribute("path", path))
+
+ _, file := filepath.Split(path)
+ return NameToGuid(ctx, file)
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/wclayer/layerutils.go b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/layerutils.go
new file mode 100644
index 000000000..1ec893c6a
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/layerutils.go
@@ -0,0 +1,97 @@
+package wclayer
+
+// This file contains utility functions to support storage (graph) related
+// functionality.
+
+import (
+ "context"
+ "syscall"
+
+ "github.com/Microsoft/go-winio/pkg/guid"
+ "github.com/sirupsen/logrus"
+)
+
+/* To pass into syscall, we need a struct matching the following:
+enum GraphDriverType
+{
+ DiffDriver,
+ FilterDriver
+};
+
+struct DriverInfo {
+ GraphDriverType Flavour;
+ LPCWSTR HomeDir;
+};
+*/
+
+type driverInfo struct {
+ Flavour int
+ HomeDirp *uint16
+}
+
+var (
+ utf16EmptyString uint16
+ stdDriverInfo = driverInfo{1, &utf16EmptyString}
+)
+
+/* To pass into syscall, we need a struct matching the following:
+typedef struct _WC_LAYER_DESCRIPTOR {
+
+ //
+ // The ID of the layer
+ //
+
+ GUID LayerId;
+
+ //
+ // Additional flags
+ //
+
+ union {
+ struct {
+ ULONG Reserved : 31;
+ ULONG Dirty : 1; // Created from sandbox as a result of snapshot
+ };
+ ULONG Value;
+ } Flags;
+
+ //
+ // Path to the layer root directory, null-terminated
+ //
+
+ PCWSTR Path;
+
+} WC_LAYER_DESCRIPTOR, *PWC_LAYER_DESCRIPTOR;
+*/
+type WC_LAYER_DESCRIPTOR struct {
+ LayerId guid.GUID
+ Flags uint32
+ Pathp *uint16
+}
+
+func layerPathsToDescriptors(ctx context.Context, parentLayerPaths []string) ([]WC_LAYER_DESCRIPTOR, error) {
+ // Array of descriptors that gets constructed.
+ var layers []WC_LAYER_DESCRIPTOR
+
+ for i := 0; i < len(parentLayerPaths); i++ {
+ g, err := LayerID(ctx, parentLayerPaths[i])
+ if err != nil {
+ logrus.WithError(err).Debug("Failed to convert name to guid")
+ return nil, err
+ }
+
+ p, err := syscall.UTF16PtrFromString(parentLayerPaths[i])
+ if err != nil {
+ logrus.WithError(err).Debug("Failed conversion of parentLayerPath to pointer")
+ return nil, err
+ }
+
+ layers = append(layers, WC_LAYER_DESCRIPTOR{
+ LayerId: g,
+ Flags: 0,
+ Pathp: p,
+ })
+ }
+
+ return layers, nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/wclayer/legacy.go b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/legacy.go
new file mode 100644
index 000000000..b7f3064f2
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/legacy.go
@@ -0,0 +1,811 @@
+package wclayer
+
+import (
+ "bufio"
+ "encoding/binary"
+ "errors"
+ "fmt"
+ "io"
+ "io/ioutil"
+ "os"
+ "path/filepath"
+ "strings"
+ "syscall"
+
+ "github.com/Microsoft/go-winio"
+ "github.com/Microsoft/hcsshim/internal/longpath"
+ "github.com/Microsoft/hcsshim/internal/safefile"
+ "github.com/Microsoft/hcsshim/internal/winapi"
+)
+
+var errorIterationCanceled = errors.New("")
+
+var mutatedUtilityVMFiles = map[string]bool{
+ `EFI\Microsoft\Boot\BCD`: true,
+ `EFI\Microsoft\Boot\BCD.LOG`: true,
+ `EFI\Microsoft\Boot\BCD.LOG1`: true,
+ `EFI\Microsoft\Boot\BCD.LOG2`: true,
+}
+
+const (
+ filesPath = `Files`
+ hivesPath = `Hives`
+ utilityVMPath = `UtilityVM`
+ utilityVMFilesPath = `UtilityVM\Files`
+)
+
+func openFileOrDir(path string, mode uint32, createDisposition uint32) (file *os.File, err error) {
+ return winio.OpenForBackup(path, mode, syscall.FILE_SHARE_READ, createDisposition)
+}
+
+func hasPathPrefix(p, prefix string) bool {
+ return strings.HasPrefix(p, prefix) && len(p) > len(prefix) && p[len(prefix)] == '\\'
+}
+
+type fileEntry struct {
+ path string
+ fi os.FileInfo
+ err error
+}
+
+type legacyLayerReader struct {
+ root string
+ result chan *fileEntry
+ proceed chan bool
+ currentFile *os.File
+ backupReader *winio.BackupFileReader
+}
+
+// newLegacyLayerReader returns a new LayerReader that can read the Windows
+// container layer transport format from disk.
+func newLegacyLayerReader(root string) *legacyLayerReader {
+ r := &legacyLayerReader{
+ root: root,
+ result: make(chan *fileEntry),
+ proceed: make(chan bool),
+ }
+ go r.walk()
+ return r
+}
+
+func readTombstones(path string) (map[string]([]string), error) {
+ tf, err := os.Open(filepath.Join(path, "tombstones.txt"))
+ if err != nil {
+ return nil, err
+ }
+ defer tf.Close()
+ s := bufio.NewScanner(tf)
+ if !s.Scan() || s.Text() != "\xef\xbb\xbfVersion 1.0" {
+ return nil, errors.New("invalid tombstones file")
+ }
+
+ ts := make(map[string]([]string))
+ for s.Scan() {
+ t := filepath.Join(filesPath, s.Text()[1:]) // skip leading `\`
+ dir := filepath.Dir(t)
+ ts[dir] = append(ts[dir], t)
+ }
+ if err = s.Err(); err != nil {
+ return nil, err
+ }
+
+ return ts, nil
+}
+
+func (r *legacyLayerReader) walkUntilCancelled() error {
+ root, err := longpath.LongAbs(r.root)
+ if err != nil {
+ return err
+ }
+
+ r.root = root
+ ts, err := readTombstones(r.root)
+ if err != nil {
+ return err
+ }
+
+ err = filepath.Walk(r.root, func(path string, info os.FileInfo, err error) error {
+ if err != nil {
+ return err
+ }
+
+		// Indirect fix for https://github.com/moby/moby/issues/32838#issuecomment-343610048.
+		// Handle failure from what may be a golang bug in the conversion of
+		// UTF16 to UTF8 in files which are left in the recycle bin. os.Lstat,
+		// which is called by filepath.Walk, will fail when a filename contains
+		// Unicode characters. Skipping the recycle bin regardless is harmless.
+ if strings.EqualFold(path, filepath.Join(r.root, `Files\$Recycle.Bin`)) && info.IsDir() {
+ return filepath.SkipDir
+ }
+
+ if path == r.root || path == filepath.Join(r.root, "tombstones.txt") || strings.HasSuffix(path, ".$wcidirs$") {
+ return nil
+ }
+
+ r.result <- &fileEntry{path, info, nil}
+ if !<-r.proceed {
+ return errorIterationCanceled
+ }
+
+ // List all the tombstones.
+ if info.IsDir() {
+ relPath, err := filepath.Rel(r.root, path)
+ if err != nil {
+ return err
+ }
+ if dts, ok := ts[relPath]; ok {
+ for _, t := range dts {
+ r.result <- &fileEntry{filepath.Join(r.root, t), nil, nil}
+ if !<-r.proceed {
+ return errorIterationCanceled
+ }
+ }
+ }
+ }
+ return nil
+ })
+ if err == errorIterationCanceled {
+ return nil
+ }
+ if err == nil {
+ return io.EOF
+ }
+ return err
+}
+
+func (r *legacyLayerReader) walk() {
+ defer close(r.result)
+ if !<-r.proceed {
+ return
+ }
+
+ err := r.walkUntilCancelled()
+ if err != nil {
+ for {
+ r.result <- &fileEntry{err: err}
+ if !<-r.proceed {
+ return
+ }
+ }
+ }
+}
+
+func (r *legacyLayerReader) reset() {
+ if r.backupReader != nil {
+ r.backupReader.Close()
+ r.backupReader = nil
+ }
+ if r.currentFile != nil {
+ r.currentFile.Close()
+ r.currentFile = nil
+ }
+}
+
+func findBackupStreamSize(r io.Reader) (int64, error) {
+ br := winio.NewBackupStreamReader(r)
+ for {
+ hdr, err := br.Next()
+ if err != nil {
+ if err == io.EOF {
+ err = nil
+ }
+ return 0, err
+ }
+ if hdr.Id == winio.BackupData {
+ return hdr.Size, nil
+ }
+ }
+}
+
+func (r *legacyLayerReader) Next() (path string, size int64, fileInfo *winio.FileBasicInfo, err error) {
+ r.reset()
+ r.proceed <- true
+ fe := <-r.result
+ if fe == nil {
+ err = errors.New("LegacyLayerReader closed")
+ return
+ }
+ if fe.err != nil {
+ err = fe.err
+ return
+ }
+
+ path, err = filepath.Rel(r.root, fe.path)
+ if err != nil {
+ return
+ }
+
+ if fe.fi == nil {
+ // This is a tombstone. Return a nil fileInfo.
+ return
+ }
+
+ if fe.fi.IsDir() && hasPathPrefix(path, filesPath) {
+ fe.path += ".$wcidirs$"
+ }
+
+ f, err := openFileOrDir(fe.path, syscall.GENERIC_READ, syscall.OPEN_EXISTING)
+ if err != nil {
+ return
+ }
+ defer func() {
+ if f != nil {
+ f.Close()
+ }
+ }()
+
+ fileInfo, err = winio.GetFileBasicInfo(f)
+ if err != nil {
+ return
+ }
+
+ if !hasPathPrefix(path, filesPath) {
+ size = fe.fi.Size()
+ r.backupReader = winio.NewBackupFileReader(f, false)
+ if path == hivesPath || path == filesPath {
+ // The Hives directory has a non-deterministic file time because of the
+ // nature of the import process. Use the times from System_Delta.
+ var g *os.File
+ g, err = os.Open(filepath.Join(r.root, hivesPath, `System_Delta`))
+ if err != nil {
+ return
+ }
+ attr := fileInfo.FileAttributes
+ fileInfo, err = winio.GetFileBasicInfo(g)
+ g.Close()
+ if err != nil {
+ return
+ }
+ fileInfo.FileAttributes = attr
+ }
+
+ // The creation time and access time get reset for files outside of the Files path.
+ fileInfo.CreationTime = fileInfo.LastWriteTime
+ fileInfo.LastAccessTime = fileInfo.LastWriteTime
+
+ } else {
+ // The file attributes are written before the backup stream.
+ var attr uint32
+ err = binary.Read(f, binary.LittleEndian, &attr)
+ if err != nil {
+ return
+ }
+ fileInfo.FileAttributes = attr
+ beginning := int64(4)
+
+ // Find the accurate file size.
+ if !fe.fi.IsDir() {
+ size, err = findBackupStreamSize(f)
+ if err != nil {
+ err = &os.PathError{Op: "findBackupStreamSize", Path: fe.path, Err: err}
+ return
+ }
+ }
+
+ // Return back to the beginning of the backup stream.
+ _, err = f.Seek(beginning, 0)
+ if err != nil {
+ return
+ }
+ }
+
+ r.currentFile = f
+ f = nil
+ return
+}
+
+func (r *legacyLayerReader) Read(b []byte) (int, error) {
+ if r.backupReader == nil {
+ if r.currentFile == nil {
+ return 0, io.EOF
+ }
+ return r.currentFile.Read(b)
+ }
+ return r.backupReader.Read(b)
+}
+
+func (r *legacyLayerReader) Seek(offset int64, whence int) (int64, error) {
+ if r.backupReader == nil {
+ if r.currentFile == nil {
+ return 0, errors.New("no current file")
+ }
+ return r.currentFile.Seek(offset, whence)
+ }
+ return 0, errors.New("seek not supported on this stream")
+}
+
+func (r *legacyLayerReader) Close() error {
+ r.proceed <- false
+ <-r.result
+ r.reset()
+ return nil
+}
+
+type pendingLink struct {
+ Path, Target string
+ TargetRoot *os.File
+}
+
+type pendingDir struct {
+ Path string
+ Root *os.File
+}
+
+type legacyLayerWriter struct {
+ root *os.File
+ destRoot *os.File
+ parentRoots []*os.File
+ currentFile *os.File
+ bufWriter *bufio.Writer
+ currentFileName string
+ currentFileRoot *os.File
+ backupWriter *winio.BackupFileWriter
+ Tombstones []string
+ HasUtilityVM bool
+ changedDi []dirInfo
+ addedFiles map[string]bool
+ PendingLinks []pendingLink
+ pendingDirs []pendingDir
+ currentIsDir bool
+}
+
+// newLegacyLayerWriter returns a LayerWriter that can write the container layer
+// transport format to disk.
+func newLegacyLayerWriter(root string, parentRoots []string, destRoot string) (w *legacyLayerWriter, err error) {
+ w = &legacyLayerWriter{
+ addedFiles: make(map[string]bool),
+ }
+ defer func() {
+ if err != nil {
+ w.CloseRoots()
+ w = nil
+ }
+ }()
+ w.root, err = safefile.OpenRoot(root)
+ if err != nil {
+ return
+ }
+ w.destRoot, err = safefile.OpenRoot(destRoot)
+ if err != nil {
+ return
+ }
+ for _, r := range parentRoots {
+ f, err := safefile.OpenRoot(r)
+ if err != nil {
+ return w, err
+ }
+ w.parentRoots = append(w.parentRoots, f)
+ }
+ w.bufWriter = bufio.NewWriterSize(ioutil.Discard, 65536)
+ return
+}
+
+func (w *legacyLayerWriter) CloseRoots() {
+ if w.root != nil {
+ w.root.Close()
+ w.root = nil
+ }
+ if w.destRoot != nil {
+ w.destRoot.Close()
+ w.destRoot = nil
+ }
+ for i := range w.parentRoots {
+ _ = w.parentRoots[i].Close()
+ }
+ w.parentRoots = nil
+}
+
+func (w *legacyLayerWriter) initUtilityVM() error {
+ if !w.HasUtilityVM {
+ err := safefile.MkdirRelative(utilityVMPath, w.destRoot)
+ if err != nil {
+ return err
+ }
+ // Server 2016 does not support multiple layers for the utility VM, so
+ // clone the utility VM from the parent layer into this layer. Use hard
+ // links to avoid unnecessary copying, since most of the files are
+ // immutable.
+ err = cloneTree(w.parentRoots[0], w.destRoot, utilityVMFilesPath, mutatedUtilityVMFiles)
+ if err != nil {
+ return fmt.Errorf("cloning the parent utility VM image failed: %s", err)
+ }
+ w.HasUtilityVM = true
+ }
+ return nil
+}
+
+func (w *legacyLayerWriter) reset() error {
+ err := w.bufWriter.Flush()
+ if err != nil {
+ return err
+ }
+ w.bufWriter.Reset(ioutil.Discard)
+ if w.currentIsDir {
+ r := w.currentFile
+ br := winio.NewBackupStreamReader(r)
+ // Seek to the beginning of the backup stream, skipping the fileattrs
+ if _, err := r.Seek(4, io.SeekStart); err != nil {
+ return err
+ }
+
+ for {
+ bhdr, err := br.Next()
+ if err == io.EOF {
+ // end of backupstream data
+ break
+ }
+ if err != nil {
+ return err
+ }
+ switch bhdr.Id {
+ case winio.BackupReparseData:
+ // The current file is a `.$wcidirs$` metadata file that
+ // describes a directory reparse point. Delete the placeholder
+ // directory to prevent future files being added into the
+ // destination of the reparse point during the ImportLayer call
+ if err := safefile.RemoveRelative(w.currentFileName, w.currentFileRoot); err != nil {
+ return err
+ }
+ w.pendingDirs = append(w.pendingDirs, pendingDir{Path: w.currentFileName, Root: w.currentFileRoot})
+ default:
+ // ignore all other stream types, as we only care about directory reparse points
+ }
+ }
+ w.currentIsDir = false
+ }
+ if w.backupWriter != nil {
+ w.backupWriter.Close()
+ w.backupWriter = nil
+ }
+ if w.currentFile != nil {
+ w.currentFile.Close()
+ w.currentFile = nil
+ w.currentFileName = ""
+ w.currentFileRoot = nil
+ }
+ return nil
+}
+
+// copyFileWithMetadata copies a file using the backup/restore APIs in order to preserve metadata
+func copyFileWithMetadata(srcRoot, destRoot *os.File, subPath string, isDir bool) (fileInfo *winio.FileBasicInfo, err error) {
+ src, err := safefile.OpenRelative(
+ subPath,
+ srcRoot,
+ syscall.GENERIC_READ|winio.ACCESS_SYSTEM_SECURITY,
+ syscall.FILE_SHARE_READ,
+ winapi.FILE_OPEN,
+ winapi.FILE_OPEN_REPARSE_POINT)
+ if err != nil {
+ return nil, err
+ }
+ defer src.Close()
+ srcr := winio.NewBackupFileReader(src, true)
+ defer srcr.Close()
+
+ fileInfo, err = winio.GetFileBasicInfo(src)
+ if err != nil {
+ return nil, err
+ }
+
+ extraFlags := uint32(0)
+ if isDir {
+ extraFlags |= winapi.FILE_DIRECTORY_FILE
+ }
+ dest, err := safefile.OpenRelative(
+ subPath,
+ destRoot,
+ syscall.GENERIC_READ|syscall.GENERIC_WRITE|winio.WRITE_DAC|winio.WRITE_OWNER|winio.ACCESS_SYSTEM_SECURITY,
+ syscall.FILE_SHARE_READ,
+ winapi.FILE_CREATE,
+ extraFlags)
+ if err != nil {
+ return nil, err
+ }
+ defer dest.Close()
+
+ err = winio.SetFileBasicInfo(dest, fileInfo)
+ if err != nil {
+ return nil, err
+ }
+
+ destw := winio.NewBackupFileWriter(dest, true)
+ defer func() {
+ cerr := destw.Close()
+ if err == nil {
+ err = cerr
+ }
+ }()
+
+ _, err = io.Copy(destw, srcr)
+ if err != nil {
+ return nil, err
+ }
+
+ return fileInfo, nil
+}
+
+// cloneTree clones a directory tree using hard links. It skips hard links for
+// the file names in the provided map and just copies those files.
+func cloneTree(srcRoot *os.File, destRoot *os.File, subPath string, mutatedFiles map[string]bool) error {
+ var di []dirInfo
+ err := safefile.EnsureNotReparsePointRelative(subPath, srcRoot)
+ if err != nil {
+ return err
+ }
+ err = filepath.Walk(filepath.Join(srcRoot.Name(), subPath), func(srcFilePath string, info os.FileInfo, err error) error {
+ if err != nil {
+ return err
+ }
+
+ relPath, err := filepath.Rel(srcRoot.Name(), srcFilePath)
+ if err != nil {
+ return err
+ }
+
+ fileAttributes := info.Sys().(*syscall.Win32FileAttributeData).FileAttributes
+ // Directories, reparse points, and files that will be mutated during
+ // utility VM import must be copied. All other files can be hard linked.
+ isReparsePoint := fileAttributes&syscall.FILE_ATTRIBUTE_REPARSE_POINT != 0
+ // In go1.9, FileInfo.IsDir() returns false if the directory is also a symlink.
+ // See: https://github.com/golang/go/commit/1989921aef60c83e6f9127a8448fb5ede10e9acc
+ // Fixes the problem by checking syscall.FILE_ATTRIBUTE_DIRECTORY directly
+ isDir := fileAttributes&syscall.FILE_ATTRIBUTE_DIRECTORY != 0
+
+ if isDir || isReparsePoint || mutatedFiles[relPath] {
+ fi, err := copyFileWithMetadata(srcRoot, destRoot, relPath, isDir)
+ if err != nil {
+ return err
+ }
+ if isDir {
+ di = append(di, dirInfo{path: relPath, fileInfo: *fi})
+ }
+ } else {
+ err = safefile.LinkRelative(relPath, srcRoot, relPath, destRoot)
+ if err != nil {
+ return err
+ }
+ }
+
+ return nil
+ })
+ if err != nil {
+ return err
+ }
+
+ return reapplyDirectoryTimes(destRoot, di)
+}
+
+func (w *legacyLayerWriter) Add(name string, fileInfo *winio.FileBasicInfo) error {
+ if err := w.reset(); err != nil {
+ return err
+ }
+
+ if name == utilityVMPath {
+ return w.initUtilityVM()
+ }
+
+ if (fileInfo.FileAttributes & syscall.FILE_ATTRIBUTE_DIRECTORY) != 0 {
+ w.changedDi = append(w.changedDi, dirInfo{path: name, fileInfo: *fileInfo})
+ }
+
+ name = filepath.Clean(name)
+ if hasPathPrefix(name, utilityVMPath) {
+ if !w.HasUtilityVM {
+ return errors.New("missing UtilityVM directory")
+ }
+ if !hasPathPrefix(name, utilityVMFilesPath) && name != utilityVMFilesPath {
+ return errors.New("invalid UtilityVM layer")
+ }
+ createDisposition := uint32(winapi.FILE_OPEN)
+ if (fileInfo.FileAttributes & syscall.FILE_ATTRIBUTE_DIRECTORY) != 0 {
+ st, err := safefile.LstatRelative(name, w.destRoot)
+ if err != nil && !os.IsNotExist(err) {
+ return err
+ }
+ if st != nil {
+ // Delete the existing file/directory if it is not the same type as this directory.
+ existingAttr := st.Sys().(*syscall.Win32FileAttributeData).FileAttributes
+ if (uint32(fileInfo.FileAttributes)^existingAttr)&(syscall.FILE_ATTRIBUTE_DIRECTORY|syscall.FILE_ATTRIBUTE_REPARSE_POINT) != 0 {
+ if err = safefile.RemoveAllRelative(name, w.destRoot); err != nil {
+ return err
+ }
+ st = nil
+ }
+ }
+ if st == nil {
+ if err = safefile.MkdirRelative(name, w.destRoot); err != nil {
+ return err
+ }
+ }
+ } else {
+ // Overwrite any existing hard link.
+ err := safefile.RemoveRelative(name, w.destRoot)
+ if err != nil && !os.IsNotExist(err) {
+ return err
+ }
+ createDisposition = winapi.FILE_CREATE
+ }
+
+ f, err := safefile.OpenRelative(
+ name,
+ w.destRoot,
+ syscall.GENERIC_READ|syscall.GENERIC_WRITE|winio.WRITE_DAC|winio.WRITE_OWNER|winio.ACCESS_SYSTEM_SECURITY,
+ syscall.FILE_SHARE_READ,
+ createDisposition,
+ winapi.FILE_OPEN_REPARSE_POINT,
+ )
+ if err != nil {
+ return err
+ }
+ defer func() {
+ if f != nil {
+ f.Close()
+ _ = safefile.RemoveRelative(name, w.destRoot)
+ }
+ }()
+
+ err = winio.SetFileBasicInfo(f, fileInfo)
+ if err != nil {
+ return err
+ }
+
+ w.backupWriter = winio.NewBackupFileWriter(f, true)
+ w.bufWriter.Reset(w.backupWriter)
+ w.currentFile = f
+ w.currentFileName = name
+ w.currentFileRoot = w.destRoot
+ w.addedFiles[name] = true
+ f = nil
+ return nil
+ }
+
+ fname := name
+ if (fileInfo.FileAttributes & syscall.FILE_ATTRIBUTE_DIRECTORY) != 0 {
+ err := safefile.MkdirRelative(name, w.root)
+ if err != nil {
+ return err
+ }
+ fname += ".$wcidirs$"
+ w.currentIsDir = true
+ }
+
+ f, err := safefile.OpenRelative(fname, w.root, syscall.GENERIC_READ|syscall.GENERIC_WRITE, syscall.FILE_SHARE_READ, winapi.FILE_CREATE, 0)
+ if err != nil {
+ return err
+ }
+ defer func() {
+ if f != nil {
+ f.Close()
+ _ = safefile.RemoveRelative(fname, w.root)
+ }
+ }()
+
+ strippedFi := *fileInfo
+ strippedFi.FileAttributes = 0
+ err = winio.SetFileBasicInfo(f, &strippedFi)
+ if err != nil {
+ return err
+ }
+
+ if hasPathPrefix(name, hivesPath) {
+ w.backupWriter = winio.NewBackupFileWriter(f, false)
+ w.bufWriter.Reset(w.backupWriter)
+ } else {
+ w.bufWriter.Reset(f)
+ // The file attributes are written before the stream.
+ err = binary.Write(w.bufWriter, binary.LittleEndian, uint32(fileInfo.FileAttributes))
+ if err != nil {
+ w.bufWriter.Reset(ioutil.Discard)
+ return err
+ }
+ }
+
+ w.currentFile = f
+ w.currentFileName = name
+ w.currentFileRoot = w.root
+ w.addedFiles[name] = true
+ f = nil
+ return nil
+}
+
+func (w *legacyLayerWriter) AddLink(name string, target string) error {
+ if err := w.reset(); err != nil {
+ return err
+ }
+
+ target = filepath.Clean(target)
+ var roots []*os.File
+ if hasPathPrefix(target, filesPath) {
+ // Look for cross-layer hard link targets in the parent layers, since
+ // nothing is in the destination path yet.
+ roots = w.parentRoots
+ } else if hasPathPrefix(target, utilityVMFilesPath) {
+ // Since the utility VM is fully cloned into the destination path
+ // already, look for cross-layer hard link targets directly in the
+ // destination path.
+ roots = []*os.File{w.destRoot}
+ }
+
+ if roots == nil || (!hasPathPrefix(name, filesPath) && !hasPathPrefix(name, utilityVMFilesPath)) {
+ return errors.New("invalid hard link in layer")
+ }
+
+	// Try to find the target of the link in a previously added file. If that
+	// fails, search in parent layers.
+ var selectedRoot *os.File
+ if _, ok := w.addedFiles[target]; ok {
+ selectedRoot = w.destRoot
+ } else {
+ for _, r := range roots {
+ if _, err := safefile.LstatRelative(target, r); err != nil {
+ if !os.IsNotExist(err) {
+ return err
+ }
+ } else {
+ selectedRoot = r
+ break
+ }
+ }
+ if selectedRoot == nil {
+ return fmt.Errorf("failed to find link target for '%s' -> '%s'", name, target)
+ }
+ }
+
+ // The link can't be written until after the ImportLayer call.
+ w.PendingLinks = append(w.PendingLinks, pendingLink{
+ Path: name,
+ Target: target,
+ TargetRoot: selectedRoot,
+ })
+ w.addedFiles[name] = true
+ return nil
+}
+
+func (w *legacyLayerWriter) Remove(name string) error {
+ name = filepath.Clean(name)
+ if hasPathPrefix(name, filesPath) {
+ w.Tombstones = append(w.Tombstones, name)
+ } else if hasPathPrefix(name, utilityVMFilesPath) {
+ err := w.initUtilityVM()
+ if err != nil {
+ return err
+ }
+	// Make sure the path exists; os.RemoveAll will not fail if the file is
+	// already gone, and this needs to be a fatal error for diagnostic
+	// purposes.
+ if _, err := safefile.LstatRelative(name, w.destRoot); err != nil {
+ return err
+ }
+ err = safefile.RemoveAllRelative(name, w.destRoot)
+ if err != nil {
+ return err
+ }
+ } else {
+ return fmt.Errorf("invalid tombstone %s", name)
+ }
+
+ return nil
+}
+
+func (w *legacyLayerWriter) Write(b []byte) (int, error) {
+ if w.backupWriter == nil && w.currentFile == nil {
+ return 0, errors.New("closed")
+ }
+ return w.bufWriter.Write(b)
+}
+
+func (w *legacyLayerWriter) Close() error {
+ if err := w.reset(); err != nil {
+ return err
+ }
+ if err := safefile.RemoveRelative("tombstones.txt", w.root); err != nil && !os.IsNotExist(err) {
+ return err
+ }
+ for _, pd := range w.pendingDirs {
+ err := safefile.MkdirRelative(pd.Path, pd.Root)
+ if err != nil {
+ return err
+ }
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/wclayer/nametoguid.go b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/nametoguid.go
new file mode 100644
index 000000000..09950297c
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/nametoguid.go
@@ -0,0 +1,29 @@
+package wclayer
+
+import (
+ "context"
+
+ "github.com/Microsoft/go-winio/pkg/guid"
+ "github.com/Microsoft/hcsshim/internal/hcserror"
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "go.opencensus.io/trace"
+)
+
+// NameToGuid converts the given string into a GUID using the algorithm in the
+// Host Compute Service, ensuring GUIDs generated with the same string are common
+// across all clients.
+func NameToGuid(ctx context.Context, name string) (_ guid.GUID, err error) {
+ title := "hcsshim::NameToGuid"
+ ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(trace.StringAttribute("objectName", name))
+
+ var id guid.GUID
+ err = nameToGuid(name, &id)
+ if err != nil {
+ return guid.GUID{}, hcserror.New(err, title, "")
+ }
+ span.AddAttributes(trace.StringAttribute("guid", id.String()))
+ return id, nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/wclayer/preparelayer.go b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/preparelayer.go
new file mode 100644
index 000000000..90129faef
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/preparelayer.go
@@ -0,0 +1,44 @@
+package wclayer
+
+import (
+ "context"
+ "strings"
+ "sync"
+
+ "github.com/Microsoft/hcsshim/internal/hcserror"
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "go.opencensus.io/trace"
+)
+
+var prepareLayerLock sync.Mutex
+
+// PrepareLayer finds a mounted read-write layer matching path and enables the
+// filesystem filter for use on that layer. This requires the paths to all
+// parent layers, and is necessary in order to view or interact with the layer
+// as an actual filesystem (reading and writing files, creating directories, etc.).
+// Disabling the filter must be done via UnprepareLayer.
+func PrepareLayer(ctx context.Context, path string, parentLayerPaths []string) (err error) {
+ title := "hcsshim::PrepareLayer"
+ ctx, span := trace.StartSpan(ctx, title)
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(
+ trace.StringAttribute("path", path),
+ trace.StringAttribute("parentLayerPaths", strings.Join(parentLayerPaths, ", ")))
+
+ // Generate layer descriptors
+ layers, err := layerPathsToDescriptors(ctx, parentLayerPaths)
+ if err != nil {
+ return err
+ }
+
+ // This lock is a temporary workaround for a Windows bug. Only allowing one
+ // call to prepareLayer at a time vastly reduces the chance of a timeout.
+ prepareLayerLock.Lock()
+ defer prepareLayerLock.Unlock()
+ err = prepareLayer(&stdDriverInfo, path, layers)
+ if err != nil {
+ return hcserror.New(err, title, "")
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/wclayer/processimage.go b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/processimage.go
new file mode 100644
index 000000000..30bcdff5f
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/processimage.go
@@ -0,0 +1,41 @@
+package wclayer
+
+import (
+ "context"
+ "os"
+
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "go.opencensus.io/trace"
+)
+
+// ProcessBaseLayer post-processes a base layer that has had its files extracted.
+// The files should have been extracted to \Files.
+func ProcessBaseLayer(ctx context.Context, path string) (err error) {
+ title := "hcsshim::ProcessBaseLayer"
+ ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(trace.StringAttribute("path", path))
+
+ err = processBaseImage(path)
+ if err != nil {
+ return &os.PathError{Op: title, Path: path, Err: err}
+ }
+ return nil
+}
+
+// ProcessUtilityVMImage post-processes a utility VM image that has had its files extracted.
+// The files should have been extracted to \Files.
+func ProcessUtilityVMImage(ctx context.Context, path string) (err error) {
+ title := "hcsshim::ProcessUtilityVMImage"
+ ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(trace.StringAttribute("path", path))
+
+ err = processUtilityImage(path)
+ if err != nil {
+ return &os.PathError{Op: title, Path: path, Err: err}
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/wclayer/unpreparelayer.go b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/unpreparelayer.go
new file mode 100644
index 000000000..71b130c52
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/unpreparelayer.go
@@ -0,0 +1,25 @@
+package wclayer
+
+import (
+ "context"
+
+ "github.com/Microsoft/hcsshim/internal/hcserror"
+ "github.com/Microsoft/hcsshim/internal/oc"
+ "go.opencensus.io/trace"
+)
+
+// UnprepareLayer disables the filesystem filter for the read-write layer at
+// the given path.
+func UnprepareLayer(ctx context.Context, path string) (err error) {
+ title := "hcsshim::UnprepareLayer"
+ ctx, span := trace.StartSpan(ctx, title) //nolint:ineffassign,staticcheck
+ defer span.End()
+ defer func() { oc.SetSpanStatus(span, err) }()
+ span.AddAttributes(trace.StringAttribute("path", path))
+
+ err = unprepareLayer(&stdDriverInfo, path)
+ if err != nil {
+ return hcserror.New(err, title, "")
+ }
+ return nil
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/wclayer/wclayer.go b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/wclayer.go
new file mode 100644
index 000000000..9b1e06d50
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/wclayer.go
@@ -0,0 +1,35 @@
+// Package wclayer provides bindings to HCS's legacy layer management API and
+// provides a higher level interface around these calls for container layer
+// management.
+package wclayer
+
+import "github.com/Microsoft/go-winio/pkg/guid"
+
+//go:generate go run ../../mksyscall_windows.go -output zsyscall_windows.go wclayer.go
+
+//sys activateLayer(info *driverInfo, id string) (hr error) = vmcompute.ActivateLayer?
+//sys copyLayer(info *driverInfo, srcId string, dstId string, descriptors []WC_LAYER_DESCRIPTOR) (hr error) = vmcompute.CopyLayer?
+//sys createLayer(info *driverInfo, id string, parent string) (hr error) = vmcompute.CreateLayer?
+//sys createSandboxLayer(info *driverInfo, id string, parent uintptr, descriptors []WC_LAYER_DESCRIPTOR) (hr error) = vmcompute.CreateSandboxLayer?
+//sys expandSandboxSize(info *driverInfo, id string, size uint64) (hr error) = vmcompute.ExpandSandboxSize?
+//sys deactivateLayer(info *driverInfo, id string) (hr error) = vmcompute.DeactivateLayer?
+//sys destroyLayer(info *driverInfo, id string) (hr error) = vmcompute.DestroyLayer?
+//sys exportLayer(info *driverInfo, id string, path string, descriptors []WC_LAYER_DESCRIPTOR) (hr error) = vmcompute.ExportLayer?
+//sys getLayerMountPath(info *driverInfo, id string, length *uintptr, buffer *uint16) (hr error) = vmcompute.GetLayerMountPath?
+//sys getBaseImages(buffer **uint16) (hr error) = vmcompute.GetBaseImages?
+//sys importLayer(info *driverInfo, id string, path string, descriptors []WC_LAYER_DESCRIPTOR) (hr error) = vmcompute.ImportLayer?
+//sys layerExists(info *driverInfo, id string, exists *uint32) (hr error) = vmcompute.LayerExists?
+//sys nameToGuid(name string, guid *_guid) (hr error) = vmcompute.NameToGuid?
+//sys prepareLayer(info *driverInfo, id string, descriptors []WC_LAYER_DESCRIPTOR) (hr error) = vmcompute.PrepareLayer?
+//sys unprepareLayer(info *driverInfo, id string) (hr error) = vmcompute.UnprepareLayer?
+//sys processBaseImage(path string) (hr error) = vmcompute.ProcessBaseImage?
+//sys processUtilityImage(path string) (hr error) = vmcompute.ProcessUtilityImage?
+
+//sys grantVmAccess(vmid string, filepath string) (hr error) = vmcompute.GrantVmAccess?
+
+//sys openVirtualDisk(virtualStorageType *virtualStorageType, path string, virtualDiskAccessMask uint32, flags uint32, parameters *openVirtualDiskParameters, handle *syscall.Handle) (err error) [failretval != 0] = virtdisk.OpenVirtualDisk
+//sys attachVirtualDisk(handle syscall.Handle, sd uintptr, flags uint32, providerFlags uint32, params uintptr, overlapped uintptr) (err error) [failretval != 0] = virtdisk.AttachVirtualDisk
+
+//sys getDiskFreeSpaceEx(directoryName string, freeBytesAvailableToCaller *int64, totalNumberOfBytes *int64, totalNumberOfFreeBytes *int64) (err error) = GetDiskFreeSpaceExW
+
+type _guid = guid.GUID
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/wclayer/zsyscall_windows.go b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/zsyscall_windows.go
new file mode 100644
index 000000000..67f917f07
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/wclayer/zsyscall_windows.go
@@ -0,0 +1,569 @@
+// Code generated mksyscall_windows.exe DO NOT EDIT
+
+package wclayer
+
+import (
+ "syscall"
+ "unsafe"
+
+ "golang.org/x/sys/windows"
+)
+
+var _ unsafe.Pointer
+
+// Do the interface allocations only once for common
+// Errno values.
+const (
+ errnoERROR_IO_PENDING = 997
+)
+
+var (
+ errERROR_IO_PENDING error = syscall.Errno(errnoERROR_IO_PENDING)
+)
+
+// errnoErr returns common boxed Errno values, to prevent
+// allocations at runtime.
+func errnoErr(e syscall.Errno) error {
+ switch e {
+ case 0:
+ return nil
+ case errnoERROR_IO_PENDING:
+ return errERROR_IO_PENDING
+ }
+ // TODO: add more here, after collecting data on the common
+	// error values seen on Windows. (perhaps when running
+ // all.bat?)
+ return e
+}
+
+var (
+ modvmcompute = windows.NewLazySystemDLL("vmcompute.dll")
+ modvirtdisk = windows.NewLazySystemDLL("virtdisk.dll")
+ modkernel32 = windows.NewLazySystemDLL("kernel32.dll")
+
+ procActivateLayer = modvmcompute.NewProc("ActivateLayer")
+ procCopyLayer = modvmcompute.NewProc("CopyLayer")
+ procCreateLayer = modvmcompute.NewProc("CreateLayer")
+ procCreateSandboxLayer = modvmcompute.NewProc("CreateSandboxLayer")
+ procExpandSandboxSize = modvmcompute.NewProc("ExpandSandboxSize")
+ procDeactivateLayer = modvmcompute.NewProc("DeactivateLayer")
+ procDestroyLayer = modvmcompute.NewProc("DestroyLayer")
+ procExportLayer = modvmcompute.NewProc("ExportLayer")
+ procGetLayerMountPath = modvmcompute.NewProc("GetLayerMountPath")
+ procGetBaseImages = modvmcompute.NewProc("GetBaseImages")
+ procImportLayer = modvmcompute.NewProc("ImportLayer")
+ procLayerExists = modvmcompute.NewProc("LayerExists")
+ procNameToGuid = modvmcompute.NewProc("NameToGuid")
+ procPrepareLayer = modvmcompute.NewProc("PrepareLayer")
+ procUnprepareLayer = modvmcompute.NewProc("UnprepareLayer")
+ procProcessBaseImage = modvmcompute.NewProc("ProcessBaseImage")
+ procProcessUtilityImage = modvmcompute.NewProc("ProcessUtilityImage")
+ procGrantVmAccess = modvmcompute.NewProc("GrantVmAccess")
+ procOpenVirtualDisk = modvirtdisk.NewProc("OpenVirtualDisk")
+ procAttachVirtualDisk = modvirtdisk.NewProc("AttachVirtualDisk")
+ procGetDiskFreeSpaceExW = modkernel32.NewProc("GetDiskFreeSpaceExW")
+)
+
+func activateLayer(info *driverInfo, id string) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(id)
+ if hr != nil {
+ return
+ }
+ return _activateLayer(info, _p0)
+}
+
+func _activateLayer(info *driverInfo, id *uint16) (hr error) {
+ if hr = procActivateLayer.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procActivateLayer.Addr(), 2, uintptr(unsafe.Pointer(info)), uintptr(unsafe.Pointer(id)), 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func copyLayer(info *driverInfo, srcId string, dstId string, descriptors []WC_LAYER_DESCRIPTOR) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(srcId)
+ if hr != nil {
+ return
+ }
+ var _p1 *uint16
+ _p1, hr = syscall.UTF16PtrFromString(dstId)
+ if hr != nil {
+ return
+ }
+ return _copyLayer(info, _p0, _p1, descriptors)
+}
+
+func _copyLayer(info *driverInfo, srcId *uint16, dstId *uint16, descriptors []WC_LAYER_DESCRIPTOR) (hr error) {
+ var _p2 *WC_LAYER_DESCRIPTOR
+ if len(descriptors) > 0 {
+ _p2 = &descriptors[0]
+ }
+ if hr = procCopyLayer.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall6(procCopyLayer.Addr(), 5, uintptr(unsafe.Pointer(info)), uintptr(unsafe.Pointer(srcId)), uintptr(unsafe.Pointer(dstId)), uintptr(unsafe.Pointer(_p2)), uintptr(len(descriptors)), 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func createLayer(info *driverInfo, id string, parent string) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(id)
+ if hr != nil {
+ return
+ }
+ var _p1 *uint16
+ _p1, hr = syscall.UTF16PtrFromString(parent)
+ if hr != nil {
+ return
+ }
+ return _createLayer(info, _p0, _p1)
+}
+
+func _createLayer(info *driverInfo, id *uint16, parent *uint16) (hr error) {
+ if hr = procCreateLayer.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procCreateLayer.Addr(), 3, uintptr(unsafe.Pointer(info)), uintptr(unsafe.Pointer(id)), uintptr(unsafe.Pointer(parent)))
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func createSandboxLayer(info *driverInfo, id string, parent uintptr, descriptors []WC_LAYER_DESCRIPTOR) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(id)
+ if hr != nil {
+ return
+ }
+ return _createSandboxLayer(info, _p0, parent, descriptors)
+}
+
+func _createSandboxLayer(info *driverInfo, id *uint16, parent uintptr, descriptors []WC_LAYER_DESCRIPTOR) (hr error) {
+ var _p1 *WC_LAYER_DESCRIPTOR
+ if len(descriptors) > 0 {
+ _p1 = &descriptors[0]
+ }
+ if hr = procCreateSandboxLayer.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall6(procCreateSandboxLayer.Addr(), 5, uintptr(unsafe.Pointer(info)), uintptr(unsafe.Pointer(id)), uintptr(parent), uintptr(unsafe.Pointer(_p1)), uintptr(len(descriptors)), 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func expandSandboxSize(info *driverInfo, id string, size uint64) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(id)
+ if hr != nil {
+ return
+ }
+ return _expandSandboxSize(info, _p0, size)
+}
+
+func _expandSandboxSize(info *driverInfo, id *uint16, size uint64) (hr error) {
+ if hr = procExpandSandboxSize.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procExpandSandboxSize.Addr(), 3, uintptr(unsafe.Pointer(info)), uintptr(unsafe.Pointer(id)), uintptr(size))
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func deactivateLayer(info *driverInfo, id string) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(id)
+ if hr != nil {
+ return
+ }
+ return _deactivateLayer(info, _p0)
+}
+
+func _deactivateLayer(info *driverInfo, id *uint16) (hr error) {
+ if hr = procDeactivateLayer.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procDeactivateLayer.Addr(), 2, uintptr(unsafe.Pointer(info)), uintptr(unsafe.Pointer(id)), 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func destroyLayer(info *driverInfo, id string) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(id)
+ if hr != nil {
+ return
+ }
+ return _destroyLayer(info, _p0)
+}
+
+func _destroyLayer(info *driverInfo, id *uint16) (hr error) {
+ if hr = procDestroyLayer.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procDestroyLayer.Addr(), 2, uintptr(unsafe.Pointer(info)), uintptr(unsafe.Pointer(id)), 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func exportLayer(info *driverInfo, id string, path string, descriptors []WC_LAYER_DESCRIPTOR) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(id)
+ if hr != nil {
+ return
+ }
+ var _p1 *uint16
+ _p1, hr = syscall.UTF16PtrFromString(path)
+ if hr != nil {
+ return
+ }
+ return _exportLayer(info, _p0, _p1, descriptors)
+}
+
+func _exportLayer(info *driverInfo, id *uint16, path *uint16, descriptors []WC_LAYER_DESCRIPTOR) (hr error) {
+ var _p2 *WC_LAYER_DESCRIPTOR
+ if len(descriptors) > 0 {
+ _p2 = &descriptors[0]
+ }
+ if hr = procExportLayer.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall6(procExportLayer.Addr(), 5, uintptr(unsafe.Pointer(info)), uintptr(unsafe.Pointer(id)), uintptr(unsafe.Pointer(path)), uintptr(unsafe.Pointer(_p2)), uintptr(len(descriptors)), 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func getLayerMountPath(info *driverInfo, id string, length *uintptr, buffer *uint16) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(id)
+ if hr != nil {
+ return
+ }
+ return _getLayerMountPath(info, _p0, length, buffer)
+}
+
+func _getLayerMountPath(info *driverInfo, id *uint16, length *uintptr, buffer *uint16) (hr error) {
+ if hr = procGetLayerMountPath.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall6(procGetLayerMountPath.Addr(), 4, uintptr(unsafe.Pointer(info)), uintptr(unsafe.Pointer(id)), uintptr(unsafe.Pointer(length)), uintptr(unsafe.Pointer(buffer)), 0, 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func getBaseImages(buffer **uint16) (hr error) {
+ if hr = procGetBaseImages.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procGetBaseImages.Addr(), 1, uintptr(unsafe.Pointer(buffer)), 0, 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func importLayer(info *driverInfo, id string, path string, descriptors []WC_LAYER_DESCRIPTOR) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(id)
+ if hr != nil {
+ return
+ }
+ var _p1 *uint16
+ _p1, hr = syscall.UTF16PtrFromString(path)
+ if hr != nil {
+ return
+ }
+ return _importLayer(info, _p0, _p1, descriptors)
+}
+
+func _importLayer(info *driverInfo, id *uint16, path *uint16, descriptors []WC_LAYER_DESCRIPTOR) (hr error) {
+ var _p2 *WC_LAYER_DESCRIPTOR
+ if len(descriptors) > 0 {
+ _p2 = &descriptors[0]
+ }
+ if hr = procImportLayer.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall6(procImportLayer.Addr(), 5, uintptr(unsafe.Pointer(info)), uintptr(unsafe.Pointer(id)), uintptr(unsafe.Pointer(path)), uintptr(unsafe.Pointer(_p2)), uintptr(len(descriptors)), 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func layerExists(info *driverInfo, id string, exists *uint32) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(id)
+ if hr != nil {
+ return
+ }
+ return _layerExists(info, _p0, exists)
+}
+
+func _layerExists(info *driverInfo, id *uint16, exists *uint32) (hr error) {
+ if hr = procLayerExists.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procLayerExists.Addr(), 3, uintptr(unsafe.Pointer(info)), uintptr(unsafe.Pointer(id)), uintptr(unsafe.Pointer(exists)))
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func nameToGuid(name string, guid *_guid) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(name)
+ if hr != nil {
+ return
+ }
+ return _nameToGuid(_p0, guid)
+}
+
+func _nameToGuid(name *uint16, guid *_guid) (hr error) {
+ if hr = procNameToGuid.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procNameToGuid.Addr(), 2, uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(guid)), 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func prepareLayer(info *driverInfo, id string, descriptors []WC_LAYER_DESCRIPTOR) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(id)
+ if hr != nil {
+ return
+ }
+ return _prepareLayer(info, _p0, descriptors)
+}
+
+func _prepareLayer(info *driverInfo, id *uint16, descriptors []WC_LAYER_DESCRIPTOR) (hr error) {
+ var _p1 *WC_LAYER_DESCRIPTOR
+ if len(descriptors) > 0 {
+ _p1 = &descriptors[0]
+ }
+ if hr = procPrepareLayer.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall6(procPrepareLayer.Addr(), 4, uintptr(unsafe.Pointer(info)), uintptr(unsafe.Pointer(id)), uintptr(unsafe.Pointer(_p1)), uintptr(len(descriptors)), 0, 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func unprepareLayer(info *driverInfo, id string) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(id)
+ if hr != nil {
+ return
+ }
+ return _unprepareLayer(info, _p0)
+}
+
+func _unprepareLayer(info *driverInfo, id *uint16) (hr error) {
+ if hr = procUnprepareLayer.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procUnprepareLayer.Addr(), 2, uintptr(unsafe.Pointer(info)), uintptr(unsafe.Pointer(id)), 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func processBaseImage(path string) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(path)
+ if hr != nil {
+ return
+ }
+ return _processBaseImage(_p0)
+}
+
+func _processBaseImage(path *uint16) (hr error) {
+ if hr = procProcessBaseImage.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procProcessBaseImage.Addr(), 1, uintptr(unsafe.Pointer(path)), 0, 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func processUtilityImage(path string) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(path)
+ if hr != nil {
+ return
+ }
+ return _processUtilityImage(_p0)
+}
+
+func _processUtilityImage(path *uint16) (hr error) {
+ if hr = procProcessUtilityImage.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procProcessUtilityImage.Addr(), 1, uintptr(unsafe.Pointer(path)), 0, 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func grantVmAccess(vmid string, filepath string) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(vmid)
+ if hr != nil {
+ return
+ }
+ var _p1 *uint16
+ _p1, hr = syscall.UTF16PtrFromString(filepath)
+ if hr != nil {
+ return
+ }
+ return _grantVmAccess(_p0, _p1)
+}
+
+func _grantVmAccess(vmid *uint16, filepath *uint16) (hr error) {
+ if hr = procGrantVmAccess.Find(); hr != nil {
+ return
+ }
+ r0, _, _ := syscall.Syscall(procGrantVmAccess.Addr(), 2, uintptr(unsafe.Pointer(vmid)), uintptr(unsafe.Pointer(filepath)), 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func openVirtualDisk(virtualStorageType *virtualStorageType, path string, virtualDiskAccessMask uint32, flags uint32, parameters *openVirtualDiskParameters, handle *syscall.Handle) (err error) {
+ var _p0 *uint16
+ _p0, err = syscall.UTF16PtrFromString(path)
+ if err != nil {
+ return
+ }
+ return _openVirtualDisk(virtualStorageType, _p0, virtualDiskAccessMask, flags, parameters, handle)
+}
+
+func _openVirtualDisk(virtualStorageType *virtualStorageType, path *uint16, virtualDiskAccessMask uint32, flags uint32, parameters *openVirtualDiskParameters, handle *syscall.Handle) (err error) {
+ r1, _, e1 := syscall.Syscall6(procOpenVirtualDisk.Addr(), 6, uintptr(unsafe.Pointer(virtualStorageType)), uintptr(unsafe.Pointer(path)), uintptr(virtualDiskAccessMask), uintptr(flags), uintptr(unsafe.Pointer(parameters)), uintptr(unsafe.Pointer(handle)))
+ if r1 != 0 {
+ if e1 != 0 {
+ err = errnoErr(e1)
+ } else {
+ err = syscall.EINVAL
+ }
+ }
+ return
+}
+
+func attachVirtualDisk(handle syscall.Handle, sd uintptr, flags uint32, providerFlags uint32, params uintptr, overlapped uintptr) (err error) {
+ r1, _, e1 := syscall.Syscall6(procAttachVirtualDisk.Addr(), 6, uintptr(handle), uintptr(sd), uintptr(flags), uintptr(providerFlags), uintptr(params), uintptr(overlapped))
+ if r1 != 0 {
+ if e1 != 0 {
+ err = errnoErr(e1)
+ } else {
+ err = syscall.EINVAL
+ }
+ }
+ return
+}
+
+func getDiskFreeSpaceEx(directoryName string, freeBytesAvailableToCaller *int64, totalNumberOfBytes *int64, totalNumberOfFreeBytes *int64) (err error) {
+ var _p0 *uint16
+ _p0, err = syscall.UTF16PtrFromString(directoryName)
+ if err != nil {
+ return
+ }
+ return _getDiskFreeSpaceEx(_p0, freeBytesAvailableToCaller, totalNumberOfBytes, totalNumberOfFreeBytes)
+}
+
+func _getDiskFreeSpaceEx(directoryName *uint16, freeBytesAvailableToCaller *int64, totalNumberOfBytes *int64, totalNumberOfFreeBytes *int64) (err error) {
+ r1, _, e1 := syscall.Syscall6(procGetDiskFreeSpaceExW.Addr(), 4, uintptr(unsafe.Pointer(directoryName)), uintptr(unsafe.Pointer(freeBytesAvailableToCaller)), uintptr(unsafe.Pointer(totalNumberOfBytes)), uintptr(unsafe.Pointer(totalNumberOfFreeBytes)), 0, 0)
+ if r1 == 0 {
+ if e1 != 0 {
+ err = errnoErr(e1)
+ } else {
+ err = syscall.EINVAL
+ }
+ }
+ return
+}
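Every vmcompute stub in the generated file above applies the same error convention: the raw return value is an HRESULT, a negative value (sign bit set) means failure, and HRESULTs in the Win32 facility (`0x8007xxxx`) have their low 16 bits extracted so the resulting `syscall.Errno` matches the original Win32 error number. The decoding is pure arithmetic and runs anywhere; `hresultToCode` below is an illustrative name, not an hcsshim function:

```go
package main

import "fmt"

// hresultToCode mimics the error decoding in the generated stubs:
// a non-negative HRESULT is success (returned as 0 here); a failed
// HRESULT in FACILITY_WIN32 (facility 0x0007) is unmasked back to the
// plain Win32 error code via the low 16 bits.
func hresultToCode(r0 uintptr) uintptr {
	if int32(r0) >= 0 {
		return 0 // success: S_OK or another non-negative HRESULT
	}
	if r0&0x1fff0000 == 0x00070000 {
		r0 &= 0xffff // HRESULT_FROM_WIN32(code) -> code
	}
	return r0
}

func main() {
	// E_ACCESSDENIED (0x80070005) == HRESULT_FROM_WIN32(ERROR_ACCESS_DENIED=5)
	fmt.Println(hresultToCode(0x80070005)) // 5
	// A failure outside the Win32 facility passes through unchanged.
	fmt.Printf("%#x\n", hresultToCode(0x80004005)) // 0x80004005 (E_FAIL)
	fmt.Println(hresultToCode(0)) // 0 (success)
}
```

The unmasking is what lets callers compare the boxed `syscall.Errno` against ordinary Win32 constants such as `ERROR_ACCESS_DENIED` rather than HRESULT values.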
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/winapi/console.go b/vendor/github.com/Microsoft/hcsshim/internal/winapi/console.go
new file mode 100644
index 000000000..def952541
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/winapi/console.go
@@ -0,0 +1,44 @@
+package winapi
+
+import (
+ "unsafe"
+
+ "golang.org/x/sys/windows"
+)
+
+const PSEUDOCONSOLE_INHERIT_CURSOR = 0x1
+
+// CreatePseudoConsole creates a Windows pseudo console.
+func CreatePseudoConsole(size windows.Coord, hInput windows.Handle, hOutput windows.Handle, dwFlags uint32, hpcon *windows.Handle) error {
+ // We need this wrapper as the function takes a COORD struct and not a pointer to one, so we need to cast to something beforehand.
+	return createPseudoConsole(*((*uint32)(unsafe.Pointer(&size))), hInput, hOutput, dwFlags, hpcon)
+}
+
+// ResizePseudoConsole resizes the internal buffers of the pseudo console to the width and height specified in `size`.
+func ResizePseudoConsole(hpcon windows.Handle, size windows.Coord) error {
+ // We need this wrapper as the function takes a COORD struct and not a pointer to one, so we need to cast to something beforehand.
+ return resizePseudoConsole(hpcon, *((*uint32)(unsafe.Pointer(&size))))
+}
+
+// HRESULT WINAPI CreatePseudoConsole(
+// _In_ COORD size,
+// _In_ HANDLE hInput,
+// _In_ HANDLE hOutput,
+// _In_ DWORD dwFlags,
+// _Out_ HPCON* phPC
+// );
+//
+//sys createPseudoConsole(size uint32, hInput windows.Handle, hOutput windows.Handle, dwFlags uint32, hpcon *windows.Handle) (hr error) = kernel32.CreatePseudoConsole
+
+// void WINAPI ClosePseudoConsole(
+// _In_ HPCON hPC
+// );
+//
+//sys ClosePseudoConsole(hpc windows.Handle) = kernel32.ClosePseudoConsole
+
+// HRESULT WINAPI ResizePseudoConsole(
+// _In_ HPCON hPC ,
+// _In_ COORD size
+// );
+//
+//sys resizePseudoConsole(hPc windows.Handle, size uint32) (hr error) = kernel32.ResizePseudoConsole
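The wrappers above cast a `windows.Coord` (two 16-bit fields) to a `uint32` because `CreatePseudoConsole` takes the COORD struct by value in a single register-sized argument, while the generated stub only accepts integer arguments. The reinterpretation itself is plain `unsafe` pointer punning and can be shown with a local struct (`coord` below is a stand-in for `windows.Coord`):

```go
package main

import (
	"fmt"
	"unsafe"
)

// coord mirrors the layout of windows.Coord: two int16s, 4 bytes total.
type coord struct {
	X int16
	Y int16
}

// packCoord performs the same reinterpretation the wrappers use:
// read the 4-byte struct as a single uint32 without copying fields.
func packCoord(c coord) uint32 {
	return *(*uint32)(unsafe.Pointer(&c))
}

func main() {
	c := coord{X: 80, Y: 25}
	packed := packCoord(c)
	// On a little-endian machine (all platforms Windows supports),
	// X occupies the low 16 bits and Y the high 16 bits.
	fmt.Println(packed&0xffff, packed>>16) // 80 25
}
```

This only round-trips correctly because the struct is exactly 4 bytes with no padding; adding a field would silently break the cast.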
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/winapi/devices.go b/vendor/github.com/Microsoft/hcsshim/internal/winapi/devices.go
new file mode 100644
index 000000000..df28ea242
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/winapi/devices.go
@@ -0,0 +1,13 @@
+package winapi
+
+import "github.com/Microsoft/go-winio/pkg/guid"
+
+//sys CMGetDeviceIDListSize(pulLen *uint32, pszFilter *byte, uFlags uint32) (hr error) = cfgmgr32.CM_Get_Device_ID_List_SizeA
+//sys CMGetDeviceIDList(pszFilter *byte, buffer *byte, bufferLen uint32, uFlags uint32) (hr error) = cfgmgr32.CM_Get_Device_ID_ListA
+//sys CMLocateDevNode(pdnDevInst *uint32, pDeviceID string, uFlags uint32) (hr error) = cfgmgr32.CM_Locate_DevNodeW
+//sys CMGetDevNodeProperty(dnDevInst uint32, propertyKey *DevPropKey, propertyType *uint32, propertyBuffer *uint16, propertyBufferSize *uint32, uFlags uint32) (hr error) = cfgmgr32.CM_Get_DevNode_PropertyW
+
+type DevPropKey struct {
+ Fmtid guid.GUID
+ Pid uint32
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/winapi/errors.go b/vendor/github.com/Microsoft/hcsshim/internal/winapi/errors.go
new file mode 100644
index 000000000..4e80ef68c
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/winapi/errors.go
@@ -0,0 +1,15 @@
+package winapi
+
+import "syscall"
+
+//sys RtlNtStatusToDosError(status uint32) (winerr error) = ntdll.RtlNtStatusToDosError
+
+const (
+ STATUS_REPARSE_POINT_ENCOUNTERED = 0xC000050B
+ ERROR_NO_MORE_ITEMS = 0x103
+ ERROR_MORE_DATA syscall.Errno = 234
+)
+
+func NTSuccess(status uint32) bool {
+ return status == 0
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/winapi/filesystem.go b/vendor/github.com/Microsoft/hcsshim/internal/winapi/filesystem.go
new file mode 100644
index 000000000..7ce52afd5
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/winapi/filesystem.go
@@ -0,0 +1,110 @@
+package winapi
+
+//sys NtCreateFile(handle *uintptr, accessMask uint32, oa *ObjectAttributes, iosb *IOStatusBlock, allocationSize *uint64, fileAttributes uint32, shareAccess uint32, createDisposition uint32, createOptions uint32, eaBuffer *byte, eaLength uint32) (status uint32) = ntdll.NtCreateFile
+//sys NtSetInformationFile(handle uintptr, iosb *IOStatusBlock, information uintptr, length uint32, class uint32) (status uint32) = ntdll.NtSetInformationFile
+
+//sys NtOpenDirectoryObject(handle *uintptr, accessMask uint32, oa *ObjectAttributes) (status uint32) = ntdll.NtOpenDirectoryObject
+//sys NtQueryDirectoryObject(handle uintptr, buffer *byte, length uint32, singleEntry bool, restartScan bool, context *uint32, returnLength *uint32) (status uint32) = ntdll.NtQueryDirectoryObject
+
+const (
+ FileLinkInformationClass = 11
+ FileDispositionInformationExClass = 64
+
+ FILE_READ_ATTRIBUTES = 0x0080
+ FILE_WRITE_ATTRIBUTES = 0x0100
+ DELETE = 0x10000
+
+ FILE_OPEN = 1
+ FILE_CREATE = 2
+
+ FILE_LIST_DIRECTORY = 0x00000001
+ FILE_DIRECTORY_FILE = 0x00000001
+ FILE_SYNCHRONOUS_IO_NONALERT = 0x00000020
+ FILE_OPEN_FOR_BACKUP_INTENT = 0x00004000
+ FILE_OPEN_REPARSE_POINT = 0x00200000
+
+ FILE_DISPOSITION_DELETE = 0x00000001
+
+ OBJ_DONT_REPARSE = 0x1000
+
+ STATUS_MORE_ENTRIES = 0x105
+ STATUS_NO_MORE_ENTRIES = 0x8000001a
+)
+
+// Select entries from FILE_INFO_BY_HANDLE_CLASS.
+//
+// C declaration:
+// typedef enum _FILE_INFO_BY_HANDLE_CLASS {
+// FileBasicInfo,
+// FileStandardInfo,
+// FileNameInfo,
+// FileRenameInfo,
+// FileDispositionInfo,
+// FileAllocationInfo,
+// FileEndOfFileInfo,
+// FileStreamInfo,
+// FileCompressionInfo,
+// FileAttributeTagInfo,
+// FileIdBothDirectoryInfo,
+// FileIdBothDirectoryRestartInfo,
+// FileIoPriorityHintInfo,
+// FileRemoteProtocolInfo,
+// FileFullDirectoryInfo,
+// FileFullDirectoryRestartInfo,
+// FileStorageInfo,
+// FileAlignmentInfo,
+// FileIdInfo,
+// FileIdExtdDirectoryInfo,
+// FileIdExtdDirectoryRestartInfo,
+// FileDispositionInfoEx,
+// FileRenameInfoEx,
+// FileCaseSensitiveInfo,
+// FileNormalizedNameInfo,
+// MaximumFileInfoByHandleClass
+// } FILE_INFO_BY_HANDLE_CLASS, *PFILE_INFO_BY_HANDLE_CLASS;
+//
+// Documentation: https://docs.microsoft.com/en-us/windows/win32/api/minwinbase/ne-minwinbase-file_info_by_handle_class
+const (
+ FileIdInfo = 18
+)
+
+type FileDispositionInformationEx struct {
+ Flags uintptr
+}
+
+type IOStatusBlock struct {
+ Status, Information uintptr
+}
+
+type ObjectAttributes struct {
+ Length uintptr
+ RootDirectory uintptr
+ ObjectName *UnicodeString
+ Attributes uintptr
+ SecurityDescriptor uintptr
+ SecurityQoS uintptr
+}
+
+type ObjectDirectoryInformation struct {
+ Name UnicodeString
+ TypeName UnicodeString
+}
+
+type FileLinkInformation struct {
+ ReplaceIfExists bool
+ RootDirectory uintptr
+ FileNameLength uint32
+ FileName [1]uint16
+}
+
+// C declaration:
+// typedef struct _FILE_ID_INFO {
+// ULONGLONG VolumeSerialNumber;
+// FILE_ID_128 FileId;
+// } FILE_ID_INFO, *PFILE_ID_INFO;
+//
+// Documentation: https://docs.microsoft.com/en-us/windows/win32/api/winbase/ns-winbase-file_id_info
+type FILE_ID_INFO struct {
+ VolumeSerialNumber uint64
+ FileID [16]byte
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/winapi/jobobject.go b/vendor/github.com/Microsoft/hcsshim/internal/winapi/jobobject.go
new file mode 100644
index 000000000..7eb13f8f0
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/winapi/jobobject.go
@@ -0,0 +1,218 @@
+package winapi
+
+import (
+ "unsafe"
+
+ "golang.org/x/sys/windows"
+)
+
+// Messages that can be received from an assigned io completion port.
+// https://docs.microsoft.com/en-us/windows/win32/api/winnt/ns-winnt-jobobject_associate_completion_port
+const (
+ JOB_OBJECT_MSG_END_OF_JOB_TIME uint32 = 1
+ JOB_OBJECT_MSG_END_OF_PROCESS_TIME uint32 = 2
+ JOB_OBJECT_MSG_ACTIVE_PROCESS_LIMIT uint32 = 3
+ JOB_OBJECT_MSG_ACTIVE_PROCESS_ZERO uint32 = 4
+ JOB_OBJECT_MSG_NEW_PROCESS uint32 = 6
+ JOB_OBJECT_MSG_EXIT_PROCESS uint32 = 7
+ JOB_OBJECT_MSG_ABNORMAL_EXIT_PROCESS uint32 = 8
+ JOB_OBJECT_MSG_PROCESS_MEMORY_LIMIT uint32 = 9
+ JOB_OBJECT_MSG_JOB_MEMORY_LIMIT uint32 = 10
+ JOB_OBJECT_MSG_NOTIFICATION_LIMIT uint32 = 11
+)
+
+// Access rights for creating or opening job objects.
+//
+// https://docs.microsoft.com/en-us/windows/win32/procthread/job-object-security-and-access-rights
+const (
+ JOB_OBJECT_QUERY = 0x0004
+ JOB_OBJECT_ALL_ACCESS = 0x1F001F
+)
+
+// IO limit flags
+//
+// https://docs.microsoft.com/en-us/windows/win32/api/jobapi2/ns-jobapi2-jobobject_io_rate_control_information
+const JOB_OBJECT_IO_RATE_CONTROL_ENABLE = 0x1
+
+const JOBOBJECT_IO_ATTRIBUTION_CONTROL_ENABLE uint32 = 0x1
+
+// https://docs.microsoft.com/en-us/windows/win32/api/winnt/ns-winnt-jobobject_cpu_rate_control_information
+const (
+ JOB_OBJECT_CPU_RATE_CONTROL_ENABLE uint32 = 1 << iota
+ JOB_OBJECT_CPU_RATE_CONTROL_WEIGHT_BASED
+ JOB_OBJECT_CPU_RATE_CONTROL_HARD_CAP
+ JOB_OBJECT_CPU_RATE_CONTROL_NOTIFY
+ JOB_OBJECT_CPU_RATE_CONTROL_MIN_MAX_RATE
+)
+
+// JobObjectInformationClass values. Used for a call to QueryInformationJobObject
+//
+// https://docs.microsoft.com/en-us/windows/win32/api/jobapi2/nf-jobapi2-queryinformationjobobject
+const (
+ JobObjectBasicAccountingInformation uint32 = 1
+ JobObjectBasicProcessIdList uint32 = 3
+ JobObjectBasicAndIoAccountingInformation uint32 = 8
+ JobObjectLimitViolationInformation uint32 = 13
+ JobObjectMemoryUsageInformation uint32 = 28
+ JobObjectNotificationLimitInformation2 uint32 = 33
+ JobObjectIoAttribution uint32 = 42
+)
+
+// https://docs.microsoft.com/en-us/windows/win32/api/winnt/ns-winnt-jobobject_basic_limit_information
+type JOBOBJECT_BASIC_LIMIT_INFORMATION struct {
+ PerProcessUserTimeLimit int64
+ PerJobUserTimeLimit int64
+ LimitFlags uint32
+ MinimumWorkingSetSize uintptr
+ MaximumWorkingSetSize uintptr
+ ActiveProcessLimit uint32
+ Affinity uintptr
+ PriorityClass uint32
+ SchedulingClass uint32
+}
+
+// https://docs.microsoft.com/en-us/windows/win32/api/winnt/ns-winnt-jobobject_cpu_rate_control_information
+type JOBOBJECT_CPU_RATE_CONTROL_INFORMATION struct {
+ ControlFlags uint32
+ Value uint32
+}
+
+// https://docs.microsoft.com/en-us/windows/win32/api/jobapi2/ns-jobapi2-jobobject_io_rate_control_information
+type JOBOBJECT_IO_RATE_CONTROL_INFORMATION struct {
+ MaxIops int64
+ MaxBandwidth int64
+ ReservationIops int64
+ BaseIOSize uint32
+ VolumeName string
+ ControlFlags uint32
+}
+
+// https://docs.microsoft.com/en-us/windows/win32/api/winnt/ns-winnt-jobobject_basic_process_id_list
+type JOBOBJECT_BASIC_PROCESS_ID_LIST struct {
+ NumberOfAssignedProcesses uint32
+ NumberOfProcessIdsInList uint32
+ ProcessIdList [1]uintptr
+}
+
+// AllPids returns all the process Ids in the job object.
+func (p *JOBOBJECT_BASIC_PROCESS_ID_LIST) AllPids() []uintptr {
+ return (*[(1 << 27) - 1]uintptr)(unsafe.Pointer(&p.ProcessIdList[0]))[:p.NumberOfProcessIdsInList:p.NumberOfProcessIdsInList]
+}
+
+// https://docs.microsoft.com/en-us/windows/win32/api/winnt/ns-winnt-jobobject_basic_accounting_information
+type JOBOBJECT_BASIC_ACCOUNTING_INFORMATION struct {
+ TotalUserTime int64
+ TotalKernelTime int64
+ ThisPeriodTotalUserTime int64
+ ThisPeriodTotalKernelTime int64
+ TotalPageFaultCount uint32
+ TotalProcesses uint32
+ ActiveProcesses uint32
+ TotalTerminateProcesses uint32
+}
+
+// https://docs.microsoft.com/en-us/windows/win32/api/winnt/ns-winnt-jobobject_basic_and_io_accounting_information
+type JOBOBJECT_BASIC_AND_IO_ACCOUNTING_INFORMATION struct {
+ BasicInfo JOBOBJECT_BASIC_ACCOUNTING_INFORMATION
+ IoInfo windows.IO_COUNTERS
+}
+
+// typedef struct _JOBOBJECT_MEMORY_USAGE_INFORMATION {
+// ULONG64 JobMemory;
+// ULONG64 PeakJobMemoryUsed;
+// } JOBOBJECT_MEMORY_USAGE_INFORMATION, *PJOBOBJECT_MEMORY_USAGE_INFORMATION;
+//
+type JOBOBJECT_MEMORY_USAGE_INFORMATION struct {
+ JobMemory uint64
+ PeakJobMemoryUsed uint64
+}
+
+// typedef struct _JOBOBJECT_IO_ATTRIBUTION_STATS {
+// ULONG_PTR IoCount;
+// ULONGLONG TotalNonOverlappedQueueTime;
+// ULONGLONG TotalNonOverlappedServiceTime;
+// ULONGLONG TotalSize;
+// } JOBOBJECT_IO_ATTRIBUTION_STATS, *PJOBOBJECT_IO_ATTRIBUTION_STATS;
+//
+type JOBOBJECT_IO_ATTRIBUTION_STATS struct {
+ IoCount uintptr
+ TotalNonOverlappedQueueTime uint64
+ TotalNonOverlappedServiceTime uint64
+ TotalSize uint64
+}
+
+// typedef struct _JOBOBJECT_IO_ATTRIBUTION_INFORMATION {
+// ULONG ControlFlags;
+// JOBOBJECT_IO_ATTRIBUTION_STATS ReadStats;
+// JOBOBJECT_IO_ATTRIBUTION_STATS WriteStats;
+// } JOBOBJECT_IO_ATTRIBUTION_INFORMATION, *PJOBOBJECT_IO_ATTRIBUTION_INFORMATION;
+//
+type JOBOBJECT_IO_ATTRIBUTION_INFORMATION struct {
+ ControlFlags uint32
+ ReadStats JOBOBJECT_IO_ATTRIBUTION_STATS
+ WriteStats JOBOBJECT_IO_ATTRIBUTION_STATS
+}
+
+// https://docs.microsoft.com/en-us/windows/win32/api/winnt/ns-winnt-jobobject_associate_completion_port
+type JOBOBJECT_ASSOCIATE_COMPLETION_PORT struct {
+ CompletionKey windows.Handle
+ CompletionPort windows.Handle
+}
+
+// BOOL IsProcessInJob(
+// HANDLE ProcessHandle,
+// HANDLE JobHandle,
+// PBOOL Result
+// );
+//
+//sys IsProcessInJob(procHandle windows.Handle, jobHandle windows.Handle, result *int32) (err error) = kernel32.IsProcessInJob
+
+// BOOL QueryInformationJobObject(
+// HANDLE hJob,
+// JOBOBJECTINFOCLASS JobObjectInformationClass,
+// LPVOID lpJobObjectInformation,
+// DWORD cbJobObjectInformationLength,
+// LPDWORD lpReturnLength
+// );
+//
+//sys QueryInformationJobObject(jobHandle windows.Handle, infoClass uint32, jobObjectInfo unsafe.Pointer, jobObjectInformationLength uint32, lpReturnLength *uint32) (err error) = kernel32.QueryInformationJobObject
+
+// HANDLE OpenJobObjectW(
+// DWORD dwDesiredAccess,
+// BOOL bInheritHandle,
+// LPCWSTR lpName
+// );
+//
+//sys OpenJobObject(desiredAccess uint32, inheritHandle bool, lpName *uint16) (handle windows.Handle, err error) = kernel32.OpenJobObjectW
+
+// DWORD SetIoRateControlInformationJobObject(
+// HANDLE hJob,
+// JOBOBJECT_IO_RATE_CONTROL_INFORMATION *IoRateControlInfo
+// );
+//
+//sys SetIoRateControlInformationJobObject(jobHandle windows.Handle, ioRateControlInfo *JOBOBJECT_IO_RATE_CONTROL_INFORMATION) (ret uint32, err error) = kernel32.SetIoRateControlInformationJobObject
+
+// DWORD QueryIoRateControlInformationJobObject(
+// HANDLE hJob,
+// PCWSTR VolumeName,
+// JOBOBJECT_IO_RATE_CONTROL_INFORMATION **InfoBlocks,
+// ULONG *InfoBlockCount
+// );
+//sys QueryIoRateControlInformationJobObject(jobHandle windows.Handle, volumeName *uint16, ioRateControlInfo **JOBOBJECT_IO_RATE_CONTROL_INFORMATION, infoBlockCount *uint32) (ret uint32, err error) = kernel32.QueryIoRateControlInformationJobObject
+
+// NTSTATUS
+// NtOpenJobObject (
+// _Out_ PHANDLE JobHandle,
+// _In_ ACCESS_MASK DesiredAccess,
+// _In_ POBJECT_ATTRIBUTES ObjectAttributes
+// );
+//sys NtOpenJobObject(jobHandle *windows.Handle, desiredAccess uint32, objAttributes *ObjectAttributes) (status uint32) = ntdll.NtOpenJobObject
+
+// NTSTATUS
+// NTAPI
+// NtCreateJobObject (
+// _Out_ PHANDLE JobHandle,
+// _In_ ACCESS_MASK DesiredAccess,
+// _In_opt_ POBJECT_ATTRIBUTES ObjectAttributes
+// );
+//sys NtCreateJobObject(jobHandle *windows.Handle, desiredAccess uint32, objAttributes *ObjectAttributes) (status uint32) = ntdll.NtCreateJobObject
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/winapi/logon.go b/vendor/github.com/Microsoft/hcsshim/internal/winapi/logon.go
new file mode 100644
index 000000000..b6e7cfd46
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/winapi/logon.go
@@ -0,0 +1,30 @@
+package winapi
+
+// BOOL LogonUserA(
+// LPCWSTR lpszUsername,
+// LPCWSTR lpszDomain,
+// LPCWSTR lpszPassword,
+// DWORD dwLogonType,
+// DWORD dwLogonProvider,
+// PHANDLE phToken
+// );
+//
+//sys LogonUser(username *uint16, domain *uint16, password *uint16, logonType uint32, logonProvider uint32, token *windows.Token) (err error) = advapi32.LogonUserW
+
+// Logon types
+const (
+ LOGON32_LOGON_INTERACTIVE uint32 = 2
+ LOGON32_LOGON_NETWORK uint32 = 3
+ LOGON32_LOGON_BATCH uint32 = 4
+ LOGON32_LOGON_SERVICE uint32 = 5
+ LOGON32_LOGON_UNLOCK uint32 = 7
+ LOGON32_LOGON_NETWORK_CLEARTEXT uint32 = 8
+ LOGON32_LOGON_NEW_CREDENTIALS uint32 = 9
+)
+
+// Logon providers
+const (
+ LOGON32_PROVIDER_DEFAULT uint32 = 0
+ LOGON32_PROVIDER_WINNT40 uint32 = 2
+ LOGON32_PROVIDER_WINNT50 uint32 = 3
+)
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/winapi/memory.go b/vendor/github.com/Microsoft/hcsshim/internal/winapi/memory.go
new file mode 100644
index 000000000..53f62948c
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/winapi/memory.go
@@ -0,0 +1,4 @@
+package winapi
+
+//sys LocalAlloc(flags uint32, size int) (ptr uintptr) = kernel32.LocalAlloc
+//sys LocalFree(ptr uintptr) = kernel32.LocalFree
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/winapi/net.go b/vendor/github.com/Microsoft/hcsshim/internal/winapi/net.go
new file mode 100644
index 000000000..f37910024
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/winapi/net.go
@@ -0,0 +1,3 @@
+package winapi
+
+//sys SetJobCompartmentId(handle windows.Handle, compartmentId uint32) (win32Err error) = iphlpapi.SetJobCompartmentId
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/winapi/path.go b/vendor/github.com/Microsoft/hcsshim/internal/winapi/path.go
new file mode 100644
index 000000000..908920e87
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/winapi/path.go
@@ -0,0 +1,11 @@
+package winapi
+
+// DWORD SearchPathW(
+// LPCWSTR lpPath,
+// LPCWSTR lpFileName,
+// LPCWSTR lpExtension,
+// DWORD nBufferLength,
+// LPWSTR lpBuffer,
+// LPWSTR *lpFilePart
+// );
+//sys SearchPath(lpPath *uint16, lpFileName *uint16, lpExtension *uint16, nBufferLength uint32, lpBuffer *uint16, lpFilePath *uint16) (size uint32, err error) = kernel32.SearchPathW
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/winapi/process.go b/vendor/github.com/Microsoft/hcsshim/internal/winapi/process.go
new file mode 100644
index 000000000..222529f43
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/winapi/process.go
@@ -0,0 +1,65 @@
+package winapi
+
+const PROCESS_ALL_ACCESS uint32 = 2097151
+
+const (
+ PROC_THREAD_ATTRIBUTE_PSEUDOCONSOLE = 0x20016
+ PROC_THREAD_ATTRIBUTE_JOB_LIST = 0x2000D
+)
+
+// ProcessVmCounters corresponds to the _VM_COUNTERS_EX and _VM_COUNTERS_EX2 structures.
+const ProcessVmCounters = 3
+
+// __kernel_entry NTSTATUS NtQueryInformationProcess(
+// [in] HANDLE ProcessHandle,
+// [in] PROCESSINFOCLASS ProcessInformationClass,
+// [out] PVOID ProcessInformation,
+// [in] ULONG ProcessInformationLength,
+// [out, optional] PULONG ReturnLength
+// );
+//
+//sys NtQueryInformationProcess(processHandle windows.Handle, processInfoClass uint32, processInfo unsafe.Pointer, processInfoLength uint32, returnLength *uint32) (status uint32) = ntdll.NtQueryInformationProcess
+
+// typedef struct _VM_COUNTERS_EX
+// {
+// SIZE_T PeakVirtualSize;
+// SIZE_T VirtualSize;
+// ULONG PageFaultCount;
+// SIZE_T PeakWorkingSetSize;
+// SIZE_T WorkingSetSize;
+// SIZE_T QuotaPeakPagedPoolUsage;
+// SIZE_T QuotaPagedPoolUsage;
+// SIZE_T QuotaPeakNonPagedPoolUsage;
+// SIZE_T QuotaNonPagedPoolUsage;
+// SIZE_T PagefileUsage;
+// SIZE_T PeakPagefileUsage;
+// SIZE_T PrivateUsage;
+// } VM_COUNTERS_EX, *PVM_COUNTERS_EX;
+//
+type VM_COUNTERS_EX struct {
+ PeakVirtualSize uintptr
+ VirtualSize uintptr
+ PageFaultCount uint32
+ PeakWorkingSetSize uintptr
+ WorkingSetSize uintptr
+ QuotaPeakPagedPoolUsage uintptr
+ QuotaPagedPoolUsage uintptr
+ QuotaPeakNonPagedPoolUsage uintptr
+ QuotaNonPagedPoolUsage uintptr
+ PagefileUsage uintptr
+ PeakPagefileUsage uintptr
+ PrivateUsage uintptr
+}
+
+// typedef struct _VM_COUNTERS_EX2
+// {
+// VM_COUNTERS_EX CountersEx;
+// SIZE_T PrivateWorkingSetSize;
+// SIZE_T SharedCommitUsage;
+// } VM_COUNTERS_EX2, *PVM_COUNTERS_EX2;
+//
+type VM_COUNTERS_EX2 struct {
+ CountersEx VM_COUNTERS_EX
+ PrivateWorkingSetSize uintptr
+ SharedCommitUsage uintptr
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/winapi/processor.go b/vendor/github.com/Microsoft/hcsshim/internal/winapi/processor.go
new file mode 100644
index 000000000..ce79ac2cd
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/winapi/processor.go
@@ -0,0 +1,7 @@
+package winapi
+
+// Get count from all processor groups.
+// https://docs.microsoft.com/en-us/windows/win32/procthread/processor-groups
+const ALL_PROCESSOR_GROUPS = 0xFFFF
+
+//sys GetActiveProcessorCount(groupNumber uint16) (amount uint32) = kernel32.GetActiveProcessorCount
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/winapi/system.go b/vendor/github.com/Microsoft/hcsshim/internal/winapi/system.go
new file mode 100644
index 000000000..78fe01a4b
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/winapi/system.go
@@ -0,0 +1,53 @@
+package winapi
+
+import "golang.org/x/sys/windows"
+
+const SystemProcessInformation = 5
+
+const STATUS_INFO_LENGTH_MISMATCH = 0xC0000004
+
+// __kernel_entry NTSTATUS NtQuerySystemInformation(
+// SYSTEM_INFORMATION_CLASS SystemInformationClass,
+// PVOID SystemInformation,
+// ULONG SystemInformationLength,
+// PULONG ReturnLength
+// );
+//
+//sys NtQuerySystemInformation(systemInfoClass int, systemInformation unsafe.Pointer, systemInfoLength uint32, returnLength *uint32) (status uint32) = ntdll.NtQuerySystemInformation
+
+type SYSTEM_PROCESS_INFORMATION struct {
+ NextEntryOffset uint32 // ULONG
+ NumberOfThreads uint32 // ULONG
+ WorkingSetPrivateSize int64 // LARGE_INTEGER
+ HardFaultCount uint32 // ULONG
+ NumberOfThreadsHighWatermark uint32 // ULONG
+ CycleTime uint64 // ULONGLONG
+ CreateTime int64 // LARGE_INTEGER
+ UserTime int64 // LARGE_INTEGER
+ KernelTime int64 // LARGE_INTEGER
+ ImageName UnicodeString // UNICODE_STRING
+ BasePriority int32 // KPRIORITY
+ UniqueProcessID windows.Handle // HANDLE
+ InheritedFromUniqueProcessID windows.Handle // HANDLE
+ HandleCount uint32 // ULONG
+ SessionID uint32 // ULONG
+ UniqueProcessKey *uint32 // ULONG_PTR
+ PeakVirtualSize uintptr // SIZE_T
+ VirtualSize uintptr // SIZE_T
+ PageFaultCount uint32 // ULONG
+ PeakWorkingSetSize uintptr // SIZE_T
+ WorkingSetSize uintptr // SIZE_T
+ QuotaPeakPagedPoolUsage uintptr // SIZE_T
+ QuotaPagedPoolUsage uintptr // SIZE_T
+ QuotaPeakNonPagedPoolUsage uintptr // SIZE_T
+ QuotaNonPagedPoolUsage uintptr // SIZE_T
+ PagefileUsage uintptr // SIZE_T
+ PeakPagefileUsage uintptr // SIZE_T
+ PrivatePageCount uintptr // SIZE_T
+ ReadOperationCount int64 // LARGE_INTEGER
+ WriteOperationCount int64 // LARGE_INTEGER
+ OtherOperationCount int64 // LARGE_INTEGER
+ ReadTransferCount int64 // LARGE_INTEGER
+ WriteTransferCount int64 // LARGE_INTEGER
+ OtherTransferCount int64 // LARGE_INTEGER
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/winapi/thread.go b/vendor/github.com/Microsoft/hcsshim/internal/winapi/thread.go
new file mode 100644
index 000000000..4724713e3
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/winapi/thread.go
@@ -0,0 +1,12 @@
+package winapi
+
+// HANDLE CreateRemoteThread(
+// HANDLE hProcess,
+// LPSECURITY_ATTRIBUTES lpThreadAttributes,
+// SIZE_T dwStackSize,
+// LPTHREAD_START_ROUTINE lpStartAddress,
+// LPVOID lpParameter,
+// DWORD dwCreationFlags,
+// LPDWORD lpThreadId
+// );
+//sys CreateRemoteThread(process windows.Handle, sa *windows.SecurityAttributes, stackSize uint32, startAddr uintptr, parameter uintptr, creationFlags uint32, threadID *uint32) (handle windows.Handle, err error) = kernel32.CreateRemoteThread
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/winapi/utils.go b/vendor/github.com/Microsoft/hcsshim/internal/winapi/utils.go
new file mode 100644
index 000000000..859b753c2
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/winapi/utils.go
@@ -0,0 +1,80 @@
+package winapi
+
+import (
+ "errors"
+ "reflect"
+ "syscall"
+ "unsafe"
+
+ "golang.org/x/sys/windows"
+)
+
+// Uint16BufferToSlice wraps a uint16 pointer-and-length into a slice
+// for easier interop with Go APIs
+func Uint16BufferToSlice(buffer *uint16, bufferLength int) (result []uint16) {
+ hdr := (*reflect.SliceHeader)(unsafe.Pointer(&result))
+ hdr.Data = uintptr(unsafe.Pointer(buffer))
+ hdr.Cap = bufferLength
+ hdr.Len = bufferLength
+
+ return
+}
+
+// UnicodeString corresponds to UNICODE_STRING win32 struct defined here
+// https://docs.microsoft.com/en-us/windows/win32/api/ntdef/ns-ntdef-_unicode_string
+type UnicodeString struct {
+ Length uint16
+ MaximumLength uint16
+ Buffer *uint16
+}
+
+// NTSTRSAFE_UNICODE_STRING_MAX_CCH is a constant defined in ntstrsafe.h. This value
+// denotes the maximum number of wide chars a path can have.
+const NTSTRSAFE_UNICODE_STRING_MAX_CCH = 32767
+
+// String converts a UnicodeString to a Go string.
+func (uni UnicodeString) String() string {
+ // UnicodeString is not guaranteed to be null terminated, therefore
+ // use the UnicodeString's Length field
+ return windows.UTF16ToString(Uint16BufferToSlice(uni.Buffer, int(uni.Length/2)))
+}
+
+// NewUnicodeString allocates a new UnicodeString and copies `s` into
+// the buffer of the new UnicodeString.
+func NewUnicodeString(s string) (*UnicodeString, error) {
+ buf, err := windows.UTF16FromString(s)
+ if err != nil {
+ return nil, err
+ }
+
+ if len(buf) > NTSTRSAFE_UNICODE_STRING_MAX_CCH {
+ return nil, syscall.ENAMETOOLONG
+ }
+
+ uni := &UnicodeString{
+ // The length is in bytes and should not include the trailing null character.
+ Length: uint16((len(buf) - 1) * 2),
+ MaximumLength: uint16((len(buf) - 1) * 2),
+ Buffer: &buf[0],
+ }
+ return uni, nil
+}
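The byte arithmetic in `NewUnicodeString` and `String` is easy to get backwards: UNICODE_STRING lengths are in bytes, not wide characters, and exclude the trailing NUL. A small sketch of that relationship, for illustration only (`utf16LengthBytes` is a hypothetical helper, not part of the package):

```go
package main

import (
	"fmt"
	"unicode/utf16"
)

// utf16LengthBytes returns the value a UNICODE_STRING Length field would
// hold for s: the UTF-16 code-unit count times two, with no trailing NUL.
func utf16LengthBytes(s string) int {
	return len(utf16.Encode([]rune(s))) * 2
}

func main() {
	s := "C:\\Windows" // 10 UTF-16 code units
	n := utf16LengthBytes(s)
	// String() above performs the inverse: it decodes Length/2 code units.
	u := utf16.Encode([]rune(s))
	fmt.Println(n, string(utf16.Decode(u[:n/2]))) // prints 20 C:\Windows
}
```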
+
+// ConvertStringSetToSlice is a helper function used to convert the contents of
+// `buf` into a string slice. `buf` contains a set of null terminated strings
+// with an additional null at the end to indicate the end of the set.
+func ConvertStringSetToSlice(buf []byte) ([]string, error) {
+ var results []string
+ prev := 0
+ for i := range buf {
+ if buf[i] == 0 {
+ if prev == i {
+ // found two null characters in a row, return result
+ return results, nil
+ }
+ results = append(results, string(buf[prev:i]))
+ prev = i + 1
+ }
+ }
+ return nil, errors.New("string set malformed: missing null terminator at end of buffer")
+}
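`ConvertStringSetToSlice` parses the REG_MULTI_SZ-style layout several CM_* APIs return: NUL-terminated strings back to back, with an extra NUL marking the end of the set. A standalone sketch of the same parse, for illustration (`parseStringSet` is a hypothetical stand-in, not the vendored function):

```go
package main

import (
	"errors"
	"fmt"
)

// parseStringSet splits a buffer of NUL-terminated strings that ends with
// an extra NUL, mirroring ConvertStringSetToSlice above.
func parseStringSet(buf []byte) ([]string, error) {
	var out []string
	prev := 0
	for i := range buf {
		if buf[i] == 0 {
			if prev == i {
				// Two NULs in a row: end of the set.
				return out, nil
			}
			out = append(out, string(buf[prev:i]))
			prev = i + 1
		}
	}
	return nil, errors.New("string set malformed: missing null terminator")
}

func main() {
	s, err := parseStringSet([]byte("dev1\x00dev2\x00\x00"))
	fmt.Println(s, err) // prints [dev1 dev2] <nil>
}
```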
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/winapi/winapi.go b/vendor/github.com/Microsoft/hcsshim/internal/winapi/winapi.go
new file mode 100644
index 000000000..d2cc9d9fb
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/winapi/winapi.go
@@ -0,0 +1,5 @@
+// Package winapi contains various low-level bindings to Windows APIs. It can
+// be thought of as an extension to golang.org/x/sys/windows.
+package winapi
+
+//go:generate go run ..\..\mksyscall_windows.go -output zsyscall_windows.go user.go console.go system.go net.go path.go thread.go jobobject.go logon.go memory.go process.go processor.go devices.go filesystem.go errors.go
diff --git a/vendor/github.com/Microsoft/hcsshim/internal/winapi/zsyscall_windows.go b/vendor/github.com/Microsoft/hcsshim/internal/winapi/zsyscall_windows.go
new file mode 100644
index 000000000..1f16cf0b8
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/internal/winapi/zsyscall_windows.go
@@ -0,0 +1,354 @@
+// Code generated mksyscall_windows.exe DO NOT EDIT
+
+package winapi
+
+import (
+ "syscall"
+ "unsafe"
+
+ "golang.org/x/sys/windows"
+)
+
+var _ unsafe.Pointer
+
+// Do the interface allocations only once for common
+// Errno values.
+const (
+ errnoERROR_IO_PENDING = 997
+)
+
+var (
+ errERROR_IO_PENDING error = syscall.Errno(errnoERROR_IO_PENDING)
+)
+
+// errnoErr returns common boxed Errno values, to prevent
+// allocations at runtime.
+func errnoErr(e syscall.Errno) error {
+ switch e {
+ case 0:
+ return nil
+ case errnoERROR_IO_PENDING:
+ return errERROR_IO_PENDING
+ }
+ // TODO: add more here, after collecting data on the common
+	// error values seen on Windows. (perhaps when running
+ // all.bat?)
+ return e
+}
+
+var (
+ modkernel32 = windows.NewLazySystemDLL("kernel32.dll")
+ modntdll = windows.NewLazySystemDLL("ntdll.dll")
+ modiphlpapi = windows.NewLazySystemDLL("iphlpapi.dll")
+ modadvapi32 = windows.NewLazySystemDLL("advapi32.dll")
+ modcfgmgr32 = windows.NewLazySystemDLL("cfgmgr32.dll")
+
+ procCreatePseudoConsole = modkernel32.NewProc("CreatePseudoConsole")
+ procClosePseudoConsole = modkernel32.NewProc("ClosePseudoConsole")
+ procResizePseudoConsole = modkernel32.NewProc("ResizePseudoConsole")
+ procNtQuerySystemInformation = modntdll.NewProc("NtQuerySystemInformation")
+ procSetJobCompartmentId = modiphlpapi.NewProc("SetJobCompartmentId")
+ procSearchPathW = modkernel32.NewProc("SearchPathW")
+ procCreateRemoteThread = modkernel32.NewProc("CreateRemoteThread")
+ procIsProcessInJob = modkernel32.NewProc("IsProcessInJob")
+ procQueryInformationJobObject = modkernel32.NewProc("QueryInformationJobObject")
+ procOpenJobObjectW = modkernel32.NewProc("OpenJobObjectW")
+ procSetIoRateControlInformationJobObject = modkernel32.NewProc("SetIoRateControlInformationJobObject")
+ procQueryIoRateControlInformationJobObject = modkernel32.NewProc("QueryIoRateControlInformationJobObject")
+ procNtOpenJobObject = modntdll.NewProc("NtOpenJobObject")
+ procNtCreateJobObject = modntdll.NewProc("NtCreateJobObject")
+ procLogonUserW = modadvapi32.NewProc("LogonUserW")
+ procLocalAlloc = modkernel32.NewProc("LocalAlloc")
+ procLocalFree = modkernel32.NewProc("LocalFree")
+ procNtQueryInformationProcess = modntdll.NewProc("NtQueryInformationProcess")
+ procGetActiveProcessorCount = modkernel32.NewProc("GetActiveProcessorCount")
+ procCM_Get_Device_ID_List_SizeA = modcfgmgr32.NewProc("CM_Get_Device_ID_List_SizeA")
+ procCM_Get_Device_ID_ListA = modcfgmgr32.NewProc("CM_Get_Device_ID_ListA")
+ procCM_Locate_DevNodeW = modcfgmgr32.NewProc("CM_Locate_DevNodeW")
+ procCM_Get_DevNode_PropertyW = modcfgmgr32.NewProc("CM_Get_DevNode_PropertyW")
+ procNtCreateFile = modntdll.NewProc("NtCreateFile")
+ procNtSetInformationFile = modntdll.NewProc("NtSetInformationFile")
+ procNtOpenDirectoryObject = modntdll.NewProc("NtOpenDirectoryObject")
+ procNtQueryDirectoryObject = modntdll.NewProc("NtQueryDirectoryObject")
+ procRtlNtStatusToDosError = modntdll.NewProc("RtlNtStatusToDosError")
+)
+
+func createPseudoConsole(size uint32, hInput windows.Handle, hOutput windows.Handle, dwFlags uint32, hpcon *windows.Handle) (hr error) {
+ r0, _, _ := syscall.Syscall6(procCreatePseudoConsole.Addr(), 5, uintptr(size), uintptr(hInput), uintptr(hOutput), uintptr(dwFlags), uintptr(unsafe.Pointer(hpcon)), 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func ClosePseudoConsole(hpc windows.Handle) {
+ syscall.Syscall(procClosePseudoConsole.Addr(), 1, uintptr(hpc), 0, 0)
+ return
+}
+
+func resizePseudoConsole(hPc windows.Handle, size uint32) (hr error) {
+ r0, _, _ := syscall.Syscall(procResizePseudoConsole.Addr(), 2, uintptr(hPc), uintptr(size), 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func NtQuerySystemInformation(systemInfoClass int, systemInformation unsafe.Pointer, systemInfoLength uint32, returnLength *uint32) (status uint32) {
+ r0, _, _ := syscall.Syscall6(procNtQuerySystemInformation.Addr(), 4, uintptr(systemInfoClass), uintptr(systemInformation), uintptr(systemInfoLength), uintptr(unsafe.Pointer(returnLength)), 0, 0)
+ status = uint32(r0)
+ return
+}
+
+func SetJobCompartmentId(handle windows.Handle, compartmentId uint32) (win32Err error) {
+ r0, _, _ := syscall.Syscall(procSetJobCompartmentId.Addr(), 2, uintptr(handle), uintptr(compartmentId), 0)
+ if r0 != 0 {
+ win32Err = syscall.Errno(r0)
+ }
+ return
+}
+
+func SearchPath(lpPath *uint16, lpFileName *uint16, lpExtension *uint16, nBufferLength uint32, lpBuffer *uint16, lpFilePath *uint16) (size uint32, err error) {
+ r0, _, e1 := syscall.Syscall6(procSearchPathW.Addr(), 6, uintptr(unsafe.Pointer(lpPath)), uintptr(unsafe.Pointer(lpFileName)), uintptr(unsafe.Pointer(lpExtension)), uintptr(nBufferLength), uintptr(unsafe.Pointer(lpBuffer)), uintptr(unsafe.Pointer(lpFilePath)))
+ size = uint32(r0)
+ if size == 0 {
+ if e1 != 0 {
+ err = errnoErr(e1)
+ } else {
+ err = syscall.EINVAL
+ }
+ }
+ return
+}
+
+func CreateRemoteThread(process windows.Handle, sa *windows.SecurityAttributes, stackSize uint32, startAddr uintptr, parameter uintptr, creationFlags uint32, threadID *uint32) (handle windows.Handle, err error) {
+ r0, _, e1 := syscall.Syscall9(procCreateRemoteThread.Addr(), 7, uintptr(process), uintptr(unsafe.Pointer(sa)), uintptr(stackSize), uintptr(startAddr), uintptr(parameter), uintptr(creationFlags), uintptr(unsafe.Pointer(threadID)), 0, 0)
+ handle = windows.Handle(r0)
+ if handle == 0 {
+ if e1 != 0 {
+ err = errnoErr(e1)
+ } else {
+ err = syscall.EINVAL
+ }
+ }
+ return
+}
+
+func IsProcessInJob(procHandle windows.Handle, jobHandle windows.Handle, result *int32) (err error) {
+ r1, _, e1 := syscall.Syscall(procIsProcessInJob.Addr(), 3, uintptr(procHandle), uintptr(jobHandle), uintptr(unsafe.Pointer(result)))
+ if r1 == 0 {
+ if e1 != 0 {
+ err = errnoErr(e1)
+ } else {
+ err = syscall.EINVAL
+ }
+ }
+ return
+}
+
+func QueryInformationJobObject(jobHandle windows.Handle, infoClass uint32, jobObjectInfo unsafe.Pointer, jobObjectInformationLength uint32, lpReturnLength *uint32) (err error) {
+ r1, _, e1 := syscall.Syscall6(procQueryInformationJobObject.Addr(), 5, uintptr(jobHandle), uintptr(infoClass), uintptr(jobObjectInfo), uintptr(jobObjectInformationLength), uintptr(unsafe.Pointer(lpReturnLength)), 0)
+ if r1 == 0 {
+ if e1 != 0 {
+ err = errnoErr(e1)
+ } else {
+ err = syscall.EINVAL
+ }
+ }
+ return
+}
+
+func OpenJobObject(desiredAccess uint32, inheritHandle bool, lpName *uint16) (handle windows.Handle, err error) {
+ var _p0 uint32
+ if inheritHandle {
+ _p0 = 1
+ } else {
+ _p0 = 0
+ }
+ r0, _, e1 := syscall.Syscall(procOpenJobObjectW.Addr(), 3, uintptr(desiredAccess), uintptr(_p0), uintptr(unsafe.Pointer(lpName)))
+ handle = windows.Handle(r0)
+ if handle == 0 {
+ if e1 != 0 {
+ err = errnoErr(e1)
+ } else {
+ err = syscall.EINVAL
+ }
+ }
+ return
+}
+
+func SetIoRateControlInformationJobObject(jobHandle windows.Handle, ioRateControlInfo *JOBOBJECT_IO_RATE_CONTROL_INFORMATION) (ret uint32, err error) {
+ r0, _, e1 := syscall.Syscall(procSetIoRateControlInformationJobObject.Addr(), 2, uintptr(jobHandle), uintptr(unsafe.Pointer(ioRateControlInfo)), 0)
+ ret = uint32(r0)
+ if ret == 0 {
+ if e1 != 0 {
+ err = errnoErr(e1)
+ } else {
+ err = syscall.EINVAL
+ }
+ }
+ return
+}
+
+func QueryIoRateControlInformationJobObject(jobHandle windows.Handle, volumeName *uint16, ioRateControlInfo **JOBOBJECT_IO_RATE_CONTROL_INFORMATION, infoBlockCount *uint32) (ret uint32, err error) {
+ r0, _, e1 := syscall.Syscall6(procQueryIoRateControlInformationJobObject.Addr(), 4, uintptr(jobHandle), uintptr(unsafe.Pointer(volumeName)), uintptr(unsafe.Pointer(ioRateControlInfo)), uintptr(unsafe.Pointer(infoBlockCount)), 0, 0)
+ ret = uint32(r0)
+ if ret == 0 {
+ if e1 != 0 {
+ err = errnoErr(e1)
+ } else {
+ err = syscall.EINVAL
+ }
+ }
+ return
+}
+
+func NtOpenJobObject(jobHandle *windows.Handle, desiredAccess uint32, objAttributes *ObjectAttributes) (status uint32) {
+ r0, _, _ := syscall.Syscall(procNtOpenJobObject.Addr(), 3, uintptr(unsafe.Pointer(jobHandle)), uintptr(desiredAccess), uintptr(unsafe.Pointer(objAttributes)))
+ status = uint32(r0)
+ return
+}
+
+func NtCreateJobObject(jobHandle *windows.Handle, desiredAccess uint32, objAttributes *ObjectAttributes) (status uint32) {
+ r0, _, _ := syscall.Syscall(procNtCreateJobObject.Addr(), 3, uintptr(unsafe.Pointer(jobHandle)), uintptr(desiredAccess), uintptr(unsafe.Pointer(objAttributes)))
+ status = uint32(r0)
+ return
+}
+
+func LogonUser(username *uint16, domain *uint16, password *uint16, logonType uint32, logonProvider uint32, token *windows.Token) (err error) {
+ r1, _, e1 := syscall.Syscall6(procLogonUserW.Addr(), 6, uintptr(unsafe.Pointer(username)), uintptr(unsafe.Pointer(domain)), uintptr(unsafe.Pointer(password)), uintptr(logonType), uintptr(logonProvider), uintptr(unsafe.Pointer(token)))
+ if r1 == 0 {
+ if e1 != 0 {
+ err = errnoErr(e1)
+ } else {
+ err = syscall.EINVAL
+ }
+ }
+ return
+}
+
+func LocalAlloc(flags uint32, size int) (ptr uintptr) {
+ r0, _, _ := syscall.Syscall(procLocalAlloc.Addr(), 2, uintptr(flags), uintptr(size), 0)
+ ptr = uintptr(r0)
+ return
+}
+
+func LocalFree(ptr uintptr) {
+ syscall.Syscall(procLocalFree.Addr(), 1, uintptr(ptr), 0, 0)
+ return
+}
+
+func NtQueryInformationProcess(processHandle windows.Handle, processInfoClass uint32, processInfo unsafe.Pointer, processInfoLength uint32, returnLength *uint32) (status uint32) {
+ r0, _, _ := syscall.Syscall6(procNtQueryInformationProcess.Addr(), 5, uintptr(processHandle), uintptr(processInfoClass), uintptr(processInfo), uintptr(processInfoLength), uintptr(unsafe.Pointer(returnLength)), 0)
+ status = uint32(r0)
+ return
+}
+
+func GetActiveProcessorCount(groupNumber uint16) (amount uint32) {
+ r0, _, _ := syscall.Syscall(procGetActiveProcessorCount.Addr(), 1, uintptr(groupNumber), 0, 0)
+ amount = uint32(r0)
+ return
+}
+
+func CMGetDeviceIDListSize(pulLen *uint32, pszFilter *byte, uFlags uint32) (hr error) {
+ r0, _, _ := syscall.Syscall(procCM_Get_Device_ID_List_SizeA.Addr(), 3, uintptr(unsafe.Pointer(pulLen)), uintptr(unsafe.Pointer(pszFilter)), uintptr(uFlags))
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func CMGetDeviceIDList(pszFilter *byte, buffer *byte, bufferLen uint32, uFlags uint32) (hr error) {
+ r0, _, _ := syscall.Syscall6(procCM_Get_Device_ID_ListA.Addr(), 4, uintptr(unsafe.Pointer(pszFilter)), uintptr(unsafe.Pointer(buffer)), uintptr(bufferLen), uintptr(uFlags), 0, 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func CMLocateDevNode(pdnDevInst *uint32, pDeviceID string, uFlags uint32) (hr error) {
+ var _p0 *uint16
+ _p0, hr = syscall.UTF16PtrFromString(pDeviceID)
+ if hr != nil {
+ return
+ }
+ return _CMLocateDevNode(pdnDevInst, _p0, uFlags)
+}
+
+func _CMLocateDevNode(pdnDevInst *uint32, pDeviceID *uint16, uFlags uint32) (hr error) {
+ r0, _, _ := syscall.Syscall(procCM_Locate_DevNodeW.Addr(), 3, uintptr(unsafe.Pointer(pdnDevInst)), uintptr(unsafe.Pointer(pDeviceID)), uintptr(uFlags))
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func CMGetDevNodeProperty(dnDevInst uint32, propertyKey *DevPropKey, propertyType *uint32, propertyBuffer *uint16, propertyBufferSize *uint32, uFlags uint32) (hr error) {
+ r0, _, _ := syscall.Syscall6(procCM_Get_DevNode_PropertyW.Addr(), 6, uintptr(dnDevInst), uintptr(unsafe.Pointer(propertyKey)), uintptr(unsafe.Pointer(propertyType)), uintptr(unsafe.Pointer(propertyBuffer)), uintptr(unsafe.Pointer(propertyBufferSize)), uintptr(uFlags))
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
+
+func NtCreateFile(handle *uintptr, accessMask uint32, oa *ObjectAttributes, iosb *IOStatusBlock, allocationSize *uint64, fileAttributes uint32, shareAccess uint32, createDisposition uint32, createOptions uint32, eaBuffer *byte, eaLength uint32) (status uint32) {
+ r0, _, _ := syscall.Syscall12(procNtCreateFile.Addr(), 11, uintptr(unsafe.Pointer(handle)), uintptr(accessMask), uintptr(unsafe.Pointer(oa)), uintptr(unsafe.Pointer(iosb)), uintptr(unsafe.Pointer(allocationSize)), uintptr(fileAttributes), uintptr(shareAccess), uintptr(createDisposition), uintptr(createOptions), uintptr(unsafe.Pointer(eaBuffer)), uintptr(eaLength), 0)
+ status = uint32(r0)
+ return
+}
+
+func NtSetInformationFile(handle uintptr, iosb *IOStatusBlock, information uintptr, length uint32, class uint32) (status uint32) {
+ r0, _, _ := syscall.Syscall6(procNtSetInformationFile.Addr(), 5, uintptr(handle), uintptr(unsafe.Pointer(iosb)), uintptr(information), uintptr(length), uintptr(class), 0)
+ status = uint32(r0)
+ return
+}
+
+func NtOpenDirectoryObject(handle *uintptr, accessMask uint32, oa *ObjectAttributes) (status uint32) {
+ r0, _, _ := syscall.Syscall(procNtOpenDirectoryObject.Addr(), 3, uintptr(unsafe.Pointer(handle)), uintptr(accessMask), uintptr(unsafe.Pointer(oa)))
+ status = uint32(r0)
+ return
+}
+
+func NtQueryDirectoryObject(handle uintptr, buffer *byte, length uint32, singleEntry bool, restartScan bool, context *uint32, returnLength *uint32) (status uint32) {
+ var _p0 uint32
+ if singleEntry {
+ _p0 = 1
+ } else {
+ _p0 = 0
+ }
+ var _p1 uint32
+ if restartScan {
+ _p1 = 1
+ } else {
+ _p1 = 0
+ }
+ r0, _, _ := syscall.Syscall9(procNtQueryDirectoryObject.Addr(), 7, uintptr(handle), uintptr(unsafe.Pointer(buffer)), uintptr(length), uintptr(_p0), uintptr(_p1), uintptr(unsafe.Pointer(context)), uintptr(unsafe.Pointer(returnLength)), 0, 0)
+ status = uint32(r0)
+ return
+}
+
+func RtlNtStatusToDosError(status uint32) (winerr error) {
+ r0, _, _ := syscall.Syscall(procRtlNtStatusToDosError.Addr(), 1, uintptr(status), 0, 0)
+ if r0 != 0 {
+ winerr = syscall.Errno(r0)
+ }
+ return
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/layer.go b/vendor/github.com/Microsoft/hcsshim/layer.go
new file mode 100644
index 000000000..891616370
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/layer.go
@@ -0,0 +1,107 @@
+package hcsshim
+
+import (
+ "context"
+ "crypto/sha1"
+ "path/filepath"
+
+ "github.com/Microsoft/go-winio/pkg/guid"
+ "github.com/Microsoft/hcsshim/internal/wclayer"
+)
+
+func layerPath(info *DriverInfo, id string) string {
+ return filepath.Join(info.HomeDir, id)
+}
+
+func ActivateLayer(info DriverInfo, id string) error {
+ return wclayer.ActivateLayer(context.Background(), layerPath(&info, id))
+}
+func CreateLayer(info DriverInfo, id, parent string) error {
+ return wclayer.CreateLayer(context.Background(), layerPath(&info, id), parent)
+}
+
+// New clients should use CreateScratchLayer instead. Kept in to preserve API compatibility.
+func CreateSandboxLayer(info DriverInfo, layerId, parentId string, parentLayerPaths []string) error {
+ return wclayer.CreateScratchLayer(context.Background(), layerPath(&info, layerId), parentLayerPaths)
+}
+func CreateScratchLayer(info DriverInfo, layerId, parentId string, parentLayerPaths []string) error {
+ return wclayer.CreateScratchLayer(context.Background(), layerPath(&info, layerId), parentLayerPaths)
+}
+func DeactivateLayer(info DriverInfo, id string) error {
+ return wclayer.DeactivateLayer(context.Background(), layerPath(&info, id))
+}
+func DestroyLayer(info DriverInfo, id string) error {
+ return wclayer.DestroyLayer(context.Background(), layerPath(&info, id))
+}
+
+// New clients should use ExpandScratchSize instead. Kept in to preserve API compatibility.
+func ExpandSandboxSize(info DriverInfo, layerId string, size uint64) error {
+ return wclayer.ExpandScratchSize(context.Background(), layerPath(&info, layerId), size)
+}
+func ExpandScratchSize(info DriverInfo, layerId string, size uint64) error {
+ return wclayer.ExpandScratchSize(context.Background(), layerPath(&info, layerId), size)
+}
+func ExportLayer(info DriverInfo, layerId string, exportFolderPath string, parentLayerPaths []string) error {
+ return wclayer.ExportLayer(context.Background(), layerPath(&info, layerId), exportFolderPath, parentLayerPaths)
+}
+func GetLayerMountPath(info DriverInfo, id string) (string, error) {
+ return wclayer.GetLayerMountPath(context.Background(), layerPath(&info, id))
+}
+func GetSharedBaseImages() (imageData string, err error) {
+ return wclayer.GetSharedBaseImages(context.Background())
+}
+func ImportLayer(info DriverInfo, layerID string, importFolderPath string, parentLayerPaths []string) error {
+ return wclayer.ImportLayer(context.Background(), layerPath(&info, layerID), importFolderPath, parentLayerPaths)
+}
+func LayerExists(info DriverInfo, id string) (bool, error) {
+ return wclayer.LayerExists(context.Background(), layerPath(&info, id))
+}
+func PrepareLayer(info DriverInfo, layerId string, parentLayerPaths []string) error {
+ return wclayer.PrepareLayer(context.Background(), layerPath(&info, layerId), parentLayerPaths)
+}
+func ProcessBaseLayer(path string) error {
+ return wclayer.ProcessBaseLayer(context.Background(), path)
+}
+func ProcessUtilityVMImage(path string) error {
+ return wclayer.ProcessUtilityVMImage(context.Background(), path)
+}
+func UnprepareLayer(info DriverInfo, layerId string) error {
+ return wclayer.UnprepareLayer(context.Background(), layerPath(&info, layerId))
+}
+
+type DriverInfo struct {
+ Flavour int
+ HomeDir string
+}
+
+type GUID [16]byte
+
+func NameToGuid(name string) (id GUID, err error) {
+ g, err := wclayer.NameToGuid(context.Background(), name)
+ return g.ToWindowsArray(), err
+}
+
+func NewGUID(source string) *GUID {
+ h := sha1.Sum([]byte(source))
+ var g GUID
+ copy(g[0:], h[0:16])
+ return &g
+}
+
+func (g *GUID) ToString() string {
+ return guid.FromWindowsArray(*g).String()
+}
+
+type LayerReader = wclayer.LayerReader
+
+func NewLayerReader(info DriverInfo, layerID string, parentLayerPaths []string) (LayerReader, error) {
+ return wclayer.NewLayerReader(context.Background(), layerPath(&info, layerID), parentLayerPaths)
+}
+
+type LayerWriter = wclayer.LayerWriter
+
+func NewLayerWriter(info DriverInfo, layerID string, parentLayerPaths []string) (LayerWriter, error) {
+ return wclayer.NewLayerWriter(context.Background(), layerPath(&info, layerID), parentLayerPaths)
+}
+
+type WC_LAYER_DESCRIPTOR = wclayer.WC_LAYER_DESCRIPTOR
diff --git a/vendor/github.com/Microsoft/hcsshim/osversion/osversion_windows.go b/vendor/github.com/Microsoft/hcsshim/osversion/osversion_windows.go
new file mode 100644
index 000000000..3ab3bcd89
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/osversion/osversion_windows.go
@@ -0,0 +1,50 @@
+package osversion
+
+import (
+ "fmt"
+ "sync"
+
+ "golang.org/x/sys/windows"
+)
+
+// OSVersion is a wrapper for Windows version information
+// https://msdn.microsoft.com/en-us/library/windows/desktop/ms724439(v=vs.85).aspx
+type OSVersion struct {
+ Version uint32
+ MajorVersion uint8
+ MinorVersion uint8
+ Build uint16
+}
+
+var (
+ osv OSVersion
+ once sync.Once
+)
+
+// Get gets the operating system version on Windows.
+// The calling application must be manifested to get the correct version information.
+func Get() OSVersion {
+ once.Do(func() {
+ var err error
+ osv = OSVersion{}
+ osv.Version, err = windows.GetVersion()
+ if err != nil {
+ // GetVersion never fails.
+ panic(err)
+ }
+ osv.MajorVersion = uint8(osv.Version & 0xFF)
+ osv.MinorVersion = uint8(osv.Version >> 8 & 0xFF)
+ osv.Build = uint16(osv.Version >> 16)
+ })
+ return osv
+}
+
+// Build gets the build-number on Windows
+// The calling application must be manifested to get the correct version information.
+func Build() uint16 {
+ return Get().Build
+}
+
+func (osv OSVersion) ToString() string {
+ return fmt.Sprintf("%d.%d.%d", osv.MajorVersion, osv.MinorVersion, osv.Build)
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/osversion/windowsbuilds.go b/vendor/github.com/Microsoft/hcsshim/osversion/windowsbuilds.go
new file mode 100644
index 000000000..75dce5d82
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/osversion/windowsbuilds.go
@@ -0,0 +1,50 @@
+package osversion
+
+const (
+ // RS1 (version 1607, codename "Redstone 1") corresponds to Windows Server
+ // 2016 (ltsc2016) and Windows 10 (Anniversary Update).
+ RS1 = 14393
+
+ // RS2 (version 1703, codename "Redstone 2") was a client-only update, and
+ // corresponds to Windows 10 (Creators Update).
+ RS2 = 15063
+
+ // RS3 (version 1709, codename "Redstone 3") corresponds to Windows Server
+ // 1709 (Semi-Annual Channel (SAC)), and Windows 10 (Fall Creators Update).
+ RS3 = 16299
+
+ // RS4 (version 1803, codename "Redstone 4") corresponds to Windows Server
+ // 1803 (Semi-Annual Channel (SAC)), and Windows 10 (April 2018 Update).
+ RS4 = 17134
+
+ // RS5 (version 1809, codename "Redstone 5") corresponds to Windows Server
+ // 2019 (ltsc2019), and Windows 10 (October 2018 Update).
+ RS5 = 17763
+
+ // V19H1 (version 1903) corresponds to Windows Server 1903 (semi-annual
+ // channel).
+ V19H1 = 18362
+
+ // V19H2 (version 1909) corresponds to Windows Server 1909 (semi-annual
+ // channel).
+ V19H2 = 18363
+
+ // V20H1 (version 2004) corresponds to Windows Server 2004 (semi-annual
+ // channel).
+ V20H1 = 19041
+
+ // V20H2 corresponds to Windows Server 20H2 (semi-annual channel).
+ V20H2 = 19042
+
+ // V21H1 corresponds to Windows Server 21H1 (semi-annual channel).
+ V21H1 = 19043
+
+ // V21H2Win10 corresponds to Windows 10 (November 2021 Update).
+ V21H2Win10 = 19044
+
+ // V21H2Server corresponds to Windows Server 2022 (ltsc2022).
+ V21H2Server = 20348
+
+ // V21H2Win11 corresponds to Windows 11 (original release).
+ V21H2Win11 = 22000
+)
diff --git a/vendor/github.com/Microsoft/hcsshim/pkg/ociwclayer/export.go b/vendor/github.com/Microsoft/hcsshim/pkg/ociwclayer/export.go
new file mode 100644
index 000000000..e3f1be333
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/pkg/ociwclayer/export.go
@@ -0,0 +1,88 @@
+// Package ociwclayer provides functions for importing and exporting Windows
+// container layers from and to their OCI tar representation.
+package ociwclayer
+
+import (
+ "archive/tar"
+ "context"
+ "io"
+ "path/filepath"
+
+ "github.com/Microsoft/go-winio/backuptar"
+ "github.com/Microsoft/hcsshim"
+)
+
+var driverInfo = hcsshim.DriverInfo{}
+
+// ExportLayerToTar writes an OCI layer tar stream from the provided on-disk layer.
+// The caller must specify the parent layers, if any, ordered from lowest to
+// highest layer.
+//
+// The layer will be mounted for this process, so the caller should ensure that
+// it is not currently mounted.
+func ExportLayerToTar(ctx context.Context, w io.Writer, path string, parentLayerPaths []string) error {
+ err := hcsshim.ActivateLayer(driverInfo, path)
+ if err != nil {
+ return err
+ }
+ defer func() {
+ _ = hcsshim.DeactivateLayer(driverInfo, path)
+ }()
+
+ // Prepare and unprepare the layer to ensure that it has been initialized.
+ err = hcsshim.PrepareLayer(driverInfo, path, parentLayerPaths)
+ if err != nil {
+ return err
+ }
+ err = hcsshim.UnprepareLayer(driverInfo, path)
+ if err != nil {
+ return err
+ }
+
+ r, err := hcsshim.NewLayerReader(driverInfo, path, parentLayerPaths)
+ if err != nil {
+ return err
+ }
+
+ err = writeTarFromLayer(ctx, r, w)
+ cerr := r.Close()
+ if err != nil {
+ return err
+ }
+ return cerr
+}
+
+func writeTarFromLayer(ctx context.Context, r hcsshim.LayerReader, w io.Writer) error {
+ t := tar.NewWriter(w)
+ for {
+ select {
+ case <-ctx.Done():
+ return ctx.Err()
+ default:
+ }
+
+ name, size, fileInfo, err := r.Next()
+ if err == io.EOF {
+ break
+ }
+ if err != nil {
+ return err
+ }
+ if fileInfo == nil {
+ // Write a whiteout file.
+ hdr := &tar.Header{
+ Name: filepath.ToSlash(filepath.Join(filepath.Dir(name), whiteoutPrefix+filepath.Base(name))),
+ }
+ err := t.WriteHeader(hdr)
+ if err != nil {
+ return err
+ }
+ } else {
+ err = backuptar.WriteTarFileFromBackupStream(t, r, name, size, fileInfo)
+ if err != nil {
+ return err
+ }
+ }
+ }
+ return t.Close()
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/pkg/ociwclayer/import.go b/vendor/github.com/Microsoft/hcsshim/pkg/ociwclayer/import.go
new file mode 100644
index 000000000..e74a6b594
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/pkg/ociwclayer/import.go
@@ -0,0 +1,148 @@
+package ociwclayer
+
+import (
+ "archive/tar"
+ "bufio"
+ "context"
+ "io"
+ "os"
+ "path"
+ "path/filepath"
+ "strings"
+
+ winio "github.com/Microsoft/go-winio"
+ "github.com/Microsoft/go-winio/backuptar"
+ "github.com/Microsoft/hcsshim"
+)
+
+const whiteoutPrefix = ".wh."
+
+var (
+ // mutatedFiles is a list of files that are mutated by the import process
+ // and must be backed up and restored.
+ mutatedFiles = map[string]string{
+ "UtilityVM/Files/EFI/Microsoft/Boot/BCD": "bcd.bak",
+ "UtilityVM/Files/EFI/Microsoft/Boot/BCD.LOG": "bcd.log.bak",
+ "UtilityVM/Files/EFI/Microsoft/Boot/BCD.LOG1": "bcd.log1.bak",
+ "UtilityVM/Files/EFI/Microsoft/Boot/BCD.LOG2": "bcd.log2.bak",
+ }
+)
+
+// ImportLayerFromTar reads a layer from an OCI layer tar stream and extracts it to the
+// specified path. The caller must specify the parent layers, if any, ordered
+// from lowest to highest layer.
+//
+// The caller must ensure that the thread or process has acquired backup and
+// restore privileges.
+//
+// This function returns the total size of the layer's files, in bytes.
+func ImportLayerFromTar(ctx context.Context, r io.Reader, path string, parentLayerPaths []string) (int64, error) {
+ err := os.MkdirAll(path, 0)
+ if err != nil {
+ return 0, err
+ }
+ w, err := hcsshim.NewLayerWriter(hcsshim.DriverInfo{}, path, parentLayerPaths)
+ if err != nil {
+ return 0, err
+ }
+ n, err := writeLayerFromTar(ctx, r, w, path)
+ cerr := w.Close()
+ if err != nil {
+ return 0, err
+ }
+ if cerr != nil {
+ return 0, cerr
+ }
+ return n, nil
+}
+
+func writeLayerFromTar(ctx context.Context, r io.Reader, w hcsshim.LayerWriter, root string) (int64, error) {
+ t := tar.NewReader(r)
+ hdr, err := t.Next()
+ totalSize := int64(0)
+ buf := bufio.NewWriter(nil)
+ for err == nil {
+ select {
+ case <-ctx.Done():
+ return 0, ctx.Err()
+ default:
+ }
+
+ base := path.Base(hdr.Name)
+ if strings.HasPrefix(base, whiteoutPrefix) {
+ name := path.Join(path.Dir(hdr.Name), base[len(whiteoutPrefix):])
+ err = w.Remove(filepath.FromSlash(name))
+ if err != nil {
+ return 0, err
+ }
+ hdr, err = t.Next()
+ } else if hdr.Typeflag == tar.TypeLink {
+ err = w.AddLink(filepath.FromSlash(hdr.Name), filepath.FromSlash(hdr.Linkname))
+ if err != nil {
+ return 0, err
+ }
+ hdr, err = t.Next()
+ } else {
+ var (
+ name string
+ size int64
+ fileInfo *winio.FileBasicInfo
+ )
+ name, size, fileInfo, err = backuptar.FileInfoFromHeader(hdr)
+ if err != nil {
+ return 0, err
+ }
+ err = w.Add(filepath.FromSlash(name), fileInfo)
+ if err != nil {
+ return 0, err
+ }
+ hdr, err = writeBackupStreamFromTarAndSaveMutatedFiles(buf, w, t, hdr, root)
+ totalSize += size
+ }
+ }
+ if err != io.EOF {
+ return 0, err
+ }
+ return totalSize, nil
+}
+
+// writeBackupStreamFromTarAndSaveMutatedFiles reads data from a tar stream and
+// writes it to a backup stream, and also saves any files that will be mutated
+// by the import layer process to a backup location.
+func writeBackupStreamFromTarAndSaveMutatedFiles(buf *bufio.Writer, w io.Writer, t *tar.Reader, hdr *tar.Header, root string) (nextHdr *tar.Header, err error) {
+ var bcdBackup *os.File
+ var bcdBackupWriter *winio.BackupFileWriter
+ if backupPath, ok := mutatedFiles[hdr.Name]; ok {
+ bcdBackup, err = os.Create(filepath.Join(root, backupPath))
+ if err != nil {
+ return nil, err
+ }
+ defer func() {
+ cerr := bcdBackup.Close()
+ if err == nil {
+ err = cerr
+ }
+ }()
+
+ bcdBackupWriter = winio.NewBackupFileWriter(bcdBackup, false)
+ defer func() {
+ cerr := bcdBackupWriter.Close()
+ if err == nil {
+ err = cerr
+ }
+ }()
+
+ buf.Reset(io.MultiWriter(w, bcdBackupWriter))
+ } else {
+ buf.Reset(w)
+ }
+
+ defer func() {
+ ferr := buf.Flush()
+ if err == nil {
+ err = ferr
+ }
+ }()
+
+ return backuptar.WriteBackupStreamFromTarFile(buf, t, hdr)
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/process.go b/vendor/github.com/Microsoft/hcsshim/process.go
new file mode 100644
index 000000000..3362c6833
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/process.go
@@ -0,0 +1,98 @@
+package hcsshim
+
+import (
+ "context"
+ "io"
+ "sync"
+ "time"
+
+ "github.com/Microsoft/hcsshim/internal/hcs"
+)
+
+// process is a wrapper for an HCS process.
+type process struct {
+ p *hcs.Process
+ waitOnce sync.Once
+ waitCh chan struct{}
+ waitErr error
+}
+
+// Pid returns the process ID of the process within the container.
+func (process *process) Pid() int {
+ return process.p.Pid()
+}
+
+// Kill signals the process to terminate but does not wait for it to finish terminating.
+func (process *process) Kill() error {
+ found, err := process.p.Kill(context.Background())
+ if err != nil {
+ return convertProcessError(err, process)
+ }
+ if !found {
+ return &ProcessError{Process: process, Err: ErrElementNotFound, Operation: "hcsshim::Process::Kill"}
+ }
+ return nil
+}
+
+// Wait waits for the process to exit.
+func (process *process) Wait() error {
+ return convertProcessError(process.p.Wait(), process)
+}
+
+// WaitTimeout waits for the process to exit or the duration to elapse. It
+// returns a ProcessError wrapping ErrTimeout if the timeout elapses first.
+func (process *process) WaitTimeout(timeout time.Duration) error {
+ process.waitOnce.Do(func() {
+ process.waitCh = make(chan struct{})
+ go func() {
+ process.waitErr = process.Wait()
+ close(process.waitCh)
+ }()
+ })
+ t := time.NewTimer(timeout)
+ defer t.Stop()
+ select {
+ case <-t.C:
+ return &ProcessError{Process: process, Err: ErrTimeout, Operation: "hcsshim::Process::Wait"}
+ case <-process.waitCh:
+ return process.waitErr
+ }
+}
+
+// ExitCode returns the exit code of the process. The process must have
+// already terminated.
+func (process *process) ExitCode() (int, error) {
+ code, err := process.p.ExitCode()
+ if err != nil {
+ err = convertProcessError(err, process)
+ }
+ return code, err
+}
+
+// ResizeConsole resizes the console of the process.
+func (process *process) ResizeConsole(width, height uint16) error {
+ return convertProcessError(process.p.ResizeConsole(context.Background(), width, height), process)
+}
+
+// Stdio returns the stdin, stdout, and stderr pipes, respectively. Closing
+// these pipes does not close the underlying pipes; it should be possible to
+// call this multiple times to get multiple interfaces.
+func (process *process) Stdio() (io.WriteCloser, io.ReadCloser, io.ReadCloser, error) {
+ stdin, stdout, stderr, err := process.p.StdioLegacy()
+ if err != nil {
+ err = convertProcessError(err, process)
+ }
+ return stdin, stdout, stderr, err
+}
+
+// CloseStdin closes the write side of the stdin pipe so that the process is
+// notified on the read side that there is no more data in stdin.
+func (process *process) CloseStdin() error {
+ return convertProcessError(process.p.CloseStdin(context.Background()), process)
+}
+
+// Close cleans up any state associated with the process but does not kill
+// or wait on it.
+func (process *process) Close() error {
+ return convertProcessError(process.p.Close(), process)
+}
diff --git a/vendor/github.com/Microsoft/hcsshim/zsyscall_windows.go b/vendor/github.com/Microsoft/hcsshim/zsyscall_windows.go
new file mode 100644
index 000000000..8bed84857
--- /dev/null
+++ b/vendor/github.com/Microsoft/hcsshim/zsyscall_windows.go
@@ -0,0 +1,54 @@
+// Code generated mksyscall_windows.exe DO NOT EDIT
+
+package hcsshim
+
+import (
+ "syscall"
+ "unsafe"
+
+ "golang.org/x/sys/windows"
+)
+
+var _ unsafe.Pointer
+
+// Do the interface allocations only once for common
+// Errno values.
+const (
+ errnoERROR_IO_PENDING = 997
+)
+
+var (
+ errERROR_IO_PENDING error = syscall.Errno(errnoERROR_IO_PENDING)
+)
+
+// errnoErr returns common boxed Errno values, to prevent
+// allocations at runtime.
+func errnoErr(e syscall.Errno) error {
+ switch e {
+ case 0:
+ return nil
+ case errnoERROR_IO_PENDING:
+ return errERROR_IO_PENDING
+ }
+ // TODO: add more here, after collecting data on the common
+// error values seen on Windows. (perhaps when running
+ // all.bat?)
+ return e
+}
+
+var (
+ modiphlpapi = windows.NewLazySystemDLL("iphlpapi.dll")
+
+ procSetCurrentThreadCompartmentId = modiphlpapi.NewProc("SetCurrentThreadCompartmentId")
+)
+
+func SetCurrentThreadCompartmentId(compartmentId uint32) (hr error) {
+ r0, _, _ := syscall.Syscall(procSetCurrentThreadCompartmentId.Addr(), 1, uintptr(compartmentId), 0, 0)
+ if int32(r0) < 0 {
+ if r0&0x1fff0000 == 0x00070000 {
+ r0 &= 0xffff
+ }
+ hr = syscall.Errno(r0)
+ }
+ return
+}
diff --git a/vendor/github.com/cespare/xxhash/v2/README.md b/vendor/github.com/cespare/xxhash/v2/README.md
index 792b4a60b..8bf0e5b78 100644
--- a/vendor/github.com/cespare/xxhash/v2/README.md
+++ b/vendor/github.com/cespare/xxhash/v2/README.md
@@ -3,8 +3,7 @@
[![Go Reference](https://pkg.go.dev/badge/github.com/cespare/xxhash/v2.svg)](https://pkg.go.dev/github.com/cespare/xxhash/v2)
[![Test](https://github.com/cespare/xxhash/actions/workflows/test.yml/badge.svg)](https://github.com/cespare/xxhash/actions/workflows/test.yml)
-xxhash is a Go implementation of the 64-bit
-[xxHash](http://cyan4973.github.io/xxHash/) algorithm, XXH64. This is a
+xxhash is a Go implementation of the 64-bit [xxHash] algorithm, XXH64. This is a
high-quality hashing algorithm that is much faster than anything in the Go
standard library.
@@ -25,8 +24,11 @@ func (*Digest) WriteString(string) (int, error)
func (*Digest) Sum64() uint64
```
-This implementation provides a fast pure-Go implementation and an even faster
-assembly implementation for amd64.
+The package is written with optimized pure Go and also contains even faster
+assembly implementations for amd64 and arm64. If desired, the `purego` build tag
+opts into using the Go code even on those architectures.
+
+[xxHash]: http://cyan4973.github.io/xxHash/
## Compatibility
@@ -45,19 +47,20 @@ I recommend using the latest release of Go.
Here are some quick benchmarks comparing the pure-Go and assembly
implementations of Sum64.
-| input size | purego | asm |
-| --- | --- | --- |
-| 5 B | 979.66 MB/s | 1291.17 MB/s |
-| 100 B | 7475.26 MB/s | 7973.40 MB/s |
-| 4 KB | 17573.46 MB/s | 17602.65 MB/s |
-| 10 MB | 17131.46 MB/s | 17142.16 MB/s |
+| input size | purego | asm |
+| ---------- | --------- | --------- |
+| 4 B | 1.3 GB/s | 1.2 GB/s |
+| 16 B | 2.9 GB/s | 3.5 GB/s |
+| 100 B | 6.9 GB/s | 8.1 GB/s |
+| 4 KB | 11.7 GB/s | 16.7 GB/s |
+| 10 MB | 12.0 GB/s | 17.3 GB/s |
-These numbers were generated on Ubuntu 18.04 with an Intel i7-8700K CPU using
-the following commands under Go 1.11.2:
+These numbers were generated on Ubuntu 20.04 with an Intel Xeon Platinum 8252C
+CPU using the following commands under Go 1.19.2:
```
-$ go test -tags purego -benchtime 10s -bench '/xxhash,direct,bytes'
-$ go test -benchtime 10s -bench '/xxhash,direct,bytes'
+benchstat <(go test -tags purego -benchtime 500ms -count 15 -bench 'Sum64$')
+benchstat <(go test -benchtime 500ms -count 15 -bench 'Sum64$')
```
## Projects using this package
diff --git a/vendor/github.com/cespare/xxhash/v2/testall.sh b/vendor/github.com/cespare/xxhash/v2/testall.sh
new file mode 100644
index 000000000..94b9c4439
--- /dev/null
+++ b/vendor/github.com/cespare/xxhash/v2/testall.sh
@@ -0,0 +1,10 @@
+#!/bin/bash
+set -eu -o pipefail
+
+# Small convenience script for running the tests with various combinations of
+# arch/tags. This assumes we're running on amd64 and have qemu available.
+
+go test ./...
+go test -tags purego ./...
+GOARCH=arm64 go test
+GOARCH=arm64 go test -tags purego
diff --git a/vendor/github.com/cespare/xxhash/v2/xxhash.go b/vendor/github.com/cespare/xxhash/v2/xxhash.go
index 15c835d54..a9e0d45c9 100644
--- a/vendor/github.com/cespare/xxhash/v2/xxhash.go
+++ b/vendor/github.com/cespare/xxhash/v2/xxhash.go
@@ -16,19 +16,11 @@ const (
prime5 uint64 = 2870177450012600261
)
-// NOTE(caleb): I'm using both consts and vars of the primes. Using consts where
-// possible in the Go code is worth a small (but measurable) performance boost
-// by avoiding some MOVQs. Vars are needed for the asm and also are useful for
-// convenience in the Go code in a few places where we need to intentionally
-// avoid constant arithmetic (e.g., v1 := prime1 + prime2 fails because the
-// result overflows a uint64).
-var (
- prime1v = prime1
- prime2v = prime2
- prime3v = prime3
- prime4v = prime4
- prime5v = prime5
-)
+// Store the primes in an array as well.
+//
+// The consts are used when possible in Go code to avoid MOVs but we need a
+// contiguous array for the assembly code.
+var primes = [...]uint64{prime1, prime2, prime3, prime4, prime5}
// Digest implements hash.Hash64.
type Digest struct {
@@ -50,10 +42,10 @@ func New() *Digest {
// Reset clears the Digest's state so that it can be reused.
func (d *Digest) Reset() {
- d.v1 = prime1v + prime2
+ d.v1 = primes[0] + prime2
d.v2 = prime2
d.v3 = 0
- d.v4 = -prime1v
+ d.v4 = -primes[0]
d.total = 0
d.n = 0
}
@@ -69,21 +61,23 @@ func (d *Digest) Write(b []byte) (n int, err error) {
n = len(b)
d.total += uint64(n)
+ memleft := d.mem[d.n&(len(d.mem)-1):]
+
if d.n+n < 32 {
// This new data doesn't even fill the current block.
- copy(d.mem[d.n:], b)
+ copy(memleft, b)
d.n += n
return
}
if d.n > 0 {
// Finish off the partial block.
- copy(d.mem[d.n:], b)
+ c := copy(memleft, b)
d.v1 = round(d.v1, u64(d.mem[0:8]))
d.v2 = round(d.v2, u64(d.mem[8:16]))
d.v3 = round(d.v3, u64(d.mem[16:24]))
d.v4 = round(d.v4, u64(d.mem[24:32]))
- b = b[32-d.n:]
+ b = b[c:]
d.n = 0
}
@@ -133,21 +127,20 @@ func (d *Digest) Sum64() uint64 {
h += d.total
- i, end := 0, d.n
- for ; i+8 <= end; i += 8 {
- k1 := round(0, u64(d.mem[i:i+8]))
+ b := d.mem[:d.n&(len(d.mem)-1)]
+ for ; len(b) >= 8; b = b[8:] {
+ k1 := round(0, u64(b[:8]))
h ^= k1
h = rol27(h)*prime1 + prime4
}
- if i+4 <= end {
- h ^= uint64(u32(d.mem[i:i+4])) * prime1
+ if len(b) >= 4 {
+ h ^= uint64(u32(b[:4])) * prime1
h = rol23(h)*prime2 + prime3
- i += 4
+ b = b[4:]
}
- for i < end {
- h ^= uint64(d.mem[i]) * prime5
+ for ; len(b) > 0; b = b[1:] {
+ h ^= uint64(b[0]) * prime5
h = rol11(h) * prime1
- i++
}
h ^= h >> 33
diff --git a/vendor/github.com/cespare/xxhash/v2/xxhash_amd64.s b/vendor/github.com/cespare/xxhash/v2/xxhash_amd64.s
index be8db5bf7..3e8b13257 100644
--- a/vendor/github.com/cespare/xxhash/v2/xxhash_amd64.s
+++ b/vendor/github.com/cespare/xxhash/v2/xxhash_amd64.s
@@ -1,215 +1,209 @@
+//go:build !appengine && gc && !purego
// +build !appengine
// +build gc
// +build !purego
#include "textflag.h"
-// Register allocation:
-// AX h
-// SI pointer to advance through b
-// DX n
-// BX loop end
-// R8 v1, k1
-// R9 v2
-// R10 v3
-// R11 v4
-// R12 tmp
-// R13 prime1v
-// R14 prime2v
-// DI prime4v
-
-// round reads from and advances the buffer pointer in SI.
-// It assumes that R13 has prime1v and R14 has prime2v.
-#define round(r) \
- MOVQ (SI), R12 \
- ADDQ $8, SI \
- IMULQ R14, R12 \
- ADDQ R12, r \
- ROLQ $31, r \
- IMULQ R13, r
-
-// mergeRound applies a merge round on the two registers acc and val.
-// It assumes that R13 has prime1v, R14 has prime2v, and DI has prime4v.
-#define mergeRound(acc, val) \
- IMULQ R14, val \
- ROLQ $31, val \
- IMULQ R13, val \
- XORQ val, acc \
- IMULQ R13, acc \
- ADDQ DI, acc
+// Registers:
+#define h AX
+#define d AX
+#define p SI // pointer to advance through b
+#define n DX
+#define end BX // loop end
+#define v1 R8
+#define v2 R9
+#define v3 R10
+#define v4 R11
+#define x R12
+#define prime1 R13
+#define prime2 R14
+#define prime4 DI
+
+#define round(acc, x) \
+ IMULQ prime2, x \
+ ADDQ x, acc \
+ ROLQ $31, acc \
+ IMULQ prime1, acc
+
+// round0 performs the operation x = round(0, x).
+#define round0(x) \
+ IMULQ prime2, x \
+ ROLQ $31, x \
+ IMULQ prime1, x
+
+// mergeRound applies a merge round on the two registers acc and x.
+// It assumes that prime1, prime2, and prime4 have been loaded.
+#define mergeRound(acc, x) \
+ round0(x) \
+ XORQ x, acc \
+ IMULQ prime1, acc \
+ ADDQ prime4, acc
+
+// blockLoop processes as many 32-byte blocks as possible,
+// updating v1, v2, v3, and v4. It assumes that there is at least one block
+// to process.
+#define blockLoop() \
+loop: \
+ MOVQ +0(p), x \
+ round(v1, x) \
+ MOVQ +8(p), x \
+ round(v2, x) \
+ MOVQ +16(p), x \
+ round(v3, x) \
+ MOVQ +24(p), x \
+ round(v4, x) \
+ ADDQ $32, p \
+ CMPQ p, end \
+ JLE loop
// func Sum64(b []byte) uint64
-TEXT ·Sum64(SB), NOSPLIT, $0-32
+TEXT ·Sum64(SB), NOSPLIT|NOFRAME, $0-32
// Load fixed primes.
- MOVQ ·prime1v(SB), R13
- MOVQ ·prime2v(SB), R14
- MOVQ ·prime4v(SB), DI
+ MOVQ ·primes+0(SB), prime1
+ MOVQ ·primes+8(SB), prime2
+ MOVQ ·primes+24(SB), prime4
// Load slice.
- MOVQ b_base+0(FP), SI
- MOVQ b_len+8(FP), DX
- LEAQ (SI)(DX*1), BX
+ MOVQ b_base+0(FP), p
+ MOVQ b_len+8(FP), n
+ LEAQ (p)(n*1), end
// The first loop limit will be len(b)-32.
- SUBQ $32, BX
+ SUBQ $32, end
// Check whether we have at least one block.
- CMPQ DX, $32
+ CMPQ n, $32
JLT noBlocks
// Set up initial state (v1, v2, v3, v4).
- MOVQ R13, R8
- ADDQ R14, R8
- MOVQ R14, R9
- XORQ R10, R10
- XORQ R11, R11
- SUBQ R13, R11
-
- // Loop until SI > BX.
-blockLoop:
- round(R8)
- round(R9)
- round(R10)
- round(R11)
-
- CMPQ SI, BX
- JLE blockLoop
-
- MOVQ R8, AX
- ROLQ $1, AX
- MOVQ R9, R12
- ROLQ $7, R12
- ADDQ R12, AX
- MOVQ R10, R12
- ROLQ $12, R12
- ADDQ R12, AX
- MOVQ R11, R12
- ROLQ $18, R12
- ADDQ R12, AX
-
- mergeRound(AX, R8)
- mergeRound(AX, R9)
- mergeRound(AX, R10)
- mergeRound(AX, R11)
+ MOVQ prime1, v1
+ ADDQ prime2, v1
+ MOVQ prime2, v2
+ XORQ v3, v3
+ XORQ v4, v4
+ SUBQ prime1, v4
+
+ blockLoop()
+
+ MOVQ v1, h
+ ROLQ $1, h
+ MOVQ v2, x
+ ROLQ $7, x
+ ADDQ x, h
+ MOVQ v3, x
+ ROLQ $12, x
+ ADDQ x, h
+ MOVQ v4, x
+ ROLQ $18, x
+ ADDQ x, h
+
+ mergeRound(h, v1)
+ mergeRound(h, v2)
+ mergeRound(h, v3)
+ mergeRound(h, v4)
JMP afterBlocks
noBlocks:
- MOVQ ·prime5v(SB), AX
+ MOVQ ·primes+32(SB), h
afterBlocks:
- ADDQ DX, AX
-
- // Right now BX has len(b)-32, and we want to loop until SI > len(b)-8.
- ADDQ $24, BX
-
- CMPQ SI, BX
- JG fourByte
-
-wordLoop:
- // Calculate k1.
- MOVQ (SI), R8
- ADDQ $8, SI
- IMULQ R14, R8
- ROLQ $31, R8
- IMULQ R13, R8
-
- XORQ R8, AX
- ROLQ $27, AX
- IMULQ R13, AX
- ADDQ DI, AX
-
- CMPQ SI, BX
- JLE wordLoop
-
-fourByte:
- ADDQ $4, BX
- CMPQ SI, BX
- JG singles
-
- MOVL (SI), R8
- ADDQ $4, SI
- IMULQ R13, R8
- XORQ R8, AX
-
- ROLQ $23, AX
- IMULQ R14, AX
- ADDQ ·prime3v(SB), AX
-
-singles:
- ADDQ $4, BX
- CMPQ SI, BX
+ ADDQ n, h
+
+ ADDQ $24, end
+ CMPQ p, end
+ JG try4
+
+loop8:
+ MOVQ (p), x
+ ADDQ $8, p
+ round0(x)
+ XORQ x, h
+ ROLQ $27, h
+ IMULQ prime1, h
+ ADDQ prime4, h
+
+ CMPQ p, end
+ JLE loop8
+
+try4:
+ ADDQ $4, end
+ CMPQ p, end
+ JG try1
+
+ MOVL (p), x
+ ADDQ $4, p
+ IMULQ prime1, x
+ XORQ x, h
+
+ ROLQ $23, h
+ IMULQ prime2, h
+ ADDQ ·primes+16(SB), h
+
+try1:
+ ADDQ $4, end
+ CMPQ p, end
JGE finalize
-singlesLoop:
- MOVBQZX (SI), R12
- ADDQ $1, SI
- IMULQ ·prime5v(SB), R12
- XORQ R12, AX
+loop1:
+ MOVBQZX (p), x
+ ADDQ $1, p
+ IMULQ ·primes+32(SB), x
+ XORQ x, h
+ ROLQ $11, h
+ IMULQ prime1, h
- ROLQ $11, AX
- IMULQ R13, AX
-
- CMPQ SI, BX
- JL singlesLoop
+ CMPQ p, end
+ JL loop1
finalize:
- MOVQ AX, R12
- SHRQ $33, R12
- XORQ R12, AX
- IMULQ R14, AX
- MOVQ AX, R12
- SHRQ $29, R12
- XORQ R12, AX
- IMULQ ·prime3v(SB), AX
- MOVQ AX, R12
- SHRQ $32, R12
- XORQ R12, AX
-
- MOVQ AX, ret+24(FP)
+ MOVQ h, x
+ SHRQ $33, x
+ XORQ x, h
+ IMULQ prime2, h
+ MOVQ h, x
+ SHRQ $29, x
+ XORQ x, h
+ IMULQ ·primes+16(SB), h
+ MOVQ h, x
+ SHRQ $32, x
+ XORQ x, h
+
+ MOVQ h, ret+24(FP)
RET
-// writeBlocks uses the same registers as above except that it uses AX to store
-// the d pointer.
-
// func writeBlocks(d *Digest, b []byte) int
-TEXT ·writeBlocks(SB), NOSPLIT, $0-40
+TEXT ·writeBlocks(SB), NOSPLIT|NOFRAME, $0-40
// Load fixed primes needed for round.
- MOVQ ·prime1v(SB), R13
- MOVQ ·prime2v(SB), R14
+ MOVQ ·primes+0(SB), prime1
+ MOVQ ·primes+8(SB), prime2
// Load slice.
- MOVQ b_base+8(FP), SI
- MOVQ b_len+16(FP), DX
- LEAQ (SI)(DX*1), BX
- SUBQ $32, BX
+ MOVQ b_base+8(FP), p
+ MOVQ b_len+16(FP), n
+ LEAQ (p)(n*1), end
+ SUBQ $32, end
// Load vN from d.
- MOVQ d+0(FP), AX
- MOVQ 0(AX), R8 // v1
- MOVQ 8(AX), R9 // v2
- MOVQ 16(AX), R10 // v3
- MOVQ 24(AX), R11 // v4
+ MOVQ s+0(FP), d
+ MOVQ 0(d), v1
+ MOVQ 8(d), v2
+ MOVQ 16(d), v3
+ MOVQ 24(d), v4
// We don't need to check the loop condition here; this function is
// always called with at least one block of data to process.
-blockLoop:
- round(R8)
- round(R9)
- round(R10)
- round(R11)
-
- CMPQ SI, BX
- JLE blockLoop
+ blockLoop()
// Copy vN back to d.
- MOVQ R8, 0(AX)
- MOVQ R9, 8(AX)
- MOVQ R10, 16(AX)
- MOVQ R11, 24(AX)
-
- // The number of bytes written is SI minus the old base pointer.
- SUBQ b_base+8(FP), SI
- MOVQ SI, ret+32(FP)
+ MOVQ v1, 0(d)
+ MOVQ v2, 8(d)
+ MOVQ v3, 16(d)
+ MOVQ v4, 24(d)
+
+ // The number of bytes written is p minus the old base pointer.
+ SUBQ b_base+8(FP), p
+ MOVQ p, ret+32(FP)
RET
diff --git a/vendor/github.com/cespare/xxhash/v2/xxhash_arm64.s b/vendor/github.com/cespare/xxhash/v2/xxhash_arm64.s
new file mode 100644
index 000000000..7e3145a22
--- /dev/null
+++ b/vendor/github.com/cespare/xxhash/v2/xxhash_arm64.s
@@ -0,0 +1,183 @@
+//go:build !appengine && gc && !purego
+// +build !appengine
+// +build gc
+// +build !purego
+
+#include "textflag.h"
+
+// Registers:
+#define digest R1
+#define h R2 // return value
+#define p R3 // input pointer
+#define n R4 // input length
+#define nblocks R5 // n / 32
+#define prime1 R7
+#define prime2 R8
+#define prime3 R9
+#define prime4 R10
+#define prime5 R11
+#define v1 R12
+#define v2 R13
+#define v3 R14
+#define v4 R15
+#define x1 R20
+#define x2 R21
+#define x3 R22
+#define x4 R23
+
+#define round(acc, x) \
+ MADD prime2, acc, x, acc \
+ ROR $64-31, acc \
+ MUL prime1, acc
+
+// round0 performs the operation x = round(0, x).
+#define round0(x) \
+ MUL prime2, x \
+ ROR $64-31, x \
+ MUL prime1, x
+
+#define mergeRound(acc, x) \
+ round0(x) \
+ EOR x, acc \
+ MADD acc, prime4, prime1, acc
+
+// blockLoop processes as many 32-byte blocks as possible,
+// updating v1, v2, v3, and v4. It assumes that n >= 32.
+#define blockLoop() \
+ LSR $5, n, nblocks \
+ PCALIGN $16 \
+ loop: \
+ LDP.P 16(p), (x1, x2) \
+ LDP.P 16(p), (x3, x4) \
+ round(v1, x1) \
+ round(v2, x2) \
+ round(v3, x3) \
+ round(v4, x4) \
+ SUB $1, nblocks \
+ CBNZ nblocks, loop
+
+// func Sum64(b []byte) uint64
+TEXT ·Sum64(SB), NOSPLIT|NOFRAME, $0-32
+ LDP b_base+0(FP), (p, n)
+
+ LDP ·primes+0(SB), (prime1, prime2)
+ LDP ·primes+16(SB), (prime3, prime4)
+ MOVD ·primes+32(SB), prime5
+
+ CMP $32, n
+ CSEL LT, prime5, ZR, h // if n < 32 { h = prime5 } else { h = 0 }
+ BLT afterLoop
+
+ ADD prime1, prime2, v1
+ MOVD prime2, v2
+ MOVD $0, v3
+ NEG prime1, v4
+
+ blockLoop()
+
+ ROR $64-1, v1, x1
+ ROR $64-7, v2, x2
+ ADD x1, x2
+ ROR $64-12, v3, x3
+ ROR $64-18, v4, x4
+ ADD x3, x4
+ ADD x2, x4, h
+
+ mergeRound(h, v1)
+ mergeRound(h, v2)
+ mergeRound(h, v3)
+ mergeRound(h, v4)
+
+afterLoop:
+ ADD n, h
+
+ TBZ $4, n, try8
+ LDP.P 16(p), (x1, x2)
+
+ round0(x1)
+
+ // NOTE: here and below, sequencing the EOR after the ROR (using a
+ // rotated register) is worth a small but measurable speedup for small
+ // inputs.
+ ROR $64-27, h
+ EOR x1 @> 64-27, h, h
+ MADD h, prime4, prime1, h
+
+ round0(x2)
+ ROR $64-27, h
+ EOR x2 @> 64-27, h, h
+ MADD h, prime4, prime1, h
+
+try8:
+ TBZ $3, n, try4
+ MOVD.P 8(p), x1
+
+ round0(x1)
+ ROR $64-27, h
+ EOR x1 @> 64-27, h, h
+ MADD h, prime4, prime1, h
+
+try4:
+ TBZ $2, n, try2
+ MOVWU.P 4(p), x2
+
+ MUL prime1, x2
+ ROR $64-23, h
+ EOR x2 @> 64-23, h, h
+ MADD h, prime3, prime2, h
+
+try2:
+ TBZ $1, n, try1
+ MOVHU.P 2(p), x3
+ AND $255, x3, x1
+ LSR $8, x3, x2
+
+ MUL prime5, x1
+ ROR $64-11, h
+ EOR x1 @> 64-11, h, h
+ MUL prime1, h
+
+ MUL prime5, x2
+ ROR $64-11, h
+ EOR x2 @> 64-11, h, h
+ MUL prime1, h
+
+try1:
+ TBZ $0, n, finalize
+ MOVBU (p), x4
+
+ MUL prime5, x4
+ ROR $64-11, h
+ EOR x4 @> 64-11, h, h
+ MUL prime1, h
+
+finalize:
+ EOR h >> 33, h
+ MUL prime2, h
+ EOR h >> 29, h
+ MUL prime3, h
+ EOR h >> 32, h
+
+ MOVD h, ret+24(FP)
+ RET
+
+// func writeBlocks(d *Digest, b []byte) int
+TEXT ·writeBlocks(SB), NOSPLIT|NOFRAME, $0-40
+ LDP ·primes+0(SB), (prime1, prime2)
+
+ // Load state. Assume v[1-4] are stored contiguously.
+ MOVD d+0(FP), digest
+ LDP 0(digest), (v1, v2)
+ LDP 16(digest), (v3, v4)
+
+ LDP b_base+8(FP), (p, n)
+
+ blockLoop()
+
+ // Store updated state.
+ STP (v1, v2), 0(digest)
+ STP (v3, v4), 16(digest)
+
+ BIC $31, n
+ MOVD n, ret+32(FP)
+ RET
diff --git a/vendor/github.com/cespare/xxhash/v2/xxhash_asm.go b/vendor/github.com/cespare/xxhash/v2/xxhash_asm.go
new file mode 100644
index 000000000..9216e0a40
--- /dev/null
+++ b/vendor/github.com/cespare/xxhash/v2/xxhash_asm.go
@@ -0,0 +1,15 @@
+//go:build (amd64 || arm64) && !appengine && gc && !purego
+// +build amd64 arm64
+// +build !appengine
+// +build gc
+// +build !purego
+
+package xxhash
+
+// Sum64 computes the 64-bit xxHash digest of b.
+//
+//go:noescape
+func Sum64(b []byte) uint64
+
+//go:noescape
+func writeBlocks(d *Digest, b []byte) int
diff --git a/vendor/github.com/cespare/xxhash/v2/xxhash_other.go b/vendor/github.com/cespare/xxhash/v2/xxhash_other.go
index 4a5a82160..26df13bba 100644
--- a/vendor/github.com/cespare/xxhash/v2/xxhash_other.go
+++ b/vendor/github.com/cespare/xxhash/v2/xxhash_other.go
@@ -1,4 +1,5 @@
-// +build !amd64 appengine !gc purego
+//go:build (!amd64 && !arm64) || appengine || !gc || purego
+// +build !amd64,!arm64 appengine !gc purego
package xxhash
@@ -14,10 +15,10 @@ func Sum64(b []byte) uint64 {
var h uint64
if n >= 32 {
- v1 := prime1v + prime2
+ v1 := primes[0] + prime2
v2 := prime2
v3 := uint64(0)
- v4 := -prime1v
+ v4 := -primes[0]
for len(b) >= 32 {
v1 = round(v1, u64(b[0:8:len(b)]))
v2 = round(v2, u64(b[8:16:len(b)]))
@@ -36,19 +37,18 @@ func Sum64(b []byte) uint64 {
h += uint64(n)
- i, end := 0, len(b)
- for ; i+8 <= end; i += 8 {
- k1 := round(0, u64(b[i:i+8:len(b)]))
+ for ; len(b) >= 8; b = b[8:] {
+ k1 := round(0, u64(b[:8]))
h ^= k1
h = rol27(h)*prime1 + prime4
}
- if i+4 <= end {
- h ^= uint64(u32(b[i:i+4:len(b)])) * prime1
+ if len(b) >= 4 {
+ h ^= uint64(u32(b[:4])) * prime1
h = rol23(h)*prime2 + prime3
- i += 4
+ b = b[4:]
}
- for ; i < end; i++ {
- h ^= uint64(b[i]) * prime5
+ for ; len(b) > 0; b = b[1:] {
+ h ^= uint64(b[0]) * prime5
h = rol11(h) * prime1
}
diff --git a/vendor/github.com/cespare/xxhash/v2/xxhash_safe.go b/vendor/github.com/cespare/xxhash/v2/xxhash_safe.go
index fc9bea7a3..e86f1b5fd 100644
--- a/vendor/github.com/cespare/xxhash/v2/xxhash_safe.go
+++ b/vendor/github.com/cespare/xxhash/v2/xxhash_safe.go
@@ -1,3 +1,4 @@
+//go:build appengine
// +build appengine
// This file contains the safe implementations of otherwise unsafe-using code.
diff --git a/vendor/github.com/cespare/xxhash/v2/xxhash_unsafe.go b/vendor/github.com/cespare/xxhash/v2/xxhash_unsafe.go
index 376e0ca2e..1c1638fd8 100644
--- a/vendor/github.com/cespare/xxhash/v2/xxhash_unsafe.go
+++ b/vendor/github.com/cespare/xxhash/v2/xxhash_unsafe.go
@@ -1,3 +1,4 @@
+//go:build !appengine
// +build !appengine
// This file encapsulates usage of unsafe.
@@ -11,7 +12,7 @@ import (
// In the future it's possible that compiler optimizations will make these
// XxxString functions unnecessary by realizing that calls such as
-// Sum64([]byte(s)) don't need to copy s. See https://golang.org/issue/2205.
+// Sum64([]byte(s)) don't need to copy s. See https://go.dev/issue/2205.
// If that happens, even if we keep these functions they can be replaced with
// the trivial safe code.
diff --git a/vendor/github.com/containerd/cgroups/LICENSE b/vendor/github.com/containerd/cgroups/LICENSE
new file mode 100644
index 000000000..261eeb9e9
--- /dev/null
+++ b/vendor/github.com/containerd/cgroups/LICENSE
@@ -0,0 +1,201 @@
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/vendor/github.com/containerd/cgroups/stats/v1/doc.go b/vendor/github.com/containerd/cgroups/stats/v1/doc.go
new file mode 100644
index 000000000..23f3cdd4b
--- /dev/null
+++ b/vendor/github.com/containerd/cgroups/stats/v1/doc.go
@@ -0,0 +1,17 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package v1
diff --git a/vendor/github.com/containerd/cgroups/stats/v1/metrics.pb.go b/vendor/github.com/containerd/cgroups/stats/v1/metrics.pb.go
new file mode 100644
index 000000000..6d2d41770
--- /dev/null
+++ b/vendor/github.com/containerd/cgroups/stats/v1/metrics.pb.go
@@ -0,0 +1,6125 @@
+// Code generated by protoc-gen-gogo. DO NOT EDIT.
+// source: github.com/containerd/cgroups/stats/v1/metrics.proto
+
+package v1
+
+import (
+ fmt "fmt"
+ _ "github.com/gogo/protobuf/gogoproto"
+ proto "github.com/gogo/protobuf/proto"
+ io "io"
+ math "math"
+ math_bits "math/bits"
+ reflect "reflect"
+ strings "strings"
+)
+
+// Reference imports to suppress errors if they are not otherwise used.
+var _ = proto.Marshal
+var _ = fmt.Errorf
+var _ = math.Inf
+
+// This is a compile-time assertion to ensure that this generated file
+// is compatible with the proto package it is being compiled against.
+// A compilation error at this line likely means your copy of the
+// proto package needs to be updated.
+const _ = proto.GoGoProtoPackageIsVersion3 // please upgrade the proto package
+
+type Metrics struct {
+ Hugetlb []*HugetlbStat `protobuf:"bytes,1,rep,name=hugetlb,proto3" json:"hugetlb,omitempty"`
+ Pids *PidsStat `protobuf:"bytes,2,opt,name=pids,proto3" json:"pids,omitempty"`
+ CPU *CPUStat `protobuf:"bytes,3,opt,name=cpu,proto3" json:"cpu,omitempty"`
+ Memory *MemoryStat `protobuf:"bytes,4,opt,name=memory,proto3" json:"memory,omitempty"`
+ Blkio *BlkIOStat `protobuf:"bytes,5,opt,name=blkio,proto3" json:"blkio,omitempty"`
+ Rdma *RdmaStat `protobuf:"bytes,6,opt,name=rdma,proto3" json:"rdma,omitempty"`
+ Network []*NetworkStat `protobuf:"bytes,7,rep,name=network,proto3" json:"network,omitempty"`
+ CgroupStats *CgroupStats `protobuf:"bytes,8,opt,name=cgroup_stats,json=cgroupStats,proto3" json:"cgroup_stats,omitempty"`
+ MemoryOomControl *MemoryOomControl `protobuf:"bytes,9,opt,name=memory_oom_control,json=memoryOomControl,proto3" json:"memory_oom_control,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *Metrics) Reset() { *m = Metrics{} }
+func (*Metrics) ProtoMessage() {}
+func (*Metrics) Descriptor() ([]byte, []int) {
+ return fileDescriptor_a17b2d87c332bfaa, []int{0}
+}
+func (m *Metrics) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *Metrics) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_Metrics.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
+ }
+}
+func (m *Metrics) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Metrics.Merge(m, src)
+}
+func (m *Metrics) XXX_Size() int {
+ return m.Size()
+}
+func (m *Metrics) XXX_DiscardUnknown() {
+ xxx_messageInfo_Metrics.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_Metrics proto.InternalMessageInfo
+
+type HugetlbStat struct {
+ Usage uint64 `protobuf:"varint,1,opt,name=usage,proto3" json:"usage,omitempty"`
+ Max uint64 `protobuf:"varint,2,opt,name=max,proto3" json:"max,omitempty"`
+ Failcnt uint64 `protobuf:"varint,3,opt,name=failcnt,proto3" json:"failcnt,omitempty"`
+ Pagesize string `protobuf:"bytes,4,opt,name=pagesize,proto3" json:"pagesize,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *HugetlbStat) Reset() { *m = HugetlbStat{} }
+func (*HugetlbStat) ProtoMessage() {}
+func (*HugetlbStat) Descriptor() ([]byte, []int) {
+ return fileDescriptor_a17b2d87c332bfaa, []int{1}
+}
+func (m *HugetlbStat) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *HugetlbStat) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_HugetlbStat.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
+ }
+}
+func (m *HugetlbStat) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_HugetlbStat.Merge(m, src)
+}
+func (m *HugetlbStat) XXX_Size() int {
+ return m.Size()
+}
+func (m *HugetlbStat) XXX_DiscardUnknown() {
+ xxx_messageInfo_HugetlbStat.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_HugetlbStat proto.InternalMessageInfo
+
+type PidsStat struct {
+ Current uint64 `protobuf:"varint,1,opt,name=current,proto3" json:"current,omitempty"`
+ Limit uint64 `protobuf:"varint,2,opt,name=limit,proto3" json:"limit,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *PidsStat) Reset() { *m = PidsStat{} }
+func (*PidsStat) ProtoMessage() {}
+func (*PidsStat) Descriptor() ([]byte, []int) {
+ return fileDescriptor_a17b2d87c332bfaa, []int{2}
+}
+func (m *PidsStat) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *PidsStat) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_PidsStat.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
+ }
+}
+func (m *PidsStat) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_PidsStat.Merge(m, src)
+}
+func (m *PidsStat) XXX_Size() int {
+ return m.Size()
+}
+func (m *PidsStat) XXX_DiscardUnknown() {
+ xxx_messageInfo_PidsStat.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_PidsStat proto.InternalMessageInfo
+
+type CPUStat struct {
+ Usage *CPUUsage `protobuf:"bytes,1,opt,name=usage,proto3" json:"usage,omitempty"`
+ Throttling *Throttle `protobuf:"bytes,2,opt,name=throttling,proto3" json:"throttling,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *CPUStat) Reset() { *m = CPUStat{} }
+func (*CPUStat) ProtoMessage() {}
+func (*CPUStat) Descriptor() ([]byte, []int) {
+ return fileDescriptor_a17b2d87c332bfaa, []int{3}
+}
+func (m *CPUStat) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *CPUStat) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_CPUStat.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
+ }
+}
+func (m *CPUStat) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_CPUStat.Merge(m, src)
+}
+func (m *CPUStat) XXX_Size() int {
+ return m.Size()
+}
+func (m *CPUStat) XXX_DiscardUnknown() {
+ xxx_messageInfo_CPUStat.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_CPUStat proto.InternalMessageInfo
+
+type CPUUsage struct {
+ // values in nanoseconds
+ Total uint64 `protobuf:"varint,1,opt,name=total,proto3" json:"total,omitempty"`
+ Kernel uint64 `protobuf:"varint,2,opt,name=kernel,proto3" json:"kernel,omitempty"`
+ User uint64 `protobuf:"varint,3,opt,name=user,proto3" json:"user,omitempty"`
+ PerCPU []uint64 `protobuf:"varint,4,rep,packed,name=per_cpu,json=perCpu,proto3" json:"per_cpu,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *CPUUsage) Reset() { *m = CPUUsage{} }
+func (*CPUUsage) ProtoMessage() {}
+func (*CPUUsage) Descriptor() ([]byte, []int) {
+ return fileDescriptor_a17b2d87c332bfaa, []int{4}
+}
+func (m *CPUUsage) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *CPUUsage) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_CPUUsage.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
+ }
+}
+func (m *CPUUsage) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_CPUUsage.Merge(m, src)
+}
+func (m *CPUUsage) XXX_Size() int {
+ return m.Size()
+}
+func (m *CPUUsage) XXX_DiscardUnknown() {
+ xxx_messageInfo_CPUUsage.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_CPUUsage proto.InternalMessageInfo
+
+type Throttle struct {
+ Periods uint64 `protobuf:"varint,1,opt,name=periods,proto3" json:"periods,omitempty"`
+ ThrottledPeriods uint64 `protobuf:"varint,2,opt,name=throttled_periods,json=throttledPeriods,proto3" json:"throttled_periods,omitempty"`
+ ThrottledTime uint64 `protobuf:"varint,3,opt,name=throttled_time,json=throttledTime,proto3" json:"throttled_time,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *Throttle) Reset() { *m = Throttle{} }
+func (*Throttle) ProtoMessage() {}
+func (*Throttle) Descriptor() ([]byte, []int) {
+ return fileDescriptor_a17b2d87c332bfaa, []int{5}
+}
+func (m *Throttle) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *Throttle) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_Throttle.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
+ }
+}
+func (m *Throttle) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_Throttle.Merge(m, src)
+}
+func (m *Throttle) XXX_Size() int {
+ return m.Size()
+}
+func (m *Throttle) XXX_DiscardUnknown() {
+ xxx_messageInfo_Throttle.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_Throttle proto.InternalMessageInfo
+
+type MemoryStat struct {
+ Cache uint64 `protobuf:"varint,1,opt,name=cache,proto3" json:"cache,omitempty"`
+ RSS uint64 `protobuf:"varint,2,opt,name=rss,proto3" json:"rss,omitempty"`
+ RSSHuge uint64 `protobuf:"varint,3,opt,name=rss_huge,json=rssHuge,proto3" json:"rss_huge,omitempty"`
+ MappedFile uint64 `protobuf:"varint,4,opt,name=mapped_file,json=mappedFile,proto3" json:"mapped_file,omitempty"`
+ Dirty uint64 `protobuf:"varint,5,opt,name=dirty,proto3" json:"dirty,omitempty"`
+ Writeback uint64 `protobuf:"varint,6,opt,name=writeback,proto3" json:"writeback,omitempty"`
+ PgPgIn uint64 `protobuf:"varint,7,opt,name=pg_pg_in,json=pgPgIn,proto3" json:"pg_pg_in,omitempty"`
+ PgPgOut uint64 `protobuf:"varint,8,opt,name=pg_pg_out,json=pgPgOut,proto3" json:"pg_pg_out,omitempty"`
+ PgFault uint64 `protobuf:"varint,9,opt,name=pg_fault,json=pgFault,proto3" json:"pg_fault,omitempty"`
+ PgMajFault uint64 `protobuf:"varint,10,opt,name=pg_maj_fault,json=pgMajFault,proto3" json:"pg_maj_fault,omitempty"`
+ InactiveAnon uint64 `protobuf:"varint,11,opt,name=inactive_anon,json=inactiveAnon,proto3" json:"inactive_anon,omitempty"`
+ ActiveAnon uint64 `protobuf:"varint,12,opt,name=active_anon,json=activeAnon,proto3" json:"active_anon,omitempty"`
+ InactiveFile uint64 `protobuf:"varint,13,opt,name=inactive_file,json=inactiveFile,proto3" json:"inactive_file,omitempty"`
+ ActiveFile uint64 `protobuf:"varint,14,opt,name=active_file,json=activeFile,proto3" json:"active_file,omitempty"`
+ Unevictable uint64 `protobuf:"varint,15,opt,name=unevictable,proto3" json:"unevictable,omitempty"`
+ HierarchicalMemoryLimit uint64 `protobuf:"varint,16,opt,name=hierarchical_memory_limit,json=hierarchicalMemoryLimit,proto3" json:"hierarchical_memory_limit,omitempty"`
+ HierarchicalSwapLimit uint64 `protobuf:"varint,17,opt,name=hierarchical_swap_limit,json=hierarchicalSwapLimit,proto3" json:"hierarchical_swap_limit,omitempty"`
+ TotalCache uint64 `protobuf:"varint,18,opt,name=total_cache,json=totalCache,proto3" json:"total_cache,omitempty"`
+ TotalRSS uint64 `protobuf:"varint,19,opt,name=total_rss,json=totalRss,proto3" json:"total_rss,omitempty"`
+ TotalRSSHuge uint64 `protobuf:"varint,20,opt,name=total_rss_huge,json=totalRssHuge,proto3" json:"total_rss_huge,omitempty"`
+ TotalMappedFile uint64 `protobuf:"varint,21,opt,name=total_mapped_file,json=totalMappedFile,proto3" json:"total_mapped_file,omitempty"`
+ TotalDirty uint64 `protobuf:"varint,22,opt,name=total_dirty,json=totalDirty,proto3" json:"total_dirty,omitempty"`
+ TotalWriteback uint64 `protobuf:"varint,23,opt,name=total_writeback,json=totalWriteback,proto3" json:"total_writeback,omitempty"`
+ TotalPgPgIn uint64 `protobuf:"varint,24,opt,name=total_pg_pg_in,json=totalPgPgIn,proto3" json:"total_pg_pg_in,omitempty"`
+ TotalPgPgOut uint64 `protobuf:"varint,25,opt,name=total_pg_pg_out,json=totalPgPgOut,proto3" json:"total_pg_pg_out,omitempty"`
+ TotalPgFault uint64 `protobuf:"varint,26,opt,name=total_pg_fault,json=totalPgFault,proto3" json:"total_pg_fault,omitempty"`
+ TotalPgMajFault uint64 `protobuf:"varint,27,opt,name=total_pg_maj_fault,json=totalPgMajFault,proto3" json:"total_pg_maj_fault,omitempty"`
+ TotalInactiveAnon uint64 `protobuf:"varint,28,opt,name=total_inactive_anon,json=totalInactiveAnon,proto3" json:"total_inactive_anon,omitempty"`
+ TotalActiveAnon uint64 `protobuf:"varint,29,opt,name=total_active_anon,json=totalActiveAnon,proto3" json:"total_active_anon,omitempty"`
+ TotalInactiveFile uint64 `protobuf:"varint,30,opt,name=total_inactive_file,json=totalInactiveFile,proto3" json:"total_inactive_file,omitempty"`
+ TotalActiveFile uint64 `protobuf:"varint,31,opt,name=total_active_file,json=totalActiveFile,proto3" json:"total_active_file,omitempty"`
+ TotalUnevictable uint64 `protobuf:"varint,32,opt,name=total_unevictable,json=totalUnevictable,proto3" json:"total_unevictable,omitempty"`
+ Usage *MemoryEntry `protobuf:"bytes,33,opt,name=usage,proto3" json:"usage,omitempty"`
+ Swap *MemoryEntry `protobuf:"bytes,34,opt,name=swap,proto3" json:"swap,omitempty"`
+ Kernel *MemoryEntry `protobuf:"bytes,35,opt,name=kernel,proto3" json:"kernel,omitempty"`
+ KernelTCP *MemoryEntry `protobuf:"bytes,36,opt,name=kernel_tcp,json=kernelTcp,proto3" json:"kernel_tcp,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *MemoryStat) Reset() { *m = MemoryStat{} }
+func (*MemoryStat) ProtoMessage() {}
+func (*MemoryStat) Descriptor() ([]byte, []int) {
+ return fileDescriptor_a17b2d87c332bfaa, []int{6}
+}
+func (m *MemoryStat) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *MemoryStat) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_MemoryStat.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
+ }
+}
+func (m *MemoryStat) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_MemoryStat.Merge(m, src)
+}
+func (m *MemoryStat) XXX_Size() int {
+ return m.Size()
+}
+func (m *MemoryStat) XXX_DiscardUnknown() {
+ xxx_messageInfo_MemoryStat.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_MemoryStat proto.InternalMessageInfo
+
+type MemoryEntry struct {
+ Limit uint64 `protobuf:"varint,1,opt,name=limit,proto3" json:"limit,omitempty"`
+ Usage uint64 `protobuf:"varint,2,opt,name=usage,proto3" json:"usage,omitempty"`
+ Max uint64 `protobuf:"varint,3,opt,name=max,proto3" json:"max,omitempty"`
+ Failcnt uint64 `protobuf:"varint,4,opt,name=failcnt,proto3" json:"failcnt,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *MemoryEntry) Reset() { *m = MemoryEntry{} }
+func (*MemoryEntry) ProtoMessage() {}
+func (*MemoryEntry) Descriptor() ([]byte, []int) {
+ return fileDescriptor_a17b2d87c332bfaa, []int{7}
+}
+func (m *MemoryEntry) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *MemoryEntry) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_MemoryEntry.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
+ }
+}
+func (m *MemoryEntry) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_MemoryEntry.Merge(m, src)
+}
+func (m *MemoryEntry) XXX_Size() int {
+ return m.Size()
+}
+func (m *MemoryEntry) XXX_DiscardUnknown() {
+ xxx_messageInfo_MemoryEntry.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_MemoryEntry proto.InternalMessageInfo
+
+type MemoryOomControl struct {
+ OomKillDisable uint64 `protobuf:"varint,1,opt,name=oom_kill_disable,json=oomKillDisable,proto3" json:"oom_kill_disable,omitempty"`
+ UnderOom uint64 `protobuf:"varint,2,opt,name=under_oom,json=underOom,proto3" json:"under_oom,omitempty"`
+ OomKill uint64 `protobuf:"varint,3,opt,name=oom_kill,json=oomKill,proto3" json:"oom_kill,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *MemoryOomControl) Reset() { *m = MemoryOomControl{} }
+func (*MemoryOomControl) ProtoMessage() {}
+func (*MemoryOomControl) Descriptor() ([]byte, []int) {
+ return fileDescriptor_a17b2d87c332bfaa, []int{8}
+}
+func (m *MemoryOomControl) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *MemoryOomControl) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_MemoryOomControl.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
+ }
+}
+func (m *MemoryOomControl) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_MemoryOomControl.Merge(m, src)
+}
+func (m *MemoryOomControl) XXX_Size() int {
+ return m.Size()
+}
+func (m *MemoryOomControl) XXX_DiscardUnknown() {
+ xxx_messageInfo_MemoryOomControl.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_MemoryOomControl proto.InternalMessageInfo
+
+type BlkIOStat struct {
+ IoServiceBytesRecursive []*BlkIOEntry `protobuf:"bytes,1,rep,name=io_service_bytes_recursive,json=ioServiceBytesRecursive,proto3" json:"io_service_bytes_recursive,omitempty"`
+ IoServicedRecursive []*BlkIOEntry `protobuf:"bytes,2,rep,name=io_serviced_recursive,json=ioServicedRecursive,proto3" json:"io_serviced_recursive,omitempty"`
+ IoQueuedRecursive []*BlkIOEntry `protobuf:"bytes,3,rep,name=io_queued_recursive,json=ioQueuedRecursive,proto3" json:"io_queued_recursive,omitempty"`
+ IoServiceTimeRecursive []*BlkIOEntry `protobuf:"bytes,4,rep,name=io_service_time_recursive,json=ioServiceTimeRecursive,proto3" json:"io_service_time_recursive,omitempty"`
+ IoWaitTimeRecursive []*BlkIOEntry `protobuf:"bytes,5,rep,name=io_wait_time_recursive,json=ioWaitTimeRecursive,proto3" json:"io_wait_time_recursive,omitempty"`
+ IoMergedRecursive []*BlkIOEntry `protobuf:"bytes,6,rep,name=io_merged_recursive,json=ioMergedRecursive,proto3" json:"io_merged_recursive,omitempty"`
+ IoTimeRecursive []*BlkIOEntry `protobuf:"bytes,7,rep,name=io_time_recursive,json=ioTimeRecursive,proto3" json:"io_time_recursive,omitempty"`
+ SectorsRecursive []*BlkIOEntry `protobuf:"bytes,8,rep,name=sectors_recursive,json=sectorsRecursive,proto3" json:"sectors_recursive,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *BlkIOStat) Reset() { *m = BlkIOStat{} }
+func (*BlkIOStat) ProtoMessage() {}
+func (*BlkIOStat) Descriptor() ([]byte, []int) {
+ return fileDescriptor_a17b2d87c332bfaa, []int{9}
+}
+func (m *BlkIOStat) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *BlkIOStat) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_BlkIOStat.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
+ }
+}
+func (m *BlkIOStat) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_BlkIOStat.Merge(m, src)
+}
+func (m *BlkIOStat) XXX_Size() int {
+ return m.Size()
+}
+func (m *BlkIOStat) XXX_DiscardUnknown() {
+ xxx_messageInfo_BlkIOStat.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_BlkIOStat proto.InternalMessageInfo
+
+type BlkIOEntry struct {
+ Op string `protobuf:"bytes,1,opt,name=op,proto3" json:"op,omitempty"`
+ Device string `protobuf:"bytes,2,opt,name=device,proto3" json:"device,omitempty"`
+ Major uint64 `protobuf:"varint,3,opt,name=major,proto3" json:"major,omitempty"`
+ Minor uint64 `protobuf:"varint,4,opt,name=minor,proto3" json:"minor,omitempty"`
+ Value uint64 `protobuf:"varint,5,opt,name=value,proto3" json:"value,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *BlkIOEntry) Reset() { *m = BlkIOEntry{} }
+func (*BlkIOEntry) ProtoMessage() {}
+func (*BlkIOEntry) Descriptor() ([]byte, []int) {
+ return fileDescriptor_a17b2d87c332bfaa, []int{10}
+}
+func (m *BlkIOEntry) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *BlkIOEntry) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_BlkIOEntry.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
+ }
+}
+func (m *BlkIOEntry) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_BlkIOEntry.Merge(m, src)
+}
+func (m *BlkIOEntry) XXX_Size() int {
+ return m.Size()
+}
+func (m *BlkIOEntry) XXX_DiscardUnknown() {
+ xxx_messageInfo_BlkIOEntry.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_BlkIOEntry proto.InternalMessageInfo
+
+type RdmaStat struct {
+ Current []*RdmaEntry `protobuf:"bytes,1,rep,name=current,proto3" json:"current,omitempty"`
+ Limit []*RdmaEntry `protobuf:"bytes,2,rep,name=limit,proto3" json:"limit,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *RdmaStat) Reset() { *m = RdmaStat{} }
+func (*RdmaStat) ProtoMessage() {}
+func (*RdmaStat) Descriptor() ([]byte, []int) {
+ return fileDescriptor_a17b2d87c332bfaa, []int{11}
+}
+func (m *RdmaStat) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *RdmaStat) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_RdmaStat.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
+ }
+}
+func (m *RdmaStat) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_RdmaStat.Merge(m, src)
+}
+func (m *RdmaStat) XXX_Size() int {
+ return m.Size()
+}
+func (m *RdmaStat) XXX_DiscardUnknown() {
+ xxx_messageInfo_RdmaStat.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_RdmaStat proto.InternalMessageInfo
+
+type RdmaEntry struct {
+ Device string `protobuf:"bytes,1,opt,name=device,proto3" json:"device,omitempty"`
+ HcaHandles uint32 `protobuf:"varint,2,opt,name=hca_handles,json=hcaHandles,proto3" json:"hca_handles,omitempty"`
+ HcaObjects uint32 `protobuf:"varint,3,opt,name=hca_objects,json=hcaObjects,proto3" json:"hca_objects,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *RdmaEntry) Reset() { *m = RdmaEntry{} }
+func (*RdmaEntry) ProtoMessage() {}
+func (*RdmaEntry) Descriptor() ([]byte, []int) {
+ return fileDescriptor_a17b2d87c332bfaa, []int{12}
+}
+func (m *RdmaEntry) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *RdmaEntry) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_RdmaEntry.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
+ }
+}
+func (m *RdmaEntry) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_RdmaEntry.Merge(m, src)
+}
+func (m *RdmaEntry) XXX_Size() int {
+ return m.Size()
+}
+func (m *RdmaEntry) XXX_DiscardUnknown() {
+ xxx_messageInfo_RdmaEntry.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_RdmaEntry proto.InternalMessageInfo
+
+type NetworkStat struct {
+ Name string `protobuf:"bytes,1,opt,name=name,proto3" json:"name,omitempty"`
+ RxBytes uint64 `protobuf:"varint,2,opt,name=rx_bytes,json=rxBytes,proto3" json:"rx_bytes,omitempty"`
+ RxPackets uint64 `protobuf:"varint,3,opt,name=rx_packets,json=rxPackets,proto3" json:"rx_packets,omitempty"`
+ RxErrors uint64 `protobuf:"varint,4,opt,name=rx_errors,json=rxErrors,proto3" json:"rx_errors,omitempty"`
+ RxDropped uint64 `protobuf:"varint,5,opt,name=rx_dropped,json=rxDropped,proto3" json:"rx_dropped,omitempty"`
+ TxBytes uint64 `protobuf:"varint,6,opt,name=tx_bytes,json=txBytes,proto3" json:"tx_bytes,omitempty"`
+ TxPackets uint64 `protobuf:"varint,7,opt,name=tx_packets,json=txPackets,proto3" json:"tx_packets,omitempty"`
+ TxErrors uint64 `protobuf:"varint,8,opt,name=tx_errors,json=txErrors,proto3" json:"tx_errors,omitempty"`
+ TxDropped uint64 `protobuf:"varint,9,opt,name=tx_dropped,json=txDropped,proto3" json:"tx_dropped,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *NetworkStat) Reset() { *m = NetworkStat{} }
+func (*NetworkStat) ProtoMessage() {}
+func (*NetworkStat) Descriptor() ([]byte, []int) {
+ return fileDescriptor_a17b2d87c332bfaa, []int{13}
+}
+func (m *NetworkStat) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *NetworkStat) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_NetworkStat.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
+ }
+}
+func (m *NetworkStat) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_NetworkStat.Merge(m, src)
+}
+func (m *NetworkStat) XXX_Size() int {
+ return m.Size()
+}
+func (m *NetworkStat) XXX_DiscardUnknown() {
+ xxx_messageInfo_NetworkStat.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_NetworkStat proto.InternalMessageInfo
+
+// CgroupStats exports per-cgroup statistics.
+type CgroupStats struct {
+ // number of tasks sleeping
+ NrSleeping uint64 `protobuf:"varint,1,opt,name=nr_sleeping,json=nrSleeping,proto3" json:"nr_sleeping,omitempty"`
+ // number of tasks running
+ NrRunning uint64 `protobuf:"varint,2,opt,name=nr_running,json=nrRunning,proto3" json:"nr_running,omitempty"`
+ // number of tasks in stopped state
+ NrStopped uint64 `protobuf:"varint,3,opt,name=nr_stopped,json=nrStopped,proto3" json:"nr_stopped,omitempty"`
+ // number of tasks in uninterruptible state
+ NrUninterruptible uint64 `protobuf:"varint,4,opt,name=nr_uninterruptible,json=nrUninterruptible,proto3" json:"nr_uninterruptible,omitempty"`
+ // number of tasks waiting on IO
+ NrIoWait uint64 `protobuf:"varint,5,opt,name=nr_io_wait,json=nrIoWait,proto3" json:"nr_io_wait,omitempty"`
+ XXX_NoUnkeyedLiteral struct{} `json:"-"`
+ XXX_unrecognized []byte `json:"-"`
+ XXX_sizecache int32 `json:"-"`
+}
+
+func (m *CgroupStats) Reset() { *m = CgroupStats{} }
+func (*CgroupStats) ProtoMessage() {}
+func (*CgroupStats) Descriptor() ([]byte, []int) {
+ return fileDescriptor_a17b2d87c332bfaa, []int{14}
+}
+func (m *CgroupStats) XXX_Unmarshal(b []byte) error {
+ return m.Unmarshal(b)
+}
+func (m *CgroupStats) XXX_Marshal(b []byte, deterministic bool) ([]byte, error) {
+ if deterministic {
+ return xxx_messageInfo_CgroupStats.Marshal(b, m, deterministic)
+ } else {
+ b = b[:cap(b)]
+ n, err := m.MarshalToSizedBuffer(b)
+ if err != nil {
+ return nil, err
+ }
+ return b[:n], nil
+ }
+}
+func (m *CgroupStats) XXX_Merge(src proto.Message) {
+ xxx_messageInfo_CgroupStats.Merge(m, src)
+}
+func (m *CgroupStats) XXX_Size() int {
+ return m.Size()
+}
+func (m *CgroupStats) XXX_DiscardUnknown() {
+ xxx_messageInfo_CgroupStats.DiscardUnknown(m)
+}
+
+var xxx_messageInfo_CgroupStats proto.InternalMessageInfo
+
+func init() {
+ proto.RegisterType((*Metrics)(nil), "io.containerd.cgroups.v1.Metrics")
+ proto.RegisterType((*HugetlbStat)(nil), "io.containerd.cgroups.v1.HugetlbStat")
+ proto.RegisterType((*PidsStat)(nil), "io.containerd.cgroups.v1.PidsStat")
+ proto.RegisterType((*CPUStat)(nil), "io.containerd.cgroups.v1.CPUStat")
+ proto.RegisterType((*CPUUsage)(nil), "io.containerd.cgroups.v1.CPUUsage")
+ proto.RegisterType((*Throttle)(nil), "io.containerd.cgroups.v1.Throttle")
+ proto.RegisterType((*MemoryStat)(nil), "io.containerd.cgroups.v1.MemoryStat")
+ proto.RegisterType((*MemoryEntry)(nil), "io.containerd.cgroups.v1.MemoryEntry")
+ proto.RegisterType((*MemoryOomControl)(nil), "io.containerd.cgroups.v1.MemoryOomControl")
+ proto.RegisterType((*BlkIOStat)(nil), "io.containerd.cgroups.v1.BlkIOStat")
+ proto.RegisterType((*BlkIOEntry)(nil), "io.containerd.cgroups.v1.BlkIOEntry")
+ proto.RegisterType((*RdmaStat)(nil), "io.containerd.cgroups.v1.RdmaStat")
+ proto.RegisterType((*RdmaEntry)(nil), "io.containerd.cgroups.v1.RdmaEntry")
+ proto.RegisterType((*NetworkStat)(nil), "io.containerd.cgroups.v1.NetworkStat")
+ proto.RegisterType((*CgroupStats)(nil), "io.containerd.cgroups.v1.CgroupStats")
+}
+
+func init() {
+ proto.RegisterFile("github.com/containerd/cgroups/stats/v1/metrics.proto", fileDescriptor_a17b2d87c332bfaa)
+}
+
+var fileDescriptor_a17b2d87c332bfaa = []byte{
+ // 1749 bytes of a gzipped FileDescriptorProto
+ 0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x94, 0x58, 0xcd, 0x72, 0xe3, 0xc6,
+ 0x11, 0x36, 0x45, 0x48, 0x24, 0x9a, 0x92, 0x56, 0x9a, 0xfd, 0x83, 0xe4, 0xb5, 0x28, 0x53, 0xbb,
+ 0x89, 0xe2, 0xad, 0x48, 0x65, 0x27, 0xb5, 0x95, 0x75, 0xec, 0x4a, 0x59, 0x5a, 0xbb, 0x76, 0xcb,
+ 0x51, 0x44, 0x83, 0x52, 0xd9, 0x39, 0xa1, 0x40, 0x70, 0x16, 0x9c, 0x15, 0x80, 0x81, 0x07, 0x03,
+ 0x89, 0xca, 0x29, 0x87, 0x54, 0xe5, 0x94, 0x07, 0xca, 0x1b, 0xf8, 0x98, 0x4b, 0x52, 0xc9, 0x45,
+ 0x15, 0xf3, 0x49, 0x52, 0x33, 0x3d, 0xf8, 0xa1, 0xbc, 0x5a, 0x85, 0x37, 0x76, 0xcf, 0xd7, 0x5f,
+ 0xf7, 0x34, 0xbe, 0x19, 0x34, 0x08, 0xbf, 0x0e, 0x99, 0x1c, 0xe7, 0xc3, 0xbd, 0x80, 0xc7, 0xfb,
+ 0x01, 0x4f, 0xa4, 0xcf, 0x12, 0x2a, 0x46, 0xfb, 0x41, 0x28, 0x78, 0x9e, 0x66, 0xfb, 0x99, 0xf4,
+ 0x65, 0xb6, 0x7f, 0xfe, 0xf1, 0x7e, 0x4c, 0xa5, 0x60, 0x41, 0xb6, 0x97, 0x0a, 0x2e, 0x39, 0x71,
+ 0x18, 0xdf, 0xab, 0xd0, 0x7b, 0x06, 0xbd, 0x77, 0xfe, 0xf1, 0xe6, 0xbd, 0x90, 0x87, 0x5c, 0x83,
+ 0xf6, 0xd5, 0x2f, 0xc4, 0xf7, 0xfe, 0x65, 0x41, 0xeb, 0x08, 0x19, 0xc8, 0xef, 0xa0, 0x35, 0xce,
+ 0x43, 0x2a, 0xa3, 0xa1, 0xd3, 0xd8, 0x6e, 0xee, 0x76, 0x3e, 0x79, 0xb2, 0x77, 0x13, 0xdb, 0xde,
+ 0x4b, 0x04, 0x0e, 0xa4, 0x2f, 0xdd, 0x22, 0x8a, 0x3c, 0x03, 0x2b, 0x65, 0xa3, 0xcc, 0x59, 0xd8,
+ 0x6e, 0xec, 0x76, 0x3e, 0xe9, 0xdd, 0x1c, 0xdd, 0x67, 0xa3, 0x4c, 0x87, 0x6a, 0x3c, 0xf9, 0x0c,
+ 0x9a, 0x41, 0x9a, 0x3b, 0x4d, 0x1d, 0xf6, 0xe1, 0xcd, 0x61, 0x87, 0xfd, 0x53, 0x15, 0x75, 0xd0,
+ 0x9a, 0x5e, 0x75, 0x9b, 0x87, 0xfd, 0x53, 0x57, 0x85, 0x91, 0xcf, 0x60, 0x29, 0xa6, 0x31, 0x17,
+ 0x97, 0x8e, 0xa5, 0x09, 0x1e, 0xdf, 0x4c, 0x70, 0xa4, 0x71, 0x3a, 0xb3, 0x89, 0x21, 0xcf, 0x61,
+ 0x71, 0x18, 0x9d, 0x31, 0xee, 0x2c, 0xea, 0xe0, 0x9d, 0x9b, 0x83, 0x0f, 0xa2, 0xb3, 0x57, 0xc7,
+ 0x3a, 0x16, 0x23, 0xd4, 0x76, 0xc5, 0x28, 0xf6, 0x9d, 0xa5, 0xdb, 0xb6, 0xeb, 0x8e, 0x62, 0x1f,
+ 0xb7, 0xab, 0xf0, 0xaa, 0xcf, 0x09, 0x95, 0x17, 0x5c, 0x9c, 0x39, 0xad, 0xdb, 0xfa, 0xfc, 0x07,
+ 0x04, 0x62, 0x9f, 0x4d, 0x14, 0x79, 0x09, 0xcb, 0x08, 0xf1, 0xb4, 0x0a, 0x9c, 0xb6, 0x2e, 0xe0,
+ 0x1d, 0x2c, 0x87, 0xfa, 0xa7, 0x22, 0xc9, 0xdc, 0x4e, 0x50, 0x19, 0xe4, 0x3b, 0x20, 0xd8, 0x07,
+ 0x8f, 0xf3, 0xd8, 0x53, 0xc1, 0x82, 0x47, 0x8e, 0xad, 0xf9, 0x3e, 0xba, 0xad, 0x8f, 0xc7, 0x3c,
+ 0x3e, 0xc4, 0x08, 0x77, 0x2d, 0xbe, 0xe6, 0xe9, 0x9d, 0x41, 0xa7, 0xa6, 0x11, 0x72, 0x0f, 0x16,
+ 0xf3, 0xcc, 0x0f, 0xa9, 0xd3, 0xd8, 0x6e, 0xec, 0x5a, 0x2e, 0x1a, 0x64, 0x0d, 0x9a, 0xb1, 0x3f,
+ 0xd1, 0x7a, 0xb1, 0x5c, 0xf5, 0x93, 0x38, 0xd0, 0x7a, 0xed, 0xb3, 0x28, 0x48, 0xa4, 0x96, 0x83,
+ 0xe5, 0x16, 0x26, 0xd9, 0x84, 0x76, 0xea, 0x87, 0x34, 0x63, 0x7f, 0xa2, 0xfa, 0x41, 0xdb, 0x6e,
+ 0x69, 0xf7, 0x3e, 0x85, 0x76, 0x21, 0x29, 0xc5, 0x10, 0xe4, 0x42, 0xd0, 0x44, 0x9a, 0x5c, 0x85,
+ 0xa9, 0x6a, 0x88, 0x58, 0xcc, 0xa4, 0xc9, 0x87, 0x46, 0xef, 0xaf, 0x0d, 0x68, 0x19, 0x61, 0x91,
+ 0xdf, 0xd4, 0xab, 0x7c, 0xe7, 0x23, 0x3d, 0xec, 0x9f, 0x9e, 0x2a, 0x64, 0xb1, 0x93, 0x03, 0x00,
+ 0x39, 0x16, 0x5c, 0xca, 0x88, 0x25, 0xe1, 0xed, 0x07, 0xe0, 0x04, 0xb1, 0xd4, 0xad, 0x45, 0xf5,
+ 0xbe, 0x87, 0x76, 0x41, 0xab, 0x6a, 0x95, 0x5c, 0xfa, 0x51, 0xd1, 0x2f, 0x6d, 0x90, 0x07, 0xb0,
+ 0x74, 0x46, 0x45, 0x42, 0x23, 0xb3, 0x05, 0x63, 0x11, 0x02, 0x56, 0x9e, 0x51, 0x61, 0x5a, 0xa6,
+ 0x7f, 0x93, 0x1d, 0x68, 0xa5, 0x54, 0x78, 0xea, 0x60, 0x59, 0xdb, 0xcd, 0x5d, 0xeb, 0x00, 0xa6,
+ 0x57, 0xdd, 0xa5, 0x3e, 0x15, 0xea, 0xe0, 0x2c, 0xa5, 0x54, 0x1c, 0xa6, 0x79, 0x6f, 0x02, 0xed,
+ 0xa2, 0x14, 0xd5, 0xb8, 0x94, 0x0a, 0xc6, 0x47, 0x59, 0xd1, 0x38, 0x63, 0x92, 0xa7, 0xb0, 0x6e,
+ 0xca, 0xa4, 0x23, 0xaf, 0xc0, 0x60, 0x05, 0x6b, 0xe5, 0x42, 0xdf, 0x80, 0x9f, 0xc0, 0x6a, 0x05,
+ 0x96, 0x2c, 0xa6, 0xa6, 0xaa, 0x95, 0xd2, 0x7b, 0xc2, 0x62, 0xda, 0xfb, 0x4f, 0x07, 0xa0, 0x3a,
+ 0x8e, 0x6a, 0xbf, 0x81, 0x1f, 0x8c, 0x4b, 0x7d, 0x68, 0x83, 0x6c, 0x40, 0x53, 0x64, 0x26, 0x15,
+ 0x9e, 0x7a, 0x77, 0x30, 0x70, 0x95, 0x8f, 0xfc, 0x0c, 0xda, 0x22, 0xcb, 0x3c, 0x75, 0xf5, 0x60,
+ 0x82, 0x83, 0xce, 0xf4, 0xaa, 0xdb, 0x72, 0x07, 0x03, 0x25, 0x3b, 0xb7, 0x25, 0xb2, 0x4c, 0xfd,
+ 0x20, 0x5d, 0xe8, 0xc4, 0x7e, 0x9a, 0xd2, 0x91, 0xf7, 0x9a, 0x45, 0xa8, 0x1c, 0xcb, 0x05, 0x74,
+ 0x7d, 0xc5, 0x22, 0xdd, 0xe9, 0x11, 0x13, 0xf2, 0x52, 0x5f, 0x00, 0x96, 0x8b, 0x06, 0x79, 0x04,
+ 0xf6, 0x85, 0x60, 0x92, 0x0e, 0xfd, 0xe0, 0x4c, 0x1f, 0x70, 0xcb, 0xad, 0x1c, 0xc4, 0x81, 0x76,
+ 0x1a, 0x7a, 0x69, 0xe8, 0xb1, 0xc4, 0x69, 0xe1, 0x93, 0x48, 0xc3, 0x7e, 0xf8, 0x2a, 0x21, 0x9b,
+ 0x60, 0xe3, 0x0a, 0xcf, 0xa5, 0x3e, 0x97, 0xaa, 0x8d, 0x61, 0x3f, 0x3c, 0xce, 0x25, 0xd9, 0xd0,
+ 0x51, 0xaf, 0xfd, 0x3c, 0x92, 0xfa, 0x88, 0xe9, 0xa5, 0xaf, 0x94, 0x49, 0xb6, 0x61, 0x39, 0x0d,
+ 0xbd, 0xd8, 0x7f, 0x63, 0x96, 0x01, 0xcb, 0x4c, 0xc3, 0x23, 0xff, 0x0d, 0x22, 0x76, 0x60, 0x85,
+ 0x25, 0x7e, 0x20, 0xd9, 0x39, 0xf5, 0xfc, 0x84, 0x27, 0x4e, 0x47, 0x43, 0x96, 0x0b, 0xe7, 0x17,
+ 0x09, 0x4f, 0xd4, 0x66, 0xeb, 0x90, 0x65, 0x64, 0xa9, 0x01, 0xea, 0x2c, 0xba, 0x1f, 0x2b, 0xb3,
+ 0x2c, 0xba, 0x23, 0x15, 0x8b, 0x86, 0xac, 0xd6, 0x59, 0x34, 0x60, 0x1b, 0x3a, 0x79, 0x42, 0xcf,
+ 0x59, 0x20, 0xfd, 0x61, 0x44, 0x9d, 0x3b, 0x1a, 0x50, 0x77, 0x91, 0x4f, 0x61, 0x63, 0xcc, 0xa8,
+ 0xf0, 0x45, 0x30, 0x66, 0x81, 0x1f, 0x79, 0xe6, 0x92, 0xc1, 0xe3, 0xb7, 0xa6, 0xf1, 0x0f, 0xeb,
+ 0x00, 0x54, 0xc2, 0xef, 0xd5, 0x32, 0x79, 0x06, 0x33, 0x4b, 0x5e, 0x76, 0xe1, 0xa7, 0x26, 0x72,
+ 0x5d, 0x47, 0xde, 0xaf, 0x2f, 0x0f, 0x2e, 0xfc, 0x14, 0xe3, 0xba, 0xd0, 0xd1, 0xa7, 0xc4, 0x43,
+ 0x21, 0x11, 0x2c, 0x5b, 0xbb, 0x0e, 0xb5, 0x9a, 0x7e, 0x01, 0x36, 0x02, 0x94, 0xa6, 0xee, 0x6a,
+ 0xcd, 0x2c, 0x4f, 0xaf, 0xba, 0xed, 0x13, 0xe5, 0x54, 0xc2, 0x6a, 0xeb, 0x65, 0x37, 0xcb, 0xc8,
+ 0x33, 0x58, 0x2d, 0xa1, 0xa8, 0xb1, 0x7b, 0x1a, 0xbf, 0x36, 0xbd, 0xea, 0x2e, 0x17, 0x78, 0x2d,
+ 0xb4, 0xe5, 0x22, 0x46, 0xab, 0xed, 0x23, 0x58, 0xc7, 0xb8, 0xba, 0xe6, 0xee, 0xeb, 0x4a, 0xee,
+ 0xe8, 0x85, 0xa3, 0x4a, 0x78, 0x65, 0xbd, 0x28, 0xbf, 0x07, 0xb5, 0x7a, 0x5f, 0x68, 0x0d, 0xfe,
+ 0x1c, 0x30, 0xc6, 0xab, 0x94, 0xf8, 0x50, 0x83, 0xb0, 0xb6, 0x6f, 0x4b, 0x39, 0xee, 0x14, 0xd5,
+ 0x96, 0xa2, 0x74, 0xf0, 0x91, 0x68, 0x6f, 0x1f, 0x95, 0xf9, 0xa4, 0x60, 0xab, 0xf4, 0xb9, 0x81,
+ 0x0f, 0xbf, 0x44, 0x29, 0x91, 0x3e, 0xae, 0x71, 0xa1, 0x16, 0x37, 0x67, 0x50, 0xa8, 0xc6, 0xa7,
+ 0x40, 0x4a, 0x54, 0xa5, 0xda, 0xf7, 0x6b, 0x1b, 0xed, 0x57, 0xd2, 0xdd, 0x83, 0xbb, 0x08, 0x9e,
+ 0x15, 0xf0, 0x23, 0x8d, 0xc6, 0x7e, 0xbd, 0xaa, 0xab, 0xb8, 0x6c, 0x62, 0x1d, 0xfd, 0x41, 0x8d,
+ 0xfb, 0x8b, 0x0a, 0xfb, 0x53, 0x6e, 0xdd, 0xf2, 0xad, 0xb7, 0x70, 0xeb, 0xa6, 0x5f, 0xe7, 0xd6,
+ 0xe8, 0xee, 0x4f, 0xb8, 0x35, 0xf6, 0x69, 0x81, 0xad, 0x8b, 0x7d, 0xdb, 0x5c, 0x7b, 0x6a, 0xe1,
+ 0xb4, 0xa6, 0xf8, 0xdf, 0x16, 0xaf, 0x8e, 0x0f, 0x6f, 0x7b, 0x19, 0xa3, 0xd6, 0xbf, 0x4c, 0xa4,
+ 0xb8, 0x2c, 0xde, 0x1e, 0xcf, 0xc1, 0x52, 0x2a, 0x77, 0x7a, 0xf3, 0xc4, 0xea, 0x10, 0xf2, 0x79,
+ 0xf9, 0x4a, 0xd8, 0x99, 0x27, 0xb8, 0x78, 0x73, 0x0c, 0x00, 0xf0, 0x97, 0x27, 0x83, 0xd4, 0x79,
+ 0x3c, 0x07, 0xc5, 0xc1, 0xca, 0xf4, 0xaa, 0x6b, 0x7f, 0xad, 0x83, 0x4f, 0x0e, 0xfb, 0xae, 0x8d,
+ 0x3c, 0x27, 0x41, 0xda, 0xa3, 0xd0, 0xa9, 0x01, 0xab, 0xf7, 0x6e, 0xa3, 0xf6, 0xde, 0xad, 0x26,
+ 0x82, 0x85, 0xb7, 0x4c, 0x04, 0xcd, 0xb7, 0x4e, 0x04, 0xd6, 0xcc, 0x44, 0xd0, 0x93, 0xb0, 0x76,
+ 0x7d, 0x10, 0x21, 0xbb, 0xb0, 0xa6, 0x26, 0x99, 0x33, 0x16, 0xa9, 0x73, 0x95, 0xe9, 0x47, 0x86,
+ 0x69, 0x57, 0x39, 0x8f, 0xbf, 0x66, 0x51, 0xf4, 0x02, 0xbd, 0xe4, 0x7d, 0xb0, 0xf3, 0x64, 0x44,
+ 0x85, 0x9a, 0x7c, 0x4c, 0x0d, 0x6d, 0xed, 0x38, 0xe6, 0xb1, 0xba, 0xaa, 0x0b, 0x9a, 0x62, 0x0e,
+ 0x31, 0xe1, 0xbd, 0x7f, 0x2e, 0x82, 0x5d, 0x8e, 0x82, 0xc4, 0x87, 0x4d, 0xc6, 0xbd, 0x8c, 0x8a,
+ 0x73, 0x16, 0x50, 0x6f, 0x78, 0x29, 0x69, 0xe6, 0x09, 0x1a, 0xe4, 0x22, 0x63, 0xe7, 0xd4, 0x8c,
+ 0xd1, 0x8f, 0x6f, 0x99, 0x29, 0xf1, 0x89, 0x3c, 0x64, 0x7c, 0x80, 0x34, 0x07, 0x8a, 0xc5, 0x2d,
+ 0x48, 0xc8, 0x77, 0x70, 0xbf, 0x4a, 0x31, 0xaa, 0xb1, 0x2f, 0xcc, 0xc1, 0x7e, 0xb7, 0x64, 0x1f,
+ 0x55, 0xcc, 0x27, 0x70, 0x97, 0x71, 0xef, 0xfb, 0x9c, 0xe6, 0x33, 0xbc, 0xcd, 0x39, 0x78, 0xd7,
+ 0x19, 0xff, 0x46, 0xc7, 0x57, 0xac, 0x1e, 0x6c, 0xd4, 0x5a, 0xa2, 0x26, 0x80, 0x1a, 0xb7, 0x35,
+ 0x07, 0xf7, 0x83, 0xb2, 0x66, 0x35, 0x31, 0x54, 0x09, 0xfe, 0x08, 0x0f, 0x18, 0xf7, 0x2e, 0x7c,
+ 0x26, 0xaf, 0xb3, 0x2f, 0xce, 0xd7, 0x91, 0x6f, 0x7d, 0x26, 0x67, 0xa9, 0xb1, 0x23, 0x31, 0x15,
+ 0xe1, 0x4c, 0x47, 0x96, 0xe6, 0xeb, 0xc8, 0x91, 0x8e, 0xaf, 0x58, 0xfb, 0xb0, 0xce, 0xf8, 0xf5,
+ 0x5a, 0x5b, 0x73, 0x70, 0xde, 0x61, 0x7c, 0xb6, 0xce, 0x6f, 0x60, 0x3d, 0xa3, 0x81, 0xe4, 0xa2,
+ 0xae, 0xb6, 0xf6, 0x1c, 0x8c, 0x6b, 0x26, 0xbc, 0xa4, 0xec, 0x9d, 0x03, 0x54, 0xeb, 0x64, 0x15,
+ 0x16, 0x78, 0xaa, 0x4f, 0x8e, 0xed, 0x2e, 0xf0, 0x54, 0x4d, 0x9e, 0x23, 0x75, 0xd9, 0xe1, 0x71,
+ 0xb5, 0x5d, 0x63, 0xa9, 0x53, 0x1c, 0xfb, 0x6f, 0x78, 0x31, 0x7a, 0xa2, 0xa1, 0xbd, 0x2c, 0xe1,
+ 0xc2, 0x9c, 0x58, 0x34, 0x94, 0xf7, 0xdc, 0x8f, 0x72, 0x5a, 0x4c, 0x5a, 0xda, 0xe8, 0xfd, 0xa5,
+ 0x01, 0xed, 0xe2, 0x03, 0x89, 0x7c, 0x5e, 0x1f, 0xde, 0x9b, 0xef, 0xfe, 0x1e, 0x53, 0x41, 0xb8,
+ 0x99, 0x72, 0xc2, 0x7f, 0x5e, 0x4d, 0xf8, 0xff, 0x77, 0xb0, 0xf9, 0x0c, 0xa0, 0x60, 0x97, 0xbe,
+ 0xda, 0x6e, 0x1b, 0x33, 0xbb, 0xed, 0x42, 0x67, 0x1c, 0xf8, 0xde, 0xd8, 0x4f, 0x46, 0x11, 0xc5,
+ 0xb9, 0x74, 0xc5, 0x85, 0x71, 0xe0, 0xbf, 0x44, 0x4f, 0x01, 0xe0, 0xc3, 0x37, 0x34, 0x90, 0x99,
+ 0x6e, 0x0a, 0x02, 0x8e, 0xd1, 0xd3, 0xfb, 0xdb, 0x02, 0x74, 0x6a, 0xdf, 0x74, 0x6a, 0x72, 0x4f,
+ 0xfc, 0xb8, 0xc8, 0xa3, 0x7f, 0xab, 0xcb, 0x47, 0x4c, 0xf0, 0x2e, 0x31, 0x17, 0x53, 0x4b, 0x4c,
+ 0xf4, 0xa5, 0x40, 0x3e, 0x00, 0x10, 0x13, 0x2f, 0xf5, 0x83, 0x33, 0x6a, 0xe8, 0x2d, 0xd7, 0x16,
+ 0x93, 0x3e, 0x3a, 0xd4, 0x9d, 0x26, 0x26, 0x1e, 0x15, 0x82, 0x8b, 0xcc, 0xf4, 0xbe, 0x2d, 0x26,
+ 0x5f, 0x6a, 0xdb, 0xc4, 0x8e, 0x04, 0x57, 0x13, 0x88, 0x79, 0x06, 0xb6, 0x98, 0xbc, 0x40, 0x87,
+ 0xca, 0x2a, 0x8b, 0xac, 0x38, 0xf0, 0xb6, 0x64, 0x95, 0x55, 0x56, 0x59, 0x71, 0xe0, 0xb5, 0x65,
+ 0x3d, 0xab, 0x2c, 0xb3, 0xe2, 0xcc, 0xdb, 0x96, 0xb5, 0xac, 0xb2, 0xca, 0x6a, 0x17, 0xb1, 0x26,
+ 0x6b, 0xef, 0xef, 0x0d, 0xe8, 0xd4, 0xbe, 0x4e, 0x55, 0x03, 0x13, 0xe1, 0x65, 0x11, 0xa5, 0xa9,
+ 0xfa, 0x90, 0xc2, 0xab, 0x1b, 0x12, 0x31, 0x30, 0x1e, 0xc5, 0x97, 0x08, 0x4f, 0xe4, 0x49, 0x52,
+ 0x7c, 0x68, 0x59, 0xae, 0x9d, 0x08, 0x17, 0x1d, 0x66, 0x39, 0x93, 0x98, 0xae, 0x59, 0x2c, 0x0f,
+ 0xd0, 0x41, 0x7e, 0x09, 0x24, 0x11, 0x5e, 0x9e, 0xb0, 0x44, 0x52, 0x21, 0xf2, 0x54, 0xb2, 0x61,
+ 0xf9, 0x51, 0xb0, 0x9e, 0x88, 0xd3, 0xd9, 0x05, 0xf2, 0x48, 0xb3, 0x99, 0xcb, 0xc6, 0xb4, 0xac,
+ 0x9d, 0x88, 0x57, 0xfa, 0xe6, 0x38, 0x70, 0x7e, 0xf8, 0x71, 0xeb, 0xbd, 0x7f, 0xff, 0xb8, 0xf5,
+ 0xde, 0x9f, 0xa7, 0x5b, 0x8d, 0x1f, 0xa6, 0x5b, 0x8d, 0x7f, 0x4c, 0xb7, 0x1a, 0xff, 0x9d, 0x6e,
+ 0x35, 0x86, 0x4b, 0xfa, 0xcf, 0x95, 0x5f, 0xfd, 0x2f, 0x00, 0x00, 0xff, 0xff, 0xc4, 0x4e, 0x24,
+ 0x22, 0xc4, 0x11, 0x00, 0x00,
+}
+
+func (m *Metrics) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *Metrics) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Metrics) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.XXX_unrecognized != nil {
+ i -= len(m.XXX_unrecognized)
+ copy(dAtA[i:], m.XXX_unrecognized)
+ }
+ if m.MemoryOomControl != nil {
+ {
+ size, err := m.MemoryOomControl.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintMetrics(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x4a
+ }
+ if m.CgroupStats != nil {
+ {
+ size, err := m.CgroupStats.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintMetrics(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x42
+ }
+ if len(m.Network) > 0 {
+ for iNdEx := len(m.Network) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.Network[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintMetrics(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x3a
+ }
+ }
+ if m.Rdma != nil {
+ {
+ size, err := m.Rdma.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintMetrics(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x32
+ }
+ if m.Blkio != nil {
+ {
+ size, err := m.Blkio.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintMetrics(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x2a
+ }
+ if m.Memory != nil {
+ {
+ size, err := m.Memory.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintMetrics(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x22
+ }
+ if m.CPU != nil {
+ {
+ size, err := m.CPU.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintMetrics(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x1a
+ }
+ if m.Pids != nil {
+ {
+ size, err := m.Pids.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintMetrics(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x12
+ }
+ if len(m.Hugetlb) > 0 {
+ for iNdEx := len(m.Hugetlb) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.Hugetlb[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintMetrics(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0xa
+ }
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *HugetlbStat) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *HugetlbStat) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *HugetlbStat) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.XXX_unrecognized != nil {
+ i -= len(m.XXX_unrecognized)
+ copy(dAtA[i:], m.XXX_unrecognized)
+ }
+ if len(m.Pagesize) > 0 {
+ i -= len(m.Pagesize)
+ copy(dAtA[i:], m.Pagesize)
+ i = encodeVarintMetrics(dAtA, i, uint64(len(m.Pagesize)))
+ i--
+ dAtA[i] = 0x22
+ }
+ if m.Failcnt != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.Failcnt))
+ i--
+ dAtA[i] = 0x18
+ }
+ if m.Max != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.Max))
+ i--
+ dAtA[i] = 0x10
+ }
+ if m.Usage != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.Usage))
+ i--
+ dAtA[i] = 0x8
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *PidsStat) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *PidsStat) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *PidsStat) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.XXX_unrecognized != nil {
+ i -= len(m.XXX_unrecognized)
+ copy(dAtA[i:], m.XXX_unrecognized)
+ }
+ if m.Limit != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.Limit))
+ i--
+ dAtA[i] = 0x10
+ }
+ if m.Current != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.Current))
+ i--
+ dAtA[i] = 0x8
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *CPUStat) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *CPUStat) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *CPUStat) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.XXX_unrecognized != nil {
+ i -= len(m.XXX_unrecognized)
+ copy(dAtA[i:], m.XXX_unrecognized)
+ }
+ if m.Throttling != nil {
+ {
+ size, err := m.Throttling.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintMetrics(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x12
+ }
+ if m.Usage != nil {
+ {
+ size, err := m.Usage.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintMetrics(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0xa
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *CPUUsage) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *CPUUsage) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *CPUUsage) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.XXX_unrecognized != nil {
+ i -= len(m.XXX_unrecognized)
+ copy(dAtA[i:], m.XXX_unrecognized)
+ }
+ if len(m.PerCPU) > 0 {
+ dAtA11 := make([]byte, len(m.PerCPU)*10)
+ var j10 int
+ for _, num := range m.PerCPU {
+ for num >= 1<<7 {
+ dAtA11[j10] = uint8(uint64(num)&0x7f | 0x80)
+ num >>= 7
+ j10++
+ }
+ dAtA11[j10] = uint8(num)
+ j10++
+ }
+ i -= j10
+ copy(dAtA[i:], dAtA11[:j10])
+ i = encodeVarintMetrics(dAtA, i, uint64(j10))
+ i--
+ dAtA[i] = 0x22
+ }
+ if m.User != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.User))
+ i--
+ dAtA[i] = 0x18
+ }
+ if m.Kernel != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.Kernel))
+ i--
+ dAtA[i] = 0x10
+ }
+ if m.Total != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.Total))
+ i--
+ dAtA[i] = 0x8
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *Throttle) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *Throttle) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *Throttle) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.XXX_unrecognized != nil {
+ i -= len(m.XXX_unrecognized)
+ copy(dAtA[i:], m.XXX_unrecognized)
+ }
+ if m.ThrottledTime != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.ThrottledTime))
+ i--
+ dAtA[i] = 0x18
+ }
+ if m.ThrottledPeriods != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.ThrottledPeriods))
+ i--
+ dAtA[i] = 0x10
+ }
+ if m.Periods != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.Periods))
+ i--
+ dAtA[i] = 0x8
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *MemoryStat) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *MemoryStat) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *MemoryStat) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.XXX_unrecognized != nil {
+ i -= len(m.XXX_unrecognized)
+ copy(dAtA[i:], m.XXX_unrecognized)
+ }
+ if m.KernelTCP != nil {
+ {
+ size, err := m.KernelTCP.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintMetrics(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x2
+ i--
+ dAtA[i] = 0xa2
+ }
+ if m.Kernel != nil {
+ {
+ size, err := m.Kernel.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintMetrics(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x2
+ i--
+ dAtA[i] = 0x9a
+ }
+ if m.Swap != nil {
+ {
+ size, err := m.Swap.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintMetrics(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x2
+ i--
+ dAtA[i] = 0x92
+ }
+ if m.Usage != nil {
+ {
+ size, err := m.Usage.MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintMetrics(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x2
+ i--
+ dAtA[i] = 0x8a
+ }
+ if m.TotalUnevictable != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.TotalUnevictable))
+ i--
+ dAtA[i] = 0x2
+ i--
+ dAtA[i] = 0x80
+ }
+ if m.TotalActiveFile != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.TotalActiveFile))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xf8
+ }
+ if m.TotalInactiveFile != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.TotalInactiveFile))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xf0
+ }
+ if m.TotalActiveAnon != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.TotalActiveAnon))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xe8
+ }
+ if m.TotalInactiveAnon != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.TotalInactiveAnon))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xe0
+ }
+ if m.TotalPgMajFault != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.TotalPgMajFault))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xd8
+ }
+ if m.TotalPgFault != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.TotalPgFault))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xd0
+ }
+ if m.TotalPgPgOut != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.TotalPgPgOut))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xc8
+ }
+ if m.TotalPgPgIn != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.TotalPgPgIn))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xc0
+ }
+ if m.TotalWriteback != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.TotalWriteback))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xb8
+ }
+ if m.TotalDirty != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.TotalDirty))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xb0
+ }
+ if m.TotalMappedFile != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.TotalMappedFile))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xa8
+ }
+ if m.TotalRSSHuge != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.TotalRSSHuge))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0xa0
+ }
+ if m.TotalRSS != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.TotalRSS))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0x98
+ }
+ if m.TotalCache != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.TotalCache))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0x90
+ }
+ if m.HierarchicalSwapLimit != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.HierarchicalSwapLimit))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0x88
+ }
+ if m.HierarchicalMemoryLimit != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.HierarchicalMemoryLimit))
+ i--
+ dAtA[i] = 0x1
+ i--
+ dAtA[i] = 0x80
+ }
+ if m.Unevictable != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.Unevictable))
+ i--
+ dAtA[i] = 0x78
+ }
+ if m.ActiveFile != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.ActiveFile))
+ i--
+ dAtA[i] = 0x70
+ }
+ if m.InactiveFile != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.InactiveFile))
+ i--
+ dAtA[i] = 0x68
+ }
+ if m.ActiveAnon != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.ActiveAnon))
+ i--
+ dAtA[i] = 0x60
+ }
+ if m.InactiveAnon != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.InactiveAnon))
+ i--
+ dAtA[i] = 0x58
+ }
+ if m.PgMajFault != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.PgMajFault))
+ i--
+ dAtA[i] = 0x50
+ }
+ if m.PgFault != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.PgFault))
+ i--
+ dAtA[i] = 0x48
+ }
+ if m.PgPgOut != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.PgPgOut))
+ i--
+ dAtA[i] = 0x40
+ }
+ if m.PgPgIn != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.PgPgIn))
+ i--
+ dAtA[i] = 0x38
+ }
+ if m.Writeback != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.Writeback))
+ i--
+ dAtA[i] = 0x30
+ }
+ if m.Dirty != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.Dirty))
+ i--
+ dAtA[i] = 0x28
+ }
+ if m.MappedFile != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.MappedFile))
+ i--
+ dAtA[i] = 0x20
+ }
+ if m.RSSHuge != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.RSSHuge))
+ i--
+ dAtA[i] = 0x18
+ }
+ if m.RSS != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.RSS))
+ i--
+ dAtA[i] = 0x10
+ }
+ if m.Cache != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.Cache))
+ i--
+ dAtA[i] = 0x8
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *MemoryEntry) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *MemoryEntry) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *MemoryEntry) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.XXX_unrecognized != nil {
+ i -= len(m.XXX_unrecognized)
+ copy(dAtA[i:], m.XXX_unrecognized)
+ }
+ if m.Failcnt != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.Failcnt))
+ i--
+ dAtA[i] = 0x20
+ }
+ if m.Max != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.Max))
+ i--
+ dAtA[i] = 0x18
+ }
+ if m.Usage != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.Usage))
+ i--
+ dAtA[i] = 0x10
+ }
+ if m.Limit != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.Limit))
+ i--
+ dAtA[i] = 0x8
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *MemoryOomControl) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *MemoryOomControl) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *MemoryOomControl) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.XXX_unrecognized != nil {
+ i -= len(m.XXX_unrecognized)
+ copy(dAtA[i:], m.XXX_unrecognized)
+ }
+ if m.OomKill != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.OomKill))
+ i--
+ dAtA[i] = 0x18
+ }
+ if m.UnderOom != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.UnderOom))
+ i--
+ dAtA[i] = 0x10
+ }
+ if m.OomKillDisable != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.OomKillDisable))
+ i--
+ dAtA[i] = 0x8
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *BlkIOStat) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *BlkIOStat) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *BlkIOStat) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.XXX_unrecognized != nil {
+ i -= len(m.XXX_unrecognized)
+ copy(dAtA[i:], m.XXX_unrecognized)
+ }
+ if len(m.SectorsRecursive) > 0 {
+ for iNdEx := len(m.SectorsRecursive) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.SectorsRecursive[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintMetrics(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x42
+ }
+ }
+ if len(m.IoTimeRecursive) > 0 {
+ for iNdEx := len(m.IoTimeRecursive) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.IoTimeRecursive[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintMetrics(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x3a
+ }
+ }
+ if len(m.IoMergedRecursive) > 0 {
+ for iNdEx := len(m.IoMergedRecursive) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.IoMergedRecursive[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintMetrics(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x32
+ }
+ }
+ if len(m.IoWaitTimeRecursive) > 0 {
+ for iNdEx := len(m.IoWaitTimeRecursive) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.IoWaitTimeRecursive[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintMetrics(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x2a
+ }
+ }
+ if len(m.IoServiceTimeRecursive) > 0 {
+ for iNdEx := len(m.IoServiceTimeRecursive) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.IoServiceTimeRecursive[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintMetrics(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x22
+ }
+ }
+ if len(m.IoQueuedRecursive) > 0 {
+ for iNdEx := len(m.IoQueuedRecursive) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.IoQueuedRecursive[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintMetrics(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x1a
+ }
+ }
+ if len(m.IoServicedRecursive) > 0 {
+ for iNdEx := len(m.IoServicedRecursive) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.IoServicedRecursive[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintMetrics(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x12
+ }
+ }
+ if len(m.IoServiceBytesRecursive) > 0 {
+ for iNdEx := len(m.IoServiceBytesRecursive) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.IoServiceBytesRecursive[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintMetrics(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0xa
+ }
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *BlkIOEntry) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *BlkIOEntry) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *BlkIOEntry) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.XXX_unrecognized != nil {
+ i -= len(m.XXX_unrecognized)
+ copy(dAtA[i:], m.XXX_unrecognized)
+ }
+ if m.Value != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.Value))
+ i--
+ dAtA[i] = 0x28
+ }
+ if m.Minor != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.Minor))
+ i--
+ dAtA[i] = 0x20
+ }
+ if m.Major != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.Major))
+ i--
+ dAtA[i] = 0x18
+ }
+ if len(m.Device) > 0 {
+ i -= len(m.Device)
+ copy(dAtA[i:], m.Device)
+ i = encodeVarintMetrics(dAtA, i, uint64(len(m.Device)))
+ i--
+ dAtA[i] = 0x12
+ }
+ if len(m.Op) > 0 {
+ i -= len(m.Op)
+ copy(dAtA[i:], m.Op)
+ i = encodeVarintMetrics(dAtA, i, uint64(len(m.Op)))
+ i--
+ dAtA[i] = 0xa
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *RdmaStat) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *RdmaStat) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *RdmaStat) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.XXX_unrecognized != nil {
+ i -= len(m.XXX_unrecognized)
+ copy(dAtA[i:], m.XXX_unrecognized)
+ }
+ if len(m.Limit) > 0 {
+ for iNdEx := len(m.Limit) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.Limit[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintMetrics(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0x12
+ }
+ }
+ if len(m.Current) > 0 {
+ for iNdEx := len(m.Current) - 1; iNdEx >= 0; iNdEx-- {
+ {
+ size, err := m.Current[iNdEx].MarshalToSizedBuffer(dAtA[:i])
+ if err != nil {
+ return 0, err
+ }
+ i -= size
+ i = encodeVarintMetrics(dAtA, i, uint64(size))
+ }
+ i--
+ dAtA[i] = 0xa
+ }
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *RdmaEntry) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *RdmaEntry) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *RdmaEntry) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.XXX_unrecognized != nil {
+ i -= len(m.XXX_unrecognized)
+ copy(dAtA[i:], m.XXX_unrecognized)
+ }
+ if m.HcaObjects != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.HcaObjects))
+ i--
+ dAtA[i] = 0x18
+ }
+ if m.HcaHandles != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.HcaHandles))
+ i--
+ dAtA[i] = 0x10
+ }
+ if len(m.Device) > 0 {
+ i -= len(m.Device)
+ copy(dAtA[i:], m.Device)
+ i = encodeVarintMetrics(dAtA, i, uint64(len(m.Device)))
+ i--
+ dAtA[i] = 0xa
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *NetworkStat) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *NetworkStat) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *NetworkStat) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.XXX_unrecognized != nil {
+ i -= len(m.XXX_unrecognized)
+ copy(dAtA[i:], m.XXX_unrecognized)
+ }
+ if m.TxDropped != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.TxDropped))
+ i--
+ dAtA[i] = 0x48
+ }
+ if m.TxErrors != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.TxErrors))
+ i--
+ dAtA[i] = 0x40
+ }
+ if m.TxPackets != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.TxPackets))
+ i--
+ dAtA[i] = 0x38
+ }
+ if m.TxBytes != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.TxBytes))
+ i--
+ dAtA[i] = 0x30
+ }
+ if m.RxDropped != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.RxDropped))
+ i--
+ dAtA[i] = 0x28
+ }
+ if m.RxErrors != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.RxErrors))
+ i--
+ dAtA[i] = 0x20
+ }
+ if m.RxPackets != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.RxPackets))
+ i--
+ dAtA[i] = 0x18
+ }
+ if m.RxBytes != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.RxBytes))
+ i--
+ dAtA[i] = 0x10
+ }
+ if len(m.Name) > 0 {
+ i -= len(m.Name)
+ copy(dAtA[i:], m.Name)
+ i = encodeVarintMetrics(dAtA, i, uint64(len(m.Name)))
+ i--
+ dAtA[i] = 0xa
+ }
+ return len(dAtA) - i, nil
+}
+
+func (m *CgroupStats) Marshal() (dAtA []byte, err error) {
+ size := m.Size()
+ dAtA = make([]byte, size)
+ n, err := m.MarshalToSizedBuffer(dAtA[:size])
+ if err != nil {
+ return nil, err
+ }
+ return dAtA[:n], nil
+}
+
+func (m *CgroupStats) MarshalTo(dAtA []byte) (int, error) {
+ size := m.Size()
+ return m.MarshalToSizedBuffer(dAtA[:size])
+}
+
+func (m *CgroupStats) MarshalToSizedBuffer(dAtA []byte) (int, error) {
+ i := len(dAtA)
+ _ = i
+ var l int
+ _ = l
+ if m.XXX_unrecognized != nil {
+ i -= len(m.XXX_unrecognized)
+ copy(dAtA[i:], m.XXX_unrecognized)
+ }
+ if m.NrIoWait != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.NrIoWait))
+ i--
+ dAtA[i] = 0x28
+ }
+ if m.NrUninterruptible != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.NrUninterruptible))
+ i--
+ dAtA[i] = 0x20
+ }
+ if m.NrStopped != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.NrStopped))
+ i--
+ dAtA[i] = 0x18
+ }
+ if m.NrRunning != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.NrRunning))
+ i--
+ dAtA[i] = 0x10
+ }
+ if m.NrSleeping != 0 {
+ i = encodeVarintMetrics(dAtA, i, uint64(m.NrSleeping))
+ i--
+ dAtA[i] = 0x8
+ }
+ return len(dAtA) - i, nil
+}
+
+func encodeVarintMetrics(dAtA []byte, offset int, v uint64) int {
+ offset -= sovMetrics(v)
+ base := offset
+ for v >= 1<<7 {
+ dAtA[offset] = uint8(v&0x7f | 0x80)
+ v >>= 7
+ offset++
+ }
+ dAtA[offset] = uint8(v)
+ return base
+}
+func (m *Metrics) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if len(m.Hugetlb) > 0 {
+ for _, e := range m.Hugetlb {
+ l = e.Size()
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ }
+ if m.Pids != nil {
+ l = m.Pids.Size()
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ if m.CPU != nil {
+ l = m.CPU.Size()
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ if m.Memory != nil {
+ l = m.Memory.Size()
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ if m.Blkio != nil {
+ l = m.Blkio.Size()
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ if m.Rdma != nil {
+ l = m.Rdma.Size()
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ if len(m.Network) > 0 {
+ for _, e := range m.Network {
+ l = e.Size()
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ }
+ if m.CgroupStats != nil {
+ l = m.CgroupStats.Size()
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ if m.MemoryOomControl != nil {
+ l = m.MemoryOomControl.Size()
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ if m.XXX_unrecognized != nil {
+ n += len(m.XXX_unrecognized)
+ }
+ return n
+}
+
+func (m *HugetlbStat) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Usage != 0 {
+ n += 1 + sovMetrics(uint64(m.Usage))
+ }
+ if m.Max != 0 {
+ n += 1 + sovMetrics(uint64(m.Max))
+ }
+ if m.Failcnt != 0 {
+ n += 1 + sovMetrics(uint64(m.Failcnt))
+ }
+ l = len(m.Pagesize)
+ if l > 0 {
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ if m.XXX_unrecognized != nil {
+ n += len(m.XXX_unrecognized)
+ }
+ return n
+}
+
+func (m *PidsStat) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Current != 0 {
+ n += 1 + sovMetrics(uint64(m.Current))
+ }
+ if m.Limit != 0 {
+ n += 1 + sovMetrics(uint64(m.Limit))
+ }
+ if m.XXX_unrecognized != nil {
+ n += len(m.XXX_unrecognized)
+ }
+ return n
+}
+
+func (m *CPUStat) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Usage != nil {
+ l = m.Usage.Size()
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ if m.Throttling != nil {
+ l = m.Throttling.Size()
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ if m.XXX_unrecognized != nil {
+ n += len(m.XXX_unrecognized)
+ }
+ return n
+}
+
+func (m *CPUUsage) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Total != 0 {
+ n += 1 + sovMetrics(uint64(m.Total))
+ }
+ if m.Kernel != 0 {
+ n += 1 + sovMetrics(uint64(m.Kernel))
+ }
+ if m.User != 0 {
+ n += 1 + sovMetrics(uint64(m.User))
+ }
+ if len(m.PerCPU) > 0 {
+ l = 0
+ for _, e := range m.PerCPU {
+ l += sovMetrics(uint64(e))
+ }
+ n += 1 + sovMetrics(uint64(l)) + l
+ }
+ if m.XXX_unrecognized != nil {
+ n += len(m.XXX_unrecognized)
+ }
+ return n
+}
+
+func (m *Throttle) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Periods != 0 {
+ n += 1 + sovMetrics(uint64(m.Periods))
+ }
+ if m.ThrottledPeriods != 0 {
+ n += 1 + sovMetrics(uint64(m.ThrottledPeriods))
+ }
+ if m.ThrottledTime != 0 {
+ n += 1 + sovMetrics(uint64(m.ThrottledTime))
+ }
+ if m.XXX_unrecognized != nil {
+ n += len(m.XXX_unrecognized)
+ }
+ return n
+}
+
+func (m *MemoryStat) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Cache != 0 {
+ n += 1 + sovMetrics(uint64(m.Cache))
+ }
+ if m.RSS != 0 {
+ n += 1 + sovMetrics(uint64(m.RSS))
+ }
+ if m.RSSHuge != 0 {
+ n += 1 + sovMetrics(uint64(m.RSSHuge))
+ }
+ if m.MappedFile != 0 {
+ n += 1 + sovMetrics(uint64(m.MappedFile))
+ }
+ if m.Dirty != 0 {
+ n += 1 + sovMetrics(uint64(m.Dirty))
+ }
+ if m.Writeback != 0 {
+ n += 1 + sovMetrics(uint64(m.Writeback))
+ }
+ if m.PgPgIn != 0 {
+ n += 1 + sovMetrics(uint64(m.PgPgIn))
+ }
+ if m.PgPgOut != 0 {
+ n += 1 + sovMetrics(uint64(m.PgPgOut))
+ }
+ if m.PgFault != 0 {
+ n += 1 + sovMetrics(uint64(m.PgFault))
+ }
+ if m.PgMajFault != 0 {
+ n += 1 + sovMetrics(uint64(m.PgMajFault))
+ }
+ if m.InactiveAnon != 0 {
+ n += 1 + sovMetrics(uint64(m.InactiveAnon))
+ }
+ if m.ActiveAnon != 0 {
+ n += 1 + sovMetrics(uint64(m.ActiveAnon))
+ }
+ if m.InactiveFile != 0 {
+ n += 1 + sovMetrics(uint64(m.InactiveFile))
+ }
+ if m.ActiveFile != 0 {
+ n += 1 + sovMetrics(uint64(m.ActiveFile))
+ }
+ if m.Unevictable != 0 {
+ n += 1 + sovMetrics(uint64(m.Unevictable))
+ }
+ if m.HierarchicalMemoryLimit != 0 {
+ n += 2 + sovMetrics(uint64(m.HierarchicalMemoryLimit))
+ }
+ if m.HierarchicalSwapLimit != 0 {
+ n += 2 + sovMetrics(uint64(m.HierarchicalSwapLimit))
+ }
+ if m.TotalCache != 0 {
+ n += 2 + sovMetrics(uint64(m.TotalCache))
+ }
+ if m.TotalRSS != 0 {
+ n += 2 + sovMetrics(uint64(m.TotalRSS))
+ }
+ if m.TotalRSSHuge != 0 {
+ n += 2 + sovMetrics(uint64(m.TotalRSSHuge))
+ }
+ if m.TotalMappedFile != 0 {
+ n += 2 + sovMetrics(uint64(m.TotalMappedFile))
+ }
+ if m.TotalDirty != 0 {
+ n += 2 + sovMetrics(uint64(m.TotalDirty))
+ }
+ if m.TotalWriteback != 0 {
+ n += 2 + sovMetrics(uint64(m.TotalWriteback))
+ }
+ if m.TotalPgPgIn != 0 {
+ n += 2 + sovMetrics(uint64(m.TotalPgPgIn))
+ }
+ if m.TotalPgPgOut != 0 {
+ n += 2 + sovMetrics(uint64(m.TotalPgPgOut))
+ }
+ if m.TotalPgFault != 0 {
+ n += 2 + sovMetrics(uint64(m.TotalPgFault))
+ }
+ if m.TotalPgMajFault != 0 {
+ n += 2 + sovMetrics(uint64(m.TotalPgMajFault))
+ }
+ if m.TotalInactiveAnon != 0 {
+ n += 2 + sovMetrics(uint64(m.TotalInactiveAnon))
+ }
+ if m.TotalActiveAnon != 0 {
+ n += 2 + sovMetrics(uint64(m.TotalActiveAnon))
+ }
+ if m.TotalInactiveFile != 0 {
+ n += 2 + sovMetrics(uint64(m.TotalInactiveFile))
+ }
+ if m.TotalActiveFile != 0 {
+ n += 2 + sovMetrics(uint64(m.TotalActiveFile))
+ }
+ if m.TotalUnevictable != 0 {
+ n += 2 + sovMetrics(uint64(m.TotalUnevictable))
+ }
+ if m.Usage != nil {
+ l = m.Usage.Size()
+ n += 2 + l + sovMetrics(uint64(l))
+ }
+ if m.Swap != nil {
+ l = m.Swap.Size()
+ n += 2 + l + sovMetrics(uint64(l))
+ }
+ if m.Kernel != nil {
+ l = m.Kernel.Size()
+ n += 2 + l + sovMetrics(uint64(l))
+ }
+ if m.KernelTCP != nil {
+ l = m.KernelTCP.Size()
+ n += 2 + l + sovMetrics(uint64(l))
+ }
+ if m.XXX_unrecognized != nil {
+ n += len(m.XXX_unrecognized)
+ }
+ return n
+}
+
+func (m *MemoryEntry) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.Limit != 0 {
+ n += 1 + sovMetrics(uint64(m.Limit))
+ }
+ if m.Usage != 0 {
+ n += 1 + sovMetrics(uint64(m.Usage))
+ }
+ if m.Max != 0 {
+ n += 1 + sovMetrics(uint64(m.Max))
+ }
+ if m.Failcnt != 0 {
+ n += 1 + sovMetrics(uint64(m.Failcnt))
+ }
+ if m.XXX_unrecognized != nil {
+ n += len(m.XXX_unrecognized)
+ }
+ return n
+}
+
+func (m *MemoryOomControl) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.OomKillDisable != 0 {
+ n += 1 + sovMetrics(uint64(m.OomKillDisable))
+ }
+ if m.UnderOom != 0 {
+ n += 1 + sovMetrics(uint64(m.UnderOom))
+ }
+ if m.OomKill != 0 {
+ n += 1 + sovMetrics(uint64(m.OomKill))
+ }
+ if m.XXX_unrecognized != nil {
+ n += len(m.XXX_unrecognized)
+ }
+ return n
+}
+
+func (m *BlkIOStat) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if len(m.IoServiceBytesRecursive) > 0 {
+ for _, e := range m.IoServiceBytesRecursive {
+ l = e.Size()
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ }
+ if len(m.IoServicedRecursive) > 0 {
+ for _, e := range m.IoServicedRecursive {
+ l = e.Size()
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ }
+ if len(m.IoQueuedRecursive) > 0 {
+ for _, e := range m.IoQueuedRecursive {
+ l = e.Size()
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ }
+ if len(m.IoServiceTimeRecursive) > 0 {
+ for _, e := range m.IoServiceTimeRecursive {
+ l = e.Size()
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ }
+ if len(m.IoWaitTimeRecursive) > 0 {
+ for _, e := range m.IoWaitTimeRecursive {
+ l = e.Size()
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ }
+ if len(m.IoMergedRecursive) > 0 {
+ for _, e := range m.IoMergedRecursive {
+ l = e.Size()
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ }
+ if len(m.IoTimeRecursive) > 0 {
+ for _, e := range m.IoTimeRecursive {
+ l = e.Size()
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ }
+ if len(m.SectorsRecursive) > 0 {
+ for _, e := range m.SectorsRecursive {
+ l = e.Size()
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ }
+ if m.XXX_unrecognized != nil {
+ n += len(m.XXX_unrecognized)
+ }
+ return n
+}
+
+func (m *BlkIOEntry) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = len(m.Op)
+ if l > 0 {
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ l = len(m.Device)
+ if l > 0 {
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ if m.Major != 0 {
+ n += 1 + sovMetrics(uint64(m.Major))
+ }
+ if m.Minor != 0 {
+ n += 1 + sovMetrics(uint64(m.Minor))
+ }
+ if m.Value != 0 {
+ n += 1 + sovMetrics(uint64(m.Value))
+ }
+ if m.XXX_unrecognized != nil {
+ n += len(m.XXX_unrecognized)
+ }
+ return n
+}
+
+func (m *RdmaStat) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if len(m.Current) > 0 {
+ for _, e := range m.Current {
+ l = e.Size()
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ }
+ if len(m.Limit) > 0 {
+ for _, e := range m.Limit {
+ l = e.Size()
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ }
+ if m.XXX_unrecognized != nil {
+ n += len(m.XXX_unrecognized)
+ }
+ return n
+}
+
+func (m *RdmaEntry) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = len(m.Device)
+ if l > 0 {
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ if m.HcaHandles != 0 {
+ n += 1 + sovMetrics(uint64(m.HcaHandles))
+ }
+ if m.HcaObjects != 0 {
+ n += 1 + sovMetrics(uint64(m.HcaObjects))
+ }
+ if m.XXX_unrecognized != nil {
+ n += len(m.XXX_unrecognized)
+ }
+ return n
+}
+
+func (m *NetworkStat) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ l = len(m.Name)
+ if l > 0 {
+ n += 1 + l + sovMetrics(uint64(l))
+ }
+ if m.RxBytes != 0 {
+ n += 1 + sovMetrics(uint64(m.RxBytes))
+ }
+ if m.RxPackets != 0 {
+ n += 1 + sovMetrics(uint64(m.RxPackets))
+ }
+ if m.RxErrors != 0 {
+ n += 1 + sovMetrics(uint64(m.RxErrors))
+ }
+ if m.RxDropped != 0 {
+ n += 1 + sovMetrics(uint64(m.RxDropped))
+ }
+ if m.TxBytes != 0 {
+ n += 1 + sovMetrics(uint64(m.TxBytes))
+ }
+ if m.TxPackets != 0 {
+ n += 1 + sovMetrics(uint64(m.TxPackets))
+ }
+ if m.TxErrors != 0 {
+ n += 1 + sovMetrics(uint64(m.TxErrors))
+ }
+ if m.TxDropped != 0 {
+ n += 1 + sovMetrics(uint64(m.TxDropped))
+ }
+ if m.XXX_unrecognized != nil {
+ n += len(m.XXX_unrecognized)
+ }
+ return n
+}
+
+func (m *CgroupStats) Size() (n int) {
+ if m == nil {
+ return 0
+ }
+ var l int
+ _ = l
+ if m.NrSleeping != 0 {
+ n += 1 + sovMetrics(uint64(m.NrSleeping))
+ }
+ if m.NrRunning != 0 {
+ n += 1 + sovMetrics(uint64(m.NrRunning))
+ }
+ if m.NrStopped != 0 {
+ n += 1 + sovMetrics(uint64(m.NrStopped))
+ }
+ if m.NrUninterruptible != 0 {
+ n += 1 + sovMetrics(uint64(m.NrUninterruptible))
+ }
+ if m.NrIoWait != 0 {
+ n += 1 + sovMetrics(uint64(m.NrIoWait))
+ }
+ if m.XXX_unrecognized != nil {
+ n += len(m.XXX_unrecognized)
+ }
+ return n
+}
+
+func sovMetrics(x uint64) (n int) {
+ return (math_bits.Len64(x|1) + 6) / 7
+}
+func sozMetrics(x uint64) (n int) {
+ return sovMetrics(uint64((x << 1) ^ uint64((int64(x) >> 63))))
+}
+func (this *Metrics) String() string {
+ if this == nil {
+ return "nil"
+ }
+ repeatedStringForHugetlb := "[]*HugetlbStat{"
+ for _, f := range this.Hugetlb {
+ repeatedStringForHugetlb += strings.Replace(f.String(), "HugetlbStat", "HugetlbStat", 1) + ","
+ }
+ repeatedStringForHugetlb += "}"
+ repeatedStringForNetwork := "[]*NetworkStat{"
+ for _, f := range this.Network {
+ repeatedStringForNetwork += strings.Replace(f.String(), "NetworkStat", "NetworkStat", 1) + ","
+ }
+ repeatedStringForNetwork += "}"
+ s := strings.Join([]string{`&Metrics{`,
+ `Hugetlb:` + repeatedStringForHugetlb + `,`,
+ `Pids:` + strings.Replace(this.Pids.String(), "PidsStat", "PidsStat", 1) + `,`,
+ `CPU:` + strings.Replace(this.CPU.String(), "CPUStat", "CPUStat", 1) + `,`,
+ `Memory:` + strings.Replace(this.Memory.String(), "MemoryStat", "MemoryStat", 1) + `,`,
+ `Blkio:` + strings.Replace(this.Blkio.String(), "BlkIOStat", "BlkIOStat", 1) + `,`,
+ `Rdma:` + strings.Replace(this.Rdma.String(), "RdmaStat", "RdmaStat", 1) + `,`,
+ `Network:` + repeatedStringForNetwork + `,`,
+ `CgroupStats:` + strings.Replace(this.CgroupStats.String(), "CgroupStats", "CgroupStats", 1) + `,`,
+ `MemoryOomControl:` + strings.Replace(this.MemoryOomControl.String(), "MemoryOomControl", "MemoryOomControl", 1) + `,`,
+ `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func (this *HugetlbStat) String() string {
+ if this == nil {
+ return "nil"
+ }
+ s := strings.Join([]string{`&HugetlbStat{`,
+ `Usage:` + fmt.Sprintf("%v", this.Usage) + `,`,
+ `Max:` + fmt.Sprintf("%v", this.Max) + `,`,
+ `Failcnt:` + fmt.Sprintf("%v", this.Failcnt) + `,`,
+ `Pagesize:` + fmt.Sprintf("%v", this.Pagesize) + `,`,
+ `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func (this *PidsStat) String() string {
+ if this == nil {
+ return "nil"
+ }
+ s := strings.Join([]string{`&PidsStat{`,
+ `Current:` + fmt.Sprintf("%v", this.Current) + `,`,
+ `Limit:` + fmt.Sprintf("%v", this.Limit) + `,`,
+ `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func (this *CPUStat) String() string {
+ if this == nil {
+ return "nil"
+ }
+ s := strings.Join([]string{`&CPUStat{`,
+ `Usage:` + strings.Replace(this.Usage.String(), "CPUUsage", "CPUUsage", 1) + `,`,
+ `Throttling:` + strings.Replace(this.Throttling.String(), "Throttle", "Throttle", 1) + `,`,
+ `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func (this *CPUUsage) String() string {
+ if this == nil {
+ return "nil"
+ }
+ s := strings.Join([]string{`&CPUUsage{`,
+ `Total:` + fmt.Sprintf("%v", this.Total) + `,`,
+ `Kernel:` + fmt.Sprintf("%v", this.Kernel) + `,`,
+ `User:` + fmt.Sprintf("%v", this.User) + `,`,
+ `PerCPU:` + fmt.Sprintf("%v", this.PerCPU) + `,`,
+ `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func (this *Throttle) String() string {
+ if this == nil {
+ return "nil"
+ }
+ s := strings.Join([]string{`&Throttle{`,
+ `Periods:` + fmt.Sprintf("%v", this.Periods) + `,`,
+ `ThrottledPeriods:` + fmt.Sprintf("%v", this.ThrottledPeriods) + `,`,
+ `ThrottledTime:` + fmt.Sprintf("%v", this.ThrottledTime) + `,`,
+ `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func (this *MemoryStat) String() string {
+ if this == nil {
+ return "nil"
+ }
+ s := strings.Join([]string{`&MemoryStat{`,
+ `Cache:` + fmt.Sprintf("%v", this.Cache) + `,`,
+ `RSS:` + fmt.Sprintf("%v", this.RSS) + `,`,
+ `RSSHuge:` + fmt.Sprintf("%v", this.RSSHuge) + `,`,
+ `MappedFile:` + fmt.Sprintf("%v", this.MappedFile) + `,`,
+ `Dirty:` + fmt.Sprintf("%v", this.Dirty) + `,`,
+ `Writeback:` + fmt.Sprintf("%v", this.Writeback) + `,`,
+ `PgPgIn:` + fmt.Sprintf("%v", this.PgPgIn) + `,`,
+ `PgPgOut:` + fmt.Sprintf("%v", this.PgPgOut) + `,`,
+ `PgFault:` + fmt.Sprintf("%v", this.PgFault) + `,`,
+ `PgMajFault:` + fmt.Sprintf("%v", this.PgMajFault) + `,`,
+ `InactiveAnon:` + fmt.Sprintf("%v", this.InactiveAnon) + `,`,
+ `ActiveAnon:` + fmt.Sprintf("%v", this.ActiveAnon) + `,`,
+ `InactiveFile:` + fmt.Sprintf("%v", this.InactiveFile) + `,`,
+ `ActiveFile:` + fmt.Sprintf("%v", this.ActiveFile) + `,`,
+ `Unevictable:` + fmt.Sprintf("%v", this.Unevictable) + `,`,
+ `HierarchicalMemoryLimit:` + fmt.Sprintf("%v", this.HierarchicalMemoryLimit) + `,`,
+ `HierarchicalSwapLimit:` + fmt.Sprintf("%v", this.HierarchicalSwapLimit) + `,`,
+ `TotalCache:` + fmt.Sprintf("%v", this.TotalCache) + `,`,
+ `TotalRSS:` + fmt.Sprintf("%v", this.TotalRSS) + `,`,
+ `TotalRSSHuge:` + fmt.Sprintf("%v", this.TotalRSSHuge) + `,`,
+ `TotalMappedFile:` + fmt.Sprintf("%v", this.TotalMappedFile) + `,`,
+ `TotalDirty:` + fmt.Sprintf("%v", this.TotalDirty) + `,`,
+ `TotalWriteback:` + fmt.Sprintf("%v", this.TotalWriteback) + `,`,
+ `TotalPgPgIn:` + fmt.Sprintf("%v", this.TotalPgPgIn) + `,`,
+ `TotalPgPgOut:` + fmt.Sprintf("%v", this.TotalPgPgOut) + `,`,
+ `TotalPgFault:` + fmt.Sprintf("%v", this.TotalPgFault) + `,`,
+ `TotalPgMajFault:` + fmt.Sprintf("%v", this.TotalPgMajFault) + `,`,
+ `TotalInactiveAnon:` + fmt.Sprintf("%v", this.TotalInactiveAnon) + `,`,
+ `TotalActiveAnon:` + fmt.Sprintf("%v", this.TotalActiveAnon) + `,`,
+ `TotalInactiveFile:` + fmt.Sprintf("%v", this.TotalInactiveFile) + `,`,
+ `TotalActiveFile:` + fmt.Sprintf("%v", this.TotalActiveFile) + `,`,
+ `TotalUnevictable:` + fmt.Sprintf("%v", this.TotalUnevictable) + `,`,
+ `Usage:` + strings.Replace(this.Usage.String(), "MemoryEntry", "MemoryEntry", 1) + `,`,
+ `Swap:` + strings.Replace(this.Swap.String(), "MemoryEntry", "MemoryEntry", 1) + `,`,
+ `Kernel:` + strings.Replace(this.Kernel.String(), "MemoryEntry", "MemoryEntry", 1) + `,`,
+ `KernelTCP:` + strings.Replace(this.KernelTCP.String(), "MemoryEntry", "MemoryEntry", 1) + `,`,
+ `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func (this *MemoryEntry) String() string {
+ if this == nil {
+ return "nil"
+ }
+ s := strings.Join([]string{`&MemoryEntry{`,
+ `Limit:` + fmt.Sprintf("%v", this.Limit) + `,`,
+ `Usage:` + fmt.Sprintf("%v", this.Usage) + `,`,
+ `Max:` + fmt.Sprintf("%v", this.Max) + `,`,
+ `Failcnt:` + fmt.Sprintf("%v", this.Failcnt) + `,`,
+ `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func (this *MemoryOomControl) String() string {
+ if this == nil {
+ return "nil"
+ }
+ s := strings.Join([]string{`&MemoryOomControl{`,
+ `OomKillDisable:` + fmt.Sprintf("%v", this.OomKillDisable) + `,`,
+ `UnderOom:` + fmt.Sprintf("%v", this.UnderOom) + `,`,
+ `OomKill:` + fmt.Sprintf("%v", this.OomKill) + `,`,
+ `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func (this *BlkIOStat) String() string {
+ if this == nil {
+ return "nil"
+ }
+ repeatedStringForIoServiceBytesRecursive := "[]*BlkIOEntry{"
+ for _, f := range this.IoServiceBytesRecursive {
+ repeatedStringForIoServiceBytesRecursive += strings.Replace(f.String(), "BlkIOEntry", "BlkIOEntry", 1) + ","
+ }
+ repeatedStringForIoServiceBytesRecursive += "}"
+ repeatedStringForIoServicedRecursive := "[]*BlkIOEntry{"
+ for _, f := range this.IoServicedRecursive {
+ repeatedStringForIoServicedRecursive += strings.Replace(f.String(), "BlkIOEntry", "BlkIOEntry", 1) + ","
+ }
+ repeatedStringForIoServicedRecursive += "}"
+ repeatedStringForIoQueuedRecursive := "[]*BlkIOEntry{"
+ for _, f := range this.IoQueuedRecursive {
+ repeatedStringForIoQueuedRecursive += strings.Replace(f.String(), "BlkIOEntry", "BlkIOEntry", 1) + ","
+ }
+ repeatedStringForIoQueuedRecursive += "}"
+ repeatedStringForIoServiceTimeRecursive := "[]*BlkIOEntry{"
+ for _, f := range this.IoServiceTimeRecursive {
+ repeatedStringForIoServiceTimeRecursive += strings.Replace(f.String(), "BlkIOEntry", "BlkIOEntry", 1) + ","
+ }
+ repeatedStringForIoServiceTimeRecursive += "}"
+ repeatedStringForIoWaitTimeRecursive := "[]*BlkIOEntry{"
+ for _, f := range this.IoWaitTimeRecursive {
+ repeatedStringForIoWaitTimeRecursive += strings.Replace(f.String(), "BlkIOEntry", "BlkIOEntry", 1) + ","
+ }
+ repeatedStringForIoWaitTimeRecursive += "}"
+ repeatedStringForIoMergedRecursive := "[]*BlkIOEntry{"
+ for _, f := range this.IoMergedRecursive {
+ repeatedStringForIoMergedRecursive += strings.Replace(f.String(), "BlkIOEntry", "BlkIOEntry", 1) + ","
+ }
+ repeatedStringForIoMergedRecursive += "}"
+ repeatedStringForIoTimeRecursive := "[]*BlkIOEntry{"
+ for _, f := range this.IoTimeRecursive {
+ repeatedStringForIoTimeRecursive += strings.Replace(f.String(), "BlkIOEntry", "BlkIOEntry", 1) + ","
+ }
+ repeatedStringForIoTimeRecursive += "}"
+ repeatedStringForSectorsRecursive := "[]*BlkIOEntry{"
+ for _, f := range this.SectorsRecursive {
+ repeatedStringForSectorsRecursive += strings.Replace(f.String(), "BlkIOEntry", "BlkIOEntry", 1) + ","
+ }
+ repeatedStringForSectorsRecursive += "}"
+ s := strings.Join([]string{`&BlkIOStat{`,
+ `IoServiceBytesRecursive:` + repeatedStringForIoServiceBytesRecursive + `,`,
+ `IoServicedRecursive:` + repeatedStringForIoServicedRecursive + `,`,
+ `IoQueuedRecursive:` + repeatedStringForIoQueuedRecursive + `,`,
+ `IoServiceTimeRecursive:` + repeatedStringForIoServiceTimeRecursive + `,`,
+ `IoWaitTimeRecursive:` + repeatedStringForIoWaitTimeRecursive + `,`,
+ `IoMergedRecursive:` + repeatedStringForIoMergedRecursive + `,`,
+ `IoTimeRecursive:` + repeatedStringForIoTimeRecursive + `,`,
+ `SectorsRecursive:` + repeatedStringForSectorsRecursive + `,`,
+ `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func (this *BlkIOEntry) String() string {
+ if this == nil {
+ return "nil"
+ }
+ s := strings.Join([]string{`&BlkIOEntry{`,
+ `Op:` + fmt.Sprintf("%v", this.Op) + `,`,
+ `Device:` + fmt.Sprintf("%v", this.Device) + `,`,
+ `Major:` + fmt.Sprintf("%v", this.Major) + `,`,
+ `Minor:` + fmt.Sprintf("%v", this.Minor) + `,`,
+ `Value:` + fmt.Sprintf("%v", this.Value) + `,`,
+ `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func (this *RdmaStat) String() string {
+ if this == nil {
+ return "nil"
+ }
+ repeatedStringForCurrent := "[]*RdmaEntry{"
+ for _, f := range this.Current {
+ repeatedStringForCurrent += strings.Replace(f.String(), "RdmaEntry", "RdmaEntry", 1) + ","
+ }
+ repeatedStringForCurrent += "}"
+ repeatedStringForLimit := "[]*RdmaEntry{"
+ for _, f := range this.Limit {
+ repeatedStringForLimit += strings.Replace(f.String(), "RdmaEntry", "RdmaEntry", 1) + ","
+ }
+ repeatedStringForLimit += "}"
+ s := strings.Join([]string{`&RdmaStat{`,
+ `Current:` + repeatedStringForCurrent + `,`,
+ `Limit:` + repeatedStringForLimit + `,`,
+ `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func (this *RdmaEntry) String() string {
+ if this == nil {
+ return "nil"
+ }
+ s := strings.Join([]string{`&RdmaEntry{`,
+ `Device:` + fmt.Sprintf("%v", this.Device) + `,`,
+ `HcaHandles:` + fmt.Sprintf("%v", this.HcaHandles) + `,`,
+ `HcaObjects:` + fmt.Sprintf("%v", this.HcaObjects) + `,`,
+ `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func (this *NetworkStat) String() string {
+ if this == nil {
+ return "nil"
+ }
+ s := strings.Join([]string{`&NetworkStat{`,
+ `Name:` + fmt.Sprintf("%v", this.Name) + `,`,
+ `RxBytes:` + fmt.Sprintf("%v", this.RxBytes) + `,`,
+ `RxPackets:` + fmt.Sprintf("%v", this.RxPackets) + `,`,
+ `RxErrors:` + fmt.Sprintf("%v", this.RxErrors) + `,`,
+ `RxDropped:` + fmt.Sprintf("%v", this.RxDropped) + `,`,
+ `TxBytes:` + fmt.Sprintf("%v", this.TxBytes) + `,`,
+ `TxPackets:` + fmt.Sprintf("%v", this.TxPackets) + `,`,
+ `TxErrors:` + fmt.Sprintf("%v", this.TxErrors) + `,`,
+ `TxDropped:` + fmt.Sprintf("%v", this.TxDropped) + `,`,
+ `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func (this *CgroupStats) String() string {
+ if this == nil {
+ return "nil"
+ }
+ s := strings.Join([]string{`&CgroupStats{`,
+ `NrSleeping:` + fmt.Sprintf("%v", this.NrSleeping) + `,`,
+ `NrRunning:` + fmt.Sprintf("%v", this.NrRunning) + `,`,
+ `NrStopped:` + fmt.Sprintf("%v", this.NrStopped) + `,`,
+ `NrUninterruptible:` + fmt.Sprintf("%v", this.NrUninterruptible) + `,`,
+ `NrIoWait:` + fmt.Sprintf("%v", this.NrIoWait) + `,`,
+ `XXX_unrecognized:` + fmt.Sprintf("%v", this.XXX_unrecognized) + `,`,
+ `}`,
+ }, "")
+ return s
+}
+func valueToStringMetrics(v interface{}) string {
+ rv := reflect.ValueOf(v)
+ if rv.IsNil() {
+ return "nil"
+ }
+ pv := reflect.Indirect(rv).Interface()
+ return fmt.Sprintf("*%v", pv)
+}
+func (m *Metrics) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: Metrics: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: Metrics: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Hugetlb", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Hugetlb = append(m.Hugetlb, &HugetlbStat{})
+ if err := m.Hugetlb[len(m.Hugetlb)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Pids", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.Pids == nil {
+ m.Pids = &PidsStat{}
+ }
+ if err := m.Pids.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 3:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field CPU", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.CPU == nil {
+ m.CPU = &CPUStat{}
+ }
+ if err := m.CPU.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 4:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Memory", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.Memory == nil {
+ m.Memory = &MemoryStat{}
+ }
+ if err := m.Memory.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 5:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Blkio", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.Blkio == nil {
+ m.Blkio = &BlkIOStat{}
+ }
+ if err := m.Blkio.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 6:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Rdma", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.Rdma == nil {
+ m.Rdma = &RdmaStat{}
+ }
+ if err := m.Rdma.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 7:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Network", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Network = append(m.Network, &NetworkStat{})
+ if err := m.Network[len(m.Network)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 8:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field CgroupStats", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.CgroupStats == nil {
+ m.CgroupStats = &CgroupStats{}
+ }
+ if err := m.CgroupStats.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 9:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field MemoryOomControl", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.MemoryOomControl == nil {
+ m.MemoryOomControl = &MemoryOomControl{}
+ }
+ if err := m.MemoryOomControl.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipMetrics(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *HugetlbStat) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: HugetlbStat: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: HugetlbStat: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Usage", wireType)
+ }
+ m.Usage = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Usage |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 2:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Max", wireType)
+ }
+ m.Max = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Max |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 3:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Failcnt", wireType)
+ }
+ m.Failcnt = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Failcnt |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 4:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Pagesize", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Pagesize = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipMetrics(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *PidsStat) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: PidsStat: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: PidsStat: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Current", wireType)
+ }
+ m.Current = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Current |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 2:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Limit", wireType)
+ }
+ m.Limit = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Limit |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ default:
+ iNdEx = preIndex
+ skippy, err := skipMetrics(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *CPUStat) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: CPUStat: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: CPUStat: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Usage", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.Usage == nil {
+ m.Usage = &CPUUsage{}
+ }
+ if err := m.Usage.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Throttling", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.Throttling == nil {
+ m.Throttling = &Throttle{}
+ }
+ if err := m.Throttling.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipMetrics(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *CPUUsage) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: CPUUsage: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: CPUUsage: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Total", wireType)
+ }
+ m.Total = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Total |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 2:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Kernel", wireType)
+ }
+ m.Kernel = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Kernel |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 3:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field User", wireType)
+ }
+ m.User = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.User |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 4:
+ if wireType == 0 {
+ var v uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ v |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ m.PerCPU = append(m.PerCPU, v)
+ } else if wireType == 2 {
+ var packedLen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ packedLen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if packedLen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + packedLen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ var elementCount int
+ var count int
+ for _, integer := range dAtA[iNdEx:postIndex] {
+ if integer < 128 {
+ count++
+ }
+ }
+ elementCount = count
+ if elementCount != 0 && len(m.PerCPU) == 0 {
+ m.PerCPU = make([]uint64, 0, elementCount)
+ }
+ for iNdEx < postIndex {
+ var v uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ v |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ m.PerCPU = append(m.PerCPU, v)
+ }
+ } else {
+ return fmt.Errorf("proto: wrong wireType = %d for field PerCPU", wireType)
+ }
+ default:
+ iNdEx = preIndex
+ skippy, err := skipMetrics(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *Throttle) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: Throttle: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: Throttle: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Periods", wireType)
+ }
+ m.Periods = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Periods |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 2:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ThrottledPeriods", wireType)
+ }
+ m.ThrottledPeriods = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.ThrottledPeriods |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 3:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ThrottledTime", wireType)
+ }
+ m.ThrottledTime = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.ThrottledTime |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ default:
+ iNdEx = preIndex
+ skippy, err := skipMetrics(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *MemoryStat) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: MemoryStat: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: MemoryStat: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Cache", wireType)
+ }
+ m.Cache = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Cache |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 2:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field RSS", wireType)
+ }
+ m.RSS = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.RSS |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 3:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field RSSHuge", wireType)
+ }
+ m.RSSHuge = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.RSSHuge |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 4:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field MappedFile", wireType)
+ }
+ m.MappedFile = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.MappedFile |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 5:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Dirty", wireType)
+ }
+ m.Dirty = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Dirty |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 6:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Writeback", wireType)
+ }
+ m.Writeback = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Writeback |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 7:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field PgPgIn", wireType)
+ }
+ m.PgPgIn = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.PgPgIn |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 8:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field PgPgOut", wireType)
+ }
+ m.PgPgOut = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.PgPgOut |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 9:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field PgFault", wireType)
+ }
+ m.PgFault = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.PgFault |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 10:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field PgMajFault", wireType)
+ }
+ m.PgMajFault = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.PgMajFault |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 11:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field InactiveAnon", wireType)
+ }
+ m.InactiveAnon = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.InactiveAnon |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 12:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ActiveAnon", wireType)
+ }
+ m.ActiveAnon = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.ActiveAnon |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 13:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field InactiveFile", wireType)
+ }
+ m.InactiveFile = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.InactiveFile |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 14:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field ActiveFile", wireType)
+ }
+ m.ActiveFile = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.ActiveFile |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 15:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Unevictable", wireType)
+ }
+ m.Unevictable = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Unevictable |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 16:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field HierarchicalMemoryLimit", wireType)
+ }
+ m.HierarchicalMemoryLimit = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.HierarchicalMemoryLimit |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 17:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field HierarchicalSwapLimit", wireType)
+ }
+ m.HierarchicalSwapLimit = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.HierarchicalSwapLimit |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 18:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field TotalCache", wireType)
+ }
+ m.TotalCache = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.TotalCache |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 19:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field TotalRSS", wireType)
+ }
+ m.TotalRSS = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.TotalRSS |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 20:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field TotalRSSHuge", wireType)
+ }
+ m.TotalRSSHuge = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.TotalRSSHuge |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 21:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field TotalMappedFile", wireType)
+ }
+ m.TotalMappedFile = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.TotalMappedFile |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 22:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field TotalDirty", wireType)
+ }
+ m.TotalDirty = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.TotalDirty |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 23:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field TotalWriteback", wireType)
+ }
+ m.TotalWriteback = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.TotalWriteback |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 24:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field TotalPgPgIn", wireType)
+ }
+ m.TotalPgPgIn = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.TotalPgPgIn |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 25:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field TotalPgPgOut", wireType)
+ }
+ m.TotalPgPgOut = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.TotalPgPgOut |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 26:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field TotalPgFault", wireType)
+ }
+ m.TotalPgFault = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.TotalPgFault |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 27:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field TotalPgMajFault", wireType)
+ }
+ m.TotalPgMajFault = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.TotalPgMajFault |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 28:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field TotalInactiveAnon", wireType)
+ }
+ m.TotalInactiveAnon = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.TotalInactiveAnon |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 29:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field TotalActiveAnon", wireType)
+ }
+ m.TotalActiveAnon = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.TotalActiveAnon |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 30:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field TotalInactiveFile", wireType)
+ }
+ m.TotalInactiveFile = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.TotalInactiveFile |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 31:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field TotalActiveFile", wireType)
+ }
+ m.TotalActiveFile = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.TotalActiveFile |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 32:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field TotalUnevictable", wireType)
+ }
+ m.TotalUnevictable = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.TotalUnevictable |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 33:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Usage", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.Usage == nil {
+ m.Usage = &MemoryEntry{}
+ }
+ if err := m.Usage.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 34:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Swap", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.Swap == nil {
+ m.Swap = &MemoryEntry{}
+ }
+ if err := m.Swap.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 35:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Kernel", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.Kernel == nil {
+ m.Kernel = &MemoryEntry{}
+ }
+ if err := m.Kernel.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 36:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field KernelTCP", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ if m.KernelTCP == nil {
+ m.KernelTCP = &MemoryEntry{}
+ }
+ if err := m.KernelTCP.Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipMetrics(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *MemoryEntry) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: MemoryEntry: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: MemoryEntry: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Limit", wireType)
+ }
+ m.Limit = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Limit |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 2:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Usage", wireType)
+ }
+ m.Usage = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Usage |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 3:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Max", wireType)
+ }
+ m.Max = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Max |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 4:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Failcnt", wireType)
+ }
+ m.Failcnt = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Failcnt |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ default:
+ iNdEx = preIndex
+ skippy, err := skipMetrics(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *MemoryOomControl) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: MemoryOomControl: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: MemoryOomControl: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field OomKillDisable", wireType)
+ }
+ m.OomKillDisable = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.OomKillDisable |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 2:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field UnderOom", wireType)
+ }
+ m.UnderOom = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.UnderOom |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 3:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field OomKill", wireType)
+ }
+ m.OomKill = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.OomKill |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ default:
+ iNdEx = preIndex
+ skippy, err := skipMetrics(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *BlkIOStat) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: BlkIOStat: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: BlkIOStat: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field IoServiceBytesRecursive", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.IoServiceBytesRecursive = append(m.IoServiceBytesRecursive, &BlkIOEntry{})
+ if err := m.IoServiceBytesRecursive[len(m.IoServiceBytesRecursive)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field IoServicedRecursive", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.IoServicedRecursive = append(m.IoServicedRecursive, &BlkIOEntry{})
+ if err := m.IoServicedRecursive[len(m.IoServicedRecursive)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 3:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field IoQueuedRecursive", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.IoQueuedRecursive = append(m.IoQueuedRecursive, &BlkIOEntry{})
+ if err := m.IoQueuedRecursive[len(m.IoQueuedRecursive)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 4:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field IoServiceTimeRecursive", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.IoServiceTimeRecursive = append(m.IoServiceTimeRecursive, &BlkIOEntry{})
+ if err := m.IoServiceTimeRecursive[len(m.IoServiceTimeRecursive)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 5:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field IoWaitTimeRecursive", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.IoWaitTimeRecursive = append(m.IoWaitTimeRecursive, &BlkIOEntry{})
+ if err := m.IoWaitTimeRecursive[len(m.IoWaitTimeRecursive)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 6:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field IoMergedRecursive", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.IoMergedRecursive = append(m.IoMergedRecursive, &BlkIOEntry{})
+ if err := m.IoMergedRecursive[len(m.IoMergedRecursive)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 7:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field IoTimeRecursive", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.IoTimeRecursive = append(m.IoTimeRecursive, &BlkIOEntry{})
+ if err := m.IoTimeRecursive[len(m.IoTimeRecursive)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 8:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field SectorsRecursive", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.SectorsRecursive = append(m.SectorsRecursive, &BlkIOEntry{})
+ if err := m.SectorsRecursive[len(m.SectorsRecursive)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipMetrics(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *BlkIOEntry) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: BlkIOEntry: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: BlkIOEntry: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Op", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Op = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Device", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Device = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ case 3:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Major", wireType)
+ }
+ m.Major = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Major |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 4:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Minor", wireType)
+ }
+ m.Minor = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Minor |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 5:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Value", wireType)
+ }
+ m.Value = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.Value |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ default:
+ iNdEx = preIndex
+ skippy, err := skipMetrics(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *RdmaStat) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: RdmaStat: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: RdmaStat: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Current", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Current = append(m.Current, &RdmaEntry{})
+ if err := m.Current[len(m.Current)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ case 2:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Limit", wireType)
+ }
+ var msglen int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ msglen |= int(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if msglen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + msglen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Limit = append(m.Limit, &RdmaEntry{})
+ if err := m.Limit[len(m.Limit)-1].Unmarshal(dAtA[iNdEx:postIndex]); err != nil {
+ return err
+ }
+ iNdEx = postIndex
+ default:
+ iNdEx = preIndex
+ skippy, err := skipMetrics(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *RdmaEntry) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: RdmaEntry: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: RdmaEntry: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Device", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Device = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ case 2:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field HcaHandles", wireType)
+ }
+ m.HcaHandles = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.HcaHandles |= uint32(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 3:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field HcaObjects", wireType)
+ }
+ m.HcaObjects = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.HcaObjects |= uint32(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ default:
+ iNdEx = preIndex
+ skippy, err := skipMetrics(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *NetworkStat) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: NetworkStat: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: NetworkStat: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 2 {
+ return fmt.Errorf("proto: wrong wireType = %d for field Name", wireType)
+ }
+ var stringLen uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ stringLen |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ intStringLen := int(stringLen)
+ if intStringLen < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ postIndex := iNdEx + intStringLen
+ if postIndex < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if postIndex > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.Name = string(dAtA[iNdEx:postIndex])
+ iNdEx = postIndex
+ case 2:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field RxBytes", wireType)
+ }
+ m.RxBytes = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.RxBytes |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 3:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field RxPackets", wireType)
+ }
+ m.RxPackets = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.RxPackets |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 4:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field RxErrors", wireType)
+ }
+ m.RxErrors = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.RxErrors |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 5:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field RxDropped", wireType)
+ }
+ m.RxDropped = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.RxDropped |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 6:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field TxBytes", wireType)
+ }
+ m.TxBytes = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.TxBytes |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 7:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field TxPackets", wireType)
+ }
+ m.TxPackets = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.TxPackets |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 8:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field TxErrors", wireType)
+ }
+ m.TxErrors = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.TxErrors |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 9:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field TxDropped", wireType)
+ }
+ m.TxDropped = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.TxDropped |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ default:
+ iNdEx = preIndex
+ skippy, err := skipMetrics(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func (m *CgroupStats) Unmarshal(dAtA []byte) error {
+ l := len(dAtA)
+ iNdEx := 0
+ for iNdEx < l {
+ preIndex := iNdEx
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ fieldNum := int32(wire >> 3)
+ wireType := int(wire & 0x7)
+ if wireType == 4 {
+ return fmt.Errorf("proto: CgroupStats: wiretype end group for non-group")
+ }
+ if fieldNum <= 0 {
+ return fmt.Errorf("proto: CgroupStats: illegal tag %d (wire type %d)", fieldNum, wire)
+ }
+ switch fieldNum {
+ case 1:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field NrSleeping", wireType)
+ }
+ m.NrSleeping = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.NrSleeping |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 2:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field NrRunning", wireType)
+ }
+ m.NrRunning = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.NrRunning |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 3:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field NrStopped", wireType)
+ }
+ m.NrStopped = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.NrStopped |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 4:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field NrUninterruptible", wireType)
+ }
+ m.NrUninterruptible = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.NrUninterruptible |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ case 5:
+ if wireType != 0 {
+ return fmt.Errorf("proto: wrong wireType = %d for field NrIoWait", wireType)
+ }
+ m.NrIoWait = 0
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ m.NrIoWait |= uint64(b&0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ default:
+ iNdEx = preIndex
+ skippy, err := skipMetrics(dAtA[iNdEx:])
+ if err != nil {
+ return err
+ }
+ if (skippy < 0) || (iNdEx+skippy) < 0 {
+ return ErrInvalidLengthMetrics
+ }
+ if (iNdEx + skippy) > l {
+ return io.ErrUnexpectedEOF
+ }
+ m.XXX_unrecognized = append(m.XXX_unrecognized, dAtA[iNdEx:iNdEx+skippy]...)
+ iNdEx += skippy
+ }
+ }
+
+ if iNdEx > l {
+ return io.ErrUnexpectedEOF
+ }
+ return nil
+}
+func skipMetrics(dAtA []byte) (n int, err error) {
+ l := len(dAtA)
+ iNdEx := 0
+ depth := 0
+ for iNdEx < l {
+ var wire uint64
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return 0, ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return 0, io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ wire |= (uint64(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ wireType := int(wire & 0x7)
+ switch wireType {
+ case 0:
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return 0, ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return 0, io.ErrUnexpectedEOF
+ }
+ iNdEx++
+ if dAtA[iNdEx-1] < 0x80 {
+ break
+ }
+ }
+ case 1:
+ iNdEx += 8
+ case 2:
+ var length int
+ for shift := uint(0); ; shift += 7 {
+ if shift >= 64 {
+ return 0, ErrIntOverflowMetrics
+ }
+ if iNdEx >= l {
+ return 0, io.ErrUnexpectedEOF
+ }
+ b := dAtA[iNdEx]
+ iNdEx++
+ length |= (int(b) & 0x7F) << shift
+ if b < 0x80 {
+ break
+ }
+ }
+ if length < 0 {
+ return 0, ErrInvalidLengthMetrics
+ }
+ iNdEx += length
+ case 3:
+ depth++
+ case 4:
+ if depth == 0 {
+ return 0, ErrUnexpectedEndOfGroupMetrics
+ }
+ depth--
+ case 5:
+ iNdEx += 4
+ default:
+ return 0, fmt.Errorf("proto: illegal wireType %d", wireType)
+ }
+ if iNdEx < 0 {
+ return 0, ErrInvalidLengthMetrics
+ }
+ if depth == 0 {
+ return iNdEx, nil
+ }
+ }
+ return 0, io.ErrUnexpectedEOF
+}
+
+var (
+ ErrInvalidLengthMetrics = fmt.Errorf("proto: negative length found during unmarshaling")
+ ErrIntOverflowMetrics = fmt.Errorf("proto: integer overflow")
+ ErrUnexpectedEndOfGroupMetrics = fmt.Errorf("proto: unexpected end of group")
+)
diff --git a/vendor/github.com/containerd/cgroups/stats/v1/metrics.pb.txt b/vendor/github.com/containerd/cgroups/stats/v1/metrics.pb.txt
new file mode 100644
index 000000000..e476cea64
--- /dev/null
+++ b/vendor/github.com/containerd/cgroups/stats/v1/metrics.pb.txt
@@ -0,0 +1,790 @@
+file {
+ name: "github.com/containerd/cgroups/stats/v1/metrics.proto"
+ package: "io.containerd.cgroups.v1"
+ dependency: "gogoproto/gogo.proto"
+ message_type {
+ name: "Metrics"
+ field {
+ name: "hugetlb"
+ number: 1
+ label: LABEL_REPEATED
+ type: TYPE_MESSAGE
+ type_name: ".io.containerd.cgroups.v1.HugetlbStat"
+ json_name: "hugetlb"
+ }
+ field {
+ name: "pids"
+ number: 2
+ label: LABEL_OPTIONAL
+ type: TYPE_MESSAGE
+ type_name: ".io.containerd.cgroups.v1.PidsStat"
+ json_name: "pids"
+ }
+ field {
+ name: "cpu"
+ number: 3
+ label: LABEL_OPTIONAL
+ type: TYPE_MESSAGE
+ type_name: ".io.containerd.cgroups.v1.CPUStat"
+ options {
+ 65004: "CPU"
+ }
+ json_name: "cpu"
+ }
+ field {
+ name: "memory"
+ number: 4
+ label: LABEL_OPTIONAL
+ type: TYPE_MESSAGE
+ type_name: ".io.containerd.cgroups.v1.MemoryStat"
+ json_name: "memory"
+ }
+ field {
+ name: "blkio"
+ number: 5
+ label: LABEL_OPTIONAL
+ type: TYPE_MESSAGE
+ type_name: ".io.containerd.cgroups.v1.BlkIOStat"
+ json_name: "blkio"
+ }
+ field {
+ name: "rdma"
+ number: 6
+ label: LABEL_OPTIONAL
+ type: TYPE_MESSAGE
+ type_name: ".io.containerd.cgroups.v1.RdmaStat"
+ json_name: "rdma"
+ }
+ field {
+ name: "network"
+ number: 7
+ label: LABEL_REPEATED
+ type: TYPE_MESSAGE
+ type_name: ".io.containerd.cgroups.v1.NetworkStat"
+ json_name: "network"
+ }
+ field {
+ name: "cgroup_stats"
+ number: 8
+ label: LABEL_OPTIONAL
+ type: TYPE_MESSAGE
+ type_name: ".io.containerd.cgroups.v1.CgroupStats"
+ json_name: "cgroupStats"
+ }
+ field {
+ name: "memory_oom_control"
+ number: 9
+ label: LABEL_OPTIONAL
+ type: TYPE_MESSAGE
+ type_name: ".io.containerd.cgroups.v1.MemoryOomControl"
+ json_name: "memoryOomControl"
+ }
+ }
+ message_type {
+ name: "HugetlbStat"
+ field {
+ name: "usage"
+ number: 1
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "usage"
+ }
+ field {
+ name: "max"
+ number: 2
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "max"
+ }
+ field {
+ name: "failcnt"
+ number: 3
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "failcnt"
+ }
+ field {
+ name: "pagesize"
+ number: 4
+ label: LABEL_OPTIONAL
+ type: TYPE_STRING
+ json_name: "pagesize"
+ }
+ }
+ message_type {
+ name: "PidsStat"
+ field {
+ name: "current"
+ number: 1
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "current"
+ }
+ field {
+ name: "limit"
+ number: 2
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "limit"
+ }
+ }
+ message_type {
+ name: "CPUStat"
+ field {
+ name: "usage"
+ number: 1
+ label: LABEL_OPTIONAL
+ type: TYPE_MESSAGE
+ type_name: ".io.containerd.cgroups.v1.CPUUsage"
+ json_name: "usage"
+ }
+ field {
+ name: "throttling"
+ number: 2
+ label: LABEL_OPTIONAL
+ type: TYPE_MESSAGE
+ type_name: ".io.containerd.cgroups.v1.Throttle"
+ json_name: "throttling"
+ }
+ }
+ message_type {
+ name: "CPUUsage"
+ field {
+ name: "total"
+ number: 1
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "total"
+ }
+ field {
+ name: "kernel"
+ number: 2
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "kernel"
+ }
+ field {
+ name: "user"
+ number: 3
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "user"
+ }
+ field {
+ name: "per_cpu"
+ number: 4
+ label: LABEL_REPEATED
+ type: TYPE_UINT64
+ options {
+ 65004: "PerCPU"
+ }
+ json_name: "perCpu"
+ }
+ }
+ message_type {
+ name: "Throttle"
+ field {
+ name: "periods"
+ number: 1
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "periods"
+ }
+ field {
+ name: "throttled_periods"
+ number: 2
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "throttledPeriods"
+ }
+ field {
+ name: "throttled_time"
+ number: 3
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "throttledTime"
+ }
+ }
+ message_type {
+ name: "MemoryStat"
+ field {
+ name: "cache"
+ number: 1
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "cache"
+ }
+ field {
+ name: "rss"
+ number: 2
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ options {
+ 65004: "RSS"
+ }
+ json_name: "rss"
+ }
+ field {
+ name: "rss_huge"
+ number: 3
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ options {
+ 65004: "RSSHuge"
+ }
+ json_name: "rssHuge"
+ }
+ field {
+ name: "mapped_file"
+ number: 4
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "mappedFile"
+ }
+ field {
+ name: "dirty"
+ number: 5
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "dirty"
+ }
+ field {
+ name: "writeback"
+ number: 6
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "writeback"
+ }
+ field {
+ name: "pg_pg_in"
+ number: 7
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "pgPgIn"
+ }
+ field {
+ name: "pg_pg_out"
+ number: 8
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "pgPgOut"
+ }
+ field {
+ name: "pg_fault"
+ number: 9
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "pgFault"
+ }
+ field {
+ name: "pg_maj_fault"
+ number: 10
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "pgMajFault"
+ }
+ field {
+ name: "inactive_anon"
+ number: 11
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "inactiveAnon"
+ }
+ field {
+ name: "active_anon"
+ number: 12
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "activeAnon"
+ }
+ field {
+ name: "inactive_file"
+ number: 13
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "inactiveFile"
+ }
+ field {
+ name: "active_file"
+ number: 14
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "activeFile"
+ }
+ field {
+ name: "unevictable"
+ number: 15
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "unevictable"
+ }
+ field {
+ name: "hierarchical_memory_limit"
+ number: 16
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "hierarchicalMemoryLimit"
+ }
+ field {
+ name: "hierarchical_swap_limit"
+ number: 17
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "hierarchicalSwapLimit"
+ }
+ field {
+ name: "total_cache"
+ number: 18
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "totalCache"
+ }
+ field {
+ name: "total_rss"
+ number: 19
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ options {
+ 65004: "TotalRSS"
+ }
+ json_name: "totalRss"
+ }
+ field {
+ name: "total_rss_huge"
+ number: 20
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ options {
+ 65004: "TotalRSSHuge"
+ }
+ json_name: "totalRssHuge"
+ }
+ field {
+ name: "total_mapped_file"
+ number: 21
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "totalMappedFile"
+ }
+ field {
+ name: "total_dirty"
+ number: 22
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "totalDirty"
+ }
+ field {
+ name: "total_writeback"
+ number: 23
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "totalWriteback"
+ }
+ field {
+ name: "total_pg_pg_in"
+ number: 24
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "totalPgPgIn"
+ }
+ field {
+ name: "total_pg_pg_out"
+ number: 25
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "totalPgPgOut"
+ }
+ field {
+ name: "total_pg_fault"
+ number: 26
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "totalPgFault"
+ }
+ field {
+ name: "total_pg_maj_fault"
+ number: 27
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "totalPgMajFault"
+ }
+ field {
+ name: "total_inactive_anon"
+ number: 28
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "totalInactiveAnon"
+ }
+ field {
+ name: "total_active_anon"
+ number: 29
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "totalActiveAnon"
+ }
+ field {
+ name: "total_inactive_file"
+ number: 30
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "totalInactiveFile"
+ }
+ field {
+ name: "total_active_file"
+ number: 31
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "totalActiveFile"
+ }
+ field {
+ name: "total_unevictable"
+ number: 32
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "totalUnevictable"
+ }
+ field {
+ name: "usage"
+ number: 33
+ label: LABEL_OPTIONAL
+ type: TYPE_MESSAGE
+ type_name: ".io.containerd.cgroups.v1.MemoryEntry"
+ json_name: "usage"
+ }
+ field {
+ name: "swap"
+ number: 34
+ label: LABEL_OPTIONAL
+ type: TYPE_MESSAGE
+ type_name: ".io.containerd.cgroups.v1.MemoryEntry"
+ json_name: "swap"
+ }
+ field {
+ name: "kernel"
+ number: 35
+ label: LABEL_OPTIONAL
+ type: TYPE_MESSAGE
+ type_name: ".io.containerd.cgroups.v1.MemoryEntry"
+ json_name: "kernel"
+ }
+ field {
+ name: "kernel_tcp"
+ number: 36
+ label: LABEL_OPTIONAL
+ type: TYPE_MESSAGE
+ type_name: ".io.containerd.cgroups.v1.MemoryEntry"
+ options {
+ 65004: "KernelTCP"
+ }
+ json_name: "kernelTcp"
+ }
+ }
+ message_type {
+ name: "MemoryEntry"
+ field {
+ name: "limit"
+ number: 1
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "limit"
+ }
+ field {
+ name: "usage"
+ number: 2
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "usage"
+ }
+ field {
+ name: "max"
+ number: 3
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "max"
+ }
+ field {
+ name: "failcnt"
+ number: 4
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "failcnt"
+ }
+ }
+ message_type {
+ name: "MemoryOomControl"
+ field {
+ name: "oom_kill_disable"
+ number: 1
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "oomKillDisable"
+ }
+ field {
+ name: "under_oom"
+ number: 2
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "underOom"
+ }
+ field {
+ name: "oom_kill"
+ number: 3
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "oomKill"
+ }
+ }
+ message_type {
+ name: "BlkIOStat"
+ field {
+ name: "io_service_bytes_recursive"
+ number: 1
+ label: LABEL_REPEATED
+ type: TYPE_MESSAGE
+ type_name: ".io.containerd.cgroups.v1.BlkIOEntry"
+ json_name: "ioServiceBytesRecursive"
+ }
+ field {
+ name: "io_serviced_recursive"
+ number: 2
+ label: LABEL_REPEATED
+ type: TYPE_MESSAGE
+ type_name: ".io.containerd.cgroups.v1.BlkIOEntry"
+ json_name: "ioServicedRecursive"
+ }
+ field {
+ name: "io_queued_recursive"
+ number: 3
+ label: LABEL_REPEATED
+ type: TYPE_MESSAGE
+ type_name: ".io.containerd.cgroups.v1.BlkIOEntry"
+ json_name: "ioQueuedRecursive"
+ }
+ field {
+ name: "io_service_time_recursive"
+ number: 4
+ label: LABEL_REPEATED
+ type: TYPE_MESSAGE
+ type_name: ".io.containerd.cgroups.v1.BlkIOEntry"
+ json_name: "ioServiceTimeRecursive"
+ }
+ field {
+ name: "io_wait_time_recursive"
+ number: 5
+ label: LABEL_REPEATED
+ type: TYPE_MESSAGE
+ type_name: ".io.containerd.cgroups.v1.BlkIOEntry"
+ json_name: "ioWaitTimeRecursive"
+ }
+ field {
+ name: "io_merged_recursive"
+ number: 6
+ label: LABEL_REPEATED
+ type: TYPE_MESSAGE
+ type_name: ".io.containerd.cgroups.v1.BlkIOEntry"
+ json_name: "ioMergedRecursive"
+ }
+ field {
+ name: "io_time_recursive"
+ number: 7
+ label: LABEL_REPEATED
+ type: TYPE_MESSAGE
+ type_name: ".io.containerd.cgroups.v1.BlkIOEntry"
+ json_name: "ioTimeRecursive"
+ }
+ field {
+ name: "sectors_recursive"
+ number: 8
+ label: LABEL_REPEATED
+ type: TYPE_MESSAGE
+ type_name: ".io.containerd.cgroups.v1.BlkIOEntry"
+ json_name: "sectorsRecursive"
+ }
+ }
+ message_type {
+ name: "BlkIOEntry"
+ field {
+ name: "op"
+ number: 1
+ label: LABEL_OPTIONAL
+ type: TYPE_STRING
+ json_name: "op"
+ }
+ field {
+ name: "device"
+ number: 2
+ label: LABEL_OPTIONAL
+ type: TYPE_STRING
+ json_name: "device"
+ }
+ field {
+ name: "major"
+ number: 3
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "major"
+ }
+ field {
+ name: "minor"
+ number: 4
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "minor"
+ }
+ field {
+ name: "value"
+ number: 5
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "value"
+ }
+ }
+ message_type {
+ name: "RdmaStat"
+ field {
+ name: "current"
+ number: 1
+ label: LABEL_REPEATED
+ type: TYPE_MESSAGE
+ type_name: ".io.containerd.cgroups.v1.RdmaEntry"
+ json_name: "current"
+ }
+ field {
+ name: "limit"
+ number: 2
+ label: LABEL_REPEATED
+ type: TYPE_MESSAGE
+ type_name: ".io.containerd.cgroups.v1.RdmaEntry"
+ json_name: "limit"
+ }
+ }
+ message_type {
+ name: "RdmaEntry"
+ field {
+ name: "device"
+ number: 1
+ label: LABEL_OPTIONAL
+ type: TYPE_STRING
+ json_name: "device"
+ }
+ field {
+ name: "hca_handles"
+ number: 2
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT32
+ json_name: "hcaHandles"
+ }
+ field {
+ name: "hca_objects"
+ number: 3
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT32
+ json_name: "hcaObjects"
+ }
+ }
+ message_type {
+ name: "NetworkStat"
+ field {
+ name: "name"
+ number: 1
+ label: LABEL_OPTIONAL
+ type: TYPE_STRING
+ json_name: "name"
+ }
+ field {
+ name: "rx_bytes"
+ number: 2
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "rxBytes"
+ }
+ field {
+ name: "rx_packets"
+ number: 3
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "rxPackets"
+ }
+ field {
+ name: "rx_errors"
+ number: 4
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "rxErrors"
+ }
+ field {
+ name: "rx_dropped"
+ number: 5
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "rxDropped"
+ }
+ field {
+ name: "tx_bytes"
+ number: 6
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "txBytes"
+ }
+ field {
+ name: "tx_packets"
+ number: 7
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "txPackets"
+ }
+ field {
+ name: "tx_errors"
+ number: 8
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "txErrors"
+ }
+ field {
+ name: "tx_dropped"
+ number: 9
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "txDropped"
+ }
+ }
+ message_type {
+ name: "CgroupStats"
+ field {
+ name: "nr_sleeping"
+ number: 1
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "nrSleeping"
+ }
+ field {
+ name: "nr_running"
+ number: 2
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "nrRunning"
+ }
+ field {
+ name: "nr_stopped"
+ number: 3
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "nrStopped"
+ }
+ field {
+ name: "nr_uninterruptible"
+ number: 4
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "nrUninterruptible"
+ }
+ field {
+ name: "nr_io_wait"
+ number: 5
+ label: LABEL_OPTIONAL
+ type: TYPE_UINT64
+ json_name: "nrIoWait"
+ }
+ }
+ syntax: "proto3"
+}
diff --git a/vendor/github.com/containerd/cgroups/stats/v1/metrics.proto b/vendor/github.com/containerd/cgroups/stats/v1/metrics.proto
new file mode 100644
index 000000000..b3f6cc37d
--- /dev/null
+++ b/vendor/github.com/containerd/cgroups/stats/v1/metrics.proto
@@ -0,0 +1,158 @@
+syntax = "proto3";
+
+package io.containerd.cgroups.v1;
+
+import "gogoproto/gogo.proto";
+
+message Metrics {
+ repeated HugetlbStat hugetlb = 1;
+ PidsStat pids = 2;
+ CPUStat cpu = 3 [(gogoproto.customname) = "CPU"];
+ MemoryStat memory = 4;
+ BlkIOStat blkio = 5;
+ RdmaStat rdma = 6;
+ repeated NetworkStat network = 7;
+ CgroupStats cgroup_stats = 8;
+ MemoryOomControl memory_oom_control = 9;
+}
+
+message HugetlbStat {
+ uint64 usage = 1;
+ uint64 max = 2;
+ uint64 failcnt = 3;
+ string pagesize = 4;
+}
+
+message PidsStat {
+ uint64 current = 1;
+ uint64 limit = 2;
+}
+
+message CPUStat {
+ CPUUsage usage = 1;
+ Throttle throttling = 2;
+}
+
+message CPUUsage {
+ // values in nanoseconds
+ uint64 total = 1;
+ uint64 kernel = 2;
+ uint64 user = 3;
+ repeated uint64 per_cpu = 4 [(gogoproto.customname) = "PerCPU"];
+
+}
+
+message Throttle {
+ uint64 periods = 1;
+ uint64 throttled_periods = 2;
+ uint64 throttled_time = 3;
+}
+
+message MemoryStat {
+ uint64 cache = 1;
+ uint64 rss = 2 [(gogoproto.customname) = "RSS"];
+ uint64 rss_huge = 3 [(gogoproto.customname) = "RSSHuge"];
+ uint64 mapped_file = 4;
+ uint64 dirty = 5;
+ uint64 writeback = 6;
+ uint64 pg_pg_in = 7;
+ uint64 pg_pg_out = 8;
+ uint64 pg_fault = 9;
+ uint64 pg_maj_fault = 10;
+ uint64 inactive_anon = 11;
+ uint64 active_anon = 12;
+ uint64 inactive_file = 13;
+ uint64 active_file = 14;
+ uint64 unevictable = 15;
+ uint64 hierarchical_memory_limit = 16;
+ uint64 hierarchical_swap_limit = 17;
+ uint64 total_cache = 18;
+ uint64 total_rss = 19 [(gogoproto.customname) = "TotalRSS"];
+ uint64 total_rss_huge = 20 [(gogoproto.customname) = "TotalRSSHuge"];
+ uint64 total_mapped_file = 21;
+ uint64 total_dirty = 22;
+ uint64 total_writeback = 23;
+ uint64 total_pg_pg_in = 24;
+ uint64 total_pg_pg_out = 25;
+ uint64 total_pg_fault = 26;
+ uint64 total_pg_maj_fault = 27;
+ uint64 total_inactive_anon = 28;
+ uint64 total_active_anon = 29;
+ uint64 total_inactive_file = 30;
+ uint64 total_active_file = 31;
+ uint64 total_unevictable = 32;
+ MemoryEntry usage = 33;
+ MemoryEntry swap = 34;
+ MemoryEntry kernel = 35;
+ MemoryEntry kernel_tcp = 36 [(gogoproto.customname) = "KernelTCP"];
+
+}
+
+message MemoryEntry {
+ uint64 limit = 1;
+ uint64 usage = 2;
+ uint64 max = 3;
+ uint64 failcnt = 4;
+}
+
+message MemoryOomControl {
+ uint64 oom_kill_disable = 1;
+ uint64 under_oom = 2;
+ uint64 oom_kill = 3;
+}
+
+message BlkIOStat {
+ repeated BlkIOEntry io_service_bytes_recursive = 1;
+ repeated BlkIOEntry io_serviced_recursive = 2;
+ repeated BlkIOEntry io_queued_recursive = 3;
+ repeated BlkIOEntry io_service_time_recursive = 4;
+ repeated BlkIOEntry io_wait_time_recursive = 5;
+ repeated BlkIOEntry io_merged_recursive = 6;
+ repeated BlkIOEntry io_time_recursive = 7;
+ repeated BlkIOEntry sectors_recursive = 8;
+}
+
+message BlkIOEntry {
+ string op = 1;
+ string device = 2;
+ uint64 major = 3;
+ uint64 minor = 4;
+ uint64 value = 5;
+}
+
+message RdmaStat {
+ repeated RdmaEntry current = 1;
+ repeated RdmaEntry limit = 2;
+}
+
+message RdmaEntry {
+ string device = 1;
+ uint32 hca_handles = 2;
+ uint32 hca_objects = 3;
+}
+
+message NetworkStat {
+ string name = 1;
+ uint64 rx_bytes = 2;
+ uint64 rx_packets = 3;
+ uint64 rx_errors = 4;
+ uint64 rx_dropped = 5;
+ uint64 tx_bytes = 6;
+ uint64 tx_packets = 7;
+ uint64 tx_errors = 8;
+ uint64 tx_dropped = 9;
+}
+
+// CgroupStats exports per-cgroup statistics.
+message CgroupStats {
+ // number of tasks sleeping
+ uint64 nr_sleeping = 1;
+ // number of tasks running
+ uint64 nr_running = 2;
+ // number of tasks in stopped state
+ uint64 nr_stopped = 3;
+ // number of tasks in uninterruptible state
+ uint64 nr_uninterruptible = 4;
+ // number of tasks waiting on IO
+ uint64 nr_io_wait = 5;
+}
diff --git a/vendor/github.com/containerd/console/.golangci.yml b/vendor/github.com/containerd/console/.golangci.yml
new file mode 100644
index 000000000..fcba5e885
--- /dev/null
+++ b/vendor/github.com/containerd/console/.golangci.yml
@@ -0,0 +1,20 @@
+linters:
+ enable:
+ - structcheck
+ - varcheck
+ - staticcheck
+ - unconvert
+ - gofmt
+ - goimports
+ - golint
+ - ineffassign
+ - vet
+ - unused
+ - misspell
+ disable:
+ - errcheck
+
+run:
+ timeout: 3m
+ skip-dirs:
+ - vendor
diff --git a/vendor/github.com/containerd/console/LICENSE b/vendor/github.com/containerd/console/LICENSE
new file mode 100644
index 000000000..584149b6e
--- /dev/null
+++ b/vendor/github.com/containerd/console/LICENSE
@@ -0,0 +1,191 @@
+
+ Apache License
+ Version 2.0, January 2004
+ https://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ Copyright The containerd Authors
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ https://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
diff --git a/vendor/github.com/containerd/console/README.md b/vendor/github.com/containerd/console/README.md
new file mode 100644
index 000000000..580b461a7
--- /dev/null
+++ b/vendor/github.com/containerd/console/README.md
@@ -0,0 +1,29 @@
+# console
+
+[![PkgGoDev](https://pkg.go.dev/badge/github.com/containerd/console)](https://pkg.go.dev/github.com/containerd/console)
+[![Build Status](https://github.com/containerd/console/workflows/CI/badge.svg)](https://github.com/containerd/console/actions?query=workflow%3ACI)
+[![Go Report Card](https://goreportcard.com/badge/github.com/containerd/console)](https://goreportcard.com/report/github.com/containerd/console)
+
+Golang package for dealing with consoles. Light on deps and a simple API.
+
+## Modifying the current process
+
+```go
+current := console.Current()
+defer current.Reset()
+
+if err := current.SetRaw(); err != nil {
+	// handle the error appropriately
+}
+ws, err := current.Size()
+current.Resize(ws)
+```
+
+## Project details
+
+console is a containerd sub-project, licensed under the [Apache 2.0 license](./LICENSE).
+As a containerd sub-project, you will find the:
+ * [Project governance](https://github.com/containerd/project/blob/master/GOVERNANCE.md),
+ * [Maintainers](https://github.com/containerd/project/blob/master/MAINTAINERS),
+ * and [Contributing guidelines](https://github.com/containerd/project/blob/master/CONTRIBUTING.md)
+
+information in our [`containerd/project`](https://github.com/containerd/project) repository.
diff --git a/vendor/github.com/containerd/console/console.go b/vendor/github.com/containerd/console/console.go
new file mode 100644
index 000000000..f989d28a4
--- /dev/null
+++ b/vendor/github.com/containerd/console/console.go
@@ -0,0 +1,87 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package console
+
+import (
+ "errors"
+ "io"
+ "os"
+)
+
+var ErrNotAConsole = errors.New("provided file is not a console")
+
+type File interface {
+ io.ReadWriteCloser
+
+ // Fd returns its file descriptor
+ Fd() uintptr
+ // Name returns its file name
+ Name() string
+}
+
+type Console interface {
+ File
+
+ // Resize resizes the console to the provided window size
+ Resize(WinSize) error
+ // ResizeFrom resizes the calling console to the size of the
+ // provided console
+ ResizeFrom(Console) error
+ // SetRaw sets the console in raw mode
+ SetRaw() error
+ // DisableEcho disables echo on the console
+ DisableEcho() error
+ // Reset restores the console to its original state
+ Reset() error
+ // Size returns the window size of the console
+ Size() (WinSize, error)
+}
+
+// WinSize specifies the window size of the console
+type WinSize struct {
+ // Height of the console
+ Height uint16
+ // Width of the console
+ Width uint16
+ x uint16
+ y uint16
+}
+
+// Current returns the current process' console
+func Current() (c Console) {
+ var err error
+ // Usually all three streams (stdin, stdout, and stderr)
+ // are open to the same console, but some might be redirected,
+ // so try all three.
+ for _, s := range []*os.File{os.Stderr, os.Stdout, os.Stdin} {
+ if c, err = ConsoleFromFile(s); err == nil {
+ return c
+ }
+ }
+ // By the design of this function, one of the std streams
+ // should always be a console.
+ panic(err)
+}
+
+// ConsoleFromFile returns a console using the provided file
+// nolint:golint
+func ConsoleFromFile(f File) (Console, error) {
+ if err := checkConsole(f); err != nil {
+ return nil, err
+ }
+ return newMaster(f)
+}
diff --git a/vendor/github.com/containerd/console/console_linux.go b/vendor/github.com/containerd/console/console_linux.go
new file mode 100644
index 000000000..c1c839ee3
--- /dev/null
+++ b/vendor/github.com/containerd/console/console_linux.go
@@ -0,0 +1,280 @@
+// +build linux
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package console
+
+import (
+ "io"
+ "os"
+ "sync"
+
+ "golang.org/x/sys/unix"
+)
+
+const (
+ maxEvents = 128
+)
+
+// Epoller manages multiple epoll consoles using the edge-triggered epoll API so
+// we don't have to deal with repeated wake-ups for EPOLLERR or EPOLLHUP.
+// For more details, see:
+// - https://github.com/systemd/systemd/pull/4262
+// - https://github.com/moby/moby/issues/27202
+//
+// Example usage of Epoller and EpollConsole is as follows:
+//
+// epoller, _ := NewEpoller()
+// epollConsole, _ := epoller.Add(console)
+// go epoller.Wait()
+// var (
+// b bytes.Buffer
+// wg sync.WaitGroup
+// )
+// wg.Add(1)
+// go func() {
+// io.Copy(&b, epollConsole)
+// wg.Done()
+// }()
+// // perform I/O on the console
+// epollConsole.Shutdown(epoller.CloseConsole)
+// wg.Wait()
+// epollConsole.Close()
+type Epoller struct {
+ efd int
+ mu sync.Mutex
+ fdMapping map[int]*EpollConsole
+ closeOnce sync.Once
+}
+
+// NewEpoller returns an instance of epoller with a valid epoll fd.
+func NewEpoller() (*Epoller, error) {
+ efd, err := unix.EpollCreate1(unix.EPOLL_CLOEXEC)
+ if err != nil {
+ return nil, err
+ }
+ return &Epoller{
+ efd: efd,
+ fdMapping: make(map[int]*EpollConsole),
+ }, nil
+}
+
+// Add creates an epoll console based on the provided console. The console will
+// be registered with EPOLLET (i.e. using edge-triggered notification) and its
+// file descriptor will be set to non-blocking mode. After this, the user should
+// use the returned console to perform I/O.
+func (e *Epoller) Add(console Console) (*EpollConsole, error) {
+ sysfd := int(console.Fd())
+ // Set sysfd to non-blocking mode
+ if err := unix.SetNonblock(sysfd, true); err != nil {
+ return nil, err
+ }
+
+ ev := unix.EpollEvent{
+ Events: unix.EPOLLIN | unix.EPOLLOUT | unix.EPOLLRDHUP | unix.EPOLLET,
+ Fd: int32(sysfd),
+ }
+ if err := unix.EpollCtl(e.efd, unix.EPOLL_CTL_ADD, sysfd, &ev); err != nil {
+ return nil, err
+ }
+ ef := &EpollConsole{
+ Console: console,
+ sysfd: sysfd,
+ readc: sync.NewCond(&sync.Mutex{}),
+ writec: sync.NewCond(&sync.Mutex{}),
+ }
+ e.mu.Lock()
+ e.fdMapping[sysfd] = ef
+ e.mu.Unlock()
+ return ef, nil
+}
+
+// Wait starts the loop that waits for its consoles' notifications and signals
+// the appropriate console that it can perform I/O.
+func (e *Epoller) Wait() error {
+ events := make([]unix.EpollEvent, maxEvents)
+ for {
+ n, err := unix.EpollWait(e.efd, events, -1)
+ if err != nil {
+ // EINTR: The call was interrupted by a signal handler before either
+ // any of the requested events occurred or the timeout expired
+ if err == unix.EINTR {
+ continue
+ }
+ return err
+ }
+ for i := 0; i < n; i++ {
+ ev := &events[i]
+ // the console is ready to be read from
+ if ev.Events&(unix.EPOLLIN|unix.EPOLLHUP|unix.EPOLLERR) != 0 {
+ if epfile := e.getConsole(int(ev.Fd)); epfile != nil {
+ epfile.signalRead()
+ }
+ }
+ // the console is ready to be written to
+ if ev.Events&(unix.EPOLLOUT|unix.EPOLLHUP|unix.EPOLLERR) != 0 {
+ if epfile := e.getConsole(int(ev.Fd)); epfile != nil {
+ epfile.signalWrite()
+ }
+ }
+ }
+ }
+}
+
+// CloseConsole unregisters the console's file descriptor from epoll interface
+func (e *Epoller) CloseConsole(fd int) error {
+ e.mu.Lock()
+ defer e.mu.Unlock()
+ delete(e.fdMapping, fd)
+ return unix.EpollCtl(e.efd, unix.EPOLL_CTL_DEL, fd, &unix.EpollEvent{})
+}
+
+func (e *Epoller) getConsole(sysfd int) *EpollConsole {
+ e.mu.Lock()
+ f := e.fdMapping[sysfd]
+ e.mu.Unlock()
+ return f
+}
+
+// Close closes the epoll fd
+func (e *Epoller) Close() error {
+ closeErr := os.ErrClosed // default to "file already closed"
+ e.closeOnce.Do(func() {
+ closeErr = unix.Close(e.efd)
+ })
+ return closeErr
+}
+
+// EpollConsole acts like a console but registers its file descriptor with an
+// epoll fd and uses epoll API to perform I/O.
+type EpollConsole struct {
+ Console
+ readc *sync.Cond
+ writec *sync.Cond
+ sysfd int
+ closed bool
+}
+
+// Read reads up to len(p) bytes into p. It returns the number of bytes read
+// (0 <= n <= len(p)) and any error encountered.
+//
+// If the console's read returns EAGAIN or EIO, we assume that it's a
+// temporary error because the other side went away and wait for the signal
+// generated by the epoll event to continue.
+func (ec *EpollConsole) Read(p []byte) (n int, err error) {
+ var read int
+ ec.readc.L.Lock()
+ defer ec.readc.L.Unlock()
+ for {
+ read, err = ec.Console.Read(p[n:])
+ n += read
+ if err != nil {
+ var hangup bool
+ if perr, ok := err.(*os.PathError); ok {
+ hangup = (perr.Err == unix.EAGAIN || perr.Err == unix.EIO)
+ } else {
+ hangup = (err == unix.EAGAIN || err == unix.EIO)
+ }
+ // If the other end disappears, assume this is temporary and wait for the
+ // signal to continue again, unless we didn't read anything and the
+ // console is already marked as closed, in which case we should exit.
+ if hangup && !(n == 0 && len(p) > 0 && ec.closed) {
+ ec.readc.Wait()
+ continue
+ }
+ }
+ break
+ }
+ // if we didn't read anything then return io.EOF to end gracefully
+ if n == 0 && len(p) > 0 && err == nil {
+ err = io.EOF
+ }
+ // signal for others that we finished the read
+ ec.readc.Signal()
+ return n, err
+}
+
+// Write writes len(p) bytes from p to the console. It returns the number of
+// bytes written from p (0 <= n <= len(p)) and any error encountered that caused
+// the write to stop early.
+//
+// If a write to the console returns EAGAIN or EIO, we assume that it's a
+// temporary error because the other side went away and wait for the signal
+// generated by the epoll event to continue.
+func (ec *EpollConsole) Write(p []byte) (n int, err error) {
+ var written int
+ ec.writec.L.Lock()
+ defer ec.writec.L.Unlock()
+ for {
+ written, err = ec.Console.Write(p[n:])
+ n += written
+ if err != nil {
+ var hangup bool
+ if perr, ok := err.(*os.PathError); ok {
+ hangup = (perr.Err == unix.EAGAIN || perr.Err == unix.EIO)
+ } else {
+ hangup = (err == unix.EAGAIN || err == unix.EIO)
+ }
+ // if the other end disappears, assume this is temporary and wait for the
+ // signal to continue again.
+ if hangup {
+ ec.writec.Wait()
+ continue
+ }
+ }
+ // unrecoverable error, break the loop and return the error
+ break
+ }
+ if n < len(p) && err == nil {
+ err = io.ErrShortWrite
+ }
+ // signal for others that we finished the write
+ ec.writec.Signal()
+ return n, err
+}
+
+// Shutdown closes the file descriptor and signals all waiters for this fd.
+// It accepts a callback which will be called with the console's fd. The
+// callback typically will be used to do further cleanup such as unregister the
+// console's fd from the epoll interface.
+// Users should call Shutdown and wait for all I/O operations to be finished
+// before closing the console.
+func (ec *EpollConsole) Shutdown(close func(int) error) error {
+ ec.readc.L.Lock()
+ defer ec.readc.L.Unlock()
+ ec.writec.L.Lock()
+ defer ec.writec.L.Unlock()
+
+ ec.readc.Broadcast()
+ ec.writec.Broadcast()
+ ec.closed = true
+ return close(ec.sysfd)
+}
+
+// signalRead signals that the console is readable.
+func (ec *EpollConsole) signalRead() {
+ ec.readc.L.Lock()
+ ec.readc.Signal()
+ ec.readc.L.Unlock()
+}
+
+// signalWrite signals that the console is writable.
+func (ec *EpollConsole) signalWrite() {
+ ec.writec.L.Lock()
+ ec.writec.Signal()
+ ec.writec.L.Unlock()
+}
diff --git a/vendor/github.com/containerd/console/console_unix.go b/vendor/github.com/containerd/console/console_unix.go
new file mode 100644
index 000000000..a08117695
--- /dev/null
+++ b/vendor/github.com/containerd/console/console_unix.go
@@ -0,0 +1,156 @@
+// +build darwin freebsd linux netbsd openbsd solaris
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package console
+
+import (
+ "golang.org/x/sys/unix"
+)
+
+// NewPty creates a new pty pair
+// The master is returned as the first console and a string
+// with the path to the pty slave is returned as the second
+func NewPty() (Console, string, error) {
+ f, err := openpt()
+ if err != nil {
+ return nil, "", err
+ }
+ slave, err := ptsname(f)
+ if err != nil {
+ return nil, "", err
+ }
+ if err := unlockpt(f); err != nil {
+ return nil, "", err
+ }
+ m, err := newMaster(f)
+ if err != nil {
+ return nil, "", err
+ }
+ return m, slave, nil
+}
+
+type master struct {
+ f File
+ original *unix.Termios
+}
+
+func (m *master) Read(b []byte) (int, error) {
+ return m.f.Read(b)
+}
+
+func (m *master) Write(b []byte) (int, error) {
+ return m.f.Write(b)
+}
+
+func (m *master) Close() error {
+ return m.f.Close()
+}
+
+func (m *master) Resize(ws WinSize) error {
+ return tcswinsz(m.f.Fd(), ws)
+}
+
+func (m *master) ResizeFrom(c Console) error {
+ ws, err := c.Size()
+ if err != nil {
+ return err
+ }
+ return m.Resize(ws)
+}
+
+func (m *master) Reset() error {
+ if m.original == nil {
+ return nil
+ }
+ return tcset(m.f.Fd(), m.original)
+}
+
+func (m *master) getCurrent() (unix.Termios, error) {
+ var termios unix.Termios
+ if err := tcget(m.f.Fd(), &termios); err != nil {
+ return unix.Termios{}, err
+ }
+ return termios, nil
+}
+
+func (m *master) SetRaw() error {
+ rawState, err := m.getCurrent()
+ if err != nil {
+ return err
+ }
+ rawState = cfmakeraw(rawState)
+ rawState.Oflag = rawState.Oflag | unix.OPOST
+ return tcset(m.f.Fd(), &rawState)
+}
+
+func (m *master) DisableEcho() error {
+ rawState, err := m.getCurrent()
+ if err != nil {
+ return err
+ }
+ rawState.Lflag = rawState.Lflag &^ unix.ECHO
+ return tcset(m.f.Fd(), &rawState)
+}
+
+func (m *master) Size() (WinSize, error) {
+ return tcgwinsz(m.f.Fd())
+}
+
+func (m *master) Fd() uintptr {
+ return m.f.Fd()
+}
+
+func (m *master) Name() string {
+ return m.f.Name()
+}
+
+// checkConsole checks if the provided file is a console
+func checkConsole(f File) error {
+ var termios unix.Termios
+ if tcget(f.Fd(), &termios) != nil {
+ return ErrNotAConsole
+ }
+ return nil
+}
+
+func newMaster(f File) (Console, error) {
+ m := &master{
+ f: f,
+ }
+ t, err := m.getCurrent()
+ if err != nil {
+ return nil, err
+ }
+ m.original = &t
+ return m, nil
+}
+
+// ClearONLCR sets the necessary tty_ioctl(4)s to ensure that a pty pair
+// created by us acts normally. In particular, a not-very-well-known default of
+// Linux unix98 ptys is that they have +onlcr by default. While this isn't a
+// problem for terminal emulators, because we relay data from the terminal we
+// also relay that funky line discipline.
+func ClearONLCR(fd uintptr) error {
+ return setONLCR(fd, false)
+}
+
+// SetONLCR sets the necessary tty_ioctl(4)s to ensure that a pty pair
+// created by us acts as intended for a terminal emulator.
+func SetONLCR(fd uintptr) error {
+ return setONLCR(fd, true)
+}
diff --git a/vendor/github.com/containerd/console/console_windows.go b/vendor/github.com/containerd/console/console_windows.go
new file mode 100644
index 000000000..787c11fe5
--- /dev/null
+++ b/vendor/github.com/containerd/console/console_windows.go
@@ -0,0 +1,216 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package console
+
+import (
+ "errors"
+ "fmt"
+ "os"
+
+ "golang.org/x/sys/windows"
+)
+
+var (
+ vtInputSupported bool
+ ErrNotImplemented = errors.New("not implemented")
+)
+
+func (m *master) initStdios() {
+ m.in = windows.Handle(os.Stdin.Fd())
+ if err := windows.GetConsoleMode(m.in, &m.inMode); err == nil {
+ // Validate that windows.ENABLE_VIRTUAL_TERMINAL_INPUT is supported, but do not set it.
+ if err = windows.SetConsoleMode(m.in, m.inMode|windows.ENABLE_VIRTUAL_TERMINAL_INPUT); err == nil {
+ vtInputSupported = true
+ }
+ // Unconditionally set the console mode back even on failure because SetConsoleMode
+ // remembers invalid bits on input handles.
+ windows.SetConsoleMode(m.in, m.inMode)
+ } else {
+ fmt.Printf("failed to get console mode for stdin: %v\n", err)
+ }
+
+ m.out = windows.Handle(os.Stdout.Fd())
+ if err := windows.GetConsoleMode(m.out, &m.outMode); err == nil {
+ if err := windows.SetConsoleMode(m.out, m.outMode|windows.ENABLE_VIRTUAL_TERMINAL_PROCESSING); err == nil {
+ m.outMode |= windows.ENABLE_VIRTUAL_TERMINAL_PROCESSING
+ } else {
+ windows.SetConsoleMode(m.out, m.outMode)
+ }
+ } else {
+ fmt.Printf("failed to get console mode for stdout: %v\n", err)
+ }
+
+ m.err = windows.Handle(os.Stderr.Fd())
+ if err := windows.GetConsoleMode(m.err, &m.errMode); err == nil {
+ if err := windows.SetConsoleMode(m.err, m.errMode|windows.ENABLE_VIRTUAL_TERMINAL_PROCESSING); err == nil {
+ m.errMode |= windows.ENABLE_VIRTUAL_TERMINAL_PROCESSING
+ } else {
+ windows.SetConsoleMode(m.err, m.errMode)
+ }
+ } else {
+ fmt.Printf("failed to get console mode for stderr: %v\n", err)
+ }
+}
+
+type master struct {
+ in windows.Handle
+ inMode uint32
+
+ out windows.Handle
+ outMode uint32
+
+ err windows.Handle
+ errMode uint32
+}
+
+func (m *master) SetRaw() error {
+ if err := makeInputRaw(m.in, m.inMode); err != nil {
+ return err
+ }
+
+ // Set StdOut and StdErr to raw mode, we ignore failures since
+ // windows.DISABLE_NEWLINE_AUTO_RETURN might not be supported on this version of
+ // Windows.
+
+ windows.SetConsoleMode(m.out, m.outMode|windows.DISABLE_NEWLINE_AUTO_RETURN)
+
+ windows.SetConsoleMode(m.err, m.errMode|windows.DISABLE_NEWLINE_AUTO_RETURN)
+
+ return nil
+}
+
+func (m *master) Reset() error {
+ for _, s := range []struct {
+ fd windows.Handle
+ mode uint32
+ }{
+ {m.in, m.inMode},
+ {m.out, m.outMode},
+ {m.err, m.errMode},
+ } {
+ if err := windows.SetConsoleMode(s.fd, s.mode); err != nil {
+ return fmt.Errorf("unable to restore console mode: %w", err)
+ }
+ }
+
+ return nil
+}
+
+func (m *master) Size() (WinSize, error) {
+ var info windows.ConsoleScreenBufferInfo
+ err := windows.GetConsoleScreenBufferInfo(m.out, &info)
+ if err != nil {
+ return WinSize{}, fmt.Errorf("unable to get console info: %w", err)
+ }
+
+ winsize := WinSize{
+ Width: uint16(info.Window.Right - info.Window.Left + 1),
+ Height: uint16(info.Window.Bottom - info.Window.Top + 1),
+ }
+
+ return winsize, nil
+}
+
+func (m *master) Resize(ws WinSize) error {
+ return ErrNotImplemented
+}
+
+func (m *master) ResizeFrom(c Console) error {
+ return ErrNotImplemented
+}
+
+func (m *master) DisableEcho() error {
+ mode := m.inMode &^ windows.ENABLE_ECHO_INPUT
+ mode |= windows.ENABLE_PROCESSED_INPUT
+ mode |= windows.ENABLE_LINE_INPUT
+
+ if err := windows.SetConsoleMode(m.in, mode); err != nil {
+ return fmt.Errorf("unable to set console to disable echo: %w", err)
+ }
+
+ return nil
+}
+
+func (m *master) Close() error {
+ return nil
+}
+
+func (m *master) Read(b []byte) (int, error) {
+ return os.Stdin.Read(b)
+}
+
+func (m *master) Write(b []byte) (int, error) {
+ return os.Stdout.Write(b)
+}
+
+func (m *master) Fd() uintptr {
+ return uintptr(m.in)
+}
+
+// On Windows, a console can only be made from os.Std{in,out,err}; hence there
+// isn't a single name we can use here. Returning a dummy "console" value in
+// this case should be sufficient.
+func (m *master) Name() string {
+ return "console"
+}
+
+// makeInputRaw puts the terminal (Windows Console) connected to the given
+// file descriptor into raw mode
+func makeInputRaw(fd windows.Handle, mode uint32) error {
+ // See
+ // -- https://msdn.microsoft.com/en-us/library/windows/desktop/ms686033(v=vs.85).aspx
+ // -- https://msdn.microsoft.com/en-us/library/windows/desktop/ms683462(v=vs.85).aspx
+
+ // Disable these modes
+ mode &^= windows.ENABLE_ECHO_INPUT
+ mode &^= windows.ENABLE_LINE_INPUT
+ mode &^= windows.ENABLE_MOUSE_INPUT
+ mode &^= windows.ENABLE_WINDOW_INPUT
+ mode &^= windows.ENABLE_PROCESSED_INPUT
+
+ // Enable these modes
+ mode |= windows.ENABLE_EXTENDED_FLAGS
+ mode |= windows.ENABLE_INSERT_MODE
+ mode |= windows.ENABLE_QUICK_EDIT_MODE
+
+ if vtInputSupported {
+ mode |= windows.ENABLE_VIRTUAL_TERMINAL_INPUT
+ }
+
+ if err := windows.SetConsoleMode(fd, mode); err != nil {
+ return fmt.Errorf("unable to set console to raw mode: %w", err)
+ }
+
+ return nil
+}
+
+func checkConsole(f File) error {
+ var mode uint32
+ if err := windows.GetConsoleMode(windows.Handle(f.Fd()), &mode); err != nil {
+ return err
+ }
+ return nil
+}
+
+func newMaster(f File) (Console, error) {
+ if f != os.Stdin && f != os.Stdout && f != os.Stderr {
+ return nil, errors.New("creating a console from a file is not supported on windows")
+ }
+ m := &master{}
+ m.initStdios()
+ return m, nil
+}
diff --git a/vendor/github.com/containerd/console/console_zos.go b/vendor/github.com/containerd/console/console_zos.go
new file mode 100644
index 000000000..b348a839a
--- /dev/null
+++ b/vendor/github.com/containerd/console/console_zos.go
@@ -0,0 +1,163 @@
+// +build zos
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package console
+
+import (
+ "fmt"
+ "os"
+
+ "golang.org/x/sys/unix"
+)
+
+// NewPty creates a new pty pair
+// The master is returned as the first console and a string
+// with the path to the pty slave is returned as the second
+func NewPty() (Console, string, error) {
+ var f File
+ var err error
+ var slave string
+ for i := 0;; i++ {
+ ptyp := fmt.Sprintf("/dev/ptyp%04d", i)
+ f, err = os.OpenFile(ptyp, os.O_RDWR, 0600)
+ if err == nil {
+ slave = fmt.Sprintf("/dev/ttyp%04d", i)
+ break
+ }
+ if os.IsNotExist(err) {
+ return nil, "", err
+ }
+ // else probably Resource Busy
+ }
+ m, err := newMaster(f)
+ if err != nil {
+ return nil, "", err
+ }
+ return m, slave, nil
+}
+
+type master struct {
+ f File
+ original *unix.Termios
+}
+
+func (m *master) Read(b []byte) (int, error) {
+ return m.f.Read(b)
+}
+
+func (m *master) Write(b []byte) (int, error) {
+ return m.f.Write(b)
+}
+
+func (m *master) Close() error {
+ return m.f.Close()
+}
+
+func (m *master) Resize(ws WinSize) error {
+ return tcswinsz(m.f.Fd(), ws)
+}
+
+func (m *master) ResizeFrom(c Console) error {
+ ws, err := c.Size()
+ if err != nil {
+ return err
+ }
+ return m.Resize(ws)
+}
+
+func (m *master) Reset() error {
+ if m.original == nil {
+ return nil
+ }
+ return tcset(m.f.Fd(), m.original)
+}
+
+func (m *master) getCurrent() (unix.Termios, error) {
+ var termios unix.Termios
+ if err := tcget(m.f.Fd(), &termios); err != nil {
+ return unix.Termios{}, err
+ }
+ return termios, nil
+}
+
+func (m *master) SetRaw() error {
+ rawState, err := m.getCurrent()
+ if err != nil {
+ return err
+ }
+ rawState = cfmakeraw(rawState)
+ rawState.Oflag = rawState.Oflag | unix.OPOST
+ return tcset(m.f.Fd(), &rawState)
+}
+
+func (m *master) DisableEcho() error {
+ rawState, err := m.getCurrent()
+ if err != nil {
+ return err
+ }
+ rawState.Lflag = rawState.Lflag &^ unix.ECHO
+ return tcset(m.f.Fd(), &rawState)
+}
+
+func (m *master) Size() (WinSize, error) {
+ return tcgwinsz(m.f.Fd())
+}
+
+func (m *master) Fd() uintptr {
+ return m.f.Fd()
+}
+
+func (m *master) Name() string {
+ return m.f.Name()
+}
+
+// checkConsole checks if the provided file is a console
+func checkConsole(f File) error {
+ var termios unix.Termios
+ if tcget(f.Fd(), &termios) != nil {
+ return ErrNotAConsole
+ }
+ return nil
+}
+
+func newMaster(f File) (Console, error) {
+ m := &master{
+ f: f,
+ }
+ t, err := m.getCurrent()
+ if err != nil {
+ return nil, err
+ }
+ m.original = &t
+ return m, nil
+}
+
+// ClearONLCR sets the necessary tty_ioctl(4)s to ensure that a pty pair
+// created by us acts normally. In particular, a not-very-well-known default of
+// Linux unix98 ptys is that they have +onlcr by default. While this isn't a
+// problem for terminal emulators, because we relay data from the terminal we
+// also relay that funky line discipline.
+func ClearONLCR(fd uintptr) error {
+ return setONLCR(fd, false)
+}
+
+// SetONLCR sets the necessary tty_ioctl(4)s to ensure that a pty pair
+// created by us acts as intended for a terminal emulator.
+func SetONLCR(fd uintptr) error {
+ return setONLCR(fd, true)
+}
diff --git a/vendor/github.com/containerd/console/pty_freebsd_cgo.go b/vendor/github.com/containerd/console/pty_freebsd_cgo.go
new file mode 100644
index 000000000..cbd3cd7ea
--- /dev/null
+++ b/vendor/github.com/containerd/console/pty_freebsd_cgo.go
@@ -0,0 +1,45 @@
+// +build freebsd,cgo
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package console
+
+import (
+ "fmt"
+ "os"
+)
+
+/*
+#include <stdlib.h>
+#include <unistd.h>
+#include <fcntl.h>
+*/
+import "C"
+
+// openpt allocates a new pseudo-terminal and establishes a connection with its
+// control device.
+func openpt() (*os.File, error) {
+ fd, err := C.posix_openpt(C.O_RDWR)
+ if err != nil {
+ return nil, fmt.Errorf("posix_openpt: %w", err)
+ }
+ if _, err := C.grantpt(fd); err != nil {
+ C.close(fd)
+ return nil, fmt.Errorf("grantpt: %w", err)
+ }
+ return os.NewFile(uintptr(fd), ""), nil
+}
diff --git a/vendor/github.com/containerd/console/pty_freebsd_nocgo.go b/vendor/github.com/containerd/console/pty_freebsd_nocgo.go
new file mode 100644
index 000000000..b5e43181d
--- /dev/null
+++ b/vendor/github.com/containerd/console/pty_freebsd_nocgo.go
@@ -0,0 +1,36 @@
+// +build freebsd,!cgo
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package console
+
+import (
+ "os"
+)
+
+//
+// Implementing the functions below requires cgo support. Non-cgo stubs
+// versions are defined below to enable cross-compilation of source code
+// that depends on these functions, but the resultant cross-compiled
+// binaries cannot actually be used. If the stub function(s) below are
+// actually invoked they will display an error message and cause the
+// calling process to exit.
+//
+
+func openpt() (*os.File, error) {
+ panic("openpt() support requires cgo.")
+}
diff --git a/vendor/github.com/containerd/console/pty_unix.go b/vendor/github.com/containerd/console/pty_unix.go
new file mode 100644
index 000000000..d5a6bd8ca
--- /dev/null
+++ b/vendor/github.com/containerd/console/pty_unix.go
@@ -0,0 +1,30 @@
+// +build darwin linux netbsd openbsd solaris
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package console
+
+import (
+ "os"
+
+ "golang.org/x/sys/unix"
+)
+
+// openpt allocates a new pseudo-terminal by opening the /dev/ptmx device
+func openpt() (*os.File, error) {
+ return os.OpenFile("/dev/ptmx", unix.O_RDWR|unix.O_NOCTTY|unix.O_CLOEXEC, 0)
+}
diff --git a/vendor/github.com/containerd/console/tc_darwin.go b/vendor/github.com/containerd/console/tc_darwin.go
new file mode 100644
index 000000000..787154580
--- /dev/null
+++ b/vendor/github.com/containerd/console/tc_darwin.go
@@ -0,0 +1,44 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package console
+
+import (
+ "fmt"
+ "os"
+
+ "golang.org/x/sys/unix"
+)
+
+const (
+ cmdTcGet = unix.TIOCGETA
+ cmdTcSet = unix.TIOCSETA
+)
+
+// unlockpt unlocks the slave pseudoterminal device corresponding to the master pseudoterminal referred to by f.
+// unlockpt should be called before opening the slave side of a pty.
+func unlockpt(f *os.File) error {
+ return unix.IoctlSetPointerInt(int(f.Fd()), unix.TIOCPTYUNLK, 0)
+}
+
+// ptsname retrieves the name of the first available pts for the given master.
+func ptsname(f *os.File) (string, error) {
+ n, err := unix.IoctlGetInt(int(f.Fd()), unix.TIOCPTYGNAME)
+ if err != nil {
+ return "", err
+ }
+ return fmt.Sprintf("/dev/pts/%d", n), nil
+}
diff --git a/vendor/github.com/containerd/console/tc_freebsd_cgo.go b/vendor/github.com/containerd/console/tc_freebsd_cgo.go
new file mode 100644
index 000000000..0f3d27273
--- /dev/null
+++ b/vendor/github.com/containerd/console/tc_freebsd_cgo.go
@@ -0,0 +1,57 @@
+// +build freebsd,cgo
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package console
+
+import (
+ "fmt"
+ "os"
+
+ "golang.org/x/sys/unix"
+)
+
+/*
+#include <stdlib.h>
+#include <unistd.h>
+*/
+import "C"
+
+const (
+ cmdTcGet = unix.TIOCGETA
+ cmdTcSet = unix.TIOCSETA
+)
+
+// unlockpt unlocks the slave pseudoterminal device corresponding to the master pseudoterminal referred to by f.
+// unlockpt should be called before opening the slave side of a pty.
+func unlockpt(f *os.File) error {
+ fd := C.int(f.Fd())
+ if _, err := C.unlockpt(fd); err != nil {
+ C.close(fd)
+ return fmt.Errorf("unlockpt: %w", err)
+ }
+ return nil
+}
+
+// ptsname retrieves the name of the first available pts for the given master.
+func ptsname(f *os.File) (string, error) {
+ n, err := unix.IoctlGetInt(int(f.Fd()), unix.TIOCGPTN)
+ if err != nil {
+ return "", err
+ }
+ return fmt.Sprintf("/dev/pts/%d", n), nil
+}
diff --git a/vendor/github.com/containerd/console/tc_freebsd_nocgo.go b/vendor/github.com/containerd/console/tc_freebsd_nocgo.go
new file mode 100644
index 000000000..087fc158a
--- /dev/null
+++ b/vendor/github.com/containerd/console/tc_freebsd_nocgo.go
@@ -0,0 +1,55 @@
+// +build freebsd,!cgo
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package console
+
+import (
+ "fmt"
+ "os"
+
+ "golang.org/x/sys/unix"
+)
+
+const (
+ cmdTcGet = unix.TIOCGETA
+ cmdTcSet = unix.TIOCSETA
+)
+
+//
+// Implementing the functions below requires cgo support. Non-cgo stubs
+// versions are defined below to enable cross-compilation of source code
+// that depends on these functions, but the resultant cross-compiled
+// binaries cannot actually be used. If the stub function(s) below are
+// actually invoked they will display an error message and cause the
+// calling process to exit.
+//
+
+// unlockpt unlocks the slave pseudoterminal device corresponding to the master pseudoterminal referred to by f.
+// unlockpt should be called before opening the slave side of a pty.
+func unlockpt(f *os.File) error {
+ panic("unlockpt() support requires cgo.")
+}
+
+// ptsname retrieves the name of the first available pts for the given master.
+func ptsname(f *os.File) (string, error) {
+ n, err := unix.IoctlGetInt(int(f.Fd()), unix.TIOCGPTN)
+ if err != nil {
+ return "", err
+ }
+ return fmt.Sprintf("/dev/pts/%d", n), nil
+}
diff --git a/vendor/github.com/containerd/console/tc_linux.go b/vendor/github.com/containerd/console/tc_linux.go
new file mode 100644
index 000000000..7d552ea4b
--- /dev/null
+++ b/vendor/github.com/containerd/console/tc_linux.go
@@ -0,0 +1,51 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package console
+
+import (
+ "fmt"
+ "os"
+ "unsafe"
+
+ "golang.org/x/sys/unix"
+)
+
+const (
+ cmdTcGet = unix.TCGETS
+ cmdTcSet = unix.TCSETS
+)
+
+// unlockpt unlocks the slave pseudoterminal device corresponding to the master pseudoterminal referred to by f.
+// unlockpt should be called before opening the slave side of a pty.
+func unlockpt(f *os.File) error {
+ var u int32
+ // XXX do not use unix.IoctlSetPointerInt here, see commit dbd69c59b81.
+ if _, _, err := unix.Syscall(unix.SYS_IOCTL, f.Fd(), unix.TIOCSPTLCK, uintptr(unsafe.Pointer(&u))); err != 0 {
+ return err
+ }
+ return nil
+}
+
+// ptsname retrieves the name of the first available pts for the given master.
+func ptsname(f *os.File) (string, error) {
+ var u uint32
+ // XXX do not use unix.IoctlGetInt here, see commit dbd69c59b81.
+ if _, _, err := unix.Syscall(unix.SYS_IOCTL, f.Fd(), unix.TIOCGPTN, uintptr(unsafe.Pointer(&u))); err != 0 {
+ return "", err
+ }
+ return fmt.Sprintf("/dev/pts/%d", u), nil
+}
diff --git a/vendor/github.com/containerd/console/tc_netbsd.go b/vendor/github.com/containerd/console/tc_netbsd.go
new file mode 100644
index 000000000..71227aefd
--- /dev/null
+++ b/vendor/github.com/containerd/console/tc_netbsd.go
@@ -0,0 +1,45 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package console
+
+import (
+ "bytes"
+ "os"
+
+ "golang.org/x/sys/unix"
+)
+
+const (
+ cmdTcGet = unix.TIOCGETA
+ cmdTcSet = unix.TIOCSETA
+)
+
+// unlockpt unlocks the slave pseudoterminal device corresponding to the master pseudoterminal referred to by f.
+// unlockpt should be called before opening the slave side of a pty.
+// This does not exist on NetBSD, it does not allocate controlling terminals on open
+func unlockpt(f *os.File) error {
+ return nil
+}
+
+// ptsname retrieves the name of the first available pts for the given master.
+func ptsname(f *os.File) (string, error) {
+ ptm, err := unix.IoctlGetPtmget(int(f.Fd()), unix.TIOCPTSNAME)
+ if err != nil {
+ return "", err
+ }
+ return string(ptm.Sn[:bytes.IndexByte(ptm.Sn[:], 0)]), nil
+}
diff --git a/vendor/github.com/containerd/console/tc_openbsd_cgo.go b/vendor/github.com/containerd/console/tc_openbsd_cgo.go
new file mode 100644
index 000000000..f0cec06a7
--- /dev/null
+++ b/vendor/github.com/containerd/console/tc_openbsd_cgo.go
@@ -0,0 +1,51 @@
+// +build openbsd,cgo
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package console
+
+import (
+ "os"
+
+ "golang.org/x/sys/unix"
+)
+
+//#include <stdlib.h>
+import "C"
+
+const (
+ cmdTcGet = unix.TIOCGETA
+ cmdTcSet = unix.TIOCSETA
+)
+
+// ptsname retrieves the name of the first available pts for the given master.
+func ptsname(f *os.File) (string, error) {
+ ptspath, err := C.ptsname(C.int(f.Fd()))
+ if err != nil {
+ return "", err
+ }
+ return C.GoString(ptspath), nil
+}
+
+// unlockpt unlocks the slave pseudoterminal device corresponding to the master pseudoterminal referred to by f.
+// unlockpt should be called before opening the slave side of a pty.
+func unlockpt(f *os.File) error {
+ if _, err := C.grantpt(C.int(f.Fd())); err != nil {
+ return err
+ }
+ return nil
+}
diff --git a/vendor/github.com/containerd/console/tc_openbsd_nocgo.go b/vendor/github.com/containerd/console/tc_openbsd_nocgo.go
new file mode 100644
index 000000000..daccce205
--- /dev/null
+++ b/vendor/github.com/containerd/console/tc_openbsd_nocgo.go
@@ -0,0 +1,47 @@
+// +build openbsd,!cgo
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+//
+// Implementing the functions below requires cgo support. Non-cgo stubs
+// versions are defined below to enable cross-compilation of source code
+// that depends on these functions, but the resultant cross-compiled
+// binaries cannot actually be used. If the stub function(s) below are
+// actually invoked they will display an error message and cause the
+// calling process to exit.
+//
+
+package console
+
+import (
+ "os"
+
+ "golang.org/x/sys/unix"
+)
+
+const (
+ cmdTcGet = unix.TIOCGETA
+ cmdTcSet = unix.TIOCSETA
+)
+
+func ptsname(f *os.File) (string, error) {
+ panic("ptsname() support requires cgo.")
+}
+
+func unlockpt(f *os.File) error {
+ panic("unlockpt() support requires cgo.")
+}
diff --git a/vendor/github.com/containerd/console/tc_solaris_cgo.go b/vendor/github.com/containerd/console/tc_solaris_cgo.go
new file mode 100644
index 000000000..e36a68edd
--- /dev/null
+++ b/vendor/github.com/containerd/console/tc_solaris_cgo.go
@@ -0,0 +1,51 @@
+// +build solaris,cgo
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package console
+
+import (
+ "os"
+
+ "golang.org/x/sys/unix"
+)
+
+//#include <stdlib.h>
+import "C"
+
+const (
+ cmdTcGet = unix.TCGETS
+ cmdTcSet = unix.TCSETS
+)
+
+// ptsname retrieves the name of the first available pts for the given master.
+func ptsname(f *os.File) (string, error) {
+ ptspath, err := C.ptsname(C.int(f.Fd()))
+ if err != nil {
+ return "", err
+ }
+ return C.GoString(ptspath), nil
+}
+
+// unlockpt unlocks the slave pseudoterminal device corresponding to the master pseudoterminal referred to by f.
+// unlockpt should be called before opening the slave side of a pty.
+func unlockpt(f *os.File) error {
+ if _, err := C.grantpt(C.int(f.Fd())); err != nil {
+ return err
+ }
+ return nil
+}
diff --git a/vendor/github.com/containerd/console/tc_solaris_nocgo.go b/vendor/github.com/containerd/console/tc_solaris_nocgo.go
new file mode 100644
index 000000000..eb0bd2c36
--- /dev/null
+++ b/vendor/github.com/containerd/console/tc_solaris_nocgo.go
@@ -0,0 +1,47 @@
+// +build solaris,!cgo
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+//
+// Implementing the functions below requires cgo support. Non-cgo stubs
+// versions are defined below to enable cross-compilation of source code
+// that depends on these functions, but the resultant cross-compiled
+// binaries cannot actually be used. If the stub function(s) below are
+// actually invoked they will display an error message and cause the
+// calling process to exit.
+//
+
+package console
+
+import (
+ "os"
+
+ "golang.org/x/sys/unix"
+)
+
+const (
+ cmdTcGet = unix.TCGETS
+ cmdTcSet = unix.TCSETS
+)
+
+func ptsname(f *os.File) (string, error) {
+ panic("ptsname() support requires cgo.")
+}
+
+func unlockpt(f *os.File) error {
+ panic("unlockpt() support requires cgo.")
+}
diff --git a/vendor/github.com/containerd/console/tc_unix.go b/vendor/github.com/containerd/console/tc_unix.go
new file mode 100644
index 000000000..a6bf01e8d
--- /dev/null
+++ b/vendor/github.com/containerd/console/tc_unix.go
@@ -0,0 +1,91 @@
+// +build darwin freebsd linux netbsd openbsd solaris zos
+
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package console
+
+import (
+ "golang.org/x/sys/unix"
+)
+
+func tcget(fd uintptr, p *unix.Termios) error {
+ termios, err := unix.IoctlGetTermios(int(fd), cmdTcGet)
+ if err != nil {
+ return err
+ }
+ *p = *termios
+ return nil
+}
+
+func tcset(fd uintptr, p *unix.Termios) error {
+ return unix.IoctlSetTermios(int(fd), cmdTcSet, p)
+}
+
+func tcgwinsz(fd uintptr) (WinSize, error) {
+ var ws WinSize
+
+ uws, err := unix.IoctlGetWinsize(int(fd), unix.TIOCGWINSZ)
+ if err != nil {
+ return ws, err
+ }
+
+ // Translate from unix.Winsize to console.WinSize
+ ws.Height = uws.Row
+ ws.Width = uws.Col
+ ws.x = uws.Xpixel
+ ws.y = uws.Ypixel
+ return ws, nil
+}
+
+func tcswinsz(fd uintptr, ws WinSize) error {
+ // Translate from console.WinSize to unix.Winsize
+
+ var uws unix.Winsize
+ uws.Row = ws.Height
+ uws.Col = ws.Width
+ uws.Xpixel = ws.x
+ uws.Ypixel = ws.y
+
+ return unix.IoctlSetWinsize(int(fd), unix.TIOCSWINSZ, &uws)
+}
+
+func setONLCR(fd uintptr, enable bool) error {
+ var termios unix.Termios
+ if err := tcget(fd, &termios); err != nil {
+ return err
+ }
+ if enable {
+ // Set +onlcr so we can act like a real terminal
+ termios.Oflag |= unix.ONLCR
+ } else {
+ // Set -onlcr so we don't have to deal with \r.
+ termios.Oflag &^= unix.ONLCR
+ }
+ return tcset(fd, &termios)
+}
+
+func cfmakeraw(t unix.Termios) unix.Termios {
+ t.Iflag &^= (unix.IGNBRK | unix.BRKINT | unix.PARMRK | unix.ISTRIP | unix.INLCR | unix.IGNCR | unix.ICRNL | unix.IXON)
+ t.Oflag &^= unix.OPOST
+ t.Lflag &^= (unix.ECHO | unix.ECHONL | unix.ICANON | unix.ISIG | unix.IEXTEN)
+ t.Cflag &^= (unix.CSIZE | unix.PARENB)
+ t.Cflag |= unix.CS8
+ t.Cc[unix.VMIN] = 1
+ t.Cc[unix.VTIME] = 0
+
+ return t
+}
diff --git a/vendor/github.com/containerd/console/tc_zos.go b/vendor/github.com/containerd/console/tc_zos.go
new file mode 100644
index 000000000..4262eaf4c
--- /dev/null
+++ b/vendor/github.com/containerd/console/tc_zos.go
@@ -0,0 +1,26 @@
+/*
+ Copyright The containerd Authors.
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+*/
+
+package console
+
+import (
+ "golang.org/x/sys/unix"
+)
+
+const (
+ cmdTcGet = unix.TCGETS
+ cmdTcSet = unix.TCSETS
+)
diff --git a/vendor/github.com/containerd/containerd/.gitattributes b/vendor/github.com/containerd/containerd/.gitattributes
new file mode 100644
index 000000000..a0717e4b3
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/.gitattributes
@@ -0,0 +1 @@
+*.go text eol=lf
\ No newline at end of file
diff --git a/vendor/github.com/containerd/containerd/.gitignore b/vendor/github.com/containerd/containerd/.gitignore
new file mode 100644
index 000000000..73ba2c685
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/.gitignore
@@ -0,0 +1,10 @@
+/bin/
+/man/
+coverage.txt
+profile.out
+containerd.test
+_site/
+releases/*.tar.gz
+releases/*.tar.gz.sha256sum
+_output/
+.vagrant/
diff --git a/vendor/github.com/containerd/containerd/.golangci.yml b/vendor/github.com/containerd/containerd/.golangci.yml
new file mode 100644
index 000000000..4bf84599d
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/.golangci.yml
@@ -0,0 +1,27 @@
+linters:
+ enable:
+ - structcheck
+ - varcheck
+ - staticcheck
+ - unconvert
+ - gofmt
+ - goimports
+ - revive
+ - ineffassign
+ - vet
+ - unused
+ - misspell
+ disable:
+ - errcheck
+
+issues:
+ include:
+ - EXC0002
+
+run:
+ timeout: 8m
+ skip-dirs:
+ - api
+ - design
+ - docs
+ - docs/man
diff --git a/vendor/github.com/containerd/containerd/.mailmap b/vendor/github.com/containerd/containerd/.mailmap
new file mode 100644
index 000000000..11dcdc48c
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/.mailmap
@@ -0,0 +1,147 @@
+Abhinandan Prativadi
+Abhinandan Prativadi
+Ace-Tang
+Akihiro Suda
+Akihiro Suda
+Allen Sun
+Alexander Morozov
+Antonio Ojea
+Amit Krishnan
+Andrei Vagin
+Andrey Kolomentsev
+Arnaud Porterie
+Arnaud Porterie
+Bob Mader
+Boris Popovschi
+Bowen Yan
+Brent Baude
+Cao Zhihao
+Cao Zhihao
+Carlos Eduardo
+chenxiaoyu
+Cory Bennett
+Cristian Staretu
+Cristian Staretu
+Daniel Dao
+Derek McGowan
+Edgar Lee
+Eric Ernst
+Eric Ren
+Eric Ren
+Eric Ren
+Fabiano Fidêncio
+Fahed Dorgaa
+Frank Yang
+Fupan Li
+Fupan Li
+Fupan Li
+Furkan Türkal
+Georgia Panoutsakopoulou
+Guangming Wang
+Haiyan Meng
+haoyun
+Harry Zhang
+Hu Shuai
+Hu Shuai
+Iceber Gu
+Jaana Burcu Dogan
+Jess Valarezo
+Jess Valarezo
+Jian Liao
+Jian Liao
+Ji'an Liu
+Jie Zhang
+John Howard
+John Howard
+John Howard
+John Howard
+Lorenz Brun
+Luc Perkins
+Jiajun Jiang
+Julien Balestra
+Jun Lin Chen <1913688+mc256@users.noreply.github.com>
+Justin Cormack
+Justin Terry
+Justin Terry
+Kante
+Kenfe-Mickaël Laventure
+Kevin Kern
+Kevin Parsons
+Kevin Xu
+Kitt Hsu
+Kohei Tokunaga
+Krasi Georgiev
+Lantao Liu
+Lantao Liu
+Li Yuxuan
+Lifubang
+Lu Jingxiao
+Maksym Pavlenko <865334+mxpv@users.noreply.github.com>
+Maksym Pavlenko
+Maksym Pavlenko
+Mario Hros
+Mario Hros
+Mario Macias
+Mark Gordon
+Marvin Giessing
+Michael Crosby
+Michael Katsoulis
+Mike Brown
+Mohammad Asif Siddiqui
+Nabeel Rana
+Ng Yang
+Ning Li
+ningmingxiao
+Nishchay Kumar
+Oliver Stenbom
+Phil Estes
+Phil Estes
+Reid Li
+Robin Winkelewski
+Ross Boucher
+Ruediger Maass
+Rui Cao
+Sakeven Jiang
+Samuel Karp
+Samuel Karp
+Seth Pellegrino <30441101+sethp-nr@users.noreply.github.com>
+Shaobao Feng
+Shengbo Song
+Shengjing Zhu
+Siddharth Yadav
+SiYu Zhao
+Stefan Berger
+Stefan Berger
+Stephen J Day
+Stephen J Day
+Stephen J Day
+Sudeesh John
+Su Fei
+Su Xiaolin
+Takumasa Sakao
+Ted Yu
+Tõnis Tiigi
+Wade Lee
+Wade Lee
+Wade Lee <21621232@zju.edu.cn>
+Wang Bing
+wanglei
+wanglei
+wangzhan
+Wei Fu
+Wei Fu
+Xiaodong Zhang
+Xuean Yan
+Yang Yang
+Yue Zhang
+Yuxing Liu
+Zhang Wei
+zhangyadong
+Zhenguang Zhu
+Zhiyu Li
+Zhiyu Li <404977848@qq.com>
+Zhongming Chang
+Zhoulin Xie
+Zhoulin Xie <42261994+JoeWrightss@users.noreply.github.com>
+zounengren
+张潇
diff --git a/vendor/github.com/containerd/containerd/ADOPTERS.md b/vendor/github.com/containerd/containerd/ADOPTERS.md
new file mode 100644
index 000000000..bbf99e7dd
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/ADOPTERS.md
@@ -0,0 +1,58 @@
+## containerd Adopters
+
+A non-exhaustive list of containerd adopters is provided below.
+
+**_Docker/Moby engine_** - Containerd began life prior to its CNCF adoption as a lower-layer
+runtime manager for `runc` processes below the Docker engine. Continuing today, containerd
+has extremely broad production usage as a component of the [Docker engine](https://github.com/docker/docker-ce)
+stack. Note that this includes any use of the open source [Moby engine project](https://github.com/moby/moby);
+including the Balena project listed below.
+
+**_[IBM Cloud Kubernetes Service (IKS)](https://www.ibm.com/cloud/container-service)_** - offers containerd as the CRI runtime for v1.11 and higher versions.
+
+**_[IBM Cloud Private (ICP)](https://www.ibm.com/cloud/private)_** - IBM's on-premises cloud offering has containerd as a "tech preview" CRI runtime for the Kubernetes offered within this product for the past two releases, and plans to fully migrate to containerd in a future release.
+
+**_[Google Cloud Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine/)_** - containerd has been offered in GKE since version 1.14 and has been the default runtime since version 1.19. It is also the only supported runtime for GKE Autopilot from the launch. [More details](https://cloud.google.com/kubernetes-engine/docs/concepts/using-containerd)
+
+**_[AWS Fargate](https://aws.amazon.com/fargate)_** - uses containerd + Firecracker (noted below) as the runtime and isolation technology for containers run in the Fargate platform. Fargate is a serverless, container-native compute offering from Amazon Web Services.
+
+**_[Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/eks/)_** - EKS optionally offers containerd as a CRI runtime starting with Kubernetes version 1.21. In Kubernetes 1.22 the default CRI runtime will be containerd.
+
+**_[Bottlerocket](https://aws.amazon.com/bottlerocket/)_** - Bottlerocket is a Linux distribution from Amazon Web Services purpose-built for containers using containerd as the core system runtime.
+
+**_Cloud Foundry_** - The [Guardian container manager](https://github.com/cloudfoundry/guardian) for CF has been using OCI runC directly with additional code from CF managing the container image and filesystem interactions, but have recently migrated to use containerd as a replacement for the extra code they had written around runC.
+
+**_Alibaba's PouchContainer_** - The Alibaba [PouchContainer](https://github.com/alibaba/pouch) project uses containerd as its runtime for a cloud native offering that has unique isolation and image distribution capabilities.
+
+**_Rancher's k3s project_** - Rancher Labs [k3s](https://github.com/rancher/k3s) is a lightweight Kubernetes distribution; in their words: "Easy to install, half the memory, all in a binary less than 40mb." k3s uses containerd as the embedded runtime for this popular lightweight Kubernetes variant.
+
+**_Rancher's Rio project_** - Rancher Labs [Rio](https://github.com/rancher/rio) project uses containerd as the runtime for a combined Kubernetes, Istio, and container "Cloud Native Container Distribution" platform.
+
+**_Eliot_** - The [Eliot](https://github.com/ernoaapa/eliot) container project for IoT device container management uses containerd as the runtime.
+
+**_Balena_** - Resin's [Balena](https://github.com/resin-os/balena) container engine, based on moby/moby but for edge, embedded, and IoT use cases, uses the containerd and runc stack in the same way that the Docker engine uses containerd.
+
+**_LinuxKit_** - the Moby project's [LinuxKit](https://github.com/linuxkit/linuxkit) for building secure, minimal Linux OS images in a container-native model uses containerd as the core runtime for system and service containers.
+
+**_BuildKit_** - The Moby project's [BuildKit](https://github.com/moby/buildkit) can use either runC or containerd as build execution backends for building container images. BuildKit support has also been built into the Docker engine in recent releases, making BuildKit provide the backend to the `docker build` command.
+
+**_[Azure Kubernetes Service (AKS)](https://azure.microsoft.com/services/kubernetes-service)_** - Microsoft's managed Kubernetes offering uses containerd for Linux nodes running v1.19 or greater. Containerd for Windows nodes is currently in public preview. [More Details](https://docs.microsoft.com/azure/aks/cluster-configuration#container-runtime-configuration)
+
+**_Amazon Firecracker_** - The AWS [Firecracker VMM project](http://firecracker-microvm.io/) has extended containerd with a new snapshotter and v2 shim to allow containerd to drive virtualized container processes via their VMM implementation. More details on their containerd integration are available in [their GitHub project](https://github.com/firecracker-microvm/firecracker-containerd).
+
+**_Kata Containers_** - The [Kata containers](https://katacontainers.io/) lightweight-virtualized container runtime project integrates with containerd via a custom v2 shim implementation that drives the Kata container runtime.
+
+**_D2iQ Konvoy_** - D2iQ Inc [Konvoy](https://d2iq.com/products/konvoy) product uses containerd as the container runtime for its Kubernetes distribution.
+
+**_Inclavare Containers_** - [Inclavare Containers](https://github.com/alibaba/inclavare-containers) is an innovation of container runtime with the novel approach for launching protected containers in hardware-assisted Trusted Execution Environment (TEE) technology, aka Enclave, which can prevent the untrusted entity, such as Cloud Service Provider (CSP), from accessing the sensitive and confidential assets in use.
+
+**_VMware TKG_** - [Tanzu Kubernetes Grid](https://tanzu.vmware.com/kubernetes-grid) VMware's Multicloud Kubernetes offering uses containerd as the default CRI runtime.
+
+**_VMware TCE_** - [Tanzu Community Edition](https://github.com/vmware-tanzu/community-edition) VMware's fully-featured, easy to manage, Kubernetes platform for learners and users. It is a freely available, community supported, and open source distribution of VMware Tanzu. It uses containerd as the default CRI runtime.
+
+**_[Talos Linux](https://www.talos.dev/)_** - Talos Linux is Linux designed for Kubernetes – secure, immutable, and minimal. Talos Linux is using containerd as the core system runtime and CRI implementation.
+
+**_Other Projects_** - While the above list provides a cross-section of well known uses of containerd, the simplicity and clear API layer for containerd has inspired many smaller projects around providing simple container management platforms. Several examples of building higher layer functionality on top of the containerd base have come from various containerd community participants:
+ - Michael Crosby's [boss](https://github.com/crosbymichael/boss) project,
+ - Evan Hazlett's [stellar](https://github.com/ehazlett/stellar) project,
+ - Paul Knopf's immutable Linux image builder project: [darch](https://github.com/godarch/darch).
diff --git a/vendor/github.com/containerd/containerd/BUILDING.md b/vendor/github.com/containerd/containerd/BUILDING.md
new file mode 100644
index 000000000..4f2196e6c
--- /dev/null
+++ b/vendor/github.com/containerd/containerd/BUILDING.md
@@ -0,0 +1,287 @@
+# Build containerd from source
+
+This guide is useful if you intend to contribute to containerd. Thanks for your
+effort. Every contribution is greatly appreciated.
+
+This doc includes:
+* [Build requirements](#build-requirements)
+* [Build the development environment](#build-the-development-environment)
+* [Build containerd](#build-containerd)
+* [Via docker container](#via-docker-container)
+* [Testing](#testing-containerd)
+
+## Build requirements
+
+To build the `containerd` daemon, and the `ctr` simple test client, the following build system dependencies are required:
+
+* Go 1.13.x or above except 1.14.x
+* Protoc 3.x compiler and headers (download at the [Google protobuf releases page](https://github.com/protocolbuffers/protobuf/releases))
+* Btrfs headers and libraries for your distribution. Note that building the btrfs driver can be disabled via the build tag `no_btrfs`, removing this dependency.
+
+## Build the development environment
+
+First you need to set up your Go development environment. You can follow the
+[How to Write Go Code](https://golang.org/doc/code.html) guide; once you are
+done, the `go` command should be available in your `PATH`.
+
+You need `git` to checkout the source code:
+
+```sh
+git clone https://github.com/containerd/containerd
+```
+
+For proper results, install the `protoc` release into `/usr/local` on your build system. For example, the following commands will download and install the 3.11.4 release for a 64-bit Linux host:
+
+```sh
+wget -c https://github.com/protocolbuffers/protobuf/releases/download/v3.11.4/protoc-3.11.4-linux-x86_64.zip
+sudo unzip protoc-3.11.4-linux-x86_64.zip -d /usr/local
+```
+
+`containerd` uses [Btrfs](https://en.wikipedia.org/wiki/Btrfs), which means you
+need to satisfy these dependencies on your system:
+
+* CentOS/Fedora: `yum install btrfs-progs-devel`
+* Debian/Ubuntu: `apt-get install btrfs-progs libbtrfs-dev`
+ * Debian(before Buster)/Ubuntu(before 19.10): `apt-get install btrfs-tools`
+
+At this point you are ready to build `containerd` yourself!
+
+## Runc
+
+Runc is the default container runtime used by `containerd` and is required to
+run containerd. While it is okay to download a `runc` binary and install that on
+the system, sometimes it is necessary to build runc directly when working with
+container runtime development. Make sure to follow the guidelines for versioning
+in [RUNC.md](/docs/RUNC.md) for the best results.
+
+## Build containerd
+
+`containerd` uses `make` to create a repeatable build flow, which means you
+can run:
+
+```sh
+cd containerd
+make
+```
+
+This is going to build all the project binaries in the `./bin/` directory.
+
+You can move them into your global path, `/usr/local/bin`, with:
+
+```sh
+sudo make install
+```
+
+The install prefix can be changed by passing the `PREFIX` variable (defaults
+to `/usr/local`).
+
+Note: if you set one of these vars, set them to the same values on all make stages
+(build as well as install).
+
+If you want to prepend an additional prefix on actual installation (eg. packaging or chroot install),
+you can pass it via `DESTDIR` variable:
+
+```sh
+sudo make install DESTDIR=/tmp/install-x973234/
+```
+
+The above command installs the `containerd` binary to `/tmp/install-x973234/usr/local/bin/containerd`.
+
+The current `DESTDIR` convention is supported since containerd v1.6.
+Older releases used `DESTDIR` for a different purpose that is similar to `PREFIX`.
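+
+As a quick illustration of how these two variables compose, the final install
+location is simply the concatenation `$DESTDIR$PREFIX/bin` (paths below mirror
+the example above and are illustrative only):
+
+```sh
+# Illustrative only: compute where `make install` places the daemon binary
+DESTDIR=/tmp/install-x973234
+PREFIX=/usr/local
+echo "${DESTDIR}${PREFIX}/bin/containerd"
+# prints /tmp/install-x973234/usr/local/bin/containerd
+```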
+
+
+When making any changes to the gRPC API, you can use the installed `protoc`
+compiler to regenerate the API generated code packages with:
+
+```sh
+make generate
+```
+
+> *Note*: Several build tags are currently available:
+> * `no_cri`: A build tag disables building Kubernetes [CRI](http://blog.kubernetes.io/2016/12/container-runtime-interface-cri-in-kubernetes.html) support into containerd.
+> See [here](https://github.com/containerd/cri-containerd#build-tags) for build tags of CRI plugin.
+> * snapshotters (alphabetical order)
+> * `no_aufs`: A build tag disables building the aufs snapshot driver.
+> * `no_btrfs`: A build tag disables building the Btrfs snapshot driver.
+> * `no_devmapper`: A build tag disables building the device mapper snapshot driver.
+> * `no_zfs`: A build tag disables building the ZFS snapshot driver.
+>
+> For example, adding `BUILDTAGS=no_btrfs` to your environment before calling the **binaries**
+> Makefile target will disable the btrfs driver within the containerd Go build.
+
+Vendoring of external imports uses the [Go Modules](https://golang.org/ref/mod#vendoring). You need
+to use the `go mod` command to modify the dependencies. After modification, you should run `go mod tidy`
+and `go mod vendor` to ensure the `go.mod`, `go.sum` files and `vendor` directory are up to date.
+Changes to these files should become a single commit for a PR which relies on vendored updates.
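+
+The same flow can be rehearsed on a scratch module (illustrative paths; in
+containerd you would run the `go mod` commands from the repository root after
+editing `go.mod`):
+
+```sh
+# Scratch module to rehearse the tidy + vendor flow (not part of containerd)
+mkdir -p /tmp/vendordemo && cd /tmp/vendordemo
+go mod init example.com/vendordemo
+cat > main.go <<'EOF'
+package main
+
+import "fmt"
+
+func main() { fmt.Println("ok") }
+EOF
+go mod tidy    # reconcile go.mod and go.sum with the source
+go mod vendor  # write any dependencies into ./vendor
+```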
+
+Please refer to [RUNC.md](/docs/RUNC.md) for the currently supported version of `runc` that is used by containerd.
+
+### Static binaries
+
+You can build static binaries by providing a few variables to `make`:
+
+```sh
+make STATIC=1
+```
+
+> *Note*:
+> - static build is discouraged
+> - static containerd binary does not support loading shared object plugins (`*.so`)
+> - static build binaries are not position-independent
+
+# Via Docker container
+
+The following instructions assume you are at the parent directory of containerd source directory.
+
+## Build containerd in a container
+
+You can build `containerd` via a Linux-based Docker container.
+You can build an image from this `Dockerfile`:
+
+```dockerfile
+FROM golang
+
+RUN apt-get update && \
+ apt-get install -y libbtrfs-dev
+```
+
+Let's suppose that you built an image called `containerd/build`. From the
+containerd source root directory you can run the following command:
+
+```sh
+docker run -it \
+ -v ${PWD}/containerd:/go/src/github.com/containerd/containerd \
+ -e GOPATH=/go \
+ -w /go/src/github.com/containerd/containerd containerd/build sh
+```
+
+This mounts the `containerd` repository in our Docker container.
+
+You are now ready to [build](#build-containerd):
+
+```sh
+make && make install
+```
+
+## Build containerd and runc in a container
+
+To have complete core container runtime, you will need both `containerd` and `runc`. It is possible to build both of these via Docker container.
+
+You can use `git` to checkout `runc`:
+
+```sh
+git clone https://github.com/opencontainers/runc
+```
+
+We can build an image from this `Dockerfile`:
+
+```dockerfile
+FROM golang
+
+RUN apt-get update && \
+ apt-get install -y libbtrfs-dev libseccomp-dev
+```
+
+In our Docker container we will build `runc`, which includes
+[seccomp](https://en.wikipedia.org/wiki/seccomp), [SELinux](https://en.wikipedia.org/wiki/Security-Enhanced_Linux),
+and [AppArmor](https://en.wikipedia.org/wiki/AppArmor) support. Seccomp support
+in runc requires `libseccomp-dev` as a dependency (AppArmor and SELinux support
+do not require external libraries at build time). Refer to [RUNC.md](docs/RUNC.md)
+in the docs directory for details about building runc, and to learn about
+supported versions of `runc` as used by containerd.
+
+Let's suppose you built an image called `containerd/build` from the above Dockerfile. You can run the following command:
+
+```sh
+docker run -it --privileged \
+ -v /var/lib/containerd \
+ -v ${PWD}/runc:/go/src/github.com/opencontainers/runc \
+ -v ${PWD}/containerd:/go/src/github.com/containerd/containerd \
+ -e GOPATH=/go \
+ -w /go/src/github.com/containerd/containerd containerd/build sh
+```
+
+This mounts both `runc` and `containerd` repositories in our Docker container.
+
+From within our Docker container let's build `containerd`:
+
+```sh
+cd /go/src/github.com/containerd/containerd
+make && make install
+```
+
+These binaries can be found in the `./bin` directory on your host.
+`make install` will move the binaries into your `$PATH`.
+
+Next, let's build `runc`:
+
+```sh
+cd /go/src/github.com/opencontainers/runc
+make && make install
+```
+
+For further details about building runc, refer to [RUNC.md](docs/RUNC.md) in the
+docs directory.
+
+When working with `ctr`, the simple test client we just built, don't forget to start the daemon!
+
+```sh
+containerd --config config.toml
+```
+
+# Testing containerd
+
+During the automated CI the unit tests and integration tests are run as part of the PR validation. As a developer you can run these tests locally by using any of the following `Makefile` targets:
+ - `make test`: run all non-integration tests that do not require `root` privileges
+ - `make root-test`: run all non-integration tests which require `root`
+ - `make integration`: run all tests, including integration tests and those which require `root`. `TESTFLAGS_PARALLEL` can be used to control parallelism. For example, `TESTFLAGS_PARALLEL=1 make integration` will lead to a non-parallel execution. The default value of `TESTFLAGS_PARALLEL` is **8**.
+
+To execute a specific test or set of tests you can use the `go test` capabilities
+without using the `Makefile` targets. The following examples show how to specify a test
+name and also how to use the flag directly against `go test` to run root-requiring tests.
+
+```sh
+# run the test