Adding e2e tests for shared vpc installs #1255

Open · wants to merge 1 commit into main
38 changes: 22 additions & 16 deletions scripts/ci-e2e.sh
@@ -102,6 +102,9 @@ EOF

# initialize a router and cloud NAT
init_networks() {
  # gcloud compute shared-vpc enable "$GCP_PROJECT"
Contributor Author commented:
These actions require more permissions. We either need to allow this permission, set up a specific project just for this e2e test, or use the same project (which won't be a shared VPC install). Any ideas @damdo?

Member commented:
Not sure what's best here. @cpanato, did this ever come up in the past?

Member commented:
If we need another project to run alongside this one, I would suggest we talk with sig-testing/sig-infra to see whether that is possible when running the Prow job.

  # gcloud compute shared-vpc associated-projects add "$GCP_SERVICE_PROJECT" --host-project "$GCP_PROJECT"

  if [[ ${GCP_NETWORK_NAME} != "default" ]]; then
    gcloud compute networks create --project "$GCP_PROJECT" "${GCP_NETWORK_NAME}" --subnet-mode auto --quiet
    gcloud compute firewall-rules create "${GCP_NETWORK_NAME}"-allow-http --project "$GCP_PROJECT" \
@@ -125,7 +128,6 @@ init_networks() {
    --nat-all-subnet-ip-ranges --auto-allocate-nat-external-ips
}
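For context on the commented-out commands above, the full shared VPC wiring would look roughly like the sketch below. It is not part of this PR: the IAM grant and the SERVICE_ACCOUNT_EMAIL variable are assumptions, and whether the CI service account may run these commands at all is exactly what the thread above is discussing.

# Hypothetical sketch, not part of this PR. Enabling shared VPC requires
# Shared VPC admin rights (roles/compute.xpnAdmin) on the host project,
# typically granted at the organization level.
gcloud compute shared-vpc enable "$GCP_PROJECT"
gcloud compute shared-vpc associated-projects add "$GCP_SERVICE_PROJECT" \
  --host-project "$GCP_PROJECT"
# The service project also needs permission to use the host network, e.g.
# roles/compute.networkUser for its service account (assumed variable):
gcloud projects add-iam-policy-binding "$GCP_PROJECT" \
  --member "serviceAccount:${SERVICE_ACCOUNT_EMAIL}" \
  --role roles/compute.networkUser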


cleanup() {
  # Force a cleanup of cluster api created resources using gcloud commands
  (gcloud compute forwarding-rules list --project "$GCP_PROJECT" | grep capg-e2e \
@@ -143,25 +145,29 @@ cleanup() {
  (gcloud compute instances list --project "$GCP_PROJECT" | grep capg-e2e \
    | awk '{print "gcloud compute instances delete --project '"$GCP_PROJECT"' --quiet " $1 " --zone " $2 "\n"}' \
    | bash) || true
  (gcloud compute instance-groups list --project "$GCP_PROJECT" | grep capg-e2e \
    | awk '{print "gcloud compute instance-groups unmanaged delete --project '"$GCP_PROJECT"' --quiet " $1 " --zone " $2 "\n"}' \
    | bash) || true
  (gcloud compute firewall-rules list --project "$GCP_PROJECT" | grep capg-e2e \
    | awk '{print "gcloud compute firewall-rules delete --project '"$GCP_PROJECT"' --quiet " $1 "\n"}' \
    | bash) || true
  (gcloud compute instance-groups list --project "$GCP_PROJECT" | grep capg-e2e \
    | awk '{print "gcloud compute instance-groups unmanaged delete --project '"$GCP_PROJECT"' --quiet " $1 " --zone " $2 "\n"}' \
    | bash) || true

  # cleanup the networks
  gcloud compute routers nats delete "${TEST_NAME}-mynat" --project="${GCP_PROJECT}" \
    --router-region="${GCP_REGION}" --router="${TEST_NAME}-myrouter" --quiet || true
  gcloud compute routers delete "${TEST_NAME}-myrouter" --project="${GCP_PROJECT}" \
    --region="${GCP_REGION}" --quiet || true

  if [[ ${GCP_NETWORK_NAME} != "default" ]]; then
    (gcloud compute firewall-rules list --project "$GCP_PROJECT" | grep "$GCP_NETWORK_NAME" \
      | awk '{print "gcloud compute firewall-rules delete --project '"$GCP_PROJECT"' --quiet " $1 "\n"}' \
      | bash) || true
    gcloud compute networks delete --project="${GCP_PROJECT}" \
      --quiet "${GCP_NETWORK_NAME}" || true
if [[ -n "${SKIP_INIT_NETWORK:-}" ]]; then
echo "Skipping GCP network deletion..."
else
# cleanup the networks
gcloud compute routers nats delete "${TEST_NAME}-mynat" --project="${GCP_PROJECT}" \
--router-region="${GCP_REGION}" --router="${TEST_NAME}-myrouter" --quiet || true
gcloud compute routers delete "${TEST_NAME}-myrouter" --project="${GCP_PROJECT}" \
--region="${GCP_REGION}" --quiet || true

if [[ ${GCP_NETWORK_NAME} != "default" ]]; then
(gcloud compute firewall-rules list --project "$GCP_PROJECT" | grep "$GCP_NETWORK_NAME" \
| awk '{print "gcloud compute firewall-rules delete --project '"$GCP_PROJECT"' --quiet " $1 "\n"}' \
| bash) || true
gcloud compute networks delete --project="${GCP_PROJECT}" \
--quiet "${GCP_NETWORK_NAME}" || true
fi
Contributor Author (@barbacbd) commented on Jun 6, 2024:
The GCP_SERVICE_PROJECT is where many of these resources, such as the image, will be created during a shared VPC install. Trying to think of a way to incorporate that into the cleanup here.
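One hypothetical way to do that, sketched below: repeat the same grep-and-delete loops against the service project, where a shared VPC install creates instances and instance groups. This is an illustration of the idea in the comment above, not code from this PR.

  # Hypothetical sketch (not part of this PR): clean up resources that a
  # shared VPC install creates in the service project.
  if [[ -n "${GCP_SERVICE_PROJECT:-}" ]]; then
    (gcloud compute instances list --project "$GCP_SERVICE_PROJECT" | grep capg-e2e \
      | awk '{print "gcloud compute instances delete --project '"$GCP_SERVICE_PROJECT"' --quiet " $1 " --zone " $2 "\n"}' \
      | bash) || true
    (gcloud compute instance-groups list --project "$GCP_SERVICE_PROJECT" | grep capg-e2e \
      | awk '{print "gcloud compute instance-groups unmanaged delete --project '"$GCP_SERVICE_PROJECT"' --quiet " $1 " --zone " $2 "\n"}' \
      | bash) || true
  fi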

  fi

  if [[ -n "${SKIP_INIT_IMAGE:-}" ]]; then
1 change: 1 addition & 0 deletions test/e2e/config/gcp-ci.yaml
@@ -73,6 +73,7 @@ providers:
      - sourcePath: "${PWD}/test/e2e/data/infrastructure-gcp/cluster-template-ci-gke-autopilot.yaml"
      - sourcePath: "${PWD}/test/e2e/data/infrastructure-gcp/cluster-template-ci-gke-custom-subnet.yaml"
      - sourcePath: "${PWD}/test/e2e/data/infrastructure-gcp/cluster-template-ci-with-internal-lb.yaml"
      - sourcePath: "${PWD}/test/e2e/data/infrastructure-gcp/cluster-template-ci-with-shared-vpc.yaml"

variables:
  KUBERNETES_VERSION: "${KUBERNETES_VERSION:-v1.29.0}"
121 changes: 121 additions & 0 deletions test/e2e/data/infrastructure-gcp/cluster-template-ci-with-shared-vpc.yaml
@@ -0,0 +1,121 @@
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: "${CLUSTER_NAME}"
  labels:
    cni: "${CLUSTER_NAME}-shared-vpc"
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: GCPCluster
    name: "${CLUSTER_NAME}"
  controlPlaneRef:
    kind: KubeadmControlPlane
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    name: "${CLUSTER_NAME}-control-plane"
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: GCPCluster
metadata:
  name: "${CLUSTER_NAME}"
spec:
  project: "${GCP_SERVICE_PROJECT}"
  region: "${GCP_REGION}"
  network:
    name: "${GCP_NETWORK_NAME}"
    hostProject: "${GCP_PROJECT}"
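    # The network above is owned by the host project (GCP_PROJECT), while the
    # cluster's compute resources land in the service project
    # (GCP_SERVICE_PROJECT); this project/hostProject split is what makes
    # the install a shared VPC one.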
---
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: "${CLUSTER_NAME}-control-plane"
spec:
  replicas: ${CONTROL_PLANE_MACHINE_COUNT}
  machineTemplate:
    infrastructureRef:
      kind: GCPMachineTemplate
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      name: "${CLUSTER_NAME}-control-plane"
  kubeadmConfigSpec:
    useExperimentalRetryJoin: true
    initConfiguration:
      nodeRegistration:
        name: '{{ ds.meta_data.local_hostname.split(".")[0] }}'
        kubeletExtraArgs:
          cloud-provider: gce
    clusterConfiguration:
      apiServer:
        timeoutForControlPlane: 20m
        extraArgs:
          cloud-provider: gce
      controllerManager:
        extraArgs:
          cloud-provider: gce
          allocate-node-cidrs: "false"
      kubernetesVersion: "${KUBERNETES_VERSION}"
    joinConfiguration:
      nodeRegistration:
        name: '{{ ds.meta_data.local_hostname.split(".")[0] }}'
        kubeletExtraArgs:
          cloud-provider: gce
  version: "${KUBERNETES_VERSION}"
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: GCPMachineTemplate
metadata:
  name: "${CLUSTER_NAME}-control-plane"
spec:
  template:
    spec:
      instanceType: "${GCP_CONTROL_PLANE_MACHINE_TYPE}"
      image: "${IMAGE_ID}"
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: "${CLUSTER_NAME}-md-0"
spec:
  clusterName: "${CLUSTER_NAME}"
  replicas: ${WORKER_MACHINE_COUNT}
  selector:
    matchLabels:
  template:
    spec:
      clusterName: "${CLUSTER_NAME}"
      version: "${KUBERNETES_VERSION}"
      bootstrap:
        configRef:
          name: "${CLUSTER_NAME}-md-0"
          apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
          kind: KubeadmConfigTemplate
      infrastructureRef:
        name: "${CLUSTER_NAME}-md-0"
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: GCPMachineTemplate
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: GCPMachineTemplate
metadata:
  name: "${CLUSTER_NAME}-md-0"
spec:
  template:
    spec:
      instanceType: "${GCP_NODE_MACHINE_TYPE}"
      image: "${IMAGE_ID}"
---
apiVersion: bootstrap.cluster.x-k8s.io/v1beta1
kind: KubeadmConfigTemplate
metadata:
  name: "${CLUSTER_NAME}-md-0"
spec:
  template:
    spec:
      joinConfiguration:
        nodeRegistration:
          name: '{{ ds.meta_data.local_hostname.split(".")[0] }}'
          kubeletExtraArgs:
            cloud-provider: gce
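As a usage note: inside the e2e framework this template is selected by flavor (see the Go test below), but a template like this can also be rendered standalone with clusterctl. A sketch, assuming the GCP_* and KUBERNETES_VERSION variables above are exported and a hypothetical cluster name:

clusterctl generate cluster my-shared-vpc-cluster \
  --from test/e2e/data/infrastructure-gcp/cluster-template-ci-with-shared-vpc.yaml \
  --kubernetes-version "${KUBERNETES_VERSION}" \
  --control-plane-machine-count 1 --worker-machine-count 1 | kubectl apply -f -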
24 changes: 24 additions & 0 deletions test/e2e/e2e_test.go
@@ -206,4 +206,28 @@ var _ = Describe("Workload cluster creation", func() {
			}, result)
		})
	})

	Context("Creating a control-plane cluster with a shared vpc", func() {
		It("Should create a cluster with 1 control-plane and 1 worker node where the network exists in a host project", func() {
			By("Creating a cluster where the host project shares network resources with the service project")
			clusterctl.ApplyClusterTemplateAndWait(ctx, clusterctl.ApplyClusterTemplateAndWaitInput{
				ClusterProxy: bootstrapClusterProxy,
				ConfigCluster: clusterctl.ConfigClusterInput{
					LogFolder:                clusterctlLogFolder,
					ClusterctlConfigPath:     clusterctlConfigPath,
					KubeconfigPath:           bootstrapClusterProxy.GetKubeconfigPath(),
					InfrastructureProvider:   clusterctl.DefaultInfrastructureProvider,
					Flavor:                   "ci-with-shared-vpc",
					Namespace:                namespace.Name,
					ClusterName:              clusterName,
					KubernetesVersion:        e2eConfig.GetVariable(KubernetesVersion),
					ControlPlaneMachineCount: ptr.To[int64](1),
					WorkerMachineCount:       ptr.To[int64](1),
				},
				WaitForClusterIntervals:      e2eConfig.GetIntervals(specName, "wait-cluster"),
				WaitForControlPlaneIntervals: e2eConfig.GetIntervals(specName, "wait-control-plane"),
				WaitForMachineDeployments:    e2eConfig.GetIntervals(specName, "wait-worker-nodes"),
			}, result)
		})
	})
})
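The Flavor field is what ties this spec to the new template: the e2e framework resolves the "ci-with-shared-vpc" flavor to the cluster-template-ci-with-shared-vpc.yaml file registered in gcp-ci.yaml above, substituting GCP_SERVICE_PROJECT and GCP_PROJECT into the GCPCluster's project and hostProject fields.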