index.json
[
{
"uri": "/development/backports/",
"title": "Backports",
"tags": [],
"description": "",
"content": "Fixes for serious issues or regressions affecting previous releases may be backported to the corresponding branches, to be included in the next release from that branch.\nRequesting a backport Backports can only be requested on fixes made against a later branch, or the devel branch. (This doesn’t mean that bugs can’t be fixed in older branches directly; but where relevant, they should first be fixed on devel.)\nTo request such a backport, identify the relevant pull request, and add the “backport” label to it. You should also add a comment to the pull request explaining why the backport is necessary, and which branch(es) are targeted. Issues should not be labeled, they are liable to be overlooked or lack a one-to-one mapping to a code fix.\nHandling backports Pending backports can be identified using this query, listing all non-archived pull requests with a “backport” label and without a “backport-handled” label.\nBackports should only be handled once the reference pull request is merged. This ensures that commit identifiers will remain stable during the backport process and for later history.\nStandalone pull requests Backporting a pull request (PR) is automated by running:\nmake LOCAL_BUILD=1 backport release=\u0026lt;release-branch\u0026gt; pr=\u0026lt;PR to cherry-pick\u0026gt;\nSince you are running with LOCAL_BUILD=1, ensure that Shipyard\u0026rsquo;s repo is checked out and updated alongside the project (../\u0026lt;project dir where running make backport\u0026gt;). The make target runs a script, backport.sh, originally developed by the Kubernetes community.\nThe script does the following:\n Cherry-picks the commits from the PR onto \u0026lt;remote branch\u0026gt;. Creates a PR on \u0026lt;release-branch\u0026gt; with the title Automated backport of \u0026lt;original PR number\u0026gt;: \u0026lt;original PR title\u0026gt;. Adds the backport-handled label to the original PR and the automated-backport label to the backported PR. The DRY_RUN environment variable can be set to skip creating the PR. When set, it leaves you in a branch containing the commits that were cherry-picked.\nMultiple PRs can be backported together by passing a comma-separated list of PR numbers, eg pr=630,631.\nThe script uses the following environment variables. Please change them according to your setup.\n UPSTREAM_REMOTE: the remote for the upstream repository. Defaults to origin. FORK_REMOTE: the remote for your forked repository. Defaults to GITHUB_USER. GITHUB_USER: needs to be set to your GitHub username. GITHUB_TOKEN: a personal GitHub token, with at least “read:org” and “repo” scopes. Pull requests requiring dependent backports Reviewing backports Backports need to go through the same review process as usual. The author of the original pull request should be added as a reviewer.\nChange requests on a backport should only concern changes arising from the specifics of backporting to the target release branch. Any other change which is deemed useful as a result of the review probably also applies to the original pull request and should result in an entirely new pull request, which might not be a backport candidate.\n"
},
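For illustration, a dry-run backport of the multi-PR example above might look like the following sketch; the release branch name is hypothetical, and the environment variables are the ones described in this entry. Setting DRY_RUN (here to 1) skips creating the pull request and leaves you on a local branch containing the cherry-picked commits.
export GITHUB_USER=<your-github-username>          # placeholder
export GITHUB_TOKEN=<token-with-read:org-and-repo>  # placeholder
DRY_RUN=1 make LOCAL_BUILD=1 backport release=release-0.14 pr=630,631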
{
"uri": "/development/website/style-guide/",
"title": "Docs Style Guide",
"tags": [],
"description": "",
"content": "Documentation Style Guide This guide is meant to help keep our documentation consistent and ease the contribution and review process.\nSubmariner follows the Kubernetes Documentation Style Guide wherever relevant. This is a Submariner-specific extension of those practices.\nSubmariner.io Word List A list of Submariner-specific terms and words to be used consistently across the site.\n Term Usage Admiral The project name Admiral should always be capitalized. Broker The design pattern component Broker should always be capitalized. ClusterSet The Kubernetes object ClusterSet proposed in KEP1645 should always be CamelCase and formatted in code style. Cluster set The words \u0026ldquo;cluster set\u0026rdquo; should be used as a term for a group of clusters, but not the proposed Kubernetes object. Coastguard The project name Coastguard should always be capitalized. Globalnet The feature name Globalnet is one word, and so should always be capitalized and should have a lowercase \u0026ldquo;n\u0026rdquo;. IPsec The protocol IPsec should follow the capitalization used by RFCs and popular sources. iptables The application iptables consistently uses all-lowercase. Follow their convention, but avoid starting a sentence with \u0026ldquo;iptables\u0026rdquo;. K8s The project nickname K8s should typically be expanded to \u0026ldquo;Kubernetes\u0026rdquo;. kind The tool kind consistently uses all-lowercase. Follow their convention, but avoid starting a sentence with \u0026ldquo;kind\u0026rdquo;. Lighthouse The project name Lighthouse should always be capitalized. Operator The design pattern Operator should always be capitalized. OpenStack The project name OpenStack should always be capitalized and camel-cased. Shipyard The project name Shipyard should always be capitalized. subctl The artifact subctl should not be capitalized and should be formatted in code style. Submariner The project name Submariner should always be capitalized. VXLAN The protocol VXLAN should always be all-capitalized. Pronunciation of \u0026ldquo;Submariner\u0026rdquo; Both the \u0026ldquo;Sub-mariner\u0026rdquo; (\u0026ldquo;Sub-MARE-en-er\u0026rdquo;, like the watch) and \u0026ldquo;Submarine-er\u0026rdquo; (\u0026ldquo;Sub-muh-REEN-er\u0026rdquo;, like the Navy job) pronunciations are okay.\nThe second option, \u0026ldquo;Submarine-er\u0026rdquo;, has historically been more common as Chris Kim (the initial creator) imagined the iconography of the project as related to submarine cables.\nDate Format Submariner follows ISO 8601 for date formats (YYYY-MM-DD or YYYY-MM).\nUse Versions, not \u0026ldquo;New\u0026rdquo; Avoid referring to things as \u0026ldquo;new\u0026rdquo;, as this will become out of date and require maintenance. Instead, document the versions that introduce or remove features:\n As of 0.12.0, a subctl image is provided \u0026hellip;\n Release Notes Formatting Follow the Kubernetes guidelines for writing good release notes.\nIn particular, note that release notes should be written in past tense.\n"
},
{
"uri": "/development/building-testing/ci-maintenance/",
"title": "CI/CD Maintenance",
"tags": [],
"description": "",
"content": "This page documents the maintenance of Submariner\u0026rsquo;s CI/CD for developers.\nCustom GitHub Actions We have built some custom GitHub Actions in Shipyard for project-internal use. They have dependencies on public GitHub Actions that need to be periodically updated.\n[~/go/src/submariner-io/shipyard/gh-actions]$ grep -rni uses e2e/action.yaml:77: uses: submariner-io/shipyard/gh-actions/restore-images@devel release-images/action.yaml:17: uses: docker/setup-qemu-action@e81a89b1732b9c48d79cd809d8d81d79c4647a18 release-images/action.yaml:19: uses: docker/setup-buildx-action@4b4e9c3e2d4531116a6f8ba8e71fc6e2cb6e6c8c cache-images/action.yaml:14: uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 restore-images/action.yaml:18: uses: actions/cache@88522ab9f39a2ea568f7027eddc7d8d8bc9d59c8 submariner-io/shipyard/gh-actions\nGitHub Actions All our projects use GitHub Actions. These include dependencies which should be regularly checked for updates. Dependabot should be used to submit PRs to keep all GitHub Actions up-to-date. Hash-based versions should always be used to ensure there are no changes without an update on our side.\nFor example, this GitHub Action dependency:\nsteps: - name: Check out the repository uses: actions/checkout@5a4ac9002d0be2fb38bd78e4b4dbde5606d7042f with: fetch-depth: 0 Would be updated by this Dependabot configuration:\n--- version: 2 updates: - package-ecosystem: github-actions directory: \u0026#39;/\u0026#39; schedule: interval: daily Dependabot will only submit updates when projects make releases. That may leave CI broken waiting on a release while a fix is available. If a project has a fix but has not made a release that includes it, we should manually update the SHA we consume to include the fix. In particular, some projects \u0026ldquo;release\u0026rdquo; fixes by moving a tag to a point in git history that includes the fix. They assume versioning like gaurav-nelson/github-action-markdown-link-check@v1. Again, we should always use SHA-based versions, not moveable references like tags, to help mitigate supply-chain attacks.\nKubernetes Versions The versions of Kubernetes tested in Submariner\u0026rsquo;s CI need to be updated for new Kubernetes releases.\nSubmariner\u0026rsquo;s policy is to support all versions upstream-Kubernetes supports and no EOL versions.\nThe versions that should be used in CI are described below.\n CI Kubernetes Version Notes Most CI Latest CI should run against the latest Kubernetes version by default. Full E2E Top/bottom of supported range Full E2E CI matrix should use the latest Kubernetes version and the oldest non-EOL Kubernetes version. Full Kubernetes Support All other non-latest supported versions Full E2E CI matrix with all non-EOL Kubernetes versions not tested in E2E-Full. Run periodically, on releases, or manually by adding the e2e-all-k8s label. Unsupported Kubernetes Cut-off Oldest working version E2E for the oldest Kubernetes version known to work with Submariner. This tests the cut-off version used by subctl to prevent installing Submariner in environments that are known to be unsupported. 
Shipyard Base Image Software In branches older than 0.16, some versions of software used by the Shipyard base image are maintained manually and should be periodically updated.\nENV LINT_VERSION=\u0026lt;version\u0026gt; \\ HELM_VERSION=\u0026lt;version\u0026gt; \\ KIND_VERSION=\u0026lt;version\u0026gt; \\ BUILDX_VERSION=\u0026lt;version\u0026gt; \\ GH_VERSION=\u0026lt;version\u0026gt; \\ YQ_VERSION=\u0026lt;version\u0026gt; submariner-io/shipyard/package/Dockerfile.shipyard-dapper-base\nShipyard Linting Image Software Some of the software used by Shipyard\u0026rsquo;s linting image is pinned to avoid unplanned changes in linting requirements, which can cause disruption. These versions should be periodically updated.\nENV MARKDOWNLINT_VERSION=0.33.0 \\ GITLINT_VERSION=0.19.1 submariner-io/shipyard/package/Dockerfile.shipyard-linting\n"
},
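As a sketch of the SHA-pinning practice described above, the commit a moveable tag currently points at can be looked up with git and then substituted into the workflow; the repository and tag are the example named in this entry.
# Print the SHA behind the moveable v1 tag so the workflow can pin it.
git ls-remote https://github.com/gaurav-nelson/github-action-markdown-link-check refs/tags/v1
# Then reference that SHA in the workflow instead of the tag, e.g.:
#   uses: gaurav-nelson/github-action-markdown-link-check@<resolved-sha>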
{
"uri": "/getting-started/quickstart/managed-kubernetes/gke/",
"title": "Google (GKE)",
"tags": [],
"description": "",
"content": "This quickstart guide covers deploying two Google Kubernetes Engine (GKE) clusters on Google Cloud Platform (GCP) and connecting them with Submariner and Service Discovery.\nThe guide assumes clusters have non-overlapping Pod and Service CIDRs. Globalnet can be used if overlapping CIDRs can\u0026rsquo;t be avoided.\n The guide assumes you have the gcloud binary installed and configured and a GCP account with billing enabled for the active project.\n Cluster Creation Create two identical Kubernetes clusters on GKE. For this guide, the following minimal configuration was used, however not everything is required (see the note part below).\ngcloud container clusters create \u0026#34;cluster-a\u0026#34; \\ --zone \u0026#34;europe-west3-a\u0026#34; \\ --enable-ip-alias \\ --cluster-ipv4-cidr \u0026#34;10.0.0.0/14\u0026#34; \\ --services-ipv4-cidr=\u0026#34;10.4.0.0/20\u0026#34; \\ --cluster-version \u0026#34;1.17.13-gke.2001\u0026#34; \\ --username \u0026#34;admin\u0026#34; \\ --machine-type \u0026#34;g1-small\u0026#34; \\ --image-type \u0026#34;UBUNTU\u0026#34; \\ --disk-type \u0026#34;pd-ssd\u0026#34; \\ --disk-size \u0026#34;15\u0026#34; \\ --num-nodes \u0026#34;3\u0026#34; \\ --network \u0026#34;default\u0026#34; gcloud container clusters create \u0026#34;cluster-b\u0026#34; \\ --zone \u0026#34;europe-west3-a\u0026#34; \\ --enable-ip-alias \\ --cluster-ipv4-cidr \u0026#34;10.8.0.0/14\u0026#34; \\ --services-ipv4-cidr=\u0026#34;10.12.0.0/20\u0026#34; \\ --cluster-version \u0026#34;1.17.13-gke.2001\u0026#34; \\ --username \u0026#34;admin\u0026#34; \\ --machine-type \u0026#34;g1-small\u0026#34; \\ --image-type \u0026#34;UBUNTU\u0026#34; \\ --disk-type \u0026#34;pd-ssd\u0026#34; \\ --disk-size \u0026#34;15\u0026#34; \\ --num-nodes \u0026#34;3\u0026#34; \\ --network \u0026#34;default\u0026#34; Make sure to use Kubernetes version 1.17 or higher, set by --cluster-version. The latest versions are listed in the GKE release notes.\n Prepare Clusters for Submariner The clusters need some changes in order for Submariner to successfully open the IPsec tunnel between them.\nPreparation: Node Configuration As of version 0.8 of Submariner (the current one while writing this), Google\u0026rsquo;s native CNI plugin is not directly supported. GKE clusters can be generated with Calico CNI instead, but this was not tested during this demo and therefore could hold surprises as well.\nSo as this guide uses Google\u0026rsquo;s native CNI plugin, configuration is needed for the eth0 interface of each node on every cluster. The used workaround deploys netshoot pods onto each node that configure the reverse path filtering. The scripts in this Github repository need to be executed in all clusters.\nwget https://raw.githubusercontent.com/sridhargaddam/k8sscripts/main/rp_filter_settings/update-rp-filter.sh wget https://raw.githubusercontent.com/sridhargaddam/k8sscripts/main/rp_filter_settings/configure-rp-filter.sh chmod +x update-rp-filter.sh chmod +x configure-rp-filter.sh gcloud container clusters get-credentials cluster-a --zone=\u0026#34;europe-west3-a\u0026#34; ./configure-rp-filter.sh gcloud container clusters get-credentials cluster-b --zone=\u0026#34;europe-west3-a\u0026#34; ./configure-rp-filter.sh Preparation: Firewall Configuration Submariner requires UDP ports 500, 4500, and 4800 to be open in both directions. Additionally the microservices\u0026rsquo; traffic needs to flow through the IPsec tunnel as TCP packets. 
Hence the TCP traffic has source and destination addresses originating in the participating clusters. Create those firewall rules on the GCP project. Use the same IP ranges as in the cluster creation steps above.\ngcloud compute firewall-rules create \u0026#34;allow-tcp-in\u0026#34; --allow=tcp \\ --direction=IN --source-ranges=10.12.0.0/20,10.8.0.0/14,10.4.0.0/20,10.0.0.0/14 gcloud compute firewall-rules create \u0026#34;allow-tcp-out\u0026#34; --allow=tcp --direction=OUT \\ --destination-ranges=10.12.0.0/20,10.8.0.0/14,10.4.0.0/20,10.0.0.0/14 gcloud compute firewall-rules create \u0026#34;udp-in-500\u0026#34; --allow=udp:500 --direction=IN gcloud compute firewall-rules create \u0026#34;udp-in-4500\u0026#34; --allow=udp:4500 --direction=IN gcloud compute firewall-rules create \u0026#34;udp-in-4800\u0026#34; --allow=udp:4800 --direction=IN gcloud compute firewall-rules create \u0026#34;udp-out-500\u0026#34; --allow=udp:500 --direction=OUT gcloud compute firewall-rules create \u0026#34;udp-out-4500\u0026#34; --allow=udp:4500 --direction=OUT gcloud compute firewall-rules create \u0026#34;udp-out-4800\u0026#34; --allow=udp:4800 --direction=OUT Preparation: Globalnet Submariner Globalnet internally creates a Service with external IPs for every exported Service and sets the ExternalIPs to the global IP assigned to the respective Service. By default, GKE clusters do not allow Services to be created with ExternalIPs, as it deploys the DenyServiceExternalIPs admission controller. If you are planning to install Submariner Globalnet, please make sure that you disable the admission controller by running the following command.\ngcloud container clusters update \u0026lt;cluster-name\u0026gt; --enable-service-externalips You can further restrict the usage of Services with external IPs to selected Service Accounts as documented in the Globalnet Prerequisites.\nAfter this, the clusters are finally ready for Submariner!\nDeploy Submariner Download the subctl binary and make it available on your PATH.\ncurl -Ls https://get.submariner.io | bash export PATH=$PATH:~/.local/bin echo export PATH=\\$PATH:~/.local/bin \u0026gt;\u0026gt; ~/.profile If you have Go and the source code, you can build and install subctl instead:\ncd go/src/submariner-io/subctl go build -o $GOBIN/subctl github.com/submariner-io/subctl/cmd (and ensure your go/bin directory is on your PATH).\nDeploy the Broker on cluster-a.\ngcloud container clusters get-credentials cluster-a --zone=\u0026#34;europe-west3-a\u0026#34; subctl deploy-broker The command will output a file named broker-info.subm to the directory it is run from, which will be used to setup the IPsec tunnel between clusters.\nVerify the Broker components are installed:\n$ kubectl get crds | grep submariner clusters.submariner.io endpoints.submariner.io gateways.submariner.io kubectl get crds --context cluster-a | grep multicluster serviceexports.multicluster.x-k8s.io serviceimports.multicluster.x-k8s.io $ kubectl get ns | grep submariner submariner-k8s-broker Now it is time to register every cluster in the future ClusterSet to the Broker.\nFirst join the Broker-hosting cluster itself to the Broker:\ngcloud container clusters get-credentials cluster-a --zone=\u0026#34;europe-west3-a\u0026#34; subctl join broker-info.subm --clusterid cluster-a --servicecidr 10.4.0.0/20 Submariner will figure out most required information on its own. The --clusterid and --servicecidr flags should be used to pass the same values as during the cluster creation steps above. 
You will also see a dialogue on the terminal that asks you to decide which of the three nodes will be the Gateway. Any node will work. It will be annotated with submariner.io/gateway: true.\nWhen a cluster is joined, the Submariner Operator is installed. It creates several components in the submariner-operator namespace:\n submariner-gateway DaemonSet, to open a gateway for the IPsec tunnel on one node submariner-routeagent DaemonSet, which runs on every worker node in order to route the internal traffic to the local gateway via VXLAN tunnels submariner-lighthouse-agent Deployment, which accesses the Kubernetes API server in the Broker cluster to exchange Service information with the Broker submariner-lighthouse-coredns Deployment, which - as an external DNS server - gets forwarded requests to the *.clusterset.local domain for cross-cluster communication by Kubernetes\u0026rsquo; internal DNS server Check the DaemonSets and Deployments with the following command:\n$ gcloud container clusters get-credentials cluster-a --zone=\u0026#34;europe-west3-a\u0026#34; $ kubectl get ds,deploy -n submariner-operator NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/submariner-gateway 1 1 1 1 1 submariner.io/gateway=true 5m29s daemonset.apps/submariner-routeagent 3 3 3 3 3 \u0026lt;none\u0026gt; 5m27s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/submariner-lighthouse-agent 1/1 1 1 5m28s deployment.apps/submariner-lighthouse-coredns 2/2 2 2 5m27s deployment.apps/submariner-operator 1/1 1 1 5m43s Now join the second cluster to the Broker:\ngcloud container clusters get-credentials cluster-b --zone=\u0026#34;europe-west3-a\u0026#34; subctl join broker-info.subm --clusterid cluster-b --servicecidr 10.12.0.0/20 Then verify connectivity and CIDR settings within the ClusterSet:\n$ gcloud container clusters get-credentials cluster-a --zone=\u0026#34;europe-west3-a\u0026#34; $ subctl show all CLUSTER ID ENDPOINT IP PUBLIC IP CABLE DRIVER TYPE cluster-a 10.156.0.8 34.107.75.239 libreswan local cluster-b 10.156.0.13 35.242.247.43 libreswan remote GATEWAY CLUSTER REMOTE IP CABLE DRIVER SUBNETS STATUS gke-cluster-b-default-pool-e2e7 cluster-b 10.156.0.13 libreswan 10.12.0.0/20, 10.8.0.0/14 connected NODE HA STATUS SUMMARY gke-cluster-a-default-pool-4e5f active All connections (1) are established COMPONENT REPOSITORY VERSION submariner quay.io/submariner 0.8.0-rc0 submariner-operator quay.io/submariner 0.8.0-rc0 service-discovery quay.io/submariner 0.8.0-rc0 $ gcloud container clusters get-credentials cluster-b --zone=\u0026#34;europe-west3-a\u0026#34; $ subctl show all CLUSTER ID ENDPOINT IP PUBLIC IP CABLE DRIVER TYPE cluster-b 10.156.0.13 35.242.247.43 libreswan local cluster-a 10.156.0.8 34.107.75.239 libreswan remote GATEWAY CLUSTER REMOTE IP CABLE DRIVER SUBNETS STATUS gke-cluster-a-default-pool-4e5f cluster-a 10.156.0.8 libreswan 10.4.0.0/20, 10.0.0.0/14 connected NODE HA STATUS SUMMARY gke-cluster-b-default-pool-e2e7 active All connections (1) are established COMPONENT REPOSITORY VERSION submariner quay.io/submariner 0.8.0-rc0 submariner-operator quay.io/submariner 0.8.0-rc0 service-discovery quay.io/submariner 0.8.0-rc0 Workaround for KubeDNS GKE uses KubeDNS by default for cluster-internal DNS queries. Submariner however only works with CoreDNS as of version 0.7. As a consequence, the *.clusterset.local domain stub needs to be added manually to KubeDNS. 
Query the ClusterIP of the submariner-lighthouse-coredns Service in cluster-a and cluster-b:\n$ gcloud container clusters get-credentials cluster-a --zone=\u0026#34;europe-west3-a\u0026#34; $ CLUSTER_IP=$(kubectl get svc submariner-lighthouse-coredns -n submariner-operator -o=custom-columns=ClusterIP:.spec.clusterIP | tail -n +2) $ cat \u0026lt;\u0026lt;EOF | kubectl apply -f - apiVersion: v1 kind: ConfigMap data: stubDomains: | {\u0026#34;clusterset.local\u0026#34;:[\u0026#34;$CLUSTER_IP\u0026#34;]} metadata: labels: addonmanager.kubernetes.io/mode: EnsureExists name: kube-dns namespace: kube-system EOF $ gcloud container clusters get-credentials cluster-b --zone=\u0026#34;europe-west3-a\u0026#34; $ CLUSTER_IP=$(kubectl get svc submariner-lighthouse-coredns -n submariner-operator -o=custom-columns=ClusterIP:.spec.clusterIP | tail -n +2) $ cat \u0026lt;\u0026lt;EOF | kubectl apply -f - apiVersion: v1 kind: ConfigMap data: stubDomains: | {\u0026#34;clusterset.local\u0026#34;:[\u0026#34;$CLUSTER_IP\u0026#34;]} metadata: labels: addonmanager.kubernetes.io/mode: EnsureExists name: kube-dns namespace: kube-system EOF Automated Verification This will perform automated verifications between the clusters.\nKUBECONFIG=cluster-a.yml gcloud container clusters get-credentials cluster-a --zone=\u0026#34;europe-west3-a\u0026#34; KUBECONFIG=cluster-b.yml gcloud container clusters get-credentials cluster-b --zone=\u0026#34;europe-west3-a\u0026#34; KUBECONFIG=cluster-a.yml:cluster-b.yml subctl verify --context cluster-a --tocontext cluster-b --only service-discovery,connectivity --verbose Reconfig after Node Restart If the GKE Nodes were at some point drained or deleted, the Submariner Pods needed to terminate. Once the Nodes are up again, remember to\n label one Node with kubectl label node \u0026lt;name\u0026gt; submariner.io/gateway=true in order for the Gateway to be deployed on this Node apply the Node Configuration workaround again change the applied KubeDNS workaround to reflect the current submariner-lighthouse-coredns IP. This makes Submariner functional again and work can be continued.\nClean Up When you\u0026rsquo;re done, delete your clusters:\ngcloud container clusters delete cluster-a --zone=\u0026#34;europe-west3-a\u0026#34; gcloud container clusters delete cluster-b --zone=\u0026#34;europe-west3-a\u0026#34; "
},
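A quick way to apply and check the post-restart steps listed above is sketched below; the node name is a placeholder, and the ClusterIP query is the same one used in the KubeDNS workaround.
# Re-label a gateway node after a restart and confirm the label is present.
kubectl label node <node-name> submariner.io/gateway=true
kubectl get nodes -l submariner.io/gateway=true
# Re-check the submariner-lighthouse-coredns ClusterIP referenced by the kube-dns stubDomains entry.
kubectl get svc submariner-lighthouse-coredns -n submariner-operator -o=custom-columns=ClusterIP:.spec.clusterIP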
{
"uri": "/development/shipyard/settings/",
"title": "Customizing Deployments",
"tags": [],
"description": "",
"content": "Shipyard supports specifying different settings for each deployed cluster. The settings are specified using a SETTINGS variable, which must be set in the Makefile of each consuming project.\nUsing Custom Settings Set the SETTINGS variable to deploy with your custom settings file (must be inside the project\u0026rsquo;s directory structure), e.g.:\nmake deploy SETTINGS=\u0026lt;path/to/settings\u0026gt;.yaml Deployment Settings File The settings are specified in a YAML file, where default and per cluster settings can be provided. All clusters are listed under the clusters key, and each cluster can have specific deployment settings. All cluster specific settings can be specified on the root of the settings file to determine defaults.\nThe possible settings are:\n Global settings: broker: Special key to mark the broker, set to anything to select a broker (defaults to the first cluster). cluster_count: Can be used to quickly deploy multiple clusters with an identical topology. clusters: Map of clusters to deploy. Each key is the cluster name and the values are cluster specific settings. Global and/or cluster specific: cni: Which CNI to deploy on the cluster, currently supports the default kind CNI (kindnet, used if no value is specified) and ovn. nodes: A space separated list of nodes to deploy, supported types are control-plane and worker. submariner: If Submariner should be deployed, set to true. Otherwise, leave unset (or set to false explicitly). gateways: Number of gateway nodes to deploy. Settings File Examples For example, a basic settings file that deploys a couple of clusters with the kind CNI:\nsubmariner: true nodes: control-plane worker worker clusters: cluster1: cluster2: The following settings file deploys two clusters with one control node and two workers, with OVN and Submariner. The third cluster will host the broker and as such needs no CNI, only a worker node, and no Submariner deployment:\ncni: ovn submariner: true nodes: control-plane worker worker clusters: cluster1: cluster2: cluster3: broker: true cni: submariner: false nodes: control-plane The following settings file deploys two clusters. As no gateways setting is specified either globally or for the first cluster specifically, the first cluster will get have a single gateway node by default. The second cluster will be deployed with one control node and three worker nodes, with two of the nodes labeled as gateway nodes.\nsubmariner: true nodes: control-plane worker clusters: cluster1: cluster2: nodes: control-plane worker worker worker gateways: 2 "
},
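Putting the pieces above together, a hypothetical workflow writes a settings file and passes it to make deploy; the file path is illustrative and must live inside the project tree, as noted above.
# Write a minimal two-cluster settings file and deploy with it.
cat > scripts/my-settings.yml <<EOF
submariner: true
nodes: control-plane worker worker
clusters:
  cluster1:
  cluster2:
EOF
make deploy SETTINGS=scripts/my-settings.yml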
{
"uri": "/development/building-testing/",
"title": "Building and Testing",
"tags": [],
"description": "",
"content": "Submariner strives to be an open, welcoming community. Substantial tooling is provided to ease the contribution experience.\nStandard Development Environment Submariner provides a standard, shared environment for both development and CI that is maintained in the Shipyard project.\nLearn more about working with Shipyard here.\nBuilding and Testing Submariner provides a set of Make targets for building and testing in the standard development environment.\nLinting To run all linting:\nmake lint There are also Make targets for each type of linting:\nmake gitlint golangci-lint markdownlint yamllint See the linter configuration files at the root of each repository for details about which checks are enabled.\nNote that a few linters only run in CI via GitHub Actions and are not available in the standard development environment.\nUnit Tests To run Go unit tests:\nmake unit Building To build the Go binaries provided by a repository:\nmake build To package those Go binaries into container images:\nmake images Note that Submariner will automatically rebuild binaries and images when they have been modified and are required by tests.\nTo prune all Submariner-provided images, ensuring they will be rebuilt or pulled the next time they’re required:\nmake prune-images If you\u0026rsquo;re using kind to test your changes, you can rebuild the images and reload them using a single command:\nmake reload-images The command can restart the pods in order for the new images to take effect. To restart all pods:\nmake reload-images restart=all To restart a specific pod, use the image name without the submariner- prefix, e.g.\nmake reload-images restart=gateway End-to-End Tests To run functional end-to-end tests with a full multi-cluster deployment:\nmake e2e Different types of deployments can be configured with using flags:\nmake e2e using=helm,globalnet The cable driver used to connect clusters can also be selected with using flags:\nmake e2e using=vxlan In order to deploy clusters with OVN Kubernetes, the following command can be used:\nmake e2e using=ovn See Shipyard\u0026rsquo;s Makefile.inc for the currently-supported using flags.\nAdditional Ginkgo flags can be passed using the TEST_ARGS flag.\nFor example, a subset of tests can be selected with Ginkgo\u0026rsquo;s focus flags:\nmake e2e TEST_ARGS=\u0026#39;--ginkgo.focus=dataplane\u0026#39; Alternatively, it\u0026rsquo;s possible to skip test(s) using Ginkgo\u0026rsquo;s skip flag:\nmake e2e TEST_ARGS=\u0026#39;--ginkgo.skip=dataplane\u0026#39; To create a multi-cluster deployment and install Submariner but not run tests:\nmake deploy To create a multi-cluster deployment without Submariner:\nmake clusters To clean up a multi-cluster deployment from one of the previous commands:\nmake clean-clusters Shell Session in Development Environment To jump into a shell in Submariner\u0026rsquo;s standard development environment:\nmake shell "
},
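As an illustrative sequence tying the targets above together (the using and TEST_ARGS values are examples taken from this entry and may be combined differently in practice):
make clusters                                              # kind clusters, no Submariner
make deploy using=vxlan                                    # deploy Submariner with the VXLAN cable driver
make e2e using=vxlan TEST_ARGS='--ginkgo.focus=dataplane'  # run a focused subset of the E2E suite
make clean-clusters                                        # tear everything down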
{
"uri": "/getting-started/quickstart/kind/",
"title": "Sandbox Environment (kind)",
"tags": [],
"description": "",
"content": "Deploy kind with Submariner Locally kind is a tool for running local Kubernetes clusters using Docker container nodes. This guide uses kind to demonstrate deployment and operation of Submariner in three Kubernetes clusters running locally on your computer.\nSubmariner provides automation to deploy clusters using kind and connect them using Submariner.\nPrerequisites Install Docker and ensure it is running properly on your computer. Install and set up kubectl. You may need to increase your inotify resource limits.\n Deploy Automatically To create kind clusters and deploy Submariner with service discovery enabled, run:\ngit clone https://github.com/submariner-io/submariner-operator cd submariner-operator make deploy using=lighthouse To deploy IPv4/IPv6 dual-stack Kubernetes clusters, set using=dual-stack.\nBy default, the automation configuration in the submariner-io/submariner-operator repository deploys two clusters, with cluster1 configured as the Broker. See the settings file for details.\nOnce you become familiar with Submariner\u0026rsquo;s basics, you may want to visit the Building and Testing page to learn more about customizing your Submariner development deployment. To understand how Submariner\u0026rsquo;s development deployment infrastructure works under the hood, see Deployment Customization in the Shipyard documentation.\nDeploy Manually If you wish to try out Submariner deployment manually, an easy option is to create kind clusters using our scripts and deploy Submariner with subctl.\nCreate kind Clusters To create kind clusters, run:\ngit clone https://github.com/submariner-io/submariner-operator cd submariner-operator make clusters Once the clusters are deployed, make clusters will indicate how to access them:\nYour virtual cluster(s) are deployed and working properly and can be accessed with: export KUBECONFIG=$(find $(git rev-parse --show-toplevel)/output/kubeconfigs/ -type f -printf %p:) $ kubectl config use-context cluster1 # or cluster2, cluster3.. To clean everthing up, just run: make clean-clusters The export KUBECONFIG command has to be run before kubectl can be used.\nmake clusters creates two Kubernetes clusters: cluster1 and cluster2. To see the list of kind clusters, use the following command:\n$ kind get clusters cluster1 cluster2 To list the local Kubernetes contexts, use the following command:\n$ kubectl config get-contexts CURRENT NAME CLUSTER AUTHINFO NAMESPACE cluster1 cluster1 cluster1 * cluster2 cluster2 cluster2 Since multiple clusters are running, you need to choose which cluster kubectl talks to. You can set a default cluster for kubectl by setting the current context in the Kubernetes kubeconfig file. 
Additionally, you can run the following command to set the current context for kubectl:\nkubectl config use-context \u0026lt;cluster name\u0026gt; For more information on interacting with kind, please refer to the kind documentation.\n Install subctl Download the subctl binary and make it available on your PATH.\ncurl -Ls https://get.submariner.io | bash export PATH=$PATH:~/.local/bin echo export PATH=\\$PATH:~/.local/bin \u0026gt;\u0026gt; ~/.profile If you have Go and the source code, you can build and install subctl instead:\ncd go/src/submariner-io/subctl go build -o $GOBIN/subctl github.com/submariner-io/subctl/cmd (and ensure your go/bin directory is on your PATH).\nUse cluster1 as Broker subctl deploy-broker --kubeconfig output/kubeconfigs/kind-config-cluster1 Join cluster1 and cluster2 to the Broker subctl join --kubeconfig output/kubeconfigs/kind-config-cluster1 broker-info.subm --clusterid cluster1 --natt=false subctl join --kubeconfig output/kubeconfigs/kind-config-cluster2 broker-info.subm --clusterid cluster2 --natt=false You now have a Submariner environment that you can experiment with.\nVerify Deployment Verify Automatically with subctl This will perform automated verifications between the clusters.\nexport KUBECONFIG=output/kubeconfigs/kind-config-cluster1:output/kubeconfigs/kind-config-cluster2 subctl verify --context cluster1 --tocontext cluster2 --only service-discovery,connectivity --verbose Verify Manually To manually verify the deployment, follow the steps below using either a headless or ClusterIP nginx service deployed in cluster2.\nDeploy ClusterIP Service kubectl --kubeconfig output/kubeconfigs/kind-config-cluster2 create deployment nginx --image=nginx kubectl --kubeconfig output/kubeconfigs/kind-config-cluster2 expose deployment nginx --port=80 subctl export service --kubeconfig output/kubeconfigs/kind-config-cluster2 --namespace default nginx Deploy Headless Service kubectl --kubeconfig output/kubeconfigs/kind-config-cluster2 create deployment nginx --image=nginx kubectl --kubeconfig output/kubeconfigs/kind-config-cluster2 expose deployment nginx --port=80 --cluster-ip=None subctl export service --kubeconfig output/kubeconfigs/kind-config-cluster2 --namespace default nginx Verify Run nettest from cluster1 to access the nginx service:\nkubectl --kubeconfig output/kubeconfigs/kind-config-cluster1 -n default run tmp-shell --rm -i --tty --image quay.io/submariner/nettest \\ -- /bin/bash curl nginx.default.svc.clusterset.local Cleanup When you are done experimenting and you want to delete the clusters deployed in any of the previous steps, use the following command:\nmake clean-clusters "
},
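If you deployed the headless variant above, individual backing Pods can also be resolved using the per-Pod DNS form described on the Architecture page; the Pod name below is a placeholder.
# Resolve and curl a specific Pod of the exported headless nginx service from cluster1.
kubectl --kubeconfig output/kubeconfigs/kind-config-cluster1 -n default run tmp-shell --rm -i \
  --image quay.io/submariner/nettest -- \
  curl <nginx-pod-name>.cluster2.nginx.default.svc.clusterset.local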
{
"uri": "/community/code-of-conduct/",
"title": "Code of Conduct",
"tags": [],
"description": "",
"content": "Submariner Community Code of Conduct Submariner follows the CNCF Code of Conduct.\nPlease report instances of abusive, harassing, or otherwise unacceptable behavior by contacting one or more of the Submariner Project Owners.\n"
},
{
"uri": "/operations/deployment/",
"title": "Deployment",
"tags": [],
"description": "",
"content": "Submariner is always deployed using a Go-based Kubernetes custom controller, called an Operator, that provides API-based installation and management. Deployment tools like the subctl command line utility and Helm charts wrap the Operator. The recommended deployment method is subctl, as it is currently the default in CI and provides diagnostic features.\nInstalling subctl Download the subctl binary and make it available on your PATH.\ncurl -Ls https://get.submariner.io | bash export PATH=$PATH:~/.local/bin echo export PATH=\\$PATH:~/.local/bin \u0026gt;\u0026gt; ~/.profile If you have Go and the source code, you can build and install subctl instead:\ncd go/src/submariner-io/subctl go build -o $GOBIN/subctl github.com/submariner-io/subctl/cmd (and ensure your go/bin directory is on your PATH).\nDeployment of the Broker The Broker is a set of Custom Resource Definitions (CRDs) backed by the Kubernetes datastore. The Broker must be deployed on a cluster whose Kubernetes API is accessible by all of the participating clusters.\nsubctl deploy-broker --kubeconfig \u0026lt;PATH-TO-KUBECONFIG-BROKER\u0026gt; This will create:\n The submariner-k8s-broker namespace. The Endpoint and Cluster CRDs in the cluster. A Service Account (SA) in the namespace for subsequent subctl access. It also generates the broker-info.subm file which contains the following elements:\n The API endpoint. A CA certificate for the API endpoint. The Service Account token for accessing the API endpoint. A random IPsec PSK which will be stored only in this file. Service Discovery settings. The cluster in which the Broker is deployed can also participate in the dataplane connectivity with other clusters, but it will need to be joined (see following step).\n You can customize the Broker namespace using the --broker-namespace flag, allowing you to use a namespace of your choice on the Broker for synchronising resources between clusters.\nsubctl deploy-broker --broker-namespace \u0026lt;CUSTOM-NAMESPACE\u0026gt; ... Reference the subctl deploy-broker flag docs for additional details.\nJoining clusters For each cluster you want to join, issue the following command:\nsubctl join --kubeconfig \u0026lt;PATH-TO-JOINING-CLUSTER\u0026gt; broker-info.subm --clusterid \u0026lt;ID\u0026gt; subctl will automatically discover as much as it can, and prompt the user for any missing necessary information. Note that each cluster must have a unique cluster ID; the cluster ID can be specified, or otherwise is going to be generated by default based on the cluster name in the kubeconfig file. The cluster ID must be a valid DNS-1123 Label. If the cluster name present in kubeconfig file isn\u0026rsquo;t valid, a valid cluster ID must be specified with --clusterid flag.\n"
},
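Putting the steps above together, a minimal two-cluster deployment might look like this sketch; the kubeconfig paths and cluster IDs are placeholders.
# Deploy the Broker on one cluster, then join both clusters to it.
subctl deploy-broker --kubeconfig <PATH-TO-KUBECONFIG-BROKER>
subctl join --kubeconfig <PATH-TO-KUBECONFIG-BROKER> broker-info.subm --clusterid cluster-a
subctl join --kubeconfig <PATH-TO-KUBECONFIG-OTHER> broker-info.subm --clusterid cluster-b
subctl show all --kubeconfig <PATH-TO-KUBECONFIG-OTHER>   # verify connections and CIDRs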
{
"uri": "/getting-started/architecture/",
"title": "Architecture",
"tags": [],
"description": "",
"content": "Submariner connects multiple Kubernetes clusters in a way that is secure and performant. Submariner flattens the networks between the connected clusters, and enables IP reachability between Pods and Services. Submariner also provides, via Lighthouse, service discovery capabilities. The service discovery model is built using the proposed Kubernetes Multi Cluster Services.\nSubmariner consists of several main components that work in conjunction to securely connect workloads across multiple Kubernetes clusters, both on-premises and on public clouds:\n Gateway Engine: manages the secure tunnels to other clusters. Route Agent: routes cross-cluster traffic from nodes to the active Gateway Engine. Broker: facilitates the exchange of metadata between Gateway Engines enabling them to discover one another. Service Discovery: provides DNS discovery of Services across clusters. Submariner has optional components that provide additional functionality:\n Globalnet Controller: handles interconnection of clusters with overlapping CIDRs. The diagram below illustrates the basic architecture of Submariner:\nTerminology and Concepts ClusterSet - a group of two or more clusters with a high degree of mutual trust that share Services amongst themselves. Within a cluster set, all namespaces with a given name are considered to be the same namespace.\n ServiceExport (CRD) - used to specify which Services should be exposed across all clusters in the cluster set. If multiple clusters export a Service with the same name and from the same namespace, they will be recognized as a single logical Service.\n ServiceExports must be explicitly created by the user in each cluster and within the namespace in which the underlying Service resides, in order to signify that the Service should be visible and discoverable to other clusters in the cluster set. The ServiceExport object can be created manually or via the subctl export command.\n When a Service is exported, it then becomes accessible as \u0026lt;service\u0026gt;.\u0026lt;ns\u0026gt;.svc.clusterset.local.\n For Headless Services, individual Pods can be accessed as \u0026lt;pod-name\u0026gt;.\u0026lt;cluster-id\u0026gt;.\u0026lt;svc-name\u0026gt;.\u0026lt;ns\u0026gt;.svc.clusterset.local. \u0026lt;cluster-id\u0026gt; must be a valid DNS-1123 Label\n ServiceImport (CRD) - representation of a multi-cluster Service in each cluster. Created and used internally by Lighthouse and does not require any user action.\n "
},
{
"uri": "/getting-started/",
"title": "Getting Started",
"tags": [],
"description": "",
"content": "Basic Overview Submariner consists of several main components that work in conjunction to securely connect workloads across multiple Kubernetes clusters. For more information about Submariner\u0026rsquo;s architecture, please refer to the Architecture section.\nThe Broker The Broker is an API that all participating clusters are given access to, and where two objects are exchanged via CRDs in .submariner.io:\n Cluster: defines a participating cluster and its IP CIDRs. Endpoint: defines a connection endpoint to a cluster, and the reachable cluster IPs from the endpoint. The Broker must be deployed on a single Kubernetes cluster. This cluster’s API server must be reachable by all Kubernetes clusters connected by Submariner. It can be a dedicated cluster, or one of the connected clusters.\nThe Submariner Deployment on a Cluster Once Submariner is deployed on a cluster with the proper credentials to the Broker it will exchange Cluster and Endpoint objects with other clusters (via push/pull/watching), and start forming connections and routes to other clusters.\nPrerequisites Submariner has a few requirements to get started:\n At least two Kubernetes clusters, one of which is designated to serve as the central Broker that is accessible by all of your connected clusters; this can be one of your connected clusters, or a dedicated cluster. The oldest tested Kubernetes version is 1.19. Older versions are known not to work with Submariner. Service discovery requires Kubernetes 1.21 or later. Non-overlapping Pod and Service CIDRs between clusters. This is to prevent routing conflicts. For cases where addresses do overlap, Globalnet can be set up. IP reachability between the gateway nodes. When connecting two clusters, the gateways must have at least one-way connectivity to each other on their public or private IP address and encapsulation port. This is needed for creating the tunnels between the clusters. The default encapsulation port is 4500/UDP, for NAT Traversal discovery port 4490/UDP is used. For clusters behind corporate firewalls that block the default ports, Submariner also supports NAT Traversal (NAT-T) with the option to set custom non-standard ports like 4501/UDP. Submariner uses UDP port 4800 to encapsulate Pod traffic from worker and master nodes to the Gateway nodes. This is required in order to preserve the source IP addresses of the Pods. Ensure that firewall configuration allows 4800/UDP across all nodes in the cluster in both directions. This is not a requirement when using OVN-Kubernetes CNI. If the gateway nodes are directly reachable over their private IPs without any NAT in between, ensure that firewall configuration allows ESP protocol on the gateway nodes. Worker node IPs on all connected clusters must be outside of the Pod/Service CIDR ranges. Submariner can be deployed on x86-64 and ARM64 nodes. (Submariner components are deployed on all nodes in the cluster, so all nodes must be x86-64 or ARM64.) An example of three clusters configured to use with Submariner (without Globalnet) would look like the following:\n Cluster Name Provider Pod CIDR Service CIDR Cluster Nodes CIDR broker AWS 10.42.0.0/16 10.43.0.0/16 192.168.1.0/24 west vSphere 10.0.0.0/16 10.1.0.0/16 192.168.1.0/24 east On-Prem 10.98.0.0/16 10.99.0.0/16 192.168.1.0/24 Support Matrix Submariner is designed to be cloud provider agnostic, and should run in any standard Kubernetes cluster. 
Submariner has been tested with the following network (CNI) Plugins:\n OpenShift-SDN Weave Flannel Canal Calico (see the Calico-specific deployment instructions) OVN - Requires OVN NorthBound DB version 6.1.0+ Submariner supports all currently-supported Kubernetes versions, as determined by the Kubernetes release policy.\nDeployment Submariner is deployed and managed using its Operator. Submariner\u0026rsquo;s Operator can be deployed using subctl or Helm.\nThe recommended deployment method is subctl, as it is currently the default in CI and provides diagnostic features.\n"
},
{
"uri": "/getting-started/architecture/broker/",
"title": "Broker",
"tags": [],
"description": "",
"content": "Submariner uses a central Broker component to facilitate the exchange of metadata information between Gateway Engines deployed in participating clusters. The Broker is basically a set of Custom Resource Definitions (CRDs) backed by the Kubernetes datastore. The Broker also defines a ServiceAccount and RBAC components to enable other Submariner components to securely access the Broker\u0026rsquo;s API.\nWhile there are no Services associated with the Broker, using subctl to deploy the Broker also deploys an operator Pod that installs the CRDs and the Globalnet configuration.\nSubmariner defines two CRDs that are exchanged via the Broker: Endpoint and Cluster. The Endpoint CRD contains the information about the active Gateway Engine in a cluster, such as its IP, needed for clusters to connect to one another. The Cluster CRD contains static information about the originating cluster, such as its Service and Pod CIDRs.\nThe Broker is a singleton component that is deployed on a cluster whose Kubernetes API must be accessible by all of the participating clusters. If there is a mix of on-premises and public clusters, the Broker can be deployed on a public cluster. The Broker cluster may be one of the participating clusters or a standalone cluster without the other Submariner components deployed. The Gateway Engine components deployed in each participating cluster are configured with the information to securely connect to the Broker cluster\u0026rsquo;s API.\nThe availability of the Broker cluster does not affect the operation of the dataplane on the participating clusters, that is the dataplane will continue to route traffic using the last known information while the Broker is unavailable. However, during this time, control plane components will be unable to advertise new or updated information to other clusters and learn about new or updated information from other clusters. When connection is re-established to the Broker, each component will automatically re-synchronize its local information with the Broker and update the dataplane if necessary.\n"
},
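The exchanged Cluster and Endpoint resources can be inspected directly on the Broker cluster; a sketch, assuming the default Broker namespace:
# List the Cluster and Endpoint resources stored on the Broker.
kubectl get clusters.submariner.io,endpoints.submariner.io -n submariner-k8s-broker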
{
"uri": "/operations/deployment/subctl/",
"title": "subctl",
"tags": [],
"description": "",
"content": "The subctl command-line utility simplifies the deployment and maintenance of Submariner by automating interactions with the Submariner Operator.\nSynopsis subctl [command] [--flags] ...\nInstallation Download the subctl binary and make it available on your PATH.\ncurl -Ls https://get.submariner.io | bash export PATH=$PATH:~/.local/bin echo export PATH=\\$PATH:~/.local/bin \u0026gt;\u0026gt; ~/.profile If you have Go and the source code, you can build and install subctl instead:\ncd go/src/submariner-io/subctl go build -o $GOBIN/subctl github.com/submariner-io/subctl/cmd (and ensure your go/bin directory is on your PATH).\nInstalling specific versions By default, https://get.submariner.io will provide the latest release for subctl, and hence Submariner. Specific versions can be requested by using the VERSION environment variable.\nAvalailable options are:\n latest: the latest stable release (default) devel: the devel branch code. rc: the latest release candidate. x.x.x (like 0.6.1, 0.5.0, etc) For example\ncurl https://get.submariner.io | VERSION=devel bash Common options subctl commands which need access to a Kubernetes cluster handle the same access mechanisms as kubectl (see kubectl options). One or more kubeconfig files can be listed in the KUBECONFIG environment variable, or using the --kubeconfig option. A specific context can be chosen using the --context option; by default, subctl commands use the chosen kubeconfig’s current context.\nWhere appropriate, a namespace can be chosen using the --namespace (-n) option; by default, subctl commands use either the appropriate Submariner namespace, or the chosen context’s current namespace.\nSome commands support multiple contexts. The “reference” context is the context specified by --context; the “other” context is identified by a prefix, e.g. --tocontext or --remotecontext. All the options providing access to a cluster are available with the corresponding prefix: --toconfig (the kubeconfig), --tousername etc. It is possible to use mutually conflicting kubeconfig files (e.g. files using the same names for different values) by specifiying them using --kubeconfig and the prefixed --…config; corresponding settings will only use the information from the matching kubeconfig.\nReferences to the “selected context” and “selected namespace” in the documentation below refer to the context and namespace specified using the options described above.\nCommands deploy-broker subctl deploy-broker [flags]\nThe deploy-broker command configures the cluster specified by the selected context as the Broker. It installs the necessary CRDs and the submariner-k8s-broker namespace.\nIn addition, it generates a broker-info.subm file which can be used with the join command to connect clusters to the Broker. This file contains the following details:\n Encryption PSK key Broker access details for subsequent subctl runs Service Discovery settings deploy-broker flags Flag Description --repository \u0026lt;string\u0026gt; The repository from where the various Submariner images will be sourced (default quay.io/submariner) --version \u0026lt;string\u0026gt; Image version (defaults to the subctl version) --components \u0026lt;strings\u0026gt; Comma-separated list of components to be installed - any of service-discovery,connectivity. 
The default is: service-discovery,connectivity --globalnet Enable support for overlapping Cluster/Service CIDRs in connecting clusters (default disabled) --globalnet-cidr-range \u0026lt;string\u0026gt; Global CIDR supernet range for allocating GlobalCIDRs to each cluster (default \u0026ldquo;242.0.0.0/8\u0026rdquo;) --globalnet-cluster-size \u0026lt;value\u0026gt; Default cluster size for GlobalCIDR allocated to each cluster (amount of global IPs) (default 65536) --ipsec-psk-from \u0026lt;string\u0026gt; Import IPsec PSK from existing Submariner broker file, like broker-info.subm (default broker-info.subm) --broker-namespace \u0026lt;string\u0026gt; Namespace on the Broker used for synchronizing resources between clusters (default submariner-k8s-broker) --enable-clusterset-ip Set default support for use of cluster set IP for exported services in connecting clusters (default disabled) --clusterset-ip-cidr-range \u0026lt;string\u0026gt; Cluster set IP CIDR supernet range for allocating cluster set IP CIDRs to each cluster export export service subctl export service [flags] \u0026lt;name\u0026gt; creates a ServiceExport resource for the given Service name. This makes the corresponding Service discoverable from other clusters in the Submariner deployment.\nexport service flags Flag Description --namespace \u0026lt;string\u0026gt; Namespace to use --use-clusterset-ip \u0026lt;string\u0026gt; Use cluster set IP for this service (true or false) If no namespace flag is specified, it uses the default namespace from the current context, if present, otherwise it uses default.\nunexport unexport service subctl unexport service [flags] \u0026lt;name\u0026gt; removes the ServiceExport resource with the given name which in turn stops the Service of the same name from being exported to other clusters.\nunexport service flags Flag Description --namespace \u0026lt;string\u0026gt; Namespace to use If no namespace flag is specified, it uses the default namespace from the current context, if present, otherwise it uses default.\njoin subctl join broker-info.subm [flags]\nThe join command deploys the Submariner Operator in a cluster using the settings provided in the broker-info.subm file. The service account credentials needed for the new cluster to access the Broker cluster will be created and provided to the Submariner Operator deployment.\njoin flags (general) Flag Description --air-gapped Specifies that the cluster is in an air-gapped environment without access to external servers. --broker-url The URL of the broker API endpoint (overrides the URL stored in the broker information file). --cable-driver \u0026lt;string\u0026gt; Cable driver implementation. Available options are libreswan (default), wireguard and vxlan --check-broker-certificate Check the broker certificate (disable this to allow \u0026ldquo;insecure\u0026rdquo; connections) (default true). --clustercidr \u0026lt;string\u0026gt; Specifies the cluster\u0026rsquo;s CIDR used to generate Pod IP addresses. If not specified, subctl will try to discover it and if unable to do so, it will prompt the user --clusterid \u0026lt;string\u0026gt; Cluster ID used to identify the tunnels. Every cluster needs to have a unique cluster ID. If not provided, one will be generated by default based on the cluster name in the kubeconfig file; if the cluster name is not a valid cluster ID, the user will be prompted for one --coredns-custom-configmap The name of the custom CoreDNS configmap used to configure forwarding to Lighthouse. 
It should be in \u0026lt;namespace\u0026gt;/\u0026lt;name\u0026gt; format where \u0026lt;namespace\u0026gt; is optional and defaults to kube-system. --custom-domains The list of domains to use for multicluster service discovery. --ignore-requirements Ignore requirement failures (unsupported). --label-gateway Label gateways (enabled by default). --label-gateway=false disables the prompt for a Worker node to use as gateway --load-balancer Enable a cloud loadbalancer in front of the gateways. This removes the need for dedicated nodes with a public IP address --operator-debug Enable verbose operator debugging. --preferred-server Enable this cluster as a preferred IPsec server for dataplane connections (only available with libreswan cable driver) --pod-debug Enable Submariner pod debugging (verbose logging in the deployed pods) --servicecidr Specifies the cluster\u0026rsquo;s CIDR used to generate Service IP addresses. If not specified, subctl will try to discover it and if unable to do so, it will prompt the user --enable-clusterset-ip Set default support for use of cluster set IP for exported services in connecting clusters (default disabled) --clusterset-ip-cidr \u0026lt;string\u0026gt; Cluster set IP CIDR to be allocated to the cluster join flags (Globalnet) Flag Description --globalnet Enable/disable Globalnet for this cluster (default true). This has no effect if Globalnet is not enabled globally via the Broker --globalnet-cidr \u0026lt;string\u0026gt; If Globalnet is enabled, the specific Globalnet CIDR to use for this cluster. This setting is exclusive with --globalnet-cluster-size --globalnet-cluster-size \u0026lt;value\u0026gt; If Globalnet is enabled, the cluster size for the GlobalCIDR allocated to this cluster (amount of global IPs) join flags (IPsec) Flag Description --natt Enable NAT for IPsec (default enabled) --ipsec-debug Enable IPsec debugging (verbose logging) --force-udp-encaps Force UDP encapsulation --nattport \u0026lt;value\u0026gt; IPsec NAT-T port (default 4500) join flags (images and repositories) Flag Description --repository \u0026lt;string\u0026gt; The repository from where the various Submariner images will be sourced (default quay.io/submariner) --version \u0026lt;string\u0026gt; Image version (defaults to the subctl version) --image-override \u0026lt;string\u0026gt;=\u0026lt;string\u0026gt; Component image override. 
This flag can be used more than once (example: --image-override=submariner-gateway=quay.io/myUser/submariner-gateway:latest) join flags (health check) Flag Description --health-check Enable/disable Gateway health check (default true) --health-check-interval \u0026lt;uint\u0026gt; The interval in seconds at which health check packets will be sent (default 1) --health-check-max-packet-loss-count \u0026lt;uint\u0026gt; The maximum number of packets lost at which the health checker will mark the connection as down (default 5) upgrade subctl upgrade [flags]\nUpgrades subctl to the latest released version and upgrades Submariner components in any accessible clusters to match.\nupgrade flags Flag Description --to-version \u0026lt;string\u0026gt; The version of subctl and Submariner to which to upgrade show show networks subctl show networks [flags]\nInspects the cluster and reports information about the detected network plugin and detected Cluster and Service CIDRs.\nshow versions subctl show versions [flags]\nShows the version and image repository of each Submariner component in the cluster.\nshow gateways subctl show gateways [flags]\nShows summary information about the Submariner gateways in the cluster.\nshow connections subctl show connections [flags]\nShows information about the Submariner endpoint connections with other clusters.\nshow endpoints subctl show endpoints [flags]\nShows information about the Submariner endpoints in the cluster.\nshow brokers subctl show brokers [flags]\nShows information about the Broker in the cluster.\nshow all subctl show all [flags]\nShows the aggregated information from all the other show commands.\nverify subctl verify --context \u0026lt;context1\u0026gt; --tocontext \u0026lt;context2\u0026gt; [--extracontext \u0026lt;context3\u0026gt;] [flags]\nThe verify command verifies a Submariner deployment between two clusters is functioning properly. \u0026lt;context1\u0026gt; will be ClusterA in the reports, while \u0026lt;context2\u0026gt; will be ClusterB in the reports. The --verbose flag is recommended to see what\u0026rsquo;s happening during the tests.\nSome Service Discovery tests require a third cluster, specified via the --extracontext arg, to verify additional functionality. If the third cluster is not specified, those tests are skipped.\nThere are several suites of verifications that can be performed. By default, all verifications are performed. Some verifications are deemed disruptive in that they change some state of the clusters as a side effect. If running the command interactively, you will be prompted for confirmation to perform disruptive verifications unless the --disruptive-tests flag is also specified. If running non-interactively (that is with no stdin), --disruptive-tests must be specified otherwise disruptive verifications are skipped.\nThe connectivity suite verifies dataplane connectivity across the clusters for the following cases:\n Pods (on Gateway nodes) to Services Pods (on non-Gateway nodes) to Services Pods (on Gateway nodes) to Pods Pods (on non-Gateway nodes) to Pods The service-discovery suite verifies DNS discovery of \u0026lt;service\u0026gt;.\u0026lt;namespace\u0026gt;.svc.clusterset.local entries across the clusters.\nThe gateway-failover suite verifies the continuity of cross-cluster dataplane connectivity after a gateway failure in a cluster occurs. This suite requires a single gateway configured on ClusterA and other available Worker nodes capable of serving as gateways. 
Please note that this verification is disruptive.\nverify flags Flag Description --connection-attempts \u0026lt;value\u0026gt; The maximum number of connection attempts (default 2) --connection-timeout \u0026lt;value\u0026gt; The timeout in seconds per connection attempt (default 60) --operation-timeout \u0026lt;value\u0026gt; Operation timeout for Kubernetes API calls (default 240) --junit-report \u0026lt;string\u0026gt; XML report path and name (default \u0026ldquo;\u0026quot;) --verbose Produce verbose logs during connectivity verification --only Comma separated list of specific verifications to perform --disruptive-tests Enable verifications which are potentially disruptive to your deployment --image-override \u0026lt;string\u0026gt;=\u0026lt;string\u0026gt; Component image override. This flag can be used more than once (example: --image-override=submariner-gateway=quay.io/myUser/submariner-gateway:latest) benchmark benchmark throughput subctl benchmark throughput --context \u0026lt;context1\u0026gt; [--tocontext \u0026lt;context2\u0026gt;] [flags]\nThe benchmark throughput command runs a throughput benchmark test between two specified clusters or within a single cluster. It deploys a Pod to run the iperf tool and logs the output to the console. When running benchmark throughput, two types of tests will be executed:\n Pod to Pod - where both Pods are scheduled on Gateway nodes Pod to Pod - where both Pods are scheduled on non-Gateway nodes benchmark latency subctl benchmark latency --context \u0026lt;context1\u0026gt; [--tocontext \u0026lt;context2\u0026gt;] [flags]\nThe benchmark latency command runs a latency benchmark test between two specified clusters or within a single cluster. It deploys a Pod to run the netperf tool and logs the output to the console. When running benchmark latency, two types of tests will be executed:\n Pod to Pod - where both Pods are scheduled on Gateway nodes Pod to Pod - where both Pods are scheduled on non-Gateway nodes benchmark flags Flag Description --verbose Produce verbose logs during benchmark tests --image-override \u0026lt;string\u0026gt;=\u0026lt;string\u0026gt; Component image override. 
This flag can be used more than once (example: --image-override=submariner-gateway=quay.io/myUser/submariner-gateway:latest) diagnose The subctl diagnose command runs various checks to help diagnose issues in a Submariner deployment, or identify configurations in the cluster that may prevent Submariner from working properly.\nBelow is a list of available sub-commands:\n Diagnose command Description Flags deployment checks that the Submariner components are properly deployed and running with no overlapping CIDRs connections checks that the Gateway connections to other clusters are all established k8s-version checks if Submariner can be deployed on the Kubernetes version kube-proxy-mode [flags] checks if the kube-proxy mode is supported by Submariner --namespace \u0026lt;string\u0026gt; cni checks if the detected CNI network plugin is supported by Submariner firewall intra-cluster [flags] checks if the firewall configuration allows traffic via the intra-cluster Submariner VXLAN interface --validation-timeout \u0026lt;value\u0026gt;, --verbose, --namespace \u0026lt;string\u0026gt; firewall inter-cluster --context \u0026lt;localcontext\u0026gt; --remotecontext \u0026lt;remotecontext\u0026gt; [flags] checks if the firewall configuration allows tunnels to be configured on the Gateway nodes --validation-timeout \u0026lt;value\u0026gt;, --verbose, --namespace \u0026lt;string\u0026gt; all runs all diagnostic checks (except those requiring two kubecontexts) diagnose flags descriptions Flag Description --namespace \u0026lt;string\u0026gt; Namespace in which validation pods should be deployed. If not specified, the default namespace is used --validation-timeout \u0026lt;value\u0026gt; Timeout in seconds while validating the connection attempt --image-override \u0026lt;string\u0026gt;=\u0026lt;string\u0026gt; Component image override. This flag can be used more than once (example: --image-override=submariner-gateway=quay.io/myUser/submariner-gateway:latest) --verbose Produce verbose logs during validation diagnose global flags Flag Description --in-cluster Use the in-cluster configuration to connect to Kubernetes.\ngather The subctl gather command collects various information from clusters to aid in troubleshooting a Submariner deployment, including Kubernetes resources and Pod logs. Clusters from which information is gathered are provided via the --kubeconfig flag (or the KUBECONFIG environment variable). By default, it will gather information from all the cluster contexts contained in the kubeconfig. To gather information from specific clusters, contexts can be passed using the --contexts flag.\nThe tool creates a UTC timestamped directory of the format submariner-YYYYMMDDHHMMSS containing various files. Kubernetes resources are written to YAML files with the naming format \u0026lt;cluster-name\u0026gt;_\u0026lt;resource-type\u0026gt;_\u0026lt;namespace\u0026gt;_\u0026lt;resource-name\u0026gt;.yaml. Pod logs are written to files with the format \u0026lt;cluster-name\u0026gt;_\u0026lt;pod-name\u0026gt;.log.\nThe specific information collected is configurable. As part of gathering connectivity resources, it also collects information specific to the CNI and Submariner cable driver in use from each node, using the file format \u0026lt;cluster-name\u0026gt;_\u0026lt;node-name\u0026gt;_\u0026lt;command\u0026gt;.yaml.\ngather flags Flag Description --module \u0026lt;string\u0026gt; Comma-separated list of components for which to gather data. 
Default is operator,connectivity,service-discovery,broker --type \u0026lt;string\u0026gt; Comma-separated list of data types to gather. Default is logs,resources gather examples These examples assume that kubeconfigs have been passed using the KUBECONFIG environment variable. Alternately, add the --kubeconfig flag if the environment variable is not set.\ngather all from all clusters It is recommended to use this when reporting any issue.\nsubctl gather\ngather all from specific clusters subctl gather --contexts cluster-east\ngather operator and connectivity logs from specific clusters subctl gather --contexts cluster-east,cluster-west --module operator,connectivity --type logs\ngather broker and service-discovery resources from all clusters subctl gather --module broker,service-discovery --type resources\ncloud cloud prepare subctl cloud prepare [flags]\nThis command prepares the underlying cloud infrastructure for Submariner installation.\nprepare global flags Flag Description --nat-discovery-port \u0026lt;uint16\u0026gt; NAT discovery port (default 4490) --natt-port \u0026lt;uint16\u0026gt; IPsec NAT traversal port (default 4500) --vxlan-port \u0026lt;uint16\u0026gt; Internal VXLAN port (default 4800) prepare aws subctl cloud prepare aws [flags]\nThis command prepares an OpenShift installer-provisioned infrastructure (IPI) on AWS cloud for Submariner installation.\n Flag Description --credentials \u0026lt;string\u0026gt; AWS credentials configuration file (default $HOME/.aws/credentials) --gateway-instance \u0026lt;string\u0026gt; Type of gateway instance machine (default c5d.large) --gateways \u0026lt;int\u0026gt; Number of dedicated gateways to deploy (Set to 0 when using \u0026ndash;load-balancer mode) (default 1) --infra-id \u0026lt;string\u0026gt; AWS infra ID --ocp-metadata \u0026lt;string\u0026gt; OCP metadata.json file (or directory containing it) to read AWS infra ID and region from (takes precedence over the specific flags) --profile \u0026lt;string\u0026gt; AWS profile to use for credentials (default \u0026ldquo;default\u0026rdquo;) --region \u0026lt;string\u0026gt; AWS region prepare gcp subctl cloud prepare gcp [flags]\nThis command prepares an OpenShift installer-provisioned infrastructure (IPI) on GCP cloud for Submariner installation.\n Flag Description --credentials \u0026lt;string\u0026gt; GCP credentials configuration file (default $HOME/.gcp/osServiceAccount.json) --dedicated-gateway Whether a dedicated gateway node has to be deployed (default true) --gateway-instance \u0026lt;string\u0026gt; Type of gateway instance machine (default n1-standard-4) --gateways \u0026lt;int\u0026gt; Number of dedicated gateways to deploy (default 1) --infra-id \u0026lt;string\u0026gt; GCP infra ID --ocp-metadata \u0026lt;string\u0026gt; OCP metadata.json file (or directory containing it) to read GCP infra ID and region from (takes precedence over the specific flags) --project-id \u0026lt;string\u0026gt; GCP project ID --region \u0026lt;string\u0026gt; GCP region prepare rhos subctl cloud prepare rhos [flags]\nThis command prepares an OpenShift installer-provisioned infrastructure (IPI) on OpenStack cloud for Submariner installation.\n Flag Description --cloud-entry \u0026lt;string\u0026gt; Specific cloud configuration to use from the clouds.yaml --dedicated-gateway Whether a dedicated gateway node has to be deployed (default true) --gateway-instance \u0026lt;string\u0026gt; Type of gateway instance machine (default PnTAE.CPU_4_Memory_8192_Disk_50) --gateways \u0026lt;int\u0026gt; Number 
of gateways to deploy (default 1) --infra-id \u0026lt;string\u0026gt; OpenStack infra ID --ocp-metadata \u0026lt;string\u0026gt; OCP metadata.json file (or directory containing it) to read OpenStack infra ID and region from (takes precedence over the specific flags) --project-id \u0026lt;string\u0026gt; OpenStack project ID --region \u0026lt;string\u0026gt; OpenStack region prepare generic subctl cloud prepare generic [flags]\nThis command prepares a generic cluster for Submariner installation. It assumes that the cloud already has the necessary firewall ports opened and will only label the required number of gateway nodes for Submariner installation.\n Flag Description --gateways \u0026lt;int\u0026gt; Number of gateways to deploy (default 1) cloud cleanup This command cleans up the cloud after Submariner uninstallation.\ncleanup aws subctl cloud cleanup aws [flags]\nThis command cleans up an OpenShift installer-provisioned infrastructure (IPI) on AWS-based cloud after Submariner uninstallation.\n Flag Description --credentials \u0026lt;string\u0026gt; AWS credentials configuration file (default $HOME/.aws/credentials) --infra-id \u0026lt;string\u0026gt; AWS infra ID --ocp-metadata \u0026lt;string\u0026gt; OCP metadata.json file (or directory containing it) to read AWS infra ID and region from --profile \u0026lt;string\u0026gt; AWS profile to use for credentials --region \u0026lt;string\u0026gt; AWS region cleanup gcp subctl cloud cleanup gcp [flags]\nThis command cleans up an installer-provisioned infrastructure (IPI) on GCP-based cloud after Submariner uninstallation.\n Flag Description --credentials \u0026lt;string\u0026gt; GCP credentials configuration file (default $HOME/.gcp/osServiceAccount.json) --infra-id \u0026lt;string\u0026gt; GCP infra ID --ocp-metadata \u0026lt;string\u0026gt; OCP metadata.json file (or directory containing it) to read GCP infra ID and region from --project-id \u0026lt;string\u0026gt; GCP project ID --region \u0026lt;string\u0026gt; GCP region cleanup rhos subctl cloud cleanup rhos [flags]\nThis command cleans up an installer-provisioned infrastructure (IPI) on OpenStack-based cloud after Submariner uninstallation.\n Flag Description --cloud-entry \u0026lt;string\u0026gt; The cloud entry to use (default openstack) --infra-id \u0026lt;string\u0026gt; OpenStack infra ID --ocp-metadata \u0026lt;string\u0026gt; OCP metadata.json file (or directory containing it) to read OpenStack infra ID and region from --project-id \u0026lt;string\u0026gt; OpenStack project ID --region \u0026lt;string\u0026gt; OpenStack region cleanup generic subctl cloud cleanup generic [flags]\nThis command removes the labels from gateway nodes after Submariner uninstallation.\nversion subctl version\nPrints the version details for the subctl binary.\nuninstall subctl uninstall [flags]\nThis command uninstalls Submariner and its components.\nThe following steps are performed:\n Delete Submariner ClusterRoles and ClusterRoleBindings. Delete the submariner.io CRDs. Delete the routing entries (iptables/routes/ipsets) programmed on the nodes. Delete the tunnel interfaces created for internal/external communication. Delete the Submariner namespace. Delete the Broker namespace, if present. Unlabel the gateway nodes. 
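For example, a minimal non-interactive uninstall might look like the following sketch (the kubeconfig path is illustrative; --yes skips the confirmation prompt, and --namespace is only needed if Submariner was installed outside the default submariner-operator namespace):\nsubctl uninstall --kubeconfig /path/to/kubeconfig --yes\n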
uninstall flags Flag Description --namespace \u0026lt;string\u0026gt; Namespace in which Submariner is installed (default submariner-operator) --yes Automatically answer yes to the confirmation prompt recover-broker-info subctl recover-broker-info [flags]\nThis command recovers a lost broker-info.subm file.\nRunning subctl diagnose from a Pod in a cluster As of 0.12.0, a subctl image is provided for running the subctl binary from a Pod within a cluster. The output of the subctl diagnose command can be accessed from the Pod\u0026rsquo;s logs. The --in-cluster flag was added to subctl diagnose to support this use case.\nExample running subctl diagnose using a Job apiVersion: batch/v1 kind: Job metadata: name: submariner-diagnose namespace: submariner-operator spec: template: metadata: labels: submariner.io/transient: \u0026#34;true\u0026#34; spec: containers: - name: submariner-diagnose image: quay.io/submariner/subctl:devel command: [\u0026#34;subctl\u0026#34;, \u0026#34;diagnose\u0026#34;, \u0026#34;all\u0026#34;, \u0026#34;--in-cluster\u0026#34;] restartPolicy: Never serviceAccount: submariner-diagnose serviceAccountName: submariner-diagnose backoffLimit: 0 The logs of the resulting Pod, which runs subctl diagnose all --in-cluster, can be accessed using the label job-name=submariner-diagnose. A similar template can also be used to create a CronJob that runs subctl diagnose periodically.\n"
},
{
"uri": "/operations/upgrading/",
"title": "Upgrading",
"tags": [],
"description": "",
"content": "Starting with Submariner 0.16, the recommended way to upgrade Submariner is via the subctl upgrade command. This can be used to upgrade clusters to Submariner 0.16 or later. To upgrade older clusters to a version of Submariner before 0.16, follow the manual upgrade process.\nAutomated Upgrade If your current version of subctl (as indicated by subctl version) is older than 0.16, start by installing the desired version of subctl. If your current version of subctl is 0.16 or later, it will upgrade itself during the upgrade process.\nOnce you have subctl 0.16 or later, run it with a kubeconfig pointing to the cluster(s) you wish to upgrade:\nsubctl upgrade --kubeconfig /path/to/kubeconfig (The --kubeconfig parameter is optional; subctl will use any configuration that kubectl would find.)\nsubctl upgrade will start by upgrading subctl to the latest released version, and then upgrade all the Submariner components in accessible clusters to match, i.e. all Submariner components present in any cluster accessible through a context in the configured kubeconfig.\nA specific target version can be specified using the --to-version parameter:\nsubctl upgrade --to-version v0.16.0 Manual Upgrade To manually upgrade Submariner in a set of clusters, follow the steps below:\nMake sure KUBECONFIG for all participating clusters is exported and all participating clusters are accessible via kubectl.\n Download the appropriate version of subctl\n Re-deploy the broker in the broker context, pointing to the previous broker-info.subm file to preserve the PSK:\nsubctl deploy-broker --context cluster1 --ipsec-psk-from broker-info.subm Join the connected clusters:\nsubctl join --context cluster1 subctl join --context cluster2 This will restart the operator and all Submariner pods, using the version of Submariner matching the version of subctl.\n "
},
{
"uri": "/operations/usage/",
"title": "User Guide",
"tags": [],
"description": "",
"content": "Overview This guide is intended for users who have a Submariner environment set up and want to verify the installation and learn more about how to use Submariner and the main capabilities it provides. This guide assumes that there are two Kubernetes clusters, cluster2 and cluster3, forming a cluster set, and that the Broker is deployed into a separate cluster cluster1.\nMake sure you have subctl set up. Regardless of how Submariner was deployed, subctl can be used for various verification and troubleshooting tasks, as shown in this guide.\n This guide focuses on a non-Globalnet Submariner deployment.\n 1. Validate the Installation On the Broker The Broker facilitates the exchange of metadata information between the connected clusters, enabling them to discover one another. The Broker consists of only a set of Custom Resource Definitions (CRDs); there are no Pods or Services deployed with it.\nThis command validates that the Broker namespace has been created in the Broker cluster:\n$ export KUBECONFIG=cluster1/auth/kubeconfig $ kubectl config use-context cluster1 Switched to context \u0026#34;cluster1\u0026#34;. $ kubectl get namespace submariner-k8s-broker NAME STATUS AGE submariner-k8s-broker Active 5m This command validates that the Submariner CRDs have been created in the Broker cluster:\n$ kubectl get crds | grep -iE \u0026#39;submariner|multicluster.x-k8s.io\u0026#39; clusters.submariner.io 2020-11-30T13:49:16Z endpoints.submariner.io 2020-11-30T13:49:16Z gateways.submariner.io 2020-11-30T13:49:16Z serviceexports.multicluster.x-k8s.io 2020-11-30T13:52:39Z serviceimports.multicluster.x-k8s.io 2020-11-30T13:52:39Z This command validates that the participating clusters have successfully joined the Broker:\n$ kubectl -n submariner-k8s-broker get clusters.submariner.io NAME AGE cluster2 5m9s cluster3 2m9s On Connected Clusters The commands below can be used on either cluster2 or cluster3 to verify that the two clusters have successfully formed a cluster set and are properly connected to one another. In this example, the commands are being issued on cluster2.\n$ export KUBECONFIG=cluster2/auth/kubeconfig $ kubectl config use-context cluster2 Switched to context \u0026#34;cluster2\u0026#34;. The command below lists all the Submariner related Pods. Ensure that the STATUS for each is Running, noting that some could have an intermediate transient status, like Pending or ContainerCreating, indicating they are still starting up. 
To continuously monitor the Pods, you can specify the --watch flag with the command:\n$ kubectl -n submariner-operator get pods NAME READY STATUS RESTARTS AGE submariner-gateway-btzrq 1/1 Running 0 76s submariner-metrics-proxy-sznnc 1/1 Running 0 76s submariner-lighthouse-agent-586cf4899-wn747 1/1 Running 0 75s submariner-lighthouse-coredns-c88f64f5-h77kw 1/1 Running 0 73s submariner-lighthouse-coredns-c88f64f5-qlw4x 1/1 Running 0 73s submariner-operator-dcbdf5669-n7jgp 1/1 Running 0 89s submariner-routeagent-bmgbc 1/1 Running 0 75s submariner-routeagent-rl9nh 1/1 Running 0 75s submariner-routeagent-wqmzs 1/1 Running 0 75s This command verifies on which Kubernetes node the Gateway Engine is running:\n$ kubectl get node --selector=submariner.io/gateway=true -o wide NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME cluster2-worker Ready worker 6h59m v1.17.0 172.17.0.7 3.81.125.62 Ubuntu 19.10 5.8.18-200.fc32.x86_64 containerd://1.3.2 This command verifies the connection between the participating clusters:\n$ subctl show connections --context cluster2 Showing information for cluster \u0026#34;cluster2\u0026#34;: GATEWAY CLUSTER REMOTE IP CABLE DRIVER SUBNETS STATUS cluster3-worker cluster3 172.17.0.10 libreswan 100.3.0.0/16, 10.3.0.0/16 connected This command shows detailed information about the Gateway including the connections to other clusters. The section highlighted in bold shows the connection information for cluster3, including the connection status and latency statistics:\n $ kubectl -n submariner-operator describe Gateway Name: cluster2-worker Namespace: submariner-operator Labels: Annotations: update-timestamp: 1606751397 API Version: submariner.io/v1 Kind: Gateway Metadata: Creation Timestamp: 2020-11-30T13:51:39Z Generation: 538 Resource Version: 28717 Self Link: /apis/submariner.io/v1/namespaces/submariner-operator/gateways/cluster2-worker UID: 682f791a-00b5-4f51-8249-80c7c82c4bbf Status: Connections: Endpoint: Backend: libreswan cable_name: submariner-cable-cluster3-172-17-0-10 cluster_id: cluster3 Health Check IP: 10.3.224.0 Hostname: cluster3-worker nat_enabled: false private_ip: 172.17.0.10 public_ip: Subnets: 100.3.0.0/16 10.3.0.0/16 Latency RTT: Average: 1.16693ms Last: 1.128109ms Max: 1.470344ms Min: 1.110059ms Std Dev: 68.57µs Status: connected Status Message: Ha Status: active Local Endpoint: Backend: libreswan cable_name: submariner-cable-cluster2-172-17-0-7 cluster_id: cluster2 Health Check IP: 10.2.224.0 Hostname: cluster2-worker nat_enabled: false private_ip: 172.17.0.7 public_ip: Subnets: 100.2.0.0/16 10.2.0.0/16 Status Failure: Version: v0.8.0-pre0-1-g5d7f163 Events: To validate that Service Discovery (Lighthouse) is installed properly, check that the ServiceExport and ServiceImport CRDs have been deployed in the cluster:\n$ kubectl get crds | grep -iE \u0026#39;multicluster.x-k8s.io\u0026#39; serviceexports.multicluster.x-k8s.io 2020-11-30T13:50:34Z serviceimports.multicluster.x-k8s.io 2020-11-30T13:50:33Z Verify that the submariner-lighthouse-coredns Service is ready:\n$ kubectl -n submariner-operator get service submariner-lighthouse-coredns NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE submariner-lighthouse-coredns ClusterIP 100.2.177.123 \u0026lt;none\u0026gt; 53/UDP 126m Verify that CoreDNS was properly configured to forward requests sent for the clusterset.local domain to the to Lighthouse CoreDNS Server in the cluster:\n$ kubectl -n kube-system describe configmap coredns Name: coredns Namespace: kube-system 
Labels: \u0026lt;none\u0026gt; Annotations: \u0026lt;none\u0026gt; Data ==== Corefile: ---- #lighthouse-start AUTO-GENERATED SECTION. DO NOT EDIT clusterset.local:53 { forward . 100.2.177.123 } #lighthouse-end .:53 { errors health { lameduck 5s } ready kubernetes cluster2.local in-addr.arpa ip6.arpa { pods insecure fallthrough in-addr.arpa ip6.arpa ttl 30 } prometheus :9153 forward . /etc/resolv.conf cache 30 loop reload loadbalance } Note that 100.2.177.123 is the ClusterIP address of the submariner-lighthouse-coredns Service we verified earlier.\n2. Export Services Across Clusters At this point, we have enabled secure IP communication between the connected clusters and formed the cluster set infrastructure. However, further configuration is required in order to signify that a Service should be visible and discoverable to other clusters in the cluster set. In following sections, we will define a Service and show how to export it to other clusters.\nThis guide uses a simple nginx server for demonstration purposes.\nIn the example below, we create the nginx resources within the nginx-test namespace. Note that the namespace must be created in both clusters for service discovery to work properly.\n Test ClusterIP Services 1. Create an nginx Deployment on cluster3 $ export KUBECONFIG=cluster3/auth/kubeconfig $ kubectl config use-context cluster3 Switched to context \u0026#34;cluster3\u0026#34;. The following commands create an nginx Service in the nginx-test namespace which targets TCP port 8080, with name http, on any Pod with the app: nginx label and exposes it on an abstracted Service port. When created, the Service is assigned a unique IP address (also called ClusterIP):\n$ kubectl create namespace nginx-test namespace/nginx-test created $ kubectl -n nginx-test create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine deployment.apps/nginx created kubectl apply the following YAML within the nginx-test namespace to create the service:\napiVersion: v1 kind: Service metadata: labels: app: nginx name: nginx namespace: nginx-test spec: ports: - name: http port: 8080 protocol: TCP targetPort: 8080 selector: app: nginx sessionAffinity: None type: ClusterIP status: loadBalancer: {} $ kubectl -n nginx-test apply -f nginx-svc.yaml service/nginx exposed Verify that the Service exists and is running:\n$ kubectl -n nginx-test get service nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx ClusterIP 100.3.220.176 \u0026lt;none\u0026gt; 8080/TCP 2m41s $ kubectl -n nginx-test get pods -l app=nginx -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES nginx-667744f849-t26s5 1/1 Running 0 3m 10.3.0.5 cluster3-worker2 \u0026lt;none\u0026gt; \u0026lt;none\u0026gt; 2. Export the Service In order to signify that the Service should be visible and discoverable to other clusters in the cluster set, a ServiceExport object needs to be created. This is done using the subctl export command:\n$ subctl export service --namespace nginx-test nginx Service exported successfully After creation of the ServiceExport, the nginx Service will be exported to other clusters via the Broker. 
The Status information on the ServiceExport object will indicate this:\n$ kubectl -n nginx-test describe serviceexports Name: nginx Namespace: nginx-test Labels: \u0026lt;none\u0026gt; Annotations: \u0026lt;none\u0026gt; API Version: multicluster.x-k8s.io/v1alpha1 Kind: ServiceExport Metadata: Creation Timestamp: 2020-12-01T12:35:32Z Generation: 1 Resource Version: 302209 Self Link: /apis/multicluster.x-k8s.io/v1alpha1/namespaces/nginx-test/serviceexports/nginx UID: afe0533c-7cca-4443-9d8a-aee8e888e8bc Status: Conditions: Last Transition Time: 2020-12-01T12:35:32Z Message: Reason: Status: True Type: Valid Last Transition Time: 2020-12-01T12:35:32Z Message: Service was successfully synced to the broker Reason: Status: True Type: Synced Events: \u0026lt;none\u0026gt; Once exported, the Service can be discovered as nginx.nginx-test.svc.clusterset.local across the cluster set.\n3. Consume the Service on cluster2 Verify that the exported nginx Service was imported to cluster2 as expected. Submariner (via Lighthouse) automatically creates a corresponding ServiceImport in the service namespace:\n$ export KUBECONFIG=cluster2/auth/kubeconfig $ kubectl config use-context cluster2 Switched to context \u0026#34;cluster2\u0026#34;. $ kubectl get -n nginx-test serviceimport NAME TYPE IP AGE nginx ClusterSetIP 13m Next, run a test Pod on cluster2 and try to access the nginx Service from within the Pod:\n$ kubectl create namespace nginx-test namespace/nginx-test created $ kubectl run -n nginx-test tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- /bin/bash bash-5.0# curl nginx.nginx-test.svc.clusterset.local:8080 \u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;title\u0026gt;Welcome to nginx!\u0026lt;/title\u0026gt; \u0026lt;style\u0026gt; body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } \u0026lt;/style\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;h1\u0026gt;Welcome to nginx!\u0026lt;/h1\u0026gt; \u0026lt;p\u0026gt;If you see this page, the nginx web server is successfully installed and working. Further configuration is required.\u0026lt;/p\u0026gt; \u0026lt;p\u0026gt;For online documentation and support please refer to \u0026lt;a href=\u0026#34;http://nginx.org/\u0026#34;\u0026gt;nginx.org\u0026lt;/a\u0026gt;.\u0026lt;br/\u0026gt; Commercial support is available at \u0026lt;a href=\u0026#34;http://nginx.com/\u0026#34;\u0026gt;nginx.com\u0026lt;/a\u0026gt;.\u0026lt;/p\u0026gt; \u0026lt;p\u0026gt;\u0026lt;em\u0026gt;Thank you for using nginx.\u0026lt;/em\u0026gt;\u0026lt;/p\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; bash-5.0# dig nginx.nginx-test.svc.clusterset.local ; \u0026lt;\u0026lt;\u0026gt;\u0026gt; DiG 9.16.6 \u0026lt;\u0026lt;\u0026gt;\u0026gt; nginx.nginx-test.svc.clusterset.local ;; global options: +cmd ;; Got answer: ;; WARNING: .local is reserved for Multicast DNS ;; You are currently testing what happens when an mDNS query is leaked to DNS ;; -\u0026gt;\u0026gt;HEADER\u0026lt;\u0026lt;- opcode: QUERY, status: NOERROR, id: 34800 ;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 ;; WARNING: recursion requested but not available ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ; COOKIE: 6ff7ea72c14ce2d4 (echoed) ;; QUESTION SECTION: ;nginx.nginx-test.svc.clusterset.local. IN\tA ;; ANSWER SECTION: nginx.nginx-test.svc.clusterset.local. 
5 IN A\t100.3.220.176 ;; Query time: 16 msec ;; SERVER: 100.2.0.10#53(100.2.0.10) ;; WHEN: Mon Nov 30 17:52:55 UTC 2020 ;; MSG SIZE rcvd: 125 bash-5.0# dig SRV _http._tcp.nginx.nginx-test.svc.clusterset.local ; \u0026lt;\u0026lt;\u0026gt;\u0026gt; DiG 9.16.6 \u0026lt;\u0026lt;\u0026gt;\u0026gt; SRV _http._tcp.nginx.nginx-test.svc.clusterset.local ;; global options: +cmd ;; Got answer: ;; WARNING: .local is reserved for Multicast DNS ;; You are currently testing what happens when an mDNS query is leaked to DNS ;; -\u0026gt;\u0026gt;HEADER\u0026lt;\u0026lt;- opcode: QUERY, status: NOERROR, id: 21993 ;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 ;; WARNING: recursion requested but not available ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ; COOKIE: 3f6018af2626ebd2 (echoed) ;; QUESTION SECTION: ;_http._tcp.nginx.nginx-test.svc.clusterset.local. IN SRV ;; ANSWER SECTION: _http._tcp.nginx.nginx-test.svc.clusterset.local. 5 IN SRV 0 50 8080 nginx.nginx-test.svc.clusterset.local. ;; Query time: 3 msec ;; SERVER: 100.2.0.10#53(100.2.0.10) ;; WHEN: Fri Jul 23 07:35:51 UTC 2021 ;; MSG SIZE rcvd: 194 Note that DNS resolution works across the clusters, and that the IP address 100.3.220.176 returned is the same ClusterIP associated with the nginx Service on cluster3.\n4. Create an nginx Deployment on cluster2 If multiple clusters export a Service with the same name and from the same namespace, it will be recognized as a single logical Service. To test this, we will deploy the same nginx Service in the same namespace on cluster2:\n$ kubectl -n nginx-test create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine deployment.apps/nginx created kubectl apply the following YAML within the nginx-test namespace to create the service:\napiVersion: v1 kind: Service metadata: labels: app: nginx name: nginx namespace: nginx-test spec: ports: - name: http port: 8080 protocol: TCP targetPort: 8080 selector: app: nginx sessionAffinity: None type: ClusterIP status: loadBalancer: {} $ kubectl -n nginx-test apply -f nginx-svc.yaml service/nginx exposed Verify the Service exists and is running:\n$ kubectl -n nginx-test get service nginx NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx ClusterIP 100.2.29.136 \u0026lt;none\u0026gt; 8080/TCP 1m40s $ kubectl -n nginx-test get pods -l app=nginx -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES nginx-5578584966-d7sj7 1/1 Running 0 22s 10.2.224.3 cluster2-worker \u0026lt;none\u0026gt; \u0026lt;none\u0026gt; 5. Export the Service As before, use the subctl export command to export the Service:\n$ subctl export service --namespace nginx-test nginx Service exported successfully After creation of the ServiceExport, the nginx Service will be exported to other clusters via the Broker. 
The Status information on the ServiceExport object will indicate this:\n$ kubectl -n nginx-test describe serviceexports Name: nginx Namespace: nginx-test Labels: \u0026lt;none\u0026gt; Annotations: \u0026lt;none\u0026gt; API Version: multicluster.x-k8s.io/v1alpha1 Kind: ServiceExport Metadata: Creation Timestamp: 2020-12-07T17:37:59Z Generation: 1 Resource Version: 3131 Self Link: /apis/multicluster.x-k8s.io/v1alpha1/namespaces/nginx-test/serviceexports/nginx UID: 7348eb3c-9558-4dc7-be1d-b0255a2038fd Status: Conditions: Last Transition Time: 2020-12-07T17:37:59Z Message: Reason: Status: True Type: Valid Last Transition Time: 2020-12-07T17:37:59Z Message: Service was successfully synced to the broker Reason: Status: True Type: Synced Events: \u0026lt;none\u0026gt; 6. Consume the Service from cluster2 Run a test Pod on cluster2 and try to access the nginx Service from within the Pod:\nkubectl run -n nginx-test tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- /bin/bash bash-5.0# curl nginx.nginx-test.svc.clusterset.local:8080 \u0026lt;!DOCTYPE html\u0026gt; \u0026lt;html\u0026gt; \u0026lt;head\u0026gt; \u0026lt;title\u0026gt;Welcome to nginx!\u0026lt;/title\u0026gt; \u0026lt;style\u0026gt; body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } \u0026lt;/style\u0026gt; \u0026lt;/head\u0026gt; \u0026lt;body\u0026gt; \u0026lt;h1\u0026gt;Welcome to nginx!\u0026lt;/h1\u0026gt; \u0026lt;p\u0026gt;If you see this page, the nginx web server is successfully installed and working. Further configuration is required.\u0026lt;/p\u0026gt; \u0026lt;p\u0026gt;For online documentation and support please refer to \u0026lt;a href=\u0026#34;http://nginx.org/\u0026#34;\u0026gt;nginx.org\u0026lt;/a\u0026gt;.\u0026lt;br/\u0026gt; Commercial support is available at \u0026lt;a href=\u0026#34;http://nginx.com/\u0026#34;\u0026gt;nginx.com\u0026lt;/a\u0026gt;.\u0026lt;/p\u0026gt; \u0026lt;p\u0026gt;\u0026lt;em\u0026gt;Thank you for using nginx.\u0026lt;/em\u0026gt;\u0026lt;/p\u0026gt; \u0026lt;/body\u0026gt; \u0026lt;/html\u0026gt; bash-5.0# dig nginx.nginx-test.svc.clusterset.local ; \u0026lt;\u0026lt;\u0026gt;\u0026gt; DiG 9.16.6 \u0026lt;\u0026lt;\u0026gt;\u0026gt; nginx.nginx-test.svc.clusterset.local ;; global options: +cmd ;; Got answer: ;; WARNING: .local is reserved for Multicast DNS ;; You are currently testing what happens when an mDNS query is leaked to DNS ;; -\u0026gt;\u0026gt;HEADER\u0026lt;\u0026lt;- opcode: QUERY, status: NOERROR, id: 55022 ;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 ;; WARNING: recursion requested but not available ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ; COOKIE: 6f9db9800a9a9779 (echoed) ;; QUESTION SECTION: ;nginx.nginx-test.svc.clusterset.local. IN\tA ;; ANSWER SECTION: nginx.nginx-test.svc.clusterset.local. 
5 IN A\t100.2.29.136 ;; Query time: 5 msec ;; SERVER: 100.3.0.10#53(100.3.0.10) ;; WHEN: Tue Dec 01 07:45:48 UTC 2020 ;; MSG SIZE rcvd: 125 bash-5.0# dig SRV _http._tcp.nginx.nginx-test.svc.clusterset.local ; \u0026lt;\u0026lt;\u0026gt;\u0026gt; DiG 9.16.6 \u0026lt;\u0026lt;\u0026gt;\u0026gt; SRV _http._tcp.nginx.nginx-test.svc.clusterset.local ;; global options: +cmd ;; Got answer: ;; WARNING: .local is reserved for Multicast DNS ;; You are currently testing what happens when an mDNS query is leaked to DNS ;; -\u0026gt;\u0026gt;HEADER\u0026lt;\u0026lt;- opcode: QUERY, status: NOERROR, id: 19656 ;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1 ;; WARNING: recursion requested but not available ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ; COOKIE: 8fe1ebfcf9165a8d (echoed) ;; QUESTION SECTION: ;_http._tcp.nginx.nginx-test.svc.clusterset.local. IN SRV ;; ANSWER SECTION: _http._tcp.nginx.nginx-test.svc.clusterset.local. 5 IN SRV 0 50 8080 nginx.nginx-test.svc.clusterset.local. ;; Query time: 16 msec ;; SERVER: 100.2.0.10#53(100.2.0.10) ;; WHEN: Fri Jul 23 09:11:21 UTC 2021 ;; MSG SIZE rcvd: 194 At this point we have the same nginx Service deployed within the nginx-test namespace on both clusters. Note that DNS resolution works, and the IP address 100.2.29.136 returned is the ClusterIP associated with the local nginx Service deployed on cluster2. This is expected, as Submariner prefers to handle the traffic locally whenever possible.\n 7. Stopping the service from being exported If you don\u0026rsquo;t want to have the service exported to other clusters in the cluster set anymore, you can do so with subctl unexport:\n$ subctl unexport service --namespace nginx-test nginx Service unexported successfully Now, the service is no longer discoverable outside its cluster.\nService Discovery for Services Deployed to Multiple Clusters Submariner follows this logic for service discovery across the cluster set:\n If an exported Service is not available in the local cluster, Lighthouse DNS returns the IP address of the ClusterIP Service from one of the remote clusters on which the Service was exported. If it is an SRV query, an SRV record with port and domain name corresponding to the ClusterIP will be returned.\n If an exported Service is available in the local cluster, Lighthouse DNS always returns the IP address of the local ClusterIP Service. In this example, if a Pod from cluster2 tries to access the nginx Service as nginx.nginx-test.svc.clusterset.local now, Lighthouse DNS resolves the Service as 100.2.29.136 which is the local ClusterIP Service on cluster2. Similarly, if a Pod from cluster3 tries to access the nginx Service as nginx.nginx-test.svc.clusterset.local, Lighthouse DNS resolves the Service as 100.3.220.176 which is the local ClusterIP Service on cluster3.\n If multiple clusters export a Service with the same name and from the same namespace, Lighthouse DNS load-balances between the clusters in a round-robin fashion. If, in our example, a Pod from a third cluster that joined the cluster set tries to access the nginx Service as nginx.nginx-test.svc.clusterset.local, Lighthouse will round-robin the DNS responses across cluster2 and cluster3, causing requests to be served by both clusters. Note that Lighthouse returns IPs from connected clusters only. 
Clusters in disconnected state are ignored.\n Applications can always access a Service from a specific cluster by prefixing the DNS query with cluster-id as follows: \u0026lt;cluster-id\u0026gt;.\u0026lt;svcname\u0026gt;.\u0026lt;namespace\u0026gt;.svc.clusterset.local. In our example, querying for cluster2.nginx.nginx-test.svc.clusterset.local always returns the ClusterIP Service on cluster2. Similarly, cluster3.nginx.nginx-test.svc.clusterset.local always returns the ClusterIP Service on cluster3.\n Cluster Set Virtual IP Submariner can also allocate a cluster set virtual IP for an exported service that is stored in the ServiceImport resource. This is an opt-in feature that can be enabled per service via the lighthouse.submariner.io/use-clusterset-ip annotation on the ServiceExport or automatically for all services via the enable-clusterset-ip option on subctl deploy-broker. Submariner will allocate a virtual IP from a pool of IP addresses based on a configurable CIDR assigned to the cluster from a global CIDR range. The first cluster to export a service will allocate and assign the virtual IP.\nLighthouse DNS will return the cluster set virtual IP from queries instead of a constituent cluster IP address. However, Submariner does not route this virtual IP and relies on some external component to do so.\nTest StatefulSet and Headless Service Submariner also supports Headless Services with StatefulSets, making it possible to access individual Pods via their stable DNS name. Kubernetes supports this by introducing stable Pod IDs composed of \u0026lt;pod-name\u0026gt;.\u0026lt;svc-name\u0026gt;.\u0026lt;ns\u0026gt;.svc.cluster.local within a single cluster, which Submariner extends to \u0026lt;pod-name\u0026gt;.\u0026lt;cluster-id\u0026gt;.\u0026lt;svc-name\u0026gt;.\u0026lt;ns\u0026gt;.svc.clusterset.local across the cluster set. The Headless Service in this case offers one single Service for all the underlying Pods.\nSince we need to use \u0026lt;cluster-id\u0026gt; in DNS queries for individual pods, cluster ID must be a valid DNS-1123 Label\n Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods. StatefulSets are typically used for applications that require stable unique network identifiers, persistent storage, and ordered deployment and scaling.\n1. Create a StatefulSet and Headless Service on cluster3 kubectl apply the following yaml within the nginx-test namespace:\napiVersion: v1 kind: Service metadata: name: nginx-ss labels: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss spec: ports: - port: 80 name: web clusterIP: None selector: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss --- apiVersion: apps/v1 kind: StatefulSet metadata: name: web spec: serviceName: \u0026#34;nginx-ss\u0026#34; replicas: 2 selector: matchLabels: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss template: metadata: labels: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss spec: containers: - name: nginx-ss image: nginxinc/nginx-unprivileged:stable-alpine ports: - containerPort: 80 name: web This specification will create a StatefulSet named web which indicates that two replicas of the nginx container will be launched in unique Pods. This also creates a Headless Service called nginx-ss on the nginx-test namespace. 
Note that Headless Service is requested by explicitly specifying \u0026ldquo;None\u0026rdquo; for the clusterIP (.spec.clusterIP).\n$ kubectl -n nginx-test apply -f ./nginx-ss.yaml service/nginx-ss created statefulset.apps/web created Verify the Service and StatefulSet:\n$ kubectl -n nginx-test get service nginx-ss NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx-ss ClusterIP None \u0026lt;none\u0026gt; 80/TCP 83s $ kubectl -n nginx-test describe statefulset web Name: web Namespace: nginx-test CreationTimestamp: Mon, 30 Nov 2020 21:53:01 +0200 Selector: app.kubernetes.io/instance=nginx-ss,app.kubernetes.io/name=nginx-ss Labels: \u0026lt;none\u0026gt; Annotations: \u0026lt;none\u0026gt; Replicas: 2 desired | 2 total Update Strategy: RollingUpdate Partition: 0 Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed Pod Template: Labels: app.kubernetes.io/instance=nginx-ss app.kubernetes.io/name=nginx-ss Containers: nginx-ss: Image: nginxinc/nginx-unprivileged:stable-alpine Port: 80/TCP Host Port: 0/TCP Environment: \u0026lt;none\u0026gt; Mounts: \u0026lt;none\u0026gt; Volumes: \u0026lt;none\u0026gt; Volume Claims: \u0026lt;none\u0026gt; Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 94s statefulset-controller create Pod web-0 in StatefulSet web successful Normal SuccessfulCreate 85s statefulset-controller create Pod web-1 in StatefulSet web successful 2. Export the Service on cluster-3 As before, use the subctl export command to export the Service:\n$ subctl export service --namespace nginx-test nginx-ss Service exported successfully After creation of the ServiceExport, the nginx-ss Service will be exported to other clusters via the Broker. The Status information on the ServiceExport object will indicate this:\n$ kubectl -n nginx-test describe serviceexport nginx-ss Name: nginx-ss Namespace: nginx-test Labels: \u0026lt;none\u0026gt; Annotations: \u0026lt;none\u0026gt; API Version: multicluster.x-k8s.io/v1alpha1 Kind: ServiceExport Metadata: Creation Timestamp: 2020-11-30T19:59:44Z Generation: 1 Resource Version: 83431 Self Link: /apis/multicluster.x-k8s.io/v1alpha1/namespaces/nginx-test/serviceexports/nginx-ss UID: 2c0d6419-6160-431e-990c-8a9993363b10 Status: Conditions: Last Transition Time: 2020-11-30T19:59:44Z Message: Reason: Status: True Type: Valid Last Transition Time: 2020-11-30T19:59:44Z Message: Service was successfully synced to the broker Reason: Status: True Type: Synced Events: \u0026lt;none\u0026gt; Once the Service is exported successfully, it can be discovered as nginx-ss.nginx-test.svc.clusterset.local across the cluster set. In addition, the individual Pods can be accessed as web-0.cluster3.nginx-ss.nginx-test.svc.clusterset.local and web-1.cluster3.nginx-ss.nginx-test.svc.clusterset.local.\n3. Consume the Service from cluster2 Verify that the exported nginx-ss Service was imported to cluster2. Submariner (via Lighthouse) automatically creates a corresponding ServiceImport in the service namespace:\n$ export KUBECONFIG=cluster2/auth/kubeconfig $ kubectl config use-context cluster2 Switched to context \u0026#34;cluster2\u0026#34;. 
$ kubectl get -n nginx-test serviceimport NAME TYPE IP AGE nginx-ss Headless 5m48s Next, run a test Pod on cluster2 and try to access the nginx-ss Service from within the Pod:\nkubectl run -n nginx-test tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- /bin/bash bash-5.0# dig nginx-ss.nginx-test.svc.clusterset.local ; \u0026lt;\u0026lt;\u0026gt;\u0026gt; DiG 9.16.6 \u0026lt;\u0026lt;\u0026gt;\u0026gt; nginx-ss.nginx-test.svc.clusterset.local ;; global options: +cmd ;; Got answer: ;; WARNING: .local is reserved for Multicast DNS ;; You are currently testing what happens when an mDNS query is leaked to DNS ;; -\u0026gt;\u0026gt;HEADER\u0026lt;\u0026lt;- opcode: QUERY, status: NOERROR, id: 19729 ;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1 ;; WARNING: recursion requested but not available ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ; COOKIE: 0b17506cb2b4a93b (echoed) ;; QUESTION SECTION: ;nginx-ss.nginx-test.svc.clusterset.local. IN A ;; ANSWER SECTION: nginx-ss.nginx-test.svc.clusterset.local. 5 IN A\t10.3.0.5 nginx-ss.nginx-test.svc.clusterset.local. 5 IN A\t10.3.224.3 ;; Query time: 1 msec ;; SERVER: 100.2.0.10#53(100.2.0.10) ;; WHEN: Mon Nov 30 20:18:08 UTC 2020 ;; MSG SIZE rcvd: 184 bash-5.0# dig SRV _web._tcp.nginx-ss.nginx-test.svc.clusterset.local ; \u0026lt;\u0026lt;\u0026gt;\u0026gt; DiG 9.16.6 \u0026lt;\u0026lt;\u0026gt;\u0026gt; SRV _web._tcp.nginx-ss.nginx-test.svc.clusterset.local ;; global options: +cmd ;; Got answer: ;; WARNING: .local is reserved for Multicast DNS ;; You are currently testing what happens when an mDNS query is leaked to DNS ;; -\u0026gt;\u0026gt;HEADER\u0026lt;\u0026lt;- opcode: QUERY, status: NOERROR, id: 16402 ;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1 ;; WARNING: recursion requested but not available ;; OPT PSEUDOSECTION: ; EDNS: version: 0, flags:; udp: 4096 ; COOKIE: cf1e04578842eb5b (echoed) ;; QUESTION SECTION: ;_web._tcp.nginx-ss.nginx-test.svc.clusterset.local. IN SRV ;; ANSWER SECTION: _web._tcp.nginx-ss.nginx-test.svc.clusterset.local. 5 IN SRV 0 50 80 web-0.cluster3.nginx-ss.nginx-test.svc.clusterset.local. _web._tcp.nginx-ss.nginx-test.svc.clusterset.local. 5 IN SRV 0 50 80 web-1.cluster3.nginx-ss.nginx-test.svc.clusterset.local. ;; Query time: 2 msec ;; SERVER: 100.2.0.10#53(100.2.0.10) ;; WHEN: Fri Jul 23 07:38:03 UTC 2021 ;; MSG SIZE rcvd: 341 You can also access the individual Pods:\nbash-5.0# nslookup web-0.cluster3.nginx-ss.nginx-test.svc.clusterset.local Server:\t100.2.0.10 Address:\t100.2.0.10#53 Name:\tweb-0.cluster3.nginx-ss.nginx-test.svc.clusterset.local Address: 10.3.0.5 bash-5.0# nslookup web-1.cluster3.nginx-ss.nginx-test.svc.clusterset.local Server:\t100.2.0.10 Address:\t100.2.0.10#53 Name:\tweb-1.cluster3.nginx-ss.nginx-test.svc.clusterset.local Address: 10.3.224.3 For SRV queries, you can look up the Pods of an individual cluster, but not a single Pod directly:\nbash-5.0# nslookup -q=SRV _web._tcp.cluster3.nginx-ss.nginx-test.svc.clusterset.local Server:\t100.2.0.10 Address:\t100.2.0.10#53 _web._tcp.cluster3.nginx-ss.nginx-test.svc.clusterset.local\tservice = 0 50 80 web-0.cluster3.nginx-ss.nginx-test.svc.clusterset.local. _web._tcp.cluster3.nginx-ss.nginx-test.svc.clusterset.local\tservice = 0 50 80 web-1.cluster3.nginx-ss.nginx-test.svc.clusterset.local. 
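You could also reach a specific replica over HTTP from the test Pod using its per-Pod DNS name, for example (a sketch; note that the nginx-unprivileged image used here serves on port 8080 by default, as in the earlier ClusterIP example, so target whichever port the container actually listens on):\nbash-5.0# curl web-0.cluster3.nginx-ss.nginx-test.svc.clusterset.local:8080\n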
Clean the Created Resources To remove the previously created Kubernetes resources, simply delete the nginx-test namespace from both clusters:\n$ export KUBECONFIG=cluster2/auth/kubeconfig $ kubectl config use-context cluster2 Switched to context \u0026#34;cluster2\u0026#34;. $ kubectl delete namespace nginx-test namespace \u0026#34;nginx-test\u0026#34; deleted $ export KUBECONFIG=cluster3/auth/kubeconfig $ kubectl config use-context cluster3 Switched to context \u0026#34;cluster3\u0026#34;. $ kubectl delete namespace nginx-test namespace \u0026#34;nginx-test\u0026#34; deleted "
},
{
"uri": "/operations/",
"title": "Operations",
"tags": [],
"description": "",
"content": " Deployment subctl Helm Calico CNI Upgrading User Guide Monitoring Troubleshooting Known Issues Uninstalling Submariner "
},
{
"uri": "/operations/monitoring/",
"title": "Monitoring",
"tags": [],
"description": "",
"content": "Basic Overview Submariner provides a number of Prometheus metrics, and sets up ServiceMonitor instances which allow these metrics to be scraped by an in-cluster Prometheus deployment. Prometheus is a pluggable metrics collection and storage system and can act as a data source for Grafana, a metrics visualization frontend. Unlike some metrics collectors, Prometheus requires the collectors to pull metrics from each source.\nPrometheus Operator To start monitoring Submariner using the Prometheus Operator, Prometheus needs to be configured to scrape the Submariner Operator’s namespace (submariner-operator by default). The specifics depend on your Prometheus deployment, but typically, this will require you to:\n Add the Submariner Operator’s namespace to Prometheus’ ClusterRoleBinding.\n Ensure that Prometheus’ configuration doesn’t prevent it from scraping this namespace.\n A minimal Prometheus object providing access to the Submariner metrics is as follows:\napiVersion: monitoring.coreos.com/v1 kind: Prometheus metadata: name: prometheus labels: prometheus: prometheus spec: replicas: 1 serviceAccountName: prometheus serviceMonitorNamespaceSelector: {} serviceMonitorSelector: matchLabels: name: submariner-operator OpenShift Setup OpenShift 4.5 or later will automatically discover the Submariner metrics with service monitors in the openshift-monitoring namespace.\nMetrics Reference Submariner metrics provide insights into both the state of Submariner itself, as well as the inter-cluster network behavior of your cluster set. All Submariner metrics are exported within the submariner-operator namespace by default.\nThe following metrics are exposed currently:\nSubmariner Gateway Name Label Description submariner_gateways The number of gateways in the cluster submariner_gateway_creation_timestamp local_cluster, local_hostname Timestamp of gateway creation time submariner_gateway_sync_iterations Gateway synchronization iterations submariner_gateway_rx_bytes cable_driver, local_cluster, local_hostname, local_endpoint_ip, remote_cluster, remote_hostname, remote_endpoint_ip Count of bytes received by cable driver and cable submariner_gateway_tx_bytes cable_driver, local_cluster, local_hostname, local_endpoint_ip, remote_cluster, remote_hostname, remote_endpoint_ip Count of bytes transmitted by cable driver and cable Submariner Connections Name Label Description submariner_requested_connections local_cluster, local_hostname, remote_cluster, remote_hostname, status: “connecting”, “connected”, or “error” The number of connections by endpoint and status submariner_connections cable_driver, local_cluster, local_hostname, local_endpoint_ip, remote_cluster, remote_hostname, remote_endpoint_ip, status: “connecting”, “connected”, or “error” The number of connections and corresponding status by cable driver and cable submariner_connection_established_timestamp cable_driver, local_cluster, local_hostname, local_endpoint_ip, remote_cluster, remote_hostname, remote_endpoint_ip Timestamp of last successful connection established by cable driver and cable submariner_connection_latency_seconds cable_driver, local_cluster, local_hostname, local_endpoint_ip, remote_cluster, remote_hostname, remote_endpoint_ip Connection latency in seconds; last RTT, by cable driver and cable Globalnet Name Label Description submariner_global_IP_availability cidr Count of available global IPs per CIDR submariner_global_IP_allocated cidr Count of all global IPs allocated for Pods/Services per CIDR 
submariner_global_egress_IP_allocated cidr Count of global Egress IPs allocated for Pods/Services per CIDR submariner_cluster_global_egress_IP_allocated cidr Count of global Egress IPs allocated for clusters per CIDR submariner_global_ingress_IP_allocated cidr Count of global Ingress IPs allocated for Pods/Services per CIDR Service Discovery Name Label Description submariner_service_import direction, operation, syncer_name Count of imported Services submariner_service_export direction, operation, syncer_name Count of exported Services submariner_service_discovery_query source_cluster, destination_cluster, destination_service_name, destination_service_ip, destination_service_namespace Count of DNS queries handled by the Lighthouse plugin "
},
{
"uri": "/getting-started/quickstart/managed-kubernetes/rancher/",
"title": "Rancher",
"tags": [],
"description": "",
"content": " Prerequisites These instructions were developed with Rancher v2.4.x\nMake sure you are familiar with Rancher, and creating clusters. You can create either node driver clusters or Custom clusters, as long as your designated gateway nodes can communicate with each other.\nCreate and Deploy Cluster A In this step you will deploy cluster A, with the default IP CIDRs\n Pod CIDR Service CIDR 10.42.0.0/16 10.43.0.0/16 Use the Rancher UI to create a cluster, leaving the default options selected.\nMake sure you create at least one node that has a publicly accessible IP with the label submariner.io/gateway: \u0026quot;true\u0026quot;, either via node pool or via a custom node registration command.\nCreate and Deploy Cluster B In this step you will deploy cluster B, modifying the default IP CIDRs\n Pod CIDR Service CIDR 10.44.0.0/16 10.45.0.0/16 Create your cluster, but select Edit as YAML in the cluster creation UI. Edit the services stanza to reflect the options below, while making sure to keep the options that were already defined.\nservices: kube-api: service_cluster_ip_range: 10.45.0.0/16 kube-controller: cluster_cidr: 10.44.0.0/16 service_cluster_ip_range: 10.45.0.0/16 kubelet: cluster_domain: cluster.local cluster_dns_server: 10.45.0.10 Make sure you create at least one node that has a publicly accessible IP with the label submariner.io/gateway: \u0026quot;true\u0026quot;, either via node pool or via a custom node registration command.\nOnce you have done this, you can deploy your cluster.\nInstall subctl Download the subctl binary and make it available on your PATH.\ncurl -Ls https://get.submariner.io | bash export PATH=$PATH:~/.local/bin echo export PATH=\\$PATH:~/.local/bin \u0026gt;\u0026gt; ~/.profile If you have Go and the source code, you can build and install subctl instead:\ncd go/src/submariner-io/subctl go build -o $GOBIN/subctl github.com/submariner-io/subctl/cmd (and ensure your go/bin directory is on your PATH).\nObtain the kubeconfig files from the Rancher UI for each of your clusters, placing them in the respective kubeconfigs.\n Cluster Kubeconfig File Name Cluster A kubeconfig-cluster-a Cluster B kubeconfig-cluster-b Edit the kubeconfig files so they use the context names “cluster-a” and “cluster-b”.\nUse cluster-a as Broker subctl deploy-broker --kubeconfig kubeconfig-cluster-a Join cluster-a and cluster-b to the Broker subctl join --kubeconfig kubeconfig-cluster-a broker-info.subm --clusterid cluster-a subctl join --kubeconfig kubeconfig-cluster-b broker-info.subm --clusterid cluster-b Verify connectivity This will run a series of E2E tests to verify proper connectivity between the cluster Pods and Services\nexport KUBECONFIG=kubeconfig-cluster-a:kubeconfig-cluster-b subctl verify --context cluster-a --tocontext cluster-b --only connectivity --verbose "
},
{
"uri": "/development/code-review/",
"title": "Code Review Guide",
"tags": [],
"description": "",
"content": "Code Review Guide This guide is meant to facilitate Submariner code review by sharing norms, best practices, and useful patterns.\nSubmariner follows the Kubernetes Code Review Guide wherever relevant. This guide collects the most important highlights of the Kubernetes process and adds Submariner-specific extensions.\nTwo non-author approvals required Pull Requests to Submariner require two approvals, including at least one from a Committer to the relevant part of the code base, as defined by the CODEOWNERS file at the root of each repository and the Community Membership/Committers process.\nNo merge commits Kubernetes recommends avoiding merge commits.\nWith our current GitHub setup, pull requests are liable to include merge commits temporarily. Whenever a PR is updated through the UI, GitHub merges the target branch into the PR. However, since we merge PRs by either squashing or rebasing them, those merge commits disappear from the series of commits which ultimately ends up in the target branch.\nSquash/amend commits into discrete steps Kubernetes recommends squashing commits using these guidelines.\nAfter a review, prepare your PR for merging by squashing your commits.\nAll commits left on your branch after a review should represent meaningful milestones or units of work. Use commits to add clarity to the development and review process. Keep in mind that smaller commits are easier to review.\nBefore merging a PR, squash the following kinds of commits:\n Fixes/review feedback Typos Merges and rebases Work in progress Aim to have every commit in a PR compile and pass tests independently if you can, but it\u0026rsquo;s not a requirement. Address code review feedback with new commits When addressing review comments, as a general rule, push a new commit instead of amending to the prior commit as the former makes it easy for reviewers to determine what changed.\nTo avoid cluttering the git log, squash the review commits into the appropriate commit before merging. The committer can do this in GitHub via the \u0026ldquo;Squash and merge\u0026rdquo; option. However you may want to preserve other commits, in which case squashing will need to be done manually via the Git CLI. To make that simpler, you can commit the review-prompted changes with git commit --fixup with the appropriate commit hash. This will keep them as separate commits, and if you later rebase with the --autosquash option (that is git rebase --autosquash -i) they will automatically be selected for squashing.\nCommit message formatting Kubernetes recommends these commit message practices.\nIn summary:\n Separate subject from body with a blank line Limit the subject line to 50 characters Capitalize the subject line Do not end the subject line with a period Use the imperative mood in the subject line Wrap the body at 72 characters Use the body to explain what and why vs how GitLint will automatically be run against all commits to try to validate these conventions.\nRequest new reviews after substantial changes If a PR is substantially changed after a code review, the author should request new reviews from all existing reviewers, including approvals, using the double-arrow icons in the list of reviewers. This will notify the reviewer and add the PR to their list of requested reviews.\nWith the current GitHub configuration, reviews are not automatically dismissed when PRs are updated. This is to cause less drag for the typical cases, like minor merge conflicts. 
As Submariner grows, it might make sense to trade this low-drag solution for one where only exactly the reviewed code can be merged.\nAddress all -1s before merging If someone requests changes (\u0026ldquo;votes -1\u0026rdquo;) for a PR, a best-effort should be made to address those concerns and achieve a neutral position or approval (0/+1 vote) before the PR is merged.\nUpdate branch only after required reviews To avoid wasting resources by running unnecessary jobs, only use the Update branch button to add a merge commit once a PR is actually ready to merge (has required reviews and no -1s). Unless other relevant code has changed, the new job results don\u0026rsquo;t tell us anything new. Since changes are constantly being merged, it\u0026rsquo;s likely another merge commit and set of jobs will be necessary right before merging anyway.\nMark work-in-progress PRs as drafts To clearly indicate a PR is still under development and not yet ready for review, mark it as a draft. It is not necessary to modify PR summaries or commit messages (e.g. \u0026ldquo;WIP\u0026rdquo;, \u0026ldquo;DO NOT MERGE\u0026rdquo;). Keeping the same PR summary keeps email notifications threaded, and using the commit message you plan to merge will allow gitlint to verify it. PRs should typically be marked as drafts if any CI is failing that the author can fix before asking for code review.\nPlease do this when opening the PR: instead of clicking on the “Create pull request” button, click on the drop-down arrow next to it, and select “Create draft pull request”. This will avoid notifying code owners; they will be notified when the PR is marked as ready for review.\nUse private forks for debugging PRs by running CI If a PR is not expected to pass CI but the author wants to see the results to enable development, use a personal fork to run CI. This avoids clogging the GitHub Actions job queue of the Submariner-io GitHub Organization. After the same git push to your personal fork you\u0026rsquo;d typically do for a PR, simply choose your fork as the \u0026ldquo;base repository\u0026rdquo; of the PR in GitHub\u0026rsquo;s \u0026ldquo;Open a pull request\u0026rdquo; UI. Make sure your fork\u0026rsquo;s main branch is up-to-date. After creating the PR, CI will trigger as usual but the jobs will count towards your personal queue. You will need to open a new PR against the main repository once your proposed change is ready for review.\nManage dependency among pull requests If a PR (child) is dependent on another PR (parent), irrespective of the project, comment on the child PR with the parent PR\u0026rsquo;s number with Depends on \u0026lt;Parent PR number\u0026gt; or depends on \u0026lt;Parent PR number\u0026gt;. This will trigger a PR Dependencies/Check Dependencies workflow. The workflow will add a dependent label to the child PR. The workflow will fail until the parent PR is merged and will pass once the parent PR is merged. This will prevent merging the child PR until the parent PR is merged.\nTest new functionality As new functionality is added, tests of that functionality should be added to automated test suites. As far as possible, such tests should be added in the same PR that adds the feature.\nFull end-to-end testing of new pull requests On some repositories, full E2E testing of pull requests will be done once a label ready-to-test has been assigned to the request. 
The label will be automatically assigned once the PR reaches the necessary number of approvals.\nYou can assign this label manually to the PR in order to trigger the full E2E test suite.\nDocument \u0026ldquo;why\u0026rdquo; in commit messages Commit messages should document the \u0026ldquo;why\u0026rdquo; of a change. Why is this change being made? Why is this change helpful? The diff is the ultimate documentation of the \u0026ldquo;what\u0026rdquo; of a change, and although it may need explaining, the commit message is the only opportunity to record the \u0026ldquo;why\u0026rdquo; of a change in the git history for future developers. See this example of good \u0026ldquo;why\u0026rdquo; in a commit message.\nRename and edit in separate commits When submitting a PR that modifies the contents of a file and also renames/moves it (git mv), use separate commits for the rename/move (with any required supporting changes so that the commit still builds) on the one hand, and the modifications on the other hand. This makes the git history and GitHub diffs more clear.\n"
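As a concrete sketch of the fixup/autosquash flow described above (the commit hash, branch, and base branch are placeholders, not project requirements):
git commit --fixup abc1234                 # abc1234 is the reviewed commit being amended
git push                                   # reviewers see the fixup as a separate commit
git rebase --autosquash -i origin/devel    # after approval, fold the fixups into their targets
git push --force-with-lease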
},
{
"uri": "/development/shipyard/targets/",
"title": "Shared Targets",
"tags": [],
"description": "",
"content": "Shipyard ships a Makefile.inc file which defines these basic targets:\n clusters: Creates the kind-based cluster environment. deploy : Deploys Submariner components in the cluster environment (depends on clusters). e2e : Runs end to end tests on top of the deployed environment (deploying it if necessary). clean-clusters: Deletes the kind environment (if it exists) and any residual resources. clean-generated: Deletes all generated files. clean: Cleans everything up (running clusters and generated files). If your project uses Shipyard then it has all these targets and supports all the variables these targets support. Any variables supported by these targets can be assigned on the make command line.\nGlobal Variables Many targets support variables that influence how each target behaves.\nHighlighted Variables SETTINGS: Settings file that specifies a topology for deployment. PROVIDER: Cloud provider for the infrastructure (defaults to kind). GOLBALNET: When true, deploys the clusters with overlapping IPs (defaults to false). DEBUG_PRINT: When true, outputs debug information for Shipyard\u0026rsquo;s scripts (defaults to true). Clusters Creates a kind-based multi-cluster environment with just the default Kubernetes deployment:\nmake clusters Highlighted Variables for Clusters Any variable from the global variables list. K8S_VERSION: Determines the Kubernetes version that gets deployed (defaults to 1.24). Deploy Deploys Submariner components in a kind-based cluster environment (if one isn\u0026rsquo;t created yet, this target will first invoke the clusters target to do so):\nmake deploy Highlighted Variables for Deploy Any variable from the global variables list. Any variable from clusters target (only if it wasn\u0026rsquo;t created). CABLE_DRIVER: The cable driver used by Submariner (defaults to libreswan). DEPLOYTOOL: The tool used to deploy Submariner itself (defaults to operator). LIGHTHOUSE: Deploys Lighthouse in addition to the basic Submariner deployment (defaults to false). E2E (End to End) Runs end to end testing on the deployed environment (if one isn\u0026rsquo;t created yet, this target will first invoke the deploy target to do so). The tests are taken from the project, unless it has no specific end to end tests, in which case generic testing using subctl verify is run.\nmake e2e Highlighted Variables for E2E Any variable from the global variables list. Any variable from deploy target (only if it wasn\u0026rsquo;t created). Clean-clusters To clean up all the kind clusters deployed in any of the previous steps, use:\nmake clean-clusters This command will remove the clusters and any resources that might\u0026rsquo;ve been left in docker that are not needed any more (images, volumes, etc).\nClean-generated To clean up all generated files, use:\nmake clean-generated This will remove any file which can be re-generated and doesn’t need to be tracked.\nClean To clean everything up, use:\nmake clean This removes any running clusters and all generated files.\n"
},
{
"uri": "/getting-started/quickstart/openshift/aws/",
"title": "On AWS",
"tags": [],
"description": "",
"content": "This quickstart guide covers the necessary steps to deploy two OpenShift Container Platform (OCP) clusters on AWS with full stack automation, also known as installer-provisioned infrastructure (IPI). Once the OpenShift clusters are deployed, we deploy Submariner with Service Discovery to interconnect the two clusters. Note that this guide focuses on Submariner deployment on clusters with non-overlapping Pod and Service CIDRs. For connecting clusters with overlapping CIDRs, please refer to the Submariner with Globalnet guide.\nPrerequisites Before we begin, the following tools need to be downloaded and added to your $PATH:\n OpenShift installer, pull secret, and command line interface. All can be downloaded from here. AWS CLI which can be downloaded from here. Please ensure that the tools you downloaded above are compatible with your OpenShift Container Platform version. For more information, please refer to the official OpenShift documentation.\n Setup Your AWS Profile Configure the AWS CLI with the settings required to interact with AWS. These include your security credentials, the default AWS Region, and the default output format:\n$ aws configure AWS Access Key ID [None]: .... AWS Secret Access Key [None]: .... Default region name [None]: .... Default output format [None]: text Create and Deploy cluster-a In this step you will deploy cluster-a using the default IP CIDR ranges:\n Pod CIDR Service CIDR 10.128.0.0/14 172.30.0.0/16 openshift-install create install-config --dir cluster-a openshift-install create cluster --dir cluster-a When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.\nCreate and Deploy cluster-b In this step you will deploy cluster-b, modifying the default IP CIDRs to avoid IP address conflicts with cluster-a. You can change the IP addresses block and prefix based on your requirements. For more information on IPv4 CIDR conversion, please check this page.\nIn this example, we will use the following IP ranges:\n Pod CIDR Service CIDR 10.132.0.0/14 172.31.0.0/16 openshift-install create install-config --dir cluster-b Change the Pod network CIDR from 10.128.0.0/14 to 10.132.0.0/14:\nsed -i \u0026#39;s/10.128.0.0/10.132.0.0/g\u0026#39; cluster-b/install-config.yaml Change the Service network CIDR from 172.30.0.0/16 to 172.31.0.0/16:\nsed -i \u0026#39;s/172.30.0.0/172.31.0.0/g\u0026#39; cluster-b/install-config.yaml And finally deploy the cluster:\nopenshift-install create cluster --dir cluster-b When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.\nInstall subctl Download the subctl binary and make it available on your PATH.\ncurl -Ls https://get.submariner.io | bash export PATH=$PATH:~/.local/bin echo export PATH=\\$PATH:~/.local/bin \u0026gt;\u0026gt; ~/.profile If you have Go and the source code, you can build and install subctl instead:\ncd go/src/submariner-io/subctl go build -o $GOBIN/subctl github.com/submariner-io/subctl/cmd (and ensure your go/bin directory is on your PATH).\nPrepare AWS Clusters for Submariner Submariner Gateway nodes need to be able to accept traffic over UDP ports (4500 and 4490 by default). Submariner also uses UDP port 4800 to encapsulate traffic from the worker and master nodes to the Gateway nodes, and TCP port 8080 to retrieve metrics from the Gateway nodes. 
Additionally, the default OpenShift deployment does not allow assigning an elastic public IP to existing worker nodes, which may be necessary on one end of the tunnel connection.\nsubctl cloud prepare is a command designed to update your OpenShift installer provisioned infrastructure for Submariner deployments, handling the requirements specified above.\nThe default EC2 instance type for the Submariner gateway node is c5d.large, optimized for better CPU which is found to be a bottleneck for IPsec and Wireguard drivers. Please ensure that the AWS Region you deploy to supports this instance type. Alternatively, you can choose to deploy using a different instance type.\n Prepare OpenShift-on-AWS cluster-a for Submariner:\nexport KUBECONFIG=cluster-a/auth/kubeconfig subctl cloud prepare aws --ocp-metadata path/to/cluster-a/metadata.json Prepare OpenShift-on-AWS cluster-b for Submariner:\nexport KUBECONFIG=cluster-b/auth/kubeconfig subctl cloud prepare aws --ocp-metadata path/to/cluster-b/metadata.json Note that certain parameters, such as the tunnel UDP port and AWS instance type for the gateway, can be customized. For example:\nsubctl cloud prepare aws --ocp-metadata path/to/metadata.json --natt-port 4501 --gateway-instance m4.xlarge Submariner can be deployed in HA mode by setting the gateways flag:\nsubctl cloud prepare aws --ocp-metadata path/to/metadata.json --gateways 3 Install Submariner with Service Discovery To install Submariner with multi-cluster Service Discovery follow the steps below:\nUse cluster-a as Broker subctl deploy-broker --kubeconfig cluster-a/auth/kubeconfig Join cluster-a and cluster-b to the Broker subctl join --kubeconfig cluster-a/auth/kubeconfig broker-info.subm --clusterid cluster-a subctl join --kubeconfig cluster-b/auth/kubeconfig broker-info.subm --clusterid cluster-b Verify Deployment To manually verify the deployment, follow the steps below using either a headless or ClusterIP nginx service deployed in cluster-b.\nDeploy ClusterIP Service export KUBECONFIG=cluster-b/auth/kubeconfig kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine kubectl -n default expose deployment nginx --port=8080 subctl export service --namespace default nginx Deploy Headless Service export KUBECONFIG=cluster-b/auth/kubeconfig kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine kubectl -n default expose deployment nginx --port=8080 --cluster-ip=None subctl export service --namespace default nginx Verify Run nettest from cluster-a to access the nginx service:\nexport KUBECONFIG=cluster-a/auth/kubeconfig kubectl -n default run tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- /bin/bash curl nginx.default.svc.clusterset.local:8080 To access a Service in a specific cluster, prefix the query with \u0026lt;cluster-id\u0026gt; as follows:\ncurl cluster-b.nginx.default.svc.clusterset.local:8080 Verify StatefulSets A StatefulSet uses a headless Service. 
Create a web.yaml as follows:\napiVersion: v1
kind: Service
metadata:
  name: nginx-ss
  labels:
    app.kubernetes.io/instance: nginx-ss
    app.kubernetes.io/name: nginx-ss
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app.kubernetes.io/instance: nginx-ss
    app.kubernetes.io/name: nginx-ss
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: \u0026#34;nginx-ss\u0026#34;
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/instance: nginx-ss
      app.kubernetes.io/name: nginx-ss
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: nginx-ss
        app.kubernetes.io/name: nginx-ss
    spec:
      containers:
      - name: nginx-ss
        image: nginxinc/nginx-unprivileged:stable-alpine
        ports:
        - containerPort: 80
          name: web
Use this YAML to create a StatefulSet web with nginx-ss as the Headless Service.\nexport KUBECONFIG=cluster-a/auth/kubeconfig
kubectl -n default apply -f web.yaml
subctl export service -n default nginx-ss
curl nginx-ss.default.svc.clusterset.local:8080
To access the Service in a specific cluster, prefix the query with \u0026lt;cluster-id\u0026gt;:\ncurl cluster-a.nginx-ss.default.svc.clusterset.local:8080
To access an individual pod in a specific cluster, prefix the query with \u0026lt;pod-hostname\u0026gt;.\u0026lt;cluster-id\u0026gt;:\ncurl web-0.cluster-a.nginx-ss.default.svc.clusterset.local:8080
Perform automated verification The contexts in both config files are named admin and need to be modified before running the verify command. Here is how this can be done using yq:\nyq e -i \u0026#39;.contexts[0].name = \u0026#34;cluster-a\u0026#34; | .current-context = \u0026#34;cluster-a\u0026#34;\u0026#39; cluster-a/auth/kubeconfig
yq e -i \u0026#39;.contexts[0].context.user = \u0026#34;admin-a\u0026#34; | .users[0].name = \u0026#34;admin-a\u0026#34;\u0026#39; cluster-a/auth/kubeconfig
yq e -i \u0026#39;.contexts[0].name = \u0026#34;cluster-b\u0026#34; | .current-context = \u0026#34;cluster-b\u0026#34;\u0026#39; cluster-b/auth/kubeconfig
yq e -i \u0026#39;.contexts[0].context.user = \u0026#34;admin-b\u0026#34; | .users[0].name = \u0026#34;admin-b\u0026#34;\u0026#39; cluster-b/auth/kubeconfig
(if you’re using yq 4.18.1 or later, you can use yq -i instead of yq e -i).\nMore generally, see the Kubernetes documentation on accessing multiple clusters using configuration files.\nThis will perform automated verifications between the clusters.\nexport KUBECONFIG=cluster-a/auth/kubeconfig:cluster-b/auth/kubeconfig
subctl verify --context cluster-a --tocontext cluster-b --only service-discovery,connectivity --verbose "
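Beyond subctl verify, you can also inspect the state of the tunnels directly; a brief sketch, assuming subctl is still on your PATH:
export KUBECONFIG=cluster-a/auth/kubeconfig
subctl show connections
subctl diagnose all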
},
{
"uri": "/community/contributor-roles/",
"title": "Contributor Roles",
"tags": [],
"description": "",
"content": "This is a stripped-down version of the Kubernetes Community Membership process.\nAlthough we aspire to follow the Kubernetes process, some parts are not currently relevant to our structure or possible with our tooling:\n The SIG and subproject abstraction layers don\u0026rsquo;t apply to Submariner. Submariner is treated as a single project with file-based commit rights, not a \u0026ldquo;project\u0026rdquo; per repository. We hope to eventually move to Kubernetes OWNERS and Prow, but until we do so we can\u0026rsquo;t support advanced role-based automation (reviewers vs approvers; PR workflow commands like /okay-to-test, /lgtm, /approved). Project Owners are given responsibility for some tasks that are handled by dedicated teams in Kubernetes (security responses, Code of Conduct violations, and managing project funds). Submariner aspires to create dedicated teams for these tasks as the community grows. This doc outlines the various responsibilities of contributor roles in Submariner.\n Role Responsibilities Requirements Defined by Member Active contributor in the community Sponsored by 2 committers, multiple contributions to the project Submariner GitHub org member Committer Approve contributions from other members History of review and authorship CODEOWNERS file entry Owner Set direction and priorities for the project Demonstrated responsibility and excellent technical judgement for the project Submariner-owners GitHub team member and *entry in all CODEOWNERS files New Contributors New contributors should be welcomed to the community by existing members, helped with PR workflow, and directed to relevant documentation and communication channels.\nWe require every contributor to certify that they are legally permitted to contribute to our project. A contributor expresses this by consciously signing their commits, and by this act expressing that they comply with the Developer Certificate Of Origin.\nEstablished Community Members Established community members are expected to demonstrate their adherence to the principles in this document, familiarity with project organization, roles, policies, procedures, conventions, etc., and technical and/or writing ability. Role-specific expectations, responsibilities, and requirements are enumerated below.\nMember Members are continuously active contributors in the community. They can have issues and PRs assigned to them and participate through GitHub teams. Members are expected to remain active contributors to the community.\nDefined by: Member of the Submariner GitHub organization.\nMember Requirements Enabled two-factor authentication on their GitHub account Have made multiple contributions to the project or community. Contribution may include, but is not limited to: Authoring or reviewing PRs on GitHub Filing or commenting on issues on GitHub Contributing to community discussions (e.g. meetings, Slack, email discussion forums, Stack Overflow) Subscribed to [email protected] Have read the community and development guides Actively contributing Sponsored by 2 committers. Note the following requirements for sponsors: Sponsors must have close interactions with the prospective member - e.g. code/design/proposal review, coordinating on issues, etc. 
Sponsors must be committers in at least 1 CODEOWNERS file in any repo in the Submariner org Open an issue against the submariner-io/submariner repo Ensure your sponsors are @mentioned on the issue Complete every item on the checklist (preview the current version of the member template) Make sure that the list of contributions included is representative of your work on the project Have your sponsoring committers reply confirmation of sponsorship: +1 Once your sponsors have responded, your request will be reviewed. Any missing information will be requested. Member Responsibilities and Privileges Responsive to issues and PRs assigned to them Responsive to mentions of teams they are members of Active owner of code they have contributed (unless ownership is explicitly transferred) Code is well tested Tests consistently pass Addresses bugs or issues discovered after code is accepted They can be assigned to issues and PRs, and people can ask members for reviews Note: Members who frequently contribute code are expected to proactively perform code reviews and work towards becoming a committer.\nMembers can be removed by stepping down or by two thirds vote of Project Owners.\nCommitters Committers are able to review code for quality and correctness on some part of the project. They are knowledgeable about both the codebase and software engineering principles.\nUntil automation supports approvers vs reviewers: They also review for holistic acceptance of a contribution including: backwards / forwards compatibility, adhering to API and flag conventions, subtle performance and correctness issues, interactions with other parts of the system, etc.\nDefined by: Entry in a CODEOWNERS file in a repo owned by the Submariner project.\nCommitter status is scoped to a part of the codebase.\nCommitter Requirements The following apply to the part of codebase for which one would be a committer in a CODEOWNERS file:\n Member for at least 3 months Primary reviewer for at least 5 PRs to the codebase Reviewed at least 20 substantial PRs to the codebase Knowledgeable about the codebase Sponsored by two committers or project owners With no objections from other committers or project owners May either self-nominate or be nominated by a committer/owner Open an issue against the submariner-io/submariner repo Ensure your sponsors are @mentioned on the issue Complete every item on the checklist (preview the current version of the committer template) Make sure that the list of contributions included is representative of your work on the project Have your sponsoring committers/owners reply confirmation of sponsorship: +1 Once your sponsors have responded, your request will be reviewed. Any missing information will be requested. 
Committer Responsibilities and Privileges The following apply to the part of codebase for which one would be a committer in a CODEOWNERS file:\n Responsible for project quality control via code reviews Focus on code quality and correctness, including testing and factoring Until automation supports approvers vs reviewers: Focus on holistic acceptance of contribution such as dependencies with other features, backwards / forwards compatibility, API and flag definitions, etc Expected to be responsive to review requests as per community expectations Assigned PRs to review related to project of expertise Assigned test bugs related to project of expertise Granted \u0026ldquo;read access\u0026rdquo; to the corresponding repository May get a badge on PR and issue comments Demonstrate sound technical judgement Mentor contributors and reviewers Committers can be removed by stepping down or by two thirds vote of Project Owners.\nProject Owner Project owners are the technical authority for the Submariner project. They MUST have demonstrated both good judgement and responsibility towards the health the project. Project owners MUST set technical direction and make or approve design decisions for the project - either directly or through delegation of these responsibilities.\nDefined by: Member of the submariner-owners GitHub team and * entry in all CODEOWNERS files.\nOwner Requirements Unlike the roles outlined above, the owners of the project are typically limited to a relatively small group of decision makers and updated as fits the needs of the project.\nThe following apply to people who would be an owner:\n Deep understanding of the technical goals and direction of the project Deep understanding of the technical domain of the project Sustained contributions to design and direction by doing all of: Authoring and reviewing proposals Initiating, contributing and resolving discussions (emails, GitHub issues, meetings) Identifying subtle or complex issues in designs and implementation PRs Directly contributed to the project through implementation and / or review Owner Removal and Future Elected Governance Removal of Project Owners is currently frozen except for stepping down or violations of the Code of Conduct. This is a temporary governance step to define a removal process for extreme cases while protecting the project from dominance by a company. Once the Submariner community is diverse enough to replace Project Owners with an elected governance system, the project should do so. If the project hasn\u0026rsquo;t replaced Project Owners with elected governance by June 1st 2023, and if there are committers from at least three different companies, the project defaults to replacing Project Owners with a Technical Steering Committee elected by OpenDaylight\u0026rsquo;s TSC Election System with a single Committer at Large Represented Group (defined below) and a 49% company cap.\nMin Seats: 5 Max Seats: 5 Voters: Submariner Committers Duplicate Voter Strategy: Vote-per-Person Owner Responsibilities and Privileges The following apply to people who would be an owner:\n Make and approve technical design decisions for the project Set technical direction and priorities for the project Define milestones and releases Mentor and guide committers and contributors to the project Ensure continued health of project Adequate test coverage to confidently release Tests are passing reliably (i.e. 
not flaky) and are fixed when they fail Ensure a healthy process for discussion and decision making is in place Work with other project owners to maintain the project\u0026rsquo;s overall health and success holistically Receive security disclosures and ensure an adequate response. Receive reports of Code of Conduct violations and ensure an adequate response. Decide how funds raised by the project are spent. "
},
{
"uri": "/operations/deployment/helm/",
"title": "Helm",
"tags": [],
"description": "",
"content": "Deploying with Helm Installing Helm The latest Submariner charts require Helm 3; once you have that, run\nexport KUBECONFIG=\u0026lt;kubeconfig-of-broker\u0026gt; helm repo add submariner-latest https://submariner-io.github.io/submariner-charts/charts Exporting environment variables needed later export BROKER_NS=submariner-k8s-broker export SUBMARINER_NS=submariner-operator export SUBMARINER_PSK=$(LC_CTYPE=C tr -dc \u0026#39;a-zA-Z0-9\u0026#39; \u0026lt; /dev/urandom | fold -w 64 | head -n 1) Deploying the Broker helm install \u0026#34;${BROKER_NS}\u0026#34; submariner-latest/submariner-k8s-broker \\ --create-namespace \\ --namespace \u0026#34;${BROKER_NS}\u0026#34; Setup more environment variables we will need later for joining clusters.\nexport SUBMARINER_BROKER_CA=$(kubectl -n \u0026#34;${BROKER_NS}\u0026#34; get secrets \\ -o jsonpath=\u0026#34;{.items[?(@.metadata.annotations[\u0026#39;kubernetes\\.io/service-account\\.name\u0026#39;]==\u0026#39;${BROKER_NS}-client\u0026#39;)].data[\u0026#39;ca\\.crt\u0026#39;]}\u0026#34;) export SUBMARINER_BROKER_TOKEN=$(kubectl -n \u0026#34;${BROKER_NS}\u0026#34; get secrets \\ -o jsonpath=\u0026#34;{.items[?(@.metadata.annotations[\u0026#39;kubernetes\\.io/service-account\\.name\u0026#39;]==\u0026#39;${BROKER_NS}-client\u0026#39;)].data.token}\u0026#34; \\ | base64 --decode) export SUBMARINER_BROKER_URL=$(kubectl -n default get endpoints kubernetes \\ -o jsonpath=\u0026#34;{.subsets[0].addresses[0].ip}:{.subsets[0].ports[?(@.name==\u0026#39;https\u0026#39;)].port}\u0026#34;) Joining a cluster This step needs to be repeated for every cluster you want to connect with Submariner.\nexport KUBECONFIG=kubeconfig-of-the-cluster-to-join export CLUSTER_ID=the-id-of-the-cluster export CLUSTER_CIDR=x.x.x.x/x # the cluster\u0026#39;s Pod IP CIDR export SERVICE_CIDR=x.x.x.x/x # the cluster\u0026#39;s Service IP CIDR If your clusters have overlapping IPs (Cluster/Service CIDRs), please set:\nexport GLOBALNET=true export GLOBAL_CIDR=242.x.x.x/x # using an individual non-overlapping # range for each cluster you join. Joining the cluster:\nhelm install submariner-operator submariner-latest/submariner-operator \\ --create-namespace \\ --namespace \u0026#34;${SUBMARINER_NS}\u0026#34; \\ --set ipsec.psk=\u0026#34;${SUBMARINER_PSK}\u0026#34; \\ --set broker.server=\u0026#34;${SUBMARINER_BROKER_URL}\u0026#34; \\ --set broker.token=\u0026#34;${SUBMARINER_BROKER_TOKEN}\u0026#34; \\ --set broker.namespace=\u0026#34;${BROKER_NS}\u0026#34; \\ --set broker.ca=\u0026#34;${SUBMARINER_BROKER_CA}\u0026#34; \\ --set broker.globalnet=\u0026#34;${GLOBALNET}\u0026#34; \\ --set submariner.serviceDiscovery=true \\ --set submariner.cableDriver=libreswan \\ # or wireguard or vxlan --set submariner.clusterId=\u0026#34;${CLUSTER_ID}\u0026#34; \\ --set submariner.clusterCidr=\u0026#34;${CLUSTER_CIDR}\u0026#34; \\ --set submariner.serviceCidr=\u0026#34;${SERVICE_CIDR}\u0026#34; \\ --set submariner.globalCidr=\u0026#34;${GLOBAL_CIDR}\u0026#34; \\ --set submariner.natEnabled=\u0026#34;true\u0026#34; \\ # disable this if no NAT will happen between gateways --set serviceAccounts.globalnet.create=\u0026#34;${GLOBALNET}\u0026#34; \\ --set serviceAccounts.lighthouseAgent.create=true \\ --set serviceAccounts.lighthouseCoreDns.create=true Overriding Submariner Images The examples below demonstrate how to use images from a local registry. 
It\u0026rsquo;s also possible to use an online registry.\nTo override the operator image:\n--set operator.image.repository=\u0026#34;localhost:5000/submariner-operator\u0026#34; \\ --set operator.image.tag=\u0026#34;local\u0026#34; \\ --set operator.image.pullPolicy=\u0026#34;IfNotPresent\u0026#34; To override all Submariner images:\n--set submariner.images.repository=\u0026#34;localhost:5000\u0026#34; \\ --set submariner.image.tag=\u0026#34;local\u0026#34; To override a specific image, set images.\u0026lt;image-name\u0026gt; to the full URL, e.g.:\n--set images.submariner-gateway=\u0026#34;localhost:5000/submariner-gateway:local\u0026#34; OpenShift Requirements If installing on OpenShift, please also add the Submariner service accounts (SAs) to the privileged Security Context Constraint.\noc adm policy add-scc-to-user privileged system:serviceaccount:submariner:submariner-routeagent oc adm policy add-scc-to-user privileged system:serviceaccount:submariner:submariner-engine Perform automated verification Automated verification of the deployment can be performed by using the verification tests embedded in the subctl command line tool via the subctl verify command.\nInstall subctl Download the subctl binary and make it available on your PATH.\ncurl -Ls https://get.submariner.io | bash export PATH=$PATH:~/.local/bin echo export PATH=\\$PATH:~/.local/bin \u0026gt;\u0026gt; ~/.profile If you have Go and the source code, you can build and install subctl instead:\ncd go/src/submariner-io/subctl go build -o $GOBIN/subctl github.com/submariner-io/subctl/cmd (and ensure your go/bin directory is on your PATH).\nRun the verification Ensure your kubeconfigs have different context names for each cluster, e.g. “cluster-a” and “cluster-b”; then run\nKUBECONFIG=cluster-a/auth/kubeconfig:cluster-b/auth/kubeconfig subctl verify --context cluster-a --tocontext cluster-b --verbose "
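In addition to the verification above, a quick sanity check that the Helm release came up can be done as follows (a minimal sketch, assuming the submariner-operator namespace used above):
kubectl -n ${SUBMARINER_NS} get pods
subctl show connections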
},
{
"uri": "/getting-started/quickstart/managed-kubernetes/",
"title": "Managed Kubernetes",
"tags": [],
"description": "",
"content": " Google (GKE) Rancher "
},
{
"uri": "/getting-started/quickstart/",
"title": "Quickstart Guides",
"tags": [],
"description": "",
"content": " Sandbox Environment (kind) Managed Kubernetes Google (GKE) Rancher OpenShift On AWS On AWS with Globalnet On Azure On GCP Hybrid vSphere and AWS External Network (Experimental) "
},
{
"uri": "/getting-started/architecture/gateway-engine/",
"title": "Gateway Engine",
"tags": [],
"description": "",
"content": "The Gateway Engine component is deployed in each participating cluster and is responsible for establishing secure tunnels to other clusters.\nThe Gateway Engine has a pluggable architecture for the cable engine component that maintains the tunnels. The following implementations are available:\n an IPsec implementation using Libreswan. This is currently the default. an implementation for WireGuard (via the wgctrl library). an un-encrypted tunnel implementation using VXLAN. The cable driver can be specified via the --cable-driver flag while joining a cluster using subctl. For more information, please refer to the subctl guide.\nWireGuard needs to be installed on Gateway nodes. See the WireGuard installation instructions.\n VXLAN connections are unencrypted by design. This is typically useful for environments in which all of the participating clusters run on-premises, the underlying inter-network fabric is controlled, and in many cases already encrypted by other means. Other common use case is to leverage the VXLAN cable engine over a virtual network peering on public clouds (e.g., VPC Peering on AWS). In this case, the VXLAN connection will be established on top of a peering link which is provided by the underlying cloud infrastructure and is already secured. In both cases, the expectation is that connected clusters should be directly reachable without NAT.\n Instances of the Gateway Engine run on specifically designated nodes in a cluster of which there may be more than one for fault tolerance. Submariner supports active/passive High Availability for the Gateway Engine component, which means that there is only one active Gateway Engine instance at a time in a cluster. They perform a leader election process to determine the active instance and the others await in standby mode ready to take over should the active instance fail.\nThe Gateway Engine is deployed as a DaemonSet that is configured to only run on nodes labelled with submariner.io/gateway=true.\n The active Gateway Engine communicates with the central Broker to advertise its Endpoint and Cluster resources to the other clusters connected to the Broker, also ensuring that it is the sole Endpoint for its cluster. The Route Agent Pods running in the cluster learn about the local Endpoint and setup the necessary infrastructure to route cross-cluster traffic from all nodes to the active Gateway Engine node. The active Gateway Engine also establishes a watch on the Broker to learn about the active Endpoint and Cluster resources advertised by the other clusters. 
Once two clusters are aware of each other\u0026rsquo;s Endpoints, they can establish a secure tunnel through which traffic can be routed.\nCable Drivers Topology Overview Libreswan The following diagram shows a high level topology for a Submariner deployment created with:\nmake deploy using=lighthouse In this case, Libreswan is configured to create 4 IPsec tunnels to allow for:\n Pod subnet to Pod subnet connectivity Pod subnet to Service subnet connectivity Service subnet to Pod subnet connectivity Service subnet to Service subnet connectivity VXLAN The following diagram shows a high level topology for a Submariner deployment created with:\nmake deploy using=lighthouse, vxlan With the VXLAN cable driver, routes in table 100 are used on the source Gateway to steer the traffic into the vxlan-tunnel interface.\nThe figure shows a simple interaction (a ping from one pod in one cluster to another pod in a second cluster) when Submariner is used.\nGateway Failover If the active Gateway Engine fails, another Gateway Engine on one of the other designated nodes will gain leadership and perform reconciliation to advertise its Endpoint and to ensure that it is the sole Endpoint. The remote clusters will learn of the new Endpoint via the Broker and establish a new tunnel. Similarly, the Route Agent Pods running in the local cluster automatically update the route tables on each node to point to the new active Gateway node in the cluster.\nThe impact on the datapath for various scenarios in a kind setup is captured in the following spreadsheet.\nGateway Health Check The Gateway Engine continuously monitors the health of connected clusters. It periodically pings each cluster and collects statistics including basic connectivity, round trip time (RTT) and average latency. This information is updated in the Gateway resource. Whenever the Gateway Engine detects that a ping to a particular cluster has failed, its connection status is marked with an error state. Service Discovery uses this information to avoid unhealthy clusters during Service discovery.\nThe health checking feature can be enabled/disabled via an option on the subctl join command.\nLoad Balancer mode The load balancer mode is still experimental; it has not yet been tested in all cloud providers or in all failover scenarios.\n The load balancer mode is designed to simplify the deployment of Submariner in cloud environments where worker nodes don\u0026rsquo;t have a dedicated public IP available.\nWhen enabled for a cluster during subctl join, the operator will create a LoadBalancer type Service exposing both the encapsulation dataplane port as well as the NAT-T discovery port. This load balancer targets Pods labeled with gateway.submariner.io/status=active and app=submariner-gateway.\nWhen the LoadBalancer mode is enabled, the preferred-server mode is enabled automatically for the cluster, as the bi-directional IPsec connection mode is incompatible with load balancers, making client/server connectivity necessary.\nIf a failover occurs, the load balancer updates to the newly available and active gateway endpoints.\nPreferred-server mode This mode is specific to the libreswan cable-driver which is based on IPsec. 
Other cable drivers ignore this setting.\nWhen enabled for a cluster during subctl join, the gateway will try to establish connection with other clusters by configuring the IPsec connection in server mode, and waiting for remote connections.\nRemote clusters will identify the preferred-server mode of this cluster, and attempt the connection. This is useful in environments where on-premises clusters don\u0026rsquo;t have access to port mapping.\nWhen both sides of a connection are in preferred-server mode, they will compare the endpoint cable names to decide which one will be server and which one will be client. When cable names are ordered alphabetically, the first one will be the client, the second one will be the server.\n"
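To see which node currently holds the active gateway role, along with the connection health-check statistics mentioned above, you can inspect the Gateway resources directly; a short sketch, assuming the default submariner-operator namespace:
subctl show gateways
kubectl -n submariner-operator get gateways.submariner.io
kubectl -n submariner-operator describe gateways.submariner.io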
},
{
"uri": "/operations/nat-traversal/",
"title": "NAT Traversal",
"tags": [],
"description": "",
"content": "Basic Overview Submariner establishes the dataplane tunnels between clusters over port 4500/UDP by default. This port can be customized per cluster and per gateway and is published as part of the Endpoint objects.\nPublic vs Private IP Endpoint objects publish both a PrivateIP and a PublicIP. The PrivateIP is the IP assigned to an interface on the gateway node where the Endpoint originated. The PublicIP is the source IP for the packets sent from the gateway to the Internet which is discovered by default via ipify.org, or my-ip.io and seeip.org fallbacks.\nAlternative methods can be configured on each gateway Node via the gateway.submariner.io/public-ip annotation:\nkubectl annotate node $GW gateway.submariner.io/public-ip=\u0026lt;resolver\u0026gt;,[resolver...] Resolvers are evaluated one by one, using the result of the first one to succeed. \u0026lt;resolver\u0026gt; should be written in the following form: method:parameter, and the following methods are implemented:\n Method Parameter Notes api HTTPS endpoint to contact, for example api.ipify.org The result body is inspected looking for the IP address lb LoadBalancer Service name in the submariner-operator namespace A network load balancer should be used ipv4 Fixed IPv4 address used as public IP dns FQDN DNS entry to be resolved The A entry of the FQDN will be resolved and used For example, when using a fixed public IPv4 address for a gateway, this can be used:\nkubectl annotate node $GW gateway.submariner.io/public-ip=ipv4:1.2.3.4 While joining the cluster, if --air-gapped flag is specified in the subctl join ... command, Submariner will avoid making any calls to external servers and an empty PublicIP is configured in the local Endpoint. However, if required, an explicit public IP can still be configured by adding the above annotation even in such environments.\n Reachability For two gateway Endpoints to connect to one another, at least one of them should be reachable either on its public or private IP address and the firewall configuration should allow the tunnel encapsulation port. If one of the clusters is designated as a preferred server, then only its Endpoint needs to be reachable to the other endpoints. This can be accomplished by joining the cluster in preferred server mode.\nsubctl join --kubeconfig A --preferred-server ... broker_info.subm Each gateway implements a UDP NAT-T discovery protocol where each gateway queries the gateways of the remote clusters on both the public and private IPs in order to determine the most suitable IP and its NAT characteristics to use for the tunnel connections, with a preference for the private IP.\nThis protocol is enabled by default on port 4490/UDP and can assign non default ports by annotating the gateway nodes:\nkubectl annotate node $GW gateway.submariner.io/natt-discovery-port=4490 If the NATT discovery protocol fails to determine reachability between two endpoints then it falls back to the NAT setting specified on join (the natEnabled field of the Submariner object or the --natt parameter of subctl), that is, if NAT is enabled, the public IP is used otherwise the private IP is used.\nIP Selection Algorithm The following flow chart describes the IP selection algorithm:\nPort Selection If the gateways of a cluster don\u0026rsquo;t have public floating or elastic IPs assigned to them then it\u0026rsquo;s recommended to use a separate UDP port for every node marked as a gateway. 
This will allow eventual port mapping on a router when communicating to clusters on remote sites with no direct routing.\nIf a cluster is behind a router which will NAT the traffic, it\u0026rsquo;s recommended to map the open ports into the gateway node private IPs, see the port mapping section. It could temporarily work without mapping, because most routers when performing NAT to the external network will not randomize or modify the source port of packets, but this will happen as soon as two connections collide over the same source port.\n UDP Dataplane Protocol (IPsec, WireGuard or VXLAN) By default, Submariner uses the 4500/UDP port for the dataplane. This can be changed cluster-wide via the --nattport flag on join although it\u0026rsquo;s possible to specify the port to be used per gateway node:\nkubectl annotate node $GW gateway.submariner.io/udp-port=4501 This allows individual gateways on the cluster to have different port numbers, hence allowing individual port-mapping if a public IP is shared.\nIPsec ESP or UDP Encapsulation IPsec in the Libreswan cable driver will be configured for the more performant ESP protocol whenever possible, which is normally when NAT is not detected and connectivity over the private IP is possible.\nIf your network and routers filter the IP\u0026gt;ESP packets, encapsulation can be forced by using the --force-udp-encaps during subctl join.\nPractical Examples All Private and Routed This is the simplest practical case where all gateways can contact all other gateways via routing on their private IPs and no NAT is needed.\nThe NATT discovery protocol will determine that the private IPs are preferred, and will try to avoid using NAT.\nAll Public Cloud, with Some Private Reachability In this case, the gateways for clusters A and B have direct reachability over their private IPs (10.0.0.1 and 10.1.0.1) possibly with large MTU capabilities. The same is true for clusters C and D (192.168.0.4 and 192.168.128.4).\nBetween any other pair of clusters reachability is only possible over their public IPs and the IP packets will undergo DNAT + SNAT translation at the border via the elastic or floating IP and also, while on transit via the public network, the MTU will be limited to 1500 bytes or less.\nEndpoints Endpoint Private IP Public IP A 10.0.0.1 1.1.1.1 B 10.1.0.1 1.1.1.2 C 192.168.0.4 2.1.1.1 D 192.168.128.4 2.1.1.2 Connections Left Cluster Left IP Left Port Right Cluster Right IP Right Port NAT A 10.0.0.1 4500 B 10.1.0.1 4500 no C 192.168.0.4 4500 D 192.168.128.4 4500 no A 1.1.1.1 4500 C 2.1.1.1 4500 yes A 1.1.1.1 4500 D 2.1.1.2 4500 yes B 1.1.1.2 4500 C 2.1.1.1 4500 yes B 1.1.1.2 4500 D 2.1.1.2 4500 yes The default configuration for the NAT-T discovery protocol will detect the IPs to use, make sure that the gateways have port 4490/udp open, as well as the encapsulation port 4500/udp.\nPublic Cloud vs On-Premises In this case, A \u0026amp; B cluster gateways have direct reachability over their private IPs (10.0.0.1 and 10.1.0.1) possibly with large MTU capabilities. 
The same is true for the C \u0026amp; D cluster gateways (192.168.0.4 and 192.168.128.4).\nBetween all other cluster pairs reachability is only possible over their public IPs, the IP packets from A \u0026amp; B will undergo DNAT + SNAT translation at the border via the elastic or floating IP, the packets from C \u0026amp; D will undergo SNAT translation to the public IP of the router 2.1.1.1 and also, while on transit via the public network, the MTU will be limited to 1500 bytes or less.\nEndpoints for Public Cloud to On-Premises Endpoint Private IP Public IP A 10.0.0.1 1.1.1.1 B 10.1.0.1 1.1.1.2 C 192.168.0.4 2.1.1.1 D 192.168.128.4 2.1.1.1 Connections for Public Cloud to On-Premises Left Cluster Left IP Left Port Right Cluster Right IP Right Port NAT A 10.0.0.1 4500 B 10.1.0.1 4500 no C 192.168.0.4 4501 D 192.168.128.4 4502 no A 1.1.1.1 4500 C 2.1.1.1 4501 yes A 1.1.1.1 4500 D 2.1.1.1 4502 yes B 1.1.1.2 4500 C 2.1.1.1 4501 yes B 1.1.1.2 4500 D 2.1.1.1 4502 yes The recommended configuration for the gateways behind the on-premises router which has a single external IP with no IP routing or mapping to the private network is to have a dedicated and distinct port number for the NATT discovery protocol (as well as the encapsulation)\nkubectl annotate node $GWC --kubeconfig C gateway.submariner.io/natt-discovery-port=4491 kubectl annotate node $GWC --kubeconfig C gateway.submariner.io/udp-port=4501 kubectl annotate node $GWD --kubeconfig D gateway.submariner.io/natt-discovery-port=4492 kubectl annotate node $GWD --kubeconfig D gateway.submariner.io/udp-port=4502 # restart the gateways to pick up the new setting for cluster in C D; do kubectl delete pod -n submariner-operator -l app=submariner-gateway --kubeconfig $cluster done If HA is configured on the on-premise clusters, each gateway behind the 2.1.1.1 router should have a dedicated UDP port. For example if we had two clusters and two gateways on each cluster, four ports would be necessary.\n Router Port Mapping Under this configuration it\u0026rsquo;s important to map the UDP ports on the 2.1.1.1 router to the private IPs of the gateways.\n External IP Port Internal IP Port Protocol 2.1.1.1 4501 192.168.0.4 4501 UDP 2.1.1.1 4491 192.168.0.4 4491 UDP 2.1.1.1 4502 192.168.128.4 4502 UDP 2.1.1.1 4492 192.168.128.4 4492 UDP Without port mapping it\u0026rsquo;s entirely possible that the connectivity will be established without issues. This can happen because the router\u0026rsquo;s NAT will not generally modify the source port of the outgoing UDP packets, and future packets arriving on this port will be redirected to the internal IP which initiated connectivity. However if the 2.1.1.1 router randomizes the source port on NAT or if other applications on the internal network were already using the 4501-4502 or 4491-4492 ports, the remote ends would not be able to contact gateway C or D over the expected ports.\n Alternative to Port Mapping If port mapping is not possible, we can enable a server/client model for connections where we designate the clusters with a dedicated public IP or the clusters with the ability to get mapped ports as preferred servers. In this way, only the non-preferred server clusters will initiate connections to the preferred server clusters.\nFor example, given clusters A, B, C, and D, we designate A and B as preferred servers:\nsubctl join --kubeconfig A --preferred-server .... broker_info.subm subctl join --kubeconfig B --preferred-server .... 
broker_info.subm This means that the gateways for clusters A and B will negotiate which one will be the server based on the Endpoint names. Clusters C and D will connect to clusters A and B as clients. Clusters C and D will connect normally.\nMultiple on-premise sites In this case, A \u0026amp; B cluster gateways have direct reachability over their private IPs (10.0.0.1 and 10.1.0.1) possibly with large MTU capabilities. The same is true for the C \u0026amp; D cluster gateways (192.168.0.4 and 192.168.128.4).\nBetween all other cluster pairs reachability is only possible over their public IPs, the IP packets from A,B,C \u0026amp; D will undergo SNAT translation at the border with the public network also, while on transit via the public network the MTU will be limited to 1500 bytes or less.\nEndpoints for Multiple On-Premises Endpoint Private IP Public IP A 10.0.0.1 1.1.1.1 B 10.1.0.1 1.1.1.1 C 192.168.0.4 2.1.1.1 D 192.168.128.4 2.1.1.1 Connections for Multiple On-Premises Left Cluster Left IP Left Port Right Cluster Right IP Right Port NAT A 10.0.0.1 4501 B 10.1.0.1 4502 no C 192.168.0.4 4501 D 192.168.128.4 4502 no A 1.1.1.1 4501 C 2.1.1.1 4501 yes A 1.1.1.1 4501 D 2.1.1.1 4502 yes B 1.1.1.1 4502 C 2.1.1.1 4501 yes B 1.1.1.1 4502 D 2.1.1.1 4502 yes Every gateway must have its own port number for NATT discovery, as well as for encapsulation, and the ports on the NAT gateway should be mapped to the internal IPs and ports of the gateways.\nkubectl annotate node $GWA --kubeconfig A gateway.submariner.io/natt-discovery-port=4491 kubectl annotate node $GWA --kubeconfig A gateway.submariner.io/udp-port=4501 kubectl annotate node $GWB --kubeconfig B gateway.submariner.io/natt-discovery-port=4492 kubectl annotate node $GWB --kubeconfig B gateway.submariner.io/udp-port=4502 kubectl annotate node $GWC --kubeconfig C gateway.submariner.io/natt-discovery-port=4491 kubectl annotate node $GWC --kubeconfig C gateway.submariner.io/udp-port=4501 kubectl annotate node $GWD --kubeconfig D gateway.submariner.io/natt-discovery-port=4492 kubectl annotate node $GWD --kubeconfig D gateway.submariner.io/udp-port=4502 # restart the gateways to pick up the new setting for cluster in A B C D; do kubectl delete pod -n submariner-operator -l app=submariner-gateway --kubeconfig $cluster done If HA is configured on the on-premises clusters, each gateway behind the routers should have a dedicated UDP port. 
For example, if we had two clusters and two gateways on each network, four ports would be necessary.\n Router Port Mapping for Multiple On-Premises Under this configuration it\u0026rsquo;s important to map the UDP ports on the 2.1.1.1 router to the private IPs of the gateways.\nOn the 2.1.1.1 router
External IP | Port | Internal IP | Port | Protocol
2.1.1.1 | 4501 | 192.168.0.4 | 4501 | UDP
2.1.1.1 | 4491 | 192.168.0.4 | 4491 | UDP
2.1.1.1 | 4502 | 192.168.128.4 | 4502 | UDP
2.1.1.1 | 4492 | 192.168.128.4 | 4492 | UDP
On the 1.1.1.1 router
External IP | Port | Internal IP | Port | Protocol
1.1.1.1 | 4501 | 10.0.0.1 | 4501 | UDP
1.1.1.1 | 4491 | 10.0.0.1 | 4491 | UDP
1.1.1.1 | 4502 | 10.1.0.1 | 4502 | UDP
1.1.1.1 | 4492 | 10.1.0.1 | 4492 | UDP
Without port mapping it\u0026rsquo;s entirely possible that connectivity will be established without issues. This can happen because the router\u0026rsquo;s NAT will not generally modify the source port of the outgoing UDP packets, and future packets arriving on this port will be redirected to the internal IP which initiated connectivity. However, if the 2.1.1.1 router randomizes the source port on NAT, or if other applications on the internal network were already using the 4501-4502 or 4491-4492 ports, the remote ends would not be able to contact gateway C or D over the expected ports.\n Double NAT Traversal (scenario 1) In this case, clusters C \u0026amp; D are neither reachable on their private IPs (192.168.0.4 and 192.168.0.4) nor on the public IP. However, they are reachable over the private floating IPs (10.2.0.1 and 10.2.0.2). Submariner cannot detect these private floating IPs. To get the connectivity working, you can annotate the Gateway node with the private floating IP as shown below.\nkubectl annotate node $GWC gateway.submariner.io/public-ip=ipv4:10.2.0.1
kubectl annotate node $GWD gateway.submariner.io/public-ip=ipv4:10.2.0.2
# restart the gateways to pick up the new setting
for cluster in C D; do
  kubectl delete pod -n submariner-operator -l app=submariner-gateway --kubeconfig $cluster
done
Double NAT Traversal (scenario 2) In this case, A \u0026amp; B cluster gateways have direct reachability over their private IPs (10.0.0.1 and 10.1.0.1) possibly with large MTU capabilities, while between clusters C and D (192.168.0.4 and 192.168.0.4 too), reachability over the private IPs is not possible but it would be possible over the private floating IPs 10.2.0.1 and 10.2.0.2. However, Submariner is unable to detect such floating IPs.\nEndpoints for Double NAT
Endpoint | Private IP | Public IP
A | 10.0.0.1 | 1.1.1.1
B | 10.1.0.1 | 1.1.1.1
C | 192.168.0.4 | 2.1.1.1
D | 192.168.0.4 | 2.1.1.1
A problem will exist between C \u0026amp; D because they can\u0026rsquo;t reach each other either on 2.1.1.1 or on their private IPs, since the private CIDRs overlap.\nThis is a complicated topology that is still not supported in Submariner. Possible solutions to this could be:\n Modifying the CIDRs of the virtual networks for clusters C \u0026amp; D, and then peering the virtual routers of those virtual networks to perform routing between C \u0026amp; D. Then C \u0026amp; D would be able to connect over the private IPs to each other.\n Supporting manual addition of multiple IPs per gateway, so each Endpoint would simply expose a list of IPs with preference instead of just a Public/Private IP.\n "
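When debugging any of the scenarios above, it helps to check which IPs and ports each gateway actually advertised; a short sketch, assuming the default submariner-operator namespace:
subctl show endpoints
kubectl -n submariner-operator get endpoints.submariner.io -o yaml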
},
{
"uri": "/getting-started/quickstart/openshift/globalnet/",
"title": "On AWS with Globalnet",
"tags": [],
"description": "",
"content": "This quickstart guide covers the necessary steps to deploy two OpenShift Container Platform (OCP) clusters on AWS with full stack automation, also known as installer-provisioned infrastructure (IPI). Once the OpenShift clusters are deployed, we deploy Submariner to interconnect the two clusters. Since the two clusters share the same Cluster and Service CIDR ranges, Globalnet will be enabled.\nPrerequisites Before we begin, the following tools need to be downloaded and added to your $PATH:\n OpenShift installer, pull secret, and command line interface. All can be downloaded from here. AWS CLI which can be downloaded from here. Please ensure that the tools you downloaded above are compatible with your OpenShift Container Platform version. For more information, please refer to the official OpenShift documentation.\n Setup Your AWS Profile Configure the AWS CLI with the settings required to interact with AWS. These include your security credentials, the default AWS Region, and the default output format:\n$ aws configure AWS Access Key ID [None]: .... AWS Secret Access Key [None]: .... Default region name [None]: .... Default output format [None]: text Create and Deploy cluster-a In this step you will deploy cluster-a using the default IP CIDR ranges:\n Pod CIDR Service CIDR 10.128.0.0/14 172.30.0.0/16 openshift-install create install-config --dir cluster-a openshift-install create cluster --dir cluster-a When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.\nCreate and Deploy cluster-b In this step you will deploy cluster-b using the same default IP CIDR ranges:\n Pod CIDR Service CIDR 10.128.0.0/14 172.30.0.0/16 openshift-install create install-config --dir cluster-b openshift-install create cluster --dir cluster-b When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.\nInstall subctl Download the subctl binary and make it available on your PATH.\ncurl -Ls https://get.submariner.io | bash export PATH=$PATH:~/.local/bin echo export PATH=\\$PATH:~/.local/bin \u0026gt;\u0026gt; ~/.profile If you have Go and the source code, you can build and install subctl instead:\ncd go/src/submariner-io/subctl go build -o $GOBIN/subctl github.com/submariner-io/subctl/cmd (and ensure your go/bin directory is on your PATH).\nPrepare AWS Clusters for Submariner Submariner Gateway nodes need to be able to accept traffic over UDP ports (4500 and 4490 by default). Submariner also uses UDP port 4800 to encapsulate traffic from the worker and master nodes to the Gateway nodes, and TCP port 8080 to retrieve metrics from the Gateway nodes. Additionally, the default OpenShift deployment does not allow assigning an elastic public IP to existing worker nodes, which may be necessary on one end of the tunnel connection.\nsubctl cloud prepare is a command designed to update your OpenShift installer provisioned infrastructure for Submariner deployments, handling the requirements specified above.\nThe default EC2 instance type for the Submariner gateway node is c5d.large, optimized for better CPU which is found to be a bottleneck for IPsec and Wireguard drivers. Please ensure that the AWS Region you deploy to supports this instance type. 
Alternatively, you can choose to deploy using a different instance type.\n Prepare OpenShift-on-AWS cluster-a for Submariner:\nexport KUBECONFIG=cluster-a/auth/kubeconfig subctl cloud prepare aws --ocp-metadata path/to/cluster-a/metadata.json Prepare OpenShift-on-AWS cluster-b for Submariner:\nexport KUBECONFIG=cluster-b/auth/kubeconfig subctl cloud prepare aws --ocp-metadata path/to/cluster-b/metadata.json Note that certain parameters, such as the tunnel UDP port and AWS instance type for the gateway, can be customized. For example:\nsubctl cloud prepare aws --ocp-metadata path/to/metadata.json --natt-port 4501 --gateway-instance m4.xlarge Submariner can be deployed in HA mode by setting the gateways flag:\nsubctl cloud prepare aws --ocp-metadata path/to/metadata.json --gateways 3 Install Submariner with Service Discovery and Globalnet To install Submariner with multi-cluster service discovery and support for overlapping CIDRs follow the steps below.\nUse cluster-a as Broker with service discovery and Globalnet enabled subctl deploy-broker --kubeconfig cluster-a/auth/kubeconfig --globalnet Join cluster-a and cluster-b to the Broker subctl join --kubeconfig cluster-a/auth/kubeconfig broker-info.subm --clusterid cluster-a subctl join --kubeconfig cluster-b/auth/kubeconfig broker-info.subm --clusterid cluster-b Verify Deployment To manually verify the deployment, follow the steps below using either a headless or ClusterIP nginx service deployed in cluster-b.\nDeploy ClusterIP Service export KUBECONFIG=cluster-b/auth/kubeconfig kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine kubectl -n default expose deployment nginx --port=8080 subctl export service --namespace default nginx Deploy Headless Service export KUBECONFIG=cluster-b/auth/kubeconfig kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine kubectl -n default expose deployment nginx --port=8080 --cluster-ip=None subctl export service --namespace default nginx Verify Run nettest from cluster-a to access the nginx service:\nexport KUBECONFIG=cluster-a/auth/kubeconfig kubectl -n default run tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- /bin/bash curl nginx.default.svc.clusterset.local:8080 To access a Service in a specific cluster, prefix the query with \u0026lt;cluster-id\u0026gt; as follows:\ncurl cluster-b.nginx.default.svc.clusterset.local:8080 Verify StatefulSets A StatefulSet uses a headless Service. 
Create a web.yaml as follows:\napiVersion: v1 kind: Service metadata: name: nginx-ss labels: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss spec: ports: - port: 80 name: web clusterIP: None selector: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss --- apiVersion: apps/v1 kind: StatefulSet metadata: name: web spec: serviceName: \u0026#34;nginx-ss\u0026#34; replicas: 2 selector: matchLabels: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss template: metadata: labels: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss spec: containers: - name: nginx-ss image: nginxinc/nginx-unprivileged:stable-alpine ports: - containerPort: 80 name: web Use this yaml to create a StatefulSet web with nginx-ss as the Headless Service.\nexport KUBECONFIG=cluster-a/auth/kubeconfig kubectl -n default apply -f web.yaml subctl export service -n default nginx-ss curl nginx-ss.default.svc.clusterset.local:8080 To access the Service in a specific cluster, prefix the query with \u0026lt;cluster-id\u0026gt;:\ncurl cluster-a.nginx-ss.default.svc.clusterset.local:8080 To access an individual pod in a specific cluster, prefix the query with \u0026lt;pod-hostname\u0026gt;.\u0026lt;cluster-id\u0026gt;:\ncurl web-0.cluster-a.nginx-ss.default.svc.clusterset.local:8080 Perform automated verification The contexts on both config files are named admin and need to be modified before running the verify command. Here is how this can be done using yq:\nyq e -i \u0026#39;.contexts[0].name = \u0026#34;cluster-a\u0026#34; | .current-context = \u0026#34;cluster-a\u0026#34;\u0026#39; cluster-a/auth/kubeconfig yq e -i \u0026#39;.contexts[0].context.user = \u0026#34;admin-a\u0026#34; | .users[0].name = \u0026#34;admin-a\u0026#34;\u0026#39; cluster-a/auth/kubeconfig yq e -i \u0026#39;.contexts[0].name = \u0026#34;cluster-b\u0026#34; | .current-context = \u0026#34;cluster-a\u0026#34;\u0026#39; cluster-b/auth/kubeconfig yq e -i \u0026#39;.contexts[0].context.user = \u0026#34;admin-b\u0026#34; | .users[0].name = \u0026#34;admin-b\u0026#34;\u0026#39; cluster-b/auth/kubeconfig (if you’re using yq 4.18.1 or later, you can use yq -i instead of yq e -i).\nMore generally, see the Kubernetes documentation on accessing multiple clusters using configuration files.\nThis will perform automated verifications between the clusters.\nexport KUBECONFIG=cluster-a/auth/kubeconfig:cluster-b/auth/kubeconfig subctl verify --context cluster-a --tocontext cluster-b --only service-discovery,connectivity --verbose "
},
{
"uri": "/development/contribution-guide/",
"title": "Contributing to the Project",
"tags": [],
"description": "",
"content": "This guide outlines the process which developers need to follow when contributing to the Submariner project. Developers are expected to read and follow the guidelines outlined in this guide and in the other contribution guides, in order for their contributions to the project to be addressed in a timely manner.\nProject Resources Submariner uses GitHub Projects to manage releases. Read Best practices for projects to learn more about how to work with projects.\nBacklog Board The backlog board hosts issues, bugs and features which are not a part of a particular release board. While each issue is opened in its corresponding repository, this board gives an aggregated view of all open issues across all repositories. The board has total of three columns. Any new issues or epics should typically be assigned to the Backlog column. Epics which are candidates to be included in the next release are moved to the Next Version Candidate column. During planning, all epics from the Next Version Candidate column are reviewed and triaged. Epics which are selected to be in the next release are moved to the current release board. All unassigned issues are labeled with priority based on the priority discussed in the team meeting. The Close? column holds any issues/epics that can probably be closed for various reasons.\nCurrent Release Board Current release work is tracked in the latest board. The board has 5 columns. The first one, Schedule and Epics, hosts all properly triaged epics targeted for the current release and the schedule for the current release. Triaged issues are under the To do column. When an issue is being worked on, it is moved to In Progress column. When a PR for the issue is pushed, the issue is moved to In Review column. The PR has the corresponding issue linked from the GitHub UI and is not tracked on the board. Once the PR is merged, the issue is moved to Done column.\nEnhancements Repository The Enhancements repository is used for tracking epics as issues, and their corresponding enhancement proposals (the epics design) as pull requests to the enhancements repository. Any medium or small enhancement should be opened as an issue in the relevant repository, or as a general Enhancement Request issue.\nReleases Repository Submariner\u0026rsquo;s release is automated to a great extent. The release process documentation explains the details of a release. The release automation is maintained in the releases repository. It also hosts subctl binaries for all released versions. Code is frozen during releases: no PRs unrelated to the release can be merged to the branch being released.\nBugs, Tasks and Epics In order to have more structure and clarity, we expand upon the standard GitHub issue and define 3 types of issues: Bugs, Tasks and Epics.\nBugs A bug is an issue which captures an error or fault in the project. If a bug meets the criteria of a blocker, it is considered a blocker bug.\nBlocker Criteria If an issue prevents a feature (either new or existing) from operating correctly and there\u0026rsquo;s no sufficient workaround, it may be deemed as a blocker for a release such that the release cannot proceed until it is addressed.\nTasks A task defines a specific unit of work that can be stand-alone or part of an epic. Work on a bug is not a task. A task should be relatively small and fit within the scope of a single sprint otherwise it should be broken down into smaller tasks or perhaps be defined as an epic. 
Tasks that relate to ongoing maintenance (version bumps, image trimmings, CI and automation) will usually be small, unplanned tasks that typically occur throughout each release cycle. The associated GitHub issue / PR should be labeled as maintenance.\nEpics An epic is a collection of tasks which are required to accomplish a feature.\nEpic Guidelines Epics should be issues in the enhancements repository and created using the epic template. Only include work that is a part of the Submariner project. The design should be completed before starting working on an epic. An epic should not be added to a release after the planning is complete. Provide clear and agreed-upon acceptance criteria. An epic should be split into smaller tasks (implementation, testing, documentation etc) using the design, acceptance criteria and epic template checklist as guidelines: Open a GitHub issue for each task in the appropriate repository. Each task should be listed under the Work Items section in the epic template. Tasks should be small to medium in size and fit within the scope of a single sprint. Release Cycles The Submariner project follows time based release cycles where each cycle is 16 weeks long. While blocking bugs may delay the general availability release date, new features will not.\nFeatures that were partially implemented in a given release will be considered \u0026ldquo;experimental\u0026rdquo; and won\u0026rsquo;t have any support commitment.\nEach cycle will result in either a minor version bump or a major version bump in case backwards compatibility can\u0026rsquo;t be maintained.\nSprints Sprints are 3 week periods which encapsulate work on the release. Most sprints focus on active development of the current version, while the last one focuses on stabilization. Specific sprints and their contents are detailed in the following sections.\nMost sprints will end with a milestone pre-release as detailed in the following sections. This allows the community and project members to verify the project is stable and test any fixes or features that were added during the sprint. A formal test day may be held to facilitate testing of the pre-release.\nEach sprint ends on a boundary day which also marks the beginning of the next sprint. The boundary days occur on a Monday.\nOn the sprint boundary day we will:\n Perform a milestone pre-release (when applicable). Have release related meetings, instead of any usually recurring meetings: Grooming (30 minutes): Making sure epics are on track. Reviewing the Definition of Done for each epic. Moving epics back to the Backlog, in case they\u0026rsquo;re de-prioritized. Retrospective (30 minutes): Looking back at the task sizes and assessing if they were correct. General process improvement. Demos (30 minutes): Any enhancements (or parts of) that have been delivered in the sprint. Other interesting changes (e.g. refactors) that aren\u0026rsquo;t part of any epic. In case there\u0026rsquo;s nothing to showcase, this meeting will be skipped. Release Timeline Each release follows a fixed timeline, with 4 development sprints and one final sprint for stabilization. The version will be released one week after the last sprint, and the planning work for the next release will begin.\nThe following sections explain the activities of each sprint.\nPlanning The week before a new release cycle starts is dedicated to planning the release. Planning covers the epics, tasks and bugs which are targeted for the next release version. 
Planning meetings will be held, focusing on the Backlog board.\nInclusion Criteria In order for a task or an epic to be eligible for the next version, it needs to fulfill these requirements:\n Be part of the Backlog board. Located in the next-version-candidate column. Have a description detailing what the issue is and optionally how it\u0026rsquo;s going to be solved. Have an appropriate sizing label, according to the amount of work expected for a single person to completely deliver the task: Small: Work is contained in one sprint and is expected to take less than half the sprint. Medium: Work is contained in one sprint and is expected to take most of the sprint. Large: Work is contained within a release (two to three sprints). Extra-Large: Work can\u0026rsquo;t be contained within a release and would span multiple releases. Any Large or Extra-Large task must be converted to an epic. In case of an epic, it should: Have a corresponding issue in the enhancements project. Adhere to the epic template. Have a high-level breakdown of the expected work, corresponding to the \u0026ldquo;Definition of Done\u0026rdquo;. Planning Meetings The project team will hold planning meetings, led by the project\u0026rsquo;s \u0026ldquo;scrum lead\u0026rdquo;. During these meetings, the project team will:\n Prioritize and assign epics for the next version. Only epics adhering to the described requirements will be considered. Transfer assigned epics to the release board according to the capacity of the team to deliver them. Re-evaluate the priorities of any Small and Medium tasks in the backlog. Optionally assign important bugs and tasks and move them to the release board. By the end of the planning week, the project team will have a backlog of epics and tasks and can commence working on the design phase. All epics for the next version will be on the release board, along with Small and Medium tasks with an owner assigned. Unassigned tasks will be left on the backlog board and worked on based on their priority.\nFeature Design Project members are expected to work on the design for any epic features assigned to them. Project members will update their respective epics with any work identified during the design phase. Project members are encouraged to perform proof-of-concept investigations in order to validate the design and clarify specific work items.\nIn case additional work items are identified during the design, they should be opened as tasks and tracked under the respective epic. Such tasks are expected to follow the sizing guidelines from the Planning stage. Specifically, tasks that are themselves epics due to their size should be identified and treated as such.\nDesign proposals for epics should be submitted as pull requests to the enhancements repository, detailing the proposed design, any alternatives, and any changes necessary to the Submariner projects and APIs.\nThe pull requests to the enhancements repository will be reviewed, discussing any necessary changes or reservations. Any pull request will need approval from at least 50% of the code owners of the enhancements repository. The code owners list is an aggregate list of the code owners of all Submariner repositories. As soon as the pull request is reviewed and merged, work on the epic can begin.\nProject members are expected to review proposals from other members in addition to drafting their own proposals. 
If a project member has finished work on their proposal, they\u0026rsquo;re encouraged to help with the other ongoing proposals.\nProject members are encouraged to host a design review for their enhancement proposal design.\nDevelopment Milestones The milestone sprints are focused on development work for various tasks and bugs. Project members will work on any planned tasks and bugs, and will also work on unplanned bugs should they arise. Any unplanned work should follow the defined guidelines.\nEach sprint will end with a release to allow the community to test new features and fixed bugs. In total, three milestone sprints are planned:\n Three sprints ending with the release of each milestone m1, m2, and m3. The last sprint ending with the release of the release candidate rc0. As detailed in the sprints section, each milestone release will be followed by a test day and the sprint meetings.\nRelease Candidates At the end of 12 weeks, the project is ready to be released and the pre-release rc0 is created. At this point, as detailed in the release process documentation, stable branches are created and the project goes into feature freeze for the release branches.\nTwo test days will take place after the release is created, as the project members make sure the release is ready for general availability. Any bugs found during the test days will need to be labeled with the appropriate testday label. The project members will triage the test day bugs and identify any high priority ones that should be addressed before general availability.\nIf any high priority bugs were identified after rc0, a new release rc1 will be planned to allow for fixing them. The rc1 release will be planned at the team\u0026rsquo;s discretion and has no expected date. If no rc1 release is planned, the team will proceed with the general availability release.\nStarting from rc1, the stable branches enter a code freeze mode - only blocker bugs will be eligible for merging. If a bug is fixed and merged during the code freeze, a new release candidate needs to be prepared and tested. Releasing rc2 and beyond will delay the general availability release.\nGeneral Availability Once a release candidate is deemed stable and has no blocker bugs it will be released for general availability. Prior to releasing, the release manager will verify that there were no changes on the stable branches since the last release candidate. This ensures that no bugs could have been introduced to possibly affect the stability of the released version.\nThe new version will be announced per the announcement guidelines in the release process documentation, and release notes for the new release will need to be published.\nThe current release board will be closed, and all remaining items will be moved back to the backlog board. The items can then be considered for the next version, based on the planning guidelines.\nDuring the week when the general availability release is performed, the next version will be planned. Additionally, a retrospective meeting for the last release cycle will be held. There will be no dedicated test day, as the release candidate has been tested and no changes have occurred since.\nRelease Notes Release notes should be published for each GA version. 
All new features and bug fixes of the release should be labeled with release-note-needed and have a corresponding release note PR in the website repository.\nPending release notes should be prepared as pull requests adding to the pending release notes on the appropriate branch in the website repository, i.e. release-notes-0.16 etc. Release notes should follow the practices described in writing good release notes. Each release note pull request, if it\u0026rsquo;s written alongside the PR it describes, should depend on the main PR (so that it can\u0026rsquo;t be merged before the PR it documents). A PR with a release note PR should be labeled with the release-note-handled label.\nOnce a week, as part of the tasks and bugs triage meeting, the team will make sure that new bugs / tasks / PRs are properly labeled and have corresponding release notes. When the version is ready for GA, the pending release note branch can then be merged to the main branch of the website repository.\nSearching the Submariner GitHub Org with label:release-note-needed -label:release-note-handled can help identify PRs that still need release notes.\nUnplanned Work During a release, new work that was unknown during the planning phase will emerge. This work is typically one of three types:\n Bugs. Ongoing maintenance tasks. New epics or independent tasks. The team triages tasks and bugs as part of a weekly meeting. The bugs can be worked on immediately, while other types of issues are described below.\nTest Days Test days are held in order to validate pre-released versions by the project members and the wider community. Any bugs opened on a test day will be triaged soon after.\nThe goals of these days are:\n Verify any new features that were introduced during the last sprint. Validate any bugs that were closed were actually fixed. Find any regressions in existing functionality. Test days will be led by one of the project members, who will be responsible for:\n Creating a test day spreadsheet, if one doesn\u0026rsquo;t exist yet, using the test day template. The first sheet of the document is a template for the test days. Updated with the correct infrastructure versions. Add columns for planned new features. Add rows to planned new infrastructure support. Adding a sheet for the test day with the milestone as the sheet name. Announcing the test day (meetings, slack, email, social media). Hosting the test day itself. Should a bug be identified during a test day, it should be labeled with an appropriate testday label.\n "
},
{
"uri": "/community/",
"title": "Community",
"tags": [],
"description": "",
"content": " Code of Conduct Contributor Roles Getting Help Releases Roadmap Role in the Ecosystem "
},
{
"uri": "/development/shipyard/images/",
"title": "Image Related Targets",
"tags": [],
"description": "",
"content": "Image Capabilities Shipyard ships Makefile.images which contains pre-packaged image capabilities that can be used to build and consume the image(s) that a project requires:\n images: Builds the images the project provides. preload-images: Pre-loads images into a local registry. reload-images: Reloads images into a local registry, and updates local deployment. multiarch-images: Builds the images the project provides for all platforms declared by the project. release-images: Uploads the requested image(s) to Quay.io. Any consuming project has to define the following variables in order for image targets to work.\n IMAGES: A space separated list of images the project provides. MULTIARCH_IMAGES: A space separated list of multi-arch images the project provides. Global Variables These variables affect most or all of the targets mentioned below.\n REPO: The repo prefix to use for images (defaults to quay.io/submariner). Images Builds the images that the project provides, for the currently detected platform. These images can then be used when deploying a local environment.\nThe target is automatically consumed by other Shipyard targets, so there\u0026rsquo;s no need to explicitly specify it. Use this target when you want to purposefully rebuild a project\u0026rsquo;s images.\nmake images Pre-load Images Pre-loads all images (as defined by IMAGES) to a local registry, in case the PROVIDER is kind (default behavior). The target will rebuild all images first, to make sure they\u0026rsquo;re up-to-date.\nThe target is automatically consumed by other Shipyard targets, so there\u0026rsquo;s no need to explicitly specify it.\nmake preload-images Reload Images Reloads all images (as defined by IMAGES) to a local registry. The target will rebuild all images first, to make sure they\u0026rsquo;re up-to-date.\nUse this target when testing with a local deployment, and you wish to use updated images without re-deploying.\nmake reload-images Respected Variables for Reload RESTART: Specify which Submariner component to restart: none: Don\u0026rsquo;t restart anything (default behavior). all: Restart all Submariner related components. \u0026lt;component name\u0026gt;: Restart just the given component (e.g. gateway). Multi-arch Images Builds the images that the project provides for all platforms declared by the project. These images are packaged for release, and can\u0026rsquo;t be used when deploying a local environment.\nmake multiarch-images Any project wishing to package such images should set the following variable in it\u0026rsquo;s Makefile:\n PLATFORMS: Comma separated list of platforms the image should be built for. Release Images Uploads the images built by the project to Quay.io:\nmake release-images Respected Variables for Release QUAY_USERNAME, QUAY_PASSWORD: Needed in order to log in to Quay. TAG: A tag to use for the release (default is the branch name). "
},
{
"uri": "/operations/troubleshooting/",
"title": "Troubleshooting",
"tags": [],
"description": "",
"content": "Overview You have followed steps in Deployment but something has gone wrong. You\u0026rsquo;re not sure what and how to fix it, or what information to collect to raise an issue. Welcome to the Submariner troubleshooting guide where we will help you get your deployment working again.\nBasic familiarity with the Submariner components and architecture will be helpful when troubleshooting so please review the Architecture section.\nThe guide has been broken into different sections for easy navigation.\nAutomated Troubleshooting Use the subctl utility to automate troubleshooting and collecting debugging information.\nInstall subctl:\ncurl -Ls https://get.submariner.io | VERSION=\u0026lt;your Submariner version\u0026gt; bash Set KUBECONFIG to point at your clusters:\nexport KUBECONFIG=\u0026lt;kubeconfig0 path\u0026gt;:\u0026lt;kubeconfig1 path\u0026gt; Show overview of, and diagnose issues with, each cluster:\nsubctl show all subctl diagnose all Diagnose common firewall issues between a pair of clusters:\nsubctl diagnose firewall inter-cluster --context \u0026lt;localcontext\u0026gt; --remotecontext \u0026lt;remotecontext\u0026gt; Collect details about an issue you\u0026rsquo;d like help with:\nsubctl gather tar cfz submariner-\u0026lt;timestamp\u0026gt;.tar.gz submariner-\u0026lt;timestamp\u0026gt; When reporting an issue, it may also help to include the information in the bug-report.md template.\nManual Troubleshooting Pre-requisite Before we begin troubleshooting, run subctl version to obtain which version of the Submariner components you are running.\nRun kubectl get services -n \u0026lt;service-namespace\u0026gt; | grep \u0026lt;service-name\u0026gt; to get information about the service you\u0026rsquo;re trying to access. This will provide you with the Service Name, Namespace and ServiceIP. If Globalnet is enabled, you will also need the globalIp of the service by running\nkubectl get globalingressip \u0026lt;service-name\u0026gt;'\nConnectivity Issues Submariner deployment completed successfully but Services/Pods on one cluster are unable to connect to Services on another cluster. This can be due to multiple factors outlined below.\nCheck the Connection Statistics If you are unable to connect to a remote cluster, check its connection status in the Gateway resource.\nkubectl describe Gateway -n submariner-operator\nSample output:\n- endpoint: backend: libreswan cable_name: submariner-cable-cluster1-172-17-0-7 cluster_id: cluster1 healthCheckIP: 10.1.128.0 hostname: cluster1-worker nat_enabled: false private_ip: 172.17.0.7 public_ip: \u0026#34;\u0026#34; subnets: - 100.1.0.0/16 - 10.1.0.0/16 latencyRTT: average: 447.358µs last: 281.577µs max: 5.80437ms min: 158.725µs stdDev: 364.154µs status: connected statusMessage: Connected to 172.17.0.7:4500 - encryption alg=AES_GCM_16, keysize=128 rekey-time=13444 The Gateway Engine uses the Health Check IP of the endpoint to verify connectivity. The connection Status will be marked as error, if it cannot reach this IP, and the Status Message will provide more information about the possible failure reason. 
Service Discovery Issues If you are able to connect to a remote service by using the ServiceIP or globalIp, but not by the service name, it is a Service Discovery issue.\nService Discovery not working This is a good time to familiarize yourself with the Service Discovery Architecture if you haven\u0026rsquo;t already.\nCheck ServiceExport for your Service For a Service to be accessible across clusters, you must first export the Service via subctl, which creates a ServiceExport resource. Ensure the ServiceExport resource exists and check if its status condition indicates Exported. Otherwise, its status condition will indicate the reason it wasn\u0026rsquo;t exported.\nkubectl describe serviceexport -n \u0026lt;service-namespace\u0026gt; \u0026lt;service-name\u0026gt;\nNote that you can also use the shorthand svcex for serviceexport and svcim for serviceimport.\nSample output:\nName: nginx-demo Namespace: default Labels: \u0026lt;none\u0026gt; Annotations: \u0026lt;none\u0026gt; API Version: multicluster.x-k8s.io/v1alpha1 Kind: ServiceExport Metadata: Creation Timestamp: 2020-11-25T06:21:01Z Generation: 1 Resource Version: 5254 Self Link: /apis/multicluster.x-k8s.io/v1alpha1/namespaces/default/serviceexports/nginx-demo UID: 77509e43-8fd1-4173-805c-e03c4581ebbf Status: Conditions: Last Transition Time: 2020-11-25T06:21:01Z Message: Reason: Status: True Type: Valid Last Transition Time: 2020-11-25T06:21:01Z Message: Service was successfully synced to the broker Reason: Status: True Type: Synced Events: \u0026lt;none\u0026gt; Check Lighthouse CoreDNS Service All cross-cluster service queries are handled by the Lighthouse CoreDNS server. First we check if the Lighthouse CoreDNS Service is running properly.\nkubectl -n submariner-operator get service submariner-lighthouse-coredns\nIf it is running fine, note down the ServiceIP for the next steps. If not, check the logs for an error.\nIf the error is due to a wrong image, run kubectl -n submariner-operator get deployment submariner-lighthouse-coredns and make sure Image is set to quay.io/submariner/lighthouse-coredns:\u0026lt;version\u0026gt; and refers to the correct version.\nFor any other errors, capture the information and raise a new issue.\nIf there\u0026rsquo;s no error, then check if the Lighthouse CoreDNS server is configured correctly. Run kubectl -n submariner-operator describe configmap submariner-lighthouse-coredns and make sure it has the following configuration:\nclusterset.local:53 { lighthouse errors health ready } In order to enable debug logs in the Lighthouse CoreDNS pods, you can replace errors with debug in the above configmap.\nCheck CoreDNS Configuration Submariner requires the CoreDNS deployment to forward requests for the domain clusterset.local to the Lighthouse CoreDNS server in the cluster making the query. Ensure this configuration exists and is correct.\nFirst, check if CoreDNS is configured to forward requests for the domain clusterset.local accordingly:\nkubectl -n kube-system describe configmap coredns\nIn the output, look for something like this:\nclusterset.local:53 { forward . \u0026lt;lighthouse-coredns-serviceip\u0026gt; ======\u0026gt; ServiceIP of the lighthouse-coredns service as noted in the previous section } If the entries shown above are missing or the ServiceIP is incorrect, it means CoreDNS wasn\u0026rsquo;t configured correctly. It can be fixed by running kubectl edit configmap coredns and making the changes manually. 
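When editing, the forward address must be the ServiceIP of the Lighthouse CoreDNS Service noted earlier; as a sketch, one way to retrieve it directly is:\nkubectl -n submariner-operator get service submariner-lighthouse-coredns -o jsonpath=\u0026#39;{.spec.clusterIP}\u0026#39; 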
You may need to repeat this step on every cluster.\nCheck submariner-lighthouse-agent Next we check if the submariner-lighthouse-agent is properly running. Run kubectl -n submariner-operator get pods submariner-lighthouse-agent and check the status of Pods.\nIf the status indicates the ImagePullBackOff error, run kubectl -n submariner-operator describe deployment submariner-lighthouse-agent and check if Image is set correctly to quay.io/submariner/lighthouse-agent:\u0026lt;version\u0026gt;. If it is and the same error still occurs, raise an issue here or ping us on the community slack channel.\nIf the status indicates any other error, run kubectl -n submariner-operator get pods to get the name of the lighthouse-agent Pod. Then run kubectl -n submariner-operator logs \u0026lt;lighthouse-agent-pod-name\u0026gt; to get the logs. See if there are any errors in the log. If yes, raise an issue with the log contents, or you can continue reading through this guide to troubleshoot further.\nIf there are no errors, grep the log for the service name that you\u0026rsquo;re trying to query as we may need the log entries later for raising an issue.\nCheck ServiceImport resources If the steps above did not indicate an issue, next we check if the ServiceImport resources were properly created for the service you\u0026rsquo;re trying to access.\nRun kubectl get serviceimports --all-namespaces |grep \u0026lt;your-service-name\u0026gt; on the Broker cluster to check if a resource was created for your service. If not, then check the Lighthouse Agent logs on the cluster where the service was created and look for any error or warning messages indicating a failure to create the ServiceImport resource for your service. The most common error is Forbidden if the RBAC wasn\u0026rsquo;t configured correctly. Depending on the deployment method used, \u0026lsquo;subctl\u0026rsquo; or \u0026lsquo;helm\u0026rsquo;, it should\u0026rsquo;ve been done for you. Create an issue with relevant log entries.\nIf the ServiceImport resource was created correctly on the Broker cluster, the next step is to check if it exists on the cluster where you\u0026rsquo;re trying to access the service. The ServiceImport should exist in the service\u0026rsquo;s namespace with the same name as the service. If it doesn\u0026rsquo;t exist, check the logs of the Lighthouse Agent on the cluster where you are trying to access the service. As described earlier, it will most commonly be an issue with RBAC otherwise create an issue with relevant log entries.\nCheck EndpointSlice resources If the ServiceImport resources are correct, next we check if the EndpointSlice resources were properly created for the service you\u0026rsquo;re trying to access. Run kubectl get endpointslices --all-namespaces -l multicluster.kubernetes.io/service-name=\u0026lt;your-service-name\u0026gt; on the Broker cluster to check if a resource was created for your Service. If not, then check the Lighthouse Agent logs on the cluster where the Service was created and look for any error or warning messages indicating a failure to create the EndpointSlice resource for your Service. The most common error is Forbidden if the RBAC wasn\u0026rsquo;t configured correctly. This is supposed to be done automatically during deployment so please file an issue with the relevant log entries.\nIf the EndpointSlice resource was created correctly on the Broker cluster, the next step is to check if it exists on the cluster where you\u0026rsquo;re trying to access the Service. 
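As a sketch, one way to list the EndpointSlice resources that Lighthouse manages in the Service\u0026rsquo;s namespace on that cluster (using the managed-by label visible in the sample output below) is:\nkubectl -n \u0026lt;your-service-namespace\u0026gt; get endpointslices -l endpointslice.kubernetes.io/managed-by=lighthouse-agent.submariner.io 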
The EndpointSlice should exist in the service\u0026rsquo;s namespace. If it doesn\u0026rsquo;t exist check the logs of the Lighthouse Agent on the cluster where you are trying to access the Service. As described earlier, it will most commonly be an issue with RBAC so create an issue with relevant log entries.\nIf the EndpointSlice resource was created properly on the cluster, run kubectl -n \u0026lt;your-service-namespace\u0026gt; describe endpointslice \u0026lt;your-endpointslice-name\u0026gt; and check if it has the correct endpoint addresses, and they indicate the Ready condition is true:\nName: nginx-ss-cluster2 Namespace: default Labels: endpointslice.kubernetes.io/managed-by=lighthouse-agent.submariner.io lighthouse.submariner.io/sourceCluster=cluster2 lighthouse.submariner.io/sourceName=nginx-ss lighthouse.submariner.io/sourceNamespace=default multicluster.kubernetes.io/service-name=nginx-ss-default-cluster2 Annotations: \u0026lt;none\u0026gt; AddressType: IPv4 Ports: Name Port Protocol ---- ---- -------- web 80 TCP Endpoints: - Addresses: 10.242.0.5 -----\u0026gt; Pod IP Conditions: Ready: true Hostname: web-0 -----\u0026gt; Pod hostname Topology: kubernetes.io/hostname=cluster2-worker2 - Addresses: 10.242.224.4 Conditions: Ready: true Hostname: web-1 Topology: kubernetes.io/hostname=cluster2-worker Events: \u0026lt;none\u0026gt; For a non-headless service, the EndpointSlice will contain a single endpoint referencing the service\u0026rsquo;s cluster IP address.\nIf the Addresses are correct but still not being returned from DNS queries, try querying IPs in a specific cluster by prefixing the query with \u0026lt;cluster-id\u0026gt;. If that returns the IPs correctly, then check the connectivity to the cluster using subctl show endpoint. The Lighthouse CoreDNS Server only returns IPs from connected clusters.\nFor errors querying specific Pods of a StatefulSet, check that the Hostname is correct for the endpoint.\nIf still not working, file an issue with relevant log entries.\n"
},
{
"uri": "/getting-started/quickstart/openshift/",
"title": "OpenShift",
"tags": [],
"description": "",
"content": " On AWS On AWS with Globalnet On Azure On GCP Hybrid vSphere and AWS Hybrid OpenStack and AWS "
},
{
"uri": "/development/website/",
"title": "Contributing to the Website",
"tags": [],
"description": "",
"content": "The Submariner documentation website is based on Hugo, Grav, the Hugo Learn theme, and is written in Markdown format.\nYou can always click the Edit this page link at the top right of each page, but if you want to test your changes locally before submitting you can:\n Fork the submariner-io/submariner-website project on GitHub.\n Check out your copy locally:\ngit clone ssh://[email protected]/\u0026lt;your-user\u0026gt;/submariner-website.git cd submariner-website make server An instance of the website is now running locally on your machine and is accessible at http://localhost:1313.\nBy default, the server can only be accessed from the same machine it\u0026rsquo;s run on. Running make server BIND=0.0.0.0 PORT=8080 will allow remote access via any IP address on the machine (remote or local) on port 8080. Setting BIND to a specific IP address restricts access to that address alone.\n Edit files in src. The browser should automatically reload so you can view your changes.\n Eventually commit, push, and pull-request your changes. You can find a good guide about the GitHub workflow here.\n Your changes will be verified by CI. Check the job results for details of any errors.\n "
},
{
"uri": "/operations/deployment/calico/",
"title": "Calico CNI",
"tags": [],
"description": "",
"content": "Typically, the Kubernetes network plugin (based on kube-proxy) programs iptables rules for Pod networking within a cluster. When a Pod in a cluster tries to access an external IP, the plugin performs specific Network Address Translation (NAT) manipulation on the traffic as it does not belong to the local cluster. Similarly, Submariner also programs certain iptables rules and it requires these rules to be applied prior to the ones programmed by the network plugin. Submariner tries to preserve the source IP of the Pods for cross-cluster communication for visibility, ease of debugging, and security purposes.\nCalico supports different types of overlay networking. Currently, Submariner is validated only when Calico is deployed with VXLAN encapsulation.\n On clusters deployed with Calico as the network plugin, the rules inserted by Calico take precedence over Submariner, causing issues with cross-cluster communication. To make Calico compatible with Submariner, it needs to be configured, via IPPools, not to perform NAT on the subnets associated with the Pod and Service CIDRs of the remote clusters. Once the IPPools are configured in the clusters, Calico will not perform NAT for the configured CIDRs and allows Submariner to support cross-cluster connectivity.\nWhen using Submariner Globalnet with Calico, please avoid the default Globalnet CIDR (i.e., 242.0.0.0/8) as it is used internally within Calico. You can explicitly specify a non-overlapping Globalnet CIDR while deploying Submariner.\n Submariner automatically creates the necessary Calico IPPools for cross-cluster communication when the Calico API Server is installed within the cluster.\nIf the Calico API Server is not installed, the IPPools needed for cross-cluster communication must be manually created as outlined below.\nAs an example, consider two clusters, East and West, deployed with the Calico network plugin and connected via Submariner. For cluster East, the Service CIDR is 100.93.0.0/16 and the Pod CIDR is 10.243.0.0/16. For cluster West, they are 100.92.0.0/16 and 10.242.0.0/16. The following IPPools should be created:\nOn East Cluster:\n$ cat \u0026gt; svcwestcluster.yaml \u0026lt;\u0026lt;EOF apiVersion: projectcalico.org/v3 kind: IPPool metadata: name: svcwestcluster spec: cidr: 100.92.0.0/16 natOutgoing: false disabled: true EOF cat \u0026gt; podwestcluster.yaml \u0026lt;\u0026lt;EOF apiVersion: projectcalico.org/v3 kind: IPPool metadata: name: podwestcluster spec: cidr: 10.242.0.0/16 natOutgoing: false disabled: true EOF DATASTORE_TYPE=kubernetes KUBECONFIG=\u0026lt;kubeconfig-eastcluster.yaml\u0026gt; calicoctl create -f svcwestcluster.yaml DATASTORE_TYPE=kubernetes KUBECONFIG=\u0026lt;kubeconfig-eastcluster.yaml\u0026gt; calicoctl create -f podwestcluster.yaml On West Cluster:\ncat \u0026gt; svceastcluster.yaml \u0026lt;\u0026lt;EOF apiVersion: projectcalico.org/v3 kind: IPPool metadata: name: svceastcluster spec: cidr: 100.93.0.0/16 natOutgoing: false disabled: true EOF cat \u0026gt; podeastcluster.yaml \u0026lt;\u0026lt;EOF apiVersion: projectcalico.org/v3 kind: IPPool metadata: name: podeastcluster spec: cidr: 10.243.0.0/16 natOutgoing: false disabled: true EOF DATASTORE_TYPE=kubernetes KUBECONFIG=\u0026lt;kubeconfig-westcluster.yaml\u0026gt; calicoctl create -f svceastcluster.yaml DATASTORE_TYPE=kubernetes KUBECONFIG=\u0026lt;kubeconfig-westcluster.yaml\u0026gt; calicoctl create -f podeastcluster.yaml "
},
{
"uri": "/community/getting-help/",
"title": "Getting Help",
"tags": [],
"description": "",
"content": "Talk to Us We would love to hear from you, how you are using Submariner, and what we can do to make it better.\nGitHub Check out Submariner\u0026rsquo;s GitHub and consider contributing. Pick up an issue to work on or propose an enhancement by reporting a new issue; once your code is ready to be reviewed, you can propose a pull request. You can find a good guide about the GitHub workflow here.\nSlack Share your ideas in the #submariner channel in Kubernetes\u0026rsquo; Slack. If you need it, you can request an invite to Kubernetes\u0026rsquo; Slack instance.\nCommunity Calendar Submariner\u0026rsquo;s meetings are open to everyone. All meetings are documented on Submariner\u0026rsquo;s Community Calendar. The bi-weekly Submariner Dev, Users \u0026amp; Community Meeting (Mondays at 3:00pm CET) is a good place to start.\nMailing List Join the submariner-dev or submariner-users mailing lists.\n"
},
{
"uri": "/getting-started/quickstart/openshift/azure/",
"title": "On Azure",
"tags": [],
"description": "",
"content": "This quickstart guide covers the necessary steps to deploy two OpenShift Container Platform (OCP) clusters on Azure with full stack automation, also known as installer-provisioned infrastructure (IPI). Once the OpenShift clusters are deployed, we deploy Submariner with Service Discovery to interconnect the two clusters. Note that this guide focuses on Submariner deployment on clusters with non-overlapping Pod and Service CIDRs. For connecting clusters with overlapping CIDRs, please refer to the Submariner with Globalnet guide.\nPrerequisites Before we begin, the following tools need to be downloaded and added to your $PATH:\n OpenShift installer, pull secret, and command line interface. All can be downloaded from the official Installer documenation. Azure CLI which can be downloaded from the officical Azure documentation. Please ensure that the tools you downloaded above are compatible with your OpenShift Container Platform version. For more information, please refer to the official OpenShift documentation.\n Setup Your Azure Profile Create a service principal and configure its access to Azure resources. Output the result in an Azure SDK compatible auth file. Please refer to the official OpenShift on Azure documentation for details.\naz ad sp create-for-rbac --sdk-auth \u0026gt; my.auth Create and Deploy cluster-a In this step you will deploy cluster-a using the default IP CIDR ranges:\n Pod CIDR Service CIDR 10.128.0.0/14 172.30.0.0/16 openshift-install create install-config --dir cluster-a openshift-install create cluster --dir cluster-a When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.\nCreate and Deploy cluster-b In this step you will deploy cluster-b, modifying the default IP CIDRs to avoid IP address conflicts with cluster-a. You can change the IP addresses block and prefix based on your requirements. For more information on IPv4 CIDR conversion, please check this page.\nIn this example, we will use the following IP ranges:\n Pod CIDR Service CIDR 10.132.0.0/14 172.31.0.0/16 openshift-install create install-config --dir cluster-b Change the Pod network CIDR from 10.128.0.0/14 to 10.132.0.0/14:\nsed -i \u0026#39;s/10.128.0.0/10.132.0.0/g\u0026#39; cluster-b/install-config.yaml Change the Service network CIDR from 172.30.0.0/16 to 172.31.0.0/16:\nsed -i \u0026#39;s/172.30.0.0/172.31.0.0/g\u0026#39; cluster-b/install-config.yaml And finally deploy the cluster:\nopenshift-install create cluster --dir cluster-b When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.\nInstall subctl Download the subctl binary and make it available on your PATH.\ncurl -Ls https://get.submariner.io | bash export PATH=$PATH:~/.local/bin echo export PATH=\\$PATH:~/.local/bin \u0026gt;\u0026gt; ~/.profile If you have Go and the source code, you can build and install subctl instead:\ncd go/src/submariner-io/subctl go build -o $GOBIN/subctl github.com/submariner-io/subctl/cmd (and ensure your go/bin directory is on your PATH).\nPrepare Azure Clusters for Submariner Submariner Gateway nodes need to be able to accept traffic over UDP ports (4500 and 4490 by default). Submariner also uses UDP port 4800 to encapsulate traffic from the worker and master nodes to the Gateway nodes, and TCP port 8080 to retrieve metrics from the Gateway nodes. 
Additionally, the default OpenShift deployment does not allow assigning an elastic public IP to existing worker nodes, which may be necessary on one end of the tunnel connection.\nsubctl cloud prepare is a command designed to update your OpenShift installer provisioned infrastructure for Submariner deployments, handling the requirements specified above.\nRun the command for cluster-a:\nexport KUBECONFIG=cluster-a/auth/kubeconfig subctl cloud prepare azure --ocp-metadata cluster-a/metadata.json --auth-file my.auth Run the command for cluster-b:\nexport KUBECONFIG=cluster-b/auth/kubeconfig subctl cloud prepare azure --ocp-metadata cluster-b/metadata.json --auth-file my.auth Install Submariner with Service Discovery To install Submariner with multi-cluster Service Discovery follow the steps below:\nUse cluster-a as Broker subctl deploy-broker --kubeconfig cluster-a/auth/kubeconfig Join cluster-a and cluster-b to the Broker subctl join --kubeconfig cluster-a/auth/kubeconfig broker-info.subm --clusterid cluster-a subctl join --kubeconfig cluster-b/auth/kubeconfig broker-info.subm --clusterid cluster-b Verify Deployment To manually verify the deployment, follow the steps below using either a headless or ClusterIP nginx service deployed in cluster-b.\nDeploy ClusterIP Service export KUBECONFIG=cluster-b/auth/kubeconfig kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine kubectl -n default expose deployment nginx --port=8080 subctl export service --namespace default nginx Deploy Headless Service export KUBECONFIG=cluster-b/auth/kubeconfig kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine kubectl -n default expose deployment nginx --port=8080 --cluster-ip=None subctl export service --namespace default nginx Verify Run nettest from cluster-a to access the nginx service:\nexport KUBECONFIG=cluster-a/auth/kubeconfig kubectl -n default run tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- /bin/bash curl nginx.default.svc.clusterset.local:8080 To access a Service in a specific cluster, prefix the query with \u0026lt;cluster-id\u0026gt; as follows:\ncurl cluster-b.nginx.default.svc.clusterset.local:8080 Verify StatefulSets A StatefulSet uses a headless Service. 
Create a web.yaml as follows:\napiVersion: v1 kind: Service metadata: name: nginx-ss labels: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss spec: ports: - port: 80 name: web clusterIP: None selector: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss --- apiVersion: apps/v1 kind: StatefulSet metadata: name: web spec: serviceName: \u0026#34;nginx-ss\u0026#34; replicas: 2 selector: matchLabels: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss template: metadata: labels: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss spec: containers: - name: nginx-ss image: nginxinc/nginx-unprivileged:stable-alpine ports: - containerPort: 80 name: web Use this yaml to create a StatefulSet web with nginx-ss as the Headless Service.\nexport KUBECONFIG=cluster-a/auth/kubeconfig kubectl -n default apply -f web.yaml subctl export service -n default nginx-ss curl nginx-ss.default.svc.clusterset.local:8080 To access the Service in a specific cluster, prefix the query with \u0026lt;cluster-id\u0026gt;:\ncurl cluster-a.nginx-ss.default.svc.clusterset.local:8080 To access an individual pod in a specific cluster, prefix the query with \u0026lt;pod-hostname\u0026gt;.\u0026lt;cluster-id\u0026gt;:\ncurl web-0.cluster-a.nginx-ss.default.svc.clusterset.local:8080 Perform automated verification The contexts on both config files are named admin and need to be modified before running the verify command. Here is how this can be done using yq:\nyq e -i \u0026#39;.contexts[0].name = \u0026#34;cluster-a\u0026#34; | .current-context = \u0026#34;cluster-a\u0026#34;\u0026#39; cluster-a/auth/kubeconfig yq e -i \u0026#39;.contexts[0].context.user = \u0026#34;admin-a\u0026#34; | .users[0].name = \u0026#34;admin-a\u0026#34;\u0026#39; cluster-a/auth/kubeconfig yq e -i \u0026#39;.contexts[0].name = \u0026#34;cluster-b\u0026#34; | .current-context = \u0026#34;cluster-a\u0026#34;\u0026#39; cluster-b/auth/kubeconfig yq e -i \u0026#39;.contexts[0].context.user = \u0026#34;admin-b\u0026#34; | .users[0].name = \u0026#34;admin-b\u0026#34;\u0026#39; cluster-b/auth/kubeconfig (if you’re using yq 4.18.1 or later, you can use yq -i instead of yq e -i).\nMore generally, see the Kubernetes documentation on accessing multiple clusters using configuration files.\nThis will perform automated verifications between the clusters.\nexport KUBECONFIG=cluster-a/auth/kubeconfig:cluster-b/auth/kubeconfig subctl verify --context cluster-a --tocontext cluster-b --only service-discovery,connectivity --verbose "
},
{
"uri": "/development/licenses/",
"title": "Licenses",
"tags": [],
"description": "",
"content": "Content contributed to the Submariner project must be made available under one of two licenses.\nContributions to projects other than the website These must be made available under the Apache License, version 2.0. Go files must start with the following header:\n/* SPDX-License-Identifier: Apache-2.0 Copyright Contributors to the Submariner project. Licensed under the Apache License, Version 2.0 (the \u0026#34;License\u0026#34;); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \u0026#34;AS IS\u0026#34; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ This is enforced by our CI.\nContributions to the website Contributions to the website must be made available under the Creative Commons Attribution 4.0 International license (CC BY 4.0).\n"
},
{
"uri": "/getting-started/quickstart/openshift/gcp-lb/",
"title": "On GCP (LoadBalancer mode)",
"tags": [],
"description": "",
"content": "This quickstart guide covers the necessary steps to deploy two OpenShift Container Platform (OCP) clusters on GCP leveraging a cloud network load balancer service in front of the Submariner gateways.\nThe main benefit of this mode is that there is no need to dedicate specialized nodes with a public IP address to act as gateways. The administrator only needs to manually label any existing node or nodes in each cluster as Submariner gateways, and the Submariner Operator will take care of creating a LoadBalancer type Service pointing to the active Submariner gateway.\nPlease note that this mode is still experimental and may need more testing. For example we haven\u0026rsquo;t measured the impact on HA failover times.\n Prerequisites Before we begin, the following tools need to be downloaded and added to your $PATH:\n OpenShift installer, pull secret, and command line interface. All can be downloaded from the official Installer documenation. GCP CLI which can be downloaded from the official GCP documenation. Please ensure that the tools you downloaded above are compatible with your OpenShift Container Platform version. For more information, please refer to the official OpenShift documentation.\n Setup Your GCP Profile Configure the GCP Credentials like project_id, private_key etc in ~/.gcp/osServiceAccount.json file. Please refer to the official doc for detailed instructions Create and Deploy cluster-a In this step you will deploy cluster-a using the default IP CIDR ranges:\n Pod CIDR Service CIDR 10.128.0.0/14 172.30.0.0/16 openshift-install create install-config --dir cluster-a openshift-install create cluster --dir cluster-a When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.\nCreate and Deploy cluster-b In this step you will deploy cluster-b, modifying the default IP CIDRs to avoid IP address conflicts with cluster-a. You can change the IP addresses block and prefix based on your requirements. For more information on IPv4 CIDR conversion, please check this page.\nIn this example, we will use the following IP ranges:\n Pod CIDR Service CIDR 10.132.0.0/14 172.31.0.0/16 openshift-install create install-config --dir cluster-b Change the Pod network CIDR from 10.128.0.0/14 to 10.132.0.0/14:\nsed -i \u0026#39;s/10.128.0.0/10.132.0.0/g\u0026#39; cluster-b/install-config.yaml Change the Service network CIDR from 172.30.0.0/16 to 172.31.0.0/16:\nsed -i \u0026#39;s/172.30.0.0/172.31.0.0/g\u0026#39; cluster-b/install-config.yaml And finally deploy the cluster:\nopenshift-install create cluster --dir cluster-b When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.\nInstall subctl Download the subctl binary and make it available on your PATH.\ncurl -Ls https://get.submariner.io | bash export PATH=$PATH:~/.local/bin echo export PATH=\\$PATH:~/.local/bin \u0026gt;\u0026gt; ~/.profile If you have Go and the source code, you can build and install subctl instead:\ncd go/src/submariner-io/subctl go build -o $GOBIN/subctl github.com/submariner-io/subctl/cmd (and ensure your go/bin directory is on your PATH).\nPrepare GCP Clusters for Submariner Submariner Gateway nodes need to be able to accept traffic over UDP ports (4500 and 4490 by default). 
Submariner also uses UDP port 4800 to encapsulate traffic from the worker and master nodes to the Gateway nodes, and TCP port 8080 to retrieve metrics from the Gateway nodes. Additionally, the default OpenShift deployment does not allow assigning an elastic public IP to existing worker nodes, which may be necessary on one end of the tunnel connection.\nsubctl cloud prepare is a command designed to update your OpenShift installer provisioned infrastructure for Submariner deployments, handling the requirements specified above.\nRun the command for cluster-a:\nexport KUBECONFIG=cluster-a/auth/kubeconfig subctl cloud prepare gcp --ocp-metadata cluster-a/metadata.json Run the command for cluster-b:\nexport KUBECONFIG=cluster-b/auth/kubeconfig subctl cloud prepare gcp --ocp-metadata cluster-b/metadata.json Install Submariner with Service Discovery To install Submariner with multi-cluster Service Discovery follow the steps below:\nUse cluster-a as Broker subctl deploy-broker --kubeconfig cluster-a/auth/kubeconfig Join cluster-a and cluster-b to the Broker subctl join --load-balancer --kubeconfig cluster-a/auth/kubeconfig broker-info.subm --clusterid cluster-a subctl join --load-balancer --kubeconfig cluster-b/auth/kubeconfig broker-info.subm --clusterid cluster-b Verify Deployment To manually verify the deployment, follow the steps below using either a headless or ClusterIP nginx service deployed in cluster-b.\nDeploy ClusterIP Service export KUBECONFIG=cluster-b/auth/kubeconfig kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine kubectl -n default expose deployment nginx --port=8080 subctl export service --namespace default nginx Deploy Headless Service export KUBECONFIG=cluster-b/auth/kubeconfig kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine kubectl -n default expose deployment nginx --port=8080 --cluster-ip=None subctl export service --namespace default nginx Verify Run nettest from cluster-a to access the nginx service:\nexport KUBECONFIG=cluster-a/auth/kubeconfig kubectl -n default run tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- /bin/bash curl nginx.default.svc.clusterset.local:8080 To access a Service in a specific cluster, prefix the query with \u0026lt;cluster-id\u0026gt; as follows:\ncurl cluster-b.nginx.default.svc.clusterset.local:8080 Verify StatefulSets A StatefulSet uses a headless Service. 
Create a web.yaml as follows:\napiVersion: v1 kind: Service metadata: name: nginx-ss labels: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss spec: ports: - port: 8080 name: web clusterIP: None selector: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss --- apiVersion: apps/v1 kind: StatefulSet metadata: name: web spec: serviceName: \u0026#34;nginx-ss\u0026#34; replicas: 2 selector: matchLabels: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss template: metadata: labels: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss spec: containers: - name: nginx-ss image: nginxinc/nginx-unprivileged:stable-alpine ports: - containerPort: 8080 name: web Use this YAML to create a StatefulSet web with nginx-ss as the Headless Service.\nexport KUBECONFIG=cluster-a/auth/kubeconfig kubectl -n default apply -f web.yaml subctl export service -n default nginx-ss curl nginx-ss.default.svc.clusterset.local:8080 To access the Service in a specific cluster, prefix the query with \u0026lt;cluster-id\u0026gt;:\ncurl cluster-a.nginx-ss.default.svc.clusterset.local:8080 To access an individual pod in a specific cluster, prefix the query with \u0026lt;pod-hostname\u0026gt;.\u0026lt;cluster-id\u0026gt;:\ncurl web-0.cluster-a.nginx-ss.default.svc.clusterset.local:8080 Perform automated verification The contexts on both config files are named admin and need to be modified before running the verify command. Here is how this can be done using yq:\nyq e -i \u0026#39;.contexts[0].name = \u0026#34;cluster-a\u0026#34; | .current-context = \u0026#34;cluster-a\u0026#34;\u0026#39; cluster-a/auth/kubeconfig yq e -i \u0026#39;.contexts[0].context.user = \u0026#34;admin-a\u0026#34; | .users[0].name = \u0026#34;admin-a\u0026#34;\u0026#39; cluster-a/auth/kubeconfig yq e -i \u0026#39;.contexts[0].name = \u0026#34;cluster-b\u0026#34; | .current-context = \u0026#34;cluster-b\u0026#34;\u0026#39; cluster-b/auth/kubeconfig yq e -i \u0026#39;.contexts[0].context.user = \u0026#34;admin-b\u0026#34; | .users[0].name = \u0026#34;admin-b\u0026#34;\u0026#39; cluster-b/auth/kubeconfig (if you’re using yq 4.18.1 or later, you can use yq -i instead of yq e -i).\nMore generally, see the Kubernetes documentation on accessing multiple clusters using configuration files.\nThis will perform automated verifications between the clusters.\nexport KUBECONFIG=cluster-a/auth/kubeconfig:cluster-b/auth/kubeconfig subctl verify --context cluster-a --tocontext cluster-b --only service-discovery,connectivity --verbose "
},
{
"uri": "/community/releases/",
"title": "Releases",
"tags": [],
"description": "",
"content": "v0.16.8 (January 9, 2025) Fixed an issue with Service Discovery that caused significant latencies when exporting a large number of service. New options were added to subctl cloud prepare to support a custom vpc for AWS. Addressed security vulnerability CVE-2024-45338. v0.17.5 (January 9, 2025) Fixed an issue with Service Discovery that caused ServiceImport resources to be deleted on submariner-lighthouse-agent pod restart. Addressed security vulnerability CVE-2024-45338. v0.18.4 (January 8, 2025) Addressed security vulnerability CVE-2024-45338. v0.18.3 (December 3, 2024) Fixed an issue with Service Discovery that caused ServiceImport resources to be deleted on submariner-lighthouse-agent pod restart. An exported non-headless Service\u0026rsquo;s publishNotReadyAddresses flag is now propagated to the Globalnet ingress Service to allow forwarding traffic if there is no backing ready pod. Fixed an issue where AWS cloud prepare failed to find the VPC. v0.19.1 (November 28, 2024) Fixed an issue with Service Discovery that caused ServiceImport resources to be deleted on submariner-lighthouse-agent pod restart. An exported non-headless Service\u0026rsquo;s publishNotReadyAddresses flag is now propagated to the Globalnet ingress Service to allow forwarding traffic if there is no backing ready pod. Fixed an issue where AWS cloud prepare failed to find the VPC. v0.17.4 (November 13, 2024) Fixed an issue where AWS cloud prepare failed to find the VPC. v0.18.2 (October 30, 2024) Fixed an issue with Service Discovery that caused a new EndpointSlice to be created when the labels on the exporting Service were updated. New options were added to subctl cloud prepare to support a custom vpc for AWS. v0.19.0 (October 25, 2024) New features Service Discovery now propagates the session affinity information from the exported service to the aggregated ServiceImport. Service Discovery can now allocate a cluster set virtual IP for exported services. This is an opt-in feature that can be enabled per service via the lighthouse.submariner.io/use-clusterset-ip annotation on the ServiceExport or automatically for all services via the enable-clusterset-ip option on subctl deploy-broker. Note that while DNS queries will return the cluster set virtual IP, Service Discovery does not route this virtual IP and relies on some external component to do so. Each Route Agent now monitors the connectivity to each remote cluster\u0026rsquo;s gateway using ICMP and the health of each connection is reported by subctl diagnose. New options were added to subctl cloud prepare to support a custom vpc for AWS. Submariner can now be deployed on Kubernetes KubeVirt clusters. Other changes Fixed an issue with Service Discovery that caused significant latencies when exporting a large number of service. Fixed an issue which could cause the wrong pod CIDR to be discovered on join. The Service Discovery CoreDNS ClusterIP service now also defines a TCP port to support TCP retries after truncation per RFC1035 and RFC2181. Fixed an issue with Calico wrongly overwriting static routes added by RouteAgent. Fixed an issue with detecting Calico CNI interface after node reboot. Fixed an issue with Service Discovery that caused a new EndpointSlice to be created when the labels on the exporting Service were updated. v0.17.3 (October 9, 2024) Fixed an issue with Service Discovery that caused significant latencies when exporting a large number of service. Fixed an issue with Calico wrongly overwriting static routes added by RouteAgent. 
Fixed an issue with detecting Calico CNI interface after node reboot. The Service Discovery CoreDNS ClusterIP service now also defines a TCP port to support TCP retries after truncation per RFC1035 and RFC2181. Fixed an issue with Service Discovery that caused a new EndpointSlice to be created when the labels on the exporting Service were updated. New options were added to subctl cloud prepare to support a custom VPC for AWS. v0.18.1 (October 7, 2024) Fixed an issue with Service Discovery that caused significant latencies when exporting a large number of services. Fixed an issue which could cause the wrong pod CIDR to be discovered on join. Fixed an issue with Calico wrongly overwriting static routes added by RouteAgent. Fixed an issue with detecting Calico CNI interface after node reboot. The Service Discovery CoreDNS ClusterIP service now also defines a TCP port to support TCP retries after truncation per RFC1035 and RFC2181. v0.14.9 (July 26, 2024) Reduced and restricted the RBAC permissions for the various Submariner components to only what is actually needed to reduce any potential attack surface. Note: this version replaces v0.14.8.\nv0.15.5 (July 23, 2024) Reduced and restricted the RBAC permissions for the various Submariner components to only what is actually needed to reduce any potential attack surface. Note: this version replaces v0.15.4.\nv0.18.0 (July 4, 2024) New features subctl join and other commands now support HTTP proxy arguments corresponding to the HTTP proxy environment variables that are propagated to the various pods. subctl verify now outputs a short description of each test that is run. Other changes Fixed an issue in Service Discovery where un-exporting a Service on one cluster and then quickly exporting it on another cluster could result in a missing ServiceImport resource and cause name resolution failures. Reduced and restricted the RBAC permissions for the various Submariner components to only what is actually needed to reduce any potential attack surface. Improved the performance of Service Discovery exporting at scale which was hindered by excessive throttling delays when exporting many services quickly. To reduce RBAC permissions, Submariner no longer annotates Node resources. After upgrade, any submariner.io/* annotations will not be removed because Submariner no longer has Node update permission. Health check counters on the Gateway resource now report correct information after a gateway leader re-election occurs. AWS cloud prepare now supports the resource naming convention implemented in OpenShift 4.16 and above. v0.17.2 (June 26, 2024) Improved the performance of Service Discovery exporting at scale which was hindered by excessive throttling delays when exporting many services quickly. Reduced and restricted the RBAC permissions for the various Submariner components to only what is actually needed to reduce any potential attack surface. Health check counters on the Gateway resource now report correct information after a gateway leader re-election occurs. AWS cloud prepare now supports the resource naming convention implemented in OpenShift 4.16 and above. v0.16.7 (June 17, 2024) Fixed an issue in Service Discovery where un-exporting a Service on one cluster and then quickly exporting it on another cluster could result in a missing ServiceImport resource and cause name resolution failures. Reduced and restricted the RBAC permissions for the various Submariner components to only what is actually needed to reduce any potential attack surface. 
Health check counters on the Gateway resource now report correct information after a gateway leader re-election occurs. Note: this version replaces v0.16.4, v0.16.5, v0.16.6.\nv0.17.1 (April 17, 2024) Fixed an issue in Service Discovery where un-exporting a Service on one cluster and then quickly exporting it on another cluster could result in a missing ServiceImport resource and cause name resolution failures. v0.17.0 (February 26, 2024) New features The new --only basic-connectivity option on subctl verify runs a smaller set of connectivity tests as a quick sanity check when time is a constraint. The deploy-broker, recover-broker-info, and join sub-commands have a --broker-url option which can be used to override the broker URL (which is usually derived from the context used to access the broker, or stored in the broker-info.subm file). subctl join now ensures the local cluster ID is unique with respect to existing joined clusters to avoid issues with duplicate IDs. subctl verify has a new flag, --extracontext, to specify the context for a third cluster that is required for some Service Discovery tests. Other changes The Globalnet controller now employs Kubernetes leader election to ensure proper continuity during fail-over and avoid potential race conditions. Globalnet now handles port updates for exported services. Removed the dedicated-gateway flag from subctl cloud prepare that was previously deprecated in v0.15.0. To deploy without dedicated gateways, use the Load Balancer mode instead. Removed the generic option from subctl cloud prepare that was previously deprecated in v0.15.0. To label gateway nodes, use subctl join instead. Fixed an issue in Service Discovery where stale endpoint IPs, corresponding to services that no longer exist, were returned from DNS queries. Fixed an issue in Service Discovery which caused an erroneous ServiceExport Conflict status condition to be reported. The Gateway leader election was enhanced to not restart the pod when leadership is lost to avoid possible data path disruptions. Fixed a crash in the Submariner Operator pod due to a concurrent map write. Fixed an issue with Service Discovery where, after disaster recovery of the broker cluster, some DNS queries could fail requiring a restart of the CoreDNS server pod. Fixed an issue with the OVN-Kubernetes CNI where, after a cluster recovery, the data path was broken requiring manual deletion of stale GatewayRoute and NonGatewayRoute resources and a restart of the Route Agent pod. The script to download the subctl binary now correctly handles the Linux aarch64 architecture. v0.16.3 (January 11, 2024) Fixed an issue in Service Discovery which caused an erroneous ServiceExport Conflict status condition to be reported. Fixed an issue with Service Discovery where, after disaster recovery of the broker cluster, some DNS queries could fail requiring a restart of the CoreDNS server pod. Fixed an issue with the OVN-Kubernetes CNI where, after a cluster recovery, the data path was broken requiring manual deletion of stale GatewayRoute and NonGatewayRoute resources and a restart of the Route Agent pod. Fixed a crash in the Submariner Operator pod due to a concurrent map write. v0.16.2 (November 7, 2023) The Globalnet controller now employs Kubernetes leader election to ensure proper continuity during fail-over and avoid potential race conditions. The Gateway leader election was enhanced to not restart the pod when leadership is lost to avoid possible data path disruption. 
Fixed an issue in Service Discovery where stale endpoint IPs, corresponding to services that no longer exist, were returned from DNS queries. Sockets from the host are mounted through their parent directory, which ensures that the sockets themselves aren\u0026rsquo;t replaced by directories (which prevents OVN components from starting). Additionally, stray directories are cleaned up at startup. This fixes the known issue with upgrades involving OVN, as documented in the known issues section for v0.16.0 Note: this version replaces v0.16.1.\nv0.15.3 (November 3, 2023) The subctl diagnose command has been enhanced to check for potential firewall issues that may be blocking ESP traffic and will provide an appropriate error message. Submariner now explicitly enables forwarding on the interfaces that it creates to support forwarding even when global forwarding on the node is turned off. Enhanced Calico CNI detection now includes searching for calico-node CNI pods when the calico-config map is not detected. Submariner now explicitly configures dpddelay when initiating IPsec connections to prevent excessively frequent liveness probes. Service Discovery will now publish DNS records for pods that are not ready based on the setting of the publishNotReadyAddresses flag on the service. The CNI detection method in Submariner Operator is now improved to detect the Flannel CNI, even when the Flannel configMap is missing from the cluster. Submariner now ensures that the IPsec control socket is created before initiating connection requests, and also automatically retries connections in response to errors reported by the \u0026lsquo;whack\u0026rsquo; command. The pod CIDR detection logic now ensures that the node\u0026rsquo;s podCIDR is exclusively used for single-node deployments. The Submariner gateway now retries reading local node information on startup to reduce pod restarts if the Kubernetes API server is temporarily unavailable. Reduced data path downtime with Libreswan cable driver when gateway pod restarts. v0.14.7 (October 17, 2023) Submariner now explicitly enables forwarding on the interfaces that it creates to support forwarding even when global forwarding on the node is turned off. Submariner now ensures that the IPsec control socket is created before initiating connection requests, and also automatically retries connections in response to errors reported by the \u0026lsquo;whack\u0026rsquo; command. The Submariner gateway now retries reading local node information on startup to reduce pod restarts if the Kubernetes API server is temporarily unavailable. Reduced data path downtime with Libreswan cable driver when gateway pod restarts. v0.16.0 (October 2, 2023) New features The subctl cloud prepare azure command has a new flag, air-gapped, to indicate the cluster is in an air-gapped environment which may forbid certain configurations in a disconnected Azure installation. subctl is now built for ARM Macs (Darwin arm64). subctl show versions now shows the version of the metrics proxy component. The subctl gather command now collects metrics proxy pod logs in Globalnet deployments. For headless services, Service Discovery now derives its EndpointSlices from the Kubernetes EndpointSlices so for each Kubernetes EndpointSlice there will be a corresponding Service Discovery EndpointSlice. Service Discovery EndpointSlices follow the same naming convention in that the names are auto-generated by Kubernetes prefixed by the service name. 
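For example, the Service Discovery EndpointSlices derived for an exported headless service can be listed with something like kubectl get endpointslices -n \u0026lt;namespace\u0026gt; -l multicluster.kubernetes.io/service-name=\u0026lt;service-name\u0026gt;; this assumes the standard MCS API service-name label and is meant only as an illustrative sketch. 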
Endpoints for all conditions are now included - prior releases only published ready endpoints. Service Discovery will now publish DNS records for pods that are not ready based on the setting of the publishNotReadyAddresses flag on the service. Service Discovery now propagates labels from an exported Service to its generated EndpointSlices. The new subctl upgrade command can upgrade subctl itself in-place, and upgrade Submariner deployments on brokers and joined clusters to the corresponding version of Submariner. The subctl diagnose command has been enhanced to check for potential firewall issues that may be blocking ESP traffic and will provide an appropriate error message. Submariner now explicitly enables forwarding on the interfaces that it creates to support forwarding even when global forwarding on the node is turned off. Other changes Reduced data path downtime with Libreswan cable driver when gateway pod restarts. Fixed an issue with OVNKubernetes CNI where routes could be accidentally deleted during cluster restart, or upgrade scenarios. Submariner gateway pods now skip invoking cable engine cleanup during termination, as this is handled by the route agent during gateway migration. The status condition type \u0026ldquo;Allocated\u0026rdquo; for Globalnet resources now adheres to the intended design of status conditions in Kubernetes by reflecting only the latest observed status. Fixed issue which caused the IPsec pluto process to crash when the remote endpoint was unstable. Submariner now explicitly configures dpddelay when initiating IPsec connections to prevent excessively frequent liveness probes. Submariner now uses case-insensitive comparison while parsing CNI names. Enhanced Calico CNI detection now includes searching for calico-node CNI pods when the calico-config map is not detected. Submariner now automatically creates the necessary Calico IPPools for remote cluster connectivity when the Calico API Server is installed in the cluster. Fixed an issue with Service Discovery with Globalnet enabled where a service was inaccessible after recreating it. Fixed an issue with Service Discovery where a remote cluster\u0026rsquo;s service was inaccessible after recreating its local namespace. Service Discovery with Globalnet enabled now correctly handles headless services without a selector. The pod CIDR detection logic now ensures that the node\u0026rsquo;s podCIDR is exclusively used for single-node deployments. subctl verify no longer requires the KUBECONFIG environment variable to be set. The submariner_service_export metric is now properly exposed after being inadvertently removed. The Globalnet component now handles out-of-order remote endpoint notifications properly. The Submariner gateway now retries reading local node information on startup to reduce pod restarts if the Kubernetes API server is temporarily unavailable. Submariner now ensures that the IPsec control socket is created before initiating connection requests, and also automatically retries connections in response to errors reported by the \u0026lsquo;whack\u0026rsquo; command. The CNI detection method in Submariner Operator is now improved to detect the Flannel CNI, even when the Flannel configMap is missing from the cluster. Known issues Upgrades involving OVN can fail because one of the OVN sockets is replaced by a directory. To bring affected nodes up successfully, all invalid sockets on each node must be removed: find /run -type d -name '*.sock' -delete. 
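One possible way to run this cleanup on each node of an OpenShift cluster, given here as an illustrative sketch rather than an official procedure, is oc debug node/\u0026lt;node-name\u0026gt; -- chroot /host find /run -type d -name '*.sock' -delete, repeated for every affected node. 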
v0.16.0 includes a partial fix for this: route agents wait for node readiness before starting, which allows OVN to finish initializing. In some scenarios however, an invalid directory is created before OVN is upgraded, which prevents OVN from starting up correctly. This will be fixed fully in v0.16.1. v0.14.6 (July 5, 2023) The subctl cloud prepare azure command has a new flag, air-gapped, to indicate the cluster is in an air-gapped environment which may forbid certain configurations in a disconnected Azure installation. The Globalnet component now handles out-of-order remote endpoint notifications properly. subctl is now built for ARM Macs (Darwin arm64). Fixed an issue with OVNKubernetes CNI where routes could be accidentally deleted during cluster restart, or upgrade scenarios. Submariner gateway pods now skip invoking cable engine cleanup during termination, as this is handled by the route agent during gateway migration. v0.15.2 (July 4, 2023) The subctl cloud prepare azure command has a new flag, air-gapped, to indicate the cluster is in an air-gapped environment which may forbid certain configurations in a disconnected Azure installation. Submariner now uses case-insensitive comparison while parsing CNI names. Submariner gateway pods now skip invoking cable engine cleanup during termination, as this is handled by the route agent during gateway migration. subctl is now built for ARM Macs (Darwin arm64). subctl show versions now shows the versions of the metrics proxy and plugin syncer components. The Globalnet component now handles out-of-order remote endpoint notifications properly. Reduced data path downtime with Libreswan cable driver when gateway pod restarts. Fixed an issue with OVNKubernetes CNI where routes could be accidentally deleted during cluster restart, or upgrade scenarios. v0.13.6 (June 7, 2023) This is a bugfix release:\n Fixed issue where a Gateway pod restart due to SIGINT or SIGTERM signals caused data path disruption. Fixed issue which caused the IPsec pluto process to crash when the remote endpoint was unstable. v0.15.1 (June 6, 2023) This is a bugfix release:\n Fixed issue which caused the IPsec pluto process to crash when the remote endpoint was unstable. Fixed issue where a Gateway pod restart due to SIGINT or SIGTERM signals caused data path disruption. Service Discovery now publishes DNS records for pods that are not ready for headless services based on the setting of the publishNotReadyAddresses flag on the Service. v0.14.5 (June 5, 2023) This is a bugfix release:\n The subctl gather command now collects iptables information for OVN-Kubernetes CNI. Fixed issue while running subctl gather command for OVN-Kubernetes CNI. Fixed issue where a Gateway pod restart due to SIGINT or SIGTERM signals caused data path disruption. Fixed issue which caused the IPsec pluto process to crash when the remote endpoint was unstable. v0.12.4 (May 24, 2023) There are no user-facing changes in this release.\nv0.13.5 (May 23, 2023) This is a bugfix release:\n Submariner now ensures that reverse path filtering setting is properly applied on the vx-submariner and vxlan-tunnel interfaces after they are created. This fix was necessary for RHEL 9 nodes where the setting was sometimes getting overwritten. Fixed intermittent failure where gateway connections sometimes don\u0026rsquo;t get established. Submariner now handles out-of-order remote endpoint notifications properly in various handlers associated with the Route Agent component. 
Fixed stale iptables rules and a global IP leak which can sometimes happen when a GlobalEgressIP is created and immediately deleted as part of stress testing. Fixed issues while spawning Gateway nodes during cloud prepare for clusters deployed on an OpenStack environment running OVN-Kubernetes CNI. Fixed issue with Service addresses being resolved before the service is ready. The subctl gather command now collects the ipset information from all cluster nodes. v0.14.4 (May 4, 2023) This is a bugfix release:\n Fixed stale iptables rules along with a global IP leak which can sometimes happen as part of stress testing. Handle out-of-order remote endpoint notifications properly in various Route Agent handlers. Ensure that reverse path filtering setting is properly applied on the vx-submariner and vxlan-tunnel interfaces after they are created. This fix was necessary for RHEL 9 nodes where the setting was sometimes getting overwritten. Fixed issues while spawning Gateway nodes during cloud prepare for clusters deployed on an OpenStack environment running OVN-Kubernetes CNI. The subctl gather command now collects the ipset information from all cluster nodes. v0.15.0 (May 2, 2023) New features To be compliant with the Kubernetes Multicluster Services specification, Service Discovery now distributes a single aggregated ServiceImport to each cluster in the exported service\u0026rsquo;s namespace. Previously, each cluster distributed its own ServiceImport copy that was placed in the submariner-operator namespace. Submariner can now be installed on IPv4/IPv6 dual-stack Kubernetes clusters. Currently, only IPv4 addresses are supported. Added a subctl recover-broker-info command to recover a lost broker-info.subm file. Extended the ability to customize the default TCP MSS clamping value set by Submariner to non-Globalnet deployments. The subctl gather command now gathers iptables logs for Calico and kindnet CNIs. The subctl gather command now collects the ipset information from all cluster nodes. The subctl diagnose command now validates that the Calico IPPool configuration matches Submariner\u0026rsquo;s requirements. The subctl verify E2E tests now support setting the packet size used in TCP connectivity tests to troubleshoot MTU issues. The subctl verify command now runs FIPS verification tests. Allow overriding the image name of the metrics proxy component. Added endpoints to access profiling information for the gateway and Globalnet binaries. The following deprecated commands and variants have been removed: subctl benchmark’s --kubecontexts option (use --context and --tocontext instead) subctl benchmark’s --intra-cluster option (specify a single context to run intra-cluster benchmarks) subctl benchmark with two kubeconfigs as command-line arguments subctl cloud’s --metrics-ports option subctl deploy-broker’s --broker-namespace option (use --namespace instead) subctl diagnose firewall metrics (this is checked during deployment) subctl diagnose firewall intra-cluster with two kubeconfigs as command-line arguments subctl diagnose firewall inter-cluster with two kubeconfigs as command-line arguments subctl gather’s --kubecontexts option (use --contexts instead) Deprecated the subctl cloud prepare ... --dedicated-gateway flag, as it\u0026rsquo;s not actually used. Deprecated the subctl cloud prepare generic command, as it\u0026rsquo;s not actually used. Other changes Service Discovery-only deployments now work properly without the connectivity component deployed. 
Names of EndpointSlice objects now include their namespace to avoid conflicts between services with the same name in multiple namespaces. Changes in Azure cloud prepare: Machine set names are now based on region + UUID and limited to 20 characters to prevent issues with long cluster names. Machine set creation and deletion logic was updated to prevent creation of multiple gateway nodes. Image names are now retrieved from existing machine sets. Fixed stale iptables rules and a global IP leak which can sometimes happen when a GlobalEgressIP is created and immediately deleted as part of stress testing. Label gateway nodes as infrastructure with node-role.kubernetes.io/infra=\u0026quot;\u0026quot; to prevent them from counting against OpenShift subscriptions. Submariner now handles out-of-order remote endpoint notifications properly in various handlers associated with the Route Agent component. Submariner now ensures that reverse path filtering setting is properly applied on the vx-submariner and vxlan-tunnel interfaces after they are created. This fix was necessary for RHEL 9 nodes where the setting was sometimes getting overwritten. Fixed intermittent failure where gateway connections sometimes don\u0026rsquo;t get established. Fixed an issue whereby the flags for subctl unexport service were not recognized. The subctl diagnose cni command no longer fails for the Calico CNI when the natOutgoing IPPool status is missing. Fixed CVE-2023-28840, CVE-2023-28841, and CVE-2023-28842, which don\u0026rsquo;t affect Submariner but were flagged in deliverables. v0.14.3 (March 16, 2023) This is a bugfix release:\n Fixed issue with Service addresses being resolved before the service is ready. Various fixes for the --image-overrides flag when used with the subctl diagnose command. Fixed overriding the metrics proxy component in subctl join. v0.13.4 (February 24, 2023) This is a bugfix release:\n Changes in Azure cloud prepare: Machine set names are now based on region + UUID and limited to 20 characters to prevent issues with long cluster names. Machine set creation and deletion logic was updated to prevent creation of multiple gateway nodes. Image names are now retrieved from existing machine sets. The namespace is now included in EndpointSlice names to avoid conflicts between services with the same name in multiple namespaces. The subctl gather command now gathers iptables logs for Calico and kindnet CNIs. The subctl cloud prepare command no longer causes errors if the list of ports is empty. Cloud cleanup for OpenStack now identifies and deletes failed MachineSets. Bumped k8s.io/client-go to 0.20.15 to fix CVE-2020-8565. Bumped golang.org/x/crypto to 0.6.0 to fix CVE-2022-27191. Bumped golang.org/x/net to 0.7.0 to fix a number of security issues. v0.14.2 (February 22, 2023) This is a bugfix release:\n Changes in Azure cloud prepare: Machine set names are now based on region + UUID and limited to 20 characters to prevent issues with long cluster names. Machine set creation and deletion logic was updated to prevent creation of multiple gateway nodes. Image names are now retrieved from existing machine sets. Fixed a socket permission denied error in external network end-to-end tests. The subctl gather command now gathers iptables logs for Calico and kindnet CNIs. The subctl cloud prepare command no longer causes errors if the list of ports is empty. subctl operations which deploy images now allow those images to be overridden. The overrides are specified using --image-override: subctl benchmark. 
subctl verify. subctl diagnose sub-commands. The namespace is now included in EndpointSlice names to avoid conflicts between services with the same name in multiple namespaces. Bumped go-restful to 2.16.0 to address CVE-2022-1996. Bumped k8s.io/client-go to 0.20.15 to fix CVE-2020-8565. Bumped golang.org/x/crypto to 0.6.0 to fix CVE-2022-27191. Bumped golang.org/x/net to 0.7.0 to fix a number of security issues. v0.13.3 (December 21, 2022) This is a bugfix release:\n The subctl diagnose kube-proxy-mode command now works with different versions of iproute packages. The following changes were made to pods running subctl diagnose commands in order to allow them to run commands like tcpdump: Made the diagnose pod privileged. Run the diagnose pod with user ID 0. v0.12.3 (December 13, 2022) This is a bugfix release:\n Image version hashes are now 12 characters long, avoiding possible collisions between images. Stopped using cluster-owned tag for AWS cloud prepare, fixing problems with Submariner security groups left over after uninstallation. Support overriding the MTU value used in TCP MSS clamping, allowing fine tuning of MTU when necessary. CNI interface annotations created by Submariner are now removed during uninstallation. Bumped x/text to address CVE-2021-38561 and CVE-2022-32149. Diagnose now validates if the OVNKubernetes CNI is supported by the deployed Submariner. Set DNSPolicy to ClusterFirstWithHostNet for pods that run with host networking. Service Discovery now writes the DNS message response body when it is not a ServerFailure to avoid unnecessary client retries. v0.14.1 (December 9, 2022) This is a bugfix release:\n Stopped using cluster-owned tag for AWS Security Group lookup. Running the subctl diagnose firewall command with individual kubeconfigs will now deploy diagnose pods in the submariner-operator namespace to avoid pod security errors. The periodic public IP watcher is enhanced to use random external servers to resolve the public IP associated with Gateway nodes. The subctl diagnose kube-proxy-mode command now works with different versions of iproute packages. The following changes were made to pods running subctl diagnose commands in order to allow them to run commands like tcpdump: Made the diagnose pod privileged. Run the diagnose pod with user ID 0. v0.13.2 (November 30, 2022) Added support for OpenShift 4.12. Service Discovery now returns a DNS error message in the response body when no matching records are found when queried about clusterset.local. This prevents unnecessary retries. Stopped using cluster-owned tag for AWS Security Group lookup. Stopped using api.ipify.org as the first resolver for public IPs. Extended the ability to customize the default TCP MSS clamping value set by Submariner to non-Globalnet deployments. v0.14.0 (November 21, 2022) New features Users no longer need to open ports 8080 and 8081 on the host for querying metrics. A new submariner-metrics-proxy DaemonSet runs pods on gateway nodes and forwards HTTP requests for metrics services to gateway and Globalnet pods running on the nodes. Gateway and Globalnet pods now listen on ports 32780 and 32781 instead of well-known ports 8080 and 8081 to avoid conflict with any other services that might be using those ports. Users will continue to use the existing submariner-gateway-metrics and submariner-globalnet-metrics services to query the metrics. Added subctl diagnose service-discovery verifications for Service Discovery objects. 
The subctl join command now supports an --air-gapped option that instructs Submariner not to access any external servers for public-ip resolution. Support for simulated \u0026ldquo;air-gapped\u0026rdquo; environments has been added to kind clusters. To use, deploy with USING=air-gap or AIR_GAPPED=true. Support was added in the Shipyard project to easily deploy Submariner with a LoadBalancer type Service in front. To use, simply specify the target (e.g. deploy) with USING=load-balancer or LOAD_BALANCER=true. For kind-based deployments, MetalLB is deployed to provide the capability. The MetalLB version can be specified using METALLB_VERSION=x.y.z. Support was added to force running subctl verify when testing end-to-end, ignoring any local tests. To use this feature, run make e2e using=subctl-verify. Verifications can be now specified using the SUBCTL_VERIFICATIONS flag, instead of relying on the default behavior. e.g.: make e2e using=subctl-verify SUBCTL_VERIFICATIONS=connectivity,service-discovery. kubeconfig handling has been revamped to be consistent across all subctl commands and to match kubectl’s behaviour. The single-context commands, cloud-prepare, deploy-broker, export, join, unexport and uninstall, now all support a --context argument to specify the kubeconfig context to use. kubeconfig files can be specified using either the KUBECONFIG environment variable or the --kubeconfig argument; kubectl defaults will be applied if configured. If no context is specified, the kubeconfig default context will be used. Multiple-context commands which operate on all contexts by default, show and gather, support a --contexts argument which can be used to select one or more contexts; they also support the --context argument to select a single context. Multiple-context commands which operate on specific contexts, benchmark and verify, support a --context argument to specify the originating context, and a --tocontext argument to specify the target context. diagnose operates on all accessible contexts by default, except diagnose firewall inter-cluster and diagnose firewall nat-traversal which rely on an originating context specified by --context and a remote context specified by --remotecontext. Namespace-based commands such as export will use the namespace given using --namespace (-n), if any, or the current namespace in the selected context, if there is one, rather than the default namespace. These commands also support all connection options supported by kubectl, so connections can be configured using command arguments instead of kubeconfigs. Existing options (--kubecontext etc.) are preserved for backwards compatibility, but are deprecated and will be removed in the next release. Other changes The Flannel CNI is now properly identified during join. A new ServiceExport status condition type named Synced was added that indicates whether or not the ServiceImport was successfully synced to the broker. Service Discovery now handles updates to an exported service and updates/deletes the corresponding ServiceImport accordingly. Service Discovery now returns a DNS error message in the response body when no matching records are found when queried about clusterset.local. This prevents unnecessary retries. Cloud cleanup for OpenStack now identifies and deletes failed MachineSets. Privileges of the Route Agent and Gateway pods were reduced as they don’t need to access PersistentVolumeClaims and Secrets. 
The privileged SCC permission for Submariner components in OCP is set now by creating separate ClusterRole and ClusterRoleBinding resources instead of manipulating the system privileged SCC resource. Extended the ability to customize the default TCP MSS clamping value set by Submariner to non-Globalnet deployments. The subctl show command now correctly reports component image versions when image overrides were specified on join. Updates to the subctl gather command: The subctl gather command now creates one subdirectory per cluster instead of embedding the cluster name in each file name. If it’s not given a custom directory, subctl gather stores all its output in a directory named submariner- followed by the current date and time (in UTC) in \u0026ldquo;YYYYMMDDHHmmss\u0026rdquo; format. The subctl gather command now includes the output from ovn-sbctl show which has the chassis-id to hostname mapping that can be used to verify if submariner_router is pinned to the proper Gateway node. v0.13.1 (September 22, 2022) This is a bugfix release:\n Allow broker certificate checks to be disabled for insecure connections, using subctl join --check-broker-certificate=false. Return local cluster IP for headless services. Display proper output message from subctl show brokers when broker is not installed on the cluster. Allow passing DEFAULT_REPO while building subctl. Cleaned up the host routes programmed by OVN RA plugin during uninstall. Support overriding image names per-component to better support downstream builds. Limited Azure machine name lengths to 40 characters. Documented the default cable driver in the subctl join help message. Set DNSPolicy to ClusterFirstWithHostNet for pods that run with HostNetworking: true. Removed hardcoded workerNodeList while querying image for GCP and RHOS cloud preparation steps. Collect the output of ovn-sbctl show in subctl gather. Bumped x/text to address CVE-2021-38561. Set ReadHeaderTimeout (new in Go 1.18) to mitigate potential Slowloris attacks. v0.13.0 (July 18, 2022) New features All Submariner container images are now available for x86-64 and ARM64 architectures. Support was added in subctl cloud prepare to deploy Submariner on OpenShift on Microsoft Azure. This automatically configures the underlying Azure cloud infrastructure to meet Submariner\u0026rsquo;s prerequisites. Added more robust support for connecting clusters that use the OVNKubernetes CNI plugin in non-Globalnet deployments. Note that OVNKubernetes requires the OVN NorthBound DB version to be 6.1.0 or above and older versions are not supported. Also note that the minimum supported OpenShift Container Platform (OCP) version is 4.11. Added support for connecting to Kubernetes headless Services without Pod label selectors in Globalnet deployments. This is useful when you want to point a Service to another Service in a different namespace or external network. When endpoints are manually defined by the user, Submariner automatically routes the traffic and provides DNS resolution. Added a new subctl show brokers command that displays information about the Submariner Brokers installed. The subctl diagnose command was extended to verify inter-cluster connectivity when Submariner is deployed using a LoadBalancer Service. Other changes The submariner-operator namespace is labeled in accordance with KEP-2579: Pod Security Admission Control (default in Kubernetes 1.24) to allow the Pods to be privileged. 
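For reference, the labels in question follow the Pod Security Standards convention, for example pod-security.kubernetes.io/enforce=privileged on the submariner-operator namespace; this is given as an illustration of the mechanism rather than a step users need to perform. 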
The default namespace in which subctl diagnose kubeproxy and subctl diagnose firewall (and subcommands) spawn a Pod has been changed from default to submariner-operator as the latter has all necessary labels needed by the Pod Security Admission Controller. If the user-specified namespace is missing any of these labels, subctl will inform the user about the warnings in the subctl diagnose logs. The Globalnet metrics port will now be opened by default when Globalnet is deployed using subctl cloud prepare. It is now possible to customize the default TCP MSS clamping value set by Submariner in Globalnet deployments. This could be useful in network topologies where MTU issues are seen. To force a particular MSS clamping value use the submariner.io/tcp-clamp-mss node annotation on Gateway nodes, e.g. kubectl annotate node \u0026lt;node_name\u0026gt; submariner.io/tcp-clamp-mss=\u0026lt;value\u0026gt;. v0.12.2 (July 7, 2022) This is a bugfix release:\n The Globalnet metrics port will now be opened by default when Globalnet is deployed using subctl cloud prepare. Submariner ServiceExport now has unique condition types to simplify waiting for readiness. The subctl diagnose command now supports NAT-discovery port validation. The subctl cloud prepare rhos command will now work properly for nodes to which security groups were added manually. The submariner-operator namespace is labeled in accordance with KEP-2579: Pod Security Admission Control (default in Kubernetes 1.24) to allow the Pods to be privileged. The default namespace for the subctl diagnose command was changed to submariner-operator. Submariner pod images are now based on Fedora 36. Fixed issues related to Globalnet and Route-agent pods due to missing grep in the container image. Made secrets for ServiceAccounts compatible with Kubernetes 1.24 onwards. Restart health check pinger if it fails. Fixed intermittent failure when running subctl diagnose firewall metrics. v0.12.1 (May 10, 2022) This is a bugfix release:\n The default image type for a dedicated gateway node is changed from PnTAE.CPU_16_Memory_32768_Disk_80 to PnTAE.CPU_4_Memory_8192_Disk_50 for OpenStack Cloud prepare. subctl gather will now use libreswan as a default cable driver if none is specified in SubmarinerSpec during installation. Sometimes when Submariner, with Globalnet enabled, is used to connect onPrem clusters with Public clusters, MTU issues are seen. This was particularly noticed when the underlying platform uses nftables on the host nodes. This release fixes the MTU issues by explicitly clamping the TCP MSS to a fixed value derived from the default interface MTU subtracted with the cable-driver overhead. As part of subctl uninstall operation, we now remove the submariner.io/globalIp annotation that is added on the gateway node. v0.12.0 (March 21, 2022) New features Added a new subctl uninstall command that removes all Submariner components and dataplane artifacts, such as iptables rules and routing table entries, from a cluster. Added a new subctl unexport command that stops exporting a previously exported service. Added new subctl cloud prepare and subctl cloud cleanup commands for the Red Hat OpenStack Platform (RHOS). Added new metrics: Globalnet: Count of global Egress IPs allocated at Cluster scope, namespace scope, and for selected pods per CIDR. Globalnet: Count of global Ingress IPs allocated for Pods/Services per CIDR. Service Discovery: Count of DNS queries handled by Lighthouse. 
Added support for Globalnet objects verification using the subctl diagnose command. Added support for --broker-namespace flag while deploying the Broker. Added support for running subctl diagnose on single node clusters. Added support for running subctl diagnose from a pod in a cluster. subctl cloud prepare now deploys a dedicated gateway node as a default option on GCP and OpenStack platforms. subctl show now shows information about the Broker CR in the cluster. subctl gather now collects Globalnet information. subctl diagnose displays a warning when a generic CNI network plugin is detected. Bug fixes Calico is now correctly detected when used as a network plugin in OpenShift. Services without selectors can now be resolved across the ClusterSet. subctl diagnose firewall inter-cluster now works correctly for the VXLAN cable driver. Other changes The broker token and IPsec PSK are now stored in secrets which are used in preference to the corresponding fields in the Submariner CR, which are now deprecated. For backwards compatibility and to simplify upgrades, the deprecated fields are still populated but will be removed in 0.13. Globalnet no longer uses kube-proxy chains in support of exported services. Instead, it now creates an internal ClusterIP Service with the ExternalIPs set to the global IP assigned to the corresponding Service. Some Kubernetes distributions don\u0026rsquo;t allow Services with ExternalIPs by default for security reasons. Users must follow the Globalnet prerequisites to allow the Globalnet controller to create/update/delete Services with ExternalIPs. Known Issues When using the dot character in the cluster name, service discovery doesn’t work (#707). On OpenShift, Globalnet metrics do not appear automatically. This can be fixed by manually opening the Globalnet metrics port, TCP/8081. When using subctl cloud prepare on Red Hat OpenStack Platform (RHOS), if a dedicated gateway is used, the Submariner gateway security group and Submariner internal security group are associated with the wrong node. This can be resolved by manually adding the security groups using OpenStack CLI or Web UI (#227). v0.11.2 (February 1, 2022) This release doesn’t contain any user-facing changes; it fixes internal release issues.\nv0.11.1 (January 10, 2022) This is a bugfix release:\n All exported headless Services are now given a Globalnet ingress IP when Globalnet is enabled (#1634). Deployments without Globalnet no longer fail because of an invalid GlobalCIDR range (#1668). subctl gather no longer panics when retrieving some Pod container status information (#1684). v0.11.0 (October 28, 2021) This release mainly focused on stability, bug fixes, and improving the integration between Submariner and Open Cluster Management via the Submariner addon.\n subctl cloud prepare command now supports Google Cloud Platform as well as generic Kubernetes clusters. --ignore-requirements flag was added to the subctl join command which ignores Submariner requirements checks. v0.10.1 (August 12, 2021) Inter-connecting clusters with overlapping CIDRs (Globalnet): The initial Globalnet implementation is deprecated in favor of a new implementation which is more performant and scalable. Globalnet now allows users to explicitly request global IPs at the cluster level, for specific namespaces, or for specific Pods. The new Globalnet implementation is not backward-compatible with the initial Globalnet solution and there is no upgrade path. Globalnet now supports headless Services. 
The default globalnetCIDR range is changed from 169.254.0.0/16 to 242.0.0.0/8 and each cluster is allocated 64K Global IPs. Globalnet no longer annotates Pods and Services with global IPs but stores this information in ClusterGlobalEgressIP, GlobalEgressIP, and GlobalIngressIP resources. A new experimental load balancer mode was introduced which is designed to simplify the deployment of Submariner in cloud environments where worker nodes do not have access to a dedicated public IP. In this mode, the Submariner Operator creates a LoadBalancer Service that exposes both the encapsulation dataplane port as well as the NAT-T discovery port. This mode can be enabled by using subctl join --load-balancer. Submariner now supports inter-cluster connections based on the VXLAN protocol. This is useful in cases where encryption, such as with IPsec or WireGuard, is not desired, for example on connections that are already encrypted where the overhead of double encryption is not necessary or performant. This can be enabled by setting the --cable-driver vxlan option during subctl join. Submariner now supports SRV DNS queries for both ClusterIP and Headless Services. This facilitates Service discovery using port name and protocol. For a ClusterIP Service, this resolves to the port number and the domain name. For a Headless Service, the name resolves to multiple answers, one for each Pod backing the Service. Improved the Submariner integration with the Calico CNI. subctl benchmark latency and subctl benchmark throughput now take a new flag --kubecontexts as input instead of two kubeconfig files. v0.9.1 (June 29, 2021) The --kubecontext flag in subctl commands now works properly. Simplified subctl cloud prepare aws to extract the credentials, infrastructure ID, and region from a local configuration file (if available). The natt-discovery-port and udp-port options can now be set via node annotations. v0.9.0 (April 30, 2021) The gateway Pod has been renamed from submariner to submariner-gateway. The Helm charts now use Submariner\u0026rsquo;s Operator to deploy and manage Submariner. Broker creation is now managed by the Operator instead of subctl. Each Submariner Pod now has its own service account with appropriate privileges. The Lighthouse CoreDNS server metrics are now exposed. The submariner_connections metric is renamed to submariner_requested_connections. The service-discovery flag of subctl deploy-broker has been deprecated in favor of the components flag. For cases in which cross-cluster connectivity is provided without Submariner, subctl can now just deploy Service Discovery. Improved Service CIDR discovery for K3s deployments. All Submariner Prometheus metrics are now prefixed with submariner_. With Globalnet deployments, Global IPs are now assigned to exported Services only. Previously, Globalnet annotated every Service in the cluster, whether or not it was exported. The name of the CoreDNS custom ConfigMap for service discovery can now be specified on subctl join. The strongswan cable driver that was deprecated in the v0.8.0 release is now removed. The Lighthouse-specific API is now removed in favor of Kubernetes Multicluster Services API. A new tool, subctl diagnose, was added that detects issues with the Submariner deployment that may prevent it from working properly. subctl commands now check if the subctl version is compatible with the deployed Submariner version. New flags, repository and version, were added to the subctl deploy-broker command. 
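For example, a Broker deployment that pulls images from a custom registry might use something like subctl deploy-broker --repository my.registry.example/submariner --version v0.9.0; the registry shown here is purely illustrative. 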
New Lighthouse metrics were added that track the number of services imported from and exported to other clusters. subctl show connections now also shows average rtt values. A new tool, subctl gather, was added that collects various information from clusters to aid in troubleshooting a Submariner deployment. Each gateway can now use a different port for IPsec/WireGuard communication via the gateway.submariner.io/udp-port node label. Gateways now implement a NAT-Traversal (NAT-T) discovery protocol that can be enabled via the gateway.submariner.io/natt-discovery-port node label. A cluster can now be configured in IPsec server mode via the preferred-server flag on subctl join. v0.8.1 (February 11, 2021) Submariner Gateway Health Check is now supported with Globalnet deployments. Added support for deploying OVN in kind using make clusters using=ovn for E2E testing and development environments. Added support for debugging the Libreswan cable driver. Fixed the cable driver label in the Prometheus latency metrics. Added support for non-TLS connections for OVN databases. Services can now be recreated without needing to recreate their associated ServiceExport objects. Service Discovery no longer depends on Submariner-provided connectivity. Improved Service Discovery verification suite. The ServiceImport object now includes Port information from the original Service. subctl show now indicates when the target cluster doesn\u0026rsquo;t have Submariner installed. v0.8.0 (December 22, 2020) Added support for connecting clusters that use the OVNKubernetes CNI plugin in non-Globalnet deployments. Support for Globalnet will be available in a future release. The active Gateway now performs periodic health checks on the connections to remote clusters, updates the Gateway connection status, and adds latency statistics. Gateways now export the following connection metrics on TCP port 8080 which can be used with Prometheus. These are currently only supported for the Libreswan cable driver: The count of bytes transmitted and received between Gateways. The number of connections between Gateways and their corresponding status. The timestamp of the last successful connection established between Gateways. The RTT latency between Gateways. The Libreswan cable driver is now the default. The strongSwan cable driver is deprecated and will be removed in a future release. The Lighthouse DNS always returns the IP address of the local exported ClusterIP Service, if available, otherwise it load-balances between the same Services exported from other clusters in a round-robin fashion. Lighthouse has fully migrated to use the proposed Kubernetes Multicluster Services API (ServiceExport and ServiceImport). The Lighthouse-specific API is deprecated and will be removed in a future release. On upgrade from v0.7.0, exported Services will automatically be migrated to the new CRDs. Broker resiliency has been improved. The dataplane is no longer affected in any way if the Broker is unavailable. The subctl benchmark tests now accept a verbose flag to enable full logging. Otherwise only the results are presented. v0.7.0 StatefulSet support for service discovery and benchmark tooling This release mainly focused on adding support for StatefulSets in Lighthouse for service discovery and adding new subctl commands to benchmark the network performance across clusters.\n Lighthouse enhancements/changes: Added support for accessing individual Pods in a StatefulSet using their host names. 
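For example, a Pod named web-0 backing a headless Service nginx-ss in the default namespace of cluster-a becomes resolvable as web-0.cluster-a.nginx-ss.default.svc.clusterset.local; the names here are illustrative and mirror the StatefulSet example in the quickstart guides. 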
A Service in a specific cluster can now be explicitly queried. Removed support for the supercluster.local domain to align with the Kubernetes MultiCluster Service API. Added new subctl benchmark commands for measuring the throughput and round trip latency between two Pods in separate clusters or within the same cluster. The data path is no longer disrupted when the Globalnet Pod is restarted. The Route Agent component now runs on all worker nodes including those with taints. When upgrading to 0.7.0 on a cluster already running Submariner, the current state must be cleared:\n Remove the Submariner namespaces: kubectl delete ns submariner-operator submariner-k8s-broker Remove the Submariner cluster roles: kubectl delete clusterroles submariner-lighthouse submariner-operator submariner-operator:globalnet v0.6.0 Improved Submariner High Availability and various Lighthouse enhancements This release mainly focused on support for headless Services in Lighthouse, as well as improving Submariner\u0026rsquo;s High Availability (HA).\n The DNS domains have been updated from \u0026lt;service\u0026gt;.\u0026lt;namespace\u0026gt;.svc.supercluster.local to \u0026lt;service\u0026gt;.\u0026lt;namespace\u0026gt;.svc.clusterset.local to align with the change in Kubernetes Multicluster Service API. Both domains will be supported for 0.6.0 but 0.7.0 will remove support for supercluster.local. Please update your deployments and applications.\n Lighthouse has been enhanced to: Be aware of the local cluster Gateway connectivity so as not to announce the IP addresses for disconnected remote clusters. Support headless Services for non-Globalnet deployments. Support for Globalnet will be available in a future release. Be aware of a Service\u0026rsquo;s backend Pods so as not to announce IP addresses for Services that have no active Pods. Use Round Robin IP resolution for Services available in multiple clusters. Enable service discovery by default for subctl deployments. subctl auto-detects the cluster ID from the kubeconfig file\u0026rsquo;s information when possible. Submariner\u0026rsquo;s Pods now shut down gracefully and do proper cleanup which reduces downtime during Gateway failover. The Operator now automatically exports Prometheus metrics; these integrate seamlessly with OpenShift Prometheus if user workload monitoring is enabled, and can be included in any other Prometheus setup. Minimum Kubernetes version is now 1.17. HostNetwork to remote Service connectivity fixes for AWS clusters. The project\u0026rsquo;s codebase quality and readability has been improved using various linters. v0.5.0 Lighthouse service discovery alignment This release mainly focused on continuing the alignment of Lighthouse\u0026rsquo;s service discovery support with the Kubernetes Multicluster Services KEP.\n Lighthouse has been modified per the Kubernetes Multicluster Services KEP as follows: The MultiClusterService resource has been replaced by ServiceImport. The ServiceExport resource is now updated with status information as lifecycle events occur. Lighthouse now allows a ServiceExport resource to be created prior to the associated Service. Network discovery was moved from subctl to the Submariner Operator. Several new commands were added to subctl: export service, show versions, show connections, show networks, show endpoints, and show gateways. The subctl info command has been removed in lieu of the new show networks command. 
The Globalnet configuration has been moved from the broker-info.subm file to a ConfigMap resource stored on the Broker cluster. Therefore, the new subctl cannot be used on brownfield Globalnet deployments where this information was stored as part of broker-info.subm. subctl now supports joining multiple clusters in parallel without having to explicitly specify the globalnet-cidr for the cluster to work around this issue. The globalnet-cidr will automatically be allocated by subctl for each cluster. The separate --operator-image parameter has been removed from subctl join and the --repository and --version parameters are now used for all images. The Submariner Operator status now includes Gateway information. Closed technical requirements for Submariner to become a CNCF project, including Developer Certificate of Origin compliance and additional source code linting. v0.4.0 Libreswan cable driver, Kubernetes multicluster service discovery This release is mainly focused on Submariner\u0026rsquo;s Libreswan cable driver implementation, as well as standardizing Lighthouse\u0026rsquo;s service discovery support with the Kubernetes Multicluster Services KEP.\n Libreswan IPsec cable driver is available for testing and is covered in Submariner\u0026rsquo;s CI. Lighthouse has been modified per the Kubernetes Multicluster Services KEP as follows: A ServiceExport object needs to be created alongside any Service that is intended to be exported to participant clusters. Supercluster services can be accessed with \u0026lt;service-name\u0026gt;.\u0026lt;namespace\u0026gt;.svc.clusterset.local. Globalnet overlapping CIDR support improvements and bug fixes. Multiple CI improvements implemented from Shipyard. CI tests are now run via GitHub Actions. Submariner\u0026rsquo;s Operator now completely handles the Lighthouse deployment via the ServiceDiscovery CRD. subctl verify is now available for connectivity, service-discovery and gateway-failover. v0.3.0 Lighthouse Service Discovery without KubeFed This release is focused on removing the KubeFed dependency from Lighthouse, improving the user experience, and adding experimental WireGuard support as an alternative to IPsec.\n Lighthouse no longer depends on KubeFed. All metadata exchange is handled over the Broker as MultiClusterService CRs. Experimental WireGuard support has been added as a pluggable CableDriver option in addition to the current default IPsec. Submariner reports the active and passive gateways as a gateway.submariner.io resource. The Submariner Operator reports a detailed status of the deployment. The gateway redundancy/failover tests are now enabled and stable in CI. Globalnet hostNetwork to remote globalIP is now supported. Previously, when a Pod used hostNetworking it was unable to connect to a remote Service via globalIP. A GlobalCIDR can be manually specified when joining a cluster with Globalnet enabled. This enables CI speed optimizations via better parallelism. Operator and subctl are more robust via standard retries on updates. subctl creates a new individual access token for every new joined cluster. v0.2.0 Overlapping CIDR support This release is focused on overlapping CIDR support between clusters.\n Support for overlapping CIDRs between clusters (Globalnet). Enhanced end-to-end scripts, which will be shared between repositories in the Shipyard project (ongoing work). Improved end-to-end deployment by using a local registry. Refactoring to support pluggable drivers (in preparation for WireGuard). 
v0.1.1 Submariner with more light This release is focused on stability for Lighthouse.\n Cleaner logging for submariner-engine. Cleaner logging for submariner-route-agent. Fixed issue with wrong token stored in .subm file (#244). Added flag to disable the OpenShift CVO (#235). Fixed several service discovery bugs (#194, #167). Fixed several panics on nil network discovery. Added checks to ensure the CIDRs for joining cluster don\u0026rsquo;t overlap with existing ones. Fixed context handling related to service discovery/KubeFed (#180). Use the correct CoreDNS image for OpenShift. v0.1.0 Submariner with some light This release has focused on stability, bugfixes and making Lighthouse available as a developer preview via subctl deployments.\n Several bugfixes and enhancements around HA failover (#346, #348, #332). Migrated to DaemonSets for Submariner gateway deployment. Added support for hostNetwork to remote Pod/Service connectivity (#298). Auto detection and configuration of MTU for vx-submariner, jumbo frames support (#301). Support for updated strongSwan (#288). Better iptables detection for some hosts (#227). subctl and the Submariner Operator have the following improvements:\n Support for verify-connectivity checks between two connected clusters. Deployment of Submariner gateways based on DaemonSet instead of Deployment. Renamed submariner Pods to submariner-gateway Pods for clarity. Print version details on crash (subctl). Stopped storing IPsec key on Broker during deploy-broker, now it\u0026rsquo;s only contained into the .subm file. Version command for subctl. Nicer spinners during deployment (thanks to kind). v0.0.3 \u0026ndash; KubeCon NA 2019 Submariner has been greatly enhanced to allow administrators to deploy into Kubernetes clusters without the necessity for Layer 2 adjacency for nodes. Submariner now allows for VXLAN interconnectivity between nodes (facilitated by the route agent). subctl was created to make deployment of Submariner easier.\nv0.0.2 Second Submariner release v0.0.1 First Submariner release "
},
{
"uri": "/operations/known-issues/",
"title": "Known Issues",
"tags": [],
"description": "",
"content": "General The oldest Kubernetes version for which Submariner is known to work is 1.19 (1.21 for Service Discovery). Submariner only supports kube-proxy in iptables mode. IPVS is not supported at this time. CoreDNS is supported out of the box for *.clusterset.local service discovery. KubeDNS needs manual configuration. Please refer to the GKE Quickstart Guide for more information. Clusters deployed with the Calico network plug-in require further configuration to be compatible with Submariner. Please refer to the Calico-specific deployment instructions. The Gateway load balancer support is still experimental and needs more testing. Submariner Gateway metrics submariner_gateway_rx_bytes and submariner_gateway_tx_bytes will not be collected when using the VXLAN cable driver. Submariner does not support IPv6-only setups. On dual-stack setups, it only allocates IPv4 addresses. Globalnet The subctl benchmark latency command is not compatible with Globalnet deployments at this time. Deploying with Helm on OpenShift When deploying Submariner using Helm on OpenShift, Submariner needs to be granted the appropriate security context for its service accounts:\noc adm policy add-scc-to-user privileged system:serviceaccount:submariner:submariner-routeagent oc adm policy add-scc-to-user privileged system:serviceaccount:submariner:submariner-gateway oc adm policy add-scc-to-user privileged system:serviceaccount:submariner:submariner-globalnet This is handled automatically in subctl and the Submariner addon.\n"
},
{
"uri": "/development/release-process/",
"title": "Release Process",
"tags": [],
"description": "",
"content": "These docs describe how to create a Submariner release.\nRelease Concepts Project Release Order Submariner\u0026rsquo;s projects have a dependency hierarchy among their Go libraries and container images, which drives their release order.\nThe Go dependency hierarchy is:\nshipyard \u0026lt;- admiral \u0026lt;- [submariner, lighthouse, cloud-prepare] \u0026lt;- submariner-operator \u0026lt;- subctl\nThe container image dependency hierarchy is:\nsubctl binary \u0026lt;- shipyard-dapper-base image \u0026lt;- [admiral, cloud-prepare, submariner, lighthouse, submariner-operator]\nProjects in brackets are siblings and do not depend on each other. Dependencies of siblings require all siblings to have aligned versions.\nChoosing Versions Version numbers are required to be formatted following the schema norms where they are used.\n Git: vx.y.z (example: v0.8.0) Containers: x.y.z (example: 0.8.0) Stable branches: release-x.y (example: release-0.8) Milestone releases: Append -mN starting at 1 (example: v0.8.0-m1) Release candidates: Append -rcN starting at 0 (example: v0.8.0-rc0) Single-project testing release: Append -preN starting at 0 (example: v0.8.0-pre0) Release errors: Append .N starting at 1 (example: v0.8.0-m1.1) Creating Releases The following sections are an ordered series of steps to create a Submariner release.\nThe release process is mostly automated and uses a YAML file created in the releases repository that describes the release. This file is updated for each step in the release process.\nOnce the changes for a step are reviewed and merged, a CI job will run to create the release(s) for the step and create the required pull requests in preparation for the next step to be reviewed and merged. Once all these pull requests have been merged, you can continue onto the next step.\nFor most projects, after a release is created, another job will be initiated to build release artifacts and publish to Quay. This will take several minutes. You can monitor the progress from the project\u0026rsquo;s main page. In the branches/tags pull-down above the file list heading, select the tag for the new version. A small yellow circle icon should be present to the right of the file list heading which indicates a job is in progress. You can click it to see details. There may be several checks for the job listed but the important one is \u0026ldquo;Release Images\u0026rdquo;. When complete, the indicator icon will change to either a green check mark on success or a red X on failure. A failure likely means the artifacts were not published to Quay, in which case select the failed check, inspect the logs, correct the issue and re-run the job.\nRelease Notes (Final Releases) If you\u0026rsquo;re creating a release meant for general consumption, not a milestone or release candidate, release notes must also be created.\nIt is expected that release notes for any given release will accumulate on the corresponding release-notes-... branch. Once the release is ready, a new branch should be pulled from this release note branch, named merge-release-notes-... with the full version (e.g. merge-release-notes-0.15.2); a PR can then be opened on devel using this branch. In most cases conflicts will need to be resolved before the branch can be merged.\nThe release notes are maintained in reverse chronological order. Each version should have its release date added in the release note merge PR.\nOur GitHub configuration requires a rebase before merging PRs, which means we need to use git rebase instead of git merge. 
See the 0.16.0 PR for an example. If additional changes need to be added, they should be added to the release notes branch first. If the initial PR is still pending, they can then be rebased onto the PR\u0026rsquo;s branch. If the initial PR has been merged, they can be rebased onto devel and submitted with an additional PR.\nUpdating Dependencies Verify that all dependencies are up to date before branch cutting at the first release candidate. See the CI Maintenance docs for details about versions that must be manually maintained.\nChecking for dependent PRs Before starting the release process, check the corresponding release issue for dependent PRs or issues; these are PRs which are supposed to be reviewed, and issues which are supposed to be addressed, before the release starts.\nAutomated Release Creation Process Most of the release can be done in a series of mostly-automated steps. After each step, a Pull Request is sent with the correct YAML content for the release, this needs to be reviewed. Once the pull request is merged, the release process will continue automatically and the next step can be initiated shortly after making sure the release jobs on the releases and any participating repositories are done.\nStarting with 0.13, when creating a release for a stable version (release-\u0026lt;major\u0026gt;.\u0026lt;minor\u0026gt;), the make release commands must be run on a branch based off the associated stable release branch.\nThe GITHUB_TOKEN environment variable in the shell you\u0026rsquo;re using for the automation must be set to a Personal Access Token you create. The token needs at least public_repo scope for the automated release to work.\nexport GITHUB_TOKEN=\u0026lt;token\u0026gt; To run the automated release, simply clone the releases repository and execute:\nmake release VERSION=\u0026#34;0.8.0\u0026#34; Make sure to specify the proper version you\u0026rsquo;re intending to release (e.g. for rc0 specify VERSION=\u0026quot;0.8.0-rc0\u0026quot;).\nBy default, the action will try to push to the GitHub account used in the origin remote. If you want to use a specific GitHub account, set GITHUB_ACTOR to the desired account, e.g.\nmake release VERSION=\u0026#34;0.8.0\u0026#34; GITHUB_ACTOR=\u0026#34;octocat\u0026#34; You can run the process without pushing the PR automatically (obviating the need to set GITHUB_TOKEN). To do so, run the make command with dryrun=true.\n The command runs, gathers the data for the release, updates the release YAML and pushes it for review. Once the review process is done, merge the PR. Pull requests will then be created for all dependent projects to update them to the new version. The automation will leave a comment with a list of the version-bump PRs for dependent project in the release PR that was just merged. Make sure all those PRs are merged and their release jobs pass (see the Actions tab of the repository on GitHub) then proceed to the next release phase by running the same command again.\nOn an rc0 release, stable branches are created for the release across all repositories (including releases). After the first invocation, the command needs to be run on branches based on the correct stable branch (release-\u0026lt;major\u0026gt;.\u0026lt;minor\u0026gt;).\n Once there isn\u0026rsquo;t anything else to do, the command will inform you. 
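As an example of the dry-run mode described above, the following runs the release process for a release candidate without pushing the pull request (so GITHUB_TOKEN does not need to be set):\nmake release VERSION=\u0026#34;0.8.0-rc0\u0026#34; dryrun=true 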
At this point, continue manually with any steps not automated yet, starting with Verify Release.\nManual Release Creation Process These instructions are here as a backup in case the automated creation process has problems, and to serve as a guide.\nStable Releases: Create Stable Branches If you\u0026rsquo;re creating a stable release, you need to create a stable branch for backports in each repository. Milestone releases don\u0026rsquo;t receive backports and therefore don\u0026rsquo;t need branches.\nThe release automation process can create stable branches for you. To do so, navigate to the releases repository.\n Create a new file in the releases directory (you can copy the example.yaml file). For our example, we\u0026rsquo;ll name it v0.8.0.yaml.\n Fill in the version/name/branch fields for the release, following the naming scheme below. The status field must be set to branch for this phase.\nversion: v0.8.0 name: 0.8.0 branch: release-0.8 status: branch Commit your changes, create a pull request, and have it reviewed.\n Once the pull request is merged, it will trigger a CI job to create the stable branches and pin them to Shipyard on that stable branch.\nStep 1: Create Shipyard Release Navigate to the releases repository.\n Create a new file in the releases directory (you can copy the example.yaml file). For our example, we\u0026rsquo;ll name it v0.8.0.yaml.\n Fill in the general fields for the release with the status field set to shipyard. Also add the shipyard component with the hash of the desired or latest commit ID on which to base the release. To obtain the latest, first navigate to the Shipyard project. The heading above the file list shows the latest commit on the devel branch including the first 7 hex digits of the commit ID hash.\nIf this is not a final release, set the pre-release field to true (that is uncomment the pre-release line below). This includes release candidates. This is important so it is not labeled as the Latest release in GitHub.\nWhen releasing on a stable branch, make sure to specify the branch as outlined below. Otherwise, omit it.\nversion: v0.8.0 name: 0.8.0 #pre-release: true branch: release-0.8 status: shipyard components: shipyard: \u0026lt;hash goes here\u0026gt; Commit your changes, create a pull request, and have it reviewed.\n Verify:\n The releases/release job passed. The Shipyard release was created. The submariner/shipyard-dapper-base image is on Quay. Pull requests will be created for projects that consume Shipyard to update them to the new version in preparation for the subsequent steps. The automation will leave a comment with a list of them. Make sure all those PRs are merged and their release jobs pass.\n Step 2: Create Admiral Release Once the pull request to pin Admiral to the new Shipyard version is merged, we can proceed to updating the release YAML file to create an Admiral release.\n Edit the release yaml file (v0.8.0.yaml). Update the status field to admiral and add the admiral component with the latest commit ID hash:\n-status: shipyard +status: admiral components: shipyard: \u0026lt;hash goes here\u0026gt; + admiral: \u0026lt;hash goes here\u0026gt; Commit your changes, create a pull request, and have it reviewed.\n Verify:\n The releases/release job passed. The Admiral release was created. Pull requests will be created for projects that consume Admiral to update them to the new version in preparation for the subsequent steps. The automation will leave a comment with a list of them. 
Make sure all those PRs are merged and their release jobs pass.\n Step 3: Create cloud-prepare, Lighthouse, and Submariner Releases Once the pull requests to pin the cloud-prepare, Lighthouse and Submariner projects to the new Admiral version are merged:\n Update the release YAML file status field to projects and add the submariner, cloud-prepare and lighthouse components with their latest commit ID hashes:\n-status: admiral +status: projects components: shipyard: \u0026lt;hash goes here\u0026gt; admiral: \u0026lt;hash goes here\u0026gt; + cloud-prepare: \u0026lt;hash goes here\u0026gt; + lighthouse: \u0026lt;hash goes here\u0026gt; + submariner: \u0026lt;hash goes here\u0026gt; Commit your changes, create a pull request, and have it reviewed.\n Verify:\n The releases/release job passed. The cloud-prepare release was created. The Lighthouse release was created. The Submariner release was created. The submariner/submariner-gateway image is on Quay. The submariner/submariner-route-agent image is on Quay. The submariner/submariner-globalnet image is on Quay. The submariner/submariner-networkplugin-syncer image is on Quay. The submariner/lighthouse-agent image is on Quay. The submariner/lighthouse-coredns image is on Quay. Automation will create a pull request to pin submariner-operator to the released versions. Make sure that the PR is merged and the release job passes.\n Step 4: Create Operator and Charts Releases Once the pull request to pin submariner-operator has been merged, we can create the submariner-operator and submariner-charts releases:\n Update the release YAML file status field to installers. Add the submariner-operator and submariner-charts components with their latest commit ID hashes.\n-status: projects +status: installers components: shipyard: \u0026lt;hash goes here\u0026gt; admiral: \u0026lt;hash goes here\u0026gt; cloud-prepare: \u0026lt;hash goes here\u0026gt; lighthouse: \u0026lt;hash goes here\u0026gt; submariner: \u0026lt;hash goes here\u0026gt; + submariner-charts: \u0026lt;hash goes here\u0026gt; + submariner-operator: \u0026lt;hash goes here\u0026gt; Commit your changes, create a pull request, and have it reviewed.\n Verify:\n The submariner-operator release was created. The submariner/submariner-operator image is on Quay. Step 5: Create subctl Release Once the submariner-operator and submariner-charts releases are complete, we can create the final release:\n Update the release YAML file status field to released. Add the subctl component with its latest commit ID hash.\n-status: installers +status: released components: shipyard: \u0026lt;hash goes here\u0026gt; admiral: \u0026lt;hash goes here\u0026gt; cloud-prepare: \u0026lt;hash goes here\u0026gt; lighthouse: \u0026lt;hash goes here\u0026gt; submariner: \u0026lt;hash goes here\u0026gt; submariner-charts: \u0026lt;hash goes here\u0026gt; submariner-operator: \u0026lt;hash goes here\u0026gt; + subctl: \u0026lt;hash goes here\u0026gt; Commit your changes, create a pull request, and have it reviewed.\n Verify:\n The releases/release job passed. The subctl artifacts were released If the release wasn\u0026rsquo;t marked as a pre-release, the releases/release job will also create pull requests in each consuming project to unpin the Shipyard Dapper base image version, that is set it back to devel. 
For ongoing development we want each project to automatically pick up the latest changes to the base image.\n Step 6: Verify Release You can follow any of the quick start guides.\nStep 7: Update OperatorHub.io The k8s-operatorhub/community-operators Git repository is a source for sharing Kubernetes Operators with the broader community via OperatorHub.io. OpenShift users will find Submariner\u0026rsquo;s Operator in the official Red Hat catalog.\n Clone the submariner-operator repository.\n Make sure you have operator-sdk v1 installed.\n Generate new package manifests:\nmake packagemanifests VERSION=${new_version} FROM_VERSION=${previous_version} CHANNEL=${channel} For example:\nmake packagemanifests VERSION=0.11.1 FROM_VERSION=0.11.0 CHANNEL=alpha-0.11 Generated package manifests should be in /packagemanifests/${VERSION}/.\n Fork and clone the k8s-operatorhub/community-operators repository.\n Update the Kubernetes Operator:\n Copy the generated package from Step 3 into operators/submariner.\n Copy the generated package definition /packagemanifests/submariner.package.yaml into operators/submariner/.\n Test the Operator by running:\nOPP_AUTO_PACKAGEMANIFEST_CLUSTER_VERSION_LABEL=1 OPP_PRODUCTION_TYPE=k8s \\ curl -sL https://raw.githubusercontent.com/redhat-openshift-ecosystem/community-operators-pipeline/ci/latest/ci/scripts/opp.sh | bash \\ -s -- all operators/submariner/${VERSION} Preview the Operator on OperatorHub.io\n Once everything is fine, review this checklist and create a new PR on k8s-operatorhub/community-operators.\n For more details, check the full documentation.\n Step 8: Announce Release E-Mail Once the release and release notes are published, make an announcement to both Submariner mailing lists.\n submariner-dev submariner-users See the v0.8.0 email example.\nTwitter Synthesize the release notes and summarize the key points in a Tweet. Link to the release notes for details.\n @submarinerio See the v0.8.0 Tweet example.\n"
},
{
"uri": "/getting-started/architecture/service-discovery/",
"title": "Service Discovery",
"tags": [],
"description": "",
"content": "The Lighthouse project provides DNS discovery for Kubernetes clusters connected by Submariner in multi-cluster environments. Lighthouse implements the Kubernetes Multi-Cluster Service APIs.\nArchitecture The below diagram shows the basic Lighthouse architecture:\nLighthouse Agent The Lighthouse Agent runs in every cluster and accesses the Kubernetes API server running in the Broker cluster to exchange service metadata information with other clusters. Local Service information is exported to the Broker and Service information from other clusters is imported.\nAgent Workflow The workflow is as follows:\n Lighthouse Agent connects to the Broker\u0026rsquo;s Kubernetes API server. For every Service in the local cluster for which a ServiceExport has been created, the Agent creates ServiceImport and EndpointSlice resources and exports them to the Broker to be consumed by other clusters. For every resource in the Broker exported from another cluster, it creates a copy of it in the local cluster. Lighthouse DNS Server The Lighthouse DNS server runs as an external DNS server which owns the domain clusterset.local. CoreDNS is configured to forward any request sent to clusterset.local to the Lighthouse DNS server, which uses the ServiceImport and EndpointSlice resources that are distributed by the controller to build an address cache for DNS resolution. The Lighthouse DNS server supports queries using an A record and an SRV record.\nWhen a single Service is deployed to multiple clusters, Lighthouse DNS server prefers the local cluster first before routing the traffic to other remote clusters in a round-robin fashion.\n Server Workflow The workflow is as follows:\n A Pod tries to resolve a Service name using the domain name clusterset.local. CoreDNS forwards the request to the Lighthouse DNS server. The Lighthouse DNS server will use its address cache to try to resolve the request. If a record exists it will be returned, else an NXDomain error will be returned. "
},
{
"uri": "/getting-started/quickstart/openshift/vsphere-aws/",
"title": "Hybrid vSphere and AWS",
"tags": [],
"description": "",
"content": "This quickstart guide covers the necessary steps to deploy two OpenShift Container Platform (OCP) clusters: one on VMware vSphere with user provisioned infrastructure (UPI) and the other one on AWS with full stack automation, also known as installer-provisioned infrastructure (IPI). Once the OpenShift clusters are deployed, we deploy Submariner with Service Discovery to interconnect the two clusters.\nPrerequisites Before we begin, the following tools need to be downloaded and added to your $PATH:\n OpenShift installer, pull secret, and command line interface. All can be downloaded from here. AWS CLI which can be downloaded from here. Please ensure that the tools you downloaded above are compatible with your OpenShift Container Platform version. For more information, please refer to the official OpenShift documentation.\n Create and Deploy cluster-a on vSphere (On-Prem) In this step you will deploy cluster-a using the default IP CIDR ranges:\n Pod CIDR Service CIDR 10.128.0.0/14 172.30.0.0/16 Before you deploy an OpenShift Container Platform cluster that uses user-provisioned infrastructure, you must create the underlying infrastructure. Follow the OpenShift documenation for installation instructions on supported versions of vSphere.\nSubmariner Gateway nodes need to be able to accept IPsec traffic. For on-premises clusters behind corporate firewalls, the default IPsec UDP ports might be blocked. To overcome this, Submariner supports NAT Traversal (NAT-T) with the option to set custom non-standard ports. In this example, we use UDP 4501 and UDP 501. Ensure that those ports are allowed on the gateway node and on the corporate firewall.\nSubmariner also uses VXLAN to encapsulate traffic from the worker and master nodes to the Gateway nodes. Ensure that firewall configuration on the vSphere cluster allows 4800/UDP across all nodes in the cluster in both directions.\n Protocol Port Description UDP 4800 Overlay network for inter-cluster traffic UDP 4501 IPsec traffic UDP 501 IPsec traffic When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.\nCreate and Deploy cluster-b on AWS Setup Your AWS Profile Configure the AWS CLI with the settings required to interact with AWS. These include your security credentials, the default AWS Region, and the default output format:\n$ aws configure AWS Access Key ID [None]: .... AWS Secret Access Key [None]: .... Default region name [None]: .... Default output format [None]: text Create and Deploy cluster-b In this step you will deploy cluster-b, modifying the default IP CIDRs to avoid IP address conflicts with cluster-a. You can change the IP addresses block and prefix based on your requirements. 
For more information on IPv4 CIDR conversion, please check this page.\nIn this example, we will use the following IP ranges:\n Pod CIDR Service CIDR 10.132.0.0/14 172.31.0.0/16 openshift-install create install-config --dir cluster-b Change the Pod network CIDR from 10.128.0.0/14 to 10.132.0.0/14:\nsed -i \u0026#39;s/10.128.0.0/10.132.0.0/g\u0026#39; cluster-b/install-config.yaml Change the Service network CIDR from 172.30.0.0/16 to 172.31.0.0/16:\nsed -i \u0026#39;s/172.30.0.0/172.31.0.0/g\u0026#39; cluster-b/install-config.yaml And finally deploy the cluster:\nopenshift-install create cluster --dir cluster-b When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.\nInstall subctl Download the subctl binary and make it available on your PATH.\ncurl -Ls https://get.submariner.io | bash export PATH=$PATH:~/.local/bin echo export PATH=\\$PATH:~/.local/bin \u0026gt;\u0026gt; ~/.profile If you have Go and the source code, you can build and install subctl instead:\ncd go/src/submariner-io/subctl go build -o $GOBIN/subctl github.com/submariner-io/subctl/cmd (and ensure your go/bin directory is on your PATH).\nPrepare AWS Cluster for Submariner Submariner Gateway nodes need to be able to accept traffic over UDP ports (4500 and 4490 by default). Submariner also uses UDP port 4800 to encapsulate traffic from the worker and master nodes to the Gateway nodes, and TCP port 8080 to retrieve metrics from the Gateway nodes. Additionally, the default OpenShift deployment does not allow assigning an elastic public IP to existing worker nodes, which may be necessary on one end of the tunnel connection.\nsubctl cloud prepare is a command designed to update your OpenShift installer provisioned infrastructure for Submariner deployments, handling the requirements specified above.\nThe default EC2 instance type for the Submariner gateway node is c5d.large, optimized for better CPU which is found to be a bottleneck for IPsec and Wireguard drivers. Please ensure that the AWS Region you deploy to supports this instance type. Alternatively, you can choose to deploy using a different instance type.\n Prepare OpenShift-on-AWS cluster-b for Submariner:\nexport KUBECONFIG=cluster-b/auth/kubeconfig subctl cloud prepare aws --ocp-metadata path/to/cluster-b/metadata.json --natt-port 4501 Note that certain parameters, such as the tunnel UDP port and AWS instance type for the gateway, can be customized. 
For example:\nsubctl cloud prepare aws --ocp-metadata path/to/metadata.json --natt-port 4501 --gateway-instance m4.xlarge Submariner can be deployed in HA mode by setting the gateways flag:\nsubctl cloud prepare aws --ocp-metadata path/to/metadata.json --gateways 3 Install Submariner with Service Discovery To install Submariner with multi-cluster service discovery, follow the steps below:\nUse cluster-b (AWS) as Broker with Service Discovery enabled subctl deploy-broker --kubeconfig cluster-b/auth/kubeconfig Join cluster-b (AWS) and cluster-a (vSphere) to the Broker subctl join --kubeconfig cluster-b/auth/kubeconfig broker-info.subm --nattport 4501 subctl join --kubeconfig cluster-a/auth/kubeconfig broker-info.subm --nattport 4501 Verify Deployment To manually verify the deployment, follow the steps below using either a headless or ClusterIP nginx service deployed in cluster-b.\nDeploy ClusterIP Service export KUBECONFIG=cluster-b/auth/kubeconfig kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine kubectl -n default expose deployment nginx --port=8080 subctl export service --namespace default nginx Deploy Headless Service export KUBECONFIG=cluster-b/auth/kubeconfig kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine kubectl -n default expose deployment nginx --port=8080 --cluster-ip=None subctl export service --namespace default nginx Verify Run nettest from cluster-a to access the nginx service:\nexport KUBECONFIG=cluster-a/auth/kubeconfig kubectl -n default run tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- /bin/bash curl nginx.default.svc.clusterset.local:8080 To access a Service in a specific cluster, prefix the query with \u0026lt;cluster-id\u0026gt; as follows:\ncurl cluster-b.nginx.default.svc.clusterset.local:8080 Verify StatefulSets A StatefulSet uses a headless Service. Create a web.yaml as follows:\napiVersion: v1 kind: Service metadata: name: nginx-ss labels: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss spec: ports: - port: 80 name: web clusterIP: None selector: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss --- apiVersion: apps/v1 kind: StatefulSet metadata: name: web spec: serviceName: \u0026#34;nginx-ss\u0026#34; replicas: 2 selector: matchLabels: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss template: metadata: labels: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss spec: containers: - name: nginx-ss image: nginxinc/nginx-unprivileged:stable-alpine ports: - containerPort: 80 name: web Use this yaml to create a StatefulSet web with nginx-ss as the Headless Service.\nexport KUBECONFIG=cluster-a/auth/kubeconfig kubectl -n default apply -f web.yaml subctl export service -n default nginx-ss curl nginx-ss.default.svc.clusterset.local:8080 To access the Service in a specific cluster, prefix the query with \u0026lt;cluster-id\u0026gt;:\ncurl cluster-a.nginx-ss.default.svc.clusterset.local:8080 To access an individual pod in a specific cluster, prefix the query with \u0026lt;pod-hostname\u0026gt;.\u0026lt;cluster-id\u0026gt;:\ncurl web-0.cluster-a.nginx-ss.default.svc.clusterset.local:8080 Perform automated verification The contexts on both config files are named admin and need to be modified before running the verify command. 
Here is how this can be done using yq:\nyq e -i \u0026#39;.contexts[0].name = \u0026#34;cluster-a\u0026#34; | .current-context = \u0026#34;cluster-a\u0026#34;\u0026#39; cluster-a/auth/kubeconfig yq e -i \u0026#39;.contexts[0].context.user = \u0026#34;admin-a\u0026#34; | .users[0].name = \u0026#34;admin-a\u0026#34;\u0026#39; cluster-a/auth/kubeconfig yq e -i \u0026#39;.contexts[0].name = \u0026#34;cluster-b\u0026#34; | .current-context = \u0026#34;cluster-b\u0026#34;\u0026#39; cluster-b/auth/kubeconfig yq e -i \u0026#39;.contexts[0].context.user = \u0026#34;admin-b\u0026#34; | .users[0].name = \u0026#34;admin-b\u0026#34;\u0026#39; cluster-b/auth/kubeconfig (if you’re using yq 4.18.1 or later, you can use yq -i instead of yq e -i).\nMore generally, see the Kubernetes documentation on accessing multiple clusters using configuration files.\nThis will perform automated verifications between the clusters.\nexport KUBECONFIG=cluster-a/auth/kubeconfig:cluster-b/auth/kubeconfig subctl verify --context cluster-a --tocontext cluster-b --only service-discovery,connectivity --verbose "
},
{
"uri": "/development/",
"title": "Development",
"tags": [],
"description": "",
"content": " Backports Building and Testing CI/CD Maintenance Code Review Guide Contributing to the Project Contributing to the Website Docs Style Guide Licenses Release Process Security Security Reporting Container Requirements Working with Shipyard Customizing Deployments Shared Targets Image Related Targets Help Wanted If you\u0026rsquo;d like to get involved and haven\u0026rsquo;t already found something to work on, check the GitHub Issues tagged \u0026ldquo;help wanted\u0026rdquo;.\nSubmariner\u0026rsquo;s success depends on growing the set of contributors to the project. Welcoming new contributors is a top priority of the project.\n"
},
{
"uri": "/development/security/reporting/",
"title": "Security Reporting",
"tags": [],
"description": "",
"content": "Submariner welcomes and appreciates responsible disclosure of security vulnerabilities.\nIf you know of a security issue with Submariner, please report it to [email protected]. Submariner Project Owners receive security disclosures by default. They may share disclosures with others as required to make and propagate fixes.\nSubmariner aspires to follow the Kubernetes security reporting process, but is far too small of a project to implement those practices. Where applicable, Submariner will follow the principles of the Kubernetes process.\n"
},
{
"uri": "/getting-started/quickstart/external/",
"title": "External Network (Experimental)",
"tags": [],
"description": "",
"content": "This guide covers how to set up Submariner for the external network use case. In this use case, pods running in a Kubernetes cluster can access external applications outside of the cluster and vice versa by using DNS resolution supported by Lighthouse or manually using the Globalnet ingress IPs. In addition to providing connectivity, the source IP of traffic is also preserved.\nPrerequisites Prepare:\n Two or more Kubernetes clusters One or more non-cluster hosts that exist in the same network segment to one of the Kubernetes clusters In this guide, we will use the following Kubernetes clusters and non-cluster host.\n Name IP Description cluster-a 192.168.122.26 Single-node cluster cluster-b 192.168.122.27 Single-node cluster test-vm 192.168.122.142 Linux host In this example, everything is deployed in the 192.168.122.0/24 segment. However, it is only required that cluster-a and test-vm are in the same segment. Other clusters, cluster-b and any additional clusters, can be deployed in different segments or even in any other networks in the internet. Also, clusters can be multi-node clusters.\nSubnets of non-cluster hosts should be distinguished from those of the clusters to easily specify the external network CIDR. In this example, cluster-a and cluster-b belong to 192.168.122.0/25 and test-vm belongs to 192.168.122.128/25. Therefore, the external network CIDR for this configuration is 192.168.122.128/25. In test environments with just one host, an external network CIDR like 192.168.122.142/32 can be specified. However, design of the subnets need to be considered when more hosts are used.\n Choose the Pod CIDR and the Service CIDR for Kubernetes clusters and deply them.\nIn this guide, we will use the following CIDRs:\n Cluster Pod CIDR Service CIDR cluster-a 10.42.0.0/24 10.43.0.0/16 cluster-b 10.42.0.0/24 10.43.0.0/16 Note that we will use Globalnet in this guide, therefore overlapping CIDRs are supported.\n In this configuration, global IPs are used to access between the gateway node and non-cluster hosts, which means packets are sent to IP addresses that are not part of the actual network segment. To make such packets not to be dropped, anti-spoofing rules need to be disabled for the hosts and the gateway node.\n Setup Submariner Ensure kubeconfig files Ensure that kubeconfig files for both clusters are available. This guide assumes cluster-a\u0026rsquo;s kubeconfig file is named kubeconfig.cluster-a and cluster-b\u0026rsquo;s is named kubeconfig.cluster-b.\nInstall subctl Download the subctl binary and make it available on your PATH.\ncurl -Ls https://get.submariner.io | bash export PATH=$PATH:~/.local/bin echo export PATH=\\$PATH:~/.local/bin \u0026gt;\u0026gt; ~/.profile If you have Go and the source code, you can build and install subctl instead:\ncd go/src/submariner-io/subctl go build -o $GOBIN/subctl github.com/submariner-io/subctl/cmd (and ensure your go/bin directory is on your PATH).\nUse cluster-a as the Broker with Globalnet enabled subctl deploy-broker --kubeconfig kubeconfig.cluster-a --globalnet Label gateway nodes When Submariner joins a cluster to the broker via the subctl join command, it chooses a node on which to install the gateway by labeling it appropriately. By default, Submariner uses a worker node for the gateway; if there are no worker nodes, then no gateway is installed unless a node is manually labeled as a gateway. Since we are deploying all-in-one nodes, there are no worker nodes, so it is necessary to label the single node as a gateway. 
By default, the node name is the hostname. In this example, the hostnames are \u0026ldquo;cluster-a\u0026rdquo; and \u0026ldquo;cluster-b\u0026rdquo;, respectively.\nExecute the following on cluster-a:\nkubectl label node cluster-a submariner.io/gateway=true Execute the following on cluster-b:\nkubectl label node cluster-b submariner.io/gateway=true Join cluster-a to the Broker with external CIDR added as cluster CIDR Carefully review the CLUSTER_CIDR and EXTERNAL_CIDR and run:\nCLUSTER_CIDR=10.42.0.0/24 EXTERNAL_CIDR=192.168.122.128/25 subctl join --kubeconfig kubeconfig.cluster-a broker-info.subm --clusterid cluster-a --natt=false --clustercidr=${CLUSTER_CIDR},${EXTERNAL_CIDR} Join cluster-b to the Broker subctl join --kubeconfig kubeconfig.cluster-b broker-info.subm --clusterid cluster-b --natt=false Deploy DNS server on cluster-a for non-cluster hosts Create a list of upstream DNS servers as upstreamservers:\nNote that dnsip is the IP of the DNS server for the test-vm, which is defined as nameserver in /etc/resolv.conf.\ndnsip=192.168.122.1 lighthousednsip=$(kubectl get svc --kubeconfig kubeconfig.cluster-a -n submariner-operator submariner-lighthouse-coredns -o jsonpath=\u0026#39;{.spec.clusterIP}\u0026#39;) cat \u0026lt;\u0026lt; EOF \u0026gt; upstreamservers server=/svc.clusterset.local/$lighthousednsip server=$dnsip EOF Create a ConfigMap from the list:\nexport KUBECONFIG=kubeconfig.cluster-a kubectl create configmap external-dnsmasq -n submariner-operator --from-file=upstreamservers Create a dns.yaml as follows:\napiVersion: apps/v1 kind: Deployment metadata: name: external-dns-cluster-a namespace: submariner-operator labels: app: external-dns-cluster-a spec: replicas: 1 selector: matchLabels: app: external-dns-cluster-a template: metadata: labels: app: external-dns-cluster-a spec: containers: - name: dnsmasq image: registry.access.redhat.com/ubi8/ubi-minimal:latest ports: - containerPort: 53 command: [ \u0026#34;/bin/sh\u0026#34;, \u0026#34;-c\u0026#34;, \u0026#34;microdnf install -y dnsmasq; ln -s /upstreamservers /etc/dnsmasq.d/upstreamservers; dnsmasq -k\u0026#34; ] securityContext: capabilities: add: [\u0026#34;NET_ADMIN\u0026#34;] volumeMounts: - name: upstreamservers mountPath: /upstreamservers volumes: - name: upstreamservers configMap: name: external-dnsmasq --- apiVersion: v1 kind: Service metadata: namespace: submariner-operator name: external-dns-cluster-a spec: ports: - name: udp port: 53 protocol: UDP targetPort: 53 selector: app: external-dns-cluster-a Use this YAML to create the DNS server, and assign a global ingress IP:\nkubectl apply -f dns.yaml subctl export service -n submariner-operator external-dns-cluster-a Check the global ingress IP:\nkubectl --kubeconfig kubeconfig.cluster-a get globalingressip external-dns-cluster-a -n submariner-operator NAME IP external-dns-cluster-a 242.0.255.251 Set up non-cluster hosts Modify routing for the global CIDR on test-vm:\nNote that subm_gw_ip is the gateway node IP of the cluster in the same network segment as the hosts. In the example in this guide, it is the node IP of cluster-a. Also, 242.0.0.0/8 is the default globalCIDR.\nsubm_gw_ip=192.168.122.26 ip r add 242.0.0.0/8 via ${subm_gw_ip} To persist the above configuration across reboots, check the documentation for your Linux distribution. 
For example, on CentOS 7, to set a persistent route for eth0, the below command is required:\necho \u0026#34;242.0.0.0/8 via ${subm_gw_ip} dev eth0\u0026#34; \u0026gt;\u0026gt; /etc/sysconfig/network-scripts/route-eth0 Modify /etc/resolv.conf to change the DNS server for the host on test-vm. For example:\n Before: nameserver 192.168.122.1 After: nameserver 242.0.255.251 Check that the DNS server itself can be resolved:\nnslookup external-dns-cluster-a.submariner-operator.svc.clusterset.local Server: 242.0.255.251 Address: 242.0.255.251#53 Name: external-dns-cluster-a.submariner-operator.svc.clusterset.local Address: 10.43.162.46 Verify Deployment Verify Manually Deploy HTTP server on hosts Run on test-vm:\n# Python 2.x: python -m SimpleHTTPServer 80 # Python 3.x: python -m http.server 80 Verify access to External hosts from clusters Create a headless Service without selector, Endpoints, and ServiceExport to access the test-vm from cluster-a:\nNote that Endpoints.subsets.addresses needs to be modified to the IP of test-vm.\nexport KUBECONFIG=kubeconfig.cluster-a cat \u0026lt;\u0026lt; EOF | kubectl apply -f - apiVersion: v1 kind: Service metadata: name: test-vm spec: ports: - protocol: TCP port: 80 targetPort: 80 clusterIP: None EOF cat \u0026lt;\u0026lt; EOF | kubectl apply -f - apiVersion: v1 kind: Endpoints metadata: name: test-vm subsets: - addresses: - ip: 192.168.122.142 hostname: \u0026#34;web0\u0026#34; ports: - port: 80 name: \u0026#34;web\u0026#34; EOF subctl export service -n default test-vm subsets.addresses[*].hostname and subsets.ports[*].name in Endpoints must be specified. Otherwise, the corresponding globalingressip and endpointslice won\u0026rsquo;t be created.\n Check the global ingress IP for test-vm, on cluster-a:\nkubectl get globalingressip NAME IP ep-test-vm-192.168.122.142 242.0.255.253 Verify access to test-vm from clusters:\nexport KUBECONFIG=kubeconfig.cluster-a kubectl -n default run tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- bash curl web0.cluster-a.test-vm.default.svc.clusterset.local curl 242.0.255.253 export KUBECONFIG=kubeconfig.cluster-b kubectl -n default run tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- bash curl web0.cluster-a.test-vm.default.svc.clusterset.local curl 242.0.255.253 On test-vm, check the console log of the HTTP server to confirm there are accesses from pods. Source IPs for these accesses will be one of the global egress IPs for the cluster.\nVerify access to Deployment from non-cluster hosts Create a Deployment in cluster-b:\nexport KUBECONFIG=kubeconfig.cluster-b kubectl -n default create deployment nginx --image=registry.k8s.io/nginx-slim:0.8 kubectl -n default expose deployment nginx --port=80 subctl export service --namespace default nginx From test-vm, verify access:\ncurl nginx.default.svc.clusterset.local Check the console log of the HTTP server to confirm there is access from test-vm:\nkubectl logs -l app=nginx The source IP for the access will be the global ingress IP of the endpoint for the test-vm.\nVerify access to Statefulset from non-cluster hosts A StatefulSet uses a headless Service. 
Create a web.yaml file as follows:\napiVersion: v1 kind: Service metadata: name: nginx-ss labels: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss spec: ports: - port: 80 name: web clusterIP: None selector: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss --- apiVersion: apps/v1 kind: StatefulSet metadata: name: web spec: serviceName: \u0026#34;nginx-ss\u0026#34; replicas: 2 selector: matchLabels: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss template: metadata: labels: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss spec: containers: - name: nginx-ss image: registry.k8s.io/nginx-slim:0.8 ports: - containerPort: 80 name: web Apply the above YAML to create a web StatefulSet with nginx-ss as the headless service:\nexport KUBECONFIG=kubeconfig.cluster-b kubectl -n default apply -f web.yaml subctl export service -n default nginx-ss From test-vm, verify access:\ncurl nginx-ss.default.svc.clusterset.local curl cluster-b.nginx-ss.default.svc.clusterset.local curl web-0.cluster-b.nginx-ss.default.svc.clusterset.local curl web-1.cluster-b.nginx-ss.default.svc.clusterset.local Check the console log of the HTTP server to verify there are accesses from test-vm:\nkubectl logs web-0 kubectl logs web-1 Verify source IP of the access from Statefulset Confirm the global egress IPs for each pod managed by Statefulset:\n From Cluster: export KUBECONFIG=kubeconfig.cluster-b kubectl get globalingressip | grep web pod-web-0 242.1.255.251 pod-web-1 242.1.255.250 From Hosts: nslookup web-0.cluster-b.nginx-ss.default.svc.clusterset.local Server: 242.0.255.251 Address: 242.0.255.251#53 Name: web-0.cluster-b.nginx-ss.default.svc.clusterset.local Address: 242.1.255.251 nslookup web-1.cluster-b.nginx-ss.default.svc.clusterset.local Server: 242.0.255.251 Address: 242.0.255.251#53 Name: web-1.cluster-b.nginx-ss.default.svc.clusterset.local Address: 242.1.255.250 Verify the source IP of each access from each pod to test-vm is the same to its global egress IP:\n Access from web-0 export KUBECONFIG=kubeconfig.cluster-b kubectl exec -it web-0 -- bash curl web0.cluster-a.test-vm.default.svc.clusterset.local curl 242.0.255.253 exit Access from web-1 export KUBECONFIG=kubeconfig.cluster-b kubectl exec -it web-1 -- bash curl web0.cluster-a.test-vm.default.svc.clusterset.local curl 242.0.255.253 exit Check the console log in test-vm "
},
{
"uri": "/operations/cleanup/",
"title": "Uninstalling Submariner",
"tags": [],
"description": "",
"content": "Starting with Submariner 0.12, the recommended way to uninstall Submariner is via the subctl uninstall command. This will automatically remove Submariner and its components from a given cluster. For previous versions, Submariner would need to be uninstalled manually.\nAutomated Uninstall Issue the subctl uninstall command against the cluster you want to uninstall Submariner from. Example output:\n$ subctl uninstall --kubeconfig output/kubeconfigs/cluster1 ? This will completely uninstall Submariner from the cluster cluster1. Are you sure you want to continue? Yes ✓ Checking if the connectivity component is installed ✓ The connectivity component is installed ✓ Deleting the Submariner resource - this may take some time ✓ Deleting the Submariner cluster roles and bindings ✓ Deleted the \u0026#34;submariner-diagnose\u0026#34; cluster role and binding ✓ Deleted the \u0026#34;submariner-gateway\u0026#34; cluster role and binding ✓ Deleted the \u0026#34;submariner-globalnet\u0026#34; cluster role and binding ✓ Deleted the \u0026#34;submariner-lighthouse-agent\u0026#34; cluster role and binding ✓ Deleted the \u0026#34;submariner-lighthouse-coredns\u0026#34; cluster role and binding ✓ Deleted the \u0026#34;submariner-networkplugin-syncer\u0026#34; cluster role and binding ✓ Deleted the \u0026#34;submariner-operator\u0026#34; cluster role and binding ✓ Deleted the \u0026#34;submariner-routeagent\u0026#34; cluster role and binding ✓ Deleting the Submariner namespace \u0026#34;submariner-operator\u0026#34; ✓ Deleting the broker namespace \u0026#34;submariner-k8s-broker\u0026#34; ✓ Deleting the Submariner custom resource definitions ✓ Deleted the \u0026#34;brokers.submariner.io\u0026#34; custom resource definition ✓ Deleted the \u0026#34;clusterglobalegressips.submariner.io\u0026#34; custom resource definition ✓ Deleted the \u0026#34;clusters.submariner.io\u0026#34; custom resource definition ✓ Deleted the \u0026#34;endpoints.submariner.io\u0026#34; custom resource definition ✓ Deleted the \u0026#34;gateways.submariner.io\u0026#34; custom resource definition ✓ Deleted the \u0026#34;globalegressips.submariner.io\u0026#34; custom resource definition ✓ Deleted the \u0026#34;globalingressips.submariner.io\u0026#34; custom resource definition ✓ Deleted the \u0026#34;servicediscoveries.submariner.io\u0026#34; custom resource definition ✓ Deleted the \u0026#34;submariners.submariner.io\u0026#34; custom resource definition ✓ Unlabeling gateway nodes Manual Uninstall To manually uninstall Submariner from a cluster, follow the steps below:\nMake sure KUBECONFIG for all participating clusters is exported and all participating clusters are accessible via kubectl.\n Delete Submariner-related namespaces\nFor each participating cluster, issue the following command:\nkubectl delete namespace submariner-operator For the Broker cluster, issue the following command:\nkubectl delete namespace submariner-k8s-broker For submariner version 0.9 and above, also delete submariner-operator namespace from the Broker cluster by issuing the following command:\nkubectl delete namespace submariner-operator Delete the Submariner CRDs\nFor each participating cluster, issue the following command:\nfor CRD in `kubectl get crds | grep -iE \u0026#39;submariner|multicluster.x-k8s.io\u0026#39;| awk \u0026#39;{print $1}\u0026#39;`; do kubectl delete crd $CRD; done Delete Submariner\u0026rsquo;s ClusterRoles and ClusterRoleBindings\nFor each participating cluster, issue the following command:\nroles=\u0026#34;submariner-operator 
submariner-operator-globalnet submariner-lighthouse submariner-networkplugin-syncer\u0026#34; kubectl delete clusterrole,clusterrolebinding $roles --ignore-not-found Remove the Submariner gateway labels\nFor each participating cluster, issue the following command:\nkubectl label --all node submariner.io/gateway- For OpenShift deployments, delete the Lighthouse entry from the default DNS.\nFor each participating cluster, issue the following command:\nkubectl apply -f - \u0026lt;\u0026lt;EOF apiVersion: operator.openshift.io/v1 kind: DNS metadata: finalizers: - dns.operator.openshift.io/dns-controller name: default spec: servers: [] EOF This deletes the Lighthouse entry from the Data section in the Corefile of the ConfigMap.\n#lighthouse-start AUTO-GENERATED SECTION. DO NOT EDIT clusterset.local:53 { forward . 100.3.185.93 } #lighthouse-end Verify that the Lighthouse entry is deleted from the Corefile of the dns-default ConfigMap by running the following command on an OpenShift cluster:\nkubectl describe configmap dns-default -n openshift-dns For Kubernetes deployments, manually edit the Corefile of the coredns ConfigMap and delete the Lighthouse entry by issuing the below command:\nkubectl edit cm coredns -n kube-system This will also restart CoreDNS. The below command can also be issued to manually restart CoreDNS:\nkubectl rollout restart -n kube-system deployment/coredns Verify that the Lighthouse entry is deleted from the Data section in the Corefile of the coredns ConfigMap by running the following command on a Kubernetes cluster:\nkubectl describe configmap coredns -n kube-system The following commands need to be executed from inside the cluster nodes.\n Remove Submariner\u0026rsquo;s iptables chains\nOn all nodes in each participating cluster, issue the following commands:\niptables --flush SUBMARINER-INPUT iptables -D INPUT $(iptables -L INPUT --line-numbers | grep SUBMARINER-INPUT | awk \u0026#39;{print $1}\u0026#39;) iptables --delete-chain SUBMARINER-INPUT iptables -t nat --flush SUBMARINER-POSTROUTING iptables -t nat -D POSTROUTING $(iptables -t nat -L POSTROUTING --line-numbers | grep SUBMARINER-POSTROUTING | awk \u0026#39;{print $1}\u0026#39;) iptables -t nat --delete-chain SUBMARINER-POSTROUTING iptables -t mangle --flush SUBMARINER-POSTROUTING iptables -t mangle -D POSTROUTING $(iptables -t mangle -L POSTROUTING --line-numbers | grep SUBMARINER-POSTROUTING | awk \u0026#39;{print $1}\u0026#39;) iptables -t mangle --delete-chain SUBMARINER-POSTROUTING ipset destroy SUBMARINER-LOCALCIDRS ipset destroy SUBMARINER-REMOTECIDRS If Globalnet is enabled in the setup, additionally issue the following commands on gateway nodes:\niptables -t nat --flush SUBMARINER-GN-INGRESS iptables -t nat -D PREROUTING $(iptables -t nat -L PREROUTING --line-numbers | grep SUBMARINER-GN-INGRESS | awk \u0026#39;{print $1}\u0026#39;) iptables -t nat --delete-chain SUBMARINER-GN-INGRESS iptables -t nat --flush SUBMARINER-GN-EGRESS iptables -t nat --delete-chain SUBMARINER-GN-EGRESS iptables -t nat --flush SUBMARINER-GN-MARK iptables -t nat --delete-chain SUBMARINER-GN-MARK Delete the vx-submariner interface\nOn all nodes in each participating cluster, issue the following command:\nip link delete vx-submariner If Globalnet release 0.9 (or earlier) is enabled in the setup, issue the following commands to remove the annotations from all the Pods and Services.\nFor each participating cluster, issue the following command:\nfor ns in `kubectl get ns -o jsonpath=\u0026#34;{.items[*].metadata.name}\u0026#34;`; do kubectl annotate pods -n $ns --all
submariner.io/globalIp- kubectl annotate services -n $ns --all submariner.io/globalIp- done "
},
{
"uri": "/other-resources/",
"title": "Other Resources",
"tags": [],
"description": "",
"content": "This page catalogs content documenting Submariner elsewhere on the web.\nConference Presentations Hybrid K8s Environments with Submariner, Kubernetes Community Days Washington DC (2022-02) Connectivity Between Legacy Systems and Kubernetes: Identifying Senders By Using Source IPs, Open Source Summit Japan (2021-12) (slides) Here Be Services: Beyond the Cluster Boundary with Multicluster Services, KubeCon NA (2021-10) Multi-Cluster Service Deployments with Operators and KubeCarrier, KubeCon EU (2021-05) ODCN’s Journey to Connecting OpenShift Clusters Securely and Transparently with Submariner, OpenShift Commons at KubeCon EU (2021-05) Connecting Kubernetes Clusters with Submariner, DevConf.CZ (2021-03) Multicluster Network Connectivity Submariner, Computing on the Edge with Kubernetes (2020-10) Hybrid Cloud and Multicluster Service Discovery, KubeCon China (2019-07) (slides) Solving Multicluster Network Connectivty with Submariner, KubeCon North America (2019-11) (slides) Demo Recordings Automated Disaster Recovery failover and failback with Red Hat OpenShift (2022-01) Submariner in 60s (2020-05) Connecting hybrid Kubernetes clusters using Submariner (2020-03) Cross-cluster service discovery in Submariner using Lighthouse (2020-03) Deploying Submariner with subctl (2019-12) Blogs Connecting K8S/Cilium cluster and K8S/Calico cluster using Submariner (2024-11) Embracing the Open Hybrid Multi-Cloud connecting overlay networking from ARO and ROSA clusters (2023-05) Connecting overlay networks of ROSA clusters with Submariner (2023-04) How to enable cross-cluster networking in Kubernetes with the Submariner add-on (2023-03) Connect AWS EKS Clusters with Submariner (2023-02) A Guide to Cluster Landing Zones for Hybrid and Multi-cloud Architectures (Part 2) (2022-11) A Guide to Cluster Landing Zones for Hybrid and Multi-cloud Architectures (2022-10) Orchestrating Multi-Region Apps with Red Hat Advanced Cluster Management and Submariner (2022-08) Set up an Istio Multicluster Service Mesh with Submariner in Red Hat Advanced Cluster Management for Kubernetes (2022-02) Set up Istio Multicluster with Submariner in Red Hat Advanced Cluster Management for Kubernetes (2021-10) Multi-Cluster monitoring with Prometheus and Submariner (2021-09) Connecting stateful applications in multicluster environments with RHACM and Submariner (2021-04) Deep Dive with RHACM and Submariner - Connecting multicluster overlay networks (2021-04) Connecting managed clusters with Submariner in Red Hat Advanced Cluster Management for Kubernetes (2021-04) Geographically Distributed Stateful Workloads Part One: Cluster Preparation (2020-11) Geographically Distributed Stateful Workloads Part Two: CockroachDB (2020-11) Geographically Distributed Stateful Workloads Part Three: Keycloak (2021-06) Geographically Distributed Stateful Workloads Part Four: Kafka (2021-08) Geographically Distributed Stateful Workloads Part Five: YugabyteDB (2021-09) Multicluster Service Discovery in OpenShift with Submariner and Lighthouse (Part 1) (2020-08) Multicluster Service Discovery in OpenShift with Submariner and Lighthouse (Part 2) (2020-08) Kubernetes Multi-Cloud and Multi-Cluster Connectivity with Submariner (2020-02) Podcasts/Streams The Cloud Multiplier Ep. 3: Connecting Your Cloud with Submariner (2022-06) Ask an OpenShift Admin (Ep 55): Disaster recovery with ODF and ACM (2022-01) Kubelist Podcast Ep. 
#18, Submariner (2021-08) The Cockroach Hour: Distributed Databases | Red Hat OpenShift \u0026amp; Kubernetes (2021-03) SIG Discussions Spotlight on SIG Multicluster (2022-02) CNCF SIG-Network introduction to Submariner and consideration for CNCF Sandbox donation (2021-03-18) K8s SIG-Multicluster demo of Submariner\u0026rsquo;s KEP1645 Multicluster Services implementation (2020-09-22) K8s SIG-Multicluster demo of Submariner\u0026rsquo;s multicluster networking deployed by Submariner\u0026rsquo;s Operator and subctl (2019-12-17) Academic Papers Evaluating the Impact of Inter-cluster Communications in Edge Computing (2024-09) Kubernetes and the Edge? (2020-10) If you find additional material that isn\u0026rsquo;t listed here, please feel free to add it to this page by editing it. The website contributing guide is here.\n"
},
{
"uri": "/getting-started/quickstart/openshift/rhos/",
"title": "Hybrid OpenStack and AWS",
"tags": [],
"description": "",
"content": "This quickstart guide covers the necessary steps to deploy two OpenShift Container Platform (OCP) clusters: one on AWS and the other one on OpeStack, both with full stack automation, also known as installer-provisioned infrastructure (IPI). Once the OpenShift clusters are deployed, we deploy Submariner with Service Discovery to interconnect the two clusters. Note that this guide focuses on Submariner deployment on clusters with non-overlapping Pod and Service CIDRs. For connecting clusters with overlapping CIDRs, please refer to the Submariner with Globalnet guide.\nPrerequisites Before we begin, the following tools need to be downloaded and added to your $PATH:\n OpenShift installer, pull secret, and oc CLI. OpenStack CLI. AWS CLI. Please ensure that the tools you downloaded above are compatible with your OpenShift Container Platform version. For more information, please refer to the official OpenShift documentation.\n cluster-a on AWS Setup Your AWS Profile Configure the AWS CLI with the settings required to interact with AWS. These include your security credentials, the default AWS Region, and the default output format:\n$ aws configure AWS Access Key ID [None]: .... AWS Secret Access Key [None]: .... Default region name [None]: .... Default output format [None]: text Create and Deploy cluster-a In this step you will deploy cluster-a in AWS (or any other public cloud can be used) using the default IP CIDR ranges:\n Pod CIDR Service CIDR 10.128.0.0/14 172.30.0.0/16 openshift-install create install-config --dir cluster-a openshift-install create cluster --dir cluster-a When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, display in your terminal.\ncluster-b on OpenStack (On-Prem) Setup Your OpenStack Profile Configure the OpenStack credentials for the command line client. Please refer to the official OpenStack documentation for detailed instructions.\nCreate and Deploy cluster-b In this step you will deploy cluster-b, modifying the default IP CIDRs to avoid IP address conflicts with cluster-a. You can change the IP addresses block and prefix based on your requirements. 
You may want to check your IP ranges with a CIDR calculator.\nIn this example, we will use the following IP ranges:\n Pod CIDR Service CIDR 10.132.0.0/14 172.31.0.0/16 openshift-install create install-config --dir cluster-b Change the Pod network CIDR from 10.128.0.0/14 to 10.132.0.0/14:\nsed -i \u0026#39;s/10.128.0.0/10.132.0.0/g\u0026#39; cluster-b/install-config.yaml Change the Service network CIDR from 172.30.0.0/16 to 172.31.0.0/16:\nsed -i \u0026#39;s/172.30.0.0/172.31.0.0/g\u0026#39; cluster-b/install-config.yaml And finally deploy the cluster:\nopenshift-install create cluster --dir cluster-b When the cluster deployment completes, directions for accessing your cluster, including a link to its web console and credentials for the kubeadmin user, will be displayed in your terminal.\nInstall subctl Download the subctl binary and make it available on your PATH.\ncurl -Ls https://get.submariner.io | bash export PATH=$PATH:~/.local/bin echo export PATH=\\$PATH:~/.local/bin \u0026gt;\u0026gt; ~/.profile If you have Go and the source code, you can build and install subctl instead:\ncd go/src/submariner-io/subctl go build -o $GOBIN/subctl github.com/submariner-io/subctl/cmd (and ensure your go/bin directory is on your PATH).\nPrepare OpenStack and AWS Clusters for Submariner Submariner Gateway nodes need to be able to accept traffic over UDP ports (4500 and 4490 by default). Submariner also uses UDP port 4800 to encapsulate traffic from the worker and master nodes to the Gateway nodes, and TCP port 8080 to retrieve metrics from the Gateway nodes. Additionally, the default OpenShift deployment does not allow assigning an elastic public IP to existing worker nodes, which may be necessary on one end of the tunnel connection.\nsubctl cloud prepare is a command designed to update your OpenShift installer provisioned infrastructure for Submariner deployments, handling the requirements specified above.\nPrepare OpenShift-on-AWS cluster-a for Submariner The default EC2 instance type for the Submariner gateway node is c5d.large, chosen for its CPU performance, since CPU is typically the bottleneck for the IPsec and WireGuard drivers. Alternatively, you can choose to deploy using a different instance type.\n export KUBECONFIG=cluster-a/auth/kubeconfig subctl cloud prepare aws --ocp-metadata path/to/cluster-a/metadata.json --natt-port 4747 Prepare OpenShift-on-OpenStack cluster-b for Submariner The default OpenStack compute instance type for the Submariner gateway node is PnTAE.CPU_4_Memory_8192_Disk_50. Alternatively, you can choose to deploy using a different instance type. 
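If you do want a different flavor for the gateway node, an instance type override can be added to the prepare command shown below; the --gateway-instance flag used here is an assumption about subctl\u0026rsquo;s cloud prepare options and may differ between versions, so verify it with subctl cloud prepare rhos --help:\nsubctl cloud prepare rhos --ocp-metadata path/to/cluster-b/metadata.json --cloud-entry openstack --gateway-instance \u0026lt;flavor-name\u0026gt; --natt-port 4747 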
Make sure you use the appropriate cloud name from clouds.yaml; here, openstack is used.\n export KUBECONFIG=cluster-b/auth/kubeconfig subctl cloud prepare rhos --ocp-metadata path/to/cluster-b/metadata.json --cloud-entry openstack --natt-port 4747 Install Submariner with Service Discovery To install Submariner with multi-cluster Service Discovery, follow the steps below:\nUse cluster-a as Broker subctl deploy-broker --kubeconfig cluster-a/auth/kubeconfig Join cluster-a and cluster-b to the Broker subctl join --kubeconfig cluster-a/auth/kubeconfig broker-info.subm --clusterid cluster-a --nattport 4747 subctl join --kubeconfig cluster-b/auth/kubeconfig broker-info.subm --clusterid cluster-b --nattport 4747 Verify Deployment To manually verify the deployment, follow the steps below using either a headless or ClusterIP nginx service deployed in cluster-b.\nDeploy ClusterIP Service export KUBECONFIG=cluster-b/auth/kubeconfig kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine kubectl -n default expose deployment nginx --port=8080 subctl export service --namespace default nginx Deploy Headless Service export KUBECONFIG=cluster-b/auth/kubeconfig kubectl -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine kubectl -n default expose deployment nginx --port=8080 --cluster-ip=None subctl export service --namespace default nginx Verify Run nettest from cluster-a to access the nginx service:\nexport KUBECONFIG=cluster-a/auth/kubeconfig kubectl -n default run tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- /bin/bash curl nginx.default.svc.clusterset.local:8080 To access a Service in a specific cluster, prefix the query with \u0026lt;cluster-id\u0026gt; as follows:\ncurl cluster-b.nginx.default.svc.clusterset.local:8080 Verify StatefulSets A StatefulSet uses a headless Service. Create a web.yaml as follows:\napiVersion: v1 kind: Service metadata: name: nginx-ss labels: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss spec: ports: - port: 80 name: web clusterIP: None selector: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss --- apiVersion: apps/v1 kind: StatefulSet metadata: name: web spec: serviceName: \u0026#34;nginx-ss\u0026#34; replicas: 2 selector: matchLabels: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss template: metadata: labels: app.kubernetes.io/instance: nginx-ss app.kubernetes.io/name: nginx-ss spec: containers: - name: nginx-ss image: nginxinc/nginx-unprivileged:stable-alpine ports: - containerPort: 80 name: web Use this yaml to create a StatefulSet web with nginx-ss as the Headless Service.\nexport KUBECONFIG=cluster-a/auth/kubeconfig kubectl -n default apply -f web.yaml subctl export service -n default nginx-ss curl nginx-ss.default.svc.clusterset.local:8080 To access the Service in a specific cluster, prefix the query with \u0026lt;cluster-id\u0026gt;:\ncurl cluster-a.nginx-ss.default.svc.clusterset.local:8080 To access an individual pod in a specific cluster, prefix the query with \u0026lt;pod-hostname\u0026gt;.\u0026lt;cluster-id\u0026gt;:\ncurl web-0.cluster-a.nginx-ss.default.svc.clusterset.local:8080 Perform automated verification The contexts on both config files are named admin and need to be modified before running the verify command. 
Here is how this can be done using yq:\nyq e -i \u0026#39;.contexts[0].name = \u0026#34;cluster-a\u0026#34; | .current-context = \u0026#34;cluster-a\u0026#34;\u0026#39; cluster-a/auth/kubeconfig yq e -i \u0026#39;.contexts[0].context.user = \u0026#34;admin-a\u0026#34; | .users[0].name = \u0026#34;admin-a\u0026#34;\u0026#39; cluster-a/auth/kubeconfig yq e -i \u0026#39;.contexts[0].name = \u0026#34;cluster-b\u0026#34; | .current-context = \u0026#34;cluster-b\u0026#34;\u0026#39; cluster-b/auth/kubeconfig yq e -i \u0026#39;.contexts[0].context.user = \u0026#34;admin-b\u0026#34; | .users[0].name = \u0026#34;admin-b\u0026#34;\u0026#39; cluster-b/auth/kubeconfig (if you’re using yq 4.18.1 or later, you can use yq -i instead of yq e -i).\nMore generally, see the Kubernetes documentation on accessing multiple clusters using configuration files.\nThe following will perform automated verification between the clusters.\nexport KUBECONFIG=cluster-a/auth/kubeconfig:cluster-b/auth/kubeconfig subctl verify --context cluster-a --tocontext cluster-b --only service-discovery,connectivity --verbose "
},
{
"uri": "/getting-started/architecture/globalnet/",
"title": "Globalnet Controller",
"tags": [],
"description": "",
"content": "Introduction Submariner is a tool built to connect overlay networks of different Kubernetes clusters. These clusters can be on different public clouds or on-premises. An important use case for Submariner is to connect disparate independent clusters into a ClusterSet.\nHowever, by default, a limitation of Submariner is that it doesn\u0026rsquo;t handle overlapping CIDRs (ServiceCIDR and ClusterCIDR) across clusters. Each cluster must use distinct CIDRs that don\u0026rsquo;t conflict or overlap with any other cluster that is going to be part of the ClusterSet.\nThis is largely problematic because most actual deployments use the default CIDRs for a cluster so every cluster ends up using the same CIDRs. Changing CIDRs on existing clusters is a very disruptive process and requires a cluster restart. So Submariner needs a way to allow clusters with overlapping CIDRs to connect together.\nArchitecture To support overlapping CIDRs in connected clusters, Submariner has a component called Global Private Network, Globalnet (globalnet). This Globalnet is a virtual network specifically to support Submariner\u0026rsquo;s multi-cluster solution with a global CIDR. Each cluster is given a subnet from this virtual Global Private Network, configured as new cluster parameter GlobalCIDR (e.g. 242.0.0.0/8) which is configurable at time of deployment. User can also manually specify GlobalCIDR for each cluster that is joined to the Broker using the flag globalnet-cidr passed to subctl join command. If Globalnet is not enabled in the Broker or if a GlobalCIDR is preconfigured in the cluster, the supplied Globalnet CIDR will be ignored.\nCluster-scope global egress IPs By default, every cluster is assigned a configurable number of global IPs, represented by a ClusterGlobalEgressIP resource, which are used as egress IPs for cross-cluster communication. Multiple IPs are supported to avoid ephemeral port exhaustion issues. The default is 8. The IPs are allocated from a configurable global CIDR. Applications running on the host network that access remote clusters also use the cluster-level global egress IPs.\nNamespace-scope global egress IPs A user can assign a configurable number of global IPs per namespace by creating a GlobalEgressIP resource. These IPs are also allocated from the global CIDR and are used as egress IPs for all or selected pods in the namespace and take precedence over the cluster-level global IPs. In addition, the global IPs allocated for a GlobalEgressIP that targets specific pods in a namespace take precedence over the global IPs allocated for a GlobalEgressIP that just targets the namespace.\nService global ingress IPs Exported ClusterIP type services are automatically allocated a global IP from the global CIDR for ingress. For headless services, each backing pod is allocated a global IP that is used for both ingress and egress. However, if a backing pod matches a GlobalEgressIP then its allocated IPs are used for egress.\nRouting and iptable rules are configured to use the corresponding global IPs for ingress and egress. All address translations occur on the active Gateway node of the cluster.\nsubmariner-globalnet Submariner Globalnet is a component that provides cross-cluster connectivity from pods to remote services using their global IPs. 
Compiled as binary submariner-globalnet, it is responsible for maintaining a pool of global IPs, allocating IPs from the global IP pool to pods and services, and configuring the required rules on the gateway node to provide cross-cluster connectivity using global IPs. Globalnet also supports connectivity from the nodes (including pods that use host networking) to the global IP of remote services. It mainly consists of two key components: the IP Address Manager and Globalnet.\nIP Address Manager (IPAM) The IP Address Manager (IPAM) component does the following:\n Creates a pool of IP addresses based on the GlobalCIDR configured on the cluster. Allocates IPs from the global pool for all ingress and egress, and releases them when no longer needed. Globalnet This component is responsible for programming the routing entries and iptables rules and does the following:\n Creates initial iptables chains for Globalnet rules. For each GlobalEgressIP, creates corresponding SNAT rules to convert the source IPs for all the matching pods to the corresponding global IP(s) allocated to the GlobalEgressIP object. For each exported Service, it internally creates an additional Service with externalIPs, in the same namespace as the exported Service, and sets the externalIPs to the globalIP assigned to the respective Service. Cleans up the rules from the gateway node on the deletion of a Pod, Service, or ServiceExport. Service Discovery - Lighthouse Connectivity is only part of the solution as pods still need to know the IPs of services on remote clusters.\nThis is achieved by enhancing Lighthouse with support for Globalnet. The Lighthouse controller uses a service\u0026rsquo;s global IP when creating the ServiceImport for services of type ClusterIP. For headless services, each backing pod\u0026rsquo;s global IP is used when creating the EndpointSlice resources to be distributed to other clusters. The Lighthouse plugin then uses the global IPs when replying to DNS queries.\nBuilding Nothing extra needs to be done to build submariner-globalnet as it is built with the standard Submariner build.\nPrerequisites Allow the Globalnet controller to create/update/delete Services with externalIPs by following the steps below:\n Disable DenyServiceExternalIPs, if enabled. Restrict the use of the Service with externalIPs: OpenShift: No extra configuration is needed. The default network.openshift.io/ExternalIPRanger validating admission plug-in allows the use of the Service with externalIPs only for users with permission to handle the service/externalips resource in the network.openshift.io group. By default, submariner-globalnet's ServiceAccount has such an RBAC rule. Other Kubernetes distributions: Enable externalip-webhook while specifying allowed-external-ip-cidrs to include the GlobalCIDR allocated to the cluster and allowed-usernames to include system:serviceaccount:submariner-operator:submariner-globalnet. The steps above are necessary because for every exported Service, Submariner Globalnet internally creates a Service with externalIPs and sets the externalIPs to the globalIP assigned to the respective Service. Some deployments of Kubernetes do not allow the Service with externalIPs to be created for security reasons.\n Usage Refer to the Quickstart Guides on how to deploy Submariner with Globalnet enabled. For most deployments users will not need to do anything else once deployed. 
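For example, to inspect the global IPs that Globalnet has allocated in a cluster, you can list the Globalnet resources directly with kubectl (the resource names match the CRDs Submariner installs; the exact output depends on your deployment):\nkubectl get clusterglobalegressips.submariner.io kubectl get globalegressips.submariner.io -A kubectl get globalingressips.submariner.io -A 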
However, users can create GlobalEgressIPs or edit the ClusterGlobalEgressIP for specific use cases.\nEphemeral Port Exhaustion By default, 8 cluster-scoped global IPs are allocated, which allows for ~8x64k active ephemeral ports. If those are still not enough for a cluster, this number can be increased by setting the NumberOfIPs field in the ClusterGlobalEgressIP with the well-known name cluster-egress.submariner.io:\napiVersion: submariner.io/v1 kind: ClusterGlobalEgressIP metadata: name: cluster-egress.submariner.io spec: numberOfIPs: 9 Only the ClusterGlobalEgressIP resource with the name cluster-egress.submariner.io is recognized by Globalnet. This resource is automatically created with the default number of IPs.\n Global IPs for a Namespace If it\u0026rsquo;s desired for all pods in a namespace to use a unique global IP instead of one of the cluster-scoped IPs, a user can create a GlobalEgressIP resource in that namespace:\napiVersion: submariner.io/v1 kind: GlobalEgressIP metadata: name: ns-egressip namespace: ns1 spec: numberOfIPs: 1 The example above will allocate 1 global IP, which will be used as the egress IP for all pods in namespace ns1.\nNumberOfIPs can have a minimum value of 0 and a maximum of 20.\n Global IPs for a set of pods If it\u0026rsquo;s desired for a set of pods in a namespace to use unique global IP(s), a user can create a GlobalEgressIP resource in that namespace with the podSelector field set:\napiVersion: submariner.io/v1 kind: GlobalEgressIP metadata: name: db-pods namespace: ns1 spec: podSelector: matchLabels: role: db numberOfIPs: 1 The example above will allocate 1 global IP, which will be used as the egress IP for all pods matching label role=db in namespace ns1.\n"
},
{
"uri": "/community/roadmap/",
"title": "Roadmap",
"tags": [],
"description": "",
"content": "Submariner organizes all current and upcoming work using GitHub Issues, Projects, and Milestones.\nPlanning Process For a detailed explanation of Submariner planning process, see Submariner\u0026rsquo;s Contribution Guide.\nCurrent Work Current and near-future work is tracked by Submariner\u0026rsquo;s open Projects.\nFuture Work Future work which is not planned for the current release can be found in the backlog board and the enhancements repository.\nSuggesting Work If we are missing something that would make Submariner more useful to you, please let us know. The best way is to file an Issue and include information on how you intend to use Submariner with that feature.\n"
},
{
"uri": "/development/security/containers/",
"title": "Container Requirements",
"tags": [],
"description": "",
"content": "Current privilege setup is as follows, for non-test containers deployed by Submariner. Production containers not described here don’t use extra capabilities.\n Container Capabilities Privilege escalation Privileged Read-only root Runs as non-root Host network Volume mounts Gateway1 All Yes Yes No No Yes Route agent1 All Yes Yes No No Yes Globalnet1 All Yes Yes No No Yes Lighthouse CoreDNS NET_BIND_SERVICE2 No No Yes Yes No /etc/coredns, read-only This container needs to run iptables. \u0026#x21a9;\u0026#xfe0e;\n This is required to bind to port 53. \u0026#x21a9;\u0026#xfe0e;\n "
},
{
"uri": "/community/role-ecosystem/",
"title": "Role in the Ecosystem",
"tags": [],
"description": "",
"content": "As cloud-native applications continue to evolve, the open-source ecosystem for multi-cluster networking has changed significantly in recent years. There are several community projects and emerging technologies addressing various challenges of connecting, securing, and managing communication between Kubernetes clusters across different environments. Some notable examples include Submariner, Skupper, Istio, Calico, and Cilium. These solutions play a critical role in enabling scalable and secure multi-cluster deployments.\nThis page highlights Submariner’s role within the ecosystem and touches on how it compares to some of these other projects.\nSubmariner Networking Focus: Submariner is primarily focused on providing network connectivity at layer 3. It establishes tunnels between clusters to facilitate direct communication between pods and services. Submariner operates at layer 3 of the OSI model, which means it can support communication for any type of application data or protocol. However, setting up Submariner does require some administrative overhead, particularly in configuring firewall rules in the underlying infrastructure.\nService Discovery: Submariner provides an implementation of the Multi-cluster Services API (MCS API), an initiative within the Kubernetes ecosystem aimed at standardizing the management of services across multiple Kubernetes clusters, and follows its core principal of “namespace sameness” whereby Kubernetes namespaces behave consistently and seamlessly across interconnected clusters.\nUse Cases: It is well-suited for scenarios where you need to create a unified permanent network across geographically distributed clusters, ensuring seamless pod-to-pod communication and service discovery.\nConnectivity Domain: Submariner primarily focuses on interconnecting Kubernetes clusters. However, Submariner also provides an experimental feature that allows access to external applications or endpoints that exist outside of the cluster, in non-Kubernetes environments. It\u0026rsquo;s important to note that while this experimental feature exists, it might not be as mature or stable as Submariner\u0026rsquo;s core functionalities. Users interested in leveraging Submariner for accessing external applications should consider testing and evaluating this feature in their specific use cases.\nIntegration: Submariner integrates with various networking solutions and can work alongside existing CNI (Container Network Interface) plug-ins like Calico, Flannel, etc., ensuring compatibility with different Kubernetes environments.\nSecurity: Provides secure communication between clusters using IPsec tunnels by default, which encrypt traffic between clusters.\nComparison Summary Submariner focuses on establishing a unified network between Kubernetes clusters, ensuring secure pod-to-pod communication and service discovery.\n Skupper leverages messaging functionalities to facilitate flexible communication across end-points. While Skupper provides support for linking namespaces across different Kubernetes clusters, it can also be used to support non-Kubernetes environments, including bare-metal, VMs or services running as Docker or Podman containers. It’s important to note that Skupper utilizes a layer 7-based mechanism for establishing connectivity. 
This approach supports selected application protocols today, including HTTP/1.1, HTTP/2, gRPC, and TCP communication.\n Istio provides a robust service mesh with advanced traffic management, policy and security features, primarily designed for intra-cluster communication, but it can be extended to manage communication across clusters with additional setup.\n Projects like Calico and Cilium, which integrate with Kubernetes using the Container Network Interface (CNI), require that the same CNI plug-in is configured consistently across all interconnected clusters for multi-cluster connectivity solutions to work effectively.\n Choosing between these solutions would depend on your specific use case requirements regarding network connectivity, service discovery, security needs, and integration preferences within a multi-cluster environment.\n"
},
{
"uri": "/development/security/",
"title": "Security",
"tags": [],
"description": "",
"content": " Security Reporting Container Requirements Secrets "
},
{
"uri": "/development/shipyard/",
"title": "Working with Shipyard",
"tags": [],
"description": "",
"content": "Overview The Shipyard project provides common tooling for creating Kubernetes clusters with kind (Kubernetes in Docker) and provides a common Go framework for creating end to end tests. Shipyard contains common functionality shared by other projects. Any project specific functionality should be part of that project.\nA base image quay.io/submariner/shipyard-dapper-base is created from Shipyard and contains all the tooling to build other projects and run tests in a consistent environment.\nShipyard has several folders at the root of the project:\n package: Contains the ingredients to build the base images. scripts: Contains general scripts for Shipyard make targets. shared: Contains all the shared scripts that projects can consume. These are copied into the base image under $SCRIPTS_DIR. lib: Library functions that shared scripts, or consuming projects, can use. resources: Resource files to be used by the shared scripts. test: Test library to be used by other projects. Shipyard ships with some shared and image related Make targets which can be used by developers in consuming projects.\nUsage A developer can use the make command to interact with a project (which in turn uses Shipyard).\nTo see all targets defined in a project, run:\nmake targets The most common targets would be clusters, deploy and e2e which are built as a \u0026ldquo;dependency graph\u0026rdquo; - e2e will deploy Submariner if its not deployed, which in turn calls clusters to create the deployment environment. Therefore, variables used in any \u0026ldquo;dependent\u0026rdquo; target will be propagated to it\u0026rsquo;s dependencies.\nSimplified Usage Options For ease of use and convenience, many of the shared targets support a simplified usage model using the special USING variable. The value is a space separated string of usage options. Specifying conflicting options (e.g. wireguard and libreswan) will work, but the outcome should not be considered predictable. Any non-existing options will be silently ignored.\nFor example, to deploy an environment that uses Globalnet, Lighthouse and a WireGuard cable driver use:\nmake deploy USING=\u0026#39;globalnet lighthouse wireguard\u0026#39; Highlighted USING Options General deployment: aws-ocp: Deploy on top of AWS using OCP (OpenShift Container Platform). globalnet: Deploy clusters with overlapping CIDRs, and Submariner in Globalnet mode. lighthouse: Deploy service discovery (Lighthouse) in addition to the basic deployment. ovn: Deploy the clusters with the OVN CNI. air-gap: Deploy clusters in a simulated air-gapped (disconnected) environment. Deployment tools. helm: Deploy clusters using Helm. operator: Deploy clusters using the Submariner Operator. Cable drivers: libreswan: Use the Libreswan cable driver when deploying the clusters. vxlan: Use the VXLAN cable driver when deploying the clusters. wireguard: Use the WireGuard cable driver when deploying the clusters. Testing: subctl-verify: Force end-to-end tests to run with subctl verify, irrespective of any possible project-specific tests. How to Add Shipyard to a Project The project should have a Makefile that contains all the projects targets, and imports all the Shipyard targets.\nIn case you\u0026rsquo;re adding Shipyard to a project that doesn\u0026rsquo;t have it yet, use the following skeleton:\nBASE_BRANCH ?= devel # Running in Dapper ifneq (,$(DAPPER_HOST_ARCH)) include $(SHIPYARD_DIR)/Makefile.inc ### All your specific targets and settings go here. 
### # Not running in Dapper else Makefile.dapper: @echo Downloading $@ @curl -sfLO https://raw.githubusercontent.com/submariner-io/shipyard/$(BASE_BRANCH)/$@ include Makefile.dapper endif You can also refer to the project\u0026rsquo;s own Makefile as an example.\nUse Shipyard in Your Project Once Shipyard has been added to a project, you can use any of the shared targets that it provides.\nHave Shipyard Targets Depend on Your Project\u0026rsquo;s Targets Having any of the Shipyard Makefile targets rely on your project\u0026rsquo;s specific targets can be done easily by adding the dependency in your project\u0026rsquo;s Makefile. For example:\nclusters: \u0026lt;pre-cluster-target\u0026gt; Use Updated Images in Your Project Test an Updated Shipyard Image If you\u0026rsquo;ve made changes to Shipyard\u0026rsquo;s targets and need to test them in your project, run this command in the Shipyard directory:\nmake images This creates a local image with your changes available for consumption in other projects.\nTest Updated Images from Sibling Project(s) In case you made changes in a sibling project and wish to test with that project\u0026rsquo;s images, first rebuild the images:\ncd \u0026lt;path/to/sibling project\u0026gt; make images These images will be available in the local Docker image cache, but not necessarily used by the project when deploying. To use these images, set the PRELOAD_IMAGES variable to the project\u0026rsquo;s images and any sibling images.\nFor example, to use updated gateway images when deploying on the operator repository:\nmake deploy PRELOAD_IMAGES=\u0026#39;submariner-operator submariner-gateway\u0026#39; "
},
{
"uri": "/development/security/secrets/",
"title": "Secrets",
"tags": [],
"description": "",
"content": "The following Kubernetes Secrets are used to store sensitive information (with the usual caveat that Secrets don\u0026rsquo;t protect sensitive information):\n broker-secret- with a Kubernetes-generated suffix, which stores the credentials used to connect to the Broker. submariner-ipsec-psk, which stores the PSK used for IPsec connections. These secrets are stored in the operator’s namespace, submariner-operator.\nThe following fields in the Submariner specification store the names to use:\n BrokerK8sSecret gives the name of the Broker Secret. CeIPSecPSKSecret gives the name of the IPsec Secret. The ServiceDiscovery specification also has a BrokerK8sSecret since it needs access to the Broker.\nThe Operator presents the Secrets as corresponding volumes in the appropriate deployments to make them available to the relevant Submariner components.\n"
},
{
"uri": "/",
"title": "",
"tags": [],
"description": "",
"content": "Submariner Submariner enables direct networking between Pods and Services in different Kubernetes clusters, either on-premises or in the cloud.\nWhy Submariner As Kubernetes gains adoption, teams are finding they must deploy and manage multiple clusters to facilitate features like geo-redundancy, scale, and fault isolation for their applications. With Submariner, your applications and services can span multiple cloud providers, data centers, and regions.\nSubmariner is completely open source, and designed to be network plugin (CNI) agnostic.\nWhat Submariner Provides Cross-cluster L3 connectivity using encrypted or unencrypted connections Service Discovery across clusters subctl, a command-line utility that simplifies deployment and management Support for interconnecting clusters with overlapping CIDRs A few requirements need to be met before you can begin. Check the Prerequisites section for more information.\n Check the Quickstart Guides section for deployment instructions.\n "
},
{
"uri": "/categories/",
"title": "Categories",
"tags": [],
"description": "",
"content": ""
},
{
"uri": "/getting-started/architecture/networkplugin-syncer/",
"title": "Network Plugin Syncer",
"tags": [],
"description": "",
"content": " The information provided in the following section regarding network-plugin-syncer is relevant only for Submariner releases prior to version 0.16. Starting from Submariner 0.16, this functionality has been incorporated into the route-agent.\n The Network Plugin Syncer provides a framework for components to interface with the configured Kubernetes Container Network Interface (CNI) plugin to perform any API/database tasks necessary to facilitate routing cross-cluster traffic, like creating API objects that the CNI plugin will process or working with the specific CNI databases.\nThe detected CNI plugin implementation configured for the cluster is received by the Network Plugin Syncer, and executes the appropriate plugin handler component, if any.\nThe following table highlights the differences with the Route Agent:\n Route Agent Network Plugin Syncer Configures the CNI plugin x Configures low level network elements on the host x Runs as a Kubernetes Deployment x Runs as a Kubernetes Daemonset on every host x This component is only necessary for specific Kubernetes CNI plugins like OVN Kubernetes.\n "
},
{
"uri": "/getting-started/architecture/networkplugin-syncer/ovn-kubernetes/",
"title": "OVN Kubernetes",
"tags": [],
"description": "",
"content": "A specific handler component is deployed for the OVN Kubernetes CNI plugin.\nOVN is a project that builds on top of Open vSwitch providing a rich high level API for describing virtual network components like Logical Routers, Logical Switches, Load balancers, Logical Ports. OVN Kubernetes is a Cloud Management System Plugin (CMS plugin) which manages OVN resources to setup networking for Kubernetes clusters.\nThe OVN Kubernetes handler watches for Submariner Endpoints and Kubernetes Nodes and interfaces with the OVN databases (OVN NorthDB and SouthDB) to store and create OVN resources necessary for Submariner, including:\n A logical router named submariner_router that handles the communication to remote clusters and has a leg on the network which can talk to the ovn-k8s-sub0 interface on the Gateway node. This router is pinned to the active Gateway chassis.\n The Ovn-Kubernetes Specific OVN Load Balancer Group (which contains all of the cluster\u0026rsquo;s service VIPs) is added to the submariner_routerin order to ensure total service connectivity.\n OVN Logical Router Static Routes added to the submariner_router to ensure local traffic destined for remote clusters and remote traffic destined for local resources is routed correctly.\n OVN Logical Router Policies added to the existing ovn_cluster_router which redirect traffic targeted for remote routers through the submariner_router.\n A submariner_join logical switch that connects the submariner_router with the ovn_cluster_router.\n Requires OVN NorthBound DB version 6.1.0+, available with OCP 4.11.0+\n The handler architecture The following diagram illustrates the required Submariner OVN architecture transposed on the native OVN-Kubernetes managed OVN architecture and components. The specific networkpluginsyncer managed OVN components are boxed in green.\n"
},
{
"uri": "/getting-started/architecture/route-agent/",
"title": "Route Agent",
"tags": [],
"description": "",
"content": "The Route Agent component runs on every node in each participating cluster. It is responsible for setting up the necessary host network elements on top of the existing Kubernetes CNI plugin.\nThe Route Agent receives the detected CNI plugin as part of its configuration.\nkube-proxy iptables For CNI plugins that utilize kube-proxy in iptables mode, the Route Agent is responsible for setting up VXLAN tunnels and routing the cross cluster traffic from the node to the cluster’s active Gateway Engine which subsequently sends the traffic to the destination cluster.\nWhen running on the same node as the active Gateway Engine, Route Agent creates a VXLAN VTEP interface to which Route Agent instances running on the other worker nodes in the local cluster connect by establishing a VXLAN tunnel with the VTEP of the active Gateway Engine node. The MTU of the VXLAN tunnel is configured based on the MTU of the default interface on the host minus the VXLAN overhead.\nRoute Agents use Endpoint resources synced from other clusters to configure routes and to program the necessary iptables rules to enable full cross-cluster connectivity.\nWhen the active Gateway Engine fails and a new Gateway Engine takes over, Route Agents will automatically update the route tables on each node to point to the new active Gateway Engine node.\nOVN Kubernetes With OVN Kubernetes we reuse the GENEVE tunnels created by OVNKubernetes CNI to reach the gateway nodes from non-gateway nodes and a separate VXLAN tunnel is not created.\nFor Submariner 0.15 and below refer network plugin syncer\n With OVN we can have two deployment models,\nSubmariner automatically discovers the OVN mode and uses the appropriate implementation and this is not a configuration option in Submariner\n Single Zone A single-zone deployment involves a single OVN database and a set of master nodes that program it.\nHere, Submariner configures the ovn_cluster_router to route traffic to other clusters through the ovn-k8s-mp0 interface of the gateway node, effectively bridging it to the host networking stack of the gateway node. Since ovn_cluster_router is distributed, this route also ensures that traffic from non-gateway node is directed to local gateway node.\nThe traffic that comes through Submariner tunnel from remote cluster to gateway node will be directed to ovn-k8s-mp0 interface through host routes and will be handled by ovn_cluster_router.\nMultiple Zone In a multi-zone configuration, each zone operates with its dedicated OVN database and OVN master pod. These zones are interconnected via transit switches, and OVN-Kubernetes orchestrates the essential routing for enabling pod and service communication across nodes situated in different zones.\nWithin this framework, the Submariner route agent plays a pivotal role. It ensures that the same routing configurations employed in a single zone are replicated in the OVN cluster router and the host stack of the gateway node. For nodes outside the zone where the gateway node is located, Submariner takes action by adding a route that directs traffic to remote clusters, channeling it through the transit switch IP of the gateway node.\nThe host networking rules remain consistent across all nodes. They guide traffic towards the ovn_cluster_router specific to that zone, leveraging ovn-k8s-mp0. The ovn_cluster_router, in turn, guarantees that the traffic is directed through the Submariner tunnel via the gateway node.\n"
},
{
"uri": "/tags/",
"title": "Tags",
"tags": [],
"description": "",
"content": ""
}]