
Allow user to replace ingress network #31714

Merged · 3 commits · Mar 27, 2017

Conversation

@aboch (Contributor) commented Mar 9, 2017

Vendoring of swarmkit carries a change to support ingress network replacement and two important scheduler fixes:

Vendoring of libnetwork carries:

This PR allows the user to remove and (re)create the ingress network:

  • A fresh new cluster will come up with the default ingress network, as today.
  • The ingress network can be removed, and it need never be recreated.
$ docker network rm ingress
WARNING! Before removing the routing-mesh network, make sure all the nodes in your swarm run the same docker engine version. Otherwise, removal may not be effective and functionality of newly created ingress networks will be impaired.
Are you sure you want to continue? [y/N]
ingress
  • On creation, the user can specify any parameter available for any other swarm network; the name can be anything (see the API sketch after these examples).
$ docker network create -d overlay --opt com.docker.network.mtu=1200 --ingress my-ingress
syws4fv0sqrwkhblly297z569
  • Removing the ingress network requires that no service depends on it.
$ docker network rm my-ingress 
WARNING! Before removing the routing-mesh network, make sure all the nodes in your swarm run the same docker engine version. Otherwise, removal may not be effective and functionality of newly created ingress networks will be impaired.
Are you sure you want to continue? [y/N]
Error response from daemon: rpc error: code = 7 desc = ingress network cannot be removed because service lbxksk0633cqzbqrlnwx9v4kp depends on it
  • Swarm service creation/update will verify that the ingress network is present before succeeding, if the service requires the routing mesh.
$ docker service create --name srv-on-rmesh -p 9000:8000 busybox top
Error response from daemon: rpc error: code = 7 desc = service needs ingress network, but ingress network is not present
$
$ docker service update --publish-add 7000:80 s0
Error response from daemon: rpc error: code = 7 desc = service needs ingress network, but ingress network is not present
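The same flow can also be driven programmatically. Here is a minimal Go sketch using the Engine API client and the `Ingress` flag this PR adds to `types.NetworkCreate`; error handling is abbreviated and the network names are just examples:

```go
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewEnvClient()
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Remove the current ingress network; this fails if any service still depends on it.
	if err := cli.NetworkRemove(ctx, "ingress"); err != nil {
		panic(err)
	}

	// Recreate it with custom options; Ingress marks it as the routing-mesh network.
	resp, err := cli.NetworkCreate(ctx, "my-ingress", types.NetworkCreate{
		Driver:  "overlay",
		Ingress: true,
		Options: map[string]string{"com.docker.network.mtu": "1200"},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.ID)
}
```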

Fixes #24220

Depends on moby/swarmkit/pull/2028 and moby/libnetwork/pull/1678



@thaJeztah (Member):

Is the custom ingress network still treated "special", i.e. is "icc" communication disabled by default?

@aboch (Contributor, Author) commented Mar 13, 2017

@thaJeztah

> is "icc" communication disabled by default

That has not changed; even now, icc is not disabled on the ingress network.

What a container cannot do today (and this PR does not change that) is reach services via the ingress network on L4 ports other than the ones the service exposes. That is achieved internally, with no user configuration needed at ingress network creation.

@mavenugo (Contributor) commented Mar 13, 2017

@aboch but hopefully Service Discovery, VIP and LB are disabled on this ingress network?

@aboch (Contributor, Author) commented Mar 13, 2017

> but Service Discovery, VIP and LB are disabled on this Ingress network correct ?

Yes, I forgot that, thanks. That is all disabled. It's all done internally, and the user does not and will not need to control it via network creation options.

@aboch (Contributor, Author) commented Mar 13, 2017

Hmm, some mess happening with the vendoring; let me look into that.

Looks like I need to update my vndr tool.

Edit: ran the updated vndr; should be fine now.

@aboch aboch mentioned this pull request Mar 14, 2017
@aboch aboch added this to the 17.04.0 milestone Mar 14, 2017
@aboch (Contributor, Author) commented Mar 14, 2017

Hmm, a docker-py integration test timed out and failed on janky (https://jenkins.dockerproject.org/job/Docker-PRs/40326/console):

07:08:00 ../../../../../docker-py/tests/integration/api_container_test.py ...s........sx.................Build timed out (after 120 minutes). Marking the build as failed.
07:08:00 Build timed out (after 120 minutes). Marking the build as aborted.

But when I run it locally, the docker-py integration test passes:

============================= test session starts ==============================
platform linux2 -- Python 2.7.9, pytest-2.9.1, py-1.4.32, pluggy-0.3.1
rootdir: /docker-py, inifile: pytest.ini
plugins: cov-2.1.0
collected 239 items

../../../../../docker-py/tests/integration/api_build_test.py ........s.
../../../../../docker-py/tests/integration/api_client_test.py ........
../../../../../docker-py/tests/integration/api_container_test.py ...s........sx...........................................s.....
../../../../../docker-py/tests/integration/api_exec_test.py ........
../../../../../docker-py/tests/integration/api_healthcheck_test.py ...
../../../../../docker-py/tests/integration/api_image_test.py ..........s..s
../../../../../docker-py/tests/integration/api_network_test.py .........s............s.
../../../../../docker-py/tests/integration/api_plugin_test.py ssssssssss
../../../../../docker-py/tests/integration/api_secret_test.py sssss
../../../../../docker-py/tests/integration/api_service_test.py ..........ss.s......s
../../../../../docker-py/tests/integration/api_swarm_test.py ..s........
../../../../../docker-py/tests/integration/api_volume_test.py ..s...s..
../../../../../docker-py/tests/integration/client_test.py ...
../../../../../docker-py/tests/integration/errors_test.py .
../../../../../docker-py/tests/integration/models_containers_test.py .......................
../../../../../docker-py/tests/integration/models_images_test.py .x....
../../../../../docker-py/tests/integration/models_networks_test.py ....
../../../../../docker-py/tests/integration/models_nodes_test.py .
../../../../../docker-py/tests/integration/models_resources_test.py .
../../../../../docker-py/tests/integration/models_services_test.py ....s
../../../../../docker-py/tests/integration/models_swarm_test.py .
../../../../../docker-py/tests/integration/models_volumes_test.py ..
../../../../../docker-py/tests/integration/regression_test.py ......

Could this be just a glitch? Has anybody seen it before?

@aboch (Contributor, Author) commented Mar 14, 2017

All green.
ping @aaronlehmann @mavenugo @cpuguy83 @thaJeztah @vdemeester

@@ -82,6 +82,7 @@ type NetworkSpec struct {
IPv6Enabled bool `json:",omitempty"`
Internal bool `json:",omitempty"`
Attachable bool `json:",omitempty"`
Ingress bool `json:",omitempty"`
Reviewer (Contributor):

Add to swagger.yaml

@@ -400,6 +400,7 @@ type NetworkResource struct {
IPAM network.IPAM // IPAM is the network's IP Address Management
Internal bool // Internal represents if the network is used internal only
Attachable bool // Attachable represents if the global scope is manually attachable by regular containers from workers in swarm mode.
Ingress bool // Ingress indicates the network is providing the routing-mesh for the swarm cluster.
Reviewer (Contributor):

Add to swagger.yaml

@aboch:

Thanks, I will, along with the addition to the command reference (I forgot to add the diff to the commit).

@aboch:

done

@@ -431,6 +432,7 @@ type NetworkCreate struct {
IPAM *network.IPAM
Internal bool
Attachable bool
Ingress bool
Reviewer (Contributor):

Add to swagger.yaml

@aboch:

done

@@ -59,6 +60,8 @@ func newCreateCommand(dockerCli *command.DockerCli) *cobra.Command {
flags.BoolVar(&opts.ipv6, "ipv6", false, "Enable IPv6 networking")
flags.BoolVar(&opts.attachable, "attachable", false, "Enable manual container attachment")
flags.SetAnnotation("attachable", "version", []string{"1.25"})
flags.BoolVar(&opts.ingress, "ingress", false, "Swarm routing-mesh network")
Reviewer (Contributor):

"Create a swarm routing-mesh network"?

@aboch:

👍

if ingressWorkerStop != nil {
close(ingressWorkerStop)
ingressWorkerOnce = sync.Once{}
ingressID = ""
Reviewer (Contributor):

Without locking, this can race with the goroutine in setupIngressWorker.

func (daemon *Daemon) stopIngressWorker() {
if ingressWorkerStop != nil {
close(ingressWorkerStop)
ingressWorkerOnce = sync.Once{}
Reviewer (Contributor):

What's the purpose of resetting ingressWorkerOnce?

@aboch:

When the node leaves the swarm and then joins a swarm again.

Reviewer (Contributor):

Is there an external guarantee that stopIngressWorker can't be called concurrently with SetupIngress or ReleaseIngress?

I think I'd feel more comfortable having a mutex and a boolean instead of a sync.Once. Then the mutex could also protect ingressID, and we wouldn't have to make assumptions about which functions can be called at the same time.
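A rough sketch of what that mutex-and-boolean suggestion could look like; all names here are illustrative, not the merged code:

```go
package main

import "sync"

type ingressState struct {
	mu            sync.Mutex
	workerStarted bool
	ingressID     string
}

// setup records the ingress network ID and starts the worker on first use,
// all under one lock, so no sync.Once is needed.
func (s *ingressState) setup(id string, startWorker func()) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.ingressID = id
	if !s.workerStarted {
		startWorker()
		s.workerStarted = true
	}
}

// reset clears the state when the node leaves the swarm, allowing a later rejoin.
func (s *ingressState) reset() {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.workerStarted = false
	s.ingressID = ""
}

func main() {
	s := &ingressState{}
	s.setup("ingress-id", func() { /* spawn the worker goroutine here */ })
	s.reset()
}
```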

@aboch:

> Is there an external guarantee that stopIngressWorker can't be called concurrently with SetupIngress or ReleaseIngress?

Yes, stopIngressWorker is called when Docker leaves the swarm.
SetupIngress and ReleaseIngress cannot be called concurrently, as they are all queued.
The worker logic exists precisely to serialize the setup/release events.

I was under the impression that setup or release, which are called by executor.Configure(), cannot end up being called after DaemonLeavesCluster().

Given that the worker drains the ingress network creation/deletion events, I did not expect a long list of events to be left in the queue to process.

@aboch:

OK, let me think about it.

@aboch:

Actually, this can be resolved by moving the ingressID reset into the setupIngressWorker() goroutine when reacting to the stop event (L122).

@aboch:

Because only one select case at a time will run, and the one that executes the setup/release function will read the ingressID value and pass it to the functions.

Reviewer (Contributor):

What confuses me about ingressWorkerOnce is that the use of sync.Once implies there's synchronization involved, but the fact that we're resetting it without a lock implies synchronization isn't an issue. Reviewing the places where ReleaseIngress and SetupIngress are called, I don't think they can be called simultaneously. If this is the case, I think it would be clearer to just use a boolean and an if statement. Otherwise, the code looks suspiciously like incorrect concurrent code.

@aboch:

The reason for using a Once here is not that we are in a concurrency-prone code path. It is simply to ensure we initialize the worker once, given that we do lazy initialization of the worker.
Therefore, only the first call to SetupIngress from the executor will result in starting the worker.
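For illustration, a minimal sketch of that lazy-initialization pattern; the job type is hypothetical and this is not the exact daemon code:

```go
package main

import "sync"

var (
	ingressWorkerOnce  sync.Once
	ingressJobsChannel = make(chan func())
)

// startIngressWorker spawns the goroutine that serializes setup/release events.
func startIngressWorker() {
	go func() {
		for job := range ingressJobsChannel {
			job() // events are drained one at a time
		}
	}()
}

// setupIngress lazily starts the worker: only the first call spawns it.
func setupIngress(job func()) {
	ingressWorkerOnce.Do(startIngressWorker)
	ingressJobsChannel <- job
}

func main() {
	done := make(chan struct{})
	setupIngress(func() { close(done) })
	<-done
}
```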

@aboch (Contributor, Author) commented Mar 14, 2017

@aaronlehmann Addressed your comments. PTAL.

)

func (daemon *Daemon) setupIngressWorker() {
ingressWorkerStop = make(chan struct{}, 1)
Reviewer (Contributor):

This channel doesn't need to be buffered

@aboch:

Will change

ingressChan <- struct{}{}
return func() { <-ingressChan }
func (daemon *Daemon) stopIngressWorker() {
if ingressWorkerStop != nil {
Reviewer (Contributor):

This if statement also strikes me as suspicious. It implies that stopIngressWorker can be called before setupIngressWorker. But if that's the case, it suggests that after leaving a swarm and rejoining, we could close an already-closed channel (ingressWorkerStop never gets reset to nil).

Again, I think adding locking around this stuff would make it a lot easier to be confident that there aren't concurrency bugs. If nothing else, it would make my review a lot easier.
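A contrived sketch of the rejoin hazard being described; the flow is hypothetical and the real code paths are more involved:

```go
package main

var ingressWorkerStop chan struct{}

func setupIngressWorker() {
	ingressWorkerStop = make(chan struct{})
}

func stopIngressWorker() {
	if ingressWorkerStop != nil {
		// Because the channel is never reset to nil, a second stop without
		// an intervening setup closes an already-closed channel and panics.
		close(ingressWorkerStop)
	}
}

func main() {
	setupIngressWorker()
	stopIngressWorker() // first leave: fine
	stopIngressWorker() // leave again before setup runs: panic: close of closed channel
}
```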

@aboch (Mar 15, 2017):

> It implies that stopIngressWorker can be called before setupIngressWorker

I see your point.

The fact is that SetupIngressNetwork, called from the executor, and DaemonLeavesCluster are invoked from two different threads. Their execution order is not guaranteed, and I saw it was non-deterministic while testing the initial diffs.

In fact, in the setup ingress call we need to wait for agentInitWait() for this reason.

So it is not excluded that a swarm join quickly followed by a swarm leave would flip the order of events.

So, on DaemonLeavesCluster, I cannot assume the worker was started.

If locking this section eases the code reading, I agree we should do it. I will update shortly.

But this should not be confused with my use of the Once, which is for lazy initialization of the worker, as I explained in the other comment.

@aboch (Contributor, Author) commented Mar 15, 2017

Updated.

ingressJobsChannel chan *ingressJob
ingressWorkerStop chan struct{}
ingressID string
ingress sync.Mutex
Reviewer (Contributor):

Thanks for adding the lock. But startIngressWorker is using ingressWorkerStop without holding the lock, and SetupIngress / ReleaseIngress are using ingressWorkerOnce without holding the lock, so unfortunately my concerns about concurrency are not fully addressed.

Earlier in the thread you said:

> The fact is the SetupIngressNetwork being called from the executor and the call to DaemonLeavesCluster happen from two different threads. Their execution order is not guaranteed, and I saw it not deterministic during the testing of initial diffs.
>
> In fact in the setup ingress call, we need to wait for agentInitWait() for this reason.
>
> So it is not excluded a swarm join quickly followed by a swarm leave would flip the order of events.

...so I still don't see how it's safe to use these variables from different goroutines without a lock.

@aboch:

> SetupIngress / ReleaseIngress are using ingressWorkerOnce

Not sure I follow. Once.Do(f) is designed to not allow more than one execution of f, even in a concurrent environment. I do not think there is any need to hold a lock during a call to Once.Do.

Being called inside a Once.Do construct, startIngressWorker is guaranteed to be called once, by one thread.

Still, as you said, startIngressWorker initializes the two channels, and stopIngressNetwork, which also accesses the channels, could be called concurrently. To protect against this, both startIngressWorker and stopIngressNetwork now lock around the access.

Yes, it then spawns the goroutine which listens on those channels, with no protection.

The correct thing here is probably to define a type ingressWorker struct with start, stop, and enqueue methods, so that it can control the access to, and lifecycle of, its channels (see the sketch below).
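Something along these lines, perhaps; this is a hedged sketch of that proposal, with illustrative names and a simplified job type:

```go
package main

import "sync"

type ingressWorker struct {
	mu      sync.Mutex
	jobs    chan func()
	stop    chan struct{}
	started bool
}

// Start creates fresh channels and spawns the worker goroutine, once per
// start/stop cycle. The goroutine receives the channels as arguments, so it
// never reads the shared fields without the lock.
func (w *ingressWorker) Start() {
	w.mu.Lock()
	defer w.mu.Unlock()
	if w.started {
		return
	}
	w.jobs = make(chan func())
	w.stop = make(chan struct{})
	w.started = true
	go func(jobs <-chan func(), stop <-chan struct{}) {
		for {
			select {
			case job := <-jobs:
				job() // setup/release events are drained one at a time
			case <-stop:
				return
			}
		}
	}(w.jobs, w.stop)
}

// Enqueue hands a job to the worker goroutine; it assumes Start has run.
func (w *ingressWorker) Enqueue(job func()) {
	w.mu.Lock()
	jobs := w.jobs
	w.mu.Unlock()
	jobs <- job
}

// Stop signals the goroutine to exit and allows a later Start.
func (w *ingressWorker) Stop() {
	w.mu.Lock()
	defer w.mu.Unlock()
	if !w.started {
		return
	}
	close(w.stop)
	w.started = false
}

func main() {
	w := &ingressWorker{}
	w.Start()
	done := make(chan struct{})
	w.Enqueue(func() { close(done) })
	<-done
	w.Stop()
}
```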

Reviewer (Contributor):

> I do not think there is any need to hold a lock during a call to Once.Do.

There is if you may overwrite the sync.Once from another goroutine.

@aboch:

Oh, OK, you are referring to the reset of the Once variable.
Yes, that is a problem.

@aboch:

The more I look at this, the more it seems to me the best tradeoff for now is to not stop the goroutine once it is started.

@aboch:

@aaronlehmann I decided to fall back to not stopping the routine when the node leaves the swarm.

IMO that is fine for now, as we are adding new functionality. We can later calmly improve the logic to achieve an ideal stop of the routine.

I updated the diffs and tested add/rm of the ingress network, before and after leaving and rejoining the cluster.

ingress.Lock()
if ingressWorkerStop != nil {
close(ingressWorkerStop)
ingressWorkerStop = nil
Reviewer (Contributor):

This pattern may cause the goroutine in startIngressWorker to miss the channel close, because it might see the nil channel instead of the one that was closed. Also, the race detector will complain (and this might cause problems, depending on the internal implementation of select).
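A contrived sketch of that failure mode, with hypothetical names: the worker re-evaluates the shared channel variable on each pass, so it can observe the nil written after the close and miss the stop signal.

```go
package main

import "time"

var (
	ingressWorkerStop = make(chan struct{})
	jobs              = make(chan func())
)

func worker() {
	for {
		select {
		case job := <-jobs:
			job()
		// Each pass re-reads the shared variable without a lock; if it sees
		// the nil written below, this case can never fire and the close is
		// missed. The race detector flags the read/write pair either way.
		case <-ingressWorkerStop:
			return
		}
	}
}

func stopIngressWorker() {
	close(ingressWorkerStop)
	ingressWorkerStop = nil // racy write observed by the worker's select
}

func main() {
	go worker()
	jobs <- func() {} // let the worker complete at least one pass
	time.Sleep(10 * time.Millisecond)
	stopIngressWorker()
	time.Sleep(10 * time.Millisecond)
}
```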

@aboch:

That's right. I added this in the last diff round; it is wrong and not needed.
The existing check if ingressWorkerStop != nil is just to tell us whether a worker routine is actually running and, if so, to signal the routine to stop.

aboch added 2 commits March 24, 2017 11:07
Signed-off-by: Alessandro Boch <[email protected]>
Signed-off-by: Alessandro Boch <[email protected]>
@mavenugo (Contributor):

@aaronlehmann I understand that this PR is blocking other Swarmkit vendoring PRs. I am waiting for @aboch to be back so that he can respond to the compatibility question.

@aboch (Contributor, Author) commented Mar 25, 2017

@mavenugo
I think proper documentation will need to be added to the upgrade section, so that users will not attempt removing the ingress network on a mixed-version cluster.

Although I think it is very unlikely an admin would remove the routing mesh on a half-upgraded production environment, a casual user may well hit the problem, so I am proposing we follow the same approach used in docker network prune, where the CLI client prints a clear warning and asks for user confirmation. Something like:

$ docker network remove ingress
WARNING! You are removing the routing-mesh network. Proceed only if all nodes in the cluster are at version 17.05.
Are you sure you want to continue? [y/N]

Higher-level management tools can already reject the request, given that they likely have a notion of the engine version running on each node.
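A minimal sketch of such a prompt in generic Go, not the actual CLI plumbing:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// confirmIngressRemoval prints the warning and returns true only on "y"/"yes".
func confirmIngressRemoval() bool {
	fmt.Println("WARNING! You are removing the routing-mesh network. Proceed only if all nodes in the cluster are at version 17.05.")
	fmt.Print("Are you sure you want to continue? [y/N] ")
	answer, err := bufio.NewReader(os.Stdin).ReadString('\n')
	if err != nil {
		return false
	}
	answer = strings.ToLower(strings.TrimSpace(answer))
	return answer == "y" || answer == "yes"
}

func main() {
	if !confirmIngressRemoval() {
		fmt.Println("aborted")
		return
	}
	fmt.Println("proceeding with removal")
}
```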

@mavenugo (Contributor):

@aboch I agree. If we can make this change, I think we will have the necessary UX covered, and when the proper capability-exchange mechanism arrives, it can be automated as well.

Can you please take care of this change so we can get this merged?

@aboch (Contributor, Author) commented Mar 26, 2017

@mavenugo Updated.

@vdemeester (Member):

@aboch, a build failure that seems legit:

15:08:31 ----------------------------------------------------------------------
15:08:31 FAIL: docker_cli_swarm_test.go:416: DockerSwarmSuite.TestSwarmIngressNetwork
15:08:31 
15:08:31 [df0c8073844d8] waiting for daemon to start
15:08:31 [df0c8073844d8] daemon started
15:08:31 
15:08:31 docker_cli_swarm_test.go:425:
15:08:31     c.Assert(err, checker.IsNil, check.Commentf(out))
15:08:31 ... value *exec.ExitError = &exec.ExitError{ProcessState:(*os.ProcessState)(0xc42020fb40), Stderr:[]uint8(nil)} ("exit status 1")
15:08:31 ... Error response from daemon: rpc error: code = 6 desc = ingress network (pmoutasdajym5jdq9jxs1zlx7) is already present
15:08:31 
15:08:31 
15:08:31 [df0c8073844d8] exiting daemon
15:08:34 
15:08:34 ----------------------------------------------------------------------

@aboch (Contributor, Author) commented Mar 26, 2017

Thanks @vdemeester.
Now that I have changed the CLI to ask for confirmation, and since there is no force option for docker network remove, I'd better change the respective integration test to use the API calls directly.

@mavenugo (Contributor):

@aboch I think it is better to introduce -f for docker network rm and handle it for all the other regular remove cases as well. That would also force the user to use this flag when removing the ingress network. But that is beyond the scope of this PR, hence I am good with the current PR.

@vdemeester (Member):

@mavenugo we can add the -f/--force flag in another PR (follow-up). I can take care of that 😉
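For reference, a sketch of how such a flag could be wired with cobra, in the style of the flag registration shown earlier in the diff; the command skeleton and names are illustrative, not the follow-up PR itself:

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

func main() {
	var force bool
	cmd := &cobra.Command{
		Use:   "rm NETWORK [NETWORK...]",
		Short: "Remove one or more networks",
		RunE: func(cmd *cobra.Command, args []string) error {
			if !force {
				// Without --force the CLI would prompt for confirmation here
				// before removing an ingress network.
				fmt.Println("would prompt for confirmation")
			}
			fmt.Println("removing:", args)
			return nil
		},
	}
	cmd.Flags().BoolVarP(&force, "force", "f", false, "Do not prompt for confirmation")
	if err := cmd.Execute(); err != nil {
		fmt.Println(err)
	}
}
```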

@vdemeester (Member) left a review:

LGTM 🐯

@mavenugo (Contributor):

Thanks @aboch @vdemeester . LGTM.

@edesai commented Jun 28, 2017

I am using 17.05 for my manager and worker nodes, and I am trying to create a new ingress network with a custom subnet and then create a swarm service. But the service doesn't seem to spawn any containers. I see some errors in the docker logs; it seems to keep looping over RequestAddress on the new subnet. Can someone help out here?

Logs as below:

Jun 28 10:58:24 dockerd[19967]: time="2017-06-28T10:58:24.952553135-07:00" level=debug msg="Service xa3gd5xvpgatbwwhis1of9lp2 was scaled up from 0 to 1 instances" module=node node.id=b9zadh96hjod5zrx55lvsv19m
Jun 28 10:58:24 dockerd[19967]: time="2017-06-28T10:58:24.953912764-07:00" level=error msg="task allocation failure" error="service xa3gd5xvpgatbwwhis1of9lp2 to which this task lukp3bld3w35g560h5ufr6ae0 belongs has pending allocations" module=node node.id=b9zadh96hjod5zrx55lvsv19m
Jun 28 10:58:24 edesai-redhat-7-3-dcnm dockerd[19967]: time="2017-06-28T10:58:24.953963436-07:00" level=debug msg="RequestAddress(GlobalDefault/11.11.0.0/16, , map[])"
Jun 28 10:58:24 edesai-redhat-7-3-dcnm dockerd[19967]: time="2017-06-28T10:58:24.954710522-07:00" level=debug msg="RequestAddress(GlobalDefault/11.11.0.0/16, , map[])"
.
.
.

My steps of reproducing are:

  1. docker network rm ingress

  2. docker network create --driver overlay --ingress --subnet 11.11.11.1/16 --gateway 11.11.11.1 my-ingress
    ikhelr9jnjllg8fr5xfk9p8bg

  3. docker service create --network my-ingress ct:0.1
    xa3gd5xvpgatbwwhis1of9lp2
    Since --detach=false was not specified, tasks will be created in the background.
    In a future release, --detach=false will become the default.

Observations:

docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
xa3gd5xvpgat condescending_spence replicated 0/1 172.28.x.x:5000/ct:0.1

docker network ls
NETWORK ID NAME DRIVER SCOPE
e88846664160 bridge bridge local
fde9fb9ec622 docker_gwbridge bridge local
b3efd1b6e7f4 host host local
ikhelr9jnjll my-ingress overlay swarm
5e8e58b59631 none null local

docker network inspect ikhelr9jnjll
[
{
"Name": "my-ingress",
"Id": "ikhelr9jnjllg8fr5xfk9p8bg",
"Created": "2017-06-28T10:57:37.24037921-07:00",
"Scope": "swarm",
"Driver": "overlay",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "11.11.11.1/16",
"Gateway": "11.11.11.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": true,
"Containers": {
"ingress-sbox": {
"Name": "ingress-endpoint",
"EndpointID": "940a99930934e923499d9ba59e87e166f5c1333a7010411f0968a1335d43bb59",
"MacAddress": "02:42:0b:0b:00:02",
"IPv4Address": "11.11.0.2/16",
"IPv6Address": ""
}
},

@thaJeztah (Member):

@edesai please open an issue instead of commenting on a closed pull request.

Also keep in mind that the GitHub issue tracker is not intended as a general support forum, but for reporting bugs and feature requests. For other types of questions, consider using one of the community support channels.

@realcbb commented Oct 13, 2017

@thaJeztah Can the driver of the ingress network be customized, or can the overlay driver's UDP port 4789 be changed? In my scenario, UDP 4789 is already used by the underlying network of the VMs, which cannot be modified.

When I use the weave network plugin v2 for swarm, communication between containers on different hosts works well, but it seems the network is still blocked when the routing mesh (LB) is involved.
