
Flag to create network policies to enable connectivity between components #74

Open · russmalloypers opened this issue Nov 2, 2021 · 4 comments


@russmalloypers

In a production-type environment, which may have a default-deny policy in a given namespace, this Helm chart doesn't provide the capability to add network policies between components (unless I'm missing something).

Is it possible to have a flag added to create network policies between components, limited with pod selectors such as:

```
podSelector: app=couchbase
podSelector: app.kubernetes.io/name=couchbase-admission-controller
```
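
For illustration, a minimal sketch of the kind of policy such a flag might generate, assuming the labels above; the policy name and the exact rule shape are hypothetical, not anything the chart currently produces:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: couchbase-allow-components   # hypothetical name
spec:
  # Select the Couchbase Server pods this policy applies to.
  podSelector:
    matchLabels:
      app: couchbase
  policyTypes:
  - Ingress
  ingress:
  # Allow traffic from other Couchbase pods and from the admission controller.
  - from:
    - podSelector:
        matchLabels:
          app: couchbase
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: couchbase-admission-controller
```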

@patrick-stephens
Contributor

This certainly sounds reasonable, although catering for the myriad of ways it might be configured by a specific user will be tricky. We could look to whitelist the allowed traffic at a fairly granular level. It's frustrating that services cannot be selected by network policies yet: https://kubernetes.io/docs/concepts/services-networking/network-policies/#what-you-can-t-do-with-network-policies-at-least-not-yet

Presumably you currently have a policy along these lines? https://github.com/ahmetb/kubernetes-network-policy-recipes/blob/master/03-deny-all-non-whitelisted-traffic-in-the-namespace.md

It would be helpful to see the specific restrictions you have, just to be clear.

I think in this case it would be something similar to what we do with creating the Secrets: the Helm chart could generate some default policies, but this would be opt-in for those who need them. Others may want specific policies or have different setups, so we need to ensure we don't break anything at that level, which is tricky to debug.
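
As a sketch of that opt-in shape (the `networkPolicy.create` value and the template path are assumptions on my part, not existing chart values), the generated policies could be guarded behind a flag the same way the Secrets are:

```yaml
# values.yaml — hypothetical flag, disabled by default
networkPolicy:
  create: false
```

```yaml
# templates/networkpolicy.yaml — only rendered when the user opts in
{{- if .Values.networkPolicy.create }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: {{ .Release.Name }}-couchbase-network-policy
spec:
  podSelector:
    matchLabels:
      app: couchbase
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: couchbase
{{- end }}
```

Users who need different policies would simply leave the flag off and manage their own.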

@russmalloypers
Author

Correct, our default network policy allows all outbound traffic but no inbound traffic. Inbound traffic must be explicitly allowed:
```yaml
spec:
  egress:
  - {}
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
```

From what I understand there are a few rules required (rule 1 is sketched as a NetworkPolicy after this list):

1. The admission controller needs 443 open from the couchbase operator.
2. Couchbase cluster nodes need a large list of ports exposed between themselves: 4369/TCP, 8091/TCP, 8092/TCP, 8093/TCP, 8094/TCP, 8095/TCP, 8096/TCP, 9100/TCP, 9101/TCP, 9102/TCP, 9103/TCP, 9104/TCP, 9105/TCP, 9110/TCP, 9111/TCP, 9112/TCP, 9113/TCP, 9114/TCP, 9115/TCP, 9116/TCP, 9117/TCP, 9118/TCP, 9120/TCP, 9121/TCP, 9122/TCP, 9130/TCP, 9140/TCP, 9999/TCP, 11207/TCP, 11209/TCP, 11210/TCP, 18091/TCP, 18092/TCP, 18093/TCP, 18094/TCP, 18095/TCP, 18096/TCP, 19130/TCP, 21100/TCP, 21150/TCP
3. Couchbase cluster nodes need 8091 exposed to whatever pods/services will need to access the DB, as well as the UI.
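
For example, rule 1 might look something like this as a standalone policy; the operator's `app: couchbase-operator` label and the policy name are guesses on my part, so they'd need checking against the actual labels the chart applies:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-operator-to-dac        # hypothetical name
spec:
  # Protect the admission controller pods.
  podSelector:
    matchLabels:
      app.kubernetes.io/name: couchbase-admission-controller
  policyTypes:
  - Ingress
  ingress:
  # Allow only the operator to reach the webhook port.
  - from:
    - podSelector:
        matchLabels:
          app: couchbase-operator    # assumed operator label
    ports:
    - protocol: TCP
      port: 443
```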

I'm probably missing some, because I'm having a hard time finding in the documentation exactly what needs to be exposed and to which pods.

@patrick-stephens
Contributor

patrick-stephens commented Nov 4, 2021

Yeah, the port definitions are all handled by Couchbase Server and can vary with whichever Couchbase Server version you're running, so typically we just link out to the documentation there, e.g. https://docs.couchbase.com/server/current/install/install-ports.html

This sounds similar to some of the configuration we have to support for Istio and other service meshes. There are additional complications with some of the networking modes, as well as with using XDCR or SDKs.

I think this is a general issue for the operator rather than a specific Helm deployment issue, as we'll likely have others wanting to do the same. I'm going to raise a JIRA on getting it documented with an example, which could then be reused for the Helm deployment. In the meantime, I'll try to knock up a working example for you with KIND and Helm locally as soon as I can, to ensure you're not blocked.

https://issues.couchbase.com/browse/K8S-2510

@patrick-stephens
Contributor

I have a working example here: https://github.com/patrick-stephens/couchbase-gitops/blob/96254f590bac86b2a0165e0a69b7e5cb1e77d8f1/network-policy-test.sh#L81-L152

Note I am not blocking ports at all, purely filtering at the pod level. I've also split the DAC out into a separate namespace, as per best practice for a cluster-wide DAC. Rules would be needed there to allow traffic between the DAC and the Kubernetes API; obviously kubectl, helm, or whatever else you use also needs the permissions to allow ingress into the cluster appropriately.
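
To make the cross-namespace part concrete, something like the following could admit webhook traffic into a dedicated DAC namespace. Since the API server is not a pod, it can't be matched with a podSelector, so this sketch leaves the webhook port open to all sources, which is coarse but workable; the namespace, names, labels, and port are all assumptions rather than values from my example script:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-webhook-ingress        # hypothetical name
  namespace: couchbase-dac           # hypothetical DAC namespace
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: couchbase-admission-controller
  policyTypes:
  - Ingress
  ingress:
  # The kube-apiserver originates webhook calls from outside the pod network,
  # so allow ingress on the webhook port from anywhere.
  - ports:
    - protocol: TCP
      port: 443
```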
