
EZP-31804: [Docker] Added fixed IP to varnish container to workaround TRUSTED_PROXIES setting #592

Merged (2 commits) on Oct 26, 2020

Conversation

@mnocon (Member) commented Aug 25, 2020

JIRA: https://jira.ez.no/browse/EZP-31804

A potential replacement to the solution proposed in ezsystems/docker-php#34

Currently we set varnish as a trusted proxy, but it doesn't work - Symfony does not resolve hostnames when looking at trusted proxies, meaning that in our Docker stack no proxy is trusted. This makes it impossible to purge the cache correctly when using Varnish.
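
For illustration, here is a minimal sketch of what the current (non-working) setup amounts to; the file layout and exact values are simplified, but the point is that SYMFONY_TRUSTED_PROXIES ends up holding a hostname rather than an IP:

```yaml
# Simplified illustration (not the literal contents of the repository's compose files):
# the trusted proxy is given as a service hostname, which Symfony never resolves
# to an IP, so the comparison against the request's REMOTE_ADDR never matches.
services:
  app:
    environment:
      - SYMFONY_TRUSTED_PROXIES=varnish   # a hostname, not an IP -> no proxy is trusted
```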

Things I've considered:
Currently varnish has a dependency on app (because it resolves the app hostname in the --acl-add command), so adding a dependency on varnish to app in order to resolve the trusted proxy would create a circular dependency.

This leaves us with the option of waiting until the varnish container is available, as in ezsystems/docker-php#34, or we could try to set the IP without resolving the hostname:

  1. by setting the trusted proxies to "trust all" - I didn't want to do that, as it's against best practices when the app container is available to the public
  2. by setting a fixed network range or a fixed IP as the trusted proxy - this solution makes the backend network (the network used by both varnish and app) use a specified subnet, with the Varnish container having a fixed IP in that subnet, so that we don't have to resolve it and can hardcode it (see the sketch at the end of this comment)

This fixed subnet is specified only in varnish.yml, so there's no change for people not using it.
Also, in our Docker stack there is only one Varnish instance (and I don't think we plan to introduce multiple reverse proxies in front of the application in the Docker stack, Fastly-style), so IMHO having a fixed IP is acceptable here.
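
Roughly, option 2 looks like the sketch below in compose terms; the subnet and address here are placeholders, not necessarily the exact values from the diff:

```yaml
# Sketch with placeholder addresses: pin the backend network to a known subnet
# and give the varnish service a static IP inside it, so the app can trust that
# IP directly instead of resolving the "varnish" hostname.
services:
  varnish:
    networks:
      backend:
        ipv4_address: 172.28.0.10        # fixed IP, safe to hardcode as trusted proxy
  app:
    environment:
      - SYMFONY_TRUSTED_PROXIES=172.28.0.10

networks:
  backend:
    ipam:
      config:
        - subnet: 172.28.0.0/24
```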

@andrerom (Contributor) left a comment


Nice, no need for hacks or anything 😉 👍 💯

And yes, no need for several Varnish servers here. In general we anyway recommend using one active Varnish server and one passive server that can take over, to get the best possible cache hit ratio. Varnish does have commercial solutions for sharing cache between instances, which might be recommended on a case-by-case basis.

SIDE: We currently don't specify how people should set this up so that the passive instance has some cache warmed up and doesn't take over with a cold cache, but we could consider covering that too.

In the Docker stack that would probably mean configuration to make sure one instance is always running?

doc/docker/varnish.yml (review comment: outdated, resolved)
@vidarl (Member) left a comment


So, with this change, you cannot have two separate installs on the same machine, right?

@ViniTou (Contributor) commented Aug 26, 2020

@vidarl Wasn't that already the case, since we have exposed ports for app (8080)? That is, if you mean running two stacks at the same time; I don't know how this change affects just having those containers built with different names.

@mnocon (Member, Author) commented Aug 26, 2020

@vidarl: As @ViniTou wrote, wasn't that always the case? The exposed ports (8080, 8081) of the app and varnish services already clash with each other.

I'm not sure how you're dealing with this issue now - the only solution that comes to my mind is remapping the ports manually, so this would be another value to remap when setting up the n-th Docker stack.

@vidarl (Member) commented Aug 26, 2020

The port clashing does not prevent you from creating containers for multiple installs at the same time (only one install may run at a given time, though).

I haven't tested, but I suspect that with this change you cannot even create the containers for a second install, as the backend network's subnet will clash with an existing one.

@mnocon (Member, Author) commented Sep 10, 2020

@vidarl you're right: without this change it's enough to docker-compose stop the running stack in order to start a new one without issues, whereas with this change docker-compose down must be run to delete the existing networks. That's not good for anyone using the stack for development, as it would break their workflow, and we don't want that.

I've searched again for solutions and here is a short summary:

  1. Varnish container with a constant IP address: I've looked through the docker-compose documentation and I don't think these networks can be configured so that a static IP is used while multiple Docker stacks still don't clash with each other. So, IMHO, this solution is out of the picture
  2. Something similar to Resolving hostnames in SYMFONY_TRUSTED_PROXIES docker-php#34 - resolving hostnames in the app container on start - that PR was not initially accepted, which is why I went for other solutions
  3. Set the app container to trust all proxies - this would be insecure if someone used it in production (while also exposing the app container), but that's not something we support/recommend. The whole Docker stack comes with a warning that it's made for test automation, and the varnish compose file is marked as experimental - nobody should use it in production anyway.

I've changed this PR to solution 3 and will be happy to hear how you feel about it.
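
In compose terms, solution 3 boils down to something like the sketch below. The exact value is my assumption here, not necessarily what this PR uses; 0.0.0.0/0 is simply one way to express "trust every upstream address" to Symfony:

```yaml
# Hypothetical sketch of "trust all proxies" (value assumed, not taken from this PR):
# 0.0.0.0/0 matches any IPv4 address, so whatever sits in front of the app is trusted.
# Acceptable only because this stack targets test automation, never production.
services:
  app:
    environment:
      - SYMFONY_TRUSTED_PROXIES=0.0.0.0/0
```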

@vidarl (Member) commented Oct 12, 2020

A 4th solution would be for the php container to resolve the hostnames in SYMFONY_TRUSTED_PROXIES when it starts:
https://github.com/ezsystems/docker-php/pull/34/files

However, your approach is much simpler, so we can do that.
