
Reserved memory not released #1217

Closed
kevinsimper opened this issue Sep 16, 2015 · 10 comments

Comments

@kevinsimper

Containers that have exited still take up reserved memory; is that correct?

Exited (137) 35 minutes ago

@abronan
Contributor

abronan commented Sep 16, 2015

Hi @kevinsimper, yes, if you want that chunk of memory to be released, you need to explicitly remove the container. The reason is that a container might have a restart policy on exit/failure, and you don't want that process to fail to restart because its allocated resource chunk was given to some other container in the meantime :)
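
For anyone hitting this, removing the exited container is what frees the reservation. A rough sketch of the cleanup (the container name is made up, and the bulk removal via a status filter is just one way to do it):

```
# Remove a single exited container to release its reserved memory
docker rm my-exited-container

# Or remove every container currently in the "exited" state
docker rm $(docker ps -q -f status=exited)
```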

@kevinsimper
Author

Okay, that makes sense, but I can't find that documented anywhere:
https://docs.docker.com/swarm/api/swarm-api/

I used memory=512M because I wanted the container to be limited, not the whole cluster. I don't want a single Docker container to take up all the memory, but I do want to be able to put 10 containers on a 2 GB machine. How can I do that?

From what I read here, memory is not added up across containers either, or am I wrong? https://docs.docker.com/reference/run/#memory-constraints
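
Something like the following is what I have in mind: a per-container hard limit, while still being able to schedule several such containers on one node (the manager endpoint and image name here are just placeholders):

```
# Run against the Swarm manager endpoint (placeholder address), capping the
# container at 512 MB, without the scheduler reserving that amount forever
docker -H tcp://swarm-manager:2375 run -d -m 512m my-web-image
```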

@abronan
Contributor

abronan commented Sep 16, 2015

@kevinsimper I admit that this should probably be configurable, but as of now resource accounting is conservative with regard to container state. You can put 10 containers on a 2 GB machine; just make sure that exited containers are not lurking around holding on to resources. Usually you want to clean up and remove containers in the exited state.

In the meantime I'm migrating resource accounting to be done by the Swarm manager directly in #1212, so there might be an opportunity to release a resource chunk on container exit (based on the events we receive) and also based on restart policies.

@kevinsimper
Author

Okay, that sounds good! However, I was not talking about containers in the exited state when I mentioned 10 containers. Most of the time containers don't use that much memory; you just want to control them so that they don't use too much!

For example, 10 Golang web servers might only take up 1 GB in total, so they could easily fit on a 2 GB server; I just don't want one Golang server to leak and take up all 2 GB.

@abronan
Contributor

abronan commented Sep 16, 2015

@kevinsimper You can put soft memory limits on a container; Docker allows that (so you set both a lower and an upper bound on the container's memory). This might help in this case. I haven't tried it on Swarm though, it might require a change on the client. Curious to see!
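
For reference, the combination being described would look roughly like this (assuming a Docker version that supports the `--memory-reservation` soft-limit flag; the image name is a placeholder):

```
# Soft limit (reservation) of 256 MB, hard limit of 512 MB:
# under memory pressure the kernel tries to keep the container near 256 MB,
# but it can never exceed 512 MB
docker run -d --memory-reservation 256m -m 512m my-web-image
```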

@kevinsimper
Author

I just tried it, and Docker applies the soft limit, but it does not limit the overall system. So Docker Swarm needs an option to allow that :)

@erdem-aslan

+1 for overall limitation config

@vieux
Contributor

vieux commented Jan 20, 2016

This was documented in #1520.

I'm not sure such a flag makes sense in a clustering environment.

@amitshukla

Closing: the question was answered and the docs were updated (#1520)

@kevinsimper
Author

@amitshukla This should be reopened because it was not solved in #1520.

#1520 only talks about how containers are placed, but says nothing about the memory (-m) argument and how memory occupation is calculated.

How is a server with 1 GB of RAM and 2 GB of swap used and accounted for?
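
To make the question concrete: with flags like the following (values are just for illustration, image name is a placeholder), how much of the node's 1 GB of RAM does the scheduler consider reserved, and does swap count at all?

```
# Hard memory limit of 512 MB, total memory + swap limit of 1 GB
docker run -d -m 512m --memory-swap 1g my-web-image
```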
