
Please provide a way to limit parallelism in Compose v2 #8849

Closed
pagelypete opened this issue Oct 28, 2021 · 24 comments

Comments

@pagelypete

pagelypete commented Oct 28, 2021

Description

This issue is a report/feature request for Compose v2 but is essentially the same as this problem in v1 - #8226

Having no limit to parallelism makes compose effectively unusable when the compose file contains more than a few dozen services, as it consumes all resources on the host system when doing operations like bringing containers up.

Because of this we currently have no way to migrate to Compose v2, and indeed we are stuck on v1 with a third-party patch.

Please provide a way to limit parallelism.

Steps to reproduce the issue:

  1. Have a compose file with at least 50 service definitions, each doing something for a few seconds (a generator sketch follows these steps).
  2. Run docker compose up -d --remove-orphans (or equivalent)
  3. Watch the system slow to a crawl and resource usage spike hugely, depending on the resources the host has
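
For reference, a throwaway compose file like that can be generated with a short shell loop. This is only a sketch to make the report reproducible; the busybox image and the sleep command are placeholders, not part of any real setup:

  # generate docker-compose.yml with 50 trivial services
  {
    echo "services:"
    for i in $(seq 1 50); do
      printf '  svc%s:\n    image: busybox\n    command: sleep 5\n' "$i"
    done
  } > docker-compose.yml

  # then reproduce with
  docker compose up -d --remove-orphans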

Describe the results you received:

As above.

Describe the results you expected:

As above, since there is no way to limit parallelism.

Additional information you deem important (e.g. issue happens only occasionally):

N/A

Output of docker compose version:

Docker Compose version v2.0.0-rc.3

Output of docker info:

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.6.3-docker)
  compose: Docker Compose (Docker Inc., v2.0.0-rc.3)
  scan: Docker Scan (Docker Inc., v0.9.0)

Server:
 Containers: 2
  Running: 0
  Paused: 0
  Stopped: 2
 Images: 271
 Server Version: 20.10.10
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc io.containerd.runc.v2 io.containerd.runtime.v1.linux
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 5b46e404f6b9f661a205e28d59c982d3634148f8
 runc version: v1.0.2-0-g52b36a2
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 5.11.0-38-generic
 Operating System: Ubuntu 21.04
 OSType: linux
 Architecture: x86_64
 CPUs: 12
 Total Memory: 30.98GiB
 Name: pagely
 ID: B6KI:5FVQ:KIOY:RIHT:PP55:IKJ4:QWV4:X6LH:RYYD:NYLN:QFWL:K6K4
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

Additional environment details:

@hroemer

hroemer commented Oct 29, 2021

Confer buildkit #1032 → new setting MaxParallelism (only on daemon start). I guess unless buildkit #593 or buildkit #1131 are addressed, it seems to be a "won't fix".

@pagelypete
Author

Confer buildkit #1032 → new setting MaxParallelism (only on daemon start). I guess unless buildkit #593 or buildkit #1131 are addressed, it seems to be a "won't fix".

Well, for the building part of it, sure, but compose can independently provide a parallelism limit for container operations, which is mostly what this issue is about, as opposed to building.

@stale

stale bot commented Apr 27, 2022

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale label Apr 27, 2022
@pagelypete
Author

Issue is not stale.

@stale

stale bot commented Apr 28, 2022

This issue has been automatically marked as not stale anymore due to the recent activity.

@oocx

oocx commented Sep 22, 2022

We could use this as well. We have a docker compose file with 11 services. If all containers need to be built first, 11 builds will try to do a "nuget restore" at the same time. Together they take so much bandwidth that many of them run into timeouts and fail.

The only workaround I found, and it is really annoying, is to manually comment out some of the services in my docker compose file and build the rest first, so that not all of them are built at the same time.

@mmguero

mmguero commented Sep 23, 2022

An easier workaround that doesn't involve modifying your compose file is to use yq and xargs:

$ yq '.services | keys | .[]' docker-compose.yml | xargs -r -P 1 -I XXX docker compose build "XXX"

Replace the 1 with however many concurrent builds you want to allow.

@Holzhaus

This issue is probably also related: #9837

@pagelypete
Author

pagelypete commented Sep 23, 2022

This issue is/was supposed to be specifically about limiting parallelism for container operations, not build operations. I suspect these are two totally different implementations, since limiting it for container operations needs to take things like dependencies on other defined services into account.

As the yq example by @mmguero has shown, it's relatively straightforward to limit concurrent builds without a solution inside compose (although admittedly I agree a solution within compose would be nicer), but I think that should be a separate issue from this one, which cannot be easily solved without a fix in compose.

@maarten-kieft

Ran into this one again... you would expect at least a reaction from the Docker people...

@ndeloof
Contributor

ndeloof commented Dec 6, 2022

this is (partially) addressed by #10030

@ndeloof
Contributor

ndeloof commented Dec 7, 2022

I'll close this issue, considering it fixed by #10030. We are aware this doesn't include support for build, but a workaround is documented at https://github.com/docker/buildx/blob/master/docs/guides/resource-limiting.md#max-parallelism
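
For anyone hitting the build side of this, the workaround in that guide boils down to pointing buildx at a BuildKit config that caps parallel build steps - roughly as below. This is only a sketch; the builder name "limited" and the limit of 4 are arbitrary:

  # buildkitd.toml
  [worker.oci]
    max-parallelism = 4

  docker buildx create --use --name limited --config ./buildkitd.toml

Builds that run through that builder are then limited to four concurrent build steps.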

@ndeloof ndeloof closed this as completed Dec 7, 2022
@pagelypete
Author

pagelypete commented Dec 7, 2022

@ndeloof can you please reopen this issue? This issue is not about pulls, pushes, or builds; it's about the spawning of containers when using up. Unless I am mistaken, your patch only addresses pulls and pushes and does nothing to prevent 50 containers starting simultaneously if the compose file defines 50 containers, none of which depend on one another.

Perhaps the issue title is too generic, but the issue description is explicit about using up and seeing system resources exhausted.

@ndeloof
Contributor

ndeloof commented Dec 7, 2022

If your application involves 50 containers, what do you expect but 50 containers starting?

@pagelypete
Author

pagelypete commented Dec 7, 2022

If your application involves 50 containers, what do you expect but 50 containers starting?

@ndeloof See my report for v1 with the same issue - #8226

When asked to start a large number of containers (let's keep using 50 as the example), compose appears to send 50 parallel API requests to docker to start containers. With the patch in the linked issue (written by a third party and never merged) that actually limits the parallelism of these calls, compose works far, far better for large numbers of containers: it still starts all 50, but with only X API calls in flight at a time. This means the strain on system resources completely goes away.

Of course we want the 50 containers to start, just not with a parallelism of 50.

@ndeloof
Contributor

ndeloof commented Dec 7, 2022

Due to #8530 we will have to limit parallelism on ContainerCreate anyway

@pagelypete
Author

pagelypete commented Dec 7, 2022

I see - might it be nice to leave this issue open regardless? For example, if the patch for #8530 ends up checking for port ranges and only making calls sequential when they exist, it would still be good to have an option to manually turn on sequential container creation. Some people who don't use port ranges will probably still want maximally parallel container creation, so implementing it this way would appease both sides.

Also, looking at the merged changes for #10030, it might be good to adjust the help text to clarify exactly what it is limiting the parallelism of. Right now it reads like it affects all operations.

@ndeloof
Contributor

ndeloof commented Dec 7, 2022

it reads like it affects all operations

Yes, we need to apply the same limit in a few other places (everywhere we call non-trivial APIs)

@leograba

Being able to limit parallel startup sounds interesting.

For low-memory devices (talking about 512 MB or 1 GB of RAM), I've noticed that starting containers in parallel causes a spike in RAM usage that leads to some containers being created but not started, for instance.

Being able to limit parallel container startup might make it possible to overcome such a limitation.

@ndeloof
Contributor

ndeloof commented Dec 14, 2022

With the latest codebase, container start takes place sequentially, not concurrently anymore. Please note this is only about calling the engine's ContainerStart API; we can't control the RAM used by containers during their startup.

@varishtsg

varishtsg commented May 14, 2023

This is actually a problem on low-resource devices like the Raspberry Pi 4. I have an 8GB model and need to run 4 docker compose files on boot. When the docker service starts, all the services compete for resources; although I have enough RAM, the containers get starved for CPU, which results in a kernel panic and the Raspberry Pi crashing. If I could start the compose files in a staggered manner (see the sketch below), it would completely prevent this.
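
Something like this in the boot script is what I have in mind (a sketch only; the project directories and the 30-second delay are placeholders):

  # start each stack in turn instead of all at once
  for proj in /opt/stack-a /opt/stack-b /opt/stack-c /opt/stack-d; do
    docker compose --project-directory "$proj" up -d
    sleep 30   # let the previous stack settle before starting the next
  done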

@ndeloof
Contributor

ndeloof commented May 14, 2023

@varishtsg have you tried setting COMPOSE_PARALLEL_LIMIT in your startup script?
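
For example (assuming the variable is honored by the compose version in use, and picking an arbitrary limit of 2 concurrent engine API calls):

  COMPOSE_PARALLEL_LIMIT=2 docker compose up -d

The --parallel flag added by #10030 should behave the same way, e.g. docker compose --parallel 2 up -d.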

@INGGAV

INGGAV commented Aug 9, 2023

So, there's no way to limit the parallelism when using docker compose up?

@rafipiccolo

I saw a similar command above, but it didn't work.
This one starts services one by one.
It's just a shame that I need to write it myself instead of using a parameter on 'docker compose up':

yq '.services | keys | .[]' docker-compose.yml | xargs -L 1 docker compose up -d
