
Reduce default max_connections number #2220

Open
laukhin opened this issue Jun 7, 2022 · 6 comments
Labels: breakingchange, bug

Comments

laukhin commented Jun 7, 2022

Version: 4.3.3

Platform: Python 3.10 (actually any python)

Description: By default, the connection pool's maximum size is 2**31. Such a large number seems unreasonable and unsafe to me, for several reasons:

  1. By default, a Redis instance can handle 10,000 connections at a time, which is far less than this default pool size.
  2. Such an enormous pool size can effectively DDoS the Redis instance if used without proper care. This actually happened in our production environment not long ago: a load problem caused our connection pool to grow without bound, and our Redis instances couldn't recover from the flood of new connections. A stricter pool-size policy would have prevented that.
  3. I did some research, and other Redis clients have much stricter pool policies. For example, the Java client defaults to 8 connections, and the unofficial Go client defaults to 10 connections per CPU.

My suggestion is to reduce the default max_connections number (in both the sync and async versions) to provide a safe configuration by default, and let the end user override it to fine-tune performance for their needs. The exact number is up for discussion; I'd suggest around 100 connections by default. I can take this issue if you decide to fix it :)
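To illustrate the suggested behavior, here is a minimal, self-contained sketch of a bounded pool (toy classes, not redis-py's actual implementation): with a small cap, exhaustion fails fast instead of letting the pool grow toward 2**31.

```python
class Connection:
    """Stand-in for a real Redis connection."""


class TooManyConnections(Exception):
    pass


class BoundedPool:
    """Toy pool that enforces a hard max_connections cap."""

    def __init__(self, max_connections=100):  # ~100, as suggested above
        self.max_connections = max_connections
        self._in_use = set()
        self._available = []

    def get_connection(self):
        if self._available:
            conn = self._available.pop()      # reuse a released connection
        elif len(self._in_use) < self.max_connections:
            conn = Connection()               # still under the cap: open a new one
        else:
            raise TooManyConnections("pool exhausted")
        self._in_use.add(conn)
        return conn

    def release(self, conn):
        self._in_use.discard(conn)
        self._available.append(conn)


pool = BoundedPool(max_connections=2)
a = pool.get_connection()
b = pool.get_connection()
try:
    pool.get_connection()
except TooManyConnections:
    print("capped at 2 connections")  # cap enforced instead of growing unbounded
```

Releasing a connection back to the pool makes it immediately reusable, so a well-behaved workload never hits the cap at all.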

Fogapod commented Dec 6, 2022

I started getting ConnectionError: Too many connections after setting max_connections=50 on 4.4.0. It looks like connections aren't being released properly, and the 2**31 default hides that fact?

relevant line:

raise ConnectionError("Too many connections")

Why would the pool ever raise "Too many connections"? It should just wait instead.

This issue is marked stale. It will be closed in 30 days if it is not updated.

Fogapod commented Feb 15, 2024

this is not solved..

@uglide uglide reopened this Mar 1, 2024
@anushkmittal

@Fogapod any workaround to this? do u just not set max_connections

Fogapod commented Mar 23, 2024

@Fogapod any workaround to this? do u just not set max_connections

I use blocking pool: #2517
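The blocking-pool workaround can be sketched with the standard library (a hypothetical helper built on queue.Queue, not redis-py's actual BlockingConnectionPool): instead of raising when the pool is exhausted, callers block until a connection is released, optionally giving up after a timeout.

```python
import queue


class Connection:
    """Stand-in for a real Redis connection."""


class BlockingPool:
    """Toy blocking pool: get_connection() waits instead of raising."""

    def __init__(self, max_connections=50, timeout=None):
        self.timeout = timeout
        self._pool = queue.Queue(maxsize=max_connections)
        for _ in range(max_connections):
            self._pool.put(Connection())

    def get_connection(self):
        # Blocks until a connection is available; raises queue.Empty
        # only after `timeout` seconds, rather than failing immediately.
        return self._pool.get(timeout=self.timeout)

    def release(self, conn):
        self._pool.put(conn)
```

For example, with max_connections=1 and timeout=0.05, exhausting the pool makes the next get_connection() wait up to 50 ms for a release before giving up, which is the "wait instead of raise" behavior requested earlier in the thread.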

@github-actions github-actions bot removed the Stale label Apr 16, 2024
@mohd-akram

Running into this as well. It seems all too easy to DoS a server with the current defaults. Looks like #3200 aims to solve this.


7 participants