cache-locks #1402
Conversation
lgtm
@@ -34,6 +34,7 @@ lua_shared_dict cache ${{MEM_CACHE_SIZE}};
lua_shared_dict reports_locks 100k;
lua_shared_dict cluster_locks 100k;
lua_shared_dict cluster_autojoin_locks 100k;
lua_shared_dict cache_locks 100k;
We do not need so many different *_locks shared dicts. A single one is able to hold multiple locks with different options (expiration, timeout, etc.). It will be worth cleaning this up some time soon.
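For reference, a rough sketch of what this suggests (not code from this PR; the dict name and keys are illustrative): a single shared dict can back several resty.lock instances, each created with its own options, as long as they use distinct keys.

```lua
-- Hypothetical sketch: one "locks" shared dict serving several lock types.
local resty_lock = require "resty.lock"

-- short-lived lock for cluster autojoin; do not wait if it is already held
local autojoin_lock = resty_lock:new("locks", {
  exptime = 3,   -- seconds before the lock auto-expires
  timeout = 0    -- fail immediately instead of waiting
})

-- longer lock protecting a database read on cache miss
local cache_lock = resty_lock:new("locks", {
  exptime = 10,
  timeout = 5    -- wait up to 5s for the current holder to finish
})

-- different keys keep the locks independent inside the same dict
autojoin_lock:lock("autojoin")
cache_lock:lock("cache:apis:example")
```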
@thibaultcha addressed last comments.
exptime = ASYNC_AUTOJOIN_INTERVAL - 0.001
})
if not lock then
ngx_log(ngx.ERR, "failed to init lock dictionary", err)
The err variable will tell what the error is. Should be "could not create lock: "
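For illustration, the change being asked for would look roughly like this (a sketch, assuming the variable names visible in the diff above):

```lua
-- suggested error handling: include the err value in the log message
if not lock then
  ngx_log(ngx.ERR, "could not create lock: ", err)
  return
end
```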
All comments addressed.
When requesting a specific entity from the database, only one request to the database per node is allowed. This dramatically improves reliability during heavy load.
For example: let's say that 1000 req/s are being processed by a Kong node, and suddenly an in-memory entity is invalidated and Kong needs to read it again from the database. Before this PR, Kong would open 1000 req/s to the database until that entity is stored in memory again. With this PR, only one read request will be opened to the database (per node), and the other requests (on the same node) will simply wait until the entity is in memory again.
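A minimal sketch of this pattern (not the exact code from this PR; the cache dict name and load_from_db are illustrative) using lua-resty-lock around a cache miss:

```lua
-- Hypothetical cache-miss handler: only one worker per node hits the database.
local resty_lock = require "resty.lock"
local cache = ngx.shared.cache

local function get_entity(key, load_from_db)
  -- fast path: entity already in the shared memory cache
  local value = cache:get(key)
  if value then return value end

  -- cache miss: serialize database reads behind a lock
  local lock, err = resty_lock:new("cache_locks", { exptime = 10, timeout = 5 })
  if not lock then
    return nil, "could not create lock: " .. err
  end

  local elapsed, lock_err = lock:lock(key)
  if not elapsed then
    return nil, "could not acquire lock: " .. lock_err
  end

  -- another request may have populated the cache while we waited
  value = cache:get(key)
  if not value then
    value = load_from_db(key)   -- single database read per node
    cache:set(key, value)
  end

  lock:unlock()
  return value
end
```

Requests that arrive while the lock is held block on lock:lock() and, once the holder releases it, find the entity already in the shared dict instead of hitting the database themselves.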
This also helps the performance of new nodes that are being added to the cluster (whose load goes from 0 to 100 real quick).
Closes #264.