
Cache Locks #264

Closed
subnetmarco opened this issue May 26, 2015 · 3 comments
Labels: task/feature (Requests for new features in Kong)

@subnetmarco (Member)

Currently Kong uses an in-memory cache to avoid making a request to the database every time an entity is requested by the system.

This can cause a performance loss in a specific edge case: when many incoming requests hit Kong at once, simultaneous cache misses trigger multiple identical requests to the database, effectively DDoSing the database and slowing down Kong itself (also known as the dogpile effect).

One common example is when many connections hit an API that has never been loaded into memory before.

The cache client should leverage locks to make sure only one request is sent to the database at a time, possibly using this module: https://github.com/openresty/lua-resty-lock
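For illustration, the canonical lua-resty-lock pattern looks roughly like the sketch below. The shared dict names (`cache`, `locks`) and the `query_database` helper are hypothetical placeholders, not Kong code; this is just the check / lock / re-check / query shape the module's documentation describes.

```lua
local resty_lock = require "resty.lock"

local function fetch_entity(key)
  -- 1. check the shared-memory cache first
  local cache = ngx.shared.cache
  local value = cache:get(key)
  if value then
    return value
  end

  -- 2. cache miss: acquire a per-key lock so only one worker
  --    hits the database for this key
  local lock, err = resty_lock:new("locks")
  if not lock then
    return nil, "failed to create lock: " .. err
  end
  local elapsed, lerr = lock:lock(key)
  if not elapsed then
    return nil, "failed to acquire lock: " .. lerr
  end

  -- 3. another request may have populated the cache while we waited
  value = cache:get(key)
  if value then
    lock:unlock()
    return value
  end

  -- 4. still missing: query the database (hypothetical helper),
  --    store the result, then release the lock
  value = query_database(key)
  cache:set(key, value)
  lock:unlock()
  return value
end
```

With this shape, N concurrent misses on the same key result in one database query; the other N-1 requests block on the lock and then read the freshly populated cache in step 3.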

This might also be related to #15.

@subnetmarco subnetmarco changed the title Cache miss concurrency optimization Cache Locks May 26, 2015
@thibaultcha (Member)

Cold cache: put something in memory that says you're currently querying it. That would need to be a LOT of requests in the very first second, say, after Kong is started... Also, "DDoSing Cassandra": not sure about that. Maybe if the cluster is not production-ready, which should not be the case in that scenario.
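The "cold cache marker" idea described above could be sketched with the atomic `add()` of an `ngx.shared.DICT`: the first worker to miss plants a marker and queries, while concurrent misses see the marker and wait. Everything here is a hypothetical illustration — the dict name, the `querying:` key prefix, and the `query_database` helper are assumptions, not Kong internals.

```lua
local cache = ngx.shared.cache  -- assumes a shared dict declared in nginx.conf

local function get_entity(key)
  local value = cache:get(key)
  if value then
    return value
  end

  -- add() is atomic: it fails if the key already exists, so only
  -- one worker wins the right to query the database
  local ok = cache:add("querying:" .. key, true, 5)  -- 5s safety TTL
  if ok then
    value = query_database(key)  -- hypothetical DB helper
    cache:set(key, value)
    cache:delete("querying:" .. key)
    return value
  end

  -- someone else is already querying; wait briefly, then re-check
  ngx.sleep(0.05)
  return get_entity(key)
end
```

Compared with lua-resty-lock, this ad-hoc marker needs its own timeout handling (the TTL above) to avoid a stuck marker if the querying worker crashes, which is part of why a dedicated lock module is the safer choice.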

@subnetmarco (Member, Author)

Locks are a better fit because they are specifically built to fix this kind of problem; a lock is basically a cold-cache object.

A few years ago we had a similar problem at Mashape with OpenResty and Redis: for maintenance reasons we had to clear the proxy's cache, and 300 concurrent requests were issued for each object to Redis (which is more performant than Cassandra). The failure rate of the requests increased exponentially.

@thibaultcha thibaultcha added the [about] DAO and task/feature (Requests for new features in Kong) labels Oct 15, 2015
@sonicaghi sonicaghi added this to the 0.8.2 milestone May 21, 2016
@subnetmarco subnetmarco modified the milestones: 0.9, 0.8.2 May 24, 2016
subnetmarco added a commit that referenced this issue Jul 8, 2016
subnetmarco added a commit that referenced this issue Jul 8, 2016
@Tieske (Member)

Tieske commented Jul 20, 2016

Closing this as #1367 has been merged.

@Tieske Tieske closed this as completed Jul 20, 2016
subnetmarco added a commit that referenced this issue Jul 23, 2016

4 participants