Cache Locks #264
Currently Kong uses an in-memory cache to avoid making a request to the database every time an entity is requested by the system.

This can lead to a performance loss in a very specific edge case: when lots of incoming requests hit Kong at once and trigger simultaneous cache misses, each miss issues its own request to the database, effectively DDoSing the database and slowing down Kong itself (also known as the dogpile effect).

One of the many examples where this happens is when lots of connections are sent to an API that has never been loaded into memory before.

The cache client should leverage locks to make sure only one request is sent to the database per missing entity, possibly using this module: https://github.com/openresty/lua-resty-lock (a sketch of the pattern follows below).
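For illustration only, here is a minimal sketch of the lock-protected lookup the issue proposes, built on lua-resty-lock's documented `new`/`lock`/`unlock` API. The shared dictionary names (`kong_cache`, `cache_locks`) and the `query_db` helper are hypothetical stand-ins, not Kong's actual implementation:

```lua
-- nginx.conf is assumed to declare the two shared dicts:
--   lua_shared_dict kong_cache  10m;
--   lua_shared_dict cache_locks 1m;
local resty_lock = require "resty.lock"

-- query_db is a hypothetical stand-in for the real database call
local function fetch(key, query_db)
  local cache = ngx.shared.kong_cache

  -- fast path: value already cached
  local value = cache:get(key)
  if value then
    return value
  end

  -- cache miss: serialize access so only one worker hits the database
  local lock, err = resty_lock:new("cache_locks")
  if not lock then
    return nil, "failed to create lock: " .. err
  end

  local elapsed, lerr = lock:lock(key)
  if not elapsed then
    return nil, "failed to acquire lock: " .. lerr
  end

  -- another request may have populated the cache while we waited
  value = cache:get(key)
  if not value then
    value = query_db(key)          -- single database round-trip
    if value then
      cache:set(key, value, 60)    -- cache for 60 seconds
    end
  end

  local ok, uerr = lock:unlock()
  if not ok then
    return nil, "failed to unlock: " .. uerr
  end

  return value
end
```

Because `lock:lock(key)` blocks waiting workers until the holder calls `unlock`, every concurrent miss after the first finds the value already cached on its second lookup, so a burst of N simultaneous requests costs one database query per key instead of N.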
This might also be related to #15.

Comments

Cold cache. Put something in memory that says you're currently querying it. That would need to be a LOT of requests in the very first, say, second after Kong is started... Also "DDoSing Cassandra"... Not sure about that. Maybe if not using a production-ready cluster, which should not be the case in that scenario.

Locks are a better fit because they are specifically built to fix this kind of problem; a lock is basically a cold-cache object. Some years ago we had a similar problem at Mashape, using OpenResty and Redis, when for maintenance reasons we had to clear the cache from the proxy and 300 concurrent requests were issued for each object to Redis (which is more performant than Cassandra). The failure rate of the requests increased exponentially.

Closing this as #1367 has been merged.