Add 'allow' mode in the caching policy #558
Diff of the caching policy module (Lua):
@@ -9,6 +9,12 @@
-- invalidate the cache. This allows us to authorize and deny calls
-- according to the result of the last request made even when backend is
-- down.
-- - Allow: caches authorized and denied calls. When backend is unavailable,
--   it will cache an authorization. In practice, this means that when
--   backend is down _any_ request will be authorized unless last call to
--   backend for that request returned 'deny' (status code = 4xx).
--   Make sure to understand the implications of that before using this mode.
--   It makes sense only in very specific use cases.
-- - None: disables caching.

local policy = require('apicast.policy')
@@ -38,13 +44,39 @@ local function resilient_handler(cache, cached_key, response, ttl)
  end
end

local function handle_500_allow_mode(cache, cached_key, ttl)
  local current_value = cache:get(cached_key)
  local cached_4xx = current_value and current_value >= 400 and current_value < 500

  if not cached_4xx then
    ngx.log(ngx.WARN, 'Backend seems to be unavailable. "Allow" mode is ',
            'enabled in the cache policy, so next request will be ',
            'authorized')
    cache:set(cached_key, 200, ttl)
> What about the race condition between read and write?

> Good point. Unfortunately, I don't see any 'compare-and-set' operation in the API of ngx.shared.dict: https://github.com/openresty/lua-nginx-module#ngxshareddict
> I see that there's this library: https://github.com/openresty/lua-resty-lock but that would mean introducing a dependency just for this policy.

> @davidor lua-resty-lock is part of the openresty bundle, so it is part of stdlib. I think we could take some time and think about how to do this without the lock anyway. Possibly using the

> I think that
> Let me try this.

> Solved in the new commit @mikz

> 👍
  end
end

local function allow_handler(cache, cached_key, response, ttl)
  local status = response.status

  if status and status < 500 then
    ngx.log(ngx.INFO, 'apicast cache write key: ', cached_key,
            ' status: ', status, ', ttl: ', ttl)

    cache:set(cached_key, status, ttl or 0)
  else
    handle_500_allow_mode(cache, cached_key, ttl or 0)
  end
end

local function disabled_cache_handler()
  ngx.log(ngx.DEBUG, 'Caching is disabled. Skipping cache handler.')
end

local handlers = {
  resilient = resilient_handler,
  strict = strict_handler,
  allow = allow_handler,
  none = disabled_cache_handler
}
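The review thread above points out a race between the cache:get and the cache:set in handle_500_allow_mode: another worker could store a 4xx for the same key in between, and the 200 written here would silently overwrite it. One way to serialize that read-modify-write is lua-resty-lock, which, as noted in the thread, ships with the OpenResty bundle. This is only a sketch, not necessarily the fix that landed in the later commit; the 'cache_locks' shared dict name is an assumption and would need a matching lua_shared_dict declaration:

```lua
local resty_lock = require('resty.lock')

-- Sketch: same logic as handle_500_allow_mode above, but the read and the
-- write happen under a per-key lock so concurrent workers cannot interleave.
-- 'cache_locks' is an assumed lua_shared_dict reserved for the locks.
local function handle_500_allow_mode(cache, cached_key, ttl)
  local lock, err = resty_lock:new('cache_locks')
  if not lock then
    ngx.log(ngx.ERR, 'failed to create lock: ', err)
    return
  end

  local elapsed, lock_err = lock:lock(cached_key)
  if not elapsed then
    ngx.log(ngx.ERR, 'failed to acquire lock: ', lock_err)
    return
  end

  local current_value = cache:get(cached_key)
  local cached_4xx = current_value and current_value >= 400 and current_value < 500

  if not cached_4xx then
    ngx.log(ngx.WARN, 'Backend seems to be unavailable. "Allow" mode is ',
            'enabled in the cache policy, so next request will be ',
            'authorized')
    cache:set(cached_key, 200, ttl)
  end

  local ok, unlock_err = lock:unlock()
  if not ok then
    ngx.log(ngx.ERR, 'failed to unlock: ', unlock_err)
  end
end
```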
@@ -66,7 +98,7 @@ end

--- Initialize a Caching policy.
-- @tparam[opt] table config
-- @field caching_type Caching type (strict, resilient)
-- @field caching_type Caching type (strict, resilient, allow, none)
function _M.new(config)
  local self = new()
  self.cache_handler = handler(config or {})
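The handler(config or {}) call above presumably selects an entry from the handlers table introduced earlier in this diff, keyed by caching_type. The real handler() implementation is not part of these hunks, so the following dispatch is only an illustration; in particular, the fallback used when caching_type is missing or unrecognized is an assumption:

```lua
-- Illustration only: table-based dispatch on config.caching_type.
-- The actual handler() in the policy is not shown in this diff, and the
-- fallback chosen here ('none') is an assumption.
local function handler(config)
  local selected = handlers[config.caching_type]

  if not selected then
    ngx.log(ngx.WARN, 'Unknown caching type: ', config.caching_type,
            '. Disabling caching (assumed fallback).')
    selected = handlers.none
  end

  return selected
end
```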
Diff of the policy configuration schema (JSON):
@@ -5,7 +5,7 @@
  "properties": {
    "exit": {
      "type": "caching_type",
      "enum": ["resilient", "strict", "none"]
      "enum": ["resilient", "strict", "allow", "none"]

> No need to change anything, but I wonder if there is a way to describe each individual enum value so we can show it nicely in the form.

    }
  }
}
> Does it warrant own function now?
> I don't have a strong preference. I'm inclined to say yes because of the long comment.