[3scale_batcher] Introduce APICAST_POLICY_BATCHER_SHARED_MEMORY_SIZE
In some cases, the batcher policy runs out of shared storage space (batched_reports),
causing metrics to go unreported. This commit adds a new environment variable to
configure the size of the batcher policy's shared memory storage.
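
For context, the batcher policy stores pending reports in the `batched_reports` `lua_shared_dict` via `safe_add`, which, unlike `add`, will not evict other entries to make room. A minimal sketch of the failure mode (assuming an OpenResty request context; the key below is illustrative):

```lua
-- safe_add never evicts existing entries, so once the batched_reports
-- dictionary is full it returns nil plus a "no memory" error.
local dict = ngx.shared.batched_reports
local ok, err = dict:safe_add("service_id:_42,user_key:value,metric:hits", 1)
if not ok and err == "no memory" then
  -- the condition behind the "batching storage ran out of memory" log line
  ngx.log(ngx.ERR, "batching storage ran out of memory")
end
```
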
tkan145 committed Mar 12, 2024
1 parent 8ef1568 commit 1ba4383
Showing 4 changed files with 118 additions and 1 deletion.
2 changes: 2 additions & 0 deletions CHANGELOG.md
@@ -55,6 +55,8 @@ and this project adheres to [Semantic Versioning](http://semver.org/).

- Added `APICAST_CLIENT_REQUEST_HEADER_BUFFERS` variable to allow configuring the NGINX `client_request_header_buffers` directive: [PR #1446](https://github.com/3scale/APIcast/pull/1446), [THREESCALE-10164](https://issues.redhat.com/browse/THREESCALE-10164)

- Added `APICAST_POLICY_BATCHER_SHARED_MEMORY_SIZE` variable to allow configuring the batcher policy shared memory size: [PR #1452](https://github.com/3scale/APIcast/pull/1452), [THREESCALE-9537](https://issues.redhat.com/browse/THREESCALE-9537)

## [3.14.0] 2023-07-25

### Fixed
7 changes: 7 additions & 0 deletions doc/parameters.md
@@ -517,6 +517,13 @@ Sets the maximum number and size of buffers used for reading large client request header.
The format for this value is defined by the [`large_client_header_buffers` NGINX
directive](https://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers)

### `APICAST_POLICY_BATCHER_SHARED_MEMORY_SIZE`

**Default:** 20m
**Value:** string

Sets the maximum size of the `batched_reports` shared memory dictionary used by the batcher policy. The value accepts NGINX size units, for example `512k` or `20m`.
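
As a sketch of the effect, starting the gateway with this variable set to, say, `40m` makes the `gateway/http.d/shdict.conf` template render the `batched_reports` dictionary at that size (`40m` here is purely illustrative):

```nginx
lua_shared_dict batched_reports 40m;
```

When the variable is unset, the template's `default` filter keeps the previous hard-coded `20m`.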

### `OPENTELEMETRY`

This environment variable enables NGINX instrumentation using OpenTelemetry tracing library.
2 changes: 1 addition & 1 deletion gateway/http.d/shdict.conf
@@ -8,5 +8,5 @@ lua_shared_dict limiter 1m;
# This is not ideal, but they'll need to be here until we allow policies to
# modify this template.
lua_shared_dict cached_auths 20m;
lua_shared_dict batched_reports 20m;
lua_shared_dict batched_reports {{env.APICAST_POLICY_BATCHER_SHARED_MEMORY_SIZE | default: "20m"}};
lua_shared_dict batched_reports_locks 1m;
108 changes: 108 additions & 0 deletions t/apicast-policy-3scale-batcher.t
@@ -596,3 +596,111 @@ auth cache on every request (see rewrite_by_lua_block).
[ 200, 200, 200 ]
--- no_error_log
[error]
=== TEST 8: output error when shared storage is full
To test this, we first fill up the shared storage with data. Because `safe_add` is
used, the next call returns a `no memory` error, which shows up in the error log.
--- configuration
{
  "services": [
    {
      "id": 42,
      "backend_version": 1,
      "backend_authentication_type": "service_token",
      "backend_authentication_value": "token-value",
      "proxy": {
        "api_backend": "http://test:$TEST_NGINX_SERVER_PORT/",
        "proxy_rules": [
          { "pattern": "/", "http_method": "GET", "metric_system_name": "hits", "delta": 2 }
        ],
        "policy_chain": [
          { "name": "apicast.policy.3scale_batcher", "configuration": { "batch_report_seconds": 1 } },
          { "name": "apicast.policy.apicast" }
        ]
      }
    }
  ]
}
--- backend
location /transactions/authorize.xml {
  content_by_lua_block {
    -- Fill the batched_reports shared dict until safe_add fails with
    -- "no memory", leaving no room for the batcher policy's own report.
    local dict = ngx.shared.batched_reports
    local i = 0
    while i < 200000 do
      i = i + 1
      local res, err = dict:safe_add("service_id:_" .. i .. ",user_key:value,metric:hits", i)
      if not res then
        break
      end
    end
    ngx.exit(ngx.OK)
  }
}
--- upstream
location / {
  echo 'yay, api backend';
}
--- request eval
["GET /?user_key=value"]
--- response_body
yay, api backend
--- error_code: 200
--- error_log
batching storage ran out of memory
=== TEST 9: set shared storage capacity with APICAST_POLICY_BATCHER_SHARED_MEMORY_SIZE
--- env eval
(
'APICAST_POLICY_BATCHER_SHARED_MEMORY_SIZE' => '40m',
)
--- configuration env
{
  "services": [
    {
      "id": 42,
      "backend_version": 1,
      "backend_authentication_type": "service_token",
      "backend_authentication_value": "token-value",
      "proxy": {
        "api_backend": "http://test:$TEST_NGINX_SERVER_PORT/",
        "proxy_rules": [
          { "pattern": "/", "http_method": "GET", "metric_system_name": "hits", "delta": 2 }
        ],
        "policy_chain": [
          { "name": "apicast.policy.3scale_batcher", "configuration": { "batch_report_seconds": 1 } },
          { "name": "apicast.policy.apicast" }
        ]
      }
    }
  ]
}
--- backend
location /transactions/authorize.xml {
  content_by_lua_block {
    -- Same fill loop as TEST 8, but with the dictionary enlarged to 40m
    -- it no longer fills up, so the batcher policy can still store its
    -- report and no error is logged.
    local dict = ngx.shared.batched_reports
    local i = 0
    while i < 200000 do
      i = i + 1
      local res, err = dict:safe_add("service_id:_" .. i .. ",user_key:value,metric:hits", i)
      if not res then
        break
      end
    end
    ngx.exit(ngx.OK)
  }
}
--- upstream
location / {
  echo 'yay, api backend';
}
--- request eval
["GET /?user_key=value"]
--- response_body
yay, api backend
--- error_code: 200
--- no_error_log
[error]
