feat(proxy-wasm) invoke 'on_http_call_response' on dispatch failures #625

Merged: 2 commits, Nov 25, 2024

66 changes: 47 additions & 19 deletions docs/DIRECTIVES.md
@@ -10,6 +10,7 @@ By alphabetical order:
- [module](#module)
- [proxy_wasm](#proxy_wasm)
- [proxy_wasm_isolation](#proxy_wasm_isolation)
- [proxy_wasm_log_dispatch_errors](#proxy_wasm_log_dispatch_errors)
- [proxy_wasm_lua_resolver](#proxy_wasm_lua_resolver)
- [proxy_wasm_request_headers_in_access](#proxy_wasm_request_headers_in_access)
- [resolver](#resolver)
@@ -45,6 +46,8 @@ By context:
- [compiler](#compiler)
- [backtraces](#backtraces)
- [module](#module)
- [proxy_wasm_log_dispatch_errors](#proxy_wasm_log_dispatch_errors)
- [proxy_wasm_lua_resolver](#proxy_wasm_lua_resolver)
- [resolver](#resolver)
- [resolver_timeout](#resolver_timeout)
- [shm_kv](#shm_kv)
@@ -72,6 +75,7 @@ By context:
- `http{}`, `server{}`, `location{}`
- [proxy_wasm](#proxy_wasm)
- [proxy_wasm_isolation](#proxy_wasm_isolation)
- [proxy_wasm_log_dispatch_errors](#proxy_wasm_log_dispatch_errors)
- [proxy_wasm_lua_resolver](#proxy_wasm_lua_resolver)
- [proxy_wasm_request_headers_in_access](#proxy_wasm_request_headers_in_access)
- [resolver_add](#resolver_add)
@@ -260,7 +264,7 @@ Load a Wasm module from disk.
- `path` must point to a bytecode file whose format is `.wasm` (binary) or
`.wat` (text).
- `config` is an optional configuration string passed to `on_vm_start` when
`module` is a Proxy-Wasm filter.

If successfully loaded, the module can later be referred to by `name`.

@@ -286,11 +290,11 @@ proxy_wasm
**default** |
**example** | `proxy_wasm my_filter_module 'foo=bar';`

Add a Proxy-Wasm filter to the context's execution chain (see [Execution
Chain]).

- `module` must be a Wasm module name declared by a [module](#module) directive.
This module must be a valid Proxy-Wasm filter.
- `config` is an optional configuration string passed to the filter's
`on_configure` phase.

@@ -305,25 +309,25 @@ start.
> Notes

Each instance of the `proxy_wasm` directive in the configuration will be
represented by a Proxy-Wasm root filter context in a Wasm instance.

All root filter contexts of the same module share the same instance.

All root filter contexts will be initialized during nginx worker process
initialization, which will invoke the filters' `on_vm_start` and `on_configure`
phases. Each root context may optionally start a single background tick, as
specified by the [Proxy-Wasm SDK](#proxy-wasm).
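
For illustration, a minimal Rust sketch of a root filter context starting such a
background tick (the `MyRootContext` type and the 5-second period are
placeholders, not part of this module):

```rust
use std::time::Duration;
use proxy_wasm::traits::{Context, RootContext};

struct MyRootContext;

impl Context for MyRootContext {}

impl RootContext for MyRootContext {
    // invoked once per worker process, during nginx worker initialization
    fn on_configure(&mut self, _config_size: usize) -> bool {
        // optionally start this root context's single background tick
        self.set_tick_period(Duration::from_secs(5));
        true
    }

    // invoked on each tick, outside of any request
    fn on_tick(&mut self) {
        // periodic background work goes here
    }
}
```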

Note that when the master process is in use as a daemon (default Nginx
configuration), the `nginx` exit code may be `0` when its worker processes fail
initialization. Since Proxy-Wasm filters are started on a per-process basis,
filter initialization takes place (and may fail) during worker process
initialization, which will be reflected in the error logs.

On incoming HTTP requests traversing the context (see [Contexts]), the
[Execution Chain] will resume execution for the current Nginx phase, which will
cause each configured filter to resume its corresponding Proxy-Wasm phase. Each
request gets associated with a Proxy-Wasm HTTP filter context, and a Wasm
instance to execute into.

HTTP filter contexts can execute on instances with various lifecycles and
@@ -341,13 +345,13 @@ proxy_wasm_isolation
**default** | `none`
**example** | `proxy_wasm_isolation stream;`

Select the Wasm instance isolation mode for Proxy-Wasm filters.

- `isolation` must be one of `none`, `stream`, `filter`.

> Notes

Each Proxy-Wasm filter within the context's [Execution Chain] will be given an
instance to execute onto. The lifecycle and isolation of that instance depend on
the chosen `isolation` mode:

@@ -359,16 +363,40 @@ the chosen `isolation` mode:

[Back to TOC](#directives)

proxy_wasm_log_dispatch_errors
------------------------------

**usage** | `proxy_wasm_log_dispatch_errors <on\|off>;`
------------:|:----------------------------------------------------------------
**contexts** | `wasm{}`, `http{}`, `server{}`, `location{}`
**default** | `on`
**example** | `proxy_wasm_log_dispatch_errors off;`

Toggles TCP socket error logs for failed Proxy-Wasm dispatch calls.

When enabled, an `[error]` log will be produced on failure conditions such as
timeout, broken connection, resolver failure, etc.

When used in the `wasm{}` context, this directive has a global effect on all
`location{}` contexts (unless overridden) as well as root Proxy-Wasm dispatch
calls.
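
Note that this directive only controls the host's error logs: a failed dispatch
is still reported to the calling filter through its `on_http_call_response`
handler, where the `:dispatch_status` pseudo-header carries the error. A minimal
Rust sketch, with `MyHttpContext` as a placeholder filter type:

```rust
impl Context for MyHttpContext {
    fn on_http_call_response(&mut self, _token: u32, nheaders: usize, _nbody: usize, _ntrailers: usize) {
        // on dispatch failure, all response arguments are 0 and the
        // `:dispatch_status` pseudo-header describes the error (e.g. "timeout")
        if nheaders == 0 {
            if let Some(err) = self.get_http_call_response_header(":dispatch_status") {
                error!("dispatch failed: {}", err);
            }
        }
    }
}
```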

[Back to TOC](#directives)

proxy_wasm_lua_resolver
-----------------------

**usage** | `proxy_wasm_lua_resolver <on\|off>;`
------------:|:----------------------------------------------------------------
**contexts** | `wasm{}`, `http{}`, `server{}`, `location{}`
**default** | `off`
**example** | `proxy_wasm_lua_resolver on;`

Toggles the "Lua DNS resolver for Proxy-Wasm" feature within the context.

When used in the `wasm{}` context, this directive has a global effect on all
`location{}` contexts (unless overridden) as well as root Proxy-Wasm dispatch
calls.

**Note:** this directive requires Lua support and will only have an effect if
ngx_wasm_module was compiled alongside [OpenResty].
@@ -389,7 +417,7 @@ If not, a default client instance will be created pointing to `8.8.8.8` with a
timeout value of `30s`.

When in use, any [resolver] directive in the effective context will be ignored
for Proxy-Wasm HTTP dispatches.

[Back to TOC](#directives)

@@ -431,7 +459,7 @@ This directive's arguments are identical to Nginx's [resolver] directive.

Wasm sockets usually rely on the configured Nginx [resolver] to resolve
hostnames. However, some contexts do not support a [resolver] yet still provide
access to Wasm sockets (e.g. Proxy-Wasm's `on_vm_start` or `on_tick`). In such
contexts, the global `wasm{}` resolver will be used.

The global resolver is also used as a fallback if no resolver is configured in
@@ -525,7 +553,7 @@ start.
Shared memory zones are shared between all nginx worker processes, and serve as
a means of storage and exchange for worker processes of a server instance.

Shared key/value memory zones can be used via the [Proxy-Wasm
SDK](#proxy-wasm)'s `[get\|set]_shared_data` API.
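
For example, in Rust, a minimal sketch of these SDK calls from a filter handler
(`my_key` and `my_value` are placeholders; how keys map onto configured `shm_kv`
zones is specific to ngx_wasm_module):

```rust
// read the current value and its cas token, then attempt a compare-and-swap update
let (_value, cas) = self.get_shared_data("my_key");
match self.set_shared_data("my_key", Some(b"my_value"), cas) {
    Ok(()) => info!("value stored"),
    Err(status) => error!("set_shared_data failed: {}", status as u32),
}
```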

[Back to TOC](#directives)
@@ -557,7 +585,7 @@ start.
Shared memory zones are shared between all nginx worker processes, and serve as
a means of storage and exchange for worker processes of a server instance.

Shared queue memory zones can be used via the [Proxy-Wasm SDK](#proxy-wasm)'s
`[enqueue\|dequeue]_shared_queue` API.
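
For example, in Rust, a minimal sketch of these SDK calls (`queue_id` is assumed
to have been obtained beforehand, e.g. when the queue was registered or resolved
by name):

```rust
// producer side: push a message onto the shared queue
let _ = self.enqueue_shared_queue(queue_id, Some(b"event"));

// consumer side: pop the next message, if any
if let Ok(Some(data)) = self.dequeue_shared_queue(queue_id) {
    info!("dequeued {} bytes", data.len());
}
```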

**Note:** shared memory queues do not presently implement an automatic eviction
@@ -663,7 +691,7 @@ This directive is effective for all Wasm sockets in all contexts.

> Notes

When using the [Proxy-Wasm SDK](#proxy-wasm) `dispatch_http_call()` method, a
`timeout` argument can be specified which will override this setting.
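
For example, in Rust (the host and duration are placeholders), a per-call
timeout passed as the last argument takes precedence for that dispatch:

```rust
// this call is subject to a 10s timeout, regardless of the socket timeout directives
let _ = self.dispatch_http_call("service.com", vec![], None, vec![], Duration::from_secs(10));
```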

For configuring Wasm sockets in `http{}` contexts, see
Expand Down Expand Up @@ -708,7 +736,7 @@ Set a default timeout value for Wasm sockets read operations.

> Notes

When using the [Proxy-Wasm SDK](#proxy-wasm) `dispatch_http_call()` method, a
`timeout` argument can be specified which will override this setting.

For configuring Wasm sockets in `http{}` contexts, see
@@ -729,7 +757,7 @@ Set a default timeout value for Wasm sockets send operations.

> Notes

When using the [Proxy-Wasm SDK](#proxy-wasm) `dispatch_http_call()` method, a
`timeout` argument can be specified which will override this setting.

For configuring Wasm sockets in `http{}` contexts, see
130 changes: 125 additions & 5 deletions docs/PROXY_WASM.md
@@ -10,6 +10,7 @@
- [Filters Execution in Nginx](#filters-execution-in-nginx)
- [Host Properties]
- [Nginx Properties]
- [HTTP Dispatches]
- [Supported Specifications]
- [Tested SDKs](#tested-sdks)
- [Supported Entrypoints](#supported-entrypoints)
@@ -107,13 +108,13 @@ proxy_wasm::main! {{
    // the wasm instance initial entrypoint
    // ...
    proxy_wasm::set_log_level(LogLevel::Info);
    // create and set the Root context for this filter
    proxy_wasm::set_root_context(|_| -> Box<dyn RootContext> {
        Box::new(TestRoot {})
    });
}}

// implement Root entrypoints
impl Context for TestRoot {}
impl RootContext for TestRoot {
    fn on_configure(&mut self, config_size: usize) -> bool {
@@ -376,6 +377,122 @@ impl HttpContext for MyHttpContext {

[Back to TOC](#table-of-contents)

### HTTP Dispatches

Proxy-Wasm filters can issue requests to an external HTTP service. This feature
is called an "HTTP dispatch".

In ngx_wasm_module, filters can issue a dispatch call with
`dispatch_http_call()` during the following steps:

- `on_http_request_headers`
- `on_http_request_body`
- `on_tick`
- `on_http_call_response` (to issue subsequent calls)

For example, in Rust:

```rust
impl HttpContext for ExampleHttpContext {
    fn on_http_request_headers(&mut self, nheaders: usize, eof: bool) -> Action {
        match self.dispatch_http_call(
            "service.com",             // host
            vec![("X-Header", "Foo")], // headers
            Some(b"hello world"),      // body
            vec![],                    // trailers
            Duration::from_secs(3),    // timeout
        ) {
            Ok(_) => info!("call scheduled"),
            Err(status) => panic!("unexpected status \"{}\"", status as u32),
        }

        Action::Pause
    }
}
```

Several calls can be scheduled at the same time before returning
`Action::Pause`; they will be executed in parallel, their response handlers will
be invoked when each response is received, and the filter chain will resume once
all dispatch calls have finished executing.
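
For example, in Rust, a sketch of a filter tracking two parallel calls by the
token id returned from `dispatch_http_call()` (the `ParallelCallsContext` type,
its `pending` field and the hostnames are placeholders):

```rust
use std::collections::HashSet;
use std::time::Duration;

use log::error;
use proxy_wasm::traits::{Context, HttpContext};
use proxy_wasm::types::Action;

struct ParallelCallsContext {
    // token ids of in-flight dispatch calls (hypothetical bookkeeping field)
    pending: HashSet<u32>,
}

impl HttpContext for ParallelCallsContext {
    fn on_http_request_headers(&mut self, _nheaders: usize, _eof: bool) -> Action {
        for host in ["service-a.example", "service-b.example"] {
            match self.dispatch_http_call(host, vec![], None, vec![], Duration::from_secs(3)) {
                Ok(token_id) => {
                    // both calls are now in flight, in parallel
                    self.pending.insert(token_id);
                }
                Err(status) => error!("dispatch failed: {}", status as u32),
            }
        }

        // pause the request; the filter chain resumes once every scheduled
        // call has received a response (or failed)
        Action::Pause
    }
}

impl Context for ParallelCallsContext {
    fn on_http_call_response(&mut self, token_id: u32, _nheaders: usize, _nbody: usize, _ntrailers: usize) {
        // match the response back to the call that produced it
        self.pending.remove(&token_id);
    }
}
```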

**Note:** in ngx_wasm_module, the `host` argument of a dispatch call must be a
valid IP address or a hostname, which will be resolved using the [resolver] or
[proxy_wasm_lua_resolver] directives. The `host` argument may contain a port
component, whether it is an IP address or a hostname (e.g. `service.com:80`).
This is unlike the Envoy implementation, in which the `host` argument receives
a configured cluster name.
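
For instance, both forms below are accepted from any handler allowing
dispatches (the addresses and timeouts are placeholders):

```rust
// IP address with an explicit port component
let _ = self.dispatch_http_call("127.0.0.1:8080", vec![], None, vec![], Duration::from_secs(1));
// hostname with an explicit port component, resolved via the configured resolver
let _ = self.dispatch_http_call("service.com:80", vec![], None, vec![], Duration::from_secs(1));
```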

Once the call is scheduled, the `on_http_call_response` handler will be invoked
when a response is received or the connection has encountered an error. If an
error was encountered while receiving the response, `on_http_call_response` is
invoked with its response arguments set to `0`.

For example, in Rust:

```rust
impl Context for ExampleHttpContext {
    fn on_http_call_response(
        &mut self,
        token_id: u32,
        nheaders: usize,
        body_size: usize,
        ntrailers: usize,
    ) {
        let status = self.get_http_call_response_header(":status");
        let body_bytes = self.get_http_call_response_body(0, body_size);

        if status.is_none() {
            // Dispatch had an issue
            //
            // nheaders == 0
            // body_size == 0
            // ntrailers == 0

            // ngx_wasm_module extension: retrieve the error via the
            // :dispatch_status pseudo-header
            let dispatch_status = self.get_http_call_response_header(":dispatch_status");
            match dispatch_status.as_deref() {
                Some("timeout") => {},
                Some("broken connection") => {},
                Some("tls handshake failure") => {},
                Some("resolver failure") => {},
                Some("reader failure") => {},
                Some(s) => error!("dispatch failure, status: {}", s),
                None => {}
            }
        }

        // ...
    }
}
```

**Note:** if the dispatch call was invoked from the `on_tick` handler,
`on_http_call_response` must be implemented on the Root filter context instead
of the HTTP context:

```rust
proxy_wasm::main! {{
    proxy_wasm::set_root_context(|_| -> Box<dyn RootContext> {
        Box::new(ExampleRootContext {})
    });
}}

impl Context for ExampleRootContext {
    fn on_http_call_response(
        &mut self,
        token_id: u32,
        nheaders: usize,
        body_size: usize,
        ntrailers: usize,
    ) {
        // Root context handler
    }
}
```

[Back to TOC](#table-of-contents)

## Supported Specifications

This section describes the current state of support for the Proxy-Wasm
@@ -429,7 +546,7 @@ SDK ABI `0.2.1`) and their present status in ngx_wasm_module:
**Name** | **Supported** | **Comment**
----------------------------------:|:-------------------:|:--------------
*Root contexts* | |
`proxy_wasm::main!` | :heavy_check_mark: | Allocate the Root context.
`on_vm_start` | :heavy_check_mark: | VM configuration handler.
`on_configure` | :heavy_check_mark: | Filter configuration handler.
`on_tick` | :heavy_check_mark: | Background tick handler.
@@ -636,7 +753,7 @@ implementation state in ngx_wasm_module:
`source.port` | :heavy_check_mark: | :x: | Maps to [ngx.remote_port](https://nginx.org/en/docs/http/ngx_http_core_module.html#remote_port).
*Proxy-Wasm properties* | |
`plugin_name` | :heavy_check_mark: | :x: | Returns current filter name.
`plugin_root_id` | :heavy_check_mark: | :x: | Returns filter's Root context id.
`plugin_vm_id` | :x: | :x: | *NYI*.
`node` | :x: | :x: | Not supported.
`cluster_name` | :x: | :x: | Not supported.
@@ -743,7 +860,7 @@ payloads.

3. When making a dispatch call, a valid IP address or hostname must be given to
`dispatch_http_call`. This is in contrast to Envoy's implementation in which
a configured cluster name must be given. See [HTTP Dispatches].

4. The "queue" shared memory implementation does not implement an automatic
eviction mechanism when the allocated memory slab is full:
@@ -759,12 +876,15 @@ Proxy-Wasm SDK.
[Filter Chains]: #filter-chains
[Host Properties]: #host-properties
[Nginx Properties]: #nginx-properties
[HTTP Dispatches]: #http-dispatches
[Supported Specifications]: #supported-specifications
[Supported Properties]: #supported-properties
[Examples]: #examples
[Current Limitations]: #current-limitations

[wasm_response_body_buffers]: DIRECTIVES.md#wasm_response_body_buffers
[resolver]: DIRECTIVES.md#resolver
[proxy_wasm_lua_resolver]: DIRECTIVES.md#proxy_wasm_lua_resolver

[WebAssembly]: https://webassembly.org/
[Nginx Variables]: https://nginx.org/en/docs/varindex.html