[Question] [HTTP Server] 🚀🚀 Handling multiple requests at the same time (IDFGH-9204) #10594
Comments
I've seen that example before, but I don't understand how it would work in a real situation.
Perhaps the example could show how to execute a long (30+ second) calculation asynchronously on a separate thread and then return the result on the http thread, without blocking other request handlers. Or, more challenging, how to download a large file over HTTP for a few minutes while still serving other requests.
+1 for this feature request
You can save the file descriptor and server handle to a structure and pass it to another thread for later use. In that case you cannot use the req-based functions; you have to assemble the HTTP packet yourself. I can show you some C++ code that wraps these functions when I get home.
```cpp
#define HTTP_UTIL_CHECK(x) \
    do { \
        int ret = x; \
        if (ret == HTTPD_SOCK_ERR_INVALID || ret == HTTPD_SOCK_ERR_TIMEOUT || ret == HTTPD_SOCK_ERR_FAIL) { \
            return ret; \
        } \
    } while (0)

int httpd_socket_send_common_header(httpd_handle_t hd, int fd, const std::string& content_type, size_t content_length, int status = 200) {
    char buf[80];
    // Note: the reason phrase is hardcoded to "OK"; adjust it if you send non-2xx statuses.
    size_t sz = snprintf(
        buf,
        sizeof(buf),
        "HTTP/1.1 %d OK\r\n"
        "Content-Type: %s\r\n"
        "Content-Length: %zu\r\n",
        status,
        content_type.c_str(),
        content_length);
    return httpd_socket_send(hd, fd, buf, sz, 0);
}

int httpd_socket_header_finishes(httpd_handle_t hd, int fd) {
    // The blank line that terminates the header block.
    const char* sep = "\r\n";
    return httpd_socket_send(hd, fd, sep, strlen(sep), 0);
}

int httpd_send_json(httpd_handle_t hd, int fd, const nlohmann::json& json, int status = 200) {
    auto str = json.dump();
    HTTP_UTIL_CHECK(httpd_socket_send_common_header(hd, fd, HTTPD_TYPE_JSON, str.size(), status));
    HTTP_UTIL_CHECK(httpd_socket_header_finishes(hd, fd));
    HTTP_UTIL_CHECK(httpd_socket_send(hd, fd, str.c_str(), str.size(), 0));
    return 0;
}

void httpd_send_json(httpd_req_t* req, const nlohmann::json& json, const char* status = HTTPD_200) {
    auto str = json.dump();
    CHK_E(httpd_resp_set_type(req, HTTPD_TYPE_JSON), "failed to set content type");
    CHK_E(httpd_resp_set_status(req, status), "failed to set status");
    CHK_E(httpd_resp_sendstr(req, str.c_str()), "failed to send json");
}
```

These are part of my C++ HTTP utility library (it depends on nlohmann/json). If you are using C++, you can delegate the response to a thread like this (note: you must capture by value):

```cpp
auto hd = req->handle;
auto fd = httpd_req_to_sockfd(req);
std::thread th([hd, fd]{
    nlohmann::json json = /* ... build the response ... */;
    httpd_send_json(hd, fd, json);
});
th.detach();  // the thread must be detached (or joined) before th goes out of scope
```

If you are using C, you can follow how the example works. As for how this works: IIRC, the HTTP socket is kept open after the handler returns. I haven't tested whether the socket should be closed explicitly after sending the full response, but from packet capture this code works fine.
Wow, thanks for the very impressive reply! Very helpful. Must have taken some time to figure this out! Very clever. Possible improvements: given that the
@chipweinberger You are welcome! One catch is that For your proposed improvement, I do agree that there is no reason the asynchronous response should use a different set of APIs. There should be some API to restore the
I'm beginning to understand the http server code:
- Where the req is fully deleted (not normally called; only called on error)
- Where the req is reset after a req handler completes, in order to be used again (this is the important one!)
- Where the req handler is invoked by the server
- The main task loop of the http server (it just repeatedly calls this)
- Where we create a new socket
- Where the socket is closed
A roadbump: it looks like the http server is designed around a single req object existing at a time. The req objects are not actively freed and created. This is the corresponding internal data for each httpd_handle_t:
Okay, after looking at more code, it is my current understanding that we could just

Note 1: As you've found, each

Note 2: HTTP/1.1 persistent connections (https://en.wikipedia.org/wiki/HTTP_persistent_connection) do allow multiple HTTP requests per socket, but the client is not allowed to send another request until it gets a response back first, so your proposed method of sending on the fd manually should be fine.

lru_purge_enable: That said, if we are using
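For reference, `lru_purge_enable` is just a field on `httpd_config_t`. A minimal sketch of enabling it when starting the server (handler registration omitted; the function name `start_server` is made up for this example):

```c
#include "esp_http_server.h"

/* Sketch: start the server with LRU purge enabled, so the least-recently-used
 * idle socket is closed when all session slots are occupied. */
static httpd_handle_t start_server(void)
{
    httpd_handle_t server = NULL;
    httpd_config_t config = HTTPD_DEFAULT_CONFIG();
    config.lru_purge_enable = true;  /* evict the least-recently-used socket */
    config.max_open_sockets = 7;     /* the default; a new connection beyond this triggers the purge */
    if (httpd_start(&server, &config) == ESP_OK) {
        return server;
    }
    return NULL;
}
```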
To alleviate questions about
I'm not sure exactly why you need to do this, but it seems like a simple way to get "thread safety" for doing various http-related things. For example, you could do work on a separate (higher-priority?) thread, and then come back to the httpd thread to finish sending a reply and be as "thread safe" as possible. Again, it seems not needed today. It does not allow simultaneous requests in and of itself, as we already know.
I also want to reiterate from the linked forum thread that the request body will be purged upon return from the handler. So while the above may work for requests with no body or a small body, a firmware upload, for example, where there is a lot of data to receive and a lot of processing time required to do so (i.e. writing to flash), requires blocking the httpd task at least until the entire request has been received.
Open question: how does the http server handle closing of sockets? Will multiple async requests properly close sockets? Edit: yes, I think so. I don't see any handling of

And I've yet to find the place where the http server closes existing sockets. My current understanding: the HTTP server will keep the socket open until the client closes its TCP connection, in which case LWIP handles the close for us. But I imagine the majority of sockets are closed during LRU purge. Edit: in HTTP/1.1, browsers close sockets very rarely. Browsers typically keep sockets open even when the browser window is closed. This means the most common way for a socket to close will be the LRU purge on the server side. I'd like to see Espressif be much more aggressive about closing server-side sockets. An ESP32 server should close sockets regularly in order to keep a few resources free, e.g. after idle timeouts, or after hearing

Anyway, long story short, 4 conclusions:
From what I've heard, the esp-idf http server is a bit primitive. The mongoose http server is said to be more full-fledged. Mongoose might not be cheap as a commercial choice, though. If you are working on a hobbyist project, maybe check out the mongoose server and see if it provides the necessary features?
I've opened a PR to add lru purge support. I also added example code for doing async requests. |
This feature has been merged. |
This commit adds support for handling multiple requests simultaneously by introducing two new functions: `httpd_req_async_handler_begin()` and `httpd_req_async_handler_complete()`. These functions allow creating an asynchronous copy of a request that can be used on a separate thread, and marking the asynchronous request as completed, respectively. Additionally, a new flag `for_async_req` has been added to the `httpd_sess_t` struct to indicate that a socket is being used for an asynchronous request and should not be purged from the LRU cache. An example has been added to demonstrate the usage of these new functions. Closes espressif/esp-idf#10594 Signed-off-by: Harshit Malpani <[email protected]>
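Roughly, usage of the merged API looks like the following sketch, modeled on the pattern the commit message describes. The handler and task names are invented for illustration, and the worker-task plumbing is simplified compared to the real example in esp-idf:

```c
#include "esp_http_server.h"
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"

/* Worker task: owns the async copy of the request and must complete it. */
static void slow_worker(void *arg)
{
    httpd_req_t *copy = (httpd_req_t *)arg;
    vTaskDelay(pdMS_TO_TICKS(30000));           /* the long 30+ second computation */
    httpd_resp_sendstr(copy, "done after 30 s");
    httpd_req_async_handler_complete(copy);     /* frees the copy, releases the socket */
    vTaskDelete(NULL);
}

static esp_err_t slow_get_handler(httpd_req_t *req)
{
    httpd_req_t *copy = NULL;
    /* Duplicate the request so it survives this handler returning; the socket
     * is flagged as in use by an async request and exempted from LRU purge. */
    if (httpd_req_async_handler_begin(req, &copy) != ESP_OK) {
        return ESP_FAIL;
    }
    if (xTaskCreate(slow_worker, "slow_worker", 4096, copy, 5, NULL) != pdPASS) {
        httpd_req_async_handler_complete(copy); /* clean up on failure */
        return ESP_FAIL;
    }
    return ESP_OK;  /* handler returns immediately; other requests keep flowing */
}
```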
HTTP Server
Is there a way to handle multiple requests at the same time with the http server? Is it even possible?
In ESP-IDF today, only a single request handler can run at a time. And even with a separate thread, httpd_req_t is freed when the synchronous handler completes, so prolonged communication is impossible. How reasonable is it for me to hack ESP-IDF to support this?
Previous discussion: https://esp32.com/viewtopic.php?t=28000