Diffstat (limited to 'web/server')
-rw-r--r--  web/server/README.md                  104
-rw-r--r--  web/server/static/README.md             4
-rw-r--r--  web/server/static/static-threaded.c   132
-rw-r--r--  web/server/web_client.c               815
-rw-r--r--  web/server/web_client.h               172
-rw-r--r--  web/server/web_client_cache.c         309
-rw-r--r--  web/server/web_client_cache.h          20
-rw-r--r--  web/server/web_server.c                25
-rw-r--r--  web/server/web_server.h                 1
9 files changed, 892 insertions, 690 deletions
diff --git a/web/server/README.md b/web/server/README.md
index 407df6c03..37577b6dd 100644
--- a/web/server/README.md
+++ b/web/server/README.md
@@ -1,20 +1,76 @@
<!--
title: "Web server"
description: "The Netdata Agent's local static-threaded web server serves dashboards and real-time visualizations with security and DDoS protection."
-custom_edit_url: "https://github.com/netdata/netdata/edit/master/web/server/README.md"
+type: reference
+custom_edit_url: https://github.com/netdata/netdata/edit/master/web/server/README.md
sidebar_label: "Web server"
learn_status: "Published"
-learn_topic_type: "References"
-learn_rel_path: "References/Configuration"
+learn_rel_path: "Configuration"
-->
# Web server
-The Netdata web server runs as `static-threaded`, i.e. with a fixed, configurable number of threads.
-It uses non-blocking I/O and respects the `keep-alive` HTTP header to serve multiple HTTP requests via the same connection.
+The Netdata web server is `static-threaded`, with a fixed, configurable number of threads.
+
+All the threads are concurrently listening for web requests on the same sockets, and the kernel distributes the incoming
+requests to them. Each thread uses non-blocking I/O so it can serve any number of web requests in parallel.
+
+This web server respects the `keep-alive` HTTP header to serve multiple HTTP requests via the same connection.
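+The threading model above can be sketched in plain POSIX C. This is an illustrative simplification, not Netdata's actual code — `make_worker_listener` is a hypothetical helper — but it shows the two ingredients: every worker thread gets its own non-blocking listener on the same port via `SO_REUSEPORT`, and the kernel distributes incoming connections among them.
+
+```c
+#include <assert.h>
+#include <fcntl.h>
+#include <netinet/in.h>
+#include <stdint.h>
+#include <stdio.h>
+#include <string.h>
+#include <sys/socket.h>
+#include <unistd.h>
+
+/* Hypothetical helper (not Netdata code): each worker thread opens its
+ * own non-blocking listener on the same port using SO_REUSEPORT, so the
+ * kernel load-balances incoming connections among the workers. */
+static int make_worker_listener(uint16_t port) {
+    int fd = socket(AF_INET, SOCK_STREAM, 0);
+    if (fd < 0) return -1;
+
+    int one = 1;
+    setsockopt(fd, SOL_SOCKET, SO_REUSEPORT, &one, sizeof(one));
+
+    struct sockaddr_in addr;
+    memset(&addr, 0, sizeof(addr));
+    addr.sin_family = AF_INET;
+    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
+    addr.sin_port = htons(port);
+
+    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
+        listen(fd, 4096) < 0) {  /* cf. the [web].listen backlog setting */
+        close(fd);
+        return -1;
+    }
+
+    /* non-blocking I/O: one thread can multiplex many connections */
+    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) | O_NONBLOCK);
+    return fd;
+}
+
+int main(void) {
+    /* first "worker": bind an ephemeral port, then discover which one */
+    int a = make_worker_listener(0);
+    assert(a >= 0);
+
+    struct sockaddr_in bound;
+    socklen_t len = sizeof(bound);
+    assert(getsockname(a, (struct sockaddr *)&bound, &len) == 0);
+
+    /* second "worker" listening on the very same port */
+    int b = make_worker_listener(ntohs(bound.sin_port));
+    assert(b >= 0);
+
+    close(a);
+    close(b);
+    printf("ok\n");
+    return 0;
+}
+```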
## Configuration
+From within your Netdata config directory (typically `/etc/netdata`), [use `edit-config`](https://github.com/netdata/netdata/blob/master/docs/configure/nodes.md) to
+open `netdata.conf`.
+
+```
+sudo ./edit-config netdata.conf
+```
+
+Scroll down to the `[web]` section to find the following settings.
+
+## Settings
+
+| Setting | Default | Description |
+|:-------------------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `ssl key` | `/etc/netdata/ssl/key.pem` | Declare the location of an SSL key to [enable HTTPS](#enable-httpstls-support). |
+| `ssl certificate` | `/etc/netdata/ssl/cert.pem` | Declare the location of an SSL certificate to [enable HTTPS](#enable-httpstls-support). |
+| `tls version` | `1.3` | Choose which TLS version to use. While all versions are allowed (`1` or `1.0`, `1.1`, `1.2` and `1.3`), we recommend `1.3` for the most secure encryption. If left blank, Netdata uses the highest available protocol version on your system. |
+| `tls ciphers` | `none` | Choose which TLS cipher to use. Options include `TLS_AES_256_GCM_SHA384`, `TLS_CHACHA20_POLY1305_SHA256`, and `TLS_AES_128_GCM_SHA256`. If left blank, Netdata uses the default cipher list for that protocol provided by your TLS implementation. |
+| `ses max window` | `15` | See [single exponential smoothing](https://github.com/netdata/netdata/blob/master/web/api/queries/ses/README.md). |
+| `des max window` | `15` | See [double exponential smoothing](https://github.com/netdata/netdata/blob/master/web/api/queries/des/README.md). |
+| `mode` | `static-threaded` | Turns the web server on (`static-threaded`) or off (`none`). See the [example](#disable-the-web-server) to turn off the web server and disable the dashboard. |
+| `listen backlog` | `4096` | The port backlog. Check `man 2 listen`. |
+| `default port` | `19999` | The listen port for the static web server. |
+| `web files owner` | `netdata` | The user that owns the web static files. Netdata will refuse to serve a file that is not owned by this user, even if it has read access to that file. If the user given is not found, Netdata will only serve files owned by the user given in `run as user`. |
+| `web files group` | `netdata` | If this is set, Netdata will check if the file is owned by this group and refuse to serve the file if it's not. |
+| `disconnect idle clients after seconds` | `60` | The time in seconds to disconnect web clients after being totally idle. |
+| `timeout for first request` | `60` | How long to wait for a client to send a request before closing the socket. Prevents slow request attacks. |
+| `accept a streaming request every seconds` | `0` | Can be used to set a limit on how often a parent node will accept streaming requests from child nodes in a [streaming and replication setup](https://github.com/netdata/netdata/blob/master/streaming/README.md). |
+| `respect do not track policy` | `no` | If set to `yes`, Netdata will respect the user's browser preferences for [Do Not Track](https://www.eff.org/issues/do-not-track) (DNT) and storing cookies. If DNT is _enabled_ in the browser, and this option is set to `yes`, users will not be able to sign in to Netdata Cloud via their local Agent dashboard, and their node will not connect to any [registry](https://github.com/netdata/netdata/blob/master/registry/README.md). For certain browsers, users must disable DNT and change this option to `yes` for full functionality. |
+| `x-frame-options response header` | ` ` | Avoid [clickjacking attacks](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options) by ensuring that the content is not embedded into other sites. |
+| `allow connections from` | `localhost *` | Declare which IP addresses or fully-qualified domain names (FQDNs) are allowed to connect to the web server, including the [dashboard](https://github.com/netdata/netdata/blob/master/web/gui/README.md) or [HTTP API](https://github.com/netdata/netdata/blob/master/web/api/README.md). This is a global setting with higher priority than any of the ones below. |
+| `allow connections by dns` | `heuristic` | See the [access list examples](#access-lists) for details on using `allow` settings. |
+| `allow dashboard from` | `localhost *` | |
+| `allow dashboard by dns` | `heuristic` | |
+| `allow badges from` | `*` | |
+| `allow badges by dns` | `heuristic` | |
+| `allow streaming from` | `*` | |
+| `allow streaming by dns` | `heuristic` | |
+| `allow netdata.conf` | `localhost fd* 10.* 192.168.* 172.16.* 172.17.* 172.18.* 172.19.* 172.20.* 172.21.* 172.22.* 172.23.* 172.24.* 172.25.* 172.26.* 172.27.* 172.28.* 172.29.* 172.30.* 172.31.* UNKNOWN` | |
+| `allow netdata.conf by dns` | `no` | |
+| `allow management from` | `localhost` | |
+| `allow management by dns` | `heuristic` | |
+| `enable gzip compression` | `yes` | When set to `yes`, Netdata web responses will be GZIP compressed, if the web client accepts such responses. |
+| `gzip compression strategy` | `default` | Valid settings are `default`, `filtered`, `huffman only`, `rle` and `fixed`. |
+| `gzip compression level` | `3` | Valid settings are 1 (fastest) to 9 (best ratio). |
+| `web server threads` | ` ` | How many processor threads the web server is allowed to use. The default is system-specific: the minimum of `6` and the number of CPU cores. |
+| `web server max sockets` | ` ` | The maximum number of sockets the web server can use. The default is system-specific, automatically adjusted to 50% of the max number of open files Netdata is allowed to use (via `/etc/security/limits.conf` or systemd), to allow enough file descriptors to be available for data collection. |
+| `custom dashboard_info.js` | ` ` | Specifies the location of a custom `dashboard.js` file. See [customizing the standard dashboard](https://github.com/netdata/netdata/blob/master/docs/dashboard/customize.md#customize-the-standard-dashboard) for details. |
+
+## Examples
+
+### Disable the web server
+
Disable the web server by editing `netdata.conf` and setting:
```
@@ -22,7 +78,9 @@ Disable the web server by editing `netdata.conf` and setting:
mode = none
```
-With the web server enabled, control the number of threads and sockets with the following settings:
+### Change the number of threads
+
+Control the number of threads and sockets with the following settings:
```
[web]
@@ -30,10 +88,6 @@ With the web server enabled, control the number of threads and sockets with the
web server max sockets = 512
```
-The default number of processor threads is `min(cpu cores, 6)`.
-
-The `web server max sockets` setting is automatically adjusted to 50% of the max number of open files Netdata is allowed to use (via `/etc/security/limits.conf` or systemd), to allow enough file descriptors to be available for data collection.
-
### Binding Netdata to multiple ports
Netdata can bind to multiple IPs and ports, offering access to different services on each. Up to 100 sockets can be used (increase it at compile time with `CFLAGS="-DMAX_LISTEN_FDS=200" ./netdata-installer.sh ...`).
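As a sketch of such a multi-socket setup (the addresses and unix socket path below are placeholders; adapt them to your environment), the `bind to` option in the `[web]` section accepts a space-separated list of listeners:

```conf
[web]
    # three listeners: IPv4 loopback, IPv6 loopback, and a unix domain socket
    bind to = 127.0.0.1:19999 [::1]:19999 unix:/run/netdata/netdata.sock
```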
@@ -68,7 +122,7 @@ The API requests are serviced as follows:
- `badges` gives access only to the badges API calls.
- `management` gives access only to the management API calls.
-### Enabling TLS support
+### Enable HTTPS/TLS support
Since v1.16.0, Netdata supports encrypted HTTP connections to the web server, plus encryption of streaming data to a
parent from its child nodes, via the TLS protocol.
@@ -106,7 +160,7 @@ openssl req -newkey rsa:2048 -nodes -sha512 -x509 -days 365 -keyout key.pem -out
### Select TLS version
-Beginning with version 1.21, specify the TLS version and the ciphers that you want to use:
+Beginning with version `v1.21.0`, specify the TLS version and the ciphers that you want to use:
```conf
[web]
@@ -116,8 +170,6 @@ Beginning with version 1.21, specify the TLS version and the ciphers that you wa
If you do not specify these options, Netdata will use the highest available protocol version on your system and the default cipher list for that protocol provided by your TLS implementation.
-While Netdata accepts all the TLS version as arguments (`1` or `1.0`, `1.1`, `1.2` and `1.3`), we recommend you use `1.3` for the most secure encryption.
-
#### TLS/SSL enforcement
When the certificates are defined and unless any other options are provided, a Netdata server will:
@@ -182,7 +234,7 @@ Netdata supports access lists in `netdata.conf`:
- `allow connections from` matches anyone that connects on the Netdata port(s).
So, if someone is not allowed, it will be connected and disconnected immediately, without reading even
- a single byte from its connection. This is a global settings with higher priority to any of the ones below.
+ a single byte from its connection. This is a global setting with higher priority than any of the ones below.
- `allow dashboard from` receives the request and examines if it is a static dashboard file or an API call the
dashboards do.
@@ -218,30 +270,12 @@ The three possible values for each of these options are `yes`, `no` and `heurist
the check when the pattern only contains IPv4/IPv6 addresses or `localhost`, and enables it when wildcards are
present that may match DNS FQDNs.
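Putting the access lists together, a typical locked-down configuration might look like this (the networks shown are illustrative values, not defaults):

```conf
[web]
    # only local networks may connect at all
    allow connections from = localhost 10.* 192.168.*
    allow dashboard from = localhost 192.168.*
    allow management from = localhost
    # these patterns contain no wildcard FQDNs, so `heuristic` skips DNS lookups
    allow connections by dns = heuristic
```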
-### Other netdata.conf [web] section options
-
-|setting|default|info|
-|:-----:|:-----:|:---|
-|ses max window|`15`|See [single exponential smoothing](https://github.com/netdata/netdata/blob/master/web/api/queries/des/README.md)|
-|des max window|`15`|See [double exponential smoothing](https://github.com/netdata/netdata/blob/master/web/api/queries/des/README.md)|
-|listen backlog|`4096`|The port backlog. Check `man 2 listen`.|
-|disconnect idle clients after seconds|`60`|The time in seconds to disconnect web clients after being totally idle.|
-|timeout for first request|`60`|How long to wait for a client to send a request before closing the socket. Prevents slow request attacks.|
-|accept a streaming request every seconds|`0`|Can be used to set a limit on how often a parent node will accept streaming requests from child nodes in a [streaming and replication setup](https://github.com/netdata/netdata/blob/master/streaming/README.md)|
-|respect do not track policy|`no`|If set to `yes`, Netdata will respect the user's browser preferences for [Do Not Track](https://www.eff.org/issues/do-not-track) (DNT) and storing cookies. If DNT is _enabled_ in the browser, and this option is set to `yes`, users will not be able to sign in to Netdata Cloud via their local Agent dashboard, and their node will not connect to any [registry](https://github.com/netdata/netdata/blob/master/registry/README.md). For certain browsers, users must disable DNT and change this option to `yes` for full functionality.|
-|x-frame-options response header||[Avoid clickjacking attacks, by ensuring that the content is not embedded into other sites](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options).|
-|enable gzip compression|`yes`|When set to `yes`, Netdata web responses will be GZIP compressed, if the web client accepts such responses.|
-|gzip compression strategy|`default`|Valid strategies are `default`, `filtered`, `huffman only`, `rle` and `fixed`|
-|gzip compression level|`3`|Valid levels are 1 (fastest) to 9 (best ratio)|
-
## DDoS protection
-If you publish your Netdata to the internet, you may want to apply some protection against DDoS:
+If you publish your Netdata web server to the internet, you may want to apply some protection against DDoS:
1. Use the `static-threaded` web server (it is the default)
2. Use reasonable `[web].web server max sockets` (the default is system-specific)
3. Don't use all your CPU cores for Netdata (lower `[web].web server threads`)
4. Run the `netdata` process with a low process scheduling priority (the default is the lowest)
-5. If possible, proxy Netdata via a full featured web server (nginx, apache, etc)
-
-
+5. If possible, proxy Netdata via a full featured web server (Nginx, Apache, etc)
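+As an illustration of point 5, a minimal Nginx reverse-proxy stanza might look like the following sketch (the server name is a placeholder, and TLS certificate directives are omitted for brevity):
+
+```conf
+server {
+    listen 443 ssl;
+    server_name netdata.example.com;   # placeholder
+
+    location / {
+        proxy_pass http://127.0.0.1:19999;
+        proxy_set_header Host $host;
+        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+        proxy_http_version 1.1;        # allows keep-alive to the backend
+    }
+}
+```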
diff --git a/web/server/static/README.md b/web/server/static/README.md
index 6a83b70db..c4e5c4c18 100644
--- a/web/server/static/README.md
+++ b/web/server/static/README.md
@@ -2,6 +2,10 @@
title: "`static-threaded` web server"
description: "The Netdata Agent's static-threaded web server spawns a fixed number of threads that listen to web requests and uses non-blocking I/O."
custom_edit_url: https://github.com/netdata/netdata/edit/master/web/server/static/README.md
+sidebar_label: "`static-threaded` web server"
+learn_status: "Published"
+learn_topic_type: "Tasks"
+learn_rel_path: "Developers/Web"
-->
# `static-threaded` web server
diff --git a/web/server/static/static-threaded.c b/web/server/static/static-threaded.c
index aca7d7ec0..52bb56cd6 100644
--- a/web/server/static/static-threaded.c
+++ b/web/server/static/static-threaded.c
@@ -28,7 +28,7 @@ long web_client_streaming_rate_t = 0L;
static struct web_client *web_client_create_on_fd(POLLINFO *pi) {
struct web_client *w;
- w = web_client_get_from_cache_or_allocate();
+ w = web_client_get_from_cache();
w->ifd = w->ofd = pi->fd;
strncpyz(w->client_ip, pi->client_ip, sizeof(w->client_ip) - 1);
@@ -39,7 +39,19 @@ static struct web_client *web_client_create_on_fd(POLLINFO *pi) {
if(unlikely(!*w->client_port)) strcpy(w->client_port, "-");
w->port_acl = pi->port_acl;
- web_client_initialize_connection(w);
+ int flag = 1;
+ if(unlikely(web_client_check_tcp(w) && setsockopt(w->ifd, IPPROTO_TCP, TCP_NODELAY, (char *) &flag, sizeof(int)) != 0))
+ debug(D_WEB_CLIENT, "%llu: failed to enable TCP_NODELAY on socket fd %d.", w->id, w->ifd);
+
+ flag = 1;
+ if(unlikely(setsockopt(w->ifd, SOL_SOCKET, SO_KEEPALIVE, (char *) &flag, sizeof(int)) != 0))
+ debug(D_WEB_CLIENT, "%llu: failed to enable SO_KEEPALIVE on socket fd %d.", w->id, w->ifd);
+
+ web_client_update_acl_matches(w);
+ web_client_enable_wait_receive(w);
+
+ web_server_log_connection(w, "CONNECTED");
+
w->pollinfo_slot = pi->slot;
return(w);
}
@@ -107,7 +119,10 @@ static void web_server_file_del_callback(POLLINFO *pi) {
if(unlikely(!w->pollinfo_slot)) {
debug(D_WEB_CLIENT, "%llu: CROSS WEB CLIENT CLEANUP (iFD %d, oFD %d)", w->id, pi->fd, w->ofd);
- web_client_release(w);
+ web_server_log_connection(w, "DISCONNECTED");
+ web_client_request_done(w);
+ web_client_release_to_cache(w);
+ global_statistics_web_client_disconnected();
}
worker_is_idle();
@@ -278,7 +293,10 @@ static void web_server_del_callback(POLLINFO *pi) {
pi->flags |= POLLINFO_FLAG_DONT_CLOSE;
debug(D_WEB_CLIENT, "%llu: CLOSING CLIENT FD %d", w->id, pi->fd);
- web_client_release(w);
+ web_server_log_connection(w, "DISCONNECTED");
+ web_client_request_done(w);
+ web_client_release_to_cache(w);
+ global_statistics_web_client_disconnected();
}
worker_is_idle();
@@ -293,61 +311,70 @@ static int web_server_rcv_callback(POLLINFO *pi, short int *events) {
struct web_client *w = (struct web_client *)pi->data;
int fd = pi->fd;
- if(unlikely(web_client_receive(w) < 0)) {
- ret = -1;
- goto cleanup;
- }
+ ssize_t bytes;
+ bytes = web_client_receive(w);
- debug(D_WEB_CLIENT, "%llu: processing received data on fd %d.", w->id, fd);
- worker_is_idle();
- worker_is_busy(WORKER_JOB_PROCESS);
- web_client_process_request(w);
+ if (likely(bytes > 0)) {
+ debug(D_WEB_CLIENT, "%llu: processing received data on fd %d.", w->id, fd);
+ worker_is_idle();
+ worker_is_busy(WORKER_JOB_PROCESS);
+ web_client_process_request(w);
- if (unlikely(w->mode == WEB_CLIENT_MODE_STREAM)) {
- web_client_send(w);
- }
+ if (unlikely(w->mode == WEB_CLIENT_MODE_STREAM)) {
+ web_client_send(w);
+ }
- else if(unlikely(w->mode == WEB_CLIENT_MODE_FILECOPY)) {
- if(w->pollinfo_filecopy_slot == 0) {
- debug(D_WEB_CLIENT, "%llu: FILECOPY DETECTED ON FD %d", w->id, pi->fd);
-
- if (unlikely(w->ifd != -1 && w->ifd != w->ofd && w->ifd != fd)) {
- // add a new socket to poll_events, with the same
- debug(D_WEB_CLIENT, "%llu: CREATING FILECOPY SLOT ON FD %d", w->id, pi->fd);
-
- POLLINFO *fpi = poll_add_fd(
- pi->p
- , w->ifd
- , pi->port_acl
- , 0
- , POLLINFO_FLAG_CLIENT_SOCKET
- , "FILENAME"
- , ""
- , ""
- , web_server_file_add_callback
- , web_server_file_del_callback
- , web_server_file_read_callback
- , web_server_file_write_callback
- , (void *) w
- );
-
- if(fpi)
- w->pollinfo_filecopy_slot = fpi->slot;
- else {
- error("Failed to add filecopy fd. Closing client.");
- ret = -1;
- goto cleanup;
+ else if(unlikely(w->mode == WEB_CLIENT_MODE_FILECOPY)) {
+ if(w->pollinfo_filecopy_slot == 0) {
+ debug(D_WEB_CLIENT, "%llu: FILECOPY DETECTED ON FD %d", w->id, pi->fd);
+
+ if (unlikely(w->ifd != -1 && w->ifd != w->ofd && w->ifd != fd)) {
+ // add a new socket to poll_events, with the same
+ debug(D_WEB_CLIENT, "%llu: CREATING FILECOPY SLOT ON FD %d", w->id, pi->fd);
+
+ POLLINFO *fpi = poll_add_fd(
+ pi->p
+ , w->ifd
+ , pi->port_acl
+ , 0
+ , POLLINFO_FLAG_CLIENT_SOCKET
+ , "FILENAME"
+ , ""
+ , ""
+ , web_server_file_add_callback
+ , web_server_file_del_callback
+ , web_server_file_read_callback
+ , web_server_file_write_callback
+ , (void *) w
+ );
+
+ if(fpi)
+ w->pollinfo_filecopy_slot = fpi->slot;
+ else {
+ error("Failed to add filecopy fd. Closing client.");
+ ret = -1;
+ goto cleanup;
+ }
}
}
}
- }
- else {
- if(unlikely(w->ifd == fd && web_client_has_wait_receive(w)))
+ else {
+ if(unlikely(w->ifd == fd && web_client_has_wait_receive(w)))
+ *events |= POLLIN;
+ }
+
+ if(unlikely(w->ofd == fd && web_client_has_wait_send(w)))
+ *events |= POLLOUT;
+ } else if(unlikely(bytes < 0)) {
+ ret = -1;
+ goto cleanup;
+ } else if (unlikely(bytes == 0)) {
+ if(unlikely(w->ifd == fd && web_client_has_ssl_wait_receive(w)))
*events |= POLLIN;
- }
- if(unlikely(w->ofd == fd && web_client_has_wait_send(w)))
- *events |= POLLOUT;
+ if(unlikely(w->ofd == fd && web_client_has_ssl_wait_send(w)))
+ *events |= POLLOUT;
+ }
ret = web_server_check_client_status(w);
@@ -393,9 +420,6 @@ cleanup:
static void socket_listen_main_static_threaded_worker_cleanup(void *ptr) {
worker_private = (struct web_server_static_threaded_worker *)ptr;
- info("freeing local web clients cache...");
- web_client_cache_destroy();
-
info("stopped after %zu connects, %zu disconnects (max concurrent %zu), %zu receptions and %zu sends",
worker_private->connected,
worker_private->disconnected,
diff --git a/web/server/web_client.c b/web/server/web_client.c
index c14b86f3e..8bc72e71f 100644
--- a/web/server/web_client.c
+++ b/web/server/web_client.c
@@ -13,45 +13,53 @@ int web_enable_gzip = 1, web_gzip_level = 3, web_gzip_strategy = Z_DEFAULT_STRAT
#endif /* NETDATA_WITH_ZLIB */
inline int web_client_permission_denied(struct web_client *w) {
- w->response.data->contenttype = CT_TEXT_PLAIN;
+ w->response.data->content_type = CT_TEXT_PLAIN;
buffer_flush(w->response.data);
buffer_strcat(w->response.data, "You are not allowed to access this resource.");
w->response.code = HTTP_RESP_FORBIDDEN;
return HTTP_RESP_FORBIDDEN;
}
-static inline int web_client_crock_socket(struct web_client *w) {
+static inline int web_client_crock_socket(struct web_client *w __maybe_unused) {
#ifdef TCP_CORK
if(likely(web_client_is_corkable(w) && !w->tcp_cork && w->ofd != -1)) {
- w->tcp_cork = 1;
+ w->tcp_cork = true;
if(unlikely(setsockopt(w->ofd, IPPROTO_TCP, TCP_CORK, (char *) &w->tcp_cork, sizeof(int)) != 0)) {
error("%llu: failed to enable TCP_CORK on socket.", w->id);
- w->tcp_cork = 0;
+ w->tcp_cork = false;
return -1;
}
}
-#else
- (void)w;
#endif /* TCP_CORK */
return 0;
}
-static inline int web_client_uncrock_socket(struct web_client *w) {
+static inline void web_client_enable_wait_from_ssl(struct web_client *w, int bytes) {
+ int ssl_err = SSL_get_error(w->ssl.conn, bytes);
+ if (ssl_err == SSL_ERROR_WANT_READ)
+ web_client_enable_ssl_wait_receive(w);
+ else if (ssl_err == SSL_ERROR_WANT_WRITE)
+ web_client_enable_ssl_wait_send(w);
+ else {
+ web_client_disable_ssl_wait_receive(w);
+ web_client_disable_ssl_wait_send(w);
+ }
+}
+
+static inline int web_client_uncrock_socket(struct web_client *w __maybe_unused) {
#ifdef TCP_CORK
if(likely(w->tcp_cork && w->ofd != -1)) {
- w->tcp_cork = 0;
if(unlikely(setsockopt(w->ofd, IPPROTO_TCP, TCP_CORK, (char *) &w->tcp_cork, sizeof(int)) != 0)) {
error("%llu: failed to disable TCP_CORK on socket.", w->id);
- w->tcp_cork = 1;
+ w->tcp_cork = true;
return -1;
}
}
-#else
- (void)w;
#endif /* TCP_CORK */
+ w->tcp_cork = false;
return 0;
}
@@ -67,14 +75,96 @@ char *strip_control_characters(char *url) {
return url;
}
+static void web_client_reset_allocations(struct web_client *w, bool free_all) {
+
+ if(free_all) {
+ // the web client is to be destroyed
+
+ buffer_free(w->url_as_received);
+ w->url_as_received = NULL;
+
+ buffer_free(w->url_path_decoded);
+ w->url_path_decoded = NULL;
+
+ buffer_free(w->url_query_string_decoded);
+ w->url_query_string_decoded = NULL;
+
+ buffer_free(w->response.header_output);
+ w->response.header_output = NULL;
+
+ buffer_free(w->response.header);
+ w->response.header = NULL;
+
+ buffer_free(w->response.data);
+ w->response.data = NULL;
+
+ freez(w->post_payload);
+ w->post_payload = NULL;
+ w->post_payload_size = 0;
+
+#ifdef ENABLE_HTTPS
+ if ((!web_client_check_unix(w)) && (netdata_ssl_srv_ctx)) {
+ if (w->ssl.conn) {
+ SSL_free(w->ssl.conn);
+ w->ssl.conn = NULL;
+ }
+ }
+#endif
+ }
+ else {
+ // the web client is to be re-used
+
+ buffer_reset(w->url_as_received);
+ buffer_reset(w->url_path_decoded);
+ buffer_reset(w->url_query_string_decoded);
+
+ buffer_reset(w->response.header_output);
+ buffer_reset(w->response.header);
+ buffer_reset(w->response.data);
+
+ // leave w->post_payload
+ // leave w->ssl
+ }
+
+ freez(w->server_host);
+ w->server_host = NULL;
+
+ freez(w->forwarded_host);
+ w->forwarded_host = NULL;
+
+ freez(w->origin);
+ w->origin = NULL;
+
+ freez(w->user_agent);
+ w->user_agent = NULL;
+
+ freez(w->auth_bearer_token);
+ w->auth_bearer_token = NULL;
+
+ // if we had enabled compression, release it
+#ifdef NETDATA_WITH_ZLIB
+ if(w->response.zinitialized) {
+ deflateEnd(&w->response.zstream);
+ w->response.zsent = 0;
+ w->response.zhave = 0;
+ w->response.zstream.avail_in = 0;
+ w->response.zstream.avail_out = 0;
+ w->response.zstream.total_in = 0;
+ w->response.zstream.total_out = 0;
+ w->response.zinitialized = false;
+ w->flags &= ~WEB_CLIENT_CHUNKED_TRANSFER;
+ }
+#endif // NETDATA_WITH_ZLIB
+}
+
void web_client_request_done(struct web_client *w) {
web_client_uncrock_socket(w);
debug(D_WEB_CLIENT, "%llu: Resetting client.", w->id);
- if(likely(w->last_url[0])) {
+ if(likely(buffer_strlen(w->url_as_received))) {
struct timeval tv;
- now_realtime_timeval(&tv);
+ now_monotonic_high_precision_timeval(&tv);
size_t size = (w->mode == WEB_CLIENT_MODE_FILECOPY)?w->response.rlen:w->response.data->len;
size_t sent = size;
@@ -85,14 +175,14 @@ void web_client_request_done(struct web_client *w) {
// --------------------------------------------------------------------
// global statistics
- global_statistics_web_request_completed(dt_usec(&tv, &w->tv_in),
- w->stats_received_bytes,
- w->stats_sent_bytes,
+ global_statistics_web_request_completed(dt_usec(&tv, &w->timings.tv_in),
+ w->statistics.received_bytes,
+ w->statistics.sent_bytes,
size,
sent);
- w->stats_received_bytes = 0;
- w->stats_sent_bytes = 0;
+ w->statistics.received_bytes = 0;
+ w->statistics.sent_bytes = 0;
// --------------------------------------------------------------------
@@ -111,7 +201,8 @@ void web_client_request_done(struct web_client *w) {
mode = "STREAM";
break;
- case WEB_CLIENT_MODE_NORMAL:
+ case WEB_CLIENT_MODE_POST:
+ case WEB_CLIENT_MODE_GET:
mode = "DATA";
break;
@@ -130,11 +221,11 @@ void web_client_request_done(struct web_client *w) {
, sent
, size
, -((size > 0) ? ((double)(size - sent) / (double) size * 100.0) : 0.0)
- , (double)dt_usec(&w->tv_ready, &w->tv_in) / 1000.0
- , (double)dt_usec(&tv, &w->tv_ready) / 1000.0
- , (double)dt_usec(&tv, &w->tv_in) / 1000.0
+ , (double)dt_usec(&w->timings.tv_ready, &w->timings.tv_in) / 1000.0
+ , (double)dt_usec(&tv, &w->timings.tv_ready) / 1000.0
+ , (double)dt_usec(&tv, &w->timings.tv_in) / 1000.0
, w->response.code
- , strip_control_characters(w->last_url)
+ , strip_control_characters((char *)buffer_tostring(w->url_as_received))
);
}
@@ -152,32 +243,13 @@ void web_client_request_done(struct web_client *w) {
}
}
- w->last_url[0] = '\0';
- w->cookie1[0] = '\0';
- w->cookie2[0] = '\0';
- w->origin[0] = '*';
- w->origin[1] = '\0';
+ web_client_reset_allocations(w, false);
- freez(w->user_agent); w->user_agent = NULL;
- if (w->auth_bearer_token) {
- freez(w->auth_bearer_token);
- w->auth_bearer_token = NULL;
- }
+ w->mode = WEB_CLIENT_MODE_GET;
- w->mode = WEB_CLIENT_MODE_NORMAL;
-
- w->tcp_cork = 0;
web_client_disable_donottrack(w);
web_client_disable_tracking_required(w);
web_client_disable_keepalive(w);
- w->decoded_url[0] = '\0';
-
- buffer_reset(w->response.header_output);
- buffer_reset(w->response.header);
- buffer_reset(w->response.data);
- w->response.rlen = 0;
- w->response.sent = 0;
- w->response.code = 0;
w->header_parse_tries = 0;
w->header_parse_last_size = 0;
@@ -185,23 +257,11 @@ void web_client_request_done(struct web_client *w) {
web_client_enable_wait_receive(w);
web_client_disable_wait_send(w);
- w->response.zoutput = 0;
-
- // if we had enabled compression, release it
-#ifdef NETDATA_WITH_ZLIB
- if(w->response.zinitialized) {
- debug(D_DEFLATE, "%llu: Freeing compression resources.", w->id);
- deflateEnd(&w->response.zstream);
- w->response.zsent = 0;
- w->response.zhave = 0;
- w->response.zstream.avail_in = 0;
- w->response.zstream.avail_out = 0;
- w->response.zstream.total_in = 0;
- w->response.zstream.total_out = 0;
- w->response.zinitialized = 0;
- w->flags &= ~WEB_CLIENT_CHUNKED_TRANSFER;
- }
-#endif // NETDATA_WITH_ZLIB
+ w->response.has_cookies = false;
+ w->response.rlen = 0;
+ w->response.sent = 0;
+ w->response.code = 0;
+ w->response.zoutput = false;
}
static struct {
@@ -273,7 +333,7 @@ static inline uint8_t contenttype_for_filename(const char *filename) {
}
static inline int access_to_file_is_not_permitted(struct web_client *w, const char *filename) {
- w->response.data->contenttype = CT_TEXT_HTML;
+ w->response.data->content_type = CT_TEXT_HTML;
buffer_strcat(w->response.data, "Access to file is not permitted: ");
buffer_strcat_htmlescape(w->response.data, filename);
return HTTP_RESP_FORBIDDEN;
@@ -295,7 +355,7 @@ int mysendfile(struct web_client *w, char *filename) {
for(s = filename; *s ;s++) {
if( !isalnum(*s) && *s != '/' && *s != '.' && *s != '-' && *s != '_') {
debug(D_WEB_CLIENT_ACCESS, "%llu: File '%s' is not acceptable.", w->id, filename);
- w->response.data->contenttype = CT_TEXT_HTML;
+ w->response.data->content_type = CT_TEXT_HTML;
buffer_sprintf(w->response.data, "Filename contains invalid characters: ");
buffer_strcat_htmlescape(w->response.data, filename);
return HTTP_RESP_BAD_REQUEST;
@@ -305,7 +365,7 @@ int mysendfile(struct web_client *w, char *filename) {
// if the filename contains a double dot refuse to serve it
if(strstr(filename, "..") != 0) {
debug(D_WEB_CLIENT_ACCESS, "%llu: File '%s' is not acceptable.", w->id, filename);
- w->response.data->contenttype = CT_TEXT_HTML;
+ w->response.data->content_type = CT_TEXT_HTML;
buffer_strcat(w->response.data, "Relative filenames are not supported: ");
buffer_strcat_htmlescape(w->response.data, filename);
return HTTP_RESP_BAD_REQUEST;
@@ -321,7 +381,7 @@ int mysendfile(struct web_client *w, char *filename) {
// check if the file exists
if (lstat(webfilename, &statbuf) != 0) {
debug(D_WEB_CLIENT_ACCESS, "%llu: File '%s' is not found.", w->id, webfilename);
- w->response.data->contenttype = CT_TEXT_HTML;
+ w->response.data->content_type = CT_TEXT_HTML;
buffer_strcat(w->response.data, "File does not exist, or is not accessible: ");
buffer_strcat_htmlescape(w->response.data, webfilename);
return HTTP_RESP_NOT_FOUND;
@@ -347,7 +407,7 @@ int mysendfile(struct web_client *w, char *filename) {
if(errno == EBUSY || errno == EAGAIN) {
error("%llu: File '%s' is busy, sending 307 Moved Temporarily to force retry.", w->id, webfilename);
- w->response.data->contenttype = CT_TEXT_HTML;
+ w->response.data->content_type = CT_TEXT_HTML;
buffer_sprintf(w->response.header, "Location: /%s\r\n", filename);
buffer_strcat(w->response.data, "File is currently busy, please try again later: ");
buffer_strcat_htmlescape(w->response.data, webfilename);
@@ -355,7 +415,7 @@ int mysendfile(struct web_client *w, char *filename) {
}
else {
error("%llu: Cannot open file '%s'.", w->id, webfilename);
- w->response.data->contenttype = CT_TEXT_HTML;
+ w->response.data->content_type = CT_TEXT_HTML;
buffer_strcat(w->response.data, "Cannot open file: ");
buffer_strcat_htmlescape(w->response.data, webfilename);
return HTTP_RESP_NOT_FOUND;
@@ -364,7 +424,7 @@ int mysendfile(struct web_client *w, char *filename) {
sock_setnonblock(w->ifd);
- w->response.data->contenttype = contenttype_for_filename(webfilename);
+ w->response.data->content_type = contenttype_for_filename(webfilename);
debug(D_WEB_CLIENT_ACCESS, "%llu: Sending file '%s' (%"PRId64" bytes, ifd %d, ofd %d).", w->id, webfilename, (int64_t)statbuf.st_size, w->ifd, w->ofd);
w->mode = WEB_CLIENT_MODE_FILECOPY;
@@ -426,8 +486,8 @@ void web_client_enable_deflate(struct web_client *w, int gzip) {
}
w->response.zsent = 0;
- w->response.zoutput = 1;
- w->response.zinitialized = 1;
+ w->response.zoutput = true;
+ w->response.zinitialized = true;
w->flags |= WEB_CLIENT_CHUNKED_TRANSFER;
debug(D_DEFLATE, "%llu: Initialized compression.", w->id);
@@ -527,17 +587,19 @@ static inline int UNUSED_FUNCTION(check_host_and_mgmt_acl_and_call)(RRDHOST *hos
return check_host_and_call(host, w, url, func);
}
-int web_client_api_request(RRDHOST *host, struct web_client *w, char *url)
+int web_client_api_request(RRDHOST *host, struct web_client *w, char *url_path_fragment)
{
// get the api version
- char *tok = mystrsep(&url, "/");
+ char *tok = strsep_skip_consecutive_separators(&url_path_fragment, "/");
if(tok && *tok) {
debug(D_WEB_CLIENT, "%llu: Searching for API version '%s'.", w->id, tok);
- if(strcmp(tok, "v1") == 0)
- return web_client_api_request_v1(host, w, url);
+ if(strcmp(tok, "v2") == 0)
+ return web_client_api_request_v2(host, w, url_path_fragment);
+ else if(strcmp(tok, "v1") == 0)
+ return web_client_api_request_v1(host, w, url_path_fragment);
else {
buffer_flush(w->response.data);
- w->response.data->contenttype = CT_TEXT_HTML;
+ w->response.data->content_type = CT_TEXT_HTML;
buffer_strcat(w->response.data, "Unsupported API version: ");
buffer_strcat_htmlescape(w->response.data, tok);
return HTTP_RESP_NOT_FOUND;
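The `mystrsep()` → `strsep_skip_consecutive_separators()` rename above suggests a tokenizer that collapses runs of separators instead of yielding empty tokens. A stand-alone sketch of that behavior (the name matches the diff, but the body is an assumption, not netdata's actual implementation):

```c
#include <assert.h>
#include <string.h>
#include <stddef.h>

/* Like strsep(3), but a run of consecutive separators produces a single
 * token boundary rather than empty tokens. Illustrative only. */
static char *strsep_skip_consecutive_separators(char **ptr, const char *seps) {
    char *p = *ptr;
    if (!p)
        return NULL;

    p += strspn(p, seps);            /* skip leading separators */
    if (!*p) {
        *ptr = NULL;
        return NULL;
    }

    char *end = p + strcspn(p, seps); /* find the next separator */
    if (*end) {
        *end = '\0';
        *ptr = end + 1;
    }
    else
        *ptr = NULL;

    return p;
}
```

With this semantics, a path like `//api//v1/data` tokenizes cleanly into `api`, `v1`, `data`, which is why the API dispatcher can compare tokens directly against `v1`/`v2`.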
@@ -550,8 +612,8 @@ int web_client_api_request(RRDHOST *host, struct web_client *w, char *url)
}
}
-const char *web_content_type_to_string(uint8_t contenttype) {
- switch(contenttype) {
+const char *web_content_type_to_string(HTTP_CONTENT_TYPE content_type) {
+ switch(content_type) {
case CT_TEXT_HTML:
return "text/html; charset=utf-8";
@@ -715,7 +777,7 @@ static inline char *http_header_parse(struct web_client *w, char *s, int parse_u
uint32_t hash = simple_uhash(s);
if(hash == hash_origin && !strcasecmp(s, "Origin"))
- strncpyz(w->origin, v, NETDATA_WEB_REQUEST_ORIGIN_HEADER_SIZE);
+ w->origin = strdupz(v);
else if(hash == hash_connection && !strcasecmp(s, "Connection")) {
if(strcasestr(v, "keep-alive"))
@@ -727,11 +789,14 @@ static inline char *http_header_parse(struct web_client *w, char *s, int parse_u
}
else if(parse_useragent && hash == hash_useragent && !strcasecmp(s, "User-Agent")) {
w->user_agent = strdupz(v);
- } else if(hash == hash_authorization&& !strcasecmp(s, "X-Auth-Token")) {
+ }
+ else if(hash == hash_authorization && !strcasecmp(s, "X-Auth-Token")) {
w->auth_bearer_token = strdupz(v);
}
- else if(hash == hash_host && !strcasecmp(s, "Host")){
- strncpyz(w->server_host, v, ((size_t)(ve - v) < sizeof(w->server_host)-1 ? (size_t)(ve - v) : sizeof(w->server_host)-1));
+ else if(hash == hash_host && !strcasecmp(s, "Host")) {
+ char buffer[NI_MAXHOST];
+ strncpyz(buffer, v, ((size_t)(ve - v) < sizeof(buffer) - 1 ? (size_t)(ve - v) : sizeof(buffer) - 1));
+ w->server_host = strdupz(buffer);
}
#ifdef NETDATA_WITH_ZLIB
else if(hash == hash_accept_encoding && !strcasecmp(s, "Accept-Encoding")) {
@@ -751,8 +816,10 @@ static inline char *http_header_parse(struct web_client *w, char *s, int parse_u
w->ssl.flags |= NETDATA_SSL_PROXY_HTTPS;
}
#endif
- else if(hash == hash_forwarded_host && !strcasecmp(s, "X-Forwarded-Host")){
- strncpyz(w->forwarded_host, v, ((size_t)(ve - v) < sizeof(w->server_host)-1 ? (size_t)(ve - v) : sizeof(w->server_host)-1));
+ else if(hash == hash_forwarded_host && !strcasecmp(s, "X-Forwarded-Host")) {
+ char buffer[NI_MAXHOST];
+ strncpyz(buffer, v, ((size_t)(ve - v) < sizeof(buffer) - 1 ? (size_t)(ve - v) : sizeof(buffer) - 1));
+ w->forwarded_host = strdupz(buffer);
}
*e = ':';
@@ -774,12 +841,16 @@ static inline char *web_client_valid_method(struct web_client *w, char *s) {
// is it a valid request?
if(!strncmp(s, "GET ", 4)) {
s = &s[4];
- w->mode = WEB_CLIENT_MODE_NORMAL;
+ w->mode = WEB_CLIENT_MODE_GET;
}
else if(!strncmp(s, "OPTIONS ", 8)) {
s = &s[8];
w->mode = WEB_CLIENT_MODE_OPTIONS;
}
+ else if(!strncmp(s, "POST ", 5)) {
+ s = &s[5];
+ w->mode = WEB_CLIENT_MODE_POST;
+ }
else if(!strncmp(s, "STREAM ", 7)) {
s = &s[7];
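The method parsing above matches a fixed prefix (including the trailing space) and advances the cursor past it; the diff adds `POST` alongside `GET`, `OPTIONS`, and netdata's custom `STREAM`. A table-driven sketch of the same dispatch (enum and helper names here are illustrative, not netdata's):

```c
#include <assert.h>
#include <string.h>

typedef enum {
    MODE_GET,
    MODE_POST,
    MODE_OPTIONS,
    MODE_STREAM,
    MODE_UNKNOWN
} request_mode_sketch;

/* Detect the HTTP method prefix and advance *s past it on a match. */
static request_mode_sketch detect_method(char **s) {
    static const struct {
        const char *prefix;
        size_t len;
        request_mode_sketch mode;
    } methods[] = {
        { "GET ",     4, MODE_GET     },
        { "OPTIONS ", 8, MODE_OPTIONS },
        { "POST ",    5, MODE_POST    },
        { "STREAM ",  7, MODE_STREAM  },
    };

    for (size_t i = 0; i < sizeof(methods) / sizeof(methods[0]); i++) {
        if (!strncmp(*s, methods[i].prefix, methods[i].len)) {
            *s += methods[i].len;
            return methods[i].mode;
        }
    }
    return MODE_UNKNOWN;
}
```

Matching the trailing space in one `strncmp()` both validates the method token and leaves the cursor on the first character of the request target.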
@@ -823,63 +894,6 @@ static inline char *web_client_valid_method(struct web_client *w, char *s) {
}
/**
- * Set Path Query
- *
- * Set the pointers to the path and query string according to the input.
- *
- * @param w is the structure with the client request
- * @param s is the first address of the string.
- * @param ptr is the address of the separator.
- */
-static void web_client_set_path_query(struct web_client *w, const char *s, char *ptr) {
- w->url_path_length = (size_t)(ptr -s);
- w->url_search_path = ptr;
-}
-
-/**
- * Split path query
- *
- * Do the separation between path and query string
- *
- * @param w is the structure with the client request
- * @param s is the string to parse
- */
-void web_client_split_path_query(struct web_client *w, char *s) {
- //I am assuming here that the separator character(?) is not encoded
- char *ptr = strchr(s, '?');
- if(ptr) {
- w->separator = '?';
- web_client_set_path_query(w, s, ptr);
- return;
- }
-
- //Here I test the second possibility, the URL is completely encoded by the user.
- //I am not using the strcasestr, because it is fastest to check %3f and compare
- //the next character.
- //We executed some tests with "encodeURI(uri);" described in https://www.w3schools.com/jsref/jsref_encodeuri.asp
- //on July 1st, 2019, that show us that URLs won't have '?','=' and '&' encoded, but we decided to move in front
- //with the next part, because users can develop their own encoded that won't follow this rule.
- char *moveme = s;
- while (moveme) {
- ptr = strchr(moveme, '%');
- if(ptr) {
- char *test = (ptr+1);
- if (!strncmp(test, "3f", 2) || !strncmp(test, "3F", 2)) {
- w->separator = *ptr;
- web_client_set_path_query(w, s, ptr);
- return;
- }
- ptr++;
- }
-
- moveme = ptr;
- }
-
- w->separator = 0x00;
- w->url_path_length = strlen(s);
-}
-
-/**
* Request validate
*
* @param w is the structure with the client request
@@ -903,14 +917,14 @@ static inline HTTP_VALIDATION http_request_validate(struct web_client *w) {
if(w->header_parse_last_size < last_pos)
last_pos = 0;
- is_it_valid = url_is_request_complete(s, &s[last_pos], w->header_parse_last_size);
+ is_it_valid = url_is_request_complete(s, &s[last_pos], w->header_parse_last_size, &w->post_payload, &w->post_payload_size);
if(!is_it_valid) {
- if(w->header_parse_tries > 10) {
+ if(w->header_parse_tries > HTTP_REQ_MAX_HEADER_FETCH_TRIES) {
info("Disabling slow client after %zu attempts to read the request (%zu bytes received)", w->header_parse_tries, buffer_strlen(w->response.data));
w->header_parse_tries = 0;
w->header_parse_last_size = 0;
web_client_disable_wait_receive(w);
- return HTTP_VALIDATION_NOT_SUPPORTED;
+ return HTTP_VALIDATION_TOO_MANY_READ_RETRIES;
}
return HTTP_VALIDATION_INCOMPLETE;
@@ -919,7 +933,7 @@ static inline HTTP_VALIDATION http_request_validate(struct web_client *w) {
is_it_valid = 1;
} else {
last_pos = w->header_parse_last_size;
- is_it_valid = url_is_request_complete(s, &s[last_pos], w->header_parse_last_size);
+ is_it_valid = url_is_request_complete(s, &s[last_pos], w->header_parse_last_size, &w->post_payload, &w->post_payload_size);
}
s = web_client_valid_method(w, s);
@@ -938,10 +952,9 @@ static inline HTTP_VALIDATION http_request_validate(struct web_client *w) {
w->header_parse_tries = 0;
w->header_parse_last_size = 0;
web_client_disable_wait_receive(w);
- return HTTP_VALIDATION_NOT_SUPPORTED;
+ return HTTP_VALIDATION_EXCESS_REQUEST_DATA;
}
}
-
web_client_enable_wait_receive(w);
return HTTP_VALIDATION_INCOMPLETE;
}
@@ -961,10 +974,6 @@ static inline HTTP_VALIDATION http_request_validate(struct web_client *w) {
// we have the end of encoded_url - remember it
char *ue = s;
- //Variables used to map the variables in the query string case it is present
- int total_variables;
- char *ptr_variables[WEB_FIELDS_MAX];
-
// make sure we have complete request
// complete requests contain: \r\n\r\n
while(*s) {
@@ -981,50 +990,16 @@ static inline HTTP_VALIDATION http_request_validate(struct web_client *w) {
if(unlikely(*s == '\r' && s[1] == '\n')) {
// a valid complete HTTP request found
+ char c = *ue;
*ue = '\0';
- //This is to avoid crash in line
- w->url_search_path = NULL;
- if(w->mode != WEB_CLIENT_MODE_NORMAL) {
- if(!url_decode_r(w->decoded_url, encoded_url, NETDATA_WEB_REQUEST_URL_SIZE + 1))
- return HTTP_VALIDATION_MALFORMED_URL;
- } else {
- web_client_split_path_query(w, encoded_url);
-
- if (w->url_search_path && w->separator) {
- *w->url_search_path = 0x00;
- }
-
- if(!url_decode_r(w->decoded_url, encoded_url, NETDATA_WEB_REQUEST_URL_SIZE + 1))
- return HTTP_VALIDATION_MALFORMED_URL;
-
- if (w->url_search_path && w->separator) {
- *w->url_search_path = w->separator;
-
- char *from = (encoded_url + w->url_path_length);
- total_variables = url_map_query_string(ptr_variables, from);
+ web_client_decode_path_and_query_string(w, encoded_url);
+ *ue = c;
- if (url_parse_query_string(w->decoded_query_string, NETDATA_WEB_REQUEST_URL_SIZE + 1, ptr_variables, total_variables)) {
- return HTTP_VALIDATION_MALFORMED_URL;
- }
- } else {
- //make sure there's no leftovers from previous request on the same web client
- w->decoded_query_string[1]='\0';
- }
- }
- *ue = ' ';
-
- // copy the URL - we are going to overwrite parts of it
- // TODO -- ideally we we should avoid copying buffers around
- snprintfz(w->last_url, NETDATA_WEB_REQUEST_URL_SIZE, "%s%s", w->decoded_url, w->decoded_query_string);
#ifdef ENABLE_HTTPS
if ( (!web_client_check_unix(w)) && (netdata_ssl_srv_ctx) ) {
if ((w->ssl.conn) && ((w->ssl.flags & NETDATA_SSL_NO_HANDSHAKE) && (web_client_is_using_ssl_force(w) || web_client_is_using_ssl_default(w)) && (w->mode != WEB_CLIENT_MODE_STREAM)) ) {
w->header_parse_tries = 0;
w->header_parse_last_size = 0;
- // The client will be redirected for Netdata and we are preserving the original request.
- *ue = '\0';
- strncpyz(w->last_url, encoded_url, NETDATA_WEB_REQUEST_URL_SIZE);
- *ue = ' ';
web_client_disable_wait_receive(w);
return HTTP_VALIDATION_REDIRECT;
}
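The loop above scans the receive buffer for the `\r\n\r\n` sequence that terminates a complete HTTP header block. A minimal completeness check in that spirit (the diff's `url_is_request_complete()` additionally locates a POST payload after the blank line; this sketch does not):

```c
#include <assert.h>
#include <string.h>
#include <stdbool.h>

/* An HTTP/1.1 request's header section is complete once the empty line
 * ("\r\n\r\n") has been received. Until then, the server must keep
 * reading, which is why incomplete requests return
 * HTTP_VALIDATION_INCOMPLETE above. */
static bool request_headers_complete(const char *buf) {
    return strstr(buf, "\r\n\r\n") != NULL;
}
```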
@@ -1038,9 +1013,7 @@ static inline HTTP_VALIDATION http_request_validate(struct web_client *w) {
}
// another header line
- s = http_header_parse(w, s,
- (w->mode == WEB_CLIENT_MODE_STREAM) // parse user agent
- );
+ s = http_header_parse(w, s, (w->mode == WEB_CLIENT_MODE_STREAM)); // parse user agent
}
}
@@ -1056,6 +1029,7 @@ static inline ssize_t web_client_send_data(struct web_client *w,const void *buf,
if ( (!web_client_check_unix(w)) && (netdata_ssl_srv_ctx) ) {
if ( ( w->ssl.conn ) && ( !w->ssl.flags ) ){
bytes = netdata_ssl_write(w->ssl.conn, buf, len) ;
+ web_client_enable_wait_from_ssl(w, bytes);
} else {
bytes = send(w->ofd,buf, len , flags);
}
@@ -1076,15 +1050,15 @@ void web_client_build_http_header(struct web_client *w) {
// set a proper expiration date, if not already set
if(unlikely(!w->response.data->expires)) {
if(w->response.data->options & WB_CONTENT_NO_CACHEABLE)
- w->response.data->expires = w->tv_ready.tv_sec + localhost->rrd_update_every;
+ w->response.data->expires = w->timings.tv_ready.tv_sec + localhost->rrd_update_every;
else
- w->response.data->expires = w->tv_ready.tv_sec + 86400;
+ w->response.data->expires = w->timings.tv_ready.tv_sec + 86400;
}
// prepare the HTTP response header
debug(D_WEB_CLIENT, "%llu: Generating HTTP header with response %d.", w->id, w->response.code);
- const char *content_type_string = web_content_type_to_string(w->response.data->contenttype);
+ const char *content_type_string = web_content_type_to_string(w->response.data->content_type);
const char *code_msg = web_response_code_to_string(w->response.code);
// prepare the last modified and expiration dates
@@ -1104,8 +1078,8 @@ void web_client_build_http_header(struct web_client *w) {
"HTTP/1.1 %d %s\r\n"
"Location: https://%s%s\r\n",
w->response.code, code_msg,
- w->server_host,
- w->last_url);
+ w->server_host ? w->server_host : "",
+ buffer_tostring(w->url_as_received));
}else {
buffer_sprintf(w->response.header_output,
"HTTP/1.1 %d %s\r\n"
@@ -1119,7 +1093,7 @@ void web_client_build_http_header(struct web_client *w) {
code_msg,
web_client_has_keepalive(w)?"keep-alive":"close",
VERSION,
- w->origin,
+ w->origin ? w->origin : "*",
content_type_string,
date);
}
@@ -1127,31 +1101,19 @@ void web_client_build_http_header(struct web_client *w) {
if(unlikely(web_x_frame_options))
buffer_sprintf(w->response.header_output, "X-Frame-Options: %s\r\n", web_x_frame_options);
- if(w->cookie1[0] || w->cookie2[0]) {
- if(w->cookie1[0]) {
- buffer_sprintf(w->response.header_output,
- "Set-Cookie: %s\r\n",
- w->cookie1);
- }
-
- if(w->cookie2[0]) {
- buffer_sprintf(w->response.header_output,
- "Set-Cookie: %s\r\n",
- w->cookie2);
- }
-
+ if(w->response.has_cookies) {
if(respect_web_browser_do_not_track_policy)
buffer_sprintf(w->response.header_output,
- "Tk: T;cookies\r\n");
+ "Tk: T;cookies\r\n");
}
else {
if(respect_web_browser_do_not_track_policy) {
if(web_client_has_tracking_required(w))
buffer_sprintf(w->response.header_output,
- "Tk: T;cookies\r\n");
+ "Tk: T;cookies\r\n");
else
buffer_sprintf(w->response.header_output,
- "Tk: N\r\n");
+ "Tk: N\r\n");
}
}
@@ -1211,8 +1173,10 @@ static inline void web_client_send_http_header(struct web_client *w) {
ssize_t bytes;
#ifdef ENABLE_HTTPS
if ( (!web_client_check_unix(w)) && (netdata_ssl_srv_ctx) ) {
- if ( ( w->ssl.conn ) && ( w->ssl.flags == NETDATA_SSL_HANDSHAKE_COMPLETE ) )
+ if ( ( w->ssl.conn ) && ( w->ssl.flags == NETDATA_SSL_HANDSHAKE_COMPLETE ) ) {
bytes = netdata_ssl_write(w->ssl.conn, buffer_tostring(w->response.header_output), buffer_strlen(w->response.header_output));
+ web_client_enable_wait_from_ssl(w, bytes);
+ }
else {
while((bytes = send(w->ofd, buffer_tostring(w->response.header_output), buffer_strlen(w->response.header_output), 0)) == -1) {
count++;
@@ -1247,7 +1211,7 @@ static inline void web_client_send_http_header(struct web_client *w) {
if(bytes != (ssize_t) buffer_strlen(w->response.header_output)) {
if(bytes > 0)
- w->stats_sent_bytes += bytes;
+ w->statistics.sent_bytes += bytes;
if (bytes < 0) {
@@ -1260,12 +1224,10 @@ static inline void web_client_send_http_header(struct web_client *w) {
}
}
else
- w->stats_sent_bytes += bytes;
+ w->statistics.sent_bytes += bytes;
}
-static inline int web_client_process_url(RRDHOST *host, struct web_client *w, char *url);
-
-static inline int web_client_switch_host(RRDHOST *host, struct web_client *w, char *url) {
+static inline int web_client_switch_host(RRDHOST *host, struct web_client *w, char *url, bool nodeid, int (*func)(RRDHOST *, struct web_client *, char *)) {
static uint32_t hash_localhost = 0;
if(unlikely(!hash_localhost)) {
@@ -1278,51 +1240,132 @@ static inline int web_client_switch_host(RRDHOST *host, struct web_client *w, ch
return HTTP_RESP_BAD_REQUEST;
}
- char *tok = mystrsep(&url, "/");
+ char *tok = strsep_skip_consecutive_separators(&url, "/");
if(tok && *tok) {
debug(D_WEB_CLIENT, "%llu: Searching for host with name '%s'.", w->id, tok);
- if(!url) { //no delim found
- debug(D_WEB_CLIENT, "%llu: URL doesn't end with / generating redirect.", w->id);
- char *protocol, *url_host;
+ if(nodeid) {
+ host = find_host_by_node_id(tok);
+ if(!host) {
+ host = rrdhost_find_by_hostname(tok);
+ if (!host)
+ host = rrdhost_find_by_guid(tok);
+ }
+ }
+ else {
+ host = rrdhost_find_by_hostname(tok);
+ if(!host) {
+ host = rrdhost_find_by_guid(tok);
+ if (!host)
+ host = find_host_by_node_id(tok);
+ }
+ }
+
+ if(!host) {
+ // we didn't find it, but it may be a uuid case mismatch for MACHINE_GUID
+ // so, recreate the machine guid in lower-case.
+ uuid_t uuid;
+ char txt[UUID_STR_LEN];
+ if (uuid_parse(tok, uuid) == 0) {
+ uuid_unparse_lower(uuid, txt);
+ host = rrdhost_find_by_guid(txt);
+ }
+ }
+
+ if (host) {
+ if(!url) { //no delim found
+ debug(D_WEB_CLIENT, "%llu: URL doesn't end with '/', generating a redirect.", w->id);
+ char *protocol, *url_host;
#ifdef ENABLE_HTTPS
- protocol = ((w->ssl.conn && !w->ssl.flags) || w->ssl.flags & NETDATA_SSL_PROXY_HTTPS) ? "https" : "http";
+ protocol = ((w->ssl.conn && !w->ssl.flags) || w->ssl.flags & NETDATA_SSL_PROXY_HTTPS) ? "https" : "http";
#else
- protocol = "http";
+ protocol = "http";
#endif
- url_host = (!w->forwarded_host[0])?w->server_host:w->forwarded_host;
- buffer_sprintf(w->response.header, "Location: %s://%s%s/\r\n", protocol, url_host, w->last_url);
- buffer_strcat(w->response.data, "Permanent redirect");
- return HTTP_RESP_REDIR_PERM;
- }
- // copy the URL, we need it to serve files
- w->last_url[0] = '/';
+ url_host = w->forwarded_host;
+ if(!url_host) {
+ url_host = w->server_host;
+ if(!url_host) url_host = "";
+ }
- if(url && *url) strncpyz(&w->last_url[1], url, NETDATA_WEB_REQUEST_URL_SIZE - 1);
- else w->last_url[1] = '\0';
+ buffer_sprintf(w->response.header, "Location: %s://%s/%s/%s/%s",
+ protocol, url_host, nodeid?"node":"host", tok, buffer_tostring(w->url_path_decoded));
- host = rrdhost_find_by_hostname(tok);
- if (!host)
- host = rrdhost_find_by_guid(tok);
- if (host) return web_client_process_url(host, w, url);
+ if(buffer_strlen(w->url_query_string_decoded)) {
+ const char *query_string = buffer_tostring(w->url_query_string_decoded);
+ if(*query_string) {
+ if(*query_string != '?')
+ buffer_fast_strcat(w->response.header, "?", 1);
+ buffer_strcat(w->response.header, query_string);
+ }
+ }
+ buffer_fast_strcat(w->response.header, "\r\n", 2);
+ buffer_strcat(w->response.data, "Permanent redirect");
+ return HTTP_RESP_REDIR_PERM;
+ }
+
+ size_t len = strlen(url) + 2;
+ char buf[len];
+ buf[0] = '/';
+ strcpy(&buf[1], url);
+ buf[len - 1] = '\0';
+
+ buffer_flush(w->url_path_decoded);
+ buffer_strcat(w->url_path_decoded, buf);
+ return func(host, w, buf);
+ }
}
buffer_flush(w->response.data);
- w->response.data->contenttype = CT_TEXT_HTML;
+ w->response.data->content_type = CT_TEXT_HTML;
buffer_strcat(w->response.data, "This netdata does not maintain a database for host: ");
buffer_strcat_htmlescape(w->response.data, tok?tok:"");
return HTTP_RESP_NOT_FOUND;
}
-static inline int web_client_process_url(RRDHOST *host, struct web_client *w, char *url) {
+int web_client_api_request_with_node_selection(RRDHOST *host, struct web_client *w, char *decoded_url_path) {
+ static uint32_t
+ hash_api = 0,
+ hash_host = 0,
+ hash_node = 0;
+
+ if(unlikely(!hash_api)) {
+ hash_api = simple_hash("api");
+ hash_host = simple_hash("host");
+ hash_node = simple_hash("node");
+ }
+
+ char *tok = strsep_skip_consecutive_separators(&decoded_url_path, "/?");
+ if(likely(tok && *tok)) {
+ uint32_t hash = simple_hash(tok);
+
+ if(unlikely(hash == hash_api && strcmp(tok, "api") == 0)) {
+ // current API
+ debug(D_WEB_CLIENT_ACCESS, "%llu: API request ...", w->id);
+ return check_host_and_call(host, w, decoded_url_path, web_client_api_request);
+ }
+ else if(unlikely((hash == hash_host && strcmp(tok, "host") == 0) || (hash == hash_node && strcmp(tok, "node") == 0))) {
+ // host switching
+ debug(D_WEB_CLIENT_ACCESS, "%llu: host switch request ...", w->id);
+ return web_client_switch_host(host, w, decoded_url_path, hash == hash_node, web_client_api_request_with_node_selection);
+ }
+ }
+
+ buffer_flush(w->response.data);
+ buffer_strcat(w->response.data, "Unknown API endpoint.");
+ w->response.data->content_type = CT_TEXT_HTML;
+ return HTTP_RESP_NOT_FOUND;
+}
+
+static inline int web_client_process_url(RRDHOST *host, struct web_client *w, char *decoded_url_path) {
if(unlikely(!service_running(ABILITY_WEB_REQUESTS)))
return web_client_permission_denied(w);
static uint32_t
hash_api = 0,
hash_netdata_conf = 0,
- hash_host = 0;
+ hash_host = 0,
+ hash_node = 0;
#ifdef NETDATA_INTERNAL_CHECKS
static uint32_t hash_exit = 0, hash_debug = 0, hash_mirror = 0;
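The host-switching hunk earlier falls back to re-parsing the token as a UUID and regenerating it in lower-case (`uuid_parse()` / `uuid_unparse_lower()` from libuuid), so a machine GUID supplied with upper-case hex digits still matches the stored lower-case form. A libuuid-free sketch of that normalization (layout validation and function name are assumptions for illustration):

```c
#include <assert.h>
#include <ctype.h>
#include <string.h>
#include <stdbool.h>

#define GUID_LEN 36  /* 8-4-4-4-12 hex digits plus four hyphens */

/* Validate the canonical GUID layout and lower-case its hex digits. */
static bool guid_to_lowercase(const char *in, char *out, size_t out_len) {
    if (strlen(in) != GUID_LEN || out_len < GUID_LEN + 1)
        return false;

    for (size_t i = 0; i < GUID_LEN; i++) {
        bool hyphen = (i == 8 || i == 13 || i == 18 || i == 23);
        if (hyphen) {
            if (in[i] != '-') return false;
            out[i] = '-';
        }
        else {
            if (!isxdigit((unsigned char)in[i])) return false;
            out[i] = (char)tolower((unsigned char)in[i]);
        }
    }
    out[GUID_LEN] = '\0';
    return true;
}
```

Round-tripping through parse/unparse (or this validation loop) also rejects strings that merely look GUID-like, so the fallback never matches garbage input.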
@@ -1332,6 +1375,7 @@ static inline int web_client_process_url(RRDHOST *host, struct web_client *w, ch
hash_api = simple_hash("api");
hash_netdata_conf = simple_hash("netdata.conf");
hash_host = simple_hash("host");
+ hash_node = simple_hash("node");
#ifdef NETDATA_INTERNAL_CHECKS
hash_exit = simple_hash("exit");
hash_debug = simple_hash("debug");
@@ -1339,25 +1383,29 @@ static inline int web_client_process_url(RRDHOST *host, struct web_client *w, ch
#endif
}
- char *tok = mystrsep(&url, "/?");
+ // keep a copy of the decoded path, in case we need to serve it as a filename
+ char filename[FILENAME_MAX + 1];
+ strncpyz(filename, buffer_tostring(w->url_path_decoded), FILENAME_MAX);
+
+ char *tok = strsep_skip_consecutive_separators(&decoded_url_path, "/?");
if(likely(tok && *tok)) {
uint32_t hash = simple_hash(tok);
debug(D_WEB_CLIENT, "%llu: Processing command '%s'.", w->id, tok);
if(unlikely(hash == hash_api && strcmp(tok, "api") == 0)) { // current API
debug(D_WEB_CLIENT_ACCESS, "%llu: API request ...", w->id);
- return check_host_and_call(host, w, url, web_client_api_request);
+ return check_host_and_call(host, w, decoded_url_path, web_client_api_request);
}
- else if(unlikely(hash == hash_host && strcmp(tok, "host") == 0)) { // host switching
+ else if(unlikely((hash == hash_host && strcmp(tok, "host") == 0) || (hash == hash_node && strcmp(tok, "node") == 0))) { // host switching
debug(D_WEB_CLIENT_ACCESS, "%llu: host switch request ...", w->id);
- return web_client_switch_host(host, w, url);
+ return web_client_switch_host(host, w, decoded_url_path, hash == hash_node, web_client_process_url);
}
else if(unlikely(hash == hash_netdata_conf && strcmp(tok, "netdata.conf") == 0)) { // netdata.conf
if(unlikely(!web_client_can_access_netdataconf(w)))
return web_client_permission_denied(w);
debug(D_WEB_CLIENT_ACCESS, "%llu: generating netdata.conf ...", w->id);
- w->response.data->contenttype = CT_TEXT_PLAIN;
+ w->response.data->content_type = CT_TEXT_PLAIN;
buffer_flush(w->response.data);
config_generate(w->response.data, 0);
return HTTP_RESP_OK;
@@ -1367,7 +1415,7 @@ static inline int web_client_process_url(RRDHOST *host, struct web_client *w, ch
if(unlikely(!web_client_can_access_netdataconf(w)))
return web_client_permission_denied(w);
- w->response.data->contenttype = CT_TEXT_PLAIN;
+ w->response.data->content_type = CT_TEXT_PLAIN;
buffer_flush(w->response.data);
if(!netdata_exit)
@@ -1386,7 +1434,7 @@ static inline int web_client_process_url(RRDHOST *host, struct web_client *w, ch
buffer_flush(w->response.data);
// get the name of the data to show
- tok = mystrsep(&url, "&");
+ tok = strsep_skip_consecutive_separators(&decoded_url_path, "&");
if(tok && *tok) {
debug(D_WEB_CLIENT, "%llu: Searching for RRD data with name '%s'.", w->id, tok);
@@ -1394,7 +1442,7 @@ static inline int web_client_process_url(RRDHOST *host, struct web_client *w, ch
RRDSET *st = rrdset_find_byname(host, tok);
if(!st) st = rrdset_find(host, tok);
if(!st) {
- w->response.data->contenttype = CT_TEXT_HTML;
+ w->response.data->content_type = CT_TEXT_HTML;
buffer_strcat(w->response.data, "Chart is not found: ");
buffer_strcat_htmlescape(w->response.data, tok);
debug(D_WEB_CLIENT_ACCESS, "%llu: %s is not found.", w->id, tok);
@@ -1408,7 +1456,7 @@ static inline int web_client_process_url(RRDHOST *host, struct web_client *w, ch
else
rrdset_flag_set(st, RRDSET_FLAG_DEBUG);
- w->response.data->contenttype = CT_TEXT_HTML;
+ w->response.data->content_type = CT_TEXT_HTML;
buffer_sprintf(w->response.data, "Chart has now debug %s: ", rrdset_flag_check(st, RRDSET_FLAG_DEBUG)?"enabled":"disabled");
buffer_strcat_htmlescape(w->response.data, tok);
debug(D_WEB_CLIENT_ACCESS, "%llu: debug for %s is %s.", w->id, tok, rrdset_flag_check(st, RRDSET_FLAG_DEBUG)?"enabled":"disabled");
@@ -1436,18 +1484,14 @@ static inline int web_client_process_url(RRDHOST *host, struct web_client *w, ch
#endif /* NETDATA_INTERNAL_CHECKS */
}
- char filename[FILENAME_MAX+1];
- url = filename;
- strncpyz(filename, w->last_url, FILENAME_MAX);
- tok = mystrsep(&url, "?");
buffer_flush(w->response.data);
- return mysendfile(w, (tok && *tok)?tok:"/");
+ return mysendfile(w, filename);
}
void web_client_process_request(struct web_client *w) {
// start timing us
- now_realtime_timeval(&w->tv_in);
+ web_client_timeout_checkpoint_init(w);
switch(http_request_validate(w)) {
case HTTP_VALIDATION_OK:
@@ -1458,7 +1502,7 @@ void web_client_process_request(struct web_client *w) {
return;
}
- w->response.code = rrdpush_receiver_thread_spawn(w, w->decoded_url);
+ w->response.code = rrdpush_receiver_thread_spawn(w, (char *)buffer_tostring(w->url_query_string_decoded));
return;
case WEB_CLIENT_MODE_OPTIONS:
@@ -1473,14 +1517,15 @@ void web_client_process_request(struct web_client *w) {
break;
}
- w->response.data->contenttype = CT_TEXT_PLAIN;
+ w->response.data->content_type = CT_TEXT_PLAIN;
buffer_flush(w->response.data);
buffer_strcat(w->response.data, "OK");
w->response.code = HTTP_RESP_OK;
break;
case WEB_CLIENT_MODE_FILECOPY:
- case WEB_CLIENT_MODE_NORMAL:
+ case WEB_CLIENT_MODE_POST:
+ case WEB_CLIENT_MODE_GET:
if(unlikely(
!web_client_can_access_dashboard(w) &&
!web_client_can_access_registry(w) &&
@@ -1492,23 +1537,29 @@ void web_client_process_request(struct web_client *w) {
break;
}
- w->response.code = web_client_process_url(localhost, w, w->decoded_url);
+ w->response.code = web_client_process_url(localhost, w, (char *)buffer_tostring(w->url_path_decoded));
break;
}
break;
case HTTP_VALIDATION_INCOMPLETE:
if(w->response.data->len > NETDATA_WEB_REQUEST_MAX_SIZE) {
- strcpy(w->last_url, "too big request");
+ buffer_flush(w->url_as_received);
+ buffer_strcat(w->url_as_received, "too big request");
debug(D_WEB_CLIENT_ACCESS, "%llu: Received request is too big (%zu bytes).", w->id, w->response.data->len);
+ size_t len = w->response.data->len;
buffer_flush(w->response.data);
- buffer_sprintf(w->response.data, "Received request is too big (%zu bytes).\r\n", w->response.data->len);
+ buffer_sprintf(w->response.data, "Received request is too big (received %zu bytes, max is %zu bytes).\r\n", len, (size_t)NETDATA_WEB_REQUEST_MAX_SIZE);
w->response.code = HTTP_RESP_BAD_REQUEST;
}
else {
// wait for more data
+ // set to normal to prevent web_server_rcv_callback
+ // from going into stream mode
+ if (w->mode == WEB_CLIENT_MODE_STREAM)
+ w->mode = WEB_CLIENT_MODE_GET;
return;
}
break;
@@ -1516,7 +1567,7 @@ void web_client_process_request(struct web_client *w) {
case HTTP_VALIDATION_REDIRECT:
{
buffer_flush(w->response.data);
- w->response.data->contenttype = CT_TEXT_HTML;
+ w->response.data->content_type = CT_TEXT_HTML;
buffer_strcat(w->response.data,
"<!DOCTYPE html><!-- SPDX-License-Identifier: GPL-3.0-or-later --><html>"
"<body onload=\"window.location.href ='https://'+ window.location.hostname +"
@@ -1530,29 +1581,43 @@ void web_client_process_request(struct web_client *w) {
}
#endif
case HTTP_VALIDATION_MALFORMED_URL:
- debug(D_WEB_CLIENT_ACCESS, "%llu: URL parsing failed (malformed URL). Cannot understand '%s'.", w->id, w->response.data->buffer);
+ debug(D_WEB_CLIENT_ACCESS, "%llu: Malformed URL '%s'.", w->id, w->response.data->buffer);
+
+ buffer_flush(w->response.data);
+ buffer_strcat(w->response.data, "Malformed URL...\r\n");
+ w->response.code = HTTP_RESP_BAD_REQUEST;
+ break;
+ case HTTP_VALIDATION_EXCESS_REQUEST_DATA:
+ debug(D_WEB_CLIENT_ACCESS, "%llu: Excess data in request '%s'.", w->id, w->response.data->buffer);
+
+ buffer_flush(w->response.data);
+ buffer_strcat(w->response.data, "Excess data in request.\r\n");
+ w->response.code = HTTP_RESP_BAD_REQUEST;
+ break;
+ case HTTP_VALIDATION_TOO_MANY_READ_RETRIES:
+ debug(D_WEB_CLIENT_ACCESS, "%llu: Too many retries to read request '%s'.", w->id, w->response.data->buffer);
buffer_flush(w->response.data);
- buffer_strcat(w->response.data, "URL not valid. I don't understand you...\r\n");
+ buffer_strcat(w->response.data, "Too many retries to read request.\r\n");
w->response.code = HTTP_RESP_BAD_REQUEST;
break;
case HTTP_VALIDATION_NOT_SUPPORTED:
- debug(D_WEB_CLIENT_ACCESS, "%llu: Cannot understand '%s'.", w->id, w->response.data->buffer);
+ debug(D_WEB_CLIENT_ACCESS, "%llu: HTTP method requested is not supported '%s'.", w->id, w->response.data->buffer);
buffer_flush(w->response.data);
- buffer_strcat(w->response.data, "I don't understand you...\r\n");
+ buffer_strcat(w->response.data, "HTTP method requested is not supported...\r\n");
w->response.code = HTTP_RESP_BAD_REQUEST;
break;
}
// keep track of the processing time
- now_realtime_timeval(&w->tv_ready);
+ web_client_timeout_checkpoint_response_ready(w, NULL);
w->response.sent = 0;
// set a proper last modified date
if(unlikely(!w->response.data->date))
- w->response.data->date = w->tv_ready.tv_sec;
+ w->response.data->date = w->timings.tv_ready.tv_sec;
web_client_send_http_header(w);
@@ -1569,7 +1634,8 @@ void web_client_process_request(struct web_client *w) {
debug(D_WEB_CLIENT, "%llu: Done preparing the OPTIONS response. Sending data (%zu bytes) to client.", w->id, w->response.data->len);
break;
- case WEB_CLIENT_MODE_NORMAL:
+ case WEB_CLIENT_MODE_POST:
+ case WEB_CLIENT_MODE_GET:
debug(D_WEB_CLIENT, "%llu: Done preparing the response. Sending data (%zu bytes) to client.", w->id, w->response.data->len);
break;
@@ -1612,7 +1678,7 @@ ssize_t web_client_send_chunk_header(struct web_client *w, size_t len)
bytes = web_client_send_data(w,buf,strlen(buf),0);
if(bytes > 0) {
debug(D_DEFLATE, "%llu: Sent chunk header %zd bytes.", w->id, bytes);
- w->stats_sent_bytes += bytes;
+ w->statistics.sent_bytes += bytes;
}
else if(bytes == 0) {
@@ -1634,7 +1700,7 @@ ssize_t web_client_send_chunk_close(struct web_client *w)
bytes = web_client_send_data(w,"\r\n",2,0);
if(bytes > 0) {
debug(D_DEFLATE, "%llu: Sent chunk suffix %zd bytes.", w->id, bytes);
- w->stats_sent_bytes += bytes;
+ w->statistics.sent_bytes += bytes;
}
else if(bytes == 0) {
@@ -1656,7 +1722,7 @@ ssize_t web_client_send_chunk_finalize(struct web_client *w)
bytes = web_client_send_data(w,"\r\n0\r\n\r\n",7,0);
if(bytes > 0) {
debug(D_DEFLATE, "%llu: Sent chunk suffix %zd bytes.", w->id, bytes);
- w->stats_sent_bytes += bytes;
+ w->statistics.sent_bytes += bytes;
}
else if(bytes == 0) {
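The three chunk helpers touched above implement HTTP/1.1 chunked transfer framing: `web_client_send_chunk_header()` emits the chunk length in hex followed by `\r\n`, `web_client_send_chunk_close()` emits the `\r\n` that terminates a chunk's data, and `web_client_send_chunk_finalize()` emits `\r\n0\r\n\r\n`, closing the last chunk and sending the zero-length terminator. A sketch of the header formatting (hex case and buffer size are assumptions; the wire format itself is standard):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Each chunk on the wire is framed as: <hex length>\r\n<data>\r\n,
 * and the body ends with the zero-length chunk "0\r\n\r\n". */
static int format_chunk_header(char *out, size_t out_len, size_t payload_len) {
    return snprintf(out, out_len, "%zX\r\n", payload_len);
}
```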
@@ -1734,7 +1800,7 @@ ssize_t web_client_send_deflate(struct web_client *w)
// ask for FINISH if we have all the input
int flush = Z_SYNC_FLUSH;
- if(w->mode == WEB_CLIENT_MODE_NORMAL
+ if((w->mode == WEB_CLIENT_MODE_GET || w->mode == WEB_CLIENT_MODE_POST)
|| (w->mode == WEB_CLIENT_MODE_FILECOPY && !web_client_has_wait_receive(w) && w->response.data->len == w->response.rlen)) {
flush = Z_FINISH;
debug(D_DEFLATE, "%llu: Requesting Z_FINISH, if possible.", w->id);
@@ -1768,7 +1834,7 @@ ssize_t web_client_send_deflate(struct web_client *w)
len = web_client_send_data(w,&w->response.zbuffer[w->response.zsent], (size_t) (w->response.zhave - w->response.zsent), MSG_DONTWAIT);
if(len > 0) {
- w->stats_sent_bytes += len;
+ w->statistics.sent_bytes += len;
w->response.zsent += len;
len += t;
debug(D_WEB_CLIENT, "%llu: Sent %zd bytes.", w->id, len);
@@ -1823,7 +1889,7 @@ ssize_t web_client_send(struct web_client *w) {
bytes = web_client_send_data(w,&w->response.data->buffer[w->response.sent], w->response.data->len - w->response.sent, MSG_DONTWAIT);
if(likely(bytes > 0)) {
- w->stats_sent_bytes += bytes;
+ w->statistics.sent_bytes += bytes;
w->response.sent += bytes;
debug(D_WEB_CLIENT, "%llu: Sent %zd bytes.", w->id, bytes);
}
@@ -1899,12 +1965,13 @@ ssize_t web_client_receive(struct web_client *w)
ssize_t left = (ssize_t)(w->response.data->size - w->response.data->len);
// do we have any space for more data?
- buffer_need_bytes(w->response.data, NETDATA_WEB_REQUEST_RECEIVE_SIZE);
+ buffer_need_bytes(w->response.data, NETDATA_WEB_REQUEST_INITIAL_SIZE);
#ifdef ENABLE_HTTPS
if ( (!web_client_check_unix(w)) && (netdata_ssl_srv_ctx) ) {
if ( ( w->ssl.conn ) && (!w->ssl.flags)) {
bytes = netdata_ssl_read(w->ssl.conn, &w->response.data->buffer[w->response.data->len], (size_t) (left - 1));
+ web_client_enable_wait_from_ssl(w, bytes);
}else {
bytes = recv(w->ifd, &w->response.data->buffer[w->response.data->len], (size_t) (left - 1), MSG_DONTWAIT);
}
@@ -1917,7 +1984,7 @@ ssize_t web_client_receive(struct web_client *w)
#endif
if(likely(bytes > 0)) {
- w->stats_received_bytes += bytes;
+ w->statistics.received_bytes += bytes;
size_t old = w->response.data->len;
(void)old;
@@ -1957,3 +2024,195 @@ int web_client_socket_is_now_used_for_streaming(struct web_client *w) {
return HTTP_RESP_OK;
}
+
+void web_client_decode_path_and_query_string(struct web_client *w, const char *path_and_query_string) {
+ char buffer[NETDATA_WEB_REQUEST_URL_SIZE + 2];
+ buffer[0] = '\0';
+
+ buffer_flush(w->url_path_decoded);
+ buffer_flush(w->url_query_string_decoded);
+
+ if(buffer_strlen(w->url_as_received) == 0)
+ // do not overwrite this if it is already filled
+ buffer_strcat(w->url_as_received, path_and_query_string);
+
+ if(w->mode == WEB_CLIENT_MODE_STREAM) {
+ // in stream mode, there is no path
+
+ url_decode_r(buffer, path_and_query_string, NETDATA_WEB_REQUEST_URL_SIZE + 1);
+
+ buffer[NETDATA_WEB_REQUEST_URL_SIZE + 1] = '\0';
+ buffer_strcat(w->url_query_string_decoded, buffer);
+ }
+ else {
+ // in non-stream mode, there is a path
+
+ // FIXME - the way this is implemented, query string params never accept the symbol &, not even encoded as %26
+ // To support the symbol & in query string params, we need to turn the url_query_string_decoded into a
+ // dictionary and decode each of the parameters individually.
+ // OR: in url_query_string_decoded use as separator a control character that cannot appear in the URL.
+
+ char *question_mark_start = strchr(path_and_query_string, '?');
+ if (question_mark_start)
+ url_decode_r(buffer, question_mark_start, NETDATA_WEB_REQUEST_URL_SIZE + 1);
+
+ buffer[NETDATA_WEB_REQUEST_URL_SIZE + 1] = '\0';
+ buffer_strcat(w->url_query_string_decoded, buffer);
+
+ if (question_mark_start) {
+ char c = *question_mark_start;
+ *question_mark_start = '\0';
+ url_decode_r(buffer, path_and_query_string, NETDATA_WEB_REQUEST_URL_SIZE + 1);
+ *question_mark_start = c;
+ } else
+ url_decode_r(buffer, path_and_query_string, NETDATA_WEB_REQUEST_URL_SIZE + 1);
+
+ buffer[NETDATA_WEB_REQUEST_URL_SIZE + 1] = '\0';
+ buffer_strcat(w->url_path_decoded, buffer);
+ }
+}
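The function above splits the request at the first `?`, decoding the query string (keeping its leading `?`) and the path separately. A minimal standalone sketch of that split, with hypothetical names and without the `url_decode_r()` percent-decoding step:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

// Hypothetical sketch: split "path?query" the way
// web_client_decode_path_and_query_string() does - everything from the
// first '?' (inclusive) is the query string, everything before it is the path.
static void split_path_and_query(const char *url, char *path, size_t path_size,
                                 char *query, size_t query_size) {
    const char *q = strchr(url, '?');
    if (q) {
        size_t plen = (size_t)(q - url);
        if (plen >= path_size)
            plen = path_size - 1;
        memcpy(path, url, plen);
        path[plen] = '\0';
        snprintf(query, query_size, "%s", q); // keep the leading '?'
    }
    else {
        snprintf(path, path_size, "%s", url);
        query[0] = '\0';
    }
}
```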
+
+#ifdef ENABLE_HTTPS
+void web_client_reuse_ssl(struct web_client *w) {
+ if (netdata_ssl_srv_ctx) {
+ if (w->ssl.conn) {
+ SSL_SESSION *session = SSL_get_session(w->ssl.conn);
+ SSL *old = w->ssl.conn;
+ w->ssl.conn = SSL_new(netdata_ssl_srv_ctx);
+ if (session) {
+#if OPENSSL_VERSION_NUMBER >= OPENSSL_VERSION_111
+ if (SSL_SESSION_is_resumable(session))
+#endif
+ SSL_set_session(w->ssl.conn, session);
+ }
+ SSL_free(old);
+ }
+ }
+}
+#endif
+
+void web_client_zero(struct web_client *w) {
+ // zero everything about it - but keep the buffers
+
+ web_client_reset_allocations(w, false);
+
+ // remember the pointers to the buffers
+ BUFFER *b1 = w->response.data;
+ BUFFER *b2 = w->response.header;
+ BUFFER *b3 = w->response.header_output;
+ BUFFER *b4 = w->url_path_decoded;
+ BUFFER *b5 = w->url_as_received;
+ BUFFER *b6 = w->url_query_string_decoded;
+
+#ifdef ENABLE_HTTPS
+ web_client_reuse_ssl(w);
+ SSL *ssl = w->ssl.conn;
+#endif
+
+ size_t use_count = w->use_count;
+ size_t *statistics_memory_accounting = w->statistics.memory_accounting;
+
+ // zero everything
+ memset(w, 0, sizeof(struct web_client));
+
+ w->ifd = w->ofd = -1;
+ w->statistics.memory_accounting = statistics_memory_accounting;
+ w->use_count = use_count;
+
+#ifdef ENABLE_HTTPS
+ w->ssl.conn = ssl;
+ w->ssl.flags = NETDATA_SSL_START;
+ debug(D_WEB_CLIENT_ACCESS,"Reusing SSL structure with (w->ssl = NULL, w->accepted = %u)", w->ssl.flags);
+#endif
+
+ // restore the pointers of the buffers
+ w->response.data = b1;
+ w->response.header = b2;
+ w->response.header_output = b3;
+ w->url_path_decoded = b4;
+ w->url_as_received = b5;
+ w->url_query_string_decoded = b6;
+}
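`web_client_zero()` uses a common C pattern: save the heap pointers the struct owns, `memset()` the whole struct, then restore the pointers so the buffers survive the reset. A minimal sketch with a hypothetical type:

```c
#include <assert.h>
#include <string.h>

// Illustrative sketch (hypothetical struct): reset a struct to zero while
// preserving heap pointers and counters it owns, the pattern
// web_client_zero() uses to keep its BUFFERs across memset().
struct conn {
    int fd;
    size_t use_count;
    char *scratch; // heap buffer we want to keep across resets
};

static void conn_zero(struct conn *c) {
    char *scratch = c->scratch;      // remember owned pointers
    size_t use_count = c->use_count; // and counters that must survive
    memset(c, 0, sizeof(*c));        // wipe everything else
    c->fd = -1;                      // non-zero defaults go after the memset
    c->scratch = scratch;            // restore
    c->use_count = use_count;
}
```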
+
+struct web_client *web_client_create(size_t *statistics_memory_accounting) {
+ struct web_client *w = (struct web_client *)callocz(1, sizeof(struct web_client));
+ w->use_count = 1;
+ w->statistics.memory_accounting = statistics_memory_accounting;
+
+ w->url_as_received = buffer_create(NETDATA_WEB_DECODED_URL_INITIAL_SIZE, w->statistics.memory_accounting);
+ w->url_path_decoded = buffer_create(NETDATA_WEB_DECODED_URL_INITIAL_SIZE, w->statistics.memory_accounting);
+ w->url_query_string_decoded = buffer_create(NETDATA_WEB_DECODED_URL_INITIAL_SIZE, w->statistics.memory_accounting);
+ w->response.data = buffer_create(NETDATA_WEB_RESPONSE_INITIAL_SIZE, w->statistics.memory_accounting);
+ w->response.header = buffer_create(NETDATA_WEB_RESPONSE_HEADER_INITIAL_SIZE, w->statistics.memory_accounting);
+ w->response.header_output = buffer_create(NETDATA_WEB_RESPONSE_HEADER_INITIAL_SIZE, w->statistics.memory_accounting);
+
+ __atomic_add_fetch(w->statistics.memory_accounting, sizeof(struct web_client), __ATOMIC_RELAXED);
+
+ return w;
+}
+
+void web_client_free(struct web_client *w) {
+ web_client_reset_allocations(w, true);
+
+ __atomic_sub_fetch(w->statistics.memory_accounting, sizeof(struct web_client), __ATOMIC_RELAXED);
+ freez(w);
+}
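The constructor/destructor pair above keeps a shared byte counter honest: every allocation adds its size with a relaxed atomic and the destructor subtracts it. A small sketch of that accounting pattern, using the GCC/Clang `__atomic` builtins (hypothetical helper names):

```c
#include <assert.h>
#include <stdlib.h>

// Sketch of the memory-accounting pattern in web_client_create() /
// web_client_free(): add the allocation size to a shared counter with a
// relaxed atomic, subtract it again when freeing.
static void *tracked_alloc(size_t size, size_t *counter) {
    __atomic_add_fetch(counter, size, __ATOMIC_RELAXED);
    return calloc(1, size);
}

static void tracked_free(void *p, size_t size, size_t *counter) {
    __atomic_sub_fetch(counter, size, __ATOMIC_RELAXED);
    free(p);
}
```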
+
+inline void web_client_timeout_checkpoint_init(struct web_client *w) {
+ now_monotonic_high_precision_timeval(&w->timings.tv_in);
+}
+
+inline void web_client_timeout_checkpoint_set(struct web_client *w, int timeout_ms) {
+ w->timings.timeout_ut = timeout_ms * USEC_PER_MS;
+
+ if(!w->timings.tv_in.tv_sec)
+ web_client_timeout_checkpoint_init(w);
+
+ if(!w->timings.tv_timeout_last_checkpoint.tv_sec)
+ w->timings.tv_timeout_last_checkpoint = w->timings.tv_in;
+}
+
+inline usec_t web_client_timeout_checkpoint(struct web_client *w) {
+ struct timeval now;
+ now_monotonic_high_precision_timeval(&now);
+
+ if (!w->timings.tv_timeout_last_checkpoint.tv_sec)
+ w->timings.tv_timeout_last_checkpoint = w->timings.tv_in;
+
+ usec_t since_last_check_ut = dt_usec(&w->timings.tv_timeout_last_checkpoint, &now);
+
+ w->timings.tv_timeout_last_checkpoint = now;
+
+ return since_last_check_ut;
+}
+
+inline usec_t web_client_timeout_checkpoint_response_ready(struct web_client *w, usec_t *usec_since_last_checkpoint) {
+ usec_t since_last_check_ut = web_client_timeout_checkpoint(w);
+ if(usec_since_last_checkpoint)
+ *usec_since_last_checkpoint = since_last_check_ut;
+
+ w->timings.tv_ready = w->timings.tv_timeout_last_checkpoint;
+
+ // return the total time of the query
+ return dt_usec(&w->timings.tv_in, &w->timings.tv_ready);
+}
+
+inline bool web_client_timeout_checkpoint_and_check(struct web_client *w, usec_t *usec_since_last_checkpoint) {
+
+ usec_t since_last_check_ut = web_client_timeout_checkpoint(w);
+ if(usec_since_last_checkpoint)
+ *usec_since_last_checkpoint = since_last_check_ut;
+
+ if(!w->timings.timeout_ut)
+ return false;
+
+ usec_t since_reception_ut = dt_usec(&w->timings.tv_in, &w->timings.tv_timeout_last_checkpoint);
+ if (since_reception_ut >= w->timings.timeout_ut) {
+ buffer_flush(w->response.data);
+ buffer_strcat(w->response.data, "Query timeout exceeded");
+ w->response.code = HTTP_RESP_BACKEND_FETCH_FAILED;
+ return true;
+ }
+
+ return false;
+}
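The timeout helpers above all reduce to one operation: measure the microseconds between two `struct timeval` checkpoints and advance the last-checkpoint marker. A standalone sketch of that mechanic, with a hypothetical stand-in for netdata's `dt_usec()`:

```c
#include <assert.h>
#include <sys/time.h>

typedef unsigned long long usec_t;

// Hypothetical stand-in for netdata's dt_usec(): microseconds elapsed
// from *old to *now, clamped at zero.
static usec_t dt_usec(const struct timeval *old, const struct timeval *now) {
    long long d = (long long)(now->tv_sec - old->tv_sec) * 1000000LL
                + (long long)(now->tv_usec - old->tv_usec);
    return d > 0 ? (usec_t)d : 0;
}

// The checkpoint pattern used by web_client_timeout_checkpoint(): return
// the time since the last checkpoint and advance the checkpoint to now.
static usec_t checkpoint(struct timeval *last, const struct timeval *now) {
    usec_t since = dt_usec(last, now);
    *last = *now;
    return since;
}
```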
diff --git a/web/server/web_client.h b/web/server/web_client.h
index d0360f4f9..c61a8b813 100644
--- a/web/server/web_client.h
+++ b/web/server/web_client.h
@@ -24,33 +24,37 @@ extern int web_enable_gzip, web_gzip_level, web_gzip_strategy;
#define HTTP_RESP_NOT_FOUND 404
#define HTTP_RESP_CONFLICT 409
#define HTTP_RESP_PRECOND_FAIL 412
+#define HTTP_RESP_CONTENT_TOO_LONG 413
// HTTP_CODES 5XX Server Errors
#define HTTP_RESP_INTERNAL_SERVER_ERROR 500
-#define HTTP_RESP_BACKEND_FETCH_FAILED 503 // 503 is right
-#define HTTP_RESP_SERVICE_UNAVAILABLE 503 // 503 is right
+#define HTTP_RESP_BACKEND_FETCH_FAILED 503
+#define HTTP_RESP_SERVICE_UNAVAILABLE 503
#define HTTP_RESP_GATEWAY_TIMEOUT 504
#define HTTP_RESP_BACKEND_RESPONSE_INVALID 591
+#define HTTP_REQ_MAX_HEADER_FETCH_TRIES 100
+
extern int respect_web_browser_do_not_track_policy;
extern char *web_x_frame_options;
typedef enum web_client_mode {
- WEB_CLIENT_MODE_NORMAL = 0,
- WEB_CLIENT_MODE_FILECOPY = 1,
- WEB_CLIENT_MODE_OPTIONS = 2,
- WEB_CLIENT_MODE_STREAM = 3
+ WEB_CLIENT_MODE_GET = 0,
+ WEB_CLIENT_MODE_POST = 1,
+ WEB_CLIENT_MODE_FILECOPY = 2,
+ WEB_CLIENT_MODE_OPTIONS = 3,
+ WEB_CLIENT_MODE_STREAM = 4,
} WEB_CLIENT_MODE;
typedef enum {
HTTP_VALIDATION_OK,
HTTP_VALIDATION_NOT_SUPPORTED,
+ HTTP_VALIDATION_TOO_MANY_READ_RETRIES,
+ HTTP_VALIDATION_EXCESS_REQUEST_DATA,
HTTP_VALIDATION_MALFORMED_URL,
-#ifdef ENABLE_HTTPS
HTTP_VALIDATION_INCOMPLETE,
+#ifdef ENABLE_HTTPS
HTTP_VALIDATION_REDIRECT
-#else
- HTTP_VALIDATION_INCOMPLETE
#endif
} HTTP_VALIDATION;
@@ -71,6 +75,9 @@ typedef enum web_client_flags {
WEB_CLIENT_FLAG_DONT_CLOSE_SOCKET = 1 << 9, // don't close the socket when cleaning up (static-threaded web server)
WEB_CLIENT_CHUNKED_TRANSFER = 1 << 10, // chunked transfer (used with zlib compression)
+
+ WEB_CLIENT_FLAG_SSL_WAIT_RECEIVE = 1 << 11, // if set, we are waiting for more input data from an SSL connection
+ WEB_CLIENT_FLAG_SSL_WAIT_SEND = 1 << 12, // if set, we have data to send to the client from an SSL connection
} WEB_CLIENT_FLAGS;
#define web_client_flag_check(w, flag) ((w)->flags & (flag))
@@ -100,6 +107,14 @@ typedef enum web_client_flags {
#define web_client_enable_wait_send(w) web_client_flag_set(w, WEB_CLIENT_FLAG_WAIT_SEND)
#define web_client_disable_wait_send(w) web_client_flag_clear(w, WEB_CLIENT_FLAG_WAIT_SEND)
+#define web_client_has_ssl_wait_receive(w) web_client_flag_check(w, WEB_CLIENT_FLAG_SSL_WAIT_RECEIVE)
+#define web_client_enable_ssl_wait_receive(w) web_client_flag_set(w, WEB_CLIENT_FLAG_SSL_WAIT_RECEIVE)
+#define web_client_disable_ssl_wait_receive(w) web_client_flag_clear(w, WEB_CLIENT_FLAG_SSL_WAIT_RECEIVE)
+
+#define web_client_has_ssl_wait_send(w) web_client_flag_check(w, WEB_CLIENT_FLAG_SSL_WAIT_SEND)
+#define web_client_enable_ssl_wait_send(w) web_client_flag_set(w, WEB_CLIENT_FLAG_SSL_WAIT_SEND)
+#define web_client_disable_ssl_wait_send(w) web_client_flag_clear(w, WEB_CLIENT_FLAG_SSL_WAIT_SEND)
+
#define web_client_set_tcp(w) web_client_flag_set(w, WEB_CLIENT_FLAG_TCP_CLIENT)
#define web_client_set_unix(w) web_client_flag_set(w, WEB_CLIENT_FLAG_UNIX_CLIENT)
#define web_client_check_unix(w) web_client_flag_check(w, WEB_CLIENT_FLAG_UNIX_CLIENT)
@@ -107,90 +122,107 @@ typedef enum web_client_flags {
#define web_client_is_corkable(w) web_client_flag_check(w, WEB_CLIENT_FLAG_TCP_CLIENT)
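The `web_client_has/enable/disable_*` helpers above all follow the same bit-mask macro pattern over the flags enum. A minimal standalone version (hypothetical names):

```c
#include <assert.h>

// Minimal sketch of the flag-macro pattern used by the web client:
// check, set and clear a bit in a flags field.
typedef enum {
    FLAG_WAIT_RECEIVE = 1 << 0,
    FLAG_WAIT_SEND    = 1 << 1,
} conn_flags_t;

struct conn { conn_flags_t flags; };

#define conn_flag_check(c, f) ((c)->flags & (f))
#define conn_flag_set(c, f)   (c)->flags |= (f)
#define conn_flag_clear(c, f) (c)->flags &= ~(f)
```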
-#define NETDATA_WEB_REQUEST_URL_SIZE 8192
+#define NETDATA_WEB_REQUEST_URL_SIZE 65536 // static allocation
+
#define NETDATA_WEB_RESPONSE_ZLIB_CHUNK_SIZE 16384
-#define NETDATA_WEB_RESPONSE_HEADER_SIZE 4096
-#define NETDATA_WEB_REQUEST_COOKIE_SIZE 1024
-#define NETDATA_WEB_REQUEST_ORIGIN_HEADER_SIZE 1024
-#define NETDATA_WEB_RESPONSE_INITIAL_SIZE 16384
-#define NETDATA_WEB_REQUEST_RECEIVE_SIZE 16384
-#define NETDATA_WEB_REQUEST_MAX_SIZE 16384
+
+#define NETDATA_WEB_RESPONSE_HEADER_INITIAL_SIZE 4096
+#define NETDATA_WEB_RESPONSE_INITIAL_SIZE 8192
+#define NETDATA_WEB_REQUEST_INITIAL_SIZE 8192
+#define NETDATA_WEB_REQUEST_MAX_SIZE 65536
+#define NETDATA_WEB_DECODED_URL_INITIAL_SIZE 512
struct response {
- BUFFER *header; // our response header
- BUFFER *header_output; // internal use
- BUFFER *data; // our response data buffer
+ BUFFER *header; // our response header
+ BUFFER *header_output; // internal use
+ BUFFER *data; // our response data buffer
- int code; // the HTTP response code
+ short int code; // the HTTP response code
+ bool has_cookies;
size_t rlen; // if non-zero, the expected size of ifd (input of filecopy)
size_t sent; // current data length sent to output
- int zoutput; // if set to 1, web_client_send() will send compressed data
+ bool zoutput; // if true, web_client_send() will send compressed data
+
#ifdef NETDATA_WITH_ZLIB
+ bool zinitialized;
z_stream zstream; // zlib stream for sending compressed output to client
- Bytef zbuffer[NETDATA_WEB_RESPONSE_ZLIB_CHUNK_SIZE]; // temporary buffer for storing compressed output
size_t zsent; // the compressed bytes we have sent to the client
size_t zhave; // the compressed bytes that we have received from zlib
- unsigned int zinitialized : 1;
+ Bytef zbuffer[NETDATA_WEB_RESPONSE_ZLIB_CHUNK_SIZE]; // temporary buffer for storing compressed output
#endif /* NETDATA_WITH_ZLIB */
};
+struct web_client;
+typedef bool (*web_client_interrupt_t)(struct web_client *, void *data);
+
struct web_client {
unsigned long long id;
+ size_t use_count;
- WEB_CLIENT_FLAGS flags; // status flags for the client
- WEB_CLIENT_MODE mode; // the operational mode of the client
- WEB_CLIENT_ACL acl; // the access list of the client
- int port_acl; // the operations permitted on the port the client connected to
- char *auth_bearer_token; // the Bearer auth token (if sent)
+ WEB_CLIENT_FLAGS flags; // status flags for the client
+ WEB_CLIENT_MODE mode; // the operational mode of the client
+ WEB_CLIENT_ACL acl; // the access list of the client
+ int port_acl; // the operations permitted on the port the client connected to
size_t header_parse_tries;
size_t header_parse_last_size;
- int tcp_cork; // 1 = we have a cork on the socket
-
+ bool tcp_cork;
int ifd;
int ofd;
- char client_ip[INET6_ADDRSTRLEN]; // Defined buffer sizes include null-terminators
+ char client_ip[INET6_ADDRSTRLEN]; // Defined buffer sizes include null-terminators
char client_port[NI_MAXSERV];
- char server_host[NI_MAXHOST];
char client_host[NI_MAXHOST];
- char forwarded_host[NI_MAXHOST]; //Used with proxy
- char decoded_url[NETDATA_WEB_REQUEST_URL_SIZE + 1]; // we decode the URL in this buffer
- char decoded_query_string[NETDATA_WEB_REQUEST_URL_SIZE + 1]; // we decode the Query String in this buffer
- char last_url[NETDATA_WEB_REQUEST_URL_SIZE + 1]; // we keep a copy of the decoded URL here
- size_t url_path_length;
- char separator; // This value can be either '?' or 'f'
- char *url_search_path; //A pointer to the search path sent by the client
+ BUFFER *url_as_received; // the entire URL as received, used for logging - DO NOT MODIFY
+ BUFFER *url_path_decoded; // the path, decoded - it is incrementally parsed and altered
+ BUFFER *url_query_string_decoded; // the query string, decoded - it is incrementally parsed and altered
- struct timeval tv_in, tv_ready;
+ // THESE NEED TO BE FREED
+ char *auth_bearer_token; // the Bearer auth token (if sent)
+ char *server_host; // the Host: header
+ char *forwarded_host; // the X-Forwarded-Host: header
+ char *origin; // the Origin: header
+ char *user_agent; // the User-Agent: header
- char cookie1[NETDATA_WEB_REQUEST_COOKIE_SIZE + 1];
- char cookie2[NETDATA_WEB_REQUEST_COOKIE_SIZE + 1];
- char origin[NETDATA_WEB_REQUEST_ORIGIN_HEADER_SIZE + 1];
- char *user_agent;
-
- struct response response;
-
- size_t stats_received_bytes;
- size_t stats_sent_bytes;
-
- // cache of web_client allocations
- struct web_client *prev; // maintain a linked list of web clients
- struct web_client *next; // for the web servers that need it
-
- // MULTI-THREADED WEB SERVER MEMBERS
- netdata_thread_t thread; // the thread servicing this client
- volatile int running; // 1 when the thread runs, 0 otherwise
+ char *post_payload; // when this request is a POST, this has the payload
+ size_t post_payload_size; // the size of the buffer allocated for the payload
+ // the actual contents may be less than the size
// STATIC-THREADED WEB SERVER MEMBERS
- size_t pollinfo_slot; // POLLINFO slot of the web client
- size_t pollinfo_filecopy_slot; // POLLINFO slot of the file read
+ size_t pollinfo_slot; // POLLINFO slot of the web client
+ size_t pollinfo_filecopy_slot; // POLLINFO slot of the file read
+
#ifdef ENABLE_HTTPS
struct netdata_ssl ssl;
#endif
+
+ struct { // A callback to check if the query should be interrupted / stopped
+ web_client_interrupt_t callback;
+ void *callback_data;
+ } interrupt;
+
+ struct {
+ size_t received_bytes;
+ size_t sent_bytes;
+ size_t *memory_accounting; // temporary pointer for constructor to use
+ } statistics;
+
+ struct {
+ usec_t timeout_ut; // timeout if set, or zero
+ struct timeval tv_in; // request received
+ struct timeval tv_ready; // request processed - response ready
+ struct timeval tv_timeout_last_checkpoint; // last checkpoint
+ } timings;
+
+ struct {
+ struct web_client *prev;
+ struct web_client *next;
+ } cache;
+
+ struct response response;
};
int web_client_permission_denied(struct web_client *w);
@@ -211,6 +243,28 @@ char *strip_control_characters(char *url);
int web_client_socket_is_now_used_for_streaming(struct web_client *w);
+void web_client_zero(struct web_client *w);
+struct web_client *web_client_create(size_t *statistics_memory_accounting);
+void web_client_free(struct web_client *w);
+
+#ifdef ENABLE_HTTPS
+void web_client_reuse_ssl(struct web_client *w);
+#endif
+
+#include "web/api/web_api_v1.h"
+#include "web/api/web_api_v2.h"
#include "daemon/common.h"
+void web_client_decode_path_and_query_string(struct web_client *w, const char *path_and_query_string);
+int web_client_api_request(RRDHOST *host, struct web_client *w, char *url_path_fragment);
+const char *web_content_type_to_string(HTTP_CONTENT_TYPE content_type);
+void web_client_enable_deflate(struct web_client *w, int gzip);
+int web_client_api_request_with_node_selection(RRDHOST *host, struct web_client *w, char *decoded_url_path);
+
+void web_client_timeout_checkpoint_init(struct web_client *w);
+void web_client_timeout_checkpoint_set(struct web_client *w, int timeout_ms);
+usec_t web_client_timeout_checkpoint(struct web_client *w);
+bool web_client_timeout_checkpoint_and_check(struct web_client *w, usec_t *usec_since_last_checkpoint);
+usec_t web_client_timeout_checkpoint_response_ready(struct web_client *w, usec_t *usec_since_last_checkpoint);
+
#endif
diff --git a/web/server/web_client_cache.c b/web/server/web_client_cache.c
index 4344209c8..b410ba7f9 100644
--- a/web/server/web_client_cache.c
+++ b/web/server/web_client_cache.c
@@ -6,77 +6,6 @@
// ----------------------------------------------------------------------------
// allocate and free web_clients
-#ifdef ENABLE_HTTPS
-
-static void web_client_reuse_ssl(struct web_client *w) {
- if (netdata_ssl_srv_ctx) {
- if (w->ssl.conn) {
- SSL_SESSION *session = SSL_get_session(w->ssl.conn);
- SSL *old = w->ssl.conn;
- w->ssl.conn = SSL_new(netdata_ssl_srv_ctx);
- if (session) {
-#if OPENSSL_VERSION_NUMBER >= OPENSSL_VERSION_111
- if (SSL_SESSION_is_resumable(session))
-#endif
- SSL_set_session(w->ssl.conn, session);
- }
- SSL_free(old);
- }
- }
-}
-#endif
-
-
-static void web_client_zero(struct web_client *w) {
- // zero everything about it - but keep the buffers
-
- // remember the pointers to the buffers
- BUFFER *b1 = w->response.data;
- BUFFER *b2 = w->response.header;
- BUFFER *b3 = w->response.header_output;
-
- // empty the buffers
- buffer_flush(b1);
- buffer_flush(b2);
- buffer_flush(b3);
-
- freez(w->user_agent);
-
- // zero everything
- memset(w, 0, sizeof(struct web_client));
-
- // restore the pointers of the buffers
- w->response.data = b1;
- w->response.header = b2;
- w->response.header_output = b3;
-}
-
-static void web_client_free(struct web_client *w) {
- buffer_free(w->response.header_output);
- buffer_free(w->response.header);
- buffer_free(w->response.data);
- freez(w->user_agent);
-#ifdef ENABLE_HTTPS
- if ((!web_client_check_unix(w)) && (netdata_ssl_srv_ctx)) {
- if (w->ssl.conn) {
- SSL_free(w->ssl.conn);
- w->ssl.conn = NULL;
- }
- }
-#endif
- freez(w);
- __atomic_sub_fetch(&netdata_buffers_statistics.buffers_web, sizeof(struct web_client), __ATOMIC_RELAXED);
-}
-
-static struct web_client *web_client_alloc(void) {
- struct web_client *w = callocz(1, sizeof(struct web_client));
- __atomic_add_fetch(&netdata_buffers_statistics.buffers_web, sizeof(struct web_client), __ATOMIC_RELAXED);
- w->response.data = buffer_create(NETDATA_WEB_RESPONSE_INITIAL_SIZE, &netdata_buffers_statistics.buffers_web);
- w->response.header = buffer_create(NETDATA_WEB_RESPONSE_HEADER_SIZE, &netdata_buffers_statistics.buffers_web);
- w->response.header_output = buffer_create(NETDATA_WEB_RESPONSE_HEADER_SIZE, &netdata_buffers_statistics.buffers_web);
- return w;
-}
-
// ----------------------------------------------------------------------------
// web clients caching
@@ -87,194 +16,134 @@ static struct web_client *web_client_alloc(void) {
// The size of the cache is adaptive. It caches the structures of 2x
// the number of currently connected clients.
-// Comments per server:
-// SINGLE-THREADED : 1 cache is maintained
-// MULTI-THREADED : 1 cache is maintained
-// STATIC-THREADED : 1 cache for each thread of the web server
-
-__thread struct clients_cache web_clients_cache = {
- .pid = 0,
- .used = NULL,
- .used_count = 0,
- .avail = NULL,
- .avail_count = 0,
- .allocated = 0,
- .reused = 0
+static struct clients_cache {
+ struct {
+ SPINLOCK spinlock;
+ struct web_client *head; // the structures of the currently connected clients
+ size_t count; // the count of the currently connected clients
+
+ size_t allocated; // the number of allocations
+ size_t reused; // the number of re-uses
+ } used;
+
+ struct {
+ SPINLOCK spinlock;
+ struct web_client *head; // the cached structures, available for future clients
+ size_t count; // the number of cached structures
+ } avail;
+} web_clients_cache = {
+ .used = {
+ .spinlock = NETDATA_SPINLOCK_INITIALIZER,
+ .head = NULL,
+ .count = 0,
+ .reused = 0,
+ .allocated = 0,
+ },
+ .avail = {
+ .spinlock = NETDATA_SPINLOCK_INITIALIZER,
+ .head = NULL,
+ .count = 0,
+ },
};
-inline void web_client_cache_verify(int force) {
-#ifdef NETDATA_INTERNAL_CHECKS
- static __thread size_t count = 0;
- count++;
-
- if(unlikely(force || count > 1000)) {
- count = 0;
-
- struct web_client *w;
- size_t used = 0, avail = 0;
- for(w = web_clients_cache.used; w ; w = w->next) used++;
- for(w = web_clients_cache.avail; w ; w = w->next) avail++;
-
- info("web_client_cache has %zu (%zu) used and %zu (%zu) available clients, allocated %zu, reused %zu (hit %zu%%)."
- , used, web_clients_cache.used_count
- , avail, web_clients_cache.avail_count
- , web_clients_cache.allocated
- , web_clients_cache.reused
- , (web_clients_cache.allocated + web_clients_cache.reused)?(web_clients_cache.reused * 100 / (web_clients_cache.allocated + web_clients_cache.reused)):0
- );
- }
-#else
- if(unlikely(force)) {
- info("web_client_cache has %zu used and %zu available clients, allocated %zu, reused %zu (hit %zu%%)."
- , web_clients_cache.used_count
- , web_clients_cache.avail_count
- , web_clients_cache.allocated
- , web_clients_cache.reused
- , (web_clients_cache.allocated + web_clients_cache.reused)?(web_clients_cache.reused * 100 / (web_clients_cache.allocated + web_clients_cache.reused)):0
- );
- }
-#endif
-}
-
// destroy the cache and free all the memory it uses
void web_client_cache_destroy(void) {
-#ifdef NETDATA_INTERNAL_CHECKS
- if(unlikely(web_clients_cache.pid != 0 && web_clients_cache.pid != gettid()))
- error("Oops! wrong thread accessing the cache. Expected %d, found %d", (int)web_clients_cache.pid, (int)gettid());
-
- web_client_cache_verify(1);
-#endif
-
- netdata_thread_disable_cancelability();
+ internal_error(true, "web_client_cache has %zu used and %zu available clients, allocated %zu, reused %zu (hit %zu%%)."
+ , web_clients_cache.used.count
+ , web_clients_cache.avail.count
+ , web_clients_cache.used.allocated
+ , web_clients_cache.used.reused
+ , (web_clients_cache.used.allocated + web_clients_cache.used.reused)?(web_clients_cache.used.reused * 100 / (web_clients_cache.used.allocated + web_clients_cache.used.reused)):0
+ );
struct web_client *w, *t;
- w = web_clients_cache.used;
+ netdata_spinlock_lock(&web_clients_cache.avail.spinlock);
+ w = web_clients_cache.avail.head;
while(w) {
t = w;
- w = w->next;
+ w = w->cache.next;
web_client_free(t);
}
- web_clients_cache.used = NULL;
- web_clients_cache.used_count = 0;
-
- w = web_clients_cache.avail;
- while(w) {
- t = w;
- w = w->next;
- web_client_free(t);
- }
- web_clients_cache.avail = NULL;
- web_clients_cache.avail_count = 0;
-
- netdata_thread_enable_cancelability();
+ web_clients_cache.avail.head = NULL;
+ web_clients_cache.avail.count = 0;
+ netdata_spinlock_unlock(&web_clients_cache.avail.spinlock);
+
+// DO NOT FREE THEM IF THEY ARE USED
+// netdata_spinlock_lock(&web_clients_cache.used.spinlock);
+// w = web_clients_cache.used.head;
+// while(w) {
+// t = w;
+// w = w->next;
+// web_client_free(t);
+// }
+// web_clients_cache.used.head = NULL;
+// web_clients_cache.used.count = 0;
+// web_clients_cache.used.reused = 0;
+// web_clients_cache.used.allocated = 0;
+// netdata_spinlock_unlock(&web_clients_cache.used.spinlock);
}
-struct web_client *web_client_get_from_cache_or_allocate() {
-
-#ifdef NETDATA_INTERNAL_CHECKS
- if(unlikely(web_clients_cache.pid == 0))
- web_clients_cache.pid = gettid();
-
- if(unlikely(web_clients_cache.pid != 0 && web_clients_cache.pid != gettid()))
- error("Oops! wrong thread accessing the cache. Expected %d, found %d", (int)web_clients_cache.pid, (int)gettid());
-#endif
-
- netdata_thread_disable_cancelability();
-
- struct web_client *w = web_clients_cache.avail;
-
+struct web_client *web_client_get_from_cache(void) {
+ netdata_spinlock_lock(&web_clients_cache.avail.spinlock);
+ struct web_client *w = web_clients_cache.avail.head;
if(w) {
// get it from avail
- if (w == web_clients_cache.avail) web_clients_cache.avail = w->next;
- if(w->prev) w->prev->next = w->next;
- if(w->next) w->next->prev = w->prev;
- web_clients_cache.avail_count--;
-#ifdef ENABLE_HTTPS
- web_client_reuse_ssl(w);
- SSL *ssl = w->ssl.conn;
-#endif
+ DOUBLE_LINKED_LIST_REMOVE_ITEM_UNSAFE(web_clients_cache.avail.head, w, cache.prev, cache.next);
+ web_clients_cache.avail.count--;
+ netdata_spinlock_unlock(&web_clients_cache.avail.spinlock);
+
web_client_zero(w);
- web_clients_cache.reused++;
-#ifdef ENABLE_HTTPS
- w->ssl.conn = ssl;
- w->ssl.flags = NETDATA_SSL_START;
- debug(D_WEB_CLIENT_ACCESS,"Reusing SSL structure with (w->ssl = NULL, w->accepted = %u)", w->ssl.flags);
-#endif
+
+ netdata_spinlock_lock(&web_clients_cache.used.spinlock);
+ web_clients_cache.used.reused++;
}
else {
+ netdata_spinlock_unlock(&web_clients_cache.avail.spinlock);
+
// allocate it
- w = web_client_alloc();
+ w = web_client_create(&netdata_buffers_statistics.buffers_web);
+
#ifdef ENABLE_HTTPS
w->ssl.flags = NETDATA_SSL_START;
debug(D_WEB_CLIENT_ACCESS,"Starting SSL structure with (w->ssl = NULL, w->accepted = %u)", w->ssl.flags);
#endif
- web_clients_cache.allocated++;
+
+ netdata_spinlock_lock(&web_clients_cache.used.spinlock);
+ web_clients_cache.used.allocated++;
}
// link it to used web clients
- if (web_clients_cache.used) web_clients_cache.used->prev = w;
- w->next = web_clients_cache.used;
- w->prev = NULL;
- web_clients_cache.used = w;
- web_clients_cache.used_count++;
+ DOUBLE_LINKED_LIST_PREPEND_ITEM_UNSAFE(web_clients_cache.used.head, w, cache.prev, cache.next);
+ web_clients_cache.used.count++;
+ netdata_spinlock_unlock(&web_clients_cache.used.spinlock);
// initialize it
+ w->use_count++;
w->id = global_statistics_web_client_connected();
- w->mode = WEB_CLIENT_MODE_NORMAL;
-
- netdata_thread_enable_cancelability();
+ w->mode = WEB_CLIENT_MODE_GET;
return w;
}
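The `DOUBLE_LINKED_LIST_PREPEND/REMOVE_ITEM_UNSAFE` macros used above implement an intrusive doubly linked list with a head pointer. A minimal function-based sketch of the same operations (hypothetical node type; the real macros take the prev/next member names as arguments):

```c
#include <assert.h>
#include <stddef.h>

// Minimal stand-in for the DOUBLE_LINKED_LIST_* macros: prepend to and
// remove from an intrusive doubly linked list with a head pointer.
struct node { struct node *prev, *next; int id; };

static void list_prepend(struct node **head, struct node *n) {
    n->prev = NULL;
    n->next = *head;
    if (*head) (*head)->prev = n;
    *head = n;
}

static void list_remove(struct node **head, struct node *n) {
    if (n->prev) n->prev->next = n->next;
    else *head = n->next;
    if (n->next) n->next->prev = n->prev;
    n->prev = n->next = NULL;
}
```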
-void web_client_release(struct web_client *w) {
-#ifdef NETDATA_INTERNAL_CHECKS
- if(unlikely(web_clients_cache.pid != 0 && web_clients_cache.pid != gettid()))
- error("Oops! wrong thread accessing the cache. Expected %d, found %d", (int)web_clients_cache.pid, (int)gettid());
-
- if(unlikely(w->running))
- error("%llu: releasing web client from %s port %s, but it still running.", w->id, w->client_ip, w->client_port);
-#endif
-
- debug(D_WEB_CLIENT_ACCESS, "%llu: Closing web client from %s port %s.", w->id, w->client_ip, w->client_port);
-
- web_server_log_connection(w, "DISCONNECTED");
- web_client_request_done(w);
- global_statistics_web_client_disconnected();
-
- netdata_thread_disable_cancelability();
-
- if(web_server_mode != WEB_SERVER_MODE_STATIC_THREADED) {
- if (w->ifd != -1) close(w->ifd);
- if (w->ofd != -1 && w->ofd != w->ifd) close(w->ofd);
- w->ifd = w->ofd = -1;
-#ifdef ENABLE_HTTPS
- web_client_reuse_ssl(w);
- w->ssl.flags = NETDATA_SSL_START;
-#endif
-
- }
-
+void web_client_release_to_cache(struct web_client *w) {
// unlink it from the used
- if (w == web_clients_cache.used) web_clients_cache.used = w->next;
- if(w->prev) w->prev->next = w->next;
- if(w->next) w->next->prev = w->prev;
- web_clients_cache.used_count--;
+ netdata_spinlock_lock(&web_clients_cache.used.spinlock);
+ DOUBLE_LINKED_LIST_REMOVE_ITEM_UNSAFE(web_clients_cache.used.head, w, cache.prev, cache.next);
+ ssize_t used_count = (ssize_t)--web_clients_cache.used.count;
+ netdata_spinlock_unlock(&web_clients_cache.used.spinlock);
+
+ netdata_spinlock_lock(&web_clients_cache.avail.spinlock);
+ if(w->use_count > 100 || (used_count > 0 && web_clients_cache.avail.count >= 2 * (size_t)used_count) || (used_count <= 10 && web_clients_cache.avail.count >= 20)) {
+ netdata_spinlock_unlock(&web_clients_cache.avail.spinlock);
- if(web_clients_cache.avail_count >= 2 * web_clients_cache.used_count) {
// we have too many of them - free it
web_client_free(w);
}
else {
// link it to the avail
- if (web_clients_cache.avail) web_clients_cache.avail->prev = w;
- w->next = web_clients_cache.avail;
- w->prev = NULL;
- web_clients_cache.avail = w;
- web_clients_cache.avail_count++;
+ DOUBLE_LINKED_LIST_PREPEND_ITEM_UNSAFE(web_clients_cache.avail.head, w, cache.prev, cache.next);
+ web_clients_cache.avail.count++;
+ netdata_spinlock_unlock(&web_clients_cache.avail.spinlock);
}
-
- netdata_thread_enable_cancelability();
}
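The release path above decides between freeing and caching with a single adaptive condition: drop structures that have been reused heavily, or when the free list already holds twice the number of connected clients (with a small floor of 20 when few clients are connected). That condition, isolated into a sketch (hypothetical function name):

```c
#include <assert.h>
#include <stdbool.h>
#include <sys/types.h>

// Sketch of the release-time decision in web_client_release_to_cache():
// free the structure instead of caching it when it has been reused too
// many times or the avail list is large relative to the used count.
static bool should_free_client(size_t use_count, ssize_t used_count, size_t avail_count) {
    return use_count > 100
        || (used_count > 0 && avail_count >= 2 * (size_t)used_count)
        || (used_count <= 10 && avail_count >= 20);
}
```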
-
diff --git a/web/server/web_client_cache.h b/web/server/web_client_cache.h
index 324f23ed9..85cde3e83 100644
--- a/web/server/web_client_cache.h
+++ b/web/server/web_client_cache.h
@@ -6,25 +6,9 @@
#include "libnetdata/libnetdata.h"
#include "web_client.h"
-struct clients_cache {
- pid_t pid;
-
- struct web_client *used; // the structures of the currently connected clients
- size_t used_count; // the count the currently connected clients
-
- struct web_client *avail; // the cached structures, available for future clients
- size_t avail_count; // the number of cached structures
-
- size_t reused; // the number of re-uses
- size_t allocated; // the number of allocations
-};
-
-extern __thread struct clients_cache web_clients_cache;
-
-void web_client_release(struct web_client *w);
-struct web_client *web_client_get_from_cache_or_allocate();
+void web_client_release_to_cache(struct web_client *w);
+struct web_client *web_client_get_from_cache(void);
void web_client_cache_destroy(void);
-void web_client_cache_verify(int force);
#include "web_server.h"
diff --git a/web/server/web_server.c b/web/server/web_server.c
index d5645a947..e136f728c 100644
--- a/web/server/web_server.c
+++ b/web/server/web_server.c
@@ -132,28 +132,3 @@ void web_client_update_acl_matches(struct web_client *w) {
void web_server_log_connection(struct web_client *w, const char *msg) {
log_access("%llu: %d '[%s]:%s' '%s'", w->id, gettid(), w->client_ip, w->client_port, msg);
}
-
-// --------------------------------------------------------------------------------------
-
-void web_client_initialize_connection(struct web_client *w) {
- int flag = 1;
-
- if(unlikely(web_client_check_tcp(w) && setsockopt(w->ifd, IPPROTO_TCP, TCP_NODELAY, (char *) &flag, sizeof(int)) != 0))
- debug(D_WEB_CLIENT, "%llu: failed to enable TCP_NODELAY on socket fd %d.", w->id, w->ifd);
-
- flag = 1;
- if(unlikely(setsockopt(w->ifd, SOL_SOCKET, SO_KEEPALIVE, (char *) &flag, sizeof(int)) != 0))
- debug(D_WEB_CLIENT, "%llu: failed to enable SO_KEEPALIVE on socket fd %d.", w->id, w->ifd);
-
- web_client_update_acl_matches(w);
-
- w->origin[0] = '*'; w->origin[1] = '\0';
- w->cookie1[0] = '\0'; w->cookie2[0] = '\0';
- freez(w->user_agent); w->user_agent = NULL;
-
- web_client_enable_wait_receive(w);
-
- web_server_log_connection(w, "CONNECTED");
-
- web_client_cache_verify(0);
-}
diff --git a/web/server/web_server.h b/web/server/web_server.h
index 51230ed2b..3b88d1a22 100644
--- a/web/server/web_server.h
+++ b/web/server/web_server.h
@@ -51,7 +51,6 @@ extern long web_client_streaming_rate_t;
extern LISTEN_SOCKETS api_sockets;
void web_client_update_acl_matches(struct web_client *w);
void web_server_log_connection(struct web_client *w, const char *msg);
-void web_client_initialize_connection(struct web_client *w);
struct web_client *web_client_create_on_listenfd(int listener);
#include "web_client_cache.h"