Diffstat (limited to 'doc/design-thoughts')
 -rw-r--r--  doc/design-thoughts/backends-v0.txt            |  27
 -rw-r--r--  doc/design-thoughts/backends.txt               | 125
 -rw-r--r--  doc/design-thoughts/be-fe-changes.txt          |  74
 -rw-r--r--  doc/design-thoughts/binding-possibilities.txt  | 167
 -rw-r--r--  doc/design-thoughts/config-language.txt        | 262
 -rw-r--r--  doc/design-thoughts/connection-reuse.txt       | 224
 -rw-r--r--  doc/design-thoughts/connection-sharing.txt     |  31
 -rw-r--r--  doc/design-thoughts/dynamic-buffers.txt        |  41
 -rw-r--r--  doc/design-thoughts/entities-v2.txt            | 276
 -rw-r--r--  doc/design-thoughts/how-it-works.txt           |  60
 -rw-r--r--  doc/design-thoughts/http2.txt                  | 277
 -rw-r--r--  doc/design-thoughts/http_load_time.url         |   5
 -rw-r--r--  doc/design-thoughts/pool-debugging.txt         | 243
 -rw-r--r--  doc/design-thoughts/rate-shaping.txt           |  90
 -rw-r--r--  doc/design-thoughts/sess_par_sec.txt           |  13
 -rw-r--r--  doc/design-thoughts/thread-group.txt           | 528
16 files changed, 2443 insertions, 0 deletions
diff --git a/doc/design-thoughts/backends-v0.txt b/doc/design-thoughts/backends-v0.txt
new file mode 100644
index 0000000..d350e22
--- /dev/null
+++ b/doc/design-thoughts/backends-v0.txt
@@ -0,0 +1,27 @@
+1 generic type "entity", with the following attributes :
+
+  - frontend *f
+  - l7switch *s
+  - backend *b
+
+Specific types are simply entities with some of these fields set,
+and not necessarily all of them :
+
+  listen   = f [s] b
+  frontend = f [s]
+  l7switch = s
+  backend  = [s] b
+
+Then the processing is evaluated in this order :
+  - listen   -> if it has L7 rules, evaluate them, and possibly branch to
+                other listen, l7 or back entities, or work with the local
+                backend.
+  - frontend -> if it has L7 rules, evaluate them, and possibly branch to
+                other listen, l7 or back entities
+  - l7switch -> evaluate its rules, possibly branching to other listen, l7
+                or backend entities
+  - backend  -> if it has L7 rules, evaluate them (possibly switching
+                backends once more), then process.
+
+Requests are processed along the chain in the order f->s*->b, and responses
+must be processed in the reverse order b->s*->f. Keep in mind the rewriting
+of the Host header on the way in and of Location on the way back.
+
+Also, plan for "profiles" rather than blocks of new default parameters. This
+will make it possible to have many sets of default parameters to be used in
+each of these types.
diff --git a/doc/design-thoughts/backends.txt b/doc/design-thoughts/backends.txt
new file mode 100644
index 0000000..d2474ce
--- /dev/null
+++ b/doc/design-thoughts/backends.txt
@@ -0,0 +1,125 @@
+There has been a lot of confusion during the development because of the
+backends and frontends.
+
+What we want :
+
+- still being able to use a listener the way it has always worked
+
+- being able to write a rule stating that we will *change* the backend when we
+  match some pattern. Only one jump is allowed.
+ +- being able to write a "use_filters_from XXX" line stating that we will ignore + any filter in the current listener, and that those from XXX will be borrowed + instead. A warning would be welcome for options which will silently get + masked. This is used to factor configuration. + +- being able to write a "use_backend_from XXX" line stating that we will ignore + any server and timeout config in the current listener, and that those from + XXX will be borrowed instead. A warning would be welcome for options which + will silently get masked. This is used to factor configuration. + + + +Example : +--------- + + | # frontend HTTP/80 + | listen fe_http 1.1.1.1:80 + | use_filters_from default_http + | use_backend_from appli1 + | + | # frontend HTTPS/443 + | listen fe_https 1.1.1.1:443 + | use_filters_from default_https + | use_backend_from appli1 + | + | # frontend HTTP/8080 + | listen fe_http-dev 1.1.1.1:8080 + | reqadd "X-proto: http" + | reqisetbe "^Host: www1" appli1 + | reqisetbe "^Host: www2" appli2 + | reqisetbe "^Host: www3" appli-dev + | use_backend_from appli1 + | + | + | # filters default_http + | listen default_http + | reqadd "X-proto: http" + | reqisetbe "^Host: www1" appli1 + | reqisetbe "^Host: www2" appli2 + | + | # filters default_https + | listen default_https + | reqadd "X-proto: https" + | reqisetbe "^Host: www1" appli1 + | reqisetbe "^Host: www2" appli2 + | + | + | # backend appli1 + | listen appli1 + | reqidel "^X-appli1:.*" + | reqadd "Via: appli1" + | balance roundrobin + | cookie app1 + | server srv1 + | server srv2 + | + | # backend appli2 + | listen appli2 + | reqidel "^X-appli2:.*" + | reqadd "Via: appli2" + | balance roundrobin + | cookie app2 + | server srv1 + | server srv2 + | + | # backend appli-dev + | listen appli-dev + | reqadd "Via: appli-dev" + | use_backend_from appli2 + | + | + + +Now we clearly see multiple things : +------------------------------------ + + - a frontend can EITHER have filters OR reference a use_filter + + - a 
backend can EITHER have servers OR reference a use_backend
+
+  - we want the evaluation to cross TWO levels per request. When a request is
+    being processed, it keeps track of its "frontend" side (where it came
+    from), and of its "backend" side (where the server-side parameters have
+    been found).
+
+  - the use_{filters|backend} have nothing to do with how the request is
+    decomposed.
+
+
+Conclusion :
+------------
+
+  - a proxy is always its own frontend. It also has 2 parameters :
+    - "fi_prm" : pointer to the proxy holding the filters (itself by default)
+    - "be_prm" : pointer to the proxy holding the servers (itself by default)
+
+  - a request has a frontend (fe) and a backend (be). By default, the backend
+    is initialized to the frontend. Everything related to the client side is
+    accessed through ->fe. Everything related to the server side is accessed
+    through ->be.
+
+  - request filters are first called from ->fe then ->be. Since only the
+    filters can change ->be, it is possible to iterate the filters on ->be
+    only and stop when ->be does not change anymore.
+
+  - response filters are first called from ->be then ->fe IF (fe != be).
+
+
+When we parse the configuration, we immediately configure ->fi_prm and
+->be_prm for all proxies.
+
+Upon session creation, s->fe and s->be are initialized to the proxy. Filters
+are executed via s->fe->fi_prm and s->be->fi_prm. Servers are found in
+s->be->be_prm.
+
diff --git a/doc/design-thoughts/be-fe-changes.txt b/doc/design-thoughts/be-fe-changes.txt
new file mode 100644
index 0000000..f242f8a
--- /dev/null
+++ b/doc/design-thoughts/be-fe-changes.txt
@@ -0,0 +1,74 @@
+- PR_O_TRANSP => FE !!! may have to change since it complements dispatch mode.
+- PR_O_NULLNOLOG => FE
+- PR_O_HTTP_CLOSE => FE. !!! set it on BE too !!!
+- PR_O_TCP_CLI_KA => FE
+
+- PR_O_FWDFOR => BE. FE too ?
+- PR_O_FORCE_CLO => BE
+- PR_O_PERSIST => BE
+- PR_O_COOK_RW, PR_O_COOK_INS, PR_O_COOK_PFX, PR_O_COOK_POST => BE
+- PR_O_COOK_NOC, PR_O_COOK_IND => BE
+- PR_O_ABRT_CLOSE => BE
+- PR_O_REDISP => BE
+- PR_O_BALANCE, PR_O_BALANCE_RR, PR_O_BALANCE_SH => BE
+- PR_O_CHK_CACHE => BE
+- PR_O_TCP_SRV_KA => BE
+- PR_O_BIND_SRC => BE
+- PR_O_TPXY_MASK => BE
+
+
+- PR_MODE_TCP : BE on the server side, FE on the client side
+
+- nbconn -> fe->nbconn, be->nbconn.
+  Problem: make (fe == be) impossible before doing this, otherwise
+  connections will be counted twice. This will only be possible once
+  FEs and BEs are distinct entities. So we will start by keeping only
+  fe->nbconn (since the fe does not change), and change this later,
+  if only to correctly take minconn/maxconn into account.
+  => solution : have beconn and feconn in each proxy.
+
+- failed_conns, failed_secu (blocked responses), failed_resp... : be
+  Warning: see the ERR_SRVCL cases, it seems this is sometimes reported
+  while there was a write error on the client side (eg: line
+  2044 in proto_http).
+  => be, and not be->beprm
+
+- logs of the backup : ->be (same)
+
+- queue : be
+
+- logs/debug : srv always associated with be (eg: proxy->id:srv->id). Nothing
+  for the client for the time being. Generally speaking, errors caused on the
+  server side go to BE and those caused on the client side go to FE.
+- logswait & LW_BYTES : FE (since we want to know whether to log immediately)
+
+- custom error messages (errmsg, ...) -> fe
+
+- monitor_uri -> fe
+- uri_auth -> (fe->fiprm then be->fiprm). Use of ->be
+
+- req_add, req_exp => fe->fiprm, then be->fiprm
+- req_cap, rsp_cap -> fe->fiprm
+- rsp_add, rsp_exp => be->fiprm, should then also be done on fe->fiprm
+- capture_name, capture_namelen : fe->fiprm
+
+  This is not the ideal solution, but at least the capture is configurable
+  by the FE's filters and does not move when the BE is reassigned. It also
+  solves a memory allocation problem.
+
+
+- persistence (appsessions, cookiename, ...) -> be
+- stats:scope "." = fe (the one through which we arrive)
+  !!!ERROR!!! => use be to get the one which was validated by
+  uri_auth.
+
+
+--------- fixes to apply ---------
+
+- header replacement : parse the header and possibly remove it, then add it (them) back.
+- session->proto.{l4state,l7state,l7substate} for CLI and SRV
+- errorloc : if defined in the backend, take it, otherwise the frontend's.
+- logs : use be, otherwise fe.
diff --git a/doc/design-thoughts/binding-possibilities.txt b/doc/design-thoughts/binding-possibilities.txt
new file mode 100644
index 0000000..3f5e432
--- /dev/null
+++ b/doc/design-thoughts/binding-possibilities.txt
@@ -0,0 +1,167 @@
+2013/10/10 - possibilities for setting source and destination addresses
+
+
+When establishing a connection to a remote device, this device is designated
+as a target, which designates an entity defined in the configuration. The
+same target appears only once in a configuration, and multiple targets may
+share the same settings if needed.
+
+The following types of targets are currently supported :
+
+  - listener : all connections with this type of target come from clients ;
+  - server   : connections to such targets are for "server" lines ;
+  - peer     : connections to such target address "peer" lines in "peers"
+               sections ;
+  - proxy    : these targets are used by "dispatch", "option transparent"
+               or "option http_proxy" statements.
+
+A connection might not be reused between two different targets, even if all
+parameters seem similar. One of the reasons is that some parameters are
+specific to the target and are not easy or not cheap to compare (eg: bind to
+interface, mss, ...).
+
+A number of source and destination addresses may be set for a given target.
+ + - listener : + - the "from" address:port is set by accept() + + - the "to" address:port is set if conn_get_to_addr() is called + + - peer : + - the "from" address:port is not set + + - the "to" address:port is static and dependent only on the peer + + - server : + - the "from" address may be set alone when "source" is used with + a forced IP address, or when "usesrc clientip" is used. + + - the "from" port may be set only combined with the address when + "source" is used with IP:port, IP:port-range or "usesrc client" is + used. Note that in this case, both the address and the port may be + 0, meaning that the kernel will pick the address or port and that + the final value might not match the one explicitly set (eg: + important for logging). + + - the "from" address may be forced from a header which implies it + may change between two consecutive requests on the same connection. + + - the "to" address and port are set together when connecting to a + regular server, or by copying the client's IP address when + "server 0.0.0.0" is used. Note that the destination port may be + an offset applied to the original destination port. + + - proxy : + - the "from" address may be set alone when "source" is used with a + forced IP address or when "usesrc clientip" is used. + + - the "from" port may be set only combined with the address when + "source" is used with IP:port or with "usesrc client". There is + no ip:port range for a proxy as of now. Same comment applies as + above when port and/or address are 0. + + - the "from" address may be forced from a header which implies it + may change between two consecutive requests on the same connection. + + - the "to" address and port are set together, either by configuration + when "dispatch" is used, or dynamically when "transparent" is used + (1:1 with client connection) or "option http_proxy" is used, where + each client request may lead to a different destination address. 
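As an illustration of the server-side "to" address rules above (copying the client's destination when "server 0.0.0.0" is used, and applying an offset to the original destination port), here is a hedged sketch; `srv_set_dst` and its exact behaviour are illustrative only, not HAProxy's actual code:

```c
#include <assert.h>
#include <netinet/in.h>

/* Illustrative sketch: derive the "to" address for a server target.
 * If the configured server address is 0.0.0.0, the client's original
 * destination address is copied; a signed port offset may be applied
 * to the original destination port. */
static struct sockaddr_in srv_set_dst(const struct sockaddr_in *cfg,
                                      const struct sockaddr_in *cli_dst,
                                      int port_offset)
{
    struct sockaddr_in to = *cfg;

    if (cfg->sin_addr.s_addr == INADDR_ANY)   /* "server 0.0.0.0" case */
        to.sin_addr = cli_dst->sin_addr;      /* copy client's dest IP */

    if (port_offset)                          /* relative port mapping */
        to.sin_port = htons((unsigned short)(ntohs(cli_dst->sin_port) + port_offset));

    return to;
}
```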
+
+At the moment, there are some limits in what might happen between multiple
+concurrent requests to the same target.
+
+  - peer parameters do not change, so no problem.
+
+  - server parameters may change in this way :
+      - a connection may require a source bound to an IP address found in a
+        header, which will fall back to the "source" settings if the address
+        is not found in this header. This means that the source address may
+        switch between a dynamically forced IP address and another forced
+        IP and/or port range.
+
+      - if the element is not found (eg: header), the remaining "forced"
+        source address might very well be empty (unset), so the connection
+        reuse is acceptable when switching in that direction.
+
+      - it is not possible to switch between client and clientip or any of
+        these and hdr_ip() because they're exclusive.
+
+      - using a source address/port belonging to a port range is compatible
+        with connection reuse because there is a single range per target, so
+        switching from a range to another range means we remain in the same
+        range.
+
+      - destination address may currently not change since the only possible
+        case for dynamic destination address setting is the transparent mode,
+        reproducing the client's destination address.
+
+  - proxy parameters may change in this way :
+      - a connection may require a source bound to an IP address found in a
+        header, which will fall back to the "source" settings if the address
+        is not found in this header. This means that the source address may
+        switch between a dynamically forced IP address and another forced
+        IP and/or port range.
+
+      - if the element is not found (eg: header), the remaining "forced"
+        source address might very well be empty (unset), so the connection
+        reuse is acceptable when switching in that direction.
+
+      - it is not possible to switch between client and clientip or any of
+        these and hdr_ip() because they're exclusive.
+
+      - proxies do not support port ranges at the moment.
+
+  - destination address might change in the case where "option http_proxy"
+    is used.
+
+So, for each source element (IP, port), we want to know :
+  - if the element was assigned by static configuration (eg: ":80")
+  - if the element was assigned from a connection-specific value (eg: usesrc clientip)
+  - if the element was assigned from a configuration-specific range (eg: 1024-65535)
+  - if the element was assigned from a request-specific value (eg: hdr_ip(xff))
+  - if the element was not assigned at all
+
+For the destination, we want to know :
+  - if the element was assigned by static configuration (eg: ":80")
+  - if the element was assigned from a connection-specific value (eg: transparent)
+  - if the element was assigned from a request-specific value (eg: http_proxy)
+
+We don't need to store the information about the origin of the dynamic value
+since we have the value itself. So in practice we have :
+  - default value, unknown (not yet checked with getsockname/getpeername)
+  - default value, known (check done)
+  - forced value (known)
+  - forced range (known)
+
+We can't do that on an ip:port basis because the port may be fixed regardless
+of the address and conversely.
+
+So that means :
+
+    enum {
+        CO_ADDR_NONE  = 0,  /* not set, unknown value     */
+        CO_ADDR_KNOWN = 1,  /* not set, known value       */
+        CO_ADDR_FIXED = 2,  /* fixed value, known         */
+        CO_ADDR_RANGE = 3,  /* from assigned range, known */
+    } conn_addr_values;
+
+    unsigned int new_l3_src_status:2;
+    unsigned int new_l4_src_status:2;
+    unsigned int new_l3_dst_status:2;
+    unsigned int new_l4_dst_status:2;
+
+    unsigned int cur_l3_src_status:2;
+    unsigned int cur_l4_src_status:2;
+    unsigned int cur_l3_dst_status:2;
+    unsigned int cur_l4_dst_status:2;
+
+    unsigned int new_family:2;
+    unsigned int cur_family:2;
+
+Note: this obsoletes CO_FL_ADDR_FROM_SET and CO_FL_ADDR_TO_SET.
These flags
+must be changed to individual l3+l4 checks ORed between old and new values,
+or better, set to cur only which will inherit new.
+
+In the connection, these values may be merged in the same word as err_code.
diff --git a/doc/design-thoughts/config-language.txt b/doc/design-thoughts/config-language.txt
new file mode 100644
index 0000000..20c4fbd
--- /dev/null
+++ b/doc/design-thoughts/config-language.txt
@@ -0,0 +1,262 @@
+Plan for commands made of several keywords.
+For example :
+
+    timeout connection XXX
+    connection scale XXX
+
+Prefixes must also be accepted :
+
+    tim co XXX
+    co sca XXX
+
+Plan to store the combinations in a table. It should even be possible to
+perform a mapping which simplifies the parser.
+
+
+For the filters :
+
+
+  <direction> <where> <what> <operator> <pattern> <action> [ <args>* ]
+
+  <direction> = [ req | rsp ]
+  <where>     = [ in | out ]
+  <what>      = [ line | LINE | METH | URI | h(hdr) | H(hdr) | c(cookie) | C(cookie) ]
+  <operator>  = [ == | =~ | =* | =^ | =/ | != | !~ | !* | !^ | !/ ]
+  <pattern>   = "<string>"
+  <action>    = [ allow | permit | deny | delete | replace | switch | add | set | redir ]
+  <args>      = optional action args
+
+  examples:
+
+    req in URI     =^ "/images"        switch images
+    req in h(host) =* ".mydomain.com"  switch mydomain
+    req in h(host) =~ "localhost(.*)"  replace "www\1"
+
+  alternative :
+
+  <direction> <where> <action> [not] <what> [<operator> <pattern> [ <args>* ]]
+
+    req in switch  URI =^ "/images" images
+    req in switch  h(host) =* ".mydomain.com" mydomain
+    req in replace h(host) =~ "localhost(.*)" "www\1"
+    req in delete  h(Connection)
+    req in deny    not line =~ "((GET|HEAD|POST|OPTIONS) /)|(OPTIONS *)"
+    req out set    h(Connection) "close"
+    req out add    line "Server: truc"
+
+
+  <direction> <action> <where> [not] <what> [<operator> <pattern> [ <args>* ]] ';' <action2> <what2>
+
+    req in switch  URI =^ "/images/" images ; replace "/"
+    req in switch  h(host) =* ".mydomain.com" mydomain
+    req in replace h(host) =~ "localhost(.*)" "www\1"
+    req in delete  h(Connection)
+    req in deny    not line =~ "((GET|HEAD|POST|OPTIONS) /)|(OPTIONS *)"
+    req out set    h(Connection) "close"
+    req out add    line == "Server: truc"
+
+
+Extension with ACLs :
+
+    req in acl(meth_valid)   METH =~ "(GET|POST|HEAD|OPTIONS)"
+    req in acl(meth_options) METH == "OPTIONS"
+    req in acl(uri_slash)    URI  =^ "/"
+    req in acl(uri_star)     URI  == "*"
+
+    req in deny acl !(meth_options && uri_star || meth_valid && uri_slash)
+
+Perhaps more simply :
+
+    acl meth_valid   METH =~ "(GET|POST|HEAD|OPTIONS)"
+    acl meth_options METH == "OPTIONS"
+    acl uri_slash    URI  =^ "/"
+    acl uri_star     URI  == "*"
+
+    req in deny not acl(meth_options uri_star, meth_valid uri_slash)
+
+    req in switch  URI =^ "/images/" images ; replace "/"
+    req in switch  h(host) =* ".mydomain.com" mydomain
+    req in replace h(host) =~ "localhost(.*)" "www\1"
+    req in delete  h(Connection)
+    req in deny    not line =~ "((GET|HEAD|POST|OPTIONS) /)|(OPTIONS *)"
+    req out set    h(Connection) "close"
+    req out add    line == "Server: truc"
+
+Plan for an "if" construct in order to execute several actions :
+
+    req in if URI =^ "/images/" then replace "/" ; switch images
+
+Use upper case/lower case names to indicate whether the match must be
+case-sensitive or not :
+
+    if uri  =^ "/watch/" setbe watch rebase "/watch/" "/"
+    if uri  =* ".jpg"    setbe images
+    if uri  =~ ".*dll.*" deny
+    if HOST =* ".mydomain.com" setbe mydomain
+    etc...
+
+Another solution would be to have a dedicated keyword to URI remapping. It
+would both rewrite the URI and optionally switch to another backend.
+
+    uriremap "/watch/" "/" watch
+    uriremap "/chat/"  "/" chat
+    uriremap "/event/" "/event/" event
+
+Or better :
+
+    uriremap "/watch/" watch "/"
+    uriremap "/chat/"  chat  "/"
+    uriremap "/event/" event
+
+For the URI, using a regex is sometimes useful (eg: providing a set of
+possible prefixes).
+
+
+Otherwise, maybe "switch" could take a mapping parameter for the matched
+part :
+
+    req in switch URI =^ "/images/" images:"/"
+
+
+2007/03/31 - More precise requirements.
+
+1) no branching extensions or the like in "listen" sections, that is too
+   complex.
+
+Distinguish between incoming (in) and outgoing (out) data.
+
+The frontend only sees incoming requests and outgoing responses.
+The backend sees in/out requests and in/out responses.
+The frontend allows sets of request filters to branch to other sets.
+The frontend and the sets of request filters may branch to a backend.
+
+-----------+--------+----------+----------+---------+----------+
+ \   Where |        |          |          |         |          |
+  \______  | Listen | Frontend | ReqRules | Backend | RspRules |
+          \|        |          |          |         |          |
+Capability |        |          |          |         |          |
+-----------+--------+----------+----------+---------+----------+
+Frontend   |   X    |    X     |          |         |          |
+-----------+--------+----------+----------+---------+----------+
+FiltReqIn  |   X    |    X     |    X     |    X    |          |
+-----------+--------+----------+----------+---------+----------+
+JumpFiltReq|   X    |    X     |    X     |         |          | \
+-----------+--------+----------+----------+---------+----------+  > = ReqJump
+SetBackend |   X    |    X     |    X     |         |          | /
+-----------+--------+----------+----------+---------+----------+
+FiltReqOut |        |          |          |    X    |          |
+-----------+--------+----------+----------+---------+----------+
+FiltRspIn  |   X    |          |          |    X    |    X     |
+-----------+--------+----------+----------+---------+----------+
+JumpFiltRsp|        |          |          |    X    |    X     |
+-----------+--------+----------+----------+---------+----------+
+FiltRspOut |        |    X     |          |    X    |    X     |
+-----------+--------+----------+----------+---------+----------+
+Backend    |   X    |          |          |    X    |          |
+-----------+--------+----------+----------+---------+----------+
+
+In conclusion
+-------------
+
+At least 8 basic capabilities need to be distinguished :
+  - ability to accept connections (frontend)
+  - ability to filter incoming requests
+  - ability to branch to a backend or to a set of request rules
+  - ability to filter outgoing requests
+  - ability to filter incoming responses
+  - ability to branch to another set of response rules
+  - ability to filter the outgoing response
+  - ability to manage servers (backend)
+
+Note
+----
+  - we often need to apply some small processing to a host/uri/other set.
+    The small processing may consist of a few filters as well as a rewrite
+    of the (host,uri) pair.
+
+
+Proposal : ACLs
+
+Syntax :
+--------
+
+    acl <name> <what> <operator> <value> ...
+
+This will create an acl referenced under the name <name>, which is validated
+when applying at least one of the values <value> with the operator <operator>
+to the subject <what> succeeds.
+
+Operators :
+-----------
+
+Always 2 characters :
+
+    [=!][~=*^%/.]
+
+First character :
+    '=' : OK if the test succeeds
+    '!' : OK if the test fails.
+
+Second character :
+    '~' : compare against a regex
+    '=' : string comparison
+    '*' : compare the end of the string    (eg: =* ".mydomain.com")
+    '^' : compare the start of the string  (eg: =^ "/images/")
+    '%' : look for a substring
+    '/' : compare against a whole word, accepting '/' as a delimiter
+    '.' : compare against a whole word, accepting '.' as a delimiter
+
+Then an action is executed conditionally if all the mentioned ACLs are
+validated (or invalidated for those preceded by a "!") :
+
+    <what> <where> <action> on [!]<aclname> ...
+
+
+Example :
+---------
+
+    acl www_pub   host =. www www01 dev preprod
+    acl imghost   host =. images
+    acl imgdir    uri  =/ img
+    acl imagedir  uri  =/ images
+    acl msie      h(user-agent) =% "MSIE"
+
+    set_host  "images"       on www_pub imgdir
+    remap_uri "/img" "/"     on www_pub imgdir
+    remap_uri "/images" "/"  on www_pub imagedir
+    setbe     images         on imghost
+    reqdel    "Cookie"       on all
+
+
+
+Possible actions :
+
+    req  {in|out} {append|delete|rem|add|set|rep|mapuri|rewrite|reqline|deny|allow|setbe|tarpit}
+    resp {in|out} {append|delete|rem|add|set|rep|maploc|rewrite|stsline|deny|allow}
+
+    req in append  <line>
+    req in delete  <line_regex>
+    req in rem     <header>
+    req in add     <header> <new_value>
+    req in set     <header> <new_value>
+    req in rep     <header> <old_value> <new_value>
+    req in mapuri  <old_uri_prefix> <new_uri_prefix>
+    req in rewrite <old_uri_regex> <new_uri>
+    req in reqline <old_req_regex> <new_req>
+    req in deny
+    req in allow
+    req in tarpit
+    req in setbe   <backend>
+
+    resp out maploc  <old_location_prefix> <new_loc_prefix>
+    resp out stsline <old_sts_regex> <new_sts_regex>
+
+Strings must be delimited by the same character at the beginning and at the
+end, which must be escaped if it is present inside the string. Everything
+located between the closing character and the first spaces is considered as
+options passed to the processing. For example :
+
+    req in rep host /www/i /www/
+    req in rep connection /keep-alive/i "close"
+
+It would be convenient to be able to perform a remap at the same time as a
+setbe.
+
+Captures : split them into in/out. Make them conditional ?
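The two-character operator scheme used throughout this file ([=!][~=*^%/.]) maps naturally onto a small table-driven parser. A hedged sketch in C (illustrative names, only a few of the match types implemented, not HAProxy code):

```c
#include <assert.h>
#include <string.h>

/* Illustrative sketch of the 2-character filter operators: [=!][~=*^%/.] */
struct acl_op {
    int negate;   /* 1 if the test result must be inverted ('!') */
    char type;    /* one of ~ = * ^ % / .                        */
};

/* Parse "=^", "!*", ... ; returns 0 on success, -1 on invalid input. */
static int parse_op(const char *s, struct acl_op *op)
{
    if ((s[0] != '=' && s[0] != '!') || !s[1] || !strchr("~=*^%/.", s[1]))
        return -1;
    op->negate = (s[0] == '!');
    op->type = s[1];
    return 0;
}

/* Apply a subset of the operators to <subject> against <pattern>. */
static int op_match(const struct acl_op *op, const char *subject, const char *pattern)
{
    size_t sl = strlen(subject), pl = strlen(pattern);
    int res;

    switch (op->type) {
    case '=': res = (strcmp(subject, pattern) == 0); break;       /* exact  */
    case '^': res = (strncmp(subject, pattern, pl) == 0); break;  /* prefix */
    case '*': res = (sl >= pl &&
                     strcmp(subject + sl - pl, pattern) == 0); break; /* suffix */
    case '%': res = (strstr(subject, pattern) != NULL); break;    /* substr */
    default:  return -1; /* '~', '/', '.' left out of this sketch */
    }
    return op->negate ? !res : res;
}
```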
diff --git a/doc/design-thoughts/connection-reuse.txt b/doc/design-thoughts/connection-reuse.txt
new file mode 100644
index 0000000..4eb22f6
--- /dev/null
+++ b/doc/design-thoughts/connection-reuse.txt
@@ -0,0 +1,224 @@
+2015/08/06 - server connection sharing
+
+Improvements on the connection sharing strategies
+-------------------------------------------------
+
+4 strategies are currently supported :
+  - never
+  - safe
+  - aggressive
+  - always
+
+The "aggressive" and "always" strategies take into account the fact that the
+connection has already been reused at least once or not. The principle is that
+second requests can be used to safely "validate" connection reuse on newly
+added connections, and that such validated connections may be used even by
+first requests from other sessions. A validated connection is a connection
+which has already been reused, hence proving that it definitely supports
+multiple requests. Such connections are easy to verify : after processing the
+response, if the txn already had the TX_NOT_FIRST flag, then it was not the
+first request over that connection, and it is validated as safe for reuse.
+Validated connections are put into a distinct list : server->safe_conns.
+
+Incoming requests with TX_NOT_FIRST first pick from the regular idle_conns
+list so that any new idle connection is validated as soon as possible.
+
+Incoming requests without TX_NOT_FIRST only pick from the safe_conns list for
+strategy "aggressive", guaranteeing that the server properly supports
+connection reuse, or first from the safe_conns list, then from the idle_conns
+list for strategy "always".
+
+Connections are always stacked into the list (LIFO) so that there are higher
+chances to convert recent connections and to use them. This will first
+optimize the likelihood that the connection works, and will prevent TCP
+metrics from being lost due to an idle state, and/or the congestion window
+from dropping and the connection from going back to slow start mode.
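The pick order described above (idle_conns versus safe_conns, depending on TX_NOT_FIRST and on the strategy) can be modelled as a small decision function. A simplified sketch with illustrative names, not the actual implementation:

```c
#include <assert.h>
#include <stddef.h>

enum reuse_strategy { REUSE_NEVER, REUSE_SAFE, REUSE_AGGRESSIVE, REUSE_ALWAYS };

struct srv_lists {
    void *idle_conns;  /* most recent idle (unvalidated) connection, or NULL */
    void *safe_conns;  /* most recent validated connection, or NULL          */
};

/* Return the connection to reuse for a request, or NULL to open a new one.
 * <not_first> reflects the txn's TX_NOT_FIRST flag. */
static void *pick_conn(struct srv_lists *s, enum reuse_strategy strat, int not_first)
{
    if (strat == REUSE_NEVER)
        return NULL;

    if (not_first) /* second requests validate new idle connections first */
        return s->idle_conns ? s->idle_conns : s->safe_conns;

    /* first request of a session: only proven-safe conns for "aggressive",
     * safe first then idle for "always", nothing for "safe" */
    if (strat == REUSE_AGGRESSIVE)
        return s->safe_conns;
    if (strat == REUSE_ALWAYS)
        return s->safe_conns ? s->safe_conns : s->idle_conns;
    return NULL;
}
```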
+ + +Handling connections in pools +----------------------------- + +A per-server "pool-max" setting should be added to permit disposing unused idle +connections not attached anymore to a session for use by future requests. The +principle will be that attached connections are queued from the front of the +list while the detached connections will be queued from the tail of the list. + +This way, most reused connections will be fairly recent and detached connections +will most often be ignored. The number of detached idle connections in the lists +should be accounted for (pool_used) and limited (pool_max). + +After some time, a part of these detached idle connections should be killed. +For this, the list is walked from tail to head and connections without an owner +may be evicted. It may be useful to have a per-server pool_min setting +indicating how many idle connections should remain in the pool, ready for use +by new requests. Conversely, a pool_low metric should be kept between eviction +runs, to indicate the lowest amount of detached connections that were found in +the pool. + +For eviction, the principle of a half-life is appealing. The principle is +simple : over a period of time, half of the connections between pool_min and +pool_low should be gone. Since pool_low indicates how many connections were +remaining unused over a period, it makes sense to kill some of them. + +In order to avoid killing thousands of connections in one run, the purge +interval should be split into smaller batches. Let's call N the ratio of the +half-life interval and the effective interval. + +The algorithm consists in walking over them from the end every interval and +killing ((pool_low - pool_min) + 2 * N - 1) / (2 * N). It ensures that half +of the unused connections are killed over the half-life period, in N batches +of population/2N entries at most. + +Unsafe connections should be evicted first. There should be quite few of them +since most of them are probed and become safe. 
Since detached connections are +quickly recycled and attached to a new session, there should not be too many +detached connections in the pool, and those present there may be killed really +quickly. + +Another interesting point of pools is that when a pool-max is not null, then it +makes sense to automatically enable pretend-keep-alive on non-private connections +going to the server in order to be able to feed them back into the pool. With +the "aggressive" or "always" strategies, it can allow clients making a single +request over their connection to share persistent connections to the servers. + + + +2013/10/17 - server connection management and reuse + +Current state +------------- + +At the moment, a connection entity is needed to carry any address +information. This means in the following situations, we need a server +connection : + +- server is elected and the server's destination address is set + +- transparent mode is elected and the destination address is set from + the incoming connection + +- proxy mode is enabled, and the destination's address is set during + the parsing of the HTTP request + +- connection to the server fails and must be retried on the same + server using the same parameters, especially the destination + address (SN_ADDR_SET not removed) + + +On the accepting side, we have further requirements : + +- allocate a clean connection without a stream interface + +- incrementally set the accepted connection's parameters without + clearing it, and keep track of what is set (eg: getsockname). 
+ +- initialize a stream interface in established mode + +- attach the accepted connection to a stream interface + + +This means several things : + +- the connection has to be allocated on the fly the first time it is + needed to store the source or destination address ; + +- the connection has to be attached to the stream interface at this + moment ; + +- it must be possible to incrementally set some settings on the + connection's addresses regardless of the connection's current state + +- the connection must not be released across connection retries ; + +- it must be possible to clear a connection's parameters for a + redispatch without having to detach/attach the connection ; + +- we need to allocate a connection without an existing stream interface + +So on the accept() side, it looks like this : + + fd = accept(); + conn = new_conn(); + get_some_addr_info(&conn->addr); + ... + si = new_si(); + si_attach_conn(si, conn); + si_set_state(si, SI_ST_EST); + ... + get_more_addr_info(&conn->addr); + +On the connect() side, it looks like this : + + si = new_si(); + while (!properly_connected) { + if (!(conn = si->end)) { + conn = new_conn(); + conn_clear(conn); + si_attach_conn(si, conn); + } + else { + if (connected) { + f = conn->flags & CO_FL_XPRT_TRACKED; + conn->flags &= ~CO_FL_XPRT_TRACKED; + conn_close(conn); + conn->flags |= f; + } + if (!correct_dest) + conn_clear(conn); + } + set_some_addr_info(&conn->addr); + si_set_state(si, SI_ST_CON); + ... + set_more_addr_info(&conn->addr); + conn->connect(); + if (must_retry) { + close_conn(conn); + } + } + +Note: we need to be able to set the control and transport protocols. +On outgoing connections, this is set once we know the destination address. +On incoming connections, this is set the earliest possible (once we know +the source address). + +The problem analysed below was solved on 2013/10/22 + +| ==> the real requirement is to know whether a connection is still valid or not +| before deciding to close it. 
CO_FL_CONNECTED could be enough, though it +| will not indicate connections that are still waiting for a connect to occur. +| This combined with CO_FL_WAIT_L4_CONN and CO_FL_WAIT_L6_CONN should be OK. +| +| Alternatively, conn->xprt could be used for this, but needs some careful checks +| (it's used by conn_full_close at least). +| +| Right now, conn_xprt_close() checks conn->xprt and sets it to NULL. +| conn_full_close() also checks conn->xprt and sets it to NULL, except +| that the check on ctrl is performed within xprt. So conn_xprt_close() +| followed by conn_full_close() will not close the file descriptor. +| Note that conn_xprt_close() is never called, maybe we should kill it ? +| +| Note: at the moment, it's problematic to leave conn->xprt to NULL before doing +| xprt_init() because we might end up with a pending file descriptor. Or at +| least with some transport not de-initialized. We might thus need +| conn_xprt_close() when conn_xprt_init() fails. +| +| The fd should be conditioned by ->ctrl only, and the transport layer by ->xprt. +| +| - conn_prepare_ctrl(conn, ctrl) +| - conn_prepare_xprt(conn, xprt) +| - conn_prepare_data(conn, data) +| +| Note: conn_xprt_init() needs conn->xprt so it's not a problem to set it early. +| +| One problem might be with conn_xprt_close() not being able to know if xprt_init() +| was called or not. That's where it might make sense to only set ->xprt during init. +| Except that it does not fly with outgoing connections (xprt_init is called after +| connect()). +| +| => currently conn_xprt_close() is only used by ssl_sock.c and decides whether +| to do something based on ->xprt_ctx which is set by ->init() from xprt_init(). +| So there is nothing to worry about. We just need to restore conn_xprt_close() +| and rely on ->ctrl to close the fd instead of ->xprt. +| +| => we have the same issue with conn_ctrl_close() : when is the fd supposed to be +| valid ? On outgoing connections, the control is set much before the fd... 
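A possible C sketch of the validity check discussed in the resolved notes above. The flag values below are invented for the example and do not match haproxy's real CO_FL_* constants:

```c
#include <stdbool.h>

/* Illustrative stand-ins for haproxy's connection flags; real values differ. */
#define CO_FL_CONNECTED     0x0001
#define CO_FL_WAIT_L4_CONN  0x0002
#define CO_FL_WAIT_L6_CONN  0x0004

/* A connection is considered valid (ie: worth closing properly) if it is
 * either established or still waiting for an L4/L6 connect to complete.
 */
static bool conn_flags_valid(unsigned int flags)
{
    return (flags & (CO_FL_CONNECTED |
                     CO_FL_WAIT_L4_CONN |
                     CO_FL_WAIT_L6_CONN)) != 0;
}
```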
diff --git a/doc/design-thoughts/connection-sharing.txt b/doc/design-thoughts/connection-sharing.txt new file mode 100644 index 0000000..99be1cd --- /dev/null +++ b/doc/design-thoughts/connection-sharing.txt @@ -0,0 +1,31 @@ +2014/10/28 - Server connection sharing + +For HTTP/2 we'll have to use multiplexed connections to the servers and to +share them between multiple streams. We'll also have to do this for H/1, but +with some variations since H1 doesn't offer connection status verification. + +In order to validate that an idle connection is still usable, it is desirable +to periodically send health checks over it. Normally, idle connections are +meant to be heavily used, so there is no reason for having them idle for a long +time. Thus we have two possibilities : + + - either we time them out after some inactivity, this saves server resources ; + - or we check them after some inactivity. For this we can send the server- + side HTTP health check (only when the server uses HTTP checks), and avoid + using that to mark the server down, and instead consider the connection as + dead. + +For HTTP/2 we'll have to send pings periodically over these connections, so +it's worth considering a per-connection task to validate that the channel still +works. + +In the current model, a connection necessarily belongs to a session, so it's +not really possible to share them, at best they can be exchanged, but that +doesn't make much sense as it means that it could disturb parallel traffic. + +Thus we need to have a per-server list of idle connections and a max-idle-conn +setting to kill them when there are too many. In the case of H/1 it is also +advisable to consider that if a connection was created to pass a first non- +idempotent request while other idle connections were still existing, then a +connection will have to be killed in order not to exceed the limit. 
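The per-server idle list bounded by max-idle-conn could be sketched as below; the names and the eviction policy (kill the oldest entry) are assumptions for the example, not haproxy's implementation:

```c
/* Hypothetical per-server idle connection list with a max-idle-conn bound:
 * when the list is full, the oldest idle connection is killed to make room.
 */
#define MAX_IDLE_CONN 4

struct idle_list {
    int conns[MAX_IDLE_CONN]; /* stand-ins for connection handles */
    int count;
    int killed;               /* how many were evicted so far */
};

static void idle_conn_add(struct idle_list *l, int conn)
{
    if (l->count == MAX_IDLE_CONN) {
        /* evict the oldest entry (index 0) and shift the rest */
        for (int i = 1; i < l->count; i++)
            l->conns[i - 1] = l->conns[i];
        l->count--;
        l->killed++;
    }
    l->conns[l->count++] = conn;
}
```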
+ diff --git a/doc/design-thoughts/dynamic-buffers.txt b/doc/design-thoughts/dynamic-buffers.txt new file mode 100644 index 0000000..564d868 --- /dev/null +++ b/doc/design-thoughts/dynamic-buffers.txt @@ -0,0 +1,41 @@ +2014/10/30 - dynamic buffer management + +Since HTTP/2 processing will significantly increase the need for buffering, it +becomes mandatory to be able to support dynamic buffer allocation. This also +means that at any moment some buffer allocation will fail and that a task or an +I/O operation will have to be paused for the time needed to allocate a buffer. + +There are 3 places where buffers are needed : + + - receive side of a stream interface. A connection notifies about a pending + recv() and the SI calls the receive function to put the data into a buffer. + Here the buffer will have to be picked from a pool first, and if the + allocation fails, the I/O will have to temporarily be disabled, the + connection will have to subscribe for buffer release notification to be + woken up once a buffer is available again. It's important to keep in mind + that buffer availability doesn't necessarily mean a desire to enable recv + again, just that recv is not paused anymore for resource reasons. + + - receive side of a stream interface when the other end point is an applet. + The applet wants to write into the buffer and for this the buffer needs to + be allocated as well. It is the same as above except that it is the applet + which is put to a pause. Since the applet might be at the core of the task + itself, it could become tricky to handle the situation correctly. Stats and + peers are in this situation. + + - Tx of a task : some tasks perform spontaneous writes to a buffer. Checks + are an example of this. The checks will have to be able to sleep while a + buffer is being awaited. + +One important point is that such pauses must not prevent the task from timing +out. 
There it becomes difficult because in the case of a time out, we could +want to emit a timeout error message and for this, require a buffer. So it is +important to keep the ability not to send messages upon error processing, and +to be able to give up and stop waiting for buffers. + +The refill mechanism needs to be designed in a thread-safe way because this +will become one of the rare cases of inter-task activity. Thus it is important +to ensure that checking the state of the task and passing of the freshly +released buffer are performed atomically, and that in case the task doesn't +want it anymore, it is responsible for passing it to the next one. + diff --git a/doc/design-thoughts/entities-v2.txt b/doc/design-thoughts/entities-v2.txt new file mode 100644 index 0000000..91c4fa9 --- /dev/null +++ b/doc/design-thoughts/entities-v2.txt @@ -0,0 +1,276 @@ +2012/07/05 - Connection layering and sequencing + + +An FD has a state : + - CLOSED + - READY + - ERROR (?) + - LISTEN (?) + +A connection has a state : + - CLOSED + - ACCEPTED + - CONNECTING + - ESTABLISHED + - ERROR + +A stream interface has a state : + - INI, REQ, QUE, TAR, ASS, CON, CER, EST, DIS, CLO + +Note that CON and CER might be replaced by EST if the connection state is used +instead. CON might even be more suited than EST to indicate that a connection +is known. + + +si_shutw() must do : + + data_shutw() + if (shutr) { + data_close() + ctrl_shutw() + ctrl_close() + } + +si_shutr() must do : + data_shutr() + if (shutw) { + data_close() + ctrl_shutr() + ctrl_close() + } + +Each of these steps may fail, in which case the step must be retained and the +operations postponed in an asynchronous task. + +The first asynchronous data_shut() might already fail so it is mandatory to +save the other side's status with the connection in order to let the async task +know whether the 3 next steps must be performed. 
+ +The connection (or perhaps the FD) needs to know : + - the desired close operations : DSHR, DSHW, CSHR, CSHW + - the completed close operations : DSHR, DSHW, CSHR, CSHW + + +On the accept() side, we probably need to know : + - if a header is expected (eg: accept-proxy) + - if this header is still being waited for + => maybe both info might be combined into one bit + + - if a data-layer accept() is expected + - if a data-layer accept() has been started + - if a data-layer accept() has been performed + => possibly 2 bits, to indicate the need to free() + +On the connect() side, we need to know : + - the desire to send a header (eg: send-proxy) + - if this header has been sent + => maybe both info might be combined + + - if a data-layer connect() is expected + - if a data-layer connect() has been started + - if a data-layer connect() has been completed + => possibly 2 bits, to indicate the need to free() + +On the response side, we also need to know : + - the desire to send a header (eg: health check response for monitor-net) + - if this header was sent + => might be the same as sending a header over a new connection + +Note: monitor-net has precedence over proxy proto and data layers. Same for + health mode. + +For multi-step operations, use 2 bits : + 00 = operation not desired, not performed + 10 = operation desired, not started + 11 = operation desired, started but not completed + 01 = operation desired, started and completed + + => X != 00 ==> operation desired + X & 01 ==> operation at least started + X & 10 ==> operation not completed + +Note: no way to store status information for error reporting. + +Note2: it would be nice if "tcp-request connection" rules could work at the +connection level, just after headers ! This means support for tracking stick +tables, possibly not too much complicated. 
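A minimal sketch of the two-bit encoding above as C macros, with the three derived tests from the notes:

```c
/* Two-bit state of a multi-step operation, as described in the notes. */
#define OP_NONE      0x0  /* not desired, not performed            */
#define OP_DESIRED   0x2  /* desired, not started                  */
#define OP_STARTED   0x3  /* desired, started but not completed    */
#define OP_DONE      0x1  /* desired, started and completed        */

#define op_desired(x)    ((x) != OP_NONE) /* X != 00 => desired            */
#define op_started(x)    ((x) & 0x1)      /* X & 01 => at least started    */
#define op_incomplete(x) ((x) & 0x2)      /* X & 10 => not completed       */
```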
+
+
+Proposal for incoming connection sequence :
+
+- accept()
+- if monitor-net matches or if mode health => try to send response
+- if accept-proxy, wait for proxy request
+- if tcp-request connection, process tcp rules and possibly keep the
+  pointer to stick-table
+- if SSL is enabled, switch to SSL handshake
+- then switch to DATA state and instantiate a session
+
+We just need a map of handshake handlers on the connection. They all manage the
+FD status themselves and set the callbacks themselves. If their work succeeds,
+they remove themselves from the list. If it fails, they remain subscribed and
+enable the required polling until they are woken up again or the timeout strikes.
+
+Identified handshake handlers for incoming connections :
+  - HH_HEALTH (tries to send OK and dies)
+  - HH_MONITOR_IN (matches src IP and adds/removes HH_SEND_OK/HH_SEND_HTTP_OK)
+  - HH_SEND_OK (tries to send "OK" and dies)
+  - HH_SEND_HTTP_OK (tries to send "HTTP/1.0 200 OK" and dies)
+  - HH_ACCEPT_PROXY (waits for PROXY line and parses it)
+  - HH_TCP_RULES (processes TCP rules)
+  - HH_SSL_HS (starts SSL handshake)
+  - HH_ACCEPT_SESSION (instantiates a session)
+
+Identified handshake handlers for outgoing connections :
+  - HH_SEND_PROXY (tries to build and send the PROXY line)
+  - HH_SSL_HS (starts SSL handshake)
+
+For the pollers, we could check that handshake handlers are not 0 and decide to
+call a generic connection handshake handler instead of usual callbacks. Problem
+is that pollers don't know connections, they know fds. So entities which manage
+handlers should update the FD callbacks accordingly.
+
+With a bit of care, we could have :
+  - HH_SEND_LAST_CHUNK (sends the chunk pointed to by a pointer and dies)
+    => merges HEALTH, SEND_OK and SEND_HTTP_OK
+
+It sounds like the ctrl vs data states for the connection are per-direction
+(eg: support an async ctrl shutw while still reading data).
+
+Also support shutr/shutw status at L4/L7.
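The map-of-handlers behaviour described above (run each subscribed handler, unsubscribe it on success, keep it subscribed on failure) can be sketched with a bitmap standing in for the list; all names are invented:

```c
/* Sketch of the handshake-handler chain: each handler returns 1 when its
 * work is done (it then unsubscribes itself) and 0 when it must keep
 * waiting for a wakeup or timeout.
 */
#define HH_COUNT 4

struct hs_conn {
    unsigned int pending;          /* bit i set => handler i still subscribed */
    int (*handler[HH_COUNT])(void);
};

/* returns the number of handlers still pending */
static int hs_run(struct hs_conn *c)
{
    int left = 0;

    for (int i = 0; i < HH_COUNT; i++) {
        if (!(c->pending & (1u << i)))
            continue;
        if (c->handler[i]())
            c->pending &= ~(1u << i); /* work succeeded: unsubscribe */
        else
            left++;                   /* stays subscribed, waits for wakeup */
    }
    return left;
}

/* sample handlers for the example */
static int hh_done(void) { return 1; }
static int hh_wait(void) { return 0; }
```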
+ +In practice, what we really need is : + +shutdown(conn) = + conn.data.shut() + conn.ctrl.shut() + conn.fd.shut() + +close(conn) = + conn.data.close() + conn.ctrl.close() + conn.fd.close() + +With SSL over Remote TCP (RTCP + RSSL) to reach the server, we would have : + + HTTP -> RTCP+RSSL connection <-> RTCP+RRAW connection -> TCP+SSL connection + +The connection has to be closed at 3 places after a successful response : + - DATA (RSSL over RTCP) + - CTRL (RTCP to close connection to server) + - SOCK (FD to close connection to second process) + +Externally, the connection is seen with very few flags : + - SHR + - SHW + - ERR + +We don't need a CLOSED flag as a connection must always be detached when it's closed. + +The internal status doesn't need to be exposed : + - FD allocated (Y/N) + - CTRL initialized (Y/N) + - CTRL connected (Y/N) + - CTRL handlers done (Y/N) + - CTRL failed (Y/N) + - CTRL shutr (Y/N) + - CTRL shutw (Y/N) + - DATA initialized (Y/N) + - DATA connected (Y/N) + - DATA handlers done (Y/N) + - DATA failed (Y/N) + - DATA shutr (Y/N) + - DATA shutw (Y/N) + +(note that having flags for operations needing to be completed might be easier) +-------------- + +Maybe we need to be able to call conn->fdset() and conn->fdclr() but it sounds +very unlikely since the only functions manipulating this are in the code of +the data/ctrl handlers. + +FDSET/FDCLR cannot be directly controlled by the stream interface since it also +depends on the DATA layer (WANT_READ/WANT_WRITE). + +But FDSET/FDCLR is probably controlled by who owns the connection (eg: DATA). + +Example: an SSL conn relies on an FD. The buffer is full, and wants the conn to +stop reading. It must not stop the FD itself. It is the read function which +should notice that it has nothing to do with a read wake-up, which needs to +disable reading. + +Conversely, when calling conn->chk_rcv(), the reader might get a WANT_READ or +even WANT_WRITE and adjust the FDs accordingly. 
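The WANT_READ/WANT_WRITE adjustment can be sketched as follows; the enum and structures are hypothetical stand-ins for the data layer's return codes and the FD event state:

```c
/* The data layer reports what it actually needs, and only then are the FD
 * events adjusted: eg an SSL read may need a write (renegotiation), and
 * vice versa, so the caller must not touch the FD directly.
 */
enum dl_want { DL_DONE, DL_WANT_READ, DL_WANT_WRITE };

struct fd_events {
    int read_enabled;
    int write_enabled;
};

static void adjust_fd(struct fd_events *ev, enum dl_want want)
{
    ev->read_enabled  = (want == DL_WANT_READ);
    ev->write_enabled = (want == DL_WANT_WRITE);
}
```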
+ +------------------------ + +OK, the problem is simple : we don't manipulate the FD at the right level. +We should have : + ->connect(), ->chk_snd(), ->chk_rcv(), ->shutw(), ->shutr() which are + called from the upper layer (buffer) + ->recv(), ->send(), called from the lower layer + +Note that the SHR is *reported* by lower layer but can be forced by upper +layer. In this case it's like a delayed abort. The difficulty consists in +knowing the output data were correctly read. Probably we'd need to drain +incoming data past the active shutr(). + +The only four purposes of the top-down shutr() call are : + - acknowledge a shut read report : could probably be done better + - read timeout => disable reading : it's a delayed abort. We want to + report that the buffer is SHR, maybe even the connection, but the + FD clearly isn't. + - read abort due to error on the other side or desire to close (eg: + http-server-close) : delayed abort + - complete abort + +The active shutr() is problematic as we can't disable reading if we expect some +exchanges for data acknowledgement. We probably need to drain data only until +the shutw() has been performed and ACKed. + +A connection shut down for read would behave like this : + + 1) bidir exchanges + + 2) shutr() => read_abort_pending=1 + + 3) drain input, still send output + + 4) shutw() + + 5) drain input, wait for read0 or ack(shutw) + + 6) close() + +--------------------- 2012/07/05 ------------------- + +Communications must be performed this way : + + connection <-> channel <-> connection + +A channel is composed of flags and stats, and may store data in either a buffer +or a pipe. We need low-layer operations between sockets and buffers or pipes. +Right now we only support sockets, but later we might support remote sockets +and maybe pipes or shared memory segments. 
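Such low-layer operations naturally group into a per-transport operation table; a hypothetical sketch (not haproxy's actual transport interface):

```c
#include <stddef.h>

/* Hypothetical vtable grouping the socket<->buffer/pipe operations per
 * transport. A NULL function pointer means the operation is not supported
 * by that transport (eg: no splicing through SSL).
 */
struct xprt_ops_sketch {
    int (*to_buf)(int fd, void *buf, size_t len);         /* recv into buffer */
    int (*to_pipe)(int fd, int pipefd, size_t len);       /* recv into pipe   */
    int (*from_buf)(int fd, const void *buf, size_t len); /* send from buffer */
    int (*from_pipe)(int fd, int pipefd, size_t len);     /* send from pipe   */
};

static int xprt_can_splice(const struct xprt_ops_sketch *ops)
{
    return ops->to_pipe != NULL && ops->from_pipe != NULL;
}

/* dummy pipe operation for the example */
static int dummy_pipe_io(int fd, int pipefd, size_t len)
{
    (void)fd; (void)pipefd; (void)len;
    return 0;
}
```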
+ +So we need : + + - raw_sock_to_buf() => receive raw data from socket into buffer + - raw_sock_to_pipe => receive raw data from socket into pipe (splice in) + - raw_sock_from_buf() => send raw data from buffer to socket + - raw_sock_from_pipe => send raw data from pipe to socket (splice out) + + - ssl_sock_to_buf() => receive ssl data from socket into buffer + - ssl_sock_to_pipe => receive ssl data from socket into a pipe (NULL) + - ssl_sock_from_buf() => send ssl data from buffer to socket + - ssl_sock_from_pipe => send ssl data from pipe to socket (NULL) + +These functions should set such status flags : + +#define ERR_IN 0x01 +#define ERR_OUT 0x02 +#define SHUT_IN 0x04 +#define SHUT_OUT 0x08 +#define EMPTY_IN 0x10 +#define FULL_OUT 0x20 + diff --git a/doc/design-thoughts/how-it-works.txt b/doc/design-thoughts/how-it-works.txt new file mode 100644 index 0000000..2d1cb89 --- /dev/null +++ b/doc/design-thoughts/how-it-works.txt @@ -0,0 +1,60 @@ +How it works ? (unfinished and inexact) + +For TCP and HTTP : + +- listeners create listening sockets with a READ callback pointing to the + protocol-specific accept() function. + +- the protocol-specific accept() function then accept()'s the connection and + instantiates a "server TCP socket" (which is dedicated to the client side), + and configures it (non_block, get_original_dst, ...). + +For TCP : +- in case of pure TCP, a request buffer is created, as well as a "client TCP + socket", which tries to connect to the server. + +- once the connection is established, the response buffer is allocated and + connected to both ends. + +- both sockets are set to "autonomous mode" so that they only wake up their + supervising session when they encounter a special condition (error or close). 
+
+
+For HTTP :
+- in case of HTTP, a request buffer is created with the "HOLD" flag set and
+  a read limit to support header rewriting (maybe this one will be removed
+  eventually because it's better to limit only to the buffer size and report
+  an error when rewritten data overflows)
+
+- a "flow analyzer" is attached to the buffer (or possibly multiple flow
+  analyzers). For the request, the flow analyzer is "http_lb_req". The flow
+  analyzer is a function which gets called when new data is present and
+  blocked. It has a timeout (request timeout). It can also be bypassed on
+  demand.
+
+- when the "http_lb_req" has received the whole request, it creates a client
+  socket with all the parameters needed to try to connect to the server. When
+  the connection establishes, the response buffer is allocated on the fly,
+  put to HOLD mode, and an "http_lb_resp" flow analyzer is attached to the
+  buffer.
+
+
+For client-side HTTPS :
+
+- the accept() function must completely instantiate a TCP socket + an SSL
+  reader. It is when the SSL session is complete that we call the
+  protocol-specific accept(), and create its buffer.
+
+
+
+
+Conclusions
+-----------
+
+- we need a generic TCP accept() function with a lot of flags set by the
+  listener, to tell it what info we need to get at the accept() time, and
+  what flags will have to be set on the socket.
+
+- once the TCP accept() function ends, it wakes up the protocol supervisor
+  which is in charge of creating the buffers, etc, switch states, etc...
+
diff --git a/doc/design-thoughts/http2.txt b/doc/design-thoughts/http2.txt
new file mode 100644
index 0000000..c21ac10
--- /dev/null
+++ b/doc/design-thoughts/http2.txt
@@ -0,0 +1,277 @@
+2014/10/23 - design thoughts for HTTP/2
+
+- connections : HTTP/2 depends a lot more on a connection than HTTP/1 because a
+  connection holds a compression context (headers table, etc...). We probably
+  need to have an h2_conn struct.
+ +- multiple transactions will be handled in parallel for a given h2_conn. They + are called streams in HTTP/2 terminology. + +- multiplexing : for a given client-side h2 connection, we can have multiple + server-side h2 connections. And for a server-side h2 connection, we can have + multiple client-side h2 connections. Streams circulate in N-to-N fashion. + +- flow control : flow control will be applied between multiple streams. Special + care must be taken so that an H2 client cannot block some H2 servers by + sending requests spread over multiple servers to the point where one server + response is blocked and prevents other responses from the same server from + reaching their clients. H2 connection buffers must always be empty or nearly + empty. The per-stream flow control needs to be respected as well as the + connection's buffers. It is important to implement some fairness between all + the streams so that it's not always the same which gets the bandwidth when + the connection is congested. + +- some clients can be H1 with an H2 server (is this really needed ?). Most of + the initial use case will be H2 clients to H1 servers. It is important to keep + in mind that H1 servers do not do flow control and that we don't want them to + block transfers (eg: post upload). + +- internal tasks : some H2 clients will be internal tasks (eg: health checks). + Some H2 servers will be internal tasks (eg: stats, cache). The model must be + compatible with this use case. + +- header indexing : headers are transported compressed, with a reference to a + static or a dynamic header, or a literal, possibly huffman-encoded. Indexing + is specific to the H2 connection. This means there is no way any binary data + can flow between both sides, headers will have to be decoded according to the + incoming connection's context and re-encoded according to the outgoing + connection's context, which can significantly differ. 
In order to avoid the
+  parsing trouble we currently face, headers will have to be clearly split
+  between name and value. It is worth noting that neither the incoming nor the
+  outgoing connections' contexts will be of any use while processing the
+  headers. At best we can have some shortcuts for well-known names that map
+  well to the static ones (eg: use the first static entry with same name), and
+  maybe have a few special cases for static name+value as well. Probably we can
+  classify headers in such categories :
+
+    - static name + value
+    - static name + other value
+    - dynamic name + other value
+
+  This will allow for better processing in some specific cases. Headers
+  supporting a single value (:method, :status, :path, ...) should probably
+  be stored in a single location with a direct access. That would allow us
+  to retrieve a method using hdr[METHOD]. All such indexing must be performed
+  while parsing. That also means that HTTP/1 will have to be converted to this
+  representation very early in the parser and possibly converted back to H/1
+  after processing.
+
+  Header names/values will have to be placed in a small memory area that will
+  inevitably get fragmented as headers are rewritten. An automatic packing
+  mechanism must be implemented so that when there's no more room, headers are
+  simply defragmented/packed to a new table and the old one is released. Just
+  like for the static chunks, we need to have a few such tables pre-allocated
+  and ready to be swapped at any moment. Repacking must not change any index
+  nor affect the way headers are compressed so that it can happen late after a
+  retry (send-name-header for example).
+
+- header processing : can still happen on a (header, value) basis. Reqrep/
+  rsprep completely disappear and will have to be replaced with something else
+  to support renaming headers and rewriting url/path/...
+
+- push_promise : servers can push dummy requests+responses.
They advertise
+  the associated stream ID in the push_promise frame.
+  This means that it is possible to initiate a client-server stream from the
+  information coming from the server and make the data flow as if the client
+  had made it. It's likely that we'll have to support two types of server
+  connections: those which support push and those which do not. That way client
+  streams will be distributed to existing server connections based on their
+  capabilities. It's important to keep in mind that PUSH will not be rewritten
+  in responses.
+
+- stream ID mapping : since the stream ID is per H2 connection, stream IDs will
+  have to be mapped. Thus a given stream is an entity with two IDs (one per
+  side). Or more precisely a stream has two end points, each one carrying an ID
+  when it ends on an HTTP2 connection. Also, for each stream ID we need to
+  quickly find the associated transaction in progress. Using a small quick
+  unique tree seems indicated considering the wide range of valid values.
+
+- frame sizes : frames have to be remapped between both sides as multiplexed
+  connections won't always have the same characteristics. Thus some frames
+  might be spliced and others will be sliced.
+
+- error processing : care must be taken to never break a connection unless it
+  is dead or corrupt at the protocol level. Stats counters must exist to observe
+  the causes. Timeouts are a great problem because silent connections might
+  die of inactivity. Ping frames should probably be scheduled a few seconds
+  before the connection timeout so that an unused connection is verified before
+  being killed. Abnormal requests must be dealt with using RST_STREAM.
+
+- ALPN : ALPN must be observed on the client side, and transmitted to the server
+  side.
+
+- proxy protocol : proxy protocol makes little to no sense in a multiplexed
+  protocol. A per-stream equivalent will surely be needed if implementations
+  do not quickly generalize the use of Forward.
+ +- simplified protocol for local devices (eg: haproxy->varnish in clear and + without handshake, and possibly even with splicing if the connection's + settings are shared) + +- logging : logging must report a number of extra information such as the + stream ID, and whether the transaction was initiated by the client or by the + server (which can be deduced from the stream ID's parity). In case of push, + the number of the associated stream must also be reported. + +- memory usage : H2 increases memory usage by mandating use of 16384 bytes + frame size minimum. That means slightly more than 16kB of buffer in each + direction to process any frame. It will definitely have an impact on the + deployed maxconn setting in places using less than this (4..8kB are common). + Also, the header list is persistent per connection, so if we reach the same + size as the request, that's another 16kB in each direction, resulting in + about 48kB of memory where 8 were previously used. A more careful encoder + can work with a much smaller set even if that implies evicting entries + between multiple headers of the same message. + +- HTTP/1.0 should very carefully be transported over H2. Since there's no way + to pass version information in the protocol, the server could use some + features of HTTP/1.1 that are unsafe in HTTP/1.0 (compression, trailers, + ...). + +- host / :authority : ":authority" is the norm, and "host" will be absent when + H2 clients generate :authority. This probably means that a dummy Host header + will have to be produced internally from :authority and removed when passing + to H2 behind. This can cause some trouble when passing H2 requests to H1 + proxies, because there's no way to know if the request should contain scheme + and authority in H1 or not based on the H2 request. Thus a "proxy" option + will have to be explicitly mentioned on HTTP/1 server lines. 
One of the
+  problems that it creates is that it's no longer possible to pass H/1 requests
+  to H/1 proxies without an explicit configuration. Maybe a table of the
+  various combinations is needed.
+
+                       :scheme    :authority    host
+  HTTP/2 request       present    present       absent
+  HTTP/1 server req    absent     absent        present
+  HTTP/1 proxy req     present    present       present
+
+  So in the end the issue is only with H/2 requests passed to H/1 proxies.
+
+- ping frames : they don't indicate any stream ID so by definition they cannot
+  be forwarded to any server. The H2 connection should deal with them only.
+
+There's a layering problem with H2. The framing layer has to be aware of the
+upper layer semantics. We can't simply re-encode HTTP/1 to HTTP/2 then pass
+it over a framing layer to mux the streams, the frame type must be passed below
+so that frames are properly arranged. Header encoding is connection-based and
+all streams using the same connection will interact in the way their headers
+are encoded. Thus the encoder *has* to be placed in the h2_conn entity, and
+this entity has to know for each stream what its headers are.
+
+Probably we should remove *all* headers from transported data and move
+them on the fly to a parallel structure that can be shared between H1 and H2
+and consumed at the appropriate level. That means buffers only transport data.
+Trailers have to be dealt with differently.
+
+So if we consider an H1 request being forwarded between a client and a server,
+it would look approximately like this :
+
+  - request header + body land into a stream's receive buffer
+  - headers are indexed and stripped out so that only the body and whatever
+    follows remain in the buffer
+  - both the header index and the buffer with the body stay attached to the
+    stream
+  - the sender can rebuild the whole headers.
Since they're found in a table + supposed to be stable, it can rebuild them as many times as desired and + will always get the same result, so it's safe to build them into the trash + buffer for immediate sending, just as we do for the PROXY protocol. + - the upper protocol should probably provide a build_hdr() callback which + when called by the socket layer, builds this header block based on the + current stream's header list, ready to be sent. + - the socket layer has to know how many bytes from the headers are left to be + forwarded prior to processing the body. + - the socket layer needs to consume only the acceptable part of the body and + must not release the buffer if any data remains in it (eg: pipelining over + H1). This is already handled by channel->o and channel->to_forward. + - we could possibly have another optional callback to send a preamble before + data, that could be used to send chunk sizes in H1. The danger is that it + absolutely needs to be stable if it has to be retried. But it could + considerably simplify de-chunking. + +When the request is sent to an H2 server, an H2 stream request must be made +to the server, we find an existing connection whose settings are compatible +with our needs (eg: tls/clear, push/no-push), and with a spare stream ID. If +none is found, a new connection must be established, unless maxconn is reached. + +Servers must have a maxstream setting just like they have a maxconn. The same +queue may be used for that. + +The "tcp-request content" ruleset must apply to the TCP layer. But with HTTP/2 +that becomes impossible (and useless). We still need something like the +"tcp-request session" hook to apply just after the SSL handshake is done. + +It is impossible to defragment the body on the fly in HTTP/2. Since multiple +messages are interleaved, we cannot wait for all of them and block the head of +line. Thus if body analysis is required, it will have to use the stream's +buffer, which necessarily implies a copy. 
That means that with each H2 end we +necessarily have at least one copy. Sometimes we might be able to "splice" some +bytes from one side to the other without copying into the stream buffer (same +rules as for TCP splicing). + +In theory, only data should flow through the channel buffer, so each side's +connector is responsible for encoding data (H1: linear/chunks, H2: frames). +Maybe the same mechanism could be extrapolated to tunnels / TCP. + +Since we'd use buffers only for data (and for receipt of headers), we need to +have dynamic buffer allocation. + +Thus : +- Tx buffers do not exist. We allocate a buffer on the fly when we're ready to + send something that we need to build and that needs to be persistent in case + of partial send. H1 headers are built on the fly from the header table to a + temporary buffer that is immediately sent and whose amount of sent bytes is + the only information kept (like for PROXY protocol). H2 headers are more + complex since the encoding depends on what was successfully sent. Thus we + need to build them and put them into a temporary buffer that remains + persistent in case send() fails. It is possible to have a limited pool of + Tx buffers and refrain from sending if there is no more buffer available in + the pool. In that case we need a wake-up mechanism once a buffer is + available. Once the data are sent, the Tx buffer is then immediately recycled + in its pool. Note that no tx buffer being used (eg: for hdr or control) means + that we have to be able to serialize access to the connection and retry with + the same stream. It also means that a stream that times out while waiting for + the connector to read the second half of its request has to stay there, or at + least needs to be handled gracefully. However if the connector cannot read + the data to be sent, it means that the buffer is congested and the connection + is dead, so that probably means it can be killed. 
+
+- Rx buffers have to be pre-allocated just before calling recv(). A connection
+  will first try to pick a buffer and disable reception if it fails, then
+  subscribe to the list of tasks waiting for an Rx buffer.
+
+- full Rx buffers might sometimes be moved around to the next buffer instead of
+  experiencing a copy. That means that channels and connectors must use the
+  same format of buffer, and that only the channel will have to see its
+  pointers adjusted.
+
+- Tx of data should be made as much as possible without copying. That possibly
+  means by directly looking into the connection buffer on the other side if
+  the local Tx buffer does not exist and the stream buffer is not allocated, or
+  even performing a splice() call between the two sides. One of the problems in
+  doing this is that it requires proper ordering of the operations (eg: when
+  multiple readers are attached to a same buffer). If the splitting occurs upon
+  receipt, there's no problem. If we expect to retrieve data directly from the
+  original buffer, it's harder since it contains various things in an order
+  which does not even indicate what belongs to whom. Thus possibly the only
+  mechanism to implement is the buffer permutation which guarantees zero-copy,
+  and only in the 100% safe case. Also it's atomic and does not cause HOL
+  blocking.
+
+It makes sense to choose the frontend_accept() function right after the
+handshake ended. It is then possible to check the ALPN, the SNI, the ciphers
+and to accept to switch to the h2_conn_accept handler only if everything is OK.
+The h2_conn_accept handler will have to deal with the connection setup,
+initialization of the header table, exchange of the settings frames and
+preparing whatever is needed to fire new streams upon receipt of unknown
+stream IDs. Note: most of the time it will not be possible to splice() because
+we need to know in advance the amount of bytes to write the header, and here it
+will not be possible. 
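The "buffer permutation" mentioned above can be illustrated with a small sketch. The `struct xbuf` type and the function name below are hypothetical stand-ins, not haproxy's real buffer API; the point is only that swapping the two descriptors hands the data over without copying a single byte, and that it is only attempted in the 100% safe case (empty destination).

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for a buffer: a storage pointer plus the number of
 * bytes it currently holds. Not haproxy's real struct buffer. */
struct xbuf {
        char  *area;   /* storage */
        size_t data;   /* bytes present */
};

/* Zero-copy hand-off: instead of memcpy()ing the full Rx buffer into the
 * channel, swap the two descriptors. Only safe when the destination is
 * empty (the "100% safe case" from the notes), so the caller checks that
 * and falls back to a copy otherwise. */
static int buf_swap_if_safe(struct xbuf *src, struct xbuf *dst)
{
        char *tmp_area;

        if (dst->data != 0)
                return 0;       /* destination not empty: caller must copy */

        tmp_area  = dst->area;
        dst->area = src->area;
        dst->data = src->data;
        src->area = tmp_area;
        src->data = 0;
        return 1;               /* swapped, no byte was copied */
}
```

The swap is naturally atomic from the channel's point of view, which is why it avoids the ordering problems that reading directly from a shared connection buffer would raise.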
+
+H2 health checks must be seen as regular transactions/streams. The check runs a
+normal client which seeks an available stream from a server. The server then
+finds one on an existing connection or initiates a new H2 connection. The H2
+checks will have to be configurable for sharing streams or not. Another option
+could be to specify how many requests can be made over existing connections
+before insisting on getting a separate connection. Note that such separate
+connections might end up stacking up once released. So they probably need to
+be recycled very quickly (eg: fix how many unused ones may exist at most).
+
diff --git a/doc/design-thoughts/http_load_time.url b/doc/design-thoughts/http_load_time.url
new file mode 100644
index 0000000..f178e46
--- /dev/null
+++ b/doc/design-thoughts/http_load_time.url
@@ -0,0 +1,5 @@
+Excellent paper about page load time for keepalive on/off, pipelining,
+multiple host names, etc...
+
+http://www.die.net/musings/page_load_time/
+
diff --git a/doc/design-thoughts/pool-debugging.txt b/doc/design-thoughts/pool-debugging.txt
new file mode 100644
index 0000000..106e41c
--- /dev/null
+++ b/doc/design-thoughts/pool-debugging.txt
@@ -0,0 +1,243 @@
+2022-02-22 - debugging options with pools
+
+Two goals:
+  - help developers spot bugs as early as possible
+
+  - make the process more reliable in the field, by killing sick ones as soon
+    as possible instead of letting them corrupt data, cause trouble, or even
+    be exploited.
+
+An allocated object may exist in 5 forms:
+  - in use: currently referenced and used by haproxy, 100% of its size is
+    dedicated to the application which can do absolutely anything with it,
+    but it may never touch anything before or after that area.
+
+  - in cache: the object is neither referenced nor used anymore, but it sits
+    in a thread's cache. The application may not touch it at all anymore, and
+    some parts of it could even be unmapped. 
Only the current thread may safely
+    reach it, though others might find/release it when under thread isolation.
+    The thread cache needs some LRU linking that may be stored anywhere, either
+    inside the area, or outside. The parts surrounding the <size> parts remain
+    invisible to the application layer, and can serve as a protection.
+
+  - in shared cache: the object is neither referenced nor used anymore, but it
+    may be reached by any thread. Some parts of it could be unmapped. Any
+    thread may pick it but only one may find it, hence once grabbed, it is
+    guaranteed no other one will find it. The shared cache needs to set up a
+    linked list and a single pointer needs to be stored anywhere, either inside
+    or outside the area. The parts surrounding the <size> parts remain
+    invisible to the application layer, and can serve as a protection.
+
+  - in the system's memory allocator: the object is no longer known to
+    haproxy. It may be reassigned in parts or totally to other pools or other
+    subsystems (e.g. crypto library). Some or all of it may be unmapped. The
+    areas surrounding the <size> parts are also part of the object from the
+    library's point of view and may be delivered to other areas. Tampering
+    with these may cause any other part to malfunction in dirty ways.
+
+  - in the OS only: the memory allocator gave it back to the OS.
+
+The following options need to be configurable:
+  - detect improper initialization: this is done by poisoning objects before
+    delivering them to the application.
+
+  - help figure where an object was allocated when in use: a pointer to the
+    call place will help. Pointing to the last pool_free() as well for the
+    same reasons when dealing with a UAF.
+
+  - detection of wrong pointer/pool when in use: a pointer to the pool before
+    or after the area will definitely help.
+
+  - detection of overflows when in use: a canary at the end of the area
+    (closest possible to <size>) will definitely help. 
The pool pointer above can
+    do that job. Ideally, we should fill some data at the end so that even
+    unaligned sizes can be checked (e.g. a buffer that gets a zero appended).
+    If we just align on 2 pointers, writing the same pointer twice at the end
+    may do the job, but we won't necessarily have our bytes. Thus a particular
+    end-of-string pattern would be useful (e.g. ff55aa01) to fill it.
+
+  - detection of double free when in cache: similar to detection of wrong
+    pointer/pool when in use: the pointer at the end may simply be changed so
+    that it cannot match the pool anymore. By using a pointer to the caller of
+    the previous free() operation, we are guaranteed to see different
+    pointers, and this pointer can be inspected to figure where the object was
+    previously freed. An extra check may even distinguish a perfect double-free
+    (same caller) from just a wrong free (pointer differs from pool).
+
+  - detection of late corruption when in cache: keeping a copy of the
+    checksum of the whole area upon free() will do the job, but requires one
+    extra storage area for the checksum. Filling the area with a pattern also
+    does the job and doesn't require extra storage, but it loses the contents
+    and can be a bit slower. Sometimes losing the contents can be a feature,
+    especially when trying to detect late reads. Probably that both need to
+    be implemented. Note that if contents are not strictly needed, storing a
+    checksum inside the area does the job.
+
+  - preserve total contents in cache for debugging: losing some precious
+    information can be a problem.
+
+  - pattern filling of the area helps detect use-after-free in read-only mode.
+
+  - allocate cold first helps with both cases above.
+
+Uncovered:
+  - overflow/underflow when in cache/shared/libc: it belongs to the
+    use-after-free pattern and such an error during regular use ought to be
+    caught while the object was still in use. 
+
+  - integrity when in libc: not under our control anymore, this is a libc
+    problem.
+
+Arbitrable:
+  - integrity when in shared cache: unlikely to happen only then if it could
+    have happened in the local cache. The shared cache is not often used
+    anymore, thus probably not worth the effort.
+
+  - protection against double-free when in shared cache/libc: might be done for
+    a cheap price, probably worth being able to quickly tell that such an
+    object left the local cache (e.g. the mark points to the caller, but could
+    possibly just be incremented, hence still point to the same code location+1
+    byte when released. Calls are 4 bytes min on RISC, 5 on x86 so we do have
+    some margin by having a caller's location be +0,+1,+2 or +3).
+
+  - underflow when in use: hasn't been really needed over time but may change.
+
+  - detection of late corruption when in shared cache: checksum or area filling
+    are possible, but is this as relevant as it used to be, considering the
+    less common use of the shared cache ?
+
+Design considerations:
+  - object allocation when in use must remain minimal
+
+  - when in cache, there are 2 lists which the compiler expects to be at least
+    aligned each (e.g. if/when we start to use DWCAS).
+
+  - the original "pool debugging" feature covers pool tracking, double-free
+    detection, overflow detection and caller info at the cost of a single
+    pointer placed immediately after the area.
+
+  - preserving the contents might be done by placing the cache links and the
+    shared cache's list outside of the area (either before or after). Placing
+    it before has the merit that the allocated object preserves the 4-ptr
+    alignment. But when a larger alignment is desired this often does not work
+    anymore. Placing it after requires some dynamic adjustment depending on the
+    object's size. If any protection is installed, this protection must be
+    placed before the links so that the list doesn't get randomly corrupted and
+    corrupts adjacent elements. 
Note that if protection is desired, the extra
+    waste is probably less critical.
+
+  - a link to the last caller might have to be stored somewhere. Without
+    preservation the free() caller may be placed anywhere while the alloc()
+    caller may only be placed outside. With preservation, again the free()
+    caller may be placed either before the object or after the mark at the end.
+    There is no particular need that both share the same location, though it
+    may help. Note that when debugging is enabled, the free() caller doesn't
+    need to be duplicated and can continue to serve as the double-free
+    detection. Thus maybe in the end we only need to store the caller to the
+    last alloc() but not the free() since if we want it, it's available via
+    the pool debug.
+
+  - use-after-free detection: contents may be erased on free() and checked on
+    alloc(), but they can also be checksummed on free() and rechecked on
+    alloc(). In the latter case we need to store a checksum somewhere. Note
+    that with pure checksum we don't know what part was modified, but seeing
+    previous contents can be useful. 
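The "checksummed on free(), rechecked on alloc()" idea above can be sketched as follows. The function names and the trivial multiply-by-31 rolling sum are illustrative choices only; the notes leave the real hash open, and a real pool would store the seal in whichever slot the chosen layout provides.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical illustration of late-corruption detection: when an object
 * enters the cache we checksum its payload; when it is picked again we
 * verify the sum. A mismatch means something wrote to the area while it
 * was supposedly free (use-after-free / late corruption). */
static uint32_t area_sum(const unsigned char *area, size_t size)
{
        uint32_t sum = 0;
        size_t i;

        for (i = 0; i < size; i++)
                sum = sum * 31 + area[i];
        return sum;
}

/* called when the object is released into the cache */
static uint32_t cache_seal(const void *area, size_t size)
{
        return area_sum(area, size);
}

/* called when the object is picked from the cache; returns 0 on corruption */
static int cache_check(const void *area, size_t size, uint32_t seal)
{
        return area_sum(area, size) == seal;
}
```

As the notes say, a checksum cannot tell *which* bytes changed, which is why pattern filling remains interesting when the contents are expendable.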
+
+Possibilities:
+
+1) Linked lists inside the area:
+
+                 V  size                         alloc
+          ---+------------------------------+-----------------+--
+   in use    |##############################| (Pool) (Tracer) |
+          ---+------------------------------+-----------------+--
+
+          ---+--+--+------------------------+-----------------+--
+   in cache  |L1|L2|########################| (Caller) (Sum)  |
+          ---+--+--+------------------------+-----------------+--
+or:
+          ---+--+--+------------------------+-----------------+--
+   in cache  |L1|L2|###################(sum)| (Caller)        |
+          ---+--+--+------------------------+-----------------+--
+
+          ---+-+----------------------------+-----------------+--
+   in global |N|XXXX########################| (Caller)        |
+          ---+-+----------------------------+-----------------+--
+
+
+2) Linked lists before the area leave room for the tracer and the pool before
+   the area, but the canary must remain at the end; however the area will be
+   more difficult to keep aligned:
+
+                 V  head  size                   alloc
+      ----+-+-+------------------------------+-----------------+--
+   in use |T|P|##############################|    (canary)     |
+      ----+-+-+------------------------------+-----------------+--
+
+      --+-----+------------------------------+-----------------+--
+ in cache |L1|L2|##############################| (Caller) (Sum) |
+      --+-----+------------------------------+-----------------+--
+
+      ------+-+------------------------------+-----------------+--
+ in global |N|##############################|    (Caller)     |
+      ------+-+------------------------------+-----------------+--
+
+
+3) Linked lists at the end of the area, might be shared with extra data
+   depending on the state:
+
+                 V  size                         alloc
+          ---+------------------------------+-----------------+--
+   in use    |##############################| (Pool) (Tracer) |
+          ---+------------------------------+-----------------+--
+
+          ---+------------------------------+--+--+-----------+--
+   in cache  |##############################|L1|L2| (Caller) (Sum)
+          
---+------------------------------+--+--+-----------+--
+
+          ---+------------------------------+-+---------------+--
+   in global |##############################|N| (Caller)      |
+          ---+------------------------------+-+---------------+--
+
+This model requires a little bit of alignment at the end of the area, which is
+not incompatible with pattern filling and/or checksumming:
+  - preserving the area for post-mortem analysis means nothing may be placed
+    inside. In this case it could make sense to always store the last releaser.
+  - detecting late corruption may be done either with filling or checksumming,
+    but the simple fact of assuming a risk of corruption that needs to be
+    chased means we must store neither the lists nor the caller inside the
+    area.
+
+Some models imply dedicating some place when in cache:
+  - preserving contents forces the lists to be prefixed or appended, which
+    leaves unused places when in use. Thus we could systematically place the
+    pool pointer and the caller in this case.
+
+  - if preserving contents is not desired, almost everything can be stored
+    inside when not in use. Then each situation's size should be calculated
+    so that the allocated size is known, and entries are filled from the
+    beginning while not in use, or after the size when in use.
+
+  - if poisoning is requested, late corruption might be detected but then we
+    don't want the list to be stored inside at the risk of being corrupted.
+
+Maybe just implement a few models:
+  - compact/optimal: put l1/l2 inside
+  - detect late corruption: fill/sum, put l1/l2 out
+  - preserve contents: put l1/l2 out
+  - corruption+preserve: do not fill, sum out
+  - poisoning: not needed on free if pattern filling is done. 
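The "canary after the area" idea discussed above, where the canary is the pool pointer itself so that it catches both overflows and frees to the wrong pool, can be sketched like this. The `mini_pool` names and layout are purely illustrative, not haproxy's actual pool code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* Toy pool descriptor: only carries the payload size of its objects. */
struct mini_pool {
        size_t size;
};

/* Allocate payload + one trailing pointer-sized word used as the canary;
 * the word holds the pool pointer itself. */
static void *mini_alloc(struct mini_pool *pool)
{
        char *area = malloc(pool->size + sizeof(void *));

        if (area)
                memcpy(area + pool->size, &pool, sizeof(pool));
        return area;
}

/* Returns 1 if the canary still matches the pool; 0 means either an
 * overflow smashed it or the object is being checked against the wrong
 * pool. memcpy() avoids any alignment assumption on the trailing word. */
static int mini_check(struct mini_pool *pool, void *area)
{
        struct mini_pool *mark;

        memcpy(&mark, (char *)area + pool->size, sizeof(mark));
        return mark == pool;
}
```

One word thus serves three of the goals listed above at once: overflow detection, wrong-pool detection, and (by replacing it with a caller pointer on free) double-free detection.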
+
+try2:
+  - poison on alloc to detect missing initialization: yes/no
+    (note: nothing to do if filling done)
+  - poison on free to detect use-after-free: yes/no
+    (note: nothing to do if filling done)
+  - check on alloc for corruption-after-free: yes/no
+    If content-preserving => sum, otherwise pattern filling; in
+    any case, move L1/L2 out.
+  - check for overflows: yes/no: use a canary after the area. The
+    canary can be the pointer to the pool.
+  - check for alloc caller: yes/no => always after the area
+  - content preservation: yes/no
+    (disables filling, moves lists out)
+  - improved caller tracking: used to detect double-free, may benefit
+    from content preservation but not only from it.
diff --git a/doc/design-thoughts/rate-shaping.txt b/doc/design-thoughts/rate-shaping.txt
new file mode 100644
index 0000000..ca09408
--- /dev/null
+++ b/doc/design-thoughts/rate-shaping.txt
@@ -0,0 +1,90 @@
+2010/01/24 - Design of multi-criteria request rate shaping.
+
+We want to be able to rate-shape traffic on multiple criteria. For instance, we
+may want to support shaping per Host header value, as well as per source
+address.
+
+In order to achieve this, we will use checkpoints, one per criterion. Each of
+these checkpoints will consist of a test, a rate counter and a queue.
+
+A request reaches the checkpoint and checks the counter. If the counter is
+below the limit, it is updated and the request continues. If the limit is
+reached, the request attaches itself into the queue and sleeps. The sleep time
+is computed from the queue status, and updates the queue status.
+
+A task is dedicated to each queue. Its sole purpose is to be woken up when the
+next task may wake up, to check the frequency counter, wake as many requests as
+possible and update the counter. All the woken up requests are detached from
+the queue. Maybe the task dedicated to the queue can be avoided and replaced
+with all queued tasks' sleep counters, though this looks tricky. 
Or maybe it's
+just the first request in the queue that should be responsible for waking up
+other tasks, without forgetting to pass on this responsibility to the next
+tasks if it leaves the queue.
+
+The woken up request then goes on evaluating other criteria and possibly sleeps
+again on another one. In the end, the task will have waited the amount of time
+required to pass all checkpoints, and all checkpoints will be able to maintain
+a permanent load of exactly their limit if enough streams flow through them.
+
+Since a request can only sleep in one queue at a time, it makes sense to use a
+linked list element in each session to attach it to any queue. It could very
+well be shared with the pendconn hooks which could then be part of the session.
+
+This mechanism could be used to rate-shape sessions and requests per backend
+and per server.
+
+When rate-shaping on dynamic criteria, such as the source IP address, we have
+to first extract the data pattern, then look it up in a table very similar to
+the stickiness tables, but with a frequency counter. At the checkpoint, the
+pattern is looked up, the entry created or refreshed, and the frequency counter
+updated and checked. Then the request either goes on or sleeps as described
+above, but if it sleeps, it's still in the checkpoint's queue, but with a date
+computed from the criterion's status.
+
+This means that we need 3 distinct features :
+
+  - optional pattern extraction
+  - per-pattern or per-queue frequency counter
+  - time-ordered queue with a task
+
+Based on past experiences with frequency counters, it does not appear very easy
+to exactly compute sleep delays in advance for multiple requests. So most
+likely we'll have to run per-criterion queues too, with only the head of the
+queue holding a wake-up timeout. 
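A minimal sketch of the per-checkpoint step described above (pass while below the limit, otherwise compute a sleep delay). For clarity this uses a simple fixed-window counter rather than haproxy's sliding freq_ctr, and all names are illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* One checkpoint: <count> events were seen in the 1000ms window that
 * started at <start>. A real implementation would use a sliding window
 * and a per-pattern table, this is only the control-flow skeleton. */
struct ckpt {
        uint32_t start;   /* window start, in milliseconds */
        uint32_t count;   /* events in the current window */
        uint32_t limit;   /* max events per window */
};

/* Returns 0 if the request may pass now, otherwise the number of
 * milliseconds to sleep before the next window opens. */
static uint32_t ckpt_visit(struct ckpt *c, uint32_t now_ms)
{
        if (now_ms - c->start >= 1000) {    /* window expired: restart it */
                c->start = now_ms;
                c->count = 0;
        }
        if (c->count < c->limit) {
                c->count++;
                return 0;                   /* below the limit: pass */
        }
        return c->start + 1000 - now_ms;    /* sleep until the next window */
}
```

A request would call one such function per criterion in turn, sleeping whenever a non-zero delay is returned, which is exactly the chained-checkpoints flow described above.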
+
+This finally leads us to the following :
+
+  - optional pattern extraction
+  - per-pattern or per-queue frequency counter
+  - per-frequency counter queue
+  - head of the queue serves as a global queue timer.
+
+This brings us to a very flexible architecture :
+  - 1 list of rule-based checkpoints per frontend
+  - 1 list of rule-based checkpoints per backend
+  - 1 list of rule-based checkpoints per server
+
+Each of these lists has a lot of rules conditioned by ACLs, just like the
+use-backend rules, except that all rules are evaluated in turn.
+
+Since we might sometimes just want to enable that without setting any limit and
+just for enabling control in ACLs (or logging ?), we should probably try to
+find a flexible way of declaring just a counter without a queue.
+
+These checkpoints could be of two types :
+  - rate-limit (described here)
+  - concurrency-limit (very similar with the counter and no timer). This
+    feature would require keeping track of all accounted criteria in a
+    request so that they can be released upon request completion.
+
+It should be possible to define a maximum number of requests in the queue,
+above which a 503 is returned. The same applies to the max delay in the queue.
+We could have it per-task (currently it's the connection timeout) and abort
+tasks with a 503 when the delay is exceeded.
+
+Per-server connection concurrency could be converted to use this mechanism
+which is very similar.
+
+The construct should be flexible enough so that the counters may be checked
+from ACLs. That would allow rejecting connections or switching to an alternate
+backend when some limits are reached. 
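The concurrency-limit variant mentioned above only needs a counter and no timer: take a slot on entry, give it back on request completion. A hypothetical sketch follows; a real version would use atomics and remember every accounted criterion in the request so all slots get released:

```c
#include <assert.h>
#include <stdint.h>

/* Concurrency checkpoint: <cur> requests are currently accounted here,
 * <max> is the limit. Names are illustrative only. */
struct conc_ckpt {
        uint32_t cur;
        uint32_t max;
};

/* Returns 1 when a slot was taken, 0 when the request must be queued or
 * rejected (e.g. with a 503 as discussed above). */
static int conc_enter(struct conc_ckpt *c)
{
        if (c->cur >= c->max)
                return 0;
        c->cur++;
        return 1;
}

/* Must be called once per successfully entered checkpoint when the
 * request completes, hence the need to track accounted criteria. */
static void conc_leave(struct conc_ckpt *c)
{
        c->cur--;
}
```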
+
diff --git a/doc/design-thoughts/sess_par_sec.txt b/doc/design-thoughts/sess_par_sec.txt
new file mode 100644
index 0000000..e936374
--- /dev/null
+++ b/doc/design-thoughts/sess_par_sec.txt
@@ -0,0 +1,13 @@
+Graph of the number of operations processed per second per time unit with:
+ - a linear algorithm with a very low unit cost (0.01 time units)
+ - a log2 algorithm 5 times more expensive per operation (0.05 time units)
+
+set yrange [0:1]
+plot [0:1000] 1/(1+0.01*x), 1/(1+0.05*log(x+1)/log(2))
+
+Graph of the latency induced by these operations, in time units:
+
+set yrange [0:1000]
+plot [0:1000] x/(1+0.01*x), x/(1+0.05*log(x+1)/log(2))
+
+
diff --git a/doc/design-thoughts/thread-group.txt b/doc/design-thoughts/thread-group.txt
new file mode 100644
index 0000000..f1aad7b
--- /dev/null
+++ b/doc/design-thoughts/thread-group.txt
@@ -0,0 +1,528 @@
+Thread groups
+#############
+
+2021-07-13 - first draft
+==========
+
+Objective
+---------
+- support multi-socket systems with limited cache-line bouncing between
+  physical CPUs and/or L3 caches
+
+- overcome the 64-thread limitation
+
+- support a reasonable number of groups. I.e. if modern CPUs arrive with
+  core complexes made of 8 cores, with 8 CC per chip and 2 chips in a
+  system, it makes sense to support 16 groups.
+
+
+Non-objective
+-------------
+- no need to optimize to the last possible cycle. I.e. some algos like
+  leastconn will remain shared across all threads, servers will keep a
+  single queue, etc. Global information remains global.
+
+- no stubborn enforcement of FD sharing. Per-server idle connection lists
+  can become per-group; listeners can (and should probably) be per-group.
+  Other mechanisms (like SO_REUSEADDR) can already overcome this.
+
+- no need to go beyond 64 threads per group.
+
+
+Identified tasks
+================
+
+General
+-------
+Everywhere tid_bit is used we absolutely need to find a complement using
+either the current group or a specific one. 
Thread debugging will need to
+be extended as masks are extensively used.
+
+
+Scheduler
+---------
+The global run queue and global wait queue must become per-group. This
+means that a task may only be queued into one of them at a time. It
+sounds like tasks may only belong to a given group, but doing so would
+bring back the original issue that it's impossible to perform remote
+wake-ups.
+
+We could probably ignore the group if we don't need to set the thread mask
+in the task. The task's thread_mask is never manipulated using atomics so
+it's safe to complement it with a group.
+
+The sleeping_thread_mask should become per-group. Thus possibly a wakeup
+may only be performed on the assigned group, meaning that either a task
+is not assigned, in which case it will be self-assigned (like today), or
+the tg to be woken up will be retrieved from the task itself.
+
+Task creation currently takes a thread mask of either tid_bit, a specific
+mask, or MAX_THREADS_MASK. How to create a task able to run anywhere
+(checks, Lua, ...) ?
+
+Profiling
+---------
+There should be one task_profiling_mask per thread group. Enabling or
+disabling profiling should be made per group (possibly by iterating).
+
+Thread isolation
+----------------
+Thread isolation is difficult as we solely rely on atomic ops to figure
+who can complete. Such operations are rare; maybe we could have a global
+read_mostly flag containing a mask of the groups that require isolation.
+Then the threads_want_rdv_mask etc can become per-group. However setting
+and clearing the bits will become problematic as this will happen in two
+steps hence will require careful ordering.
+
+FD
+--
+tid_bit is used in a number of atomic ops on the running_mask. If we have
+one fdtab[] per group, the mask implies that it's within the group.
+Theoretically we should never face a situation where an FD is reported or
+manipulated for a remote group. 
+
+There will still be one poller per thread, except that this time all
+operations will be related to the current thread_group. No fd may appear
+in two thread_groups at once, but we can probably not prevent that (e.g.
+delayed close and reopen). Should we instead have a single shared fdtab[]
+(less memory usage also) ? Maybe adding the group in the fdtab entry would
+work, but when does a thread know it can leave it ? Currently this is
+solved by running_mask and by update_mask. Having two tables could help
+with this (each table sees the FD in a different group with a different
+mask) but this looks overkill.
+
+There's polled_mask[] which needs to be decided upon. Probably that it
+should be doubled as well. Note, polled_mask left fdtab[] for cacheline
+alignment reasons in commit cb92f5cae4.
+
+If we have one fdtab[] per group, what *really* prevents us from using the
+same FD in multiple groups ? _fd_delete_orphan() and fd_update_events()
+need to check for no-thread usage before closing the FD. This could be
+a limiting factor. Enabling could require waking every poller.
+
+Shouldn't we remerge fdinfo[] with fdtab[] (one pointer + one int/short,
+used only during creation and close) ?
+
+Another problem: if we have one fdtab[] per TG, disabling/enabling an FD
+(e.g. pause/resume on listener) can become a problem if it's not necessarily
+on the current TG. We'll then need a way to figure that one. It sounds like
+FDs from listeners and receivers are very specific and suffer from problems
+all other ones under high load do not suffer from. Maybe something specific
+ought to be done for them, if we can guarantee there is no risk of accidental
+reuse (e.g. locate the TG info in the receiver and have a "MT" bit in the
+FD's flags). The risk is always that a close() can result in instant pop-up
+of the same FD on any other thread of the same process. 
+
+Observations: right now fdtab[].thread_mask more or less corresponds to a
+declaration of interest; it's very close to meaning "active per thread". It is
+in fact located in the FD while it has nothing to do there: it should be
+where the FD is used, as it rules accesses to a shared resource that is not
+the FD but what uses it. Indeed, if neither polled_mask nor running_mask have
+a thread's bit, the FD is unknown to that thread and the element using it may
+only be reached from above and not from the FD. As such we ought to have a
+thread_mask on a listener and another one on connections. These ones will
+indicate who uses them. A takeover could then be simplified (atomically set
+exclusivity on the FD's running_mask, upon success, takeover the connection,
+clear the running mask). Probably that the change ought to be performed on
+the connection level first, not the FD level by the way. But running and
+polled are the two relevant elements: one indicates userland knowledge,
+the other one kernel knowledge. For listeners there's no exclusivity so it's
+a bit different but the rule remains the same that we don't have to know
+what threads are *interested* in the FD, only its holder.
+
+Not exact in fact, see FD notes below.
+
+activity
+--------
+There should be one activity array per thread group. The dump should
+simply scan them all since the accumulated values are not very important
+anyway.
+
+applets
+-------
+They use tid_bit only for the task. It looks like the appctx's thread_mask
+is never used (now removed). Furthermore, it looks like the argument is
+*always* tid_bit.
+
+CPU binding
+-----------
+This is going to be tough. It will be necessary to detect that threads overlap
+and are not bound (i.e. all threads on same mask). In this case, if the number
+of threads is higher than the number of threads per physical socket, one must
+try hard to evenly spread them among physical sockets (e.g. 
one thread group
+per physical socket) and start as many threads as needed on each, bound to
+all threads/cores of each socket. If there is a single socket, the same job
+may be done based on L3 caches. Maybe it could always be done based on L3
+caches. The difficulty behind this is the number of sockets to be bound: it
+is not possible to bind several FDs per listener. Maybe with a new bind
+keyword we can imagine automatically duplicating listeners ? In any case,
+the initially bound cpumap (via taskset) must always be respected, and
+everything should probably start from there.
+
+Frontend binding
+----------------
+We'll have to define a list of threads and thread-groups per frontend.
+Probably that having a group mask and the same thread-mask for each group
+would suffice.
+
+Threads should have two numbers:
+  - the per-process number (e.g. 1..256)
+  - the per-group number (1..64)
+
+The "bind-thread" lines ought to use the following syntax:
+  - bind 45      ## bind to process' thread 45
+  - bind 1/45    ## bind to group 1's thread 45
+  - bind all/45  ## bind to thread 45 in each group
+  - bind 1/all   ## bind to all threads in group 1
+  - bind all     ## bind to all threads
+  - bind all/all ## bind to all threads in all groups (=all)
+  - bind 1/65    ## rejected
+  - bind 65      ## OK if there are enough
+  - bind 35-45   ## depends. Rejected if it crosses a group boundary.
+
+The global directive "nbthread 28" means 28 total threads for the process. The
+number of groups will sub-divide this. E.g. 4 groups will very likely imply 7
+threads per group. At the beginning, the nbgroup should be manual since it
+implies config adjustments to bind lines.
+
+There should be a trivial way to map a global thread to a group and local ID
+and to do the opposite. 
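The trivial mapping mentioned above could look like the following sketch, assuming for simplicity that every group holds the same number of threads and that numbering is 1-based as in the proposed bind syntax. The names are hypothetical:

```c
#include <assert.h>

/* A (group, local id) pair, both 1-based as in "bind 1/45". */
struct thr_id {
        int grp;
        int loc;
};

/* Process-wide thread number -> (group, local id), with <per_grp>
 * threads in every group. */
static struct thr_id thr_global_to_local(int global, int per_grp)
{
        struct thr_id id;

        id.grp = (global - 1) / per_grp + 1;
        id.loc = (global - 1) % per_grp + 1;
        return id;
}

/* The opposite direction: (group, local id) -> global thread number. */
static int thr_local_to_global(struct thr_id id, int per_grp)
{
        return (id.grp - 1) * per_grp + id.loc;
}
```

With the "nbthread 28" / 4-group example above (7 threads per group), global thread 15 maps to group 3, local thread 1, and back.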
+ + +Panic handler + watchdog +------------------------ +Will probably depend on what's done for thread_isolate + +Per-thread arrays inside structures +----------------------------------- +- listeners have a thr_conn[] array, currently limited to MAX_THREADS. Should + we simply bump the limit ? +- same for servers with idle connections. +=> doesn't seem very practical. +- another solution might be to point to dynamically allocated arrays of + arrays (e.g. nbthread * nbgroup) or a first level per group and a second + per thread. +=> dynamic allocation based on the global number + +Other +----- +- what about dynamic thread start/stop (e.g. for containers/VMs) ? + E.g. if we decide to start $MANY threads in 4 groups, and only use + one, in the end it will not be possible to use less than one thread + per group, and at most 64 will be present in each group. + + +FD Notes +-------- + - updt_fd_polling() uses thread_mask to figure where to send the update, + the local list or a shared list, and which bits to set in update_mask. + This could be changed so that it takes the update mask in argument. The + call from the poller's fork would just have to broadcast everywhere. + + - pollers use it to figure whether they're concerned or not by the activity + update. This looks important as otherwise we could re-enable polling on + an FD that changed to another thread. + + - thread_mask being a per-thread active mask looks more exact and is + precisely used this way by _update_fd(). In this case using it instead + of running_mask to gauge a change or temporarily lock it during a + removal could make sense. + + - running should be conditioned by thread. Polled not (since deferred + or migrated). In this case testing thread_mask can be enough most of + the time, but this requires synchronization that will have to be + extended to tgid.. But migration seems a different beast that we shouldn't + care about here: if first performed at the higher level it ought to + be safe. 
+ +In practice the update_mask can be dropped to zero by the first fd_delete() +as the only authority allowed to fd_delete() is *the* owner, and as soon as +all running_mask are gone, the FD will be closed, hence removed from all +pollers. This will be the only way to make sure that update_mask always +refers to the current tgid. + +However, it may happen that a takeover within the same group causes a thread +to read the update_mask late, while the FD is being wiped by another thread. +That other thread may close it, causing another thread in another group to +catch it, and change the tgid and start to update the update_mask. This means +that it would be possible for a thread entering do_poll() to see the correct +tgid, then the fd would be closed, reopened and reassigned to another tgid, +and the thread would see its bit in the update_mask, being confused. Right +now this should already happen when the update_mask is not cleared, except +that upon wakeup a migration would be detected and that would be all. + +Thus we might need to set the running bit to prevent the FD from migrating +before reading update_mask, which also implies closing on fd_clr_running() == 0 :-( + +Also even fd_update_events() leaves a risk of updating update_mask after +clearing running, thus affecting the wrong one. Probably that update_mask +should be updated before clearing running_mask there. Also, how about not +creating an update on a close ? Not trivial if done before running, unless +thread_mask==0. + +########################################################### + +Current state: + + +Mux / takeover / fd_delete() code ||| poller code +-------------------------------------------------|||--------------------------------------------------- + \|/ +mux_takeover(): | fd_set_running(): + if (fd_takeover()<0) | old = {running, thread}; + return fail; | new = {tid_bit, tid_bit}; + ... 
|
+fd_takeover():                                    |    do {
+    atomic_or(running, tid_bit);                  |        if (!(old.thread & tid_bit))
+    old = {running, thread};                      |            return -1;
+    new = {tid_bit, tid_bit};                     |        new = { running | tid_bit, old.thread }
+    if (owner != expected) {                      |    } while (!dwcas({running, thread}, &old, &new));
+        atomic_and(running, ~tid_bit);            |
+        return -1; // fail                        | fd_clr_running():
+    }                                             |     return atomic_and_fetch(running, ~tid_bit);
+                                                  |
+    while (old == {tid_bit, !=0 })                | poll():
+        if (dwcas({running, thread}, &old, &new)) {    if (!owner)
+            atomic_and(running, ~tid_bit);        |        continue;
+            return 0; // success                  |
+        }                                         |    if (!(thread_mask & tid_bit)) {
+    }                                             |        epoll_ctl_del();
+                                                  |        continue;
+    atomic_and(running, ~tid_bit);                |    }
+    return -1; // fail                            |
+                                                  |    // via fd_update_events()
+fd_delete():                                      |    if (fd_set_running() != -1) {
+    atomic_or(running, tid_bit);                  |        iocb();
+    atomic_store(thread, 0);                      |        if (fd_clr_running() == 0 && !thread_mask)
+    if (fd_clr_running(fd) == 0)                  |            fd_delete_orphan();
+        fd_delete_orphan();                       |    }
+
+
+The idle_conns_lock prevents the connection from being *picked* and released
+while someone else is reading it. What it does is guarantee that on idle
+connections, the caller of the IOCB will not dereference the task's context
+while the connection is still in the idle list, since it might be picked then
+freed at the same instant by another thread. As soon as the IOCB manages to
+get that lock, it removes the connection from the list so that it cannot be
+taken over anymore. Conversely, the mux's takeover() code runs under that
+lock so that if it frees the connection and task, this will appear atomic
+to the IOCB. The timeout task (which is another entry point for connection
+deletion) does the same. Thus, when coming from the low-level (I/O or timeout):
+  - the task always exists, but its ctx must be validated under the lock;
+    removing the conn from the list prevents takeover().
+  - t->context is stable, except during changes under the takeover lock.
So + h2_timeout_task may well run on a different thread than h2_io_cb(). + +Coming from the top: + - takeover() done under lock() clears task's ctx and possibly closes the FD + (unless some running remains present). + +Unlikely but currently possible situations: + - multiple pollers (up to N) may have an idle connection's FD being + polled, if the connection was passed from thread to thread. The first + event on the connection would wake all of them. Most of them would + see fdtab[].owner set (the late ones might miss it). All but one would + see that their bit is missing from fdtab[].thread_mask and give up. + However, just after this test, others might take over the connection, + so in practice if terribly unlucky, all but 1 could see their bit in + thread_mask just before it gets removed, all of them set their bit + in running_mask, and all of them call iocb() (sock_conn_iocb()). + Thus all of them dereference the connection and touch the subscriber + with no protection, then end up in conn_notify_mux() that will call + the mux's wake(). + + - multiple pollers (up to N-1) might still be in fd_update_events() + manipulating fdtab[].state. The cause is that the "locked" variable + is determined by atleast2(thread_mask) but that thread_mask is read + at a random instant (i.e. it may be stolen by another one during a + takeover) since we don't yet hold running to prevent this from being + done. Thus we can arrive here with thread_mask==something_else (1bit), + locked==0 and fdtab[].state assigned non-atomically. + + - it looks like nothing prevents h2_release() from being called on a + thread (e.g. from the top or task timeout) while sock_conn_iocb() + dereferences the connection on another thread. Those killing the + connection don't yet consider the fact that it's an FD that others + might currently be waking up on. + +################### + +pb with counter: + +users count doesn't say who's using the FD and two users can do the same +close in turn. 
The thread_mask should define who's responsible for closing
+the FD, and all those with a bit in it ought to do it.
+
+
+2021-08-25 - update with minimal locking on tgid value
+==========
+
+  - tgid + refcount at once using CAS
+  - idle_conns lock during updates
+  - update:
+      if tgid differs => close happened, thus drop update
+      otherwise normal stuff. Lock tgid until running if needed.
+  - poll report:
+      if tgid differs => closed
+      if thread differs => stop polling (migrated)
+      keep tgid lock until running
+  - test on thread_id:
+      if (xadd(&tgid,65536) != my_tgid) {
+         // was closed
+         sub(&tgid, 65536)
+         return -1
+      }
+      if !(thread_id & tidbit) => migrated/closed
+      set_running()
+      sub(tgid,65536)
+  - note: either fd_insert() or the final close() ought to set
+    polled and update to 0.
+
+2021-09-13 - tid / tgroups etc.
+==========
+
+  - tid currently is the thread's global ID. It's essentially used as an index
+    for arrays. It must be clearly stated that it works this way.
+
+  - tid_bit makes no sense process-wide, so it must be redefined to represent
+    the thread's tid within its group. The name is not the most welcome one,
+    but there are 286 uses of it that are not going to be changed that fast.
+
+  - just like "ti" is the thread_info, we need to have "tg" pointing to the
+    thread_group.
+
+  - other less commonly used elements should be retrieved from ti->xxx. E.g.
+    the thread's local ID.
+
+  - lock debugging must reproduce tgid
+
+  - an offset might be placed in the tgroup so that even with 64 threads max
+    we could have completely separate tid_bits over several groups.
+
+2021-09-15 - bind + listen() + rx
+==========
+
+  - thread_mask (in bind_conf->rx_settings) should become an array of
+    MAX_TGROUP longs.
+  - when parsing "thread 123" or "thread 2/37", the proper bit is set,
+    assuming the array is either a contiguous bitfield or a tgroup array.
+    An option RX_O_THR_PER_GRP or RX_O_THR_PER_PROC is set depending on
+    how the thread num was parsed, so that we reject mixes.
+  - end of parsing: entries translated to the cleanest form (to be determined)
+  - binding: for each socket()/bind()/listen()... just perform one extra dup()
+    for each tgroup and store the multiple FDs into an FD array indexed on
+    MAX_TGROUP. => makes it possible to use one FD per tgroup for the same
+    socket, hence to have multiple entries in all tgroup pollers without
+    requiring the user to duplicate the bind line.
+
+2021-09-15 - global thread masks
+==========
+
+Some global variables currently expect to know about thread IDs and it's
+uncertain what must be done with them:
+  - global_tasks_mask /* Mask of threads with tasks in the global runqueue */
+    => touched under the rq lock. Change it per-group ? What exact use is made ?
+
+  - sleeping_thread_mask /* Threads that are about to sleep in poll() */
+    => seems that it can be made per group
+
+  - all_threads_mask: a bit complicated, derived from nbthread and used with
+    masks and with my_ffsl() to wake threads up. Should probably be per-group
+    but we might miss something for global.
+
+  - stopping_thread_mask: used in combination with all_threads_mask, should
+    move per-group.
+
+  - threads_harmless_mask: indicates all threads that are currently harmless in
+    that they promise not to access a shared resource. Must be made per-group
+    but then we'll likely need a second stage to have the harmless groups mask.
+    threads_idle_mask, threads_sync_mask, threads_want_rdv_mask go with the one
+    above. Maybe the right approach will be to request harmless on a group mask
+    so that we can detect collisions and arbitrate them like today, but on top
+    of this it becomes possible to request harmless only on the local group if
+    desired. The subtlety is that requesting harmless at the group level does
+    not mean it's achieved since the requester cannot vouch for the other ones
+    in the same group.
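The two-stage "harmless" tracking evoked above (a per-group thread mask, plus
a global mask of fully-harmless groups) could be sketched as follows in C11.
All names here are illustrative, not the actual haproxy API, and thread
registration/locking details are left out.

```c
#include <stdatomic.h>

#define MAX_TGROUPS 16

struct tgroup_ctx {
    _Atomic unsigned long threads_harmless; /* one bit per thread of the group */
    unsigned long threads_enabled;          /* bits of all threads in the group */
};

static struct tgroup_ctx ha_tgroup[MAX_TGROUPS];
static _Atomic unsigned long tgroups_harmless;  /* one bit per fully-harmless group */

/* Mark the calling thread harmless; once every enabled thread of the group
 * is harmless, raise the group's bit in the second-stage mask.
 */
static void thread_harmless_now(int tgid, unsigned long tid_bit)
{
    struct tgroup_ctx *tg = &ha_tgroup[tgid];
    unsigned long mask;

    mask = atomic_fetch_or(&tg->threads_harmless, tid_bit) | tid_bit;
    if (mask == tg->threads_enabled)
        atomic_fetch_or(&tgroups_harmless, 1UL << tgid);
}

/* Leave the harmless state: drop the group bit first so that no observer
 * can see a group flagged fully harmless while one of its threads is
 * already active again.
 */
static void thread_harmless_end(int tgid, unsigned long tid_bit)
{
    struct tgroup_ctx *tg = &ha_tgroup[tgid];

    atomic_fetch_and(&tgroups_harmless, ~(1UL << tgid));
    atomic_fetch_and(&tg->threads_harmless, ~tid_bit);
}
```

A waiter needing process-wide isolation would then spin on tgroups_harmless
covering all groups, which addresses the "cannot vouch for the other ones in
the same group" subtlety: the group bit is only set by the last thread of the
group to become harmless.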
+
+In addition, some variables are related to the global runqueue:
+   __decl_aligned_spinlock(rq_lock); /* spin lock related to run queue */
+   struct eb_root rqueue;      /* tree constituting the global run queue, accessed under rq_lock */
+   unsigned int grq_total;     /* total number of entries in the global run queue, atomic */
+   static unsigned int global_rqueue_ticks;  /* insertion count in the grq, use rq_lock */
+
+And others to the global wait queue:
+   struct eb_root timers;      /* sorted timers tree, global, accessed under wq_lock */
+   __decl_aligned_rwlock(wq_lock);   /* RW lock related to the wait queue */
+
+
+2021-09-29 - group designation and masks
+==========
+
+Neither FDs nor tasks will belong to incomplete subsets of threads spanning
+over multiple thread groups. In addition there may be a difference between
+configuration and operation (for FDs). This makes it possible to fix the
+following rules:
+
+  group  mask   description
+    0     0     bind_conf: groups & thread not set. bind to any/all
+                task: it would be nice to mean "run on the same as the caller".
+
+    0    xxx    bind_conf: thread set but not group: thread IDs are global
+                FD/task: group 0, mask xxx
+
+   G>0    0     bind_conf: only group is set: bind to all threads of group G
+                FD/task: mask 0 not permitted (= not owned). May be used to
+                mention "any thread of this group", though already covered by
+                G/xxx like today.
+
+   G>0   xxx    bind_conf: Bind to these threads of this group
+                FD/task: group G, mask xxx
+
+It looks like keeping groups starting at zero internally complicates everything
+though. But forcing it to start at 1 might also require that we rescan all tasks
+to replace 0 with 1 upon startup. This would also allow group 0 to be special and
+be used as the default group for any new thread creation, so that group0.count
+would keep the number of unassigned threads. Let's try:
+
+  group  mask   description
+    0     0     bind_conf: groups & thread not set.
bind to any/all
+                task: "run on the same group & thread as the caller".
+
+    0    xxx    bind_conf: thread set but not group: thread IDs are global
+                FD/task: invalid. Or maybe for a task we could use this to
+                mean "run on current group, thread XXX", which would cover
+                the need for health checks (g/t 0/0 while sleeping, 0/xxx
+                while running) and have wake_expired_tasks() detect 0/0 and
+                wake them up to a random group.
+
+   G>0    0     bind_conf: only group is set: bind to all threads of group G
+                FD/task: mask 0 not permitted (= not owned). May be used to
+                mention "any thread of this group", though already covered by
+                G/xxx like today.
+
+   G>0   xxx    bind_conf: Bind to these threads of this group
+                FD/task: group G, mask xxx
+
+With a single group declared in the config, group 0 would implicitly designate
+the first one.
+
+
+The problem with the approach above is that a task queued in one group+thread's
+wait queue could very well receive a signal from another thread and/or group,
+and that there is no indication about where the task is queued, nor how to
+dequeue it. Thus it seems that it's up to the application itself to unbind/
+rebind a task. This contradicts the principle of leaving a task waiting in a
+wait queue and waking it anywhere.
+
+Another possibility might be to decide that a task having a defined group but
+a mask of zero is shared and will always be queued into its group's wait queue.
+However, upon expiry, the scheduler would notice the thread-mask 0 and would
+broadcast it to any group.
+
+Right now in the code we have:
+  - 18 calls of task_new(tid_bit)
+  - 18 calls of task_new(MAX_THREADS_MASK)
+  -  2 calls with a single bit
+
+Thus it looks like "task_new_anywhere()", "task_new_on()" and
+"task_new_here()" would be sufficient.
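The three constructors suggested above could be thin wrappers over the
existing task_new(mask) primitive. The sketch below is a hypothetical shape
only: the task structure, task_new() stand-in and tid_bit are minimal mocks,
not the real haproxy definitions.

```c
#include <stdlib.h>

#define MAX_THREADS_MASK (~0UL)

struct task {
    unsigned long thread_mask; /* threads allowed to run this task */
};

static unsigned long tid_bit = 0x1; /* calling thread's bit (illustrative) */

/* minimal stand-in for the existing task_new(mask) primitive */
static struct task *task_new(unsigned long mask)
{
    struct task *t = calloc(1, sizeof(*t));

    if (t)
        t->thread_mask = mask;
    return t;
}

/* replaces the 18 task_new(MAX_THREADS_MASK) call places */
static struct task *task_new_anywhere(void)
{
    return task_new(MAX_THREADS_MASK);
}

/* replaces the 2 call places using a single explicit bit */
static struct task *task_new_on(unsigned long bit)
{
    return task_new(bit);
}

/* replaces the 18 task_new(tid_bit) call places */
static struct task *task_new_here(void)
{
    return task_new(tid_bit);
}
```

Naming the intent at the call place would also make the later migration to
group-aware masks mechanical: only the three wrappers need to learn about
thread groups, not their ~38 callers.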